WorldWideScience

Sample records for network demonstrating scalability

  1. Scalable and Media Aware Adaptive Video Streaming over Wireless Networks

    Directory of Open Access Journals (Sweden)

    Béatrice Pesquet-Popescu

    2008-07-01

Full Text Available This paper proposes an advanced video streaming system based on scalable video coding in order to optimize resource utilization in wireless networks with retransmission mechanisms at the radio protocol level. The key component of this system is a packet scheduling algorithm which operates on the different substreams of a main scalable video stream and which is implemented in a so-called media-aware network element. The transport channel considered is a dedicated channel whose parameters (bitrate, loss rate) vary over the long run. Moreover, we propose a combined scalability approach in which common temporal and SNR scalability features can be used jointly with a partitioning of the image into regions of interest. Simulation results show that our approach provides a substantial quality gain compared to classical packet transmission methods, and they demonstrate how ROI coding combined with SNR scalability further improves the visual quality.
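
A minimal sketch of the kind of priority scheduling described here, assuming a simple packet model in which base-layer packets outrank enhancement packets and ROI packets outrank background ones; the paper's actual media-aware algorithm is not reproduced:

```python
# Minimal sketch (not the paper's algorithm): a media-aware scheduler that drops
# or forwards packets from scalable substreams under a channel bitrate budget.
# The packet fields and the priority ordering (base layer first, then ROI, then
# background enhancement) are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Packet:
    size_bits: int
    layer: int          # 0 = base layer, >0 = SNR/temporal enhancement
    roi: bool           # True if the packet covers a region of interest

def schedule(packets, budget_bits):
    """Return the packets to transmit within one scheduling interval."""
    # Lower tuple sorts first: base layer before enhancements, ROI before background.
    ordered = sorted(packets, key=lambda p: (p.layer, not p.roi))
    sent, used = [], 0
    for p in ordered:
        if used + p.size_bits <= budget_bits:
            sent.append(p)
            used += p.size_bits
    return sent

if __name__ == "__main__":
    pkts = [Packet(8000, 0, True), Packet(8000, 1, True), Packet(8000, 1, False)]
    print(len(schedule(pkts, 16000)), "packets scheduled")
```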

  2. Experimental demonstration of an improved EPON architecture using OFDMA for bandwidth scalable LAN emulation

    DEFF Research Database (Denmark)

    Deng, Lei; Zhao, Ying; Yu, Xianbin

    2011-01-01

    We propose and demonstrate an improved Ethernet passive optical network (EPON) architecture supporting bandwidth-scalable physical layer local area network (LAN) emulation. Due to the use of orthogonal frequency division multiple access (OFDMA) technology for the LAN traffic transmission, there i...

  3. Scalable Networking for Cloud Datacenters

    CERN Multimedia

    CERN. Geneva

    2010-01-01

Andy will discuss the architectural evolution of Ethernet networks and switch architectures as they are being designed to address much larger cloud networking applications that require predictable throughput and latency. About the speaker: As Chief Development Officer, Andy Bechtolsheim is responsible for the overall product development and technical direction of Arista Networks. Previously Andy was a Founder and Chief System Architect at Sun Microsystems, where most recently he was responsible for industry-standard server architecture. Andy was also a Founder and President of Granite Systems, a Gigabit Ethernet startup acquired by Cisco Systems in 1996. From 1996 until 2003 Andy served as VP/GM of the Gigabit Systems Business Unit at Cisco that developed the very successful Catalyst 4500 family of switches. Andy was also a Founder and President of Kealia, a next-generation server company acquired by Sun in 2004. Andy received an M.S. in Computer Engineering from Carnegie Mellon University in 1976 and was a Ph.D. ...

  4. Scalable Optical-Fiber Communication Networks

    Science.gov (United States)

    Chow, Edward T.; Peterson, John C.

    1993-01-01

Scalable arbitrary fiber extension network (SAFEnet) is a conceptual fiber-optic communication network passing digital signals among a variety of computers and input/output devices at rates from 200 Mb/s to more than 100 Gb/s. It is intended for use with very-high-speed computers and other data-processing and communication systems in which message-passing delays must be kept short. Its inherent flexibility makes it possible to match the performance of the network to the computers by optimizing the configuration of interconnections. In addition, interconnections are made redundant to provide tolerance to faults.

  5. High performance SDN enabled flat data center network architecture based on scalable and flow-controlled optical switching system

    NARCIS (Netherlands)

    Calabretta, N.; Miao, W.; Dorren, H.J.S.

    2015-01-01

We demonstrate a reconfigurable virtual datacenter network by utilizing the statistical multiplexing offered by a scalable and flow-controlled optical switching system. Results show QoS guarantees through priority assignment and load balancing for applications in virtual networks.

  6. Scalability of optical networks : crosstalk limitations

    NARCIS (Netherlands)

    Tafur Monroy, I.

    2000-01-01

    Optical networks represent a promising solution for the future high capacity and flexible transport network. This paper presents a model for the performance evaluation of optical networks with respect to linear crosstalk and accumulated spontaneous emission noise. The proposed model is intended for

  7. Scalable power selection method for wireless mesh networks

    CSIR Research Space (South Africa)

    Olwal, TO

    2009-01-01

    Full Text Available This paper addresses the problem of a scalable dynamic power control (SDPC) for wireless mesh networks (WMNs) based on IEEE 802.11 standards. An SDPC model that accounts for architectural complexities witnessed in multiple radios and hops...

  8. Scalable Coverage Maintenance for Dense Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Jun Lu

    2007-06-01

Full Text Available Owing to numerous potential applications, wireless sensor networks have been attracting significant research effort recently. The critical challenge that wireless sensor networks often face is to sustain long-term operation on limited battery energy. Coverage maintenance schemes can effectively prolong network lifetime by selecting and employing a subset of sensors in the network to provide sufficient sensing coverage over a target region. We envision future wireless sensor networks composed of a vast number of miniaturized sensors in exceedingly high density. Therefore, the key issue of coverage maintenance for future sensor networks is the scalability to sensor deployment density. In this paper, we propose a novel coverage maintenance scheme, scalable coverage maintenance (SCOM), which is scalable to sensor deployment density in terms of communication overhead (i.e., number of transmitted and received beacons) and computational complexity (i.e., time and space complexity). In addition, SCOM achieves high energy efficiency and load balancing over different sensors. We have validated our claims through both analysis and simulations.

  9. Novel flat datacenter network architecture based on scalable and flow-controlled optical switch system

    NARCIS (Netherlands)

    Miao, W.; Luo, J.; Di Lucente, S.; Dorren, H.J.S.; Calabretta, N.

    2013-01-01

We propose and demonstrate an optical flat datacenter network based on a scalable optical switch system with optical flow control. 4×4 dynamic switch operation at 40 Gb/s showed a 300 ns minimum end-to-end latency (including a 25 m transmission link) and

  10. A Scalable Policy and SNMP Based Network Management Framework

    Institute of Scientific and Technical Information of China (English)

    LIU Su-ping; DING Yong-sheng

    2009-01-01

Traditional SNMP-based network management cannot deal with the task of managing large-scale distributed networks, while policy-based management is one of the effective solutions for network and distributed systems management. However, cross-vendor hardware compatibility is one of the limitations of policy-based management. Devices in current networks mostly support SNMP rather than the Common Open Policy Service (COPS) protocol. By analyzing traditional network management and policy-based network management, a scalable network management framework is proposed. It combines the Internet Engineering Task Force (IETF) framework for policy-based management with SNMP-based network management. By interpreting and translating policy decisions into SNMP messages, policies can be executed on traditional SNMP-based devices.
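
A small sketch of the translation step described in this record, under the assumption of a toy policy schema and a hand-picked OID mapping (the framework's real schema is not given here):

```python
# Minimal sketch of the translation idea described above: a policy decision is
# mapped to SNMP SET varbinds (OID, value pairs) that a legacy agent understands.
# The PolicyDecision fields and the OID mapping table are illustrative
# assumptions, not the framework's actual schema.
from dataclasses import dataclass

@dataclass
class PolicyDecision:
    target: str      # management target, e.g. an interface name
    action: str      # abstract action name, e.g. "set_admin_status"
    value: int

# Hypothetical mapping from abstract actions to standard MIB object OIDs.
ACTION_TO_OID = {
    "set_admin_status": "1.3.6.1.2.1.2.2.1.7",      # ifAdminStatus
    "set_if_alias":     "1.3.6.1.2.1.31.1.1.1.18",  # ifAlias
}

def to_varbinds(decision: PolicyDecision, if_index: int):
    """Translate one policy decision into SNMP SET varbinds."""
    base_oid = ACTION_TO_OID[decision.action]
    return [(f"{base_oid}.{if_index}", decision.value)]

if __name__ == "__main__":
    d = PolicyDecision(target="eth0", action="set_admin_status", value=2)  # 2 = down
    print(to_varbinds(d, if_index=3))
```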

  11. Scalable infrastructure for distributed sensor networks

    National Research Council Canada - National Science Library

    Chakrabarty, Krishnendu; Iyengar, S. S

    2005-01-01

    ... network application is inventory tracking in factory warehouses. A single sensor node can be attached to each item in the warehouse. These sensor nodes can then be used for tracking the location of the items as they are moved within the warehouse. They can also provide information on the location of nearby items as well as the history of movement...

  12. Scalable Lunar Surface Networks and Adaptive Orbit Access

    Science.gov (United States)

    Wang, Xudong

    2015-01-01

Teranovi Technologies, Inc., has developed an innovative network architecture, protocols, and algorithms for both lunar surface and orbit access networks. A key component of the overall architecture is a medium access control (MAC) protocol that includes a novel mechanism of overlaying time division multiple access (TDMA) and carrier sense multiple access with collision avoidance (CSMA/CA), ensuring scalable throughput and quality of service. The new MAC protocol is compatible with legacy Institute of Electrical and Electronics Engineers (IEEE) 802.11 networks. Advanced features include efficient power management, adaptive channel width adjustment, and error control capability. A hybrid routing protocol combines the advantages of ad hoc on-demand distance vector (AODV) routing and disruption/delay-tolerant network (DTN) routing. Performance is significantly better than AODV or DTN and will be particularly effective for wireless networks with intermittent links, such as lunar and planetary surface networks and orbit access networks.

  13. Scalable Brain Network Construction on White Matter Fibers.

    Science.gov (United States)

    Chung, Moo K; Adluru, Nagesh; Dalton, Kim M; Alexander, Andrew L; Davidson, Richard J

    2011-02-12

DTI offers a unique opportunity to characterize the structural connectivity of the human brain non-invasively by tracing white matter fiber tracts. Whole brain tractography studies routinely generate up to half a million tracts per brain, which serve as edges in an extremely large 3D graph with up to half a million edges. Currently there is no agreed-upon method for constructing brain structural network graphs out of such a large number of white matter tracts. In this paper, we present a scalable iterative framework called the ε-neighbor method for building a network graph and apply it to testing abnormal connectivity in autism.
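
One plausible reading of the ε-neighbor idea, sketched below with an assumed brute-force nearest-node search; the authors' iterative framework is certainly more refined:

```python
# Rough sketch of an epsilon-neighbor style construction (an interpretation of
# the idea above, not the authors' exact algorithm): tract endpoints within
# distance eps of an existing node are merged into that node, otherwise a new
# node is created, and each tract contributes an edge between its endpoint nodes.
import numpy as np

def build_graph(tracts, eps):
    """tracts: list of (start_xyz, end_xyz) pairs as numpy arrays."""
    nodes = []          # list of 3D coordinates
    edges = set()       # pairs of node indices

    def node_for(point):
        for i, n in enumerate(nodes):
            if np.linalg.norm(n - point) <= eps:
                return i
        nodes.append(point.copy())
        return len(nodes) - 1

    for start, end in tracts:
        u, v = node_for(start), node_for(end)
        if u != v:
            edges.add((min(u, v), max(u, v)))
    return nodes, edges

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    tracts = [(rng.random(3) * 10, rng.random(3) * 10) for _ in range(50)]
    nodes, edges = build_graph(tracts, eps=2.0)
    print(len(nodes), "nodes,", len(edges), "edges")
```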

  14. Scalable and Hybrid Radio Resource Management for Future Wireless Networks

    DEFF Research Database (Denmark)

    Mino, E.; Luo, Jijun; Tragos, E.

    2007-01-01

The concept of a ubiquitous and scalable system is applied in the IST WINNER II [1] project to deliver optimum performance for different deployment scenarios, from local area to wide area wireless networks. The integration in a unique radio system of cellular and local area type networks supposes a great advantage for the final user and for the operator, compared with the current situation, with disconnected systems, usually with different subscriptions, radio interfaces and terminals. To be a ubiquitous wireless system, the IST project WINNER II has defined three system modes. This contribution...

  15. fastBMA: scalable network inference and transitive reduction.

    Science.gov (United States)

    Hung, Ling-Hong; Shi, Kaiyuan; Wu, Migao; Young, William Chad; Raftery, Adrian E; Yeung, Ka Yee

    2017-10-01

Inferring genetic networks from genome-wide expression data is extremely demanding computationally. We have developed fastBMA, a distributed, parallel, and scalable implementation of Bayesian model averaging (BMA) for this purpose. fastBMA also includes a computationally efficient module for eliminating redundant indirect edges in the network by mapping the transitive reduction to an easily solved shortest-path problem. We evaluated the performance of fastBMA on synthetic data and experimental genome-wide time series yeast and human datasets. When using a single CPU core, fastBMA is up to 100 times faster than the next fastest method, LASSO, with increased accuracy. It is a memory-efficient, parallel, and distributed application that scales to human genome-wide expression data. A 10,000-gene regulation network can be obtained in a matter of hours using a 32-core cloud cluster (2 nodes of 16 cores). fastBMA is a significant improvement over its predecessor ScanBMA. It is more accurate and orders of magnitude faster than other fast network inference methods such as the one based on LASSO. The improved scalability allows it to calculate networks from genome-scale data in a reasonable time frame. The transitive reduction method can improve accuracy in denser networks. fastBMA is available as code (M.I.T. license) from GitHub (https://github.com/lhhunghimself/fastBMA), as part of the updated networkBMA Bioconductor package (https://www.bioconductor.org/packages/release/bioc/html/networkBMA.html) and as ready-to-deploy Docker images (https://hub.docker.com/r/biodepot/fastbma/). © The Authors 2017. Published by Oxford University Press.
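
A toy sketch of the shortest-path view of transitive reduction mentioned above, with assumed edge costs and a plain Floyd-Warshall; fastBMA's actual weighting scheme and algorithm may differ:

```python
# Illustrative sketch: a direct edge u->v is redundant when some indirect path
# from u to v is at least as good. Edge weights are treated as costs and
# all-pairs shortest paths are computed with Floyd-Warshall.
import math

def transitive_reduction(n, edges):
    """edges: dict {(u, v): cost} on nodes 0..n-1. Returns the kept edges."""
    INF = math.inf
    dist = [[INF] * n for _ in range(n)]
    for i in range(n):
        dist[i][i] = 0.0
    for (u, v), w in edges.items():
        dist[u][v] = min(dist[u][v], w)
    # Floyd-Warshall all-pairs shortest paths.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    kept = {}
    for (u, v), w in edges.items():
        # Is there an indirect path through some k that is no worse?
        indirect = min((dist[u][k] + dist[k][v] for k in range(n) if k not in (u, v)),
                       default=INF)
        if indirect > w:          # direct edge is strictly better: keep it
            kept[(u, v)] = w
    return kept

if __name__ == "__main__":
    edges = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 2.5}
    print(transitive_reduction(3, edges))   # the redundant edge (0, 2) is removed
```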

  16. Silicon nanophotonics for scalable quantum coherent feedback networks

    Energy Technology Data Exchange (ETDEWEB)

    Sarovar, Mohan; Brif, Constantin [Sandia National Laboratories, Livermore, CA (United States); Soh, Daniel B.S. [Sandia National Laboratories, Livermore, CA (United States); Stanford University, Edward L. Ginzton Laboratory, Stanford, CA (United States); Cox, Jonathan; DeRose, Christopher T.; Camacho, Ryan; Davids, Paul [Sandia National Laboratories, Albuquerque, NM (United States)

    2016-12-15

    The emergence of coherent quantum feedback control (CQFC) as a new paradigm for precise manipulation of dynamics of complex quantum systems has led to the development of efficient theoretical modeling and simulation tools and opened avenues for new practical implementations. This work explores the applicability of the integrated silicon photonics platform for implementing scalable CQFC networks. If proven successful, on-chip implementations of these networks would provide scalable and efficient nanophotonic components for autonomous quantum information processing devices and ultra-low-power optical processing systems at telecommunications wavelengths. We analyze the strengths of the silicon photonics platform for CQFC applications and identify the key challenges to both the theoretical formalism and experimental implementations. In particular, we determine specific extensions to the theoretical CQFC framework (which was originally developed with bulk-optics implementations in mind), required to make it fully applicable to modeling of linear and nonlinear integrated optics networks. We also report the results of a preliminary experiment that studied the performance of an in situ controllable silicon nanophotonic network of two coupled cavities and analyze the properties of this device using the CQFC formalism. (orig.)

  17. Silicon nanophotonics for scalable quantum coherent feedback networks

    International Nuclear Information System (INIS)

    Sarovar, Mohan; Brif, Constantin; Soh, Daniel B.S.; Cox, Jonathan; DeRose, Christopher T.; Camacho, Ryan; Davids, Paul

    2016-01-01

    The emergence of coherent quantum feedback control (CQFC) as a new paradigm for precise manipulation of dynamics of complex quantum systems has led to the development of efficient theoretical modeling and simulation tools and opened avenues for new practical implementations. This work explores the applicability of the integrated silicon photonics platform for implementing scalable CQFC networks. If proven successful, on-chip implementations of these networks would provide scalable and efficient nanophotonic components for autonomous quantum information processing devices and ultra-low-power optical processing systems at telecommunications wavelengths. We analyze the strengths of the silicon photonics platform for CQFC applications and identify the key challenges to both the theoretical formalism and experimental implementations. In particular, we determine specific extensions to the theoretical CQFC framework (which was originally developed with bulk-optics implementations in mind), required to make it fully applicable to modeling of linear and nonlinear integrated optics networks. We also report the results of a preliminary experiment that studied the performance of an in situ controllable silicon nanophotonic network of two coupled cavities and analyze the properties of this device using the CQFC formalism. (orig.)

  18. A Secure and Stable Multicast Overlay Network with Load Balancing for Scalable IPTV Services

    Directory of Open Access Journals (Sweden)

    Tsao-Ta Wei

    2012-01-01

Full Text Available The emerging multimedia Internet application IPTV over P2P networks offers significant advantages in scalability. However, IPTV media content delivered in P2P networks over the public Internet still raises issues of privacy and intellectual property rights. In this paper, we use the SIP protocol to construct a secure application-layer multicast overlay network for IPTV, called SIPTVMON. SIPTVMON can secure all the IPTV media delivery paths against eavesdroppers via elliptic-curve Diffie-Hellman (ECDH) key exchange on SIP signaling and AES encryption. Its load-balancing overlay tree is also optimized against peer heterogeneity and the churn of peers joining and leaving to minimize both service degradation and latency. The performance results from large-scale simulations and experiments on different optimization criteria demonstrate SIPTVMON's cost effectiveness in quality of privacy protection, stability under user churn, and good perceptual quality of objective PSNR values for scalable IPTV services over the Internet.
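
The cryptographic primitives named in this abstract (ECDH key agreement and AES encryption) can be sketched with the Python `cryptography` package as below; the SIP signaling integration and SIPTVMON's own key management are not reproduced:

```python
# Sketch of the building blocks only: ECDH key agreement plus AES encryption,
# using the `cryptography` package. The HKDF info label and the use of GCM mode
# are assumptions for illustration.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Each peer generates an EC key pair and exchanges public keys over signaling.
peer_a = ec.generate_private_key(ec.SECP256R1())
peer_b = ec.generate_private_key(ec.SECP256R1())

def derive_key(own_private, other_public):
    """Both sides derive the same shared secret and expand it into an AES-256 key."""
    shared = own_private.exchange(ec.ECDH(), other_public)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"iptv-media-key").derive(shared)

key_a = derive_key(peer_a, peer_b.public_key())
key_b = derive_key(peer_b, peer_a.public_key())
assert key_a == key_b

# Media payloads are then encrypted with AES (GCM mode shown here).
nonce = os.urandom(12)
ciphertext = AESGCM(key_a).encrypt(nonce, b"video payload", None)
print(AESGCM(key_b).decrypt(nonce, ciphertext, None))
```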

  19. CX: A Scalable, Robust Network for Parallel Computing

    Directory of Open Access Journals (Sweden)

    Peter Cappello

    2002-01-01

Full Text Available CX, a network-based computational exchange, is presented. The system's design integrates variations of ideas from other researchers, such as work stealing, non-blocking tasks, eager scheduling, and space-based coordination. The object-oriented API is simple, compact, and cleanly separates application logic from the logic that supports interprocess communication and fault tolerance. Computations run to completion even as computational hosts join and leave the ongoing computation. Such hosts, or producers, use task caching and prefetching to overlap computation with interprocessor communication. To break a potential task server bottleneck, a network of task servers is presented. Even though task servers are envisioned as reliable, the self-organizing, scalable network of n servers, described as a sibling-connected height-balanced fat tree, tolerates a sequence of n-1 server failures. Tasks are distributed throughout the server network via a simple "diffusion" process. CX is intended as a test bed for research on automated silent auctions, reputation services, authentication services, and bonding services. CX also provides a test bed for algorithm research into network-based parallel computation.

  20. High-performance, scalable optical network-on-chip architectures

    Science.gov (United States)

    Tan, Xianfang

The rapid advance of technology enables a large number of processing cores to be integrated into a single chip, which is called a Chip Multiprocessor (CMP) or a Multiprocessor System-on-Chip (MPSoC) design. The on-chip interconnection network, which is the communication infrastructure for these processing cores, plays a central role in a many-core system. With the continuously increasing complexity of many-core systems, traditional metallic-wired electronic networks-on-chip (NoC) became a bottleneck because of the unbearable latency in data transmission and extremely high energy consumption on chip. Optical networks-on-chip (ONoC) have been proposed as a promising alternative paradigm to electronic NoC, with the benefits of optical signaling such as extremely high bandwidth, negligible latency, and low power consumption. This dissertation focuses on the design of high-performance and scalable ONoC architectures, and its contributions are highlighted as follows: 1. A micro-ring resonator (MRR)-based Generic Wavelength-routed Optical Router (GWOR) is proposed. A method for developing a GWOR of any size is introduced. GWOR is a scalable non-blocking ONoC architecture with a simple structure, low cost and high power efficiency compared to existing ONoC designs. 2. To expand the bandwidth and improve the fault tolerance of the GWOR, a redundant GWOR architecture is designed by cascading different types of GWORs into one network. 3. A redundant GWOR built with MRR-based comb switches is proposed. Comb switches can expand the bandwidth while keeping the topology of the GWOR unchanged by replacing the general MRRs with comb switches. 4. A butterfly fat tree (BFT)-based hybrid optoelectronic NoC (HONoC) architecture is developed in which GWORs are used for global communication and electronic routers are used for local communication. The proposed HONoC uses fewer electronic routers and links than its counterpart electronic BFT-based NoC. It takes the advantages of

  1. Scalable Approaches to Control Network Dynamics: Prospects for City Networks

    Science.gov (United States)

    Motter, Adilson E.; Gray, Kimberly A.

    2014-07-01

    A city is a complex, emergent system and as such can be conveniently represented as a network of interacting components. A fundamental aspect of networks is that the systemic properties can depend as much on the interactions as they depend on the properties of the individual components themselves. Another fundamental aspect is that changes to one component can affect other components, in a process that may cause the entire or a substantial part of the system to change behavior. Over the past 2 decades, much research has been done on the modeling of large and complex networks involved in communication and transportation, disease propagation, and supply chains, as well as emergent phenomena, robustness and optimization in such systems...

  2. Scalable, ultra-resistant structural colors based on network metamaterials

    KAUST Repository

    Galinski, Henning

    2017-05-05

    Structural colors have drawn wide attention for their potential as a future printing technology for various applications, ranging from biomimetic tissues to adaptive camouflage materials. However, an efficient approach to realize robust colors with a scalable fabrication technique is still lacking, hampering the realization of practical applications with this platform. Here, we develop a new approach based on large-scale network metamaterials that combine dealloyed subwavelength structures at the nanoscale with lossless, ultra-thin dielectric coatings. By using theory and experiments, we show how subwavelength dielectric coatings control a mechanism of resonant light coupling with epsilon-near-zero regions generated in the metallic network, generating the formation of saturated structural colors that cover a wide portion of the spectrum. Ellipsometry measurements support the efficient observation of these colors, even at angles of 70°. The network-like architecture of these nanomaterials allows for high mechanical resistance, which is quantified in a series of nano-scratch tests. With such remarkable properties, these metastructures represent a robust design technology for real-world, large-scale commercial applications.

  3. An open, interoperable, and scalable prehospital information technology network architecture.

    Science.gov (United States)

    Landman, Adam B; Rokos, Ivan C; Burns, Kevin; Van Gelder, Carin M; Fisher, Roger M; Dunford, James V; Cone, David C; Bogucki, Sandy

    2011-01-01

    Some of the most intractable challenges in prehospital medicine include response time optimization, inefficiencies at the emergency medical services (EMS)-emergency department (ED) interface, and the ability to correlate field interventions with patient outcomes. Information technology (IT) can address these and other concerns by ensuring that system and patient information is received when and where it is needed, is fully integrated with prior and subsequent patient information, and is securely archived. Some EMS agencies have begun adopting information technologies, such as wireless transmission of 12-lead electrocardiograms, but few agencies have developed a comprehensive plan for management of their prehospital information and integration with other electronic medical records. This perspective article highlights the challenges and limitations of integrating IT elements without a strategic plan, and proposes an open, interoperable, and scalable prehospital information technology (PHIT) architecture. The two core components of this PHIT architecture are 1) routers with broadband network connectivity to share data between ambulance devices and EMS system information services and 2) an electronic patient care report to organize and archive all electronic prehospital data. To successfully implement this comprehensive PHIT architecture, data and technology requirements must be based on best available evidence, and the system must adhere to health data standards as well as privacy and security regulations. Recent federal legislation prioritizing health information technology may position federal agencies to help design and fund PHIT architectures.

  4. Artificial brains. A million spiking-neuron integrated circuit with a scalable communication network and interface.

    Science.gov (United States)

    Merolla, Paul A; Arthur, John V; Alvarez-Icaza, Rodrigo; Cassidy, Andrew S; Sawada, Jun; Akopyan, Filipp; Jackson, Bryan L; Imam, Nabil; Guo, Chen; Nakamura, Yutaka; Brezzo, Bernard; Vo, Ivan; Esser, Steven K; Appuswamy, Rathinakumar; Taba, Brian; Amir, Arnon; Flickner, Myron D; Risk, William P; Manohar, Rajit; Modha, Dharmendra S

    2014-08-08

    Inspired by the brain's structure, we have developed an efficient, scalable, and flexible non-von Neumann architecture that leverages contemporary silicon technology. To demonstrate, we built a 5.4-billion-transistor chip with 4096 neurosynaptic cores interconnected via an intrachip network that integrates 1 million programmable spiking neurons and 256 million configurable synapses. Chips can be tiled in two dimensions via an interchip communication interface, seamlessly scaling the architecture to a cortexlike sheet of arbitrary size. The architecture is well suited to many applications that use complex neural networks in real time, for example, multiobject detection and classification. With 400-pixel-by-240-pixel video input at 30 frames per second, the chip consumes 63 milliwatts. Copyright © 2014, American Association for the Advancement of Science.

  5. A Scalable Distribution Network Risk Evaluation Framework via Symbolic Dynamics

    Science.gov (United States)

    Yuan, Kai; Liu, Jian; Liu, Kaipei; Tan, Tianyuan

    2015-01-01

Background: Evaluations of electric power distribution network risks must address the problems of incomplete information and changing dynamics. A risk evaluation framework should be adaptable to a specific situation and an evolving understanding of risk. Methods: This study investigates the use of symbolic dynamics to abstract raw data. After introducing symbolic dynamics operators, Kolmogorov-Sinai entropy and Kullback-Leibler relative entropy are used to quantitatively evaluate relationships between risk sub-factors and main factors. For layered risk indicators, where the factors are categorized into four main factors – device, structure, load and special operation – a merging algorithm using operators to calculate the risk factors is discussed. Finally, an example from the Sanya Power Company is given to demonstrate the feasibility of the proposed method. Conclusion: Distribution networks are exposed and can be affected by many things. The topology and the operating mode of a distribution network are dynamic, so the faults and their consequences are probabilistic. PMID:25789859
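
A small numerical sketch of one ingredient mentioned here, Kullback-Leibler relative entropy between symbol distributions, with an assumed equal-width binning as the symbolization step; the paper's operators and merging algorithm are not reproduced:

```python
# KL relative entropy between the symbol distributions of a risk sub-factor and
# a main factor. The symbolization (equal-width binning of raw readings into a
# small alphabet) is an assumption made for this sketch.
import numpy as np

def symbolize(series, bins):
    """Map a raw time series to integer symbols by equal-width binning."""
    edges = np.linspace(series.min(), series.max(), bins + 1)
    return np.clip(np.digitize(series, edges[1:-1]), 0, bins - 1)

def kl_divergence(p_symbols, q_symbols, bins):
    p = np.bincount(p_symbols, minlength=bins).astype(float) + 1e-9
    q = np.bincount(q_symbols, minlength=bins).astype(float) + 1e-9
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    sub_factor = symbolize(rng.normal(size=1000), bins=4)
    main_factor = symbolize(rng.normal(0.5, 1.2, size=1000), bins=4)
    print("KL(sub || main) =", round(kl_divergence(sub_factor, main_factor, 4), 4))
```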

  6. A scalable and continuous-upgradable optical wireless and wired convergent access network.

    Science.gov (United States)

    Sung, J Y; Cheng, K T; Chow, C W; Yeh, C H; Pan, C-L

    2014-06-02

In this work, a scalable and continuously upgradable convergent optical access network is proposed. By using a multi-wavelength coherent comb source and a programmable waveshaper at the central office (CO), optical millimeter-wave (mm-wave) signals of different frequencies (from baseband to >100 GHz) can be generated. Hence, it provides a scalable and continuously upgradable solution for end-users who need 60 GHz wireless services now and >100 GHz wireless services in the future. During an upgrade, the user only needs to upgrade their optical networking unit (ONU). A programmable waveshaper is used to select the suitable optical tones with a wavelength separation equal to the desired mm-wave frequency, while the CO remains intact. The centralized characteristics of the proposed system make it easy to add any new service and end-user. The centralized control of the wavelength makes the system more stable. A wired data rate of 17.45 Gb/s and a W-band wireless data rate of up to 3.36 Gb/s were demonstrated after transmission over 40 km of single-mode fiber (SMF).

  7. Novel flat datacenter network architecture based on scalable and flow-controlled optical switch system.

    Science.gov (United States)

    Miao, Wang; Luo, Jun; Di Lucente, Stefano; Dorren, Harm; Calabretta, Nicola

    2014-02-10

We propose and demonstrate an optical flat datacenter network based on a scalable optical switch system with optical flow control. The modular structure with distributed control results in a port-count-independent optical switch reconfiguration time. An RF-tone in-band labeling technique allowing parallel processing of the label bits ensures low-latency operation regardless of the switch port count. Hardware flow control is conducted at the optical level by re-using the label wavelength without occupying extra bandwidth, space, or network resources, which further improves the latency performance within a simple structure. Dynamic switching including multicasting operation is validated for a 4×4 system. Error-free operation of 40 Gb/s data packets has been achieved with only 1 dB penalty. The system could handle an input load up to 0.5, providing a packet loss lower than 10^-5 and an average latency of less than 500 ns when a buffer size of 16 packets is employed. Investigation of scalability also indicates that the proposed system could potentially scale up to a large port count with limited power penalty.

  8. Scalable DeNoise-and-Forward in Bidirectional Relay Networks

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Krigslund, Rasmus; Popovski, Petar

    2010-01-01

In this paper a scalable relaying scheme is proposed based on an existing concept called DeNoise-and-Forward (DNF). We call it Scalable DNF (S-DNF), and it targets the scenario with multiple communication flows through a single common relay. The idea of the scheme is to combine packets at the relay in order to save transmissions. To ensure decodability at the end-nodes, a priori information about the content of the combined packets must be available. This is gathered during the initial transmissions to the relay. The trade-off between decodability and the number of necessary transmissions is analysed...
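
A toy illustration of why a priori information makes the combined packets decodable; real DNF combines (de)noised constellation points, so the bitwise XOR used below is only a stand-in for the combining operation:

```python
# The relay broadcasts a bitwise combination of two packets, and each end node
# recovers the other's packet using its own packet as side information.
def combine(pkt_a: bytes, pkt_b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(pkt_a, pkt_b))

def decode(combined: bytes, own_packet: bytes) -> bytes:
    # XOR is its own inverse, so combining again strips out the known packet.
    return combine(combined, own_packet)

if __name__ == "__main__":
    a, b = b"hello", b"world"
    relayed = combine(a, b)          # one broadcast instead of two unicasts
    print(decode(relayed, a))        # node A recovers b"world"
    print(decode(relayed, b))        # node B recovers b"hello"
```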

  9. Scalable Lunar Surface Networks and Adaptive Orbit Access, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Innovative network architecture, protocols, and algorithms are proposed for both lunar surface networks and orbit access networks. Firstly, an overlaying...

  10. The Solar Umbrella: A Low-cost Demonstration of Scalable Space Based Solar Power

    Science.gov (United States)

    Contreras, Michael T.; Trease, Brian P.; Sherwood, Brent

    2013-01-01

    Within the past decade, the Space Solar Power (SSP) community has seen an influx of stakeholders willing to entertain the SSP prospect of potentially boundless, base-load solar energy. Interested parties affiliated with the Department of Defense (DoD), the private sector, and various international entities have all agreed that while the benefits of SSP are tremendous and potentially profitable, the risk associated with developing an efficient end to end SSP harvesting system is still very high. In an effort to reduce the implementation risk for future SSP architectures, this study proposes a system level design that is both low-cost and seeks to demonstrate the furthest transmission of wireless power to date. The overall concept is presented and each subsystem is explained in detail with best estimates of current implementable technologies. Basic cost models were constructed based on input from JPL subject matter experts and assume that the technology demonstration would be carried out by a federally funded entity. The main thrust of the architecture is to demonstrate that a usable amount of solar power can be safely and reliably transmitted from space to the Earth's surface; however, maximum power scalability limits and their cost implications are discussed.

  11. A Scalable, Timing-Safe, Network-on-Chip Architecture with an Integrated Clock Distribution Method

    DEFF Research Database (Denmark)

    Bjerregaard, Tobias; Stensgaard, Mikkel Bystrup; Sparsø, Jens

    2007-01-01

Growing system sizes together with increasing performance variability are making globally synchronous operation hard to realize. Mesochronous clocking constitutes a possible solution to the problems faced. The most fundamental of the problems faced when communicating between mesochronously clocked regions concerns the possibility of data corruption caused by metastability. This paper presents an integrated communication and mesochronous clocking strategy, which avoids timing-related errors while maintaining a globally synchronous system perspective. The architecture is scalable, as timing integrity is based purely on local observations. It is demonstrated with a 90 nm CMOS standard cell network-on-chip design which implements completely timing-safe, global communication in a modular system...

  12. Scalable and practical multi-objective distribution network expansion planning

    NARCIS (Netherlands)

    Luong, N.H.; Grond, M.O.W.; Poutré, La J.A.; Bosman, P.A.N.

    2015-01-01

    We formulate the distribution network expansion planning (DNEP) problem as a multi-objective optimization (MOO) problem with different objectives that distribution network operators (DNOs) would typically like to consider during decision making processes for expanding their networks. Objectives are

  13. Scalable Video Streaming in Wireless Mesh Networks for Education

    Science.gov (United States)

    Liu, Yan; Wang, Xinheng; Zhao, Liqiang

    2011-01-01

    In this paper, a video streaming system for education based on a wireless mesh network is proposed. A wireless mesh network is a self-organizing, self-managing and reliable intelligent network, which allows educators to deploy a network quickly. Video streaming plays an important role in this system for multimedia data transmission. This new…

  14. A scalable computational framework for establishing long-term behavior of stochastic reaction networks.

    Directory of Open Access Journals (Sweden)

    Ankit Gupta

    2014-06-01

    Full Text Available Reaction networks are systems in which the populations of a finite number of species evolve through predefined interactions. Such networks are found as modeling tools in many biological disciplines such as biochemistry, ecology, epidemiology, immunology, systems biology and synthetic biology. It is now well-established that, for small population sizes, stochastic models for biochemical reaction networks are necessary to capture randomness in the interactions. The tools for analyzing such models, however, still lag far behind their deterministic counterparts. In this paper, we bridge this gap by developing a constructive framework for examining the long-term behavior and stability properties of the reaction dynamics in a stochastic setting. In particular, we address the problems of determining ergodicity of the reaction dynamics, which is analogous to having a globally attracting fixed point for deterministic dynamics. We also examine when the statistical moments of the underlying process remain bounded with time and when they converge to their steady state values. The framework we develop relies on a blend of ideas from probability theory, linear algebra and optimization theory. We demonstrate that the stability properties of a wide class of biological networks can be assessed from our sufficient theoretical conditions that can be recast as efficient and scalable linear programs, well-known for their tractability. It is notably shown that the computational complexity is often linear in the number of species. We illustrate the validity, the efficiency and the wide applicability of our results on several reaction networks arising in biochemistry, systems biology, epidemiology and ecology. The biological implications of the results as well as an example of a non-ergodic biological network are also discussed.

  15. Scalability analysis of large-scale LoRaWAN networks in ns-3

    OpenAIRE

    Abeele, Floris Van den; Haxhibeqiri, Jetmir; Moerman, Ingrid; Hoebeke, Jeroen

    2017-01-01

    As LoRaWAN networks are actively being deployed in the field, it is important to comprehend the limitations of this Low Power Wide Area Network technology. Previous work has raised questions in terms of the scalability and capacity of LoRaWAN networks as the number of end devices grows to hundreds or thousands per gateway. Some works have modeled LoRaWAN networks as pure ALOHA networks, which fails to capture important characteristics such as the capture effect and the effects of interference...
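
A back-of-the-envelope sketch of the pure-ALOHA view of LoRaWAN mentioned above, with assumed per-device traffic and airtime values, and ignoring the capture effect and interference characteristics the authors point out:

```python
# With N devices each sending lam packets/s of airtime T, the offered load is
# G = N*lam*T and the expected fraction of collision-free packets in pure ALOHA
# is exp(-2G). The airtime and traffic figures below are illustrative only.
import math

def aloha_success_probability(n_devices: int, pkts_per_s: float, airtime_s: float) -> float:
    g = n_devices * pkts_per_s * airtime_s      # normalized offered load
    return math.exp(-2.0 * g)                   # vulnerability window of 2T

if __name__ == "__main__":
    airtime = 0.37          # assumed payload airtime for a mid-range spreading factor
    rate = 1.0 / 600.0      # one uplink every 10 minutes per device
    for n in (100, 500, 1000, 5000):
        print(n, "devices ->", round(aloha_success_probability(n, rate, airtime), 3))
```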

  16. LOGISTIC NETWORK REGRESSION FOR SCALABLE ANALYSIS OF NETWORKS WITH JOINT EDGE/VERTEX DYNAMICS.

    Science.gov (United States)

    Almquist, Zack W; Butts, Carter T

    2014-08-01

    Change in group size and composition has long been an important area of research in the social sciences. Similarly, interest in interaction dynamics has a long history in sociology and social psychology. However, the effects of endogenous group change on interaction dynamics are a surprisingly understudied area. One way to explore these relationships is through social network models. Network dynamics may be viewed as a process of change in the edge structure of a network, in the vertex set on which edges are defined, or in both simultaneously. Although early studies of such processes were primarily descriptive, recent work on this topic has increasingly turned to formal statistical models. Although showing great promise, many of these modern dynamic models are computationally intensive and scale very poorly in the size of the network under study and/or the number of time points considered. Likewise, currently used models focus on edge dynamics, with little support for endogenously changing vertex sets. Here, the authors show how an existing approach based on logistic network regression can be extended to serve as a highly scalable framework for modeling large networks with dynamic vertex sets. The authors place this approach within a general dynamic exponential family (exponential-family random graph modeling) context, clarifying the assumptions underlying the framework (and providing a clear path for extensions), and they show how model assessment methods for cross-sectional networks can be extended to the dynamic case. Finally, the authors illustrate this approach on a classic data set involving interactions among windsurfers on a California beach.
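
A heavily reduced sketch of the idea, assuming hand-picked edge covariates (lagged edge state and in-degree) and a plain scikit-learn logistic regression; none of the authors' exponential-family machinery is reproduced:

```python
# Each potential edge at time t is a Bernoulli outcome modeled by logistic
# regression on covariates from time t-1, and vertex dynamics are handled by
# skipping pairs whose endpoints are absent from the vertex set.
import numpy as np
from sklearn.linear_model import LogisticRegression

def edge_dataset(adjacency_by_time, present_by_time):
    """Build (features, labels) for all ordered node pairs at times 1..T-1."""
    X, y = [], []
    for t in range(1, len(adjacency_by_time)):
        prev_a, cur_a = adjacency_by_time[t - 1], adjacency_by_time[t]
        present = present_by_time[t]
        n = cur_a.shape[0]
        for i in range(n):
            for j in range(n):
                if i == j or not (present[i] and present[j]):
                    continue           # absent vertices form no edges
                X.append([prev_a[i, j], prev_a[:, j].sum()])   # lag + in-degree
                y.append(cur_a[i, j])
    return np.array(X, dtype=float), np.array(y, dtype=int)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    T, n = 6, 15
    adj = [(rng.random((n, n)) < 0.2).astype(int) for _ in range(T)]
    present = [(rng.random(n) < 0.9) for _ in range(T)]
    X, y = edge_dataset(adj, present)
    model = LogisticRegression(max_iter=1000).fit(X, y)
    print("coefficients:", model.coef_.round(2))
```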

  17. Increasing Scalability of Researcher Network Extraction from the Web

    Science.gov (United States)

    Asada, Yohei; Matsuo, Yutaka; Ishizuka, Mitsuru

Social networks, which describe relations among people or organizations as a network, have recently attracted attention. With the help of a social network, we can analyze the structure of a community and thereby promote efficient communications within it. We investigate the problem of extracting a network of researchers from the Web, to assist efficient cooperation among researchers. Our method uses a search engine to get the co-occurrences of the names of two researchers and calculates the strength of the relation between them. Then we label the relation by analyzing the Web pages in which these two names co-occur. Research on social network extraction using search engines, such as ours, is attracting attention in Japan as well as abroad. However, the former approaches issue too many queries to search engines to extract a large-scale network. In this paper, we propose a method that filters superfluous queries and facilitates the extraction of large-scale networks. By this method we are able to extract a network of around 3,000 nodes. Our experimental results show that the proposed method reduces the number of queries significantly while preserving the quality of the network as compared to former methods.
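
A sketch of the co-occurrence strength computation, where the search_hits() function is a hypothetical stand-in for a search-engine API and the Jaccard-style coefficient is an assumed choice of measure:

```python
# The hit counts below are canned numbers so the strength computation can be
# shown end to end; in practice search_hits() would query a web search API.
HITS = {
    '"A. Researcher"': 1200,
    '"B. Scholar"': 800,
    '"A. Researcher" "B. Scholar"': 90,
}

def search_hits(query: str) -> int:
    # Hypothetical stand-in for a search-engine API call.
    return HITS.get(query, 0)

def relation_strength(name_a: str, name_b: str) -> float:
    """Jaccard coefficient of the two researchers' web presence."""
    a = search_hits(f'"{name_a}"')
    b = search_hits(f'"{name_b}"')
    ab = search_hits(f'"{name_a}" "{name_b}"')
    return ab / (a + b - ab) if (a + b - ab) > 0 else 0.0

if __name__ == "__main__":
    print(round(relation_strength("A. Researcher", "B. Scholar"), 4))
```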

  18. Edison Demonstration of Smallsat Networks (EDSN)

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA's Edison Demonstration of SmallSat Networks (EDSN) mission will launch and deploy a group of eight cubesats into a loose formation approximately 250 miles (400...

  19. Linux containers networking: performance and scalability of kernel modules

    NARCIS (Netherlands)

    Claassen, J.; Koning, R.; Grosso, P.; Oktug Badonnel, S.; Ulema, M.; Cavdar, C.; Zambenedetti Granville, L.; dos Santos, C.R.P.

    2016-01-01

    Linux container virtualisation is gaining momentum as lightweight technology to support cloud and distributed computing. Applications relying on container architectures might at times rely on inter-container communication, and container networking solutions are emerging to address this need.

  20. Scalable Wrap-Around Shuffle Exchange Network with Deflection Routing

    Science.gov (United States)

    Monacos, Steve P. (Inventor)

    1997-01-01

    The invention in one embodiment is a communication network including plural non-blocking crossbar nodes, first apparatus for connecting the nodes in a first layer of connecting links, and second apparatus for connecting links independent of the first layer, whereby each layer is connected to the other layer at each point of the nodes. Preferably, each one of the layers of connecting links corresponds to one recirculating network topology that closes in on itself.

  1. A Framework for Scalable TSV Assignment and Selection in Three-Dimensional Networks-on-Chips

    Directory of Open Access Journals (Sweden)

    Amir Charif

    2017-01-01

    Full Text Available 3D integration can greatly benefit future many-cores by enabling low-latency three-dimensional Network-on-Chip (3D-NoC topologies. However, due to high cost, low yield, and frequent failures of Through-Silicon Via (TSV, 3D-NoCs are most likely to include only a few vertical connections, resulting in incomplete topologies that pose new challenges in terms of deadlock-free routing and TSV assignment. The routers of such networks require a way to locate the nodes that have vertical connections, commonly known as elevators, and select one of them in order to be able to reach other layers when necessary. In this paper, several alternative TSV selection strategies requiring a constant amount of configurable bits per router are introduced. Each proposed solution consists of a configuration algorithm, which provides each router with the necessary information to locate the elevators, and a routing algorithm, which uses this information at runtime to route packets to an elevator. Our algorithms are compared by simulation to highlight the advantages and disadvantages of each solution under various scenarios, and hardware synthesis results demonstrate the scalability of the proposed approach and its suitability for cost-oriented designs.
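
A simplified sketch of the elevator-selection step, assuming each router holds a plain list of elevator coordinates and picks the one closest in XY hops; the paper's constant-size per-router configuration encoding is not reproduced:

```python
# Each router in an incomplete 3D mesh knows a small table of elevator
# coordinates (nodes with TSVs) and, when a packet must change layers, picks
# the elevator reachable in the fewest XY hops.
def xy_hops(a, b):
    """Manhattan distance within the 2D mesh layer."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def select_elevator(router_xy, elevators_xy):
    return min(elevators_xy, key=lambda e: xy_hops(router_xy, e))

if __name__ == "__main__":
    elevators = [(0, 0), (3, 5), (7, 2)]       # few TSV pillars in an 8x8 layer
    for router in [(1, 1), (6, 6), (5, 2)]:
        print(router, "->", select_elevator(router, elevators))
```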

  2. Scalable Spectrum Sharing Mechanism for Local Area Networks Deployment

    DEFF Research Database (Denmark)

    Da Costa, Gustavo Wagner Oliveira; Cattoni, Andrea Fabio; Kovacs, Istvan Zsolt

    2010-01-01

The availability on the market of powerful and lightweight mobile devices has led to a fast diffusion of mobile services for end users, and the trend is shifting from voice-based services to multimedia content distribution. The current access networks are, however, only able to support relatively low data rates and with limited Quality of Service (QoS). In order to extend access to high-data-rate services to wireless users, the International Telecommunication Union (ITU) established new requirements for future wireless communication technologies of up to 1 Gbps in low mobility and up to 100 Mbps in high mobility conditions. These goals can only be achieved through the use of highly Optimized Local Area (OLA) access networks, operating at low range and low power transmissions. The efficient sharing of radio resources among OLAs will be very difficult to achieve with a traditional network planning...

  3. Spatiotemporal Stochastic Modeling of IoT Enabled Cellular Networks: Scalability and Stability Analysis

    KAUST Repository

    Gharbieh, Mohammad; Elsawy, Hesham; Bader, Ahmed; Alouini, Mohamed-Slim

    2017-01-01

The Internet of Things (IoT) is large-scale by nature, which is manifested by the massive number of connected devices as well as their vast spatial existence. Cellular networks, which provide ubiquitous, reliable, and efficient wireless access, will play a fundamental role in delivering the first-mile access for the data tsunami to be generated by the IoT. However, cellular networks may have scalability problems in providing uplink connectivity to massive numbers of connected things. To characterize the scalability of the cellular uplink in the context of IoT networks, this paper develops a traffic-aware spatiotemporal mathematical model for IoT devices supported by cellular uplink connectivity. The developed model is based on stochastic geometry and queueing theory to account for the traffic requirement per IoT device, the different transmission strategies, and the mutual interference between the IoT devices. To this end, the developed model is utilized to characterize the extent to which cellular networks can accommodate IoT traffic as well as to assess and compare three different transmission strategies that incorporate a combination of transmission persistency, backoff, and power-ramping. The analysis and the results clearly illustrate the scalability problem imposed by the IoT on cellular networks and offer insights into effective scenarios for each transmission strategy.

  4. Spatiotemporal Stochastic Modeling of IoT Enabled Cellular Networks: Scalability and Stability Analysis

    KAUST Repository

    Gharbieh, Mohammad

    2017-05-02

The Internet of Things (IoT) is large-scale by nature, which is manifested by the massive number of connected devices as well as their vast spatial existence. Cellular networks, which provide ubiquitous, reliable, and efficient wireless access, will play a fundamental role in delivering the first-mile access for the data tsunami to be generated by the IoT. However, cellular networks may have scalability problems in providing uplink connectivity to massive numbers of connected things. To characterize the scalability of the cellular uplink in the context of IoT networks, this paper develops a traffic-aware spatiotemporal mathematical model for IoT devices supported by cellular uplink connectivity. The developed model is based on stochastic geometry and queueing theory to account for the traffic requirement per IoT device, the different transmission strategies, and the mutual interference between the IoT devices. To this end, the developed model is utilized to characterize the extent to which cellular networks can accommodate IoT traffic as well as to assess and compare three different transmission strategies that incorporate a combination of transmission persistency, backoff, and power-ramping. The analysis and the results clearly illustrate the scalability problem imposed by the IoT on cellular networks and offer insights into effective scenarios for each transmission strategy.

  5. Historical building monitoring using an energy-efficient scalable wireless sensor network architecture.

    Science.gov (United States)

    Capella, Juan V; Perles, Angel; Bonastre, Alberto; Serrano, Juan J

    2011-01-01

    We present a set of novel low power wireless sensor nodes designed for monitoring wooden masterpieces and historical buildings, in order to perform an early detection of pests. Although our previous star-based system configuration has been in operation for more than 13 years, it does not scale well for sensorization of large buildings or when deploying hundreds of nodes. In this paper we demonstrate the feasibility of a cluster-based dynamic-tree hierarchical Wireless Sensor Network (WSN) architecture where realistic assumptions of radio frequency data transmission are applied to cluster construction, and a mix of heterogeneous nodes are used to minimize economic cost of the whole system and maximize power saving of the leaf nodes. Simulation results show that the specialization of a fraction of the nodes by providing better antennas and some energy harvesting techniques can dramatically extend the life of the entire WSN and reduce the cost of the whole system. A demonstration of the proposed architecture with a new routing protocol and applied to termite pest detection has been implemented on a set of new nodes and should last for about 10 years, but it provides better scalability, reliability and deployment properties.

  6. Historical Building Monitoring Using an Energy-Efficient Scalable Wireless Sensor Network Architecture

    Science.gov (United States)

    Capella, Juan V.; Perles, Angel; Bonastre, Alberto; Serrano, Juan J.

    2011-01-01

    We present a set of novel low power wireless sensor nodes designed for monitoring wooden masterpieces and historical buildings, in order to perform an early detection of pests. Although our previous star-based system configuration has been in operation for more than 13 years, it does not scale well for sensorization of large buildings or when deploying hundreds of nodes. In this paper we demonstrate the feasibility of a cluster-based dynamic-tree hierarchical Wireless Sensor Network (WSN) architecture where realistic assumptions of radio frequency data transmission are applied to cluster construction, and a mix of heterogeneous nodes are used to minimize economic cost of the whole system and maximize power saving of the leaf nodes. Simulation results show that the specialization of a fraction of the nodes by providing better antennas and some energy harvesting techniques can dramatically extend the life of the entire WSN and reduce the cost of the whole system. A demonstration of the proposed architecture with a new routing protocol and applied to termite pest detection has been implemented on a set of new nodes and should last for about 10 years, but it provides better scalability, reliability and deployment properties. PMID:22346630

  7. Scalable Harmonization of Complex Networks With Local Adaptive Controllers

    Czech Academy of Sciences Publication Activity Database

    Kárný, Miroslav; Herzallah, R.

    2017-01-01

Roč. 47, č. 3 (2017), s. 394-404 ISSN 2168-2216 R&D Projects: GA ČR GA13-13502S Institutional support: RVO:67985556 Keywords: Adaptive control * Adaptive estimation * Bayes methods * Complex networks * Decentralized control * Feedback * Feedforward systems * Recursive estimation Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Statistics and probability Impact factor: 2.350, year: 2016 http://library.utia.cas.cz/separaty/2016/AS/karny-0457337.pdf

  8. Scalable Active Optical Access Network Using Variable High-Speed PLZT Optical Switch/Splitter

    Science.gov (United States)

    Ashizawa, Kunitaka; Sato, Takehiro; Tokuhashi, Kazumasa; Ishii, Daisuke; Okamoto, Satoru; Yamanaka, Naoaki; Oki, Eiji

This paper proposes a scalable active optical access network using a high-speed Plumbum Lanthanum Zirconate Titanate (PLZT) optical switch/splitter. The Active Optical Network, called ActiON, using PLZT switching technology has been presented to increase the number of subscribers and the maximum transmission distance, compared to the Passive Optical Network (PON). ActiON supports multicast slot allocation realized by running the PLZT switch elements in the splitter mode, which forces the switch to behave as an optical splitter. However, the previous ActiON creates a tradeoff between network scalability and the power loss experienced by the optical signal to each user. It does not use the optical power efficiently because the optical power is simply split 0.5/0.5, without considering the transmission distance from the OLT to each ONU. The proposed network adopts PLZT switch elements in a variable splitter mode, which controls the split ratio of the optical power according to the transmission distance from the OLT to each ONU, in addition to PLZT switch elements in the two existing modes, the switching mode and the splitter mode. The proposed network introduces flexible multicast slot allocation according to the transmission distance from the OLT to each user and the number of required users using the three modes, while keeping the advantages of ActiON, which are to support scalable and secure access services. Numerical results show that the proposed network dramatically reduces the required number of slots, supports high bandwidth efficiency services, and extends the coverage of the access network, compared to the previous ActiON, and the required computation time for selecting multicast users is less than 30 ms, which is acceptable for on-demand broadcast services.
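
A numerical sketch of the variable-split idea, assuming a typical 0.2 dB/km fiber attenuation and split fractions weighted to equalize received power across ONUs at different distances; ActiON's actual control logic is not shown:

```python
# Instead of a fixed 0.5/0.5 split, the split fractions are weighted by each
# branch's fiber loss so the power reaching every ONU is roughly equalized.
import numpy as np

def split_ratios(distances_km, atten_db_per_km=0.2):
    """Return split fractions (summing to 1) that equalize received power."""
    loss_db = atten_db_per_km * np.asarray(distances_km, dtype=float)
    weights = 10.0 ** (loss_db / 10.0)        # longer branches get more power
    return weights / weights.sum()

if __name__ == "__main__":
    distances = [5.0, 20.0, 40.0]             # km from the OLT to each ONU
    ratios = split_ratios(distances)
    # Received power relative to the splitter input, in dB, per branch:
    rx_db = 10.0 * np.log10(ratios) - 0.2 * np.asarray(distances)
    print("split fractions:", ratios.round(3))
    print("relative received power (dB):", rx_db.round(2))
```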

  9. Towards Bandwidth Scalable Transceiver Technology for Optical Metro-Access Networks

    DEFF Research Database (Denmark)

    Spolitis, Sandis; Bobrovs, Vjaceslavs; Wagner, Christoph

    2015-01-01

Massive fiber-to-the-home network deployment is creating a challenge for telecommunications network operators: an exponential increase of the power consumption at the central offices and a never-ending quest for equipment upgrades operating at higher bandwidth. In this paper, we report on a flexible signal slicing technique, which allows transmission of high-bandwidth signals via low-bandwidth electrical and optoelectrical equipment. The presented signal slicing technique is highly scalable in terms of bandwidth, which is determined by the number of slices used. In this paper, the performance of a scalable sliceable transceiver for a 1 Gbit/s non-return-to-zero (NRZ) signal sliced into two slices is presented. Digital signal processing (DSP) power consumption and latency values for the proposed sliceable transceiver technique are also discussed. In this research, post-FEC error-free transmission with 7% overhead has...

  10. Scalability analysis of the synchronizability for ring or chain networks with dense clusters

    International Nuclear Information System (INIS)

    Lu, Jun-An; Zhang, Yong; Chen, Juan; Lü, Jinhu

    2014-01-01

    It is well known that most real-world complex networks, such as the Internet and the World Wide Web, are evolving networks. An interesting fundamental question is: how do some important functions or dynamical behaviors of complex networks evolve with increasing network scale? This paper aims at investigating the scalability of the synchronizability for ring or chain networks with dense clusters as the network size increases. We discover some interesting phenomena as follows: (i) the synchronizability of ring or chain networks with clusters decreases with increasing network scale regardless of the inner structures of all communities; (ii) for the same network scale, the network synchronizability decreases more quickly with increasing number of cluster blocks than with increasing number of nodes within the cluster block; (iii) the number of rings or chains has a much more significant influence on the network synchronizability than the size of the rings or chains. Our results indicate that network synchronizability can be maintained with increasing network scale by avoiding ring and chain structures. (paper)
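
A small numerical sketch of the quantity typically behind such synchronizability statements, the Laplacian eigenratio λ_N/λ_2, computed for an assumed ring-of-clusters construction (a larger eigenratio means harder to synchronize); the paper's exact setup may differ:

```python
# Build a ring of fully connected clusters joined by single links, then compute
# the eigenratio of the graph Laplacian as the number of clusters grows.
import numpy as np

def ring_of_clusters(num_clusters, cluster_size):
    n = num_clusters * cluster_size
    a = np.zeros((n, n))
    for c in range(num_clusters):
        lo, hi = c * cluster_size, (c + 1) * cluster_size
        a[lo:hi, lo:hi] = 1.0                       # fully connected cluster
        np.fill_diagonal(a[lo:hi, lo:hi], 0.0)
        nxt = ((c + 1) % num_clusters) * cluster_size
        a[hi - 1, nxt] = a[nxt, hi - 1] = 1.0       # single link to the next cluster
    return a

def eigenratio(adjacency):
    lap = np.diag(adjacency.sum(axis=1)) - adjacency
    eig = np.sort(np.linalg.eigvalsh(lap))
    return eig[-1] / eig[1]

if __name__ == "__main__":
    for k in (4, 8, 16, 32):
        r = eigenratio(ring_of_clusters(k, cluster_size=6))
        print(k, "clusters -> eigenratio", round(r, 1))
```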

  11. Scalable and Fully Distributed Localization in Large-Scale Sensor Networks

    Directory of Open Access Journals (Sweden)

    Miao Jin

    2017-06-01

    Full Text Available This work proposes a novel connectivity-based localization algorithm, well suitable for large-scale sensor networks with complex shapes and a non-uniform nodal distribution. In contrast to current state-of-the-art connectivity-based localization methods, the proposed algorithm is highly scalable with linear computation and communication costs with respect to the size of the network; and fully distributed where each node only needs the information of its neighbors without cumbersome partitioning and merging process. The algorithm is theoretically guaranteed and numerically stable. Moreover, the algorithm can be readily extended to the localization of networks with a one-hop transmission range distance measurement, and the propagation of the measurement error at one sensor node is limited within a small area of the network around the node. Extensive simulations and comparison with other methods under various representative network settings are carried out, showing the superior performance of the proposed algorithm.

  12. Scalable and reusable emulator for evaluating the performance of SS7 networks

    Science.gov (United States)

    Lazar, Aurel A.; Tseng, Kent H.; Lim, Koon Seng; Choe, Winston

    1994-04-01

    A scalable and reusable emulator was designed and implemented for studying the behavior of SS7 networks. The emulator design was largely based on public domain software. It was developed on top of an environment supported by PVM, the Parallel Virtual Machine, and managed by OSIMIS-the OSI Management Information Service platform. The emulator runs on top of a commercially available ATM LAN interconnecting engineering workstations. As a case study for evaluating the emulator, the behavior of the Singapore National SS7 Network under fault and unbalanced loading conditions was investigated.

  13. Scalable Video Streaming Relay for Smart Mobile Devices in Wireless Networks.

    Science.gov (United States)

    Kwon, Dongwoo; Je, Huigwang; Kim, Hyeonwoo; Ju, Hongtaek; An, Donghyeok

    2016-01-01

    Recently, smart mobile devices and wireless communication technologies such as WiFi, third generation (3G), and long-term evolution (LTE) have been rapidly deployed. Many smart mobile device users can access the Internet wirelessly, which has increased mobile traffic. In 2014, more than half of the mobile traffic around the world was devoted to satisfying the increased demand for the video streaming. In this paper, we propose a scalable video streaming relay scheme. Because many collisions degrade the scalability of video streaming, we first separate networks to prevent excessive contention between devices. In addition, the member device controls the video download rate in order to adapt to video playback. If the data are sufficiently buffered, the member device stops the download. If not, it requests additional video data. We implemented apps to evaluate the proposed scheme and conducted experiments with smart mobile devices. The results showed that our scheme improves the scalability of video streaming in a wireless local area network (WLAN).
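
    The buffered-playback control described above can be pictured as a simple two-threshold loop; the sketch below is a schematic reading of that behavior with invented thresholds and rates, not the implementation used in the paper's apps.

        # Schematic sketch of the described download-rate control (thresholds and rates are
        # invented): a member device keeps requesting video chunks only while its playback
        # buffer is below a high-water mark, and resumes once playback drains it.
        HIGH_WATER_S = 30.0   # stop requesting when >= 30 s of video is buffered (assumed)
        LOW_WATER_S = 10.0    # resume requesting when the buffer falls below 10 s (assumed)

        def control_step(buffered_s, downloading):
            """Return whether the device should be downloading after this step."""
            if downloading and buffered_s >= HIGH_WATER_S:
                return False          # enough data buffered: pause the relay download
            if not downloading and buffered_s < LOW_WATER_S:
                return True           # buffer running low: request additional video data
            return downloading

        if __name__ == "__main__":
            buffered, downloading = 0.0, True
            for _ in range(60):                      # one simulated second per iteration
                if downloading:
                    buffered += 2.0                  # assumed download rate: 2 s of video per second
                buffered = max(0.0, buffered - 1.0)  # playback consumes 1 s of video per second
                downloading = control_step(buffered, downloading)
            print("buffer after 60 s:", buffered, "downloading:", downloading)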

  14. Scalable Video Streaming Relay for Smart Mobile Devices in Wireless Networks

    Science.gov (United States)

    Kwon, Dongwoo; Je, Huigwang; Kim, Hyeonwoo; Ju, Hongtaek; An, Donghyeok

    2016-01-01

    Recently, smart mobile devices and wireless communication technologies such as WiFi, third generation (3G), and long-term evolution (LTE) have been rapidly deployed. Many smart mobile device users can access the Internet wirelessly, which has increased mobile traffic. In 2014, more than half of the mobile traffic around the world was devoted to satisfying the increased demand for the video streaming. In this paper, we propose a scalable video streaming relay scheme. Because many collisions degrade the scalability of video streaming, we first separate networks to prevent excessive contention between devices. In addition, the member device controls the video download rate in order to adapt to video playback. If the data are sufficiently buffered, the member device stops the download. If not, it requests additional video data. We implemented apps to evaluate the proposed scheme and conducted experiments with smart mobile devices. The results showed that our scheme improves the scalability of video streaming in a wireless local area network (WLAN). PMID:27907113

  15. Resource Allocation for OFDMA-Based Cognitive Radio Networks with Application to H.264 Scalable Video Transmission

    Directory of Open Access Journals (Sweden)

    Coon, Justin P.

    2011-01-01

    Full Text Available Resource allocation schemes for orthogonal frequency division multiple access- (OFDMA-) based cognitive radio (CR) networks that impose minimum and maximum rate constraints are considered. To demonstrate the practical application of such systems, we consider the transmission of scalable video sequences. An integer programming (IP) formulation of the problem is presented, which provides the optimal solution when solved using common discrete programming methods. Due to the computational complexity involved in such an approach and its unsuitability for dynamic cognitive radio environments, we propose to use the method of lift-and-project to obtain a stronger formulation for the resource allocation problem such that the integrality gap between the integer program and its linear relaxation is reduced. A simple branching operation is then performed that eliminates any noninteger values at the output of the linear program solvers. Simulation results demonstrate that this simple technique results in solutions very close to the optimum.

  16. Use of modeling to assess the scalability of Ethernet networks for the ATLAS second level trigger

    CERN Document Server

    Korcyl, K; Dobinson, Robert W; Saka, F

    1999-01-01

    The second level trigger of LHC's ATLAS experiment has to perform real-time analyses on detector data at 10 GBytes/s. A switching network is required to connect more than a thousand read-out buffers to about a thousand processors that execute the trigger algorithm. We are investigating the use of Ethernet technology to build this large switching network. Ethernet is attractive because of the huge installed base, competitive prices, and the recent introduction of the high-performance Gigabit version. Due to the network's size it has to be constructed as a layered structure of smaller units. To assess the scalability of such a structure we evaluated a single switch unit.

  17. The NIDS Cluster: Scalable, Stateful Network Intrusion Detection on Commodity Hardware

    Energy Technology Data Exchange (ETDEWEB)

    Tierney, Brian L; Vallentin, Matthias; Sommer, Robin; Lee, Jason; Leres, Craig; Paxson, Vern; Tierney, Brian

    2007-09-19

    In this work we present a NIDS cluster as a scalable solution for realizing high-performance, stateful network intrusion detection on commodity hardware. The design addresses three challenges: (i) distributing traffic evenly across an extensible set of analysis nodes in a fashion that minimizes the communication required for coordination, (ii) adapting the NIDS's operation to support coordinating its low-level analysis rather than just aggregating alerts; and (iii) validating that the cluster produces sound results. Prototypes of our NIDS cluster now operate at the Lawrence Berkeley National Laboratory and the University of California at Berkeley. In both environments the clusters greatly enhance the power of the network security monitoring.
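
    A central ingredient of such a cluster is spreading traffic so that all packets of a connection land on the same analysis node with essentially no coordination. A symmetric hash over the flow 5-tuple, sketched below as an assumption rather than the cluster's exact front-end scheme, is one common way to achieve this.

        # Hedged sketch: symmetric per-flow hashing, one common way to spread traffic evenly
        # across NIDS analysis nodes while keeping both directions of a connection on the
        # same node. This is an illustration, not the actual front-end used by the cluster.
        import hashlib

        N_NODES = 8   # assumed number of analysis nodes

        def node_for_packet(src_ip, dst_ip, src_port, dst_port, proto):
            # Sort the endpoints so that A->B and B->A hash to the same analysis node.
            a, b = sorted([(src_ip, src_port), (dst_ip, dst_port)])
            key = f"{a[0]}:{a[1]}|{b[0]}:{b[1]}|{proto}".encode()
            return int.from_bytes(hashlib.sha1(key).digest()[:4], "big") % N_NODES

        if __name__ == "__main__":
            fwd = node_for_packet("10.0.0.1", "10.0.0.2", 40000, 443, "tcp")
            rev = node_for_packet("10.0.0.2", "10.0.0.1", 443, 40000, "tcp")
            assert fwd == rev                      # both directions analyzed on the same node
            print("flow assigned to analysis node", fwd)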

  18. Tunable optical frequency comb enabled scalable and cost-effective multiuser orthogonal frequency-division multiple access passive optical network with source-free optical network units.

    Science.gov (United States)

    Chen, Chen; Zhang, Chongfu; Liu, Deming; Qiu, Kun; Liu, Shuang

    2012-10-01

    We propose and experimentally demonstrate a multiuser orthogonal frequency-division multiple access passive optical network (OFDMA-PON) with source-free optical network units (ONUs), enabled by tunable optical frequency comb generation technology. By cascading a phase modulator (PM) and an intensity modulator and dynamically controlling the peak-to-peak voltage of a PM-driven signal, a tunable optical frequency comb source can be generated. It is utilized to assist the configuration of an OFDMA-PON enhanced with multiple source-free ONUs, where simultaneous and interference-free multiuser upstream transmission over a single wavelength can be efficiently supported. The proposed multiuser OFDMA-PON is scalable and cost effective, and its feasibility is successfully verified by experiment.

  19. Leveraging Fog Computing for Scalable IoT Datacenter Using Spine-Leaf Network Topology

    Directory of Open Access Journals (Sweden)

    K. C. Okafor

    2017-01-01

    Full Text Available With the Internet of Everything (IoE) paradigm that gathers almost every object online, huge traffic workload, bandwidth, security, and latency issues remain a concern for IoT users in today’s world. Besides, the scalability requirements found in the current IoT data processing (in the cloud) can hardly be used for applications such as assisted living systems, Big Data analytic solutions, and smart embedded applications. This paper proposes an extended cloud IoT model that optimizes bandwidth while allowing edge devices (Internet-connected objects/devices) to smartly process data without relying on a cloud network. Its integration with a massively scaled spine-leaf (SL) network topology is highlighted. This is contrasted with a legacy multitier layered architecture housing network services and routing policies. The perspective offered in this paper explains how low-latency and bandwidth intensive applications can transfer data to the cloud (and then back to the edge application) without impacting QoS performance. Consequently, a spine-leaf Fog computing network (SL-FCN) is presented for reducing latency and network congestion issues in a highly distributed and multilayer virtualized IoT datacenter environment. This approach is cost-effective as it maximizes bandwidth while maintaining redundancy and resiliency against failures in mission critical applications.

  20. A Multi-Hop Clustering Mechanism for Scalable IoT Networks.

    Science.gov (United States)

    Sung, Yoonyoung; Lee, Sookyoung; Lee, Meejeong

    2018-03-23

    It is expected that up to 26 billion Internet of Things (IoT) devices equipped with sensors and wireless communication capabilities will be connected to the Internet by 2020 for various purposes. With a large-scale IoT network, having each node connected to the Internet with an individual connection may face serious scalability issues. The scalability problem of the IoT network may be alleviated by grouping the nodes of the IoT network into clusters and having a representative node in each cluster connect to the Internet on behalf of the other nodes in the cluster instead of having a per-node Internet connection and communication. In this paper, we propose a multi-hop clustering mechanism for IoT networks to minimize the number of required Internet connections. Specifically, the objective of the proposed mechanism is to select the minimum number of coordinators, which take the role of a representative node for the cluster, i.e., having the Internet connection on behalf of the rest of the nodes in the cluster, and to map a partition of the IoT nodes onto the selected set of coordinators to minimize the total distance between the nodes and their respective coordinator under a certain constraint in terms of maximum hop count between the IoT nodes and their respective coordinator. Since this problem can be mapped into a set cover problem, which is known to be NP-hard, we pursue a heuristic approach to solve the problem and analyze the complexity of the proposed solution. Through a set of experiments with varying parameters, the proposed scheme shows a 63-87.3% reduction of the Internet connections depending on the number of IoT nodes, while that of the optimal solution is 65.6-89.9% in a small-scale network. Moreover, it is shown that the performance characteristics of the proposed mechanism coincide with the expected performance characteristics of the optimal solution in a large-scale network.
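
    Because the coordinator-selection problem maps to set cover, the classic greedy rule, repeatedly picking the candidate that covers the most still-uncovered nodes within the hop bound, is a natural heuristic; the sketch below illustrates that generic approach and is not the paper's exact mechanism.

        # Illustrative greedy set-cover heuristic (an assumption about the general approach,
        # not the paper's exact algorithm): pick coordinators so every IoT node is within
        # MAX_HOPS of some coordinator, preferring candidates that cover many uncovered nodes.
        import networkx as nx

        MAX_HOPS = 2   # assumed hop-count constraint

        def select_coordinators(g, max_hops=MAX_HOPS):
            cover = {v: set(nx.single_source_shortest_path_length(g, v, cutoff=max_hops))
                     for v in g}                          # nodes each candidate could serve
            uncovered, assignment = set(g.nodes), {}
            while uncovered:
                best = max(g.nodes, key=lambda v: len(cover[v] & uncovered))
                for u in cover[best] & uncovered:         # map newly covered nodes to it
                    assignment[u] = best
                uncovered -= cover[best]
            return assignment                             # node -> its coordinator

        if __name__ == "__main__":
            g = nx.random_geometric_graph(200, 0.12, seed=1)
            coordinators = set(select_coordinators(g).values())
            print(f"{len(coordinators)} coordinators for {g.number_of_nodes()} nodes")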

  1. All-optical packet/circuit switching-based data center network for enhanced scalability, latency and throughput

    NARCIS (Netherlands)

    Perelló, J.; Spadaro, S.; Ricciardi, S.; Careglio, D.; Peng, S.; Nejabati, R.; Zervas, G.; Simeonidou, D.; Predieri, A.; Biancani, M.; Dorren, H.J.S.; Di Lucente, S.; Luo, J.; Calabretta, N.; Bernini, G.; Ciulli, N.; Sancho, J.C.; Iordache, S.; Farreras, M.; Becerra, Y.; Liou, C.; Hussain, I.; Yin, Y.; Liu, L.; Proietti, R.

    2013-01-01

    Applications running inside data centers are enabled through the cooperation of thousands of servers arranged in racks and interconnected together through the data center network. Current DCN architectures based on electronic devices are neither scalable to face the massive growth of DCs, nor

  2. Scalable Approach To Construct Free-Standing and Flexible Carbon Networks for Lithium–Sulfur Battery

    KAUST Repository

    Li, Mengliu

    2017-02-21

    Reconstructing carbon nanomaterials (e.g., fullerene, carbon nanotubes (CNTs), and graphene) to multidimensional networks with hierarchical structure is a critical step in exploring their applications. Herein, a sacrificial template method by casting strategy is developed to prepare highly flexible and free-standing carbon film consisting of CNTs, graphene, or both. The scalable size, ultralight and binder-free characteristics, as well as the tunable process/property are promising for their large-scale applications, such as utilizing as interlayers in lithium-sulfur battery. The capability of holding polysulfides (i.e., suppressing the sulfur diffusion) for the networks made from CNTs, graphene, or their mixture is pronounced, among which CNTs are the best. The diffusion process of polysulfides can be visualized in a specially designed glass tube battery. X-ray photoelectron spectroscopy analysis of discharged electrodes was performed to characterize the species in electrodes. A detailed analysis of lithium diffusion constant, electrochemical impedance, and elementary distribution of sulfur in electrodes has been performed to further illustrate the differences of different carbon interlayers for Li-S batteries. The proposed simple and enlargeable production of carbon-based networks may facilitate their applications in battery industry even as a flexible cathode directly. The versatile and reconstructive strategy is extendable to prepare other flexible films and/or membranes for wider applications.

  3. Information Operations Innovation Network (IOIN) Demonstration

    National Research Council Canada - National Science Library

    Choo, Vic; Scheiderich, Louis

    2006-01-01

    ...; and Supplement existing/future network defense tools with additional capabilities. The actual software packages used for this effort include VIAasst, VisAlert, Flexviewer, Event Correlation for Cyber Attack Recognition (ECCARS...

  4. Visual Communications for Heterogeneous Networks/Visually Optimized Scalable Image Compression. Final Report for September 1, 1995 - February 28, 2002

    Energy Technology Data Exchange (ETDEWEB)

    Hemami, S. S.

    2003-06-03

    The authors developed image and video compression algorithms that provide scalability, reconstructibility, and network adaptivity, and developed compression and quantization strategies that are visually optimal at all bit rates. The goal of this research is to enable reliable "universal access" to visual communications over the National Information Infrastructure (NII). All users, regardless of their individual network connection bandwidths, qualities-of-service, or terminal capabilities, should have the ability to access still images, video clips, and multimedia information services, and to use interactive visual communications services. To do so requires special capabilities for image and video compression algorithms: scalability, reconstructibility, and network adaptivity. Scalability allows an information service to provide visual information at many rates, without requiring additional compression or storage after the stream has been compressed the first time. Reconstructibility allows reliable visual communications over an imperfect network. Network adaptivity permits real-time modification of compression parameters to adjust to changing network conditions. Furthermore, to optimize the efficiency of the compression algorithms, they should be visually optimal, where each bit expended reduces the visual distortion. Visual optimality is achieved through first extensive experimentation to quantify human sensitivity to supra-threshold compression artifacts and then incorporation of these experimental results into quantization strategies and compression algorithms.

  5. Experimental demonstration of spectrum-sliced elastic optical path network (SLICE).

    Science.gov (United States)

    Kozicki, Bartłomiej; Takara, Hidehiko; Tsukishima, Yukio; Yoshimatsu, Toshihide; Yonenaga, Kazushige; Jinno, Masahiko

    2010-10-11

    We describe experimental demonstration of spectrum-sliced elastic optical path network (SLICE) architecture. We employ optical orthogonal frequency-division multiplexing (OFDM) modulation format and bandwidth-variable optical cross-connects (OXC) to generate, transmit and receive optical paths with bandwidths of up to 1 Tb/s. We experimentally demonstrate elastic optical path setup and spectrally-efficient transmission of multiple channels with bit rates ranging from 40 to 140 Gb/s between six nodes of a mesh network. We show dynamic bandwidth scalability for optical paths with bit rates of 40 to 440 Gb/s. Moreover, we demonstrate multihop transmission of a 1 Tb/s optical path over 400 km of standard single-mode fiber (SMF). Finally, we investigate the filtering properties and the required guard band width for spectrally-efficient allocation of optical paths in SLICE.

  6. vDNN: Virtualized Deep Neural Networks for Scalable, Memory-Efficient Neural Network Design

    OpenAIRE

    Rhu, Minsoo; Gimelshein, Natalia; Clemons, Jason; Zulfiqar, Arslan; Keckler, Stephen W.

    2016-01-01

    The most widely used machine learning frameworks require users to carefully tune their memory usage so that the deep neural network (DNN) fits into the DRAM capacity of a GPU. This restriction hampers a researcher's flexibility to study different machine learning algorithms, forcing them to either use a less desirable network architecture or parallelize the processing across multiple GPUs. We propose a runtime memory manager that virtualizes the memory usage of DNNs such that both GPU and CPU...

  7. Scalable rule-based modelling of allosteric proteins and biochemical networks.

    Directory of Open Access Journals (Sweden)

    Julien F Ollivier

    2010-11-01

    Full Text Available Much of the complexity of biochemical networks comes from the information-processing abilities of allosteric proteins, be they receptors, ion-channels, signalling molecules or transcription factors. An allosteric protein can be uniquely regulated by each combination of input molecules that it binds. This "regulatory complexity" causes a combinatorial increase in the number of parameters required to fit experimental data as the number of protein interactions increases. It therefore challenges the creation, updating, and re-use of biochemical models. Here, we propose a rule-based modelling framework that exploits the intrinsic modularity of protein structure to address regulatory complexity. Rather than treating proteins as "black boxes", we model their hierarchical structure and, as conformational changes, internal dynamics. By modelling the regulation of allosteric proteins through these conformational changes, we often decrease the number of parameters required to fit data, and so reduce over-fitting and improve the predictive power of a model. Our method is thermodynamically grounded, imposes detailed balance, and also includes molecular cross-talk and the background activity of enzymes. We use our Allosteric Network Compiler to examine how allostery can facilitate macromolecular assembly and how competitive ligands can change the observed cooperativity of an allosteric protein. We also develop a parsimonious model of G protein-coupled receptors that explains functional selectivity and can predict the rank order of potency of agonists acting through a receptor. Our methodology should provide a basis for scalable, modular and executable modelling of biochemical networks in systems and synthetic biology.

  8. Broadband and scalable mobile satellite communication system for future access networks

    Science.gov (United States)

    Ohata, Kohei; Kobayashi, Kiyoshi; Nakahira, Katsuya; Ueba, Masazumi

    2005-07-01

    Due to the recent market trends, NTT has begun research into next generation satellite communication systems, such as broadband and scalable mobile communication systems. One service application objective is to provide broadband Internet access for transportation systems, temporal broadband access networks and telemetries to remote areas. While these are niche markets the total amount of capacity should be significant. We set a 1-Gb/s total transmission capacity as our goal. Our key concern is the system cost, which means that the system should be unified system with diversified services and not tailored for each application. As satellites account for a large portion of the total system cost, we set the target satellite size as a small, one-ton class dry mass with a 2-kW class payload power. In addition to the payload power and weight, the mobile satellite's frequency band is extremely limited. Therefore, we need to develop innovative technologies that will reduce the weight and maximize spectrum and power efficiency. Another challenge is the need for the system to handle up to 50 dB and a wide data rate range of other applications. This paper describes the key communication system technologies; the frequency reuse strategy, multiplexing scheme, resource allocation scheme, and QoS management algorithm to ensure excellent spectrum efficiency and support a variety of services and quality requirements in the mobile environment.

  9. Unsupervised Scalable Statistical Method for Identifying Influential Users in Online Social Networks.

    Science.gov (United States)

    Azcorra, A; Chiroque, L F; Cuevas, R; Fernández Anta, A; Laniado, H; Lillo, R E; Romo, J; Sguera, C

    2018-05-03

    Billions of users interact intensively every day via Online Social Networks (OSNs) such as Facebook, Twitter, or Google+. This makes OSNs an invaluable source of information, and channel of actuation, for sectors like advertising, marketing, or politics. To get the most out of OSNs, analysts need to identify influential users that can be leveraged for promoting products, distributing messages, or improving the image of companies. In this report we propose a new unsupervised method, Massive Unsupervised Outlier Detection (MUOD), based on outlier detection, for providing support in the identification of influential users. MUOD is scalable, and can hence be used in large OSNs. Moreover, it labels the outliers as shape, magnitude, or amplitude outliers, depending on their features. This allows classifying the outlier users into multiple different classes, which are likely to include different types of influential users. Applying MUOD to a subset of roughly 400 million Google+ users has allowed us to automatically identify and discriminate sets of outlier users, which present features associated with different definitions of influential users, such as the capacity to attract engagement, the capacity to attract a large number of followers, or high infection capacity.
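
    One loose way to picture the shape/magnitude/amplitude labelling is to score each user's activity curve against the population mean curve via correlation, regression slope, and intercept, and flag extreme scores; the sketch below is a simplified illustration of that idea, not the MUOD implementation.

        # Simplified illustration (not the MUOD implementation): score each user's activity
        # curve against the population mean curve with three indices, correlation (shape),
        # regression slope (amplitude) and intercept (magnitude), and flag extreme scores.
        import numpy as np

        def indices(curves):
            """curves: (n_users, n_timepoints). Returns (n_users, 3) = shape, amplitude, magnitude."""
            ref = curves.mean(axis=0)
            out = np.empty((curves.shape[0], 3))
            for i, c in enumerate(curves):
                shape = 1.0 - np.corrcoef(c, ref)[0, 1]   # low correlation -> shape outlier
                slope, intercept = np.polyfit(ref, c, 1)  # fit c ~ slope * ref + intercept
                out[i] = (shape, abs(slope - 1.0), abs(intercept))
            return out

        def flag_outliers(scores, z=3.0):
            mu, sd = scores.mean(axis=0), scores.std(axis=0)
            return (scores - mu) / sd > z                 # boolean (n_users, 3) flags

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            t = np.linspace(0.0, 1.0, 50)
            curves = np.sin(2 * np.pi * t) + rng.normal(scale=0.1, size=(1000, 50))
            curves[0] = 5.0 * np.sin(2 * np.pi * t)       # plant an amplitude outlier
            print("user 0 flags (shape, amplitude, magnitude):", flag_outliers(indices(curves))[0])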

  10. A Scalable QoS-Aware VoD Resource Sharing Scheme for Next Generation Networks

    Science.gov (United States)

    Huang, Chenn-Jung; Luo, Yun-Cheng; Chen, Chun-Hua; Hu, Kai-Wen

    In the network-aware concept, applications are aware of network conditions and are adaptable to the varying environment to achieve acceptable and predictable performance. In this work, a solution for video-on-demand service that integrates wireless and wired networks by using network-aware concepts is proposed to reduce the blocking probability and dropping probability of mobile requests. A fuzzy logic inference system is employed to select appropriate cache relay nodes to cache published video streams and distribute them to different peers through a service-oriented architecture (SOA). A SIP-based control protocol and the IMS standard are adopted to ensure the possibility of heterogeneous communication and provide a framework for delivering real-time multimedia services over an IP-based network to ensure interoperability, roaming, and end-to-end session management. The experimental results demonstrate the effectiveness and practicability of the proposed work.

  11. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers

    Directory of Open Access Journals (Sweden)

    Jakob Jordan

    2018-02-01

    Full Text Available State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.

  12. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers.

    Science.gov (United States)

    Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne

    2018-01-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.

  13. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers

    Science.gov (United States)

    Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne

    2018-01-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems. PMID:29503613

  14. Hardware demonstration of high-speed networks for satellite applications.

    Energy Technology Data Exchange (ETDEWEB)

    Donaldson, Jonathon W.; Lee, David S.

    2008-09-01

    This report documents the implementation results of a hardware demonstration utilizing the Serial RapidIO™ and SpaceWire protocols that was funded by the Sandia National Laboratories (SNL) Laboratory Directed Research and Development (LDRD) office. This demonstration was one of the activities in the Modeling and Design of High-Speed Networks for Satellite Applications LDRD. This effort has demonstrated the transport of application layer packets across both RapidIO and SpaceWire networks to a common downlink destination using small topologies comprised of commercial-off-the-shelf and custom devices. The RapidFET and NEX-SRIO debug and verification tools were instrumental in the successful implementation of the RapidIO hardware demonstration. The SpaceWire hardware demonstration successfully demonstrated the transfer and routing of application data packets between multiple nodes and was also able to reprogram remote nodes using configuration bitfiles transmitted over the network, a key feature proposed in node-based architectures (NBAs). Although a much larger network (at least 18 to 27 nodes) would be required to fully verify the design for use in a real-world application, this demonstration has shown that both RapidIO and SpaceWire are capable of routing application packets across a network to a common downlink node, illustrating their potential use in real-world NBAs.

  15. A Scalable Permutation Approach Reveals Replication and Preservation Patterns of Network Modules in Large Datasets.

    Science.gov (United States)

    Ritchie, Scott C; Watts, Stephen; Fearnley, Liam G; Holt, Kathryn E; Abraham, Gad; Inouye, Michael

    2016-07-01

    Network modules (topologically distinct groups of edges and nodes) that are preserved across datasets can reveal common features of organisms, tissues, cell types, and molecules. Many statistics to identify such modules have been developed, but testing their significance requires heuristics. Here, we demonstrate that current methods for assessing module preservation are systematically biased and produce skewed p values. We introduce NetRep, a rapid and computationally efficient method that uses a permutation approach to score module preservation without assuming data are normally distributed. NetRep produces unbiased p values and can distinguish between true and false positives during multiple hypothesis testing. We use NetRep to quantify preservation of gene coexpression modules across murine brain, liver, adipose, and muscle tissues. Complex patterns of multi-tissue preservation were revealed, including a liver-derived housekeeping module that displayed adipose- and muscle-specific association with body weight. Finally, we demonstrate the broader applicability of NetRep by quantifying preservation of bacterial networks in gut microbiota between men and women. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.
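
    The permutation idea behind such a test can be illustrated in a few lines: compute a module statistic in the test dataset, then rebuild a null distribution by repeatedly drawing random node sets of the same size. The statistic used below (mean within-module correlation) is an assumption for illustration; this is not the NetRep code.

        # Generic permutation-test sketch (assumed statistic; not the NetRep implementation):
        # score a module by its mean within-module correlation in a test dataset and compare
        # it against a null built from randomly drawn node sets of the same size.
        import numpy as np

        def module_stat(data, members):
            """Mean absolute off-diagonal correlation among the module's columns."""
            c = np.corrcoef(data[:, members], rowvar=False)
            return np.abs(c[np.triu_indices_from(c, k=1)]).mean()

        def preservation_pvalue(test_data, members, n_perm=1000, seed=0):
            rng = np.random.default_rng(seed)
            observed = module_stat(test_data, members)
            null = np.array([module_stat(test_data,
                                         rng.choice(test_data.shape[1], size=len(members),
                                                    replace=False))
                             for _ in range(n_perm)])
            return (1 + np.sum(null >= observed)) / (n_perm + 1)   # permutation p-value

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            shared = rng.normal(size=(200, 1))
            data = rng.normal(size=(200, 100))
            data[:, :10] += shared          # columns 0-9 form a correlated (preserved) module
            print("p =", preservation_pvalue(data, np.arange(10)))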

  16. Scalable 3D bicontinuous fluid networks: polymer heat exchangers toward artificial organs.

    Science.gov (United States)

    Roper, Christopher S; Schubert, Randall C; Maloney, Kevin J; Page, David; Ro, Christopher J; Yang, Sophia S; Jacobsen, Alan J

    2015-04-17

    A scalable method for fabricating architected materials well-suited for heat and mass exchange is presented. These materials exhibit unprecedented combinations of small hydraulic diameters (13.0-0.09 mm) and large hydraulic-diameter-to-thickness ratios (5.0-30,100). This process expands the range of material architectures achievable starting from photopolymer waveguide lattices or additive manufacturing. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Tree-based server-middleman-client architecture: improving scalability and reliability for voting-based network games in ad hoc wireless networks

    Science.gov (United States)

    Guo, Y.; Fujinoki, H.

    2006-10-01

    The concept of a new tree-based architecture for networked multi-player games was proposed by Matuszek to improve scalability in network traffic and, at the same time, to improve reliability. The architecture (we refer to it as the "Tree-Based Server-Middleman-Client (TB-SMC) architecture") will solve the two major problems in ad-hoc wireless networks, frequent link failures and significant battery power consumption at wireless transceivers, by using two new techniques: recursive aggregation of client messages and subscription-based propagation of game state. However, the performance of the TB-SMC architecture has never been quantitatively studied. In this paper, the TB-SMC architecture is compared with the client-server architecture using simulation experiments. We developed an event-driven simulator to evaluate the performance of the TB-SMC architecture. In the network traffic scalability experiments, the TB-SMC architecture resulted in less than 1/14 of the network traffic load for 200 end users. In the reliability experiments, the TB-SMC architecture improved the number of successfully delivered players' votes by 31.6, 19.0, and 12.4% compared with the client-server architecture at high (failure probability of 90%), moderate (50%), and low (10%) failure probabilities.

  18. A Scalable Context-Aware Objective Function (SCAOF) of Routing Protocol for Agricultural Low-Power and Lossy Networks (RPAL)

    Directory of Open Access Journals (Sweden)

    Yibo Chen

    2015-08-01

    Full Text Available In recent years, IoT (Internet of Things) technologies have seen great advances, particularly, the IPv6 Routing Protocol for Low-power and Lossy Networks (RPL), which provides a powerful and flexible routing framework that can be applied in a variety of application scenarios. In this context, as an important role of IoT, Wireless Sensor Networks (WSNs) can utilize RPL to design efficient routing protocols for a specific application to increase the ubiquity of networks with resource-constrained WSN nodes that are low-cost and easy to deploy. In this article, our work starts with the description of Agricultural Low-power and Lossy Networks (A-LLNs) complying with the LLN framework, and to clarify the requirements of this application-oriented routing solution. After a brief review of existing optimization techniques for RPL, our contribution is dedicated to a Scalable Context-Aware Objective Function (SCAOF) that can adapt RPL to the environmental monitoring of A-LLNs, through combining energy-aware, reliability-aware, robustness-aware and resource-aware contexts according to the composite routing metrics approach. The correct behavior of this enhanced RPL version (RPAL) was verified by performance evaluations on both simulation and field tests. The obtained experimental results confirm that SCAOF can deliver the desired advantages on network lifetime extension, and high reliability and efficiency in different simulation scenarios and hardware testbeds.

  19. A Scalable Context-Aware Objective Function (SCAOF) of Routing Protocol for Agricultural Low-Power and Lossy Networks (RPAL).

    Science.gov (United States)

    Chen, Yibo; Chanet, Jean-Pierre; Hou, Kun-Mean; Shi, Hongling; de Sousa, Gil

    2015-08-10

    In recent years, IoT (Internet of Things) technologies have seen great advances, particularly, the IPv6 Routing Protocol for Low-power and Lossy Networks (RPL), which provides a powerful and flexible routing framework that can be applied in a variety of application scenarios. In this context, as an important role of IoT, Wireless Sensor Networks (WSNs) can utilize RPL to design efficient routing protocols for a specific application to increase the ubiquity of networks with resource-constrained WSN nodes that are low-cost and easy to deploy. In this article, our work starts with the description of Agricultural Low-power and Lossy Networks (A-LLNs) complying with the LLN framework, and to clarify the requirements of this application-oriented routing solution. After a brief review of existing optimization techniques for RPL, our contribution is dedicated to a Scalable Context-Aware Objective Function (SCAOF) that can adapt RPL to the environmental monitoring of A-LLNs, through combining energy-aware, reliability-aware, robustness-aware and resource-aware contexts according to the composite routing metrics approach. The correct behavior of this enhanced RPL version (RPAL) was verified by performance evaluations on both simulation and field tests. The obtained experimental results confirm that SCAOF can deliver the desired advantages on network lifetime extension, and high reliability and efficiency in different simulation scenarios and hardware testbeds.
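
    The composite-routing-metrics idea can be pictured as a weighted score over per-candidate context values when a node selects its preferred RPL parent; the sketch below uses invented metric names and weights and is a schematic illustration, not the SCAOF specification.

        # Schematic sketch of a composite objective function for parent selection (weights
        # and metric names are invented for illustration; this is not the SCAOF definition).
        from dataclasses import dataclass

        WEIGHTS = {"etx": 0.4, "residual_energy": 0.3, "link_stability": 0.2, "cpu_load": 0.1}

        @dataclass
        class Candidate:
            name: str
            etx: float               # expected transmission count, lower is better
            residual_energy: float   # fraction of battery remaining, higher is better
            link_stability: float    # 0..1, higher is better
            cpu_load: float          # 0..1, lower is better

        def rank(c):
            """Lower rank value means a more attractive parent."""
            return (WEIGHTS["etx"] * c.etx
                    + WEIGHTS["residual_energy"] * (1.0 - c.residual_energy)
                    + WEIGHTS["link_stability"] * (1.0 - c.link_stability)
                    + WEIGHTS["cpu_load"] * c.cpu_load)

        def best_parent(candidates):
            return min(candidates, key=rank)

        if __name__ == "__main__":
            parents = [Candidate("A", etx=1.2, residual_energy=0.9, link_stability=0.8, cpu_load=0.2),
                       Candidate("B", etx=1.0, residual_energy=0.3, link_stability=0.9, cpu_load=0.6)]
            print("preferred parent:", best_parent(parents).name)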

  20. Experimental demonstration of SCMA-OFDM for passive optical network

    Science.gov (United States)

    Lin, Bangjiang; Tang, Xuan; Shen, Xiaohuan; Zhang, Min; Lin, Chun; Ghassemlooy, Zabih

    2017-12-01

    We introduce a novel architecture for a next generation passive optical network (PON) based on the employment of sparse code multiple access (SCMA) combined with orthogonal frequency division multiplexing (OFDM) modulation, in which the binary data is directly encoded to multi-dimensional codewords and then spread over OFDM subcarriers. The feasibility of SCMA-OFDM-PON is verified by experimental demonstration. We show that SCMA-OFDM offers a 150% overloading gain in the number of optical network units compared with orthogonal frequency-division multiple access.

  1. Scalable High-Performance Parallel Design for Network Intrusion Detection Systems on Many-Core Processors

    OpenAIRE

    Jiang, Hayang; Xie, Gaogang; Salamatian, Kavé; Mathy, Laurent

    2013-01-01

    Network Intrusion Detection Systems (NIDSes) face significant challenges coming from the relentless network link speed growth and increasing complexity of threats. Both hardware accelerated and parallel software-based NIDS solutions, based on commodity multi-core and GPU processors, have been proposed to overcome these challenges. ...

  2. An investigation of scalable anomaly detection techniques for a large network of Wi-Fi hotspots

    CSIR Research Space (South Africa)

    Machaka, P

    2015-01-01

    Full Text Available The Neural Networks, Bayesian Networks and Artificial Immune Systems were used for this experiment. Using a set of data extracted from a live network of Wi-Fi hotspots managed by an ISP, we integrated algorithms into a data collection system to detect...

  3. From Field Notes to Data Portal - A Scalable Data QA/QC Framework for Tower Networks: Progress and Preliminary Results

    Science.gov (United States)

    Sturtevant, C.; Hackley, S.; Lee, R.; Holling, G.; Bonarrigo, S.

    2017-12-01

    Quality assurance and control (QA/QC) is one of the most important yet challenging aspects of producing research-quality data. Data quality issues are multi-faceted, including sensor malfunctions, unmet theoretical assumptions, and measurement interference from humans or the natural environment. Tower networks such as Ameriflux, ICOS, and NEON continue to grow in size and sophistication, yet tools for robust, efficient, scalable QA/QC have lagged. Quality control remains a largely manual process heavily relying on visual inspection of data. In addition, notes of measurement interference are often recorded on paper without an explicit pathway to data flagging. As such, an increase in network size requires a near-proportional increase in personnel devoted to QA/QC, quickly stressing the human resources available. We present a scalable QA/QC framework in development for NEON that combines the efficiency and standardization of automated checks with the power and flexibility of human review. This framework includes fast-response monitoring of sensor health, a mobile application for electronically recording maintenance activities, traditional point-based automated quality flagging, and continuous monitoring of quality outcomes and longer-term holistic evaluations. This framework maintains the traceability of quality information along the entirety of the data generation pipeline, and explicitly links field reports of measurement interference to quality flagging. Preliminary results show that data quality can be effectively monitored and managed for a multitude of sites with a small group of QA/QC staff. Several components of this framework are open-source, including a R-Shiny application for efficiently monitoring, synthesizing, and investigating data quality issues.
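
    The point-based automated checks mentioned above typically reduce to simple per-sample tests such as range and step (spike) tests; the sketch below is a generic example of such checks with invented thresholds, not code from the NEON pipeline.

        # Generic example of point-based automated QC checks (thresholds are invented and
        # this is not NEON's pipeline code): flag samples that fall outside a plausible
        # range or jump more than a step threshold from the previous sample.
        import numpy as np

        def qc_flags(values, lo, hi, max_step):
            """Return an integer flag per sample: 0 = pass, 1 = range fail, 2 = step fail."""
            flags = np.zeros(len(values), dtype=int)
            flags[(values < lo) | (values > hi)] = 1
            step_fail = np.abs(np.diff(values, prepend=values[0])) > max_step
            flags[(flags == 0) & step_fail] = 2
            return flags

        if __name__ == "__main__":
            temperature_c = np.array([12.1, 12.3, 12.2, 45.0, 12.4, 12.5, 19.9, 12.6])
            print(qc_flags(temperature_c, lo=-20.0, hi=40.0, max_step=5.0))
            # prints [0 0 0 1 2 0 2 2]: 45.0 is out of range; the jumps around it and 19.9 are step fails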

  4. Secure, Mobile, Wireless Network Technology Designed, Developed, and Demonstrated

    Science.gov (United States)

    Ivancic, William D.; Paulsen, Phillip E.

    2004-01-01

    The inability to seamlessly disseminate data securely over a high-integrity, wireless broadband network has been identified as a primary technical barrier to providing an order-of-magnitude increase in aviation capacity and safety. Secure, autonomous communications to and from aircraft will enable advanced, automated, data-intensive air traffic management concepts, increase National Air Space (NAS) capacity, and potentially reduce the overall cost of air travel operations. For the first time ever, secure, mobile network technology was designed, developed, and demonstrated with state-of-the-art protocols and applications by a diverse, cooperative Government-industry team led by the NASA Glenn Research Center. This revolutionary technology solution will make fundamentally new airplane system capabilities possible by enabling secure, seamless network connections from platforms in motion (e.g., cars, ships, aircraft, and satellites) to existing terrestrial systems without the need for manual reconfiguration. Called Mobile Router, the new technology autonomously connects and configures networks as they traverse from one operating theater to another. The Mobile Router demonstration aboard the Neah Bay, a U.S. Coast Guard vessel stationed in Cleveland, Ohio, accomplished secure, seamless interoperability of mobile network systems across multiple domains without manual system reconfiguration. The Neah Bay was chosen because of its low cost and communications mission similarity to low-Earth-orbiting satellite platforms. This technology was successfully advanced from technology readiness level (TRL) 2 (concept and/or application formation) to TRL 6 (system model or prototype demonstration in a relevant environment). The secure, seamless interoperability offered by the Mobile Router and encryption device will enable several new, vehicle-specific and systemwide technologies to perform such things as remote, autonomous aircraft performance monitoring and early detection and

  5. A scalable global positioning system-free localization scheme for underwater wireless sensor networks

    KAUST Repository

    Mohammed, A.M.; Muhammad, Z.S.; Mohamed, S.-A.

    2013-01-01

    Seaweb is an acoustic communication technology that enables communication between sensor nodes. Seaweb technology utilizes the commercially available telesonar modems that has developed link and network layer firmware to provide a robust undersea

  6. A scalable global positioning system-free localization scheme for underwater wireless sensor networks

    KAUST Repository

    Mohammed, A.M.

    2013-05-07

    Seaweb is an acoustic communication technology that enables communication between sensor nodes. Seaweb technology utilizes commercially available telesonar modems, for which link and network layer firmware has been developed to provide a robust undersea communication capability. Seaweb interconnects the underwater nodes through digital signal processing-based modems by using acoustic links between the neighboring sensors. In this paper, we design and investigate a global positioning system-free passive localization protocol by integrating the innovations of levelling and localization with the Seaweb technology. This protocol uses the range data and planar trigonometry principles to estimate the positions of the underwater sensor nodes. Moreover, for precise localization, we consider more realistic conditions, namely (a) small displacement of sensor nodes due to watch circles and (b) deployment of sensor nodes over a non-uniform water surface. Once the nodes are localized, we divide the whole network field into circular levels and sectors to minimize the traffic complexity and thereby increase the lifetime of the sensor nodes in the network field. We then form a mesh network inside each of the sectors, which increases the reliability. The algorithm is designed in such a way that it overcomes ambiguous-node errors and reflected paths and is therefore more robust. The synthetic network geometries are designed so that the algorithm can be evaluated in the presence of perfect or imperfect ranges or in the case of incomplete data. A comparative study is made with existing algorithms, which proves the efficiency of our newly proposed algorithm.
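
    The range-data-plus-planar-trigonometry step can be illustrated with textbook two-anchor trilateration; the sketch below assumes known anchor positions and noise-free ranges and is not the Seaweb protocol itself.

        # Textbook planar trilateration sketch (assumed anchor layout; not the Seaweb
        # protocol): estimate a node's 2D position from acoustic ranges to two anchors with
        # known positions. Two mirror solutions exist; a third range or prior knowledge
        # (e.g., which side of the baseline the node sits on) resolves the ambiguity.
        import math

        def trilaterate(a, b, ra, rb):
            """a, b: (x, y) anchor positions; ra, rb: measured ranges. Returns both candidates."""
            d = math.dist(a, b)                       # baseline length between the anchors
            x = (ra**2 - rb**2 + d**2) / (2 * d)      # distance along the baseline from a
            y = math.sqrt(max(ra**2 - x**2, 0.0))     # offset perpendicular to the baseline
            ux, uy = (b[0] - a[0]) / d, (b[1] - a[1]) / d   # unit vector a -> b
            px, py = -uy, ux                                 # unit vector perpendicular to it
            base = (a[0] + x * ux, a[1] + x * uy)
            return ((base[0] + y * px, base[1] + y * py),
                    (base[0] - y * px, base[1] - y * py))

        if __name__ == "__main__":
            p1, p2 = trilaterate((0.0, 0.0), (100.0, 0.0), ra=60.0, rb=80.0)
            print("candidate positions:", p1, p2)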

  7. Efficient and scalable IPv6 communication functions for wireless outdoor lighting networks

    NARCIS (Netherlands)

    Mamo, S.T.

    2014-01-01

    Outdoor lighting today is becoming increasingly network-connected. The rapid development in wireless communication technologies makes this progress faster and competitive. Philips Research and Philips Lighting are part of the leading forces in exploration and development of a wide spectrum of

  8. Scalable Approach To Construct Free-Standing and Flexible Carbon Networks for Lithium–Sulfur Battery

    KAUST Repository

    Li, Mengliu; Wahyudi, Wandi; Kumar, Pushpendra; Wu, Feng-Yu; Yang, Xiulin; Li, Henan; Li, Lain-Jong; Ming, Jun

    2017-01-01

    for their large-scale applications, such as utilizing as interlayers in lithium-sulfur battery. The capability of holding polysulfides (i.e., suppressing the sulfur diffusion) for the networks made from CNTs, graphene, or their mixture is pronounced, among which

  9. Towards Scalable Cost-Effective Service and Survivability Provisioning in Ultra High Speed Networks

    Energy Technology Data Exchange (ETDEWEB)

    Bin Wang

    2006-12-01

    Optical transport networks based on wavelength division multiplexing (WDM) are considered to be the most appropriate choice for future Internet backbone. On the other hand, future DOE networks are expected to have the ability to dynamically provision on-demand survivable services to suit the needs of various high performance scientific applications and remote collaboration. Since a failure in a WDM network such as a cable cut may result in a tremendous amount of data loss, efficient protection of data transport in WDM networks is therefore essential. As the backbone network is moving towards GMPLS/WDM optical networks, the unique requirement to support DOE’s science mission results in challenging issues that are not directly addressed by existing networking techniques and methodologies. The objectives of this project were to develop cost effective protection and restoration mechanisms based on dedicated path, shared path, preconfigured cycle (p-cycle), and so on, to deal with single failure, dual failure, and shared risk link group (SRLG) failure, under different traffic and resource requirement models; to devise efficient service provisioning algorithms that deal with application specific network resource requirements for both unicast and multicast; to study various aspects of traffic grooming in WDM ring and mesh networks to derive cost effective solutions while meeting application resource and QoS requirements; to design various diverse routing and multi-constrained routing algorithms, considering different traffic models and failure models, for protection and restoration, as well as for service provisioning; to propose and study new optical burst switched architectures and mechanisms for effectively supporting dynamic services; and to integrate research with graduate and undergraduate education. All objectives have been successfully met. This report summarizes the major accomplishments of this project. The impact of the project manifests in many aspects: First

  10. Scalable devices

    KAUST Repository

    Krüger, Jens J.; Hadwiger, Markus

    2014-01-01

    In computer science in general and in particular the field of high performance computing and supercomputing the term scalable plays an important role. It indicates that a piece of hardware, a concept, an algorithm, or an entire system scales

  11. Scalable Mobile Ad Hoc Network (MANET) to Enhance Situational Awareness in Distributed Small Unit Operations

    Science.gov (United States)

    2015-06-01

    The algorithm is written in Python, a simple terminal programming language. At the top of that script was the integration to ROS and the rest of... where the refresh frequency is represented in units of Hz and the angular speed of the Marine's sweep rate... starts with: Java, C++, Python, and more. There are literally thousands, if not hundreds of thousands, of projects, platforms, sensors, actuators

  12. A Fast and Scalable Algorithm for Calculating the Achievable Capacity of a Wireless Mesh Network

    Science.gov (United States)

    2016-05-09

    increase the speed of the proposed algorithm with only limited decrease in the solution quality. One of the primary motivations of our work is to have a...outline of the scheduling algorithm. Afterwards, each step is discussed in more detail, and potential speed improvements are explored. 1) Algorithm...GHz ISM band has been considered for future 5G network design [33]. Atmospheric absorption loss at 24 GHz is around 0.1 dB/km [34], while at 2.4 GHz

  13. A proposed scalable design and simulation of wireless sensor network-based long-distance water pipeline leakage monitoring system.

    Science.gov (United States)

    Almazyad, Abdulaziz S; Seddiq, Yasser M; Alotaibi, Ahmed M; Al-Nasheri, Ahmed Y; BenSaleh, Mohammed S; Obeid, Abdulfattah M; Qasim, Syed Manzoor

    2014-02-20

    Anomalies such as leakage and bursts in water pipelines have severe consequences for the environment and the economy. To ensure the reliability of water pipelines, they must be monitored effectively. Wireless Sensor Networks (WSNs) have emerged as an effective technology for monitoring critical infrastructure such as water, oil and gas pipelines. In this paper, we present a scalable design and simulation of a water pipeline leakage monitoring system using Radio Frequency IDentification (RFID) and WSN technology. The proposed design targets long-distance aboveground water pipelines that have special considerations for maintenance, energy consumption and cost. The design is based on deploying a group of mobile wireless sensor nodes inside the pipeline and allowing them to work cooperatively according to a prescheduled order. Under this mechanism, only one node is active at a time, while the other nodes are sleeping. The node whose turn is next wakes up according to one of three wakeup techniques: location-based, time-based and interrupt-driven. In this paper, mathematical models are derived for each technique to estimate the corresponding energy consumption and memory size requirements. The proposed equations are analyzed and the results are validated using simulation.
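
    In their simplest form, energy models for duty-cycled wakeup schemes like these reduce to a weighted sum of the charge drawn in each state; the sketch below is a generic estimate with invented current draws and timings, not the equations derived in the paper.

        # Generic duty-cycle energy estimate (current draws and timings are invented; these
        # are not the paper's derived equations): energy per cycle is the charge drawn in
        # each state times the supply voltage, and battery life follows from it.
        V_SUPPLY = 3.0          # volts (assumed)
        I_ACTIVE_MA = 20.0      # sensing plus radio listening (assumed)
        I_TX_MA = 35.0          # transmitting a report (assumed)
        I_SLEEP_MA = 0.005      # deep sleep between turns (assumed)

        def energy_per_cycle_mj(t_active_s, t_tx_s, t_sleep_s):
            charge_mas = I_ACTIVE_MA * t_active_s + I_TX_MA * t_tx_s + I_SLEEP_MA * t_sleep_s
            return charge_mas * V_SUPPLY            # mA*s times volts gives mJ

        def lifetime_days(battery_mah, cycle_energy_mj, cycle_s):
            battery_mj = battery_mah * 3600.0 * V_SUPPLY   # mAh -> mA*s, times volts -> mJ
            return battery_mj / cycle_energy_mj * cycle_s / 86400.0

        if __name__ == "__main__":
            # One node active for 2 s and transmitting for 0.5 s out of every 600 s schedule slot.
            e = energy_per_cycle_mj(t_active_s=2.0, t_tx_s=0.5, t_sleep_s=597.5)
            print(f"energy per cycle: {e:.1f} mJ, lifetime: {lifetime_days(2000.0, e, 600.0):.0f} days")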

  14. Integer-linear-programing optimization in scalable video multicast with adaptive modulation and coding in wireless networks.

    Science.gov (United States)

    Lee, Dongyul; Lee, Chaewoo

    2014-01-01

    The advancement in wideband wireless network supports real time services such as IPTV and live video streaming. However, because of the sharing nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and also basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC in the wireless multicast service is how to assign MCSs and the time resources to each SVC layer in the heterogeneous channel condition. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance under 802.16 m environment. The result shows that our methodology enhances the overall system throughput compared to an existing algorithm.
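
    A miniature stand-in for the layer-to-MCS assignment can be written as a brute-force search over one MCS per SVC layer under an airtime budget, where a user only benefits from an enhancement layer if it can also decode all lower layers; the rates, coverage fractions, and budget below are invented, so this is a schematic toy rather than the paper's ILP model.

        # Toy brute-force version of the SVC-layer-to-MCS assignment (rates, coverage and
        # budget are invented; a schematic stand-in for the paper's ILP, not its model):
        # pick one MCS per layer so total airtime stays within budget and utility is maximized.
        from itertools import product

        # (spectral efficiency in bit/s/Hz, fraction of users whose channel supports this MCS)
        MCS = [(0.5, 1.0), (1.0, 0.8), (2.0, 0.5), (4.0, 0.2)]
        LAYER_RATE_KBPS = [400, 600, 1000]      # base, enhancement 1, enhancement 2 (assumed)
        TIME_BUDGET = 1.0                       # normalized airtime available per frame

        def evaluate(assignment):
            """Return (utility, airtime) for a tuple of MCS indices, one per SVC layer."""
            airtime, utility, reachable = 0.0, 0.0, 1.0
            for layer, m in enumerate(assignment):
                eff, coverage = MCS[m]
                airtime += LAYER_RATE_KBPS[layer] / (eff * 1000.0)   # normalized time share (toy units)
                reachable = min(reachable, coverage)   # users must decode all lower layers too
                utility += reachable * LAYER_RATE_KBPS[layer]        # rate actually enjoyed
            return utility, airtime

        if __name__ == "__main__":
            feasible = [a for a in product(range(len(MCS)), repeat=len(LAYER_RATE_KBPS))
                        if evaluate(a)[1] <= TIME_BUDGET]
            best = max(feasible, key=lambda a: evaluate(a)[0])
            print("chosen MCS per layer:", best, "utility:", round(evaluate(best)[0], 1))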

  15. Integer-Linear-Programing Optimization in Scalable Video Multicast with Adaptive Modulation and Coding in Wireless Networks

    Directory of Open Access Journals (Sweden)

    Dongyul Lee

    2014-01-01

    Full Text Available The advancement in wideband wireless network supports real time services such as IPTV and live video streaming. However, because of the sharing nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and also basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC in the wireless multicast service is how to assign MCSs and the time resources to each SVC layer in the heterogeneous channel condition. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance under 802.16 m environment. The result shows that our methodology enhances the overall system throughput compared to an existing algorithm.

  16. A Proposed Scalable Design and Simulation of Wireless Sensor Network-Based Long-Distance Water Pipeline Leakage Monitoring System

    Directory of Open Access Journals (Sweden)

    Abdulaziz S. Almazyad

    2014-02-01

    Full Text Available Anomalies such as leakage and bursts in water pipelines have severe consequences for the environment and the economy. To ensure the reliability of water pipelines, they must be monitored effectively. Wireless Sensor Networks (WSNs) have emerged as an effective technology for monitoring critical infrastructure such as water, oil and gas pipelines. In this paper, we present a scalable design and simulation of a water pipeline leakage monitoring system using Radio Frequency IDentification (RFID) and WSN technology. The proposed design targets long-distance aboveground water pipelines that have special considerations for maintenance, energy consumption and cost. The design is based on deploying a group of mobile wireless sensor nodes inside the pipeline and allowing them to work cooperatively according to a prescheduled order. Under this mechanism, only one node is active at a time, while the other nodes are sleeping. The node whose turn is next wakes up according to one of three wakeup techniques: location-based, time-based and interrupt-driven. In this paper, mathematical models are derived for each technique to estimate the corresponding energy consumption and memory size requirements. The proposed equations are analyzed and the results are validated using simulation.

  17. Towards a Scalable and Adaptive Application Support Platform for Large-Scale Distributed E-Sciences in High-Performance Network Environments

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Chase Qishi [New Jersey Inst. of Technology, Newark, NJ (United States); Univ. of Memphis, TN (United States); Zhu, Michelle Mengxia [Southern Illinois Univ., Carbondale, IL (United States)

    2016-06-06

    The advent of large-scale collaborative scientific applications has demonstrated the potential for broad scientific communities to pool globally distributed resources to produce unprecedented data acquisition, movement, and analysis. System resources including supercomputers, data repositories, computing facilities, network infrastructures, storage systems, and display devices have been increasingly deployed at national laboratories and academic institutes. These resources are typically shared by large communities of users over Internet or dedicated networks and hence exhibit an inherent dynamic nature in their availability, accessibility, capacity, and stability. Scientific applications using either experimental facilities or computation-based simulations with various physical, chemical, climatic, and biological models feature diverse scientific workflows as simple as linear pipelines or as complex as directed acyclic graphs, which must be executed and supported over wide-area networks with massively distributed resources. Application users oftentimes need to manually configure their computing tasks over networks in an ad hoc manner, hence significantly limiting the productivity of scientists and constraining the utilization of resources. The success of these large-scale distributed applications requires a highly adaptive and massively scalable workflow platform that provides automated and optimized computing and networking services. This project aims to design and develop a generic Scientific Workflow Automation and Management Platform (SWAMP), which contains a web-based user interface specially tailored for a target application, a set of user libraries, and several easy-to-use computing and networking toolkits for application scientists to conveniently assemble, execute, monitor, and control complex computing workflows in heterogeneous high-performance network environments. SWAMP will enable the automation and management of the entire process of scientific
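
    For readers unfamiliar with the workflow terminology, the toy sketch below shows the core idea of dependency-ordered execution of a DAG-shaped workflow using networkx; the task names are hypothetical and the snippet is not part of SWAMP.

      # Minimal sketch of dependency-ordered dispatch for a scientific workflow
      # modeled as a directed acyclic graph. Task names are hypothetical.
      import networkx as nx

      wf = nx.DiGraph()
      wf.add_edges_from([
          ("acquire", "calibrate"),
          ("calibrate", "simulate"),
          ("calibrate", "transfer"),
          ("simulate", "visualize"),
          ("transfer", "visualize"),
      ])

      assert nx.is_directed_acyclic_graph(wf), "workflows must be acyclic"

      # Dispatch tasks level by level: a task runs once all of its predecessors have finished.
      for batch in nx.topological_generations(wf):
          print("run in parallel:", sorted(batch))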

  18. Sensor Network Demonstration for In Situ Decommissioning - 13332

    Energy Technology Data Exchange (ETDEWEB)

    Lagos, L.; Varona, J.; Awwad, A. [Applied Research Center, Florida International University, 10555 West Flagler Street, Suite 2100, Miami, FL 33174 (United States); Rivera, J.; McGill, J. [Department of Energy - DOE, Environmental Management Office (United States)

    2013-07-01

    To ensure that the individual sensors would be immobilized during the grout pouring activities, a set of nine sensor racks was designed. The 270 sensors provided by Idaho National Laboratory (INL), Mississippi State University (MSU), University of Houston (UH), and University of South Carolina (USC) were secured to these racks based on predetermined locations. Once the sensor racks were installed inside the test cube, connected and debugged, approximately 32 cubic yards of special grout material was used to entomb the sensors. MSU provided and demonstrated four types of fiber loop ring-down (FLR) sensors for detection of water, temperature, cracks, and movement of fluids. INL provided and demonstrated time differenced 3D electrical resistivity tomography (ERT), advanced tensiometers for moisture content, and thermocouples for temperature measurements. University of Houston provided smart aggregate (SA) sensors, which detect crack severity and water presence. An additional UH sensor system demonstrated was a Fiber Bragg Grating (FBG) fiber optic system measuring strain, presence of water, and temperature. USC provided a system which measured acoustic emissions during cracking, as well as temperature and pH sensors. All systems were connected to a Sensor Remote Access System (SRAS) data networking and collection system designed, developed and provided by FIU. The purpose of SRAS was to collect and allow download of the raw sensor data from all the sensor systems, as well as allow upload of the processed data and any analysis reports and graphs. All this information was made available to the research teams via the Deactivation and Decommissioning Knowledge Management and Information Tool (D&D KM-IT). As a current research effort, FIU is performing an energy analysis and transferring several sensor systems to a Photovoltaic (PV) system to continuously monitor energy consumption parameters and overall power demands. One final component of this research is focusing on

  19. Scalable devices

    KAUST Repository

    Krüger, Jens J.

    2014-01-01

    In computer science in general, and in the field of high performance computing and supercomputing in particular, the term scalable plays an important role. It indicates that a piece of hardware, a concept, an algorithm, or an entire system scales with the size of the problem, i.e., it can not only be used in a very specific setting but is applicable to a wide range of problems, from small scenarios to possibly very large settings. In this spirit, there exist a number of fixed areas of research on scalability. There are works on scalable algorithms and scalable architectures, but what are scalable devices? In the context of this chapter, we are interested in a whole range of display devices, ranging from small-scale hardware such as tablet computers, pads and smartphones up to large tiled display walls. What interests us most is not so much the hardware setup but the visualization algorithms behind these display systems, which scale from the average smartphone up to the largest gigapixel display walls.

  20. Experimental demonstrations of all-optical networking functions for WDM optical networks

    Science.gov (United States)

    Gurkan, Deniz

    The deployment of optical networks will enable high capacity links between users but will introduce the problems associated with transporting and managing more channels. Many network functions should be implemented in the optical domain; the main reasons are to avoid electronic processing bottlenecks, to achieve data-format and data-rate independence, to provide reliable and cost-efficient control and management information, and to process multiple wavelength channels simultaneously in wavelength-division-multiplexed (WDM) optical networks. The following novel experimental demonstrations of network functions in the optical domain are presented: Variable-bit-rate recognition of the header information in a data packet. The technique is reconfigurable for different header sequences and uses optical correlators as look-up tables. The header is processed and a signal is sent to the switch for a series of incoming data packets at 155 Mb/s, 622 Mb/s, and 2.5 Gb/s in a reconfigurable network. Simultaneous optical time-slot-interchange and wavelength conversion of the bits in a 2.5-Gb/s data stream to achieve a reconfigurable time/wavelength switch. The technique uses difference-frequency-generation (DFG) for wavelength conversion and fiber Bragg gratings (FBG) as wavelength-dependent optical time buffers. A WDM header recognition module simultaneously recognizing two header bits on each of two 2.5-Gbit/s WDM packet streams. The module is tunable to enable reconfigurable look-up tables. Simultaneous and independent label swapping and wavelength conversion of two WDM channels for a multi-protocol label switching (MPLS) network. Demonstration of label swapping of distinct 8-bit-long labels for two WDM data channels is presented. Two-dimensional code conversion module for an optical code-division multiple-access (O-CDMA) local area network (LAN) system. Simultaneous wavelength conversion and time shifting is achieved to enable flexible code conversion and increase code re

  1. Network Science Based Quantification of Resilience Demonstrated on the Indian Railways Network

    Science.gov (United States)

    Bhatia, Udit; Kumar, Devashish; Kodra, Evan; Ganguly, Auroop R.

    2015-01-01

    The structure, interdependence, and fragility of systems ranging from power-grids and transportation to ecology, climate, biology and even human communities and the Internet have been examined through network science. While response to perturbations has been quantified, recovery strategies for perturbed networks have usually been either discussed conceptually or through anecdotal case studies. Here we develop a network science based quantitative framework for measuring, comparing and interpreting hazard responses as well as recovery strategies. The framework, motivated by the recently proposed temporal resilience paradigm, is demonstrated with the Indian Railways Network. Simulations inspired by the 2004 Indian Ocean Tsunami and the 2012 North Indian blackout as well as a cyber-physical attack scenario illustrate hazard responses and effectiveness of proposed recovery strategies. Multiple metrics are used to generate various recovery strategies, which are simply sequences in which system components should be recovered after a disruption. Quantitative evaluation of these strategies suggests that faster and more efficient recovery is possible through network centrality measures. Optimal recovery strategies may be different per hazard, per community within a network, and for different measures of partial recovery. In addition, topological characterization provides a means for interpreting the comparative performance of proposed recovery strategies. The methods can be directly extended to other Large-Scale Critical Lifeline Infrastructure Networks including transportation, water, energy and communications systems that are threatened by natural or human-induced hazards, including cascading failures. Furthermore, the quantitative framework developed here can generalize across natural, engineered and human systems, offering an actionable and generalizable approach for emergency management in particular as well as for network resilience in general. PMID:26536227
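
    The recovery idea translates directly into code. The toy sketch below (on a synthetic Barabasi-Albert graph, not the Indian Railways data) compares a betweenness-centrality recovery order against a random order by tracking the size of the giant component as failed nodes are restored.

      # Toy illustration of centrality-ordered recovery on a synthetic graph:
      # remove a set of nodes (the hazard), then restore them in different orders
      # and track how fast the giant component recovers.
      import random
      import networkx as nx

      random.seed(1)
      G = nx.barabasi_albert_graph(300, 2, seed=1)
      failed = random.sample(list(G.nodes), 60)          # simulated hazard

      def giant_fraction(active_nodes):
          H = G.subgraph(active_nodes)
          if H.number_of_nodes() == 0:
              return 0.0
          return len(max(nx.connected_components(H), key=len)) / G.number_of_nodes()

      def recovery_curve(order):
          active = set(G.nodes) - set(failed)
          curve = [giant_fraction(active)]
          for node in order:
              active.add(node)
              curve.append(giant_fraction(active))
          return curve

      by_centrality = sorted(failed, key=nx.betweenness_centrality(G).get, reverse=True)
      by_random = random.sample(failed, len(failed))

      print("giant component after restoring half the failed nodes:")
      print("  centrality order:", round(recovery_curve(by_centrality)[len(failed) // 2], 3))
      print("  random order:    ", round(recovery_curve(by_random)[len(failed) // 2], 3))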

  2. Network Science Based Quantification of Resilience Demonstrated on the Indian Railways Network.

    Science.gov (United States)

    Bhatia, Udit; Kumar, Devashish; Kodra, Evan; Ganguly, Auroop R

    2015-01-01

    The structure, interdependence, and fragility of systems ranging from power-grids and transportation to ecology, climate, biology and even human communities and the Internet have been examined through network science. While response to perturbations has been quantified, recovery strategies for perturbed networks have usually been either discussed conceptually or through anecdotal case studies. Here we develop a network science based quantitative framework for measuring, comparing and interpreting hazard responses as well as recovery strategies. The framework, motivated by the recently proposed temporal resilience paradigm, is demonstrated with the Indian Railways Network. Simulations inspired by the 2004 Indian Ocean Tsunami and the 2012 North Indian blackout as well as a cyber-physical attack scenario illustrate hazard responses and effectiveness of proposed recovery strategies. Multiple metrics are used to generate various recovery strategies, which are simply sequences in which system components should be recovered after a disruption. Quantitative evaluation of these strategies suggests that faster and more efficient recovery is possible through network centrality measures. Optimal recovery strategies may be different per hazard, per community within a network, and for different measures of partial recovery. In addition, topological characterization provides a means for interpreting the comparative performance of proposed recovery strategies. The methods can be directly extended to other Large-Scale Critical Lifeline Infrastructure Networks including transportation, water, energy and communications systems that are threatened by natural or human-induced hazards, including cascading failures. Furthermore, the quantitative framework developed here can generalize across natural, engineered and human systems, offering an actionable and generalizable approach for emergency management in particular as well as for network resilience in general.

  3. Network Science Based Quantification of Resilience Demonstrated on the Indian Railways Network.

    Directory of Open Access Journals (Sweden)

    Udit Bhatia

    Full Text Available The structure, interdependence, and fragility of systems ranging from power-grids and transportation to ecology, climate, biology and even human communities and the Internet have been examined through network science. While response to perturbations has been quantified, recovery strategies for perturbed networks have usually been either discussed conceptually or through anecdotal case studies. Here we develop a network science based quantitative framework for measuring, comparing and interpreting hazard responses as well as recovery strategies. The framework, motivated by the recently proposed temporal resilience paradigm, is demonstrated with the Indian Railways Network. Simulations inspired by the 2004 Indian Ocean Tsunami and the 2012 North Indian blackout as well as a cyber-physical attack scenario illustrate hazard responses and effectiveness of proposed recovery strategies. Multiple metrics are used to generate various recovery strategies, which are simply sequences in which system components should be recovered after a disruption. Quantitative evaluation of these strategies suggests that faster and more efficient recovery is possible through network centrality measures. Optimal recovery strategies may be different per hazard, per community within a network, and for different measures of partial recovery. In addition, topological characterization provides a means for interpreting the comparative performance of proposed recovery strategies. The methods can be directly extended to other Large-Scale Critical Lifeline Infrastructure Networks including transportation, water, energy and communications systems that are threatened by natural or human-induced hazards, including cascading failures. Furthermore, the quantitative framework developed here can generalize across natural, engineered and human systems, offering an actionable and generalizable approach for emergency management in particular as well as for network resilience in general.

  4. Launching applications on compute and service processors running under different operating systems in scalable network of processor boards with routers

    Science.gov (United States)

    Tomkins, James L [Albuquerque, NM; Camp, William J [Albuquerque, NM

    2009-03-17

    A multiple processor computing apparatus includes a physical interconnect structure that is flexibly configurable to support selective segregation of classified and unclassified users. The physical interconnect structure also permits easy physical scalability of the computing apparatus. The computing apparatus can include an emulator which permits applications from the same job to be launched on processors that use different operating systems.

  5. EGFR Signal-Network Reconstruction Demonstrates Metabolic Crosstalk in EMT.

    Directory of Open Access Journals (Sweden)

    Kumari Sonal Choudhary

    2016-06-01

    Full Text Available Epithelial to mesenchymal transition (EMT) is an important event during development and cancer metastasis. There is limited understanding of the metabolic alterations that give rise to and take place during EMT. Dysregulation of signalling pathways that impact metabolism, including epidermal growth factor receptor (EGFR), is, however, a hallmark of EMT and metastasis. In this study, we report the investigation into EGFR signalling and metabolic crosstalk of EMT through constraint-based modelling and analysis of the breast epithelial EMT cell model D492 and its mesenchymal counterpart D492M. We built an EGFR signalling network for EMT based on stoichiometric coefficients and constrained the network with gene expression data to build epithelial (EGFR_E) and mesenchymal (EGFR_M) networks. Metabolic alterations arising from differential expression of EGFR genes were derived from a literature review of AKT regulated metabolic genes. Signaling flux differences between EGFR_E and EGFR_M models subsequently allowed metabolism in D492 and D492M cells to be assessed. Higher flux within the AKT pathway in the D492 cells compared to D492M suggested higher glycolytic activity in D492, which we confirmed experimentally through measurements of glucose uptake and lactate secretion rates. The signaling genes from the AKT, RAS/MAPK and CaM pathways were predicted to revert D492M to the D492 phenotype. Follow-up analysis of EGFR signaling metabolic crosstalk in three additional breast epithelial cell lines highlighted variability in in vitro cell models of EMT. This study shows that the metabolic phenotype may be predicted by in silico analyses of gene expression data of EGFR signaling genes, but this phenomenon is cell-specific and does not follow a simple trend.

  6. EGFR Signal-Network Reconstruction Demonstrates Metabolic Crosstalk in EMT.

    Science.gov (United States)

    Choudhary, Kumari Sonal; Rohatgi, Neha; Halldorsson, Skarphedinn; Briem, Eirikur; Gudjonsson, Thorarinn; Gudmundsson, Steinn; Rolfsson, Ottar

    2016-06-01

    Epithelial to mesenchymal transition (EMT) is an important event during development and cancer metastasis. There is limited understanding of the metabolic alterations that give rise to and take place during EMT. Dysregulation of signalling pathways that impact metabolism, including epidermal growth factor receptor (EGFR), is, however, a hallmark of EMT and metastasis. In this study, we report the investigation into EGFR signalling and metabolic crosstalk of EMT through constraint-based modelling and analysis of the breast epithelial EMT cell model D492 and its mesenchymal counterpart D492M. We built an EGFR signalling network for EMT based on stoichiometric coefficients and constrained the network with gene expression data to build epithelial (EGFR_E) and mesenchymal (EGFR_M) networks. Metabolic alterations arising from differential expression of EGFR genes were derived from a literature review of AKT regulated metabolic genes. Signaling flux differences between EGFR_E and EGFR_M models subsequently allowed metabolism in D492 and D492M cells to be assessed. Higher flux within the AKT pathway in the D492 cells compared to D492M suggested higher glycolytic activity in D492, which we confirmed experimentally through measurements of glucose uptake and lactate secretion rates. The signaling genes from the AKT, RAS/MAPK and CaM pathways were predicted to revert D492M to the D492 phenotype. Follow-up analysis of EGFR signaling metabolic crosstalk in three additional breast epithelial cell lines highlighted variability in in vitro cell models of EMT. This study shows that the metabolic phenotype may be predicted by in silico analyses of gene expression data of EGFR signaling genes, but this phenomenon is cell-specific and does not follow a simple trend.
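
    The constraint-based (flux balance) machinery referred to above can be illustrated on a deliberately tiny, invented network; the sketch below solves S·v = 0 with flux bounds and a maximized objective flux using scipy, and is in no way the D492/EGFR model itself.

      # Toy flux-balance calculation on an invented 2-metabolite, 4-reaction network,
      # illustrating the constraint-based approach (S·v = 0, flux bounds, maximize an
      # objective flux). This is not the D492/EGFR model from the study.
      import numpy as np
      from scipy.optimize import linprog

      # rows: metabolites A, B; columns: reactions
      #   R1: -> A     R2: A -> B     R3: B ->     R4: A -> (alternative drain)
      S = np.array([[ 1, -1,  0, -1],
                    [ 0,  1, -1,  0]], dtype=float)

      bounds = [(0, 10), (0, 10), (0, 10), (0, 2)]       # flux bounds (assumed)
      objective = np.array([0, 0, -1, 0], dtype=float)   # maximize v3 => minimize -v3

      res = linprog(objective, A_eq=S, b_eq=np.zeros(S.shape[0]),
                    bounds=bounds, method="highs")
      print("optimal fluxes v1..v4:", np.round(res.x, 3))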

  7. Experimental demonstration of IDMA-OFDM for passive optical network

    Science.gov (United States)

    Lin, Bangjiang; Tang, Xuan; Li, Yiwei; Zhang, Min; Lin, Chun; Ghassemlooy, Zabih

    2017-11-01

    We present an interleave division multiple access (IDMA) scheme combined with orthogonal frequency division multiplexing (OFDM) for passive optical networks, which offers improved transmission performance and good chromatic dispersion tolerance. The interleavers are employed to separate different users and the generated chips are modulated on OFDM subcarriers. The feasibility of IDMA-OFDM-PON is experimentally verified with a bitrate of 3.3 Gb/s per user. Compared with OFDMA, IDMA-OFDM offers 8 and 6 dB gains in terms of receiver sensitivity in the cases of 2 and 4 users, respectively.
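
    A transmitter-side sketch of the idea (illustrative parameters only, not the experimental setup) is shown below: each user's bits are spread by a repetition code, permuted by a user-specific chip interleaver, superimposed, and mapped onto OFDM subcarriers via an IFFT with a cyclic prefix.

      # Transmitter-side sketch of IDMA chips mapped onto OFDM subcarriers.
      # Parameters are illustrative assumptions only.
      import numpy as np

      rng = np.random.default_rng(0)
      n_users, n_bits, spread, n_sc, cp = 2, 32, 8, 256, 32

      def idma_chips(bits, interleaver):
          symbols = 1 - 2 * bits                   # BPSK mapping
          chips = np.repeat(symbols, spread)       # low-rate repetition spreading
          return chips[interleaver]                # user-specific chip interleaver

      interleavers = [rng.permutation(n_bits * spread) for _ in range(n_users)]
      bits = rng.integers(0, 2, size=(n_users, n_bits))

      # superpose all users' chip streams, then map chips to OFDM subcarriers
      composite = sum(idma_chips(bits[u], interleavers[u]) for u in range(n_users))
      frame = composite[:n_sc].astype(complex)     # one OFDM symbol worth of chips
      time_signal = np.fft.ifft(frame) * np.sqrt(n_sc)
      tx_symbol = np.concatenate([time_signal[-cp:], time_signal])  # add cyclic prefix

      print("chips per user:", n_bits * spread, "| OFDM symbol length with CP:", tx_symbol.size)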

  8. Scalable Nanomanufacturing—A Review

    Directory of Open Access Journals (Sweden)

    Khershed Cooper

    2017-01-01

    Full Text Available This article describes the field of scalable nanomanufacturing, its importance and need, its research activities and achievements. The National Science Foundation is taking a leading role in fostering basic research in scalable nanomanufacturing (SNM. From this effort several novel nanomanufacturing approaches have been proposed, studied and demonstrated, including scalable nanopatterning. This paper will discuss SNM research areas in materials, processes and applications, scale-up methods with project examples, and manufacturing challenges that need to be addressed to move nanotechnology discoveries closer to the marketplace.

  9. A Scalable Weight-Free Learning Algorithm for Regulatory Control of Cell Activity in Spiking Neuronal Networks.

    Science.gov (United States)

    Zhang, Xu; Foderaro, Greg; Henriquez, Craig; Ferrari, Silvia

    2018-03-01

    Recent developments in neural stimulation and recording technologies are providing scientists with the ability to record and control the activity of individual neurons in vitro or in vivo, with very high spatial and temporal resolution. Tools such as optogenetics, for example, are having a significant impact in the neuroscience field by delivering optical firing control with the precision and spatiotemporal resolution required for investigating information processing and plasticity in biological brains. While a number of training algorithms have been developed to date for spiking neural network (SNN) models of biological neuronal circuits, existing methods rely on learning rules that adjust the synaptic strengths (or weights) directly, in order to obtain the desired network-level (or functional-level) performance. As such, they are not applicable to modifying plasticity in biological neuronal circuits, in which synaptic strengths only change as a result of pre- and post-synaptic neuron firings or biological mechanisms beyond our control. This paper presents a weight-free training algorithm that relies solely on adjusting the spatiotemporal delivery of neuron firings in order to optimize the network performance. The proposed weight-free algorithm does not require any knowledge of the SNN model or its plasticity mechanisms. As a result, this training approach is potentially realizable in vitro or in vivo via neural stimulation and recording technologies, such as optogenetics and multielectrode arrays, and could be utilized to control plasticity at multiple scales of biological neuronal circuits. The approach is demonstrated by training SNNs with hundreds of units to control a virtual insect navigating in an unknown environment.
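
    The notion of optimizing only the stimulation schedule while the weights stay fixed can be caricatured in a few lines; the sketch below uses naive random search on a toy discrete-time leaky integrate-and-fire model and is not the authors' algorithm.

      # Toy "weight-free" optimization: synaptic weights stay fixed and only the
      # external stimulation schedule is searched, here by naive random search.
      import numpy as np

      rng = np.random.default_rng(2)
      n_in, n_hid, T = 5, 20, 50
      W_in = rng.normal(0, 1.0, (n_hid, n_in))        # fixed, never modified
      W_out = rng.normal(0, 1.0, n_hid)

      def run(stim):
          """stim: (T, n_in) binary stimulation schedule -> output spike count."""
          v = np.zeros(n_hid)
          out_spikes = 0
          for t in range(T):
              v = 0.9 * v + W_in @ stim[t]            # leaky integration of inputs
              spikes = (v > 1.0).astype(float)
              v[spikes > 0] = 0.0                     # reset after spiking
              out_spikes += float(W_out @ spikes > 0.5)
          return out_spikes

      target = 20.0                                   # desired output activity (assumed)
      best, best_err = None, np.inf
      for _ in range(300):                            # naive search over stimulation schedules
          cand = (rng.random((T, n_in)) < 0.2).astype(float)
          err = abs(run(cand) - target)
          if err < best_err:
              best, best_err = cand, err
      print("best |output spikes - target|:", best_err)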

  10. A reactive, scalable, and transferable model for molecular energies from a neural network approach based on local information

    Science.gov (United States)

    Unke, Oliver T.; Meuwly, Markus

    2018-06-01

    Despite the ever-increasing computer power, accurate ab initio calculations for large systems (thousands to millions of atoms) remain infeasible. Instead, approximate empirical energy functions are used. Most current approaches are either transferable between different chemical systems, but not particularly accurate, or they are fine-tuned to a specific application. In this work, a data-driven method to construct a potential energy surface based on neural networks is presented. Since the total energy is decomposed into local atomic contributions, the evaluation is easily parallelizable and scales linearly with system size. With prediction errors below 0.5 kcal mol-1 for both unknown molecules and configurations, the method is accurate across chemical and configurational space, which is demonstrated by applying it to datasets from nonreactive and reactive molecular dynamics simulations and a diverse database of equilibrium structures. The possibility to use small molecules as reference data to predict larger structures is also explored. Since the descriptor only uses local information, high-level ab initio methods, which are computationally too expensive for large molecules, become feasible for generating the necessary reference data used to train the neural network.
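
    The linear-scaling structure E = Σ_i E_i(descriptor_i) is easy to sketch; the snippet below uses an invented radial descriptor and an untrained, random MLP purely to show how per-atom contributions are summed, not to reproduce the published model or its descriptor.

      # Sketch of the energy decomposition E = sum_i NN(descriptor_i): each atom gets
      # a local radial descriptor and a small MLP (random, untrained weights) predicts
      # its contribution. Illustrates the linear-scaling structure only.
      import numpy as np

      rng = np.random.default_rng(0)
      positions = rng.uniform(0, 10.0, size=(40, 3))    # toy "molecule" of 40 atoms
      centers = np.linspace(0.5, 6.0, 16)               # radial basis centers (assumed)

      def descriptor(i):
          d = np.linalg.norm(positions - positions[i], axis=1)
          d = d[(d > 1e-8) & (d < 6.0)]                 # neighbours within a cutoff
          return np.exp(-((d[:, None] - centers) ** 2)).sum(axis=0)

      W1, b1 = rng.normal(0, 0.1, (32, 16)), np.zeros(32)
      W2, b2 = rng.normal(0, 0.1, 32), 0.0

      def atomic_energy(x):
          return W2 @ np.tanh(W1 @ x + b1) + b2

      total_energy = sum(atomic_energy(descriptor(i)) for i in range(len(positions)))
      print("toy total energy:", round(float(total_energy), 4))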

  11. Advanced Modulation Formats in Cognitive Optical Networks: EU project CHRON Demonstration

    DEFF Research Database (Denmark)

    Borkowski, Robert; Caballero Jambrina, Antonio; Klonidis, Dimitris

    2014-01-01

    We demonstrate real-time path establishment and switching of coherent modulation formats (QPSK, 16QAM) within an optical network driven by cognitive algorithms. Cognition aims at autonomous configuration optimization to satisfy quality of transmission requirements.

  12. Field and long-term demonstration of a wide area quantum key distribution network.

    Science.gov (United States)

    Wang, Shuang; Chen, Wei; Yin, Zhen-Qiang; Li, Hong-Wei; He, De-Yong; Li, Yu-Hu; Zhou, Zheng; Song, Xiao-Tian; Li, Fang-Yi; Wang, Dong; Chen, Hua; Han, Yun-Guang; Huang, Jing-Zheng; Guo, Jun-Fu; Hao, Peng-Lei; Li, Mo; Zhang, Chun-Mei; Liu, Dong; Liang, Wen-Ye; Miao, Chun-Hua; Wu, Ping; Guo, Guang-Can; Han, Zheng-Fu

    2014-09-08

    A wide area quantum key distribution (QKD) network deployed on communication infrastructures provided by China Mobile Ltd. is demonstrated. Three cities and two metropolitan area QKD networks were linked up to form the Hefei-Chaohu-Wuhu wide area QKD network with over 150 kilometers coverage area, in which Hefei metropolitan area QKD network was a typical full-mesh core network to offer all-to-all interconnections, and Wuhu metropolitan area QKD network was a representative quantum access network with point-to-multipoint configuration. The whole wide area QKD network ran for more than 5000 hours, from 21 December 2011 to 19 July 2012, and part of the network stopped until last December. To adapt to the complex and volatile field environment, the Faraday-Michelson QKD system with several stability measures was adopted when we designed QKD devices. Through standardized design of QKD devices, resolution of symmetry problem of QKD devices, and seamless switching in dynamic QKD network, we realized the effective integration between point-to-point QKD techniques and networking schemes.

  13. Direct2Experts: a pilot national network to demonstrate interoperability among research-networking platforms

    OpenAIRE

    Weber, Griffin M; Barnett, William; Conlon, Mike; Eichmann, David; Kibbe, Warren; Falk-Krzesinski, Holly; Halaas, Michael; Johnson, Layne; Meeks, Eric; Mitchell, Donald; Schleyer, Titus; Stallings, Sarah; Warden, Michael; Kahlon, Maninder

    2011-01-01

    Research-networking tools use data-mining and social networking to enable expertise discovery, matchmaking and collaboration, which are important facets of team science and translational research. Several commercial and academic platforms have been built, and many institutions have deployed these products to help their investigators find local collaborators. Recent studies, though, have shown the growing importance of multiuniversity teams in science. Unfortunately, the lack of a standard dat...

  14. High-performance flat data center network architecture based on scalable and flow-controlled optical switching system

    NARCIS (Netherlands)

    Calabretta, N.; Miao, W.; Dorren, H.J.S.; Schroder, H.; Chen, R.T.

    2016-01-01

    Traffic in data center networks (DCNs) is steadily growing to support various applications and virtualization technologies. Multi-tenancy enabling efficient resource utilization is considered as a key requirement for the next generation DCs resulting from the growing demands for services and

  15. Demonstration of Hydrogen Energy Network and Fuel Cells in Residential Homes

    International Nuclear Information System (INIS)

    Hirohisa Aki; Tetsuhiko Maeda; Itaru Tamura; Akeshi Kegasa; Yoshiro Ishikawa; Ichiro Sugimoto; Itaru Ishii

    2006-01-01

    The authors proposed the setting up of an energy interchange system by establishing energy networks of electricity, hot water, and hydrogen in residential homes. In such networks, some homes are equipped with fuel cell stacks, fuel processors, hydrogen storage devices, and large storage tanks for hot water. The energy network enables the flexible operation of the fuel cell stacks and fuel processors. A demonstration project has been planned in existing residential homes to evaluate the proposal. The demonstration will be presented in a small apartment building. The building will be renovated and will be equipped with a hydrogen production facility, a hydrogen interchange pipe, and fuel cell stacks with a heat recovery device. The energy flow process from hydrogen production to consumption in the homes will be demonstrated. This paper presents the proposed energy interchange system and demonstration project. (authors)

  16. Experimental demonstration of software defined data center optical networks with Tbps end-to-end tunability

    Science.gov (United States)

    Zhao, Yongli; Zhang, Jie; Ji, Yuefeng; Li, Hui; Wang, Huitao; Ge, Chao

    2015-10-01

    End-to-end tunability is important for provisioning elastic channels for the bursty traffic of data center optical networks. How, then, can end-to-end tunability be achieved over elastic optical networks? A software defined networking (SDN) based end-to-end tunability solution is proposed for software defined data center optical networks, and the protocol extension and implementation procedure are designed accordingly. For the first time, a flexible-grid all-optical network with a Tbps end-to-end tunable transport and switch system has been demonstrated online for data center interconnection, controlled by an OpenDaylight (ODL) based controller. The performance of the end-to-end tunable transport and switch system has been evaluated with wavelength number tuning, bit rate tuning, and transmit power tuning procedures.

  17. Albemarle Sound demonstration study of the national monitoring network for US coastal waters and their tributaries

    Science.gov (United States)

    Michelle Moorman; Sharon Fitzgerald; Keith Loftin; Elizabeth Fensin

    2016-01-01

    The U.S. Geological Survey (USGS) is implementing a demonstration project in the Albemarle Sound for the National Monitoring Network for U.S. coastal waters and their tributaries. The goal of the National Monitoring Network is to provide information about the health of our oceans and coastal ecosystems and inland influences on coastal waters for improved resource...

  18. Scalable algorithms for contact problems

    CERN Document Server

    Dostál, Zdeněk; Sadowská, Marie; Vondrák, Vít

    2016-01-01

    This book presents a comprehensive and self-contained treatment of the authors’ newly developed scalable algorithms for the solutions of multibody contact problems of linear elasticity. The brand new feature of these algorithms is theoretically supported numerical scalability and parallel scalability demonstrated on problems discretized by billions of degrees of freedom. The theory supports solving multibody frictionless contact problems, contact problems with possibly orthotropic Tresca’s friction, and transient contact problems. It covers BEM discretization, jumping coefficients, floating bodies, mortar non-penetration conditions, etc. The exposition is divided into four parts, the first of which reviews appropriate facets of linear algebra, optimization, and analysis. The most important algorithms and optimality results are presented in the third part of the volume. The presentation is complete, including continuous formulation, discretization, decomposition, optimality results, and numerical experimen...

  19. Climate-Determined Suitability of the Water Saving Technology "Alternate Wetting and Drying" in Rice Systems: A Scalable Methodology demonstrated for a Province in the Philippines.

    Directory of Open Access Journals (Sweden)

    Andrew Nelson

    Full Text Available 70% of the world's freshwater is used for irrigated agriculture and demand is expected to increase to meet future food security requirements. In Asia, rice accounts for the largest proportion of irrigated water use and reducing or conserving water in rice systems has been a long-standing goal in agricultural research. The Alternate Wetting and Drying (AWD) technique has been developed to reduce water use by up to 30% compared to the continuously flooded conditions typically found in rice systems, while not impacting yield. AWD also reduces methane emissions produced by anaerobic archaea and hence has applications for reducing water use and greenhouse gas emissions. Although AWD is being promoted across Asia, there have been no attempts to estimate the suitable area for this promising technology on a large scale. We present and demonstrate a spatial and temporal climate suitability assessment method for AWD that can be widely applied across rice systems in Asia. We use a simple water balance model and easily available spatial and temporal information on rice area, rice seasonality, rainfall, potential evapotranspiration and soil percolation rates to assess the suitable area per season. We apply the model to Cagayan province in the Philippines and conduct a sensitivity analysis to account for uncertainties in soil percolation and suitability classification. As expected, the entire dry season is climatically suitable for AWD for all scenarios. A further 60% of the wet season area is found suitable, contradicting general perceptions that AWD would not be feasible in the wet season, and showing that spatial and temporal assessments are necessary to explore the full potential of AWD.
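
    A minimal version of such a daily water-balance screening is sketched below; the synthetic rainfall series, the loss rates, the safe-AWD threshold of 15 cm below the surface and the suitability rule are all illustrative assumptions rather than the paper's calibrated values.

      # Daily bucket-model sketch of AWD suitability screening: track ponded/perched
      # water depth, re-irrigate when it falls to the "safe AWD" threshold, and call
      # the season suitable if enough drying cycles occur. All values are assumed.
      import numpy as np

      rng = np.random.default_rng(7)
      days = 100
      rain = rng.gamma(shape=0.4, scale=12.0, size=days)   # mm/day, synthetic wet season
      et, percolation = 5.0, 10.0                          # mm/day (assumed constants)
      safe_threshold, refill_depth = -150.0, 50.0          # mm below/above soil surface

      depth, dry_events = refill_depth, 0
      for r in rain:
          depth += r - et - percolation
          depth = min(depth, 100.0)                        # bund height caps ponding
          if depth <= safe_threshold:                      # field has dried to the threshold
              dry_events += 1
              depth = refill_depth                         # irrigate back to a ponded state

      suitable = dry_events >= 2                           # illustrative suitability rule
      print(f"drying events: {dry_events}  ->  AWD climatically suitable: {suitable}")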

  20. High-performance flat data center network architecture based on scalable and flow-controlled optical switching system

    Science.gov (United States)

    Calabretta, Nicola; Miao, Wang; Dorren, Harm

    2016-03-01

    Traffic in data center networks (DCNs) is steadily growing to support various applications and virtualization technologies. Multi-tenancy enabling efficient resource utilization is considered as a key requirement for the next generation DCs resulting from the growing demands for services and applications. Virtualization mechanisms and technologies can leverage statistical multiplexing and fast switch reconfiguration to further extend the DC efficiency and agility. We present a novel high performance flat DCN employing bufferless and distributed fast (sub-microsecond) optical switches with wavelength, space, and time switching operation. The fast optical switches can enhance the performance of the DCNs by providing large-capacity switching capability and efficiently sharing the data plane resources by exploiting statistical multiplexing. Benefiting from the Software-Defined Networking (SDN) control of the optical switches, virtual DCNs can be flexibly created and reconfigured by the DCN provider. Numerical and experimental investigations of the DCN based on the fast optical switches show the successful setup of virtual network slices for intra-data center interconnections. Experimental results to assess the DCN performance in terms of latency and packet loss show less than 10^-5 packet loss and 640ns end-to-end latency with 0.4 load and a 16-packet size buffer. Numerical investigation of the system performance when the optical switch port count is scaled to 32x32 indicates that more than 1000 ToRs, each with a Terabit/s interface, can be interconnected, providing a Petabit/s capacity. The roadmap to photonic integration of large port optical switches will also be presented.

  1. HyLIFT-FLEX. ''Development and demonstration of flexible and scalable fuel cell power system for various material handling vehicles''. Final report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2012-10-15

    The project has successfully developed and tested a new fuel cell system from H2 Logic in a tow tractor from MULAG. Based on the project results a positive decision has been taken on continuing commercialisation efforts. The next step will be a large scale demonstration of up to 100 units in a new project named HyLIFT-Europe that is expected to commence in early 2013, with support from the FCH-JU programme. Main efforts in the project have been the development of a new fuel cell system, named H2Drive from H2 Logic, and the integration and test in a standard battery powered COMET 3 towing tractor from MULAG. The system size is exactly the same as a standard battery box (DIN measures) and can be easily integrated into e.g. the MULAG vehicle or other electric powered material handling vehicles using the same battery size. Several R&D efforts on the fuel cell system have been conducted with the aim of reducing cost and improving efficiency, among others the following: 1) New air compressor sub-system and control - improving overall system efficiency by approximately 2.5%. 2) New simplified air-based compressor cooling sub-system. 3) New hydrogen compressor sub-system with improved efficiency and reduced cost. 4) New hydrogen inlet and outlet manifold sub-system - resulting in reduction of more than 50% of all sensor components in the fuel cell system. 5) New DC/DC converter with an average efficiency of 97% - a 3% improvement. 6) A new optimized hybrid system that meets the vehicle cycle requirements. In total the R&D efforts have improved the overall fuel cell system efficiency by 10% and helped to reduce costs by 33% compared to the previous generation. A first prototype of the developed H2Drive system has been constructed and integrated into the MULAG Towing Tractor. Only a few modifications were made to the base vehicle, among others the integration of cabin heating, displays and motor control. Several internal tests were conducted at H2 Logic and MULAG before making a

  2. Numeric Analysis for Relationship-Aware Scalable Streaming Scheme

    Directory of Open Access Journals (Sweden)

    Heung Ki Lee

    2014-01-01

    Full Text Available Frequent packet loss of media data is a critical problem that degrades the quality of streaming services over mobile networks. Packet loss invalidates frames containing lost packets and other related frames at the same time. Indirect loss caused by losing packets decreases the quality of streaming. A scalable streaming service can decrease the amount of dropped multimedia resulting from a single packet loss. Content providers typically divide one large media stream into several layers through a scalable streaming service and then provide each scalable layer to the user depending on the mobile network. Also, a scalable streaming service makes it possible to decode partial multimedia data depending on the relationship between frames and layers. Therefore, a scalable streaming service provides a way to decrease the wasted multimedia data when one packet is lost. However, the hierarchical structure between frames and layers of scalable streams determines the service quality of the scalable streaming service. Even if whole packets of layers are transmitted successfully, they cannot be decoded as a result of the absence of reference frames and layers. Therefore, the complicated relationship between frames and layers in a scalable stream increases the volume of abandoned layers. For providing a high-quality scalable streaming service, we choose a proper relationship between scalable layers as well as the amount of transmitted multimedia data depending on the network situation. We prove that a simple scalable scheme outperforms a complicated scheme in an error-prone network. We suggest an adaptive set-top box (AdaptiveSTB) to lower the dependency between scalable layers in a scalable stream. Also, we provide a numerical model to obtain the indirect loss of multimedia data and apply it to various multimedia streams. Our AdaptiveSTB enhances the quality of a scalable streaming service by removing indirect loss.
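
    The indirect-loss bookkeeping described above can be expressed compactly with a dependency graph; in the toy sketch below the GOP/layer structure and unit sizes are invented, and losing a unit invalidates it together with every unit that transitively depends on it.

      # Toy computation of indirect loss in a scalable stream: a dependency DAG links
      # layers/frames to the units they reference; losing one unit also invalidates
      # everything that (transitively) depends on it. The structure is invented.
      import networkx as nx

      dep = nx.DiGraph()          # edge u -> v means "v depends on u"
      dep.add_edges_from([
          ("I0_base", "I0_enh1"), ("I0_enh1", "I0_enh2"),
          ("I0_base", "P1_base"), ("P1_base", "P1_enh1"),
          ("P1_base", "P2_base"), ("P2_base", "P2_enh1"),
      ])
      size_kb = {n: (30 if n.endswith("base") else 15) for n in dep.nodes}

      def wasted_kb(lost_unit):
          invalid = {lost_unit} | nx.descendants(dep, lost_unit)
          return sum(size_kb[n] for n in invalid)

      for unit in ["I0_base", "P1_base", "P2_enh1"]:
          print(f"losing {unit:8s} wastes {wasted_kb(unit):3d} kB "
                f"(direct {size_kb[unit]} kB + indirect {wasted_kb(unit) - size_kb[unit]} kB)")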

  3. Scalable optical switches for computing applications

    NARCIS (Netherlands)

    White, I.H.; Aw, E.T.; Williams, K.A.; Wang, Haibo; Wonfor, A.; Penty, R.V.

    2009-01-01

    A scalable photonic interconnection network architecture is proposed whereby a Clos network is populated with broadcast-and-select stages. This enables the efficient exploitation of an emerging class of photonic integrated switch fabric. A low distortion space switch technology based on recently
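
    As a back-of-the-envelope reminder of why Clos topologies scale, the sketch below sizes a symmetric three-stage Clos fabric with the classic strict-sense non-blocking condition m ≥ 2n − 1 and compares its crosspoint count with a single crossbar; the port counts are arbitrary examples, not the fabric proposed in the paper.

      # Quick sizing check for a symmetric three-stage Clos fabric: with r ingress
      # switches of n ports each (N = n*r endpoints), m >= 2n - 1 middle switches
      # give a strict-sense non-blocking network. Port counts are illustrative.
      def clos_crosspoints(n, r, m):
          # ingress: r switches of n x m, middle: m switches of r x r, egress: r switches of m x n
          return 2 * r * n * m + m * r * r

      for n, r in [(8, 8), (16, 16), (32, 32)]:
          N = n * r
          m = 2 * n - 1                      # strict-sense non-blocking condition
          xp = clos_crosspoints(n, r, m)
          print(f"N={N:4d} endpoints: m={m:2d} middle switches, "
                f"{xp:7d} crosspoints vs {N*N:7d} for a single crossbar")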

  4. Design and analysis of electrical energy storage demonstration projects on UK distribution networks

    International Nuclear Information System (INIS)

    Lyons, P.F.; Wade, N.S.; Jiang, T.; Taylor, P.C.; Hashiesh, F.; Michel, M.; Miller, D.

    2015-01-01

    Highlights: • Results of an EES system demonstration project carried out in the UK. • Approaches to the design of trials for EES and observation on their application. • A formalised methodology for analysis of smart grids trials. • Validated models of energy storage. • Capability of EES to connect larger quantities of heat pumps and PV is evaluated. - Abstract: The UK government’s CO2 emissions targets will require electrification of much of the country’s infrastructure with low carbon technologies such as photovoltaic panels, electric vehicles and heat pumps. The large scale proliferation of these technologies will necessitate major changes to the planning and operation of distribution networks. Distribution network operators are trialling electrical energy storage (EES) across their networks to increase their understanding of the contribution that it can make to enable the expected paradigm shift in generation and consumption of electricity. In order to evaluate a range of applications for EES, including voltage control and power flow management, installations have taken place at various distribution network locations and voltage levels. This article reports on trial design approaches and their application to a UK trial of an EES system to ensure broad applicability of the results. Results from these trials of an EES system, low carbon technologies and trial distribution networks are used to develop validated power system models. These models are used to evaluate, using a formalised methodology, the impact that EES could have on the design and operation of future distribution networks

  5. Experimental demonstration of optical stealth transmission over wavelength-division multiplexing network.

    Science.gov (United States)

    Zhu, Huatao; Wang, Rong; Pu, Tao; Fang, Tao; Xiang, Peng; Zheng, Jilin; Tang, Yeteng; Chen, Dalei

    2016-08-10

    We propose and experimentally demonstrate an optical stealth transmission system over a 200 GHz-grid wavelength-division multiplexing (WDM) network. The stealth signal is processed by spectral broadening, temporal spreading, and power equalizing. The public signal is suppressed by multiband notch filtering at the stealth channel receiver. The interaction between the public and stealth channels is investigated in terms of public-signal-to-stealth-signal ratio, data rate, notch-filter bandwidth, and public channel number. The stealth signal can transmit over 80 km single-mode fiber with no error. Our experimental results verify the feasibility of optical steganography used over the existing WDM-based optical network.

  6. Delay/Disruption Tolerant Networks (DTN): Testing and Demonstration for Lunar Surface Applications

    Science.gov (United States)

    2009-01-01

    This slide presentation reviews the testing of the Delay/Disruption Tolerant Network (DTN) designed for use with Lunar Surface applications. This is being done through the DTN Experimental Network (DEN), which permits access and testing by other NASA centers, DTN team members and protocol developers. The objective of this work is to demonstrate DTN for high return applications in lunar scenarios, provide DEN connectivity with analogs of Constellation elements, emulators, and other resources from DTN Team Members, serve as a wireless communications staging ground for remote analog excursions and enable testing of detailed communication scenarios and evaluation of network performance. Three scenarios for DTN on the Lunar surface are reviewed: motion imagery, voice and sensor telemetry, and navigation telemetry.

  7. Modelling and Simulation of National Electronic Product Code Network Demonstrator Project

    Science.gov (United States)

    Mo, John P. T.

    The National Electronic Product Code (EPC) Network Demonstrator Project (NDP) was the first large scale consumer goods track and trace investigation in the world using the full EPC protocol system for applying RFID technology in supply chains. The NDP demonstrated the methods of sharing information securely using the EPC Network, providing authentication to interacting parties, and enhancing the ability to track and trace movement of goods within the entire supply chain involving transactions among multiple enterprises. Due to project constraints, the actual run of the NDP lasted only 3 months and could not be consolidated with quantitative results. This paper discusses the modelling and simulation of activities in the NDP in a discrete event simulation environment and provides an estimation of the potential benefits that can be derived from the NDP if it were continued for one whole year.
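
    A minimal discrete-event model of such track-and-trace activity might look like the SimPy sketch below; the arrival rates, dwell times and read-point names are invented for illustration and are not calibrated to the NDP data.

      # Minimal discrete-event sketch (SimPy) of EPC-style track and trace: pallets
      # flow distributor -> retailer and every RFID portal logs a read event.
      import random
      import simpy

      random.seed(3)
      reads = []

      def pallet(env, epc):
          for site in ("distributor_dock", "retailer_dock", "retailer_backroom"):
              yield env.timeout(random.expovariate(1 / 4.0))   # hours in transit/handling (assumed)
              reads.append((round(env.now, 2), site, epc))     # RFID portal read event

      def source(env):
          for i in range(20):
              env.process(pallet(env, epc=f"urn:epc:id:sgtin:0000.{i:05d}"))
              yield env.timeout(random.expovariate(1 / 2.0))   # pallet release interval (assumed)

      env = simpy.Environment()
      env.process(source(env))
      env.run(until=24 * 7)                                    # one simulated week

      print(f"{len(reads)} read events captured; first three:")
      for rec in sorted(reads)[:3]:
          print(" ", rec)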

  8. Demonstration of hybrid orbital angular momentum multiplexing and time-division multiplexing passive optical network.

    Science.gov (United States)

    Wang, Andong; Zhu, Long; Liu, Jun; Du, Cheng; Mo, Qi; Wang, Jian

    2015-11-16

    Mode-division multiplexing passive optical network (MDM-PON) is a promising scheme for next-generation access networks to further increase fiber transmission capacity. In this paper, we demonstrate the proof-of-concept experiment of hybrid mode-division multiplexing (MDM) and time-division multiplexing (TDM) PON architecture by exploiting orbital angular momentum (OAM) modes. Bidirectional transmissions with 2.5-Gbaud 4-level pulse amplitude modulation (PAM-4) downstream and 2-Gbaud on-off keying (OOK) upstream are demonstrated in the experiment. The observed optical signal-to-noise ratio (OSNR) penalties for downstream and upstream transmissions at a bit-error rate (BER) of 2 × 10^-3 are less than 2.0 dB and 3.0 dB, respectively.

  9. Experimental demonstration of large capacity WSDM optical access network with multicore fibers and advanced modulation formats.

    Science.gov (United States)

    Li, Borui; Feng, Zhenhua; Tang, Ming; Xu, Zhilin; Fu, Songnian; Wu, Qiong; Deng, Lei; Tong, Weijun; Liu, Shuang; Shum, Perry Ping

    2015-05-04

    Towards the next generation optical access network supporting large capacity data transmission to an enormous number of users covering a wider area, we proposed a hybrid wavelength-space division multiplexing (WSDM) optical access network architecture utilizing multicore fibers with advanced modulation formats. As a proof of concept, we experimentally demonstrated a WSDM optical access network with duplex transmission using our developed and fabricated multicore (7-core) fibers over a 58.7 km distance. As a cost-effective modulation scheme for access networks, the optical OFDM-QPSK signal has been intensity modulated on the downstream transmission in the optical line terminal (OLT) and it was directly detected in the optical network unit (ONU) after MCF transmission. 10 wavelengths with 25GHz channel spacing from an optical comb generator are employed and each wavelength is loaded with a 5Gb/s OFDM-QPSK signal. After amplification, power splitting, and a fan-in multiplexer, the 10-wavelength downstream signal was injected into six outer layer cores simultaneously and the aggregate downstream capacity reaches 300 Gb/s. A -16 dBm sensitivity has been achieved for a 3.8 × 10^-3 bit error ratio (BER) at the 7% Forward Error Correction (FEC) limit for all wavelengths in every core. The upstream signal from the ONU side has also been generated and the bidirectional transmission in the same core causes negligible performance degradation to the downstream signal. As a universal platform for wired/wireless data access, our proposed architecture provides an additional dimension for high speed mobile signal transmission and we hence demonstrated an upstream delivery of 20Gb/s per wavelength with QPSK modulation formats using the inner core of the MCF, emulating a mobile backhaul service. The IQ modulated data was coherently detected in the OLT side. A -19 dBm sensitivity has been achieved under the FEC limit and a power budget of more than 18 dB is guaranteed.

  10. Scalable Frequent Subgraph Mining

    KAUST Repository

    Abdelhamid, Ehab

    2017-06-19

    A graph is a data structure that contains a set of nodes and a set of edges connecting these nodes. Nodes represent objects while edges model relationships among these objects. Graphs are used in various domains due to their ability to model complex relations among several objects. Given an input graph, the Frequent Subgraph Mining (FSM) task finds all subgraphs with frequencies exceeding a given threshold. FSM is crucial for graph analysis, and it is an essential building block in a variety of applications, such as graph clustering and indexing. FSM is computationally expensive, and its existing solutions are extremely slow. Consequently, these solutions are incapable of mining modern large graphs. This slowness is caused by the underlying approaches of these solutions which require finding and storing an excessive amount of subgraph matches. This dissertation proposes a scalable solution for FSM that avoids the limitations of previous work. This solution is composed of four components. The first component is a single-threaded technique which, for each candidate subgraph, needs to find only a minimal number of matches. The second component is a scalable parallel FSM technique that utilizes a novel two-phase approach. The first phase quickly builds an approximate search space, which is then used by the second phase to optimize and balance the workload of the FSM task. The third component focuses on accelerating frequency evaluation, which is a critical step in FSM. To do so, a machine learning model is employed to predict the type of each graph node, and accordingly, an optimized method is selected to evaluate that node. The fourth component focuses on mining dynamic graphs, such as social networks. To this end, an incremental index is maintained during the dynamic updates. Only this index is processed and updated for the majority of graph updates. Consequently, search space is significantly pruned and efficiency is improved. The empirical evaluation shows that the

  11. Blind Cooperative Routing for Scalable and Energy-Efficient Internet of Things

    KAUST Repository

    Bader, Ahmed; Alouini, Mohamed-Slim

    2016-01-01

    Multihop networking is promoted in this paper for energy-efficient and highly-scalable Internet of Things (IoT). Recognizing concerns related to the scalability of classical multihop routing and medium access techniques, the use of blind cooperation

  12. Experimental demonstration of topologically protected efficient sound propagation in an acoustic waveguide network

    Science.gov (United States)

    Wei, Qi; Tian, Ye; Zuo, Shu-Yu; Cheng, Ying; Liu, Xiao-Jun

    2017-03-01

    Acoustic topological states support sound propagation along the boundary in a one-way direction with inherent robustness against defects and disorders, leading to the revolution of the manipulation on acoustic waves. A variety of acoustic topological states relying on circulating fluid, chiral coupling, or temporal modulation have been proposed theoretically. However, experimental demonstration has so far remained a significant challenge, due to the critical limitations such as structural complexity and high losses. Here, we experimentally demonstrate an acoustic anomalous Floquet topological insulator in a waveguide network. The acoustic gapless edge states can be found in the band gap when the waveguides are strongly coupled. The scheme features simple structure and high-energy throughput, leading to the experimental demonstration of efficient and robust topologically protected sound propagation along the boundary. The proposal may offer a unique, promising application for design of acoustic devices in acoustic guiding, switching, isolating, filtering, etc.

  13. Experimental demonstration of time- and mode-division multiplexed passive optical network

    Science.gov (United States)

    Ren, Fang; Li, Juhao; Tang, Ruizhi; Hu, Tao; Yu, Jinyi; Mo, Qi; He, Yongqi; Chen, Zhangyuan; Li, Zhengbin

    2017-07-01

    A time- and mode-division multiplexed passive optical network (TMDM-PON) architecture is proposed, in which each optical network unit (ONU) communicates with the optical line terminal (OLT) independently utilizing both different time slots and switched optical linearly polarized (LP) spatial modes. A combination of a mode multiplexer/demultiplexer (MUX/DEMUX) and a simple N × 1 optical switch is employed to select the specific LP mode in each ONU. A mode-insensitive power splitter is used for signal broadcast/combination between the OLT and ONUs. We theoretically propose a dynamic mode and time slot assignment scheme for TMDM-PON based on inter-ONU priority rating, and investigate the variation of time delay and packet loss ratio by simulation. Moreover, we experimentally demonstrate 2-mode TMDM-PON transmission over 10 km FMF with a 10-Gb/s on-off keying (OOK) signal and direct detection.

  14. Developing Scalable Information Security Systems

    Directory of Open Access Journals (Sweden)

    Valery Konstantinovich Ablekov

    2013-06-01

    Full Text Available Existing physical security systems have a wide range of shortcomings, including high cost, a large number of vulnerabilities, and problems with modification and system support. This paper addresses the problem of developing systems without these drawbacks. The paper presents the architecture of an information security system that operates over the TCP/IP network protocol, supports the connection of different types of devices, and integrates with existing security systems. The main advantages are a significant increase in system reliability and scalability, both vertical and horizontal, with minimal financial and time costs.

  15. Waste Energy Recovery from Natural Gas Distribution Network: CELSIUS Project Demonstrator in Genoa

    Directory of Open Access Journals (Sweden)

    Davide Borelli

    2015-12-01

    Full Text Available Increasing energy efficiency by the smart recovery of waste energy is the scope of the CELSIUS Project (Combined Efficient Large Scale Integrated Urban Systems). The CELSIUS consortium includes a world-leading partnership of outstanding research, innovation and implementation organizations, and gathers competence and excellence from five European cities with complementary baseline positions regarding the sustainable use of energy: Cologne, Genoa, Gothenburg, London, and Rotterdam. Lasting four years and coordinated by the City of Gothenburg, the project addresses, with a holistic approach, technical, economic, administrative, social, legal and political issues concerning smart district heating and cooling, aiming to establish best-practice solutions. This will be done through the implementation of twelve new high-reaching demonstration projects, which cover the major aspects of innovative urban heating and cooling for a smart city. The Genoa demonstrator was designed in order to recover energy from the pressure drop between the main supply line and the city natural gas network. The potential mechanical energy is converted to electricity by a turboexpander/generator system, which has been integrated into a combined heat and power plant to supply a district heating network. The energy analysis performed assessed the natural gas savings and greenhouse gas reductions achieved through this smart systems integration.
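
    The order of magnitude of the recoverable power can be checked with an ideal-gas isentropic estimate, as in the sketch below; the flow rate, pressures, gas properties and efficiency are assumed values for illustration, not the Genoa demonstrator's figures.

      # Back-of-envelope estimate of the electric power recoverable by a turboexpander
      # at a gas pressure-reduction station, using an ideal-gas isentropic expansion.
      m_dot = 2.0          # kg/s of natural gas (assumed)
      cp = 2200.0          # J/(kg K), approximate for natural gas
      k = 1.30             # heat capacity ratio (approximate)
      T1 = 288.0           # K, inlet temperature after preheating (assumed)
      p1, p2 = 24e5, 5e5   # Pa, city-gate inlet and distribution outlet pressure (assumed)
      eta = 0.75           # combined isentropic x generator efficiency (assumed)

      ideal_specific_work = cp * T1 * (1 - (p2 / p1) ** ((k - 1) / k))   # J/kg
      electric_power_kw = eta * m_dot * ideal_specific_work / 1e3
      print(f"recoverable electric power ~ {electric_power_kw:.0f} kW")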

  16. Scalable Open Source Smart Grid Simulator (SGSim)

    DEFF Research Database (Denmark)

    Ebeid, Emad Samuel Malki; Jacobsen, Rune Hylsberg; Stefanni, Francesco

    2017-01-01

    This paper presents an open source smart grid simulator (SGSim). The simulator is based on the open source SystemC Network Simulation Library (SCNSL) and aims to model scalable smart grid applications. SGSim has been tested under different smart grid scenarios that contain hundreds of thousands of households...

  17. First demonstration of single-mode MCF transport network with crosstalk-aware in-service optical channel control

    DEFF Research Database (Denmark)

    Pulverer, K.; Tanaka, T.; Häbel, U.

    2017-01-01

    We demonstrate the first crosstalk-aware traffic engineering as a use case in a multicore fibre transport network. With the help of a software-defined network controller, modulation format and channel route are adaptively changed using programmable devices with XT monitors.

  18. Experimental demonstration of OpenFlow-enabled media ecosystem architecture for high-end applications over metro and core networks.

    Science.gov (United States)

    Ntofon, Okung-Dike; Channegowda, Mayur P; Efstathiou, Nikolaos; Rashidi Fard, Mehdi; Nejabati, Reza; Hunter, David K; Simeonidou, Dimitra

    2013-02-25

    In this paper, a novel Software-Defined Networking (SDN) architecture is proposed for high-end Ultra High Definition (UHD) media applications. UHD media applications require huge amounts of bandwidth that can only be met with high-capacity optical networks. In addition, there are requirements for control frameworks capable of delivering effective application performance with efficient network utilization. A novel SDN-based Controller that tightly integrates application-awareness with network control and management is proposed for such applications. An OpenFlow-enabled test-bed demonstrator is reported with performance evaluations of advanced online and offline media- and network-aware schedulers.

  19. A Testbed for Highly-Scalable Mission Critical Information Systems

    National Research Council Canada - National Science Library

    Birman, Kenneth P

    2005-01-01

    ... systems in a networked environment. Headed by Professor Ken Birman, the project is exploring a novel fusion of classical protocols for reliable multicast communication with a new style of peer-to-peer protocol called scalable "gossip...

  20. Scalable coherent interface

    International Nuclear Information System (INIS)

    Alnaes, K.; Kristiansen, E.H.; Gustavson, D.B.; James, D.V.

    1990-01-01

    The Scalable Coherent Interface (IEEE P1596) is establishing an interface standard for very high performance multiprocessors, supporting a cache-coherent-memory model scalable to systems with up to 64K nodes. This Scalable Coherent Interface (SCI) will supply a peak bandwidth per node of 1 GigaByte/second. The SCI standard should facilitate assembly of processor, memory, I/O and bus bridge cards from multiple vendors into massively parallel systems with throughput far above what is possible today. The SCI standard encompasses two levels of interface, a physical level and a logical level. The physical level specifies electrical, mechanical and thermal characteristics of connectors and cards that meet the standard. The logical level describes the address space, data transfer protocols, cache coherence mechanisms, synchronization primitives and error recovery. In this paper we address logical level issues such as packet formats, packet transmission, transaction handshake, flow control, and cache coherence. 11 refs., 10 figs

  1. The Concept of Business Model Scalability

    DEFF Research Database (Denmark)

    Lund, Morten; Nielsen, Christian

    2018-01-01

    Purpose: The purpose of the article is to define what scalable business models are. Central to the contemporary understanding of business models is the value proposition towards the customer and the hypotheses generated about delivering value to the customer, which become a good foundation for a long-term profitable business. However, the main message of this article is that while providing a good value proposition may help the firm 'get by', the really successful businesses of today are those able to reach the sweet-spot of business model scalability. Design/Methodology/Approach: The article is based on a five-year longitudinal action research project of over 90 companies that participated in the International Center for Innovation project aimed at building 10 global network-based business models. Findings: This article introduces and discusses the term scalability from a company-level perspective.

  2. ENDEAVOUR: A Scalable SDN Architecture for Real-World IXPs

    KAUST Repository

    Antichi, Gianni

    2017-10-25

    Innovation in interdomain routing has remained stagnant for over a decade. Recently, IXPs have emerged as economically-advantageous interconnection points for reducing path latencies and exchanging ever increasing traffic volumes among, possibly, hundreds of networks. Given their far-reaching implications on interdomain routing, IXPs are the ideal place to foster network innovation and extend the benefits of SDN to the interdomain level. In this paper, we present, evaluate, and demonstrate ENDEAVOUR, an SDN platform for IXPs. ENDEAVOUR can be deployed on a multi-hop IXP fabric, supports a large number of use cases, and is highly-scalable while avoiding broadcast storms. Our evaluation with real data from one of the largest IXPs, demonstrates the benefits and scalability of our solution: ENDEAVOUR requires around 70% fewer rules than alternative SDN solutions thanks to our rule partitioning mechanism. In addition, by providing an open source solution, we invite everyone from the community to experiment (and improve) our implementation as well as adapt it to new use cases.

  3. First demonstration of single-mode MCF transport network with crosstalk-aware in-service optical channel control

    DEFF Research Database (Denmark)

    Pulverer, K.; Tanaka, T.; Häbel, U.

    2017-01-01

    We demonstrate the first crosstalk-aware traffic engineering as a use case in a multicore fibre transport network. With the help of a software-defined network controller, modulation format and channel route are adaptively changed using programmable devices with XT monitors....

  4. PKI Scalability Issues

    OpenAIRE

    Slagell, Adam J; Bonilla, Rafael

    2004-01-01

    This report surveys different PKI technologies such as PKIX and SPKI and the issues of PKI that affect scalability. Much focus is spent on certificate revocation methodologies and status verification systems such as CRLs, Delta-CRLs, CRS, Certificate Revocation Trees, Windowed Certificate Revocation, OCSP, SCVP and DVCS.

  5. CERN is first to demonstrate a 10 Gigabit/sec network

    CERN Multimedia

    CERN Press Office. Geneva

    2000-01-01

    CERN* has made another important step forward in high-speed computing and state of the art networking. On 15 September a Gigabyte System Network (GSN) network with a capacity of 10 Gbit per second ramped up successfully at CERN, connecting together computers from different computer companies (Compaq, IBM, SGI and Sun).

  6. Campus-Wide Networks: Three State-of-the-Art Demonstration Projects.

    Science.gov (United States)

    Chapman, Dale T.

    1986-01-01

    During the 1980's, the educational community has been keeping a hopeful eye on several campus-wide networking projects. Included are reports on progress in networks and networking at Carnegie Mellon University, the Massachusetts Institute of Technology (MIT), and Brown University. (JN)

  7. Enabling Highly-Scalable Remote Memory Access Programming with MPI-3 One Sided

    Directory of Open Access Journals (Sweden)

    Robert Gerstenberger

    2014-01-01

    Full Text Available Modern interconnects offer remote direct memory access (RDMA) features. Yet, most applications rely on explicit message passing for communications, despite its unwanted overheads. The MPI-3.0 standard defines a programming interface for exploiting RDMA networks directly; however, its scalability and practicability have to be demonstrated in practice. In this work, we develop scalable bufferless protocols that implement the MPI-3.0 specification. Our protocols support scaling to millions of cores with negligible memory consumption while providing the highest performance and minimal overheads. To arm programmers, we provide a spectrum of performance models for all critical functions and demonstrate the usability of our library and models with several application studies with up to half a million processes. We show that our design is comparable to, or better than, UPC and Fortran Coarrays in terms of latency, bandwidth and message rate. We also demonstrate application performance improvements with comparable programming complexity.
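
    For readers unfamiliar with the MPI-3.0 one-sided interface, the fragment below sketches the kind of RMA communication such protocols build on, written against the mpi4py bindings; the use of mpi4py and the ring-exchange example are illustrative choices, not the library described in the paper.

      # Minimal MPI-3 one-sided (RMA) example with mpi4py: each rank puts a value
      # into its right neighbour's window without any matching receive call.
      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      local = np.zeros(1, dtype='i')              # memory exposed to remote access
      win = MPI.Win.Create(local, comm=comm)      # collective window creation

      target = (rank + 1) % size
      payload = np.array([rank], dtype='i')

      win.Fence()                                 # open an access epoch
      win.Put(payload, target)                    # RDMA-style write, no recv needed
      win.Fence()                                 # close the epoch; data is now visible

      print(f"rank {rank} received {local[0]} from rank {(rank - 1) % size}")
      win.Free()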

  8. Building scalable apps with Redis and Node.js

    CERN Document Server

    Johanan, Joshua

    2014-01-01

    If the phrase scalability sounds alien to you, then this is an ideal book for you. You will not need much Node.js experience as each framework is demonstrated in a way that requires no previous knowledge of the framework. You will be building scalable Node.js applications in no time! Knowledge of JavaScript is required.

  9. Intelligent Network Flow Optimization (INFLO) prototype : Seattle small-scale demonstration plan.

    Science.gov (United States)

    2015-01-01

    This report describes the INFLO Prototype Small-Scale Demonstration to be performed in Seattle Washington. This demonstration is intended to demonstrate that the INFLO Prototype, previously demonstrated in a controlled environment, functions well in ...

  10. Cooperative Scalable Moving Continuous Query Processing

    DEFF Research Database (Denmark)

    Li, Xiaohui; Karras, Panagiotis; Jensen, Christian S.

    2012-01-01

    of the global view and handle the majority of the workload. Meanwhile, moving clients, having basic memory and computation resources, handle small portions of the workload. This model is further enhanced by dynamic region allocation and grid size adjustment mechanisms that reduce the communication and computation cost for both servers and clients. An experimental study demonstrates that our approaches offer better scalability than competitors...

  11. Scalable Resolution Display Walls

    KAUST Repository

    Leigh, Jason; Johnson, Andrew; Renambot, Luc; Peterka, Tom; Jeong, Byungil; Sandin, Daniel J.; Talandis, Jonas; Jagodic, Ratko; Nam, Sungwon; Hur, Hyejung; Sun, Yiwen

    2013-01-01

    This article will describe the progress since 2000 on research and development in 2-D and 3-D scalable resolution display walls that are built from tiling individual lower resolution flat panel displays. The article will describe approaches and trends in display hardware construction, middleware architecture, and user-interaction design. The article will also highlight examples of use cases and the benefits the technology has brought to their respective disciplines. © 1963-2012 IEEE.

  12. Scalable Packet Classification with Hash Tables

    Science.gov (United States)

    Wang, Pi-Chung

    In the last decade, the technique of packet classification has been widely deployed in various network devices, including routers, firewalls and network intrusion detection systems. In this work, we improve the performance of packet classification by using multiple hash tables. The existing hash-based algorithms have superior scalability with respect to the required space; however, their search performance may not be comparable to other algorithms. To improve the search performance, we propose a tuple reordering algorithm to minimize the number of accessed hash tables with the aid of bitmaps. We also use pre-computation to ensure the accuracy of our search procedure. Performance evaluation based on both real and synthetic filter databases shows that our scheme is effective and scalable and the pre-computation cost is moderate.
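
    The baseline that multi-hash-table classification builds on is tuple space search: filters are grouped by their prefix-length tuple, and each group becomes one exact-match hash table. A compact sketch of that baseline follows; the class and field names are illustrative, and the paper's bitmap-aided tuple reordering is not reproduced.

      # Tuple space search baseline: one hash table per (src_len, dst_len) prefix-length pair.
      from collections import defaultdict

      def prefix_key(ip, length):
          """Keep only the first `length` bits of a 32-bit address."""
          return ip >> (32 - length) if length else 0

      class TupleSpaceClassifier:
          def __init__(self):
              self.tables = defaultdict(dict)    # (src_len, dst_len) -> {key: (priority, action)}

          def add_filter(self, src, src_len, dst, dst_len, priority, action):
              key = (prefix_key(src, src_len), prefix_key(dst, dst_len))
              self.tables[(src_len, dst_len)][key] = (priority, action)

          def classify(self, src, dst):
              best = None
              for (sl, dl), table in self.tables.items():          # one probe per tuple
                  hit = table.get((prefix_key(src, sl), prefix_key(dst, dl)))
                  if hit and (best is None or hit[0] < best[0]):
                      best = hit                                   # lower number = higher priority
              return best[1] if best else "default"

      c = TupleSpaceClassifier()
      c.add_filter(0x0A000000, 8, 0xC0A80000, 16, priority=1, action="drop")
      print(c.classify(0x0A010203, 0xC0A80101))                    # -> "drop"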

  13. Architecture Design and Experimental Platform Demonstration of Optical Network based on OpenFlow Protocol

    Science.gov (United States)

    Xing, Fangyuan; Wang, Honghuan; Yin, Hongxi; Li, Ming; Luo, Shenzi; Wu, Chenguang

    2016-02-01

    With the extensive application of cloud computing and data centres, as well as constantly emerging services, bursty big data traffic has brought huge challenges to optical networks. Consequently, the software defined optical network (SDON), which combines optical networks with software defined networking (SDN), has attracted much attention. In this paper, an OpenFlow-enabled optical node employed in optical cross-connects (OXC) and reconfigurable optical add/drop multiplexers (ROADM) is proposed. An open source OpenFlow controller is extended with routing strategies. In addition, an experimental platform for software defined optical networks based on the OpenFlow protocol is designed. The feasibility and availability of the OpenFlow-enabled optical nodes and the extended OpenFlow controller are validated by connectivity tests, protection switching and load balancing experiments on this test platform.

  14. Scalable inference for stochastic block models

    KAUST Repository

    Peng, Chengbin

    2017-12-08

    Community detection in graphs is widely used in social and biological networks, and the stochastic block model is a powerful probabilistic tool for describing graphs with community structures. However, in the era of "big data," traditional inference algorithms for such a model are increasingly limited due to their high time complexity and poor scalability. In this paper, we propose a multi-stage maximum likelihood approach to recover the latent parameters of the stochastic block model, in time linear with respect to the number of edges. We also propose a parallel algorithm based on message passing. Our algorithm can overlap communication and computation, providing speedup without compromising accuracy as the number of processors grows. For example, to process a real-world graph with about 1.3 million nodes and 10 million edges, our algorithm requires about 6 seconds on 64 cores of a contemporary commodity Linux cluster. Experiments demonstrate that the algorithm can produce high quality results on both benchmark and real-world graphs. An example of finding more meaningful communities is illustrated consequently in comparison with a popular modularity maximization algorithm.
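
    To make the setting concrete, the sketch below shows one inexpensive ingredient of likelihood-based SBM inference: a single greedy pass that reassigns each node to the block maximizing its local log-likelihood under fixed block probabilities. It is a didactic simplification, not the paper's multi-stage or message-passing algorithm.

      # One greedy relabelling pass for a Bernoulli stochastic block model.
      # `adj` is an adjacency list {node: set(neighbours)}; `z` maps node -> block.
      import math

      def relabel_pass(adj, z, n_blocks, p_in=0.1, p_out=0.01):
          sizes = [sum(1 for b in z.values() if b == k) for k in range(n_blocks)]
          for u in adj:
              links = [sum(1 for v in adj[u] if z[v] == k) for k in range(n_blocks)]
              best_k, best_ll = z[u], -math.inf
              for k in range(n_blocks):
                  # Log-likelihood of u's edges/non-edges if u were placed in block k.
                  ll = 0.0
                  for j in range(n_blocks):
                      p = p_in if j == k else p_out
                      n_pairs = sizes[j] - (1 if j == z[u] else 0)
                      ll += links[j] * math.log(p) + (n_pairs - links[j]) * math.log(1 - p)
                  if ll > best_ll:
                      best_k, best_ll = k, ll
              sizes[z[u]] -= 1
              sizes[best_k] += 1
              z[u] = best_k
          return z

      # Two triangles with one mislabelled node; node 2 is pulled back into block 0.
      adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: {4, 5}, 4: {3, 5}, 5: {3, 4}}
      print(relabel_pass(adj, {0: 0, 1: 0, 2: 1, 3: 1, 4: 1, 5: 1}, n_blocks=2))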

  15. Demonstration of Single-Mode Multicore Fiber Transport Network with Crosstalk-Aware In-Service Optical Path Control

    DEFF Research Database (Denmark)

    Tanaka, Takafumi; Pulverer, Klaus; Häbel, Ulrich

    2017-01-01

    ...-capacity transmission, because inter-core crosstalk (XT) could be the main limiting factor for MCF transmission. In a real MCF network, the inter-core XT in a particular core is likely to change continuously as the optical paths in the adjacent cores are dynamically assigned to match the dynamic nature of the data... We set up a single-mode MCF transport network testbed and demonstrate an XT-aware traffic engineering scenario. With the help of a software-defined network (SDN) controller, the modulation format and optical path route are adaptively changed based on the monitored XT values by using programmable devices such as a real-time transponder...

  16. Intelligent Network Flow Optimization (INFLO) prototype : Seattle small-scale demonstration report.

    Science.gov (United States)

    2015-05-01

    This report describes the performance and results of the INFLO Prototype Small-Scale Demonstration. The purpose of : the Small-Scale Demonstration was to deploy the INFLO Prototype System to demonstrate its functionality and : performance in an opera...

  17. Beyond ectomycorrhizal bipartite networks: projected networks demonstrate contrasted patterns between early- and late-successional plants in Corsica.

    Directory of Open Access Journals (Sweden)

    Adrien Taudière

    2015-10-01

    Full Text Available The ectomycorrhizal (ECM) symbiosis connects mutualistic plants and fungal species into bipartite networks. While links between one focal ECM plant and its fungal symbionts have been widely documented, systemic views of ECM networks are lacking, in particular concerning the ability of fungal species to mediate indirect ecological interactions between ECM plant species (projected-ECM networks). We assembled a large dataset of plant-fungi associations at the species level and at the scale of Corsica using molecular data and unambiguously host-assigned records to: (i) examine the correlation between the number of fungal symbionts of a plant species and the average specialization of these fungal species, (ii) explore the structure of the plant-plant projected network and (iii) compare plant association patterns in regard to their position along the ecological succession. Our analysis reveals no trade-off between specialization of plants and specialization of their partners and a saturation of the plant projected network. Moreover, there is a significantly lower-than-expected sharing of partners between early- and late-successional plant species, with fewer fungal partners for early-successional ones and similar average specialization of symbionts of early- and late-successional plants. Our work paves the way for ecological readings of Mediterranean landscapes that include the astonishing diversity of below-ground interactions.

  18. Experimental Demonstration of Mixed Formats and Bit Rates Signal Allocation for Spectrum-flexible Optical Networking

    DEFF Research Database (Denmark)

    Borkowski, Robert; Karinou, Fotini; Angelou, Marianna

    2012-01-01

    We report on an extensive experimental study of adaptive allocation of 16-QAM and QPSK signals inside a spectrum-flexible heterogeneous superchannel. Physical-layer performance parameters are extracted for use in the resource allocation mechanisms of future flexible networks.

  19. Experimental Demonstration of a Cognitive Optical Network for Reduction of Restoration Time

    DEFF Research Database (Denmark)

    Kachris, Christoforos; Klonidis, Dimitris; Francescon, Antonio

    2014-01-01

    This paper presents the implementation and performance evaluation of a cognitive heterogeneous optical network testbed. The testbed integrates the CMP, the data plane and the cognitive system and reduces by 48% the link restoration time.

  20. Scalable Performance Measurement and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Gamblin, Todd [Univ. of North Carolina, Chapel Hill, NC (United States)

    2009-01-01

    Concurrency levels in large-scale, distributed-memory supercomputers are rising exponentially. Modern machines may contain 100,000 or more microprocessor cores, and the largest of these, IBM's Blue Gene/L, contains over 200,000 cores. Future systems are expected to support millions of concurrent tasks. In this dissertation, we focus on efficient techniques for measuring and analyzing the performance of applications running on very large parallel machines. Tuning the performance of large-scale applications can be a subtle and time-consuming task because application developers must measure and interpret data from many independent processes. While the volume of the raw data scales linearly with the number of tasks in the running system, the number of tasks is growing exponentially, and data for even small systems quickly becomes unmanageable. Transporting performance data from so many processes over a network can perturb application performance and make measurements inaccurate, and storing such data would require a prohibitive amount of space. Moreover, even if it were stored, analyzing the data would be extremely time-consuming. In this dissertation, we present novel methods for reducing performance data volume. The first draws on multi-scale wavelet techniques from signal processing to compress systemwide, time-varying load-balance data. The second uses statistical sampling to select a small subset of running processes to generate low-volume traces. A third approach combines sampling and wavelet compression to stratify performance data adaptively at run-time and to reduce further the cost of sampled tracing. We have integrated these approaches into Libra, a toolset for scalable load-balance analysis. We present Libra and show how it can be used to analyze data from large scientific applications scalably.
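
    As a hedged illustration of the first idea (wavelet compression of time-varying load data), the fragment below thresholds the wavelet coefficients of a per-process load trace using the PyWavelets package; the library, wavelet family and retention ratio are assumptions for illustration, not details of the Libra toolset.

      # Compress a time-varying load trace by thresholding its wavelet coefficients.
      # PyWavelets (pywt) and the 3% retention ratio are illustrative choices.
      import numpy as np
      import pywt

      def compress_trace(load, wavelet="db4", keep=0.03):
          coeffs = pywt.wavedec(load, wavelet)                  # multi-level decomposition
          flat = np.concatenate([np.abs(c) for c in coeffs])
          cutoff = np.quantile(flat, 1.0 - keep)                # keep the largest `keep` fraction
          return [np.where(np.abs(c) >= cutoff, c, 0.0) for c in coeffs]

      def decompress_trace(coeffs, wavelet="db4"):
          return pywt.waverec(coeffs, wavelet)

      rng = np.random.default_rng(0)
      trace = np.sin(np.linspace(0, 20, 1024)) + 0.05 * rng.standard_normal(1024)
      approx = decompress_trace(compress_trace(trace))[:len(trace)]
      print("max reconstruction error:", float(np.max(np.abs(approx - trace))))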

  1. A system to build distributed multivariate models and manage disparate data sharing policies: implementation in the scalable national network for effectiveness research.

    Science.gov (United States)

    Meeker, Daniella; Jiang, Xiaoqian; Matheny, Michael E; Farcas, Claudiu; D'Arcy, Michel; Pearlman, Laura; Nookala, Lavanya; Day, Michele E; Kim, Katherine K; Kim, Hyeoneui; Boxwala, Aziz; El-Kareh, Robert; Kuo, Grace M; Resnic, Frederic S; Kesselman, Carl; Ohno-Machado, Lucila

    2015-11-01

    Centralized and federated models for sharing data in research networks currently exist. To build multivariate data analysis for centralized networks, transfer of patient-level data to a central computation resource is necessary. The authors implemented distributed multivariate models for federated networks in which patient-level data is kept at each site and data exchange policies are managed in a study-centric manner. The objective was to implement infrastructure that supports the functionality of some existing research networks (e.g., cohort discovery, workflow management, and estimation of multivariate analytic models on centralized data) while adding additional important new features, such as algorithms for distributed iterative multivariate models, a graphical interface for multivariate model specification, synchronous and asynchronous response to network queries, investigator-initiated studies, and study-based control of staff, protocols, and data sharing policies. Based on the requirements gathered from statisticians, administrators, and investigators from multiple institutions, the authors developed infrastructure and tools to support multisite comparative effectiveness studies using web services for multivariate statistical estimation in the SCANNER federated network. The authors implemented massively parallel (map-reduce) computation methods and a new policy management system to enable each study initiated by network participants to define the ways in which data may be processed, managed, queried, and shared. The authors illustrated the use of these systems among institutions with highly different policies and operating under different state laws. Federated research networks need not limit distributed query functionality to count queries, cohort discovery, or independently estimated analytic models. Multivariate analyses can be efficiently and securely conducted without patient-level data transport, allowing institutions with strict local data storage
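
    A toy sketch of the underlying pattern (iterative multivariate estimation in which only aggregate statistics, never patient-level rows, leave each site) is shown below for a logistic regression fitted by distributed gradient descent; the site interface and data are illustrative assumptions, not SCANNER's actual services.

      # Distributed logistic regression: each site returns only its gradient
      # (an aggregate), so patient-level data never leaves the site.
      import numpy as np

      def site_gradient(X, y, beta):
          """Computed locally at one site; X and y never leave the site."""
          p = 1.0 / (1.0 + np.exp(-X @ beta))
          return X.T @ (p - y)                       # aggregate statistic only

      def coordinator_fit(sites, n_features, lr=0.5, iters=300):
          """The coordinator exchanges only gradients (aggregates) with the sites."""
          beta = np.zeros(n_features)
          n_total = sum(len(y) for _, y in sites)
          for _ in range(iters):
              grad = sum(site_gradient(X, y, beta) for X, y in sites)
              beta -= lr * grad / n_total
          return beta

      rng = np.random.default_rng(1)
      true_beta = np.array([1.5, -2.0, 0.5])

      def make_site(n):
          X = rng.standard_normal((n, 3))
          y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)
          return X, y

      print(coordinator_fit([make_site(500), make_site(800), make_site(300)], 3))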

  2. Scalable optical quantum computer

    Energy Technology Data Exchange (ETDEWEB)

    Manykin, E A; Mel' nichenko, E V [Institute for Superconductivity and Solid-State Physics, Russian Research Centre ' Kurchatov Institute' , Moscow (Russian Federation)

    2014-12-31

    A way of designing a scalable optical quantum computer based on the photon echo effect is proposed. Individual rare earth ions Pr{sup 3+}, regularly located in the lattice of the orthosilicate (Y{sub 2}SiO{sub 5}) crystal, are suggested to be used as optical qubits. Operations with qubits are performed using coherent and incoherent laser pulses. The operation protocol includes both the method of measurement-based quantum computations and the technique of optical computations. Modern hybrid photon echo protocols, which provide a sufficient quantum efficiency when reading recorded states, are considered as most promising for quantum computations and communications. (quantum computer)

  3. Scalable optical quantum computer

    International Nuclear Information System (INIS)

    Manykin, E A; Mel'nichenko, E V

    2014-01-01

    A way of designing a scalable optical quantum computer based on the photon echo effect is proposed. Individual rare earth ions Pr 3+ , regularly located in the lattice of the orthosilicate (Y 2 SiO 5 ) crystal, are suggested to be used as optical qubits. Operations with qubits are performed using coherent and incoherent laser pulses. The operation protocol includes both the method of measurement-based quantum computations and the technique of optical computations. Modern hybrid photon echo protocols, which provide a sufficient quantum efficiency when reading recorded states, are considered as most promising for quantum computations and communications. (quantum computer)

  4. Demonstration of fully functional MIMO wireless LAN transmission over GI-MMF for in-building networks

    NARCIS (Netherlands)

    Zou, S.; Chen, H.; Huijskens, F.M.; Cao, Z.; Tangdiongga, E.; Koonen, A.M.J.

    2013-01-01

    We propose a low-cost optically-fed architecture capable of increasing the capacity and overall coverage of IEEE 802.11n MIMO WLAN system for in-building networks. A fully functional transmission over GI-MMF was demonstrated employing 2×3 MIMO configuration.

  5. Demonstration of Self-Training Autonomous Neural Networks in Space Vehicle Docking Simulations

    Science.gov (United States)

    Patrick, M. Clinton; Thaler, Stephen L.; Stevenson-Chavis, Katherine

    2006-01-01

    Neural Networks have been under examination for decades in many areas of research, with varying degrees of success and acceptance. Key goals of computer learning, rapid problem solution, and automatic adaptation have been elusive at best. This paper summarizes efforts at NASA's Marshall Space Flight Center harnessing such technology to autonomous space vehicle docking for the purpose of evaluating applicability to future missions.

  6. Demonstration of an application-aware resilience mechanism for dynamic heterogeneous networks

    NARCIS (Netherlands)

    Okonkwo, C.M.; Martin, R.; Ferrera, M.P.; Guild, K.M.; O'Mahony, M.; Everett, J.; Marciniak, M.

    2007-01-01

    Business uptake of IP-centric services has been strong and necessitates reliable, highly-available, high-quality Internet access. Real time services such as mission-critical broadcast video, video conferencing and voice over IP (VoIP) have low tolerance for short-term network outages. However,

  7. Scalable parallel prefix solvers for discrete ordinates transport

    International Nuclear Information System (INIS)

    Pautz, S.; Pandya, T.; Adams, M.

    2009-01-01

    The well-known 'sweep' algorithm for inverting the streaming-plus-collision term in first-order deterministic radiation transport calculations has some desirable numerical properties. However, it suffers from parallel scaling issues caused by a lack of concurrency. The maximum degree of concurrency, and thus the maximum parallelism, grows more slowly than the problem size for sweeps-based solvers. We investigate a new class of parallel algorithms that involves recasting the streaming-plus-collision problem in prefix form and solving via cyclic reduction. This method, although computationally more expensive at low levels of parallelism than the sweep algorithm, offers better theoretical scalability properties. Previous work has demonstrated this approach for one-dimensional calculations; we show how to extend it to multidimensional calculations. Notably, for multiple dimensions it appears that this approach is limited to long-characteristics discretizations; other discretizations cannot be cast in prefix form. We implement two variants of the algorithm within the radlib/SCEPTRE transport code library at Sandia National Laboratories and show results on two different massively parallel systems. Both the 'forward' and 'symmetric' solvers behave similarly, scaling well to larger degrees of parallelism than sweeps-based solvers. We do observe some issues at the highest levels of parallelism (relative to the system size) and discuss possible causes. We conclude that this approach shows good potential for future parallel systems, but the parallel scalability will depend heavily on the architecture of the communication networks of these systems. (authors)
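
    To see why the sweep admits a prefix formulation, note that each 1-D cell update has the affine form psi_i = a_i*psi_{i-1} + b_i, and affine maps compose associatively, so the whole recurrence can be evaluated as a scan (for example by cyclic reduction). The serial sketch below illustrates that reformulation only; it is not the radlib/SCEPTRE implementation.

      # The 1-D sweep recurrence psi[i] = a[i]*psi[i-1] + b[i] rewritten as a prefix
      # (scan) over affine maps (a, b); composition is associative, so the scan can
      # be evaluated in parallel instead of sequentially.
      def compose(f, g):
          """Affine map composition: (f o g)(x) = f(g(x)) for f=(a1,b1), g=(a2,b2)."""
          a1, b1 = f
          a2, b2 = g
          return (a1 * a2, a1 * b2 + b1)

      def prefix_sweep(a, b, psi0):
          """Inclusive scan of affine maps, then apply each prefix to the inflow psi0."""
          prefixes, acc = [], (1.0, 0.0)           # identity map
          for fi in zip(a, b):
              acc = compose(fi, acc)               # acc = f_i o f_{i-1} o ... o f_1
              prefixes.append(acc)
          return [ai * psi0 + bi for ai, bi in prefixes]

      def serial_sweep(a, b, psi0):
          out, psi = [], psi0
          for ai, bi in zip(a, b):
              psi = ai * psi + bi
              out.append(psi)
          return out

      a, b = [0.9, 0.8, 0.7, 0.6], [0.1, 0.2, 0.3, 0.4]
      assert all(abs(x - y) < 1e-12 for x, y in zip(prefix_sweep(a, b, 1.0), serial_sweep(a, b, 1.0)))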

  8. Design Document for the Technology Demonstration of the Joint Network Defence and Management System (JNDMS) Project

    Science.gov (United States)

    2012-02-06

    The design document for the JNDMS technology demonstration describes data interfaces including a custom ASCII event interface to the JSS client and IT infrastructure performance and vulnerability-assessment feeds. Monitoring of infrastructure servers relies on the Concord product line (eHealth and Spectrum), which provides both real-time and historical data, alongside Unicenter Network and Systems Management (NSM), Unicenter Asset Management, Centennial Discovery and related tools.

  9. Blind Cooperative Routing for Scalable and Energy-Efficient Internet of Things

    KAUST Repository

    Bader, Ahmed

    2016-02-26

    Multihop networking is promoted in this paper for energy-efficient and highly scalable Internet of Things (IoT). Recognizing concerns related to the scalability of classical multihop routing and medium access techniques, the use of blind cooperation in conjunction with multihop communications is advocated. Blind cooperation, however, is shown to be inefficient unless power control is applied. Efficiency here is measured in terms of the transport rate normalized to energy consumption. To that end, an uncoordinated power control mechanism is proposed whereby each device in a blind cooperative cluster randomly adjusts its transmit power level. An upper bound is derived for the mean transmit power that must be observed at each device. Finally, the uncoordinated power control mechanism is demonstrated to consistently outperform the simple point-to-point routing case. © 2015 IEEE.
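
    A rough numerical sketch of uncoordinated power control is given below: each device in a blind cooperative cluster independently draws its transmit power so that the mean respects a cap. The exponential distribution and the numbers are illustrative assumptions, not the bound derived in the paper.

      # Each relay in a blind cooperative cluster draws its own transmit power at
      # random, with no coordination, such that E[P] stays under a prescribed cap.
      import random

      def draw_powers(n_devices, mean_power_cap, rng=random.Random(42)):
          """Independent draws with E[P] = mean_power_cap (exponential example)."""
          return [rng.expovariate(1.0 / mean_power_cap) for _ in range(n_devices)]

      def received_power(powers, path_losses):
          """Aggregate power seen at the next hop from the cooperating cluster."""
          return sum(p * g for p, g in zip(powers, path_losses))

      cluster = draw_powers(n_devices=20, mean_power_cap=0.1)      # 100 mW average cap
      gains = [0.01] * 20                                          # flat path loss for brevity
      print(f"cluster mean power: {sum(cluster)/len(cluster):.3f} W,"
            f" aggregate at receiver: {received_power(cluster, gains)*1e3:.2f} mW")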

  10. Scalable photoreactor for hydrogen production

    KAUST Repository

    Takanabe, Kazuhiro; Shinagawa, Tatsuya

    2017-01-01

    Provided herein are scalable photoreactors that can include a membrane-free water- splitting electrolyzer and systems that can include a plurality of membrane-free water- splitting electrolyzers. Also provided herein are methods of using the scalable photoreactors provided herein.

  11. Scalable photoreactor for hydrogen production

    KAUST Repository

    Takanabe, Kazuhiro

    2017-04-06

    Provided herein are scalable photoreactors that can include a membrane-free water- splitting electrolyzer and systems that can include a plurality of membrane-free water- splitting electrolyzers. Also provided herein are methods of using the scalable photoreactors provided herein.

  12. Final Project Report: DOE Award FG02-04ER25606 Overlay Transit Networking for Scalable, High Performance Data Communication across Heterogeneous Infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Beck, Micah; Moore, Terry

    2007-08-31

    As the flood of data associated with leading edge computational science continues to escalate, the challenge of supporting the distributed collaborations that are now characteristic of it becomes increasingly daunting. The chief obstacles to progress on this front lie less in the synchronous elements of collaboration, which have been reasonably well addressed by new global high performance networks, than in the asynchronous elements, where appropriate shared storage infrastructure seems to be lacking. The recent report from the Department of Energy on the emerging 'data management challenge' captures the multidimensional nature of this problem succinctly: Data inevitably needs to be buffered, for periods ranging from seconds to weeks, in order to be controlled as it moves through the distributed and collaborative research process. To meet the diverse and changing set of application needs that different research communities have, large amounts of non-archival storage are required for transitory buffering, and it needs to be widely dispersed, easily available, and configured to maximize flexibility of use. In today's grid fabric, however, massive storage is mostly concentrated in data centers, available only to those with user accounts and membership in the appropriate virtual organizations, allocated as if its usage were non-transitory, and encapsulated behind legacy interfaces that inhibit the flexibility of use and scheduling. This situation severely restricts the ability of application communities to access and schedule usable storage where and when they need to in order to make their workflow more productive. (p.69f) One possible strategy to deal with this problem lies in creating a storage infrastructure that can be universally shared because it provides only the most generic of asynchronous services. Different user communities then define higher level services as necessary to meet their needs. One model of such a service is a Storage Network

  13. ALADDIN - enhancing applicability and scalability

    International Nuclear Information System (INIS)

    Roverso, Davide

    2001-02-01

    The ALADDIN project aims at the study and development of flexible, accurate, and reliable techniques and principles for computerised event classification and fault diagnosis for complex machinery and industrial processes. The main focus of the project is on advanced numerical techniques, such as wavelets, and empirical modelling with neural networks. This document reports on recent important advancements, which significantly widen the practical applicability of the developed principles, both in terms of flexibility of use and in terms of scalability to large problem domains. In particular, two novel techniques are described here. The first, which we call Wavelet On-Line Pre-processing (WOLP), is aimed at extracting, on-line, relevant dynamic features from the process data streams. This technique gives the system greater flexibility in detecting and processing transients at a range of different time scales. The second technique, which we call Autonomous Recursive Task Decomposition (ARTD), is aimed at tackling the problem of constructing a classifier able to discriminate among a large number of different event/fault classes, which is often the case when the application domain is a complex industrial process. ARTD also allows for incremental application development (i.e. the incremental addition of new classes to an existing classifier, without the need to retrain the entire system) and for simplified application maintenance. The description of these novel techniques is complemented by reports of quantitative experiments that show in practice the extent of these improvements. (Author)

  14. Experimental Demonstration of Coexistence of Microwave Wireless Communication and Power Transfer Technologies for Battery-Free Sensor Network Systems

    Directory of Open Access Journals (Sweden)

    Satoshi Yoshida

    2013-01-01

    Full Text Available This paper describes experimental demonstrations of a wireless power transfer system equipped with a microwave band communication function. Battery charging using the system is described to evaluate the possibility of the coexistence of both wireless power transfer and communication functions in the C-band. A battery-free wireless sensor network system is demonstrated, and a high-power rectifier for the system is also designed and evaluated in the S-band. We have confirmed that microwave wireless power transfer can coexist with communication function.

  15. Remote Earth Terminals in the Health, Education, Telecommunications Network. Satellite Technology Demonstration, Technical Report No. 0423.

    Science.gov (United States)

    Braunstein, Jean; And Others

    The major purpose of the Health, Education, Telecommunications experiment was to demonstrate the feasibility of distributing video materials to a large number of low-cost earth terminals located in rural areas. The receivers are of two types: one-way video receivers for the reception of video programs, and two-way voice/data terminals which permit…

  16. A Demonstration of Lusail

    KAUST Repository

    Mansour, Essam; Abdelaziz, Ibrahim; Ouzzani, Mourad; Aboulnaga, Ashraf; Kalnis, Panos

    2017-01-01

    There has been a proliferation of datasets available as interlinked RDF data accessible through SPARQL endpoints. This has led to the emergence of various applications in life science, distributed social networks, and Internet of Things that need to integrate data from multiple endpoints. We will demonstrate Lusail; a system that supports the need of emerging applications to access tens to hundreds of geo-distributed datasets. Lusail is a geo-distributed graph engine for querying linked RDF data. Lusail delivers outstanding performance using (i) a novel locality-aware query decomposition technique that minimizes the intermediate data to be accessed by the subqueries, and (ii) selectivity awareness and parallel query execution to reduce network latency and to increase parallelism. During the demo, the audience will be able to query actually deployed RDF endpoints as well as large synthetic and real benchmarks that we have deployed in the public cloud. The demo will also show that Lusail outperforms state-of-the-art systems by orders of magnitude in terms of scalability and response time.

  17. A Demonstration of Lusail

    KAUST Repository

    Mansour, Essam

    2017-05-10

    There has been a proliferation of datasets available as interlinked RDF data accessible through SPARQL endpoints. This has led to the emergence of various applications in life science, distributed social networks, and Internet of Things that need to integrate data from multiple endpoints. We will demonstrate Lusail; a system that supports the need of emerging applications to access tens to hundreds of geo-distributed datasets. Lusail is a geo-distributed graph engine for querying linked RDF data. Lusail delivers outstanding performance using (i) a novel locality-aware query decomposition technique that minimizes the intermediate data to be accessed by the subqueries, and (ii) selectivity awareness and parallel query execution to reduce network latency and to increase parallelism. During the demo, the audience will be able to query actually deployed RDF endpoints as well as large synthetic and real benchmarks that we have deployed in the public cloud. The demo will also show that Lusail outperforms state-of-the-art systems by orders of magnitude in terms of scalability and response time.

  18. Modular Universal Scalable Ion-trap Quantum Computer

    Science.gov (United States)

    2016-06-02

    The main goal of the original MUSIQC proposal was to construct and demonstrate a modular and universally expandable ion-trap quantum computer. Final report for the period 1 August 2010 to 31 January 2016; distribution unlimited. Keywords: ion trap quantum computation, scalable modular architectures.

  19. Scalable, full-colour and controllable chromotropic plasmonic printing

    OpenAIRE

    Xue, Jiancai; Zhou, Zhang-Kai; Wei, Zhiqiang; Su, Rongbin; Lai, Juan; Li, Juntao; Li, Chao; Zhang, Tengwei; Wang, Xue-Hua

    2015-01-01

    Plasmonic colour printing has drawn wide attention as a promising candidate for the next-generation colour-printing technology. However, an efficient approach to realize full colour and scalable fabrication is still lacking, which prevents plasmonic colour printing from practical applications. Here we present a scalable and full-colour plasmonic printing approach by combining conjugate twin-phase modulation with a plasmonic broadband absorber. More importantly, our approach also demonstrates ...

  20. Scalable Nonlinear Compact Schemes

    Energy Technology Data Exchange (ETDEWEB)

    Ghosh, Debojyoti [Argonne National Lab. (ANL), Argonne, IL (United States); Constantinescu, Emil M. [Univ. of Chicago, IL (United States); Brown, Jed [Univ. of Colorado, Boulder, CO (United States)

    2014-04-01

    In this work, we focus on compact schemes resulting in tridiagonal systems of equations, specifically the fifth-order CRWENO scheme. We propose a scalable implementation of the nonlinear compact schemes by implementing a parallel tridiagonal solver based on the partitioning/substructuring approach. We use an iterative solver for the reduced system of equations; however, we solve this system to machine zero accuracy to ensure that no parallelization errors are introduced. It is possible to achieve machine-zero convergence with few iterations because of the diagonal dominance of the system. The number of iterations is specified a priori instead of a norm-based exit criterion, and collective communications are avoided. The overall algorithm thus involves only point-to-point communication between neighboring processors. Our implementation of the tridiagonal solver differs from and avoids the drawbacks of past efforts in the following ways: it introduces no parallelization-related approximations (multiprocessor solutions are exactly identical to uniprocessor ones), it involves minimal communication, the mathematical complexity is similar to that of the Thomas algorithm on a single processor, and it does not require any communication and computation scheduling.
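
    For context, the local building block on each processor is a tridiagonal solve; a plain serial Thomas algorithm is sketched below, while the partitioning/substructuring of the reduced interface system described above is not reproduced.

      # Serial Thomas algorithm for a tridiagonal system; in the parallel scheme each
      # processor applies a variant of this locally and then iterates on a small
      # reduced system coupling the subdomain interfaces (not shown here).
      def thomas(lower, diag, upper, rhs):
          """Solve A x = rhs where A is tridiagonal (lower/diag/upper diagonals)."""
          n = len(diag)
          c, d = [0.0] * n, [0.0] * n
          c[0], d[0] = upper[0] / diag[0], rhs[0] / diag[0]
          for i in range(1, n):
              denom = diag[i] - lower[i] * c[i - 1]
              c[i] = upper[i] / denom if i < n - 1 else 0.0
              d[i] = (rhs[i] - lower[i] * d[i - 1]) / denom
          x = [0.0] * n
          x[-1] = d[-1]
          for i in range(n - 2, -1, -1):
              x[i] = d[i] - c[i] * x[i + 1]
          return x

      # Diagonally dominant example: -x[i-1] + 4*x[i] - x[i+1] = rhs[i]; solution is all ones.
      print(thomas([0, -1, -1, -1], [4, 4, 4, 4], [-1, -1, -1, 0], [3, 2, 2, 3]))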

  1. Research, development and demonstration. Issue paper - working group 3; Denmark. Smart Grid Network; Forskning, udvikling og demonstration. Issue paper, arbejdsgruppe 3

    Energy Technology Data Exchange (ETDEWEB)

    Balasiu, A [Siemens A/S, Ballerup (Denmark); Troi, A [Danmarks Tekniske Univ. Risoe Nationallaboratoriet for Baeredygtig Energi, Roskilde (Denmark); Andersen, Casper [DI Energibranchen, Copenhagen (Denmark); and others

    2011-07-01

    The Smart Grid Network was established in 2010 by the Danish climate and energy minister tasked with developing recommendations for future actions and initiatives that make it possible to handle up to 50% electricity from wind energy in the power system in 2020. The task of working group 3 was defined as: - An overview of the Danish research and development of smart grids and related areas; - Conducting an analysis of the research and development needs required for the introduction of a smart grid in Denmark. Based on this analysis, provide suggestions for new large research and development projects; - Provide recommendations on how the activities are best carried out taking into account innovation, economic growth and jobs. In the analysis it is explained that Denmark so far has a strong position in several elements of RD and D activities. This position will soon be threatened as several European countries have launched ambitious initiatives to strengthen the national position. The working group recommends that Denmark gives priority to Smart Grids as a national action in order to solve the challenge of technically and economically efficient integration of renewable energy. Smart Grid is a catalyst that strengthens a new green growth industry (cleantech) in Denmark. Research and development has an important role to play in this development. A common vision and roadmap must be established for research institutions, energy companies and industries related to research, development and demonstration of Smart Grid, which can maintain and expand Denmark's global leadership position. As part of this, there is a need to strengthen and market research infrastructures, which can turn Denmark into a global hub for smart grid development. There is a current need to strengthen the advanced technical and scientific research in the complexities of the power system, research on market design, user behavior and smart grid interoperability. (LN)

  2. Research, development and demonstration. Issue paper - working group 3; Denmark. Smart Grid Network; Forskning, udvikling og demonstration. Issue paper, arbejdsgruppe 3

    Energy Technology Data Exchange (ETDEWEB)

    Balasiu, A. (Siemens A/S, Ballerup (Denmark)); Troi, A. (Danmarks Tekniske Univ.. Risoe Nationallaboratoriet for Baeredygtig Energi, Roskilde (Denmark)); Andersen, Casper (DI Energibranchen, Copenhagen (Denmark)) (and others)

    2011-07-01

    The Smart Grid Network was established in 2010 by the Danish climate and energy minister tasked with developing recommendations for future actions and initiatives that make it possible to handle up to 50% electricity from wind energy in the power system in 2020. The task of working group 3 was defined as: - An overview of the Danish research and development of smart grids and related areas; - Conducting an analysis of the research and development needs required for the introduction of a smart grid in Denmark. Based on this analysis, provide suggestions for new large research and development projects; - Provide recommendations on how the activities are best carried out taking into account innovation, economic growth and jobs. In the analysis it is explained that Denmark so far has a strong position in several elements of RD and D activities. This position will soon be threatened as several European countries have launched ambitious initiatives to strengthen the national position. The working group recommends that Denmark gives priority to Smart Grids as a national action in order to solve the challenge of technically and economically efficient integration of renewable energy. Smart Grid is a catalyst that strengthens a new green growth industry (cleantech) in Denmark. Research and development has an important role to play in this development. A common vision and roadmap must be established for research institutions, energy companies and industries related to research, development and demonstration of Smart Grid, which can maintain and expand Denmark's global leadership position. As part of this, there is a need to strengthen and market research infrastructures, which can turn Denmark into a global hub for smart grid development. There is a current need to strengthen the advanced technical and scientific research in the complexities of the power system, research on market design, user behavior and smart grid interoperability. (LN)

  3. Weaving the native web: using social network analysis to demonstrate the value of a minority career development program.

    Science.gov (United States)

    Buchwald, Dedra; Dick, Rhonda Wiegman

    2011-06-01

    American Indian and Alaska Native scientists are consistently among the most underrepresented minority groups in health research. The authors used social network analysis (SNA) to evaluate the Native Investigator Development Program (NIDP), a career development program for junior Native researchers established as a collaboration between the University of Washington and the University of Colorado Denver. The study focused on 29 trainees and mentors who participated in the NIDP. Data were collected on manuscripts and grant proposals produced by participants from 1998 to 2007. Information on authorship of manuscripts and collaborations on grant applications was used to conduct social network analyses with three measures of centrality and one measure of network reach. Both visual and quantitative analyses were performed. Participants in the NIDP collaborated on 106 manuscripts and 83 grant applications. Although three highly connected individuals, with critical and central roles in the program, accounted for much of the richness of the network, both current core faculty and "graduates" of the program were heavily involved in collaborations on manuscripts and grants. This study's innovative application of SNA demonstrates that collaborative relationships can be an important outcome of career development programs for minority investigators and that an analysis of these relationships can provide a more complete assessment of the value of such programs.

  4. Scalable fabrication of perovskite solar cells

    Energy Technology Data Exchange (ETDEWEB)

    Li, Zhen; Klein, Talysa R.; Kim, Dong Hoe; Yang, Mengjin; Berry, Joseph J.; van Hest, Maikel F. A. M.; Zhu, Kai

    2018-03-27

    Perovskite materials use earth-abundant elements, have low formation energies for deposition and are compatible with roll-to-roll and other high-volume manufacturing techniques. These features make perovskite solar cells (PSCs) suitable for terawatt-scale energy production with low production costs and low capital expenditure. Demonstrations of performance comparable to that of other thin-film photovoltaics (PVs) and improvements in laboratory-scale cell stability have recently made scale up of this PV technology an intense area of research focus. Here, we review recent progress and challenges in scaling up PSCs and related efforts to enable the terawatt-scale manufacturing and deployment of this PV technology. We discuss common device and module architectures, scalable deposition methods and progress in the scalable deposition of perovskite and charge-transport layers. We also provide an overview of device and module stability, module-level characterization techniques and techno-economic analyses of perovskite PV modules.

  5. Residential Fuel Cell Demonstration Handbook: National Rural Electric Cooperative Association Cooperative Research Network

    Energy Technology Data Exchange (ETDEWEB)

    Torrero, E.; McClelland, R.

    2002-07-01

    This report is a guide for rural electric cooperatives engaged in field testing of equipment and in assessing related application and market issues. Dispersed generation and its companion fuel cell technology have attracted increased interest by rural electric cooperatives and their customers. In addition, fuel cells are a particularly interesting source because their power quality, efficiency, and environmental benefits have now been coupled with major manufacturer development efforts. The overall effort is structured to measure the performance, durability, reliability, and maintainability of these systems, to identify promising types of applications and modes of operation, and to assess the related prospect for future use. In addition, technical successes and shortcomings will be identified by demonstration participants and manufacturers using real-world experience garnered under typical operating environments.

  6. Temporal sequence learning in winner-take-all networks of spiking neurons demonstrated in a brain-based device.

    Science.gov (United States)

    McKinstry, Jeffrey L; Edelman, Gerald M

    2013-01-01

    Animal behavior often involves a temporally ordered sequence of actions learned from experience. Here we describe simulations of interconnected networks of spiking neurons that learn to generate patterns of activity in correct temporal order. The simulation consists of large-scale networks of thousands of excitatory and inhibitory neurons that exhibit short-term synaptic plasticity and spike-timing dependent synaptic plasticity. The neural architecture within each area is arranged to evoke winner-take-all (WTA) patterns of neural activity that persist for tens of milliseconds. In order to generate and switch between consecutive firing patterns in correct temporal order, a reentrant exchange of signals between these areas was necessary. To demonstrate the capacity of this arrangement, we used the simulation to train a brain-based device responding to visual input by autonomously generating temporal sequences of motor actions.

  7. The smart grid research network: Road map for Smart Grid research, development and demonstration up to 2020

    Energy Technology Data Exchange (ETDEWEB)

    Troi, A. [Technical Univ. of Denmark. DTU Electrical Engineering, DTU Risoe Campus, Roskilde (Denmark); Noerregaard Joergensen, B. [Syddansk Univ. (SDU), Odense (Denmark); Mahler Larsen, E. [Technical Univ. of Denmark. DTU Electrical Engineering, Kgs. Lyngby (Denmark)] [and others

    2013-01-15

    This road map is a result of part-recommendation no. 25 in 'MAIN REPORT - The Smart Grid Network's recommendations', written by the Smart Grid Network for the Danish Ministry of Climate, Energy and Building in October 2011. This part-recommendation states: ''Part-recommendation 25 - A road map for Smart Grid research, development and demonstration It is recommended that the electricity sector invite the Ministry to participate in the creation of a road map to ensure that solutions are implemented and coordinated with related policy areas. The sector should also establish a fast-acting working group with representatives from universities, distribution companies and the electric industry, in order to produce a mutual, binding schedule for the RDD of the Smart Grid in Denmark. Time prioritisation of part-recommendation: 2011-2012 Responsibility for implementation of part-recommendation: Universities, along with relevant electric-industry actors, should establish a working group for the completion of a consolidated road map by the end of 2012.'' In its work on this report, the Smart Grid Research Network has focused particularly on part-recommendations 26, 27 and 28 in 'MAIN REPORT - The Smart Grid Network's recommendations', which relate to strengthening and marketing the research infrastructure that will position Denmark as the global hub for Smart Grid development; strengthening basic research into the complex relationships in electric systems with large quantities of independent parties; and improved understanding of consumer behaviour and social economics. Naturally the work has spread to related areas along the way. The work has been conducted by the Smart Grid Research Network. (Author)

  8. Proof-of-Concept Demonstrations of a Flight Adjustment Logging and Communication Network

    Science.gov (United States)

    Underwood, Matthew C.; Merlino, Daniel K.; Carboneau, Lindsey M.; Wilson, C. Logan; Wilder, Andrew J.

    2016-01-01

    The National Airspace System is a highly complex system of systems within which a number of participants with widely varying business and operating models exist. From the airspace user's perspective, a means by which to operate flights in a more flexible and efficient manner is highly desired to meet their business objectives. From the air navigation service provider's viewpoint, there is a need for increasing the capacity of the airspace, while maintaining or increasing the levels of efficiency and safety that currently exist in order to meet the charter under which they operate. Enhancing the communication between airspace operators and users is essential in order to meet these demands. In the spring of 2015, a prototype system that implemented an airborne tool to optimize en-route flight paths for fuel and time savings was designed and tested. The system utilized in-flight Internet as a high-bandwidth data link to facilitate collaborative decision making between the flight deck and an airline dispatcher. The system was tested and demonstrated in a laboratory environment, as well as in-situ. Initial results from these tests indicate that this system is not only feasible, but could also serve as a growth path and testbed for future air traffic management concepts that rely on shared situational awareness through data exchange and electronic negotiation between multiple entities operating within the National Airspace System.

  9. A Massively Scalable Architecture for Instant Messaging & Presence

    NARCIS (Netherlands)

    Schippers, Jorrit; Remke, Anne Katharina Ingrid; Punt, Henk; Wegdam, M.; Haverkort, Boudewijn R.H.M.; Thomas, N.; Bradley, J.; Knottenbelt, W.; Dingle, N.; Harder, U.

    2010-01-01

    This paper analyzes the scalability of Instant Messaging & Presence (IM&P) architectures. We take a queueing-based modelling and analysis approach to find the bottlenecks of the current IM&P architecture at the Dutch social network Hyves, as well as of alternative architectures. We use the

  10. A scalable distributed RRT for motion planning

    KAUST Repository

    Jacobs, Sam Ade

    2013-05-01

    Rapidly-exploring Random Tree (RRT), like other sampling-based motion planning methods, has been very successful in solving motion planning problems. Even so, sampling-based planners cannot solve all problems of interest efficiently, so attention is increasingly turning to parallelizing them. However, one challenge in parallelizing RRT is the global computation and communication overhead of nearest neighbor search, a key operation in RRTs. This is a critical issue as it limits the scalability of previous algorithms. We present two parallel algorithms to address this problem. The first algorithm extends existing work by introducing a parameter that adjusts how much local computation is done before a global update. The second algorithm radially subdivides the configuration space into regions, constructs a portion of the tree in each region in parallel, and connects the subtrees, removing cycles if they exist. By subdividing the space, we increase computation locality, enabling a scalable result. We show that our approaches are scalable, presenting results that demonstrate almost linear scaling to hundreds of processors on a Linux cluster and a Cray XE6 machine. © 2013 IEEE.
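
    The radial-subdivision idea can be illustrated with a short sketch (our own toy illustration, not the authors' implementation; the 2D configuration space, the wedge-per-process mapping, and all names are assumptions): each angular wedge around a shared root grows its own subtree using only local nearest-neighbor queries, and the subtrees trivially join at the root.

    import math, random

    def grow_wedge_rrt(root, wedge, n_iters=200, step=0.05, reach=1.0):
        # Grow an RRT whose samples are restricted to one angular wedge around the root,
        # so nearest-neighbor search only ever touches this wedge's own nodes.
        lo, hi = wedge
        nodes, parents = [root], {root: None}
        for _ in range(n_iters):
            ang = random.uniform(lo, hi)
            rad = random.uniform(0.0, reach)
            q_rand = (root[0] + rad * math.cos(ang), root[1] + rad * math.sin(ang))
            q_near = min(nodes, key=lambda q: (q[0] - q_rand[0])**2 + (q[1] - q_rand[1])**2)
            d = math.hypot(q_rand[0] - q_near[0], q_rand[1] - q_near[1]) or 1e-9
            q_new = (q_near[0] + step * (q_rand[0] - q_near[0]) / d,
                     q_near[1] + step * (q_rand[1] - q_near[1]) / d)
            nodes.append(q_new)
            parents[q_new] = q_near
        return nodes, parents

    # Each wedge would be handled by its own process in the parallel algorithm; here they
    # are grown sequentially and share the root, so their union is already a single tree.
    root, n_regions = (0.0, 0.0), 4
    wedges = [(2 * math.pi * i / n_regions, 2 * math.pi * (i + 1) / n_regions) for i in range(n_regions)]
    subtrees = [grow_wedge_rrt(root, w) for w in wedges]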

  11. A scalable distributed RRT for motion planning

    KAUST Repository

    Jacobs, Sam Ade; Stradford, Nicholas; Rodriguez, Cesar; Thomas, Shawna; Amato, Nancy M.

    2013-01-01

    Rapidly-exploring Random Tree (RRT), like other sampling-based motion planning methods, has been very successful in solving motion planning problems. Even so, sampling-based planners cannot solve all problems of interest efficiently, so attention is increasingly turning to parallelizing them. However, one challenge in parallelizing RRT is the global computation and communication overhead of nearest neighbor search, a key operation in RRTs. This is a critical issue as it limits the scalability of previous algorithms. We present two parallel algorithms to address this problem. The first algorithm extends existing work by introducing a parameter that adjusts how much local computation is done before a global update. The second algorithm radially subdivides the configuration space into regions, constructs a portion of the tree in each region in parallel, and connects the subtrees, removing cycles if they exist. By subdividing the space, we increase computation locality, enabling a scalable result. We show that our approaches are scalable, presenting results that demonstrate almost linear scaling to hundreds of processors on a Linux cluster and a Cray XE6 machine. © 2013 IEEE.

  12. Scalable and balanced dynamic hybrid data assimilation

    Science.gov (United States)

    Kauranne, Tuomo; Amour, Idrissa; Gunia, Martin; Kallio, Kari; Lepistö, Ahti; Koponen, Sampsa

    2017-04-01

    implemented as parallel model runs themselves. The only bottleneck in the process is the gathering and scattering of initial and final model state snapshots before and after the parallel runs which requires a very efficient and low-latency communication network. However, the volume of data communicated is small and the intervening minimization steps are only 3D-Var, which means their computational load is negligible compared with the fully parallel model runs. We present example results of scalable VEnKF with the 4D lake and shallow sea model COHERENS, assimilating simultaneously continuous in situ measurements in a single point and infrequent satellite images that cover a whole lake, with the fully scalable VEnKF.

  13. Towards scalable quantum communication and computation: Novel approaches and realizations

    Science.gov (United States)

    Jiang, Liang

    Quantum information science involves exploration of fundamental laws of quantum mechanics for information processing tasks. This thesis presents several new approaches towards scalable quantum information processing. First, we consider a hybrid approach to scalable quantum computation, based on an optically connected network of few-qubit quantum registers. Specifically, we develop a novel scheme for scalable quantum computation that is robust against various imperfections. To justify that nitrogen-vacancy (NV) color centers in diamond can be a promising realization of the few-qubit quantum register, we show how to isolate a few proximal nuclear spins from the rest of the environment and use them for the quantum register. We also demonstrate experimentally that the nuclear spin coherence is only weakly perturbed under optical illumination, which allows us to implement quantum logical operations that use the nuclear spins to assist the repetitive-readout of the electronic spin. Using this technique, we demonstrate more than two-fold improvement in signal-to-noise ratio. Apart from direct application to enhance the sensitivity of the NV-based nano-magnetometer, this experiment represents an important step towards the realization of robust quantum information processors using electronic and nuclear spin qubits. We then study realizations of quantum repeaters for long distance quantum communication. Specifically, we develop an efficient scheme for quantum repeaters based on atomic ensembles. We use dynamic programming to optimize various quantum repeater protocols. In addition, we propose a new protocol of quantum repeater with encoding, which efficiently uses local resources (about 100 qubits) to identify and correct errors, to achieve fast one-way quantum communication over long distances. Finally, we explore quantum systems with topological order. Such systems can exhibit remarkable phenomena such as quasiparticles with anyonic statistics and have been proposed as

  14. Traffic and Quality Characterization of the H.264/AVC Scalable Video Coding Extension

    Directory of Open Access Journals (Sweden)

    Geert Van der Auwera

    2008-01-01

    Full Text Available The recent scalable video coding (SVC) extension to the H.264/AVC video coding standard has unprecedented compression efficiency while supporting a wide range of scalability modes, including temporal, spatial, and quality (SNR) scalability, as well as combined spatiotemporal SNR scalability. The traffic characteristics, especially the bit rate variabilities, of the individual layer streams critically affect their network transport. We study the SVC traffic statistics, including the bit rate distortion and bit rate variability distortion, with long CIF resolution video sequences and compare them with the corresponding MPEG-4 Part 2 traffic statistics. We consider (i) temporal scalability with three temporal layers, (ii) spatial scalability with a QCIF base layer and a CIF enhancement layer, as well as (iii) quality scalability modes FGS and MGS. We find that the significant improvement in RD efficiency of SVC is accompanied by substantially higher traffic variabilities as compared to the equivalent MPEG-4 Part 2 streams. We find that separately analyzing the traffic of temporal-scalability only encodings gives reasonable estimates of the traffic statistics of the temporal layers embedded in combined spatiotemporal encodings and in the base layer of combined FGS-temporal encodings. Overall, we find that SVC achieves significantly higher compression ratios than MPEG-4 Part 2, but produces unprecedented levels of traffic variability, thus presenting new challenges for the network transport of scalable video.

  15. Optimized bit extraction using distortion modeling in the scalable extension of H.264/AVC.

    Science.gov (United States)

    Maani, Ehsan; Katsaggelos, Aggelos K

    2009-09-01

    The newly adopted scalable extension of H.264/AVC video coding standard (SVC) demonstrates significant improvements in coding efficiency in addition to an increased degree of supported scalability relative to the scalable profiles of prior video coding standards. Due to the complicated hierarchical prediction structure of the SVC and the concept of key pictures, content-aware rate adaptation of SVC bit streams to intermediate bit rates is a nontrivial task. The concept of quality layers has been introduced in the design of the SVC to allow for fast content-aware prioritized rate adaptation. However, existing quality layer assignment methods are suboptimal and do not consider all network abstraction layer (NAL) units from different layers for the optimization. In this paper, we first propose a technique to accurately and efficiently estimate the quality degradation resulting from discarding an arbitrary number of NAL units from multiple layers of a bitstream by properly taking drift into account. Then, we utilize this distortion estimation technique to assign quality layers to NAL units for a more efficient extraction. Experimental results show that a significant gain can be achieved by the proposed scheme.

  16. BASSET: Scalable Gateway Finder in Large Graphs

    Energy Technology Data Exchange (ETDEWEB)

    Tong, H; Papadimitriou, S; Faloutsos, C; Yu, P S; Eliassi-Rad, T

    2010-11-03

    Given a social network, who is the best person to introduce you to, say, Chris Ferguson, the poker champion? Or, given a network of people and skills, who is the best person to help you learn about, say, wavelets? The goal is to find a small group of 'gateways': persons who are close enough to us, as well as close enough to the target (person, or skill) or, in other words, are crucial in connecting us to the target. The main contributions are the following: (a) we show how to formulate this problem precisely; (b) we show that it is sub-modular and thus it can be solved near-optimally; (c) we give fast, scalable algorithms to find such gateways. Experiments on real data sets validate the effectiveness and efficiency of the proposed methods, achieving up to 6,000,000x speedup.
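
    Because the objective is reported to be submodular, a simple greedy selection achieves the usual near-optimal (1 - 1/e) guarantee. The sketch below is a generic illustration of that greedy pattern, not the BASSET algorithm itself; the toy coverage-style score and all names are hypothetical.

    def greedy_select(candidates, score, k):
        # Greedy maximization of a monotone submodular set function `score`.
        chosen = set()
        for _ in range(k):
            best, best_gain = None, 0.0
            for c in candidates - chosen:
                gain = score(chosen | {c}) - score(chosen)   # marginal gain of adding c
                if gain > best_gain:
                    best, best_gain = c, gain
            if best is None:                                  # nothing adds value anymore
                break
            chosen.add(best)
        return chosen

    # Toy stand-in score: how many "targets" become reachable through the chosen gateways.
    reach = {"a": {1, 2}, "b": {2, 3}, "c": {4}}
    score = lambda s: len(set().union(*(reach[g] for g in s))) if s else 0
    print(greedy_select(set(reach), score, k=2))              # greedy pick covering the most targets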

  17. Scalability of RAID Systems

    OpenAIRE

    Li, Yan

    2010-01-01

    RAID systems (Redundant Arrays of Inexpensive Disks) have dominated backend storage systems for more than two decades and have grown continuously in size and complexity. Currently they face unprecedented challenges from data intensive applications such as image processing, transaction processing and data warehousing. As the size of RAID systems increases, designers are faced with both performance and reliability challenges. These challenges include limited back-end network band...

  18. A lightweight scalable agarose-gel-synthesized thermoelectric composite

    Science.gov (United States)

    Kim, Jin Ho; Fernandes, Gustavo E.; Lee, Do-Joong; Hirst, Elizabeth S.; Osgood, Richard M., III; Xu, Jimmy

    2018-03-01

    Electronic devices are now advancing beyond classical, rigid systems and moving into lightweight flexible regimes, enabling new applications such as body-wearables and ‘e-textiles’. To support this new electronic platform, composite materials that are highly conductive yet scalable, flexible, and wearable are needed. Materials with high electrical conductivity often have poor thermoelectric properties because their thermal transport is made greater by the same factors as their electronic conductivity. We demonstrate, in proof-of-principle experiments, that a novel binary composite can disrupt thermal (phononic) transport, while maintaining high electrical conductivity, thus yielding promising thermoelectric properties. Highly conductive Multi-Wall Carbon Nanotube (MWCNT) composites are combined with a low-band gap semiconductor, PbS. The work functions of the two materials are closely matched, minimizing the electrical contact resistance within the composite. Disparities in the speed of sound in MWCNTs and PbS help to inhibit phonon propagation, and boundary layer scattering at interfaces between these two materials leads to a large Seebeck coefficient (> 150 μV/K) (Mott N F and Davis E A 1971 Electronic Processes in Non-crystalline Materials (Oxford: Clarendon), p 47) and a power factor as high as 10 μW/(K² m). The overall fabrication process is not only scalable but also conformal and compatible with large-area flexible hosts including metal sheets, films, coatings, possibly arrays of fibers, textiles and fabrics. We explain the behavior of this novel thermoelectric material platform in terms of differing length scales for electrical conductivity and phononic heat transfer, and explore new material configurations for potentially lightweight and flexible thermoelectric devices that could be networked in a textile.
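
    As a back-of-envelope consistency check on the quoted figures (our own arithmetic, not taken from the paper), the thermoelectric power factor is

    \[
    \mathrm{PF} = S^{2}\sigma
    \quad\Longrightarrow\quad
    \sigma \approx \frac{\mathrm{PF}}{S^{2}}
           = \frac{10\ \mu\mathrm{W\,K^{-2}\,m^{-1}}}{\left(150\ \mu\mathrm{V\,K^{-1}}\right)^{2}}
           \approx 4\times 10^{2}\ \mathrm{S\,m^{-1}},
    \]

    so reaching both quoted values simultaneously would correspond to an electrical conductivity of a few hundred S/m.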

  19. Joint-layer encoder optimization for HEVC scalable extensions

    Science.gov (United States)

    Tsai, Chia-Ming; He, Yuwen; Dong, Jie; Ye, Yan; Xiu, Xiaoyu; He, Yong

    2014-09-01

    Scalable video coding provides an efficient solution to support video playback on heterogeneous devices with various channel conditions in heterogeneous networks. SHVC is the latest scalable video coding standard based on the HEVC standard. To improve enhancement layer coding efficiency, inter-layer prediction including texture and motion information generated from the base layer is used for enhancement layer coding. However, the overall performance of the SHVC reference encoder is not fully optimized because rate-distortion optimization (RDO) processes in the base and enhancement layers are independently considered. It is difficult to directly extend the existing joint-layer optimization methods to SHVC due to the complicated coding tree block splitting decisions and in-loop filtering process (e.g., deblocking and sample adaptive offset (SAO) filtering) in HEVC. To solve those problems, a joint-layer optimization method is proposed by adjusting the quantization parameter (QP) to optimally allocate the bit resource between layers. Furthermore, to make more proper resource allocation, the proposed method also considers the viewing probability of base and enhancement layers according to packet loss rate. Based on the viewing probability, a novel joint-layer RD cost function is proposed for joint-layer RDO encoding. The QP values of those coding tree units (CTUs) belonging to lower layers referenced by higher layers are decreased accordingly, and the QP values of those remaining CTUs are increased to keep total bits unchanged. Finally the QP values with minimal joint-layer RD cost are selected to match the viewing probability. The proposed method was applied to the third temporal level (TL-3) pictures in the Random Access configuration. Simulation results demonstrate that the proposed joint-layer optimization method can improve coding performance by 1.3% for these TL-3 pictures compared to the SHVC reference encoder without joint-layer optimization.
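
    A plausible form of the viewing-probability-weighted joint-layer cost described above (our reading of the abstract, not the authors' exact formulation) is

    \[
    J_{\text{joint}} \;=\; \sum_{l} p_{l}\, D_{l}(\mathrm{QP}_{l}) \;+\; \lambda \sum_{l} R_{l}(\mathrm{QP}_{l}),
    \qquad \sum_{l} R_{l}(\mathrm{QP}_{l}) \approx \text{const},
    \]

    where p_l is the probability (driven by the packet loss rate) that layer l is the highest layer actually viewed, and D_l and R_l are that layer's distortion and rate as functions of its quantization parameter. The constraint reflects the stated policy of lowering the QP of referenced lower-layer CTUs while raising the QP of the remaining CTUs so that the total bit budget is unchanged.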

  20. Scalable quality assurance for large SNOMED CT hierarchies using subject-based subtaxonomies.

    Science.gov (United States)

    Ochs, Christopher; Geller, James; Perl, Yehoshua; Chen, Yan; Xu, Junchuan; Min, Hua; Case, James T; Wei, Zhi

    2015-05-01

    Standards terminologies may be large and complex, making their quality assurance challenging. Some terminology quality assurance (TQA) methodologies are based on abstraction networks (AbNs), compact terminology summaries. We have tested AbNs and the performance of related TQA methodologies on small terminology hierarchies. However, some standards terminologies, for example, SNOMED, are composed of very large hierarchies. Scaling AbN TQA techniques to such hierarchies poses a significant challenge. We present a scalable subject-based approach for AbN TQA. An innovative technique is presented for scaling TQA by creating a new kind of subject-based AbN called a subtaxonomy for large hierarchies. New hypotheses about concentrations of erroneous concepts within the AbN are introduced to guide scalable TQA. We test the TQA methodology for a subject-based subtaxonomy for the Bleeding subhierarchy in SNOMED's large Clinical finding hierarchy. To test the error concentration hypotheses, three domain experts reviewed a sample of 300 concepts. A consensus-based evaluation identified 87 erroneous concepts. The subtaxonomy-based TQA methodology was shown to uncover statistically significantly more erroneous concepts when compared to a control sample. The scalability of TQA methodologies is a challenge for large standards systems like SNOMED. We demonstrated innovative subject-based TQA techniques by identifying groups of concepts with a higher likelihood of having errors within the subtaxonomy. Scalability is achieved by reviewing a large hierarchy by subject. An innovative methodology for scaling the derivation of AbNs and a TQA methodology was shown to perform successfully for the largest hierarchy of SNOMED. © The Author 2014. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved.

  1. Scalable cloud without dedicated storage

    Science.gov (United States)

    Batkovich, D. V.; Kompaniets, M. V.; Zarochentsev, A. K.

    2015-05-01

    We present a prototype of a scalable computing cloud. It is intended to be deployed on a cluster without separate dedicated storage; the dedicated storage is replaced by distributed software storage. In addition, all cluster nodes are used both as computing nodes and as storage nodes. This solution increases utilization of the cluster resources and improves the fault tolerance and performance of the distributed storage. Another advantage of this solution is high scalability with relatively low initial and maintenance costs. The solution is built on open source components such as OpenStack, CEPH, etc.

  2. Tip-Based Nanofabrication for Scalable Manufacturing

    Directory of Open Access Journals (Sweden)

    Huan Hu

    2017-03-01

    Full Text Available Tip-based nanofabrication (TBN) is a family of emerging nanofabrication techniques that use a nanometer scale tip to fabricate nanostructures. In this review, we first introduce the history of the TBN and the technology development. We then briefly review various TBN techniques that use different physical or chemical mechanisms to fabricate features and discuss some of the state-of-the-art techniques. Subsequently, we focus on those TBN methods that have demonstrated potential to scale up the manufacturing throughput. Finally, we discuss several research directions that are essential for making TBN a scalable nano-manufacturing technology.

  3. Tip-Based Nanofabrication for Scalable Manufacturing

    International Nuclear Information System (INIS)

    Hu, Huan; Somnath, Suhas

    2017-01-01

    Tip-based nanofabrication (TBN) is a family of emerging nanofabrication techniques that use a nanometer scale tip to fabricate nanostructures. In this review, we first introduce the history of the TBN and the technology development. We then briefly review various TBN techniques that use different physical or chemical mechanisms to fabricate features and discuss some of the state-of-the-art techniques. Subsequently, we focus on those TBN methods that have demonstrated potential to scale up the manufacturing throughput. Finally, we discuss several research directions that are essential for making TBN a scalable nano-manufacturing technology.

  4. Scalable Networked Information Processing Environment (SNIPE)

    Energy Technology Data Exchange (ETDEWEB)

    Fagg, G.E.; Moore, K. [Univ. of Tennessee, Knoxville, TN (United States). Dept. of Computer Science; Dongarra, J.J. [Univ. of Tennessee, Knoxville, TN (United States). Dept. of Computer Science]|[Oak Ridge National Lab., TN (United States). Computer Science and Mathematics Div.; Geist, A. [Oak Ridge National Lab., TN (United States). Computer Science and Mathematics Div.

    1997-11-01

    SNIPE is a metacomputing system that aims to provide a reliable, secure, fault tolerant environment for long term distributed computing applications and data stores across the global Internet. This system combines global naming and replication of both processing and data to support large scale information processing applications leading to better availability and reliability than currently available with typical cluster computing and/or distributed computer environments.

  5. On Scalability and Replicability of Smart Grid Projects—A Case Study

    Directory of Open Access Journals (Sweden)

    Lukas Sigrist

    2016-03-01

    Full Text Available This paper studies the scalability and replicability of smart grid projects. Currently, most smart grid projects are still in the R&D or demonstration phases. The full roll-out of the tested solutions requires a suitable degree of scalability and replicability to prevent project demonstrators from remaining local experimental exercises. Scalability and replicability are the preliminary requisites to perform scaling-up and replication successfully; they therefore enable, or at least reduce the barriers to, the growth and reuse of the results of project demonstrators. The paper proposes factors that influence and condition a project’s scalability and replicability. These factors involve technical, economic, regulatory and stakeholder-acceptance-related aspects, and they describe requirements for scalability and replicability. In order to assess and evaluate the identified scalability and replicability factors, data has been collected from European and national smart grid projects by means of a survey, reflecting the projects’ views and results. The evaluation of the factors allows the status quo of on-going projects to be quantified with respect to scalability and replicability, i.e., it provides feedback on the extent to which projects take these factors into account and on whether the projects’ results and solutions are actually scalable and replicable.

  6. Novel Scheme for Packet Forwarding without Header Modifications in Optical Networks

    DEFF Research Database (Denmark)

    Wessing, Henrik; Christiansen, Henrik Lehrmann; Fjelde, Tina

    2002-01-01

    We present a novel scheme for packet forwarding in optical packet-switched networks and we further demonstrate its good scalability through simulations. The scheme requires neither header modification nor any label distribution protocol, thus reducing component cost while simplifying network...

  7. Proof of Stake Blockchain: Performance and Scalability for Groupware Communications

    DEFF Research Database (Denmark)

    Spasovski, Jason; Eklund, Peter

    2017-01-01

    A blockchain is a distributed transaction ledger, a disruptive technology that creates new possibilities for digital ecosystems. The blockchain ecosystem maintains an immutable transaction record to support many types of digital services. This paper compares the performance and scalability of a web-based groupware communication application using both non-blockchain and blockchain technologies. Scalability is measured where message load is synthesized over two typical communication topologies. The first is a 1-to-n network -- a typical client-server or star topology with a central vertex (server) receiving all messages from the remaining n - 1 vertices (clients). The second is a more naturally occurring scale-free network topology, where multiple communication hubs are distributed throughout the network. System performance is tested with both blockchain and non-blockchain solutions using multiple cloud computing...

  8. Highlights and Lessons from the EU CCS Demonstration Project Network: 13th International Conference on Greenhouse Gas Control Technologies, GHGT 2016. 14 November 2016 through 18 November 2016

    NARCIS (Netherlands)

    Kapetaki, Z.; Hetland, J.; Guenan, T. le; Mikunda, T.; Scowcroft, J.

    2017-01-01

    The European Carbon Capture and Storage (CCS) Demonstration Project Network (the “Network”) is currently composed of projects located in the Netherlands, Norway, Spain, and the UK. The goal of the Network is to accelerate deployment of CCS by sharing project development experiences about technology

  9. First field demonstration of cloud datacenter workflow automation employing dynamic optical transport network resources under OpenStack and OpenFlow orchestration.

    Science.gov (United States)

    Szyrkowiec, Thomas; Autenrieth, Achim; Gunning, Paul; Wright, Paul; Lord, Andrew; Elbers, Jörg-Peter; Lumb, Alan

    2014-02-10

    For the first time, we demonstrate the orchestration of elastic datacenter and inter-datacenter transport network resources using a combination of OpenStack and OpenFlow. Programmatic control allows a datacenter operator to dynamically request optical lightpaths from a transport network operator to accommodate rapid changes of inter-datacenter workflows.

  10. Scalable shared-memory multiprocessing

    CERN Document Server

    Lenoski, Daniel E

    1995-01-01

    Dr. Lenoski and Dr. Weber have experience with leading-edge research and practical issues involved in implementing large-scale parallel systems. They were key contributors to the architecture and design of the DASH multiprocessor. Currently, they are involved with commercializing scalable shared-memory technology.

  11. Investigation on Reliability and Scalability of an FBG-Based Hierarchical AOFSN

    Directory of Open Access Journals (Sweden)

    Li-Mei Peng

    2010-03-01

    Full Text Available The reliability and scalability of large-scale FBG-based optical fiber sensor networks (AOFSN) are considered in this paper. The AOFSN consists of a three-level hierarchical sensor network architecture. The first two levels consist of active interrogation and remote nodes (RNs), and the third level, called the sensor subnet (SSN), consists of passive Fiber Bragg Gratings (FBGs) and a few switches. Switch architectures in the RN and various SSNs are studied to improve the reliability and scalability of the AOFSN. Two SSNs with a regular topology are proposed to support simple routing and scalability in the AOFSN: square-based sensor cells (SSC) and pentagon-based sensor cells (PSC). The reliability and scalability are evaluated in terms of the available sensing coverage in the case of one or multiple link failures.

  12. Towards Scalable Graph Computation on Mobile Devices.

    Science.gov (United States)

    Chen, Yiqi; Lin, Zhiyuan; Pienta, Robert; Kahng, Minsuk; Chau, Duen Horng

    2014-10-01

    Mobile devices have become increasingly central to our everyday activities, due to their portability, multi-touch capabilities, and ever-improving computational power. Such attractive features have spurred research interest in leveraging mobile devices for computation. We explore a novel approach that aims to use a single mobile device to perform scalable graph computation on large graphs that do not fit in the device's limited main memory, opening up the possibility of performing on-device analysis of large datasets, without relying on the cloud. Based on the familiar memory mapping capability provided by today's mobile operating systems, our approach to scale up computation is powerful and intentionally kept simple to maximize its applicability across the iOS and Android platforms. Our experiments demonstrate that an iPad mini can perform fast computation on large real graphs with as many as 272 million edges (Google+ social graph), at a speed that is only a few times slower than a 13″ Macbook Pro. Through creating a real world iOS app with this technique, we demonstrate the strong potential application for scalable graph computation on a single mobile device using our approach.
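
    The core trick, memory-mapping the graph so the operating system pages edge data in and out of limited RAM, can be sketched in a few lines (a desktop illustration with NumPy, not the authors' iOS/Android implementation; the file name, binary layout, and sizes are hypothetical):

    import numpy as np

    # Build a small synthetic binary edge list of (src, dst) uint32 pairs; in practice
    # this file would be far larger than the device's main memory.
    rng = np.random.default_rng(0)
    num_nodes = 1000
    rng.integers(0, num_nodes, size=(50_000, 2), dtype=np.uint32).tofile("edges.bin")

    # Memory-map the file: pages are faulted in on demand instead of loading everything.
    edges = np.memmap("edges.bin", dtype=np.uint32, mode="r").reshape(-1, 2)

    degree = np.zeros(num_nodes, dtype=np.int64)
    chunk = 8192                                   # edges touched per pass
    for start in range(0, edges.shape[0], chunk):
        block = edges[start:start + chunk]
        degree += np.bincount(block[:, 0], minlength=num_nodes)

    print("max out-degree:", int(degree.max()))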

  13. Towards Scalable Graph Computation on Mobile Devices

    Science.gov (United States)

    Chen, Yiqi; Lin, Zhiyuan; Pienta, Robert; Kahng, Minsuk; Chau, Duen Horng

    2015-01-01

    Mobile devices have become increasingly central to our everyday activities, due to their portability, multi-touch capabilities, and ever-improving computational power. Such attractive features have spurred research interest in leveraging mobile devices for computation. We explore a novel approach that aims to use a single mobile device to perform scalable graph computation on large graphs that do not fit in the device's limited main memory, opening up the possibility of performing on-device analysis of large datasets, without relying on the cloud. Based on the familiar memory mapping capability provided by today's mobile operating systems, our approach to scale up computation is powerful and intentionally kept simple to maximize its applicability across the iOS and Android platforms. Our experiments demonstrate that an iPad mini can perform fast computation on large real graphs with as many as 272 million edges (Google+ social graph), at a speed that is only a few times slower than a 13″ Macbook Pro. Through creating a real world iOS app with this technique, we demonstrate the strong potential application for scalable graph computation on a single mobile device using our approach. PMID:25859564

  14. Parallel scalability of Hartree-Fock calculations

    Science.gov (United States)

    Chow, Edmond; Liu, Xing; Smelyanskiy, Mikhail; Hammond, Jeff R.

    2015-03-01

    Quantum chemistry is increasingly performed using large cluster computers consisting of multiple interconnected nodes. For a fixed molecular problem, the efficiency of a calculation usually decreases as more nodes are used, due to the cost of communication between the nodes. This paper empirically investigates the parallel scalability of Hartree-Fock calculations. The construction of the Fock matrix and the density matrix calculation are analyzed separately. For the former, we use a parallelization of Fock matrix construction based on a static partitioning of work followed by a work stealing phase. For the latter, we use density matrix purification from the linear scaling methods literature, but without using sparsity. When using large numbers of nodes for moderately sized problems, density matrix computations are network-bandwidth bound, making purification methods potentially faster than eigendecomposition methods.
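
    The density-matrix purification mentioned above can be illustrated with the classic McWeeny iteration (a generic, serial, dense-matrix toy rather than the distributed implementation studied in the paper; the initialization below peeks at the spectrum purely to place the chemical potential for the demo):

    import numpy as np

    def mcweeny_purify(F, n_occ, iters=60):
        # McWeeny purification D <- 3D^2 - 2D^3 drives the eigenvalues of an initial
        # guess to 0 or 1, yielding an idempotent density matrix using only matrix
        # products, which is the operation that parallelizes well across nodes.
        evals = np.linalg.eigvalsh(F)                      # demo only: place mu in the gap
        mu = 0.5 * (evals[n_occ - 1] + evals[n_occ])
        lam = 2.0 * max(evals[-1] - mu, mu - evals[0])
        n = F.shape[0]
        D = (mu * np.eye(n) - F) / lam + 0.5 * np.eye(n)   # eigenvalues mapped into [0, 1]
        for _ in range(iters):
            D2 = D @ D
            D = 3.0 * D2 - 2.0 * D2 @ D
        return D

    rng = np.random.default_rng(1)
    A = rng.standard_normal((6, 6))
    F = 0.5 * (A + A.T)                                    # tiny symmetric stand-in "Fock" matrix
    D = mcweeny_purify(F, n_occ=3)
    print("idempotency error:", float(np.linalg.norm(D @ D - D)))
    print("trace (should be close to 3):", round(float(np.trace(D)), 3))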

  15. The scalable coherent interface, IEEE P1596

    International Nuclear Information System (INIS)

    Gustavson, D.B.

    1990-01-01

    IEEE P1596, the scalable coherent interface (formerly known as SuperBus) is based on experience gained while developing Fastbus (ANSI/IEEE 960--1986, IEC 935), Futurebus (IEEE P896.x) and other modern 32-bit buses. SCI goals include a minimum bandwidth of 1 GByte/sec per processor in multiprocessor systems with thousands of processors; efficient support of a coherent distributed-cache image of distributed shared memory; support for repeaters which interface to existing or future buses; and support for inexpensive small rings as well as for general switched interconnections like Banyan, Omega, or crossbar networks. This paper presents a summary of current directions, reports the status of the work in progress, and suggests some applications in data acquisition and physics

  16. Silicon nanowire networks for multi-stage thermoelectric modules

    International Nuclear Information System (INIS)

    Norris, Kate J.; Garrett, Matthew P.; Zhang, Junce; Coleman, Elane; Tompa, Gary S.; Kobayashi, Nobuhiko P.

    2015-01-01

    Highlights:
    • Fabricated flexible single, double, and quadruple stacked Si thermoelectric modules.
    • Measured an enhanced power production of 27%, showing vertical stacking is scalable.
    • Vertically scalable thermoelectric module design based on semiconducting nanowires.
    • The design can utilize either p- or n-type semiconductors; both types are not required.
    • ΔT increases with thickness, therefore power/area can increase as modules are stacked.
    Abstract: We present the fabrication and characterization of single, double, and quadruple stacked flexible silicon nanowire network based thermoelectric modules. From double to quadruple stacked modules, power production increased 27%, demonstrating that stacking multiple nanowire thermoelectric devices in series is a scalable method to generate power by supplying a larger temperature gradient. We present a vertically scalable multi-stage thermoelectric module design using semiconducting nanowires, eliminating the need for both n-type and p-type semiconductors in a module.

  17. Scalable on-chip quantum state tomography

    Science.gov (United States)

    Titchener, James G.; Gräfe, Markus; Heilmann, René; Solntsev, Alexander S.; Szameit, Alexander; Sukhorukov, Andrey A.

    2018-03-01

    Quantum information systems are on a path to vastly exceed the complexity of any classical device. The number of entangled qubits in quantum devices is rapidly increasing, and the information required to fully describe these systems scales exponentially with qubit number. This scaling is the key benefit of quantum systems, however it also presents a severe challenge. To characterize such systems typically requires an exponentially long sequence of different measurements, becoming highly resource demanding for large numbers of qubits. Here we propose and demonstrate a novel and scalable method for characterizing quantum systems based on expanding a multi-photon state to larger dimensionality. We establish that the complexity of this new measurement technique only scales linearly with the number of qubits, while providing a tomographically complete set of data without a need for reconfigurability. We experimentally demonstrate an integrated photonic chip capable of measuring two- and three-photon quantum states with statistical reconstruction fidelity of 99.71%.

  18. Scalability of DL_POLY on High Performance Computing Platform

    Directory of Open Access Journals (Sweden)

    Mabule Samuel Mabakane

    2017-12-01

    Full Text Available This paper presents a case study on the scalability of several versions of the molecular dynamics code (DL_POLY) performed on South Africa's Centre for High Performance Computing e1350 IBM Linux cluster, Sun system and Lengau supercomputers. Within this study different problem sizes were designed and the same chosen systems were employed in order to test the performance of DL_POLY using weak and strong scalability. It was found that the speed-up results for the small systems were better than those for large systems on both Ethernet and Infiniband networks. However, simulations of large systems in DL_POLY performed well using the Infiniband network on the Lengau cluster as compared to the e1350 and Sun supercomputers.
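
    For reference, the strong- and weak-scaling measures used in such studies (standard textbook definitions, not specific to this paper) are

    \[
    S(p) = \frac{T_{1}}{T_{p}}, \qquad
    E_{\text{strong}}(p) = \frac{T_{1}}{p\,T_{p}}, \qquad
    E_{\text{weak}}(p) = \frac{T_{1}}{T_{p}}\ \ \text{(with the problem size grown in proportion to } p\text{)},
    \]

    where T_1 and T_p are the wall-clock times on one and p cores, respectively.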

  19. Scalable Multi-group Key Management for Advanced Metering Infrastructure

    OpenAIRE

    Benmalek , Mourad; Challal , Yacine; Bouabdallah , Abdelmadjid

    2015-01-01

    Advanced Metering Infrastructure (AMI) is composed of systems and networks to incorporate changes for modernizing the electricity grid, reduce peak loads, and meet energy efficiency targets. AMI is a privileged target for security attacks with potentially great damage against infrastructures and privacy. For this reason, Key Management has been identified as one of the most challenging topics in AMI development. In this paper, we propose a new Scalable multi-group key ...

  20. No study left behind: a network meta-analysis in non-small-cell lung cancer demonstrating the importance of considering all relevant data.

    Science.gov (United States)

    Hawkins, Neil; Scott, David A; Woods, Beth S; Thatcher, Nicholas

    2009-09-01

    To demonstrate the importance of considering all relevant indirect data in a network meta-analysis of treatments for non-small-cell lung cancer (NSCLC). A recent National Institute for Health and Clinical Excellence appraisal focussed on the indirect comparison of docetaxel with erlotinib in second-line treatment of NSCLC based on trials including a common comparator. We compared the results of this analysis to a network meta-analysis including other trials that formed a network of evidence. We also examined the importance of allowing for the correlations between the estimated treatment effects that can arise when analysing such networks. The analysis of the restricted network including only trials of docetaxel and erlotinib linked via the common placebo comparator produced an estimated mean hazard ratio (HR) for erlotinib compared with docetaxel of 1.55 (95% confidence interval [CI] 0.72-2.97). In contrast, the network meta-analysis produced an estimated HR for erlotinib compared with docetaxel of 0.83 (95% CI 0.65-1.06). Analyzing the wider network improved the precision of estimated treatment effects, altered their rankings and also allowed further treatments to be compared. Some of the estimated treatment effects from the wider network were highly correlated. This empirical example shows the importance of considering all potentially relevant data when comparing treatments. Care should therefore be taken to consider all relevant information, including correlations induced by the network of trial data, when comparing treatments.
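
    The restricted comparison through the common placebo arm corresponds to the standard adjusted indirect comparison (stated here generically for context; the notation is ours, not the paper's):

    \[
    \log \widehat{\mathrm{HR}}_{E/D} \;=\; \log \widehat{\mathrm{HR}}_{E/P} \;-\; \log \widehat{\mathrm{HR}}_{D/P},
    \qquad
    \operatorname{Var}\!\left(\log \widehat{\mathrm{HR}}_{E/D}\right) \;=\;
    \operatorname{Var}\!\left(\log \widehat{\mathrm{HR}}_{E/P}\right) + \operatorname{Var}\!\left(\log \widehat{\mathrm{HR}}_{D/P}\right),
    \]

    where E, D and P denote erlotinib, docetaxel and placebo. A full network meta-analysis generalizes this by estimating all such contrasts jointly, which both tightens the intervals and induces the correlations between estimated treatment effects noted above.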

  1. Network Performance and Coordination in the Health, Education, Telecommunications System. Satellite Technology Demonstration, Technical Report No. 0422.

    Science.gov (United States)

    Braunstein, Jean; Janky, James M.

    This paper describes the network coordination for the Health, Education, Telecommunications (HET) system. Specifically, it discusses HET network performance as a function of a specially-developed coordination system which was designed to link terrestrial equipment to satellite operations centers. Because all procedures and equipment developed for…

  2. Experimental demonstration of bandwidth on demand (BoD) provisioning based on time scheduling in software-defined multi-domain optical networks

    Science.gov (United States)

    Zhao, Yongli; Li, Yajie; Wang, Xinbo; Chen, Bowen; Zhang, Jie

    2016-09-01

    A hierarchical software-defined networking (SDN) control architecture is designed for multi-domain optical networks with the Open Daylight (ODL) controller. The OpenFlow-based Control Virtual Network Interface (CVNI) protocol is deployed between the network orchestrator and the domain controllers. Then, a dynamic bandwidth on demand (BoD) provisioning solution is proposed based on time scheduling in software-defined multi-domain optical networks (SD-MDON). Shared Risk Link Groups (SRLG)-disjoint routing schemes are adopted to separate each tenant for reliability. The SD-MDON testbed is built based on the proposed hierarchical control architecture. Then the proposed time scheduling-based BoD (Ts-BoD) solution is experimentally demonstrated on the testbed. The performance of the Ts-BoD solution is evaluated with respect to blocking probability, resource utilization, and lightpath setup latency.

  3. Efficient Enhancement for Spatial Scalable Video Coding Transmission

    Directory of Open Access Journals (Sweden)

    Mayada Khairy

    2017-01-01

    Full Text Available Scalable Video Coding (SVC) is an international standard technique for video compression. It is an extension of H.264 Advanced Video Coding (AVC). In the encoding of video streams by SVC, it is suitable to employ the macroblock (MB) mode decision because it affords superior coding efficiency. However, the exhaustive mode decision technique that is usually used for SVC increases the computational complexity, resulting in a longer encoding time (ET). Many other algorithms have been proposed to solve this problem, but at the cost of increased transmission time (TT) across the network. To minimize the ET and TT, this paper introduces four efficient algorithms based on spatial scalability. The algorithms utilize the mode-distribution correlation between the base layer (BL) and enhancement layers (ELs) and interpolation between the EL frames. The proposed algorithms fall into two categories. Those of the first category are based on interlayer residual SVC spatial scalability. They employ two methods, namely, interlayer interpolation (ILIP) and the interlayer base mode (ILBM) method, and enable ET and TT savings of up to 69.3% and 83.6%, respectively. The algorithms of the second category are based on full-search SVC spatial scalability. They utilize two methods, namely, full interpolation (FIP) and the full-base mode (FBM) method, and enable ET and TT savings of up to 55.3% and 76.6%, respectively.
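
    The interlayer interpolation (ILIP) idea, predicting an enhancement-layer frame by upsampling the reconstructed base layer, can be sketched with a simple bilinear 2x upscaler (our own toy QCIF-to-CIF illustration; the codec's normative upsampling filters are not reproduced):

    import numpy as np

    def upsample_2x_bilinear(base):
        # Bilinear 2x upsampling of a base-layer luma plane (toy stand-in for the
        # standard's upsampling filters).
        h, w = base.shape
        ys = (np.arange(2 * h) + 0.5) / 2.0 - 0.5          # target sample positions in
        xs = (np.arange(2 * w) + 0.5) / 2.0 - 0.5          # base-layer coordinates
        y0 = np.clip(np.floor(ys).astype(int), 0, h - 1)
        x0 = np.clip(np.floor(xs).astype(int), 0, w - 1)
        y1 = np.clip(y0 + 1, 0, h - 1)
        x1 = np.clip(x0 + 1, 0, w - 1)
        wy = np.clip(ys - y0, 0.0, 1.0)[:, None]
        wx = np.clip(xs - x0, 0.0, 1.0)[None, :]
        top = (1 - wx) * base[y0][:, x0] + wx * base[y0][:, x1]
        bot = (1 - wx) * base[y1][:, x0] + wx * base[y1][:, x1]
        return (1 - wy) * top + wy * bot

    qcif = np.random.default_rng(2).integers(0, 256, size=(144, 176)).astype(float)
    cif_prediction = upsample_2x_bilinear(qcif)            # 288 x 352 inter-layer prediction
    print(cif_prediction.shape)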

  4. Improving diabetes medication adherence: successful, scalable interventions

    Directory of Open Access Journals (Sweden)

    Zullig LL

    2015-01-01

    Full Text Available Leah L Zullig,1,2 Walid F Gellad,3,4 Jivan Moaddeb,2,5 Matthew J Crowley,1,2 William Shrank,6 Bradi B Granger,7 Christopher B Granger,8 Troy Trygstad,9 Larry Z Liu,10 Hayden B Bosworth1,2,7,11 1Center for Health Services Research in Primary Care, Durham Veterans Affairs Medical Center, Durham, NC, USA; 2Department of Medicine, Duke University, Durham, NC, USA; 3Center for Health Equity Research and Promotion, Pittsburgh Veterans Affairs Medical Center, Pittsburgh, PA, USA; 4Division of General Internal Medicine, University of Pittsburgh, Pittsburgh, PA, USA; 5Institute for Genome Sciences and Policy, Duke University, Durham, NC, USA; 6CVS Caremark Corporation; 7School of Nursing, Duke University, Durham, NC, USA; 8Department of Medicine, Division of Cardiology, Duke University School of Medicine, Durham, NC, USA; 9North Carolina Community Care Networks, Raleigh, NC, USA; 10Pfizer, Inc., and Weill Medical College of Cornell University, New York, NY, USA; 11Department of Psychiatry and Behavioral Sciences, Duke University School of Medicine, Durham, NC, USA Abstract: Effective medications are a cornerstone of prevention and disease treatment, yet only about half of patients take their medications as prescribed, resulting in a common and costly public health challenge for the US healthcare system. Since poor medication adherence is a complex problem with many contributing causes, there is no one universal solution. This paper describes interventions that were not only effective in improving medication adherence among patients with diabetes, but were also potentially scalable (ie, easy to implement to a large population). We identify key characteristics that make these interventions effective and scalable. This information is intended to inform healthcare systems seeking proven, low resource, cost-effective solutions to improve medication adherence. Keywords: medication adherence, diabetes mellitus, chronic disease, dissemination research

  5. CODA: A scalable, distributed data acquisition system

    International Nuclear Information System (INIS)

    Watson, W.A. III; Chen, J.; Heyes, G.; Jastrzembski, E.; Quarrie, D.

    1994-01-01

    A new data acquisition system has been designed for physics experiments scheduled to run at CEBAF starting in the summer of 1994. This system runs on Unix workstations connected via ethernet, FDDI, or other network hardware to multiple intelligent front end crates -- VME, CAMAC or FASTBUS. CAMAC crates may either contain intelligent processors, or may be interfaced to VME. The system is modular and scalable, from a single front end crate and one workstation linked by ethernet, to as many as 32 clusters of front end crates ultimately connected via a high speed network to a set of analysis workstations. The system includes an extensible, device independent slow controls package with drivers for CAMAC, VME, and high voltage crates, as well as a link to CEBAF accelerator controls. All distributed processes are managed by standard remote procedure calls propagating change-of-state requests, or reading and writing program variables. Custom components may be easily integrated. The system is portable to any front end processor running the VxWorks real-time kernel, and to most workstations supplying a few standard facilities such as rsh and X-windows, and Motif and socket libraries. Sample implementations exist for 2 Unix workstation families connected via ethernet or FDDI to VME (with interfaces to FASTBUS or CAMAC), and via ethernet to FASTBUS or CAMAC.

  6. Scalable Techniques for Formal Verification

    CERN Document Server

    Ray, Sandip

    2010-01-01

    This book presents state-of-the-art approaches to formal verification techniques to seamlessly integrate different formal verification methods within a single logical foundation. It should benefit researchers and practitioners looking to get a broad overview of the spectrum of formal verification techniques, as well as approaches to combining such techniques within a single framework. Coverage includes a range of case studies showing how such a combination is fruitful in developing a scalable verification methodology for industrial designs. This book outlines both theoretical and practical issues.

  7. Modeling In-Network Aggregation in VANETs

    NARCIS (Netherlands)

    Dietzel, Stefan; Kargl, Frank; Heijenk, Geert; Schaub, Florian

    2011-01-01

    The multitude of applications envisioned for vehicular ad hoc networks requires efficient communication and dissemination mechanisms to prevent network congestion. In-network data aggregation promises to reduce bandwidth requirements and enable scalability in large vehicular networks. However, most

  8. Demonstration of statistical approaches to identify component's ageing by operational data analysis-A case study for the ageing PSA network

    International Nuclear Information System (INIS)

    Rodionov, Andrei; Atwood, Corwin L.; Kirchsteiger, Christian; Patrik, Milan

    2008-01-01

    The paper presents some results of a case study on 'Demonstration of statistical approaches to identify the component's ageing by operational data analysis', which was done in the frame of the EC JRC Ageing PSA Network. Several techniques: visual evaluation, nonparametric and parametric hypothesis tests, were proposed and applied in order to demonstrate the capacity, advantages and limitations of statistical approaches to identify the component's ageing by operational data analysis. Engineering considerations are out of the scope of the present study

  9. Software-defined networking control plane for seamless integration of multiple silicon photonic switches in Datacom networks.

    Science.gov (United States)

    Shen, Yiwen; Hattink, Maarten H N; Samadi, Payman; Cheng, Qixiang; Hu, Ziyiz; Gazman, Alexander; Bergman, Keren

    2018-04-16

    Silicon photonics based switches offer an effective option for the delivery of dynamic bandwidth for future large-scale Datacom systems while maintaining scalable energy efficiency. The integration of a silicon photonics-based optical switching fabric within electronic Datacom architectures requires novel network topologies and arbitration strategies to effectively manage the active elements in the network. We present a scalable software-defined networking control plane to integrate silicon photonic based switches with conventional Ethernet or InfiniBand networks. Our software-defined control plane manages both electronic packet switches and multiple silicon photonic switches for simultaneous packet and circuit switching. We built an experimental Dragonfly network testbed with 16 electronic packet switches and 2 silicon photonic switches to evaluate our control plane. Observed latencies occupied by each step of the switching procedure demonstrate a total of 344 µs control plane latency for data-center and high performance computing platforms.

  10. Secure and scalable deduplication of horizontally partitioned health data for privacy-preserving distributed statistical computation.

    Science.gov (United States)

    Yigzaw, Kassaye Yitbarek; Michalas, Antonis; Bellika, Johan Gustav

    2017-01-03

    Techniques have been developed to compute statistics on distributed datasets without revealing private information except the statistical results. However, duplicate records in a distributed dataset may lead to incorrect statistical results. Therefore, to increase the accuracy of the statistical analysis of a distributed dataset, secure deduplication is an important preprocessing step. We designed a secure protocol for the deduplication of horizontally partitioned datasets with deterministic record linkage algorithms. We provided a formal security analysis of the protocol in the presence of semi-honest adversaries. The protocol was implemented and deployed across three microbiology laboratories located in Norway, and we ran experiments on the datasets in which the number of records for each laboratory varied. Experiments were also performed on simulated microbiology datasets and data custodians connected through a local area network. The security analysis demonstrated that the protocol protects the privacy of individuals and data custodians under a semi-honest adversarial model. More precisely, the protocol remains secure with the collusion of up to N - 2 corrupt data custodians. The total runtime for the protocol scales linearly with the addition of data custodians and records. One million simulated records distributed across 20 data custodians were deduplicated within 45 s. The experimental results showed that the protocol is more efficient and scalable than previous protocols for the same problem. The proposed deduplication protocol is efficient and scalable for practical uses while protecting the privacy of patients and data custodians.

  11. Continuity-Aware Scheduling Algorithm for Scalable Video Streaming

    Directory of Open Access Journals (Sweden)

    Atinat Palawan

    2016-05-01

    Full Text Available The consumer demand for retrieving and delivering visual content through consumer electronic devices has increased rapidly in recent years. The quality of video in packet networks is susceptible to certain traffic characteristics: average bandwidth availability, loss, delay and delay variation (jitter). This paper presents a scheduling algorithm that modifies the stream of scalable video to combat jitter. The algorithm provides unequal look-ahead by safeguarding the base layer (without the need for overhead) of the scalable video. The results of the experiments show that our scheduling algorithm reduces the number of frames with a violated deadline and significantly improves the continuity of the video stream without compromising the average Y Peak Signal-to-Noise Ratio (PSNR).

  12. Energy-Efficient Bandwidth Allocation for Multiuser Scalable Video Streaming over WLAN

    Directory of Open Access Journals (Sweden)

    Lafruit Gauthier

    2008-01-01

    Full Text Available We consider the problem of packet scheduling for the transmission of multiple video streams over a wireless local area network (WLAN). A cross-layer optimization framework is proposed to minimize the wireless transceiver energy consumption while meeting the user-required visual quality constraints. The framework relies on the IEEE 802.11 standard and on the embedded bitstream structure of the scalable video coding scheme. It integrates an application-level video quality metric as the QoS constraint (instead of a communication-layer quality metric) with energy consumption optimization through link-layer scaling and sleeping. Both energy minimization and min-max energy optimization strategies are discussed. Simulation results demonstrate significant energy gains compared to state-of-the-art approaches.

  13. I(Re2-WiNoC: Exploring scalable wireless on-chip micronetworks for heterogeneous embedded many-core SoCs

    Directory of Open Access Journals (Sweden)

    Dan Zhao

    2015-02-01

    In this work, an irregular and reconfigurable WiNoC platform is proposed to tackle ever increasing complexity, density and heterogeneity challenges. A flexible RF infrastructure is established where RF nodes are properly distributed and IP cores are clustered. Consequently, a performance-cost effective topology is formed. A region-aided routing scheme is further designed and implemented to realize loop-free routing, minimum path cost and high scalability for the irregular WiNoC infrastructure. To implement the data transmission protocol, the RF microarchitecture of WiNoC is developed where the RF nodes are designed to fulfill the functions of distributed table routing, multi-channel arbitration, virtual output queuing, and distributed flow control. Our simulation studies based on synthetic traffic demonstrate the network efficiency and scalability of WiNoC.

  14. Network Simulation

    CERN Document Server

    Fujimoto, Richard

    2006-01-01

    "Network Simulation" presents a detailed introduction to the design, implementation, and use of network simulation tools. Discussion topics include the requirements and issues faced for simulator design and use in wired networks, wireless networks, distributed simulation environments, and fluid model abstractions. Several existing simulations are given as examples, with details regarding design decisions and why those decisions were made. Issues regarding performance and scalability are discussed in detail, describing how one can utilize distributed simulation methods to increase the

  15. Percolator: Scalable Pattern Discovery in Dynamic Graphs

    Energy Technology Data Exchange (ETDEWEB)

    Choudhury, Sutanay; Purohit, Sumit; Lin, Peng; Wu, Yinghui; Holder, Lawrence B.; Agarwal, Khushbu

    2018-02-06

    We demonstrate Percolator, a distributed system for graph pattern discovery in dynamic graphs. In contrast to conventional mining systems, Percolator advocates efficient pattern mining schemes that (1) support pattern detection with keywords; (2) integrate incremental and parallel pattern mining; and (3) support analytical queries such as trend analysis. The core idea of Percolator is to dynamically decide and verify a small fraction of patterns and their instances that must be inspected in response to buffered updates in dynamic graphs, with a total mining cost independent of graph size. We demonstrate a) the feasibility of incremental pattern mining by walking through each component of Percolator, b) the efficiency and scalability of Percolator over the sheer size of real-world dynamic graphs, and c) how the user-friendly GUI of Percolator interacts with users to support keyword-based queries that detect, browse and inspect trending patterns. We also demonstrate two user cases of Percolator, in social media trend analysis and academic collaboration analysis, respectively.

  16. A Demonstration Marine Biodiversity Observation Network (MBON): Understanding Marine Life and its Role in Maintaining Ecosystem Services

    Science.gov (United States)

    Muller-Karger, F. E.; Iken, K.; Miller, R. J.; Duffy, J. E.; Chavez, F.; Montes, E.

    2016-02-01

    The U.S. Federal government (NOAA, NASA, BOEM, and the Smithsonian Institution), academic researchers, and private partners are laying the foundation for a Marine Biodiversity Observation Network (MBON). The goals of the network are to: 1) Observe and understand life, from microbes to whales, in different coastal and continental shelf habitats; 2) Define an efficient set of observations required for implementing a useful MBON; 3) Develop technology for biodiversity assessments including emerging environmental DNA (eDNA), remote sensing, and image analysis methods to coordinate with classical sampling; 4) Integrate and synthesize information in coordination with the Integrated Ocean Observing System (IOOS), the international Group on Earth Observations Biodiversity Observation Network(GEO BON), and the Ocean Biogeographic Information System (OBIS) sponsored by UNESCO's Intergovernmental Oceanographic Commission (IOC); and 5) Understand the linkages between marine biodiversity, ecosystem processes, and the social-economic context of a region. Pilot projects have been implemented within three NOAA National Marine Sanctuaries (Florida Keys, Monterey Bay, and Channel Islands), the wider Santa Barbara Channel, in the Chukchi Sea, and through the Smithsonian's Tennenbaum Marine Observatories Network (TMON) at several sites in the U.S. and collaborating countries. Together, these MBON sites encompass a wide range of marine environments, including deep sea, continental shelves, and coastal habitats including estuaries, wetlands, and coral reefs. The present MBON partners are open to growth of the MBON through additional collaborations. Given these initiatives, GEO BON is proposing an MBON effort that spans from pole to pole, with a pathfinder effort among countries in the Americas. By specializing in coastal ecosystems—where marine biodiversity and people are concentrated and interact most—the MBON and TMON initiatives aim to provide policymakers with the science to

  17. Experimental demonstration of an OpenFlow based software-defined optical network employing packet, fixed and flexible DWDM grid technologies on an international multi-domain testbed.

    Science.gov (United States)

    Channegowda, M; Nejabati, R; Rashidi Fard, M; Peng, S; Amaya, N; Zervas, G; Simeonidou, D; Vilalta, R; Casellas, R; Martínez, R; Muñoz, R; Liu, L; Tsuritani, T; Morita, I; Autenrieth, A; Elbers, J P; Kostecki, P; Kaczmarek, P

    2013-03-11

    Software defined networking (SDN) and flexible grid optical transport technology are two key technologies that allow network operators to customize their infrastructure based on application requirements and therefore minimizing the extra capital and operational costs required for hosting new applications. In this paper, for the first time we report on design, implementation & demonstration of a novel OpenFlow based SDN unified control plane allowing seamless operation across heterogeneous state-of-the-art optical and packet transport domains. We verify and experimentally evaluate OpenFlow protocol extensions for flexible DWDM grid transport technology along with its integration with fixed DWDM grid and layer-2 packet switching.

  18. On the scalability of uncoordinated multiple access for the Internet of Things

    KAUST Repository

    Chisci, Giovanni

    2017-11-16

    The Internet of things (IoT) will entail a massive number of wireless connections with sporadic traffic patterns. To support the IoT traffic, several technologies are evolving to support low power wide area (LPWA) wireless communications. However, LPWA networks rely on variations of uncoordinated spectrum access, either for data transmissions or scheduling requests, thus imposing a scalability problem on the IoT. This paper presents a novel spatiotemporal model to study the scalability of the ALOHA medium access. In particular, the developed mathematical model relies on stochastic geometry and queueing theory to account for spatial and temporal attributes of the IoT. To this end, the scalability of the ALOHA is characterized by the percentile of IoT devices that can be served while keeping their queues stable. The results highlight the scalability problem of ALOHA and quantify the extent to which ALOHA can scale in terms of the number of devices, traffic requirements, and transmission rate.
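
    The stability notion used above (the share of devices whose queues remain bounded) can be illustrated with a far simpler model than the paper's stochastic-geometry analysis. The following is a minimal sketch assuming a slotted-ALOHA collision channel with Bernoulli arrivals; the device counts, arrival and transmit probabilities are invented for illustration and do not correspond to the SINR-based spatiotemporal model of the record.

```python
import random

def aloha_stable_fraction(n_devices=200, arrival_p=0.005, tx_p=0.02,
                          slots=20000, queue_cap=50, seed=1):
    """Estimate the fraction of devices whose queues stay bounded under
    slotted ALOHA on a simple collision channel (illustrative only)."""
    rng = random.Random(seed)
    queues = [0] * n_devices
    for _ in range(slots):
        # New packet arrivals (Bernoulli traffic per device).
        for i in range(n_devices):
            if rng.random() < arrival_p:
                queues[i] = min(queues[i] + 1, queue_cap)
        # Backlogged devices attempt transmission with probability tx_p.
        attempts = [i for i in range(n_devices)
                    if queues[i] > 0 and rng.random() < tx_p]
        # Success only if exactly one device transmits in the slot.
        if len(attempts) == 1:
            queues[attempts[0]] -= 1
    # Treat devices far below the queue cap as "stable" for this rough estimate.
    stable = sum(1 for q in queues if q < queue_cap // 2)
    return stable / n_devices

if __name__ == "__main__":
    for n in (100, 200, 400, 800):
        print(n, aloha_stable_fraction(n_devices=n))
```

    Increasing n_devices while holding per-device traffic fixed shrinks the stable fraction, which is the qualitative scalability limit the record quantifies analytically.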

  19. Fast & scalable pattern transfer via block copolymer nanolithography

    DEFF Research Database (Denmark)

    Li, Tao; Wang, Zhongli; Schulte, Lars

    2015-01-01

    A fully scalable and efficient pattern transfer process based on block copolymer (BCP) self-assembling directly on various substrates is demonstrated. PS-rich and PDMS-rich poly(styrene-b-dimethylsiloxane) (PS-b-PDMS) copolymers are used to give monolayer sphere morphology after spin-casting ... on long range lateral order, including fabrication of substrates for catalysis, solar cells, sensors, ultrafiltration membranes and templating of semiconductors or metals.

  20. Metal–organic covalent network chemical vapor deposition for gas separation

    NARCIS (Netherlands)

    Boscher, N.D.; Wang, M.; Perrotta, A.; Heinze, K.; Creatore, A.; Gleason, K.K.

    2016-01-01

    The chemical vapor deposition (CVD) polymerization of metalloporphyrin building units is demonstrated to provide an easily up-scalable one-step method toward the deposition of a new class of dense and defect-free metal–organic covalent network (MOCN) layers. The resulting hyper-thin and flexible

  1. Requirements for Scalable Access Control and Security Management Architectures

    National Research Council Canada - National Science Library

    Keromytis, Angelos D; Smith, Jonathan M

    2005-01-01

    Maximizing local autonomy has led to a scalable Internet. Scalability and the capacity for distributed control have unfortunately not extended well to resource access control policies and mechanisms...

  2. The intergroup protocols: Scalable group communication for the internet

    Energy Technology Data Exchange (ETDEWEB)

    Berket, Karlo [Univ. of California, Santa Barbara, CA (United States)

    2000-12-04

    Reliable group ordered delivery of multicast messages in a distributed system is a useful service that simplifies the programming of distributed applications. Such a service helps to maintain the consistency of replicated information and to coordinate the activities of the various processes. With the increasing popularity of the Internet, there is an increasing interest in scaling the protocols that provide this service to the environment of the Internet. The InterGroup protocol suite, described in this dissertation, provides such a service, and is intended for the environment of the Internet with scalability to large numbers of nodes and high latency links. The InterGroup protocols approach the scalability problem from various directions. They redefine the meaning of group membership, allow voluntary membership changes, add a receiver-oriented selection of delivery guarantees that permits heterogeneity of the receiver set, and provide a scalable reliability service. The InterGroup system comprises several components, executing at various sites within the system. Each component provides part of the services necessary to implement a group communication system for the wide-area. The components can be categorized as: (1) control hierarchy, (2) reliable multicast, (3) message distribution and delivery, and (4) process group membership. We have implemented a prototype of the InterGroup protocols in Java, and have tested the system performance in both local-area and wide-area networks.

  3. Adaptive format conversion for scalable video coding

    Science.gov (United States)

    Wan, Wade K.; Lim, Jae S.

    2001-12-01

    The enhancement layer in many scalable coding algorithms is composed of residual coding information. There is another type of information that can be transmitted instead of (or in addition to) residual coding. Since the encoder has access to the original sequence, it can utilize adaptive format conversion (AFC) to generate the enhancement layer and transmit the different format conversion methods as enhancement data. This paper investigates the use of adaptive format conversion information as enhancement data in scalable video coding. Experimental results are shown for a wide range of base layer qualities and enhancement bitrates to determine when AFC can improve video scalability. Since the parameters needed for AFC are small compared to residual coding, AFC can provide video scalability at low enhancement layer bitrates that are not possible with residual coding. In addition, AFC can also be used in addition to residual coding to improve video scalability at higher enhancement layer bitrates. Adaptive format conversion has not been studied in detail, but many scalable applications may benefit from it. An example of an application that AFC is well-suited for is the migration path for digital television where AFC can provide immediate video scalability as well as assist future migrations.

  4. Continuous assessment of land mapping accuracy at High Resolution from global networks of atmospheric and field observatories -concept and demonstration

    Science.gov (United States)

    Sicard, Pierre; Martin-lauzer, François-regis

    2017-04-01

    In the context of global climate change and adjustment/resilience policies' design and implementation, there is a need not only i. for environmental monitoring, e.g. through a range of Earth Observations (EO) land "products" but ii. for a precise assessment of uncertainties of the aforesaid information that feed environmental decision-making (to be introduced in the EO metadata) and also iii. for a perfect handling of the thresholds which help translate "environment tolerance limits" to match detected EO changes through ecosystem modelling. Uncertainties' insight means precision and accuracy's knowledge and subsequent ability of setting thresholds for change detection systems. Traditionally, the validation of satellite-derived products has taken the form of intensive field campaigns to sanction the introduction of data processors in Payload Data Ground Segments chains. It is marred by logistical challenges and cost issues, which is why it is complemented by specific surveys at ground-based monitoring sites which can provide near-continuous observations at a high temporal resolution (e.g. RadCalNet). Unfortunately, most of the ground-level monitoring sites, numbering in the hundreds or thousands, which are part of wider observation networks (e.g. FLUXNET, NEON, IMAGINES) mainly monitor the state of the atmosphere and the radiation exchange at the surface, which are different to the products derived from EO data. In addition they are "point-based" compared to the EO cover to be obtained from Sentinel-2 or Sentinel-3. Yet, data from these networks, processed by spatial extrapolation models, are well-suited to the bottom-up approach and relevant to the validation of vegetation parameters' consistency (e.g. leaf area index, fraction of absorbed photosynthetically active radiation). Consistency means minimal errors on spatial and temporal gradients of EO products. Test of the procedure for land-cover products' consistency assessment with field measurements delivered by worldwide

  5. Fast volume reconstruction in positron emission tomography: Implementation of four algorithms on a high-performance scalable parallel platform

    International Nuclear Information System (INIS)

    Egger, M.L.; Scheurer, A.H.; Joseph, C.

    1996-01-01

    The issue of long reconstruction times in PET has been addressed from several points of view, resulting in an affordable dedicated system capable of handling routine 3D reconstruction in a few minutes per frame: on the hardware side using fast processors and a parallel architecture, and on the software side, using efficient implementations of computationally less intensive algorithms. Execution times obtained for the PRT-1 data set on a parallel system of five hybrid nodes, each combining an Alpha processor for computation and a transputer for communication, are the following (256 sinograms of 96 views by 128 radial samples): Ramp algorithm 56 s, Favor 81 s and reprojection algorithm of Kinahan and Rogers 187 s. The implementation of fast rebinning algorithms has shown our hardware platform to become communications-limited; they execute faster on a conventional single-processor Alpha workstation: single-slice rebinning 7 s, Fourier rebinning 22 s, 2D filtered backprojection 5 s. The scalability of the system has been demonstrated, and a saturation effect at network sizes above ten nodes has become visible; new T9000-based products lifting most of the constraints on network topology and link throughput are expected to result in improved parallel efficiency and scalability properties

  6. Formulation and demonstration of a robust mean variance optimization approach for concurrent airline network and aircraft design

    Science.gov (United States)

    Davendralingam, Navindran

    Conceptual design of aircraft and the airline network (routes) on which aircraft fly are inextricably linked to passenger driven demand. Many factors influence passenger demand for various Origin-Destination (O-D) city pairs including demographics, geographic location, seasonality, socio-economic factors and naturally, the operations of directly competing airlines. The expansion of airline operations involves the identification of appropriate aircraft to meet projected future demand. The decisions made in incorporating and subsequently allocating these new aircraft to serve air travel demand affect the inherent risk and profit potential as predicted through the airline revenue management systems. Competition between airlines then translates to latent passenger observations of the routes served between OD pairs and ticket pricing---this in effect reflexively drives future states of demand. This thesis addresses the integrated nature of aircraft design, airline operations and passenger demand, in order to maximize future expected profits as new aircraft are brought into service. The goal of this research is to develop an approach that utilizes aircraft design, airline network design and passenger demand as a unified framework to provide better integrated design solutions in order to maximize expected profits of an airline. This is investigated through two approaches. The first is a static model that poses the concurrent engineering paradigm above as an investment portfolio problem. Modern financial portfolio optimization techniques are used to leverage risk of serving future projected demand using a 'yet to be introduced' aircraft against potentially generated future profits. Robust optimization methodologies are incorporated to mitigate model sensitivity and address estimation risks associated with such optimization techniques. The second extends the portfolio approach to include dynamic effects of an airline's operations. A dynamic programming approach is

  7. Fast and scalable inequality joins

    KAUST Repository

    Khayyat, Zuhair

    2016-09-07

    Inequality joins, which join relations on inequality conditions, are used in various applications. Optimizing joins has been the subject of intensive research ranging from efficient join algorithms such as sort-merge join, to the use of efficient indices such as (Formula presented.)-tree, (Formula presented.)-tree and Bitmap. However, inequality joins have received little attention and queries containing such joins are notably very slow. In this paper, we introduce fast inequality join algorithms based on sorted arrays and space-efficient bit-arrays. We further introduce a simple method to estimate the selectivity of inequality joins which is then used to optimize multiple predicate queries and multi-way joins. Moreover, we study an incremental inequality join algorithm to handle scenarios where data keeps changing. We have implemented a centralized version of these algorithms on top of PostgreSQL, a distributed version on top of Spark SQL, and an existing data cleaning system, Nadeef. By comparing our algorithms against well-known optimization techniques for inequality joins, we show our solution is more scalable and several orders of magnitude faster. © 2016 Springer-Verlag Berlin Heidelberg
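
    As a rough intuition for why sorted arrays pay off with inequality predicates, the sketch below answers a single-condition join R.x < S.y by binary search over a sorted array; it is a deliberately simplified stand-in, not the multi-predicate, bit-array based algorithm of the paper, and the relation contents are made up.

```python
from bisect import bisect_left

def inequality_join_less(r_vals, s_vals):
    """Return all pairs (r, s) with r < s using a sorted array over R.

    Sorting R costs O(|R| log |R|); each s-value then locates its matching
    prefix of R by binary search instead of a full scan, which is the basic
    idea behind sorted-array inequality joins (single predicate only).
    """
    r_sorted = sorted(r_vals)
    result = []
    for s in s_vals:
        k = bisect_left(r_sorted, s)          # number of r values strictly < s
        result.extend((r, s) for r in r_sorted[:k])
    return result

print(inequality_join_less([5, 1, 9, 3], [4, 10]))
# [(1, 4), (3, 4), (1, 10), (3, 10), (5, 10), (9, 10)]
```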

  8. Scalable privacy-preserving big data aggregation mechanism

    Directory of Open Access Journals (Sweden)

    Dapeng Wu

    2016-08-01

    Full Text Available As the massive sensor data generated by large-scale Wireless Sensor Networks (WSNs) recently become an indispensable part of ‘Big Data’, the collection, storage, transmission and analysis of the big sensor data attract considerable attention from researchers. Targeting the privacy requirements of large-scale WSNs and focusing on the energy-efficient collection of big sensor data, a Scalable Privacy-preserving Big Data Aggregation (Sca-PBDA) method is proposed in this paper. Firstly, according to the pre-established gradient topology structure, sensor nodes in the network are divided into clusters. Secondly, sensor data is modified by each node according to the privacy-preserving configuration message received from the sink. Subsequently, intra- and inter-cluster data aggregation is employed during the big sensor data reporting phase to reduce energy consumption. Lastly, aggregated results are recovered by the sink to complete the privacy-preserving big data aggregation. Simulation results validate the efficacy and scalability of Sca-PBDA and show that the big sensor data generated by large-scale WSNs is efficiently aggregated to reduce network resource consumption and the sensor data privacy is effectively protected to meet the ever-growing application requirements.
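
    A common building block behind this kind of privacy-preserving aggregation is to let each sensor perturb its reading with a mask that cancels out when the cluster's reports are summed, so the sink (or cluster head) recovers only the aggregate. The sketch below shows that generic masking idea only; it is not the Sca-PBDA protocol, and the modulus, seed and readings are invented.

```python
import random

MOD = 2**32  # arithmetic is done modulo a large constant (illustrative)

def masked_reports(readings, seed=7):
    """Each node adds a random mask; the masks are constructed to sum to zero
    (mod MOD) across the cluster, so individual readings stay hidden while
    the cluster sum is exactly recoverable."""
    rng = random.Random(seed)
    masks = [rng.randrange(MOD) for _ in readings[:-1]]
    masks.append((-sum(masks)) % MOD)          # last mask cancels the others
    return [(r + m) % MOD for r, m in zip(readings, masks)]

readings = [21, 19, 23, 22]                    # hypothetical sensor values
reports = masked_reports(readings)
cluster_sum = sum(reports) % MOD               # aggregation at the cluster head
print(reports)                                 # individually meaningless values
print(cluster_sum, sum(readings))              # 85 85 -> aggregate preserved
```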

  9. Superlinearly scalable noise robustness of redundant coupled dynamical systems.

    Science.gov (United States)

    Kohar, Vivek; Kia, Behnam; Lindner, John F; Ditto, William L

    2016-03-01

    We illustrate through theory and numerical simulations that redundant coupled dynamical systems can be extremely robust against local noise in comparison to uncoupled dynamical systems evolving in the same noisy environment. Previous studies have shown that the noise robustness of redundant coupled dynamical systems is linearly scalable and deviations due to noise can be minimized by increasing the number of coupled units. Here, we demonstrate that the noise robustness can actually be scaled superlinearly if some conditions are met and very high noise robustness can be realized with very few coupled units. We discuss these conditions and show that this superlinear scalability depends on the nonlinearity of the individual dynamical units. The phenomenon is demonstrated in discrete as well as continuous dynamical systems. This superlinear scalability not only provides us an opportunity to exploit the nonlinearity of physical systems without being bogged down by noise but may also help us in understanding the functional role of coupled redundancy found in many biological systems. Moreover, engineers can exploit superlinear noise suppression by starting a coupled system near (not necessarily at) the appropriate initial condition.
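
    A toy experiment conveys the basic robustness effect of coupled redundancy, although it reproduces only the qualitative gain and not the superlinear scaling (which, per the record, depends on the nonlinearity and on specific conditions being met). The coupled noisy logistic maps, coupling strength and noise level below are illustrative choices, not the systems studied by the authors.

```python
import numpy as np

def unit_deviation(n_units, coupling, steps=2000, r=2.9, noise=1e-3, seed=0):
    """RMS deviation of one unit of a mean-field coupled, noisy logistic-map
    ensemble from the noise-free trajectory (toy model; illustrative only)."""
    rng = np.random.default_rng(seed)
    x = np.full(n_units, 0.4)
    ref = 0.4
    acc = 0.0
    for _ in range(steps):
        f = r * x * (1 - x) + noise * rng.standard_normal(n_units)
        x = (1 - coupling) * f + coupling * f.mean()   # diffusive mean-field coupling
        ref = r * ref * (1 - ref)                      # noise-free reference unit
        acc += (x[0] - ref) ** 2
    return np.sqrt(acc / steps)

for n in (1, 4, 16, 64):
    print(n, unit_deviation(n, coupling=0.0), unit_deviation(n, coupling=0.5))
```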

  10. Experimental performance evaluation of software defined networking (SDN) based data communication networks for large scale flexi-grid optical networks.

    Science.gov (United States)

    Zhao, Yongli; He, Ruiying; Chen, Haoran; Zhang, Jie; Ji, Yuefeng; Zheng, Haomian; Lin, Yi; Wang, Xinbo

    2014-04-21

    Software defined networking (SDN) has become the focus in the current information and communication technology area because of its flexibility and programmability. It has been introduced into various network scenarios, such as datacenter networks, carrier networks, and wireless networks. Optical transport network is also regarded as an important application scenario for SDN, which is adopted as the enabling technology of data communication networks (DCN) instead of general multi-protocol label switching (GMPLS). However, the practical performance of SDN based DCN for large scale optical networks, which is very important for the technology selection in the future optical network deployment, has not been evaluated up to now. In this paper we have built a large scale flexi-grid optical network testbed with 1000 virtual optical transport nodes to evaluate the performance of SDN based DCN, including network scalability, DCN bandwidth limitation, and restoration time. A series of network performance parameters including blocking probability, bandwidth utilization, average lightpath provisioning time, and failure restoration time have been demonstrated under various network environments, such as with different traffic loads and different DCN bandwidths. The demonstration in this work can be taken as a proof for the future network deployment.

  11. Scalable, full-colour and controllable chromotropic plasmonic printing

    Science.gov (United States)

    Xue, Jiancai; Zhou, Zhang-Kai; Wei, Zhiqiang; Su, Rongbin; Lai, Juan; Li, Juntao; Li, Chao; Zhang, Tengwei; Wang, Xue-Hua

    2015-01-01

    Plasmonic colour printing has drawn wide attention as a promising candidate for the next-generation colour-printing technology. However, an efficient approach to realize full colour and scalable fabrication is still lacking, which prevents plasmonic colour printing from practical applications. Here we present a scalable and full-colour plasmonic printing approach by combining conjugate twin-phase modulation with a plasmonic broadband absorber. More importantly, our approach also demonstrates controllable chromotropic capability, that is, the ability of reversible colour transformations. This chromotropic capability affords enormous potential in building functionalized prints for anticounterfeiting, special labels, and high-density data encryption storage. With such excellent performance in functional colour applications, this colour-printing approach could pave the way for plasmonic colour printing in real-world commercial utilization. PMID:26567803

  12. Embedded High Performance Scalable Computing Systems

    National Research Council Canada - National Science Library

    Ngo, David

    2003-01-01

    The Embedded High Performance Scalable Computing Systems (EHPSCS) program is a cooperative agreement between Sanders, A Lockheed Martin Company and DARPA that ran for three years, from Apr 1995 - Apr 1998...

  13. Scalable Fault-Tolerant Location Management Scheme for Mobile IP

    Directory of Open Access Journals (Sweden)

    JinHo Ahn

    2001-11-01

    Full Text Available As the number of mobile nodes registering with a network rapidly increases in Mobile IP, multiple mobility agents (home or foreign) can be allocated to a network in order to improve performance and availability. Previous fault-tolerant schemes (denoted PRT schemes) that mask failures of the mobility agents use passive replication techniques. However, they result in high failure-free latency during the registration process if the number of mobility agents in the same network increases, and force each mobility agent to manage bindings of all the mobile nodes registering with its network. In this paper, we present a new fault-tolerant scheme (denoted the CML scheme) using checkpointing and message logging techniques. The CML scheme achieves low failure-free latency even if the number of mobility agents in a network increases, and improves scalability to a large number of mobile nodes registering with each network compared with the PRT schemes. Additionally, the CML scheme allows each failed mobility agent to recover the bindings of the mobile nodes registering with it when it is repaired, even if all the other mobility agents in the same network fail concurrently.

  14. Scalable, incremental learning with MapReduce parallelization for cell detection in high-resolution 3D microscopy data

    KAUST Repository

    Sung, Chul

    2013-08-01

    Accurate estimation of neuronal count and distribution is central to the understanding of the organization and layout of cortical maps in the brain, and changes in the cell population induced by brain disorders. High-throughput 3D microscopy techniques such as Knife-Edge Scanning Microscopy (KESM) are enabling whole-brain survey of neuronal distributions. Data from such techniques pose serious challenges to quantitative analysis due to the massive, growing, and sparsely labeled nature of the data. In this paper, we present a scalable, incremental learning algorithm for cell body detection that can address these issues. Our algorithm is computationally efficient (linear mapping, non-iterative) and does not require retraining (unlike gradient-based approaches) or retention of old raw data (unlike instance-based learning). We tested our algorithm on our rat brain Nissl data set, showing superior performance compared to an artificial neural network-based benchmark, and also demonstrated robust performance in a scenario where the data set is rapidly growing in size. Our algorithm is also highly parallelizable due to its incremental nature, and we demonstrated this empirically using a MapReduce-based implementation of the algorithm. We expect our scalable, incremental learning approach to be widely applicable to medical imaging domains where there is a constant flux of new data. © 2013 IEEE.
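
    The record's combination of a non-iterative linear mapping with MapReduce parallelism can be illustrated by incremental linear least squares: each chunk of data contributes only the sufficient statistics XᵀX and Xᵀy, which can be merged by simple addition (map over chunks, reduce by summation) without retraining or retaining raw data. This generic sketch is not the authors' cell-detection pipeline; the dimensions and data are synthetic.

```python
import numpy as np

class IncrementalLinearMap:
    """Least-squares linear mapping Y ~ X W learned from streaming chunks.

    Only the d x d and d x k sufficient statistics are kept, so old raw data
    can be discarded, and partial statistics from parallel workers can simply
    be added together (map -> per-chunk stats, reduce -> summation).
    """
    def __init__(self, d, k):
        self.xtx = np.zeros((d, d))
        self.xty = np.zeros((d, k))

    def partial_fit(self, X, Y):
        self.xtx += X.T @ X
        self.xty += X.T @ Y
        return self

    def merge(self, other):
        self.xtx += other.xtx
        self.xty += other.xty
        return self

    def solve(self, ridge=1e-6):
        d = self.xtx.shape[0]
        return np.linalg.solve(self.xtx + ridge * np.eye(d), self.xty)

rng = np.random.default_rng(0)
W_true = rng.standard_normal((5, 1))
model = IncrementalLinearMap(5, 1)
for _ in range(10):                         # ten "incoming" chunks of data
    X = rng.standard_normal((100, 5))
    model.partial_fit(X, X @ W_true)
print(np.allclose(model.solve(), W_true, atol=1e-3))   # True
```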

  15. SuperLU_DIST: A scalable distributed-memory sparse direct solver for unsymmetric linear systems

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xiaoye S.; Demmel, James W.

    2002-03-27

    In this paper, we present the main algorithmic features in the software package SuperLU_DIST, a distributed-memory sparse direct solver for large sets of linear equations. We give in detail our parallelization strategies, with focus on scalability issues, and demonstrate the parallel performance and scalability on current machines. The solver is based on sparse Gaussian elimination, with an innovative static pivoting strategy proposed earlier by the authors. The main advantage of static pivoting over classical partial pivoting is that it permits a priori determination of data structures and communication pattern for sparse Gaussian elimination, which makes it more scalable on distributed memory machines. Based on this a priori knowledge, we designed highly parallel and scalable algorithms for both LU decomposition and triangular solve and we show that they are suitable for large-scale distributed memory machines.
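
    Readers who want to experiment with this style of sparse LU factorization can use SciPy's interface to the sequential SuperLU library; note this is not SuperLU_DIST and omits the static pivoting and distributed-memory machinery described above. A minimal usage sketch on a toy unsymmetric system:

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

# Small unsymmetric sparse system A x = b (toy example).
A = csc_matrix(np.array([[4.0, 1.0, 0.0],
                         [2.0, 5.0, 1.0],
                         [0.0, 3.0, 6.0]]))
b = np.array([1.0, 2.0, 3.0])

lu = splu(A)            # sparse LU factorization (sequential SuperLU)
x = lu.solve(b)         # triangular solves reuse the factorization
print(np.allclose(A @ x, b))   # True
```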

  16. Resource-aware complexity scalability for mobile MPEG encoding

    NARCIS (Netherlands)

    Mietens, S.O.; With, de P.H.N.; Hentschel, C.; Panchanatan, S.; Vasudev, B.

    2004-01-01

    Complexity scalability attempts to scale the required resources of an algorithm with the chosen quality settings, in order to broaden the application range. In this paper, we present complexity-scalable MPEG encoding in which the core processing modules are modified for scalability. Scalability is

  17. VPLS: an effective technology for building scalable transparent LAN services

    Science.gov (United States)

    Dong, Ximing; Yu, Shaohua

    2005-02-01

    Virtual Private LAN Service (VPLS) is generating considerable interest with enterprises and service providers as it offers multipoint transparent LAN service (TLS) over MPLS networks. This paper describes an effective technology - VPLS, which links virtual switch instances (VSIs) through MPLS to form an emulated Ethernet switch and build scalable transparent LAN services. It first focuses on the architecture of VPLS, with the Ethernet bridging technique at the edge and MPLS at the core, then it elucidates the data forwarding mechanism within a VPLS domain, including learning and aging MAC addresses on a per-LSP basis, flooding of unknown frames and replication for unknown, multicast, and broadcast frames. The loop-avoidance mechanism, known as split-horizon forwarding, is also analyzed. Another important aspect of the VPLS service, its basic operation, including autodiscovery and signaling, is discussed as well. From the perspective of efficiency and scalability the paper compares two important signaling mechanisms, BGP and LDP, which are used to set up a PW between the PEs and bind the PWs to a particular VSI. With the extension of VPLS and the growth of the full mesh of PWs between PE devices (n*(n-1)/2 PWs in all, i.e. O(n2) growth), a VPLS instance could have a large number of remote PE associations, resulting in an inefficient use of network bandwidth and system resources as the ingress PE has to replicate each frame and append MPLS labels for each remote PE. So the latter part of this paper focuses on the scalability issue: the Hierarchical VPLS. Within the architecture of HVPLS, this paper addresses two ways to cope with a possibly large number of MAC addresses, which makes VPLS operate more efficiently.
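
    The quadratic pseudowire growth noted above is easy to quantify, and the comparison with a two-tier HVPLS layout can be sketched in a few lines. The hub/spoke split below is an invented back-of-the-envelope example, not a measurement from the paper.

```python
def full_mesh_pws(n):
    """Pseudowires for a flat VPLS full mesh of n PE devices."""
    return n * (n - 1) // 2

def hierarchical_pws(hubs, spokes_per_hub):
    """Rough PW count for HVPLS: hub full mesh + one spoke PW per MTU."""
    return full_mesh_pws(hubs) + hubs * spokes_per_hub

for n in (10, 50, 100, 200):
    print(n, full_mesh_pws(n))          # 45, 1225, 4950, 19900
print(hierarchical_pws(10, 19))         # 235 PWs for ~200 devices in two tiers
```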

  18. Scalable Video Streaming Adaptive to Time-Varying IEEE 802.11 MAC Parameters

    Science.gov (United States)

    Lee, Kyung-Jun; Suh, Doug-Young; Park, Gwang-Hoon; Huh, Jae-Doo

    This letter proposes a QoS control method for video streaming service over wireless networks. Based on statistical analysis, the time-varying MAC parameters highly related to channel condition are selected to predict available bitrate. Adaptive bitrate control of scalably-encoded video guarantees continuity in streaming service even if the channel condition changes abruptly.

  19. Novel scheme for efficient and cost-effective forwarding of packets in optical networks without header modification

    DEFF Research Database (Denmark)

    Wessing, Henrik; Fjelde, Tina; Christiansen, Henrik Lehrmann

    2001-01-01

    We present a novel scheme for addressing the outputs in optical packet switches and demonstrate its good scalability. The scheme requires neither header modification nor distribution of routing information to the packet switches, thus reducing optical component count while simplifying network...

  20. Characteristics of networks in energy efficiency research, development and demonstration – a comparison of actors, technological domains and network structure in seven research areas

    DEFF Research Database (Denmark)

    Ruby, Tobias Møller

    2013-01-01

    The need for more energy efficient products and technologies has increased recently in connection with meeting today’s energy and environmental issues. Research, development and demonstration (RD&D) is one way of supporting technological innovation and knowledge diffusion ... efficiency research and development. The results show how certain knowledge institutions that connect the scientific knowledge with specific applications seem to be especially important for progress in the field. Overall the study enriches the understanding of RD&D in energy efficiency with a new view ...

  1. Scalable Video Coding with Interlayer Signal Decorrelation Techniques

    Directory of Open Access Journals (Sweden)

    Yang Wenxian

    2007-01-01

    Full Text Available Scalability is one of the essential requirements in the compression of visual data for present-day multimedia communications and storage. The basic building block for providing the spatial scalability in the scalable video coding (SVC standard is the well-known Laplacian pyramid (LP. An LP achieves the multiscale representation of the video as a base-layer signal at lower resolution together with several enhancement-layer signals at successive higher resolutions. In this paper, we propose to improve the coding performance of the enhancement layers through efficient interlayer decorrelation techniques. We first show that, with nonbiorthogonal upsampling and downsampling filters, the base layer and the enhancement layers are correlated. We investigate two structures to reduce this correlation. The first structure updates the base-layer signal by subtracting from it the low-frequency component of the enhancement layer signal. The second structure modifies the prediction in order that the low-frequency component in the new enhancement layer is diminished. The second structure is integrated in the JSVM 4.0 codec with suitable modifications in the prediction modes. Experimental results with some standard test sequences demonstrate coding gains up to 1 dB for I pictures and up to 0.7 dB for both I and P pictures.
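
    The Laplacian-pyramid layering that underlies SVC spatial scalability can be sketched in one dimension: the base layer is a downsampled signal and the enhancement layer is the residual left after upsampling the base back to full resolution. The averaging and repetition filters below are crude stand-ins chosen for brevity; the sketch shows only the layering, not the interlayer decorrelation structures proposed in the paper.

```python
import numpy as np

def lp_encode(x):
    """One-level Laplacian pyramid: base = 2x downsampled, enh = residual."""
    base = 0.5 * (x[0::2] + x[1::2])            # average-downsample
    up = np.repeat(base, 2)                     # simple upsampling filter
    enh = x - up                                # enhancement-layer residual
    return base, enh

def lp_decode(base, enh=None):
    up = np.repeat(base, 2)
    return up if enh is None else up + enh      # base-only or full quality

x = np.arange(16, dtype=float)
base, enh = lp_encode(x)
print(np.allclose(lp_decode(base, enh), x))     # True: exact with both layers
print(lp_decode(base)[:4])                      # coarser base-layer-only reconstruction
```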

  2. Scalability of Sustainable Business Models in Hybrid Organizations

    Directory of Open Access Journals (Sweden)

    Adam Jabłoński

    2016-02-01

    Full Text Available The dynamics of change in modern business create new mechanisms for company management to determine their pursuit and the achievement of their high performance. This performance maintained over a long period of time becomes a source of ensuring business continuity by companies. An ontological being enabling the adoption of such assumptions is such a business model that has the ability to generate results in every possible market situation and, moreover, it has the features of permanent adaptability. A feature that describes the adaptability of the business model is its scalability. Being a factor ensuring more work and more efficient work with an increasing number of components, scalability can be applied to the concept of business models as the company’s ability to maintain similar or higher performance through it. Ensuring the company’s performance in the long term helps to build the so-called sustainable business model that often balances the objectives of stakeholders and shareholders, and that is created by the implemented principles of value-based management and corporate social responsibility. This perception of business paves the way for building hybrid organizations that integrate business activities with pro-social ones. The combination of an approach typical of hybrid organizations in designing and implementing sustainable business models pursuant to the scalability criterion seems interesting from the cognitive point of view. Today, hybrid organizations are great spaces for building effective and efficient mechanisms for dialogue between business and society. This requires the appropriate business model. The purpose of the paper is to present the conceptualization and operationalization of scalability of sustainable business models that determine the performance of a hybrid organization in the network environment. The paper presents the original concept of applying scalability in sustainable business models with detailed

  3. ENDEAVOUR: A Scalable SDN Architecture for Real-World IXPs

    KAUST Repository

    Antichi, Gianni; Castro, Ignacio; Chiesa, Marco; Fernandes, Eder L.; Lapeyrade, Remy; Kopp, Daniel; Han, Jong Hun; Bruyere, Marc; Dietzel, Christoph; Gusat, Mitchell; Moore, Andrew W.; Owezarski, Philippe; Uhlig, Steve; Canini, Marco

    2017-01-01

    , hundreds of networks. Given their far-reaching implications on interdomain routing, IXPs are the ideal place to foster network innovation and extend the benefits of SDN to the interdomain level. In this paper, we present, evaluate, and demonstrate ENDEAVOUR

  4. Overview of the Scalable Coherent Interface, IEEE STD 1596 (SCI)

    International Nuclear Information System (INIS)

    Gustavson, D.B.; James, D.V.; Wiggers, H.A.

    1992-10-01

    The Scalable Coherent Interface standard defines a new generation of interconnection that spans the full range from supercomputer memory 'bus' to campus-wide network. SCI provides bus-like services and a shared-memory software model while using an underlying packet protocol on many independent communication links. Initially these links are 1 GByte/s (wires) and 1 GBit/s (fiber), but the protocol scales well to future faster or lower-cost technologies. The interconnect may use switches, meshes, and rings. The SCI distributed-shared-memory model is simple and versatile, enabling for the first time a smooth integration of highly parallel multiprocessors, workstations, personal computers, I/O, networking and data acquisition

  5. CASTOR: Widely Distributed Scalable Infospaces

    Science.gov (United States)

    2008-11-01


  6. Integration in primary community care networks (PCCNs: examination of governance, clinical, marketing, financial, and information infrastructures in a national demonstration project in Taiwan

    Directory of Open Access Journals (Sweden)

    Lin Blossom Yen-Ju

    2007-06-01

    Full Text Available Abstract Background Taiwan's primary community care network (PCCN) demonstration project, funded by the Bureau of National Health Insurance in March 2003, was established to discourage hospital shopping behavior of people and drive the traditional fragmented health care providers into cooperative care models. Between 2003 and 2005, 268 PCCNs were established. This study profiled the individual members in the PCCNs to study the nature and extent to which their network infrastructures have been integrated among the members (clinics and hospitals) within individual PCCNs. Methods The thorough questionnaire items, covering the network working infrastructures – governance, clinical, marketing, financial, and information integration in PCCNs, were developed with validity and reliability confirmed. One thousand five hundred and fifty-seven clinics that had belonged to PCCNs for more than one year, based on the 2003–2005 Taiwan Primary Community Care Network List, were surveyed by mail. Nine hundred and twenty-eight clinic members responded to the surveys, giving a 59.6 % response rate. Results Overall, the PCCNs' members had higher involvement in the governance infrastructure, which was usually viewed as the most important for establishment of core values in PCCNs' organization design and management at the early integration stage. In addition, the study found a higher extent of integration of clinical, marketing, and information infrastructures in the hospital-clinic member relationships than among clinic members within individual PCCNs. The financial infrastructure was shown to be the least integrated relative to other functional infrastructures at the early stage of PCCN formation. Conclusion There was still room for better integrated partnerships, as evidenced by the great variety of relationships and differences in extent of integration in this study. In addition to provide how the network members have done for their initial work at

  7. A scalable infrastructure model for carbon capture and storage: SimCCS

    International Nuclear Information System (INIS)

    Middleton, Richard S.; Bielicki, Jeffrey M.

    2009-01-01

    In the carbon capture and storage (CCS) process, CO2 sources and geologic reservoirs may be widely spatially dispersed and need to be connected through a dedicated CO2 pipeline network. We introduce a scalable infrastructure model for CCS (simCCS) that generates a fully integrated, cost-minimizing CCS system. SimCCS determines where and how much CO2 to capture and store, and where to build and connect pipelines of different sizes, in order to minimize the combined annualized costs of sequestering a given amount of CO2. SimCCS is able to aggregate CO2 flows between sources and reservoirs into trunk pipelines that take advantage of economies of scale. Pipeline construction costs take into account factors including topography and social impacts. SimCCS can be used to calculate the scale of CCS deployment (local, regional, national). SimCCS' deployment of a realistic, capacitated pipeline network is a major advancement for planning CCS infrastructure. We demonstrate simCCS using a set of 37 CO2 sources and 14 reservoirs for California. The results highlight the importance of systematic planning for CCS infrastructure by examining the sensitivity of CCS infrastructure, as optimized by simCCS, to varying CO2 targets. We finish by identifying critical future research areas for CCS infrastructure
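
    At a much smaller scale, the routing core of such an infrastructure model can be approximated as a minimum-cost flow problem: ship captured CO2 from sources to reservoirs over candidate arcs with capacities and per-unit costs. The sketch below uses NetworkX's min-cost-flow solver on an invented three-node example and ignores the fixed construction costs, discrete pipeline sizing and capture decisions that SimCCS actually co-optimizes.

```python
import networkx as nx

G = nx.DiGraph()
# Negative demand = supply (CO2 sources), positive demand = sink (reservoir).
G.add_node("src_A", demand=-10)     # captures 10 units of CO2
G.add_node("src_B", demand=-5)
G.add_node("reservoir", demand=15)
# Candidate pipeline arcs: capacity and per-unit transport cost (made up).
G.add_edge("src_A", "junction", capacity=12, weight=3)
G.add_edge("src_B", "junction", capacity=8, weight=2)
G.add_edge("junction", "reservoir", capacity=20, weight=4)
G.add_edge("src_A", "reservoir", capacity=4, weight=9)   # direct but expensive

flow = nx.min_cost_flow(G)
print(flow)   # flows aggregate onto the cheaper trunk via the junction
```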

  8. Declarative and Scalable Selection for Map Visualizations

    DEFF Research Database (Denmark)

    Kefaloukos, Pimin Konstantin Balic

    and is itself a source and cause of prolific data creation. This calls for scalable map processing techniques that can handle the data volume and which play well with the predominant data models on the Web. (4) Maps are now consumed around the clock by a global audience. While historical maps were single-user ... -defined constraints as well as custom objectives. The purpose of the language is to derive a target multi-scale database from a source database according to holistic specifications. (b) The Glossy SQL compiler allows Glossy SQL to be scalably executed in a spatial analytics system, such as a spatial relational ... , there are indications that the method is scalable for databases that contain millions of records, especially if the target language of the compiler is substituted by a cluster-ready variant of SQL. While several realistic use cases for maps have been implemented in CVL, additional non-geographic data visualization uses...

  9. Scalable Density-Based Subspace Clustering

    DEFF Research Database (Denmark)

    Müller, Emmanuel; Assent, Ira; Günnemann, Stephan

    2011-01-01

    For knowledge discovery in high dimensional databases, subspace clustering detects clusters in arbitrary subspace projections. Scalability is a crucial issue, as the number of possible projections is exponential in the number of dimensions. We propose a scalable density-based subspace clustering...... method that steers mining to few selected subspace clusters. Our novel steering technique reduces subspace processing by identifying and clustering promising subspaces and their combinations directly. Thereby, it narrows down the search space while maintaining accuracy. Thorough experiments on real...... and synthetic databases show that steering is efficient and scalable, with high quality results. For future work, our steering paradigm for density-based subspace clustering opens research potential for speeding up other subspace clustering approaches as well....

  10. Process efficiency. Redesigning social networks to improve surgery patient flow.

    Science.gov (United States)

    Samarth, Chandrika N; Gloor, Peter A

    2009-01-01

    We propose a novel approach to improve throughput of the surgery patient flow process of a Boston area teaching hospital. A social network analysis was conducted in an effort to demonstrate that process efficiency gains could be achieved through redesign of social network patterns at the workplace; in conjunction with redesign of organization structure and the implementation of workflow over an integrated information technology system. Key knowledge experts and coordinators in times of crisis were identified and a new communication structure more conducive to trust and knowledge sharing was suggested. The new communication structure is scalable without compromising on coordination required among key roles in the network for achieving efficiency gains.

  11. Asynchronous Checkpoint Migration with MRNet in the Scalable Checkpoint / Restart Library

    Energy Technology Data Exchange (ETDEWEB)

    Mohror, K; Moody, A; de Supinski, B R

    2012-03-20

    Applications running on today's supercomputers tolerate failures by periodically saving their state in checkpoint files on stable storage, such as a parallel file system. Although this approach is simple, the overhead of writing the checkpoints can be prohibitive, especially for large-scale jobs. In this paper, we present initial results of an enhancement to our Scalable Checkpoint/Restart Library (SCR). We employ MRNet, a tree-based overlay network library, to transfer checkpoints from the compute nodes to the parallel file system asynchronously. This enhancement increases application efficiency by removing the need for an application to block while checkpoints are transferred to the parallel file system. We show that the integration of SCR with MRNet can reduce the time spent in I/O operations by as much as 15x. However, our experiments exposed new scalability issues with our initial implementation. We discuss the sources of the scalability problems and our plans to address them.
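
    The overlap of computation with checkpoint draining that motivates this work can be sketched with nothing more than a queue and a background thread: the application writes checkpoints quickly to node-local storage and hands them off, while a worker copies them to slower storage. This schematic uses neither SCR nor MRNet, and the directory names below merely stand in for node-local storage and the parallel file system.

```python
import queue, shutil, threading, time
from pathlib import Path

local_dir = Path("node_local_ckpt"); local_dir.mkdir(exist_ok=True)
pfs_dir = Path("parallel_fs_ckpt"); pfs_dir.mkdir(exist_ok=True)
pending = queue.Queue()

def flusher():
    """Background transfer of checkpoints to the (slow) parallel file system."""
    while True:
        ckpt = pending.get()
        shutil.copy(ckpt, pfs_dir / ckpt.name)   # stand-in for the asynchronous transfer
        pending.task_done()

threading.Thread(target=flusher, daemon=True).start()

for step in range(3):                  # the "application" time-step loop
    time.sleep(0.1)                    # compute work
    ckpt = local_dir / f"ckpt_{step}.dat"
    ckpt.write_bytes(b"state" * 1000)  # fast write to node-local storage
    pending.put(ckpt)                  # hand off; the application does not block
pending.join()                         # drain remaining transfers at the end
print(sorted(p.name for p in pfs_dir.iterdir()))
```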

  12. Enhancing Scalability of Sparse Direct Methods

    International Nuclear Information System (INIS)

    Li, Xiaoye S.; Demmel, James; Grigori, Laura; Gu, Ming; Xia, Jianlin; Jardin, Steve; Sovinec, Carl; Lee, Lie-Quan

    2007-01-01

    TOPS is providing high-performance, scalable sparse direct solvers, which have had significant impacts on the SciDAC applications, including fusion simulation (CEMM), accelerator modeling (COMPASS), as well as many other mission-critical applications in DOE and elsewhere. Our recent developments have been focusing on new techniques to overcome scalability bottleneck of direct methods, in both time and memory. These include parallelizing symbolic analysis phase and developing linear-complexity sparse factorization methods. The new techniques will make sparse direct methods more widely usable in large 3D simulations on highly-parallel petascale computers

  13. Software performance and scalability a quantitative approach

    CERN Document Server

    Liu, Henry H

    2009-01-01

    Praise from the Reviewers: "The practicality of the subject in a real-world situation distinguishes this book from others available on the market." —Professor Behrouz Far, University of Calgary. "This book could replace the computer organization texts now in use that every CS and CpE student must take. . . . It is much needed, well written, and thoughtful." —Professor Larry Bernstein, Stevens Institute of Technology. A distinctive, educational text on software performance and scalability. This is the first book to take a quantitative approach to the subject of software performance and scalability

  14. From Digital Disruption to Business Model Scalability

    DEFF Research Database (Denmark)

    Nielsen, Christian; Lund, Morten; Thomsen, Peter Poulsen

    2017-01-01

    This article discusses the terms disruption, digital disruption, business models and business model scalability. It illustrates how managers should be using these terms for the benefit of their business by developing business models capable of achieving exponentially increasing returns to scale as a response to digital disruption. A series of case studies illustrate that besides frequent existing messages in the business literature relating to the importance of creating agile businesses, both in growing and declining economies, as well as hard to copy value propositions or value propositions that take ... will seldom lead to business model scalability capable of competing with digital disruption(s).

  15. Scalable error correction in distributed ion trap computers

    International Nuclear Information System (INIS)

    Oi, Daniel K. L.; Devitt, Simon J.; Hollenberg, Lloyd C. L.

    2006-01-01

    A major challenge for quantum computation in ion trap systems is scalable integration of error correction and fault tolerance. We analyze a distributed architecture with rapid high-fidelity local control within nodes and entangled links between nodes alleviating long-distance transport. We demonstrate fault-tolerant operator measurements which are used for error correction and nonlocal gates. This scheme is readily applied to linear ion traps which cannot be scaled up beyond a few ions per individual trap but which have access to a probabilistic entanglement mechanism. A proof-of-concept system is presented which is within the reach of current experiments

  16. Empirical Evaluation of Superposition Coded Multicasting for Scalable Video

    KAUST Repository

    Chun Pong Lau

    2013-03-01

    In this paper we investigate cross-layer superposition coded multicast (SCM). Previous studies have proven its effectiveness in exploiting better channel capacity and service granularities via both analytical and simulation approaches. However, it has never been practically implemented using a commercial 4G system. This paper demonstrates our prototype in achieving the SCM using a standard 802.16 based testbed for scalable video transmissions. In particular, to implement the superposition coded (SPC) modulation, we take advantage of a novel software approach, namely logical SPC (L-SPC), which aims to mimic the physical layer superposition coded modulation. The emulation results show improved throughput compared with the generic multicast method.
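
    The physical-layer technique being mimicked in software here, superposition coded modulation, can be illustrated with a two-layer BPSK toy example: the base layer gets most of the power, the enhancement layer is superimposed with less, and the receiver decodes and cancels the strong layer before decoding the weak one. The power split, noise level and mapping below are illustrative assumptions unrelated to the 802.16 testbed parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10000
base = rng.integers(0, 2, n)        # base-layer bits (robust video layer)
enh = rng.integers(0, 2, n)         # enhancement-layer bits
p_base, p_enh = 0.8, 0.2            # superposition power split (illustrative)

tx = np.sqrt(p_base) * (2*base - 1) + np.sqrt(p_enh) * (2*enh - 1)
rx = tx + 0.1 * rng.standard_normal(n)                # AWGN channel

base_hat = (rx > 0).astype(int)                       # decode the strong layer first
residual = rx - np.sqrt(p_base) * (2*base_hat - 1)    # cancel it (SIC)
enh_hat = (residual > 0).astype(int)                  # then decode the weak layer

print("base BER:", np.mean(base_hat != base))
print("enh  BER:", np.mean(enh_hat != enh))
```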

  17. Content-Aware Scalability-Type Selection for Rate Adaptation of Scalable Video

    Directory of Open Access Journals (Sweden)

    Tekalp A Murat

    2007-01-01

    Full Text Available Scalable video coders provide different scaling options, such as temporal, spatial, and SNR scalabilities, where rate reduction by discarding enhancement layers of different scalability-type results in different kinds and/or levels of visual distortion depending on the content and bitrate. This dependency between scalability type, video content, and bitrate is not well investigated in the literature. To this effect, we first propose an objective function that quantifies flatness, blockiness, blurriness, and temporal jerkiness artifacts caused by rate reduction by spatial size, frame rate, and quantization parameter scaling. Next, the weights of this objective function are determined for different content (shot types) and different bitrates using a training procedure with subjective evaluation. Finally, a method is proposed for choosing the best scaling type for each temporal segment that results in minimum visual distortion according to this objective function given the content type of temporal segments. Two subjective tests have been performed to validate the proposed procedure for content-aware selection of the best scalability type on soccer videos. Soccer videos scaled from 600 kbps to 100 kbps by the proposed content-aware selection of scalability type have been found visually superior to those that are scaled using a single scalability option over the whole sequence.
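
    Once per-option artifact scores and content-dependent weights are available, the selection step described above reduces to picking the option with the smallest weighted sum per temporal segment. The option names, scores and weights below are invented placeholders; in the paper the weights come from subjective training.

```python
# Hypothetical per-option artifact scores for one temporal segment:
# (flatness, blockiness, blurriness, temporal jerkiness), lower is better.
options = {
    "SNR_scaling":      (0.30, 0.60, 0.20, 0.10),
    "spatial_scaling":  (0.20, 0.10, 0.70, 0.10),
    "temporal_scaling": (0.10, 0.10, 0.20, 0.80),
}

# Content-dependent weights, e.g. a fast-motion shot penalizes jerkiness heavily.
weights = {"fast_motion": (0.2, 0.3, 0.2, 0.9),
           "static_shot": (0.3, 0.3, 0.5, 0.1)}

def best_scaling(shot_type):
    w = weights[shot_type]
    cost = {name: sum(wi * si for wi, si in zip(w, s)) for name, s in options.items()}
    return min(cost, key=cost.get), cost

print(best_scaling("fast_motion"))   # jerky temporal scaling is avoided
print(best_scaling("static_shot"))   # frame-rate reduction becomes acceptable
```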

  18. Optimizing Nanoelectrode Arrays for Scalable Intracellular Electrophysiology.

    Science.gov (United States)

    Abbott, Jeffrey; Ye, Tianyang; Ham, Donhee; Park, Hongkun

    2018-03-20

    Electrode technology for electrophysiology has a long history of innovation, with some decisive steps including the development of the voltage-clamp measurement technique by Hodgkin and Huxley in the 1940s and the invention of the patch clamp electrode by Neher and Sakmann in the 1970s. The high-precision intracellular recording enabled by the patch clamp electrode has since been a gold standard in studying the fundamental cellular processes underlying the electrical activities of neurons and other excitable cells. One logical next step would then be to parallelize these intracellular electrodes, since simultaneous intracellular recording from a large number of cells will benefit the study of complex neuronal networks and will increase the throughput of electrophysiological screening from basic neurobiology laboratories to the pharmaceutical industry. Patch clamp electrodes, however, are not built for parallelization; as for now, only ∼10 patch measurements in parallel are possible. It has long been envisioned that nanoscale electrodes may help meet this challenge. First, nanoscale electrodes were shown to enable intracellular access. Second, because their size scale is within the normal reach of the standard top-down fabrication, the nanoelectrodes can be scaled into a large array for parallelization. Third, such a nanoelectrode array can be monolithically integrated with complementary metal-oxide semiconductor (CMOS) electronics to facilitate the large array operation and the recording of the signals from a massive number of cells. These are some of the central ideas that have motivated the research activity into nanoelectrode electrophysiology, and these past years have seen fruitful developments. This Account aims to synthesize these findings so as to provide a useful reference. Summing up from the recent studies, we will first elucidate the morphology and associated electrical properties of the interface between a nanoelectrode and a cellular membrane

  19. Responsive, Flexible and Scalable Broader Impacts (Invited)

    Science.gov (United States)

    Decharon, A.; Companion, C.; Steinman, M.

    2010-12-01

    In many educator professional development workshops, scientists present content in a slideshow-type format and field questions afterwards. Drawbacks of this approach include: inability to begin the lecture with content that is responsive to audience needs; lack of flexible access to specific material within the linear presentation; and “Q&A” sessions are not easily scalable to broader audiences. Often this type of traditional interaction provides little direct benefit to the scientists. The Centers for Ocean Sciences Education Excellence - Ocean Systems (COSEE-OS) applies the technique of concept mapping with demonstrated effectiveness in helping scientists and educators “get on the same page” (deCharon et al., 2009). A key aspect is scientist professional development geared towards improving face-to-face and online communication with non-scientists. COSEE-OS promotes scientist-educator collaboration, tests the application of scientist-educator maps in new contexts through webinars, and is piloting the expansion of maps as long-lived resources for the broader community. Collaboration - COSEE-OS has developed and tested a workshop model bringing scientists and educators together in a peer-oriented process, often clarifying common misconceptions. Scientist-educator teams develop online concept maps that are hyperlinked to “assets” (i.e., images, videos, news) and are responsive to the needs of non-scientist audiences. In workshop evaluations, 91% of educators said that the process of concept mapping helped them think through science topics and 89% said that concept mapping helped build a bridge of communication with scientists (n=53). Application - After developing a concept map, with COSEE-OS staff assistance, scientists are invited to give webinar presentations that include live “Q&A” sessions. The webinars extend the reach of scientist-created concept maps to new contexts, both geographically and topically (e.g., oil spill), with a relatively small

  20. The Node Monitoring Component of a Scalable Systems Software Environment

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Samuel James [Iowa State Univ., Ames, IA (United States)

    2006-01-01

    This research describes Fountain, a suite of programs used to monitor the resources of a cluster. A cluster is a collection of individual computers that are connected via a high speed communication network. They are traditionally used by users who desire more resources, such as processing power and memory, than any single computer can provide. A common drawback to effectively utilizing such a large-scale system is the management infrastructure, which often does not scale well as the system grows. Large-scale parallel systems provide new research challenges in the area of systems software, the programs or tools that manage the system from boot-up to running a parallel job. The approach presented in this thesis utilizes a collection of separate components that communicate with each other to achieve a common goal. While systems software comprises a broad array of components, this thesis focuses on the design choices for a node monitoring component. We will describe Fountain, an implementation of the Scalable Systems Software (SSS) node monitor specification. It is targeted at aggregate node monitoring for clusters, focusing on both scalability and fault tolerance as its design goals. It leverages widely used technologies such as XML and HTTP to present an interface to other components in the SSS environment.
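
    A minimal flavour of serving node metrics over HTTP as XML, one ingredient of the kind of monitoring component described here, can be given with the Python standard library on a Unix-like node. This is a schematic of the general idea only, not Fountain or the SSS node-monitor interface; the metric names and port are invented.

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class NodeStatusHandler(BaseHTTPRequestHandler):
    """Serve a small XML document describing this node's current load."""
    def do_GET(self):
        load1, load5, load15 = os.getloadavg()
        body = (
            "<?xml version='1.0'?>"
            f"<node name='{os.uname().nodename}'>"
            f"<loadavg one='{load1:.2f}' five='{load5:.2f}' fifteen='{load15:.2f}'/>"
            "</node>"
        ).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/xml")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # An aggregator could poll many such endpoints and merge the XML reports.
    HTTPServer(("", 8080), NodeStatusHandler).serve_forever()
```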

  1. Using scalable vector graphics to evolve art

    NARCIS (Netherlands)

    den Heijer, E.; Eiben, A. E.

    2016-01-01

    In this paper, we describe our investigations of the use of scalable vector graphics as a genotype representation in evolutionary art. We describe the technical aspects of using SVG in evolutionary art, and explain our custom, SVG-specific operators: initialisation, mutation and crossover. We perform

  2. Scalable fast multipole accelerated vortex methods

    KAUST Repository

    Hu, Qi; Gumerov, Nail A.; Yokota, Rio; Barba, Lorena A.; Duraiswami, Ramani

    2014-01-01

    -node communication and load balance efficiently, with only a small parallel construction overhead. This algorithm can scale to large-sized clusters showing both strong and weak scalability. Careful error and timing trade-off analysis are also performed for the cutoff

  3. Scalable Domain Decomposed Monte Carlo Particle Transport

    Energy Technology Data Exchange (ETDEWEB)

    O' Brien, Matthew Joseph [Univ. of California, Davis, CA (United States)

    2013-12-05

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.

  4. Demonstrating the use of web analytics and an online survey to understand user groups of a national network of river level data

    Science.gov (United States)

    Macleod, Christopher Kit; Braga, Joao; Arts, Koen; Ioris, Antonio; Han, Xiwu; Sripada, Yaji; van der Wal, Rene

    2016-04-01

    The number of local, national and international networks of online environmental sensors is rapidly increasing. Where environmental data are made available online for public consumption, there is a need to advance our understanding of the relationships between the supply of and the different demands for such information. Understanding how individuals and groups of users are using online information resources may provide valuable insights into their activities and decision making. As part of the 'dot.rural wikiRivers' project we investigated the potential of web analytics and an online survey to generate insights into the use of a national network of river level data from across Scotland. These sources of online information were collected alongside phone interviews with volunteers sampled from the online survey, and interviews with providers of online river level data; as part of a larger project that set out to help improve the communication of Scotland's online river data. Our web analytics analysis was based on over 100 online sensors which are maintained by the Scottish Environmental Protection Agency (SEPA). Through use of Google Analytics data accessed via the R Ganalytics package we assessed: if the quality of data provided by the Google Analytics free service is good enough for research purposes; if we could demonstrate what sensors were being used, when and where; how the nature and pattern of sensor data may affect web traffic; and whether we can identify and profile these users based on information from traffic sources. Web analytics data consists of a series of quantitative metrics which capture and summarize various dimensions of the traffic to a certain web page or set of pages. Examples of commonly used metrics include the number of total visits to a site and the number of total page views. Our analyses of the traffic sources from 2009 to 2011 identified several different major user groups. To improve our understanding of how the use of this national

  5. Low-cost scalable quartz crystal microbalance array for environmental sensing

    Energy Technology Data Exchange (ETDEWEB)

    Anazagasty, Cristain [University of Puerto Rico; Hianik, Tibor [Comenius University, Bratislava, Slovakia; Ivanov, Ilia N [ORNL

    2016-01-01

    Proliferation of environmental sensors for internet of things (IoT) applications has increased the need for low-cost platforms capable of accommodating multiple sensors. Quartz crystal microbalance (QCM) crystals coated with nanometer-thin sensor films are suitable for use in high-resolution (~1 ng) selective gas sensor applications. We demonstrate a scalable array for measuring the frequency response of six QCM sensors controlled by low-cost Arduino microcontrollers and a USB multiplexer. Gas pulses and data acquisition were controlled by a LabVIEW user interface. We test the sensor array by measuring the frequency shift of crystals coated with different compositions of polymer composites based on poly(3,4-ethylenedioxythiophene):polystyrene sulfonate (PEDOT:PSS) while the films are exposed to water vapor and oxygen inside a controlled environmental chamber. Our sensor array exhibits comparable performance to that of a commercial QCM system, while enabling high-throughput testing of six QCMs for under $1,000. We use deep neural network structures to process the sensor response and demonstrate that the QCM array is suitable for gas sensing, environmental monitoring, and electronic-nose applications.
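
    The record above quotes ~1 ng resolution; as a hedged aside (standard quartz constants, not code from the paper), the usual way to convert a measured frequency shift into an areal mass change for a thin rigid film is the Sauerbrey relation, sketched below for a nominal 5 MHz crystal.

```python
# A hedged sketch: Sauerbrey conversion of a QCM frequency shift to mass change.
import math

def sauerbrey_mass_change(delta_f_hz, f0_hz=5e6, area_cm2=1.0):
    """Return the mass change in nanograms for a frequency shift in Hz."""
    rho_q = 2.648      # quartz density, g/cm^3
    mu_q = 2.947e11    # quartz shear modulus, g/(cm*s^2)
    c = area_cm2 * math.sqrt(rho_q * mu_q) / (2.0 * f0_hz ** 2)  # sensitivity, g per Hz
    return -c * delta_f_hz * 1e9  # negative frequency shift implies mass uptake

print(f"{sauerbrey_mass_change(-10.0):.1f} ng")  # about +177 ng for a -10 Hz shift on a 5 MHz crystal
```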

  6. Heterogeneous Network Convergence with Artificial Mapping for Cognitive Radio Networks

    Directory of Open Access Journals (Sweden)

    Hang QIN

    2013-04-01

    Full Text Available An artificial mapping scheme is proposed in this paper for adaptive network collaboration in cognitive radio networks. The analysis exploits the link-state aggregation property of a DHT-based overlay, which establishes global convergence of link-state aggregation messages among a scalable number of nodes. In addition, fuzzy logic inference, which better handles uncertainty, fuzziness, and incomplete information in node convergence reports, is developed as a novel approach to aggregating wireless node control with affordable message overhead. The Artificial Mapping Tree (AMT) underlying the new convergence scheme is verified by simulation and experimental results, which demonstrate a moderate increase in network throughput under proactive spectrum coordination.

  7. Automation of multi-agent control for complex dynamic systems in heterogeneous computational network

    Science.gov (United States)

    Oparin, Gennady; Feoktistov, Alexander; Bogdanova, Vera; Sidorov, Ivan

    2017-01-01

    The rapid progress of high-performance computing entails new challenges in solving large scientific problems from various subject domains in a heterogeneous distributed computing environment (e.g., a network, Grid system, or Cloud infrastructure). Specialists in parallel and distributed computing pay particular attention to the scalability of applications for problem solving, yet effective management of a scalable application in a heterogeneous distributed computing environment remains a non-trivial issue; control systems that operate in networks are especially affected. We propose a new approach to multi-agent management of scalable applications in a heterogeneous computational network. Our approach rests on the integrated use of conceptual programming, simulation modeling, network monitoring, multi-agent management, and service-oriented programming, and we have developed a special framework to automate problem solving. The advantages of the proposed approach are demonstrated on an example of parametric synthesis of a static linear regulator for complex dynamic systems. Benefits of the scalable application for this problem include automated multi-agent control of the systems in parallel mode with various degrees of detail.

  8. Scalable Nernst thermoelectric power using a coiled galfenol wire

    Science.gov (United States)

    Yang, Zihao; Codecido, Emilio A.; Marquez, Jason; Zheng, Yuanhua; Heremans, Joseph P.; Myers, Roberto C.

    2017-09-01

    The Nernst thermopower is usually considered far too weak in most metals for waste heat recovery. However, its transverse orientation gives it an advantage over the Seebeck effect on non-flat surfaces. Here, we experimentally demonstrate the scalable generation of a Nernst voltage in an air-cooled metal wire coiled around a hot cylinder. In this geometry, a radial temperature gradient generates an azimuthal electric field in the coil. A Galfenol (Fe0.85Ga0.15) wire is wrapped around a cartridge heater, and the voltage drop across the wire is measured as a function of axial magnetic field. As expected, the Nernst voltage scales linearly with the length of the wire. Based on heat conduction and fluid dynamics equations, a finite-element method is used to calculate the temperature gradient across the Galfenol wire and determine the Nernst coefficient. A giant Nernst coefficient of -2.6 μV/(K T) at room temperature is estimated, in agreement with measurements on bulk Galfenol. We expect that the giant Nernst effect in Galfenol arises from its magnetostriction, presumably through enhanced magnon-phonon coupling. Our results demonstrate the feasibility of a transverse thermoelectric generator capable of scalable output power from non-flat heat sources.
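
    As a hedged back-of-the-envelope companion to the record above: in this transverse geometry the electric field along the wire is roughly the Nernst coefficient times the magnetic field times the radial temperature gradient, so the open-circuit voltage grows linearly with wire length. The numbers below are purely illustrative, not the paper's data.

```python
# A hedged sketch: linear scaling of the Nernst voltage with wire length.
def nernst_voltage(n_coeff_uv_per_kt, b_tesla, dT_dr_k_per_m, wire_length_m):
    e_field = n_coeff_uv_per_kt * 1e-6 * b_tesla * dT_dr_k_per_m  # V/m along the wire
    return e_field * wire_length_m

# Illustrative values: |N| = 2.6 uV/(K T), 1 T field, 1000 K/m radial gradient, 1 m of wire.
print(nernst_voltage(2.6, 1.0, 1000.0, 1.0))  # -> about 2.6e-3 V
```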

  9. Towards Scalable Strain Gauge-Based Joint Torque Sensors

    Science.gov (United States)

    D’Imperio, Mariapaola; Cannella, Ferdinando; Caldwell, Darwin G.; Cuschieri, Alfred

    2017-01-01

    During recent decades, strain gauge-based joint torque sensors have been commonly used to provide high-fidelity torque measurements in robotics. Although measurement of joint torque/force is often required in engineering research and development, the gluing and wiring of strain gauges used as torque sensors pose difficulties during integration within the restricted space available in small joints. The problem is compounded by the need for a scalable geometric design to measure joint torque. In this communication, we describe a novel design of a strain gauge-based mono-axial torque sensor referred to as the square-cut torque sensor (SCTS), the significant features of which are a high degree of linearity, symmetry, and high scalability in terms of both size and measuring range. Most importantly, the SCTS provides easy access for gluing and wiring of the strain gauges on the sensor surface despite the limited available space. We demonstrated that the SCTS was better in terms of symmetry (clockwise and counterclockwise rotation) and more linear. These capabilities have been shown through finite element modeling (ANSYS) and confirmed by data obtained from load testing experiments. The high performance of the SCTS was confirmed by studies involving changes in size, material, and/or wing width and thickness. Finally, we demonstrated that the SCTS can be successfully implemented inside the hip joint of the miniaturized hydraulically actuated quadruped robot MiniHyQ. This communication is based on work presented at the 18th International Conference on Climbing and Walking Robots (CLAWAR). PMID:28820446

  10. Scalable manufacturing of biomimetic moldable hydrogels for industrial applications

    Science.gov (United States)

    Yu, Anthony C.; Chen, Haoxuan; Chan, Doreen; Agmon, Gillie; Stapleton, Lyndsay M.; Sevit, Alex M.; Tibbitt, Mark W.; Acosta, Jesse D.; Zhang, Tony; Franzia, Paul W.; Langer, Robert; Appel, Eric A.

    2016-12-01

    Hydrogels are a class of soft material that is exploited in many, often completely disparate, industrial applications, on account of their unique and tunable properties. Advances in soft material design are yielding next-generation moldable hydrogels that address engineering criteria in several industrial settings such as complex viscosity modifiers, hydraulic or injection fluids, and sprayable carriers. Industrial implementation of these viscoelastic materials requires extreme volumes of material, upwards of several hundred million gallons per year. Here, we demonstrate a paradigm for the scalable fabrication of self-assembled moldable hydrogels using rationally engineered, biomimetic polymer-nanoparticle interactions. Cellulose derivatives are linked together by selective adsorption to silica nanoparticles via dynamic and multivalent interactions. We show that the self-assembly process for gel formation is easily scaled in a linear fashion from 0.5 mL to over 15 L without alteration of the mechanical properties of the resultant materials. The facile and scalable preparation of these materials leveraging self-assembly of inexpensive, renewable, and environmentally benign starting materials, coupled with the tunability of their properties, make them amenable to a range of industrial applications. In particular, we demonstrate their utility as injectable materials for pipeline maintenance and product recovery in industrial food manufacturing as well as their use as sprayable carriers for robust application of fire retardants in preventing wildland fires.

  11. Scalable Nernst thermoelectric power using a coiled galfenol wire

    Directory of Open Access Journals (Sweden)

    Zihao Yang

    2017-09-01

    Full Text Available The Nernst thermopower is usually considered far too weak in most metals for waste heat recovery. However, its transverse orientation gives it an advantage over the Seebeck effect on non-flat surfaces. Here, we experimentally demonstrate the scalable generation of a Nernst voltage in an air-cooled metal wire coiled around a hot cylinder. In this geometry, a radial temperature gradient generates an azimuthal electric field in the coil. A Galfenol (Fe0.85Ga0.15) wire is wrapped around a cartridge heater, and the voltage drop across the wire is measured as a function of axial magnetic field. As expected, the Nernst voltage scales linearly with the length of the wire. Based on heat conduction and fluid dynamics equations, a finite-element method is used to calculate the temperature gradient across the Galfenol wire and determine the Nernst coefficient. A giant Nernst coefficient of -2.6 μV/(K T) at room temperature is estimated, in agreement with measurements on bulk Galfenol. We expect that the giant Nernst effect in Galfenol arises from its magnetostriction, presumably through enhanced magnon-phonon coupling. Our results demonstrate the feasibility of a transverse thermoelectric generator capable of scalable output power from non-flat heat sources.

  12. Outage analysis for underlay cognitive networks using incremental regenerative relaying

    KAUST Repository

    Tourki, Kamel; Qaraqe, Khalid A.; Alouini, Mohamed-Slim

    2013-01-01

    Cooperative relay technology has recently been introduced into cognitive radio (CR) networks to enhance the network capacity, scalability, and reliability of end-to-end communication. In this paper, we investigate an underlay cognitive network where

  13. Photonic Architecture for Scalable Quantum Information Processing in Diamond

    Directory of Open Access Journals (Sweden)

    Kae Nemoto

    2014-08-01

    Full Text Available Physics and information are intimately connected, and the ultimate information processing devices will be those that harness the principles of quantum mechanics. Many physical systems have been identified as candidates for quantum information processing, but none of them are immune from errors. The challenge remains to find a path from the experiments of today to a reliable and scalable quantum computer. Here, we develop an architecture based on a simple module comprising an optical cavity containing a single negatively charged nitrogen vacancy center in diamond. Modules are connected by photons propagating in a fiber-optical network and collectively used to generate a topological cluster state, a robust substrate for quantum information processing. In principle, all processes in the architecture can be deterministic, but current limitations lead to processes that are probabilistic but heralded. We find that the architecture enables large-scale quantum information processing with existing technology.

  14. Efficient Delivery of Scalable Video Using a Streaming Class Model

    Directory of Open Access Journals (Sweden)

    Jason J. Quinlan

    2018-03-01

    Full Text Available When we couple the rise in video streaming with the growing number of portable devices (smart phones, tablets, laptops), we see an ever-increasing demand for high-definition video online while on the move. Wireless networks are inherently characterised by restricted shared bandwidth and relatively high error loss rates, thus presenting a challenge for the efficient delivery of high quality video. Additionally, mobile devices can support/demand a range of video resolutions and qualities. This demand for mobile streaming highlights the need for adaptive video streaming schemes that can adjust to available bandwidth and heterogeneity, and can provide graceful changes in video quality, all while respecting viewing satisfaction. In this context, the use of well-known scalable/layered media streaming techniques, commonly known as scalable video coding (SVC), is an attractive solution. SVC encodes a number of video quality levels within a single media stream. This has been shown to be an especially effective and efficient solution, but it fares badly in the presence of datagram losses. While multiple description coding (MDC) can reduce the effects of packet loss on scalable video delivery, the increased delivery cost is counterproductive for constrained networks. This situation is accentuated in cases where only the lower quality level is required. In this paper, we assess these issues and propose a new approach called Streaming Classes (SC) through which we can define a key set of quality levels, each of which can be delivered in a self-contained manner. This facilitates efficient delivery, yielding reduced transmission byte-cost for devices requiring lower quality, relative to MDC and Adaptive Layer Distribution (ALD) (42% and 76% reductions, respectively, for layer 2), while also maintaining high levels of consistent quality. We also illustrate how a selective packetisation technique can further reduce the effects of packet loss on viewable quality by

  15. Scalable Algorithms for Adaptive Statistical Designs

    Directory of Open Access Journals (Sweden)

    Robert Oehmke

    2000-01-01

    Full Text Available We present a scalable, high-performance solution to multidimensional recurrences that arise in adaptive statistical designs. Adaptive designs are an important class of learning algorithms for a stochastic environment, and we focus on the problem of optimally assigning patients to treatments in clinical trials. While adaptive designs have significant ethical and cost advantages, they are rarely utilized because of the complexity of optimizing and analyzing them. Computational challenges include massive memory requirements, few calculations per memory access, and multiply-nested loops with dynamic indices. We analyze the effects of various parallelization options, and while standard approaches do not work well, with effort an efficient, highly scalable program can be developed. This allows us to solve problems thousands of times more complex than those solved previously, which helps make adaptive designs practical. Further, our work applies to many other problems involving neighbor recurrences, such as generalized string matching.
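
    To make the kind of recurrence concrete, the minimal serial sketch below (uniform Beta priors, a toy horizon; not the scalable parallel solution described in the record) computes the Bayes-optimal allocation of patients between two Bernoulli treatments via a memoized multidimensional recurrence; the growth of its state space is exactly what makes the full-scale problem demanding.

```python
# A hedged sketch of the underlying recurrence: optimal adaptive allocation of
# patients between two treatments, maximizing expected successes.
from functools import lru_cache

HORIZON = 20  # total number of patients; realistic designs use far larger horizons

@lru_cache(maxsize=None)
def value(s1, f1, s2, f2):
    n = s1 + f1 + s2 + f2
    if n == HORIZON:
        return 0.0
    p1 = (s1 + 1) / (s1 + f1 + 2)   # posterior mean success rate of arm 1
    p2 = (s2 + 1) / (s2 + f2 + 2)   # posterior mean success rate of arm 2
    # Expected future successes if the next patient receives arm 1 or arm 2.
    v1 = p1 * (1 + value(s1 + 1, f1, s2, f2)) + (1 - p1) * value(s1, f1 + 1, s2, f2)
    v2 = p2 * (1 + value(s1, f1, s2 + 1, f2)) + (1 - p2) * value(s1, f1, s2, f2 + 1)
    return max(v1, v2)

print(round(value(0, 0, 0, 0), 3))  # expected successes under the optimal design
```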

  16. Scalable Atomistic Simulation Algorithms for Materials Research

    Directory of Open Access Journals (Sweden)

    Aiichiro Nakano

    2002-01-01

    Full Text Available A suite of scalable atomistic simulation programs has been developed for materials research based on space-time multiresolution algorithms. Design and analysis of parallel algorithms are presented for molecular dynamics (MD) simulations and quantum-mechanical (QM) calculations based on the density functional theory. Performance tests have been carried out on 1,088-processor Cray T3E and 1,280-processor IBM SP3 computers. The linear-scaling algorithms have enabled 6.44-billion-atom MD and 111,000-atom QM calculations on 1,024 SP3 processors with parallel efficiency well over 90%. These production-quality programs also feature wavelet-based computational-space decomposition for adaptive load balancing, space-filling-curve-based adaptive data compression with user-defined error bound for scalable I/O, and octree-based fast visibility culling for immersive and interactive visualization of massive simulation data.

  17. Durango: Scalable Synthetic Workload Generation for Extreme-Scale Application Performance Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Carothers, Christopher D. [Rensselaer Polytechnic Institute (RPI); Meredith, Jeremy S. [ORNL; Blanco, Marc [Rensselaer Polytechnic Institute (RPI); Vetter, Jeffrey S. [ORNL; Mubarak, Misbah [Argonne National Laboratory; LaPre, Justin [Rensselaer Polytechnic Institute (RPI); Moore, Shirley V. [ORNL

    2017-05-01

    Performance modeling of extreme-scale applications on accurate representations of potential architectures is critical for designing next generation supercomputing systems because it is impractical to construct prototype systems at scale with new network hardware in order to explore designs and policies. However, these simulations often rely on static application traces that can be difficult to work with because of their size and lack of flexibility to extend or scale up without rerunning the original application. To address this problem, we have created a new technique for generating scalable, flexible workloads from real applications, and we have implemented a prototype, called Durango, that combines a proven analytical performance modeling language, Aspen, with the massively parallel HPC network modeling capabilities of the CODES framework. Our models are compact, parameterized and representative of real applications with computation events. They are not resource intensive to create and are portable across simulator environments. We demonstrate the utility of Durango by simulating the LULESH application in the CODES simulation environment on several topologies and show that Durango is practical to use for simulation without loss of fidelity, as quantified by simulation metrics. During our validation of Durango's generated communication model of LULESH, we found that the original LULESH miniapp code had a latent bug where the MPI_Waitall operation was used incorrectly. This finding underscores the potential need for a tool such as Durango, beyond its benefits for flexible workload generation and modeling. Additionally, we demonstrate the efficacy of Durango's direct integration approach, which links Aspen into CODES as part of the running network simulation model. Here, Aspen generates the application-level computation timing events, which in turn drive the start of a network communication phase. Results show that Durango's performance scales well when
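
    LULESH itself is C++/MPI; purely as a hedged illustration of the communication pattern whose misuse was uncovered, the mpi4py sketch below posts non-blocking sends and receives and then completes every outstanding request with a single Waitall. The ranks, tags, and buffer sizes are made up.

```python
# A hedged illustration (not LULESH code): complete all outstanding non-blocking
# requests with one Waitall over the full request list, using mpi4py.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

send_buf = np.full(4, rank, dtype='i')
peers = [p for p in range(size) if p != rank]
recv_bufs = [np.empty(4, dtype='i') for _ in peers]

requests = []
for buf, peer in zip(recv_bufs, peers):
    requests.append(comm.Irecv(buf, source=peer, tag=0))
for peer in peers:
    requests.append(comm.Isend(send_buf, dest=peer, tag=0))

# Waitall must cover every outstanding request; completing only a subset (or
# reusing stale request handles) is the kind of latent misuse that a
# model-validation exercise can surface.
MPI.Request.Waitall(requests)
```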

  18. Scalable manufacturing processes with soft materials

    OpenAIRE

    White, Edward; Case, Jennifer; Kramer, Rebecca

    2014-01-01

    The emerging field of soft robotics will benefit greatly from new scalable manufacturing techniques for responsive materials. Currently, most soft robotic examples are fabricated one at a time, using techniques borrowed from lithography and 3D printing to fabricate molds. This limits both the maximum and minimum size of robots that can be fabricated, and hinders batch production, which is critical to gain wider acceptance for soft robotic systems. We have identified electrical structures, ...

  19. Architecture Knowledge for Evaluating Scalable Databases

    Science.gov (United States)

    2015-01-16

    [Only report-form fragments of this record survive. The recoverable content concerns evaluating scalable databases by comparing features such as supported client languages (Scala, Erlang, JavaScript), cursor-based queries, JOIN queries, and complex data types (lists, maps, sets), and notes that technology such as machine learning is needed to extract such content from product documentation.]

  20. Randomized Algorithms for Scalable Machine Learning

    OpenAIRE

    Kleiner, Ariel Jacob

    2012-01-01

    Many existing procedures in machine learning and statistics are computationally intractable in the setting of large-scale data. As a result, the advent of rapidly increasing dataset sizes, which should be a boon yielding improved statistical performance, instead severely blunts the usefulness of a variety of existing inferential methods. In this work, we use randomness to ameliorate this lack of scalability by reducing complex, computationally difficult inferential problems to larger sets o...

  1. Bitcoin-NG: A Scalable Blockchain Protocol

    OpenAIRE

    Eyal, Ittay; Gencer, Adem Efe; Sirer, Emin Gun; van Renesse, Robbert

    2015-01-01

    Cryptocurrencies, based on and led by Bitcoin, have shown promise as infrastructure for pseudonymous online payments, cheap remittance, trustless digital asset exchange, and smart contracts. However, Bitcoin-derived blockchain protocols have inherent scalability limits that trade-off between throughput and latency and withhold the realization of this potential. This paper presents Bitcoin-NG, a new blockchain protocol designed to scale. Based on Bitcoin's blockchain protocol, Bitcoin-NG is By...

  2. Scuba: scalable kernel-based gene prioritization.

    Science.gov (United States)

    Zampieri, Guido; Tran, Dinh Van; Donini, Michele; Navarin, Nicolò; Aiolli, Fabio; Sperduti, Alessandro; Valle, Giorgio

    2018-01-25

    The uncovering of genes linked to human diseases is a pressing challenge in molecular biology and precision medicine. This task is often hindered by the large number of candidate genes and by the heterogeneity of the available information. Computational methods for the prioritization of candidate genes can help to cope with these problems. In particular, kernel-based methods are a powerful resource for the integration of heterogeneous biological knowledge; however, their practical implementation is often precluded by their limited scalability. We propose Scuba, a scalable kernel-based method for gene prioritization. It implements a novel multiple kernel learning approach, based on a semi-supervised perspective and on the optimization of the margin distribution. Scuba is optimized to cope with strongly unbalanced settings where known disease genes are few and large scale predictions are required. Importantly, it is able to efficiently deal both with a large number of candidate genes and with an arbitrary number of data sources. As a direct consequence of scalability, Scuba also integrates a new efficient strategy to select optimal kernel parameters for each data source. We performed cross-validation experiments and simulated a realistic usage setting, showing that Scuba outperforms a wide range of state-of-the-art methods. Scuba achieves state-of-the-art performance and has enhanced scalability compared to existing kernel-based approaches for genomic data. This method can be useful to prioritize candidate genes, particularly when their number is large or when input data is highly heterogeneous. The code is freely available at https://github.com/gzampieri/Scuba.
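
    The sketch below is not the Scuba implementation; under simple assumptions (toy random data, fixed nonnegative weights) it only illustrates the general multiple-kernel idea of combining several similarity matrices and ranking candidate genes by their aggregate similarity to known disease genes.

```python
# A hedged sketch of kernel combination and similarity-based gene ranking.
import numpy as np

def prioritize(kernels, weights, seed_idx, candidate_idx):
    """kernels: list of (n, n) kernel matrices; weights: nonnegative, same length."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    combined = sum(wi * k for wi, k in zip(w, kernels))       # convex kernel combination
    scores = combined[np.ix_(candidate_idx, seed_idx)].mean(axis=1)
    order = np.argsort(-scores)                               # highest similarity first
    return [(candidate_idx[i], float(scores[i])) for i in order]

rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(50, 5)), rng.normal(size=(50, 8))   # two toy data sources
kernels = [X1 @ X1.T, X2 @ X2.T]
print(prioritize(kernels, [0.7, 0.3], seed_idx=[0, 1, 2], candidate_idx=list(range(3, 10)))[:3])
```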

  3. Constraint Solver Techniques for Implementing Precise and Scalable Static Program Analysis

    DEFF Research Database (Denmark)

    Zhang, Ye

    solver using unification we could make a program analysis easier to design and implement, much more scalable, and still as precise as expected. We present an inclusion constraint language with the explicit equality constructs for specifying program analysis problems, and a parameterized framework...... developers to build reliable software systems more quickly and with fewer bugs or security defects. While designing and implementing a program analysis remains a hard work, making it both scalable and precise is even more challenging. In this dissertation, we show that with a general inclusion constraint...... data flow analyses for C language, we demonstrate a large amount of equivalences could be detected by off-line analyses, and they could then be used by a constraint solver to significantly improve the scalability of an analysis without sacrificing any precision....

  4. Scalable architecture for a room temperature solid-state quantum information processor.

    Science.gov (United States)

    Yao, N Y; Jiang, L; Gorshkov, A V; Maurer, P C; Giedke, G; Cirac, J I; Lukin, M D

    2012-04-24

    The realization of a scalable quantum information processor has emerged over the past decade as one of the central challenges at the interface of fundamental science and engineering. Here we propose and analyse an architecture for a scalable, solid-state quantum information processor capable of operating at room temperature. Our approach is based on recent experimental advances involving nitrogen-vacancy colour centres in diamond. In particular, we demonstrate that the multiple challenges associated with operation at ambient temperature, individual addressing at the nanoscale, strong qubit coupling, robustness against disorder and low decoherence rates can be simultaneously achieved under realistic, experimentally relevant conditions. The architecture uses a novel approach to quantum information transfer and includes a hierarchy of control at successive length scales. Moreover, it alleviates the stringent constraints currently limiting the realization of scalable quantum processors and will provide fundamental insights into the physics of non-equilibrium many-body quantum systems.

  5. Scalable Motion Estimation Processor Core for Multimedia System-on-Chip Applications

    Science.gov (United States)

    Lai, Yeong-Kang; Hsieh, Tian-En; Chen, Lien-Fei

    2007-04-01

    In this paper, we describe a high-throughput and scalable motion estimation processor architecture for multimedia system-on-chip applications. The number of processing elements (PEs) is scalable according to the variable algorithm parameters and the performance required for different applications. Using the PE rings efficiently and an intelligent memory-interleaving organization, the efficiency of the architecture can be increased. Moreover, using efficient on-chip memories and a data management technique can effectively decrease the power consumption and memory bandwidth. Techniques for reducing the number of interconnections and external memory accesses are also presented. Our results demonstrate that the proposed scalable PE-ringed architecture is a flexible and high-performance processor core in multimedia system-on-chip applications.

  6. DISP: Optimizations towards Scalable MPI Startup

    Energy Technology Data Exchange (ETDEWEB)

    Fu, Huansong [Florida State University, Tallahassee; Pophale, Swaroop S [ORNL; Gorentla Venkata, Manjunath [ORNL; Yu, Weikuan [Florida State University, Tallahassee

    2016-01-01

    Despite the popularity of MPI for high performance computing, the startup of MPI programs faces a scalability challenge as both the execution time and memory consumption increase drastically at scale. We have examined this problem using the collective modules of Cheetah and Tuned in Open MPI as representative implementations. Previous improvements for collectives have focused on algorithmic advances and hardware off-load. In this paper, we examine the startup cost of the collective module within a communicator and explore various techniques to improve its efficiency and scalability. Accordingly, we have developed a new scalable startup scheme with three internal techniques, namely Delayed Initialization, Module Sharing and Prediction-based Topology Setup (DISP). Our DISP scheme greatly benefits the collective initialization of the Cheetah module. At the same time, it helps boost the performance of non-collective initialization in the Tuned module. We evaluate the performance of our implementation on Titan supercomputer at ORNL with up to 4096 processes. The results show that our delayed initialization can speed up the startup of Tuned and Cheetah by an average of 32.0% and 29.2%, respectively, our module sharing can reduce the memory consumption of Tuned and Cheetah by up to 24.1% and 83.5%, respectively, and our prediction-based topology setup can speed up the startup of Cheetah by up to 80%.

  7. Scalable robotic biofabrication of tissue spheroids

    International Nuclear Information System (INIS)

    Mehesz, A Nagy; Hajdu, Z; Visconti, R P; Markwald, R R; Mironov, V; Brown, J; Beaver, W; Da Silva, J V L

    2011-01-01

    Development of methods for scalable biofabrication of uniformly sized tissue spheroids is essential for tissue spheroid-based bioprinting of large size tissue and organ constructs. The most recent scalable technique for tissue spheroid fabrication employs a micromolded recessed template prepared in a non-adhesive hydrogel, wherein the cells loaded into the template self-assemble into tissue spheroids due to gravitational force. In this study, we present an improved version of this technique. A new mold was designed to enable generation of 61 microrecessions in each well of a 96-well plate. The microrecessions were seeded with cells using an EpMotion 5070 automated pipetting machine. After 48 h of incubation, tissue spheroids formed at the bottom of each microrecession. To assess the quality of constructs generated using this technology, 600 tissue spheroids made by this method were compared with 600 spheroids generated by the conventional hanging drop method. These analyses showed that tissue spheroids fabricated by the micromolded method are more uniform in diameter. Thus, use of micromolded recessions in a non-adhesive hydrogel, combined with automated cell seeding, is a reliable method for scalable robotic fabrication of uniform-sized tissue spheroids.

  8. Scalable robotic biofabrication of tissue spheroids

    Energy Technology Data Exchange (ETDEWEB)

    Mehesz, A Nagy; Hajdu, Z; Visconti, R P; Markwald, R R; Mironov, V [Advanced Tissue Biofabrication Center, Department of Regenerative Medicine and Cell Biology, Medical University of South Carolina, Charleston, SC (United States); Brown, J [Department of Mechanical Engineering, Clemson University, Clemson, SC (United States); Beaver, W [York Technical College, Rock Hill, SC (United States); Da Silva, J V L, E-mail: mironovv@musc.edu [Renato Archer Information Technology Center-CTI, Campinas (Brazil)

    2011-06-15

    Development of methods for scalable biofabrication of uniformly sized tissue spheroids is essential for tissue spheroid-based bioprinting of large size tissue and organ constructs. The most recent scalable technique for tissue spheroid fabrication employs a micromolded recessed template prepared in a non-adhesive hydrogel, wherein the cells loaded into the template self-assemble into tissue spheroids due to gravitational force. In this study, we present an improved version of this technique. A new mold was designed to enable generation of 61 microrecessions in each well of a 96-well plate. The microrecessions were seeded with cells using an EpMotion 5070 automated pipetting machine. After 48 h of incubation, tissue spheroids formed at the bottom of each microrecession. To assess the quality of constructs generated using this technology, 600 tissue spheroids made by this method were compared with 600 spheroids generated by the conventional hanging drop method. These analyses showed that tissue spheroids fabricated by the micromolded method are more uniform in diameter. Thus, use of micromolded recessions in a non-adhesive hydrogel, combined with automated cell seeding, is a reliable method for scalable robotic fabrication of uniform-sized tissue spheroids.

  9. Ultracold molecules: vehicles to scalable quantum information processing

    International Nuclear Information System (INIS)

    Brickman Soderberg, Kathy-Anne; Gemelke, Nathan; Chin Cheng

    2009-01-01

    In this paper, we describe a novel scheme to implement scalable quantum information processing using Li-Cs molecular states to entangle 6Li and 133Cs ultracold atoms held in independent optical lattices. The 6Li atoms will act as quantum bits to store information and 133Cs atoms will serve as messenger bits that aid in quantum gate operations and mediate entanglement between distant qubit atoms. Each atomic species is held in a separate optical lattice and the atoms can be overlapped by translating the lattices with respect to each other. When the messenger and qubit atoms are overlapped, targeted single-spin operations and entangling operations can be performed by coupling the atomic states to a molecular state with radio-frequency pulses. By controlling the frequency and duration of the radio-frequency pulses, entanglement can be either created or swapped between a qubit-messenger pair. We estimate operation fidelities for entangling two distant qubits and discuss scalability of this scheme and constraints on the optical lattice lasers. Finally we demonstrate experimental control of the optical potentials sufficient to translate atoms in the lattice.

  10. A highly scalable peptide-based assay system for proteomics.

    Directory of Open Access Journals (Sweden)

    Igor A Kozlov

    Full Text Available We report a scalable and cost-effective technology for generating and screening high-complexity customizable peptide sets. The peptides are made as peptide-cDNA fusions by in vitro transcription/translation from pools of DNA templates generated by microarray-based synthesis. This approach enables large custom sets of peptides to be designed in silico, manufactured cost-effectively in parallel, and assayed efficiently in a multiplexed fashion. The utility of our peptide-cDNA fusion pools was demonstrated in two activity-based assays designed to discover protease and kinase substrates. In the protease assay, cleaved peptide substrates were separated from uncleaved and identified by digital sequencing of their cognate cDNAs. We screened the 3,011 amino acid HCV proteome for susceptibility to cleavage by the HCV NS3/4A protease and identified all 3 known trans cleavage sites with high specificity. In the kinase assay, peptide substrates phosphorylated by tyrosine kinases were captured and identified by sequencing of their cDNAs. We screened a pool of 3,243 peptides against Abl kinase and showed that phosphorylation events detected were specific and consistent with the known substrate preferences of Abl kinase. Our approach is scalable and adaptable to other protein-based assays.

  11. Performance-scalable volumetric data classification for online industrial inspection

    Science.gov (United States)

    Abraham, Aby J.; Sadki, Mustapha; Lea, R. M.

    2002-03-01

    Non-intrusive inspection and non-destructive testing of manufactured objects with complex internal structures typically requires the enhancement, analysis and visualization of high-resolution volumetric data. Given the increasing availability of fast 3D scanning technology (e.g. cone-beam CT), enabling on-line detection and accurate discrimination of components or sub-structures, the inherent complexity of classification algorithms inevitably leads to throughput bottlenecks. Indeed, whereas typical inspection throughput requirements range from 1 to 1000 volumes per hour, depending on density and resolution, current computational capability is one to two orders-of-magnitude less. Accordingly, speeding up classification algorithms requires both reduction of algorithm complexity and acceleration of computer performance. A shape-based classification algorithm, offering algorithm complexity reduction, by using ellipses as generic descriptors of solids-of-revolution, and supporting performance-scalability, by exploiting the inherent parallelism of volumetric data, is presented. A two-stage variant of the classical Hough transform is used for ellipse detection and correlation of the detected ellipses facilitates position-, scale- and orientation-invariant component classification. Performance-scalability is achieved cost-effectively by accelerating a PC host with one or more COTS (Commercial-Off-The-Shelf) PCI multiprocessor cards. Experimental results are reported to demonstrate the feasibility and cost-effectiveness of the data-parallel classification algorithm for on-line industrial inspection applications.

  12. Scalable Production of the Silicon-Tin Yin-Yang Hybrid Structure with Graphene Coating for High Performance Lithium-Ion Battery Anodes.

    Science.gov (United States)

    Jin, Yan; Tan, Yingling; Hu, Xiaozhen; Zhu, Bin; Zheng, Qinghui; Zhang, Zijiao; Zhu, Guoying; Yu, Qian; Jin, Zhong; Zhu, Jia

    2017-05-10

    Alloy anodes with high theoretical capacity show great potential for next-generation advanced lithium-ion batteries. Even though huge volume change during lithium insertion and extraction leads to severe problems, such as pulverization and an unstable solid-electrolyte interphase (SEI), various nanostructures including nanoparticles, nanowires, and porous networks can address related challenges to improve electrochemical performance. However, the complex and expensive fabrication process hinders the widespread application of nanostructured alloy anodes, which generates an urgent demand for low-cost and scalable processes to fabricate building blocks with fine control of size, morphology, and porosity. Here, we demonstrate a scalable and low-cost process to produce a porous yin-yang hybrid composite anode with graphene coating through high energy ball-milling and selective chemical etching. With void space to buffer the expansion, the produced functional electrodes demonstrate stable cycling performance of 910 mAh/g over 600 cycles at a rate of 0.5C for Si-graphene "yin" particles and 750 mAh/g over 300 cycles at 0.2C for Sn-graphene "yang" particles. Therefore, we open up a new approach to fabricate alloy anode materials at low cost, with low energy consumption, and at large scale. This type of porous silicon or tin composite with graphene coating can also potentially play a significant role in thermoelectrics and optoelectronics applications.

  13. Scalable quantum information processing with atomic ensembles and flying photons

    International Nuclear Information System (INIS)

    Mei Feng; Yu Yafei; Feng Mang; Zhang Zhiming

    2009-01-01

    We present a scheme for scalable quantum information processing with atomic ensembles and flying photons. Using the Rydberg blockade, we encode the qubits in the collective atomic states, which can be manipulated quickly and easily due to the enhanced interaction in comparison to the single-atom case. We demonstrate that our proposed gating can be applied to the generation of two-dimensional cluster states for measurement-based quantum computation. Moreover, the atomic ensembles also function as quantum repeaters useful for long-distance quantum state transfer. We show that our scheme can work in the bad-cavity or weak-coupling regime, which greatly relaxes the experimental requirements. The efficient coherent operations on the ensemble qubits enable our scheme to be switchable between quantum computation and quantum communication using atomic ensembles.

  14. A Practical and Scalable Tool to Find Overlaps between Sequences

    Directory of Open Access Journals (Sweden)

    Maan Haj Rachid

    2015-01-01

    Full Text Available The evolution of the next generation sequencing technology increases the demand for efficient solutions, in terms of space and time, for several bioinformatics problems. This paper presents a practical and easy-to-implement solution for one of these problems, namely, the all-pairs suffix-prefix problem, using a compact prefix tree. The paper demonstrates an efficient construction of this time-efficient and space-economical tree data structure. The paper presents techniques for parallel implementations of the proposed solution. Experimental evaluation indicates superior results in terms of space and time over existing solutions. Results also show that the proposed technique is highly scalable in a parallel execution environment.
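
    As a hedged illustration of the problem being solved (not the paper's compact prefix tree or its parallel implementation), the sketch below indexes every string's prefixes in a plain trie and streams each string's suffixes through it to collect all suffix-prefix overlaps.

```python
# A hedged sketch of the all-pairs suffix-prefix problem with a simple trie.
# Assumes the marker character '$' does not occur in the input strings.
def all_pairs_suffix_prefix(strings, min_len=1):
    trie = {}
    for idx, s in enumerate(strings):
        node = trie
        for ch in s:
            node = node.setdefault(ch, {})
            node.setdefault('$', []).append(idx)   # strings sharing this prefix
    overlaps = {}
    for i, s in enumerate(strings):
        for start in range(len(s)):                # walk each suffix of s through the trie
            node, depth = trie, 0
            for ch in s[start:]:
                if ch not in node:
                    break
                node, depth = node[ch], depth + 1
            if depth == len(s) - start and depth >= min_len:
                for j in node.get('$', []):
                    if j != i:
                        # keep the longest overlap seen for the ordered pair (i, j)
                        overlaps[(i, j)] = max(overlaps.get((i, j), 0), depth)
    return overlaps

print(all_pairs_suffix_prefix(["GACCT", "CCTAA", "TAAGAC"], min_len=2))
# -> {(0, 1): 3, (1, 2): 3, (2, 0): 3}
```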

  15. Smartphone based scalable reverse engineering by digital image correlation

    Science.gov (United States)

    Vidvans, Amey; Basu, Saurabh

    2018-03-01

    There is a need for scalable open source 3D reconstruction systems for reverse engineering, because most commercially available reconstruction systems are capital and resource intensive. To address this, a novel reconstruction technique is proposed. The technique involves digital image correlation-based characterization of surface speeds followed by normalization with respect to angular speed during rigid body rotational motion of the specimen. Proof of concept is demonstrated and validated using simulation and empirical characterization. Towards this, smart-phone imaging and inexpensive off-the-shelf components are used, along with parts fabricated additively from poly-lactic acid polymer on a standard 3D printer. Some sources of error in this reconstruction methodology are discussed. It is seen that high curvatures on the surface reduce the accuracy of reconstruction; the reasons are traced to the nature of the correlation function. The theoretically achievable resolution during smart-phone-based 3D reconstruction by digital image correlation is derived.
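
    The core operation referred to above, digital image correlation, amounts to finding the displacement that maximizes the normalized cross-correlation between a reference patch and a search window. The sketch below is a minimal synthetic-data illustration, not the authors' pipeline.

```python
# A hedged sketch: patch displacement estimation via normalized cross-correlation.
import numpy as np

def ncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def track_patch(frame0, frame1, top, left, size=15, search=5):
    ref = frame0[top:top + size, left:left + size]
    best, best_dx, best_dy = -2.0, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = frame1[top + dy:top + dy + size, left + dx:left + dx + size]
            if cand.shape != ref.shape:
                continue
            score = ncc(ref, cand)
            if score > best:
                best, best_dx, best_dy = score, dx, dy
    return best_dx, best_dy, best

rng = np.random.default_rng(1)
f0 = rng.random((64, 64))
f1 = np.roll(f0, shift=(2, 3), axis=(0, 1))   # synthetic motion: 2 px down, 3 px right
print(track_patch(f0, f1, top=20, left=20))   # -> (3, 2, ~1.0)
```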

  16. Vortex Filaments in Grids for Scalable, Fine Smoke Simulation.

    Science.gov (United States)

    Meng, Zhang; Weixin, Si; Yinling, Qian; Hanqiu, Sun; Jing, Qin; Heng, Pheng-Ann

    2015-01-01

    Vortex modeling can produce attractive visual effects of dynamic fluids, which are widely applicable for dynamic media, computer games, special effects, and virtual reality systems. However, it is challenging to effectively simulate intensive and finely detailed fluids such as smoke, whose vortex filaments and smoke particles increase rapidly in number. The authors propose a novel vortex-filaments-in-grids scheme in which the uniform grids dynamically bridge the vortex filaments and smoke particles for scalable, fine smoke simulation with macroscopic vortex structures. Using the vortex model, their approach supports the trade-off between simulation speed and scale of details. After computing the whole velocity field, external control can be easily exerted on the embedded grid to guide the vortex-based smoke motion. The experimental results demonstrate the efficiency of using the proposed scheme for a visually plausible smoke simulation with macroscopic vortex structures.

  17. Optimization of Hierarchical Modulation for Use of Scalable Media

    Directory of Open Access Journals (Sweden)

    Heneghan Conor

    2010-01-01

    Full Text Available This paper studies hierarchical modulation, a transmission strategy for scalable multimedia over frequency-selective fading channels aimed at improving perceptual quality. An optimization strategy for hierarchical modulation and convolutional encoding is suggested that achieves the target bit error rates with minimum global signal-to-noise ratio in a single-user scenario. This strategy allows applications a free choice of the relationship between Higher Priority (HP) and Lower Priority (LP) stream delivery, and a similar optimization can be used in the multiuser scenario. An image transport task and the transport of an H.264/MPEG4 AVC video embedding both QVGA and VGA resolutions are simulated as implementation examples of this optimization strategy, demonstrating savings in SNR and improvements in Peak Signal-to-Noise Ratio (PSNR) for the particular examples shown.
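
    The optimization described above can be pictured as a search, over the hierarchy parameter, for the smallest global SNR at which both the HP and LP streams meet their target bit error rates. The sketch below uses crude placeholder BER models (the functions and the parameter alpha are assumptions, not the paper's analytical expressions) just to show the shape of that search.

```python
# A hedged sketch of the "minimum SNR meeting both BER targets" search.
import math

def ber_hp(snr_db, alpha):   # placeholder model: HP protection improves as alpha grows
    return 0.5 * math.exp(-(1 + alpha) * 10 ** (snr_db / 10) / 8)

def ber_lp(snr_db, alpha):   # placeholder model: LP suffers as alpha grows
    return 0.5 * math.exp(-(10 ** (snr_db / 10)) / (8 * (1 + alpha)))

def min_global_snr(target_hp=1e-4, target_lp=1e-2):
    best = None
    for alpha10 in range(0, 41):                 # alpha in [0, 4]
        alpha = alpha10 / 10
        for snr10 in range(0, 301):              # SNR in [0, 30] dB
            snr = snr10 / 10
            if ber_hp(snr, alpha) <= target_hp and ber_lp(snr, alpha) <= target_lp:
                if best is None or snr < best[0]:
                    best = (snr, alpha)
                break                            # BER is monotone in SNR here
    return best

print(min_global_snr())  # (minimum SNR in dB, hierarchy parameter) under the toy models
```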

  18. Neuromorphic adaptive plastic scalable electronics: analog learning systems.

    Science.gov (United States)

    Srinivasa, Narayan; Cruz-Albrecht, Jose

    2012-01-01

    Decades of research to build programmable intelligent machines have demonstrated limited utility in complex, real-world environments. Comparing their performance with biological systems, these machines are less efficient by a factor of 1 million to 1 billion in complex, real-world environments. The Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program is a multifaceted Defense Advanced Research Projects Agency (DARPA) project that seeks to break the programmable machine paradigm and define a new path for creating useful, intelligent machines. Since real-world systems exhibit infinite combinatorial complexity, electronic neuromorphic machine technology would be preferable in a host of applications, but useful and practical implementations still do not exist. HRL Laboratories LLC has embarked on addressing these challenges, and, in this article, we provide an overview of our project and progress made thus far.

  19. A Scalable Framework and Prototype for CAS e-Science

    Directory of Open Access Journals (Sweden)

    Yuanchun Zhou

    2007-07-01

    Full Text Available Based on the small-world model of CAS e-Science and the power law of the Internet, this paper presents a scalable CAS e-Science Grid framework based on virtual regions, called the Virtual Region Grid Framework (VRGF). VRGF takes the virtual region and the layer as its logical management units. In VRGF, the intra-virtual-region model is pure P2P, while the inter-virtual-region model is centralized; VRGF is therefore a decentralized framework with some P2P properties. Furthermore, VRGF achieves satisfactory performance in resource organization and location at small cost, and is well adapted to the complicated and dynamic features of scientific collaborations. We have implemented a demonstration VRGF-based Grid prototype, SDG.

  20. A Secure and Scalable Data Communication Scheme in Smart Grids

    Directory of Open Access Journals (Sweden)

    Chunqiang Hu

    2018-01-01

    Full Text Available The concept of the smart grid has gained tremendous attention among researchers and utility providers in recent years. How to establish secure communication among smart meters, utility companies, and service providers is a challenging issue. In this paper, we present a communication architecture for smart grids and propose a scheme to guarantee the security and privacy of data communications among smart meters, utility companies, and data repositories by employing decentralized attribute-based encryption. The architecture is highly scalable and employs an access-control Linear Secret Sharing Scheme (LSSS) matrix to achieve role-based access control. The security analysis demonstrates that the scheme ensures security and privacy. The performance analysis shows that the scheme is efficient in terms of computational cost.
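
    As a hedged aside (not the paper's decentralized attribute-based encryption construction), Shamir threshold sharing is the simplest instance of a linear secret sharing scheme; the sketch below shows the share-and-reconstruct mechanics that an LSSS access matrix generalizes to richer role-based policies.

```python
# A hedged sketch: Shamir (threshold) secret sharing over a prime field.
import random

P = 2 ** 127 - 1  # a large Mersenne prime

def share(secret, threshold, n_shares):
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n_shares + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the secret from any `threshold` shares.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(secret=123456789, threshold=3, n_shares=5)
print(reconstruct(shares[:3]) == 123456789)  # any 3 of the 5 shares recover the secret
```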

  1. Scalable Creation of Long-Lived Multipartite Entanglement

    Science.gov (United States)

    Kaufmann, H.; Ruster, T.; Schmiegelow, C. T.; Luda, M. A.; Kaushal, V.; Schulz, J.; von Lindenfels, D.; Schmidt-Kaler, F.; Poschinger, U. G.

    2017-10-01

    We demonstrate the deterministic generation of multipartite entanglement based on scalable methods. Four qubits are encoded in 40Ca+, stored in a microstructured segmented Paul trap. These qubits are sequentially entangled by laser-driven pairwise gate operations. Between these, the qubit register is dynamically reconfigured via ion shuttling operations, where ion crystals are separated and merged, and ions are moved in and out of a fixed laser interaction zone. A sequence consisting of three pairwise entangling gates yields a four-ion Greenberger-Horne-Zeilinger state |ψ⟩ = (1/√2)(|0000⟩ + |1111⟩), and full quantum state tomography reveals a state fidelity of 94.4(3)%. We analyze the decoherence of this state and employ dynamic decoupling on the spatially distributed constituents to maintain 69(5)% coherence at a storage time of 1.1 sec.

  2. Scalability Issues for Remote Sensing Infrastructure: A Case Study

    Directory of Open Access Journals (Sweden)

    Yang Liu

    2017-04-01

    Full Text Available For the past decade, a team of University of Calgary researchers has operated a large “sensor Web” to collect, analyze, and share scientific data from remote measurement instruments across northern Canada. This sensor Web receives real-time data streams from over a thousand Internet-connected sensors, with a particular emphasis on environmental data (e.g., space weather, auroral phenomena, atmospheric imaging). Through research collaborations, we had the opportunity to evaluate the performance and scalability of their remote sensing infrastructure. This article reports the lessons learned from our study, which considered both data collection and data dissemination aspects of their system. On the data collection front, we used benchmarking techniques to identify and fix a performance bottleneck in the system’s memory management for TCP data streams, while also improving system efficiency on multi-core architectures. On the data dissemination front, we used passive and active network traffic measurements to identify and reduce excessive network traffic from the Web robots and JavaScript techniques used for data sharing. While our results are from one specific sensor Web system, the lessons learned may apply to other scientific Web sites with remote sensing infrastructure.

  3. A Scalable Architecture for VoIP Conferencing

    Directory of Open Access Journals (Sweden)

    R Venkatesha Prasad

    2003-10-01

    Full Text Available Real-time services are traditionally supported on circuit-switched networks, but there is a need to port these services to packet-switched networks. An architecture for audio conferencing applications over the Internet is considered in the light of the ITU-T H.323 recommendations. In a conference, considering packets only from a set of selected clients can reduce speech quality degradation, because mixing packets from all clients can lead to a lack of speech clarity. A distributed algorithm and architecture for selecting clients for mixing is suggested here, based on a new quantifier of voice activity called the "Loudness Number" (LN). The proposed system distributes the computational load and reduces the load on client terminals. The highlights of this architecture are scalability, bandwidth savings and speech quality enhancement. Client selection for playout tries to mimic a physical conference, where the most vocal participants attract more attention. The contributions of the paper are expected to aid implementations of the H.323 recommendations for Multipoint Processors (MPs). A working prototype based on the proposed architecture is already functional.
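
    A hedged sketch of the selection idea only (the smoothing constant and frame format are assumptions, and this is not the paper's H.323 Multipoint Processor): rank clients by a smoothed loudness measure and mix only the top-N frames.

```python
# A hedged sketch: select the most vocal clients and mix only their frames.
import numpy as np

def loudness_db(frame):
    rms = np.sqrt(np.mean(frame.astype(float) ** 2)) + 1e-12
    return 20 * np.log10(rms)

def mix_top_n(frames, smoothed, n_select=3, alpha=0.8):
    """frames: dict client_id -> 1-D PCM array; smoothed: running loudness per client."""
    for cid, frame in frames.items():
        smoothed[cid] = alpha * smoothed.get(cid, -60.0) + (1 - alpha) * loudness_db(frame)
    selected = sorted(frames, key=lambda c: smoothed[c], reverse=True)[:n_select]
    mix = sum(frames[c].astype(float) for c in selected) / max(len(selected), 1)
    return selected, mix

rng = np.random.default_rng(2)
frames = {f"client{i}": rng.normal(scale=100 * (i + 1), size=160).astype(np.int16) for i in range(6)}
state = {}
print(mix_top_n(frames, state)[0])   # the three loudest clients
```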

  4. Programming Scala Scalability = Functional Programming + Objects

    CERN Document Server

    Wampler, Dean

    2009-01-01

    Learn how to be more productive with Scala, a new multi-paradigm language for the Java Virtual Machine (JVM) that integrates features of both object-oriented and functional programming. With this book, you'll discover why Scala is ideal for highly scalable, component-based applications that support concurrency and distribution. Programming Scala clearly explains the advantages of Scala as a JVM language. You'll learn how to leverage the wealth of Java class libraries to meet the practical needs of enterprise and Internet projects more easily. Packed with code examples, this book provides us

  5. Towards a Scalable, Biomimetic, Antibacterial Coating

    Science.gov (United States)

    Dickson, Mary Nora

    Corneal afflictions are the second leading cause of blindness worldwide. When a corneal transplant is unavailable or contraindicated, an artificial cornea device is the only chance to save sight. Bacterial or fungal biofilm build-up on artificial cornea devices can lead to serious complications including the need for systemic antibiotic treatment and even explantation. As a result, much emphasis has been placed on anti-adhesion chemical coatings and antibiotic-leaching coatings. These methods are not long-lasting, and microorganisms can eventually circumvent these measures. Thus, I have developed a surface topographical antimicrobial coating. Various surface structures including rough surfaces, superhydrophobic surfaces, and the natural surfaces of insects' wings and sharks' skin are promising anti-biofilm candidates; however, none meet the criteria necessary for implementation on the surface of an artificial cornea device. In this thesis I: 1) developed scalable fabrication protocols for a library of biomimetic nanostructure polymer surfaces; 2) assessed the potential of these poly(methyl methacrylate) nanopillars to kill or prevent the formation of biofilm by E. coli bacteria and species of Pseudomonas and Staphylococcus bacteria, and improved upon a proposed mechanism for the rupture of Gram-negative bacterial cell walls; 3) developed a scalable, commercially viable method for producing antibacterial nanopillars on a curved, PMMA artificial cornea device; and 4) developed scalable fabrication protocols for implantation of antibacterial nanopatterned surfaces on the surfaces of thermoplastic polyurethane materials, commonly used in catheter tubings. This project constitutes a first step towards fabrication of the first entirely PMMA artificial cornea device. The major finding of this work is that by precisely controlling the topography of a polymer surface at the nano-scale, we can kill adherent bacteria and prevent biofilm formation of certain pathogenic bacteria

  6. Scalable Tensor Factorizations with Missing Data

    DEFF Research Database (Denmark)

    Acar, Evrim; Dunlavy, Daniel M.; Kolda, Tamara G.

    2010-01-01

    of missing data, many important data sets will be discarded or improperly analyzed. Therefore, we need a robust and scalable approach for factorizing multi-way arrays (i.e., tensors) in the presence of missing data. We focus on one of the most well-known tensor factorizations, CANDECOMP/PARAFAC (CP...... is shown to successfully factor tensors with noise and up to 70% missing data. Moreover, our approach is significantly faster than the leading alternative and scales to larger problems. To show the real-world usefulness of CP-WOPT, we illustrate its applicability on a novel EEG (electroencephalogram...
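
    The sketch below illustrates the weighted-CP idea behind CP-WOPT under simple assumptions (a small synthetic 3-way tensor, plain gradient descent rather than the gradient-based quasi-Newton optimization used in practice): a binary mask removes missing entries from the objective, so the factor matrices are fitted to the observed entries only.

```python
# A hedged numpy sketch of weighted CP factorization with missing data.
import numpy as np

def weighted_cp_sketch(X, W, rank=2, steps=4000, lr=2e-3, seed=0):
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A, B, C = (rng.normal(scale=0.3, size=(d, rank)) for d in (I, J, K))
    for _ in range(steps):
        Xhat = np.einsum('ir,jr,kr->ijk', A, B, C)
        R = W * (X - Xhat)                        # residual only on observed entries
        A += lr * np.einsum('ijk,jr,kr->ir', R, B, C)
        B += lr * np.einsum('ijk,ir,kr->jr', R, A, C)
        C += lr * np.einsum('ijk,ir,jr->kr', R, A, B)
    return A, B, C

rng = np.random.default_rng(1)
A0, B0, C0 = rng.normal(size=(8, 2)), rng.normal(size=(9, 2)), rng.normal(size=(10, 2))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)        # noiseless rank-2 tensor
W = (rng.random(X.shape) > 0.5).astype(float)     # roughly half the entries observed
A, B, C = weighted_cp_sketch(X, W)
Xhat = np.einsum('ir,jr,kr->ijk', A, B, C)
print(np.linalg.norm((1 - W) * (X - Xhat)) / np.linalg.norm((1 - W) * X))  # error on held-out entries
```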

  7. Scalable and Anonymous Group Communication with MTor

    Directory of Open Access Journals (Sweden)

    Lin Dong

    2016-04-01

    Full Text Available This paper presents MTor, a low-latency anonymous group communication system. We construct MTor as an extension to Tor, allowing the construction of multi-source multicast trees on top of the existing Tor infrastructure. MTor does not depend on an external service to broker the group communication, and avoids central points of failure and trust. MTor’s substantial bandwidth savings and graceful scalability enable new classes of anonymous applications that are currently too bandwidth-intensive to be viable through traditional unicast Tor communication-e.g., group file transfer, collaborative editing, streaming video, and real-time audio conferencing.

  8. Grassmann Averages for Scalable Robust PCA

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Black, Michael J.

    2014-01-01

    As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can...... to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...
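
    Very roughly, and only as a hedged sketch of the flavor of the approach rather than the published algorithm: a leading robust direction can be estimated by iteratively averaging sign-aligned observations, with the plain average replaced by a coordinate-wise trimmed average so that pixel-level outliers are discarded.

```python
# A hedged sketch: a trimmed, sign-aligned averaging iteration for one robust direction.
import numpy as np

def trimmed_mean(M, trim=0.1):
    """Coordinate-wise trimmed mean over the rows of M."""
    n = M.shape[0]
    k = int(n * trim)
    S = np.sort(M, axis=0)
    return S[k:n - k].mean(axis=0)

def robust_direction(X, trim=0.1, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    q = rng.normal(size=X.shape[1])
    q /= np.linalg.norm(q)
    for _ in range(iters):
        signs = np.sign(X @ q)
        signs[signs == 0] = 1.0
        q_new = trimmed_mean(signs[:, None] * X, trim)   # align, then robustly average
        q = q_new / (np.linalg.norm(q_new) + 1e-12)
    return q

rng = np.random.default_rng(3)
inliers = rng.normal(size=(200, 1)) @ np.array([[1.0, 2.0, 0.5]]) + 0.05 * rng.normal(size=(200, 3))
outliers = rng.uniform(-50, 50, size=(10, 3))
X = np.vstack([inliers, outliers])
print(robust_direction(X))   # close to +/- (1, 2, 0.5) normalized, despite the outliers
```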

  9. Supporting scalable Bayesian networks using configurable discretizer actuators

    CSIR Research Space (South Africa)

    Osunmakinde, I

    2009-04-01

    Full Text Available The authors propose a generalized model with configurable discretizer actuators as a solution to the problem of the discretization of massive numerical datasets. Their solution is based on a concurrent distribution of the actuators and uses dynamic...

  10. Scalable Lunar Surface Networks and Adaptive Orbit Access, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Based on our proposed innovations and accomplished work in Phase I, we will focus on developing the new MAC protocol and hybrid routing protocol for lunar surface...

  11. Scalable, ultra-resistant structural colors based on network metamaterials

    KAUST Repository

    Galinski, Henning; Favraud, Gael; Dong, Hao; Gongora, J. S. Totero; Favaro, Grégory; Döbeli, Max; Spolenak, Ralph; Fratalocchi, Andrea; Capasso, Federico

    2017-01-01

    Structural colors have drawn wide attention for their potential as a future printing technology for various applications, ranging from biomimetic tissues to adaptive camouflage materials. However, an efficient approach to realize robust colors

  12. Performance and Scalability of Blockchain Networks and Smart Contracts

    OpenAIRE

    Scherer, Mattias

    2017-01-01

    The blockchain technology started as the innovation that powered the cryptocurrency Bitcoin. But in recent years, leaders in finance, banking, and many other companies have given this new innovation more attention than ever before. They seek a new technology to replace their systems, which are often inefficient and costly to operate. However, one of the reasons why it is not possible to use a blockchain right away is its poor performance. Public blockchains, where anyone can participate, ...

  13. Scalable Management of Enterprise and Data-Center Networks

    Science.gov (United States)

    2011-09-01

    of number of next hops (M = 600KB, N = 200K) from 4 to 8. Since next hops have different popularity, a Pareto distribution is used to generate the number... the performance difference between NOX and DIFANE for flows of normal sizes. Switches, NOX, and traffic generators (“clients”) run on separate 3.0... and DIFANE. In Figure 3.10, we show the maximum throughput of flow setup for one client using one switch. In DIFANE, the switch was able to achieve the

  14. Network dynamics: The World Wide Web

    Science.gov (United States)

    Adamic, Lada Ariana

    Despite its rapidly growing and dynamic nature, the Web displays a number of strong regularities which can be understood by drawing on methods of statistical physics. This thesis finds power-law distributions in website sizes, traffic, and links, and more importantly, develops a stochastic theory which explains them. Power-law link distributions are shown to lead to network characteristics which are especially suitable for scalable localized search. It is also demonstrated that the Web is a "small world": to reach one site from any other takes an average of only 4 hops, while most related sites cluster together. Additional dynamical properties of the Web graph are extracted from diffusion processes.

  15. Design and implementation of scalable IPv4-IPv6 internetworking gateway

    Science.gov (United States)

    Zhu, Guo-sheng; Yu, Shao-hua; Dai, Jin-you

    2008-11-01

    This paper proposes a scalable architecture for an IPv4-IPv6 internetworking gateway based on the EZchip 10 Gbps network processor NP-1c. The Application Layer Gateway (ALG) of the control plane can be upgraded without needing to modify the data forwarding plane. A SIP ALG for the 3GPP IMS (IP Multimedia Subsystem) was implemented and tested in a real China Next Generation Internet (CNGI) network environment. IPv4 SIP UEs can communicate with IPv6 SIP UEs through the gateway.

  16. Transportation Network Topologies

    Science.gov (United States)

    Holmes, Bruce J.; Scott, John

    2004-01-01

    A discomforting reality has materialized on the transportation scene: our existing air and ground infrastructures will not scale to meet our nation's 21st century demands and expectations for mobility, commerce, safety, and security. The consequence of inaction is diminished quality of life and economic opportunity in the 21st century. Clearly, new thinking is required for transportation that can scale to meet the realities of a networked, knowledge-based economy in which the value of time is a new coin of the realm. This paper proposes a framework, or topology, for thinking about the problem of scalability of the system of networks that comprise the aviation system. This framework highlights the role of integrated communication-navigation-surveillance systems in enabling scalability of future air transportation networks. Scalability, in this vein, is a goal of the recently formed Joint Planning and Development Office for the Next Generation Air Transportation System. New foundations for 21st century thinking about air transportation are underpinned by several technological developments in the traditional aircraft disciplines as well as in communication, navigation, surveillance and information systems. Complexity science and modern network theory give rise to one of the technological developments of importance. Scale-free (i.e., scalable) networks represent a promising concept space for modeling airspace system architectures, and for assessing network performance in terms of scalability, efficiency, robustness, resilience, and other metrics. The paper offers an air transportation system topology as a framework for transportation system innovation. Successful outcomes of innovation in air transportation could lay the foundations for new paradigms for aircraft and their operating capabilities, air transportation system architectures, and airspace architectures and procedural concepts. The topology proposed considers air transportation as a system of networks, within which

  17. Tested Demonstrations.

    Science.gov (United States)

    Gilbert, George L.

    1983-01-01

    An apparatus is described in which the effects of pressure, volume, and temperature changes on a gas can be observed simultaneously. Includes use of the apparatus in demonstrating Boyle's, Gay-Lussac's, and Charles' Laws, attractive forces, Dalton's Law of Partial Pressures, and in illustrating measurable vapor pressures of liquids and some solids.…

  18. Tested Demonstrations.

    Science.gov (United States)

    Gilbert, George L., Ed.

    1987-01-01

    Describes two demonstrations to illustrate characteristics of substances. Outlines a method to detect the changes in pH levels during the electrolysis of water. Uses water pistols, one filled with methane gas and the other filled with water, to illustrate the differences in these two substances. (TW)

  19. Scalability Optimization of Seamless Positioning Service

    Directory of Open Access Journals (Sweden)

    Juraj Machaj

    2016-01-01

    Full Text Available Recently, positioning services have been getting more attention, not only within the research community but also from service providers. From the service providers' point of view, a positioning service that is able to work seamlessly in all environments, for example indoor, dense urban, and rural, has huge potential to open new markets. However, such a system does not only need to provide accurate position estimates but also has to be scalable and resistant to fake positioning requests. In previous work we proposed a modular system which is able to provide seamless positioning in various environments. The system automatically selects the optimal positioning module based on the available radio signals. The system currently consists of three positioning modules: GPS, GSM-based positioning, and Wi-Fi-based positioning. In this paper we propose an algorithm that reduces the time needed for position estimation and thus improves the scalability of the modular system, allowing positioning services to be provided to a larger number of users. Such an improvement is extremely important for real-world applications where a large number of users require position estimates, since positioning error is affected by the response time of the positioning server.

  20. Highly scalable Ab initio genomic motif identification

    KAUST Repository

    Marchand, Benoit; Bajic, Vladimir B.; Kaushik, Dinesh

    2011-01-01

    We present results of scaling an ab initio motif family identification system, Dragon Motif Finder (DMF), to 65,536 processor cores of IBM Blue Gene/P. DMF seeks groups of mutually similar polynucleotide patterns within a set of genomic sequences and builds various motif families from them. Such information is of relevance to many problems in life sciences. Prior attempts to scale such ab initio motif-finding algorithms achieved limited success. We solve the scalability issues using a combination of mixed-mode MPI-OpenMP parallel programming, master-slave work assignment, multi-level workload distribution, multi-level MPI collectives, and serial optimizations. While the scalability of our algorithm was excellent (94% parallel efficiency on 65,536 cores relative to 256 cores on a modest-size problem), the final speedup with respect to the original serial code exceeded 250,000 when serial optimizations were included. This enabled us to carry out many large-scale ab initio motif-finding simulations in a few hours, while the original serial code would have needed decades of execution time. Copyright 2011 ACM.

  1. Algorithmic psychometrics and the scalable subject.

    Science.gov (United States)

    Stark, Luke

    2018-04-01

    Recent public controversies, ranging from the 2014 Facebook 'emotional contagion' study to psychographic data profiling by Cambridge Analytica in the 2016 American presidential election, Brexit referendum and elsewhere, signal watershed moments in which the intersecting trajectories of psychology and computer science have become matters of public concern. The entangled history of these two fields grounds the application of applied psychological techniques to digital technologies, and an investment in applying calculability to human subjectivity. Today, a quantifiable psychological subject position has been translated, via 'big data' sets and algorithmic analysis, into a model subject amenable to classification through digital media platforms. I term this position the 'scalable subject', arguing it has been shaped and made legible by algorithmic psychometrics - a broad set of affordances in digital platforms shaped by psychology and the behavioral sciences. In describing the contours of this 'scalable subject', this paper highlights the urgent need for renewed attention from STS scholars on the psy sciences, and on a computational politics attentive to psychology, emotional expression, and sociality via digital media.

  2. Scalable Simulation of Electromagnetic Hybrid Codes

    International Nuclear Information System (INIS)

    Perumalla, Kalyan S.; Fujimoto, Richard; Karimabadi, Dr. Homa

    2006-01-01

    New discrete-event formulations of physics simulation models are emerging that can outperform models based on traditional time-stepped techniques. Detailed simulation of the Earth's magnetosphere, for example, requires execution of sub-models that are at widely differing timescales. In contrast to time-stepped simulation, which requires tightly coupled updates to the entire system state at regular time intervals, the new discrete event simulation (DES) approaches help evolve the states of sub-models on relatively independent timescales. However, parallel execution of DES-based models raises challenges with respect to their scalability and performance. One of the key challenges is to improve the computation granularity to offset synchronization and communication overheads within and across processors. Our previous work was limited in scalability and runtime performance due to the parallelization challenges. Here we report on optimizations we performed on DES-based plasma simulation models to improve parallel performance. The net result is the capability to simulate hybrid particle-in-cell (PIC) models with over 2 billion ion particles using 512 processors on supercomputing platforms.

  3. Scalable fast multipole accelerated vortex methods

    KAUST Repository

    Hu, Qi

    2014-05-01

    The fast multipole method (FMM) is often used to accelerate the calculation of particle interactions in particle-based methods to simulate incompressible flows. To evaluate the most time-consuming kernels, the Biot-Savart equation and the stretching term of the vorticity equation, we mathematically reformulated them so that only two Laplace scalar potentials are used instead of six, which automatically ensures a divergence-free far-field computation. Based on this formulation, we developed a new FMM-based vortex method on heterogeneous architectures, which distributes the work between multicore CPUs and GPUs to best utilize the hardware resources and achieve excellent scalability. The algorithm uses new data structures which can dynamically manage inter-node communication and load balance efficiently, with only a small parallel construction overhead. This algorithm can scale to large-sized clusters, showing both strong and weak scalability. A careful error and timing trade-off analysis is also performed for the cutoff functions induced by the vortex particle method. Our implementation can perform one time step of the velocity+stretching calculation for one billion particles on 32 nodes in 55.9 seconds, which yields 49.12 Tflop/s.

  4. Computational scalability of large size image dissemination

    Science.gov (United States)

    Kooper, Rob; Bajcsy, Peter

    2011-01-01

    We have investigated the computational scalability of image pyramid building needed for dissemination of very large image data. The sources of large images include high resolution microscopes and telescopes, remote sensing and airborne imaging, and high resolution scanners. The term 'large' is understood from a user perspective, meaning either larger than the display size or larger than the memory/disk available to hold the image data. The application drivers for our work are digitization projects such as the Lincoln Papers project (each image scan is about 100-150 MB or about 5000x8000 pixels, with a total of around 200,000 scans) and the UIUC library scanning project for historical maps from the 17th and 18th centuries (a smaller number of scans, but larger images). The goal of our work is to understand the computational scalability of web-based dissemination using image pyramids for these large image scans, as well as the preservation aspects of the data. We report our computational benchmarks for (a) building image pyramids to be disseminated using the Microsoft Seadragon library, (b) a computation execution approach using hyper-threading to generate image pyramids and to utilize the underlying hardware, and (c) an image pyramid preservation approach using various hard drive configurations of Redundant Array of Independent Disks (RAID) drives for input/output operations. The benchmarks are obtained with a map (334.61 MB, JPEG format, 17591x15014 pixels). The discussion combines the speed and preservation objectives.
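
    The pyramid-building step that these benchmarks measure is easy to sketch. The following Python snippet, assuming Pillow is installed, repeatedly halves an image and cuts every level into fixed-size tiles, which is the essence of Seadragon/Deep Zoom-style dissemination; the file names, JPEG quality and 256-pixel tile size are illustrative assumptions, not the settings used in the benchmarks.

```python
# Sketch of a Deep Zoom / Seadragon-style pyramid builder: each level halves the
# previous one and every level is cut into fixed-size tiles. "scan.jpg",
# "pyramid_out", the 256 px tile size and JPEG quality are illustrative choices.
from pathlib import Path
from PIL import Image

def build_pyramid(src_path, out_dir, tile=256):
    img = Image.open(src_path).convert("RGB")
    out = Path(out_dir)
    level = 0                                   # level 0 = full resolution here
    while True:
        level_dir = out / f"level_{level}"
        level_dir.mkdir(parents=True, exist_ok=True)
        w, h = img.size
        for y in range(0, h, tile):
            for x in range(0, w, tile):
                box = (x, y, min(x + tile, w), min(y + tile, h))
                img.crop(box).save(level_dir / f"{x // tile}_{y // tile}.jpg", quality=85)
        if max(w, h) <= tile:                   # stop once one tile covers the image
            break
        img = img.resize((max(1, w // 2), max(1, h // 2)), Image.LANCZOS)
        level += 1

build_pyramid("scan.jpg", "pyramid_out")
```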

  5. Big data integration: scalability and sustainability

    KAUST Repository

    Zhang, Zhang

    2016-01-26

    Integration of various types of omics data is critically indispensable for addressing most important and complex biological questions. In the era of big data, however, data integration becomes increasingly tedious, time-consuming and expensive, posing a significant obstacle to fully exploiting the wealth of big biological data. Here we propose a scalable and sustainable architecture that integrates big omics data through community-contributed modules. Community modules are contributed and maintained by different committed groups; each module corresponds to a specific data type, deals with data collection, processing and visualization, and delivers data on demand via web services. Based on this community-based architecture, we build Information Commons for Rice (IC4R; http://ic4r.org), a rice knowledgebase that integrates a variety of rice omics data from multiple community modules, including genome-wide expression profiles derived entirely from RNA-Seq data, resequencing-based genomic variations obtained from re-sequencing data of thousands of rice varieties, plant homologous genes covering multiple diverse plant species, post-translational modifications, rice-related literature, and community annotations. Taken together, such an architecture achieves integration of different types of data from multiple community-contributed modules and accordingly features scalable, sustainable and collaborative integration of big data as well as low costs for database update and maintenance, thus helping to build IC4R into a comprehensive knowledgebase covering all aspects of rice data and benefiting both basic and translational research.

  6. LoRa Scalability: A Simulation Model Based on Interference Measurements

    Directory of Open Access Journals (Sweden)

    Jetmir Haxhibeqiri

    2017-05-01

    Full Text Available LoRa is a long-range, low power, low bit rate and single-hop wireless communication technology. It is intended to be used in Internet of Things (IoT) applications involving battery-powered devices with low throughput requirements. A LoRaWAN network consists of multiple end nodes that communicate with one or more gateways. These gateways act like a transparent bridge towards a common network server. The number of end devices and their throughput requirements will have an impact on the performance of the LoRaWAN network. This study investigates the scalability in terms of the number of end devices per gateway of single-gateway LoRaWAN deployments. First, we determine the intra-technology interference behavior with two physical end nodes, by checking the impact of an interfering node on a transmitting node. Measurements show that even under concurrent transmission, one of the packets can be received under certain conditions. Based on these measurements, we create a simulation model for assessing the scalability of a single gateway LoRaWAN network. We show that when the number of nodes increases up to 1000 per gateway, the losses will be up to 32%. In such a case, pure Aloha will have around 90% losses. However, when the duty cycle of the application layer becomes lower than the allowed radio duty cycle of 1%, losses will be even lower. We also show network scalability simulation results for some IoT use cases based on real data.

  7. LoRa Scalability: A Simulation Model Based on Interference Measurements.

    Science.gov (United States)

    Haxhibeqiri, Jetmir; Van den Abeele, Floris; Moerman, Ingrid; Hoebeke, Jeroen

    2017-05-23

    LoRa is a long-range, low power, low bit rate and single-hop wireless communication technology. It is intended to be used in Internet of Things (IoT) applications involving battery-powered devices with low throughput requirements. A LoRaWAN network consists of multiple end nodes that communicate with one or more gateways. These gateways act like a transparent bridge towards a common network server. The number of end devices and their throughput requirements will have an impact on the performance of the LoRaWAN network. This study investigates the scalability in terms of the number of end devices per gateway of single-gateway LoRaWAN deployments. First, we determine the intra-technology interference behavior with two physical end nodes, by checking the impact of an interfering node on a transmitting node. Measurements show that even under concurrent transmission, one of the packets can be received under certain conditions. Based on these measurements, we create a simulation model for assessing the scalability of a single gateway LoRaWAN network. We show that when the number of nodes increases up to 1000 per gateway, the losses will be up to 32%. In such a case, pure Aloha will have around 90% losses. However, when the duty cycle of the application layer becomes lower than the allowed radio duty cycle of 1%, losses will be even lower. We also show network scalability simulation results for some IoT use cases based on real data.
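
    To make the comparison with pure Aloha concrete, the toy simulation below estimates the collision-loss fraction when a given number of nodes transmit fixed-airtime packets at random times (a packet is lost if any other transmission overlaps it). The airtime, transmission interval and node counts are illustrative assumptions and do not reproduce the paper's simulation model.

```python
# Toy pure-Aloha loss model: every node transmits fixed-airtime packets at random
# (exponential) intervals; a packet is lost if any other transmission overlaps it.
# Airtime, mean interval and node counts are illustrative assumptions.
import numpy as np

def aloha_loss(n_nodes, airtime_s=1.0, mean_interval_s=600.0, sim_time_s=24 * 3600, seed=0):
    rng = np.random.default_rng(seed)
    starts = []
    for _ in range(n_nodes):
        t = rng.exponential(mean_interval_s)
        while t < sim_time_s:
            starts.append(t)
            t += rng.exponential(mean_interval_s)
    starts = np.sort(np.array(starts))
    gap_prev = np.diff(starts, prepend=-np.inf)   # time since the previous packet started
    gap_next = np.diff(starts, append=np.inf)     # time until the next packet starts
    lost = (gap_prev < airtime_s) | (gap_next < airtime_s)
    return lost.mean()

for n in (100, 500, 1000):
    print(f"{n:5d} nodes -> loss fraction {aloha_loss(n):.2f}")
```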

  8. Smart grid demonstrators and experiments in France: Economic assessments of smart grids. Challenges, methods, progress status and demonstrators; Contribution of 'smart grid' demonstrators to electricity transport and market architectures; Challenges and contributions of smart grid demonstrators to the distribution network. Focus on the integration of decentralised production; Challenges and contributions of smart grid demonstrators to the evolution of providing-related professions and to consumption practices

    International Nuclear Information System (INIS)

    Sudret, Thierry; Belhomme, Regine; Nekrassov, Andrei; Chartres, Sophie; Chiappini, Florent; Drouineau, Mathilde; Hadjsaid, Nouredine; Leonard, Cedric; Bena, Michel; Buhagiar, Thierry; Lemaitre, Christian; Janssen, Tanguy; Guedou, Benjamin; Viana, Maria Sebastian; Malarange, Gilles; Hadjsaid, Nouredine; Petit, Marc; Lehec, Guillaume; Jahn, Rafael; Gehain, Etienne

    2015-01-01

    This publication proposes a set of four articles which give an overview of the challenges and contributions of smart grid demonstrators for the French electricity system, according to different perspectives and different stakeholders. These articles present the first lessons learned from these demonstrators in terms of technical and technological innovations, of business and regulation models, and of customer behaviour and acceptance. More precisely, the authors discuss economic assessments of smart grids with an overview of challenges, methods, progress status and existing smart grid programs in the world, comment on the importance of the introduction of intelligence at the hardware, software and market levels, highlight the challenges and contributions of smart grids for the integration of decentralised production, and discuss how smart grid demonstrators impact providing-related professions and customer consumption practices.

  9. Towards Reliable, Scalable, and Energy Efficient Cognitive Radio Systems

    KAUST Repository

    Sboui, Lokman

    2017-11-01

    The cognitive radio (CR) concept is expected to be adopted along with many technologies to meet the requirements of the next generation of wireless and mobile systems, 5G. Consequently, it is important to determine the performance of CR systems with respect to these requirements. In this thesis, after briefly describing the 5G requirements, we present three main directions in which we aim to enhance CR performance. The first direction is reliability. We study the achievable rate of a multiple-input multiple-output (MIMO) relay-assisted CR under two scenarios: an unmanned aerial vehicle (UAV) one-way relaying (OWR) and a fixed two-way relaying (TWR). We propose special linear precoding schemes that enable the secondary user (SU) to take advantage of the primary-free channel eigenmodes. We study the sensitivity of the SU rate to the relay power, the relay gain, the UAV altitude, the number of antennas and the availability of a line of sight. The second direction is scalability. We first study a multiple-access channel (MAC) scenario with multiple SUs. We propose a particular linear precoding and SU selection scheme maximizing their sum-rate. We show that the proposed scheme provides a significant sum-rate improvement as the number of SUs increases. Secondly, we expand our scalability study to cognitive cellular networks. We propose a low-complexity algorithm for base station activation/deactivation and dynamic spectrum management maximizing the profits of primary and secondary networks subject to green constraints. We show that our proposed algorithms achieve performance close to that obtained with the exhaustive search method. The third direction is energy efficiency (EE). We present a novel power allocation scheme based on maximizing the EE of both single-input single-output (SISO) and MIMO systems. We solve a non-convex problem and derive explicit expressions for the corresponding optimal power. When the instantaneous channel is not available, we

  10. NeuroPigPen: A Scalable Toolkit for Processing Electrophysiological Signal Data in Neuroscience Applications Using Apache Pig.

    Science.gov (United States)

    Sahoo, Satya S; Wei, Annan; Valdez, Joshua; Wang, Li; Zonjy, Bilal; Tatsuoka, Curtis; Loparo, Kenneth A; Lhatoo, Samden D

    2016-01-01

    The recent advances in neurological imaging and sensing technologies have led to a rapid increase in the volume, rate of data generation, and variety of neuroscience data. This "neuroscience Big data" represents a significant opportunity for the biomedical research community to design experiments using data with longer timescales, larger numbers of attributes, and statistically significant data sizes. The results from these new data-driven research techniques can advance our understanding of complex neurological disorders, help model long-term effects of brain injuries, and provide new insights into the dynamics of brain networks. However, many existing neuroinformatics data processing and analysis tools were not built to manage large volumes of data, which makes it difficult for researchers to effectively leverage this available data to advance their research. We introduce a new toolkit called NeuroPigPen that was developed using Apache Hadoop and the Pig data flow language to address the challenges posed by large-scale electrophysiological signal data. NeuroPigPen is a modular toolkit that can process large volumes of electrophysiological signal data, such as Electroencephalogram (EEG), Electrocardiogram (ECG), and blood oxygen levels (SpO2), using a new distributed storage model called Cloudwave Signal Format (CSF) that supports easy partitioning and storage of signal data on commodity hardware. NeuroPigPen was developed with three design principles: (a) Scalability: the ability to efficiently process increasing volumes of data; (b) Adaptability: the toolkit can be deployed across different computing configurations; and (c) Ease of programming: the toolkit can be easily used to compose multi-step data processing pipelines using high-level programming constructs. The NeuroPigPen toolkit was evaluated using 750 GB of electrophysiological signal data over a variety of Hadoop cluster configurations ranging from 3 to 30 data nodes. The evaluation results demonstrate that the toolkit

  11. Quantum networks based on cavity QED

    Energy Technology Data Exchange (ETDEWEB)

    Ritter, Stephan; Bochmann, Joerg; Figueroa, Eden; Hahn, Carolin; Kalb, Norbert; Muecke, Martin; Neuzner, Andreas; Noelleke, Christian; Reiserer, Andreas; Uphoff, Manuel; Rempe, Gerhard [Max-Planck-Institut fuer Quantenoptik, Hans-Kopfermann-Strasse 1, 85748 Garching (Germany)

    2014-07-01

    Quantum repeaters require an efficient interface between stationary quantum memories and flying photons. Single atoms in optical cavities are ideally suited as universal quantum network nodes that are capable of sending, storing, retrieving, and even processing quantum information. We demonstrate this by presenting an elementary version of a quantum network based on two identical nodes in remote, independent laboratories. The reversible exchange of quantum information and the creation of remote entanglement are achieved by exchange of a single photon. Quantum teleportation is implemented using a time-resolved photonic Bell-state measurement. Quantum control over all degrees of freedom of the single atom also allows for the nondestructive detection of flying photons and the implementation of a quantum gate between the spin state of the atom and the polarization of a photon upon its reflection from the cavity. Our approach to quantum networking offers a clear perspective for scalability and provides the essential components for the realization of a quantum repeater.

  12. Node design in optical packet switched networks

    DEFF Research Database (Denmark)

    Nord, Martin

    2006-01-01

    The thesis discusses motivation, realisation and performance of the Optical Packet Switching (OPS) network paradigm. The work includes proposals for designs and methods to efficiently use both the wavelength and time domains for contention resolution in asynchronous operation. The project has also proposed parallel designs to overcome scalability constraints and to support migration scenarios. Furthermore, it has proposed and demonstrated optical input processing schemes for hybrid networks to simultaneously support OPS and Optical Circuit Switching. Quality of Service (QoS) differentiation enables ...S parameter. Finally, the thesis includes a proposal for a node design and associated MAC protocol for an OPS ring topology metropolitan area network with high throughput and fairness, also for unbalanced traffic.

  13. Networking

    OpenAIRE

    Rauno Lindholm, Daniel; Boisen Devantier, Lykke; Nyborg, Karoline Lykke; Høgsbro, Andreas; Fries, de; Skovlund, Louise

    2016-01-01

    The purpose of this project was to examine which influencing factors have had an impact on the presumed increase in the use of networking among academics on the labour market, and how this is expressed. On the basis of the influence of globalization on the labour market, it can be concluded that globalization has transformed the labour market into a market based on the organization of networks. In this new organization there is a greater emphasis on employees having social qualificati...

  14. Building a scalable event-level metadata service for ATLAS

    International Nuclear Information System (INIS)

    Cranshaw, J; Malon, D; Goosens, L; Viegas, F T A; McGlone, H

    2008-01-01

    The ATLAS TAG Database is a multi-terabyte event-level metadata selection system, intended to allow discovery, selection of and navigation to events of interest to an analysis. The TAG Database encompasses file- and relational-database-resident event-level metadata, distributed across all ATLAS Tiers. An Oracle-hosted global TAG relational database, containing all ATLAS events, will exist at Tier 0. Implementing a system that is both performant and manageable at this scale is a challenge. A 1 TB relational TAG Database has been deployed at Tier 0 using simulated tag data. The database contains one billion events, each described by two hundred event metadata attributes, and is currently undergoing extensive testing in terms of queries, population and manageability. These 1 TB tests aim to demonstrate and optimise the performance and scalability of an Oracle TAG Database on a global scale. Partitioning and indexing strategies are crucial to well-performing queries and manageability of the database and have implications for database population and distribution, so these are investigated. Physics query patterns are anticipated, but a crucial feature of the system must be to support a broad range of queries across all attributes. Concurrently, event tags from ATLAS Computing System Commissioning distributed simulations are accumulated in an Oracle-hosted database at CERN, providing an event-level selection service valuable for user experience and for gathering information about physics query patterns. In this paper we describe the status of the global TAG relational database scalability work and highlight areas of future direction.

  15. A scalable method for parallelizing sampling-based motion planning algorithms

    KAUST Repository

    Jacobs, Sam Ade; Manavi, Kasra; Burgos, Juan; Denny, Jory; Thomas, Shawna; Amato, Nancy M.

    2012-01-01

    This paper describes a scalable method for parallelizing sampling-based motion planning algorithms. It subdivides configuration space (C-space) into (possibly overlapping) regions and independently, in parallel, uses standard (sequential) sampling-based planners to construct roadmaps in each region. Next, in parallel, regional roadmaps in adjacent regions are connected to form a global roadmap. By subdividing the space and restricting the locality of connection attempts, we reduce the work and inter-processor communication associated with nearest neighbor calculation, a critical bottleneck for scalability in existing parallel motion planning methods. We show that our method is general enough to handle a variety of planning schemes, including the widely used Probabilistic Roadmap (PRM) and Rapidly-exploring Random Trees (RRT) algorithms. We compare our approach to two other existing parallel algorithms and demonstrate that our approach achieves better and more scalable performance. Our approach achieves almost linear scalability on a 2400 core LINUX cluster and on a 153,216 core Cray XE6 petascale machine. © 2012 IEEE.

  16. A scalable method for parallelizing sampling-based motion planning algorithms

    KAUST Repository

    Jacobs, Sam Ade

    2012-05-01

    This paper describes a scalable method for parallelizing sampling-based motion planning algorithms. It subdivides configuration space (C-space) into (possibly overlapping) regions and independently, in parallel, uses standard (sequential) sampling-based planners to construct roadmaps in each region. Next, in parallel, regional roadmaps in adjacent regions are connected to form a global roadmap. By subdividing the space and restricting the locality of connection attempts, we reduce the work and inter-processor communication associated with nearest neighbor calculation, a critical bottleneck for scalability in existing parallel motion planning methods. We show that our method is general enough to handle a variety of planning schemes, including the widely used Probabilistic Roadmap (PRM) and Rapidly-exploring Random Trees (RRT) algorithms. We compare our approach to two other existing parallel algorithms and demonstrate that our approach achieves better and more scalable performance. Our approach achieves almost linear scalability on a 2400 core LINUX cluster and on a 153,216 core Cray XE6 petascale machine. © 2012 IEEE.
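
    The core idea of the paper, building regional roadmaps independently and then stitching them together, can be illustrated with a deliberately simplified 2D sketch: the "C-space" is just the unit square with circular obstacles, the subdivision is a 2x2 grid, and edge collision checking is omitted. Everything here (obstacle layout, sample counts, connection radius) is an assumption for illustration, not the authors' implementation.

```python
# Simplified region-subdivision PRM: build small roadmaps per region in parallel,
# then stitch them into a global roadmap. Edge collision checks are omitted.
import itertools
from concurrent.futures import ProcessPoolExecutor

import numpy as np

# Circular obstacles in the unit-square "C-space": (center, radius).
OBSTACLES = [((0.5, 0.5), 0.15), ((0.2, 0.8), 0.10)]

def collision_free(p):
    return all(np.hypot(p[0] - cx, p[1] - cy) > r for (cx, cy), r in OBSTACLES)

def build_regional_roadmap(args):
    """Build a small PRM roadmap restricted to one 0.5 x 0.5 region of the unit square."""
    (rx, ry), n_samples, radius, seed = args
    rng = np.random.default_rng(seed)
    lo = np.array([rx, ry]) * 0.5
    pts = [p for p in (lo + rng.random(2) * 0.5 for _ in range(n_samples)) if collision_free(p)]
    edges = {(i, j) for i, j in itertools.combinations(range(len(pts)), 2)
             if np.linalg.norm(pts[i] - pts[j]) < radius}
    return pts, edges

if __name__ == "__main__":
    regions = [(rx, ry) for rx in range(2) for ry in range(2)]       # 2 x 2 subdivision
    jobs = [(r, 100, 0.15, seed) for seed, r in enumerate(regions)]
    with ProcessPoolExecutor() as pool:                              # regional roadmaps in parallel
        roadmaps = list(pool.map(build_regional_roadmap, jobs))

    # Stitch the regional roadmaps into one global roadmap.
    all_pts = [p for pts, _ in roadmaps for p in pts]
    offsets = np.cumsum([0] + [len(pts) for pts, _ in roadmaps])
    global_edges = {(off + i, off + j)
                    for (pts, edges), off in zip(roadmaps, offsets) for i, j in edges}
    for a, b in itertools.combinations(range(len(all_pts)), 2):      # cross-region connections
        if np.linalg.norm(all_pts[a] - all_pts[b]) < 0.15:
            global_edges.add((a, b))
    print(len(all_pts), "nodes,", len(global_edges), "edges in the global roadmap")
```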

  17. Cooperative Jamming for Physical Layer Security in Wireless Sensor Networks

    DEFF Research Database (Denmark)

    Rohokale, Vandana M.; Prasad, Neeli R.; Prasad, Ramjee

    2012-01-01

    Interference is generally considered as the redundant and unwanted occurrence in wireless communication. This work proposes a novel cooperative jamming mechanism for scalable networks like Wireless Sensor Networks (WSNs) which makes use of friendly interference to confuse the eavesdropper...

  18. Impact of packet losses in scalable 3D holoscopic video coding

    Science.gov (United States)

    Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.

    2014-05-01

    Holoscopic imaging has become a prospective glasses-free 3D technology for providing more natural 3D viewing experiences to the end user. Additionally, holoscopic systems also allow new post-production degrees of freedom, such as controlling the plane of focus or the viewing angle presented to the user. However, to successfully introduce this technology into the consumer market, a display-scalable coding approach is essential to achieve backward compatibility with legacy 2D and 3D displays. Moreover, to effectively transmit 3D holoscopic content over error-prone networks, e.g., wireless networks or the Internet, error resilience techniques are required to mitigate the impact of data impairments on the user's quality perception. Therefore, it is essential to deeply understand the impact of packet losses in terms of decoded video quality for the specific case of 3D holoscopic content, notably when a scalable approach is used. In this context, this paper studies the impact of packet losses when using a three-layer display-scalable 3D holoscopic video coding architecture previously proposed, where each layer represents a different level of display scalability (i.e., L0 - 2D, L1 - stereo or multiview, and L2 - full 3D holoscopic). For this, a simple error concealment algorithm is used, which makes use of inter-layer redundancy between multiview and 3D holoscopic content and the inherent correlation of the 3D holoscopic content to estimate lost data. Furthermore, a study of the influence of the 2D view generation parameters used in lower layers on the performance of the used error concealment algorithm is also presented.

  19. A scalable architecture for online anomaly detection of WLCG batch jobs

    Science.gov (United States)

    Kuehn, E.; Fischer, M.; Giffels, M.; Jung, C.; Petzold, A.

    2016-10-01

    For data centres it is increasingly important to monitor the network usage and to learn from network usage patterns. In particular, configuration issues or misbehaving batch jobs preventing a smooth operation need to be detected as early as possible. At the GridKa data and computing centre we therefore operate a tool, BPNetMon, for monitoring traffic data and characteristics of WLCG batch jobs and pilots locally on different worker nodes. On the one hand, local information by itself is not sufficient to detect anomalies for several reasons, e.g. the underlying job distribution on a single worker node might change or there might be a local misconfiguration. On the other hand, a centralised anomaly detection approach does not scale with regard to network communication as well as computational costs. We therefore propose a scalable architecture based on concepts of a super-peer network.

  20. Statistical Analysis of Video Frame Size Distribution Originating from Scalable Video Codec (SVC)

    Directory of Open Access Journals (Sweden)

    Sima Ahmadpour

    2017-01-01

    Full Text Available Designing an effective and high performance network requires an accurate characterization and modeling of network traffic. The modeling of video frame sizes is normally applied in simulation studies and mathematical analysis, and in generating streams for testing and compliance purposes. Besides, video traffic is assumed to be a major source of multimedia traffic in future heterogeneous networks. Therefore, the statistical distribution of video data can be used as an input for performance modeling of networks. This paper identifies the theoretical distribution that appears most relevant to the video trace in terms of its statistical properties, selecting the best-fitting distribution using both a graphical method and a hypothesis test. The data set used in this article consists of layered video traces generated using the Scalable Video Codec (SVC) video compression technique for three different movies.
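
    As a rough illustration of the fitting-plus-testing workflow described above, the snippet below fits a few candidate distributions to a synthetic sequence of frame sizes with SciPy and scores them with a Kolmogorov-Smirnov test; the gamma-distributed stand-in data and the candidate set are assumptions, and a KS test with fitted parameters is only an approximate check.

```python
# Fit a few candidate distributions to (synthetic) encoded-frame sizes and score
# the fits with a Kolmogorov-Smirnov test. The gamma-distributed stand-in trace
# and the candidate list are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
frame_sizes = rng.gamma(shape=2.5, scale=4000, size=5000)   # stand-in for an SVC frame-size trace

for name in ("gamma", "lognorm", "weibull_min"):
    dist = getattr(stats, name)
    params = dist.fit(frame_sizes)
    ks_stat, p_value = stats.kstest(frame_sizes, name, args=params)
    print(f"{name:12s} KS statistic = {ks_stat:.4f}, p = {p_value:.3g}")
```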

  1. Distributed picture compilation demonstration

    Science.gov (United States)

    Alexander, Richard; Anderson, John; Leal, Jeff; Mullin, David; Nicholson, David; Watson, Graham

    2004-08-01

    A physical demonstration of distributed surveillance and tracking is described. The demonstration environment is an outdoor car park overlooked by a system of four rooftop cameras. The cameras extract moving objects from the scene, and these objects are tracked in a decentralized way, over a real communication network, using the information form of the standard Kalman filter. Each node therefore has timely access to the complete global picture and because there is no single point of failure in the system, it is robust. The demonstration system and its main components are described here, with an emphasis on some of the lessons we have learned as a result of applying a corpus of distributed data fusion theory and algorithms in practice. Initial results are presented and future plans to scale up the network are also outlined.
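
    The information form mentioned above is what makes decentralized fusion simple: each camera's measurement enters as an additive information contribution. The sketch below shows a single predict-and-fuse cycle for one tracked object under a constant-velocity model; the noise levels, prior and "truth" state are invented for illustration and this is not the demonstrator's actual software.

```python
# One predict-and-fuse cycle of an information-form (inverse-covariance) Kalman
# filter: each camera contributes H^T R^-1 H and H^T R^-1 z, and the contributions
# simply add, which is what makes decentralized fusion straightforward.
import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)       # constant-velocity model for [x, y, vx, vy]
Q = 0.01 * np.eye(4)                             # process noise
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)        # each camera measures position (x, y)

def predict(Y, y):
    """Information-form prediction, done by converting to covariance form and back."""
    P = np.linalg.inv(Y)
    x = P @ y
    P = F @ P @ F.T + Q
    Y_pred = np.linalg.inv(P)
    return Y_pred, Y_pred @ (F @ x)

Y = np.linalg.inv(10.0 * np.eye(4))              # weak prior information
y = np.zeros(4)

truth = np.array([2.0, 3.0, 0.5, -0.2])
rng = np.random.default_rng(0)
camera_noise_std = [0.2, 0.3, 0.5, 0.4]          # one entry per camera node

Y, y = predict(Y, y)
for sigma in camera_noise_std:                   # each node's contribution is additive
    z = truth[:2] + rng.normal(0.0, sigma, size=2)
    R_inv = np.eye(2) / sigma**2
    Y += H.T @ R_inv @ H
    y += H.T @ R_inv @ z

x_est = np.linalg.solve(Y, y)
print("fused position estimate:", x_est[:2].round(3), " truth:", truth[:2])
```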

  2. Scalable conditional induction variables (CIV) analysis

    DEFF Research Database (Denmark)

    Oancea, Cosmin Eugen; Rauchwerger, Lawrence

    2015-01-01

    Subscripts using induction variables that cannot be expressed as a formula in terms of the enclosing-loop indices appear in the low-level implementation of common programming abstractions such as filter or stack operations and pose significant challenges to automatic parallelization. Because the complexity of such induction variables is often due to their conditional evaluation across the iteration space of loops, we name them Conditional Induction Variables (CIV). This paper presents a flow-sensitive technique that summarizes both such CIV-based and affine subscripts to program level, using the same... parallelizing compiler, and evaluated its impact on five Fortran benchmarks. We have found that there are many important loops using CIV subscripts and that our analysis can lead to their scalable parallelization. This in turn has led to the parallelization of the benchmark programs they appear in.

  3. Scalable Faceted Ranking in Tagging Systems

    Science.gov (United States)

    Orlicki, José I.; Alvarez-Hamelin, J. Ignacio; Fierens, Pablo I.

    Nowadays, web collaborative tagging systems, which allow users to upload, comment on and recommend content, are growing. Such systems can be represented as graphs where nodes correspond to users and tagged links to recommendations. In this paper we analyze the problem of computing a ranking of users with respect to a facet described as a set of tags. A straightforward solution is to compute a PageRank-like algorithm on a facet-related graph, but this is not feasible for online computation. We propose an alternative: (i) a ranking for each tag is computed offline on the basis of tag-related subgraphs; (ii) a faceted order is generated online by merging the rankings corresponding to all the tags in the facet. Based on a graph analysis of YouTube and Flickr, we show that step (i) is scalable. We also present efficient algorithms for step (ii), which are evaluated by comparing their results with two gold standards.
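
    Step (ii), merging per-tag rankings into a faceted order, can be shown with a toy example. The sketch below uses a simple Borda-style merge over invented per-tag rankings; the paper's own merging algorithms are more refined, so this only conveys the shape of the online step.

```python
# Toy version of the online merge (step ii): combine precomputed per-tag rankings
# into one faceted ranking with Borda-style scoring. Names and rankings are invented.
per_tag_ranking = {
    "cats":  ["alice", "bob", "carol", "dave"],
    "funny": ["bob", "alice", "dave", "carol"],
    "music": ["carol", "bob", "alice", "eve"],
}

def faceted_ranking(facet_tags, rankings):
    scores = {}
    for tag in facet_tags:
        ranking = rankings[tag]
        n = len(ranking)
        for pos, user in enumerate(ranking):
            scores[user] = scores.get(user, 0) + (n - pos)   # Borda points: higher = better
    return sorted(scores, key=scores.get, reverse=True)

print(faceted_ranking(["cats", "funny"], per_tag_ranking))   # merged order for the facet {cats, funny}
```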

  4. A graph algebra for scalable visual analytics.

    Science.gov (United States)

    Shaverdian, Anna A; Zhou, Hao; Michailidis, George; Jagadish, Hosagrahar V

    2012-01-01

    Visual analytics (VA), which combines analytical techniques with advanced visualization features, is fast becoming a standard tool for extracting information from graph data. Researchers have developed many tools for this purpose, suggesting a need for formal methods to guide these tools' creation. Increased data demands on computing require redesigning VA tools to consider performance and reliability in the context of analysis of exascale datasets. Furthermore, visual analysts need a way to document their analyses for reuse and results justification. A VA graph framework encapsulated in a graph algebra helps address these needs. Its atomic operators include selection and aggregation. The framework employs a visual operator and supports dynamic attributes of data to enable scalable visual exploration of data.

  5. iSIGHT-FD scalability test report.

    Energy Technology Data Exchange (ETDEWEB)

    Clay, Robert L.; Shneider, Max S.

    2008-07-01

    The engineering analysis community at Sandia National Laboratories uses a number of internal and commercial software codes and tools, including mesh generators, preprocessors, mesh manipulators, simulation codes, post-processors, and visualization packages. We define an analysis workflow as the execution of an ordered, logical sequence of these tools. Various forms of analysis (and in particular, methodologies that use multiple function evaluations or samples) involve executing parameterized variations of these workflows. As part of the DART project, we are evaluating various commercial workflow management systems, including iSIGHT-FD from Engineous. This report documents the results of a scalability test that was driven by DAKOTA and conducted on a parallel computer (Thunderbird). The purpose of this experiment was to examine the suitability and performance of iSIGHT-FD for large-scale, parameterized analysis workflows. As the results indicate, we found iSIGHT-FD to be suitable for this type of application.

  6. Scalable group level probabilistic sparse factor analysis

    DEFF Research Database (Denmark)

    Hinrich, Jesper Løve; Nielsen, Søren Føns Vind; Riis, Nicolai Andre Brogaard

    2017-01-01

    Many data-driven approaches exist to extract neural representations of functional magnetic resonance imaging (fMRI) data, but most of them lack a proper probabilistic formulation. We propose a scalable group level probabilistic sparse factor analysis (psFA) allowing spatially sparse maps, component...... pruning using automatic relevance determination (ARD) and subject specific heteroscedastic spatial noise modeling. For task-based and resting state fMRI, we show that the sparsity constraint gives rise to components similar to those obtained by group independent component analysis. The noise modeling...... shows that noise is reduced in areas typically associated with activation by the experimental design. The psFA model identifies sparse components and the probabilistic setting provides a natural way to handle parameter uncertainties. The variational Bayesian framework easily extends to more complex...

  7. A versatile scalable PET processing system

    International Nuclear Information System (INIS)

    Dong, H.; Weisenberger, A.; McKisson, J.; Wenze, Xi; Cuevas, C.; Wilson, J.; Zukerman, L.

    2011-01-01

    Positron Emission Tomography (PET) historically has major clinical and preclinical applications in cancerous oncology, neurology, and cardiovascular diseases. Recently, in a new direction, an application specific PET system is being developed at Thomas Jefferson National Accelerator Facility (Jefferson Lab) in collaboration with Duke University, University of Maryland at Baltimore (UMAB), and West Virginia University (WVU) targeted for plant eco-physiology research. The new plant imaging PET system is versatile and scalable such that it could adapt to several plant imaging needs - imaging many important plant organs including leaves, roots, and stems. The mechanical arrangement of the detectors is designed to accommodate the unpredictable and random distribution in space of the plant organs without requiring the plant be disturbed. Prototyping such a system requires a new data acquisition system (DAQ) and data processing system which are adaptable to the requirements of these unique and versatile detectors.

  8. The Concept of Business Model Scalability

    DEFF Research Database (Denmark)

    Nielsen, Christian; Lund, Morten

    2015-01-01

    The power of business models lies in their ability to visualize and clarify how firms may configure their value creation processes. Among the key aspects of business model thinking are a focus on what the customer values, how this value is best delivered to the customer and how strategic partners...... are leveraged in this value creation, delivery and realization exercise. Central to the mainstream understanding of business models is the value proposition towards the customer and the hypothesis generated is that if the firm delivers to the customer what he/she requires, then there is a good foundation...... for a long-term profitable business. However, the message conveyed in this article is that while providing a good value proposition may help the firm ‘get by’, the really successful businesses of today are those able to reach the sweet-spot of business model scalability. This article introduces and discusses...

  9. Scalable quantum search using trapped ions

    International Nuclear Information System (INIS)

    Ivanov, S. S.; Ivanov, P. A.; Linington, I. E.; Vitanov, N. V.

    2010-01-01

    We propose a scalable implementation of Grover's quantum search algorithm in a trapped-ion quantum information processor. The system is initialized in an entangled Dicke state by using adiabatic techniques. The inversion-about-average and oracle operators take the form of single off-resonant laser pulses. This is made possible by utilizing the physical symmetries of the trapped-ion linear crystal. The physical realization of the algorithm represents a dramatic simplification: each logical iteration (oracle and inversion about average) requires only two physical interaction steps, in contrast to the large number of concatenated gates required by previous approaches. This not only facilitates the implementation but also increases the overall fidelity of the algorithm.
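
    The two operators that the proposal maps onto single laser pulses, the oracle and the inversion about the average, are easy to illustrate with a classical statevector simulation. The sketch below runs Grover iterations on a 16-item search space; the problem size and marked item are arbitrary, and the uniform initial superposition simply plays the role that the Dicke-state initialization plays in the trapped-ion scheme.

```python
# Classical statevector run of Grover's search: the oracle flips the sign of the
# marked amplitude, and "inversion about the average" reflects every amplitude
# about the mean. N = 16 and the marked item are arbitrary choices.
import numpy as np

n_items = 16
marked = 11
state = np.full(n_items, 1 / np.sqrt(n_items))           # uniform superposition

n_iter = int(round(np.pi / 4 * np.sqrt(n_items)))         # ~optimal number of iterations
for _ in range(n_iter):
    state[marked] *= -1                                    # oracle
    state = 2 * state.mean() - state                       # inversion about the average

print(f"success probability after {n_iter} iterations: {state[marked]**2:.3f}")
```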

  10. Scalable graphene aptasensors for drug quantification

    Science.gov (United States)

    Vishnubhotla, Ramya; Ping, Jinglei; Gao, Zhaoli; Lee, Abigail; Saouaf, Olivia; Vrudhula, Amey; Johnson, A. T. Charlie

    2017-11-01

    Simpler and more rapid approaches for therapeutic drug-level monitoring are highly desirable to enable use at the point-of-care. We have developed an all-electronic approach for detection of the HIV drug tenofovir based on scalable fabrication of arrays of graphene field-effect transistors (GFETs) functionalized with a commercially available DNA aptamer. The shift in the Dirac voltage of the GFETs varied systematically with the concentration of tenofovir in deionized water, with a detection limit less than 1 ng/mL. Tests against a set of negative controls confirmed the specificity of the sensor response. This approach offers the potential for further development into a rapid and convenient point-of-care tool with clinically relevant performance.

  11. Scalable Transactions for Web Applications in the Cloud

    NARCIS (Netherlands)

    Zhou, W.; Pierre, G.E.O.; Chi, C.-H.

    2009-01-01

    Cloud Computing platforms provide scalability and high availability properties for web applications but they sacrifice data consistency at the same time. However, many applications cannot afford any data inconsistency. We present a scalable transaction manager for NoSQL cloud database services to

  12. New Complexity Scalable MPEG Encoding Techniques for Mobile Applications

    Directory of Open Access Journals (Sweden)

    Stephan Mietens

    2004-03-01

    Full Text Available Complexity scalability offers the advantage of a one-time design of video applications for a large product family, including mobile devices, without the need to redesign the applications at the algorithmic level to meet the requirements of the different products. In this paper, we present complexity-scalable MPEG encoding having core modules with modifications for scalability. The interdependencies of the scalable modules and the system performance are evaluated. Experimental results show scalability giving a smooth change in complexity and corresponding video quality. Scalability is basically achieved by varying the number of computed DCT coefficients and the number of evaluated motion vectors, while other modules are designed such that they scale with the previous parameters. In the experiments using the “Stefan” sequence, the elapsed execution time of the scalable encoder, reflecting the computational complexity, can be gradually reduced to roughly 50% of its original execution time. The video quality scales between 20 dB and 48 dB PSNR with a unity quantizer setting, and between 21.5 dB and 38.5 dB PSNR for different sequences targeting 1500 kbps. The implemented encoder and the scalability techniques can be successfully applied in mobile systems based on MPEG video compression.
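
    The main complexity knob described above, computing fewer DCT coefficients, can be mimicked in a few lines: keep only the top-left k x k coefficients of an 8x8 block and observe how reconstruction quality degrades gradually as k shrinks. The random block below is a stand-in for real video data, so the absolute PSNR values are not meaningful.

```python
# Keep only the top-left k x k DCT coefficients of an 8x8 block (fewer coefficients
# computed = less work) and watch reconstruction quality change gradually with k.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(float)   # stand-in for a real video block

coeffs = dctn(block, norm="ortho")
for k in (8, 6, 4, 2):
    kept = np.zeros_like(coeffs)
    kept[:k, :k] = coeffs[:k, :k]
    recon = idctn(kept, norm="ortho")
    mse = np.mean((block - recon) ** 2)
    psnr = 10 * np.log10(255**2 / mse) if mse > 0 else float("inf")
    print(f"k={k}: PSNR {psnr:.1f} dB")
```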

  13. Column generation algorithms for virtual network embedding in flexi-grid optical networks.

    Science.gov (United States)

    Lin, Rongping; Luo, Shan; Zhou, Jingwei; Wang, Sheng; Chen, Bin; Zhang, Xiaoning; Cai, Anliang; Zhong, Wen-De; Zukerman, Moshe

    2018-04-16

    Network virtualization provides means for efficient management of network resources by embedding multiple virtual networks (VNs) that efficiently share the same substrate network. Such virtual network embedding (VNE) gives rise to the challenging problem of how to optimize resource allocation to VNs and guarantee their performance requirements. In this paper, we provide VNE algorithms for efficient management of flexi-grid optical networks. We provide an exact algorithm aiming to minimize the total embedding cost in terms of spectrum cost and computation cost for a single VN request. Then, to achieve scalability, we also develop a heuristic algorithm for the same problem. We apply these two algorithms to a dynamic traffic scenario where many VN requests arrive one by one. We first demonstrate by simulations, for the case of a six-node network, that the heuristic algorithm obtains blocking probabilities very close to those of the exact algorithm (about 0.2% higher). Then, for a network of realistic size (namely, USnet), we demonstrate that the blocking probability of our new heuristic algorithm is about one order of magnitude lower than that of a simpler heuristic algorithm, which was a component of an earlier published algorithm.

  14. Fourier transform based scalable image quality measure.

    Science.gov (United States)

    Narwaria, Manish; Lin, Weisi; McLoughlin, Ian; Emmanuel, Sabu; Chia, Liang-Tien

    2012-08-01

    We present a new image quality assessment (IQA) algorithm based on the phase and magnitude of the 2D (two-dimensional) Discrete Fourier Transform (DFT). The basic idea is to compare the phase and magnitude of the reference and distorted images to compute the quality score. However, it is well known that the Human Visual System's (HVS) sensitivity to different frequency components is not the same. We accommodate this fact via a simple yet effective strategy of non-uniform binning of the frequency components. This process also leads to a reduced-space representation of the image, thereby enabling the reduced-reference (RR) prospects of the proposed scheme. We employ linear regression to integrate the effects of the changes in phase and magnitude. In this way, the required weights are determined via proper training and hence are more convincing and effective. Lastly, using the fact that phase usually conveys more information than magnitude, we use only the phase for RR quality assessment. This provides the crucial advantage of a further reduction in the required amount of reference image information. The proposed method is therefore further scalable for RR scenarios. We report extensive experimental results using a total of 9 publicly available databases: 7 image databases (with a total of 3832 distorted images with diverse distortions) and 2 video databases (228 distorted videos in total). These show that the proposed method is overall better than several of the existing full-reference (FR) algorithms and two RR algorithms. Additionally, there is a graceful degradation in prediction performance as the amount of reference image information is reduced, thereby confirming its scalability prospects. To enable comparisons and future study, a Matlab implementation of the proposed algorithm is available at http://www.ntu.edu.sg/home/wslin/reduced_phase.rar.
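
    The core comparison in the method, contrasting the phase and magnitude of the reference and distorted spectra, can be sketched as follows. This simplified version uses uniform weighting and no non-uniform binning or learned regression weights, so it is only a rough illustration of the idea, and the random test images are assumptions.

```python
# Compare the 2D DFT phase and magnitude of a reference and a distorted image.
# Uniform weighting replaces the paper's non-uniform binning and learned weights.
import numpy as np

def dft_phase_magnitude_error(ref, dist):
    F_ref, F_dist = np.fft.fft2(ref), np.fft.fft2(dist)
    phase_err = np.abs(np.angle(F_ref * np.conj(F_dist)))       # per-frequency phase difference
    mag_err = np.abs(np.abs(F_ref) - np.abs(F_dist))
    return phase_err.mean(), mag_err.mean()

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
noisy = ref + 0.1 * rng.standard_normal(ref.shape)
print("vs. itself:", dft_phase_magnitude_error(ref, ref))
print("vs. noisy :", dft_phase_magnitude_error(ref, noisy))
```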

  15. The Scalable Coherent Interface and related standards projects

    International Nuclear Information System (INIS)

    Gustavson, D.B.

    1991-09-01

    The Scalable Coherent Interface (SCI) project (IEEE P1596) found a way to avoid the limits that are inherent in bus technology. SCI provides bus-like services by transmitting packets on a collection of point-to-point unidirectional links. The SCI protocols support cache coherence in a distributed-shared-memory multiprocessor model, message passing, I/O, and local-area-network-like communication over fiber optic or wire links. VLSI circuits that operate parallel links at 1000 MByte/s and serial links at 1000 Mbit/s will be available early in 1992. Several ongoing SCI-related projects are applying the SCI technology to new areas or extending it to more difficult problems. P1596.1 defines the architecture of a bridge between SCI and VME; P1596.2 compatibly extends the cache coherence mechanism for efficient operation with kiloprocessor systems; P1596.3 defines new low-voltage (about 0.25 V) differential signals suitable for low power interfaces for CMOS or GaAs VLSI implementations of SCI; P1596.4 defines a high performance memory chip interface using these signals; P1596.5 defines data transfer formats for efficient interprocessor communication in heterogeneous multiprocessor systems. This paper reports the current status of SCI, related standards, and new projects. 16 refs

  16. A Scalable proxy cache for Grid Data Access

    International Nuclear Information System (INIS)

    Cristian Cirstea, Traian; Just Keijser, Jan; Arthur Koeroo, Oscar; Starink, Ronald; Alan Templon, Jeffrey

    2012-01-01

    We describe a prototype grid proxy cache system developed at Nikhef, motivated by a desire to construct the first building block of a future https-based Content Delivery Network for grid infrastructures. Two goals drove the project: firstly to provide a “native view” of the grid for desktop-type users, and secondly to improve performance for physics-analysis type use cases, where multiple passes are made over the same set of data (residing on the grid). We further constrained the design by requiring that the system should be made of standard components wherever possible. The prototype that emerged from this exercise is a horizontally-scalable, cooperating system of web server / cache nodes, fronted by a customized webDAV server. The webDAV server is custom only in the sense that it supports http redirects (providing horizontal scaling) and that the authentication module has, as back end, a proxy delegation chain that can be used by the cache nodes to retrieve files from the grid. The prototype was deployed at Nikhef and tested at a scale of several terabytes of data and approximately one hundred fast cores of computing. Both small and large files were tested, in a number of scenarios, and with various numbers of cache nodes, in order to understand the scaling properties of the system. For properly-dimensioned cache-node hardware, the system showed speedup of several integer factors for the analysis-type use cases. These results and others are presented and discussed.

  17. Game theoretic wireless resource allocation for H.264 MGS video transmission over cognitive radio networks

    Science.gov (United States)

    Fragkoulis, Alexandros; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.

    2015-03-01

    We propose a method for the fair and efficient allocation of wireless resources over a cognitive radio system network to transmit multiple scalable video streams to multiple users. The method exploits the dynamic architecture of the Scalable Video Coding extension of the H.264 standard, along with the diversity that OFDMA networks provide. We use a game-theoretic Nash Bargaining Solution (NBS) framework to ensure that each user receives the minimum video quality requirements, while maintaining fairness over the cognitive radio system. An optimization problem is formulated, where the objective is the maximization of the Nash product while minimizing the waste of resources. The problem is solved by using a Swarm Intelligence optimizer, namely Particle Swarm Optimization. Due to the high dimensionality of the problem, we also introduce a dimension-reduction technique. Our experimental results demonstrate the fairness imposed by the employed NBS framework.
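
    As a rough illustration of the optimization step, the sketch below maximizes a Nash product over per-user bandwidth shares with a minimal particle swarm optimizer. The toy utility function, minimum-quality thresholds and resource budget are stand-ins for the video-quality model, NBS constraints and OFDMA resources of the paper; the dimension-reduction technique is not shown.

      import numpy as np

      rng = np.random.default_rng(0)
      N_USERS, BUDGET = 4, 1.0
      D_MIN = np.full(N_USERS, 0.05)          # minimum acceptable utility per user (illustrative)

      def utility(x):                         # toy concave rate-utility curve
          return np.log1p(10 * x)

      def nash_objective(x):
          x = np.clip(x, 0.0, None)
          if x.sum() <= 0:
              return -np.inf
          x = x / x.sum() * BUDGET            # project onto the resource budget
          gain = utility(x) - D_MIN
          return -np.inf if np.any(gain <= 0) else float(np.sum(np.log(gain)))

      def pso(n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
          pos = rng.random((n_particles, N_USERS))
          vel = np.zeros_like(pos)
          pbest = pos.copy()
          pbest_val = np.array([nash_objective(p) for p in pos])
          gbest = pbest[pbest_val.argmax()].copy()
          for _ in range(iters):
              r1, r2 = rng.random((2, n_particles, N_USERS))
              vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
              pos = pos + vel
              vals = np.array([nash_objective(p) for p in pos])
              improved = vals > pbest_val
              pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
              gbest = pbest[pbest_val.argmax()].copy()
          best = np.clip(gbest, 0.0, None)
          return best / best.sum() * BUDGET   # final fair allocation

      print("allocation:", pso())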

  18. Peeking Network States with Clustered Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jinoh [Texas A & M Univ., Commerce, TX (United States); Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sim, Alex [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-10-20

    Network traffic monitoring has long been a core element for effective network management and security. However, it is still a challenging task with a high degree of complexity for comprehensive analysis when considering multiple variables and ever-increasing traffic volumes to monitor. For example, one of the widely considered approaches is to scrutinize probabilistic distributions, but it poses a scalability concern and multivariate analysis is not generally supported due to the exponential increase of the complexity. In this work, we propose a novel method for network traffic monitoring based on clustering, a powerful unsupervised learning technique. We show that the new approach enables us to recognize clustered results as patterns representing the network states, which can then be utilized to evaluate the “similarity” of network states over time. In addition, we define a new quantitative measure for the similarity between two compared network states observed in different time windows, as a supportive means for intuitive analysis. Finally, we demonstrate the clustering-based network monitoring with public traffic traces, and show that the proposed approach using the clustering method has a great opportunity for feasible, cost-effective network monitoring.
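
    A minimal sketch of the general idea, using scikit-learn's k-means rather than the paper's specific clustering method: each time window of traffic is summarized by simple per-flow features, the windows are clustered, and two network states are compared through their cluster-occupancy histograms. The features, the number of clusters and the similarity measure are illustrative assumptions.

      import numpy as np
      from sklearn.cluster import KMeans

      def window_features(flows):
          """flows: array of shape (n_flows, 3), e.g. [bytes, packets, duration] per flow."""
          return np.log1p(np.asarray(flows, dtype=float))

      def network_state(features, kmeans):
          labels = kmeans.predict(features)
          hist = np.bincount(labels, minlength=kmeans.n_clusters).astype(float)
          return hist / hist.sum()                   # cluster-occupancy "pattern" of the window

      def state_similarity(p, q):
          return 1.0 - 0.5 * np.abs(p - q).sum()     # 1 = identical patterns, 0 = disjoint

      rng = np.random.default_rng(1)
      history = window_features(rng.gamma(2.0, 100.0, size=(5000, 3)))   # synthetic training windows
      km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(history)
      w1 = window_features(rng.gamma(2.0, 100.0, size=(500, 3)))         # "normal" window
      w2 = window_features(rng.gamma(4.0, 300.0, size=(500, 3)))         # shifted traffic mix
      print(state_similarity(network_state(w1, km), network_state(w2, km)))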

  19. Scalability Dilemma and Statistic Multiplexed Computing — A Theory and Experiment

    Directory of Open Access Journals (Sweden)

    Justin Yuan Shi

    2017-08-01

    Full Text Available For the last three decades, end-to-end computing paradigms, such as MPI (Message Passing Interface), RPC (Remote Procedure Call) and RMI (Remote Method Invocation), have been the de facto paradigms for distributed and parallel programming. Despite these successes, applications built using these paradigms suffer because the probability of a crash grows in proportion to the application's size. Checkpoint/restore and backup/recovery are the only means to save otherwise lost critical information. The scalability dilemma is the practical challenge that the probability of data loss increases as the application scales in size. The theoretical significance of this practical challenge is that it undermines the fundamental structure of the scientific discovery process and mission critical services in production today. In 1997, the direct use of the end-to-end reference model in distributed programming was recognized as a fallacy. The scalability dilemma was predicted. However, this voice was overrun by the passage of time. Today, the rapidly growing digitized data demands solving the increasingly critical scalability challenges. Computing architecture scalability, although loosely defined, is now front and center of large-scale computing efforts. Constrained only by the economic law of diminishing returns, this paper proposes a narrow definition of a Scalable Computing Service (SCS). Three scalability tests are also proposed in order to distinguish service architecture flaws from poor application programming. Scalable data-intensive service requires additional treatments; thus, the data storage is assumed reliable in this paper. A single-sided Statistic Multiplexed Computing (SMC) paradigm is proposed. A UVR (Unidirectional Virtual Ring) SMC architecture is examined under the SCS tests. SMC was designed to circumvent the well-known impossibility of end-to-end paradigms. It relies on the proven statistic multiplexing principle to deliver reliable service

  20. Demonstration of superconducting micromachined cavities

    Energy Technology Data Exchange (ETDEWEB)

    Brecht, T., E-mail: teresa.brecht@yale.edu; Reagor, M.; Chu, Y.; Pfaff, W.; Wang, C.; Frunzio, L.; Devoret, M. H.; Schoelkopf, R. J. [Department of Applied Physics, Yale University, New Haven, Connecticut 06511 (United States)

    2015-11-09

    Superconducting enclosures will be key components of scalable quantum computing devices based on circuit quantum electrodynamics. Within a densely integrated device, they can protect qubits from noise and serve as quantum memory units. Whether constructed by machining bulk pieces of metal or microfabricating wafers, 3D enclosures are typically assembled from two or more parts. The resulting seams potentially dissipate crossing currents and limit performance. In this letter, we present measured quality factors of superconducting cavity resonators of several materials, dimensions, and seam locations. We observe that superconducting indium can be a low-loss RF conductor and form low-loss seams. Leveraging this, we create a superconducting micromachined resonator with indium that has a quality factor of two million, despite a greatly reduced mode volume. Inter-layer coupling to this type of resonator is achieved by an aperture located under a planar transmission line. The described techniques demonstrate a proof-of-principle for multilayer microwave integrated quantum circuits for scalable quantum computing.

  1. Dynamic Virtual LANs for Adaptive Network Security

    National Research Council Canada - National Science Library

    Merani, Diego; Berni, Alessandro; Leonard, Michel

    2004-01-01

    The development of Network-Enabled capabilities in support of undersea research requires architectures for the interconnection and data sharing that are flexible, scalable, and built on open standards...

  2. Scalable synthesis of hierarchically structured carbon nanotube-graphene fibres for capacitive energy storage

    Science.gov (United States)

    Yu, Dingshan; Goh, Kunli; Wang, Hong; Wei, Li; Jiang, Wenchao; Zhang, Qiang; Dai, Liming; Chen, Yuan

    2014-07-01

    Micro-supercapacitors are promising energy storage devices that can complement or even replace batteries in miniaturized portable electronics and microelectromechanical systems. Their main limitation, however, is the low volumetric energy density when compared with batteries. Here, we describe a hierarchically structured carbon microfibre made of an interconnected network of aligned single-walled carbon nanotubes with interposed nitrogen-doped reduced graphene oxide sheets. The nanomaterials form mesoporous structures of large specific surface area (396 m2 g-1) and high electrical conductivity (102 S cm-1). We develop a scalable method to continuously produce the fibres using a silica capillary column functioning as a hydrothermal microreactor. The resultant fibres show a specific volumetric capacitance as high as 305 F cm-3 in sulphuric acid (measured at 73.5 mA cm-3 in a three-electrode cell) or 300 F cm-3 in polyvinyl alcohol (PVA)/H3PO4 electrolyte (measured at 26.7 mA cm-3 in a two-electrode cell). A full micro-supercapacitor with PVA/H3PO4 gel electrolyte, free from binder, current collector and separator, has a volumetric energy density of ~6.3 mWh cm-3 (a value comparable to that of 4 V-500 µAh thin-film lithium batteries) while maintaining a power density more than two orders of magnitude higher than that of batteries, as well as a long cycle life. To demonstrate that our fibre-based, all-solid-state micro-supercapacitors can be easily integrated into miniaturized flexible devices, we use them to power an ultraviolet photodetector and a light-emitting diode.

  3. Embedded DCT and wavelet methods for fine granular scalable video: analysis and comparison

    Science.gov (United States)

    van der Schaar-Mitrea, Mihaela; Chen, Yingwei; Radha, Hayder

    2000-04-01

    Video transmission over bandwidth-varying networks is becoming increasingly important due to emerging applications such as streaming of video over the Internet. The fundamental obstacle in designing such systems resides in the varying characteristics of the Internet (i.e. bandwidth variations and packet-loss patterns). In MPEG-4, a new SNR scalability scheme, called Fine-Granular-Scalability (FGS), is currently under standardization, which is able to adapt in real-time (i.e. at transmission time) to Internet bandwidth variations. The FGS framework consists of a non-scalable motion-predicted base-layer and an intra-coded fine-granular scalable enhancement layer. For example, the base layer can be coded using a DCT-based MPEG-4 compliant, highly efficient video compression scheme. Subsequently, the difference between the original and decoded base-layer is computed, and the resulting FGS-residual signal is intra-frame coded with an embedded scalable coder. In order to achieve high coding efficiency when compressing the FGS enhancement layer, it is crucial to analyze the nature and characteristics of residual signals common to the SNR scalability framework (including FGS). In this paper, we present a thorough analysis of SNR residual signals by evaluating its statistical properties, compaction efficiency and frequency characteristics. The signal analysis revealed that the energy compaction of the DCT and wavelet transforms is limited and the frequency characteristic of SNR residual signals decay rather slowly. Moreover, the blockiness artifacts of the low bit-rate coded base-layer result in artificial high frequencies in the residual signal. Subsequently, a variety of wavelet and embedded DCT coding techniques applicable to the FGS framework are evaluated and their results are interpreted based on the identified signal properties. As expected from the theoretical signal analysis, the rate-distortion performances of the embedded wavelet and DCT-based coders are very
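
    The base-layer/residual structure described above can be illustrated with a toy sketch: plain quantization stands in for the DCT-based base layer, and the residual is transmitted bit-plane by bit-plane (most significant first) so the enhancement stream can be truncated at any point. This is only a schematic of the FGS idea, not an MPEG-4 codec.

      import numpy as np

      def encode_base(frame, step=32):
          return np.round(frame / step).astype(np.int32)     # stand-in for the coarse base layer

      def decode_base(q, step=32):
          return q * step

      def fgs_bitplanes(frame, base_decoded, n_planes=5):
          residual = frame.astype(np.int32) - base_decoded   # FGS residual signal
          sign, mag = np.sign(residual), np.abs(residual)
          planes = [((mag >> p) & 1) for p in range(n_planes - 1, -1, -1)]   # MSB first
          return sign, planes

      def fgs_reconstruct(base_decoded, sign, planes, n_received):
          mag = np.zeros_like(base_decoded)
          for i in range(n_received):                        # use only the planes that arrived
              mag = (mag << 1) | planes[i]
          mag = mag << (len(planes) - n_received)            # align the partial magnitude
          return base_decoded + sign * mag

      frame = np.random.default_rng(2).integers(0, 256, (8, 8))
      base = decode_base(encode_base(frame))
      sign, planes = fgs_bitplanes(frame, base)
      print(np.abs(frame - fgs_reconstruct(base, sign, planes, n_received=3)).mean())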

  4. A Platform for Scalable Satellite and Geospatial Data Analysis

    Science.gov (United States)

    Beneke, C. M.; Skillman, S.; Warren, M. S.; Kelton, T.; Brumby, S. P.; Chartrand, R.; Mathis, M.

    2017-12-01

    At Descartes Labs, we use the commercial cloud to run global-scale machine learning applications over satellite imagery. We have processed over 5 Petabytes of public and commercial satellite imagery, including the full Landsat and Sentinel archives. By combining open-source tools with a FUSE-based filesystem for cloud storage, we have enabled a scalable compute platform that has demonstrated reading over 200 GB/s of satellite imagery into cloud compute nodes. In one application, we generated global 15m Landsat-8, 20m Sentinel-1, and 10m Sentinel-2 composites from 15 trillion pixels, using over 10,000 CPUs. We recently created a public open-source Python client library that can be used to query and access preprocessed public satellite imagery from within our platform, and made this platform available to researchers for non-commercial projects. In this session, we will describe how you can use the Descartes Labs Platform for rapid prototyping and scaling of geospatial analyses and demonstrate examples in land cover classification.

  5. Scalable Pressure Sensor Based on Electrothermally Operated Resonator

    KAUST Repository

    Hajjaj, Amal Z.; Hafiz, Md Abdullah Al; Alcheikh, Nouha; Younis, Mohammad I.

    2017-01-01

    We experimentally demonstrate a new pressure sensor that offers the flexibility of being scalable to small sizes up to the nano regime. Unlike conventional pressure sensors that rely on large diaphragms and big-surface structures, the principle of operation here relies on convective cooling of the air surrounding an electrothermally heated resonant structure, which can be a beam or a bridge. This concept is demonstrated using an electrothermally tuned and electrostatically driven MEMS resonator, which is designed to be deliberately curved. We show that the variation of pressure can be tracked accurately by monitoring the change in the resonance frequency of the resonator at a constant electrothermal voltage. We show that the range of the sensed pressure and the sensitivity of detection are controllable by the amount of the applied electrothermal voltage. Theoretically, we verify the device concept using a multi-physics nonlinear finite element model. The proposed pressure sensor is simple in principle and design and offers the possibility of further miniaturization to the nanoscale.

  6. Scalable Pressure Sensor Based on Electrothermally Operated Resonator

    KAUST Repository

    Hajjaj, Amal Z.

    2017-11-03

    We experimentally demonstrate a new pressure sensor that offers the flexibility of being scalable to small sizes up to the nano regime. Unlike conventional pressure sensors that rely on large diaphragms and big-surface structures, the principle of operation here relies on convective cooling of the air surrounding an electrothermally heated resonant structure, which can be a beam or a bridge. This concept is demonstrated using an electrothermally tuned and electrostatically driven MEMS resonator, which is designed to be deliberately curved. We show that the variation of pressure can be tracked accurately by monitoring the change in the resonance frequency of the resonator at a constant electrothermal voltage. We show that the range of the sensed pressure and the sensitivity of detection are controllable by the amount of the applied electrothermal voltage. Theoretically, we verify the device concept using a multi-physics nonlinear finite element model. The proposed pressure sensor is simple in principle and design and offers the possibility of further miniaturization to the nanoscale.

  7. Exploration Medical System Demonstration

    Science.gov (United States)

    Rubin, D. A.; Watkins, S. D.

    2014-01-01

    BACKGROUND: Exploration class missions will present significant new challenges and hazards to the health of the astronauts. Regardless of the intended destination, beyond low Earth orbit a greater degree of crew autonomy will be required to diagnose medical conditions, develop treatment plans, and implement procedures due to limited communications with ground-based personnel. SCOPE: The Exploration Medical System Demonstration (EMSD) project will act as a test bed on the International Space Station (ISS) to demonstrate to crew and ground personnel that an end-to-end medical system can assist clinician and non-clinician crew members in optimizing medical care delivery and data management during an exploration mission. Challenges facing exploration mission medical care include limited resources, inability to evacuate to Earth during many mission phases, and potential rendering of medical care by non-clinicians. This system demonstrates the integration of medical devices and informatics tools for managing evidence and decision making and can be designed to assist crewmembers in nominal, non-emergent situations and in emergent situations when they may be suffering from performance decrements due to environmental, physiological or other factors. PROJECT OBJECTIVES: The objectives of the EMSD project are to: a. Reduce or eliminate the time required of an on-orbit crew and ground personnel to access, transfer, and manipulate medical data. b. Demonstrate that the on-orbit crew has the ability to access medical data/information via an intuitive and crew-friendly solution to aid in the treatment of a medical condition. c. Develop a common data management framework that can be ubiquitously used to automate repetitive data collection, management, and communications tasks for all activities pertaining to crew health and life sciences. d. Ensure crew access to medical data during periods of restricted ground communication. e. Develop a common data management framework that

  8. Remote monitoring demonstration

    International Nuclear Information System (INIS)

    Caskey, Susan; Olsen, John

    2006-01-01

    The recently upgraded remote monitoring system at the Joyo Experimental Reactor uses a DCM-14 camera module and GEMINI software. The final data is compatible both with the IAEA-approved GARS review software and the ALIS software that was used for this demonstration. Features of the remote monitoring upgrade emphasized compatibility with IAEA practice. This presentation gives particular attention to the selection process for meeting network security considerations at the O'arai site. The Joyo system is different from the NNCA's ACPF system, in that it emphasizes use of IAEA standard camera technology and data acquisition and transmission software. In the demonstration itself, a temporary virtual private network (VPN) between the meeting room and the server at Sandia in Albuquerque allowed attendees to observe data stored from routine transmissions from the Joyo Fresh Fuel Storage to Sandia. Image files from a fuel movement earlier in the month showed Joyo workers and IAEA inspectors carrying out a transfer. (author)

  9. Error-Resilient Unequal Error Protection of Fine Granularity Scalable Video Bitstreams

    Science.gov (United States)

    Cai, Hua; Zeng, Bing; Shen, Guobin; Xiong, Zixiang; Li, Shipeng

    2006-12-01

    This paper deals with the optimal packet loss protection issue for streaming fine granularity scalable (FGS) video bitstreams over IP networks. Unlike many other existing protection schemes, we develop an error-resilient unequal error protection (ER-UEP) method that adds redundant information optimally for loss protection and, at the same time, completely cancels the dependency within the bitstream after loss recovery. In our ER-UEP method, the FGS enhancement-layer bitstream is first packetized into a group of independent and scalable data packets. Parity packets, which are also scalable, are then generated. Unequal protection is finally achieved by properly shaping the data packets and the parity packets. We present an algorithm that can optimally allocate the rate budget between data packets and parity packets, together with several simplified versions that have lower complexity. Compared with conventional UEP schemes that suffer from bit contamination (caused by the bit dependency within a bitstream), our method guarantees successful decoding of all received bits, thus leading to strong error-resilience (at any fixed channel bandwidth) and high robustness (under varying and/or unclean channel conditions).
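
    The following sketch illustrates only the packetization-plus-parity idea in the simplest possible form: the embedded enhancement bitstream is cut into independent packets, and earlier (more important) packets are placed in smaller parity groups, i.e. they receive relatively more redundancy. A single XOR parity per group (which recovers one loss) stands in for the scalable parity packets and the optimal rate allocation of the paper.

      import numpy as np

      def packetize(bitstream, pkt_len):
          return [bitstream[i:i + pkt_len] for i in range(0, len(bitstream), pkt_len)]

      def add_parity(packets, group_sizes):
          """group_sizes such as [2, 2, 4]: small groups first = stronger protection."""
          protected, i = [], 0
          for g in group_sizes:
              group = packets[i:i + g]
              if not group:
                  break
              parity = np.bitwise_xor.reduce([np.frombuffer(p, dtype=np.uint8) for p in group])
              protected.append((group, parity.tobytes()))
              i += g
          return protected

      def recover(group, parity, lost_index):
          """Rebuild a single lost packet of a group from the remaining packets and the parity."""
          rest = [np.frombuffer(p, dtype=np.uint8) for j, p in enumerate(group) if j != lost_index]
          return np.bitwise_xor.reduce(rest + [np.frombuffer(parity, dtype=np.uint8)]).tobytes()

      stream = np.random.default_rng(3).integers(0, 256, 64, dtype=np.uint8).tobytes()
      groups = add_parity(packetize(stream, 8), [2, 2, 4])
      grp, par = groups[0]
      assert recover(grp, par, lost_index=1) == grp[1]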

  10. Group Centric Networking: Large Scale Over the Air Testing of Group Centric Networking

    Science.gov (United States)

    2016-11-01

    Large Scale Over-the-Air Testing of Group Centric Networking. Logan Mercer, Greg Kuperman, Andrew Hunter, Brian Proulx (MIT Lincoln Laboratory). ... performance of Group Centric Networking (GCN), a networking protocol developed for robust and scalable communications in lossy networks where users are ... devices, and the ad-hoc nature of the network. Group Centric Networking (GCN) is a proposed networking protocol that addresses challenges specific to ...

  11. A Scalable Gaussian Process Analysis Algorithm for Biomass Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Chandola, Varun [ORNL; Vatsavai, Raju [ORNL

    2011-01-01

    Biomass monitoring is vital for studying the carbon cycle of earth's ecosystem and has several significant implications, especially in the context of understanding climate change and its impacts. Recently, several change detection methods have been proposed to identify land cover changes in temporal profiles (time series) of vegetation collected using remote sensing instruments, but do not satisfy one or both of the two requirements of the biomass monitoring problem, i.e., operating in online mode and handling periodic time series. In this paper, we adapt Gaussian process regression to detect changes in such time series in an online fashion. While Gaussian processes (GPs) have been widely used as kernel-based learning methods for regression and classification, their applicability to massive spatio-temporal data sets, such as remote sensing data, has been limited owing to the high computational costs involved. We focus on addressing the scalability issues associated with the proposed GP based change detection algorithm. This paper makes several significant contributions. First, we propose a GP based online time series change detection algorithm and demonstrate its effectiveness in detecting different types of changes in Normalized Difference Vegetation Index (NDVI) data obtained from a study area in Iowa, USA. Second, we propose an efficient Toeplitz matrix based solution which significantly improves the computational complexity and memory requirements of the proposed GP based method. Specifically, the proposed solution can analyze a time series of length t in O(t^2) time while maintaining an O(t) memory footprint, compared to the O(t^3) time and O(t^2) memory requirement of standard matrix manipulation based methods. Third, we describe a parallel version of the proposed solution which can be used to simultaneously analyze a large number of time series. We study three different parallel implementations: using threads, MPI, and a
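
    The Toeplitz trick is easy to show in a few lines: for evenly spaced observations and a stationary kernel, the covariance matrix is Toeplitz, so the solve behind the GP posterior mean can use Levinson recursion (O(t^2) time, O(t) extra memory) instead of a generic O(t^3) factorization. The sketch below uses SciPy's solve_toeplitz with an illustrative RBF kernel and noise level; it is not the authors' change detection code.

      import numpy as np
      from scipy.linalg import solve_toeplitz

      def rbf(tau, length=5.0, var=1.0):
          return var * np.exp(-0.5 * (tau / length) ** 2)

      t = np.arange(512)                             # evenly spaced time index
      y = np.sin(2 * np.pi * t / 46) + 0.1 * np.random.default_rng(4).standard_normal(t.size)

      first_col = rbf(t - t[0])                      # first column defines the whole Toeplitz K
      first_col[0] += 0.1 ** 2                       # observation noise on the diagonal
      alpha = solve_toeplitz(first_col, y)           # K^{-1} y via Levinson recursion, O(t^2)

      k_star = rbf(t - t[-1])                        # covariance between the last time step and all others
      print(k_star @ alpha, y[-1])                   # GP posterior mean vs. the noisy observation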

  12. Oracle database performance and scalability a quantitative approach

    CERN Document Server

    Liu, Henry H

    2011-01-01

    A data-driven, fact-based, quantitative text on Oracle performance and scalability With database concepts and theories clearly explained in Oracle's context, readers quickly learn how to fully leverage Oracle's performance and scalability capabilities at every stage of designing and developing an Oracle-based enterprise application. The book is based on the author's more than ten years of experience working with Oracle, and is filled with dependable, tested, and proven performance optimization techniques. Oracle Database Performance and Scalability is divided into four parts that enable reader

  13. Scalable-to-lossless transform domain distributed video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Ukhanova, Ann; Veselov, Anton

    2010-01-01

    Distributed video coding (DVC) is a novel approach providing new features such as low complexity encoding by mainly exploiting the source statistics at the decoder based on the availability of decoder side information. In this paper, scalable-to-lossless DVC is presented based on extending a lossy Tran... codec provides frame by frame encoding. Comparing the lossless coding efficiency, the proposed scalable-to-lossless TDWZ video codec can save up to 5%-13% bits compared to JPEG LS and H.264 Intra frame lossless coding and does so as a scalable-to-lossless coding.

  14. Design issues for numerical libraries on scalable multicore architectures

    International Nuclear Information System (INIS)

    Heroux, M A

    2008-01-01

    Future generations of scalable computers will rely on multicore nodes for a significant portion of overall system performance. At present, most applications and libraries cannot exploit multiple cores beyond running additional MPI processes per node. In this paper we discuss important multicore architecture issues, programming models, algorithm requirements and software design related to effective use of scalable multicore computers. In particular, we focus on important issues for library research and development, making recommendations for how to effectively develop libraries for future scalable computer systems

  15. Network Characterization Service (NCS)

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Guojun [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Yang, George [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Crowley, Brian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Agarwal, Deborah [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2001-06-06

    Distributed applications require information to effectively utilize the network. Some of the information they require is the current and maximum bandwidth, current and minimum latency, bottlenecks, burst frequency, and congestion extent. This type of information allows applications to determine parameters like optimal TCP buffer size. In this paper, we present a cooperative information-gathering tool called the network characterization service (NCS). NCS runs in user space and is used to acquire network information. Its protocol is designed for scalable and distributed deployment, similar to DNS. Its algorithms provide efficient, speedy and accurate detection of bottlenecks, especially dynamic bottlenecks. On current and future networks, dynamic bottlenecks do and will affect network performance dramatically.
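
    One direct use of the measurements NCS gathers is sizing the TCP buffer from the bandwidth-delay product of the path, as in this small hedged sketch (the numbers are illustrative):

      def optimal_tcp_buffer(bottleneck_bps, rtt_s):
          """Bandwidth-delay product in bytes: the amount of data that can be in flight."""
          return int(bottleneck_bps * rtt_s / 8)

      # a 1 Gbit/s bottleneck with an 80 ms round-trip time needs roughly 10 MB of buffer
      print(optimal_tcp_buffer(1e9, 0.080) / 1e6, "MB")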

  16. Building Scalable Knowledge Graphs for Earth Science

    Science.gov (United States)

    Ramachandran, R.; Maskey, M.; Gatlin, P. N.; Zhang, J.; Duan, X.; Bugbee, K.; Christopher, S. A.; Miller, J. J.

    2017-12-01

    Estimates indicate that the world's information will grow by 800% in the next five years. In any given field, a single researcher or a team of researchers cannot keep up with this rate of knowledge expansion without the help of cognitive systems. Cognitive computing, defined as the use of information technology to augment human cognition, can help tackle large systemic problems. Knowledge graphs, one of the foundational components of cognitive systems, link key entities in a specific domain with other entities via relationships. Researchers could mine these graphs to make probabilistic recommendations and to infer new knowledge. At this point, however, there is a dearth of tools to generate scalable Knowledge graphs using existing corpus of scientific literature for Earth science research. Our project is currently developing an end-to-end automated methodology for incrementally constructing Knowledge graphs for Earth Science. Semantic Entity Recognition (SER) is one of the key steps in this methodology. SER for Earth Science uses external resources (including metadata catalogs and controlled vocabulary) as references to guide entity extraction and recognition (i.e., labeling) from unstructured text, in order to build a large training set to seed the subsequent auto-learning component in our algorithm. Results from several SER experiments will be presented as well as lessons learned.
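
    A minimal sketch of the vocabulary-guided labeling step described above: terms from a controlled vocabulary are matched in raw text to produce weakly labeled spans that could seed a learning component. The vocabulary entries and tags are purely illustrative, not the project's actual ontology.

      import re

      VOCAB = {                                   # hypothetical controlled-vocabulary entries
          "sea surface temperature": "VARIABLE",
          "aerosol optical depth": "VARIABLE",
          "MODIS": "INSTRUMENT",
      }

      def label_entities(text):
          labels = []
          for term, tag in VOCAB.items():
              for m in re.finditer(re.escape(term), text, flags=re.IGNORECASE):
                  labels.append((m.start(), m.end(), tag))   # character span and entity tag
          return sorted(labels)

      sentence = "We derive aerosol optical depth and sea surface temperature from MODIS."
      print(label_entities(sentence))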

  17. Ancestors protocol for scalable key management

    Directory of Open Access Journals (Sweden)

    Dieter Gollmann

    2010-06-01

    Full Text Available Group key management is an important functional building block for secure multicast architecture. It has therefore been extensively studied in the literature. The main proposed protocol is Adaptive Clustering for Scalable Group Key Management (ASGK). According to the ASGK protocol, the multicast group is divided into clusters, where each cluster consists of areas of members. Each cluster uses its own Traffic Encryption Key (TEK). These clusters are updated periodically depending on the dynamism of the members during the secure session. A modified protocol has been proposed based on ASGK with some modifications to balance the number of affected members and the encryption/decryption overhead with any number of areas when a member joins or leaves the group. This modified protocol is called the Ancestors protocol. According to the Ancestors protocol, every area receives the dynamism of the members from its parents. The main objective of the modified protocol is to reduce the number of affected members during member leaves and joins, so that the 1-affects-n overhead is reduced. A comparative study has been done between the ASGK protocol and the modified protocol. According to the comparative results, the modified protocol always outperforms the ASGK protocol.

  18. Scalable Combinatorial Tools for Health Disparities Research

    Directory of Open Access Journals (Sweden)

    Michael A. Langston

    2014-10-01

    Full Text Available Despite staggering investments made in unraveling the human genome, current estimates suggest that as much as 90% of the variance in cancer and chronic diseases can be attributed to factors outside an individual’s genetic endowment, particularly to environmental exposures experienced across his or her life course. New analytical approaches are clearly required as investigators turn to complicated systems theory and ecological, place-based and life-history perspectives in order to understand more clearly the relationships between social determinants, environmental exposures and health disparities. While traditional data analysis techniques remain foundational to health disparities research, they are easily overwhelmed by the ever-increasing size and heterogeneity of available data needed to illuminate latent gene x environment interactions. This has prompted the adaptation and application of scalable combinatorial methods, many from genome science research, to the study of population health. Most of these powerful tools are algorithmically sophisticated, highly automated and mathematically abstract. Their utility motivates the main theme of this paper, which is to describe real applications of innovative transdisciplinary models and analyses in an effort to help move the research community closer toward identifying the causal mechanisms and associated environmental contexts underlying health disparities. The public health exposome is used as a contemporary focus for addressing the complex nature of this subject.

  19. Scalable Notch Antenna System for Multiport Applications

    Directory of Open Access Journals (Sweden)

    Abdurrahim Toktas

    2016-01-01

    Full Text Available A novel and compact scalable antenna system is designed for multiport applications. The basic design is built on a square patch with an electrical size of 0.82λ0×0.82λ0 (at 2.4 GHz) on a dielectric substrate. The design consists of four symmetrical and orthogonal triangular notches with circular feeding slots at the corners of the common patch. The 4-port antenna can be simply rearranged to 8-port and 12-port systems. The operating band of the system can be tuned by scaling (S) the size of the system while fixing the thickness of the substrate. The antenna system with S: 1/1 in size of 103.5×103.5 mm2 operates at the frequency band of 2.3–3.0 GHz. By scaling the antenna with S: 1/2.3, a system of 45×45 mm2 is achieved, and thus the operating band is tuned to 4.7–6.1 GHz with the same scattering characteristic. A parametric study is also conducted to investigate the effects of changing the notch dimensions. The performance of the antenna is verified in terms of the antenna characteristics as well as diversity and multiplexing parameters. The antenna system can be tuned by scaling so that it is applicable to multiport WLAN, WIMAX, and LTE devices with port upgradability.

  20. Scalable conditional induction variables (CIV) analysis

    KAUST Repository

    Oancea, Cosmin E.

    2015-02-01

    Subscripts using induction variables that cannot be expressed as a formula in terms of the enclosing-loop indices appear in the low-level implementation of common programming abstractions such as filter or stack operations, and pose significant challenges to automatic parallelization. Because the complexity of such induction variables is often due to their conditional evaluation across the iteration space of loops, we name them Conditional Induction Variables (CIV). This paper presents a flow-sensitive technique that summarizes both such CIV-based and affine subscripts to program level, using the same representation. Our technique requires no modifications of our dependence tests, which are agnostic to the original shape of the subscripts, and is more powerful than previously reported dependence tests that rely on the pairwise disambiguation of read-write references. We have implemented the CIV analysis in our parallelizing compiler and evaluated its impact on five Fortran benchmarks. We have found that there are many important loops using CIV subscripts and that our analysis can lead to their scalable parallelization. This in turn has led to the parallelization of the benchmark programs they appear in.
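
    A tiny example of the pattern in question (written in Python for brevity; the paper's benchmarks are Fortran): the variable top below is a conditional induction variable, advancing only on iterations where the condition holds, so the store subscript cannot be written as an affine formula of the loop index.

      def compact_positive(a):
          out = [0.0] * len(a)
          top = 0                     # CIV: incremented on some iterations only
          for i in range(len(a)):
              if a[i] > 0.0:
                  out[top] = a[i]     # subscript depends on the history of the condition
                  top += 1
          return out[:top]

      print(compact_positive([3.0, -1.0, 2.5, 0.0, 7.0]))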

  1. A Programmable, Scalable-Throughput Interleaver

    Directory of Open Access Journals (Sweden)

    E. J. C. Rijshouwer

    2010-01-01

    Full Text Available The interleaver stages of digital communication standards show a surprisingly large variation in throughput, state sizes, and permutation functions. Furthermore, data rates for 4G standards such as LTE-Advanced will exceed typical baseband clock frequencies of handheld devices. Multistream operation for Software Defined Radio and iterative decoding algorithms will call for ever higher interleave data rates. Our interleave machine is built around 8 single-port SRAM banks and can be programmed to generate up to 8 addresses every clock cycle. The scalable architecture combines SIMD and VLIW concepts with an efficient resolution of bank conflicts. A wide range of cellular, connectivity, and broadcast interleavers have been mapped on this machine, with throughputs up to more than 0.5 Gsymbol/second. Although it was designed for channel interleaving, the application domain of the interleaver extends also to Turbo interleaving. The presented configuration of the architecture is designed as a part of a programmable outer receiver on a prototype board. It offers (near) universal programmability to enable the implementation of new interleavers. The interleaver measures 2.09 mm2 in 65 nm CMOS (including memories) and proves functional on silicon.

  2. A Programmable, Scalable-Throughput Interleaver

    Directory of Open Access Journals (Sweden)

    Rijshouwer EJC

    2010-01-01

    Full Text Available The interleaver stages of digital communication standards show a surprisingly large variation in throughput, state sizes, and permutation functions. Furthermore, data rates for 4G standards such as LTE-Advanced will exceed typical baseband clock frequencies of handheld devices. Multistream operation for Software Defined Radio and iterative decoding algorithms will call for ever higher interleave data rates. Our interleave machine is built around 8 single-port SRAM banks and can be programmed to generate up to 8 addresses every clock cycle. The scalable architecture combines SIMD and VLIW concepts with an efficient resolution of bank conflicts. A wide range of cellular, connectivity, and broadcast interleavers have been mapped on this machine, with throughputs up to more than 0.5 Gsymbol/second. Although it was designed for channel interleaving, the application domain of the interleaver extends also to Turbo interleaving. The presented configuration of the architecture is designed as a part of a programmable outer receiver on a prototype board. It offers (near) universal programmability to enable the implementation of new interleavers. The interleaver measures 2.09 mm2 in 65 nm CMOS (including memories) and proves functional on silicon.

  3. Applications of Scalable Multipoint Video and Audio Using the Public Internet

    Directory of Open Access Journals (Sweden)

    Robert D. Gaglianello

    2000-01-01

    Full Text Available This paper describes a scalable multipoint video system, designed for efficient generation and display of high quality, multiple resolution, multiple compressed video streams over IP-based networks. We present our experiences using the system over the public Internet for several “real-world” applications, including distance learning, virtual theater, and virtual collaboration. The trials were a combined effort of Bell Laboratories and the Gertrude Stein Repertory Theatre (TGSRT. We also present current advances in the conferencing system since the trials, new areas for application and future applications.

  4. A Yellow Emitting InGaN/GaN Nanowires-based Light Emitting Diode Grown on Scalable Quartz Substrate

    KAUST Repository

    Prabaswara, Aditya

    2017-05-08

    The first InGaN/GaN nanowires-based yellow (λ = 590 nm) light-emitting diodes on scalable quartz substrates are demonstrated, by utilizing a thin Ti/TiN interlayer to achieve simultaneous substrate conductivity and transparency.

  5. A Yellow Emitting InGaN/GaN Nanowires-based Light Emitting Diode Grown on Scalable Quartz Substrate

    KAUST Repository

    Prabaswara, Aditya; Ng, Tien Khee; Zhao, Chao; Janjua, Bilal; Alyamani, Ahmed; El-desouki, Munir; Ooi, Boon S.

    2017-01-01

    The first InGaN/GaN nanowires-based yellow (λ = 590 nm) light-emitting diodes on scalable quartz substrates are demonstrated, by utilizing a thin Ti/TiN interlayer to achieve simultaneous substrate conductivity and transparency.

  6. Scalable Real-Time Negotiation Toolkit

    National Research Council Canada - National Science Library

    Lesser, Victor

    2004-01-01

    ... to implement an adaptive distributed sensor network. These activities involved the development of a distributed soft real-time heuristic resource allocation protocol, the development of a domain-independent soft real-time agent architecture...

  7. Scalable inference for stochastic block models

    KAUST Repository

    Peng, Chengbin; Zhang, Zhihua; Wong, Ka-Chun; Zhang, Xiangliang; Keyes, David E.

    2017-01-01

    Community detection in graphs is widely used in social and biological networks, and the stochastic block model is a powerful probabilistic tool for describing graphs with community structures. However, in the era of "big data," traditional inference

  8. ParaText : scalable solutions for processing and searching very large document collections : final LDRD report.

    Energy Technology Data Exchange (ETDEWEB)

    Crossno, Patricia Joyce; Dunlavy, Daniel M.; Stanton, Eric T.; Shead, Timothy M.

    2010-09-01

    This report is a summary of the accomplishments of the 'Scalable Solutions for Processing and Searching Very Large Document Collections' LDRD, which ran from FY08 through FY10. Our goal was to investigate scalable text analysis; specifically, methods for information retrieval and visualization that could scale to extremely large document collections. Towards that end, we designed, implemented, and demonstrated a scalable framework for text analysis - ParaText - as a major project deliverable. Further, we demonstrated the benefits of using visual analysis in text analysis algorithm development, improved performance of heterogeneous ensemble models in data classification problems, and the advantages of information theoretic methods in user analysis and interpretation in cross language information retrieval. The project involved 5 members of the technical staff and 3 summer interns (including one who worked two summers). It resulted in a total of 14 publications, 3 new software libraries (2 open source and 1 internal to Sandia), several new end-user software applications, and over 20 presentations. Several follow-on projects have already begun or will start in FY11, with additional projects currently in proposal.

  9. GPU-based Scalable Volumetric Reconstruction for Multi-view Stereo

    Energy Technology Data Exchange (ETDEWEB)

    Kim, H; Duchaineau, M; Max, N

    2011-09-21

    We present a new scalable volumetric reconstruction algorithm for multi-view stereo using a graphics processing unit (GPU). It is an effectively parallelized GPU algorithm that simultaneously uses a large number of GPU threads, each of which performs voxel carving, in order to integrate depth maps with images from multiple views. Each depth map, triangulated from pair-wise semi-dense correspondences, represents a view-dependent surface of the scene. This algorithm also provides scalability for large-scale scene reconstruction in a high resolution voxel grid by utilizing streaming and parallel computation. The output is a photo-realistic 3D scene model in a volumetric or point-based representation. We demonstrate the effectiveness and the speed of our algorithm with a synthetic scene and real urban/outdoor scenes. Our method can also be integrated with existing multi-view stereo algorithms such as PMVS2 to fill holes or gaps in textureless regions.

  10. A Scalable Parallel PWTD-Accelerated SIE Solver for Analyzing Transient Scattering from Electrically Large Objects

    KAUST Repository

    Liu, Yang

    2015-12-17

    A scalable parallel plane-wave time-domain (PWTD) algorithm for efficient and accurate analysis of transient scattering from electrically large objects is presented. The algorithm produces scalable communication patterns on very large numbers of processors by leveraging two mechanisms: (i) a hierarchical parallelization strategy to evenly distribute the computation and memory loads at all levels of the PWTD tree among processors, and (ii) a novel asynchronous communication scheme to reduce the cost and memory requirement of the communications between the processors. The efficiency and accuracy of the algorithm are demonstrated through its applications to the analysis of transient scattering from a perfect electrically conducting (PEC) sphere with a diameter of 70 wavelengths and a PEC square plate with a dimension of 160 wavelengths. Furthermore, the proposed algorithm is used to analyze transient fields scattered from realistic airplane and helicopter models under high frequency excitation.

  11. The Development of a Field Services Network for a Satellite-Based Educational Telecommunications Experiment. Satellite Technology Demonstration, Technical Report No. 0333.

    Science.gov (United States)

    Anderson, Frank; And Others

    The Satellite Technology Demonstration (STD) of the Federation of Rocky Mountain States (FRMS) employed a technical delivery system to merge effectively hardware and software, products and services. It also needed a nontechnical component to insure product and service acceptance. Accordingly, the STD's Utilization Component was responsible for…

  12. Scalable Partitioning Algorithms for FPGAs With Heterogeneous Resources

    National Research Council Canada - National Science Library

    Selvakkumaran, Navaratnasothie; Ranjan, Abhishek; Raje, Salil; Karypis, George

    2004-01-01

    As FPGA densities increase, partitioning-based FPGA placement approaches are becoming increasingly important as they can be used to provide high-quality and computationally scalable placement solutions...

  13. A scalable variational inequality approach for flow through porous media models with pressure-dependent viscosity

    Science.gov (United States)

    Mapakshi, N. K.; Chang, J.; Nakshatrala, K. B.

    2018-04-01

    Mathematical models for flow through porous media typically enjoy the so-called maximum principles, which place bounds on the pressure field. It is highly desirable to preserve these bounds on the pressure field in predictive numerical simulations, that is, one needs to satisfy discrete maximum principles (DMP). Unfortunately, many of the existing formulations for flow through porous media models do not satisfy DMP. This paper presents a robust, scalable numerical formulation based on variational inequalities (VI), to model non-linear flows through heterogeneous, anisotropic porous media without violating DMP. VI is an optimization technique that places bounds on the numerical solutions of partial differential equations. To crystallize the ideas, a modification to Darcy equations by taking into account pressure-dependent viscosity will be discretized using the lowest-order Raviart-Thomas (RT0) and Variational Multi-scale (VMS) finite element formulations. It will be shown that these formulations violate DMP, and, in fact, these violations increase with an increase in anisotropy. It will be shown that the proposed VI-based formulation provides a viable route to enforce DMP. Moreover, it will be shown that the proposed formulation is scalable, and can work with any numerical discretization and weak form. A series of numerical benchmark problems are solved to demonstrate the effects of heterogeneity, anisotropy and non-linearity on DMP violations under the two chosen formulations (RT0 and VMS), and that of non-linearity on solver convergence for the proposed VI-based formulation. Parallel scalability on modern computational platforms will be illustrated through strong-scaling studies, which will prove the efficiency of the proposed formulation in a parallel setting. Algorithmic scalability as the problem size is scaled up will be demonstrated through novel static-scaling studies. The performed static-scaling studies can serve as a guide for users to be able to select
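
    The connection between the variational inequality and bound preservation can be illustrated on a toy problem: for a symmetric positive-definite discrete operator K, the VI with box bounds is equivalent to the bound-constrained quadratic program min 0.5 p^T K p - f^T p over lb <= p <= ub. The sketch below uses a 1D diffusion stencil and SciPy's L-BFGS-B solver, not the RT0/VMS discretizations or the solvers of the paper.

      import numpy as np
      from scipy.optimize import minimize

      n = 50
      K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)    # 1D diffusion (Laplacian) stencil
      f = np.ones(n)                                          # unit source term

      obj = lambda p: 0.5 * p @ K @ p - f @ p
      grad = lambda p: K @ p - f

      res = minimize(obj, np.zeros(n), jac=grad, method="L-BFGS-B",
                     bounds=[(0.0, 1.0)] * n)                 # discrete maximum-principle bounds
      print(res.x.min(), res.x.max())                         # solution stays inside [0, 1]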

  14. Development, Verification and Validation of Parallel, Scalable Volume of Fluid CFD Program for Propulsion Applications

    Science.gov (United States)

    West, Jeff; Yang, H. Q.

    2014-01-01

    There are many instances involving liquid/gas interfaces and their dynamics in the design of liquid engine powered rockets such as the Space Launch System (SLS). Some examples of these applications are: Propellant tank draining and slosh, subcritical condition injector analysis for gas generators, preburners and thrust chambers, water deluge mitigation for launch induced environments and even solid rocket motor liquid slag dynamics. Commercially available CFD programs simulating gas/liquid interfaces using the Volume of Fluid approach are currently limited in their parallel scalability. In 2010 for instance, an internal NASA/MSFC review of three commercial tools revealed that parallel scalability was seriously compromised at 8 cpus and no additional speedup was possible after 32 cpus. Other non-interface CFD applications at the time were demonstrating useful parallel scalability up to 4,096 processors or more. Based on this review, NASA/MSFC initiated an effort to implement a Volume of Fluid capability within the unstructured mesh, pressure-based algorithm CFD program, Loci-STREAM. After verification was achieved by comparing results to the commercial CFD program CFD-Ace+, and validation by direct comparison with data, Loci-STREAM-VoF is now the production CFD tool for propellant slosh force and slosh damping rate simulations at NASA/MSFC. On these applications, good parallel scalability has been demonstrated for problem sizes of tens of millions of cells and thousands of cpu cores. Ongoing efforts are focused on the application of Loci-STREAM-VoF to predict the transient flow patterns of water on the SLS Mobile Launch Platform in order to support the phasing of water for launch environment mitigation so that detrimental effects on the vehicle are not realized.

  15. GPU-FS-kNN: a software tool for fast and scalable kNN computation using GPUs.

    Directory of Open Access Journals (Sweden)

    Ahmed Shamsul Arefin

    Full Text Available BACKGROUND: The analysis of biological networks has become a major challenge due to the recent development of high-throughput techniques that are rapidly producing very large data sets. The exploding volumes of biological data call for extreme computational power and special computing facilities (i.e., supercomputers). An inexpensive solution, such as General Purpose computation based on Graphics Processing Units (GPGPU), can be adapted to tackle this challenge, but the limitation of the device internal memory can pose a new problem of scalability. An efficient data and computational parallelism with partitioning is required to provide a fast and scalable solution to this problem. RESULTS: We propose an efficient parallel formulation of the k-Nearest Neighbour (kNN) search problem, which is a popular method for classifying objects in several fields of research, such as pattern recognition, machine learning and bioinformatics. Although very simple and straightforward, the kNN search degrades dramatically in performance for large data sets, since the task is computationally intensive. The proposed approach is not only fast but also scalable to large-scale instances. Based on our approach, we implemented a software tool, GPU-FS-kNN (GPU-based Fast and Scalable k-Nearest Neighbour), for CUDA-enabled GPUs. The basic approach is simple and adaptable to other available GPU architectures. We observed speed-ups of 50-60 times compared with a CPU implementation on a well-known breast microarray study and its associated data sets. CONCLUSION: Our GPU-based Fast and Scalable k-Nearest Neighbour search technique (GPU-FS-kNN) provides a significant performance improvement for nearest neighbour computation in large-scale networks. Source code and the software tool are available under GNU Public License (GPL) at https://sourceforge.net/p/gpufsknn/.
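
    The core chunking idea can be shown on the CPU with NumPy (the actual tool uses CUDA): the full distance matrix is never materialized; query points are processed block by block, so memory stays bounded while the brute-force search scales to large data sets.

      import numpy as np

      def knn_chunked(data, queries, k=5, chunk=1024):
          neigh = np.empty((len(queries), k), dtype=np.int64)
          sq_data = (data ** 2).sum(axis=1)
          for start in range(0, len(queries), chunk):
              q = queries[start:start + chunk]
              # squared Euclidean distances for this block only
              d2 = (q ** 2).sum(axis=1)[:, None] - 2 * q @ data.T + sq_data[None, :]
              neigh[start:start + len(q)] = np.argpartition(d2, k, axis=1)[:, :k]
          return neigh

      rng = np.random.default_rng(5)
      X = rng.standard_normal((20000, 16))
      print(knn_chunked(X, X[:10], k=5))    # each query's k nearest points (the query itself included)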

  16. SOL: A Library for Scalable Online Learning Algorithms

    OpenAIRE

    Wu, Yue; Hoi, Steven C. H.; Liu, Chenghao; Lu, Jing; Sahoo, Doyen; Yu, Nenghai

    2016-01-01

    SOL is an open-source library for scalable online learning algorithms, and is particularly suitable for learning with high-dimensional data. The library provides a family of regular and sparse online learning algorithms for large-scale binary and multi-class classification tasks with high efficiency, scalability, portability, and extensibility. SOL was implemented in C++, and provided with a collection of easy-to-use command-line tools, python wrappers and library calls for users and develope...

  17. Architectures and Applications for Scalable Quantum Information Systems

    Science.gov (United States)

    2007-01-01

    [Report front matter and reference fragments; no abstract was recovered.] AFRL-IF-RS-TR-2007-12, Final Technical Report, January 2007: Architectures and Applications for Scalable Quantum Information Systems. Grant number FA8750-01-2-0521. Cited references include Gershenfeld and I. Chuang, "Quantum computing with molecules," Scientific American, June 1998, and A. Globus, D. Bailey, J. Han, R. Jaffe, C. Levit, R. ...

  18. On the scalability of LISP and advanced overlaid services

    OpenAIRE

    Coras, Florin

    2015-01-01

    In just four decades the Internet has gone from a lab experiment to a worldwide, business critical infrastructure that caters to the communication needs of almost a half of the Earth's population. With these figures on its side, arguing against the Internet's scalability would seem rather unwise. However, the Internet's organic growth is far from finished and, as billions of new devices are expected to be joined in the not so distant future, scalability, or lack thereof, is commonly believed ...

  19. Online Hashing for Scalable Remote Sensing Image Retrieval

    Directory of Open Access Journals (Sweden)

    Peng Li

    2018-05-01

    Full Text Available Recently, hashing-based large-scale remote sensing (RS) image retrieval has attracted much attention. Many new hashing algorithms have been developed and successfully applied to fast RS image retrieval tasks. However, there exists an important problem rarely addressed in the research literature of RS image hashing. The RS images are practically produced in a streaming manner in many real-world applications, which means the data distribution keeps changing over time. Most existing RS image hashing methods are batch-based models whose hash functions are learned once and for all and kept fixed all the time. Therefore, the pre-trained hash functions might not fit the ever-growing new RS images. Moreover, the batch-based models have to load all the training images into memory for model learning, which consumes many computing and memory resources. To address the above deficiencies, we propose a new online hashing method, which learns and adapts its hashing functions with respect to the newly incoming RS images in terms of a novel online partial random learning scheme. Our hash model is updated in a sequential mode such that the representative power of the learned binary codes for RS images is improved accordingly. Moreover, benefiting from the online learning strategy, our proposed hashing approach is quite suitable for scalable real-world remote sensing image retrieval. Extensive experiments on two large-scale RS image databases under an online setting demonstrated the efficacy and effectiveness of the proposed method.

  20. Scalable Production of Graphene-Based Wearable E-Textiles.

    Science.gov (United States)

    Karim, Nazmul; Afroj, Shaila; Tan, Sirui; He, Pei; Fernando, Anura; Carr, Chris; Novoselov, Kostya S

    2017-12-26

    Graphene-based wearable e-textiles are considered to be promising due to their advantages over traditional metal-based technology. However, the manufacturing process is complex and currently not suitable for industrial scale application. Here we report a simple, scalable, and cost-effective method of producing graphene-based wearable e-textiles through the chemical reduction of graphene oxide (GO) to make stable reduced graphene oxide (rGO) dispersion which can then be applied to the textile fabric using a simple pad-dry technique. This application method allows the potential manufacture of conductive graphene e-textiles at commercial production rates of ∼150 m/min. The graphene e-textile materials produced are durable and washable with acceptable softness/hand feel. The rGO coating enhanced the tensile strength of cotton fabric and also the flexibility due to the increase in strain% at maximum load. We demonstrate the potential application of these graphene e-textiles for wearable electronics with activity monitoring sensor. This could potentially lead to a multifunctional single graphene e-textile garment that can act both as sensors and flexible heating elements powered by the energy stored in graphene textile supercapacitors.

  1. Scalable Photogrammetric Motion Capture System "mosca": Development and Application

    Science.gov (United States)

    Knyaz, V. A.

    2015-05-01

    A wide variety of applications (from industrial to entertainment) needs reliable and accurate 3D information about the motion of an object and its parts. Very often the process of movement is rather fast, as in the cases of vehicle movement, sport biomechanics, and animation of cartoon characters. Motion capture systems based on different physical principles are used for these purposes. Vision-based systems have great potential for obtaining high accuracy and a high degree of automation due to progress in image processing and analysis. A scalable, inexpensive motion capture system is developed as a convenient and flexible tool for solving various tasks requiring 3D motion analysis. It is based on photogrammetric techniques of 3D measurement and provides high speed image acquisition, high accuracy of 3D measurements and highly automated processing of captured data. Depending on the application, the system can be easily modified for different working areas from 100 mm to 10 m. The developed motion capture system uses from 2 to 4 technical vision cameras to acquire video sequences of object motion. All cameras work in synchronization mode at frame rates up to 100 frames per second under the control of a personal computer, providing the possibility of accurate calculation of the 3D coordinates of points of interest. The system was used in a set of different application fields and demonstrated high accuracy and a high level of automation.
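
    The photogrammetric core of such a system is the triangulation of interest points from two or more synchronized, calibrated cameras. The sketch below shows standard linear (DLT) triangulation for two views with synthetic camera matrices; it is a generic illustration, not the "mosca" system's implementation.

      import numpy as np

      def triangulate(P1, P2, x1, x2):
          """Recover a 3D point from its pixel coordinates in two calibrated views (DLT)."""
          A = np.vstack([
              x1[0] * P1[2] - P1[0],
              x1[1] * P1[2] - P1[1],
              x2[0] * P2[2] - P2[0],
              x2[1] * P2[2] - P2[1],
          ])
          _, _, vt = np.linalg.svd(A)
          X = vt[-1]
          return X[:3] / X[3]

      K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])          # synthetic intrinsics
      P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                    # reference camera
      P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])    # 0.5 m baseline

      X_true = np.array([0.2, -0.1, 5.0, 1.0])                             # homogeneous 3D point
      x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]                            # projected pixel coordinates
      x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
      print(triangulate(P1, P2, x1, x2))                                   # ~ [0.2, -0.1, 5.0]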

  2. Developing a scalable modeling architecture for studying survivability technologies

    Science.gov (United States)

    Mohammad, Syed; Bounker, Paul; Mason, James; Brister, Jason; Shady, Dan; Tucker, David

    2006-05-01

    To facilitate interoperability of models in a scalable environment, and provide a relevant virtual environment in which Survivability technologies can be evaluated, the US Army Research Development and Engineering Command (RDECOM) Modeling Architecture for Technology Research and Experimentation (MATREX) Science and Technology Objective (STO) program has initiated the Survivability Thread which will seek to address some of the many technical and programmatic challenges associated with the effort. In coordination with different Thread customers, such as the Survivability branches of various Army labs, a collaborative group has been formed to define the requirements for the simulation environment that would in turn provide them a value-added tool for assessing models and gauge system-level performance relevant to Future Combat Systems (FCS) and the Survivability requirements of other burgeoning programs. An initial set of customer requirements has been generated in coordination with the RDECOM Survivability IPT lead, through the Survivability Technology Area at RDECOM Tank-automotive Research Development and Engineering Center (TARDEC, Warren, MI). The results of this project are aimed at a culminating experiment and demonstration scheduled for September, 2006, which will include a multitude of components from within RDECOM and provide the framework for future experiments to support Survivability research. This paper details the components with which the MATREX Survivability Thread was created and executed, and provides insight into the capabilities currently demanded by the Survivability faculty within RDECOM.

  3. Dynamic superhydrophobic behavior in scalable random textured polymeric surfaces

    Science.gov (United States)

    Moreira, David; Park, Sung-hoon; Lee, Sangeui; Verma, Neil; Bandaru, Prabhakar R.

    2016-03-01

    Superhydrophobic (SH) surfaces, created from hydrophobic materials with micro- or nano-roughness, trap air pockets in the interstices of the roughness, leading, under fluid flow conditions, to shear-free regions with finite interfacial fluid velocity and reduced resistance to flow. Significant attention has been given to SH conditions on ordered, periodic surfaces. In practical terms, however, random surfaces are more applicable due to their relative ease of fabrication. We investigate SH behavior, through velocity and pressure drop measurements, on a novel durable polymeric rough surface with varying micro-scale roughness created through a scalable roll-coating process. We introduce a new method to construct the velocity profile over SH surfaces with significant roughness in microchannels. Slip length was measured as a function of differing roughness and interstitial air conditions, with roughness and air fraction parameters obtained through direct visualization. The slip length matched scaling laws with good agreement. Roughness at high air fractions led to a reduced pressure drop and higher velocities, demonstrating the effectiveness of the considered surface in terms of reduced resistance to flow. We conclude that the observed air fraction under flow conditions is the primary factor determining the response in fluid flow. Such behavior correlated well with the hydrophobic or superhydrophobic response, indicating significant potential for practical use in enhancing fluid flow efficiency.

  4. Scalable electro-photonic integration concept based on polymer waveguides

    Science.gov (United States)

    Bosman, E.; Van Steenberge, G.; Boersma, A.; Wiegersma, S.; Harmsma, P.; Karppinen, M.; Korhonen, T.; Offrein, B. J.; Dangel, R.; Daly, A.; Ortsiefer, M.; Justice, J.; Corbett, B.; Dorrestein, S.; Duis, J.

    2016-03-01

    A novel method for fabricating a single-mode optical interconnection platform is presented. The method comprises the miniaturized assembly of optoelectronic single dies, the scalable fabrication of polymer single-mode waveguides, and the coupling to glass fiber arrays providing the I/Os. The low-cost approach for the polymer waveguide fabrication is based on the nano-imprinting of a spin-coated waveguide core layer. The assembly of VCSELs and photodiodes is performed before the waveguide layers are applied. By embedding these components in deep reactive ion etched pockets in the silicon substrate, the planarity of the substrate for subsequent layer processing is guaranteed and the thermal path from chip to substrate is minimized. Optical coupling of the embedded devices to the nano-imprinted waveguides is performed by laser-ablating 45-degree trenches which act as optical mirrors for 90-degree deviation of the light from VCSEL to waveguide. Laser ablation is also used to remove parts of the polymer stack in order to mount a custom-fabricated connector containing glass fiber arrays. A demonstration device was built to show the proof of principle of the novel fabrication, packaging and optical coupling techniques described above, combined with a set of sub-demonstrators showing the functionality of the different techniques separately. The paper represents a significant part of the electro-photonic integration accomplishments in the European 7th Framework project "Firefly" and discusses not only the development of the different assembly processes described above, but also the efforts on the complete integration of all process approaches into the single device demonstrator.

  5. Microscopic Characterization of Scalable Coherent Rydberg Superatoms

    Directory of Open Access Journals (Sweden)

    Johannes Zeiher

    2015-08-01

    Strong interactions can amplify quantum effects such that they become important on macroscopic scales. Controlling these coherently on a single-particle level is essential for the tailored preparation of strongly correlated quantum systems and opens up new prospects for quantum technologies. Rydberg atoms offer such strong interactions, which lead to extreme nonlinearities in laser-coupled atomic ensembles. As a result, multiple excitation of a micrometer-sized cloud can be blocked while the light-matter coupling becomes collectively enhanced. The resulting two-level system, often called a “superatom,” is a valuable resource for quantum information, providing a collective qubit. Here, we report on the preparation of superatoms scalable over 2 orders of magnitude, utilizing the large interaction strength provided by Rydberg atoms combined with precise control of an ensemble of ultracold atoms in an optical lattice. The latter is achieved with sub-shot-noise precision by local manipulation of a two-dimensional Mott insulator. We microscopically confirm the superatom picture by in situ detection of the Rydberg excitations and observe the characteristic square-root scaling of the optical coupling with the number of atoms. Enabled by the full control over the atomic sample, including the motional degrees of freedom, we infer the overlap of the produced many-body state with a W state from the observed Rabi oscillations and deduce the presence of entanglement. Finally, we investigate the breakdown of the superatom picture when two Rydberg excitations are present in the system, which leads to dephasing and a loss of coherence.

  6. Myria: Scalable Analytics as a Service

    Science.gov (United States)

    Howe, B.; Halperin, D.; Whitaker, A.

    2014-12-01

    At the UW eScience Institute, we're working to empower non-experts, especially in the sciences, to write and use data-parallel algorithms. To this end, we are building Myria, a web-based platform for scalable analytics and data-parallel programming. Myria's internal model of computation is the relational algebra extended with iteration, such that every program is inherently data-parallel, just as every query in a database is inherently data-parallel. But unlike databases, iteration is a first-class concept, allowing us to express machine learning tasks, graph traversal tasks, and more. Programs can be expressed in a number of languages and can be executed on a number of execution environments, but we emphasize a particular language called MyriaL that supports both imperative and declarative styles and a particular execution engine called MyriaX that uses an in-memory column-oriented representation and asynchronous iteration. We deliver Myria over the web as a service, providing an editor, performance analysis tools, and catalog browsing features in a single environment. We find that this web-based "delivery vector" is critical in reaching non-experts: they are insulated from the irrelevant technical work associated with installation, configuration, and resource management. The MyriaX backend, one of several execution runtimes we support, is a main-memory, column-oriented, RDBMS-on-the-worker system that supports cyclic data flows as a first-class citizen and has been shown to outperform competitive systems on 100-machine cluster sizes. I will describe the Myria system, give a demo, and present some new results in large-scale oceanographic microbiology.

  7. GASPRNG: GPU accelerated scalable parallel random number generator library

    Science.gov (United States)

    Gao, Shuang; Peterson, Gregory D.

    2013-04-01

    Graphics processors represent a promising technology for accelerating computational science applications. Many computational science applications require fast and scalable random number generation with good statistical properties, so they use the Scalable Parallel Random Number Generators library (SPRNG). We present the GPU Accelerated SPRNG library (GASPRNG) to accelerate SPRNG in GPU-based high performance computing systems. GASPRNG includes code for a host CPU and CUDA code for execution on NVIDIA graphics processing units (GPUs) along with a programming interface to support various usage models for pseudorandom numbers and computational science applications executing on the CPU, GPU, or both. This paper describes the implementation approach used to produce high performance and also describes how to use the programming interface. The programming interface allows a user to be able to use GASPRNG the same way as SPRNG on traditional serial or parallel computers as well as to develop tightly coupled programs executing primarily on the GPU. We also describe how to install GASPRNG and use it. To help illustrate linking with GASPRNG, various demonstration codes are included for the different usage models. GASPRNG on a single GPU shows up to 280x speedup over SPRNG on a single CPU core and is able to scale for larger systems in the same manner as SPRNG. Because GASPRNG generates identical streams of pseudorandom numbers as SPRNG, users can be confident about the quality of GASPRNG for scalable computational science applications. Catalogue identifier: AEOI_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEOI_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: UTK license. No. of lines in distributed program, including test data, etc.: 167900 No. of bytes in distributed program, including test data, etc.: 1422058 Distribution format: tar.gz Programming language: C and CUDA. Computer: Any PC or
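
    The GASPRNG/SPRNG programming interface itself is not reproduced here; as a rough illustration of the underlying idea, independent, reproducible pseudorandom streams for parallel workers, the following sketch uses NumPy's SeedSequence spawning with the Philox counter-based generator, which is an analogous (but different) mechanism.

        import numpy as np

        # One root seed spawned into independent child streams: each worker (a CPU
        # rank, or a GPU block in a GASPRNG-like setting) gets its own reproducible
        # generator, so results do not depend on how the work is split.
        root = np.random.SeedSequence(20130401)
        rngs = [np.random.Generator(np.random.Philox(child)) for child in root.spawn(4)]

        # Split a Monte Carlo estimate of pi across the four streams.
        samples_per_worker = 250_000
        inside = sum(
            int(np.count_nonzero(rng.random(samples_per_worker) ** 2
                                 + rng.random(samples_per_worker) ** 2 < 1.0))
            for rng in rngs
        )
        print("pi estimate:", 4 * inside / (samples_per_worker * len(rngs)))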

  8. Optimal Switch Configuration in Software-Defined Networks

    Directory of Open Access Journals (Sweden)

    Béla GENGE

    2016-06-01

    The emerging Software-Defined Networks (SDN) paradigm facilitates innovative applications and enables the seamless provisioning of resilient communications. Nevertheless, the installation of communication flows in SDN requires careful planning in order to avoid configuration errors and to fulfill communication requirements. In this paper we propose an approach that installs static flows in SDN switches automatically and optimally. The approach aims to select high-capacity links and shortest-path routing, and enforces communication link and switch capacity limitations. Experimental results demonstrate the effectiveness and scalability of the developed methodology.
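
    As a rough sketch of the kind of flow installation described, preferring shortest paths while enforcing link capacity limits, the following Python/networkx fragment greedily routes one flow over a toy topology; it is an illustration under assumed data, not the authors' optimization formulation.

        import networkx as nx

        # Toy SDN topology: each link carries a residual capacity in Mb/s.
        g = nx.Graph()
        g.add_edge("s1", "s2", capacity=1000)
        g.add_edge("s2", "s4", capacity=100)
        g.add_edge("s1", "s3", capacity=1000)
        g.add_edge("s3", "s4", capacity=1000)

        def install_flow(graph, src, dst, demand):
            # Keep only links with enough residual capacity, take a shortest path
            # over them, then reserve the demand along that path (a greedy
            # stand-in for the paper's optimal placement).
            feasible = nx.Graph([(u, v, d) for u, v, d in graph.edges(data=True)
                                 if d["capacity"] >= demand])
            path = nx.shortest_path(feasible, src, dst)
            for u, v in zip(path, path[1:]):
                graph[u][v]["capacity"] -= demand
            return path

        # Routed via s3, avoiding the 100 Mb/s bottleneck link s2-s4.
        print(install_flow(g, "s1", "s4", demand=500))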

  9. A Lossless Network for Data Acquisition

    Science.gov (United States)

    Jereczek, Grzegorz; Lehmann Miotto, Giovanna; Malone, David; Walukiewicz, Miroslaw

    2017-06-01

    The bursty many-to-one communication pattern, typical for data acquisition systems, is particularly demanding for commodity TCP/IP and Ethernet technologies. We expand the study of lossless switching in software running on commercial off-the-shelf servers, using the ATLAS experiment as a case study. In this paper, we extend the popular software switch, Open vSwitch, with a dedicated, throughput-oriented buffering mechanism for data acquisition. We compare the performance under heavy congestion on typical Ethernet switches to a commodity server acting as a switch. Our results indicate that software switches with large buffers perform significantly better. Next, we evaluate the scalability of the system when building a larger topology of interconnected software switches, exploiting the integration with software-defined networking technologies. We build an IP-only leaf-spine network consisting of eight software switches running on distinct physical servers as a demonstrator.

  10. A Lossless Network for Data Acquisition

    CERN Document Server

    AUTHOR|(SzGeCERN)698154; The ATLAS collaboration; Lehmann Miotto, Giovanna

    2017-01-01

    The bursty many-to-one communication pattern, typical for data acquisition systems, is particularly demanding for commodity TCP/IP and Ethernet technologies. We expand the study of lossless switching in software running on commercial-off-the-shelf servers, using the ATLAS experiment as a case study. In this paper we extend the popular software switch, Open vSwitch, with a dedicated, throughput-oriented buffering mechanism for data acquisition. We compare the performance under heavy congestion on typical Ethernet switches to a commodity server acting as a switch. Our results indicate that software switches with large buffers perform significantly better. Next, we evaluate the scalability of the system when building a larger topology of interconnected software switches, exploiting the integration with software-defined networking technologies. We build an IP-only leaf-spine network consisting of eight software switches running on separate physical servers as a demonstrator.

  11. A Cluster- Based Secure Active Network Environment

    Institute of Scientific and Technical Information of China (English)

    CHEN Xiao-lin; ZHOU Jing-yang; DAI Han; LU Sang-lu; CHEN Gui-hai

    2005-01-01

    We introduce a cluster-based secure active network environment (CSANE) which separates the processing of IP packets from that of active packets in active routers. In this environment, active code authorized or trusted by privileged users is executed in the secure execution environment (EE) of the active router, while other code is executed in the secure EE of the nodes in the distributed shared memory (DSM) cluster. With the support of a multi-process Java virtual machine and KeyNote, untrusted active packets are controlled so that they consume resources securely. The DSM consistency management ensures that active packets can be processed in parallel in the DSM cluster as if they were processed one by one in ANTS (Active Network Transport System). We demonstrate that CSANE has good security and scalability, while imposing few changes on traditional routers.

  12. HiDi: an efficient reverse engineering schema for large-scale dynamic regulatory network reconstruction using adaptive differentiation.

    Science.gov (United States)

    Deng, Yue; Zenil, Hector; Tegnér, Jesper; Kiani, Narsis A

    2017-12-15

    The use of differential equations (ODE) is one of the most promising approaches to network inference. The success of ODE-based approaches has, however, been limited, due to the difficulty in estimating parameters and by their lack of scalability. Here, we introduce a novel method and pipeline to reverse engineer gene regulatory networks from gene expression of time series and perturbation data based upon an improvement on the calculation scheme of the derivatives and a pre-filtration step to reduce the number of possible links. The method introduces a linear differential equation model with adaptive numerical differentiation that is scalable to extremely large regulatory networks. We demonstrate the ability of this method to outperform current state-of-the-art methods applied to experimental and synthetic data using test data from the DREAM4 and DREAM5 challenges. Our method displays greater accuracy and scalability. We benchmark the performance of the pipeline with respect to dataset size and levels of noise. We show that the computation time is linear over various network sizes. The Matlab code of the HiDi implementation is available at: www.complexitycalculator.com/HiDiScript.zip. hzenilc@gmail.com or narsis.kiani@ki.se. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
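
    The general idea behind such ODE-based inference (not the HiDi pipeline itself) can be sketched in a few lines: approximate the derivatives of the expression time series numerically and fit a linear model dx/dt ≈ Ax by regularized least squares, with the nonzero entries of A suggesting candidate regulatory links. All parameters below are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)

        # Ground-truth linear "regulatory" system dx/dt = A_true @ x, simulated
        # with forward Euler to stand in for expression time-series data.
        A_true = np.array([[-1.0, 0.0, 0.8],
                           [ 1.2, -1.0, 0.0],
                           [ 0.0, 0.9, -1.0]])
        dt, steps = 0.05, 200
        X = np.empty((steps, 3))
        X[0] = rng.random(3)
        for k in range(steps - 1):
            X[k + 1] = X[k] + dt * (A_true @ X[k])

        # Inference: central-difference derivatives plus ridge regression.
        dXdt = (X[2:] - X[:-2]) / (2 * dt)
        Xmid = X[1:-1]
        lam = 1e-3
        A_est = np.linalg.solve(Xmid.T @ Xmid + lam * np.eye(3), Xmid.T @ dXdt).T
        print(np.round(A_est, 2))  # approximately recovers A_true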

  13. HiDi: an efficient reverse engineering schema for large-scale dynamic regulatory network reconstruction using adaptive differentiation

    KAUST Repository

    Deng, Yue

    2017-08-05

    Motivation: The use of differential equations (ODE) is one of the most promising approaches to network inference. The success of ODE-based approaches has, however, been limited, due to the difficulty in estimating parameters and by their lack of scalability. Here, we introduce a novel method and pipeline to reverse engineer gene regulatory networks from gene expression of time series and perturbation data based upon an improvement on the calculation scheme of the derivatives and a pre-filtration step to reduce the number of possible links. The method introduces a linear differential equation model with adaptive numerical differentiation that is scalable to extremely large regulatory networks. Results: We demonstrate the ability of this method to outperform current state-of-the-art methods applied to experimental and synthetic data using test data from the DREAM4 and DREAM5 challenges. Our method displays greater accuracy and scalability. We benchmark the performance of the pipeline with respect to dataset size and levels of noise. We show that the computation time is linear over various network sizes.

  14. Scalable Coating and Properties of Transparent, Flexible, Silver Nanowire Electrodes

    KAUST Repository

    Hu, Liangbing

    2010-05-25

    We report a comprehensive study of transparent and conductive silver nanowire (Ag NW) electrodes, including a scalable fabrication process, morphologies, and optical, mechanical adhesion, and flexibility properties, and various routes to improve the performance. We utilized a synthesis specifically designed for long and thin wires for improved performance in terms of sheet resistance and optical transmittance. Twenty Ω/sq and ∼ 80% specular transmittance, and 8 ohms/sq and 80% diffusive transmittance in the visible range are achieved, which fall in the same range as the best indium tin oxide (ITO) samples on plastic substrates for flexible electronics and solar cells. The Ag NW electrodes show optical transparencies superior to ITO for near-infrared wavelengths (2-fold higher transmission). Owing to light scattering effects, the Ag NW network has the largest difference between diffusive transmittance and specular transmittance when compared with ITO and carbon nanotube electrodes, a property which could greatly enhance solar cell performance. A mechanical study shows that Ag NW electrodes on flexible substrates show excellent robustness when subjected to bending. We also study the electrical conductance of Ag nanowires and their junctions and report a facile electrochemical method for a Au coating to reduce the wire-to-wire junction resistance for better overall film conductance. Simple mechanical pressing was also found to increase the NW film conductance due to the reduction of junction resistance. The overall properties of transparent Ag NW electrodes meet the requirements of transparent electrodes for many applications and could be an immediate ITO replacement for flexible electronics and solar cells. © 2010 American Chemical Society.

  15. Scalable Coating and Properties of Transparent, Flexible, Silver Nanowire Electrodes

    KAUST Repository

    Hu, Liangbing; Kim, Han Sun; Lee, Jung-Yong; Peumans, Peter; Cui, Yi

    2010-01-01

    We report a comprehensive study of transparent and conductive silver nanowire (Ag NW) electrodes, including a scalable fabrication process, morphologies, and optical, mechanical adhesion, and flexibility properties, and various routes to improve the performance. We utilized a synthesis specifically designed for long and thin wires for improved performance in terms of sheet resistance and optical transmittance. Twenty Ω/sq and ∼ 80% specular transmittance, and 8 ohms/sq and 80% diffusive transmittance in the visible range are achieved, which fall in the same range as the best indium tin oxide (ITO) samples on plastic substrates for flexible electronics and solar cells. The Ag NW electrodes show optical transparencies superior to ITO for near-infrared wavelengths (2-fold higher transmission). Owing to light scattering effects, the Ag NW network has the largest difference between diffusive transmittance and specular transmittance when compared with ITO and carbon nanotube electrodes, a property which could greatly enhance solar cell performance. A mechanical study shows that Ag NW electrodes on flexible substrates show excellent robustness when subjected to bending. We also study the electrical conductance of Ag nanowires and their junctions and report a facile electrochemical method for a Au coating to reduce the wire-to-wire junction resistance for better overall film conductance. Simple mechanical pressing was also found to increase the NW film conductance due to the reduction of junction resistance. The overall properties of transparent Ag NW electrodes meet the requirements of transparent electrodes for many applications and could be an immediate ITO replacement for flexible electronics and solar cells. © 2010 American Chemical Society.

  16. When the Web meets the cell: using personalized PageRank for analyzing protein interaction networks.

    Science.gov (United States)

    Iván, Gábor; Grolmusz, Vince

    2011-02-01

    An enormous and constantly increasing quantity of biological information is represented in metabolic and protein interaction network databases. Most of these data are freely accessible through large public depositories. The robust analysis of these resources requires novel technologies that are being developed today. Here we demonstrate a technique, originating from the PageRank computation for the World Wide Web, for analyzing large interaction networks. The method is fast, scalable and robust, and its capabilities are demonstrated on metabolic network data of the tuberculosis bacterium and the proteomics analysis of the blood of melanoma patients. The Perl script for computing the personalized PageRank in protein networks is available for non-profit research applications (together with sample input files) at the address: http://uratim.com/pp.zip.
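
    The record points to a Perl script; the following Python sketch shows the same kind of computation in outline, personalized PageRank by power iteration over an interaction graph with a restart (personalization) vector. The toy graph and parameters are invented for illustration.

        import numpy as np

        def personalized_pagerank(adj, restart, damping=0.85, tol=1e-10, max_iter=1000):
            # adj: dict node -> list of neighbours; restart: dict node -> restart weight.
            nodes = sorted(adj)
            idx = {n: i for i, n in enumerate(nodes)}
            n = len(nodes)
            # Column-stochastic transition matrix of the interaction network.
            M = np.zeros((n, n))
            for u, nbrs in adj.items():
                for v in nbrs:
                    M[idx[v], idx[u]] = 1.0 / len(nbrs)
            r = np.array([restart.get(v, 0.0) for v in nodes], dtype=float)
            r /= r.sum()
            p = r.copy()
            for _ in range(max_iter):
                p_next = damping * (M @ p) + (1 - damping) * r
                if np.abs(p_next - p).sum() < tol:
                    break
                p = p_next
            return dict(zip(nodes, p))

        # Toy interaction graph, personalized on protein "A".
        adj = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B", "D"], "D": ["C"]}
        print(personalized_pagerank(adj, restart={"A": 1.0}))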

  17. Scalable Track Detection in SAR CCD Images

    Energy Technology Data Exchange (ETDEWEB)

    Chow, James G [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Quach, Tu-Thach [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-03-01

    Existing methods to detect vehicle tracks in coherent change detection images, a product of combining two synthetic aperture radar images taken at different times of the same scene, rely on simple, fast models to label track pixels. These models, however, are often too simple to capture natural track features such as continuity and parallelism. We present a simple convolutional network architecture consisting of a series of 3-by-3 convolutions to detect tracks. The network is trained end-to-end to learn natural track features entirely from data. The network is computationally efficient and improves the F-score on a standard dataset to 0.988, up from 0.907 obtained by the current state-of-the-art method.
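
    A minimal PyTorch sketch of such an architecture, a stack of 3-by-3 convolutions producing a per-pixel track probability map, is given below. The framework choice, layer widths and depth are assumptions for illustration, not the configuration used in the paper.

        import torch
        import torch.nn as nn

        class TrackNet(nn.Module):
            """A small fully convolutional stack of 3x3 convolutions mapping a
            single-channel image to a per-pixel track probability."""

            def __init__(self, width=16, depth=4):
                super().__init__()
                layers, in_ch = [], 1
                for _ in range(depth):
                    layers += [nn.Conv2d(in_ch, width, kernel_size=3, padding=1),
                               nn.ReLU(inplace=True)]
                    in_ch = width
                layers += [nn.Conv2d(in_ch, 1, kernel_size=3, padding=1)]
                self.net = nn.Sequential(*layers)

            def forward(self, x):
                return torch.sigmoid(self.net(x))  # probability of "track" per pixel

        model = TrackNet()
        ccd = torch.randn(1, 1, 256, 256)  # one synthetic CCD image
        print(model(ccd).shape)            # torch.Size([1, 1, 256, 256])

    Training such a stack end-to-end would typically minimize a per-pixel binary cross-entropy loss against labeled track masks.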

  18. Cost Effective, Scalable Optically Pumped Molecular Laser

    National Research Council Canada - National Science Library

    Nicholson, Jeff

    2001-01-01

    An optically pumped HBr laser was demonstrated operating at 4.0 micrometers. This is the first demonstration of an HBr laser by direct optical pumping of the 0 → 3 vibrational overtone band at 1.34 micrometers...

  19. IPv6 Network Administration

    CERN Document Server

    Murphy, Niall Richard

    2009-01-01

    This essential guide explains what works, what doesn't, and most of all, what's practical about IPv6--the next-generation Internet standard. A must-have for network administrators everywhere looking to fix their network's scalability and management problems. Also covers other IPv6 benefits, such as routing, integrated auto-configuration, quality-of-service (QoS), enhanced mobility, and end-to-end security.

  20. Secure Interoperable Open Smart Grid Demonstration Project

    Energy Technology Data Exchange (ETDEWEB)

    Magee, Thoman [Consolidated Edison Company Of New York, Inc., NY (United States)

    2014-12-28

    The Consolidated Edison, Inc., of New York (Con Edison) Secure Interoperable Open Smart Grid Demonstration Project (SGDP), sponsored by the United States (US) Department of Energy (DOE), demonstrated that the reliability, efficiency, and flexibility of the grid can be improved through a combination of enhanced monitoring and control capabilities using systems and resources that interoperate within a secure services framework. The project demonstrated the capability to shift, balance, and reduce load where and when needed in response to system contingencies or emergencies by leveraging controllable field assets. The range of field assets includes curtailable customer loads, distributed generation (DG), battery storage, electric vehicle (EV) charging stations, building management systems (BMS), home area networks (HANs), high-voltage monitoring, and advanced metering infrastructure (AMI). The SGDP enables the seamless integration and control of these field assets through a common, cyber-secure, interoperable control platform, which integrates a number of existing legacy control and data systems, as well as new smart grid (SG) systems and applications. By integrating advanced technologies for monitoring and control, the SGDP helps target and reduce peak load growth, improves the reliability and efficiency of Con Edison’s grid, and increases the ability to accommodate the growing use of distributed resources. Con Edison is dedicated to lowering costs, improving reliability and customer service, and reducing its impact on the environment for its customers. These objectives also align with the policy objectives of New York State as a whole. To help meet these objectives, Con Edison’s long-term vision for the distribution grid relies on the successful integration and control of a growing penetration of distributed resources, including demand response (DR) resources, battery storage units, and DG. For example, Con Edison is expecting significant long-term growth of DG

  1. Scalable Content Authentication in H.264/SVC Videos Using Perceptual Hashing based on Dempster-Shafer theory

    Directory of Open Access Journals (Sweden)

    Ye Dengpan

    2012-09-01

    The content authenticity of multimedia delivery is an important issue with the rapid development and wide use of multimedia technology. To date, many authentication solutions have been proposed, such as cryptology- and watermarking-based methods. However, in the latest heterogeneous networks, video streams are transmitted in scalable codings such as H.264/SVC, and there is still no good authentication solution for them. In this paper, we first summarize related works and propose a scalable content authentication scheme using a ratio of different energy (RDE) based perceptual hashing in the Q/S dimension, which uses Dempster-Shafer theory and is combined with the latest scalable video coding (H.264/SVC) construction. The idea of "sign once and verify in a scalable way" can be realized. Compared with previous methods, the proposed scheme based on perceptual hashing outperforms previous works in uncertainty (robustness) and efficiency for H.264/SVC video streams. Finally, the experimental results verify the performance of our scheme.

  2. Scalability of voltage-controlled filamentary and nanometallic resistance memory devices.

    Science.gov (United States)

    Lu, Yang; Lee, Jong Ho; Chen, I-Wei

    2017-08-31

    Much effort has been devoted to device and materials engineering to realize nanoscale resistance random access memory (RRAM) for practical applications, but a rational physical basis to be relied on to design scalable devices spanning many length scales is still lacking. In particular, there is no clear criterion for switching control in those RRAM devices in which resistance changes are limited to localized nanoscale filaments that experience concentrated heat, electric current and field. Here, we demonstrate voltage-controlled resistance switching, always at a constant characteristic critical voltage, for macro and nanodevices in both filamentary RRAM and nanometallic RRAM, and the latter switches uniformly and does not require a forming process. As a result, area-scalability can be achieved under a device-area-proportional current compliance for the low resistance state of the filamentary RRAM, and for both the low and high resistance states of the nanometallic RRAM. This finding will help design area-scalable RRAM at the nanoscale. It also establishes an analogy between RRAM and synapses, in which signal transmission is also voltage-controlled.

  3. ESVD: An Integrated Energy Scalable Framework for Low-Power Video Decoding Systems

    Directory of Open Access Journals (Sweden)

    Wen Ji

    2010-01-01

    Video applications on mobile wireless devices are challenging due to the limited capacity of batteries. The highly complex functionality of video decoding imposes high resource requirements. Thus, power-efficient control has become a more critical design concern as devices integrate complex video processing techniques. Previous works on power-efficient control in video decoding systems often aim at low-complexity design, do not explicitly consider the scalable impact of subfunctions in the decoding process, and seldom consider the relationship with the features of the compressed video data. This paper is dedicated to developing an energy-scalable video decoding (ESVD) strategy for energy-limited mobile terminals. First, ESVD can dynamically adapt to variable energy resources through a device-aware technique. Second, ESVD combines decoder control with the decoded data by classifying the data into different partition profiles according to its characteristics. Third, it introduces utility-theoretic analysis into the resource allocation process, so as to maximize resource utilization. Finally, it adapts to different energy budgets and generates scalable video decoding output on energy-limited systems. Experimental results demonstrate the efficiency of the proposed approach.

  4. Laplacian embedded regression for scalable manifold regularization.

    Science.gov (United States)

    Chen, Lin; Tsang, Ivor W; Xu, Dong

    2012-06-01

    …real-world data sets show the effectiveness and scalability of the proposed framework.

  5. Scalable Parallel Distributed Coprocessor System for Graph Searching Problems with Massive Data

    Directory of Open Access Journals (Sweden)

    Wanrong Huang

    2017-01-01

    Internet applications, such as network searching, electronic commerce, and modern medical applications, produce and process massive amounts of data. Considerable data parallelism exists in the computation processes of data-intensive applications. A traversal algorithm, breadth-first search (BFS), is fundamental in many graph processing applications and metrics when a graph grows in scale. A variety of scientific programming methods have been proposed for accelerating and parallelizing BFS because of the poor temporal and spatial locality caused by inherently irregular memory access patterns. However, new parallel hardware could provide better improvement for scientific methods. To address small-world graph problems, we propose a scalable and novel field-programmable gate array-based heterogeneous multicore system for scientific programming. The core is multithreaded for streaming processing, and the InfiniBand communication network is adopted for scalability. We design a binary search algorithm for address mapping to unify all processor addresses. Within the limits permitted by the Graph500 benchmark, after testing a 1D parallel hybrid BFS algorithm, our 8-core, 8-thread-per-core system achieved superior performance and efficiency compared with prior work under the same degree of parallelism. Our system is efficient not as a special acceleration unit but as a processor platform that deals with graph searching applications.
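
    The traversal at the heart of such a system can be written compactly; the following plain-Python sketch of level-synchronous BFS shows the frontier-expansion step that the described multithreaded cores and InfiniBand network parallelize. The toy graph is invented.

        def bfs_levels(adj, source):
            # Level-synchronous BFS: expand the whole frontier one level at a time,
            # which is the step a multicore/FPGA system can parallelize.
            level = {source: 0}
            frontier = {source}
            depth = 0
            while frontier:
                depth += 1
                next_frontier = set()
                for u in frontier:
                    for v in adj[u]:
                        if v not in level:
                            level[v] = depth
                            next_frontier.add(v)
                frontier = next_frontier
            return level

        adj = {0: [1, 2], 1: [0, 3], 2: [0, 3, 4], 3: [1, 2], 4: [2, 5], 5: [4]}
        print(bfs_levels(adj, 0))  # {0: 0, 1: 1, 2: 1, 3: 2, 4: 2, 5: 3}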

  6. Scalable Light Module for Low-Cost, High-Efficiency Light- Emitting Diode Luminaires

    Energy Technology Data Exchange (ETDEWEB)

    Tarsa, Eric [Cree, Inc., Goleta, CA (United States)

    2015-08-31

    During this two-year program Cree developed a scalable, modular optical architecture for low-cost, high-efficacy light-emitting diode (LED) luminaires. Stated simply, the goal of this architecture was to efficiently and cost-effectively convey light from LEDs (point sources) to broad luminaire surfaces (area sources). By simultaneously developing warm-white LED components and low-cost, scalable optical elements, a high system optical efficiency resulted. To meet program goals, Cree evaluated novel approaches to improve LED component efficacy at high color quality while not sacrificing LED optical efficiency relative to conventional packages. Meanwhile, efficiently coupling light from LEDs into modular optical elements, followed by optimally distributing and extracting this light, were challenges that were addressed via novel optical design coupled with frequent experimental evaluations. Minimizing luminaire bill of materials and assembly costs were two guiding principles for all design work, in the effort to achieve luminaires with significantly lower normalized cost ($/klm) than existing LED fixtures. Chief project accomplishments included the achievement of >150 lm/W warm-white LEDs having primary optics compatible with low-cost modular optical elements. In addition, a prototype Light Module optical efficiency of over 90% was measured, demonstrating the potential of this scalable architecture for ultra-high-efficacy LED luminaires. Since the project ended, Cree has continued to evaluate optical element fabrication and assembly methods in an effort to rapidly transfer this scalable, cost-effective technology to Cree production development groups. The Light Module concept is likely to make a strong contribution to the development of new cost-effective, high-efficacy luminaires, thereby accelerating widespread adoption of energy-saving SSL in the U.S.

  7. High-Performance Scalable Information Service for the ATLAS Experiment

    International Nuclear Information System (INIS)

    Kolos, S; Boutsioukis, G; Hauser, R

    2012-01-01

    The ATLAS[1] experiment is operated by a highly distributed computing system which constantly produces a large amount of status information that is used to monitor the experiment's operational conditions as well as to assess the quality of the physics data being taken. For example, the ATLAS High Level Trigger (HLT) algorithms are executed on the online computing farm consisting of about 1500 nodes. Each HLT algorithm produces a few thousand histograms, which have to be integrated over the whole farm and carefully analyzed in order to properly tune the event rejection. In order to handle such non-physics data, the Information Service (IS) facility has been developed in the scope of the ATLAS Trigger and Data Acquisition (TDAQ)[2] project. The IS provides a high-performance scalable solution for information exchange in a distributed environment. In the course of an ATLAS data-taking session the IS handles about a hundred gigabytes of information which is constantly updated, with the update interval varying from a second to a few tens of seconds. The IS provides access to any information item on request as well as distributing notifications to all the information subscribers. In the latter case IS subscribers receive information within a few milliseconds after it was updated. The IS can handle arbitrary types of information, including histograms produced by the HLT applications, and provides C++, Java and Python APIs. The Information Service is a unique source of information for the majority of the online monitoring analysis and GUI applications used to control and monitor the ATLAS experiment. The Information Service provides streaming functionality allowing efficient replication of all or part of the managed information. This functionality is used to duplicate the subset of the ATLAS monitoring data to the CERN public network with a latency of a few milliseconds, allowing efficient real-time monitoring of the data taking from outside the protected ATLAS network. Each information

  8. Logistical networking: a global storage network

    International Nuclear Information System (INIS)

    Beck, Micah; Moore, Terry

    2005-01-01

    The absence of an adequate distributed storage infrastructure for data buffering has become a significant impediment to the flow of work in the wide area, data intensive collaborations that are increasingly characteristic of leading edge research in several fields. One solution to this problem, pioneered under DOE's SciDAC program, is Logistical Networking, which provides a framework for a globally scalable, maximally interoperable storage network based on the Internet Backplane Protocol (IBP). This paper provides a brief overview of the Logistical Networking (LN) architecture, the middleware developed to exploit its value, and a few of the applications that some of research communities have made of it

  9. Final Scientific/Technical Report for "Enabling Exascale Hardware and Software Design through Scalable System Virtualization"

    Energy Technology Data Exchange (ETDEWEB)

    Dinda, Peter August [Northwestern Univ., Evanston, IL (United States)

    2015-03-17

    This report describes the activities, findings, and products of the Northwestern University component of the "Enabling Exascale Hardware and Software Design through Scalable System Virtualization" project. The purpose of this project has been to extend the state of the art of systems software for high-end computing (HEC) platforms, and to use systems software to better enable the evaluation of potential future HEC platforms, for example exascale platforms. Such platforms, and their systems software, have the goal of providing scientific computation at new scales, thus enabling new research in the physical sciences and engineering. Over time, the innovations in systems software for such platforms also become applicable to more widely used computing clusters, data centers, and clouds. This was a five-institution project, centered on the Palacios virtual machine monitor (VMM) systems software, a project begun at Northwestern, and originally developed in a previous collaboration between Northwestern University and the University of New Mexico. In this project, Northwestern (including via our subcontract to the University of Pittsburgh) contributed to the continued development of Palacios, along with other team members. We took the leadership role in (1) continued extension of support for emerging Intel and AMD hardware, (2) integration and performance enhancement of overlay networking, (3) connectivity with architectural simulation, (4) binary translation, and (5) support for modern Non-Uniform Memory Access (NUMA) hosts and guests. We also took a supporting role in support for specialized hardware for I/O virtualization, profiling, configurability, and integration with configuration tools. The efforts we led (1-5) were largely successful and executed as expected, with code and papers resulting from them. The project demonstrated the feasibility of a virtualization layer for HEC computing, similar to such layers for cloud or datacenter computing. For effort (3

  10. Scalable Multicasting over Next-Generation Internet Design, Analysis and Applications

    CERN Document Server

    Tian, Xiaohua

    2013-01-01

    Next-generation Internet providers face high expectations, as contemporary users worldwide expect high-quality multimedia functionality in a landscape of ever-expanding network applications. This volume explores the critical research issue of turning today’s greatly enhanced hardware capacity to good use in designing a scalable multicast  protocol for supporting large-scale multimedia services. Linking new hardware to improved performance in the Internet’s next incarnation is a research hot-spot in the computer communications field.   The methodical presentation deals with the key questions in turn: from the mechanics of multicast protocols to current state-of-the-art designs, and from methods of theoretical analysis of these protocols to applying them in the ns2 network simulator, known for being hard to extend. The authors’ years of research in the field inform this thorough treatment, which covers details such as applying AOM (application-oriented multicast) protocol to IPTV provision and resolving...

  11. Effectiveness of a Littoral Combat Ship as a Major Node in a Wireless Mesh Network

    Science.gov (United States)

    2017-03-01

    …responsible for connection management, handover control and measurement control (Scalable Network Technologies, 2014c).

  12. Scalable System Design for Covert MIMO Communications

    Science.gov (United States)

    2014-06-01

    …communications, satellite radio and Wireless Local Area Network (WLAN), OFDM has been utilized for its multi-path resistance. OFDM relies on the… develop hardware specific to the application provides faster computation times, making FPGA development a very powerful tool.

  13. Optical interconnection networks for high-performance computing systems

    International Nuclear Information System (INIS)

    Biberman, Aleksandr; Bergman, Keren

    2012-01-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate such feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers. (review article)

  14. Experimental demonstration of reservoir computing on a silicon photonics chip

    Science.gov (United States)

    Vandoorne, Kristof; Mechet, Pauline; van Vaerenbergh, Thomas; Fiers, Martin; Morthier, Geert; Verstraeten, David; Schrauwen, Benjamin; Dambre, Joni; Bienstman, Peter

    2014-03-01

    In today’s age, companies employ machine learning to extract information from large quantities of data. One of those techniques, reservoir computing (RC), is a decade old and has achieved state-of-the-art performance for processing sequential data. Dedicated hardware realizations of RC could enable speed gains and power savings. Here we propose the first integrated passive silicon photonics reservoir. We demonstrate experimentally and through simulations that, thanks to the RC paradigm, this generic chip can be used to perform arbitrary Boolean logic operations with memory as well as 5-bit header recognition up to 12.5 Gbit s-1, without power consumption in the reservoir. It can also perform isolated spoken digit recognition. Our realization exploits optical phase for computing. It is scalable to larger networks and much higher bitrates, up to speeds >100 Gbit s-1. These results pave the way for the application of integrated photonic RC for a wide range of applications.
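
    The reservoir in this work is a passive photonic circuit, but the RC paradigm itself, a fixed random reservoir with only a trained linear readout, can be sketched in software. The NumPy example below learns a delayed-XOR task (Boolean logic with memory); the reservoir size, scaling and task are illustrative assumptions, not the photonic implementation.

        import numpy as np

        rng = np.random.default_rng(1)

        # Task: XOR of the input bits one and two steps in the past, a small
        # "Boolean logic with memory" problem (wrap-around at the start is ignored).
        T = 2000
        u = rng.integers(0, 2, size=T).astype(float)
        y = np.roll(u, 1).astype(int) ^ np.roll(u, 2).astype(int)

        # Fixed random reservoir; only the linear readout is trained.
        N = 100
        W_in = rng.uniform(-0.5, 0.5, size=N)
        W = rng.normal(size=(N, N))
        W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

        x = np.zeros(N)
        states = np.empty((T, N))
        for t in range(T):
            x = np.tanh(W @ x + W_in * u[t])
            states[t] = x

        # Ridge-regression readout: train on the first half, test on the rest.
        split, lam = T // 2, 1e-6
        A, b = states[:split], y[:split]
        w_out = np.linalg.solve(A.T @ A + lam * np.eye(N), A.T @ b)
        pred = (states[split:] @ w_out > 0.5).astype(int)
        print("test accuracy:", (pred == y[split:]).mean())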

  15. Quality Scalability Compression on Single-Loop Solution in HEVC

    Directory of Open Access Journals (Sweden)

    Mengmeng Zhang

    2014-01-01

    This paper proposes a quality scalable extension design for the upcoming high efficiency video coding (HEVC) standard. In the proposed design, the single-loop decoder solution is extended to the scalable scenario. A novel interlayer intra/inter prediction is added to reduce the number of bits needed for representation by exploiting the correlation between coding layers. The experimental results indicate that an average Bjøntegaard delta rate decrease of 20.50% can be gained compared with simulcast encoding. The proposed technique achieves a 47.98% Bjøntegaard delta rate reduction compared with the scalable video coding extension of H.264/AVC. Consequently, the significant rate savings confirm that the proposed method achieves better performance.

  16. Recurrent, Robust and Scalable Patterns Underlie Human Approach and Avoidance

    Science.gov (United States)

    Kennedy, David N.; Lehár, Joseph; Lee, Myung Joo; Blood, Anne J.; Lee, Sang; Perlis, Roy H.; Smoller, Jordan W.; Morris, Robert; Fava, Maurizio

    2010-01-01

    Background Approach and avoidance behavior provide a means for assessing the rewarding or aversive value of stimuli, and can be quantified by a keypress procedure whereby subjects work to increase (approach), decrease (avoid), or do nothing about time of exposure to a rewarding/aversive stimulus. To investigate whether approach/avoidance behavior might be governed by quantitative principles that meet engineering criteria for lawfulness and that encode known features of reward/aversion function, we evaluated whether keypress responses toward pictures with potential motivational value produced any regular patterns, such as a trade-off between approach and avoidance, or recurrent lawful patterns as observed with prospect theory. Methodology/Principal Findings Three sets of experiments employed this task with beautiful face images, a standardized set of affective photographs, and pictures of food during controlled states of hunger and satiety. An iterative modeling approach to data identified multiple law-like patterns, based on variables grounded in the individual. These patterns were consistent across stimulus types, robust to noise, describable by a simple power law, and scalable between individuals and groups. Patterns included: (i) a preference trade-off counterbalancing approach and avoidance, (ii) a value function linking preference intensity to uncertainty about preference, and (iii) a saturation function linking preference intensity to its standard deviation, thereby setting limits to both. Conclusions/Significance These law-like patterns were compatible with critical features of prospect theory, the matching law, and alliesthesia. Furthermore, they appeared consistent with both mean-variance and expected utility approaches to the assessment of risk. Ordering of responses across categories of stimuli demonstrated three properties thought to be relevant for preference-based choice, suggesting these patterns might be grouped together as a relative preference

  17. Recurrent, robust and scalable patterns underlie human approach and avoidance.

    Directory of Open Access Journals (Sweden)

    Byoung Woo Kim

    2010-05-01

    Approach and avoidance behavior provide a means for assessing the rewarding or aversive value of stimuli, and can be quantified by a keypress procedure whereby subjects work to increase (approach), decrease (avoid), or do nothing about time of exposure to a rewarding/aversive stimulus. To investigate whether approach/avoidance behavior might be governed by quantitative principles that meet engineering criteria for lawfulness and that encode known features of reward/aversion function, we evaluated whether keypress responses toward pictures with potential motivational value produced any regular patterns, such as a trade-off between approach and avoidance, or recurrent lawful patterns as observed with prospect theory. Three sets of experiments employed this task with beautiful face images, a standardized set of affective photographs, and pictures of food during controlled states of hunger and satiety. An iterative modeling approach to data identified multiple law-like patterns, based on variables grounded in the individual. These patterns were consistent across stimulus types, robust to noise, describable by a simple power law, and scalable between individuals and groups. Patterns included: (i) a preference trade-off counterbalancing approach and avoidance, (ii) a value function linking preference intensity to uncertainty about preference, and (iii) a saturation function linking preference intensity to its standard deviation, thereby setting limits to both. These law-like patterns were compatible with critical features of prospect theory, the matching law, and alliesthesia. Furthermore, they appeared consistent with both mean-variance and expected utility approaches to the assessment of risk. Ordering of responses across categories of stimuli demonstrated three properties thought to be relevant for preference-based choice, suggesting these patterns might be grouped together as a relative preference theory. Since variables in these patterns have been
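
    The power-law form of the reported patterns can be fitted in the usual way, by linear regression in log-log coordinates. The sketch below does this on synthetic data standing in for keypress statistics; the variables and exponent are invented, so it only illustrates the fitting step, not the paper's actual relations.

        import numpy as np

        rng = np.random.default_rng(7)

        # Synthetic stand-in for keypress statistics: assume the standard deviation
        # of preference intensity follows sigma = a * K**b with multiplicative noise.
        K = rng.uniform(1.0, 50.0, size=200)
        sigma = 2.0 * K ** 0.5 * rng.lognormal(0.0, 0.1, size=200)

        # Power-law fit by ordinary least squares in log-log coordinates.
        slope, intercept = np.polyfit(np.log(K), np.log(sigma), 1)
        print(f"estimated exponent b = {slope:.2f}, prefactor a = {np.exp(intercept):.2f}")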

  18. Approaches for scalable modeling and emulation of cyber systems : LDRD final report.

    Energy Technology Data Exchange (ETDEWEB)

    Mayo, Jackson R.; Minnich, Ronald G.; Armstrong, Robert C.; Rudish, Don W.

    2009-09-01

    The goal of this research was to combine theoretical and computational approaches to better understand the potential emergent behaviors of large-scale cyber systems, such as networks of ~10^6 computers. The scale and sophistication of modern computer software, hardware, and deployed networked systems have significantly exceeded the computational research community's ability to understand, model, and predict current and future behaviors. This predictive understanding, however, is critical to the development of new approaches for proactively designing new systems or enhancing existing systems with robustness to current and future cyber threats, including distributed malware such as botnets. We have developed preliminary theoretical and modeling capabilities that can ultimately answer questions such as: How would we reboot the Internet if it were taken down? Can we change network protocols to make them more secure without disrupting existing Internet connectivity and traffic flow? We have begun to address these issues by developing new capabilities for understanding and modeling Internet systems at scale. Specifically, we have addressed the need for scalable network simulation by carrying out emulations of a network with ~10^6 virtualized operating system instances on a high-performance computing cluster - a 'virtual Internet'. We have also explored mappings between previously studied emergent behaviors of complex systems and their potential cyber counterparts. Our results provide foundational capabilities for further research toward understanding the effects of complexity in cyber systems, to allow anticipating and thwarting hackers.

  19. Scalable UWB photonic generator based on the combination of doublet pulses.

    Science.gov (United States)

    Moreno, Vanessa; Rius, Manuel; Mora, José; Muriel, Miguel A; Capmany, José

    2014-06-30

    We propose and experimentally demonstrate a scalable and reconfigurable optical scheme to generate high-order UWB pulses. First, various ultra-wideband doublets are created through a process of phase-to-intensity conversion by means of phase modulation and a dispersive medium. In a second stage, the doublets are combined in an optical processing unit that allows the reconfiguration of high-order UWB pulses. Experimental results in both the time and frequency domains are presented, showing good performance in terms of the fractional bandwidth and spectral efficiency parameters.
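
    A back-of-the-envelope illustration of the combination principle, not the optical implementation, is to superpose weighted, delayed Gaussian doublets numerically and inspect the resulting spectrum. All pulse parameters below are assumptions.

        import numpy as np

        # Hypothetical parameters: time axis in ns, Gaussian width tau in ns.
        t = np.linspace(-2.0, 2.0, 4001)
        tau = 0.25

        def doublet(t, delay=0.0):
            # Gaussian doublet (first derivative of a Gaussian), standing in for
            # the pulse produced by the phase-to-intensity conversion stage.
            x = t - delay
            return -x / tau**2 * np.exp(-x**2 / (2 * tau**2))

        # Weighted, delayed combination of doublets gives a higher-order pulse.
        weights, delays = [1.0, -1.0], [0.0, 0.35]
        pulse = sum(w * doublet(t, d) for w, d in zip(weights, delays))

        # Crude -10 dB fractional bandwidth estimate from the power spectrum.
        spectrum = np.abs(np.fft.rfft(pulse)) ** 2
        freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])  # GHz, since t is in ns
        band = freqs[spectrum >= spectrum.max() / 10]
        f_low, f_high = band[0], band[-1]
        print("fractional bandwidth:", (f_high - f_low) / ((f_high + f_low) / 2))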

  20. Bioinspired superhydrophobic surfaces, fabricated through simple and scalable roll-to-roll processing.

    Science.gov (United States)

    Park, Sung-Hoon; Lee, Sangeui; Moreira, David; Bandaru, Prabhakar R; Han, InTaek; Yun, Dong-Jin

    2015-10-22

    A simple, scalable, non-lithographic, technique for fabricating durable superhydrophobic (SH) surfaces, based on the fingering instabilities associated with non-Newtonian flow and shear tearing, has been developed. The high viscosity of the nanotube/elastomer paste has been exploited for the fabrication. The fabricated SH surfaces had the appearance of bristled shark skin and were robust with respect to mechanical forces. While flow instability is regarded as adverse to roll-coating processes for fabricating uniform films, we especially use the effect to create the SH surface. Along with their durability and self-cleaning capabilities, we have demonstrated drag reduction effects of the fabricated films through dynamic flow measurements.

  1. Development of a modular and scalable sensor system for the gathering of position and orientation of moved objects

    International Nuclear Information System (INIS)

    Klingbeil, L.

    2006-02-01

    A modular and scalable sensor system for the estimation of position and orientation of moving objects has been developed and characterized. A sensor unit, which is mounted on the moving object, consists of acceleration, angular rate and magnetic field sensors for every spatial axis. Customized Kalman filter algorithms provide a robust and low-latency reconstruction of the sensor's orientation. Additionally, an ultrasound transducer network is used to measure the distance of a sensor unit with respect to several reference points in the room. This allows reconstruction of the absolute position using trilateration methods. The system is scalable with respect to the number of sensor units and the covered tracking volume. It is suitable for various applications, for example the analysis of body movements or head tracking in augmented or virtual reality environments. (orig.)
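
    The trilateration step can be sketched as a small linear least-squares problem: subtracting the range equation of one reference point from the others makes the unknown position appear linearly. The Python example below uses invented anchor positions (non-coplanar, so all three coordinates are observable) and noise-free distances; it illustrates the principle, not the described system's code.

        import numpy as np

        def trilaterate(anchors, distances):
            # Subtracting the range equation of anchor 0 from the others gives a
            # linear system: 2*(a_i - a_0).p = d_0^2 - d_i^2 + |a_i|^2 - |a_0|^2.
            anchors = np.asarray(anchors, dtype=float)
            d = np.asarray(distances, dtype=float)
            a0, d0 = anchors[0], d[0]
            A = 2.0 * (anchors[1:] - a0)
            b = (d0**2 - d[1:]**2) + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2)
            p, *_ = np.linalg.lstsq(A, b, rcond=None)
            return p

        # Four non-coplanar ultrasound reference points and a sensor at (1, 2, 1) m.
        anchors = [(0, 0, 3), (4, 0, 3), (0, 4, 3), (4, 4, 1)]
        true_p = np.array([1.0, 2.0, 1.0])
        dists = [np.linalg.norm(true_p - np.array(a)) for a in anchors]
        print(trilaterate(anchors, dists))  # ~ [1, 2, 1]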

  2. A molecular quantum spin network controlled by a single qubit.

    Science.gov (United States)

    Schlipf, Lukas; Oeckinghaus, Thomas; Xu, Kebiao; Dasari, Durga Bhaktavatsala Rao; Zappe, Andrea; de Oliveira, Felipe Fávaro; Kern, Bastian; Azarkh, Mykhailo; Drescher, Malte; Ternes, Markus; Kern, Klaus; Wrachtrup, Jörg; Finkler, Amit

    2017-08-01

    Scalable quantum technologies require an unprecedented combination of precision and complexity for designing stable structures of well-controllable quantum systems on the nanoscale. It is a challenging task to find a suitable elementary building block, of which a quantum network can be comprised in a scalable way. We present the working principle of such a basic unit, engineered using molecular chemistry, whose collective control and readout are executed using a nitrogen vacancy (NV) center in diamond. The basic unit we investigate is a synthetic polyproline with electron spins localized on attached molecular side groups separated by a few nanometers. We demonstrate the collective readout and coherent manipulation of very few (≤ 6) of these S = 1/2 electronic spin systems and access their direct dipolar coupling tensor. Our results show that it is feasible to use spin-labeled peptides as a resource for a molecular qubit-based network, while at the same time providing simple optical readout of single quantum states through NV magnetometry. This work lays the foundation for building arbitrary quantum networks using well-established chemistry methods, which has many applications ranging from mapping distances in single molecules to quantum information processing.

  3. Multicast Performance Analysis for High-Speed Torus Networks

    National Research Council Canada - National Science Library

    Oral, S; George, A

    2002-01-01

    ... for unicast-based and path-based multicast communication on high-speed torus networks. Software-based multicast performance results of selected algorithms on a 16-node Scalable Coherent Interface (SCI) torus are given...

  4. A Review of the Topologies Used in Smart Water Meter Networks: A Wireless Sensor Network Application

    OpenAIRE

    Marais, Jaco; Malekian, Reza; Ye, Ning; Wang, Ruchuan

    2016-01-01

    This paper presents several proposed and existing smart utility meter systems as well as their communication networks to identify the challenges of creating scalable smart water meter networks. Network simulations are performed on 3 network topologies (star, tree, and mesh) to determine their suitability for smart water meter networks. The simulations found that once a threshold number of nodes is exceeded, the network’s delay increases dramatically regardless of the implemented topology. This thr...

  5. Modeling Temporal Behavior in Large Networks: A Dynamic Mixed-Membership Model

    Energy Technology Data Exchange (ETDEWEB)

    Rossi, R; Gallagher, B; Neville, J; Henderson, K

    2011-11-11

    Given a large time-evolving network, how can we model and characterize the temporal behaviors of individual nodes (and network states)? How can we model the behavioral transition patterns of nodes? We propose a temporal behavior model that captures the 'roles' of nodes in the graph and how they evolve over time. The proposed dynamic behavioral mixed-membership model (DBMM) is scalable, fully automatic (no user-defined parameters), non-parametric/data-driven (no specific functional form or parameterization), interpretable (identifies explainable patterns), and flexible (applicable to dynamic and streaming networks). Moreover, the interpretable behavioral roles are generalizable, computationally efficient, and natively support attributes. We applied our model for (a) identifying patterns and trends of nodes and network states based on the temporal behavior, (b) predicting future structural changes, and (c) detecting unusual temporal behavior transitions. We use eight large real-world datasets from different time-evolving settings (dynamic and streaming). In particular, we model the evolving mixed-memberships and the corresponding behavioral transitions of Twitter, Facebook, IP-Traces, Email (University), Internet AS, Enron, Reality, and IMDB. The experiments demonstrate the scalability, flexibility, and effectiveness of our model for identifying interesting patterns, detecting unusual structural transitions, and predicting the future structural changes of the network and individual nodes.
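
    The abstract does not spell out how the transition patterns are estimated; the minimal sketch below only illustrates the general idea of fitting a role-to-role transition matrix from a sequence of soft role memberships. All names and the least-squares fit are assumptions, not the authors' DBMM.

```python
import numpy as np

def fit_role_transitions(memberships):
    """Fit a role-to-role transition matrix from a sequence of
    node-by-role membership matrices (one per time snapshot).

    Solves M_t @ T ~= M_{t+1} in the least-squares sense over all
    consecutive snapshots, then clips and row-normalizes T so it can be
    read as transition probabilities between behavioral roles.
    """
    A = np.vstack(memberships[:-1])          # stacked M_t
    B = np.vstack(memberships[1:])           # stacked M_{t+1}
    T, *_ = np.linalg.lstsq(A, B, rcond=None)
    T = np.clip(T, 0.0, None)
    return T / np.maximum(T.sum(axis=1, keepdims=True), 1e-12)

# Toy example: 5 nodes, 3 behavioral roles, 4 snapshots of soft memberships.
rng = np.random.default_rng(0)
snaps = [rng.dirichlet(np.ones(3), size=5) for _ in range(4)]
print(fit_role_transitions(snaps).round(2))  # rows sum to 1
```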

  6. Decision making uncertainty, imperfection, deliberation and scalability

    CERN Document Server

    Kárný, Miroslav; Wolpert, David

    2015-01-01

    This volume focuses on uncovering the fundamental forces underlying dynamic decision making among multiple interacting, imperfect and selfish decision makers. The chapters are written by leading experts from different disciplines, all considering the many sources of imperfection in decision making, and always with an eye to decreasing the myriad discrepancies between theory and real world human decision making. Topics addressed include uncertainty, deliberation cost and the complexity arising from the inherent large computational scale of decision making in these systems. In particular, analyses and experiments are presented which concern: • task allocation to maximize “the wisdom of the crowd”; • design of a society of “edutainment” robots who account for one anothers’ emotional states; • recognizing and counteracting seemingly non-rational human decision making; • coping with extreme scale when learning causality in networks; • efficiently incorporating expert knowledge in personalized...

  7. Nordic campus retrofitting concepts - Scalable practices

    DEFF Research Database (Denmark)

    Eriksson, Robert; Nenonen, Suvi; Junghans, Antje

    2015-01-01

    Multidisciplinary collaboration and transformations in learning processes can be supported by activity-based campus retrofitting. The aim of this paper is to analyse the ongoing campus retrofitting processes at the three university campuses and to identify the elements of activity-based retrofitting. We answer the questions “What kind of examples of retrofitting are there at Nordic Campuses?” and “What kind of elements are typical for activity-based retrofitting concepts?” The 3-level framework of campus retrofitting processes was employed when conducting the three case studies. The cases were about the new ways of researching, collaborating and learning with the concepts of Living lab, Creative community for innovation and entrepreneurship and Network of learning hubs. The cases provided the first insights on retrofitting based on users’ changing needs and the requirements of more...

  8. Data Intensive Architecture for Scalable Cyber Analytics

    Energy Technology Data Exchange (ETDEWEB)

    Olsen, Bryan K.; Johnson, John R.; Critchlow, Terence J.

    2011-11-15

    Cyber analysts are tasked with the identification and mitigation of network exploits and threats. These compromises are difficult to identify due to the characteristics of cyber communication, the volume of traffic, and the duration of possible attack. It is necessary to have analytical tools to help analysts identify anomalies that span seconds, days, and weeks. Unfortunately, providing analytical tools effective access to the volumes of underlying data requires novel architectures, which is often overlooked in operational deployments. Our work is focused on a summary record of communication, called a flow. Flow records are intended to summarize a communication session between a source and a destination, providing a level of aggregation from the base data. Despite this aggregation, many enterprise network perimeter sensors store millions of network flow records per day. The volume of data makes analytics difficult, requiring the development of new techniques to efficiently identify temporal patterns and potential threats. The massive volume makes analytics difficult, but there are other characteristics in the data which compound the problem. Within the billions of records of communication that transact, there are millions of distinct IP addresses involved. Characterizing patterns of entity behavior is very difficult with the vast number of entities that exist in the data. Research has struggled to validate a model for typical network behavior with hopes it will enable the identification of atypical behavior. Complicating matters more, typically analysts are only able to visualize and interact with fractions of data and have the potential to miss long term trends and behaviors. Our analysis approach focuses on aggregate views and visualization techniques to enable flexible and efficient data exploration as well as the capability to view trends over long periods of time. Realizing that interactively exploring summary data allowed analysts to effectively identify

  9. Visualizing Article Similarities via Sparsified Article Network and Map Projection for Systematic Reviews.

    Science.gov (United States)

    Ji, Xiaonan; Machiraju, Raghu; Ritter, Alan; Yen, Po-Yin

    2017-01-01

    Systematic Reviews (SRs) of biomedical literature summarize evidence from high-quality studies to inform clinical decisions, but are time- and labor-intensive due to the large size of the article collections involved. Article similarities established from textual features have been shown to assist in the identification of relevant articles, making the article screening process more efficient. In this study, we visualized article similarities to extend their utilization in practical settings for SR researchers, aiming to promote human comprehension of article distributions and hidden patterns. To provide an effective visualization in an interpretable, intuitive, and scalable way, we implemented a graph-based network visualization with three network sparsification approaches and a distance-based map projection via dimensionality reduction. We evaluated and compared the three network sparsification approaches and the two visualization types (article network vs. article map), and demonstrated their effectiveness in revealing article distributions and exhibiting clustering patterns of relevant articles with practical meaning for SRs.
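
    The record does not name the three sparsification approaches; a common baseline of this kind is k-nearest-neighbour sparsification of the pairwise similarity matrix, sketched below with hypothetical names as an assumption rather than the study's actual method.

```python
import numpy as np

def knn_sparsify(similarity, k=10):
    """Keep only each article's k strongest similarity edges.

    `similarity` is a symmetric n-by-n matrix of pairwise article
    similarities (e.g. cosine similarity of TF-IDF vectors); the result is
    a sparse, symmetrized weighted adjacency matrix suitable for a
    force-directed network layout.
    """
    S = np.array(similarity, dtype=float)
    np.fill_diagonal(S, 0.0)                      # no self-loops
    adj = np.zeros_like(S)
    for i in range(S.shape[0]):
        keep = np.argsort(S[i])[-k:]              # indices of the k largest
        adj[i, keep] = S[i, keep]
    return np.maximum(adj, adj.T)                 # symmetrize

# Toy usage: 6 articles with random similarities, keep 2 neighbours each.
rng = np.random.default_rng(1)
X = rng.random((6, 6)); X = (X + X.T) / 2
print((knn_sparsify(X, k=2) > 0).sum(axis=1))     # per-article degree
```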

  10. Software-defined optical network for metro-scale geographically distributed data centers.

    Science.gov (United States)

    Samadi, Payman; Wen, Ke; Xu, Junjie; Bergman, Keren

    2016-05-30

    The emergence of cloud computing and big data has rapidly increased the deployment of small and mid-sized data centers. Enterprises and cloud providers require an agile network among these data centers to empower application reliability and flexible scalability. We present a software-defined inter data center network to enable on-demand scale out of data centers on a metro-scale optical network. The architecture consists of a combined space/wavelength switching platform and a Software-Defined Networking (SDN) control plane equipped with a wavelength and routing assignment module. It enables establishing transparent and bandwidth-selective connections from L2/L3 switches, on-demand. The architecture is evaluated in a testbed consisting of 3 data centers, 5-25 km apart. We successfully demonstrated end-to-end bulk data transfer and Virtual Machine (VM) migrations across data centers with less than 100 ms connection setup time and close to full link capacity utilization.
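
    The record does not detail the wavelength and routing assignment module; a classic heuristic it could resemble is shortest-path routing with first-fit wavelength assignment under a wavelength-continuity constraint. The sketch below illustrates that textbook heuristic with invented node names; it is not the demonstrated SDN control plane.

```python
from collections import deque

def shortest_path(links, src, dst):
    """BFS shortest path over an undirected link list."""
    adj = {}
    for a, b in links:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    prev, queue = {src: None}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in adj.get(node, []):
            if nxt not in prev:
                prev[nxt] = node
                queue.append(nxt)
    return None

def first_fit_rwa(links, demand, in_use, n_wavelengths=8):
    """Assign the lowest-indexed wavelength free on every hop of the path."""
    path = shortest_path(links, *demand)
    hops = list(zip(path, path[1:]))
    for w in range(n_wavelengths):
        if all(w not in in_use.get(frozenset(h), set()) for h in hops):
            for h in hops:
                in_use.setdefault(frozenset(h), set()).add(w)
            return path, w
    return path, None                     # blocked: no common free wavelength

links = [("DC1", "X"), ("X", "DC2"), ("X", "DC3"), ("DC1", "DC3")]
usage = {}
print(first_fit_rwa(links, ("DC1", "DC2"), usage))  # (['DC1', 'X', 'DC2'], 0)
print(first_fit_rwa(links, ("DC3", "DC2"), usage))  # gets wavelength 1: 0 busy on X-DC2
```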

  11. The engineering of a scalable multi-site communications system utilizing quantum key distribution (QKD)

    Science.gov (United States)

    Tysowski, Piotr K.; Ling, Xinhua; Lütkenhaus, Norbert; Mosca, Michele

    2018-04-01

    Quantum key distribution (QKD) is a means of generating keys between a pair of computing hosts that is theoretically secure against cryptanalysis, even by a quantum computer. Although there is much active research into improving the QKD technology itself, there is still significant work to be done to apply engineering methodology and determine how it can be practically built to scale within an enterprise IT environment. Significant challenges exist in building a practical key management service (KMS) for use in a metropolitan network. QKD is generally a point-to-point technique only and is subject to steep performance constraints. The integration of QKD into enterprise-level computing has been researched, to enable quantum-safe communication. A novel method for constructing a KMS is presented that allows arbitrary computing hosts on one site to establish multiple secure communication sessions with the hosts of another site. A key exchange protocol is proposed where symmetric private keys are granted to hosts while satisfying the scalability needs of an enterprise population of users. The KMS operates within a layered architectural style that is able to interoperate with various underlying QKD implementations. Variable levels of security for the host population are enforced through a policy engine. A network layer provides key generation across a network of nodes connected by quantum links. Scheduling and routing functionality allows quantum key material to be relayed across trusted nodes. Optimizations are performed to match the real-time host demand for key material with the capacity afforded by the infrastructure. The result is a flexible and scalable architecture that is suitable for enterprise use and independent of any specific QKD technology.
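
    For relaying key material across trusted nodes, a standard schematic construction is to one-time-pad the end-to-end key with the hop-by-hop QKD keys, so the key never travels in the clear on any link. The sketch below illustrates only that idea; it is not the proposed KMS protocol and omits authentication, scheduling, and policy enforcement.

```python
import os

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def relay_key(hop_keys, key_len=32):
    """Relay a fresh end-to-end key across a chain of trusted nodes.

    `hop_keys[i]` is the QKD key shared by node i and node i+1. The source
    generates the end-to-end key K and sends K XOR hop_key on the first
    link; each trusted node decrypts and re-encrypts with the next hop key.
    """
    end_to_end_key = os.urandom(key_len)          # K, chosen by the source
    ciphertext = xor_bytes(end_to_end_key, hop_keys[0])
    for prev_key, next_key in zip(hop_keys[:-1], hop_keys[1:]):
        plain = xor_bytes(ciphertext, prev_key)   # trusted node recovers K
        ciphertext = xor_bytes(plain, next_key)   # re-encrypt for next hop
    received = xor_bytes(ciphertext, hop_keys[-1])
    assert received == end_to_end_key
    return end_to_end_key

hops = [os.urandom(32) for _ in range(3)]         # 3 QKD links, 2 trusted nodes
relay_key(hops)
```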

  12. WIFIRE: A Scalable Data-Driven Monitoring, Dynamic Prediction and Resilience Cyberinfrastructure for Wildfires

    Science.gov (United States)

    Altintas, I.; Block, J.; Braun, H.; de Callafon, R. A.; Gollner, M. J.; Smarr, L.; Trouve, A.

    2013-12-01

    Recent studies confirm that climate change will cause wildfires to increase in frequency and severity in the coming decades, especially in California and much of the North American West. The most critical sustainability issue in the midst of these ever-changing dynamics is how to achieve a new social-ecological equilibrium of this fire ecology. Wildfire wind speeds and directions change in an instant, and first responders can only be effective when they take action as quickly as the conditions change. To deliver information needed for sustainable policy and management in this dynamically changing fire regime, we must capture these details to understand the environmental processes. We are building an end-to-end cyberinfrastructure (CI), called WIFIRE, for real-time and data-driven simulation, prediction and visualization of wildfire behavior. The WIFIRE integrated CI system supports social-ecological resilience to the changing fire ecology regime in the face of urban dynamics and climate change. Networked observations, e.g., heterogeneous satellite data and real-time remote sensor data, are integrated with computational techniques in signal processing, visualization, modeling and data assimilation to provide a scalable, technological, and educational solution to monitor weather patterns to predict a wildfire's Rate of Spread. Our collaborative WIFIRE team of scientists, engineers, technologists, government policy managers, private industry, and firefighters designs and implements CI pathways that enable joint innovation for wildfire management. Scientific workflows are used as an integrative distributed programming model and simplify the implementation of engineering modules for data-driven simulation, prediction and visualization while allowing integration with large-scale computing facilities. WIFIRE will be scalable to users with different skill levels via specialized web interfaces and user-specified alerts for environmental events broadcast to receivers before

  13. A flood-based information flow analysis and network minimization method for gene regulatory networks.

    Science.gov (United States)

    Pavlogiannis, Andreas; Mozhayskiy, Vadim; Tagkopoulos, Ilias

    2013-04-24

    Biological networks tend to have high interconnectivity, complex topologies and multiple types of interactions. This renders difficult the identification of sub-networks that are involved in condition-specific responses. In addition, we generally lack scalable methods that can reveal the information flow in gene regulatory and biochemical pathways. Doing so will help us to identify key participants and paths under specific environmental and cellular contexts. This paper introduces the theory of network flooding, which aims to address the problem of network minimization and regulatory information flow in gene regulatory networks. Given a regulatory biological network, a set of source (input) nodes and optionally a set of sink (output) nodes, our task is to find (a) the minimal sub-network that encodes the regulatory program involving all input and output nodes and (b) the information flow from the source to the sink nodes of the network. Here, we describe a novel, scalable network traversal algorithm and we assess its potential to achieve significant network size reduction in both synthetic and E. coli networks. Scalability and sensitivity analysis show that the proposed method scales well with the size of the network, and is robust to noise and missing data. The method of network flooding proves to be a useful, practical approach towards information flow analysis in gene regulatory networks. Further extension of the proposed theory has the potential to lead to a unifying framework for simultaneous network minimization and information flow analysis across various "omics" levels.
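
    The record gives no pseudocode; a minimal reachability-based sketch of step (a), keeping only the nodes that lie on some directed path from a source to a sink, is shown below. Names are illustrative, and this is not the authors' flooding algorithm, which also computes information flow.

```python
from collections import deque

def reachable(graph, starts):
    """Breadth-first 'flood' from a set of start nodes."""
    seen, queue = set(starts), deque(starts)
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def minimal_subnetwork(edges, sources, sinks):
    """Keep only edges whose endpoints lie on some source-to-sink path."""
    fwd, rev = {}, {}
    for u, v in edges:
        fwd.setdefault(u, []).append(v)
        rev.setdefault(v, []).append(u)
    keep = reachable(fwd, sources) & reachable(rev, sinks)
    return [(u, v) for u, v in edges if u in keep and v in keep]

# Toy regulatory network: genes A..E, source A, sink D.
edges = [("A", "B"), ("B", "C"), ("C", "D"), ("B", "E")]
print(minimal_subnetwork(edges, sources={"A"}, sinks={"D"}))
# [('A', 'B'), ('B', 'C'), ('C', 'D')]  -- E is pruned
```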

  14. Design for scalability in 3D computer graphics architectures

    DEFF Research Database (Denmark)

    Holten-Lund, Hans Erik

    2002-01-01

    This thesis describes useful methods and techniques for designing scalable hybrid parallel rendering architectures for 3D computer graphics. Various techniques for utilizing parallelism in a pipelined system are analyzed. During the Ph.D. study a prototype 3D graphics architecture named Hybris has...

  15. Scalable storage for a DBMS using transparent distribution

    NARCIS (Netherlands)

    J.S. Karlsson; M.L. Kersten (Martin)

    1997-01-01

    Scalable Distributed Data Structures (SDDSs) provide a self-managing and self-organizing data storage of potentially unbounded size. This stands in contrast to common distribution schemas deployed in conventional distributed DBMS. SDDSs, however, have mostly been used in synthetic

  16. Scalable force directed graph layout algorithms using fast multipole methods

    KAUST Repository

    Yunis, Enas Abdulrahman; Yokota, Rio; Ahmadia, Aron

    2012-01-01

    We present an extension to ExaFMM, a Fast Multipole Method library, as a generalized approach for fast and scalable execution of the Force-Directed Graph Layout algorithm. The Force-Directed Graph Layout algorithm is a physics-based approach

  17. Cascaded column generation for scalable predictive demand side management

    NARCIS (Netherlands)

    Toersche, Hermen; Molderink, Albert; Hurink, Johann L.; Smit, Gerardus Johannes Maria

    2014-01-01

    We propose a nested Dantzig-Wolfe decomposition, combined with dynamic programming, for the distributed scheduling of a large heterogeneous fleet of residential appliances with nonlinear behavior. A cascaded column generation approach gives a scalable optimization strategy, provided that the problem

  18. Scalable Robust Principal Component Analysis Using Grassmann Averages

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Enficiaud, Raffi

    2016-01-01

    In large datasets, manual data verification is impossible, and we must expect the number of outliers to increase with data size. While principal component analysis (PCA) can reduce data size, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortu...

  19. A Scalable Smart Meter Data Generator Using Spark

    DEFF Research Database (Denmark)

    Iftikhar, Nadeem; Liu, Xiufeng; Danalachi, Sergiu

    2017-01-01

    Today, smart meters are being used worldwide. In fact, smart meters produce large volumes of data. Thus, it is important for smart meter data management and analytics systems to process petabytes of data. Benchmarking and testing of these systems require scalable data; however, it can ...

  20. Scalability and efficiency of genetic algorithms for geometrical applications

    NARCIS (Netherlands)

    Dijk, van S.F.; Thierens, D.; Berg, de M.; Schoenauer, M.

    2000-01-01

    We study the scalability and efficiency of a GA that we developed earlier to solve the practical cartographic problem of labeling a map with point features. We argue that the special characteristics of our GA make it fit well with theoretical models predicting the optimal population size

  1. Scalable electro-photonic integration concept based on polymer waveguides

    NARCIS (Netherlands)

    Bosman, E.; Steenberge, G. van; Boersma, A.; Wiegersma, S.; Harmsma, P.J.; Karppinen, M.; Korhonen, T.; Offrein, B.J.; Dangel, R.; Daly, A.; Ortsiefer, M.; Justice, J.; Corbett, B.; Dorrestein, S.; Duis, J.

    2016-01-01

    A novel method for fabricating a single mode optical interconnection platform is presented. The method comprises the miniaturized assembly of optoelectronic single dies, the scalable fabrication of polymer single mode waveguides and the coupling to glass fiber arrays providing the I/O's. The low

  2. Adolescent sexuality education: An appraisal of some scalable ...

    African Journals Online (AJOL)

    Adolescent sexuality education: An appraisal of some scalable interventions for the Nigerian context. VC Pam. Abstract. Most issues around sexual intercourse are highly sensitive topics in Nigeria. Despite the disturbingly high adolescent HIV prevalence and teenage pregnancy rate in Nigeria, sexuality education is ...

  3. Scalable multifunction RF system concepts for joint operations

    NARCIS (Netherlands)

    Otten, M.P.G.; Wit, J.J.M. de; Smits, F.M.A.; Rossum, W.L. van; Huizing, A.

    2010-01-01

    RF systems based on modular architectures have the potential of better re-use of technology, decreasing development time, and decreasing life cycle cost. Moreover, modular architectures provide scalability, allowing low cost upgrades and adaptability to different platforms. To achieve maximum

  4. Estimates of the Sampling Distribution of Scalability Coefficient H

    Science.gov (United States)

    Van Onna, Marieke J. H.

    2004-01-01

    Coefficient "H" is used as an index of scalability in nonparametric item response theory (NIRT). It indicates the degree to which a set of items rank orders examinees. Theoretical sampling distributions, however, have only been derived asymptotically and only under restrictive conditions. Bootstrap methods offer an alternative possibility to…

  5. Scalable and Resilient Middleware to Handle Information Exchange during Environment Crisis

    Science.gov (United States)

    Tao, R.; Poslad, S.; Moßgraber, J.; Middleton, S.; Hammitzsch, M.

    2012-04-01

    The EU FP7 TRIDEC project focuses on enabling real-time, intelligent information management of collaborative, complex, critical decision processes for earth management. A key challenge is to provide a communication infrastructure for interoperable environmental information services during events and crises such as tsunamis and drilling incidents, in which increasing volumes and dimensionality of disparate information sources, both sensor-based and human-based, arise and need to be managed. Such a system needs to support scalable, distributed, asynchronous messaging; open messaging that handles changing clients, such as new and retired automated and human information sources coming online or going offline; flexible data filtering; and heterogeneous access networks (e.g., GSM, WLAN and LAN). In addition, the system needs to be resilient to ICT failures, e.g. outages, degradation and overload, during environmental events. There are several middleware choices for TRIDEC, based upon a Service-Oriented Architecture (SOA), an Event-Driven Architecture (EDA), Cloud Computing, and an Enterprise Service Bus (ESB). In an SOA, everything is a service (e.g. data access, processing and exchange); clients can request on demand or subscribe to services registered by providers; interaction is more often synchronous. In an EDA, events that represent significant changes in state can be processed individually, as streams, or in more complex combinations. Cloud computing offers a virtualized, interoperable and elastic resource allocation model. An ESB, a fundamental component for enterprise messaging, supports synchronous and asynchronous message exchange models and has inbuilt resilience against ICT failure. Our middleware proposal is an ESB-based hybrid architecture model: an SOA extension supports more synchronous workflows; EDA assists the ESB in handling more complex event processing; Cloud computing can be used to increase and
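
    As a schematic of the decoupled, topic-based message exchange such a bus provides (an in-process toy with invented topic names, not the TRIDEC middleware, and without persistence or failover), consider:

```python
from collections import defaultdict

class MiniBus:
    """Minimal in-process topic bus illustrating publish/subscribe.

    Real ESB deployments add persistence, load balancing and failover; this
    sketch only shows the decoupling of producers and consumers.
    """
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self._subscribers[topic]:
            handler(message)

bus = MiniBus()
bus.subscribe("sensor.sea-level", lambda m: print("warning centre got:", m))
bus.subscribe("sensor.sea-level", lambda m: print("archive got:", m))
bus.publish("sensor.sea-level", {"station": "buoy-17", "anomaly_cm": 42})
```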

  6. Data Intensive Architecture for Scalable Cyber Analytics

    Energy Technology Data Exchange (ETDEWEB)

    Olsen, Bryan K.; Johnson, John R.; Critchlow, Terence J.

    2011-12-19

    Cyber analysts are tasked with the identification and mitigation of network exploits and threats. These compromises are difficult to identify due to the characteristics of cyber communication, the volume of traffic, and the duration of possible attack. In this paper, we describe a prototype implementation designed to provide cyber analysts an environment where they can interactively explore a month’s worth of cyber security data. This prototype utilized On-Line Analytical Processing (OLAP) techniques to present a data cube to the analysts. The cube provides a summary of the data, allowing trends to be easily identified as well as the ability to easily pull up the original records comprising an event of interest. The cube was built using SQL Server Analysis Services (SSAS), with the interface to the cube provided by Tableau. This software infrastructure was supported by a novel hardware architecture comprising a Netezza TwinFin® for the underlying data warehouse and a cube server with a FusionIO drive hosting the data cube. We evaluated this environment on a month’s worth of artificial, but realistic, data using multiple queries provided by our cyber analysts. As our results indicate, OLAP technology has progressed to the point where it is in a unique position to provide novel insights to cyber analysts, as long as it is supported by an appropriate data intensive architecture.
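
    As a schematic of the kind of hourly rollup such a flow cube exposes (a small pandas example with invented fields, not the SSAS/Tableau/Netezza stack described), consider:

```python
import pandas as pd

# A few synthetic flow records (illustrative fields only).
flows = pd.DataFrame([
    {"ts": "2011-11-01 10:02", "src": "10.0.0.5", "dst": "10.0.1.9", "bytes": 1200},
    {"ts": "2011-11-01 10:47", "src": "10.0.0.5", "dst": "10.0.2.7", "bytes": 800},
    {"ts": "2011-11-01 11:15", "src": "10.0.3.2", "dst": "10.0.1.9", "bytes": 51000},
])
flows["ts"] = pd.to_datetime(flows["ts"])
flows["hour"] = flows["ts"].dt.floor("h")

# One face of the cube: traffic volume and flow count per source IP per hour.
cube = (flows.groupby(["hour", "src"])
             .agg(total_bytes=("bytes", "sum"), flow_count=("bytes", "count"))
             .reset_index())
print(cube)
```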

  7. Evaluation of 3D printed anatomically scalable transfemoral prosthetic knee.

    Science.gov (United States)

    Ramakrishnan, Tyagi; Schlafly, Millicent; Reed, Kyle B

    2017-07-01

    This case study compares a transfemoral amputee's gait while using the existing Ossur Total Knee 2000 and our novel 3D printed anatomically scalable transfemoral prosthetic knee. The anatomically scalable transfemoral prosthetic knee is 3D printed out of a carbon-fiber and nylon composite and has a gear-mesh coupling with a hard-stop weight-actuated locking mechanism aided by a cross-linked four-bar spring mechanism. This design can be scaled using anatomical dimensions of a human femur and tibia to provide a unique fit for each user. The transfemoral amputee who was tested is high functioning and walked on the Computer Assisted Rehabilitation Environment (CAREN) at a self-selected pace. The collected motion capture and force data showed distinct differences in the gait dynamics. The data were used to compute the Combined Gait Asymmetry Metric (CGAM), whose scores revealed that gait on the Ossur Total Knee was overall more asymmetric than on the anatomically scalable transfemoral prosthetic knee. The anatomically scalable transfemoral prosthetic knee had higher peak knee flexion, which caused a large step-time asymmetry. This made walking on the anatomically scalable transfemoral prosthetic knee more strenuous due to the compensatory movements in adapting to the different dynamics. This can be overcome by tuning the cross-linked spring mechanism to better emulate the dynamics of the subject. The subject stated that the knee would be good for daily use and has the potential to be adapted as a running knee.
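
    The record does not reproduce the CGAM formula; the sketch below only shows the kind of per-parameter symmetry index such combined metrics build on, with an invented root-mean-square combination and made-up example values. The published CGAM uses its own weighting, which is not reproduced here.

```python
def symmetry_index(left, right):
    """Classic percentage symmetry index for one gait parameter:
    0 means perfectly symmetric, larger means more asymmetric."""
    return 100.0 * abs(left - right) / (0.5 * (left + right))

def combined_asymmetry(params):
    """Naive combination: root-mean-square of per-parameter indices
    (a stand-in, not the published CGAM weighting)."""
    idx = [symmetry_index(l, r) for l, r in params.values()]
    return (sum(v * v for v in idx) / len(idx)) ** 0.5

# Hypothetical per-leg measurements: (prosthetic side, intact side).
gait = {"step_time_s": (0.62, 0.55),
        "step_length_m": (0.68, 0.71),
        "peak_knee_flexion_deg": (65.0, 58.0)}
print(round(combined_asymmetry(gait), 1))
```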

  8. Graceful Degradation in 3GPP MBMS Mobile TV Services Using H.264/AVC Temporal Scalability

    Directory of Open Access Journals (Sweden)

    Thomas Wiegand

    2009-01-01

    There is increasing interest in Mobile TV broadcast services among both customers and service providers. One general problem of Mobile TV broadcast services is to maximize the coverage of users receiving an acceptable service quality, which is mainly influenced by the user's position and mobility within the cell. In this paper, graceful degradation is considered as an approach for improved service availability and coverage. We present a layered transmission approach for 3GPP's Release 6 Multimedia Broadcast/Multicast Service (MBMS) based on temporal scalability using the H.264/AVC Baseline Profile. A differentiation in robustness between temporal quality layers is achieved by an unequal error protection approach based on application-layer Forward Error Correction (FEC), unequal transmit power for the layers, or a combination of both. We discuss the corresponding MBMS service and network settings and define measures for evaluating the number of users reached with a certain mobile-terminal play-out quality while considering the network cell capacity usage. Using simulated 3GPP Rel. 6 network conditions, we show that if the service and network settings are chosen carefully, a noticeable extension of the coverage of the MBMS service can be achieved.

  9. Quantum networks based on spins in diamond

    International Nuclear Information System (INIS)

    Ronald Hanson

    2014-01-01

    Entanglement of spatially separated objects is one of the most intriguing phenomena that can occur in physics. Besides being of fundamental interest, entanglement is also a valuable resource in quantum information technology enabling secure quantum communication networks and distributed quantum computing. Here we present our most recent results towards the realization of scalable quantum networks with solid-state qubits. (author)

  10. Fixed SMRF Sensor Network Application Concepts

    NARCIS (Netherlands)

    Wit, J.J.M. de; Rossum, W.L. van; Smits, F.M.A.; Theije, P.A.M. de; Monni, S.; Huizing, A.G.

    2010-01-01

    Advantages of scalable multifunction RF (SMRF) sensors and networked operation of sensors are well-known. Some advantages are surveillance persistence, multipath resistance, and interference resistance. The particular benefits of applying multifunction RF sensors in a network still need to be

  11. Hybrid RRM Architecture for Future Wireless Networks

    DEFF Research Database (Denmark)

    Tragos, Elias; Mihovska, Albena D.; Mino, Emilio

    2007-01-01

    The concept of a ubiquitous and scalable system is applied in the IST WINNER II [1] project to deliver optimum performance for different deployment scenarios, from local area to wide area wireless networks. The integration of cellular and local area networks in a unique radio system will provide a g...

  12. Mobility management in the future mobile network

    NARCIS (Netherlands)

    Karimzadeh Motallebi Azar, Morteza

    2018-01-01

    Current mobile network architectures are heavily hierarchical, which implies that all traffic must traverse a centralized core entity. This makes the network prone to several limitations, e.g., suboptimal communication paths, low scalability, signaling overhead, and single point of

  13. Scalable tensor factorizations for incomplete data

    DEFF Research Database (Denmark)

    Acar, Evrim; Dunlavy, Daniel M.; KOlda, Tamara G.

    2011-01-01

    to factorize data sets with missing values with the goal of capturing the underlying latent structure of the data and possibly reconstructing missing values (i.e., tensor completion). We focus on one of the most well-known tensor factorizations that captures multi-linear structure, CANDECOMP/PARAFAC (CP)... experiments, our algorithm is shown to successfully factorize tensors with noise and up to 99% missing data. A unique aspect of our approach is that it scales to sparse large-scale data, e.g., 1000 × 1000 × 1000 with five million known entries (0.5% dense). We further demonstrate the usefulness of CP...
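
    As a rough illustration of weighted (masked) CP fitting, the sketch below runs plain gradient descent on the masked reconstruction error of a small dense 3-way tensor. It is a stand-in with invented names, not the authors' algorithm, and unlike theirs it does not exploit sparsity.

```python
import numpy as np

def khatri_rao(F, G):
    """Column-wise Kronecker product of two factor matrices."""
    return np.einsum("ir,jr->ijr", F, G).reshape(-1, F.shape[1])

def masked_cp(X, W, rank=2, iters=2000, lr=0.01, seed=0):
    """Fit a rank-`rank` CP model to a 3-way tensor X by gradient descent,
    counting only the entries where the observation mask W is 1."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = 0.1 * rng.standard_normal((I, rank))
    B = 0.1 * rng.standard_normal((J, rank))
    C = 0.1 * rng.standard_normal((K, rank))
    for _ in range(iters):
        Xhat = np.einsum("ir,jr,kr->ijk", A, B, C)
        R = W * (X - Xhat)                           # masked residual
        KR_BC, KR_AC, KR_AB = khatri_rao(B, C), khatri_rao(A, C), khatri_rao(A, B)
        A += lr * R.reshape(I, -1) @ KR_BC
        B += lr * R.transpose(1, 0, 2).reshape(J, -1) @ KR_AC
        C += lr * R.transpose(2, 0, 1).reshape(K, -1) @ KR_AB
    return A, B, C

# Toy usage: fit a rank-2 tensor with roughly half of its entries masked out.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((n, 2)) for n in (8, 9, 10))
X = np.einsum("ir,jr,kr->ijk", A0, B0, C0)
W = (rng.random(X.shape) > 0.5).astype(float)
A, B, C = masked_cp(X, W)
Xhat = np.einsum("ir,jr,kr->ijk", A, B, C)
print(np.abs((X - Xhat)[W == 0]).mean())             # error on held-out entries
```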

  14. Scalable Production of Si Nanoparticles Directly from Low Grade Sources for Lithium-Ion Battery Anode.

    Science.gov (United States)

    Zhu, Bin; Jin, Yan; Tan, Yingling; Zong, Linqi; Hu, Yue; Chen, Lei; Chen, Yanbin; Zhang, Qiao; Zhu, Jia

    2015-09-09

    Silicon, one of the most promising candidates as lithium-ion battery anode, has attracted much attention due to its high theoretical capacity, abundant existence, and mature infrastructure. Recently, Si nanostructure-based lithium-ion battery anodes, with sophisticated structure designs and process development, have made significant progress. However, low-cost and scalable processes to produce these Si nanostructures remain a challenge, which limits their widespread application. Herein, we demonstrate that Si nanoparticles with controlled size can be produced in large quantities directly from low-grade Si sources through a scalable high-energy mechanical milling process. In addition, we systematically studied Si nanoparticles produced from two major low-grade Si sources, metallurgical silicon (∼99 wt % Si, $1/kg) and ferrosilicon (∼83 wt % Si, $0.6/kg). It is found that nanoparticles produced from ferrosilicon sources contain FeSi2, which can serve as a buffer layer to alleviate the mechanical fracture caused by volume expansion, whereas nanoparticles from metallurgical Si sources have higher capacity and better kinetic properties because of higher purity and better electronic transport properties. Ferrosilicon nanoparticles and metallurgical Si nanoparticles demonstrate over 100 stable deep cycles after carbon coating, with reversible capacities of 1360 mAh g(-1) and 1205 mAh g(-1), respectively. Therefore, our approach provides a new strategy for cost-effective, energy-efficient, large scale synthesis of functional Si electrode materials.

  15. Scalable fabrication of self-aligned graphene transistors and circuits on glass.

    Science.gov (United States)

    Liao, Lei; Bai, Jingwei; Cheng, Rui; Zhou, Hailong; Liu, Lixin; Liu, Yuan; Huang, Yu; Duan, Xiangfeng

    2012-06-13

    Graphene transistors are of considerable interest for radio frequency (rf) applications. High-frequency graphene transistors with the intrinsic cutoff frequency up to 300 GHz have been demonstrated. However, the graphene transistors reported to date only exhibit a limited extrinsic cutoff frequency up to about 10 GHz, and functional graphene circuits demonstrated so far can merely operate in the tens of megahertz regime, far from the potential the graphene transistors could offer. Here we report a scalable approach to fabricate self-aligned graphene transistors with the extrinsic cutoff frequency exceeding 50 GHz and graphene circuits that can operate in the 1-10 GHz regime. The devices are fabricated on a glass substrate through a self-aligned process by using chemical vapor deposition (CVD) grown graphene and a dielectrophoretic assembled nanowire gate array. The self-aligned process allows the achievement of unprecedented performance in CVD graphene transistors with a highest transconductance of 0.36 mS/μm. The use of an insulating substrate minimizes the parasitic capacitance and has therefore enabled graphene transistors with a record-high extrinsic cutoff frequency (> 50 GHz) achieved to date. The excellent extrinsic cutoff frequency readily allows configuring the graphene transistors into frequency doubling or mixing circuits functioning in the 1-10 GHz regime, a significant advancement over previous reports (∼20 MHz). The studies open a pathway to scalable fabrication of high-speed graphene transistors and functional circuits and represent a significant step forward to graphene based radio frequency devices.

  16. Manufacturing Demonstration Facility: Low Temperature Materials Synthesis

    International Nuclear Information System (INIS)

    Graham, David E.; Moon, Ji-Won; Armstrong, Beth L.; Datskos, Panos G.; Duty, Chad E.; Gresback, Ryan; Ivanov, Ilia N.; Jacobs, Christopher B.; Jellison, Gerald Earle; Jang, Gyoung Gug; Joshi, Pooran C.; Jung, Hyunsung; Meyer, Harry M.; Phelps, Tommy

    2015-01-01

    The Manufacturing Demonstration Facility (MDF) low temperature materials synthesis project was established to demonstrate a scalable and sustainable process to produce nanoparticles (NPs) for advanced manufacturing. Previous methods to chemically synthesize NPs typically required expensive, high-purity inorganic chemical reagents, organic solvents and high temperatures. These processes were typically applied at small laboratory scales at yields sufficient for NP characterization, but insufficient to support roll-to-roll processing efforts or device fabrication. The new NanoFermentation processes described here operated at a low temperature (~60 C) in low-cost, aqueous media using bacteria that produce extracellular NPs with controlled size and elemental stoichiometry. Up-scaling activities successfully demonstrated high NP yields and quality in a 900-L pilot-scale reactor, establishing this NanoFermentation process as a competitive biomanufacturing strategy to produce NPs for advanced manufacturing of power electronics, solid-state lighting and sensors.

  17. Manufacturing Demonstration Facility: Low Temperature Materials Synthesis

    Energy Technology Data Exchange (ETDEWEB)

    Graham, David E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Moon, Ji-Won [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Armstrong, Beth L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Datskos, Panos G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Duty, Chad E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gresback, Ryan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ivanov, Ilia N. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jacobs, Christopher B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jellison, Gerald Earle [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jang, Gyoung Gug [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Joshi, Pooran C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jung, Hyunsung [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Meyer, III, Harry M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Phelps, Tommy [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-06-30

    The Manufacturing Demonstration Facility (MDF) low temperature materials synthesis project was established to demonstrate a scalable and sustainable process to produce nanoparticles (NPs) for advanced manufacturing. Previous methods to chemically synthesize NPs typically required expensive, high-purity inorganic chemical reagents, organic solvents and high temperatures. These processes were typically applied at small laboratory scales at yields sufficient for NP characterization, but insufficient to support roll-to-roll processing efforts or device fabrication. The new NanoFermentation processes described here operated at a low temperature (~60 C) in low-cost, aqueous media using bacteria that produce extracellular NPs with controlled size and elemental stoichiometry. Up-scaling activities successfully demonstrated high NP yields and quality in a 900-L pilot-scale reactor, establishing this NanoFermentation process as a competitive biomanufacturing strategy to produce NPs for advanced manufacturing of power electronics, solid-state lighting and sensors.

  18. Temporal scalability comparison of the H.264/SVC and distributed video codec

    DEFF Research Database (Denmark)

    Huang, Xin; Ukhanova, Ann; Belyaev, Evgeny

    2009-01-01

    The problem of the multimedia scalable video streaming is a current topic of interest. There exist many methods for scalable video coding. This paper is focused on the scalable extension of H.264/AVC (H.264/SVC) and distributed video coding (DVC). The paper presents an efficiency comparison of SV...

  19. Scalable Quantum Simulation of Molecular Energies

    Directory of Open Access Journals (Sweden)

    P. J. J. O’Malley

    2016-07-01

    We report the first electronic structure calculation performed on a quantum computer without exponentially costly precompilation. We use a programmable array of superconducting qubits to compute the energy surface of molecular hydrogen using two distinct quantum algorithms. First, we experimentally execute the unitary coupled cluster method using the variational quantum eigensolver. Our efficient implementation predicts the correct dissociation energy to within chemical accuracy of the numerically exact result. Second, we experimentally demonstrate the canonical quantum algorithm for chemistry, which consists of Trotterization and quantum phase estimation. We compare the experimental performance of these approaches to show clear evidence that the variational quantum eigensolver is robust to certain errors. This error tolerance inspires hope that variational quantum simulations of classically intractable molecules may be viable in the near future.
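
    The essential loop of the variational quantum eigensolver, simulated classically for a toy two-level Hamiltonian, looks roughly like the sketch below. The matrix is an arbitrary placeholder rather than the qubit-encoded H2 Hamiltonian, and on the real device the energy is estimated from repeated measurements instead of being evaluated exactly.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy Hermitian "Hamiltonian" standing in for the qubit-encoded molecular operator.
H = np.array([[ 0.5, -0.3],
              [-0.3, -0.8]])

def ansatz(theta):
    """Single-parameter variational state |psi(theta)> = Ry(theta)|0>."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    """Expectation value <psi|H|psi>, the quantity the quantum processor
    would estimate by sampling; here it is evaluated exactly."""
    psi = ansatz(theta)
    return float(psi @ H @ psi)

# Classical outer loop: minimize the energy over the ansatz parameter.
result = minimize_scalar(energy, bounds=(0, 2 * np.pi), method="bounded")
print("VQE estimate :", result.fun)
print("exact ground :", np.linalg.eigvalsh(H).min())
```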

  20. Engineering scalable fault-tolerant quantum computation

    Science.gov (United States)

    Kimchi-Schwartz, Mollie; Danna, Rosenberg; Kim, David; Yoder, Jonilyn; Kjaergaard, Morten; Das, Rabindra; Grover, Jeff; Gustavsson, Simon; Oliver, William

    Recent demonstrations of quantum protocols comprising on the order of 5-10 superconducting qubits are foundational to the future development of quantum information processors. A next critical step in the development of resilient quantum processors will be the integration of coherent quantum circuits with a hardware platform that is amenable to extending the system size to hundreds of qubits and beyond. In this talk, we will discuss progress toward integrating coherent superconducting qubits with signal routing via the third dimension. This research was funded in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA) and by the Assistant Secretary of Defense for Research & Engineering under Air Force Contract No. FA8721-05-C-0002. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of ODNI, IARPA, or the US Government.