WorldWideScience

Sample records for scalable nanofabrication platform

  1. Tip-Based Nanofabrication for Scalable Manufacturing

    Directory of Open Access Journals (Sweden)

    Huan Hu

    2017-03-01

    Full Text Available Tip-based nanofabrication (TBN) is a family of emerging nanofabrication techniques that use a nanometer-scale tip to fabricate nanostructures. In this review, we first introduce the history of TBN and its technological development. We then briefly review various TBN techniques that use different physical or chemical mechanisms to fabricate features and discuss some of the state-of-the-art techniques. Subsequently, we focus on those TBN methods that have demonstrated potential to scale up the manufacturing throughput. Finally, we discuss several research directions that are essential for making TBN a scalable nano-manufacturing technology.

  2. Tip-Based Nanofabrication for Scalable Manufacturing

    International Nuclear Information System (INIS)

    Hu, Huan; Somnath, Suhas

    2017-01-01

    Tip-based nanofabrication (TBN) is a family of emerging nanofabrication techniques that use a nanometer-scale tip to fabricate nanostructures. In this review, we first introduce the history of TBN and its technological development. We then briefly review various TBN techniques that use different physical or chemical mechanisms to fabricate features and discuss some of the state-of-the-art techniques. Subsequently, we focus on those TBN methods that have demonstrated potential to scale up the manufacturing throughput. Finally, we discuss several research directions that are essential for making TBN a scalable nano-manufacturing technology.

  3. Scalable nanofabrication of U-shaped nanowire resonators with tunable optical magnetism.

    Science.gov (United States)

    Zhou, Fan; Wang, Chen; Dong, Biqin; Chen, Xiangfan; Zhang, Zhen; Sun, Cheng

    2016-03-21

    Split-ring resonators have been studied extensively as a means of reconstituting the magnetism that diminishes at high electromagnetic frequencies in natural materials. However, the linear scaling of artificial magnetism breaks down at near-infrared frequencies, mainly due to the increasing contribution of self-inductance as the dimensions of the resonators shrink. Although alternative designs have enabled artificial magnetism at optical frequencies, their sophisticated configurations and fabrication procedures do not lend themselves to easy implementation. Here, we report scalable nanofabrication of U-shaped nanowire resonators (UNWRs) using the high-throughput nanotransfer printing method. By providing ample area for conducting oscillating electric current, UNWRs overcome the saturation of the geometric scaling of artificial magnetism. We experimentally demonstrated coarse and fine tuning of LC resonances over a wide wavelength range, from 748 nm to 1600 nm. The added flexibility in transferring to other substrates makes the UNWR a versatile building block for creating functional metamaterials in three dimensions.

  4. Multiplexed, high density electrophysiology with nanofabricated neural probes.

    Directory of Open Access Journals (Sweden)

    Jiangang Du

    Full Text Available Extracellular electrode arrays can reveal the neuronal network correlates of behavior with single-cell, single-spike, and sub-millisecond resolution. However, implantable electrodes are inherently invasive, and efforts to scale up the number and density of recording sites must compromise on device size in order to connect the electrodes. Here, we report on silicon-based neural probes employing nanofabricated, high-density electrical leads. Furthermore, we address the challenge of reading out multichannel data with an application-specific integrated circuit (ASIC) performing signal amplification, band-pass filtering, and multiplexing functions. We demonstrate high spatial resolution extracellular measurements with a fully integrated, low noise 64-channel system weighing just 330 mg. The on-chip multiplexers make possible recordings with substantially fewer external wires than the number of input channels. By combining nanofabricated probes with ASICs we have implemented a system for performing large-scale, high-density electrophysiology in small, freely behaving animals that is both minimally invasive and highly scalable.

  5. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy.

    Science.gov (United States)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-19

    One of the key challenges in three-dimensional (3D) medical imaging is to enable fast turn-around times, which are often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth, due to the massive amount of data that needs to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
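
    As a back-of-the-envelope illustration of the scalability analysis mentioned in this record, Amdahl's law predicts a speedup of 1/((1-p) + p/n) on n cores for a program whose parallelizable fraction is p. The sketch below (our illustration, not the authors' code) inverts that relation for the 12-fold, 12-core measurement quoted above:

    ```python
    # Minimal Amdahl's-law sketch; the 12-core/12-fold data point is from the
    # abstract, everything else is illustrative.

    def amdahl_speedup(p: float, n: int) -> float:
        """Predicted speedup on n cores for parallelizable fraction p."""
        return 1.0 / ((1.0 - p) + p / n)

    def parallel_fraction(speedup: float, n: int) -> float:
        """Invert Amdahl's law: the parallel fraction implied by a measured speedup."""
        return (1.0 - 1.0 / speedup) / (1.0 - 1.0 / n)

    p = parallel_fraction(speedup=12.0, n=12)   # 12-fold on 12 cores => p = 1.0
    print(f"implied parallel fraction: {p:.3f}")
    for n in (12, 24, 48, 96):
        print(f"{n:3d} cores -> predicted speedup {amdahl_speedup(p, n):.1f}x")
    ```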

  6. A Platform for Scalable Satellite and Geospatial Data Analysis

    Science.gov (United States)

    Beneke, C. M.; Skillman, S.; Warren, M. S.; Kelton, T.; Brumby, S. P.; Chartrand, R.; Mathis, M.

    2017-12-01

    At Descartes Labs, we use the commercial cloud to run global-scale machine learning applications over satellite imagery. We have processed over 5 Petabytes of public and commercial satellite imagery, including the full Landsat and Sentinel archives. By combining open-source tools with a FUSE-based filesystem for cloud storage, we have enabled a scalable compute platform that has demonstrated reading over 200 GB/s of satellite imagery into cloud compute nodes. In one application, we generated global 15m Landsat-8, 20m Sentinel-1, and 10m Sentinel-2 composites from 15 trillion pixels, using over 10,000 CPUs. We recently created a public open-source Python client library that can be used to query and access preprocessed public satellite imagery from within our platform, and made this platform available to researchers for non-commercial projects. In this session, we will describe how you can use the Descartes Labs Platform for rapid prototyping and scaling of geospatial analyses and demonstrate examples in land cover classification.
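
    The client library named above is not reproduced here; instead, the following sketch shows the general shape of a land cover computation on such a platform, with a hypothetical load_band function standing in for the platform's imagery access:

    ```python
    # Hypothetical land-cover sketch; `load_band` is a stand-in for the
    # platform's imagery query API, which is not reproduced here.
    import numpy as np

    def load_band(scene_id: str, band: str) -> np.ndarray:
        """Placeholder returning synthetic surface reflectance in [0, 1]."""
        rng = np.random.default_rng(abs(hash((scene_id, band))) % 2**32)
        return rng.uniform(0.0, 1.0, size=(256, 256)).astype(np.float32)

    red = load_band("landsat8-scene-001", "red")
    nir = load_band("landsat8-scene-001", "nir")

    ndvi = (nir - red) / (nir + red + 1e-9)   # vegetation index in [-1, 1]
    vegetated = ndvi > 0.3                    # crude land-cover threshold
    print(f"vegetated fraction: {vegetated.mean():.1%}")
    ```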

  7. A Cross-Platform Infrastructure for Scalable Runtime Application Performance Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Jack Dongarra; Shirley Moore; Bart Miller; Jeffrey Hollingsworth; Tracy Rafferty

    2005-03-15

    The purpose of this project was to build an extensible cross-platform infrastructure to facilitate the development of accurate and portable performance analysis tools for current and future high performance computing (HPC) architectures. Major accomplishments include tools and techniques for multidimensional performance analysis, as well as improved support for dynamic performance monitoring of multithreaded and multiprocess applications. Previous performance tool development has been limited by the burden of having to re-write a platform-dependent low-level substrate for each architecture/operating system pair in order to obtain the necessary performance data from the system. Manual interpretation of performance data is not scalable for large-scale long-running applications. The infrastructure developed by this project provides a foundation for building portable and scalable performance analysis tools, with the end goal being to provide application developers with the information they need to analyze, understand, and tune the performance of terascale applications on HPC architectures. The backend portion of the infrastructure provides runtime instrumentation capability and access to hardware performance counters, with thread-safety for shared memory environments and a communication substrate to support instrumentation of multiprocess and distributed programs. Front-end interfaces provide tool developers with a well-defined, platform-independent set of calls for requesting performance data. End-user tools have been developed that demonstrate runtime data collection, on-line and off-line analysis of performance data, and multidimensional performance analysis. The infrastructure is based on two underlying performance instrumentation technologies: the PAPI cross-platform library interface to hardware performance counters and the cross-platform Dyninst library interface for runtime modification of executable images. The Paradyn and KOJAK

  8. A QFD-based optimization method for a scalable product platform

    Science.gov (United States)

    Luo, Xinggang; Tang, Jiafu; Kwong, C. K.

    2010-02-01

    In order to incorporate the customer into the early phase of the product development cycle and to better satisfy customers' requirements, this article adopts quality function deployment (QFD) for the optimal design of a scalable product platform. A five-step QFD-based method is proposed to determine the optimal values for platform engineering characteristics (ECs) and non-platform ECs of the products within a product family. First of all, the houses of quality (HoQs) for all product variants are developed and a QFD-based optimization approach is used to determine the optimal ECs for each product variant. Sensitivity analysis is performed for each EC with respect to overall customer satisfaction (OCS). Based on the obtained sensitivity indices of the ECs, a mathematical model is established to simultaneously optimize the values of the platform and non-platform ECs. Finally, by comparing and analysing the optimal solutions with different numbers of platform ECs, the ECs with which the worst OCS loss can be avoided are selected as platform ECs. An illustrative example is used to demonstrate the feasibility of this method, and a comparison between the proposed method and a two-step approach is conducted on the example. The comparison shows that, as a kind of single-stage approach, the proposed method yields a better average degree of customer satisfaction due to the simultaneous optimization of platform and non-platform ECs.
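
    A toy sketch of the final selection step under an invented satisfaction model (the paper's actual HoQ formulation is not reproduced): each candidate set of shared ECs is scored by the worst OCS loss it causes across variants, and the best set of each size is reported.

    ```python
    # Toy platform-EC selection: share each candidate EC set across variants
    # and pick the set with the smallest worst-case OCS loss. The quadratic
    # satisfaction model and all numbers are invented for illustration.
    from itertools import combinations

    IDEALS = {                      # hypothetical per-variant ideal EC values
        "variant_A": [0.20, 0.70, 0.50],
        "variant_B": [0.30, 0.80, 0.90],
        "variant_C": [0.25, 0.60, 0.40],
    }
    N_ECS = 3

    def ocs(values, ideals):
        """Toy OCS: 1 minus mean squared deviation from the variant's ideals."""
        return 1.0 - sum((v - i) ** 2 for v, i in zip(values, ideals)) / len(values)

    def worst_loss(platform_ecs):
        """Worst OCS loss across variants when these ECs are shared platform-wide."""
        shared = {j: sum(v[j] for v in IDEALS.values()) / len(IDEALS)
                  for j in platform_ecs}        # stand-in for the joint optimization
        losses = []
        for ideals in IDEALS.values():
            values = [shared.get(j, ideals[j]) for j in range(N_ECS)]
            losses.append(1.0 - ocs(values, ideals))  # OCS at the ideal is 1.0
        return max(losses)

    for k in range(1, N_ECS + 1):
        best = min(combinations(range(N_ECS), k), key=worst_loss)
        print(f"{k} platform EC(s): best set {best}, worst OCS loss {worst_loss(best):.4f}")
    ```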

  9. Nanofabrication principles, capabilities and limits

    CERN Document Server

    Cui, Zheng

    2017-01-01

    This second edition of Nanofabrication is one of the most comprehensive introductions to nanofabrication technologies and processes. A practical guide and reference, this book introduces readers to all of the developed technologies that are capable of making structures below 100 nm. The principle of each technology is introduced and illustrated with minimal mathematics involved. Also analyzed are the capabilities of each technology in making sub-100 nm structures, and the limits that prevent a technology from going further down the dimensional scale. This book provides readers with a toolkit that will help with any of their nanofabrication challenges.

  10. Scalable Multi-Platform Distribution of Spatial 3d Contents

    Science.gov (United States)

    Klimke, J.; Hagedorn, B.; Döllner, J.

    2013-09-01

    Virtual 3D city models provide powerful user interfaces for the communication of 2D and 3D geoinformation. Providing high quality visualization of massive 3D geoinformation in a scalable, fast, and cost-efficient manner is still a challenging task. Especially for mobile and web-based system environments, software and hardware configurations of target systems differ significantly, which makes it hard to provide fast, visually appealing renderings of 3D data across a variety of platforms and devices. Current mobile or web-based solutions for 3D visualization usually require raw 3D scene data, such as triangle meshes together with textures, to be delivered from server to client, which strongly limits the size and complexity of the models they can handle. In this paper, we introduce a new approach for the provisioning of massive virtual 3D city models on different platforms, namely web browsers, smartphones and tablets, by means of an interactive map assembled from artificial oblique image tiles. The key concept is to synthesize such images of a virtual 3D city model by a 3D rendering service in a preprocessing step. This service encapsulates model handling and 3D rendering techniques for high quality visualization of massive 3D models. By generating image tiles using this service, the 3D rendering process is shifted from the client side, which provides major advantages: (a) the complexity of the 3D city model data is decoupled from data transfer complexity; (b) the implementation of client applications is simplified significantly, as 3D rendering is encapsulated on the server side; (c) 3D city models can easily be deployed for and used by a large number of concurrent users, leading to a high degree of scalability of the overall approach. All core 3D rendering techniques are performed on a dedicated 3D rendering server, and thin-client applications can be compactly implemented for various devices and platforms.
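
    As a rough sketch of how a thin client might address such prerendered tiles (the tile size and URL scheme are assumptions for illustration, not the authors' service interface):

    ```python
    # Hypothetical tile addressing for prerendered oblique image tiles; the
    # 256-px tile size and URL layout are assumptions for illustration.
    TILE_SIZE = 256  # pixels per tile edge

    def visible_tiles(x_px, y_px, width_px, height_px, level):
        """(level, column, row) for every tile overlapping the viewport."""
        c0, r0 = x_px // TILE_SIZE, y_px // TILE_SIZE
        c1 = (x_px + width_px - 1) // TILE_SIZE
        r1 = (y_px + height_px - 1) // TILE_SIZE
        return [(level, c, r) for r in range(r0, r1 + 1) for c in range(c0, c1 + 1)]

    def tile_url(level, col, row, direction="north"):
        """One prerendered image set per oblique viewing direction (assumed)."""
        return f"https://tiles.example.org/{direction}/{level}/{col}/{row}.jpg"

    for lvl, c, r in visible_tiles(1000, 800, 640, 480, level=15):
        print(tile_url(lvl, c, r))
    ```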

  11. Plasma nanofabrication and nanomaterials safety

    International Nuclear Information System (INIS)

    Han, Z J; Levchenko, I; Kumar, S; Yajadda, M M A; Yick, S; Seo, D H; Martin, P J; Ostrikov, K; Peel, S; Kuncic, Z

    2011-01-01

    The fast advances in nanotechnology have raised increasing concerns related to the safety of nanomaterials when exposed to humans, animals and the environment. However, despite several years of research, the nanomaterials safety field is still in its infancy owing to the complexities of structural and surface properties of these nanomaterials and organism-specific responses to them. Recently, plasma-based technology has been demonstrated as a versatile and effective way for nanofabrication, yet its health- and environment-benign nature has not been widely recognized. Here we address the environmental and occupational health and safety effects of various zero- and one-dimensional nanomaterials and elaborate the advantages of using plasmas as a safe nanofabrication tool. These advantages include but are not limited to the production of substrate-bound nanomaterials, the isolation of humans from harmful nanomaterials, and the effective reforming of toxic and flammable gases. It is concluded that plasma nanofabrication can minimize the hazards in the workplace and represents a safe way for future nanofabrication technologies.

  12. A scalable FPGA-based digitizing platform for radiation data acquisition

    International Nuclear Information System (INIS)

    Schiffer, Randolph T.; Flaska, Marek; Pozzi, Sara A.; Carney, Sean; Wentzloff, David D.

    2011-01-01

    Regulating the proliferation of nuclear materials has become an important issue in our society. In order to detect the radiation given off by nuclear materials, systems implementing detectors connected to data processing modules have been developed. We have implemented a scalable, portable detection platform with a data processing module about the size of an external DVD drive. The data processing component of our system utilizes real-time data handling and has the potential for growth and behavior modifications through custom FPGA code editing. The size of our system is dynamic, so additional input channels can be implemented if necessary. This paper presents a scalable, portable detection system capable of transmitting streaming data from its inputs to a PC or laptop. The system also performs tail/total integral pulse shape discrimination (PSD) in real time on the FPGA to filter the data and selectively transmit pulses to a PC. The data arrives at the inputs of the data capturing module, is processed in real time by the onboard FPGA, and is then transferred to a PC or laptop via a PCIe cable in discrete packets. The maximum transfer rate from the FPGA to the PC is 2000 MB/s. The Detection for Nuclear Non-Proliferation Group at the University of Michigan will use the detection platform to achieve pre-processing of radiation data in real time. Such pre-processing includes PSD, pulse height distributions and particle times of arrival.
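
    The tail/total integral PSD mentioned above reduces to a simple per-pulse ratio. A Python sketch follows (the window position and threshold are illustrative assumptions; the real system computes this in FPGA logic):

    ```python
    # Tail/total-integral PSD on baseline-subtracted digitized pulses.
    # Window start and threshold are illustrative, not the paper's values.

    def psd_ratio(pulse, tail_start):
        """Fraction of the pulse integral contained in the tail window."""
        total = sum(pulse)
        return sum(pulse[tail_start:]) / total if total > 0 else 0.0

    def is_neutron(pulse, tail_start=4, threshold=0.25):
        """Neutrons deposit relatively more light in the slow tail than gammas."""
        return psd_ratio(pulse, tail_start) > threshold

    gamma_like   = [0, 50, 120, 60, 20, 8, 3, 1, 0, 0]    # fast decay
    neutron_like = [0, 35, 90, 55, 35, 25, 18, 12, 8, 5]  # slower tail
    for name, pulse in (("gamma-like", gamma_like), ("neutron-like", neutron_like)):
        print(name, round(psd_ratio(pulse, tail_start=4), 3), is_neutron(pulse))
    ```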

  13. Scalable and massively parallel Monte Carlo photon transport simulations for heterogeneous computing platforms.

    Science.gov (United States)

    Yu, Leiming; Nina-Paravecino, Fanny; Kaeli, David; Fang, Qianqian

    2018-01-01

    We present a highly scalable Monte Carlo (MC) three-dimensional photon transport simulation platform designed for heterogeneous computing systems. Through the development of a massively parallel MC algorithm using the Open Computing Language framework, this research extends our existing graphics processing unit (GPU)-accelerated MC technique to a highly scalable vendor-independent heterogeneous computing environment, achieving significantly improved performance and software portability. A number of parallel computing techniques are investigated to achieve portable performance over a wide range of computing hardware. Furthermore, multiple thread-level and device-level load-balancing strategies are developed to obtain efficient simulations using multiple central processing units and GPUs.
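
    For readers unfamiliar with the workload, the kernel being parallelized looks roughly like the single-threaded sketch below (homogeneous medium, isotropic scattering; the platform's actual OpenCL kernels are far more elaborate):

    ```python
    # Minimal MC photon transport kernel: exponential free paths, weighted
    # absorption, isotropic scattering. All coefficients are assumed values.
    import math, random

    MU_A, MU_S = 0.1, 10.0        # absorption/scattering coefficients (1/mm)
    MU_T = MU_A + MU_S

    def simulate_photon(rng):
        """Track one photon until its weight is negligible; return path length."""
        x = y = z = 0.0
        ux, uy, uz = 0.0, 0.0, 1.0            # launched along +z
        weight, path = 1.0, 0.0
        while weight > 1e-3:
            step = -math.log(rng.random()) / MU_T      # exponential free path
            x, y, z = x + ux * step, y + uy * step, z + uz * step
            path += step
            weight *= MU_S / MU_T                      # deposit absorbed fraction
            costh = 2.0 * rng.random() - 1.0           # isotropic scatter
            sinth = math.sqrt(1.0 - costh * costh)
            phi = 2.0 * math.pi * rng.random()
            ux, uy, uz = sinth * math.cos(phi), sinth * math.sin(phi), costh
        return path

    rng = random.Random(42)
    paths = [simulate_photon(rng) for _ in range(1000)]
    print(f"mean photon path length: {sum(paths) / len(paths):.1f} mm")
    ```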

  14. Phonon-based scalable platform for chip-scale quantum computing

    Directory of Open Access Journals (Sweden)

    Charles M. Reinke

    2016-12-01

    Full Text Available We present a scalable phonon-based quantum computer on a phononic crystal platform. Practical schemes involve selective placement of a single acceptor atom in the peak of the strain field in a high-Q phononic crystal cavity that enables coupling of the phonon modes to the energy levels of the atom. We show theoretical optimization of the cavity design and coupling waveguide, along with estimated performance figures of the coupled system. A qubit can be created by entangling a phonon at the resonance frequency of the cavity with the atom states. Qubits based on this half-sound, half-matter quasi-particle, called a phoniton, may outcompete other quantum architectures in terms of combined emission rate, coherence lifetime, and fabrication demands.

  15. Harnessing Disorder in Compression Based Nanofabrication

    Science.gov (United States)

    Engel, Clifford John

    The future of nanotechnologies depends on the successful development of versatile, low-cost techniques for patterning micro- and nanoarchitectures. While most approaches to nanofabrication have focused primarily on making periodic structures at ever smaller length scales, with an ultimate goal of massively scaling their production, I have focused on introducing control into relatively disordered nanofabrication systems. Well-ordered patterns are increasingly unnecessary for a growing range of applications, from anti-biofouling coatings to light trapping to omniphobic surfaces. The ability to manipulate disorder, at will and over multiple length scales, starting with the nanoscale, can open new prospects for textured substrates and unconventional applications. By taking advantage of features previously considered defects, I have been able to develop nanofabrication techniques with potential for massive scalability and incorporation into a wide range of applications. This thesis first describes the manipulation of the non-Newtonian properties of liquid Ga and Ga alloys to confine the metal and metal alloys in gratings with sub-wavelength periodicities. Through a solid-to-liquid phase change, I was able to access the superior plasmonic properties of liquid Ga for the generation of surface plasmon polaritons (SPPs). The switching contrast between solid and liquid Ga confined in the nanogratings allowed reversible manipulation of SPP properties through heating and cooling around the relatively low melting temperature of Ga (29.8 °C). The remaining chapters focus on the development and characterization of an all-polymer wrinkle material system. Wrinkles, spontaneous disordered features that are produced in response to compressive force, are ideal for a growing number of applications where fine feature control is no longer the main motivation. However, the mechanical limitations of many wrinkle systems have restricted the potential applications of wrinkled surfaces

  16. Parallel computation with molecular-motor-propelled agents in nanofabricated networks.

    Science.gov (United States)

    Nicolau, Dan V; Lard, Mercy; Korten, Till; van Delft, Falco C M J M; Persson, Malin; Bengtsson, Elina; Månsson, Alf; Diez, Stefan; Linke, Heiner; Nicolau, Dan V

    2016-03-08

    The combinatorial nature of many important mathematical problems, including nondeterministic-polynomial-time (NP)-complete problems, places a severe limitation on the problem size that can be solved with conventional, sequentially operating electronic computers. There have been significant efforts in conceiving parallel-computation approaches in the past, for example: DNA computation, quantum computation, and microfluidics-based computation. However, these approaches have not proven, so far, to be scalable and practical from a fabrication and operational perspective. Here, we report the foundations of an alternative parallel-computation system in which a given combinatorial problem is encoded into a graphical, modular network that is embedded in a nanofabricated planar device. Exploring the network in a parallel fashion using a large number of independent, molecular-motor-propelled agents then solves the mathematical problem. This approach uses orders of magnitude less energy than conventional computers, thus addressing issues related to power consumption and heat dissipation. We provide a proof-of-concept demonstration of such a device by solving, in a parallel fashion, the small instance {2, 5, 9} of the subset sum problem, which is a benchmark NP-complete problem. Finally, we discuss the technical advances necessary to make our system scalable with presently available technology.
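
    For intuition, the computation the agents perform physically is equivalent to exploring every subset of the input set. A brute-force sketch for the {2, 5, 9} instance from the abstract (one "agent path" per subset):

    ```python
    # Brute-force subset sum for the benchmark instance {2, 5, 9}; each subset
    # corresponds to one agent path through the nanofabricated network.
    from itertools import combinations

    def reachable_sums(values):
        """Map each reachable sum to the subsets (paths) that produce it."""
        sums = {}
        for r in range(len(values) + 1):
            for subset in combinations(values, r):
                sums.setdefault(sum(subset), []).append(subset)
        return sums

    instance = (2, 5, 9)
    for total, subsets in sorted(reachable_sums(instance).items()):
        print(f"sum {total:2d}: {subsets}")
    # Sums missing from the output (e.g. 1, 3, 13) are exits no agent reaches.
    ```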

  17. A scalable platform for biomechanical studies of tissue cutting forces

    International Nuclear Information System (INIS)

    Valdastri, P; Tognarelli, S; Menciassi, A; Dario, P

    2009-01-01

    This paper presents a novel and scalable experimental platform for biomechanical analysis of tissue cutting that exploits a triaxial force-sensitive scalpel and a high resolution vision system. Real-time measurements of cutting forces can be used simultaneously with accurate visual information in order to extract important biomechanical clues in real time that would aid the surgeon during minimally invasive intervention in preserving healthy tissues. Furthermore, the in vivo data gathered can be used for modeling the viscoelastic behavior of soft tissues, which is an important issue in surgical simulator development. Thanks to a modular approach, this platform can be scaled down, thus enabling in vivo real-time robotic applications. Several cutting experiments were conducted with soft porcine tissues (lung, liver and kidney) chosen as ideal candidates for biopsy procedures. The cutting force curves show repeated self-similar units of localized loading followed by unloading. With regard to tissue properties, the depth of cut plays a significant role in the magnitude of the cutting force acting on the blade. Image processing techniques and dedicated algorithms were used to outline the surface of the tissues and estimate the time variation of the depth of cut. The depth of cut was finally used to obtain the normalized cutting force, thus allowing comparative biomechanical analysis
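
    The final normalization step is a per-sample division of the measured force by the vision-estimated depth of cut; a trivial sketch with invented numbers:

    ```python
    # Normalizing cutting force by depth of cut (all values invented).
    forces_mN = [12.0, 18.5, 25.0, 31.0]   # resultant cutting-force samples
    depths_mm = [0.5, 0.8, 1.1, 1.4]       # depth of cut from image processing

    for f, d in zip(forces_mN, depths_mm):
        print(f"depth {d:.1f} mm -> normalized force {f / d:5.1f} mN/mm")
    ```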

  18. MOSDEN: A Scalable Mobile Collaborative Platform for Opportunistic Sensing Applications

    Directory of Open Access Journals (Sweden)

    Prem Prakash Jayaraman

    2014-05-01

    Full Text Available Mobile smartphones along with embedded sensors have become an efficient enabler for various mobile applications including opportunistic sensing. The hi-tech advances in smartphones are opening up a world of possibilities. This paper proposes a mobile collaborative platform called MOSDEN that enables and supports opportunistic sensing at run time. MOSDEN captures and shares sensor data across multiple apps, smartphones and users. MOSDEN supports the emerging trend of separating sensors from application-specific processing, storing and sharing. MOSDEN promotes reuse and re-purposing of sensor data, hence reducing the effort involved in developing novel opportunistic sensing applications. MOSDEN has been implemented on Android-based smartphones and tablets. Experimental evaluations validate the scalability and energy efficiency of MOSDEN and its suitability towards real-world applications. The results of evaluation and lessons learned are presented and discussed in this paper.

  19. Nanofabrication strategies for advanced electrode materials

    Directory of Open Access Journals (Sweden)

    Chen Kunfeng

    2017-09-01

    Full Text Available The development of advanced electrode materials for high-performance energy storage devices is becoming more and more important to meet the growing demand for portable electronics and electric vehicles. To speed up this process, rapid screening of exceptional materials among the various morphologies, structures and sizes of candidate materials is urgently needed. Benefitting from advances in nanotechnology, tremendous efforts have been devoted to the development of various nanofabrication strategies for advanced electrode materials. This review focuses on the analysis of novel nanofabrication strategies and progress in the field of fast screening of advanced electrode materials. The basic design principles governing chemical reaction, crystallization and electrochemical reaction to control the composition and nanostructure of final electrodes are reviewed. Novel fast nanofabrication strategies, such as burning and electrochemical exfoliation, and their basic principles are also summarized. More importantly, a colloid system serving as an up-front design can skip over materials synthesis, accelerating the screening of high-performance electrodes. This work encourages the creation of innovative design ideas for rapidly screening highly active electrode materials for applications in energy-related fields and beyond.

  20. Application of a scalable plant transient gene expression platform for malaria vaccine development

    Directory of Open Access Journals (Sweden)

    Holger eSpiegel

    2015-12-01

    Full Text Available Despite decades of intensive research efforts, there is currently no vaccine that provides sustained sterile immunity against malaria. In this context, a large number of targets from the different stages of the Plasmodium falciparum life cycle have been evaluated as vaccine candidates. None of these candidates has fulfilled expectations, and as long as we lack a single target that induces strain-transcending protective immune responses, combining key antigens from different life cycle stages seems to be the most promising route towards the development of efficacious malaria vaccines. After the identification of potential targets using approaches such as omics-based technology and reverse immunology, the rapid expression, purification and characterization of these proteins, as well as the generation and analysis of fusion constructs combining different promising antigens or antigen domains before committing to expensive and time-consuming clinical development, represent one of the bottlenecks in the vaccine development pipeline. The production of recombinant proteins by transient gene expression in plants is a robust and versatile alternative to cell-based microbial and eukaryotic production platforms. The transfection of plant tissues and/or whole plants using Agrobacterium tumefaciens offers a low technical entry barrier, low costs and a high degree of flexibility embedded within a rapid and scalable workflow. Recombinant proteins can easily be targeted to different subcellular compartments according to their physicochemical requirements, including post-translational modifications, to ensure optimal yields of high quality product, and to support simple and economical downstream processing. Here we demonstrate the use of a plant transient expression platform based on transfection with A. tumefaciens as an essential component of a malaria vaccine development workflow involving screens for expression, solubility and stability using fluorescent fusion

  1. KOLAM: a cross-platform architecture for scalable visualization and tracking in wide-area imagery

    Science.gov (United States)

    Fraser, Joshua; Haridas, Anoop; Seetharaman, Guna; Rao, Raghuveer M.; Palaniappan, Kannappan

    2013-05-01

    KOLAM is an open, cross-platform, interoperable, scalable and extensible framework supporting a novel multiscale spatiotemporal dual-cache data structure for big data visualization and visual analytics. This paper focuses on the use of KOLAM for target tracking in high-resolution, high throughput wide-format video, also known as wide-area motion imagery (WAMI). It was originally developed for the interactive visualization of extremely large geospatial imagery of high spatial and spectral resolution. KOLAM is platform, operating system and (graphics) hardware independent, and supports embedded datasets scalable from hundreds of gigabytes to feasibly petabytes in size on clusters, workstations, desktops and mobile computers. In addition to rapid roam, zoom and hyper-jump spatial operations, a large number of simultaneously viewable embedded pyramid layers (also referred to as multiscale or sparse imagery), interactive colormap and histogram enhancement, spherical projection and terrain maps are supported. The KOLAM software architecture was extended to support airborne wide-area motion imagery by organizing spatiotemporal tiles in very large format video frames using a temporal cache of tiled pyramid cached data structures. The current version supports WAMI animation, fast intelligent inspection, trajectory visualization and target tracking (digital tagging); the latter by interfacing with external automatic tracking software. One of the critical needs for working with WAMI is a supervised tracking and visualization tool that allows analysts to digitally tag multiple targets, quickly review and correct tracking results and apply geospatial visual analytic tools on the generated trajectories. One-click manual tracking combined with multiple automated tracking algorithms are available to assist the analyst and increase human effectiveness.

  2. Micro/nanofabricated environments for synthetic biology.

    Science.gov (United States)

    Collier, C Patrick; Simpson, Michael L

    2011-08-01

    A better understanding of how confinement, crowding and reduced dimensionality modulate reactivity and reaction dynamics will aid in the rational and systematic discovery of functionality in complex biological systems. Artificial microfabricated and nanofabricated structures have helped elucidate the effects of nanoscale spatial confinement and segregation on biological behavior, particularly when integrated with microfluidics, through precise control in both space and time of diffusible signals and binding interactions. Examples of nanostructured interfaces for synthetic biology include the development of cell-like compartments for encapsulating biochemical reactions, nanostructured environments for fundamental studies of diffusion, molecular transport and biochemical reaction kinetics, and regulation of biomolecular interactions as functions of microfabricated and nanofabricated topological constraints.

  3. Development of a scalable generic platform for adaptive optics real time control

    Science.gov (United States)

    Surendran, Avinash; Burse, Mahesh P.; Ramaprakash, A. N.; Parihar, Padmakar

    2015-06-01

    The main objective of the present project is to explore the viability of an adaptive optics control system based exclusively on Field Programmable Gate Arrays (FPGAs), making strong use of their parallel processing capability. In an Adaptive Optics (AO) system, the generation of the Deformable Mirror (DM) control voltages from the Wavefront Sensor (WFS) measurements is usually through the multiplication of the wavefront slopes with a predetermined reconstructor matrix. The ability to access several hundred hard multipliers and memories concurrently in an FPGA allows performance far beyond that of a modern CPU or GPU for tasks with a well-defined structure such as Adaptive Optics control. The target of the current project is to generate a signal for real-time wavefront correction from the signals coming from a Wavefront Sensor, wherein the system would be flexible enough to accommodate all current wavefront sensing techniques and also the different methods used for wavefront compensation. The system should also accommodate different data transmission protocols (such as Ethernet, USB, IEEE 1394, etc.) for transmitting data to and from the FPGA device, thus providing a more flexible platform for Adaptive Optics control. Preliminary simulation results for the formulation of the platform, along with the design of a fully scalable slope computer, are presented.
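
    The control step described above is, at its core, one matrix-vector multiply per WFS frame, v = R s. A NumPy sketch with assumed dimensions follows (the FPGA computes the same product across hundreds of hard multipliers in parallel):

    ```python
    # DM command generation as a reconstructor matrix-vector product.
    # Dimensions and the random reconstructor are assumptions for illustration.
    import numpy as np

    n_slopes, n_actuators = 160, 97   # e.g. a Shack-Hartmann WFS with 80 subapertures
    rng = np.random.default_rng(0)

    R = rng.standard_normal((n_actuators, n_slopes)) * 0.01  # precomputed offline
    s = rng.standard_normal(n_slopes)                        # one frame of x/y slopes

    v = R @ s        # the per-frame operation the FPGA parallelizes
    print(v.shape)   # (97,) -> one control voltage increment per actuator
    ```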

  4. Scalable Transactions for Web Applications in the Cloud

    NARCIS (Netherlands)

    Zhou, W.; Pierre, G.E.O.; Chi, C.-H.

    2009-01-01

    Cloud Computing platforms provide scalability and high availability properties for web applications but they sacrifice data consistency at the same time. However, many applications cannot afford any data inconsistency. We present a scalable transaction manager for NoSQL cloud database services to

  5. An FPGA Scalable Software Defined Radio Platform Design for Educational and Research Purposes

    Directory of Open Access Journals (Sweden)

    Marcos Hervás

    2016-06-01

    Full Text Available In a digital modem design, the integration of the Analog to Digital Converters (ADC) and Digital to Analog Converters (DAC) with the core processor is usually a major issue for the designer. In this paper an FPGA scalable Software Defined Radio platform based on a Spartan-6 as a control unit is presented, developed for both educational and research purposes, which can fit different application requirements in terms of analog front-end performance, processing unit and cost. The resolution and sampling frequency of the analog front-end are its main adjustable parameters. The processing core requirements involve the FPGA and the communication ports. A multidisciplinary working group was required to design a high performance system for both the analog front-end and the digital processing core in terms of signal integrity and electromagnetic compatibility. The platform has 5 different peripheral ports ranging from 16 kbps to 2.5 Gbps. The communication ports allow our students to develop a wide range of applications for both on-site and online courses, applying a teaching methodology based on learning by doing with a real system that helps them acquire other transversal skills.

  6. Nanofabricated racks of aligned and anchored DNA substrates for single-molecule imaging.

    Science.gov (United States)

    Gorman, Jason; Fazio, Teresa; Wang, Feng; Wind, Shalom; Greene, Eric C

    2010-01-19

    Single-molecule studies of biological macromolecules can benefit from new experimental platforms that facilitate experimental design and data acquisition. Here we develop new strategies to construct curtains of DNA in which the molecules are aligned with respect to one another and maintained in an extended configuration by anchoring both ends of the DNA to the surface of a microfluidic sample chamber that is otherwise coated with an inert lipid bilayer. This "double-tethered" DNA substrate configuration is established through the use of nanofabricated rack patterns comprised of two distinct functional elements: linear barriers to lipid diffusion that align DNA molecules anchored by one end to the bilayer and antibody-coated pentagons that provide immobile anchor points for the opposite ends of the DNA. These devices enable the alignment and anchoring of thousands of individual DNA molecules, which can then be visualized using total internal reflection fluorescence microscopy under conditions that do not require continuous application of buffer flow to stretch the DNA. This unique strategy offers the potential for studying protein-DNA interactions on large DNA substrates without compromising measurements through application of hydrodynamic force. We provide a proof-of-principle demonstration that double-tethered DNA curtains made with nanofabricated rack patterns can be used in a one-dimensional diffusion assay that monitors the motion of quantum dot-tagged proteins along DNA.

  7. Influence of evanescent waves on the voxel profile in multipulse multiphoton polymerization nanofabrication

    International Nuclear Information System (INIS)

    Li Wei; Cao Tianxiang; Zhai Zhaohui; Yu Xuanyi; Zhang Xinzheng; Xu Jingjun

    2013-01-01

    The relationship between the profile of the structures obtained by multiphoton polymerization and the optical parameters of nanofabrication systems has been studied theoretically for a multipulse scheme. We find that the profile of sub-wavelength structures is greatly affected by evanescent waves. Not only is the photocured polymer voxel affected by the beam profile, but the beam propagation behavior is also influenced by the photocured polymer voxel. This gives us a new view of matter-light interactions in the multipulse polymerization process, which is useful for the accurate control of the nanofabrication profile and the selection of new nanofabrication materials.

  8. Plasma-aided nanofabrication: where is the cutting edge?

    International Nuclear Information System (INIS)

    Ostrikov, K; Murphy, A B

    2007-01-01

    Plasma-aided nanofabrication is a rapidly expanding area of research spanning disciplines ranging from physics and chemistry of plasmas and gas discharges to solid state physics, materials science, surface science, nanoscience and nanotechnology and related engineering subjects. The current status of the research field is discussed and examples of superior performance and competitive advantage of plasma processes and techniques are given. These examples are selected to represent a range of applications of two major types of plasmas suitable for nanoscale synthesis and processing, namely thermally non-equilibrium and thermal plasmas. Major concepts and terminology used in the field are introduced. The paper also pinpoints the major challenges facing plasma-aided nanofabrication and identifies some emerging topics for future research. (editorial review)

  9. Scalability of DL_POLY on High Performance Computing Platform

    Directory of Open Access Journals (Sweden)

    Mabule Samuel Mabakane

    2017-12-01

    Full Text Available This paper presents a case study on the scalability of several versions of the molecular dynamics code (DL_POLY) performed on South Africa's Centre for High Performance Computing e1350 IBM Linux cluster, Sun system and Lengau supercomputers. Within this study, different problem sizes were designed and the same chosen systems were employed in order to test the performance of DL_POLY using weak and strong scalability. It was found that the speed-up results for the small systems were better than those for large systems on both Ethernet and Infiniband networks. However, simulations of large systems in DL_POLY performed well using the Infiniband network on the Lengau cluster as compared to the e1350 and Sun supercomputers.

  10. A scalable double-barcode sequencing platform for characterization of dynamic protein-protein interactions.

    Science.gov (United States)

    Schlecht, Ulrich; Liu, Zhimin; Blundell, Jamie R; St Onge, Robert P; Levy, Sasha F

    2017-05-25

    Several large-scale efforts have systematically catalogued protein-protein interactions (PPIs) of a cell in a single environment. However, little is known about how the protein interactome changes across environmental perturbations. Current technologies, which assay one PPI at a time, are too low throughput to make it practical to study protein interactome dynamics. Here, we develop a highly parallel protein-protein interaction sequencing (PPiSeq) platform that uses a novel double barcoding system in conjunction with the dihydrofolate reductase protein-fragment complementation assay in Saccharomyces cerevisiae. PPiSeq detects PPIs at a rate that is on par with current assays and, in contrast with current methods, quantitatively scores PPIs with enough accuracy and sensitivity to detect changes across environments. Both PPI scoring and the bulk of strain construction can be performed with cell pools, making the assay scalable and easily reproduced across environments. PPiSeq is therefore a powerful new tool for large-scale investigations of dynamic PPIs.
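
    A toy sketch of the double-barcode readout idea (the data layout and scoring are simplified assumptions, not the PPiSeq pipeline): each sequencing read yields one barcode per protein partner, and changes in a pair's abundance across environments indicate a dynamic PPI.

    ```python
    # Toy double-barcode scoring: count (bait, prey) barcode pairs per
    # environment and report the log2 change. Data and scoring are invented.
    import math
    from collections import Counter

    def pair_frequencies(reads):
        """reads: iterable of (bait_barcode, prey_barcode) tuples."""
        counts = Counter(reads)
        total = sum(counts.values())
        return {pair: n / total for pair, n in counts.items()}

    env_a = [("b1", "p1")] * 90 + [("b2", "p2")] * 10   # environment A reads
    env_b = [("b1", "p1")] * 40 + [("b2", "p2")] * 60   # environment B reads

    fa, fb = pair_frequencies(env_a), pair_frequencies(env_b)
    for pair in sorted(set(fa) | set(fb)):
        change = math.log2(fb.get(pair, 1e-6) / fa.get(pair, 1e-6))
        print(pair, f"log2 abundance change: {change:+.2f}")
    ```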

  11. Safety Profile of TiO2-Based Photocatalytic Nanofabrics for Indoor Formaldehyde Degradation

    Directory of Open Access Journals (Sweden)

    Guixin Cui

    2015-11-01

    Full Text Available Anatase TiO2 nanoparticles (TNPs) are synthesized using the sol-gel method and loaded onto the surface of polyester-cotton (65/35) fabrics. The nanofabrics degrade formaldehyde at an efficiency of 77% in eight hours under visible light irradiation, or 97% under UV light. The loaded TNPs display very little release from the nanofabrics (~0.0%) during a standard fastness-to-rubbing test. Assuming TNPs may fall off nanofabrics during their life cycles, we also examine the possible toxicity of TNPs to human cells. We found that, up to a concentration of 220 μg/mL, they do not affect the viability of human acute monocytic leukemia cell line THP-1 macrophages or human liver and kidney cells.

  12. Friction-induced nanofabrication on monocrystalline silicon

    International Nuclear Information System (INIS)

    Yu Bingjun; Qian Linmao; Yu Jiaxin; Zhou Zhongrong; Dong Hanshan; Chen Yunfei

    2009-01-01

    Fabrication of nanostructures has become a major concern as the scaling of device dimensions continues. In this paper, a friction-induced nanofabrication method is proposed to fabricate protrusive nanostructures on silicon. Without applying any voltage, the nanofabrication is completed by sliding an AFM diamond tip on a sample surface under a given normal load. Nanostructured patterns, such as linear nanostructures, nanodots or nanowords, can be fabricated on the target surface. The height of these nanostructures increases rapidly at first and then levels off with the increasing normal load or number of scratching cycles. TEM analyses suggest that the friction-induced hillock is composed of silicon oxide, amorphous silicon and deformed silicon structures. Compared to the tribochemical reaction, the amorphization and crystal defects induced by the mechanical interaction may have played a dominating role in the formation of the hillocks. Similar to other proximal probe methods, the proposed method enables fabrication at specified locations and facilitates measuring the dimensions of nanostructures with high precision. It is highlighted that the fabrication can also be realized on electrical insulators or oxide surfaces, such as quartz and glass. Therefore, the friction-induced method points out a new route in fabricating nanostructures on demand.

  13. High resolution UV spectroscopy and laser-focused nanofabrication

    NARCIS (Netherlands)

    Myszkiewicz, G.

    2005-01-01

    This thesis combines two at first glance different techniques: High Resolution Laser Induced Fluorescence Spectroscopy (LIF) of small aromatic molecules and Laser Focusing of atoms for Nanofabrication. The thesis starts with the introduction to the high resolution LIF technique of small aromatic

  14. New Tools for New Research in Psychiatry: A Scalable and Customizable Platform to Empower Data Driven Smartphone Research.

    Science.gov (United States)

    Torous, John; Kiang, Mathew V; Lorme, Jeanette; Onnela, Jukka-Pekka

    2016-05-05

    A longstanding barrier to progress in psychiatry, both in clinical settings and research trials, has been the persistent difficulty of accurately and reliably quantifying disease phenotypes. Mobile phone technology combined with data science has the potential to offer medicine a wealth of additional information on disease phenotypes, but the large majority of existing smartphone apps are not intended for use as biomedical research platforms and, as such, do not generate research-quality data. Our aim is not the creation of yet another app per se but rather the establishment of a platform to collect research-quality smartphone raw sensor and usage pattern data. Our ultimate goal is to develop statistical, mathematical, and computational methodology to enable us and others to extract biomedical and clinical insights from smartphone data. We report on the development and early testing of Beiwe, a research platform featuring a study portal, smartphone app, database, and data modeling and analysis tools designed and developed specifically for transparent, customizable, and reproducible biomedical research use, in particular for the study of psychiatric and neurological disorders. We also outline a proposed study using the platform for patients with schizophrenia. We demonstrate the passive data capabilities of the Beiwe platform and early results of its analytical capabilities. Smartphone sensors and phone usage patterns, when coupled with appropriate statistical learning tools, are able to capture various social and behavioral manifestations of illnesses, in naturalistic settings, as lived and experienced by patients. The ubiquity of smartphones makes this type of moment-by-moment quantification of disease phenotypes highly scalable and, when integrated within a transparent research platform, presents tremendous opportunities for research, discovery, and patient health.

  15. Microchannel Reactors for ISRU Applications Using Nanofabricated Catalysts, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Makel Engineering, Inc. (MEI) and USRA propose to develop microchannel reactors for In-Situ Resources Utilization (ISRU) using nanofabricated catalysts. The proposed...

  16. Fast volume reconstruction in positron emission tomography: Implementation of four algorithms on a high-performance scalable parallel platform

    International Nuclear Information System (INIS)

    Egger, M.L.; Scheurer, A.H.; Joseph, C.

    1996-01-01

    The issue of long reconstruction times in PET has been addressed from several points of view, resulting in an affordable dedicated system capable of handling routine 3D reconstruction in a few minutes per frame: on the hardware side using fast processors and a parallel architecture, and on the software side, using efficient implementations of computationally less intensive algorithms. Execution times obtained for the PRT-1 data set on a parallel system of five hybrid nodes, each combining an Alpha processor for computation and a transputer for communication, are the following (256 sinograms of 96 views by 128 radial samples): Ramp algorithm 56 s, Favor 81 s and reprojection algorithm of Kinahan and Rogers 187 s. The implementation of fast rebinning algorithms has shown our hardware platform to become communications-limited; they execute faster on a conventional single-processor Alpha workstation: single-slice rebinning 7 s, Fourier rebinning 22 s, 2D filtered backprojection 5 s. The scalability of the system has been demonstrated, and a saturation effect at network sizes above ten nodes has become visible; new T9000-based products lifting most of the constraints on network topology and link throughput are expected to result in improved parallel efficiency and scalability properties

  17. JobCenter: an open source, cross-platform, and distributed job queue management system optimized for scalability and versatility.

    Science.gov (United States)

    Jaschob, Daniel; Riffle, Michael

    2012-07-30

    Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. JobCenter is a client-server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or "in the cloud") and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/.
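
    The client-driven design described above implies a worker loop that polls the server rather than accepting inbound connections; a sketch of that pattern (the endpoint names and job format are assumptions, not JobCenter's actual protocol):

    ```python
    # Polling-worker sketch of a client-driven job queue. The server URL,
    # endpoint, and job format are invented; only the pattern is the point.
    import json, time, urllib.parse, urllib.request

    SERVER = "http://jobcenter.example.org"     # hypothetical server
    WORKER_TYPES = ["blast", "alignment"]       # job types this worker accepts

    def poll_for_job():
        query = urllib.parse.urlencode({"types": ",".join(WORKER_TYPES)})
        with urllib.request.urlopen(f"{SERVER}/next_job?{query}") as resp:
            return json.load(resp)              # a job dict, or None if idle

    while True:
        job = poll_for_job()
        if job is None:
            time.sleep(30)      # back off; no inbound connection ever needed
            continue
        print("would execute job", job["id"])   # ...run it, then report status
    ```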

  18. Algorithmic psychometrics and the scalable subject.

    Science.gov (United States)

    Stark, Luke

    2018-04-01

    Recent public controversies, ranging from the 2014 Facebook 'emotional contagion' study to psychographic data profiling by Cambridge Analytica in the 2016 American presidential election, Brexit referendum and elsewhere, signal watershed moments in which the intersecting trajectories of psychology and computer science have become matters of public concern. The entangled history of these two fields grounds the application of applied psychological techniques to digital technologies, and an investment in applying calculability to human subjectivity. Today, a quantifiable psychological subject position has been translated, via 'big data' sets and algorithmic analysis, into a model subject amenable to classification through digital media platforms. I term this position the 'scalable subject', arguing it has been shaped and made legible by algorithmic psychometrics - a broad set of affordances in digital platforms shaped by psychology and the behavioral sciences. In describing the contours of this 'scalable subject', this paper highlights the urgent need for renewed attention from STS scholars on the psy sciences, and on a computational politics attentive to psychology, emotional expression, and sociality via digital media.

  19. Scalable multifunction RF system concepts for joint operations

    NARCIS (Netherlands)

    Otten, M.P.G.; Wit, J.J.M. de; Smits, F.M.A.; Rossum, W.L. van; Huizing, A.

    2010-01-01

    RF systems based on modular architectures have the potential of better re-use of technology, decreasing development time, and decreasing life cycle cost. Moreover, modular architectures provide scalability, allowing low cost upgrades and adaptability to different platforms. To achieve maximum

  20. A nanofabricated, monolithic, path-separated electron interferometer

    OpenAIRE

    Agarwal, Akshay; Kim, Chung-Soo; Hobbs, Richard; Dyck, Dirk van; Berggren, Karl K.

    2017-01-01

    Progress in nanofabrication technology has enabled the development of numerous electron optic elements for enhancing image contrast and manipulating electron wave functions. Here, we describe a modular, self-aligned, amplitude-division electron interferometer in a conventional transmission electron microscope. The interferometer consists of two 45-nm-thick silicon layers separated by 20 µm. This interferometer is fabricated from a single-crystal silicon cantilever on a transmission electron m...

  1. Nanofabrication Technology for Production of Quantum Nano-Electronic Devices Integrating Niobium Electrodes and Optically Transparent Gates

    Science.gov (United States)

    2018-01-01

    TECHNICAL REPORT 3086, January 2018. Nanofabrication Technology for Production of Quantum Nano-electronic Devices Integrating Niobium Electrodes and Optically Transparent Gates. The work described in this report was performed by the Advanced Concepts and Applied Research Branch (Code 71730) and the Science and Technology ... Applied Sciences Division. EXECUTIVE SUMMARY: This technical report demonstrates nanofabrication technology for Niobium heterostructures and

  2. Space Situational Awareness Data Processing Scalability Utilizing Google Cloud Services

    Science.gov (United States)

    Greenly, D.; Duncan, M.; Wysack, J.; Flores, F.

    Space Situational Awareness (SSA) is a fundamental and critical component of current space operations. The term SSA encompasses the awareness, understanding and predictability of all objects in space. As the population of orbital space objects and debris increases, the number of collision avoidance maneuvers grows and prompts the need for accurate and timely process measures. The SSA mission continually evolves toward near real-time assessment and analysis, demanding higher processing capabilities. By conventional methods, meeting these demands requires the integration of new hardware to keep pace with the growing complexity of maneuver planning algorithms. SpaceNav has implemented a highly scalable architecture that will track satellites and debris by utilizing powerful virtual machines on the Google Cloud Platform. SpaceNav algorithms for processing CDMs outpace conventional means. A robust processing environment for tracking data, collision avoidance maneuvers and various other aspects of SSA can be created and deleted on demand. Migrating SpaceNav tools and algorithms into the Google Cloud Platform will be discussed, along with the trials and tribulations involved. Information will be shared on how and why certain cloud products were used, as well as the integration techniques that were implemented. Key items to be presented are: (1) scientific algorithms and SpaceNav tools integrated into a scalable architecture, covering maneuver planning, parallel processing, Monte Carlo simulations, optimization algorithms, and software application development/integration into the Google Cloud Platform; and (2) Compute Engine processing, covering App Engine automated processing, performance testing and performance scalability, Cloud MySQL databases and database scalability, cloud data storage, and redundancy and availability.
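
    As a flavor of the Monte Carlo workload listed above, a toy conjunction-risk estimate (the isotropic spherical-error model and all numbers are assumptions, not SpaceNav's algorithms):

    ```python
    # Toy Monte Carlo collision-probability estimate for one conjunction.
    # The isotropic Gaussian miss-distance model and numbers are assumptions.
    import random

    def collision_probability(miss_m, sigma_m, radius_m, trials=100_000, seed=1):
        """Fraction of sampled encounters closer than the combined hard-body radius."""
        rng = random.Random(seed)
        hits = 0
        for _ in range(trials):
            dx = rng.gauss(miss_m, sigma_m)   # nominal miss along one axis
            dy = rng.gauss(0.0, sigma_m)
            dz = rng.gauss(0.0, sigma_m)
            if (dx * dx + dy * dy + dz * dz) ** 0.5 < radius_m:
                hits += 1
        return hits / trials

    p = collision_probability(miss_m=200.0, sigma_m=150.0, radius_m=50.0)
    print(f"estimated collision probability: {p:.2e}")
    ```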

  3. JobCenter: an open source, cross-platform, and distributed job queue management system optimized for scalability and versatility

    Directory of Open Access Journals (Sweden)

    Jaschob Daniel

    2012-07-01

    Full Text Available Background: Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. Results: JobCenter is a client-server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or "in the cloud") and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. Conclusions: JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/.

  4. Platform for efficient switching between multiple devices in the intensive care unit.

    Science.gov (United States)

    De Backere, F; Vanhove, T; Dejonghe, E; Feys, M; Herinckx, T; Vankelecom, J; Decruyenaere, J; De Turck, F

    2015-01-01

    This article is part of the Focus Theme of METHODS of Information in Medicine on "Managing Interoperability and Complexity in Health Systems". Handheld computers, such as tablets and smartphones, are becoming more and more accessible in the clinical care setting and in Intensive Care Units (ICUs). By making the most useful and appropriate data available on multiple devices and facilitating the switching between those devices, staff members can efficiently integrate them in their workflow, allowing for faster and more accurate decisions. This paper addresses the design of a platform for the efficient switching between multiple devices in the ICU. The key functionalities of the platform are the integration of the platform into the workflow of the medical staff and the provision of tailored and dynamic information at the point of care. The platform is designed based on a 3-tier architecture with a focus on extensibility, scalability and an optimal user experience. After a user identifies to a device using Near Field Communication (NFC), the appropriate medical information is shown on the selected device, with the visualization of the data adapted to the type of device. A web-centric approach was used to enable extensibility and portability. A prototype of the platform was thoroughly evaluated with respect to scalability, performance and user experience. Performance tests show that the response time of the system scales linearly with the amount of data. Measurements with up to 20 devices have shown no performance loss due to the concurrent use of multiple devices. The platform provides a scalable and responsive solution for the efficient switching between multiple devices. Due to the web-centric approach, new devices can easily be integrated. The performance and scalability of the platform were evaluated and shown to be within an acceptable range.
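
    A minimal sketch of the NFC-triggered hand-off described above, assuming hypothetical device and session registries (this is not the authors' implementation; all names and tag IDs are invented for illustration):

        # Toy web service: an NFC tap identifies the device, and the user's current
        # session is rendered with a layout tailored to that device type.
        from flask import Flask, jsonify

        app = Flask(__name__)

        # Hypothetical registries; a real deployment would back these with services.
        DEVICES = {"04A32F": {"type": "tablet"}, "049B11": {"type": "wall-screen"}}
        SESSIONS = {"dr_smith": {"patient": "ICU-bed-7", "view": "vitals"}}

        @app.route("/switch/<tag_id>/<user>")
        def switch(tag_id, user):
            device, session = DEVICES.get(tag_id), SESSIONS.get(user)
            if device is None or session is None:
                return jsonify(error="unknown device or user"), 404
            # Tailor the visualization to the device type (web-centric rendering).
            layout = "compact" if device["type"] == "tablet" else "dashboard"
            return jsonify(patient=session["patient"], view=session["view"],
                           layout=layout)

        if __name__ == "__main__":
            app.run()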

  5. HPC - Platforms Penta Chart

    Energy Technology Data Exchange (ETDEWEB)

    Trujillo, Angelina Michelle [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-10-08

    Strategy, planning, acquiring: very large scale computing platforms come and go, and planning for immensely scalable machines often precedes actual procurement by 3 years; procurement can take another year or more. Integration: after acquisition, machines must be integrated into the computing environments at LANL, connected to scalable storage via large scale storage networking, and assured of correct and secure operations. Management and utilization: ongoing operations, maintenance, and troubleshooting of the hardware and systems software at massive scale are required.

  6. Scalable electro-photonic integration concept based on polymer waveguides

    NARCIS (Netherlands)

    Bosman, E.; Steenberge, G. van; Boersma, A.; Wiegersma, S.; Harmsma, P.J.; Karppinen, M.; Korhonen, T.; Offrein, B.J.; Dangel, R.; Daly, A.; Ortsiefer, M.; Justice, J.; Corbett, B.; Dorrestein, S.; Duis, J.

    2016-01-01

    A novel method for fabricating a single mode optical interconnection platform is presented. The method comprises the miniaturized assembly of optoelectronic single dies, the scalable fabrication of polymer single mode waveguides and the coupling to glass fiber arrays providing the I/Os. The low

  7. Embedded Linux platform for data acquisition systems

    International Nuclear Information System (INIS)

    Patel, Jigneshkumar J.; Reddy, Nagaraj; Kumari, Praveena; Rajpal, Rachana; Pujara, Harshad; Jha, R.; Kalappurakkal, Praveen

    2014-01-01

    Highlights: • The design and development of a data acquisition system on an FPGA based reconfigurable hardware platform. • Embedded Linux configuration and compilation for FPGA based systems. • Hardware logic IP core and its Linux device driver development for the external peripheral to interface it with the FPGA based system. - Abstract: This scalable hardware–software system is designed and developed to explore the emerging open standards for the data acquisition requirements of Tokamak experiments. To address the future need for a scalable data acquisition and control system for fusion experiments, we have explored the capability of a software platform using an open-source Embedded Linux operating system on a programmable hardware platform such as an FPGA. The idea was to identify a platform that can be customizable, flexible and scalable enough to support the data acquisition system requirements. To do this, we have selected an FPGA based reconfigurable and scalable hardware platform to design the system, with an Embedded Linux based operating system for flexibility in software development and a Gigabit Ethernet interface for high speed data transactions. The proposed hardware–software platform using an FPGA and Embedded Linux OS offers a single chip solution with a processor and peripherals such as an ADC interface controller, Gigabit Ethernet controller and memory controller, amongst others. The Embedded Linux platform for data acquisition is implemented and tested on a Virtex-5 FXT FPGA ML507, which has a PowerPC 440 (PPC440) [2] hard block on the FPGA. For this work, we have used the Linux Kernel version 2.6.34 with BSP support for the ML507 platform, downloaded from the Xilinx [1] GIT server. A cross-compiler tool chain is created using the Buildroot scripts. The Linux Kernel and Root File System are configured and compiled using the cross-tools to support the hardware platform. The Analog to Digital Converter (ADC) IO module is designed and interfaced with the ML507 through Xilinx
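
    As a rough illustration of the host side of such a system, a client might stream raw ADC samples from the board over Gigabit Ethernet via TCP; the address, port and frame format below are assumptions for illustration, not the paper's actual protocol:

        # Hypothetical host-side client for a DAQ node of this kind: read one frame
        # of raw ADC samples over TCP. Address, port and frame layout are made up.
        import socket
        import struct

        HOST, PORT = "192.168.1.50", 7000     # hypothetical FPGA board address
        FRAME = struct.Struct("<256h")        # assumed frame: 256 signed 16-bit samples

        with socket.create_connection((HOST, PORT)) as sock:
            buf = b""
            while len(buf) < FRAME.size:      # accumulate until a full frame arrives
                chunk = sock.recv(4096)
                if not chunk:
                    break
                buf += chunk
            samples = FRAME.unpack(buf[:FRAME.size])
            print("first samples:", samples[:8])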

  8. Embedded Linux platform for data acquisition systems

    Energy Technology Data Exchange (ETDEWEB)

    Patel, Jigneshkumar J., E-mail: jjp@ipr.res.in [Institute for Plasma Research, Gandhinagar, Gujarat (India); Reddy, Nagaraj, E-mail: nagaraj.reddy@coreel.com [Sandeepani School of Embedded System Design, Bangalore, Karnataka (India); Kumari, Praveena, E-mail: praveena@ipr.res.in [Institute for Plasma Research, Gandhinagar, Gujarat (India); Rajpal, Rachana, E-mail: rachana@ipr.res.in [Institute for Plasma Research, Gandhinagar, Gujarat (India); Pujara, Harshad, E-mail: pujara@ipr.res.in [Institute for Plasma Research, Gandhinagar, Gujarat (India); Jha, R., E-mail: rjha@ipr.res.in [Institute for Plasma Research, Gandhinagar, Gujarat (India); Kalappurakkal, Praveen, E-mail: praveen.k@coreel.com [Sandeepani School of Embedded System Design, Bangalore, Karnataka (India)

    2014-05-15

    Highlights: • The design and development of a data acquisition system on an FPGA based reconfigurable hardware platform. • Embedded Linux configuration and compilation for FPGA based systems. • Hardware logic IP core and its Linux device driver development for the external peripheral to interface it with the FPGA based system. - Abstract: This scalable hardware–software system is designed and developed to explore the emerging open standards for the data acquisition requirements of Tokamak experiments. To address the future need for a scalable data acquisition and control system for fusion experiments, we have explored the capability of a software platform using an open-source Embedded Linux operating system on a programmable hardware platform such as an FPGA. The idea was to identify a platform that can be customizable, flexible and scalable enough to support the data acquisition system requirements. To do this, we have selected an FPGA based reconfigurable and scalable hardware platform to design the system, with an Embedded Linux based operating system for flexibility in software development and a Gigabit Ethernet interface for high speed data transactions. The proposed hardware–software platform using an FPGA and Embedded Linux OS offers a single chip solution with a processor and peripherals such as an ADC interface controller, Gigabit Ethernet controller and memory controller, amongst others. The Embedded Linux platform for data acquisition is implemented and tested on a Virtex-5 FXT FPGA ML507, which has a PowerPC 440 (PPC440) [2] hard block on the FPGA. For this work, we have used the Linux Kernel version 2.6.34 with BSP support for the ML507 platform, downloaded from the Xilinx [1] GIT server. A cross-compiler tool chain is created using the Buildroot scripts. The Linux Kernel and Root File System are configured and compiled using the cross-tools to support the hardware platform. The Analog to Digital Converter (ADC) IO module is designed and interfaced with the ML507 through Xilinx

  9. Development of nano-fabrication technique utilizing self-organizational behavior of point defects induced by ion irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Nitta, Noriko [Department of Environmental Systems Engineering, Kochi University of Technology, Tosayamada-Cho, Kochi-Prefecture 782-8502 (Japan); Taniwaki, Masafumi [Department of Environmental Systems Engineering, Kochi University of Technology, Tosayamada-Cho, Kochi-Prefecture 782-8502 (Japan)]. E-mail: taniwaki.masafumi@kochi-tech.ac.jp

    2006-04-01

    The present authors have proposed a novel nano-fabrication technique that can arrange fine cells in an ordered manner, based on their findings in GaSb implanted at low temperature. In this article, first, the experimental observation that an anomalous cellular structure is formed in GaSb by ion implantation is introduced and the self-organizational formation mechanism of the structure is described. Next, a nano-fabrication technique that utilizes a focused ion beam is described. This technique consists of two procedures, i.e. the formation of the initial void array and its development into an ordered cellular structure. Finally, nano-fabrication is performed with this technique and the results are reported. Fabrication succeeded for structures where the dot (cell) interval was 100 nm or larger. The minimum ion dose for initial voids to develop into the ordered cellular structure is evaluated. It is also shown that the substrate temperature during implantation is an essential parameter for this technique.

  10. Development of nano-fabrication technique utilizing self-organizational behavior of point defects induced by ion irradiation

    International Nuclear Information System (INIS)

    Nitta, Noriko; Taniwaki, Masafumi

    2006-01-01

    The present authors have proposed a novel nano-fabrication technique that can arrange fine cells in an ordered manner, based on their findings in GaSb implanted at low temperature. In this article, first, the experimental observation that an anomalous cellular structure is formed in GaSb by ion implantation is introduced and the self-organizational formation mechanism of the structure is described. Next, a nano-fabrication technique that utilizes a focused ion beam is described. This technique consists of two procedures, i.e. the formation of the initial void array and its development into an ordered cellular structure. Finally, nano-fabrication is performed with this technique and the results are reported. Fabrication succeeded for structures where the dot (cell) interval was 100 nm or larger. The minimum ion dose for initial voids to develop into the ordered cellular structure is evaluated. It is also shown that the substrate temperature during implantation is an essential parameter for this technique.

  11. STUDY OF UREMIC TOXIN FLUXES ACROSS NANOFABRICATED HEMODIALYSIS MEMBRANES USING IRREVERSIBLE THERMODYNAMICS

    Directory of Open Access Journals (Sweden)

    Assem Hedayat

    2013-03-01

    Conclusions: Nanofabricated hemodialysis membranes with a reduced thickness and an applied electric potential can enhance the effective diffusivity and electro-migration flux of the respective uremic toxins by three orders of magnitude compared to those passing through a high-flux hemodialyzer.

  12. Monitoring WLCG with lambda-architecture: a new scalable data store and analytics platform for monitoring at petabyte scale.

    CERN Document Server

    Magnoni, L; Cordeiro, C; Georgiou, M; Andreeva, J; Khan, A; Smith, D R

    2015-01-01

    Monitoring the WLCG infrastructure requires the gathering and analysis of a high volume of heterogeneous data (e.g. data transfers, job monitoring, site tests) coming from different services and experiment-specific frameworks, to provide a uniform and flexible interface for scientists and sites. The current architecture, where relational database systems are used to store, process and serve monitoring data, has limitations in coping with the foreseen increase in the volume (e.g. higher LHC luminosity) and the variety (e.g. new data-transfer protocols and new resource types, such as cloud computing) of WLCG monitoring events. This paper presents a new scalable data store and analytics platform designed by the Support for Distributed Computing (SDC) group, at the CERN IT department, which uses a variety of technologies, each one targeting specific aspects of big-scale distributed data processing (commonly referred to as the lambda-architecture approach). Results of data processing on Hadoop for WLCG data activities mon...
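
    A toy illustration of the lambda-architecture idea referenced above (not CERN's code): queries are served by merging a precomputed batch view with a small, recent speed-layer view:

        # Lambda-architecture sketch: the serving layer answers queries with the
        # batch view (e.g. precomputed on Hadoop) plus realtime increments.
        from collections import Counter

        batch_view = Counter({"site_A": 10_000, "site_B": 8_500})   # nightly batch job
        speed_view = Counter({"site_A": 42, "site_C": 7})           # recent events

        def transfers_per_site() -> dict:
            # Merge: batch results dominated by volume, speed layer adds freshness.
            return dict(batch_view + speed_view)

        print(transfers_per_site())   # {'site_A': 10042, 'site_B': 8500, 'site_C': 7}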

  13. Silicon nanophotonics for scalable quantum coherent feedback networks

    International Nuclear Information System (INIS)

    Sarovar, Mohan; Brif, Constantin; Soh, Daniel B.S.; Cox, Jonathan; DeRose, Christopher T.; Camacho, Ryan; Davids, Paul

    2016-01-01

    The emergence of coherent quantum feedback control (CQFC) as a new paradigm for precise manipulation of dynamics of complex quantum systems has led to the development of efficient theoretical modeling and simulation tools and opened avenues for new practical implementations. This work explores the applicability of the integrated silicon photonics platform for implementing scalable CQFC networks. If proven successful, on-chip implementations of these networks would provide scalable and efficient nanophotonic components for autonomous quantum information processing devices and ultra-low-power optical processing systems at telecommunications wavelengths. We analyze the strengths of the silicon photonics platform for CQFC applications and identify the key challenges to both the theoretical formalism and experimental implementations. In particular, we determine specific extensions to the theoretical CQFC framework (which was originally developed with bulk-optics implementations in mind), required to make it fully applicable to modeling of linear and nonlinear integrated optics networks. We also report the results of a preliminary experiment that studied the performance of an in situ controllable silicon nanophotonic network of two coupled cavities and analyze the properties of this device using the CQFC formalism. (orig.)

  14. Silicon nanophotonics for scalable quantum coherent feedback networks

    Energy Technology Data Exchange (ETDEWEB)

    Sarovar, Mohan; Brif, Constantin [Sandia National Laboratories, Livermore, CA (United States); Soh, Daniel B.S. [Sandia National Laboratories, Livermore, CA (United States); Stanford University, Edward L. Ginzton Laboratory, Stanford, CA (United States); Cox, Jonathan; DeRose, Christopher T.; Camacho, Ryan; Davids, Paul [Sandia National Laboratories, Albuquerque, NM (United States)

    2016-12-15

    The emergence of coherent quantum feedback control (CQFC) as a new paradigm for precise manipulation of dynamics of complex quantum systems has led to the development of efficient theoretical modeling and simulation tools and opened avenues for new practical implementations. This work explores the applicability of the integrated silicon photonics platform for implementing scalable CQFC networks. If proven successful, on-chip implementations of these networks would provide scalable and efficient nanophotonic components for autonomous quantum information processing devices and ultra-low-power optical processing systems at telecommunications wavelengths. We analyze the strengths of the silicon photonics platform for CQFC applications and identify the key challenges to both the theoretical formalism and experimental implementations. In particular, we determine specific extensions to the theoretical CQFC framework (which was originally developed with bulk-optics implementations in mind), required to make it fully applicable to modeling of linear and nonlinear integrated optics networks. We also report the results of a preliminary experiment that studied the performance of an in situ controllable silicon nanophotonic network of two coupled cavities and analyze the properties of this device using the CQFC formalism. (orig.)

  15. A scalable approach to modeling groundwater flow on massively parallel computers

    International Nuclear Information System (INIS)

    Ashby, S.F.; Falgout, R.D.; Tompson, A.F.B.

    1995-12-01

    We describe a fully scalable approach to the simulation of groundwater flow on a hierarchy of computing platforms, ranging from workstations to massively parallel computers. Specifically, we advocate the use of scalable conceptual models in which the subsurface model is defined independently of the computational grid on which the simulation takes place. We also describe a scalable multigrid algorithm for computing the groundwater flow velocities. We are thus able to leverage both the engineer's time spent developing the conceptual model and the computing resources used in the numerical simulation. We have successfully employed this approach at the LLNL site, where we have run simulations ranging in size from just a few thousand spatial zones (on workstations) to more than eight million spatial zones (on the CRAY T3D), all using the same conceptual model.
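
    The key idea of a scalable conceptual model can be sketched in a few lines: the subsurface is described by a grid-independent function, which is then sampled onto whatever grid a given run uses (the conductivity values below are invented for illustration):

        # Grid-independent conceptual model: the same description serves a coarse
        # workstation run and a fine massively-parallel run.
        import numpy as np

        def hydraulic_conductivity(x, y):
            """Conceptual model: a high-K channel crossing a low-K background."""
            return np.where(np.abs(y - 0.5) < 0.1, 1e-3, 1e-6)   # made-up values

        def sample_on_grid(n: int) -> np.ndarray:
            # n=50 on a workstation, n=2000+ on a parallel machine: same model.
            xs, ys = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
            return hydraulic_conductivity(xs, ys)

        coarse, fine = sample_on_grid(50), sample_on_grid(2000)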

  16. MOLNs: a cloud platform for interactive, reproducible, and scalable spatial stochastic computational experiments in systems biology using PyURDME.

    Science.gov (United States)

    Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas

    2016-01-01

    Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments.
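
    PyURDME's actual modeling API is not reproduced here; as a generic stand-in for the kind of stochastic kinetics such tools simulate (ignoring the spatial dimension), a minimal Gillespie stochastic simulation of a birth-death process looks like this:

        # Minimal Gillespie SSA for a birth-death process -- a generic stand-in
        # illustrating Monte Carlo stochastic simulation, not PyURDME's API.
        import random

        def gillespie(k_birth=10.0, k_death=0.1, n0=0, t_end=100.0):
            t, n, traj = 0.0, n0, []
            while t < t_end:
                rates = [k_birth, k_death * n]
                total = sum(rates)
                if total == 0:
                    break
                t += random.expovariate(total)          # time to next reaction
                n += 1 if random.random() * total < rates[0] else -1
                traj.append((t, n))
            return traj

        # Steady state fluctuates around k_birth / k_death = 100 molecules.
        trajectory = gillespie()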

  17. Nanofabrication of Plasmonic Circuits Containing Single Photon Sources

    DEFF Research Database (Denmark)

    Siampour, Hamidreza; Kumar, Shailesh; Bozhevolnyi, Sergey I.

    2017-01-01

    Nanofabrication of photonic components based on dielectric loaded surface plasmon polariton waveguides (DLSPPWs) excited by single nitrogen vacancy (NV) centers in nanodiamonds is demonstrated. DLSPPW circuits are built around NV containing nanodiamonds, which are certified to be single-photon emitters, using electron-beam lithography of hydrogen silsesquioxane (HSQ) resist on silver-coated silicon substrates. A propagation length of 20 ± 5 μm for the NV single-photon emission is measured with DLSPPWs. A 5-fold enhancement in the total decay rate, and 58% coupling efficiency to the DLSPPW mode...

  18. NANOFILM - New metallic nanocomposites for micro and nanofabrication

    DEFF Research Database (Denmark)

    Fischer, Søren Vang

    during the heat treatment or UV irradiation after spin coating. It was possible to dissolve the gold precursor directly into the photoresist, but nanoparticles with a large size distribution were formed within a time frame of 20 s. This made further processes such as spinning and formation... or catalysts. The possibility to effectively structure the nanocomposites is, however, a limiting factor. In this project, gold and silver nanocomposites of the UV sensitive photoresist SU-8 have been fabricated, which can be deposited and structured using standard micro and nanofabrication processes...

  19. Scalable Multifunction RF Systems: Combined vs. Separate Transmit and Receive Arrays

    NARCIS (Netherlands)

    Huizing, A.G.

    2008-01-01

    A scalable multifunction RF (SMRF) system allows the RF functionality (radar, electronic warfare and communications) to be easily extended and the RF performance to be scaled to the requirements of different missions and platforms. This paper presents the results of a trade-off study with respect to

  20. Wideband vs. Multiband Trade-offs for a Scalable Multifunction RF system

    NARCIS (Netherlands)

    Huizing, A.G.

    2005-01-01

    This paper presents a concept for a scalable multifunction RF (SMRF) system that allows the RF functionality (radar, electronic warfare and communications) to be easily extended and the RF performance to be scaled to the requirements of different missions and platforms. A trade-off analysis is

  1. Making Spatial Statistics Service Accessible On Cloud Platform

    OpenAIRE

    Mu, X.; Wu, J.; Li, T; Zhong, Y.; Gao, X.

    2014-01-01

    Web services can bring together applications running on diverse platforms; users can access and share various data, information and models more effectively and conveniently from a web service platform. Cloud computing emerges as a paradigm of Internet computing in which dynamic, scalable and often virtualized resources are provided as services. With the rapid growth of massive data and the restrictions of the network, traditional web service platforms have some prominent problems existi...

  2. An Embedded Software Platform for Distributed Automotive Environment Management

    Directory of Open Access Journals (Sweden)

    Seepold Ralf

    2009-01-01

    Full Text Available This paper discusses an innovative extension of current vehicle platforms that integrates intelligent environments in order to carry out e-safety tasks that improve driving security. These platforms are dedicated to automotive environments, which are characterized by sensor networks deployed along the vehicles. Since this kind of platform infrastructure is hardly extensible and forms a non-scalable process unit, an embedded OSGi-based UPnP platform extension is proposed in this article. Such an extension deploys a compatible and scalable uniform environment that allows managing the heterogeneity of vehicle components and provides plug-and-play support, being compatible with all kinds of devices and sensors located in a car network. Furthermore, the extension allows auto-registering any kind of external device, wherever it is located, providing the in-vehicle system with additional services and data supplied by it. This extension also supports service provisioning and connections to external and remote network services using SIP technology.

  3. Scientific visualization uncertainty, multifield, biomedical, and scalable visualization

    CERN Document Server

    Chen, Min; Johnson, Christopher; Kaufman, Arie; Hagen, Hans

    2014-01-01

    Based on the seminar that took place in Dagstuhl, Germany in June 2011, this contributed volume studies four important topics within the scientific visualization field: uncertainty visualization, multifield visualization, biomedical visualization and scalable visualization. • Uncertainty visualization deals with uncertain data from simulations or sampled data, uncertainty due to the mathematical processes operating on the data, and uncertainty in the visual representation. • Multifield visualization addresses the need to depict multiple data at individual locations and the combination of multiple datasets. • Biomedical visualization is a vast field, with select subtopics addressed ranging from scanning methodologies to structural and biological applications. • Scalability in scientific visualization is critical as data grows and computational devices range from hand-held mobile devices to exascale computational platforms. Scientific Visualization will be useful to practitioners of scientific visualization, ...

  4. Femtosecond laser three-dimensional micro- and nanofabrication

    Energy Technology Data Exchange (ETDEWEB)

    Sugioka, Koji, E-mail: ksugioka@riken.jp [RIKEN Center for Advanced Photonics, Hirosawa 2-1, Wako, Saitama 351-0198 (Japan); Cheng, Ya, E-mail: ya.cheng@siom.ac.cn [Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, P.O. Box 800-211, Shanghai 201800 (China)

    2014-12-15

    The rapid development of the femtosecond laser has revolutionized materials processing due to its unique characteristics of ultrashort pulse width and extremely high peak intensity. The short pulse width suppresses the formation of a heat-affected zone, which is vital for ultrahigh precision fabrication, whereas the high peak intensity allows nonlinear interactions such as multiphoton absorption and tunneling ionization to be induced in transparent materials, which provides versatility in terms of the materials that can be processed. More interestingly, irradiation with tightly focused femtosecond laser pulses inside transparent materials makes three-dimensional (3D) micro- and nanofabrication available due to efficient confinement of the nonlinear interactions within the focal volume. Additive manufacturing (stereolithography) based on multiphoton absorption (two-photon polymerization) enables the fabrication of 3D polymer micro- and nanostructures for photonic devices, micro- and nanomachines, and microfluidic devices, and has applications for biomedical and tissue engineering. Subtractive manufacturing based on internal modification and fabrication can realize the direct fabrication of 3D microfluidics, micromechanics, microelectronics, and photonic microcomponents in glass. These microcomponents can be easily integrated in a single glass microchip by a simple procedure using a femtosecond laser to realize more functional microdevices, such as optofluidics and integrated photonic microdevices. The highly localized multiphoton absorption of a tightly focused femtosecond laser in glass can also induce strong absorption only at the interface of two closely stacked glass substrates. Consequently, glass bonding can be performed based on fusion welding with femtosecond laser irradiation, which provides the potential for applications in electronics, optics, microelectromechanical systems, medical devices, microfluidic devices, and small satellites. This review paper

  5. Nanofluidic channels of arbitrary shapes fabricated by tip-based nanofabrication

    International Nuclear Information System (INIS)

    Hu, Huan; Cunningham, Brian T; King, William P; Zhuo, Yue; Oruc, Muhammed E

    2014-01-01

    Nanofluidic channels have promising applications in biomolecule manipulation and sensing. While several different methods of fabrication have been demonstrated for nanofluidic channels, a rapid, low-cost fabrication method that can produce arbitrary shapes of nanofluidic channels is still in demand. Here, we report a tip-based nanofabrication (TBN) method for fabricating nanofluidic channels using a heated atomic force microscopy (AFM) tip. The heated AFM tip deposits polymer nanowires where needed to serve as an etch mask for fabricating silicon molds in a single etching step. PDMS nanofluidic channels are easily fabricated through replica molding using the silicon molds. Various shapes of nanofluidic channels with either straight or curvilinear features are demonstrated. The width of the nanofluidic channels is 500 nm, determined by the deposited polymer nanowire width; the height is 400 nm, determined by the silicon etching time. Ion conductance measurements on a single curvilinear nanofluidic channel exhibit the typical ion conductance saturation as the ion concentration decreases. Moreover, fluorescence imaging of fluid flowing through a fabricated nanofluidic channel demonstrates the channel integrity. This TBN process is seamlessly compatible with existing nanofabrication processes and can be used to achieve new types of nanofluidic devices. (paper)

  6. NADIM-Travel: A Multiagent Platform for Travel Services Aggregation

    OpenAIRE

    Ben Ameur, Houssein; Bédard, François; Vaucher, Stéphane; Kropf, Peter; Chaib-draaa, Brahim; Gérin-Lajoie, Robert

    2010-01-01

    With the Internet as a growing channel for travel services distribution, sophisticated travel services aggregators are increasingly in demand. A travel services aggregation platform should be able to manage the heterogeneous characteristics of the many existing travel services. It should also be as scalable, robust, and flexible as possible. Using multiagent technology, we designed and implemented a multiagent platform for travel services aggregation called NADIM-Travel. In this platform, a p...

  7. Low-cost fabrication technologies for nanostructures: state-of-the-art and potential

    International Nuclear Information System (INIS)

    Santos, A; Deen, M J; Marsal, L F

    2015-01-01

    In the last decade, some low-cost nanofabrication technologies used in several disciplines of nanotechnology have demonstrated promising results in terms of versatility and scalability for producing innovative nanostructures. While conventional nanofabrication technologies such as photolithography are and will be an important part of nanofabrication, some low-cost nanofabrication technologies have demonstrated outstanding capabilities for large-scale production, providing high throughputs with acceptable resolution and broad versatility. Some of these nanotechnological approaches are reviewed in this article, providing information about the fundamentals, limitations and potential future developments towards nanofabrication processes capable of producing a broad range of nanostructures. Furthermore, in many cases, these low-cost nanofabrication approaches can be combined with traditional nanofabrication technologies. This combination is considered a promising way of generating innovative nanostructures suitable for a broad range of applications such as in opto-electronics, nano-electronics, photonics, sensing, biotechnology or medicine. (topical review)

  8. Low-cost fabrication technologies for nanostructures: state-of-the-art and potential

    Science.gov (United States)

    Santos, A.; Deen, M. J.; Marsal, L. F.

    2015-01-01

    In the last decade, some low-cost nanofabrication technologies used in several disciplines of nanotechnology have demonstrated promising results in terms of versatility and scalability for producing innovative nanostructures. While conventional nanofabrication technologies such as photolithography are and will be an important part of nanofabrication, some low-cost nanofabrication technologies have demonstrated outstanding capabilities for large-scale production, providing high throughputs with acceptable resolution and broad versatility. Some of these nanotechnological approaches are reviewed in this article, providing information about the fundamentals, limitations and potential future developments towards nanofabrication processes capable of producing a broad range of nanostructures. Furthermore, in many cases, these low-cost nanofabrication approaches can be combined with traditional nanofabrication technologies. This combination is considered a promising way of generating innovative nanostructures suitable for a broad range of applications such as in opto-electronics, nano-electronics, photonics, sensing, biotechnology or medicine.

  9. On the Scalability of Time-predictable Chip-Multiprocessing

    DEFF Research Database (Denmark)

    Puffitsch, Wolfgang; Schoeberl, Martin

    2012-01-01

    Real-time systems need a time-predictable execution platform to be able to determine the worst-case execution time statically. In order to be time-predictable, several advanced processor features, such as out-of-order execution and other forms of speculation, have to be avoided. However, just using simple processors is not an option for embedded systems with high demands on computing power. In order to provide high performance and predictability we argue to use multiprocessor systems with a time-predictable memory interface. In this paper we present the scalability of a Java chip-multiprocessor system that is designed to be time-predictable. Adding time-predictable caches is mandatory to achieve scalability with a shared memory multi-processor system. As Java bytecode retains information about the nature of memory accesses, it is possible to implement a memory hierarchy that takes

  10. Privacy-Preserving and Scalable Service Recommendation Based on SimHash in a Distributed Cloud Environment

    Directory of Open Access Journals (Sweden)

    Yanwei Xu

    2017-01-01

    Full Text Available With the increasing volume of web services in the cloud environment, Collaborative Filtering (CF) based service recommendation has become one of the most effective techniques to alleviate the heavy burden on the service selection decisions of a target user. However, the service recommendation bases, that is, historical service usage data, are often distributed across different cloud platforms. Two challenges are present in such a cross-cloud service recommendation scenario. First, a cloud platform is often not willing to share its data with other cloud platforms due to privacy concerns, which decreases the feasibility of cross-cloud service recommendation severely. Second, the historical service usage data recorded in each cloud platform may update over time, which reduces the recommendation scalability significantly. In view of these two challenges, a novel privacy-preserving and scalable service recommendation approach based on SimHash, named SerRecSimHash, is proposed in this paper. Finally, through a set of experiments deployed on a real distributed service quality dataset, WS-DREAM, we validate the feasibility of our proposal in terms of recommendation accuracy and efficiency while guaranteeing privacy preservation.
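
    The core SimHash idea underpinning the privacy and scalability claims is that near-identical usage profiles map to fingerprints with small Hamming distance, so platforms can compare similarity without exchanging raw records. A minimal sketch (feature names and weights invented; this is generic SimHash, not the SerRecSimHash algorithm itself):

        # Generic SimHash: weighted features -> compact fingerprint; similar inputs
        # yield fingerprints with small Hamming distance.
        import hashlib

        def simhash(features: dict, bits: int = 64) -> int:
            vec = [0.0] * bits
            for feat, weight in features.items():
                h = int(hashlib.md5(feat.encode()).hexdigest(), 16)
                for i in range(bits):
                    vec[i] += weight if (h >> i) & 1 else -weight
            return sum(1 << i for i in range(bits) if vec[i] > 0)

        def hamming(a: int, b: int) -> int:
            return bin(a ^ b).count("1")

        # Two nearly identical (hypothetical) usage profiles compare as similar.
        u1 = simhash({"svc_storage": 5, "svc_maps": 3, "svc_auth": 1})
        u2 = simhash({"svc_storage": 5, "svc_maps": 2, "svc_auth": 1})
        print(hamming(u1, u2))   # small distance -> similar users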

  11. PM2006: a highly scalable urban planning management information system--Case study: Suzhou Urban Planning Bureau

    Science.gov (United States)

    Jing, Changfeng; Liang, Song; Ruan, Yong; Huang, Jie

    2008-10-01

    During the urbanization process, when facing complex requirements of city development, ever-growing urban data, rapid development of planning business and increasing planning complexity, a scalable, extensible urban planning management information system is needed urgently. PM2006 is such a system that can deal with these problems. In response to the status and problems in urban planning, the scalability and extensibility of PM2006 are introduced; they can be seen in its business-oriented workflow extensibility, the scalability of its DLL-based architecture, its flexibility regarding GIS and database platforms, the scalability of its data updating and maintenance, and so on. It is verified that the PM2006 system has good extensibility and scalability, meeting the requirements of all levels of administrative divisions and adapting to ever-growing changes in urban planning business. At the end of this paper, the application of PM2006 in the Urban Planning Bureau of Suzhou city is described.

  12. Scalable Simulation of Electromagnetic Hybrid Codes

    International Nuclear Information System (INIS)

    Perumalla, Kalyan S.; Fujimoto, Richard; Karimabadi, Dr. Homa

    2006-01-01

    New discrete-event formulations of physics simulation models are emerging that can outperform models based on traditional time-stepped techniques. Detailed simulation of the Earth's magnetosphere, for example, requires execution of sub-models that are at widely differing timescales. In contrast to time-stepped simulation which requires tightly coupled updates to entire system state at regular time intervals, the new discrete event simulation (DES) approaches help evolve the states of sub-models on relatively independent timescales. However, parallel execution of DES-based models raises challenges with respect to their scalability and performance. One of the key challenges is to improve the computation granularity to offset synchronization and communication overheads within and across processors. Our previous work was limited in scalability and runtime performance due to the parallelization challenges. Here we report on optimizations we performed on DES-based plasma simulation models to improve parallel performance. The net result is the capability to simulate hybrid particle-in-cell (PIC) models with over 2 billion ion particles using 512 processors on supercomputing platforms
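
    The discrete-event core that lets sub-models evolve on independent timescales can be sketched generically with a priority queue of timestamped events (this is a schematic illustration, not the hybrid PIC code itself):

        # Generic discrete-event simulation kernel: a heap of (time, name, handler)
        # events; handlers may schedule new events by returning more tuples.
        import heapq

        def run(events, until: float) -> None:
            heapq.heapify(events)
            while events and events[0][0] <= until:
                t, name, handler = heapq.heappop(events)
                for new_event in handler(t) or []:
                    heapq.heappush(events, new_event)

        # Two processes with very different natural timescales share one queue,
        # each advancing independently instead of on a global time step.
        def fast(t):
            return [(t + 0.01, "fast", fast)] if t < 0.1 else []

        def slow(t):
            return [(t + 1.0, "slow", slow)] if t < 5 else []

        run([(0.0, "fast", fast), (0.0, "slow", slow)], until=10.0)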

  13. Final Report. Center for Scalable Application Development Software

    Energy Technology Data Exchange (ETDEWEB)

    Mellor-Crummey, John [Rice Univ., Houston, TX (United States)

    2014-10-26

    The Center for Scalable Application Development Software (CScADS) was established as a partnership between Rice University, Argonne National Laboratory, University of California Berkeley, University of Tennessee – Knoxville, and University of Wisconsin – Madison. CScADS pursued an integrated set of activities with the aim of increasing the productivity of DOE computational scientists by catalyzing the development of systems software, libraries, compilers, and tools for leadership computing platforms. Principal Center activities were workshops to engage the research community in the challenges of leadership computing, research and development of open-source software, and work with computational scientists to help them develop codes for leadership computing platforms. This final report summarizes CScADS activities at Rice University in these areas.

  14. Scalable domain decomposition solvers for stochastic PDEs in high performance computing

    International Nuclear Information System (INIS)

    Desai, Ajit; Pettit, Chris; Poirel, Dominique; Sarkar, Abhijit

    2017-01-01

    Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems, or linearized systems for non-linear problems, with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems, and though these algorithms exhibit excellent scalability, significant algorithmic and implementation challenges exist in extending them to solve extreme-scale stochastic systems on emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both the spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain-level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having a spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.
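
    As a deterministic, one-dimensional illustration of the domain-decomposition idea behind such solvers (not the intrusive polynomial chaos implementation described above), an alternating overlapping Schwarz iteration for -u'' = f splits the interval into two overlapping subdomains and solves each in turn with the other's latest values as boundary data:

        # Toy overlapping Schwarz iteration for -u'' = 1 on [0, 1], u(0) = u(1) = 0.
        # The exact solution is u(x) = x(1 - x)/2.
        import numpy as np

        n, overlap = 101, 5
        x = np.linspace(0, 1, n)
        f = np.ones(n)                    # right-hand side
        u = np.zeros(n)                   # initial guess; boundary values stay 0
        h = x[1] - x[0]

        def solve_subdomain(lo: int, hi: int, u: np.ndarray) -> np.ndarray:
            """Direct solve on interior points lo+1..hi-1 with current boundaries."""
            m = hi - lo - 1
            A = (np.diag(2 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
                 - np.diag(np.ones(m - 1), -1)) / h**2
            b = f[lo + 1:hi].copy()
            b[0] += u[lo] / h**2          # Dirichlet data from the other subdomain
            b[-1] += u[hi] / h**2
            return np.linalg.solve(A, b)

        mid = n // 2
        for _ in range(20):               # alternating Schwarz sweeps
            u[1:mid + overlap] = solve_subdomain(0, mid + overlap, u)
            u[mid - overlap + 1:n - 1] = solve_subdomain(mid - overlap, n - 1, u)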

  15. Photofunctional polyurethane nanofabrics doped by zinc tetraphenylporphyrin and zinc phthalocyanine photosensitizers

    Czech Academy of Sciences Publication Activity Database

    Mosinger, Jiří; Lang, Kamil; Kubát, Pavel; Sýkora, Jan; Hof, Martin; Plíštil, J.; Mosinger, B.

    2009-01-01

    Vol. 19, No. 4 (2009), pp. 705-713. ISSN 1053-0509. R&D Projects: GA ČR GA203/08/0831; GA ČR GA203/07/1424; GA ČR(CZ) GA203/06/1244. Institutional research plan: CEZ:AV0Z40320502; CEZ:AV0Z40400503. Keywords: singlet oxygen; nanofabric; energy transfer. Subject RIV: CF - Physical; Theoretical Chemistry. Impact factor: 2.017, year: 2009

  16. From nanofabrication to self-fabrication--tailored chemistry for control of single molecule electronic devices

    DEFF Research Database (Denmark)

    Moth-Poulsen, Kasper; Bjørnholm, Thomas

    2010-01-01

    as alternatives to the dominant top-down nanofabrication techniques. One example is solution-based self-assembly of a molecule enclosed by two gold nanorod electrodes. This article will discuss recent attempts to control the self-assembly process by the use of supramolecular chemistry and how to tailor...

  17. X-ray lithography for micro- and nano-fabrication at ELETTRA for interdisciplinary applications

    International Nuclear Information System (INIS)

    Di Fabrizio, E; Fillipo, R; Cabrini, S

    2004-01-01

    ELETTRA (http://www.elettra.trieste.it/index.html) is a third generation synchrotron radiation source facility operating at Trieste, Italy, and hosts a wide range of research activities in advanced materials analysis and processing, biology and nano-science at its various beam lines. The energy spectrum of ELETTRA allows x-ray nano-lithography using soft (1.5 keV) and hard (10 keV) x-ray wavelengths. The Laboratory for Interdisciplinary Lithography (LILIT) was established in 1998 as part of an Italian national initiative on micro- and nano-technology of INFM, and is funded and supported by the Italian National Research Council (CNR), INFM and ELETTRA. LILIT has developed two dedicated lithographic beam lines, for soft (1.5 keV) and hard (10 keV) x-rays, for micro- and nano-fabrication activities with applications in engineering, science and bio-medicine. In this paper, we present a summary of our research activities in micro- and nano-fabrication involving x-ray nanolithography at LILIT's soft and hard x-ray beam lines.

  18. Compact Submillimeter-Wave Receivers Made with Semiconductor Nano-Fabrication Technologies

    Science.gov (United States)

    Jung, C.; Thomas, B.; Lee, C.; Peralta, A.; Chattopadhyay, G.; Gill, J.; Cooper, K.; Mehdi, I.

    2011-01-01

    Advanced semiconductor nanofabrication techniques are utilized to design, fabricate and demonstrate a super-compact, low-mass (<10 grams) submillimeter-wave heterodyne front-end. RF elements such as waveguides and channels are fabricated in a silicon wafer substrate using deep reactive ion etching (DRIE). Etched patterns with sidewall angles controlled to 1 degree precision are reported, while maintaining a surface roughness better than 20 nm rms for the etched structures. This approach is being developed to build compact 2-D imaging arrays in the THz frequency range.

  19. Focused ion beam machining and deposition for nanofabrication

    Energy Technology Data Exchange (ETDEWEB)

    Davies, S T; Khamsehpour, B [Warwick Univ., Coventry (United Kingdom). Dept. of Engineering

    1996-05-01

    Focused ion beam micromachining (FIBM) and focused ion beam deposition (FIBD) enable spatially selective, maskless patterning and processing of materials at extremely high levels of resolution. State-of-the-art focused ion beam (FIB) columns based on high brightness liquid metal ion source (LMIS) technology are capable of forming probes with dimensions of order 10 nm, with a lower limit on spot size set by the inherent energy spread of the LMIS and the chromatic aberration of ion optical systems. The combination of high lateral and depth resolution makes FIBM and FIBD powerful tools for nanotechnology applications. In this paper we present some methods of controlling FIBM and FIBD processes for nanofabrication purposes and discuss their limitations. (author).

  20. Scalable devices

    KAUST Repository

    Krüger, Jens J.

    2014-01-01

    In computer science in general, and in the field of high performance computing and supercomputing in particular, the term scalable plays an important role. It indicates that a piece of hardware, a concept, an algorithm, or an entire system scales with the size of the problem, i.e., it can not only be used in a very specific setting but is applicable to a wide range of problems, from small scenarios to possibly very large settings. In this spirit, there exist a number of fixed areas of research on scalability. There are works on scalable algorithms and scalable architectures, but what are scalable devices? In the context of this chapter, we are interested in a whole range of display devices, ranging from small scale hardware such as tablet computers, pads, smart-phones etc. up to large tiled display walls. What interests us mostly is not so much the hardware setup but rather the visualization algorithms behind these display systems that scale from your average smart phone up to the largest gigapixel display walls.

  1. geoKepler Workflow Module for Computationally Scalable and Reproducible Geoprocessing and Modeling

    Science.gov (United States)

    Cowart, C.; Block, J.; Crawl, D.; Graham, J.; Gupta, A.; Nguyen, M.; de Callafon, R.; Smarr, L.; Altintas, I.

    2015-12-01

    The NSF-funded WIFIRE project has developed an open-source, online geospatial workflow platform for unifying geoprocessing tools and models for fire and other geospatially dependent modeling applications. It is a product of WIFIRE's objective to build an end-to-end cyberinfrastructure for real-time and data-driven simulation, prediction and visualization of wildfire behavior. geoKepler includes a set of reusable GIS components, or actors, for the Kepler Scientific Workflow System (https://kepler-project.org). Actors exist for reading and writing GIS data in formats such as Shapefile, GeoJSON, and KML, and for using OGC web services such as WFS. The actors also allow for calling geoprocessing tools in other packages such as GDAL and GRASS. Kepler integrates functions from multiple platforms and file formats into one framework, thus enabling optimal GIS interoperability, model coupling, and scalability. Products of the GIS actors can be fed directly to models such as FARSITE and WRF. Kepler's ability to schedule and scale processes using Hadoop and Spark also makes geoprocessing extensible and computationally scalable. The reusable workflows in geoKepler can be made to run automatically when alerted by real-time environmental conditions. Here, we show breakthroughs in the speed of creating complex data for hazard assessments with this platform. We also demonstrate geoKepler workflows that use data assimilation to ingest real-time weather data into wildfire simulations, and data mining techniques to gain insight into environmental conditions affecting fire behavior. Existing machine learning tools and libraries such as R and MLlib are being leveraged for this purpose in Kepler, as well as Kepler's Distributed Data Parallel (DDP) capability to provide a framework for scalable processing. geoKepler workflows can be executed via an iPython notebook as part of a Jupyter hub at UC San Diego for sharing and reporting of the scientific analysis and results from

  2. Use of the NetBeans Platform for NASA Robotic Conjunction Assessment Risk Analysis

    Science.gov (United States)

    Sabey, Nickolas J.

    2014-01-01

    The latest Java and JavaFX technologies are very attractive software platforms for customers involved in space mission operations such as those of NASA and the US Air Force. For NASA Robotic Conjunction Assessment Risk Analysis (CARA), the NetBeans platform provided an environment in which scalable software solutions could be developed quickly and efficiently. Both Java 8 and the NetBeans platform are in the process of simplifying CARA development in secure environments by providing a significant amount of capability in a single accredited package, where accreditation alone can account for 6-8 months for each library or software application. Capabilities either in use or being investigated by CARA include: 2D and 3D displays with JavaFX, parallelization with the new Streams API, and scalability through the NetBeans plugin architecture.

  3. Content-Aware Scalability-Type Selection for Rate Adaptation of Scalable Video

    Directory of Open Access Journals (Sweden)

    Tekalp A Murat

    2007-01-01

    Full Text Available Scalable video coders provide different scaling options, such as temporal, spatial, and SNR scalabilities, where rate reduction by discarding enhancement layers of different scalability types results in different kinds and/or levels of visual distortion depending on the content and bitrate. This dependency between scalability type, video content, and bitrate is not well investigated in the literature. To this effect, we first propose an objective function that quantifies the flatness, blockiness, blurriness, and temporal jerkiness artifacts caused by rate reduction via spatial size, frame rate, and quantization parameter scaling. Next, the weights of this objective function are determined for different content (shot) types and different bitrates using a training procedure with subjective evaluation. Finally, a method is proposed for choosing the best scaling type for each temporal segment, namely the one that results in minimum visual distortion according to this objective function given the content type of the temporal segment. Two subjective tests have been performed to validate the proposed procedure for content-aware selection of the best scalability type on soccer videos. Soccer videos scaled from 600 kbps to 100 kbps by the proposed content-aware selection of scalability type have been found visually superior to those scaled using a single scalability option over the whole sequence.
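
    The selection rule itself is straightforward once the weights are trained; the sketch below uses invented weights and artifact scores purely to illustrate picking, per segment, the scalability type with minimum weighted distortion:

        # Illustrative selection rule: minimize a weighted sum of artifact scores.
        # Weights and per-option measurements are placeholders, not trained values.
        WEIGHTS = {"soccer_wide": {"flat": 0.1, "block": 0.4, "blur": 0.2, "jerk": 0.3}}

        def distortion(weights: dict, artifacts: dict) -> float:
            return sum(weights[k] * artifacts[k] for k in weights)

        def best_scaling(shot_type: str, candidates: dict) -> str:
            w = WEIGHTS[shot_type]
            return min(candidates, key=lambda s: distortion(w, candidates[s]))

        # Hypothetical artifact measurements for one segment at the target bitrate.
        options = {
            "temporal": {"flat": 0.2, "block": 0.1, "blur": 0.1, "jerk": 0.9},
            "spatial":  {"flat": 0.3, "block": 0.2, "blur": 0.8, "jerk": 0.1},
            "SNR":      {"flat": 0.6, "block": 0.7, "blur": 0.3, "jerk": 0.1},
        }
        print(best_scaling("soccer_wide", options))   # -> "spatial" for these numbers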

  4. Real-time nanofabrication with high-speed atomic force microscopy

    International Nuclear Information System (INIS)

    Vicary, J A; Miles, M J

    2009-01-01

    The ability to follow nanoscale processes in real time has obvious benefits for the future of materials science. In particular, the ability to evaluate the success of fabrication processes in situ would be an advantage for many in the semiconductor industry. We report on the application of a previously described high-speed atomic force microscope (AFM) for nanofabrication. The specific fabrication method presented here concerns the modification of a silicon surface by locally oxidizing the region in the vicinity of the AFM tip. Oxide features were fabricated during imaging, with relative tip-sample velocities of up to 10 cm/s, and with a data capture rate of 15 fps.

  5. The COMET Sleep Research Platform.

    Science.gov (United States)

    Nichols, Deborah A; DeSalvo, Steven; Miller, Richard A; Jónsson, Darrell; Griffin, Kara S; Hyde, Pamela R; Walsh, James K; Kushida, Clete A

    2014-01-01

    The Comparative Outcomes Management with Electronic Data Technology (COMET) platform is extensible and designed for facilitating multicenter electronic clinical research. Our research goals were the following: (1) to conduct a comparative effectiveness trial (CET) for two obstructive sleep apnea treatments-positive airway pressure versus oral appliance therapy; and (2) to establish a new electronic network infrastructure that would support this study and other clinical research studies. The COMET platform was created to satisfy the needs of CET with a focus on creating a platform that provides comprehensive toolsets, multisite collaboration, and end-to-end data management. The platform also provides medical researchers the ability to visualize and interpret data using business intelligence (BI) tools. COMET is a research platform that is scalable and extensible, and which, in a future version, can accommodate big data sets and enable efficient and effective research across multiple studies and medical specialties. The COMET platform components were designed for an eventual move to a cloud computing infrastructure that enhances sustainability, overall cost effectiveness, and return on investment.

  6. Cyber Learning Platform for Nuclear Education and Training

    International Nuclear Information System (INIS)

    Vojtela, Martin

    2014-01-01

    Cyber Learning Platform for Nuclear Education and Training: … support capacity building and knowledge transfer in the nuclear sector by empowering web-based development and dissemination of high-quality learning resources in a way that is cost-effective, scalable and easy to use …

  7. Analyzing composability of applications on MPSoC platforms

    NARCIS (Netherlands)

    Kumar, A.; Mesman, B.; Theelen, B.D.; Corporaal, H.; Yajun, H.

    2008-01-01

    Modern day applications require the use of multi-processor systems for reasons of performance, scalability and power efficiency. As more and more applications are integrated in a single system, mapping and analyzing them on a multi-processor platform becomes a multidimensional problem. Each possible set

  8. Developing cloud applications using the e-Science Central platform.

    Science.gov (United States)

    Hiden, Hugo; Woodman, Simon; Watson, Paul; Cala, Jacek

    2013-01-28

    This paper describes the e-Science Central (e-SC) cloud data processing system and its application to a number of e-Science projects. e-SC provides both software as a service (SaaS) and platform as a service for scientific data management, analysis and collaboration. It is a portable system and can be deployed on both private (e.g. Eucalyptus) and public clouds (Amazon AWS and Microsoft Windows Azure). The SaaS application allows scientists to upload data, edit and run workflows and share results in the cloud, using only a Web browser. It is underpinned by a scalable cloud platform consisting of a set of components designed to support the needs of scientists. The platform is exposed to developers so that they can easily upload their own analysis services into the system and make these available to other users. A representational state transfer-based application programming interface (API) is also provided so that external applications can leverage the platform's functionality, making it easier to build scalable, secure cloud-based applications. This paper describes the design of e-SC, its API and its use in three different case studies: spectral data visualization, medical data capture and analysis, and chemical property prediction.
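
    Driving such a platform programmatically might look like the hedged sketch below; the deployment URL, endpoint paths, credentials and field names are hypothetical placeholders, not e-SC's documented API:

        # Hypothetical REST-style interaction: upload a data file, then invoke a
        # named workflow on it. Every URL and field name here is an assumption.
        import requests

        BASE = "https://esc.example.org/api"     # hypothetical deployment
        auth = ("alice", "secret")               # placeholder credentials

        with open("spectra.csv", "rb") as fh:
            doc = requests.post(f"{BASE}/documents",
                                files={"file": fh}, auth=auth).json()

        run = requests.post(f"{BASE}/workflows/spectra-analysis/invocations",
                            json={"input_document": doc["id"]}, auth=auth).json()
        print("invocation id:", run["id"])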

  9. A layered and distributed approach to platform systems control

    NARCIS (Netherlands)

    Neef, R.M.; Lieburg, A. van; Gosliga, S.P. van; Gillis, M.P.W.

    2003-01-01

    With the increasing complexity of platform systems and the ongoing demand for reduced manning comes the need for novel approaches to ship control systems. Current control and management systems fall short on maintainability, robustness and scalability. From the operator's perspective information

  10. eTRIKS platform: Conception and operation of a highly scalable cloud-based platform for translational research and applications development.

    Science.gov (United States)

    Bussery, Justin; Denis, Leslie-Alexandre; Guillon, Benjamin; Liu, Pengfeï; Marchetti, Gino; Rahal, Ghita

    2018-04-01

    We describe the genesis, design and evolution of a computing platform designed and built to improve the success rate of biomedical translational research. The eTRIKS project platform was developed with the aim of securely hosting heterogeneous types of data and providing an optimal environment to run tranSMART analytical applications. Many types of data can now be hosted, including multi-OMICS data, preclinical laboratory data and clinical information, including longitudinal data sets. During the last two years, the platform has matured into a robust translational research knowledge management system that is able to host other data mining applications and support the development of new analytical tools.

  11. ENDEAVOUR: A Scalable SDN Architecture for Real-World IXPs

    KAUST Repository

    Antichi, Gianni

    2017-10-25

    Innovation in interdomain routing has remained stagnant for over a decade. Recently, IXPs have emerged as economically-advantageous interconnection points for reducing path latencies and exchanging ever increasing traffic volumes among, possibly, hundreds of networks. Given their far-reaching implications on interdomain routing, IXPs are the ideal place to foster network innovation and extend the benefits of SDN to the interdomain level. In this paper, we present, evaluate, and demonstrate ENDEAVOUR, an SDN platform for IXPs. ENDEAVOUR can be deployed on a multi-hop IXP fabric, supports a large number of use cases, and is highly scalable while avoiding broadcast storms. Our evaluation with real data from one of the largest IXPs demonstrates the benefits and scalability of our solution: ENDEAVOUR requires around 70% fewer rules than alternative SDN solutions thanks to our rule partitioning mechanism. In addition, by providing an open source solution, we invite everyone from the community to experiment with (and improve) our implementation as well as adapt it to new use cases.

  12. Wolfram technologies as an integrated scalable platform for interactive learning

    Science.gov (United States)

    Kaurov, Vitaliy

    2012-02-01

    We rely on technology profoundly with the prospect of even greater integration in the future. Well known challenges in education are a technology-inadequate curriculum and many software platforms that are difficult to scale or interconnect. We'll review an integrated technology, much of it free, that addresses these issues for individuals and small schools as well as for universities. Topics include: Mathematica, a programming environment that offers a diverse range of functionality; natural language programming for getting started quickly and accessing data from Wolfram|Alpha; quick and easy construction of interactive courseware and scientific applications; partnering with publishers to create interactive e-textbooks; course assistant apps for mobile platforms; the computable document format (CDF); teacher-student and student-student collaboration on interactive projects and web publishing at the Wolfram Demonstrations site.

  13. Nanofabrication and Nanopatterning of Carbon Nanomaterials for Flexible Electronics

    Science.gov (United States)

    Ding, Junjun

    Stretchable electrodes have increasingly drawn attention as a vital component for flexible electronic devices. Carbon nanomaterials such as graphene and carbon nanotubes (CNTs) exhibit properties such as high mechanical flexibility and strength, optical transparency, and electrical conductivity, which are naturally required for stretchable electrodes. Graphene growth, nanopatterning, and transfer processes are important steps in using graphene as a flexible electrode. However, advances in the large-area nanofabrication and nanopatterning of carbon nanomaterials such as graphene are necessary to realize the full potential of this technology. In particular, laser interference lithography (LIL), a fast and low-cost large-area nanoscale patterning technique, shows tremendous promise for the patterning of graphene and other nanostructures for numerous applications. First, it was demonstrated that large-area nanopatterning and transfer of chemical vapor deposition (CVD) grown graphene via LIL and plasma etching provide a reliable method for placing large-area nanoengineered graphene on various target substrates. Then, to improve the electrode performance under large strain (a CVD-grown graphene sheet will naturally crack at tensile strains larger than 1%), a corrugated graphene structure on PDMS was designed, fabricated, and tested, with experimental results indicating that this approach successfully allows the graphene sheets to withstand cyclic tensile strains up to 15%. Lastly, to further enhance the performance of carbon-based stretchable electrodes, an approach was developed which coupled graphene and vertically aligned CNTs (VACNTs) on a flexible PDMS substrate. Characterization of the graphene-VACNT hybrid shows high electrical conductivity and durability through 50 cycles of loading up to 100% tensile strain. While flexible electronics promise tremendous advances in important technological areas such as healthcare, sensing, energy, and wearable electronics, continued

  14. Towards Scalable Graph Computation on Mobile Devices.

    Science.gov (United States)

    Chen, Yiqi; Lin, Zhiyuan; Pienta, Robert; Kahng, Minsuk; Chau, Duen Horng

    2014-10-01

    Mobile devices have become increasingly central to our everyday activities, due to their portability, multi-touch capabilities, and ever-improving computational power. Such attractive features have spurred research interest in leveraging mobile devices for computation. We explore a novel approach that aims to use a single mobile device to perform scalable graph computation on large graphs that do not fit in the device's limited main memory, opening up the possibility of performing on-device analysis of large datasets, without relying on the cloud. Based on the familiar memory mapping capability provided by today's mobile operating systems, our approach to scale up computation is powerful and intentionally kept simple to maximize its applicability across the iOS and Android platforms. Our experiments demonstrate that an iPad mini can perform fast computation on large real graphs with as many as 272 million edges (Google+ social graph), at a speed that is only a few times slower than a 13″ Macbook Pro. By creating a real-world iOS app with this technique, we demonstrate the strong potential of our approach for scalable graph computation on a single mobile device.

  15. Towards Scalable Graph Computation on Mobile Devices

    Science.gov (United States)

    Chen, Yiqi; Lin, Zhiyuan; Pienta, Robert; Kahng, Minsuk; Chau, Duen Horng

    2015-01-01

    Mobile devices have become increasingly central to our everyday activities, due to their portability, multi-touch capabilities, and ever-improving computational power. Such attractive features have spurred research interest in leveraging mobile devices for computation. We explore a novel approach that aims to use a single mobile device to perform scalable graph computation on large graphs that do not fit in the device's limited main memory, opening up the possibility of performing on-device analysis of large datasets, without relying on the cloud. Based on the familiar memory mapping capability provided by today's mobile operating systems, our approach to scale up computation is powerful and intentionally kept simple to maximize its applicability across the iOS and Android platforms. Our experiments demonstrate that an iPad mini can perform fast computation on large real graphs with as many as 272 million edges (Google+ social graph), at a speed that is only a few times slower than a 13″ Macbook Pro. By creating a real-world iOS app with this technique, we demonstrate the strong potential of our approach for scalable graph computation on a single mobile device. PMID:25859564
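
    The core technique in these two records is the memory-mapping trick: the edge list lives in a flat binary file and the operating system pages it in on demand, so only the working set, not the whole graph, must fit in RAM. A minimal sketch of the idea follows (the int32 source/destination pair layout is our assumption for illustration, not the authors' file format):

```python
import numpy as np

# Scan an on-disk edge list without loading it into memory: the OS pages
# the file in on demand through the memory mapping.
def degree_count(edge_file: str, num_nodes: int) -> np.ndarray:
    # Assumed layout: consecutive int32 pairs (src, dst), all ids < num_nodes.
    edges = np.memmap(edge_file, dtype=np.int32, mode="r").reshape(-1, 2)
    deg = np.zeros(num_nodes, dtype=np.int64)
    chunk = 1 << 20  # process a window at a time so residency stays bounded
    for start in range(0, len(edges), chunk):
        block = edges[start:start + chunk]
        deg += np.bincount(block[:, 0], minlength=num_nodes)
    return deg
```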

  16. Integrated Spintronic Platforms for Biomolecular Recognition Detection

    Science.gov (United States)

    Martins, V. C.; Cardoso, F. A.; Loureiro, J.; Mercier, M.; Germano, J.; Cardoso, S.; Ferreira, R.; Fonseca, L. P.; Sousa, L.; Piedade, M. S.; Freitas, P. P.

    2008-06-01

    This paper covers recent developments in magnetoresistive-based biochip platforms fabricated at INESC-MN, and their application to the detection and quantification of pathogenic waterborne microorganisms in water samples for human consumption. Such platforms are intended to respond to the increasing concern about microbially contaminated water sources. The presented results concern the development of biologically active DNA chips and protein chips and the demonstration of the detection capability of the present platforms. Two platforms are described, one including spintronic sensors only (spin-valve based or magnetic tunnel junction based), and the other a fully scalable platform where each probe site consists of an MTJ in series with a thin film diode (TFD). Two microfluidic systems are described, for cell separation and concentration, and finally the readout and control electronics are described, allowing the realization of bioassays with a portable point-of-care unit. The present platforms already allow the detection of complementary biomolecular target recognition at 1 pM concentration.

  17. Molecular engineering with artificial atoms: designing a material platform for scalable quantum spintronics and photonics

    Science.gov (United States)

    Doty, Matthew F.; Ma, Xiangyu; Zide, Joshua M. O.; Bryant, Garnett W.

    2017-09-01

    Self-assembled InAs Quantum Dots (QDs) are often called "artificial atoms" and have long been of interest as components of quantum photonic and spintronic devices. Although there has been substantial progress in demonstrating optical control of both single spins confined to a single QD and entanglement between two separated QDs, the path toward scalable quantum photonic devices based on spins remains challenging. Quantum Dot Molecules, which consist of two closely-spaced InAs QDs, have unique properties that can be engineered with the solid state analog of molecular engineering in which the composition, size, and location of both the QDs and the intervening barrier are controlled during growth. Moreover, applied electric, magnetic, and optical fields can be used to modulate, in situ, both the spin and optical properties of the molecular states. We describe how the unique photonic properties of engineered Quantum Dot Molecules can be leveraged to overcome long-standing challenges to the creation of scalable quantum devices that manipulate single spins via photonics.

  18. From Lab to Fab: Developing a Nanoscale Delivery Tool for Scalable Nanomanufacturing

    Science.gov (United States)

    Safi, Asmahan A.

    The emergence of nanomaterials with unique properties at the nanoscale over the past two decades carries a capacity to impact society and transform or create new industries ranging from nanoelectronics to nanomedicine. However, a gap in nanomanufacturing technologies has prevented the translation of nanomaterials into real-world commercialized products. Bridging this gap requires a paradigm shift in methods for fabricating structured devices with nanoscale resolution in a repeatable fashion. This thesis explores new paradigms for fabricating nanoscale structures, devices and systems for high-throughput, high-registration applications. We present a robust and scalable nanoscale delivery platform, the Nanofountain Probe (NFP), for parallel direct-write of functional materials. The design and microfabrication of the NFP are presented. The new generation addresses the challenges of throughput, resolution and ink replenishment that characterize tip-based nanomanufacturing. To achieve these goals, an optimized probe geometry is integrated into the process along with channel sealing and cantilever bending. The capabilities of the newly fabricated probes are demonstrated through two types of delivery: protein nanopatterning and single-cell nanoinjection. The broad applications of the NFP for single-cell delivery are investigated. An external microfluidic packaging is developed to enable delivery in liquid environments. The system is integrated with a combined atomic force microscope and inverted fluorescence microscope. Intracellular delivery is demonstrated by injecting a fluorescent dextran into HeLa cells in vitro while monitoring the injection forces. Such developments enable in vitro cellular delivery for single-cell studies and high-throughput gene expression. The nanomanufacturing capabilities of NFPs are explored. Nanofabrication of carbon nanotube-based electronics presents all the manufacturing challenges characteristic of assembling nanomaterials precisely onto devices. The

  19. Affordable and Scalable Manufacturing of Wearable Multi-Functional Sensory “Skin” for Internet of Everything Applications

    KAUST Repository

    Nassar, Joanna M.

    2017-10-01

    Demand for wearable electronics is expected to at least triple by 2020, embracing all sorts of Internet of Everything (IoE) applications, such as activity tracking, environmental mapping, and advanced healthcare monitoring, with the aim of enhancing the quality of life. This entails the wide availability of free-form multifunctional sensory systems (i.e. “skin” platforms) that can conform to a variety of uneven surfaces, providing the intimate contact and adhesion with the skin necessary for localized and enhanced sensing capabilities. However, current wearable devices tend to be bulky, rigid and not convenient for continuous wear in everyday life, hindering their implementation in advanced and unexplored applications beyond fitness tracking. Besides, they retail at high price tags, which limits their availability for at least half of the world’s population. Hence, form factor (physical flexibility and/or stretchability), cost, and accessibility become the key drivers for further development. To support this need for affordable and adaptive wearables and drive academic developments in “skin” platforms into practical and functional consumer devices, compatibility and integration into a high-performance yet low-power system is crucial to sustain the high data rates and large data management driven by IoE. Likewise, scalability becomes essential for batch fabrication and precision. Therefore, I propose to develop three distinct but necessary “skin” platforms using scalable and cost-effective manufacturing techniques. My first approach is the fabrication of a CMOS-compatible “silicon skin”, crucial for any truly autonomous and conformal wearable device, where monolithic integration between a heterogeneous material-based sensory platform and system components is a challenge yet to be addressed. My second approach displays an even more affordable and accessible “paper skin”, using recyclable and off-the-shelf materials, targeting environmental

  20. Single-molecule dynamics in nanofabricated traps

    Science.gov (United States)

    Cohen, Adam

    2009-03-01

    The Anti-Brownian Electrokinetic trap (ABEL trap) provides a means to immobilize a single fluorescent molecule in solution, without surface attachment chemistry. The ABEL trap works by tracking the Brownian motion of a single molecule, and applying feedback electric fields to induce an electrokinetic motion that approximately cancels the Brownian motion. We present a new design for the ABEL trap that allows smaller molecules to be trapped and more information to be extracted from the dynamics of a single molecule than was previously possible. In particular, we present strategies for extracting dynamically fluctuating mobilities and diffusion coefficients, as a means to probe dynamic changes in molecular charge and shape. If one trapped molecule is good, many trapped molecules are better. An array of single molecules in solution, each immobilized without surface attachment chemistry, provides an ideal test-bed for single-molecule analyses of intramolecular dynamics and intermolecular interactions. We present a technology for creating such an array, using a fused silica plate with nanofabricated dimples and a removable cover for sealing single molecules within the dimples. With this device one can watch the shape fluctuations of single molecules of DNA or study cooperative interactions in weakly associating protein complexes.
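
    The feedback principle is easy to caricature in one dimension: measure the tracked position each frame and apply a drift that pushes the molecule back toward the trap center. The toy simulation below (arbitrary parameters chosen only to make the effect visible, not a model of the actual apparatus) shows how proportional feedback turns free diffusion into a confined random walk:

```python
import numpy as np

# Toy 1-D caricature of the ABEL-trap principle: each frame, the tracked
# position is fed back as a restoring electrokinetic drift that opposes
# the Brownian excursion.
rng = np.random.default_rng(0)
D, dt, gain, steps = 1.0, 1e-3, 400.0, 10_000  # diffusion, timestep, feedback gain, frames

x_free, x_trap = 0.0, 0.0
free_path, trap_path = [], []
for _ in range(steps):
    kick = rng.normal(0.0, np.sqrt(2 * D * dt))  # Brownian displacement this frame
    x_free += kick                               # untrapped molecule wanders off
    x_trap += kick - gain * x_trap * dt          # feedback drift cancels excursions
    free_path.append(x_free)
    trap_path.append(x_trap)

print("free RMS excursion:   ", np.std(free_path))
print("trapped RMS excursion:", np.std(trap_path))  # far smaller: molecule stays confined
```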

  1. A lightweight scalable agarose-gel-synthesized thermoelectric composite

    Science.gov (United States)

    Kim, Jin Ho; Fernandes, Gustavo E.; Lee, Do-Joong; Hirst, Elizabeth S.; Osgood, Richard M., III; Xu, Jimmy

    2018-03-01

    Electronic devices are now advancing beyond classical, rigid systems and moving into lightweight flexible regimes, enabling new applications such as body-wearables and ‘e-textiles’. To support this new electronic platform, composite materials that are highly conductive yet scalable, flexible, and wearable are needed. Materials with high electrical conductivity often have poor thermoelectric properties because their thermal transport is enhanced by the same factors as their electronic conductivity. We demonstrate, in proof-of-principle experiments, that a novel binary composite can disrupt thermal (phononic) transport, while maintaining high electrical conductivity, thus yielding promising thermoelectric properties. Highly conductive Multi-Wall Carbon Nanotube (MWCNT) composites are combined with a low-band-gap semiconductor, PbS. The work functions of the two materials are closely matched, minimizing the electrical contact resistance within the composite. Disparities in the speed of sound in MWCNTs and PbS help to inhibit phonon propagation, and boundary layer scattering at interfaces between these two materials leads to a large Seebeck coefficient (> 150 μV/K) (Mott N F and Davis E A 1971 Electronic Processes in Non-crystalline Materials (Oxford: Clarendon), p 47) and a power factor as high as 10 μW/(K² m). The overall fabrication process is not only scalable but also conformal and compatible with large-area flexible hosts including metal sheets, films, coatings, possibly arrays of fibers, textiles and fabrics. We explain the behavior of this novel thermoelectric material platform in terms of differing length scales for electrical conductivity and phononic heat transfer, and explore new material configurations for potentially lightweight and flexible thermoelectric devices that could be networked in a textile.
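
    The two figures quoted above can be cross-checked against the standard definition of the thermoelectric power factor, PF = S²σ. A quick back-of-envelope computation (treating the reported bounds as point values, which is our simplification) yields the electrical conductivity implied for the composite:

```python
# Relates the reported Seebeck coefficient and power factor through
# PF = S^2 * sigma and solves for the implied electrical conductivity.
S = 150e-6         # Seebeck coefficient in V/K (reported lower bound)
PF = 10e-6         # power factor in W/(K^2 m) (reported upper value)

sigma = PF / S**2  # electrical conductivity in S/m
print(f"implied conductivity: {sigma:.0f} S/m")  # ~444 S/m
```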

  2. Integration of an intelligent systems behavior simulator and a scalable soldier-machine interface

    Science.gov (United States)

    Johnson, Tony; Manteuffel, Chris; Brewster, Benjamin; Tierney, Terry

    2007-04-01

    As the Army's Future Combat Systems (FCS) introduce emerging technologies and new force structures to the battlefield, soldiers will increasingly face new challenges in workload management. The next generation warfighter will be responsible for effectively managing robotic assets in addition to performing other missions. Studies of future battlefield operational scenarios involving the use of automation, including the specification of existing and proposed technologies, will provide significant insight into potential problem areas regarding soldier workload. The US Army Tank Automotive Research, Development, and Engineering Center (TARDEC) is currently executing an Army technology objective program to analyze and evaluate the effect of automated technologies and their associated control devices with respect to soldier workload. The Human-Robotic Interface (HRI) Intelligent Systems Behavior Simulator (ISBS) is a human performance measurement simulation system that allows modelers to develop constructive simulations of military scenarios with various deployments of interface technologies in order to evaluate operator effectiveness. One such interface is TARDEC's Scalable Soldier-Machine Interface (SMI). The scalable SMI provides a configurable machine interface application that is capable of adapting to several hardware platforms by recognizing the physical space limitations of the display device. This paper describes the integration of the ISBS and Scalable SMI applications, which will ultimately benefit both systems. The ISBS will be able to use the Scalable SMI to visualize the behaviors of virtual soldiers performing HRI tasks, such as route planning, and the scalable SMI will benefit from stimuli provided by the ISBS simulation environment. The paper describes the background of each system and details of the system integration approach.

  3. Dense Plasma Focus-Based Nanofabrication of III-V Semiconductors: Unique Features and Recent Advances.

    Science.gov (United States)

    Mangla, Onkar; Roy, Savita; Ostrikov, Kostya Ken

    2015-12-29

    The hot and dense plasma formed in a modified dense plasma focus (DPF) device has been used worldwide for the nanofabrication of several materials. In this paper, we summarize the fabrication of III-V semiconductor nanostructures using the high-fluence material ions produced by the hot, dense and extremely non-equilibrium plasma generated in a modified DPF device. In addition, we present recent results on the fabrication of porous nano-gallium arsenide (GaAs). The details of the morphological, structural and optical properties of the fabricated nano-GaAs are provided. The effect of rapid thermal annealing on the above properties of porous nano-GaAs is studied. The study reveals that it is possible to tailor the size of the pores with annealing temperature. The optical properties of these porous nano-GaAs also confirm the possibility of tailoring the pore sizes upon annealing. Possible applications of the fabricated and subsequently annealed porous nano-GaAs in transmission-type photo-cathodes and visible optoelectronic devices are discussed. These results suggest that the modified DPF is an effective tool for the nanofabrication of continuous and porous III-V semiconductor nanomaterials. Further opportunities for using the modified DPF device for the fabrication of novel nanostructures are discussed as well.

  4. SciCloud: A Scientific Cloud and Management Platform for Smart City Data

    DEFF Research Database (Denmark)

    Liu, Xiufeng; Nielsen, Per Sieverts; Heller, Alfred

    2017-01-01

    private scientific cloud, SciCloud, to tackle these grand challenges. SciCloud provides on-demand computing resource provisions, a scalable data management platform and an in-place data analytics environment to support the scientific research using smart city data....

  5. Run-time mapping of multiple communicating tasks on MPSoC platforms.

    NARCIS (Netherlands)

    Singh, A.K.; Jigang, W.; Kumar, A.; Srikanthan, Th.

    2010-01-01

    Multi-task-capable processing elements (PEs) are required in a Multiprocessor System-on-Chip platform for better scalability, power efficiency, etc. Efficient utilization of PEs requires intelligent mapping of tasks onto them. The job becomes more challenging when the workload of tasks is dynamic.

  6. Relationship between Length and Surface-Enhanced Raman Spectroscopy Signal Strength in Metal Nanoparticle Chains: Ideal Models versus Nanofabrication

    Directory of Open Access Journals (Sweden)

    Kristen D. Alexander

    2012-01-01

    Full Text Available We have employed capillary force deposition on ion-beam-patterned substrates to fabricate chains of 60 nm gold nanospheres ranging in length from 1 to 9 nanoparticles. Measurements of the surface-averaged SERS enhancement factor for these chains were then compared to numerical predictions. The SERS enhancement conformed to theoretical predictions for only a few chains, with the vast majority of chains tested not matching such behavior. Although all of the nanoparticle chains appear identical under electron microscope observation, the extreme sensitivity of the SERS enhancement to nanoscale morphology renders current nanofabrication methods insufficient for the consistent production of coupled nanoparticle chains. Notwithstanding this fact, the aggregate data also confirmed that nanoparticle dimers offer a large improvement over the monomer enhancement while conclusively showing that, within the limitations imposed by current state-of-the-art nanofabrication techniques, chains comprising more than two nanoparticles provide only a marginal signal boost over the already considerable dimer enhancement.

  7. Green chemistry and nanofabrication in a levitated Leidenfrost drop

    Science.gov (United States)

    Abdelaziz, Ramzy; Disci-Zayed, Duygu; Hedayati, Mehdi Keshavarz; Pöhls, Jan-Hendrik; Zillohu, Ahnaf Usman; Erkartal, Burak; Chakravadhanula, Venkata Sai Kiran; Duppel, Viola; Kienle, Lorenz; Elbahri, Mady

    2013-10-01

    Green nanotechnology focuses on the development of new and sustainable methods of creating nanoparticles, their localized assembly and integration into useful systems and devices in a cost-effective, simple and eco-friendly manner. Here we present our experimental findings on the use of the Leidenfrost drop as an overheated and charged green chemical reactor. Employing a droplet of aqueous solution on hot substrates, this method is capable of fabricating nanoparticles, creating nanoscale coatings on complex objects and designing porous metal in suspension and foam form, all in a levitated Leidenfrost drop. As examples of the potential applications of the Leidenfrost drop, fabrication of nanoporous black gold as a plasmonic wideband superabsorber, and synthesis of superhydrophilic and thermal resistive metal-polymer hybrid foams are demonstrated. We believe that the presented nanofabrication method may be a promising strategy towards the sustainable production of functional nanomaterials.

  8. A survey on platforms for big data analytics.

    Science.gov (United States)

    Singh, Dilpreet; Reddy, Chandan K

    The primary purpose of this paper is to provide an in-depth analysis of different platforms available for performing big data analytics. This paper surveys different hardware platforms available for big data analytics and assesses the advantages and drawbacks of each of these platforms based on various metrics such as scalability, data I/O rate, fault tolerance, real-time processing, data size supported and iterative task support. In addition to the hardware, a detailed description of the software frameworks used within each of these platforms is also discussed along with their strengths and drawbacks. Some of the critical characteristics described here can potentially aid readers in making an informed decision about the right choice of platforms depending on their computational needs. Using a star ratings table, a rigorous qualitative comparison between different platforms is also discussed for each of the six characteristics that are critical for the algorithms of big data analytics. In order to provide more insights into the effectiveness of each of the platforms in the context of big data analytics, specific implementation-level details of the widely used k-means clustering algorithm on various platforms are also described in the form of pseudocode.
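
    The survey's running example is k-means clustering, re-expressed as pseudocode for each platform. For reference, the single-node baseline that those platform-specific variants parallelize is just Lloyd's iteration, sketched here in plain NumPy (our own illustration, not the survey's pseudocode):

```python
import numpy as np

def kmeans(X: np.ndarray, k: int, iters: int = 100, seed: int = 0):
    """Plain Lloyd's iteration: the single-node baseline that MapReduce,
    Spark, GPU, and other platform variants distribute."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assignment step: nearest center for every point.
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # Update step: each center moves to the mean of its members.
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels
```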

  9. Scalable Nanomanufacturing—A Review

    Directory of Open Access Journals (Sweden)

    Khershed Cooper

    2017-01-01

    Full Text Available This article describes the field of scalable nanomanufacturing, its importance and need, its research activities and achievements. The National Science Foundation is taking a leading role in fostering basic research in scalable nanomanufacturing (SNM). From this effort several novel nanomanufacturing approaches have been proposed, studied and demonstrated, including scalable nanopatterning. This paper will discuss SNM research areas in materials, processes and applications, scale-up methods with project examples, and manufacturing challenges that need to be addressed to move nanotechnology discoveries closer to the marketplace.

  10. Bio-inspired silicon nanospikes fabricated by metal-assisted chemical etching for antibacterial surfaces

    Science.gov (United States)

    Hu, Huan; Siu, Vince S.; Gifford, Stacey M.; Kim, Sungcheol; Lu, Minhua; Meyer, Pablo; Stolovitzky, Gustavo A.

    2017-12-01

    The recently discovered bactericidal properties of nanostructures on wings of insects such as cicadas and dragonflies have inspired the development of similar nanostructured surfaces for antibacterial applications. Since most antibacterial applications require nanostructures covering a considerable amount of area, a practical fabrication method needs to be cost-effective and scalable. However, most reported nanofabrication methods require either expensive equipment or a high temperature process, limiting cost efficiency and scalability. Here, we report a simple, fast, low-cost, and scalable antibacterial surface nanofabrication methodology. Our method is based on metal-assisted chemical etching that only requires etching a single crystal silicon substrate in a mixture of silver nitrate and hydrofluoric acid for several minutes. We experimentally studied the effects of etching time on the morphology of the silicon nanospikes and the bactericidal properties of the resulting surface. We discovered that 6 minutes of etching results in a surface containing silicon nanospikes with optimal geometry. The bactericidal properties of the silicon nanospikes were supported by bacterial plating results, fluorescence images, and scanning electron microscopy images.

  11. Memory Hierarchy Design for Next Generation Scalable Many-core Platforms

    OpenAIRE

    Azarkhish, Erfan

    2016-01-01

    Performance and energy consumption in modern computing platforms is largely dominated by the memory hierarchy. The increasing computational power in the multiprocessors and accelerators, and the emergence of the data-intensive workloads (e.g. large-scale graph traversal and scientific algorithms) requiring fast transfer of large volumes of data, are two main trends which intensify this problem by putting even higher pressure on the memory hierarchy. This increasing gap between computation spe...

  12. Event metadata records as a testbed for scalable data mining

    International Nuclear Information System (INIS)

    Gemmeren, P van; Malon, D

    2010-01-01

    At a data rate of 200 hertz, event metadata records ('TAGs,' in ATLAS parlance) provide fertile grounds for development and evaluation of tools for scalable data mining. It is easy, of course, to apply HEP-specific selection or classification rules to event records and to label such an exercise 'data mining,' but our interest is different. Advanced statistical methods and tools such as classification, association rule mining, and cluster analysis are common outside the high energy physics community. These tools can prove useful, not for discovery physics, but for learning about our data, our detector, and our software. A fixed and relatively simple schema makes TAG export to other storage technologies such as HDF5 straightforward. This simplifies the task of exploiting very-large-scale parallel platforms such as Argonne National Laboratory's BlueGene/P, currently the largest supercomputer in the world for open science, in the development of scalable tools for data mining. Using a domain-neutral scientific data format may also enable us to take advantage of existing data mining components from other communities. There is, further, a substantial literature on the topic of one-pass algorithms and stream mining techniques, and such tools may be inserted naturally at various points in the event data processing and distribution chain. This paper describes early experience with event metadata records from ATLAS simulation and commissioning as a testbed for scalable data mining tool development and evaluation.
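
    Because a TAG is a flat record with a fixed, relatively simple schema, the HDF5 export mentioned above is essentially a column-per-attribute dump. A minimal sketch with h5py follows (the attribute names are placeholders for illustration, not the real ATLAS TAG schema):

```python
import h5py
import numpy as np

# Columnar export of flat, fixed-schema event records to HDF5.
# Attribute names and dtypes are assumed for this sketch.
run    = np.array([167576, 167576, 167607], dtype=np.uint32)
event  = np.array([1001, 1002, 17],         dtype=np.uint64)
n_muon = np.array([0, 2, 1],                dtype=np.uint8)

with h5py.File("tags.h5", "w") as f:
    g = f.create_group("tags")
    for name, col in [("run", run), ("event", event), ("n_muon", n_muon)]:
        g.create_dataset(name, data=col, compression="gzip")
```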

  13. Telemedicine Platform Enhanced visiophony solution to operate a Robot-Companion

    Science.gov (United States)

    Simonnet, Th.; Couet, A.; Ezvan, P.; Givernaud, O.; Hillereau, P.

    Nowadays, one of the ways to reduce medical care costs is to reduce the length of patients' hospitalization and reinforce home sanitary support by formal (professional) and informal (family) caregivers. The aim is to design and operate a scalable and secure collaborative platform to handle specific tools for patients, their families and doctors.

  14. Scalable Predictive Analysis in Critically Ill Patients Using a Visual Open Data Analysis Platform.

    Directory of Open Access Journals (Sweden)

    Sven Van Poucke

    Full Text Available With the accumulation of large amounts of health related data, predictive analytics could stimulate the transformation of reactive medicine towards Predictive, Preventive and Personalized (PPPM) Medicine, ultimately affecting both cost and quality of care. However, high-dimensionality and high-complexity of the data involved, prevents data-driven methods from easy translation into clinically relevant models. Additionally, the application of cutting edge predictive methods and data manipulation require substantial programming skills, limiting its direct exploitation by medical domain experts. This leaves a gap between potential and actual data usage. In this study, the authors address this problem by focusing on open, visual environments, suited to be applied by the medical community. Moreover, we review code free applications of big data technologies. As a showcase, a framework was developed for the meaningful use of data from critical care patients by integrating the MIMIC-II database in a data mining environment (RapidMiner) supporting scalable predictive analytics using visual tools (RapidMiner's Radoop extension). Guided by the CRoss-Industry Standard Process for Data Mining (CRISP-DM), the ETL process (Extract, Transform, Load) was initiated by retrieving data from the MIMIC-II tables of interest. As use case, correlation of platelet count and ICU survival was quantitatively assessed. Using visual tools for ETL on Hadoop and predictive modeling in RapidMiner, we developed robust processes for automatic building, parameter optimization and evaluation of various predictive models, under different feature selection schemes. Because these processes can be easily adopted in other projects, this environment is attractive for scalable predictive analytics in health research.

  15. Scalable Predictive Analysis in Critically Ill Patients Using a Visual Open Data Analysis Platform.

    Science.gov (United States)

    Van Poucke, Sven; Zhang, Zhongheng; Schmitz, Martin; Vukicevic, Milan; Laenen, Margot Vander; Celi, Leo Anthony; De Deyne, Cathy

    2016-01-01

    With the accumulation of large amounts of health related data, predictive analytics could stimulate the transformation of reactive medicine towards Predictive, Preventive and Personalized (PPPM) Medicine, ultimately affecting both cost and quality of care. However, high-dimensionality and high-complexity of the data involved, prevents data-driven methods from easy translation into clinically relevant models. Additionally, the application of cutting edge predictive methods and data manipulation require substantial programming skills, limiting its direct exploitation by medical domain experts. This leaves a gap between potential and actual data usage. In this study, the authors address this problem by focusing on open, visual environments, suited to be applied by the medical community. Moreover, we review code free applications of big data technologies. As a showcase, a framework was developed for the meaningful use of data from critical care patients by integrating the MIMIC-II database in a data mining environment (RapidMiner) supporting scalable predictive analytics using visual tools (RapidMiner's Radoop extension). Guided by the CRoss-Industry Standard Process for Data Mining (CRISP-DM), the ETL process (Extract, Transform, Load) was initiated by retrieving data from the MIMIC-II tables of interest. As use case, correlation of platelet count and ICU survival was quantitatively assessed. Using visual tools for ETL on Hadoop and predictive modeling in RapidMiner, we developed robust processes for automatic building, parameter optimization and evaluation of various predictive models, under different feature selection schemes. Because these processes can be easily adopted in other projects, this environment is attractive for scalable predictive analytics in health research.
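
    The showcase analysis, correlating platelet count with ICU survival, is the kind of question that is equally easy to pose outside the visual environment. For comparison, here is a plain-pandas sketch of the same assessment (the file and column names are assumptions for this sketch, not the actual MIMIC-II schema):

```python
import pandas as pd

# Relate platelet count to ICU survival on a hypothetical extract.
df = pd.read_csv("icu_stays.csv")
df["survived"] = (df["hospital_expire_flag"] == 0).astype(int)

# Pearson correlation against a binary outcome (point-biserial correlation).
r = df["platelet_count"].corr(df["survived"])
print(f"platelet count vs. survival: r = {r:.3f}")
print(df.groupby("survived")["platelet_count"].describe())
```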

  16. Solid-state single-photon emitters

    Science.gov (United States)

    Aharonovich, Igor; Englund, Dirk; Toth, Milos

    2016-10-01

    Single-photon emitters play an important role in many leading quantum technologies. There is still no 'ideal' on-demand single-photon emitter, but a plethora of promising material systems have been developed, and several have transitioned from proof-of-concept to engineering efforts with steadily improving performance. Here, we review recent progress in the race towards true single-photon emitters required for a range of quantum information processing applications. We focus on solid-state systems including quantum dots, defects in solids, two-dimensional hosts and carbon nanotubes, as these are well positioned to benefit from recent breakthroughs in nanofabrication and materials growth techniques. We consider the main challenges and key advantages of each platform, with a focus on scalable on-chip integration and fabrication of identical sources on photonic circuits.

  17. An ODMG-compatible testbed architecture for scalable management and analysis of physics data

    International Nuclear Information System (INIS)

    Malon, D.M.; May, E.N.

    1997-01-01

    This paper describes a testbed architecture for the investigation and development of scalable approaches to the management and analysis of massive amounts of high energy physics data. The architecture has two components: an interface layer that is compliant with a substantial subset of the ODMG-93 Version 1.2 specification, and a lightweight object persistence manager that provides flexible storage and retrieval services on a variety of single- and multi-level storage architectures, and on a range of parallel and distributed computing platforms

  18. The Rise of Ridesharing Platforms: an Uber-assessment of Bits and atoms

    OpenAIRE

    Ofstad, Magnus

    2017-01-01

    This thesis is a case study of Uber. The research questions are: 1. How do ridesharing platforms evolve from startups into large-scale market disrupters? 2. How can ridesharing platforms augment their user base and capture more segments within the transportation business? Uber's scaling is traditional. The initial growth consisted of capturing the high end of the market. Later on, Uber moved down market, making ridesharing affordable for the masses. The master stroke was to devise a scalable bus...

  19. An Extensible Sensing and Control Platform for Building Energy Management

    Energy Technology Data Exchange (ETDEWEB)

    Rowe, Anthony [Carnegie Mellon Univ., Pittsburgh, PA (United States); Berges, Mario [Carnegie Mellon Univ., Pittsburgh, PA (United States); Martin, Christopher [Robert Bosch LLC, Anderson, SC (United States)

    2016-04-03

    The goal of this project is to develop Mortar.io, an open-source BAS platform designed to simplify data collection, archiving, event scheduling and coordination of cross-system interactions. Mortar.io is optimized for (1) robustness to network outages, (2) ease of installation using plug-and-play and (3) scalable support for small to large buildings and campuses.

  20. Scalable and reusable emulator for evaluating the performance of SS7 networks

    Science.gov (United States)

    Lazar, Aurel A.; Tseng, Kent H.; Lim, Koon Seng; Choe, Winston

    1994-04-01

    A scalable and reusable emulator was designed and implemented for studying the behavior of SS7 networks. The emulator design was largely based on public domain software. It was developed on top of an environment supported by PVM, the Parallel Virtual Machine, and managed by OSIMIS, the OSI Management Information Service platform. The emulator runs on top of a commercially available ATM LAN interconnecting engineering workstations. As a case study for evaluating the emulator, the behavior of the Singapore National SS7 Network under fault and unbalanced loading conditions was investigated.

  1. The Nanofabrication and Application of Substrates for Surface-Enhanced Raman Scattering

    Directory of Open Access Journals (Sweden)

    Xian Zhang

    2012-01-01

    Full Text Available Surface-enhanced Raman scattering (SERS) was discovered in 1974 and has impacted Raman spectroscopy and surface science. Although SERS has not yet been developed into a widely applicable detection tool, nanotechnology has promoted its development in recent decades. The traditional SERS substrates, such as the silver electrode, metal island film, and silver colloid, cannot be applied widely because of their limited enhancement factor or stability, but newly developed substrates, such as electrochemically deposited surfaces, Ag porous films, and surface-confined colloids, have better sensitivity and stability. Surface-enhanced Raman scattering is applied in other fields such as the detection of chemical pollutants, biomolecules, DNA, bacteria, and so forth. In this paper, the development of nanofabrication and application of surface-enhanced Raman scattering substrates are discussed.

  2. Low power femtosecond tip-based nanofabrication with advanced control

    Science.gov (United States)

    Liu, Jiangbo; Guo, Zhixiong; Zou, Qingze

    2018-02-01

    In this paper, we propose an approach to enable the use of a low-power femtosecond laser in tip-based nanofabrication (TBN) without thermal damage. One major challenge in laser-assisted TBN is maintaining precision control of the tip-surface positioning throughout the fabrication process. An advanced iterative learning control technique is exploited to overcome this challenge and achieve high-quality patterning of arbitrary shapes on a metal surface. The experimental results are analyzed to understand the ablation mechanism involved. Specifically, the near-field radiation enhancement is examined via the surface-enhanced Raman scattering effect, revealing near-field-enhanced plasma-mediated ablation. Moreover, a silicon nitride tip is utilized to alleviate adverse thermal damage. Experimental results, including line patterns fabricated at different writing speeds and an “R” pattern, are presented. The fabrication quality with regard to line width, depth, and uniformity is characterized to demonstrate the efficacy of the proposed approach.
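
    The control idea exploited here, iterative learning control, relies on the fabrication task being repetitive: the same trajectory is executed pass after pass, and each pass corrects the drive input using the previous pass's tracking error. The sketch below shows the generic first-order update law u_{k+1} = u_k + L·e_k on a toy plant; it illustrates the principle only, not the authors' specific controller:

```python
import numpy as np

# Generic first-order iterative learning control on a repeated trajectory.
def ilc(plant, ref, iterations=20, L=0.5):
    u = np.zeros_like(ref)
    for _ in range(iterations):
        y = plant(u)      # one pass with the current drive input
        e = ref - y       # tracking error along the trajectory
        u = u + L * e     # learning update for the next pass
    return u

# Toy plant standing in for the tip-positioning dynamics: first-order lag.
def plant(u, a=0.6):
    y = np.zeros_like(u)
    for t in range(1, len(u)):
        y[t] = a * y[t - 1] + (1 - a) * u[t]
    return y

ref = np.sin(np.linspace(0, 2 * np.pi, 200))
u = ilc(plant, ref)
print("final RMS error:", np.sqrt(np.mean((ref - plant(u)) ** 2)))
```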

  3. Scalable Algorithms for Clustering Large Geospatiotemporal Data Sets on Manycore Architectures

    Science.gov (United States)

    Mills, R. T.; Hoffman, F. M.; Kumar, J.; Sreepathi, S.; Sripathi, V.

    2016-12-01

    The increasing availability of high-resolution geospatiotemporal data sets from sources such as observatory networks, remote sensing platforms, and computational Earth system models has opened new possibilities for knowledge discovery using data sets fused from disparate sources. Traditional algorithms and computing platforms are impractical for the analysis and synthesis of data sets of this size; however, new algorithmic approaches that can effectively utilize the complex memory hierarchies and the extremely high levels of available parallelism in state-of-the-art high-performance computing platforms can enable such analysis. We describe a massively parallel implementation of accelerated k-means clustering and some optimizations to boost computational intensity and utilization of wide SIMD lanes on state-of-the-art multi- and manycore processors, including the second-generation Intel Xeon Phi (“Knights Landing”) processor based on the Intel Many Integrated Core (MIC) architecture, which includes several new features, including an on-package high-bandwidth memory. We also analyze the code in the context of a few practical applications to the analysis of climatic and remotely-sensed vegetation phenology data sets, and speculate on some of the new applications that such scalable analysis methods may enable.

  4. Scalable microcarrier-based manufacturing of mesenchymal stem/stromal cells.

    Science.gov (United States)

    de Soure, António M; Fernandes-Platzgummer, Ana; da Silva, Cláudia L; Cabral, Joaquim M S

    2016-10-20

    Due to their unique features, mesenchymal stem/stromal cells (MSC) have been exploited in clinical settings as therapeutic candidates for the treatment of a variety of diseases. However, the success in obtaining clinically-relevant MSC numbers for cell-based therapies is dependent on efficient isolation and ex vivo expansion protocols, able to comply with good manufacturing practices (GMP). In this context, the 2-dimensional static culture systems typically used for the expansion of these cells present several limitations that may lead to reduced cell numbers and compromise cell functions. Furthermore, many studies in the literature report the expansion of MSC using fetal bovine serum (FBS)-supplemented medium, which has been critically rated by regulatory agencies. Alternative platforms for the scalable manufacturing of MSC have been developed, namely using microcarriers in bioreactors, with also a considerable number of studies now reporting the production of MSC using xenogeneic/serum-free medium formulations. In this review we provide a comprehensive overview on the scalable manufacturing of human mesenchymal stem/stromal cells, depicting the various steps involved in the process from cell isolation to ex vivo expansion, using different cell tissue sources and culture medium formulations and exploiting bioprocess engineering tools namely microcarrier technology and bioreactors.

  5. Numeric Analysis for Relationship-Aware Scalable Streaming Scheme

    Directory of Open Access Journals (Sweden)

    Heung Ki Lee

    2014-01-01

    Full Text Available Frequent packet loss of media data is a critical problem that degrades the quality of streaming services over mobile networks. Packet loss invalidates frames containing lost packets and other related frames at the same time. Indirect loss caused by losing packets decreases the quality of streaming. A scalable streaming service can decrease the amount of dropped multimedia resulting from a single packet loss. Content providers typically divide one large media stream into several layers through a scalable streaming service and then provide each scalable layer to the user depending on the mobile network. Also, a scalable streaming service makes it possible to decode partial multimedia data depending on the relationship between frames and layers. Therefore, a scalable streaming service provides a way to decrease the wasted multimedia data when one packet is lost. However, the hierarchical structure between frames and layers of scalable streams determines the service quality of the scalable streaming service. Even if whole packets of layers are transmitted successfully, they cannot be decoded as a result of the absence of reference frames and layers. Therefore, the complicated relationship between frames and layers in a scalable stream increases the volume of abandoned layers. For providing a high-quality scalable streaming service, we choose a proper relationship between scalable layers as well as the amount of transmitted multimedia data depending on the network situation. We prove that a simple scalable scheme outperforms a complicated scheme in an error-prone network. We suggest an adaptive set-top box (AdaptiveSTB) to lower the dependency between scalable layers in a scalable stream. Also, we provide a numerical model to obtain the indirect loss of multimedia data and apply it to various multimedia streams. Our AdaptiveSTB enhances the quality of a scalable streaming service by removing indirect loss.

  6. Scalable coherent interface

    International Nuclear Information System (INIS)

    Alnaes, K.; Kristiansen, E.H.; Gustavson, D.B.; James, D.V.

    1990-01-01

    The Scalable Coherent Interface (IEEE P1596) is establishing an interface standard for very high performance multiprocessors, supporting a cache-coherent-memory model scalable to systems with up to 64K nodes. This Scalable Coherent Interface (SCI) will supply a peak bandwidth per node of 1 GigaByte/second. The SCI standard should facilitate assembly of processor, memory, I/O and bus bridge cards from multiple vendors into massively parallel systems with throughput far above what is possible today. The SCI standard encompasses two levels of interface, a physical level and a logical level. The physical level specifies electrical, mechanical and thermal characteristics of connectors and cards that meet the standard. The logical level describes the address space, data transfer protocols, cache coherence mechanisms, synchronization primitives and error recovery. In this paper we address logical level issues such as packet formats, packet transmission, transaction handshake, flow control, and cache coherence. 11 refs., 10 figs
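
    Of the logical-level mechanisms listed above, the cache coherence scheme is the most distinctive: SCI uses a distributed directory in which memory stores only a pointer to the head of a per-line sharing list, and the caches holding a copy link themselves into a doubly linked list. The sketch below shows just the list maintenance (new sharers prepend; a writer purges the list); it is a heavy simplification that ignores coherence states and race handling:

```python
# Heavily simplified sketch of SCI-style distributed-directory coherence.
class CacheNode:
    def __init__(self, name: str):
        self.name = name
        self.prev = None  # neighbour toward the list head (memory side)
        self.next = None  # neighbour toward the tail

memory_head: dict[int, CacheNode] = {}  # line address -> head of sharing list

def read_share(line: int, cache: CacheNode) -> None:
    """A cache fetching the line prepends itself as the new list head."""
    old_head = memory_head.get(line)
    cache.next = old_head
    if old_head is not None:
        old_head.prev = cache
    memory_head[line] = cache

def write_purge(line: int) -> None:
    """Before writing, a node invalidates every sharer by walking the list."""
    node = memory_head.pop(line, None)
    while node is not None:
        nxt = node.next
        node.prev = node.next = None  # this cache's copy is dropped
        node = nxt

a, b = CacheNode("A"), CacheNode("B")
read_share(0x40, a)
read_share(0x40, b)
print(memory_head[0x40].name)  # "B": the most recent reader heads the list
write_purge(0x40)
print(memory_head.get(0x40))   # None: all copies invalidated
```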

  7. Monitoring WLCG with lambda-architecture: a new scalable data store and analytics platform for monitoring at petabyte scale

    International Nuclear Information System (INIS)

    Magnoni, L; Cordeiro, C; Georgiou, M; Andreeva, J; Suthakar, U; Khan, A; Smith, D R

    2015-01-01

    Monitoring the WLCG infrastructure requires the gathering and analysis of a high volume of heterogeneous data (e.g. data transfers, job monitoring, site tests) coming from different services and experiment-specific frameworks to provide a uniform and flexible interface for scientists and sites. The current architecture, where relational database systems are used to store, to process and to serve monitoring data, has limitations in coping with the foreseen increase in the volume (e.g. higher LHC luminosity) and the variety (e.g. new data-transfer protocols and new resource-types, such as cloud-computing) of WLCG monitoring events. This paper presents a new scalable data store and analytics platform designed by the Support for Distributed Computing (SDC) group, at the CERN IT department, which uses a variety of technologies each one targeting specific aspects of big-scale distributed data-processing (commonly referred to as the lambda-architecture approach). Results of data processing on Hadoop for WLCG data activities monitoring are presented, showing how the new architecture can easily analyze hundreds of millions of transfer logs in a few minutes. Moreover, a comparison of data partitioning, compression and file format (e.g. CSV, Avro) is presented, with particular attention given to how the file structure impacts the overall MapReduce performance. In conclusion, the evolution of the current implementation, which focuses on data storage and batch processing, towards a complete lambda-architecture is discussed, with consideration of candidate technology for the serving layer (e.g. Elasticsearch) and a description of a proof-of-concept implementation, based on Apache Spark and Esper, for the real-time part, which compensates for batch-processing latency and automates the detection of problems and failures. (paper)

  8. Monitoring WLCG with lambda-architecture: a new scalable data store and analytics platform for monitoring at petabyte scale.

    Science.gov (United States)

    Magnoni, L.; Suthakar, U.; Cordeiro, C.; Georgiou, M.; Andreeva, J.; Khan, A.; Smith, D. R.

    2015-12-01

    Monitoring the WLCG infrastructure requires the gathering and analysis of a high volume of heterogeneous data (e.g. data transfers, job monitoring, site tests) coming from different services and experiment-specific frameworks to provide a uniform and flexible interface for scientists and sites. The current architecture, where relational database systems are used to store, to process and to serve monitoring data, has limitations in coping with the foreseen increase in the volume (e.g. higher LHC luminosity) and the variety (e.g. new data-transfer protocols and new resource-types, such as cloud-computing) of WLCG monitoring events. This paper presents a new scalable data store and analytics platform designed by the Support for Distributed Computing (SDC) group, at the CERN IT department, which uses a variety of technologies each one targeting specific aspects of big-scale distributed data-processing (commonly referred to as the lambda-architecture approach). Results of data processing on Hadoop for WLCG data activities monitoring are presented, showing how the new architecture can easily analyze hundreds of millions of transfer logs in a few minutes. Moreover, a comparison of data partitioning, compression and file format (e.g. CSV, Avro) is presented, with particular attention given to how the file structure impacts the overall MapReduce performance. In conclusion, the evolution of the current implementation, which focuses on data storage and batch processing, towards a complete lambda-architecture is discussed, with consideration of candidate technology for the serving layer (e.g. Elasticsearch) and a description of a proof-of-concept implementation, based on Apache Spark and Esper, for the real-time part, which compensates for batch-processing latency and automates the detection of problems and failures.
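
    On the batch side of such a lambda architecture, the Hadoop processing described above amounts to large aggregations over transfer-log records. A sketch of what that looks like in PySpark follows (paths, field names, and the Avro input layout are illustrative assumptions, not the actual WLCG schemas; reading Avro requires the spark-avro package):

```python
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

# Batch layer: daily per-link aggregates over transfer logs on HDFS.
spark = SparkSession.builder.appName("wlcg-transfer-report").getOrCreate()

logs = spark.read.format("avro").load("hdfs:///wlcg/transfers/2015/*")
daily = (logs
         .groupBy(F.to_date("end_time").alias("day"), "src_site", "dst_site")
         .agg(F.count("*").alias("transfers"),
              F.sum("bytes").alias("bytes_moved"),
              F.avg("duration").alias("avg_duration_s")))
daily.write.mode("overwrite").parquet("hdfs:///wlcg/reports/daily_transfers")
```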

  9. Scalable photoreactor for hydrogen production

    KAUST Repository

    Takanabe, Kazuhiro; Shinagawa, Tatsuya

    2017-01-01

    Provided herein are scalable photoreactors that can include a membrane-free water- splitting electrolyzer and systems that can include a plurality of membrane-free water- splitting electrolyzers. Also provided herein are methods of using the scalable photoreactors provided herein.

  10. Scalable photoreactor for hydrogen production

    KAUST Repository

    Takanabe, Kazuhiro

    2017-04-06

    Provided herein are scalable photoreactors that can include a membrane-free water- splitting electrolyzer and systems that can include a plurality of membrane-free water- splitting electrolyzers. Also provided herein are methods of using the scalable photoreactors provided herein.

  11. Resource-aware complexity scalability for mobile MPEG encoding

    NARCIS (Netherlands)

    Mietens, S.O.; With, de P.H.N.; Hentschel, C.; Panchanatan, S.; Vasudev, B.

    2004-01-01

    Complexity scalability attempts to scale the required resources of an algorithm with the chosen quality settings, in order to broaden the application range. In this paper, we present complexity-scalable MPEG encoding in which the core processing modules are modified for scalability. Scalability is

  12. Scalability of Several Asynchronous Many-Task Models for In Situ Statistical Analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Pebay, Philippe Pierre [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Bennett, Janine Camille [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Kolla, Hemanth [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Borghesi, Giulio [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2017-05-01

    This report is a sequel to [PB16], in which we provided a first progress report on research and development towards a scalable, asynchronous many-task, in situ statistical analysis engine using the Legion runtime system. This earlier work included a prototype implementation of a proposed solution, using a proxy mini-application as a surrogate for a full-scale scientific simulation code. The first scalability studies were conducted with the above on modestly-sized experimental clusters. In contrast, in the current work we have integrated our in situ analysis engines with a full-size scientific application (S3D, using the Legion-SPMD model), and have conducted numerical tests on the largest computational platform currently available for DOE science applications. We also provide details regarding the design and development of a light-weight asynchronous collectives library. We describe how this library is utilized within our SPMD-Legion S3D workflow, and compare the data aggregation technique deployed herein to the approach taken within our previous work.

  13. Scalable Stream Processing with Quality of Service for Smart City Crowdsensing Applications

    Directory of Open Access Journals (Sweden)

    Paolo Bellavista

    2013-12-01

    Full Text Available Crowdsensing is emerging as a powerful paradigm capable of leveraging the collective, though imprecise, monitoring capabilities of common people carrying smartphones or other personal devices, which can effectively become real-time mobile sensors, collecting information about the physical places they live in. This unprecedented amount of information, considered collectively, offers new valuable opportunities to understand more thoroughly the environment in which we live and, more importantly, gives the chance to use this deeper knowledge to act and improve, in a virtuous loop, the environment itself. However, managing this process is a hard technical challenge, spanning several socio-technical issues: here, we focus on the related quality, reliability, and scalability trade-offs by proposing an architecture for crowdsensing platforms that dynamically self-configure and self-adapt depending on application-specific quality requirements. In the context of this general architecture, the paper will specifically focus on the Quasit distributed stream processing middleware, and show how Quasit can be used to process and analyze crowdsensing-generated data flows with differentiated quality requirements in a highly scalable and reliable way.

  14. Cloud-Based Software Platform for Smart Meter Data Management

    DEFF Research Database (Denmark)

    Liu, Xiufeng; Nielsen, Per Sieverts

    of the so-called big data possible. This can improve energy management, e.g., help utility companies to forecast energy loads and improve services, and help households to manage energy usage and save money. In this regard, the proposed paper focuses on building an innovative software platform for smart...... their knowledge; a scalable data analytics platform for data mining over big data sets for energy demand forecasting and consumption discovery; data as a service for other applications using smart meter data; and a portal for visualizing data analytics results. The design will incorporate hybrid clouds......, including Infrastructure as a Service (IaaS) and Platform as a Service (PaaS), which are suitable for on-demand provisioning, massive scaling, and manageability. Besides, the design will impose extensibility, efficiency, and high availability on the system. The paper will evaluate the system comprehensively...

  15. Adaptive format conversion for scalable video coding

    Science.gov (United States)

    Wan, Wade K.; Lim, Jae S.

    2001-12-01

    The enhancement layer in many scalable coding algorithms is composed of residual coding information. There is another type of information that can be transmitted instead of (or in addition to) residual coding. Since the encoder has access to the original sequence, it can utilize adaptive format conversion (AFC) to generate the enhancement layer and transmit the different format conversion methods as enhancement data. This paper investigates the use of adaptive format conversion information as enhancement data in scalable video coding. Experimental results are shown for a wide range of base layer qualities and enhancement bitrates to determine when AFC can improve video scalability. Since the parameters needed for AFC are small compared to residual coding, AFC can provide video scalability at low enhancement layer bitrates that are not possible with residual coding. AFC can also be used together with residual coding to improve video scalability at higher enhancement layer bitrates. Adaptive format conversion has not been studied in detail, but many scalable applications may benefit from it. An example of an application for which AFC is well-suited is the migration path for digital television, where AFC can provide immediate video scalability as well as assist future migrations.

  16. Scalable Density-Based Subspace Clustering

    DEFF Research Database (Denmark)

    Müller, Emmanuel; Assent, Ira; Günnemann, Stephan

    2011-01-01

    For knowledge discovery in high dimensional databases, subspace clustering detects clusters in arbitrary subspace projections. Scalability is a crucial issue, as the number of possible projections is exponential in the number of dimensions. We propose a scalable density-based subspace clustering...... method that steers mining to few selected subspace clusters. Our novel steering technique reduces subspace processing by identifying and clustering promising subspaces and their combinations directly. Thereby, it narrows down the search space while maintaining accuracy. Thorough experiments on real...... and synthetic databases show that steering is efficient and scalable, with high quality results. For future work, our steering paradigm for density-based subspace clustering opens research potential for speeding up other subspace clustering approaches as well....
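
    The cost that steering avoids can be made explicit with a naive baseline: exhaustively enumerate candidate subspaces and apply a DBSCAN-style core-point test in each projection. The Python sketch below only illustrates that exponential scan; it is not the authors' steering method, and all names and thresholds are invented.

      from itertools import combinations

      def dense_points(data, dims, eps=0.25, min_pts=3):
          """Points whose eps-neighbourhood (L-infinity metric) in the
          projection onto `dims` holds at least min_pts points."""
          def dist(p, q):
              return max(abs(p[d] - q[d]) for d in dims)
          return [p for p in data
                  if sum(dist(p, q) <= eps for q in data) >= min_pts]

      def subspace_scan(data, max_dim=2):
          """Exhaustive scan -- exponential in dimensionality, which is
          exactly the blow-up that steering is designed to avoid."""
          result = {}
          for k in range(1, max_dim + 1):
              for dims in combinations(range(len(data[0])), k):
                  core = dense_points(data, dims)
                  if core:
                      result[dims] = core
          return result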

  17. Scalable devices

    KAUST Repository

    Krüger, Jens J.; Hadwiger, Markus

    2014-01-01

    In computer science in general, and in the field of high performance computing and supercomputing in particular, the term scalable plays an important role. It indicates that a piece of hardware, a concept, an algorithm, or an entire system scales

  18. A measurement of electron-wall interactions using transmission diffraction from nanofabricated gratings

    International Nuclear Information System (INIS)

    Barwick, Brett; Gronniger, Glen; Yuan, Lu; Liou, Sy-Hwang; Batelaan, Herman

    2006-01-01

    Electron diffraction from metal coated freestanding nanofabricated gratings is presented, with a quantitative path integral analysis of the electron-grating interactions. Electron diffraction out to the 20th order was observed, indicating the high quality of our nanofabricated gratings. The electron beam is collimated to its diffraction limit with ion-milled material slits. Our path integral analysis is first tested against single slit electron diffraction, and then further expanded with the same theoretical approach to describe grating diffraction. Rotation of the grating with respect to the incident electron beam varies the effective distance between the electron and the grating bars. This allows the measurement of the image charge potential between the electron and the grating bars. Image charge potentials of about 15% of the value for a pure electron-metal wall interaction were found. We varied the electron energy from 50 to 900 eV. The interaction time is of the order of typical metal image charge response times and in principle allows the investigation of image charge formation. In addition to the image charge interaction there is a dephasing process reducing the transverse coherence length of the electron wave. The dephasing process causes broadening of the diffraction peaks and is consistent with a model that ascribes the dephasing process to microscopic contact potentials. Surface structures with length scales of about 200 nm observed with a scanning tunneling microscope, and a dephasing interaction strength typical of contact potentials of 0.35 eV, support this claim. Such a dephasing model motivated the investigation of different metallic coatings, in particular Ni, Ti, Al, and different thickness Au-Pd coatings. Improved quality of diffraction patterns was found for Ni. This coating made electron diffraction possible at energies as low as 50 eV. This energy was limited by our electron gun design. These results are particularly relevant for the

  19. Assembling phosphorene flexagons for 2D electron-density-guided nanopatterning and nanofabrication.

    Science.gov (United States)

    Kang, Kisung; Jang, Woosun; Soon, Aloysius

    2017-07-27

    To build upon the rich structural diversity in the ever-increasing polymorphic phases of two-dimensional phosphorene, we propose different assembly methods (namely, the "bottom-up" and "top-down" approaches) that involve four commonly reported parent phases (i.e. the α-, β-, γ-, and δ-phosphorene) in combination with the lately reported remarkably low-energy one-dimensional defects in α-phosphorene. In doing so, we generate various periodically repeated phosphorene patterns in these so-called phosphorene flexagons and present their local electron density (via simulated scanning tunneling microscopy (STM) images). These interesting electron density patterns seen in the flexagons (mimicking symmetry patterns that one may typically see in a kaleidoscope) may serve as potential 2D templates where electron-density-guided nanopatterning and nanofabrication in complex organized nanoarchitectures are important.

  20. Computing Platforms for Big Biological Data Analytics: Perspectives and Challenges.

    Science.gov (United States)

    Yin, Zekun; Lan, Haidong; Tan, Guangming; Lu, Mian; Vasilakos, Athanasios V; Liu, Weiguo

    2017-01-01

    The last decade has witnessed an explosion in the amount of available biological sequence data, due to the rapid progress of high-throughput sequencing projects. However, the amount of biological data is becoming so great that traditional data analysis platforms and methods can no longer meet the need to rapidly perform data analysis tasks in life sciences. As a result, both biologists and computer scientists are facing the challenge of gaining a profound insight into the deepest biological functions from big biological data. This in turn requires massive computational resources. Therefore, high performance computing (HPC) platforms are highly needed, as well as efficient and scalable algorithms that can take advantage of these platforms. In this paper, we survey the state-of-the-art HPC platforms for big biological data analytics. We first list the characteristics of big biological data and popular computing platforms. Then we provide a taxonomy of different biological data analysis applications and a survey of the way they have been mapped onto various computing platforms. After that, we present a case study to compare the efficiency of different computing platforms for handling the classical biological sequence alignment problem. Finally, we discuss the open issues in big biological data analytics.
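
    The alignment kernel used as the case study above can be stated compactly. The following is a minimal serial reference for Smith-Waterman local alignment with a linear gap penalty (scoring values are illustrative, not those used in the survey); HPC platforms compared in such studies parallelize exactly this dynamic-programming recurrence.

      def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
          """Return the best local alignment score between sequences a, b."""
          H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
          best = 0
          for i in range(1, len(a) + 1):
              for j in range(1, len(b) + 1):
                  s = match if a[i - 1] == b[j - 1] else mismatch
                  H[i][j] = max(0,
                                H[i - 1][j - 1] + s,   # (mis)match
                                H[i - 1][j] + gap,     # gap in b
                                H[i][j - 1] + gap)     # gap in a
                  best = max(best, H[i][j])
          return best

      print(smith_waterman("GATTACA", "GCATGCU"))  # small smoke test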

  1. I(Re)2-WiNoC: Exploring scalable wireless on-chip micronetworks for heterogeneous embedded many-core SoCs

    Directory of Open Access Journals (Sweden)

    Dan Zhao

    2015-02-01

    In this work, an irregular and reconfigurable WiNoC platform is proposed to tackle ever-increasing complexity, density and heterogeneity challenges. A flexible RF infrastructure is established where RF nodes are properly distributed and IP cores are clustered. Consequently, a performance-cost effective topology is formed. A region-aided routing scheme is further designed and implemented to realize loop-free routing, minimum path cost and high scalability for the irregular WiNoC infrastructure. To implement the data transmission protocol, the RF microarchitecture of WiNoC is developed where the RF nodes are designed to fulfill the functions of distributed table routing, multi-channel arbitration, virtual output queuing, and distributed flow control. Our simulation studies based on synthetic traffic demonstrate the network efficiency and scalability of WiNoC.

  2. Antibacterial Au nanostructured surfaces

    Science.gov (United States)

    Wu, Songmei; Zuber, Flavia; Brugger, Juergen; Maniura-Weber, Katharina; Ren, Qun

    2016-01-01

    We present here a technological platform for engineering Au nanotopographies by templated electrodeposition on antibacterial surfaces. Three different types of nanostructures were fabricated: nanopillars, nanorings and nanonuggets. The nanopillars are the basic structures and are 50 nm in diameter and 100 nm in height. Particular arrangements of the nanopillars in various geometries formed the nanorings and nanonuggets. Flat surfaces, rough substrate surfaces, and various nanostructured surfaces were compared for their abilities to attach and kill bacterial cells. Methicillin-resistant Staphylococcus aureus, a Gram-positive bacterial strain responsible for many infections in the health care system, was used as the model bacterial strain. It was found that all the Au nanostructures, regardless of their shapes, exhibited similar excellent antibacterial properties. A comparison of live cells attached to the nanotopographic surfaces showed that the number of live S. aureus cells was substantially lower than on the flat and rough reference surfaces. Our micro/nanofabrication process is a scalable approach based on cost-efficient self-organization and provides potential for further developing functional surfaces to study the behavior of microbes on nanoscale topographies.

  3. ATLAS Analytics and Machine Learning Platforms

    CERN Document Server

    Vukotic, Ilija; The ATLAS collaboration; Legger, Federica; Gardner, Robert

    2018-01-01

    In 2015 ATLAS Distributed Computing started to migrate its monitoring systems away from Oracle DB and decided to adopt new big data platforms that are open source, horizontally scalable, and offer the flexibility of NoSQL systems. Three years later, the full software stack is in place, the system is considered in production and operating at near maximum capacity (in terms of storage capacity and tightly coupled analysis capability). The new model provides several tools for fast and easy to deploy monitoring and accounting. The main advantages are: ample ways to do complex analytics studies (using technologies such as java, pig, spark, python, jupyter), flexibility in reorganization of data flows, near real time and inline processing. The analytics studies improve our understanding of different computing systems and their interplay, thus enabling whole-system debugging and optimization. In addition, the platform provides services to alarm or warn on anomalous conditions, and several services closing feedback l...

  4. Integrated culture platform based on a human platelet lysate supplement for the isolation and scalable manufacturing of umbilical cord matrix-derived mesenchymal stem/stromal cells.

    Science.gov (United States)

    de Soure, António M; Fernandes-Platzgummer, Ana; Moreira, Francisco; Lilaia, Carla; Liu, Shi-Hwei; Ku, Chen-Peng; Huang, Yi-Feng; Milligan, William; Cabral, Joaquim M S; da Silva, Cláudia L

    2017-05-01

    Umbilical cord matrix (UCM)-derived mesenchymal stem/stromal cells (MSCs) are promising therapeutic candidates for regenerative medicine settings. UCM MSCs have advantages over adult cells as these can be obtained through a non-invasive harvesting procedure and display a higher proliferative capacity. However, the high cell doses required in the clinical setting make large-scale manufacturing of UCM MSCs mandatory. A commercially available human platelet lysate-based culture supplement (UltraGRO TM , AventaCell BioMedical) (5%(v/v)) was tested to effectively isolate UCM MSCs and to expand these cells under (1) static conditions, using planar culture systems and (2) stirred culture using plastic microcarriers in a spinner flask. The MSC-like cells were isolated from UCM explant cultures after 11 ± 2 days. After five passages in static culture, UCM MSCs retained their immunophenotype and multilineage differentiation potential. The UCM MSCs cultured under static conditions using UltraGRO TM -supplemented medium expanded more rapidly compared with UCM MSCs expanded using a previously established protocol. Importantly, UCM MSCs were successfully expanded under dynamic conditions on plastic microcarriers using UltraGRO TM -supplemented medium in spinner flasks. Upon an initial 54% cell adhesion to the beads, UCM MSCs expanded by >13-fold after 5-6 days, maintaining their immunophenotype and multilineage differentiation ability. The present paper reports the establishment of an easily scalable integrated culture platform based on a human platelet lysate supplement for the effective isolation and expansion of UCM MSCs in a xenogeneic-free microcarrier-based system. This platform represents an important advance in obtaining safer and clinically meaningful MSC numbers for clinical translation. Copyright © 2016 John Wiley & Sons, Ltd.

  5. Scalable algorithms for contact problems

    CERN Document Server

    Dostál, Zdeněk; Sadowská, Marie; Vondrák, Vít

    2016-01-01

    This book presents a comprehensive and self-contained treatment of the authors’ newly developed scalable algorithms for the solutions of multibody contact problems of linear elasticity. The brand new feature of these algorithms is theoretically supported numerical scalability and parallel scalability demonstrated on problems discretized by billions of degrees of freedom. The theory supports solving multibody frictionless contact problems, contact problems with possibly orthotropic Tresca’s friction, and transient contact problems. It covers BEM discretization, jumping coefficients, floating bodies, mortar non-penetration conditions, etc. The exposition is divided into four parts, the first of which reviews appropriate facets of linear algebra, optimization, and analysis. The most important algorithms and optimality results are presented in the third part of the volume. The presentation is complete, including continuous formulation, discretization, decomposition, optimality results, and numerical experimen...

  6. An intermittent rocking platform for integrated expansion and differentiation of human pluripotent stem cells to cardiomyocytes in suspended microcarrier cultures

    Directory of Open Access Journals (Sweden)

    Sherwin Ting

    2014-09-01

    In conclusion, we have developed a simple, robust and scalable platform that integrates both hESC expansion and CM differentiation in one unit process and is capable of meeting the need for large amounts of CMs.

  7. A holistic approach to SIM platform and its application to early-warning satellite system

    Science.gov (United States)

    Sun, Fuyu; Zhou, Jianping; Xu, Zheyao

    2018-01-01

    This study proposes a new simulation platform named Simulation Integrated Management (SIM) for the analysis of parallel and distributed systems. The platform eases the process of designing and testing both applications and architectures. The main characteristics of SIM are flexibility, scalability, and expandability. To improve the efficiency of project development, new models of the early-warning satellite system were designed based on the SIM platform. Finally, through a series of experiments, the correctness of the SIM platform and the aforementioned early-warning satellite models was validated, and systematic analyses of the orbit determination precision of the ballistic missile during its entire flight process were presented, as well as the deviation of the launch/landing point. Furthermore, the causes of deviation and prevention methods will be fully explained. The simulation platform and the models will lay the foundations for further validations of autonomy technology in space attack-defense architecture research.

  8. Development, Verification and Validation of Parallel, Scalable Volume of Fluid CFD Program for Propulsion Applications

    Science.gov (United States)

    West, Jeff; Yang, H. Q.

    2014-01-01

    There are many instances involving liquid/gas interfaces and their dynamics in the design of liquid engine powered rockets such as the Space Launch System (SLS). Some examples of these applications are: propellant tank draining and slosh, subcritical condition injector analysis for gas generators, preburners and thrust chambers, water deluge mitigation for launch-induced environments, and even solid rocket motor liquid slag dynamics. Commercially available CFD programs simulating gas/liquid interfaces using the Volume of Fluid approach are currently limited in their parallel scalability. In 2010, for instance, an internal NASA/MSFC review of three commercial tools revealed that parallel scalability was seriously compromised at 8 cpus and no additional speedup was possible after 32 cpus. Other non-interface CFD applications at the time were demonstrating useful parallel scalability up to 4,096 processors or more. Based on this review, NASA/MSFC initiated an effort to implement a Volume of Fluid implementation within the unstructured mesh, pressure-based algorithm CFD program, Loci-STREAM. After verification was achieved by comparing results to the commercial CFD program CFD-Ace+, and validation by direct comparison with data, Loci-STREAM-VoF is now the production CFD tool for propellant slosh force and slosh damping rate simulations at NASA/MSFC. On these applications, good parallel scalability has been demonstrated for problem sizes of tens of millions of cells and thousands of cpu cores. Ongoing efforts are focused on the application of Loci-STREAM-VoF to predict the transient flow patterns of water on the SLS Mobile Launch Platform, in order to support the phasing of water for launch environment mitigation so that detrimental effects on the vehicle are not realized.
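
    In its standard textbook form (not specific to Loci-STREAM-VoF), the Volume of Fluid approach tracks the liquid/gas interface through a volume fraction field F advected by the incompressible flow:

      \frac{\partial F}{\partial t} + \nabla \cdot (\mathbf{u}\, F) = 0,
      \qquad \nabla \cdot \mathbf{u} = 0,

    where F = 1 in liquid cells, F = 0 in gas cells, and 0 < F < 1 marks interface cells whose geometry is reconstructed each time step.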

  9. Fable: Design of a Modular Robotic Playware Platform

    DEFF Research Database (Denmark)

    Pacheco, Moises; Moghadam, Mikael; Magnússon, Arnþór

    2013-01-01

    We are developing the Fable modular robotic system as a playware platform that will enable non-expert users to develop robots ranging from advanced robotic toys to robotic solutions to problems encountered in their daily lives. This paper presents the mechanical design of Fable: a chain-based system composed of reconfigurable heterogeneous modules with a reliable and scalable connector. Furthermore, this paper describes tests where the connector design is tested with children, and presents examples of a moving snake and a quadruped robot, as well as an interactive upper humanoid torso.

  10. Performance evaluations of advanced massively parallel platforms based on gyrokinetic toroidal five-dimensional Eulerian code GT5D

    International Nuclear Information System (INIS)

    Idomura, Yasuhiro; Jolliet, Sebastien

    2010-01-01

    A gyrokinetic toroidal five-dimensional Eulerian code GT5D is ported on six advanced massively parallel platforms and comprehensive benchmark tests are performed. A parallelisation technique based on physical properties of the gyrokinetic equation is presented. By extending the parallelisation technique with a hybrid parallel model, the scalability of the code is improved on platforms with multi-core processors. In the benchmark tests, good scalability is confirmed up to several thousand cores on all platforms, and the maximum sustained performance of ∼18.6 Tflops is achieved using 16384 cores of BX900. (author)

  11. Template-free electrochemical nanofabrication of polyaniline nanobrush and hybrid polyaniline with carbon nanohorns for supercapacitors

    Energy Technology Data Exchange (ETDEWEB)

    Wei Di; Andrew, Piers; Ryhaenen, Tapani [Nokia Research Centre Cambridge, Broers Building, 21 J J Thomson Avenue, Cambridge CB3 0FA (United Kingdom); Wang, Haolan; Hiralal, Pritesh; Amaratunga, Gehan A J [Electrical Engineering Division, Department of Engineering, University of Cambridge, 9 J J Thomson Avenue, Cambridge CB3 0FA (United Kingdom); Hayashi, Yasuhiko, E-mail: di.wei@nokia.com, E-mail: gaja1@cam.ac.uk [Department of Materials Science, Nagoya Institute of Technology, Nagoya 466-8555 (Japan)

    2010-10-29

    Polyaniline (PANI) nanobrushes were synthesized by template-free electrochemical galvanostatic methods. When the same method was applied to the carbon nanohorn (CNH) solution containing aniline monomers, a hybrid nanostructure containing PANI and CNHs was enabled after electropolymerization. This is the first report on the template-free method to make PANI nanobrushes and homogeneous hybrid soft matter (PANI) with carbon nanoparticles. Raman spectroscopy was used to analyze the interaction between CNH and PANI. Electrochemical nanofabrication offers simplicity and good control when used to make electronic devices. Both of these materials were applied in supercapacitors, and an improvement in capacitive current when using the hybrid material was observed.

  12. Template-free electrochemical nanofabrication of polyaniline nanobrush and hybrid polyaniline with carbon nanohorns for supercapacitors

    Science.gov (United States)

    Wei, Di; Wang, Haolan; Hiralal, Pritesh; Andrew, Piers; Ryhänen, Tapani; Hayashi, Yasuhiko; Amaratunga, Gehan A. J.

    2010-10-01

    Polyaniline (PANI) nanobrushes were synthesized by template-free electrochemical galvanostatic methods. When the same method was applied to the carbon nanohorn (CNH) solution containing aniline monomers, a hybrid nanostructure containing PANI and CNHs was enabled after electropolymerization. This is the first report on the template-free method to make PANI nanobrushes and homogeneous hybrid soft matter (PANI) with carbon nanoparticles. Raman spectroscopy was used to analyze the interaction between CNH and PANI. Electrochemical nanofabrication offers simplicity and good control when used to make electronic devices. Both of these materials were applied in supercapacitors, and an improvement in capacitive current when using the hybrid material was observed.

  13. New Complexity Scalable MPEG Encoding Techniques for Mobile Applications

    Directory of Open Access Journals (Sweden)

    Stephan Mietens

    2004-03-01

    Full Text Available Complexity scalability offers the advantage of one-time design of video applications for a large product family, including mobile devices, without the need of redesigning the applications on the algorithmic level to meet the requirements of the different products. In this paper, we present complexity scalable MPEG encoding having core modules with modifications for scalability. The interdependencies of the scalable modules and the system performance are evaluated. Experimental results show scalability giving a smooth change in complexity and corresponding video quality. Scalability is basically achieved by varying the number of computed DCT coefficients and the number of evaluated motion vectors, but other modules are designed such that they scale with these parameters. In the experiments using the “Stefan” sequence, the elapsed execution time of the scalable encoder, reflecting the computational complexity, can be gradually reduced to roughly 50% of its original execution time. The video quality scales between 20 dB and 48 dB PSNR with unity quantizer setting, and between 21.5 dB and 38.5 dB PSNR for different sequences targeting 1500 kbps. The implemented encoder and the scalability techniques can be successfully applied in mobile systems based on MPEG video compression.
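
    The main scalability knob named above, varying the number of computed DCT coefficients, can be sketched as follows. This hypothetical Python illustration (not the paper's implementation) evaluates only the first k coefficients of an 8x8 DCT-II in zig-zag order, so complexity scales roughly linearly with k while quality degrades gracefully.

      import math

      def zigzag(n=8):
          """Enumerate (u, v) indices of an n x n block in zig-zag order."""
          return sorted(((u, v) for u in range(n) for v in range(n)),
                        key=lambda p: (p[0] + p[1],
                                       p[0] if (p[0] + p[1]) % 2 else p[1]))

      def partial_dct(block, k):
          """Compute only the first k of the n*n DCT-II coefficients."""
          n = len(block)
          c = lambda w: math.sqrt((2 if w else 1) / n)  # DCT-II norm
          coeffs = {}
          for (u, v) in zigzag(n)[:k]:                  # the complexity knob
              s = sum(block[x][y]
                      * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                      * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                      for x in range(n) for y in range(n))
              coeffs[(u, v)] = c(u) * c(v) * s
          return coeffs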

  14. CISP: Simulation Platform for Collective Instabilities in the BRing of HIAF project

    Science.gov (United States)

    Liu, J.; Yang, J. C.; Xia, J. W.; Yin, D. Y.; Shen, G. D.; Li, P.; Zhao, H.; Ruan, S.; Wu, B.

    2018-02-01

    To simulate collective instabilities during the complicated beam manipulation in the BRing (Booster Ring) of HIAF (High Intensity heavy-ion Accelerator Facility) or other high intensity accelerators, a code named CISP (Simulation Platform for Collective Instabilities) has been designed and constructed at China's IMP (Institute of Modern Physics). The CISP is a scalable multi-macroparticle simulation platform that can perform longitudinal and transverse tracking when chromaticity, space charge effect, nonlinear magnets and wakes are included. Due to its object-oriented design, the CISP is also a basic platform used to develop many other applications (like feedback). Several simulations, completed by the CISP in this paper, agree with analytical results very well, which shows that the CISP is fully functional now and is a powerful platform for further collective instability research in the BRing or other accelerators. In the future, the CISP can also be extended easily into a physics control system for HIAF or other facilities.

  15. The Prodiguer Messaging Platform

    Science.gov (United States)

    Denvil, S.; Greenslade, M. A.; Carenton, N.; Levavasseur, G.; Raciazek, J.

    2015-12-01

    CONVERGENCE is a French multi-partner national project designed to gather HPC and informatics expertise to innovate in the context of running French global climate models with differing grids and at differing resolutions. Efficient and reliable execution of these models and the management and dissemination of model output are some of the complexities that CONVERGENCE aims to resolve. At any one moment in time, researchers affiliated with the Institut Pierre Simon Laplace (IPSL) climate modeling group are running hundreds of global climate simulations. These simulations execute upon a heterogeneous set of French High Performance Computing (HPC) environments. The IPSL's simulation execution runtime libIGCM (library for IPSL Global Climate Modeling group) has recently been enhanced so as to support hitherto impossible realtime use cases such as simulation monitoring, data publication, metrics collection, simulation control, visualizations, etc. At the core of this enhancement is Prodiguer: an AMQP (Advanced Message Queue Protocol) based event-driven asynchronous distributed messaging platform. libIGCM now dispatches copious amounts of information, in the form of messages, to the platform for remote processing by Prodiguer software agents at IPSL servers in Paris. Such processing takes several forms: persisting message content to database(s); launching rollback jobs upon simulation failure; notifying downstream applications; automation of visualization pipelines. We will describe and/or demonstrate the platform's: technical implementation; inherent ease of scalability; inherent adaptiveness with respect to supervising simulations; web portal receiving simulation notifications in realtime.
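
    A minimal producer in the spirit of the platform might look as follows, using the Python pika AMQP client. The exchange and routing-key names here are invented for illustration and are not Prodiguer's actual message schema.

      import json
      import pika  # AMQP 0-9-1 client, e.g. for RabbitMQ

      connection = pika.BlockingConnection(
          pika.ConnectionParameters("localhost"))
      channel = connection.channel()
      channel.exchange_declare(exchange="simulation.events",
                               exchange_type="topic")

      event = {"simulation": "demo-run", "state": "running", "step": 42}
      channel.basic_publish(
          exchange="simulation.events",
          routing_key="monitoring.heartbeat",  # consumers bind by pattern
          body=json.dumps(event),
      )
      connection.close()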

  16. Atomic layer deposition (ALD) for optical nanofabrication

    Science.gov (United States)

    Maula, Jarmo

    2010-02-01

    ALD is currently one of the most rapidly developing fields of thin film technology. This presentation gives an overview of ALD technology for optical film deposition, highlighting benefits, drawbacks and peculiarities of ALD, especially compared to PVD. The viewpoint is practical, based on experience gained from tens of different applications over the last few decades. ALD is not a competing but an enabling technology, providing coatings that are difficult for traditional technologies. Examples of such cases are films inside tubes; double-sided deposition on the substrate; large-area accurate coatings; decorative coatings for 3D parts; and conformal coatings on high-aspect-ratio surfaces or inside porous structures. Novel materials can be easily engineered by making modifications on the molecular level. ALD coats large surfaces effectively and fast. Contrary to the common view, it actually provides high throughput (coated area/time) when used properly with batch and/or in-line tools. It is possible to use ALD for films many micrometers thick or even to produce thin parts at competitive cost. Besides optical films, ALD provides a large variety of features for nanofabrication, for example pin-hole-free films for passivation and barrier applications, and the best available films for conformal coatings such as planarization or improving surface smoothness. High deposition repeatability, even with subnanometer film structures, helps fabrication. ALD enters production mostly through new products not yet existing on the market, and so the application IP field is reasonably open. ALD is an enabling, mature technology to fabricate novel optical materials and to open pathways for new applications.

  17. Scalable Fabrication of Integrated Nanophotonic Circuits on Arrays of Thin Single Crystal Diamond Membrane Windows.

    Science.gov (United States)

    Piracha, Afaq H; Rath, Patrik; Ganesan, Kumaravelu; Kühn, Stefan; Pernice, Wolfram H P; Prawer, Steven

    2016-05-11

    Diamond has emerged as a promising platform for nanophotonic, optical, and quantum technologies. High-quality, single crystalline substrates of acceptable size are a prerequisite to meet the demanding requirements on low-level impurities and low absorption loss when targeting large photonic circuits. Here, we describe a scalable fabrication method for single crystal diamond membrane windows that achieves three major goals with one fabrication method: providing high quality diamond, as confirmed by Raman spectroscopy; achieving homogeneously thin membranes, enabled by ion implantation; and providing compatibility with established planar fabrication via lithography and vertical etching. On such suspended diamond membranes we demonstrate a suite of photonic components as building blocks for nanophotonic circuits. Monolithic grating couplers are used to efficiently couple light between photonic circuits and optical fibers. In waveguide coupled optical ring resonators, we find loaded quality factors up to 66 000 at a wavelength of 1560 nm, corresponding to propagation loss below 7.2 dB/cm. Our approach holds promise for the scalable implementation of future diamond quantum photonic technologies and all-diamond photonic metrology tools.
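
    The reported figures are mutually consistent under the standard relation between loaded quality factor and propagation loss,

      \alpha \;\approx\; \frac{2\pi\, n_g}{Q_L\, \lambda} \cdot 10\log_{10} e
      \qquad \text{(dB per unit length)},

    which, with Q_L = 66 000, lambda = 1560 nm, and an assumed group index n_g of about 2.7 (our assumption; the record does not state it), gives roughly 7.2 dB/cm.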

  18. Scalable Parallel Distributed Coprocessor System for Graph Searching Problems with Massive Data

    Directory of Open Access Journals (Sweden)

    Wanrong Huang

    2017-01-01

    Full Text Available Internet applications, such as network searching, electronic commerce, and modern medical applications, produce and process massive data. Considerable data parallelism exists in the computation processes of data-intensive applications. A traversal algorithm, breadth-first search (BFS), is fundamental in many graph processing applications and metrics when a graph grows in scale. A variety of scientific programming methods have been proposed for accelerating and parallelizing BFS because of the poor temporal and spatial locality caused by inherent irregular memory access patterns. However, new parallel hardware could provide better improvement for scientific methods. To address small-world graph problems, we propose a scalable and novel field-programmable gate array-based heterogeneous multicore system for scientific programming. The core is multithreaded for streaming processing, and the InfiniBand communication network is adopted for scalability. We design a binary search algorithm for address mapping to unify all processor addresses. Within the limits permitted by the Graph500 test bench, after testing a 1D parallel hybrid BFS algorithm, our 8-core and 8-thread-per-core system achieved superior performance and efficiency compared with prior work under the same degree of parallelism. Our system is efficient not as a special acceleration unit but as a processor platform that deals with graph searching applications.
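
    The serial reference against which such accelerators are judged is the level-synchronous BFS below; Graph500-style systems parallelize this same frontier expansion, e.g. with the 1D partitioning mentioned above. This Python sketch is illustrative only.

      from collections import deque

      def bfs_levels(adj, source):
          """adj maps vertex -> iterable of neighbours; returns BFS levels."""
          level = {source: 0}
          frontier = deque([source])
          while frontier:
              v = frontier.popleft()
              for w in adj[v]:
                  if w not in level:        # first visit fixes the level
                      level[w] = level[v] + 1
                      frontier.append(w)
          return level

      adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
      print(bfs_levels(adj, 0))             # {0: 0, 1: 1, 2: 1, 3: 2}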

  19. Flash NanoPrecipitation (FNP) for bioengineering nanoparticles to enhance the bioavailability

    Science.gov (United States)

    Feng, Jie; Zhang, Yingyue; McManus, Simone; Prud'Homme, Robert

    2017-11-01

    Nanoparticles for the delivery of therapeutics have been one of the successful areas in biomedical nanotechnology. Nanoparticles improve bioavailability by 1) higher surface-to-volume ratios, which enhance dissolution rates, and 2) trapping drug molecules in higher-energy, amorphous states for higher solubility. However, conventional direct precipitation for preparing nanoparticles suffers from low loading and encapsulation efficiency. Here we demonstrate a kinetically controlled, rapid precipitation process called Flash NanoPrecipitation (FNP) that offers a multi-phase mixing platform for bioengineering nanoparticles. With the designed geometry in the micro-mixer, we can generate nanoparticles with a narrow size distribution while maintaining high loading and encapsulation efficiency. By controlling the time scales in FNP, we can tune the nanoparticle size and the robustness of the process. Remarkably, the dissolution rates of the nanoparticles are significantly improved compared with crystalline drug powders. Furthermore, we investigate how to recover the drug-loaded nanoparticles from the aqueous dispersions. Regarding maintenance of the bioavailability, we discuss the advantages and disadvantages of each drying process. These results suggest that FNP offers a versatile and scalable nano-fabrication platform for biomedical engineering.

  20. A scalable variational inequality approach for flow through porous media models with pressure-dependent viscosity

    Science.gov (United States)

    Mapakshi, N. K.; Chang, J.; Nakshatrala, K. B.

    2018-04-01

    Mathematical models for flow through porous media typically enjoy the so-called maximum principles, which place bounds on the pressure field. It is highly desirable to preserve these bounds on the pressure field in predictive numerical simulations, that is, one needs to satisfy discrete maximum principles (DMP). Unfortunately, many of the existing formulations for flow through porous media models do not satisfy DMP. This paper presents a robust, scalable numerical formulation based on variational inequalities (VI), to model non-linear flows through heterogeneous, anisotropic porous media without violating DMP. VI is an optimization technique that places bounds on the numerical solutions of partial differential equations. To crystallize the ideas, a modification to Darcy equations by taking into account pressure-dependent viscosity will be discretized using the lowest-order Raviart-Thomas (RT0) and Variational Multi-scale (VMS) finite element formulations. It will be shown that these formulations violate DMP, and, in fact, these violations increase with an increase in anisotropy. It will be shown that the proposed VI-based formulation provides a viable route to enforce DMP. Moreover, it will be shown that the proposed formulation is scalable, and can work with any numerical discretization and weak form. A series of numerical benchmark problems are solved to demonstrate the effects of heterogeneity, anisotropy and non-linearity on DMP violations under the two chosen formulations (RT0 and VMS), and that of non-linearity on solver convergence for the proposed VI-based formulation. Parallel scalability on modern computational platforms will be illustrated through strong-scaling studies, which will prove the efficiency of the proposed formulation in a parallel setting. Algorithmic scalability as the problem size is scaled up will be demonstrated through novel static-scaling studies. The performed static-scaling studies can serve as a guide for users to be able to select
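
    Stripped of discretization details, the bound-preserving statement referenced above takes the generic form of a variational inequality posed over the set of pressure fields satisfying the maximum-principle bounds (a schematic statement, not the paper's exact weak form):

      \text{find } p \in K \text{ such that}\quad
      \mathcal{F}(p;\, q - p) \,\geq\, 0 \quad \forall\, q \in K,
      \qquad
      K := \{\, q \;:\; p_{\min} \leq q \leq p_{\max} \,\},

    so the discrete maximum principle holds by construction, because admissible solutions are confined to K.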

  1. The possibility of multi-layer nanofabrication via atomic force microscope-based pulse electrochemical nanopatterning

    Science.gov (United States)

    Kim, Uk Su; Morita, Noboru; Lee, Deug Woo; Jun, Martin; Park, Jeong Woo

    2017-05-01

    Pulse electrochemical nanopatterning, a non-contact scanning probe lithography process using ultrashort voltage pulses, is based primarily on an electrochemical machining process using localized electrochemical oxidation between a sharp tool tip and the sample surface. In this study, nanoscale oxide patterns were formed on silicon Si (100) wafer surfaces via electrochemical surface nanopatterning, by supplying external pulsed currents through non-contact atomic force microscopy. Nanoscale oxide width and height were controlled by modulating the applied pulse duration. Additionally, protruding nanoscale oxides were removed completely by simple chemical etching, showing a depressed pattern on the sample substrate surface. Nanoscale two-dimensional oxides, prepared by a localized electrochemical reaction, can be defined easily by controlling physical and electrical variables, before proceeding further to a layer-by-layer nanofabrication process.

  2. Advanced Nanofabrication Process Development for Self-Powered System-on-Chip

    KAUST Repository

    Rojas, Jhonathan Prieto

    2010-01-01

    In summary, by using a novel sustainable energy component and scalable nano-patterning for the logic and computing module, this work has collected the essential base knowledge and joined two different elements that will synergistically contribute to the future implementation of a Self-Powered System-on-Chip.

  3. Coalescent: an open-source and scalable framework for exact calculations in coalescent theory

    Science.gov (United States)

    2012-01-01

    Background Currently, there is no open-source, cross-platform and scalable framework for coalescent analysis in population genetics. There is no scalable GUI-based user application either. Such a framework and application would not only drive the creation of more complex and realistic models but also make them truly accessible. Results As a first attempt, we built a framework and user application for the domain of exact calculations in coalescent analysis. The framework provides an API with the concepts of model, data, statistic, phylogeny, gene tree and recursion. Infinite-alleles and infinite-sites models are considered. It defines pluggable computations such as counting and listing all the ancestral configurations and genealogies and computing the exact probability of data. It can visualize a gene tree, trace and visualize the internals of the recursion algorithm for further improvement and attach dynamically a number of output processors. The user application defines jobs in a plug-in like manner so that they can be activated, deactivated, installed or uninstalled on demand. Multiple jobs can be run and their inputs edited. Job inputs are persisted across restarts and running jobs can be cancelled where applicable. Conclusions Coalescent theory plays an increasingly important role in analysing molecular population genetic data. Models involved are mathematically difficult and computationally challenging. An open-source, scalable framework that lets users immediately take advantage of the progress made by others will enable exploration of yet more difficult and realistic models. As models become more complex and mathematically less tractable, the need for an integrated computational approach is obvious. Object-oriented designs, though they have upfront costs, are practical now and can provide such an integrated approach. PMID:23033878

  4. Coalescent: an open-source and scalable framework for exact calculations in coalescent theory

    Directory of Open Access Journals (Sweden)

    Tewari Susanta

    2012-10-01

    Full Text Available Abstract Background Currently, there is no open-source, cross-platform and scalable framework for coalescent analysis in population genetics. There is no scalable GUI-based user application either. Such a framework and application would not only drive the creation of more complex and realistic models but also make them truly accessible. Results As a first attempt, we built a framework and user application for the domain of exact calculations in coalescent analysis. The framework provides an API with the concepts of model, data, statistic, phylogeny, gene tree and recursion. Infinite-alleles and infinite-sites models are considered. It defines pluggable computations such as counting and listing all the ancestral configurations and genealogies and computing the exact probability of data. It can visualize a gene tree, trace and visualize the internals of the recursion algorithm for further improvement and attach dynamically a number of output processors. The user application defines jobs in a plug-in like manner so that they can be activated, deactivated, installed or uninstalled on demand. Multiple jobs can be run and their inputs edited. Job inputs are persisted across restarts and running jobs can be cancelled where applicable. Conclusions Coalescent theory plays an increasingly important role in analysing molecular population genetic data. Models involved are mathematically difficult and computationally challenging. An open-source, scalable framework that lets users immediately take advantage of the progress made by others will enable exploration of yet more difficult and realistic models. As models become more complex and mathematically less tractable, the need for an integrated computational approach is obvious. Object-oriented designs, though they have upfront costs, are practical now and can provide such an integrated approach.
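
    As a flavour of the exact calculations such a framework exposes, the probability of an allele configuration under the infinite-alleles model is given in closed form by the Ewens sampling formula. The Python sketch below illustrates that formula; it is not the framework's own API.

      from math import factorial

      def ewens_probability(counts, theta):
          """counts[j] = number of allele types seen in exactly j copies."""
          n = sum(j * a for j, a in counts.items())
          rising = 1.0
          for k in range(n):                # rising factorial theta^(n)
              rising *= theta + k
          p = factorial(n) / rising
          for j, a in counts.items():
              p *= (theta / j) ** a / factorial(a)
          return p

      # n = 5 genes: one allele seen 3 times, another seen twice -> 1/6
      print(ewens_probability({3: 1, 2: 1}, theta=1.0))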

  5. A Novel Platform for Evaluating the Environmental Impacts on Bacterial Cellulose Production.

    Science.gov (United States)

    Basu, Anindya; Vadanan, Sundaravadanam Vishnu; Lim, Sierin

    2018-04-10

    Bacterial cellulose (BC) is a biocompatible material with versatile applications. However, its large-scale production is challenged by the limited biological knowledge of the bacteria. The advent of synthetic biology has led the way to the development of BC-producing microbes as a novel chassis. Hence, investigation of optimal growth conditions for BC production and understanding of the fundamental biological processes are imperative. In this study, we report a novel analytical platform that can be used for studying the biology and optimizing growth conditions of cellulose-producing bacteria. The platform is based on the surface growth pattern of the organism and allows us to confirm that the cellulose fibrils produced by the bacteria play a pivotal role in their chemotaxis. The platform efficiently determines the impacts of different growth conditions on cellulose production and is translatable to static culture conditions. The analytical platform provides a means for fundamental biological studies of bacterial chemotaxis as well as a systematic approach to the rational design and development of scalable bioprocessing strategies for the industrial production of bacterial cellulose.

  6. The Concept of Business Model Scalability

    DEFF Research Database (Denmark)

    Lund, Morten; Nielsen, Christian

    2018-01-01

    Purpose: The purpose of the article is to define what scalable business models are. Central to the contemporary understanding of business models is the value proposition towards the customer and the hypotheses generated about delivering value to the customer, which become a good foundation for a long-term profitable business. However, the main message of this article is that while providing a good value proposition may help the firm ‘get by’, the really successful businesses of today are those able to reach the sweet-spot of business model scalability. Design/Methodology/Approach: The article is based on a five-year longitudinal action research project of over 90 companies that participated in the International Center for Innovation project aimed at building 10 global network-based business models. Findings: This article introduces and discusses the term scalability from a company-level perspective...

  7. Oracle database performance and scalability a quantitative approach

    CERN Document Server

    Liu, Henry H

    2011-01-01

    A data-driven, fact-based, quantitative text on Oracle performance and scalability With database concepts and theories clearly explained in Oracle's context, readers quickly learn how to fully leverage Oracle's performance and scalability capabilities at every stage of designing and developing an Oracle-based enterprise application. The book is based on the author's more than ten years of experience working with Oracle, and is filled with dependable, tested, and proven performance optimization techniques. Oracle Database Performance and Scalability is divided into four parts that enable reader

  8. PKI Scalability Issues

    OpenAIRE

    Slagell, Adam J; Bonilla, Rafael

    2004-01-01

    This report surveys different PKI technologies such as PKIX and SPKI and the issues of PKI that affect scalability. Much focus is spent on certificate revocation methodologies and status verification systems such as CRLs, Delta-CRLs, CRS, Certificate Revocation Trees, Windowed Certificate Revocation, OCSP, SCVP and DVCS.

  9. On Scalability and Replicability of Smart Grid Projects—A Case Study

    Directory of Open Access Journals (Sweden)

    Lukas Sigrist

    2016-03-01

    Full Text Available This paper studies the scalability and replicability of smart grid projects. Currently, most smart grid projects are still in the R&D or demonstration phases. The full roll-out of the tested solutions requires a suitable degree of scalability and replicability to prevent project demonstrators from remaining local experimental exercises. Scalability and replicability are the preliminary requisites for performing scaling-up and replication successfully; therefore, scalability and replicability allow for, or at least reduce barriers to, the growth and reuse of the results of project demonstrators. The paper proposes factors that influence and condition a project’s scalability and replicability. These factors involve technical, economic, regulatory and stakeholder acceptance related aspects, and they describe requirements for scalability and replicability. In order to assess and evaluate the identified scalability and replicability factors, data has been collected from European and national smart grid projects by means of a survey, reflecting the projects’ view and results. The evaluation of the factors allows quantifying the status quo of on-going projects with respect to scalability and replicability, i.e., it provides feedback on to what extent projects take into account these factors and on whether the projects’ results and solutions are actually scalable and replicable.

  10. Scalable-to-lossless transform domain distributed video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Ukhanova, Ann; Veselov, Anton

    2010-01-01

    Distributed video coding (DVC) is a novel approach providing new features such as low-complexity encoding by mainly exploiting the source statistics at the decoder, based on the availability of decoder side information. In this paper, scalable-to-lossless DVC is presented based on extending a lossy Tran...... codec provides frame by frame encoding. Comparing the lossless coding efficiency, the proposed scalable-to-lossless TDWZ video codec can save up to 5%-13% bits compared to JPEG LS and H.264 Intra frame lossless coding, and does so as scalable-to-lossless coding....

  11. Center for Programming Models for Scalable Parallel Computing - Towards Enhancing OpenMP for Manycore and Heterogeneous Nodes

    Energy Technology Data Exchange (ETDEWEB)

    Barbara Chapman

    2012-02-01

    OpenMP was not well recognized at the beginning of the project, around year 2003, because of its limited use in DoE production applications and the immature hardware support for an efficient implementation. Yet in recent years, it has been gradually adopted both in HPC applications, mostly in the form of MPI+OpenMP hybrid code, and in mid-scale desktop applications for scientific and experimental studies. We have observed this trend and worked diligently to improve our OpenMP compiler and runtimes, as well as to work with the OpenMP standard organization to make sure OpenMP evolves in a direction close to DoE missions. In the Center for Programming Models for Scalable Parallel Computing project, the HPCTools team at the University of Houston (UH), directed by Dr. Barbara Chapman, has been working with project partners, external collaborators and hardware vendors to increase the scalability and applicability of OpenMP for multi-core (and future manycore) platforms and for distributed memory systems by exploring different programming models, language extensions, compiler optimizations, as well as runtime library support.

  12. GATECloud.net: a platform for large-scale, open-source text processing on the cloud.

    Science.gov (United States)

    Tablan, Valentin; Roberts, Ian; Cunningham, Hamish; Bontcheva, Kalina

    2013-01-28

    Cloud computing is increasingly being regarded as a key enabler of the 'democratization of science', because on-demand, highly scalable cloud computing facilities enable researchers anywhere to carry out data-intensive experiments. In the context of natural language processing (NLP), algorithms tend to be complex, which makes their parallelization and deployment on cloud platforms a non-trivial task. This study presents a new, unique, cloud-based platform for large-scale NLP research--GATECloud.net. It enables researchers to carry out data-intensive NLP experiments by harnessing the vast, on-demand compute power of the Amazon cloud. Important infrastructural issues are dealt with by the platform, completely transparently for the researcher: load balancing, efficient data upload and storage, deployment on the virtual machines, security and fault tolerance. We also include a cost-benefit analysis and usage evaluation.

  13. An extended systematic mapping study about the scalability of i* Models

    Directory of Open Access Journals (Sweden)

    Paulo Lima

    2016-12-01

    Full Text Available i* models have been used for requirements specification in many domains, such as healthcare, telecommunication, and air traffic control. Managing the scalability and the complexity of such models is an important challenge in Requirements Engineering (RE). Scalability is also one of the most intractable issues in the design of visual notations in general: a well-known problem with visual representations is that they do not scale well. This issue has led us to investigate scalability in i* models and its variants by means of a systematic mapping study. This paper is an extended version of a previous paper on the scalability of i*, including papers indicated by specialists. Moreover, we also discuss the challenges and open issues regarding the scalability of i* models and its variants. A total of 126 papers were analyzed in order to understand how the RE community perceives scalability and which proposals have considered this topic. We found that scalability issues are indeed perceived as relevant and that further work is still required, even though many potential solutions have already been proposed. This study can be a starting point for researchers aiming to further advance the treatment of scalability in i* models.

  14. Requirements for Scalable Access Control and Security Management Architectures

    National Research Council Canada - National Science Library

    Keromytis, Angelos D; Smith, Jonathan M

    2005-01-01

    Maximizing local autonomy has led to a scalable Internet. Scalability and the capacity for distributed control have unfortunately not extended well to resource access control policies and mechanisms...

  15. Industrial applications of micro/nanofabrication at Singapore Synchrotron Light Source

    International Nuclear Information System (INIS)

    Jian, L K; Casse, B D F; Heussler, S P; Kong, J R; Saw, B T; Mahmood, Shahrain bin; Moser, H O

    2006-01-01

    SSLS (Singapore Synchrotron Light Source) has set up a complete one-stop shop for micro/nanofabrication in the framework of the LIGA process. It is dubbed LiMiNT, for Lithography for Micro and Nanotechnology, and allows complete prototyping using the integral cycle of the LIGA process for producing micro/nanostructures, from mask design/fabrication through X-ray lithography to electroplating in Ni, Cu, or Au, and, finally, hot embossing in a wide variety of plastics as one of the capabilities to cover a wide range of application fields and to go into higher volume production. The process chain also includes plasma cleaning and sputtering as well as substrate preparation processes including metal buffer layers, plating bases, and spin coating, polishing, and dicing. Furthermore, metrology using scanning electron microscopy (SEM), optical profilometry, and optical microscopy is available. LiMiNT is run as a research lab as well as a foundry. In this paper, several industrial applications are presented in which LiMiNT functions as a foundry providing external customers with micro/nanofabrication services. These services include the fabrication of optical or X-ray masks, of micro/nano structures from polymers or from metals, and of moulds for hot embossing or injection moulding

  16. Metal oxide multilayer hard mask system for 3D nanofabrication

    Science.gov (United States)

    Han, Zhongmei; Salmi, Emma; Vehkamäki, Marko; Leskelä, Markku; Ritala, Mikko

    2018-02-01

    We demonstrate the preparation and exploitation of multilayer metal oxide hard masks for lithography and 3D nanofabrication. Atomic layer deposition (ALD) and focused ion beam (FIB) technologies are applied for mask deposition and mask patterning, respectively. A combination of ALD and FIB was used, and a patterning procedure was developed to avoid the ion beam defects commonly met when using FIB alone for microfabrication. ALD grown Al2O3/Ta2O5/Al2O3 thin film stacks were FIB milled with 30 keV gallium ions and chemically etched in 5% tetramethylammonium hydroxide at 50 °C. With metal evaporation, multilayers consisting of the amorphous oxides Al2O3 and Ta2O5 can be tailored for use in 2D lift-off processing, in the preparation of embedded sub-100 nm metal lines and for multilevel electrical contacts. Good pattern transfer was achieved by the lift-off process from the 2D hard mask for micro- and nanoscale fabrication. As a demonstration of the applicability of this method to 3D structures, self-supporting 3D Ta2O5 masks were made from a film stack on gold particles. Finally, thin film resistors were fabricated by utilizing controlled stiction of suspended Ta2O5 structures.

  17. Nanomanipulation and nanofabrication with multi-probe scanning tunneling microscope: from individual atoms to nanowires.

    Science.gov (United States)

    Qin, Shengyong; Kim, Tae-Hwan; Wang, Zhouhang; Li, An-Ping

    2012-06-01

    The wide variety of nanoscale structures and devices demands novel tools for handling, assembly, and fabrication with nanoscopic positioning precision. The manipulation tools should allow for in situ characterization and testing of fundamental building blocks, such as nanotubes and nanowires, as they are built into functional devices. In this paper, a bottom-up technique for nanomanipulation and nanofabrication using a 4-probe scanning tunneling microscope (STM) combined with a scanning electron microscope (SEM) is reported. The applications of this technique are demonstrated in a variety of nanosystems, from manipulating individual atoms to bending, cutting, and breaking carbon nanofibers, and constructing nanodevices for electrical characterization. The combination of the wide field of view of the SEM, the atomic position resolution of the STM, and the flexibility of multiple scanning probes is expected to make this a valuable tool for rapid prototyping in nanoscience and nanotechnology.

  18. Scalable cloud without dedicated storage

    Science.gov (United States)

    Batkovich, D. V.; Kompaniets, M. V.; Zarochentsev, A. K.

    2015-05-01

    We present a prototype of a scalable computing cloud. It is intended to be deployed on the basis of a cluster without separate dedicated storage: the dedicated storage is replaced by distributed software storage, and all cluster nodes are used both as computing nodes and as storage nodes. This solution increases utilization of the cluster resources as well as improving the fault tolerance and performance of the distributed storage. Another advantage of this solution is high scalability with relatively low initial and maintenance costs. The solution is built from open source components such as OpenStack, Ceph, etc.

  19. Laser applications in nanotechnology: nanofabrication using laser ablation and laser nanolithography

    International Nuclear Information System (INIS)

    Makarov, G N

    2013-01-01

    The fact that nanoparticles and nanomaterials have fundamental properties different both from their constituent atoms or molecules and from their bulk counterparts has stimulated great interest, both theoretical and practical, in nanoparticles and nanoparticle-based assemblies (functional materials), with the result that these structures have become the subject of explosive research over the last twenty years or so. A great deal of progress in this field has relied on the use of lasers. In this paper, the directions followed and results obtained in laser nanotechnology research are reviewed. The parameters, properties, and applications of nanoparticles are discussed, along with the physical and chemical methods for their fabrication and investigation. Nanofabrication applications of and fundamental physical principles behind laser ablation and laser nanolithography are discussed in detail. The applications of laser radiation are shown to range from fabricating, melting, and evaporating nanoparticles to changing their shape, structure, size, and size distribution, through studying their dynamics and forming them into periodic arrays and various structures and assemblies. The historical development of research on nanoparticles and nanomaterials and the application of laser nanotechnology in various fields are briefly reviewed. (reviews of topical problems)

  20. Globus Nexus: A Platform-as-a-Service Provider of Research Identity, Profile, and Group Management.

    Science.gov (United States)

    Chard, Kyle; Lidman, Mattias; McCollam, Brendan; Bryan, Josh; Ananthakrishnan, Rachana; Tuecke, Steven; Foster, Ian

    2016-03-01

    Globus Nexus is a professionally hosted Platform-as-a-Service that provides identity, profile and group management functionality for the research community. Many collaborative e-Science applications need to manage large numbers of user identities, profiles, and groups. However, developing and maintaining such capabilities is often challenging given the complexity of modern security protocols and requirements for scalable, robust, and highly available implementations. By outsourcing this functionality to Globus Nexus, developers can leverage best-practice implementations without incurring development and operations overhead. Users benefit from enhanced capabilities such as identity federation, flexible profile management, and user-oriented group management. In this paper we present Globus Nexus, describe its capabilities and architecture, summarize how several e-Science applications leverage these capabilities, and present results that characterize its scalability, reliability, and availability.

  1. Globus Nexus: A Platform-as-a-Service provider of research identity, profile, and group management

    Energy Technology Data Exchange (ETDEWEB)

    Chard, Kyle; Lidman, Mattias; McCollam, Brendan; Bryan, Josh; Ananthakrishnan, Rachana; Tuecke, Steven; Foster, Ian

    2016-03-01

    Globus Nexus is a professionally hosted Platform-as-a-Service that provides identity, profile and group management functionality for the research community. Many collaborative e-Science applications need to manage large numbers of user identities, profiles, and groups. However, developing and maintaining such capabilities is often challenging given the complexity of modern security protocols and requirements for scalable, robust, and highly available implementations. By outsourcing this functionality to Globus Nexus, developers can leverage best-practice implementations without incurring development and operations overhead. Users benefit from enhanced capabilities such as identity federation, flexible profile management, and user-oriented group management. In this paper we present Globus Nexus, describe its capabilities and architecture, summarize how several e-Science applications leverage these capabilities, and present results that characterize its scalability, reliability, and availability.

  2. Harnessing the power of emerging petascale platforms

    International Nuclear Information System (INIS)

    Mellor-Crummey, John

    2007-01-01

    As part of the US Department of Energy's Scientific Discovery through Advanced Computing (SciDAC-2) program, science teams are tackling problems that require computational simulation and modeling at the petascale. A grand challenge for computer science is to develop software technology that makes it easier to harness the power of these systems to aid scientific discovery. As part of its activities, the SciDAC-2 Center for Scalable Application Development Software (CScADS) is building open source software tools to support efficient scientific computing on the emerging leadership-class platforms. In this paper, we describe two tools for performance analysis and tuning that are being developed as part of CScADS: a tool for analyzing scalability and performance, and a tool for optimizing loop nests for better node performance. We motivate these tools by showing how they apply to S3D, a turbulent combustion code under development at Sandia National Laboratory. For S3D, our node performance analysis tool helped uncover several performance bottlenecks. Using our loop nest optimization tool, we transformed S3D's most costly loop nest to reduce execution time by a factor of 2.94 for a processor working on a 50³ domain.

  3. Harnessing the power of emerging petascale platforms

    Energy Technology Data Exchange (ETDEWEB)

    Mellor-Crummey, John [Associate Professor, Department of Computer Science, Rice University, Houston, TX (United States)

    2007-07-15

    As part of the US Department of Energy's Scientific Discovery through Advanced Computing (SciDAC-2) program, science teams are tackling problems that require computational simulation and modeling at the petascale. A grand challenge for computer science is to develop software technology that makes it easier to harness the power of these systems to aid scientific discovery. As part of its activities, the SciDAC-2 Center for Scalable Application Development Software (CScADS) is building open source software tools to support efficient scientific computing on the emerging leadership-class platforms. In this paper, we describe two tools for performance analysis and tuning that are being developed as part of CScADS: a tool for analyzing scalability and performance, and a tool for optimizing loop nests for better node performance. We motivate these tools by showing how they apply to S3D, a turbulent combustion code under development at Sandia National Laboratory. For S3D, our node performance analysis tool helped uncover several performance bottlenecks. Using our loop nest optimization tool, we transformed S3D's most costly loop nest to reduce execution time by a factor of 2.94 for a processor working on a 50³ domain.

  4. Enhancing Scalability of Sparse Direct Methods

    International Nuclear Information System (INIS)

    Li, Xiaoye S.; Demmel, James; Grigori, Laura; Gu, Ming; Xia, Jianlin; Jardin, Steve; Sovinec, Carl; Lee, Lie-Quan

    2007-01-01

    TOPS is providing high-performance, scalable sparse direct solvers, which have had significant impacts on the SciDAC applications, including fusion simulation (CEMM), accelerator modeling (COMPASS), as well as many other mission-critical applications in DOE and elsewhere. Our recent developments have been focusing on new techniques to overcome the scalability bottlenecks of direct methods, in both time and memory. These include parallelizing the symbolic analysis phase and developing linear-complexity sparse factorization methods. The new techniques will make sparse direct methods more widely usable in large 3D simulations on highly parallel petascale computers.

  5. Modular Universal Scalable Ion-trap Quantum Computer

    Science.gov (United States)

    2016-06-02

    The main goal of the original MUSIQC proposal was to construct and demonstrate a modular and universally expandable ion-trap quantum computer. (Final report covering 1-Aug-2010 to 31-Jan-2016; distribution unlimited. Keywords: ion trap quantum computation, scalable modular architectures.)

  6. Scalable and Media Aware Adaptive Video Streaming over Wireless Networks

    Directory of Open Access Journals (Sweden)

    Béatrice Pesquet-Popescu

    2008-07-01

    Full Text Available This paper proposes an advanced video streaming system based on scalable video coding in order to optimize resource utilization in wireless networks with retransmission mechanisms at the radio protocol level. The key component of this system is a packet scheduling algorithm which operates on the different substreams of a main scalable video stream and which is implemented in a so-called media-aware network element. The transport channel considered is a dedicated channel subject to parameter (bitrate, loss rate) variations over the long run. Moreover, we propose a combined scalability approach in which common temporal and SNR scalability features can be used jointly with a partitioning of the image into regions of interest. Simulation results show that our approach provides substantial quality gain compared to classical packet transmission methods and demonstrate how ROI coding combined with SNR scalability further improves the visual quality.
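
    A minimal sketch of the media-aware scheduling idea described above (an illustration, not the authors' exact algorithm): when the channel budget shrinks, base-layer packets are served before enhancement-layer packets so that quality degrades gracefully. The Packet fields and the greedy budget rule are assumptions made for the example.

        class Packet:
            def __init__(self, layer, deadline, payload):
                self.layer = layer        # scalability layer id, 0 = base layer
                self.deadline = deadline  # playout deadline in seconds
                self.payload = payload    # encoded bytes

        def schedule(packets, capacity_bytes):
            """Greedily fill the channel budget, most important layers first."""
            sent, used = [], 0
            # sort by (layer, deadline): base-layer and urgent packets go first
            for p in sorted(packets, key=lambda p: (p.layer, p.deadline)):
                if used + len(p.payload) <= capacity_bytes:
                    sent.append(p)
                    used += len(p.payload)
            return sent  # unsent enhancement packets are dropped or re-queued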

  7. Design issues for numerical libraries on scalable multicore architectures

    International Nuclear Information System (INIS)

    Heroux, M A

    2008-01-01

    Future generations of scalable computers will rely on multicore nodes for a significant portion of overall system performance. At present, most applications and libraries cannot exploit multiple cores beyond running additional MPI processes per node. In this paper we discuss important multicore architecture issues, programming models, algorithm requirements, and software design related to the effective use of scalable multicore computers. In particular, we focus on important issues for library research and development, making recommendations for how to effectively develop libraries for future scalable computer systems.

  8. Behavioral Indicators on a Mobile Sensing Platform Predict Clinically Validated Psychiatric Symptoms of Mood and Anxiety Disorders.

    Science.gov (United States)

    Place, Skyler; Blanch-Hartigan, Danielle; Rubin, Channah; Gorrostieta, Cristina; Mead, Caroline; Kane, John; Marx, Brian P; Feast, Joshua; Deckersbach, Thilo; Pentland, Alex Sandy; Nierenberg, Andrew; Azarbayejani, Ali

    2017-03-16

    There is a critical need for real-time tracking of behavioral indicators of mental disorders. Mobile sensing platforms that objectively and noninvasively collect, store, and analyze behavioral indicators have not yet been shown to be clinically valid and scalable. The aim of our study was to report on models of clinical symptoms for post-traumatic stress disorder (PTSD) and depression derived from a scalable mobile sensing platform. A total of 73 participants (67% [49/73] male, 48% [35/73] non-Hispanic white, 33% [24/73] veteran status) who reported at least one symptom of PTSD or depression completed a 12-week field trial. Behavioral indicators were collected through the noninvasive mobile sensing platform on participants' mobile phones. Clinical symptoms were measured through validated clinical interviews with a licensed clinical social worker. A combined hypothesis- and data-driven approach was used to derive key features for modeling symptoms, including the sum of outgoing calls, count of unique numbers texted, absolute distance traveled, dynamic variation of the voice, speaking rate, and voice quality. Participants also reported ease of use and data sharing concerns. Behavioral indicators predicted clinically assessed symptoms of depression and PTSD (cross-validated area under the curve [AUC] for depressed mood=.74, fatigue=.56, interest in activities=.75, and social connectedness=.83). Participants reported comfort sharing individual data with physicians (Mean 3.08, SD 1.22), mental health providers (Mean 3.25, SD 1.39), and medical researchers (Mean 3.03, SD 1.36). Behavioral indicators passively collected through a mobile sensing platform predicted symptoms of depression and PTSD. The use of mobile sensing platforms can provide clinically validated behavioral indicators in real time; however, further validation of these models and this platform in large clinical samples is needed.
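
    For orientation, a minimal sketch of the style of evaluation reported above, cross-validated AUC scoring; the feature matrix, labels, and logistic model are synthetic stand-ins, not the study's data or models.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(73, 6))      # e.g., calls, texts, distance, voice features
        y = rng.integers(0, 2, size=73)   # clinician-assessed symptom present/absent
        # 5-fold cross-validated area under the ROC curve
        auc = cross_val_score(LogisticRegression(), X, y, cv=5, scoring="roc_auc")
        print(auc.mean())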

  9. Scalable, ultra-resistant structural colors based on network metamaterials

    KAUST Repository

    Galinski, Henning

    2017-05-05

    Structural colors have drawn wide attention for their potential as a future printing technology for various applications, ranging from biomimetic tissues to adaptive camouflage materials. However, an efficient approach to realize robust colors with a scalable fabrication technique is still lacking, hampering the realization of practical applications with this platform. Here, we develop a new approach based on large-scale network metamaterials that combine dealloyed subwavelength structures at the nanoscale with lossless, ultra-thin dielectric coatings. By using theory and experiments, we show how subwavelength dielectric coatings control a mechanism of resonant light coupling with epsilon-near-zero regions generated in the metallic network, generating the formation of saturated structural colors that cover a wide portion of the spectrum. Ellipsometry measurements support the efficient observation of these colors, even at angles of 70°. The network-like architecture of these nanomaterials allows for high mechanical resistance, which is quantified in a series of nano-scratch tests. With such remarkable properties, these metastructures represent a robust design technology for real-world, large-scale commercial applications.

  10. Scuba: scalable kernel-based gene prioritization.

    Science.gov (United States)

    Zampieri, Guido; Tran, Dinh Van; Donini, Michele; Navarin, Nicolò; Aiolli, Fabio; Sperduti, Alessandro; Valle, Giorgio

    2018-01-25

    The uncovering of genes linked to human diseases is a pressing challenge in molecular biology and precision medicine. This task is often hindered by the large number of candidate genes and by the heterogeneity of the available information. Computational methods for the prioritization of candidate genes can help to cope with these problems. In particular, kernel-based methods are a powerful resource for the integration of heterogeneous biological knowledge; however, their practical implementation is often precluded by their limited scalability. We propose Scuba, a scalable kernel-based method for gene prioritization. It implements a novel multiple kernel learning approach, based on a semi-supervised perspective and on the optimization of the margin distribution. Scuba is optimized to cope with strongly unbalanced settings where known disease genes are few and large scale predictions are required. Importantly, it is able to efficiently deal both with a large number of candidate genes and with an arbitrary number of data sources. As a direct consequence of scalability, Scuba also integrates a new efficient strategy to select optimal kernel parameters for each data source. We performed cross-validation experiments and simulated a realistic usage setting, showing that Scuba outperforms a wide range of state-of-the-art methods. Scuba achieves state-of-the-art performance and has enhanced scalability compared to existing kernel-based approaches for genomic data. This method can be useful to prioritize candidate genes, particularly when their number is large or when input data is highly heterogeneous. The code is freely available at https://github.com/gzampieri/Scuba.
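
    A minimal sketch of the kernel-combination idea behind such methods (an illustration, not Scuba's actual algorithm): several gene-gene kernels are merged with weights, and candidates are ranked by aggregate similarity to known disease genes. The function names and toy data are assumptions made for the example.

        import numpy as np

        def combine_kernels(kernels, weights):
            """Weighted sum of kernel matrices, one per data source."""
            weights = np.asarray(weights, dtype=float) / np.sum(weights)
            return sum(w * K for w, K in zip(weights, kernels))

        def prioritize(K, seed_idx, candidate_idx):
            """Score candidates by mean kernel similarity to the seed genes."""
            scores = K[np.ix_(candidate_idx, seed_idx)].mean(axis=1)
            order = np.argsort(-scores)
            return [(candidate_idx[i], float(scores[i])) for i in order]

        # toy usage: two kernels over 6 genes; genes 0-1 known, 2-5 candidates
        rng = np.random.default_rng(0)
        X1, X2 = rng.normal(size=(6, 4)), rng.normal(size=(6, 3))
        K = combine_kernels([X1 @ X1.T, X2 @ X2.T], weights=[0.7, 0.3])
        print(prioritize(K, seed_idx=[0, 1], candidate_idx=[2, 3, 4, 5]))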

  11. Building scalable apps with Redis and Node.js

    CERN Document Server

    Johanan, Joshua

    2014-01-01

    If the phrase scalability sounds alien to you, then this is an ideal book for you. You will not need much Node.js experience as each framework is demonstrated in a way that requires no previous knowledge of the framework. You will be building scalable Node.js applications in no time! Knowledge of JavaScript is required.

  12. A Scientific Workflow Platform for Generic and Scalable Object Recognition on Medical Images

    Science.gov (United States)

    Möller, Manuel; Tuot, Christopher; Sintek, Michael

    In the research project THESEUS MEDICO we aim at a system combining medical image information with semantic background knowledge from ontologies to give clinicians fully cross-modal access to biomedical image repositories. Joint efforts therefore have to be made in more than one dimension: object detection processes have to be specified in which abstraction proceeds from low-level image features, through landmark detection using abstract domain knowledge, up to high-level object recognition. We propose a system based on a client-server extension of the scientific workflow platform Kepler that assists the collaboration of medical experts and computer scientists during development and parameter learning.

  13. Scalable shared-memory multiprocessing

    CERN Document Server

    Lenoski, Daniel E

    1995-01-01

    Dr. Lenoski and Dr. Weber have experience with leading-edge research and practical issues involved in implementing large-scale parallel systems. They were key contributors to the architecture and design of the DASH multiprocessor. Currently, they are involved with commercializing scalable shared-memory technology.

  14. SiGe BiCMOS manufacturing platform for mmWave applications

    Science.gov (United States)

    Kar-Roy, Arjun; Howard, David; Preisler, Edward; Racanelli, Marco; Chaudhry, Samir; Blaschke, Volker

    2010-10-01

    TowerJazz offers high-volume manufacturable commercial SiGe BiCMOS technology platforms to address the mmWave market. In this paper, first, the SiGe BiCMOS process technology platforms such as SBC18 and SBC13 are described. These manufacturing platforms integrate a 200 GHz fT/fMAX SiGe NPN with deep trench isolation into 0.18 μm and 0.13 μm node CMOS processes, along with high-density 5.6 fF/μm² stacked MIM capacitors, high-value polysilicon resistors, high-Q metal resistors, lateral PNP transistors, and triple-well isolation using a deep n-well for mixed-signal integration, plus multiple varactors and compact high-Q inductors for RF needs. Second, design enablement tools that maximize performance and lower costs and time to market, such as scalable PSP and HICUM models, statistical and Xsigma models, reliability modeling tools, process control model tools, an inductor toolbox, and transmission line models, are described. Finally, demonstrations in silicon for mmWave applications in the areas of optical networking, mobile broadband, phased array radar, collision avoidance radar, and W-band imaging are listed.

  15. Scalable, Secure Analysis of Social Sciences Data on the Azure Platform

    Energy Technology Data Exchange (ETDEWEB)

    Simmhan, Yogesh; Deng, Litao; Kumbhare, Alok; Redekopp, Mark; Prasanna, Viktor

    2012-05-07

    Human activity and interaction data is beginning to be collected at population scales through the pervasiveness of social media and the willingness of people to volunteer information. This can allow social science researchers to understand and model human behavior with better accuracy and prediction power. Political and social scientists are starting to correlate such large scale social media datasets with events that impact society, as evidence abounds of the virtual and physical public spaces intersecting and influencing each other [1,2]. Managers of Cyber Physical Systems such as Smart Power Grid utilities are investigating the impact of consumer behavior on power consumption, and the possibility of influencing the usage profile [3]. Data collection is also made easier through technology such as mobile apps, social media sites, and search engines that directly collect data, and sensors such as smart meters and room occupancy sensors that indirectly measure human activity. These technology platforms also provide a convenient framework for “human sensors” to record and broadcast data for behavioral studies, as a form of crowd-sourced citizen science. This has the added advantage of engaging the broader public in STEM activities and helping to influence public policy.

  16. A scalable healthcare information system based on a service-oriented architecture.

    Science.gov (United States)

    Yang, Tzu-Hsiang; Sun, Yeali S; Lai, Feipei

    2011-06-01

    Many existing healthcare information systems are composed of a number of heterogeneous systems and face the important issue of system scalability. This paper first describes the comprehensive healthcare information systems used in National Taiwan University Hospital (NTUH) and then presents a service-oriented architecture (SOA)-based healthcare information system (HIS) based on the service standard HL7. The proposed architecture focuses on system scalability, in terms of both hardware and software. Moreover, we describe how scalability is implemented through rightsizing, service groups, databases, and hardware. Although SOA-based systems sometimes display poor performance, our performance evaluation of the SOA-based HIS shows that the average response times for the outpatient, inpatient, and emergency HL7Central systems are 0.035, 0.04, and 0.036 s, respectively. The outpatient, inpatient, and emergency WebUI average response times are 0.79, 1.25, and 0.82 s. The scalability of the rightsizing project and our evaluation results provide evidence that SOA can deliver system scalability and sustainability in a highly demanding healthcare information system.

  17. Evaluation of secure capability-based access control in the M2M local cloud platform

    DEFF Research Database (Denmark)

    Anggorojati, Bayu; Prasad, Neeli R.; Prasad, Ramjee

    2016-01-01

    Managing access to and protecting resources is one of the important aspects of managing security, especially in a distributed computing system such as Machine-to-Machine (M2M). One such platform, known as the M2M local cloud platform and referring to the BETaaS architecture [1], conceptually consists of multiple distributed M2M gateways, creating new challenges in access control. Some existing access control systems lack the scalability and flexibility to manage access from users or entities that belong to different authorization domains, or fail to provide fine-grained and flexible access right delegation. Recently, capability-based access control has been considered as a method to manage access in the Internet of Things (IoT) or M2M domain. In this paper, the implementation and evaluation of a proposed secure capability-based access control in the M2M local cloud platform is presented.

  18. Nanocalorimeter platform for in situ specific heat measurements and x-ray diffraction at low temperature

    Science.gov (United States)

    Willa, K.; Diao, Z.; Campanini, D.; Welp, U.; Divan, R.; Hudl, M.; Islam, Z.; Kwok, W.-K.; Rydh, A.

    2017-12-01

    Recent advances in electronics and nanofabrication have enabled membrane-based nanocalorimetry for measurements of the specific heat of microgram-sized samples. We have integrated a nanocalorimeter platform into a 4.5 T split-pair vertical-field magnet to allow for the simultaneous measurement of the specific heat and x-ray scattering in magnetic fields and at temperatures as low as 4 K. This multi-modal approach empowers researchers to directly correlate scattering experiments with insights from thermodynamic properties including structural, electronic, orbital, and magnetic phase transitions. The use of a nanocalorimeter sample platform enables numerous technical advantages: precise measurement and control of the sample temperature, quantification of beam heating effects, fast and precise positioning of the sample in the x-ray beam, and fast acquisition of x-ray scans over a wide temperature range without the need for time-consuming re-centering and re-alignment. Furthermore, on an YBa2Cu3O7-δ crystal and a copper foil, we demonstrate a novel approach to x-ray absorption spectroscopy by monitoring the change in sample temperature as a function of incident photon energy. Finally, we illustrate the new insights that can be gained from in situ structural and thermodynamic measurements by investigating the superheated state occurring at the first-order magneto-elastic phase transition of Fe2P, a material that is of interest for magnetocaloric applications.

  19. JPEG2000-Compatible Scalable Scheme for Wavelet-Based Video Coding

    Directory of Open Access Journals (Sweden)

    Thomas André

    2007-03-01

    Full Text Available We present a simple yet efficient scalable scheme for wavelet-based video coders, able to provide on-demand spatial, temporal, and SNR scalability, and fully compatible with the still-image coding standard JPEG2000. Whereas hybrid video coders must undergo significant changes in order to support scalability, our coder only requires a specific wavelet filter for temporal analysis, as well as an adapted bit allocation procedure based on models of rate-distortion curves. Our study shows that scalably encoded sequences have the same or almost the same quality as nonscalably encoded ones, without a significant increase in complexity. Full compatibility with Motion JPEG2000, which is becoming a serious candidate for the compression of high-definition video sequences, is ensured.
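
    The temporal wavelet analysis such coders rely on can be illustrated with the Haar filter, the simplest valid choice (the paper's specific filter is not reproduced here): each pass splits frames into a temporal average and a droppable temporal detail, so discarding detail subbands halves the frame rate, which is exactly what temporal scalability exploits.

        import numpy as np

        def haar_temporal(frames):
            """One level of temporal Haar analysis; frames has shape (T, H, W), T even."""
            even, odd = frames[0::2], frames[1::2]
            low = (even + odd) / np.sqrt(2)   # temporally averaged frames, kept
            high = (even - odd) / np.sqrt(2)  # temporal detail, droppable
            return low, high

        frames = np.random.rand(8, 4, 4)      # 8 toy frames
        low, high = haar_temporal(frames)     # half frame rate + detail subband
        low2, high2 = haar_temporal(low)      # second level: one more temporal layer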

  20. JPEG2000-Compatible Scalable Scheme for Wavelet-Based Video Coding

    Directory of Open Access Journals (Sweden)

    André Thomas

    2007-01-01

    Full Text Available We present a simple yet efficient scalable scheme for wavelet-based video coders, able to provide on-demand spatial, temporal, and SNR scalability, and fully compatible with the still-image coding standard JPEG2000. Whereas hybrid video coders must undergo significant changes in order to support scalability, our coder only requires a specific wavelet filter for temporal analysis, as well as an adapted bit allocation procedure based on models of rate-distortion curves. Our study shows that scalably encoded sequences have the same or almost the same quality as nonscalably encoded ones, without a significant increase in complexity. Full compatibility with Motion JPEG2000, which is becoming a serious candidate for the compression of high-definition video sequences, is ensured.

  1. A LabVIEW® based generic CT scanner control software platform.

    Science.gov (United States)

    Dierick, M; Van Loo, D; Masschaele, B; Boone, M; Van Hoorebeke, L

    2010-01-01

    UGCT, the Centre for X-ray tomography at Ghent University (Belgium), does research on X-ray tomography and its applications. This includes the development and construction of state-of-the-art CT scanners for scientific research. Because these scanners are built for very different purposes, they differ considerably in their physical implementations; however, they all share common principle functionality. In this context, a generic software platform was developed using LabVIEW® in order to provide the same interface and functionality on all scanners. This article describes the concept and features of this software and its potential for tomography in a research setting. The core concept is to rigorously separate the abstract operation of a CT scanner from its actual physical configuration. This separation is achieved by implementing a sender-listener architecture. The advantages are that the resulting software platform is generic, scalable, highly efficient, easy to develop and to extend, and that it can be deployed on future scanners with minimal effort.
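
    The sender-listener separation described above can be sketched in a few lines (a Python stand-in for the LabVIEW implementation; all names are illustrative): abstract scanner commands are broadcast as messages, and each physical device subscribes only to the commands it implements.

        class MessageBus:
            def __init__(self):
                self.listeners = {}

            def subscribe(self, topic, handler):
                self.listeners.setdefault(topic, []).append(handler)

            def send(self, topic, **payload):
                for handler in self.listeners.get(topic, []):
                    handler(**payload)

        bus = MessageBus()
        # a hypothetical rotation stage registers for the abstract "rotate" command
        bus.subscribe("rotate", lambda angle: print(f"stage: rotate to {angle} deg"))
        # the generic scan logic never references the physical hardware directly
        for step in range(3):
            bus.send("rotate", angle=step * 0.9)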

  2. Bioinformatics on the Cloud Computing Platform Azure

    Science.gov (United States)

    Shanahan, Hugh P.; Owen, Anne M.; Harrison, Andrew P.

    2014-01-01

    We discuss the applicability of the Microsoft cloud computing platform, Azure, for bioinformatics. We focus on the usability of the resource rather than its performance. We provide an example of how R can be used on Azure to analyse a large amount of microarray expression data deposited at the public database ArrayExpress. We provide a walk through to demonstrate explicitly how Azure can be used to perform these analyses in Appendix S1 and we offer a comparison with a local computation. We note that the use of the Platform as a Service (PaaS) offering of Azure can represent a steep learning curve for bioinformatics developers who will usually have a Linux and scripting language background. On the other hand, the presence of an additional set of libraries makes it easier to deploy software in a parallel (scalable) fashion and explicitly manage such a production run with only a few hundred lines of code, most of which can be incorporated from a template. We propose that this environment is best suited for running stable bioinformatics software by users not involved with its development. PMID:25050811

  3. Declarative and Scalable Selection for Map Visualizations

    DEFF Research Database (Denmark)

    Kefaloukos, Pimin Konstantin Balic

    and is itself a source and cause of prolific data creation. This calls for scalable map processing techniques that can handle the data volume and which play well with the predominant data models on the Web. (4) Maps are now consumed around the clock by a global audience. While historical maps were single-user … -defined constraints as well as custom objectives. The purpose of the language is to derive a target multi-scale database from a source database according to holistic specifications. (b) The Glossy SQL compiler allows Glossy SQL to be scalably executed in a spatial analytics system, such as a spatial relational … there are indications that the method is scalable for databases that contain millions of records, especially if the target language of the compiler is substituted by a cluster-ready variant of SQL. While several realistic use cases for maps have been implemented in CVL, additional non-geographic data visualization uses …

  4. Scalable robotic biofabrication of tissue spheroids

    International Nuclear Information System (INIS)

    Mehesz, A Nagy; Hajdu, Z; Visconti, R P; Markwald, R R; Mironov, V; Brown, J; Beaver, W; Da Silva, J V L

    2011-01-01

    Development of methods for scalable biofabrication of uniformly sized tissue spheroids is essential for tissue spheroid-based bioprinting of large size tissue and organ constructs. The most recent scalable technique for tissue spheroid fabrication employs a micromolded recessed template prepared in a non-adhesive hydrogel, wherein the cells loaded into the template self-assemble into tissue spheroids due to gravitational force. In this study, we present an improved version of this technique. A new mold was designed to enable generation of 61 microrecessions in each well of a 96-well plate. The microrecessions were seeded with cells using an EpMotion 5070 automated pipetting machine. After 48 h of incubation, tissue spheroids formed at the bottom of each microrecession. To assess the quality of constructs generated using this technology, 600 tissue spheroids made by this method were compared with 600 spheroids generated by the conventional hanging drop method. These analyses showed that tissue spheroids fabricated by the micromolded method are more uniform in diameter. Thus, use of micromolded recessions in a non-adhesive hydrogel, combined with automated cell seeding, is a reliable method for scalable robotic fabrication of uniform-sized tissue spheroids.

  5. Scalable robotic biofabrication of tissue spheroids

    Energy Technology Data Exchange (ETDEWEB)

    Mehesz, A Nagy; Hajdu, Z; Visconti, R P; Markwald, R R; Mironov, V [Advanced Tissue Biofabrication Center, Department of Regenerative Medicine and Cell Biology, Medical University of South Carolina, Charleston, SC (United States); Brown, J [Department of Mechanical Engineering, Clemson University, Clemson, SC (United States); Beaver, W [York Technical College, Rock Hill, SC (United States); Da Silva, J V L, E-mail: mironovv@musc.edu [Renato Archer Information Technology Center-CTI, Campinas (Brazil)

    2011-06-15

    Development of methods for scalable biofabrication of uniformly sized tissue spheroids is essential for tissue spheroid-based bioprinting of large size tissue and organ constructs. The most recent scalable technique for tissue spheroid fabrication employs a micromolded recessed template prepared in a non-adhesive hydrogel, wherein the cells loaded into the template self-assemble into tissue spheroids due to gravitational force. In this study, we present an improved version of this technique. A new mold was designed to enable generation of 61 microrecessions in each well of a 96-well plate. The microrecessions were seeded with cells using an EpMotion 5070 automated pipetting machine. After 48 h of incubation, tissue spheroids formed at the bottom of each microrecession. To assess the quality of constructs generated using this technology, 600 tissue spheroids made by this method were compared with 600 spheroids generated by the conventional hanging drop method. These analyses showed that tissue spheroids fabricated by the micromolded method are more uniform in diameter. Thus, use of micromolded recessions in a non-adhesive hydrogel, combined with automated cell seeding, is a reliable method for scalable robotic fabrication of uniform-sized tissue spheroids.

  6. Architectures and Applications for Scalable Quantum Information Systems

    Science.gov (United States)

    2007-01-01

    Final Technical Report AFRL-IF-RS-TR-2007-12, January 2007, under grant FA8750-01-2-0521: Architectures and Applications for Scalable Quantum Information Systems.

  7. Extending JPEG-LS for low-complexity scalable video coding

    DEFF Research Database (Denmark)

    Ukhanova, Anna; Sergeev, Anton; Forchhammer, Søren

    2011-01-01

    JPEG-LS, the well-known international standard for lossless and near-lossless image compression, was originally designed for non-scalable applications. In this paper we propose a scalable modification of JPEG-LS and compare it with the leading image and video coding standards JPEG2000 and H.264/SVC...

  8. The TeraShake Computational Platform for Large-Scale Earthquake Simulations

    Science.gov (United States)

    Cui, Yifeng; Olsen, Kim; Chourasia, Amit; Moore, Reagan; Maechling, Philip; Jordan, Thomas

    Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes for ever larger problems. To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical modeling-based research tool we call the TeraShake computational platform (TSCP). A central component in the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP and then integrated this code into the TeraShake computational platform that provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the TS-AWP parallel performance on TeraGrid supercomputers, as well as the TeraShake simulation phases, including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has now shown highly efficient strong scaling on over 40K processors on IBM’s BlueGene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.
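
    The kernel whose parallel scaling such platforms must optimize is the explicit finite-difference stencil update. A toy 1-D version (a much simplified stand-in for a 3-D anelastic code such as TS-AWP; all parameters are illustrative):

        import numpy as np

        nx, nt = 200, 500
        c, dx, dt = 3000.0, 10.0, 1e-3   # wave speed (m/s), grid step (m), time step (s)
        assert c * dt / dx <= 1.0        # CFL stability condition
        r2 = (c * dt / dx) ** 2
        u_prev, u = np.zeros(nx), np.zeros(nx)
        u[nx // 2] = 1.0                 # initial displacement pulse
        for _ in range(nt):
            # discrete Laplacian; np.roll gives periodic boundaries in this toy
            lap = np.roll(u, 1) - 2 * u + np.roll(u, -1)
            u_prev, u = u, 2 * u - u_prev + r2 * lap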

  9. Scalable Device for Automated Microbial Electroporation in a Digital Microfluidic Platform.

    Science.gov (United States)

    Madison, Andrew C; Royal, Matthew W; Vigneault, Frederic; Chen, Liji; Griffin, Peter B; Horowitz, Mark; Church, George M; Fair, Richard B

    2017-09-15

    Electrowetting-on-dielectric (EWD) digital microfluidic laboratory-on-a-chip platforms demonstrate excellent performance in automating labor-intensive protocols. When coupled with an on-chip electroporation capability, these systems hold promise for streamlining cumbersome processes such as multiplex automated genome engineering (MAGE). We integrated a single Ti:Au electroporation electrode into an otherwise standard parallel-plate EWD geometry to enable high-efficiency transformation of Escherichia coli with reporter plasmid DNA in a 200 nL droplet. Test devices exhibited robust operation with more than 10 transformation experiments performed per device without cross-contamination or failure. Despite intrinsic electric-field nonuniformity present in the EP/EWD device, the peak on-chip transformation efficiency was measured to be 8.6 ± 1.0 × 10⁸ cfu·μg⁻¹ for an average applied electric field strength of 2.25 ± 0.50 kV·mm⁻¹. Cell survival and transformation fractions at this electroporation pulse strength were found to be 1.5 ± 0.3% and 2.3 ± 0.1%, respectively. Our work expands the EWD toolkit to include on-chip microbial electroporation and opens the possibility of scaling advanced genome engineering methods, like MAGE, into the submicroliter regime.
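
    For orientation, the figure of merit quoted above is colony-forming units per microgram of plasmid DNA; the colony count and DNA mass below are hypothetical numbers chosen only to land at the reported order of magnitude.

        colonies = 430        # cfu counted after plating (hypothetical)
        dna_ug = 5e-7         # plasmid DNA in the droplet, micrograms (hypothetical)
        efficiency = colonies / dna_ug
        print(f"{efficiency:.2e} cfu/ug")  # 8.60e+08 cfu/ug, the reported order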

  10. Traffic and Quality Characterization of the H.264/AVC Scalable Video Coding Extension

    Directory of Open Access Journals (Sweden)

    Geert Van der Auwera

    2008-01-01

    Full Text Available The recent scalable video coding (SVC) extension to the H.264/AVC video coding standard has unprecedented compression efficiency while supporting a wide range of scalability modes, including temporal, spatial, and quality (SNR) scalability, as well as combined spatiotemporal SNR scalability. The traffic characteristics, especially the bit rate variabilities, of the individual layer streams critically affect their network transport. We study the SVC traffic statistics, including the bit rate distortion and bit rate variability distortion, with long CIF resolution video sequences and compare them with the corresponding MPEG-4 Part 2 traffic statistics. We consider (i) temporal scalability with three temporal layers, (ii) spatial scalability with a QCIF base layer and a CIF enhancement layer, as well as (iii) quality scalability modes FGS and MGS. We find that the significant improvement in RD efficiency of SVC is accompanied by substantially higher traffic variabilities as compared to the equivalent MPEG-4 Part 2 streams. We find that separately analyzing the traffic of temporal-scalability-only encodings gives reasonable estimates of the traffic statistics of the temporal layers embedded in combined spatiotemporal encodings and in the base layer of combined FGS-temporal encodings. Overall, we find that SVC achieves significantly higher compression ratios than MPEG-4 Part 2, but produces unprecedented levels of traffic variability, thus presenting new challenges for the network transport of scalable video.
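
    The bit rate variability at issue is commonly summarized by the coefficient of variation (CoV) of encoded frame sizes; a minimal illustration with synthetic frame sizes (not traffic traces from the study):

        import numpy as np

        def cov(frame_sizes):
            """Coefficient of variation: std/mean of frame sizes; higher = burstier."""
            s = np.asarray(frame_sizes, dtype=float)
            return s.std() / s.mean()

        layered_bursty = [9000, 800, 2500, 900, 8800, 850, 2400, 950]
        smoother = [4200, 3600, 3900, 4100, 3700, 3800, 4000, 3900]
        print(cov(layered_bursty), cov(smoother))  # the burstier stream scores higher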

  11. Scalable, full-colour and controllable chromotropic plasmonic printing

    OpenAIRE

    Xue, Jiancai; Zhou, Zhang-Kai; Wei, Zhiqiang; Su, Rongbin; Lai, Juan; Li, Juntao; Li, Chao; Zhang, Tengwei; Wang, Xue-Hua

    2015-01-01

    Plasmonic colour printing has drawn wide attention as a promising candidate for the next-generation colour-printing technology. However, an efficient approach to realize full colour and scalable fabrication is still lacking, which prevents plasmonic colour printing from practical applications. Here we present a scalable and full-colour plasmonic printing approach by combining conjugate twin-phase modulation with a plasmonic broadband absorber. More importantly, our approach also demonstrates ...

  12. Temporal scalability comparison of the H.264/SVC and distributed video codec

    DEFF Research Database (Denmark)

    Huang, Xin; Ukhanova, Ann; Belyaev, Evgeny

    2009-01-01

    The problem of scalable multimedia video streaming is a current topic of interest. There exist many methods for scalable video coding. This paper focuses on the scalable extension of H.264/AVC (H.264/SVC) and distributed video coding (DVC). The paper presents an efficiency comparison of SVC and DVC.

  13. Nanodiamonds as platforms for biology and medicine.

    Science.gov (United States)

    Man, Han B; Ho, Dean

    2013-02-01

    Nanoparticles possess a wide range of exceptional properties applicable to biology and medicine. In particular, nanodiamonds (NDs) are being studied extensively because they possess unique characteristics that make them suitable as platforms for diagnostics and therapeutics. This carbon-based material (2-8 nm) is medically relevant because it unites several key properties necessary for clinical applications, such as stability and compatibility in biological environments, and scalability in production. Research by the Ho group and others has yielded ND particles with a variety of capabilities ranging from delivery of chemotherapeutic drugs to targeted labeling and uptake studies. In addition, encouraging new findings have demonstrated the ability for NDs to effectively treat chemoresistant tumors in vivo. In this review, we highlight the progress made toward bringing nanodiamonds from the bench to the bedside.

  14. Scalable and near-optimal design space exploration for embedded systems

    CERN Document Server

    Kritikakou, Angeliki; Goutis, Costas

    2014-01-01

    This book describes scalable and near-optimal, processor-level design space exploration (DSE) methodologies.  The authors present design methodologies for data storage and processing in real-time, cost-sensitive data-dominated embedded systems.  Readers will be enabled to reduce time-to-market, while satisfying system requirements for performance, area, and energy consumption, thereby minimizing the overall cost of the final design.   • Describes design space exploration (DSE) methodologies for data storage and processing in embedded systems, which achieve near-optimal solutions with scalable exploration time; • Presents a set of principles and the processes which support the development of the proposed scalable and near-optimal methodologies; • Enables readers to apply scalable and near-optimal methodologies to the intra-signal in-place optimization step for both regular and irregular memory accesses.

  15. Software performance and scalability a quantitative approach

    CERN Document Server

    Liu, Henry H

    2009-01-01

    Praise from the Reviewers: "The practicality of the subject in a real-world situation distinguishes this book from others available on the market."—Professor Behrouz Far, University of Calgary. "This book could replace the computer organization texts now in use that every CS and CpE student must take. . . . It is much needed, well written, and thoughtful."—Professor Larry Bernstein, Stevens Institute of Technology. A distinctive, educational text on software performance and scalability. This is the first book to take a quantitative approach to the subject of software performance and scalability.

  16. Low-cost scalable quartz crystal microbalance array for environmental sensing

    Energy Technology Data Exchange (ETDEWEB)

    Anazagasty, Cristain [University of Puerto Rico; Hianik, Tibor [Comenius University, Bratislava, Slovakia; Ivanov, Ilia N [ORNL

    2016-01-01

    Proliferation of environmental sensors for internet of things (IoT) applications has increased the need for low-cost platforms capable of accommodating multiple sensors. Quartz crystal microbalance (QCM) crystals coated with nanometer-thin sensor films are suitable for use in high-resolution (~1 ng) selective gas sensor applications. We demonstrate a scalable array for measuring the frequency response of six QCM sensors controlled by low-cost Arduino microcontrollers and a USB multiplexer. Gas pulses and data acquisition were controlled by a LabVIEW user interface. We test the sensor array by measuring the frequency shift of crystals coated with different compositions of polymer composites based on poly(3,4-ethylenedioxythiophene):polystyrene sulfonate (PEDOT:PSS) while the films are exposed to water vapor and oxygen inside a controlled environmental chamber. Our sensor array exhibits performance comparable to that of a commercial QCM system, while enabling high-throughput testing of six QCMs for under $1,000. We use deep neural network structures to process the sensor responses and demonstrate that the QCM array is suitable for gas sensing, environmental monitoring, and electronic-nose applications.
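
    QCM frequency shifts are conventionally converted to adsorbed mass with the Sauerbrey equation; a worked example under assumed parameters (the 5 MHz crystal and 1 cm² electrode area are illustrative, not taken from the paper):

        import math

        f0 = 5.0e6       # fundamental frequency, Hz (assumed)
        A = 1.0          # active electrode area, cm^2 (assumed)
        rho_q = 2.648    # quartz density, g/cm^3
        mu_q = 2.947e11  # quartz shear modulus, g cm^-1 s^-2

        def sauerbrey_mass(delta_f):
            """Mass change in grams for a measured frequency shift delta_f in Hz."""
            return -delta_f * A * math.sqrt(rho_q * mu_q) / (2.0 * f0 ** 2)

        # a -10 Hz shift on this crystal corresponds to roughly 0.18 micrograms
        print(sauerbrey_mass(-10.0) * 1e6, "ug")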

  17. Antibacterial Au nanostructured surfaces.

    Science.gov (United States)

    Wu, Songmei; Zuber, Flavia; Brugger, Juergen; Maniura-Weber, Katharina; Ren, Qun

    2016-02-07

    We present here a technological platform for engineering antibacterial surfaces with Au nanotopographies made by templated electrodeposition. Three different types of nanostructures were fabricated: nanopillars, nanorings, and nanonuggets. The nanopillars are the basic structures and are 50 nm in diameter and 100 nm in height. Particular arrangements of the nanopillars in various geometries formed the nanorings and nanonuggets. Flat surfaces, rough substrate surfaces, and the various nanostructured surfaces were compared for their abilities to attach and kill bacterial cells. Methicillin-resistant Staphylococcus aureus, a Gram-positive bacterial strain responsible for many infections in the health care system, was used as the model bacterial strain. It was found that all the Au nanostructures, regardless of their shape, exhibited similarly excellent antibacterial properties. A comparison of live cells attached to the nanotopographic surfaces showed that the number of live S. aureus cells was markedly lower on the nanostructured surfaces than on the flat and rough reference surfaces. Our micro/nanofabrication process is a scalable approach based on cost-efficient self-organization and provides potential for further developing functional surfaces to study the behavior of microbes on nanoscale topographies.

  18. Cloud Computing Platform for an Online Model Library System

    Directory of Open Access Journals (Sweden)

    Mingang Chen

    2013-01-01

    Full Text Available The rapid development of the digital content industry calls for online model libraries. To provide an efficient, reliable model library with a good user experience, this paper designs a Web 3D model library system based on a cloud computing platform. Because complex models cause difficulties for real-time 3D interaction, we adopt model simplification and size-adaptive adjustment methods to make interaction with the system more efficient. Meanwhile, a cloud-based architecture is developed to ensure the reliability and scalability of the system. The 3D model library system is intended to be accessible by online users with a good interactive experience. The feasibility of the solution has been tested by experiments.

  19. Quality Scalability Compression on Single-Loop Solution in HEVC

    Directory of Open Access Journals (Sweden)

    Mengmeng Zhang

    2014-01-01

    Full Text Available This paper proposes a quality scalable extension design for the upcoming high efficiency video coding (HEVC) standard. In the proposed design, the single-loop decoder solution is extended to the scalable scenario. A novel interlayer intra/inter prediction is added to reduce the number of bits needed by exploiting the correlation between coding layers. The experimental results indicate that an average Bjøntegaard delta rate decrease of 20.50% can be gained compared with simulcast encoding. The proposed technique achieved a 47.98% Bjøntegaard delta rate reduction compared with the scalable video coding extension of H.264/AVC. Consequently, significant rate savings confirm that the proposed method achieves better performance.
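
    The Bjøntegaard delta-rate figures quoted above are conventionally computed with the standard Bjøntegaard metric (this is the generic procedure, not code from the paper): fit each rate-distortion curve with a cubic in the log-rate domain, then average the rate gap over the overlapping quality range.

        import numpy as np

        def bd_rate(rates_ref, psnr_ref, rates_test, psnr_test):
            """Average bit rate difference (%) of test vs. reference RD curves."""
            p_ref = np.polyfit(psnr_ref, np.log(rates_ref), 3)
            p_test = np.polyfit(psnr_test, np.log(rates_test), 3)
            lo = max(min(psnr_ref), min(psnr_test))   # overlapping PSNR range
            hi = min(max(psnr_ref), max(psnr_test))
            i_ref = np.polyval(np.polyint(p_ref), [lo, hi])
            i_test = np.polyval(np.polyint(p_test), [lo, hi])
            avg = ((i_test[1] - i_test[0]) - (i_ref[1] - i_ref[0])) / (hi - lo)
            return (np.exp(avg) - 1.0) * 100.0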

  20. SOL: A Library for Scalable Online Learning Algorithms

    OpenAIRE

    Wu, Yue; Hoi, Steven C. H.; Liu, Chenghao; Lu, Jing; Sahoo, Doyen; Yu, Nenghai

    2016-01-01

    SOL is an open-source library for scalable online learning algorithms, and is particularly suitable for learning with high-dimensional data. The library provides a family of regular and sparse online learning algorithms for large-scale binary and multi-class classification tasks with high efficiency, scalability, portability, and extensibility. SOL was implemented in C++ and is provided with a collection of easy-to-use command-line tools, Python wrappers, and library calls for users and developers.

  1. A scalable distributed RRT for motion planning

    KAUST Repository

    Jacobs, Sam Ade

    2013-05-01

    Rapidly-exploring Random Tree (RRT), like other sampling-based motion planning methods, has been very successful in solving motion planning problems. Even so, sampling-based planners cannot solve all problems of interest efficiently, so attention is increasingly turning to parallelizing them. However, one challenge in parallelizing RRT is the global computation and communication overhead of nearest neighbor search, a key operation in RRTs. This is a critical issue as it limits the scalability of previous algorithms. We present two parallel algorithms to address this problem. The first algorithm extends existing work by introducing a parameter that adjusts how much local computation is done before a global update. The second algorithm radially subdivides the configuration space into regions, constructs a portion of the tree in each region in parallel, and connects the subtrees, removing cycles if they exist. By subdividing the space, we increase computation locality, enabling a scalable result. We show that our approaches are scalable. We present results demonstrating almost linear scaling to hundreds of processors on a Linux cluster and a Cray XE6 machine. © 2013 IEEE.
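
    A minimal sketch of the radial-subdivision idea (a 2-D, sequential stand-in for the parallel algorithm; the step size and sampling details are illustrative): the space is split into angular regions around a shared root, one subtree grows per region, and nearest-neighbor search stays local to that region.

        import math, random

        def region_of(p, n_regions):
            """Angular region index of point p around the origin."""
            a = math.atan2(p[1], p[0]) % (2 * math.pi)
            return int(a / (2 * math.pi) * n_regions)

        def grow_region_tree(region, n_regions, iters=200, step=0.05):
            tree = [(0.0, 0.0)]                  # shared root at the origin
            for _ in range(iters):
                q = (random.uniform(-1, 1), random.uniform(-1, 1))
                if region_of(q, n_regions) != region:
                    continue                     # sample belongs to another worker
                near = min(tree, key=lambda v: (v[0]-q[0])**2 + (v[1]-q[1])**2)
                d = math.hypot(q[0]-near[0], q[1]-near[1])
                if d > 0:                        # extend one step toward the sample
                    tree.append((near[0] + step*(q[0]-near[0])/d,
                                 near[1] + step*(q[1]-near[1])/d))
            return tree

        subtrees = [grow_region_tree(r, 4) for r in range(4)]  # one per processor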

  2. A scalable distributed RRT for motion planning

    KAUST Repository

    Jacobs, Sam Ade; Stradford, Nicholas; Rodriguez, Cesar; Thomas, Shawna; Amato, Nancy M.

    2013-01-01

    Rapidly-exploring Random Tree (RRT), like other sampling-based motion planning methods, has been very successful in solving motion planning problems. Even so, sampling-based planners cannot solve all problems of interest efficiently, so attention is increasingly turning to parallelizing them. However, one challenge in parallelizing RRT is the global computation and communication overhead of nearest neighbor search, a key operation in RRTs. This is a critical issue as it limits the scalability of previous algorithms. We present two parallel algorithms to address this problem. The first algorithm extends existing work by introducing a parameter that adjusts how much local computation is done before a global update. The second algorithm radially subdivides the configuration space into regions, constructs a portion of the tree in each region in parallel, and connects the subtrees, removing cycles if they exist. By subdividing the space, we increase computation locality, enabling a scalable result. We show that our approaches are scalable. We present results demonstrating almost linear scaling to hundreds of processors on a Linux cluster and a Cray XE6 machine. © 2013 IEEE.

  3. Towards a Scalable and Adaptive Application Support Platform for Large-Scale Distributed E-Sciences in High-Performance Network Environments

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Chase Qishi [New Jersey Inst. of Technology, Newark, NJ (United States); Univ. of Memphis, TN (United States); Zhu, Michelle Mengxia [Southern Illinois Univ., Carbondale, IL (United States)

    2016-06-06

    The advent of large-scale collaborative scientific applications has demonstrated the potential for broad scientific communities to pool globally distributed resources to produce unprecedented data acquisition, movement, and analysis. System resources including supercomputers, data repositories, computing facilities, network infrastructures, storage systems, and display devices have been increasingly deployed at national laboratories and academic institutes. These resources are typically shared by large communities of users over the Internet or dedicated networks and hence exhibit an inherent dynamic nature in their availability, accessibility, capacity, and stability. Scientific applications using either experimental facilities or computation-based simulations with various physical, chemical, climatic, and biological models feature diverse scientific workflows, as simple as linear pipelines or as complex as directed acyclic graphs, which must be executed and supported over wide-area networks with massively distributed resources. Application users oftentimes need to manually configure their computing tasks over networks in an ad hoc manner, significantly limiting the productivity of scientists and constraining the utilization of resources. The success of these large-scale distributed applications requires a highly adaptive and massively scalable workflow platform that provides automated and optimized computing and networking services. This project is to design and develop a generic Scientific Workflow Automation and Management Platform (SWAMP), which contains a web-based user interface specially tailored for a target application, a set of user libraries, and several easy-to-use computing and networking toolkits for application scientists to conveniently assemble, execute, monitor, and control complex computing workflows in heterogeneous high-performance network environments. SWAMP will enable the automation and management of the entire process of scientific workflows.

  4. SAChES: Scalable Adaptive Chain-Ensemble Sampling.

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ray, Jaideep [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ebeida, Mohamed Salah [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Huang, Maoyi [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hou, Zhangshuan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bao, Jie [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Ren, Huiying [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2017-08-01

    We present the development of a parallel Markov Chain Monte Carlo (MCMC) method called SAChES, Scalable Adaptive Chain-Ensemble Sampling. This capability is targeted at Bayesian calibration of computationally expensive simulation models. SAChES involves a hybrid of two methods: Differential Evolution Monte Carlo followed by Adaptive Metropolis. Both methods involve parallel chains. Differential evolution allows one to explore high-dimensional parameter spaces using loosely coupled (i.e., largely asynchronous) chains. Loose coupling allows the use of large chain ensembles, with far more chains than the number of parameters to explore. This reduces the per-chain sampling burden and enables high-dimensional inversions and the use of computationally expensive forward models. The large number of chains can also ameliorate the impact of silent errors, which may affect only a few chains. The chain ensemble can also be sampled to provide an initial condition when an aberrant chain is re-spawned. Adaptive Metropolis takes the best points from the differential evolution and efficiently homes in on the posterior density. The multitude of chains in SAChES is leveraged to (1) enable efficient exploration of the parameter space and (2) ensure robustness to silent errors, which may be unavoidable in extreme-scale computational platforms of the future. This report outlines SAChES, describes four papers that are the result of the project, and discusses some additional results.
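
    A toy serial sketch of the Differential Evolution Monte Carlo ingredient, where each chain proposes a move along the difference of two randomly chosen other chains; the Adaptive Metropolis stage, parallelism, and fault-tolerance features of SAChES are not modeled, and the Gaussian target is assumed purely for illustration.

      import numpy as np

      rng = np.random.default_rng(0)

      def log_post(x):
          # Toy posterior: standard 2-D Gaussian.
          return -0.5 * np.sum(x**2)

      def demc_sweep(chains, gamma, eps=1e-4):
          """One Differential Evolution MC sweep over the chain ensemble."""
          n, d = chains.shape
          for i in range(n):
              a, b = rng.choice([j for j in range(n) if j != i], size=2, replace=False)
              prop = chains[i] + gamma * (chains[a] - chains[b]) \
                     + eps * rng.standard_normal(d)
              # Metropolis accept/reject.
              if np.log(rng.random()) < log_post(prop) - log_post(chains[i]):
                  chains[i] = prop
          return chains

      d = 2
      n_chains = 20                    # far more chains than parameters
      gamma = 2.38 / np.sqrt(2 * d)    # standard DE-MC scaling factor
      chains = 5.0 * rng.standard_normal((n_chains, d))   # dispersed start
      for _ in range(2000):
          chains = demc_sweep(chains, gamma)
      print("ensemble mean (should be near 0):", chains.mean(axis=0))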

  5. CloudTPS: Scalable Transactions for Web Applications in the Cloud

    NARCIS (Netherlands)

    Zhou, W.; Pierre, G.E.O.; Chi, C.-H.

    2010-01-01

    NoSQL Cloud data services provide scalability and high availability properties for web applications, but at the same time they sacrifice data consistency. However, many applications cannot afford any data inconsistency. CloudTPS is a scalable transaction manager that allows cloud database services to execute multi-item transactions of web applications while maintaining strong data consistency.

  6. Nanocalorimeter platform for in situ specific heat measurements and x-ray diffraction at low temperature

    Energy Technology Data Exchange (ETDEWEB)

    Willa, K. [Materials Science Division, Argonne National Laboratory, 9700 South Cass Avenue, Argonne, Illinois 60439, USA]; Diao, Z. [Department of Physics, Stockholm University, SE-106 91 Stockholm, Sweden; Laboratory of Mathematics, Physics and Electrical Engineering, Halmstad University, P.O. Box 823, SE-301 18 Halmstad, Sweden]; Campanini, D. [Department of Physics, Stockholm University, SE-106 91 Stockholm, Sweden]; Welp, U. [Materials Science Division, Argonne National Laboratory, 9700 South Cass Avenue, Argonne, Illinois 60439, USA]; Divan, R. [Center for Nanoscale Materials, Argonne National Laboratory, 9700 South Cass Avenue, Argonne, Illinois 60439, USA]; Hudl, M. [Department of Physics, Stockholm University, SE-106 91 Stockholm, Sweden]; Islam, Z. [X-ray Science Division, Argonne National Laboratory, 9700 South Cass Avenue, Argonne, Illinois 60439, USA]; Kwok, W.-K. [Materials Science Division, Argonne National Laboratory, 9700 South Cass Avenue, Argonne, Illinois 60439, USA]; Rydh, A. [Department of Physics, Stockholm University, SE-106 91 Stockholm, Sweden]

    2017-12-01

    Recent advances in electronics and nanofabrication have enabled membrane-based nanocalorimetry for measurements of the specific heat of microgram-sized samples. We have integrated a nanocalorimeter platform into a 4.5 T split-pair vertical-field magnet to allow for the simultaneous measurement of specific heat and x-ray scattering in magnetic fields and at temperatures as low as 4 K. This multi-modal approach empowers researchers to directly correlate scattering experiments with thermodynamic insights into structural, electronic, orbital, and magnetic phase transitions. The use of a nanocalorimeter sample platform enables numerous technical advantages: precise measurement and control of the sample temperature, quantification of beam heating effects, fast and precise positioning of the sample in the x-ray beam, and fast acquisition of x-ray scans over a wide temperature range without the need for time-consuming re-centering and re-alignment. Furthermore, on a YBa2Cu3O7-δ crystal and a copper foil, we demonstrate a novel approach to x-ray absorption spectroscopy by monitoring the change in sample temperature as a function of incident photon energy. Finally, we illustrate the new insights that can be gained from in situ structural and thermodynamic measurements by investigating the superheated state occurring at the first-order magneto-elastic phase transition of Fe2P, a material that is of interest for magnetocaloric applications.

  7. From Digital Disruption to Business Model Scalability

    DEFF Research Database (Denmark)

    Nielsen, Christian; Lund, Morten; Thomsen, Peter Poulsen

    2017-01-01

    This article discusses the terms disruption, digital disruption, business models and business model scalability. It illustrates how managers should be using these terms for the benefit of their business by developing business models capable of achieving exponentially increasing returns to scale as a response to digital disruption. A series of case studies illustrates that the measures frequently advocated in the business literature, creating agile businesses in both growing and declining economies, as well as hard-to-copy value propositions or value propositions that exploit digitalization, will seldom by themselves lead to business model scalability capable of competing with digital disruption(s).

  8. Scalable Packet Classification with Hash Tables

    Science.gov (United States)

    Wang, Pi-Chung

    In the last decade, the technique of packet classification has been widely deployed in various network devices, including routers, firewalls and network intrusion detection systems. In this work, we improve the performance of packet classification by using multiple hash tables. The existing hash-based algorithms have superior scalability with respect to the required space; however, their search performance may not be comparable to that of other algorithms. To improve the search performance, we propose a tuple reordering algorithm to minimize the number of accessed hash tables with the aid of bitmaps. We also use pre-computation to ensure the accuracy of our search procedure. Performance evaluation based on both real and synthetic filter databases shows that our scheme is effective and scalable and the pre-computation cost is moderate.
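
    A minimal sketch of classification with multiple hash tables in the tuple-space style: filters are grouped by their prefix-length tuple, each group gets its own hash table, and a lookup probes the tables and keeps the most specific match. The bitmap-guided tuple reordering and pre-computation described above are not modeled; the filters shown are invented examples.

      # Filters grouped by (src_len, dst_len) prefix-length tuple; one hash
      # table per tuple, as in tuple-space packet classification.

      def prefix_key(addr, plen):
          return addr >> (32 - plen) if plen else 0

      filters = [  # (src_prefix, src_len, dst_prefix, dst_len, action)
          (0x0A000000, 8,  0xC0A80000, 16, "permit"),
          (0x0A010000, 16, 0xC0A80100, 24, "deny"),
      ]

      tuple_space = {}
      for src, sl, dst, dl, action in filters:
          table = tuple_space.setdefault((sl, dl), {})
          table[(prefix_key(src, sl), prefix_key(dst, dl))] = action

      def classify(src_addr, dst_addr):
          # Probe every tuple's hash table; keep the most specific match.
          # (The paper's contribution is reordering/pruning these probes.)
          best, best_spec = "default", -1
          for (sl, dl), table in tuple_space.items():
              hit = table.get((prefix_key(src_addr, sl), prefix_key(dst_addr, dl)))
              if hit is not None and sl + dl > best_spec:
                  best, best_spec = hit, sl + dl
          return best

      print(classify(0x0A0101FE, 0xC0A80105))  # both filters match -> "deny" wins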

  9. A scalable method for parallelizing sampling-based motion planning algorithms

    KAUST Repository

    Jacobs, Sam Ade; Manavi, Kasra; Burgos, Juan; Denny, Jory; Thomas, Shawna; Amato, Nancy M.

    2012-01-01

    This paper describes a scalable method for parallelizing sampling-based motion planning algorithms. It subdivides configuration space (C-space) into (possibly overlapping) regions and independently, in parallel, uses standard (sequential) sampling-based planners to construct roadmaps in each region. Next, in parallel, regional roadmaps in adjacent regions are connected to form a global roadmap. By subdividing the space and restricting the locality of connection attempts, we reduce the work and inter-processor communication associated with nearest neighbor calculation, a critical bottleneck for scalability in existing parallel motion planning methods. We show that our method is general enough to handle a variety of planning schemes, including the widely used Probabilistic Roadmap (PRM) and Rapidly-exploring Random Trees (RRT) algorithms. We compare our approach to two other existing parallel algorithms and demonstrate that our approach achieves better and more scalable performance. Our approach achieves almost linear scalability on a 2400 core Linux cluster and on a 153,216 core Cray XE6 petascale machine. © 2012 IEEE.

  11. Efficient Enhancement for Spatial Scalable Video Coding Transmission

    Directory of Open Access Journals (Sweden)

    Mayada Khairy

    2017-01-01

    Full Text Available Scalable Video Coding (SVC) is an international standard technique for video compression. It is an extension of H.264 Advanced Video Coding (AVC). In the encoding of video streams by SVC, it is suitable to employ the macroblock (MB) mode because it affords superior coding efficiency. However, the exhaustive mode decision technique that is usually used for SVC increases the computational complexity, resulting in a longer encoding time (ET). Many other algorithms were proposed to solve this problem, at the cost of increased transmission time (TT) across the network. To minimize the ET and TT, this paper introduces four efficient algorithms based on spatial scalability. The algorithms utilize the mode-distribution correlation between the base layer (BL) and enhancement layers (ELs) and interpolation between the EL frames. The proposed algorithms are of two categories. Those of the first category are based on interlayer residual SVC spatial scalability. They employ two methods, namely, interlayer interpolation (ILIP) and the interlayer base mode (ILBM) method, and enable ET and TT savings of up to 69.3% and 83.6%, respectively. The algorithms of the second category are based on full-search SVC spatial scalability. They utilize two methods, namely, full interpolation (FIP) and the full-base mode (FBM) method, and enable ET and TT savings of up to 55.3% and 76.6%, respectively.

  12. The ELPA library: scalable parallel eigenvalue solutions for electronic structure theory and computational science.

    Science.gov (United States)

    Marek, A; Blum, V; Johanni, R; Havu, V; Lang, B; Auckenthaler, T; Heinecke, A; Bungartz, H-J; Lederer, H

    2014-05-28

    Obtaining the eigenvalues and eigenvectors of large matrices is a key problem in electronic structure theory and many other areas of computational science. The computational effort formally scales as O(N^3) with the size of the investigated problem, N (e.g. the electron count in electronic structure theory), and thus often defines the system size limit that practical calculations cannot overcome. In many cases, more than just a small fraction of the possible eigenvalue/eigenvector pairs is needed, so that iterative solution strategies that focus only on a few eigenvalues become ineffective. Likewise, it is not always desirable or practical to circumvent the eigenvalue solution entirely. We here review some current developments regarding dense eigenvalue solvers and then focus on the Eigenvalue soLvers for Petascale Applications (ELPA) library, which facilitates the efficient algebraic solution of symmetric and Hermitian eigenvalue problems for dense matrices that have real-valued and complex-valued matrix entries, respectively, on parallel computer platforms. ELPA addresses standard as well as generalized eigenvalue problems, relying on the well documented matrix layout of the Scalable Linear Algebra PACKage (ScaLAPACK) library but replacing all actual parallel solution steps with subroutines of its own. For these steps, ELPA significantly outperforms the corresponding ScaLAPACK routines and proprietary libraries that implement the ScaLAPACK interface (e.g. Intel's MKL). The most time-critical step is the reduction of the matrix to tridiagonal form and the corresponding backtransformation of the eigenvectors. ELPA offers both a one-step tridiagonalization (successive Householder transformations) and a two-step transformation that is more efficient especially towards larger matrices and larger numbers of CPU cores. ELPA is based on the MPI standard, with an early hybrid MPI-OpenMP implementation available as well. Scalability beyond 10,000 CPU cores has been demonstrated for sufficiently large problem sizes.
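
    ELPA itself is a Fortran library layered on MPI and the ScaLAPACK matrix layout; as a small serial illustration of the two problem classes it targets, the sketch below solves a standard and a generalized symmetric eigenproblem with SciPy's LAPACK-backed eigh.

      import numpy as np
      from scipy.linalg import eigh

      rng = np.random.default_rng(1)
      n = 500

      # Random symmetric matrix H and symmetric positive-definite S, standing
      # in for a Hamiltonian and overlap matrix: H c = e c and H c = e S c.
      A = rng.standard_normal((n, n)); H = (A + A.T) / 2
      B = rng.standard_normal((n, n)); S = B @ B.T + n * np.eye(n)

      evals_std, evecs_std = eigh(H)       # standard eigenproblem
      evals_gen, evecs_gen = eigh(H, S)    # generalized eigenproblem

      # Residual check for the generalized problem: ||H c - e S c|| ~ 0.
      c, e = evecs_gen[:, 0], evals_gen[0]
      print("residual:", np.linalg.norm(H @ c - e * (S @ c)))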

  13. Embedded High Performance Scalable Computing Systems

    National Research Council Canada - National Science Library

    Ngo, David

    2003-01-01

    The Embedded High Performance Scalable Computing Systems (EHPSCS) program is a cooperative agreement between Sanders, a Lockheed Martin Company, and DARPA that ran for three years, from Apr 1995 - Apr 1998...

  14. Investigation on Reliability and Scalability of an FBG-Based Hierarchical AOFSN

    Directory of Open Access Journals (Sweden)

    Li-Mei Peng

    2010-03-01

    Full Text Available The reliability and scalability of large-scale active optical fiber sensor networks (AOFSN) are considered in this paper. The AOFSN consists of a three-level hierarchical sensor network architecture. The first two levels consist of active interrogation and remote nodes (RNs), and the third level, called the sensor subnet (SSN), consists of passive Fiber Bragg Gratings (FBGs) and a few switches. Switch architectures in the RN and various SSNs to improve the reliability and scalability of AOFSN are studied. Two SSNs with regular topology are proposed to support simple routing and scalability in AOFSN: square-based sensor cells (SSC) and pentagon-based sensor cells (PSC). The reliability and scalability are evaluated in terms of the available sensing coverage in the case of one or multiple link failures.

  15. Research on distributed heterogeneous data PCA algorithm based on cloud platform

    Science.gov (United States)

    Zhang, Jin; Huang, Gang

    2018-05-01

    Principal component analysis (PCA) of distributed heterogeneous data sets can overcome the limited scalability of centralized processing. In order to reduce the intermediate data and error components generated when analyzing distributed heterogeneous data sets, a PCA algorithm for heterogeneous data sets on a cloud platform is proposed. The algorithm performs the eigenvalue computation using Householder tridiagonalization and QR factorization, and calculates the error component of the heterogeneous database associated with the public key to obtain the intermediate data set and the lost information. Experiments on distributed DBM heterogeneous datasets show that the method is feasible and reliable in terms of execution time and accuracy.
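
    A serial NumPy/SciPy sketch of the two kernels named above, not the distributed cloud algorithm: the covariance matrix is reduced by Householder reflections (for a symmetric matrix, scipy.linalg.hessenberg yields a tridiagonal form) and its eigenvalues, the PCA spectrum, are then obtained by unshifted QR iteration.

      import numpy as np
      from scipy.linalg import hessenberg

      rng = np.random.default_rng(2)

      # Toy data standing in for one site's share of a distributed data set.
      X = rng.standard_normal((1000, 6)) @ rng.standard_normal((6, 6))
      C = np.cov(X, rowvar=False)          # symmetric covariance matrix

      # Householder reduction: for a symmetric matrix, the Hessenberg form
      # produced by successive Householder reflections is tridiagonal.
      T = hessenberg(C)

      # Unshifted QR iteration on the tridiagonal matrix; each step T <- RQ
      # is an orthogonal similarity, so eigenvalues are preserved while the
      # off-diagonal entries decay. (Production codes use shifted variants.)
      for _ in range(500):
          q, r = np.linalg.qr(T)
          T = r @ q

      pca_eigvals = np.sort(np.diag(T))[::-1]     # PCA spectrum, descending
      ref = np.sort(np.linalg.eigvalsh(C))[::-1]
      print("max eigenvalue error:", np.max(np.abs(pca_eigvals - ref)))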

  16. Outlook in the application of Chlamydomonas reinhardtii chloroplast as a platform for recombinant protein production.

    Science.gov (United States)

    Shamriz, Shabnam; Ofoghi, Hamideh

    Microalgae, also called microphytes, are a vast group of microscopic photosynthetic organisms living in aquatic ecosystems. Microalgae have attracted the attention of the biotechnology industry as a platform for extracting natural products with high commercial value. During the last decades, microalgae have also been used as a cost-effective and easily scalable platform for the production of recombinant proteins with medical and industrial applications. Most progress in this field has been made with Chlamydomonas reinhardtii as a model organism, mainly because of its simple life cycle, well-established genetics and ease of cultivation. However, due to the scarcity of existing infrastructure for commercial production and processing, together with relatively low product yields, no recombinant products from C. reinhardtii have gained approval for commercial production and most of them are still in research and development. In this review, we focus on the chloroplast of C. reinhardtii as an algal recombinant expression platform and compare its advantages and disadvantages to other currently used expression systems. We then discuss the strategies for engineering the chloroplast of C. reinhardtii to produce recombinant cells and present a comprehensive overview of works that have used this platform for the expression of high-value products.

  17. Scalable Coverage Maintenance for Dense Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Jun Lu

    2007-06-01

    Full Text Available Owing to numerous potential applications, wireless sensor networks have been attracting significant research effort recently. The critical challenge that wireless sensor networks often face is to sustain long-term operation on limited battery energy. Coverage maintenance schemes can effectively prolong network lifetime by selecting and employing a subset of sensors in the network to provide sufficient sensing coverage over a target region. We envision future wireless sensor networks composed of a vast number of miniaturized sensors in exceedingly high density. Therefore, the key issue of coverage maintenance for future sensor networks is the scalability to sensor deployment density. In this paper, we propose a novel coverage maintenance scheme, scalable coverage maintenance (SCOM), which is scalable to sensor deployment density in terms of communication overhead (i.e., number of transmitted and received beacons) and computational complexity (i.e., time and space complexity). In addition, SCOM achieves high energy efficiency and load balancing over different sensors. We have validated our claims through both analysis and simulations.

  18. Fusion virtual laboratory: The experiments' collaboration platform in Japan

    Energy Technology Data Exchange (ETDEWEB)

    Nakanishi, H., E-mail: nakanisi@nifs.ac.jp [National Institute for Fusion Science, Toki, Gifu 509-5292 (Japan); Kojima, M.; Takahashi, C.; Ohsuna, M.; Imazu, S.; Nonomura, M. [National Institute for Fusion Science, Toki, Gifu 509-5292 (Japan); Hasegawa, M. [RIAM, Kyushu University, Kasuga, Fukuoka 816-8560 (Japan); Yoshikawa, M. [PRC, University of Tsukuba, Tsukuba, Ibaraki 305-8577 (Japan); Nagayama, Y.; Kawahata, K. [National Institute for Fusion Science, Toki, Gifu 509-5292 (Japan)

    2012-12-15

    'Fusion virtual laboratory (FVL)' is the experiments' collaboration platform covering multiple fusion projects in Japan. Major Japanese fusion laboratories and universities are mutually connected through a dedicated virtual private network, named SNET, on SINET4. It has 3 different categories: (i) LHD remote participation, (ii) bilateral experiments' collaboration, and (iii) remote use of supercomputers. By extending the LABCOM data system developed at LHD, FVL supports (i) and (ii) so that it can deal with not only LHD data but also the data of two remote experiments: QUEST at Kyushu University and GAMMA10 at University of Tsukuba. FVL has applied the latest 'cloud' technology for both its data acquisition and storage architecture, providing high availability and performance scalability for the whole system. With a well optimized TCP data transfer method, a unified data access platform for both experimental data and numerical computation results could become realistic on FVL. The FVL project will continue demonstrating ITER-era international collaboration schemes and the necessary technology.

  19. DISP: Optimizations towards Scalable MPI Startup

    Energy Technology Data Exchange (ETDEWEB)

    Fu, Huansong [Florida State University, Tallahassee; Pophale, Swaroop S [ORNL; Gorentla Venkata, Manjunath [ORNL; Yu, Weikuan [Florida State University, Tallahassee

    2016-01-01

    Despite the popularity of MPI for high performance computing, the startup of MPI programs faces a scalability challenge as both the execution time and memory consumption increase drastically at scale. We have examined this problem using the collective modules of Cheetah and Tuned in Open MPI as representative implementations. Previous improvements for collectives have focused on algorithmic advances and hardware off-load. In this paper, we examine the startup cost of the collective module within a communicator and explore various techniques to improve its efficiency and scalability. Accordingly, we have developed a new scalable startup scheme with three internal techniques, namely Delayed Initialization, Module Sharing and Prediction-based Topology Setup (DISP). Our DISP scheme greatly benefits the collective initialization of the Cheetah module. At the same time, it helps boost the performance of non-collective initialization in the Tuned module. We evaluate the performance of our implementation on Titan supercomputer at ORNL with up to 4096 processes. The results show that our delayed initialization can speed up the startup of Tuned and Cheetah by an average of 32.0% and 29.2%, respectively, our module sharing can reduce the memory consumption of Tuned and Cheetah by up to 24.1% and 83.5%, respectively, and our prediction-based topology setup can speed up the startup of Cheetah by up to 80%.

  20. Nanofabrication and coloration study of artificial Morpho butterfly wings with aligned lamellae layers

    Science.gov (United States)

    Zhang, Sichao; Chen, Yifang

    2015-11-01

    The bright and iridescent blue color from Morpho butterfly wings has long attracted worldwide attention in the effort to explore its mysterious nature. Although the physics of the structural color produced by the nanophotonic structures built on the wing scales has been well established, replication of the wing structure by standard top-down lithography still remains a challenge. This paper reports a technical breakthrough in mimicking the blue color of Morpho butterfly wings, by developing a novel nanofabrication process, based on electron beam lithography combined with alternate PMMA/LOR development/dissolution, for photonic structures with aligned lamellae multilayers in colorless polymers. The relationship between the coloration and the geometric dimensions as well as shapes is systematically analyzed by solving Maxwell's equations with a finite-difference time-domain simulator. Careful characterization of the mimicked blue by spectral measurements under both normal and oblique angles is carried out. Structural color in blue reflected by the fabricated wing scales is demonstrated and further extended to green as an application exercise of the new technique. The effects of the regularity of the replicas on coloration are analyzed. In principle, this approach establishes a starting point for mimicking structural colors beyond the blue in Morpho butterfly wings.

  2. Analysis of establishing back-end system for mobile devices IOS and Android on Google App Engine platform

    OpenAIRE

    Kambič, Dušan

    2013-01-01

    The following thesis discusses Google App Engine (GAE) in relation to Google Cloud Endpoints (GCE). GAE is an example of the cloud computing model called Platform as a Service (PaaS). It enables developers to develop and host scalable applications on Google's infrastructure. Companies can hire the right amount of server resources for their current needs, thereby reducing business costs. For developed GAE applications we can automatically generate libraries for Android, iOS and JavaScript.

  3. Future mobile access for open-data platforms and the BBC-DaaS system

    Science.gov (United States)

    Edlich, Stefan; Singh, Sonam; Pfennigstorf, Ingo

    2013-03-01

    In this paper, we develop an open data platform on multimedia devices to act as a marketplace of data for information seekers and data providers. We explore the important aspects of a Data-as-a-Service (DaaS) offering in the cloud with a mobile access point. The basis of the DaaS service is to act as a marketplace for information, utilizing new technologies and recent scalable polyglot architectures based on NoSQL databases. Whereas open-data platforms are beginning to be widely accepted, their mobile use is not. We compare similar products, their approaches and possible mobile usage. We discuss several approaches to address mobile access as a native app, HTML5 and a mobile-first approach, together with several frontend presentation techniques. Big data visualization itself is in its early days, and we explore some possibilities for getting big data / open data accessed by mobile users.

  4. Blind Cooperative Routing for Scalable and Energy-Efficient Internet of Things

    KAUST Repository

    Bader, Ahmed; Alouini, Mohamed-Slim

    2016-01-01

    Multihop networking is promoted in this paper for energy-efficient and highly-scalable Internet of Things (IoT). Recognizing concerns related to the scalability of classical multihop routing and medium access techniques, the use of blind cooperation is proposed.

  5. Large Scale Simulation Platform for NODES Validation Study

    Energy Technology Data Exchange (ETDEWEB)

    Sotorrio, P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Qin, Y. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Min, L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-04-27

    This report summarizes the Large Scale (LS) simulation platform created for the Eaton NODES project. The simulation environment consists of both a wholesale market simulator and a distribution simulator, and includes the CAISO wholesale market model and a PG&E footprint of 25-75 feeders to validate scalability under a scenario of 33% RPS in California with an additional 17% of DERs coming from distribution and customers. The simulator can generate hourly unit commitment, 5-minute economic dispatch, and 4-second AGC regulation signals. The simulator is also capable of simulating greater than 10k individual controllable devices. Simulated DERs include water heaters, EVs, residential and light commercial HVAC/buildings, and residential-level battery storage. Feeder-level voltage regulators and capacitor banks are also simulated for feeder-level real and reactive power management and Volt/Var control.

  6. Towards a Scalable, Biomimetic, Antibacterial Coating

    Science.gov (United States)

    Dickson, Mary Nora

    Corneal afflictions are the second leading cause of blindness worldwide. When a corneal transplant is unavailable or contraindicated, an artificial cornea device is the only chance to save sight. Bacterial or fungal biofilm build-up on artificial cornea devices can lead to serious complications including the need for systemic antibiotic treatment and even explantation. As a result, much emphasis has been placed on anti-adhesion chemical coatings and antibiotic-leaching coatings. These methods are not long-lasting, and microorganisms can eventually circumvent these measures. Thus, I have developed a surface-topographical antimicrobial coating. Various surface structures including rough surfaces, superhydrophobic surfaces, and the natural surfaces of insects' wings and sharks' skin are promising anti-biofilm candidates; however, none meet the criteria necessary for implementation on the surface of an artificial cornea device. In this thesis I: 1) developed scalable fabrication protocols for a library of biomimetic nanostructured polymer surfaces, 2) assessed the potential of poly(methyl methacrylate) nanopillars to kill or prevent formation of biofilm by E. coli bacteria and species of Pseudomonas and Staphylococcus bacteria, and improved upon a proposed mechanism for the rupture of Gram-negative bacterial cell walls, 3) developed a scalable, commercially viable method for producing antibacterial nanopillars on a curved PMMA artificial cornea device, and 4) developed scalable fabrication protocols for implementing antibacterial nanopatterned surfaces on thermoplastic polyurethane materials, commonly used in catheter tubing. This project constitutes a first step towards fabrication of the first entirely PMMA artificial cornea device. The major finding of this work is that by precisely controlling the topography of a polymer surface at the nano-scale, we can kill adherent bacteria and prevent biofilm formation by certain pathogenic bacteria.

  7. Homomorphic encryption experiments on IBM's cloud quantum computing platform

    Science.gov (United States)

    Huang, He-Liang; Zhao, You-Wei; Li, Tan; Li, Feng-Guang; Du, Yu-Tao; Fu, Xiang-Qun; Zhang, Shuo; Wang, Xiang; Bao, Wan-Su

    2017-02-01

    Quantum computing has undergone rapid development in recent years. Owing to limitations on scalability, personal quantum computers still seem slightly unrealistic in the near future. The first practical quantum computer for ordinary users is likely to be on the cloud. However, the adoption of cloud computing is possible only if security is ensured. Homomorphic encryption is a cryptographic protocol that allows computation to be performed on encrypted data without decrypting them, so it is well suited to cloud computing. Here, we first applied homomorphic encryption on IBM's cloud quantum computer platform. In our experiments, we successfully implemented a quantum algorithm for linear equations while protecting our privacy. This demonstration opens a feasible path to the next stage of development of cloud quantum information technology.

  8. Rate control scheme for consistent video quality in scalable video codec.

    Science.gov (United States)

    Seo, Chan-Won; Han, Jong-Ki; Nguyen, Truong Q

    2011-08-01

    Multimedia data delivered to mobile devices over wireless channels or the Internet are complicated by bandwidth fluctuation and the variety of mobile devices. Scalable video coding has been developed as an extension of H.264/AVC to solve this problem. Since scalable video codec provides various scalabilities to adapt the bitstream for the channel conditions and terminal types, scalable codec is one of the useful codecs for wired or wireless multimedia communication systems, such as IPTV and streaming services. In such scalable multimedia communication systems, video quality fluctuation degrades the visual perception significantly. It is important to efficiently use the target bits in order to maintain a consistent video quality or achieve a small distortion variation throughout the whole video sequence. The scheme proposed in this paper provides a useful function to control video quality in applications supporting scalability, whereas conventional schemes have been proposed to control video quality in the H.264 and MPEG-4 systems. The proposed algorithm decides the quantization parameter of the enhancement layer to maintain a consistent video quality throughout the entire sequence. The video quality of the enhancement layer is controlled based on a closed-form formula which utilizes the residual data and quantization error of the base layer. The simulation results show that the proposed algorithm controls the frame quality of the enhancement layer in a simple operation, where the parameter decision algorithm is applied to each frame.
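
    The abstract does not reproduce the paper's closed-form formula, so the sketch below is a purely hypothetical stand-in for the idea: assume a toy distortion model D ≈ k · E · 2^((QP - 20)/3), with E the base-layer residual energy, and invert it each frame so that the predicted distortion stays on a constant target, which is the qualitative goal of the scheme.

      import math

      def choose_qp(residual_energy, target_d, k=0.01, qp_min=10, qp_max=51):
          """Invert the assumed (toy) distortion model for QP, then clamp."""
          qp = 20 + 3 * math.log2(target_d / (k * residual_energy))
          return int(min(qp_max, max(qp_min, round(qp))))

      target_d = 2.0   # constant per-frame distortion target -> steady quality
      for frame, energy in enumerate([80.0, 120.0, 95.0, 300.0, 60.0]):
          # Busier frames (larger residual energy) get a finer quantizer.
          print(f"frame {frame}: residual energy {energy:6.1f} "
                f"-> enhancement-layer QP {choose_qp(energy, target_d)}")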

  9. Plasmonic Colors: Toward Mass Production of Metasurfaces

    DEFF Research Database (Denmark)

    Højlund-Nielsen, Emil; Clausen, Jeppe Sandvik; Mäkela, Tapio

    2016-01-01

    Plasmonic metasurface coloration has attracted considerable attention in recent years due to its industrial potential. So far, demonstrations have been limited to small patterned areas fabricated using expensive techniques with limited scalability. This study elevates the technology beyond the common size and volume limitations of nanofabrication and demonstrates aluminum-coated polymer-based colored metasurfaces of square-centimeter size made by embossing, injection molding, roll-to-roll printing, and film insert molding. The different techniques are compared, and their requirements and bottlenecks are outlined.

  10. Advanced Nanofabrication Process Development for Self-Powered System-on-Chip

    KAUST Repository

    Rojas, Jhonathan Prieto

    2010-11-01

    In this work the development of a Self-Powered System-on-Chip is explored by examining two components of process development from different perspectives. On one side, an energy component is approached from a biochemical standpoint: a Microbial Fuel Cell (MFC) is built with standard microfabrication techniques, featuring a novel electrode based on Carbon Nanotubes (CNTs). The fabrication process involves the formation of a micrometric chamber that hosts an enhanced CNT-based anode. Preliminary results are promising, showing a high current density (113.6 mA/m2) compared with other similar cells. Nevertheless, many improvements can be made to the main design, and further characterization of the anode will give a more complete understanding and bring the device closer to practical implementation. From a second point of view, nano-patterning through silicon nitride spacer width control is developed, aimed at alternative sub-100 nm device fabrication with the potential for further scaling thanks to nanowire-based structures. These nanostructures are formed from a nano-pattern template, using a bottom-up fabrication scheme. Uniformity and scalability of the process are demonstrated and its potential described. An estimated area of 0.120 μm2 for a 6T-SRAM (Static Random Access Memory) bitcell (6 devices) can be achieved. In summary, by combining a novel sustainable energy component with scalable nano-patterning for logic and computing modules, this work has collected the essential base knowledge and joined two different elements that will synergistically contribute to the future implementation of a Self-Powered System-on-Chip.

  11. Horizontally scaling dCache SRM with the Terracotta platform

    International Nuclear Information System (INIS)

    Perelmutov, T; Crawford, M; Moibenko, A; Oleynik, G

    2011-01-01

    The dCache disk caching file system has been chosen by a majority of LHC experiments' Tier 1 centers for their data storage needs. It is also deployed at many Tier 2 centers. The Storage Resource Manager (SRM) is a standardized grid storage interface and a single point of remote entry into dCache, and hence is a critical component. SRM must scale to increasing transaction rates and remain resilient against changing usage patterns. The initial implementation of the SRM service in dCache suffered from an inability to support clustered deployment, and its performance was limited by the hardware of a single node. Using the Terracotta platform, we added the ability to horizontally scale the dCache SRM service to run on multiple nodes in a cluster configuration, coupled with network load balancing. This gives site administrators the ability to increase the performance and reliability of the SRM service to face the ever-increasing requirements of LHC data handling. In this paper we describe the previous limitations of the SRM server architecture and how the Terracotta platform allowed us to readily convert the single node service into a highly scalable clustered application.

  13. Disposable inkjet-printed electrochemical platform for detection of clinically relevant HER-2 breast cancer biomarker.

    Science.gov (United States)

    Carvajal, Susanita; Fera, Samantha N; Jones, Abby L; Baldo, Thaisa A; Mosa, Islam M; Rusling, James F; Krause, Colleen E

    2018-05-01

    Rapidly fabricated, disposable sensor platforms hold tremendous promise for point-of-care detection. Here, we present an inexpensive inkjet-printed working electrode array (WEA) for detection of the clinically relevant breast cancer biomarker human epidermal growth factor receptor 2 (HER-2). Capture antibodies were bound to a chemically modified surface on the WEA and placed into a microfluidic device. A full sandwich immunoassay was constructed following a simultaneous injection of target protein, biotinylated antibody, and polymerized horseradish peroxidase labels into the microfluidic device housing the WEA. With an ultrafast assay time of only 15 min, a clinically relevant limit of detection of 12 pg mL-1 was achieved. Excellent reproducibility and sensitivity were observed through recovery assays performed in human serum, with recoveries ranging from 76% to 103%. These easily fabricated and scalable electrochemical sensor platforms can be readily adapted for multiplex detection following this rapid assay protocol for cancer diagnostics. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. Transient classification for the IRIS reactor using self-organized maps built in free platform

    International Nuclear Information System (INIS)

    Doraskevicius Junior, Waldemar

    2005-01-01

    Kohonen's Self-Organizing Maps (SOM) were tested with data from several operational conditions of the nuclear reactor IRIS (International Reactor Innovative and Secure) to develop an effective tool for classification and transient identification in nuclear reactors. The data were derived from 56 simulations of the operation of IRIS, ranging from steady-state conditions to accidents. The digital system built for the tests was based on the Java platform for its portability and scalability and because it is a free development platform. Satisfactory operation classification results were obtained with reasonable processing time on personal computers; about two to five minutes were spent on ordination and convergence of the learning over the database. The methodology of this work was extended to the supervision of natural gas logistics for Brazilian pipelines, showing satisfactory results for the classification of deliveries from simultaneous measurements at several points. (author)
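
    A minimal NumPy sketch of the underlying technique, a Kohonen map trained on synthetic stand-ins for plant signals; the thesis's Java system and the IRIS simulation data are not reproduced here.

      import numpy as np

      rng = np.random.default_rng(3)

      # Synthetic stand-ins for reactor signals: 3 operating regimes in 4-D.
      data = np.vstack([rng.normal(m, 0.1, size=(200, 4)) for m in (0.2, 0.5, 0.8)])

      # 8x8 SOM: grid coordinates and weight vectors in data space.
      grid = np.array([(i, j) for i in range(8) for j in range(8)], dtype=float)
      W = rng.random((64, 4))

      n_iter = 3000
      for t in range(n_iter):
          x = data[rng.integers(len(data))]
          lr = 0.5 * (1 - t / n_iter)                  # decaying learning rate
          sigma = 3.0 * (1 - t / n_iter) + 0.5         # shrinking neighborhood
          bmu = np.argmin(((W - x) ** 2).sum(axis=1))  # best-matching unit
          h = np.exp(-((grid - grid[bmu]) ** 2).sum(axis=1) / (2 * sigma ** 2))
          W += lr * h[:, None] * (x - W)               # pull neighbors toward x

      # A new sample is classified by its BMU; transients of the same kind
      # map to the same neighborhood of the grid.
      x_new = rng.normal(0.5, 0.1, size=4)
      print("BMU grid position:", grid[np.argmin(((W - x_new) ** 2).sum(axis=1))])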

  15. Scalable Integrated Region-Based Image Retrieval Using IRM and Statistical Clustering.

    Science.gov (United States)

    Wang, James Z.; Du, Yanping

    Statistical clustering is critical in designing scalable image retrieval systems. This paper presents a scalable algorithm for indexing and retrieving images based on region segmentation. The method uses statistical clustering on region features and IRM (Integrated Region Matching), a measure developed to evaluate overall similarity between images.
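
    IRM itself is not reproduced here; the sketch below illustrates the statistical-clustering half of the idea with a small NumPy k-means over synthetic region features, where a query is narrowed to the nearest cluster before any fine-grained matching, which is where the scalability comes from.

      import numpy as np

      rng = np.random.default_rng(4)

      # Toy region features (e.g., color/texture descriptors), 3 latent groups.
      feats = np.vstack([rng.normal(c, 0.5, size=(300, 8)) for c in (-2.0, 0.0, 2.0)])

      def kmeans(X, k, n_iter=50):
          """Plain Lloyd's algorithm: assign to nearest center, recompute means."""
          centers = X[rng.choice(len(X), size=k, replace=False)]
          for _ in range(n_iter):
              labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
              for j in range(k):
                  if np.any(labels == j):
                      centers[j] = X[labels == j].mean(axis=0)
          return centers, labels

      centers, labels = kmeans(feats, k=3)

      # Index effect: a query is compared only against regions in its nearest
      # cluster instead of the whole database.
      q = rng.normal(0.0, 0.5, size=8)
      nearest = int(np.argmin(((centers - q) ** 2).sum(axis=1)))
      print("search cluster", nearest, "holding", int((labels == nearest).sum()), "regions")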

  16. Scalable fast multipole accelerated vortex methods

    KAUST Repository

    Hu, Qi; Gumerov, Nail A.; Yokota, Rio; Barba, Lorena A.; Duraiswami, Ramani

    2014-01-01

    The method handles inter-node communication and load balance efficiently, with only a small parallel construction overhead. The algorithm can scale to large-sized clusters, showing both strong and weak scalability. Careful error and timing trade-off analyses are also performed for the cutoff parameter.

  17. Scalability Dilemma and Statistic Multiplexed Computing — A Theory and Experiment

    Directory of Open Access Journals (Sweden)

    Justin Yuan Shi

    2017-08-01

    Full Text Available For the last three decades, end-to-end computing paradigms, such as MPI (Message Passing Interface), RPC (Remote Procedure Call) and RMI (Remote Method Invocation), have been the de facto paradigms for distributed and parallel programming. Despite their successes, applications built using these paradigms suffer because the probability of a crash grows in proportion to application size. Checkpoint/restore and backup/recovery are the only means to save otherwise lost critical information. The scalability dilemma is the practical challenge that the probability of data loss increases as the application scales in size. The theoretical significance of this practical challenge is that it undermines the fundamental structure of the scientific discovery process and mission critical services in production today. In 1997, the direct use of the end-to-end reference model in distributed programming was recognized as a fallacy, and the scalability dilemma was predicted. However, this voice was overrun by the passage of time. Today, the rapidly growing volume of digitized data demands solutions to the increasingly critical scalability challenges. Computing architecture scalability, although loosely defined, is now front and center in large-scale computing efforts. Constrained only by the economic law of diminishing returns, this paper proposes a narrow definition of a Scalable Computing Service (SCS). Three scalability tests are also proposed in order to distinguish service architecture flaws from poor application programming. Scalable data intensive services require additional treatment; thus, the data storage is assumed reliable in this paper. A single-sided Statistic Multiplexed Computing (SMC) paradigm is proposed. A UVR (Unidirectional Virtual Ring) SMC architecture is examined under the SCS tests. SMC was designed to circumvent the well-known impossibility of end-to-end paradigms. It relies on the proven statistic multiplexing principle to deliver reliable services.

  18. Three-Dimensional Porous Nickel Frameworks Anchored with Cross-Linked Ni(OH)2 Nanosheets as a Highly Sensitive Nonenzymatic Glucose Sensor.

    Science.gov (United States)

    Mao, Weiwei; He, Haiping; Sun, Pengcheng; Ye, Zhizhen; Huang, Jingyun

    2018-05-02

    A facile and scalable in situ microelectrolysis nanofabrication technique is developed for preparing cross-linked Ni(OH)2 nanosheets on a novel three-dimensional porous nickel template (Ni(OH)2@3DPN). For the constructed template, the porogen of NaCl particles not only induces a self-limiting surficial hot corrosion following the "start engine stop" mechanism but also serves as the primary-battery electrolyte to greatly accelerate the growth of Ni(OH)2. As far as we know, the microelectrolysis nanofabrication is superior to the other reported Ni(OH)2 synthesis methods due to its mild conditions (60 °C, 6 h, NaCl solution, ambient environment) and the absence of any post-treatment. The integrated Ni(OH)2@3DPN electrode, with a highly suitable microstructure and a porous architecture, suggests potential applications in electrochemistry. As a proof-of-concept demonstration, the electrode was employed for nonenzymatic glucose sensing, exhibiting an outstanding sensitivity of 2761.6 μA mM-1 cm-2 over the range from 0.46 to 2100 μM, a fast response, and a low detection limit. The microelectrolysis nanofabrication is one-step, binder-free, and entirely green, and it therefore has a distinct advantage in improving clean production and reducing energy consumption.

  19. Seqcrawler: biological data indexing and browsing platform.

    Science.gov (United States)

    Sallou, Olivier; Bretaudeau, Anthony; Roult, Aurelien

    2012-07-24

    Seqcrawler takes its roots in software like SRS or Lucegene. It provides an indexing platform to ease the search of data and meta-data in biological banks, and it can scale to face the current flow of data. While many biological bank search tools are available on the Internet, mainly provided by large organizations to search their own data, there is a lack of free and open source solutions for browsing one's own set of data with a flexible query system able to scale from a single computer to a cloud system. A personal index platform will help labs and bioinformaticians search their meta-data and also build larger information systems with custom subsets of data. The software is scalable from a single computer to a cloud-based infrastructure. It has been successfully tested in a private cloud with 3 index shards (pieces of index) hosting ~400 million sequence records (whole GenBank, UniProt, PDB and others) for a total size of 600 GB in a fault tolerant (high-availability) architecture. It has also been successfully integrated with software to add extra meta-data from BLAST results to enhance users' result analysis. Seqcrawler provides a complete open source search and store solution for labs or platforms needing to manage a large amount of data/meta-data with a flexible and customizable web interface. All components (search engine, visualization and data storage), though independent, share a common and coherent data system that can be queried with a simple HTTP interface. The solution scales easily and can also provide a high availability infrastructure.

  20. Seqcrawler: biological data indexing and browsing platform

    Directory of Open Access Journals (Sweden)

    Sallou Olivier

    2012-07-01

    Full Text Available Abstract Background Seqcrawler takes its roots in software like SRS or Lucegene. It provides an indexing platform to ease the search of data and meta-data in biological banks, and it can scale to face the current flow of data. While many biological bank search tools are available on the Internet, mainly provided by large organizations to search their own data, there is a lack of free and open source solutions for browsing one's own set of data with a flexible query system able to scale from a single computer to a cloud system. A personal index platform will help labs and bioinformaticians search their meta-data and also build larger information systems with custom subsets of data. Results The software is scalable from a single computer to a cloud-based infrastructure. It has been successfully tested in a private cloud with 3 index shards (pieces of index) hosting ~400 million sequence records (whole GenBank, UniProt, PDB and others) for a total size of 600 GB in a fault tolerant (high-availability) architecture. It has also been successfully integrated with software to add extra meta-data from BLAST results to enhance users' result analysis. Conclusions Seqcrawler provides a complete open source search and store solution for labs or platforms needing to manage a large amount of data/meta-data with a flexible and customizable web interface. All components (search engine, visualization and data storage), though independent, share a common and coherent data system that can be queried with a simple HTTP interface. The solution scales easily and can also provide a high availability infrastructure.
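
    The abstract states that the shared data system "can be queried with a simple HTTP interface". The sketch below issues such a query using only the Python standard library; the host, path, and parameter names are hypothetical placeholders, not Seqcrawler's documented API.

      # The host, path, and parameter names below are hypothetical placeholders.
      import json
      import urllib.parse
      import urllib.request

      def search(base_url, query, start=0, rows=10):
          """Query a Seqcrawler-like index over plain HTTP and parse JSON."""
          params = urllib.parse.urlencode({"q": query, "start": start, "rows": rows})
          with urllib.request.urlopen(f"{base_url}/search?{params}", timeout=10) as resp:
              return json.load(resp)

      # Example against a local deployment (placeholder URL, not a real service):
      # hits = search("http://localhost:8080", "organism:Escherichia AND kinase")
      # for doc in hits.get("docs", []):
      #     print(doc.get("id"), doc.get("description"))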

  1. Scalable Resolution Display Walls

    KAUST Repository

    Leigh, Jason; Johnson, Andrew; Renambot, Luc; Peterka, Tom; Jeong, Byungil; Sandin, Daniel J.; Talandis, Jonas; Jagodic, Ratko; Nam, Sungwon; Hur, Hyejung; Sun, Yiwen

    2013-01-01

    This article will describe the progress since 2000 on research and development in 2-D and 3-D scalable resolution display walls that are built from tiling individual lower resolution flat panel displays. The article will describe approaches and trends in display hardware construction, middleware architecture, and user-interaction design. The article will also highlight examples of use cases and the benefits the technology has brought to their respective disciplines. © 1963-2012 IEEE.

  2. Evaluation of 3D printed anatomically scalable transfemoral prosthetic knee.

    Science.gov (United States)

    Ramakrishnan, Tyagi; Schlafly, Millicent; Reed, Kyle B

    2017-07-01

    This case study compares a transfemoral amputee's gait while using the existing Ossur Total Knee 2000 and our novel 3D printed anatomically scalable transfemoral prosthetic knee. The anatomically scalable transfemoral prosthetic knee is 3D printed out of a carbon-fiber and nylon composite that has a gear-mesh coupling with a hard-stop weight-actuated locking mechanism aided by a cross-linked four-bar spring mechanism. This design can be scaled using anatomical dimensions of a human femur and tibia to provide a unique fit for each user. The transfemoral amputee who was tested is high functioning and walked on the Computer Assisted Rehabilitation Environment (CAREN) at a self-selected pace. The motion capture and force data that were collected showed distinct differences in the gait dynamics. The data were used to compute the Combined Gait Asymmetry Metric (CGAM), whose scores revealed that the gait was more asymmetric overall on the Ossur Total Knee than on the anatomically scalable transfemoral prosthetic knee. The anatomically scalable transfemoral prosthetic knee had higher peak knee flexion, which caused a large step-time asymmetry. This made walking on the anatomically scalable transfemoral prosthetic knee more strenuous, due to the compensatory movements needed to adapt to the different dynamics. This can be overcome by tuning the cross-linked spring mechanism to better emulate the dynamics of the subject. The subject stated that the knee would be good for daily use and has the potential to be adapted as a running knee.

  3. Scalable optical switches for computing applications

    NARCIS (Netherlands)

    White, I.H.; Aw, E.T.; Williams, K.A.; Wang, Haibo; Wonfor, A.; Penty, R.V.

    2009-01-01

    A scalable photonic interconnection network architecture is proposed whereby a Clos network is populated with broadcast-and-select stages. This enables the efficient exploitation of an emerging class of photonic integrated switch fabric: a low distortion space switch technology based on recently demonstrated photonic integrated circuits.

  4. On the scalability of LISP and advanced overlaid services

    OpenAIRE

    Coras, Florin

    2015-01-01

    In just four decades the Internet has gone from a lab experiment to a worldwide, business-critical infrastructure that caters to the communication needs of almost half of the Earth's population. With these figures on its side, arguing against the Internet's scalability would seem rather unwise. However, the Internet's organic growth is far from finished and, as billions of new devices are expected to join in the not-so-distant future, scalability, or lack thereof, is commonly believed ...

  5. Scalable Algorithms for Adaptive Statistical Designs

    Directory of Open Access Journals (Sweden)

    Robert Oehmke

    2000-01-01

    Full Text Available We present a scalable, high-performance solution to multidimensional recurrences that arise in adaptive statistical designs. Adaptive designs are an important class of learning algorithms for a stochastic environment, and we focus on the problem of optimally assigning patients to treatments in clinical trials. While adaptive designs have significant ethical and cost advantages, they are rarely utilized because of the complexity of optimizing and analyzing them. Computational challenges include massive memory requirements, few calculations per memory access, and multiply-nested loops with dynamic indices. We analyze the effects of various parallelization options, and while standard approaches do not work well, with effort an efficient, highly scalable program can be developed. This allows us to solve problems thousands of times more complex than those solved previously, which helps make adaptive designs practical. Further, our work applies to many other problems involving neighbor recurrences, such as generalized string matching.
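
    A tiny serial sketch of the kind of neighbor recurrence involved (the paper's contribution, the scalable parallelization, is not attempted here): optimal adaptive allocation of patients between two Bernoulli treatments with uniform Beta(1,1) priors, solved by memoized backward recursion over posterior states.

      from functools import lru_cache

      @lru_cache(maxsize=None)
      def value(s1, f1, s2, f2, remaining):
          """Expected future successes under optimal adaptive allocation,
          with Beta(1,1) priors on each treatment's success probability."""
          if remaining == 0:
              return 0.0
          # Posterior mean success probabilities given successes/failures.
          p1 = (s1 + 1) / (s1 + f1 + 2)
          p2 = (s2 + 1) / (s2 + f2 + 2)
          v1 = p1 * (1 + value(s1 + 1, f1, s2, f2, remaining - 1)) \
               + (1 - p1) * value(s1, f1 + 1, s2, f2, remaining - 1)
          v2 = p2 * (1 + value(s1, f1, s2 + 1, f2, remaining - 1)) \
               + (1 - p2) * value(s1, f1, s2, f2 + 1, remaining - 1)
          return max(v1, v2)

      # 20 patients: any fixed allocation expects 10 successes (prior mean 1/2);
      # the optimal adaptive design does strictly better by learning as it goes.
      print("expected successes, optimal adaptive design:", value(0, 0, 0, 0, 20))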

  6. Scalable fabrication of perovskite solar cells

    Energy Technology Data Exchange (ETDEWEB)

    Li, Zhen; Klein, Talysa R.; Kim, Dong Hoe; Yang, Mengjin; Berry, Joseph J.; van Hest, Maikel F. A. M.; Zhu, Kai

    2018-03-27

    Perovskite materials use earth-abundant elements, have low formation energies for deposition and are compatible with roll-to-roll and other high-volume manufacturing techniques. These features make perovskite solar cells (PSCs) suitable for terawatt-scale energy production with low production costs and low capital expenditure. Demonstrations of performance comparable to that of other thin-film photovoltaics (PVs) and improvements in laboratory-scale cell stability have recently made scale up of this PV technology an intense area of research focus. Here, we review recent progress and challenges in scaling up PSCs and related efforts to enable the terawatt-scale manufacturing and deployment of this PV technology. We discuss common device and module architectures, scalable deposition methods and progress in the scalable deposition of perovskite and charge-transport layers. We also provide an overview of device and module stability, module-level characterization techniques and techno-economic analyses of perovskite PV modules.

  7. Alginate based 3D hydrogels as an in vitro co-culture model platform for the toxicity screening of new chemical entities

    International Nuclear Information System (INIS)

    Lan, Shih-Feng; Starly, Binil

    2011-01-01

    Prediction of human response to potential therapeutic drugs currently relies on conventional in vitro cell culture assays and expensive in vivo animal testing. Alternatives to animal testing require sophisticated in vitro model systems that replicate in vivo function for reliable testing applications. Advances in biomaterials have enabled the development of three-dimensional (3D) cell-encapsulated hydrogels as in vitro drug screening tissue model systems. In this study, we have developed an in vitro platform that enables high-density 3D culture of liver cells combined with a monolayer of a target breast cancer cell line (MCF-7) in a static environment, as a representative example of screening drug compounds for hepatotoxicity and drug efficacy. Alginate hydrogels encapsulating serial densities of HepG2 cells (10^5-10^8 cells/ml) are supported by a porous polycarbonate disc platform and co-cultured with MCF-7 cells within standard cell culture plates over a 3 day study period. The clearance rates of drug transformation by HepG2 cells are measured using a coumarin-based pro-drug. The platform was used to determine HepG2 cytotoxicity 50% (CT50) values for commercially available drugs, which correlated well with published in vivo LD50 values. The developed test platform allowed us to evaluate drug dose concentrations to predict hepatotoxicity and effects on the target cells. The in vitro 3D co-culture platform provides a scalable and flexible approach to test multiple cell types in a hybrid setting within standard cell culture plates, which may open up novel 3D in vitro culture techniques for screening new chemical entity compounds. Highlights: a porous support disc design to support the culture of desired cells in 3D hydrogels; demonstration of the co-culture of two cell types within standard cell-culture plates; a scalable, low-cost approach to toxicity screening involving multiple cell types.
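
    CT50 values of this sort are typically read off fitted dose-response curves; the sketch below fits a four-parameter logistic (Hill) curve to synthetic viability data with scipy.optimize.curve_fit. The numbers are invented for illustration and are not the study's measurements.

      import numpy as np
      from scipy.optimize import curve_fit

      # Synthetic viability fractions versus dose (arbitrary units).
      dose = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
      viability = np.array([0.98, 0.95, 0.86, 0.62, 0.35, 0.12, 0.04])

      def hill(d, top, bottom, ct50, slope):
          """Four-parameter logistic dose-response curve."""
          return bottom + (top - bottom) / (1 + (d / ct50) ** slope)

      params, _ = curve_fit(hill, dose, viability, p0=[1.0, 0.0, 5.0, 1.0])
      top, bottom, ct50, slope = params
      print(f"CT50 ~ {ct50:.2f} (dose at the midpoint of the viability span)")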

  8. Model-Based Evaluation Of System Scalability: Bandwidth Analysis For Smartphone-Based Biosensing Applications

    DEFF Research Database (Denmark)

    Patou, François; Madsen, Jan; Dimaki, Maria

    2016-01-01

    Scalability is a design principle often valued for the engineering of complex systems. Scalability is the ability of a system to change the current value of one of its specification parameters. Although targeted frameworks are available for the evaluation of scalability for specific digital systems...... re-engineering of 5 independent system modules, from the replacement of a wireless Bluetooth interface, to the revision of the ADC sample-and-hold operation could help increase system bandwidth....

  9. Proba-V Mission Exploitation Platform

    Science.gov (United States)

    Goor, Erwin; Dries, Jeroen

    2017-04-01

VITO and partners developed the Proba-V Mission Exploitation Platform (MEP) as an end-to-end solution to drastically improve the exploitation of the Proba-V (a Copernicus contributing mission) EO-data archive (http://proba-v.vgt.vito.be/), the past mission SPOT-VEGETATION and derived vegetation parameters by researchers, service providers and end-users. The analysis of time series of data (+1PB) is addressed, as well as the large scale on-demand processing of near real-time data on a powerful and scalable processing environment. Furthermore, data from the Copernicus Global Land Service is in scope of the platform. From November 2015 an operational Proba-V MEP environment, as an ESA operation service, was gradually deployed at the VITO data center with direct access to the complete data archive. Since autumn 2016 the platform has been operational and several applications have already been released to users, e.g. - A time series viewer, showing the evolution of Proba-V bands and derived vegetation parameters from the Copernicus Global Land Service for any area of interest. - Full-resolution viewing services for the complete data archive. - On-demand processing chains on a powerful Hadoop/Spark backend, e.g. for the calculation of N-daily composites. - Virtual Machines can be provided with access to the data archive and tools to work with this data, e.g. various toolboxes (GDAL, QGIS, GrassGIS, SNAP toolbox, …) and support for R and Python. This allows users to immediately work with the data without having to install tools or download data, but also to design, debug and test applications on the platform. - A prototype of Jupyter Notebooks is available with some examples worked out to show the potential of the data. Today the platform is used by several third party projects to perform R&D activities on the data, and to develop/host data analysis toolboxes. In parallel the platform is further improved and extended. From the MEP Proba-V, access to Sentinel-2 and Landsat data will

  10. Scalability of Sustainable Business Models in Hybrid Organizations

    Directory of Open Access Journals (Sweden)

    Adam Jabłoński

    2016-02-01

Full Text Available The dynamics of change in modern business create new mechanisms of company management that determine the pursuit and achievement of high performance. Performance maintained over a long period of time becomes a source of business continuity for companies. An ontological construct enabling the adoption of such assumptions is a business model that has the ability to generate results in every possible market situation and, moreover, has the features of permanent adaptability. The feature that describes the adaptability of the business model is its scalability. As a property that allows a system to handle more work, and to handle it more efficiently, as the number of its components increases, scalability can be applied to the concept of business models as the company's ability to maintain similar or higher performance through it. Ensuring the company's performance in the long term helps to build the so-called sustainable business model, which often balances the objectives of stakeholders and shareholders and is created by the implemented principles of value-based management and corporate social responsibility. This perception of business paves the way for building hybrid organizations that integrate business activities with pro-social ones. The combination of an approach typical of hybrid organizations in designing and implementing sustainable business models pursuant to the scalability criterion seems interesting from the cognitive point of view. Today, hybrid organizations are great spaces for building effective and efficient mechanisms for dialogue between business and society. This requires an appropriate business model. The purpose of the paper is to present the conceptualization and operationalization of the scalability of sustainable business models that determine the performance of a hybrid organization in the network environment. The paper presents the original concept of applying scalability in sustainable business models with detailed

  11. Myria: Scalable Analytics as a Service

    Science.gov (United States)

    Howe, B.; Halperin, D.; Whitaker, A.

    2014-12-01

At the UW eScience Institute, we're working to empower non-experts, especially in the sciences, to write and use data-parallel algorithms. To this end, we are building Myria, a web-based platform for scalable analytics and data-parallel programming. Myria's internal model of computation is the relational algebra extended with iteration, such that every program is inherently data-parallel, just as every query in a database is inherently data-parallel. But unlike databases, iteration is a first-class concept, allowing us to express machine learning tasks, graph traversal tasks, and more. Programs can be expressed in a number of languages and can be executed on a number of execution environments, but we emphasize a particular language called MyriaL that supports both imperative and declarative styles and a particular execution engine called MyriaX that uses an in-memory column-oriented representation and asynchronous iteration. We deliver Myria over the web as a service, providing an editor, performance analysis tools, and catalog browsing features in a single environment. We find that this web-based "delivery vector" is critical in reaching non-experts: they are insulated from the irrelevant technical work associated with installation, configuration, and resource management. The MyriaX backend, one of several execution runtimes we support, is a main-memory, column-oriented, RDBMS-on-the-worker system that supports cyclic data flows as a first-class citizen and has been shown to outperform competitive systems on 100-machine cluster sizes. I will describe the Myria system, give a demo, and present some new results in large-scale oceanographic microbiology.
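
    Iteration as a first-class concept over the relational algebra can be illustrated with a semi-naive transitive-closure fixpoint. The plain-Python sketch below is an editorial stand-in for the computation model, not MyriaL syntax.

        # Semi-naive evaluation: only newly derived tuples (delta) are joined
        # with the base relation on each iteration, until a fixpoint is reached.
        edges = {(1, 2), (2, 3), (3, 4)}
        closure, delta = set(edges), set(edges)
        while delta:
            delta = {(a, d) for (a, b) in delta
                            for (c, d) in edges if b == c} - closure
            closure |= delta
        print(sorted(closure))   # derived pairs such as (1, 4) appear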

  12. SINDBAD: a realistic multi-purpose and scalable X-ray simulation tool for NDT applications

    International Nuclear Information System (INIS)

    Tabary, J.; Hugonnard, P.; Mathy, F.

    2007-01-01

The X-ray radiographic simulation software SINDBAD has been developed to help the design stage of radiographic systems and to evaluate the efficiency of image processing techniques, in both the medical imaging and Non-Destructive Evaluation (NDE) industrial fields. This software can model any radiographic set-up, including the X-ray source, the beam interaction inside the object represented by its Computer Aided Design (CAD) model, and the imaging process in the detector. For each step of the virtual experimental bench, SINDBAD combines different modelling modules, accessed via Graphical User Interfaces (GUI), to provide realistic synthetic images. In this paper, we present an overview of all the functionalities available in SINDBAD, with a complete description of all the physics taken into account in the models, as well as the CAD and GUI facilities available on many computing platforms. We underline the different modules usable for different applications, which make SINDBAD a multi-purpose and scalable X-ray simulation tool. (authors)
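
    The core of any radiographic simulator of this kind is Beer-Lambert attenuation of rays through the object. The toy Python sketch below illustrates that principle only; the geometry, voxel size and attenuation coefficient are illustrative assumptions, not SINDBAD's models.

        import numpy as np

        mu = np.zeros((64, 64))          # attenuation coefficient map (1/cm)
        mu[20:40, 20:40] = 0.5           # a square object in vacuum
        dx = 0.1                         # voxel size (cm)
        I0 = 1.0                         # incident beam intensity
        # horizontal ray through row 30: line integral of mu along the path
        path = mu[30, :].sum() * dx
        I = I0 * np.exp(-path)           # Beer-Lambert law
        print(f"transmitted intensity: {I:.3f}")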

  13. Fast & scalable pattern transfer via block copolymer nanolithography

    DEFF Research Database (Denmark)

    Li, Tao; Wang, Zhongli; Schulte, Lars

    2015-01-01

A fully scalable and efficient pattern transfer process based on block copolymer (BCP) self-assembly directly on various substrates is demonstrated. PS-rich and PDMS-rich poly(styrene-b-dimethylsiloxane) (PS-b-PDMS) copolymers are used to give monolayer sphere morphology after spin-casting… on long range lateral order, including fabrication of substrates for catalysis, solar cells, sensors, ultrafiltration membranes and templating of semiconductors or metals.

  14. Scalable Motion Estimation Processor Core for Multimedia System-on-Chip Applications

    Science.gov (United States)

    Lai, Yeong-Kang; Hsieh, Tian-En; Chen, Lien-Fei

    2007-04-01

    In this paper, we describe a high-throughput and scalable motion estimation processor architecture for multimedia system-on-chip applications. The number of processing elements (PEs) is scalable according to the variable algorithm parameters and the performance required for different applications. Using the PE rings efficiently and an intelligent memory-interleaving organization, the efficiency of the architecture can be increased. Moreover, using efficient on-chip memories and a data management technique can effectively decrease the power consumption and memory bandwidth. Techniques for reducing the number of interconnections and external memory accesses are also presented. Our results demonstrate that the proposed scalable PE-ringed architecture is a flexible and high-performance processor core in multimedia system-on-chip applications.
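
    The computation that such a PE array parallelizes is block matching: finding, for each block, the displacement that minimizes the sum of absolute differences (SAD). A minimal full-search sketch in Python, with illustrative block size and search range:

        import numpy as np

        def best_motion_vector(ref, cur, by, bx, B=8, R=4):
            """Return the (dy, dx) minimizing SAD for the BxB block at (by, bx)."""
            block = cur[by:by + B, bx:bx + B].astype(int)
            best, best_sad = (0, 0), np.inf
            for dy in range(-R, R + 1):          # exhaustive (full) search
                for dx in range(-R, R + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + B > ref.shape[0] or x + B > ref.shape[1]:
                        continue
                    sad = np.abs(ref[y:y + B, x:x + B].astype(int) - block).sum()
                    if sad < best_sad:
                        best_sad, best = sad, (dy, dx)
            return best, best_sad

        ref = np.random.randint(0, 255, (32, 32))
        cur = np.roll(ref, (1, 2), axis=(0, 1))    # shift the frame by (1, 2)
        print(best_motion_vector(ref, cur, 8, 8))  # -> ((-1, -2), 0)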

  15. Accounting Fundamentals and the Variation of Stock Price: Factoring in the Investment Scalability

    Directory of Open Access Journals (Sweden)

    Sumiyana Sumiyana

    2010-05-01

    Full Text Available This study develops a new return model with respect to accounting fundamentals. The new return model is based on Chen and Zhang (2007. This study takes into account theinvestment scalability information. Specifically, this study splitsthe scale of firm’s operations into short-run and long-runinvestment scalabilities. We document that five accounting fun-damentals explain the variation of annual stock return. Thefactors, comprised book value, earnings yield, short-run andlong-run investment scalabilities, and growth opportunities, co associate positively with stock price. The remaining factor,which is the pure interest rate, is negatively related to annualstock return. This study finds that inducing short-run and long-run investment scalabilities into the model could improve the degree of association. In other words, they have value rel-evance. Finally, this study suggests that basic trading strategieswill improve if investors revert to the accounting fundamentals. Keywords: accounting fundamentals; book value; earnings yield; growth opportuni­ties; short­run and long­run investment scalabilities; trading strategy;value relevance

  16. Numerical simulations of electrohydrodynamic evolution of thin polymer films

    Science.gov (United States)

    Borglum, Joshua Christopher

Recently developed needleless electrospinning and electrolithography are two successful techniques that have been utilized extensively for low-cost, scalable, and continuous nano-fabrication. A rational understanding of the electrohydrodynamic principles underlying these nano-manufacturing methods is crucial to the fabrication of continuous nanofibers and patterned thin films. This research project formulates robust, high-efficiency finite-difference Fourier spectral methods to simulate the electrohydrodynamic evolution of thin polymer films. Two thin-film models were considered and refined. The first was based on reduced lubrication theory; the second further took into account the effect of solvent drying and dewetting of the substrate. A Fast Fourier Transform (FFT)-based spectral method was integrated into the finite-difference algorithms for fast, accurate solution of the governing nonlinear partial differential equations. The present methods have been used to examine the dependencies of the evolving surface features of the thin films upon the model parameters. The present study can be used for fast, controllable nanofabrication.
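
    A minimal example of such an FFT-based pseudo-spectral scheme is given below for a simplified capillarity-only lubrication equation, h_t = -d/dx(h^3 h_xxx), with a semi-implicit split for the stiff fourth-order term. All parameters are illustrative assumptions; the thesis models additionally include electric stress, drying and dewetting terms.

        import numpy as np

        N, L = 256, 2 * np.pi
        x = np.linspace(0, L, N, endpoint=False)
        k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # spectral wavenumbers
        h = 1.0 + 0.01 * np.cos(x)                   # slightly perturbed flat film

        def dnx(f, n):                               # n-th spectral derivative
            return np.real(np.fft.ifft((1j * k) ** n * np.fft.fft(f)))

        dt, M = 1e-4, 1.1    # time step; M >= max(h^3) stabilizes the split
        for _ in range(1000):
            # explicit part: full nonlinear flux minus the linearized stiff term
            nonlin = M * dnx(h, 4) - dnx(h ** 3 * dnx(h, 3), 1)
            # implicit part: -M*h_xxxx treated exactly in Fourier space
            h = np.real(np.fft.ifft(
                (np.fft.fft(h) + dt * np.fft.fft(nonlin)) / (1 + dt * M * k ** 4)))

        print("perturbation amplitude:", np.abs(h - 1).max())  # decays: stable film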

  17. Scalable and balanced dynamic hybrid data assimilation

    Science.gov (United States)

    Kauranne, Tuomo; Amour, Idrissa; Gunia, Martin; Kallio, Kari; Lepistö, Ahti; Koponen, Sampsa

    2017-04-01

    Scalability of complex weather forecasting suites is dependent on the technical tools available for implementing highly parallel computational kernels, but to an equally large extent also on the dependence patterns between various components of the suite, such as observation processing, data assimilation and the forecast model. Scalability is a particular challenge for 4D variational assimilation methods that necessarily couple the forecast model into the assimilation process and subject this combination to an inherently serial quasi-Newton minimization process. Ensemble based assimilation methods are naturally more parallel, but large models force ensemble sizes to be small and that results in poor assimilation accuracy, somewhat akin to shooting with a shotgun in a million-dimensional space. The Variational Ensemble Kalman Filter (VEnKF) is an ensemble method that can attain the accuracy of 4D variational data assimilation with a small ensemble size. It achieves this by processing a Gaussian approximation of the current error covariance distribution, instead of a set of ensemble members, analogously to the Extended Kalman Filter EKF. Ensemble members are re-sampled every time a new set of observations is processed from a new approximation of that Gaussian distribution which makes VEnKF a dynamic assimilation method. After this a smoothing step is applied that turns VEnKF into a dynamic Variational Ensemble Kalman Smoother VEnKS. In this smoothing step, the same process is iterated with frequent re-sampling of the ensemble but now using past iterations as surrogate observations until the end result is a smooth and balanced model trajectory. In principle, VEnKF could suffer from similar scalability issues as 4D-Var. However, this can be avoided by isolating the forecast model completely from the minimization process by implementing the latter as a wrapper code whose only link to the model is calling for many parallel and totally independent model runs, all of them
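
    The re-sampling idea described above can be sketched in a few lines: carry a Gaussian approximation N(m, P) of the state distribution and draw a fresh, small ensemble from it every time observations are processed. The toy linear model, dimensions and noise levels below are editorial assumptions, not the paper's setup.

        import numpy as np

        rng = np.random.default_rng(0)
        n, n_ens = 4, 8                       # state dimension, ensemble size
        M = np.eye(n) + 0.05 * rng.standard_normal((n, n))  # toy forecast model
        H = np.eye(n)[:2]                     # observe the first two components
        R = 0.1 * np.eye(2)                   # observation error covariance

        m, P = np.zeros(n), np.eye(n)
        for step in range(10):
            ens = rng.multivariate_normal(m, P, size=n_ens)  # re-sample ensemble
            ens = ens @ M.T                   # independent, parallelizable model runs
            m_f = ens.mean(axis=0)
            P_f = np.cov(ens, rowvar=False) + 0.01 * np.eye(n)  # stats + inflation
            y = H @ m_f + rng.multivariate_normal(np.zeros(2), R)  # synthetic obs
            K = P_f @ H.T @ np.linalg.inv(H @ P_f @ H.T + R)  # Kalman gain
            m = m_f + K @ (y - H @ m_f)       # analysis mean
            P = (np.eye(n) - K @ H) @ P_f     # analysis covariance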

  18. High-performance, scalable optical network-on-chip architectures

    Science.gov (United States)

    Tan, Xianfang

The rapid advance of technology enables a large number of processing cores to be integrated into a single chip, called a Chip Multiprocessor (CMP) or a Multiprocessor System-on-Chip (MPSoC) design. The on-chip interconnection network, which is the communication infrastructure for these processing cores, plays a central role in a many-core system. With the continuously increasing complexity of many-core systems, traditional metallic wired electronic networks-on-chip (NoC) have become a bottleneck because of the unbearable latency in data transmission and extremely high energy consumption on chip. Optical networks-on-chip (ONoC) have been proposed as a promising alternative paradigm to electronic NoC, with the benefits of optical signaling such as extremely high bandwidth, negligible latency, and low power consumption. This dissertation focuses on the design of high-performance and scalable ONoC architectures, and the contributions are highlighted as follows: 1. A micro-ring resonator (MRR)-based Generic Wavelength-routed Optical Router (GWOR) is proposed. A method for developing a GWOR of any size is introduced. GWOR is a scalable non-blocking ONoC architecture with a simple structure, low cost and high power efficiency compared to existing ONoC designs. 2. To expand the bandwidth and improve the fault tolerance of the GWOR, a redundant GWOR architecture is designed by cascading different types of GWORs into one network. 3. A redundant GWOR built with MRR-based comb switches is proposed. Comb switches can expand the bandwidth while keeping the topology of the GWOR unchanged by replacing the general MRRs with comb switches. 4. A butterfly fat tree (BFT)-based hybrid optoelectronic NoC (HONoC) architecture is developed in which GWORs are used for global communication and electronic routers are used for local communication. The proposed HONoC uses fewer electronic routers and links than its counterpart of electronic BFT-based NoC. It takes the advantages of

  19. Developing a New Wireless Sensor Network Platform and Its Application in Precision Agriculture

    Science.gov (United States)

    Aquino-Santos, Raúl; González-Potes, Apolinar; Edwards-Block, Arthur; Virgen-Ortiz, Raúl Alejandro

    2011-01-01

    Wireless sensor networks are gaining greater attention from the research community and industrial professionals because these small pieces of “smart dust” offer great advantages due to their small size, low power consumption, easy integration and support for “green” applications. Green applications are considered a hot topic in intelligent environments, ubiquitous and pervasive computing. This work evaluates a new wireless sensor network platform and its application in precision agriculture, including its embedded operating system and its routing algorithm. To validate the technological platform and the embedded operating system, two different routing strategies were compared: hierarchical and flat. Both of these routing algorithms were tested in a small-scale network applied to a watermelon field. However, we strongly believe that this technological platform can be also applied to precision agriculture because it incorporates a modified version of LORA-CBF, a wireless location-based routing algorithm that uses cluster-based flooding. Cluster-based flooding addresses the scalability concerns of wireless sensor networks, while the modified LORA-CBF routing algorithm includes a metric to monitor residual battery energy. Furthermore, results show that the modified version of LORA-CBF functions well with both the flat and hierarchical algorithms, although it functions better with the flat algorithm in a small-scale agricultural network. PMID:22346622

  20. Development of a Web-Based Visualization Platform for Climate Research Using Google Earth

    Science.gov (United States)

    Sun, Xiaojuan; Shen, Suhung; Leptoukh, Gregory G.; Wang, Panxing; Di, Liping; Lu, Mingyue

    2011-01-01

Recently, it has become easier to access climate data from satellites, ground measurements, and models from various data centers. However, searching, accessing, and processing heterogeneous data from different sources are very time-consuming tasks. There is a lack of a comprehensive visual platform to acquire distributed and heterogeneous scientific data and to render processed images from a single access point for climate studies. This paper documents the design and implementation of a Web-based visual, interoperable, and scalable platform that is able to access climatological fields from models, satellites, and ground stations from a number of data sources using Google Earth (GE) as a common graphical interface. The development is based on the TCP/IP protocol and various data sharing open sources, such as OPeNDAP, GDS, Web Processing Service (WPS), and Web Mapping Service (WMS). The visualization capability of integrating various measurements into GE extends dramatically the awareness and visibility of scientific results. Using embedded geographic information in GE, the designed system improves our understanding of the relationships of different elements in a four-dimensional domain. The system enables easy and convenient synergistic research on a virtual platform for professionals and the general public, greatly advancing global data sharing and scientific research collaboration.

  1. Developing a new wireless sensor network platform and its application in precision agriculture.

    Science.gov (United States)

    Aquino-Santos, Raúl; González-Potes, Apolinar; Edwards-Block, Arthur; Virgen-Ortiz, Raúl Alejandro

    2011-01-01

    Wireless sensor networks are gaining greater attention from the research community and industrial professionals because these small pieces of "smart dust" offer great advantages due to their small size, low power consumption, easy integration and support for "green" applications. Green applications are considered a hot topic in intelligent environments, ubiquitous and pervasive computing. This work evaluates a new wireless sensor network platform and its application in precision agriculture, including its embedded operating system and its routing algorithm. To validate the technological platform and the embedded operating system, two different routing strategies were compared: hierarchical and flat. Both of these routing algorithms were tested in a small-scale network applied to a watermelon field. However, we strongly believe that this technological platform can be also applied to precision agriculture because it incorporates a modified version of LORA-CBF, a wireless location-based routing algorithm that uses cluster-based flooding. Cluster-based flooding addresses the scalability concerns of wireless sensor networks, while the modified LORA-CBF routing algorithm includes a metric to monitor residual battery energy. Furthermore, results show that the modified version of LORA-CBF functions well with both the flat and hierarchical algorithms, although it functions better with the flat algorithm in a small-scale agricultural network.

  2. Memory Device and Nanofabrication Techniques Using Electrically Configurable Materials

    Science.gov (United States)

    Ascenso Simões, Bruno

The development of novel nanofabrication techniques and single-walled carbon nanotube field-configurable transistor (SWCNT-FCT) memory devices using electrically configurable materials is presented. A novel lithographic technique, electric lithography (EL), that uses an electric field for pattern generation has been demonstrated. It can be used for patterning of biomolecules on a polymer surface as well as for patterning of resist. Using an electrical resist composed of a polymer bearing Boc-protected amine groups and an iodonium salt, the Boc groups on the polymer surface were converted to free amines by applying an electric field. On the modified polymer surface, a streptavidin pattern was fabricated at sub-micron scale. Patterning of a polymer resin composed of epoxy monomers and a diaryliodonium salt by EL has also been demonstrated. The reaction mechanism of electric resist configuration is believed to be acid generation via electrochemical reduction in the resist. We show a novel field-configurable transistor (FCT) based on single-walled carbon nanotube network field-effect transistors in which poly(ethylene glycol) crosslinked by electron beam is incorporated into the gate. The device conductance can be configured to arbitrary states reversibly and repeatedly by applying external gate voltages. Raman spectroscopy revealed that the ratio of D- to G-band intensity in the SWCNTs of the FCT progressively increases as the device is configured to lower conductance states. Electron transport studies at low temperatures showed a strong temperature dependence of the resistance. Band gap widening of CNTs up to ~4 eV has been observed by examining the differential conductance-gate voltage-bias voltage relationship. The switching mechanism of the FCT is attributed to a structural transformation of CNTs via reversible hydrogenation and dehydrogenation induced by gate voltages, which tunes the CNT bandgap continuously and reversibly to non-volatile analog values.

  3. On the scalability of uncoordinated multiple access for the Internet of Things

    KAUST Repository

    Chisci, Giovanni

    2017-11-16

The Internet of things (IoT) will entail a massive number of wireless connections with sporadic traffic patterns. To support IoT traffic, several technologies are evolving to provide low power wide area (LPWA) wireless communications. However, LPWA networks rely on variations of uncoordinated spectrum access, either for data transmissions or scheduling requests, thus imposing a scalability problem on the IoT. This paper presents a novel spatiotemporal model to study the scalability of the ALOHA medium access. In particular, the developed mathematical model relies on stochastic geometry and queueing theory to account for the spatial and temporal attributes of the IoT. To this end, the scalability of ALOHA is characterized by the percentile of IoT devices that can be served while keeping their queues stable. The results highlight the scalability problem of ALOHA and quantify the extent to which ALOHA can scale in terms of number of devices, traffic requirements, and transmission rate.
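
    The stability question can be made concrete with a toy slotted-ALOHA simulation: as device density grows at a fixed per-device arrival rate, the fraction of devices whose queues stay bounded collapses. The collision model, rates and stability threshold below are editorial simplifications, not the paper's stochastic-geometry model.

        import numpy as np

        rng = np.random.default_rng(1)
        for n_dev in (100, 300, 900):                  # growing device density
            q = np.zeros(n_dev, dtype=int)             # per-device queue lengths
            for _ in range(20000):
                q += rng.random(n_dev) < 0.001         # Bernoulli packet arrivals
                tx = (q > 0) & (rng.random(n_dev) < 0.05)  # backlogged attempts
                if tx.sum() == 1:                      # collision channel: success
                    q[np.argmax(tx)] -= 1              # only if exactly one sender
            frac = (q <= 5).mean()                     # crude stability proxy
            print(f"{n_dev} devices: stable-queue fraction ~ {frac:.2f}")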

  4. THE DESIGN OF A HIGH PERFORMANCE EARTH IMAGERY AND RASTER DATA MANAGEMENT AND PROCESSING PLATFORM

    Directory of Open Access Journals (Sweden)

    Q. Xie

    2016-06-01

Full Text Available This paper summarizes the general requirements and specific characteristics of both geospatial raster database management system and raster data processing platform from a domain-specific perspective as well as from a computing point of view. It also discusses the need of tight integration between the database system and the processing system. These requirements resulted in Oracle Spatial GeoRaster, a global scale and high performance earth imagery and raster data management and processing platform. The rationale, design, implementation, and benefits of Oracle Spatial GeoRaster are described. Basically, as a database management system, GeoRaster defines an integrated raster data model, supports image compression, data manipulation, general and spatial indices, content and context based queries and updates, versioning, concurrency, security, replication, standby, backup and recovery, multitenancy, and ETL. It provides high scalability using computer and storage clustering. As a raster data processing platform, GeoRaster provides basic operations, image processing, raster analytics, and data distribution featuring high performance computing (HPC). Specifically, HPC features include locality computing, concurrent processing, parallel processing, and in-memory computing. In addition, the APIs and the plug-in architecture are discussed.

  5. The Design of a High Performance Earth Imagery and Raster Data Management and Processing Platform

    Science.gov (United States)

    Xie, Qingyun

    2016-06-01

    This paper summarizes the general requirements and specific characteristics of both geospatial raster database management system and raster data processing platform from a domain-specific perspective as well as from a computing point of view. It also discusses the need of tight integration between the database system and the processing system. These requirements resulted in Oracle Spatial GeoRaster, a global scale and high performance earth imagery and raster data management and processing platform. The rationale, design, implementation, and benefits of Oracle Spatial GeoRaster are described. Basically, as a database management system, GeoRaster defines an integrated raster data model, supports image compression, data manipulation, general and spatial indices, content and context based queries and updates, versioning, concurrency, security, replication, standby, backup and recovery, multitenancy, and ETL. It provides high scalability using computer and storage clustering. As a raster data processing platform, GeoRaster provides basic operations, image processing, raster analytics, and data distribution featuring high performance computing (HPC). Specifically, HPC features include locality computing, concurrent processing, parallel processing, and in-memory computing. In addition, the APIs and the plug-in architecture are discussed.

  6. BIGSdb: Scalable analysis of bacterial genome variation at the population level

    Directory of Open Access Journals (Sweden)

    Maiden Martin CJ

    2010-12-01

Full Text Available Abstract Background The opportunities for bacterial population genomics that are being realised by the application of parallel nucleotide sequencing require novel bioinformatics platforms. These must be capable of the storage, retrieval, and analysis of linked phenotypic and genotypic information in an accessible, scalable and computationally efficient manner. Results The Bacterial Isolate Genome Sequence Database (BIGSDB) is a scalable, open source, web-accessible database system that meets these needs, enabling phenotype and sequence data, which can range from a single sequence read to whole genome data, to be efficiently linked for a limitless number of bacterial specimens. The system builds on the widely used mlstdbNet software, developed for the storage and distribution of multilocus sequence typing (MLST) data, and incorporates the capacity to define and identify any number of loci and genetic variants at those loci within the stored nucleotide sequences. These loci can be further organised into 'schemes' for isolate characterisation or for evolutionary or functional analyses. Isolates and loci can be indexed by multiple names and any number of alternative schemes can be accommodated, enabling cross-referencing of different studies and approaches. LIMS functionality of the software enables linkage to and organisation of laboratory samples. The data are easily linked to external databases and fine-grained authentication of access permits multiple users to participate in community annotation by setting up or contributing to different schemes within the database. Some of the applications of BIGSDB are illustrated with the genera Neisseria and Streptococcus. The BIGSDB source code and documentation are available at http://pubmlst.org/software/database/bigsdb/. Conclusions Genomic data can be used to characterise bacterial isolates in many different ways but it can also be efficiently exploited for evolutionary or functional studies. BIGSDB

  7. Superlinearly scalable noise robustness of redundant coupled dynamical systems.

    Science.gov (United States)

    Kohar, Vivek; Kia, Behnam; Lindner, John F; Ditto, William L

    2016-03-01

    We illustrate through theory and numerical simulations that redundant coupled dynamical systems can be extremely robust against local noise in comparison to uncoupled dynamical systems evolving in the same noisy environment. Previous studies have shown that the noise robustness of redundant coupled dynamical systems is linearly scalable and deviations due to noise can be minimized by increasing the number of coupled units. Here, we demonstrate that the noise robustness can actually be scaled superlinearly if some conditions are met and very high noise robustness can be realized with very few coupled units. We discuss these conditions and show that this superlinear scalability depends on the nonlinearity of the individual dynamical units. The phenomenon is demonstrated in discrete as well as continuous dynamical systems. This superlinear scalability not only provides us an opportunity to exploit the nonlinearity of physical systems without being bogged down by noise but may also help us in understanding the functional role of coupled redundancy found in many biological systems. Moreover, engineers can exploit superlinear noise suppression by starting a coupled system near (not necessarily at) the appropriate initial condition.
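
    The baseline averaging effect behind this result is easy to reproduce: strongly coupled noisy units track the noise-free trajectory with an error that shrinks as the number of units grows. The mean-field-coupled logistic maps below (in a periodic, non-chaotic regime) are an editorial illustration of that linear baseline; the superlinear regime depends on the conditions analysed in the paper.

        import numpy as np

        rng = np.random.default_rng(2)
        r, sigma, steps = 3.2, 1e-3, 500      # periodic (2-cycle) logistic regime

        def f(x):
            return r * x * (1 - x)

        for n in (1, 4, 16, 64):
            x, ref, dev = np.full(n, 0.3), 0.3, 0.0
            for _ in range(steps):
                x = f(x) + sigma * rng.standard_normal(n)  # local noise per unit
                x = np.full(n, x.mean())       # strong (mean-field) coupling
                ref = f(ref)                   # noise-free reference trajectory
                dev += abs(x[0] - ref)
            print(f"{n:3d} units: mean deviation = {dev / steps:.2e}")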

  8. Continuity-Aware Scheduling Algorithm for Scalable Video Streaming

    Directory of Open Access Journals (Sweden)

    Atinat Palawan

    2016-05-01

Full Text Available The consumer demand for retrieving and delivering visual content through consumer electronic devices has increased rapidly in recent years. The quality of video in packet networks is susceptible to certain traffic characteristics: average bandwidth availability, loss, delay and delay variation (jitter). This paper presents a scheduling algorithm that modifies the stream of scalable video to combat jitter. The algorithm provides unequal look-ahead by safeguarding the base layer (without the need for overhead) of the scalable video. The results of the experiments show that our scheduling algorithm reduces the number of frames with a violated deadline and significantly improves the continuity of the video stream without compromising the average Y Peak Signal-to-Noise Ratio (PSNR).

  9. The breaking point of modern processor and platform technology

    CERN Document Server

    Nowak, A; Lazzaro, A; Leduc, J

    2011-01-01

    This work is an overview of state of the art processors used in High Energy Physics, their architecture and an extensive outline of the forthcoming technologies. Silicon process science and hardware design are making constant and rapid progress, and a solid grasp of these developments is imperative to the understanding of their possible future applications, which might include software strategy, optimizations, computing center operations and hardware acquisitions. In particular, the current issue of software and platform scalability is becoming more and more noticeable, and will develop in the near future with the growing core count of single chips and the approach of certain x86 architectural limits. Other topics brought forward include the hard, physical limits of innovation, the applicability of tried and tested computing formulas to modern technologies, as well as an analysis of viable alternate choices for continued development.

  10. Scalable Partitioning Algorithms for FPGAs With Heterogeneous Resources

    National Research Council Canada - National Science Library

    Selvakkumaran, Navaratnasothie; Ranjan, Abhishek; Raje, Salil; Karypis, George

    2004-01-01

    As FPGA densities increase, partitioning-based FPGA placement approaches are becoming increasingly important as they can be used to provide high-quality and computationally scalable placement solutions...

  11. Proba-V Mission Exploitation Platform

    Science.gov (United States)

    Goor, E.

    2017-12-01

VITO and partners developed the Proba-V Mission Exploitation Platform (MEP) as an end-to-end solution to drastically improve the exploitation of the Proba-V (an EC Copernicus contributing mission) EO-data archive, the past mission SPOT-VEGETATION and derived vegetation parameters by researchers, service providers (e.g. the EC Copernicus Global Land Service) and end-users. The analysis of time series of data (PB range) is addressed, as well as the large scale on-demand processing of near real-time data on a powerful and scalable processing environment. New features are still being developed, but the platform has been fully operational since November 2016 and offers: a time series viewer (browser web client and API), showing the evolution of Proba-V bands and derived vegetation parameters for any country, region, pixel or polygon defined by the user; full-resolution viewing services for the complete data archive; on-demand processing chains on a powerful Hadoop/Spark backend; and Virtual Machines that can be requested by users with access to the complete data archive mentioned above and pre-configured tools to work with this data, e.g. various toolboxes and support for R and Python. This allows users to immediately work with the data without having to install tools or download data, as well as to design, debug and test applications on the platform. Jupyter Notebooks are available with some example Python and R projects worked out to show the potential of the data. Today the platform is already used by several international third party projects to perform R&D activities on the data, and to develop/host data analysis toolboxes. From the Proba-V MEP, access to other data sources such as Sentinel-2 and Landsat data is also addressed. Selected components of the MEP are also deployed on public cloud infrastructures in various R&D projects. Users can make use of powerful Web based tools and can self-manage virtual machines to perform their work on the infrastructure at VITO with access to

  12. Scalable Video Coding with Interlayer Signal Decorrelation Techniques

    Directory of Open Access Journals (Sweden)

    Yang Wenxian

    2007-01-01

    Full Text Available Scalability is one of the essential requirements in the compression of visual data for present-day multimedia communications and storage. The basic building block for providing the spatial scalability in the scalable video coding (SVC standard is the well-known Laplacian pyramid (LP. An LP achieves the multiscale representation of the video as a base-layer signal at lower resolution together with several enhancement-layer signals at successive higher resolutions. In this paper, we propose to improve the coding performance of the enhancement layers through efficient interlayer decorrelation techniques. We first show that, with nonbiorthogonal upsampling and downsampling filters, the base layer and the enhancement layers are correlated. We investigate two structures to reduce this correlation. The first structure updates the base-layer signal by subtracting from it the low-frequency component of the enhancement layer signal. The second structure modifies the prediction in order that the low-frequency component in the new enhancement layer is diminished. The second structure is integrated in the JSVM 4.0 codec with suitable modifications in the prediction modes. Experimental results with some standard test sequences demonstrate coding gains up to 1 dB for I pictures and up to 0.7 dB for both I and P pictures.
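
    The first structure can be sketched in one dimension: form the Laplacian-pyramid layers, then fold the low-frequency part of the enhancement signal back into the base layer. The averaging filters below are editorial stand-ins for the SVC filters; note that both decompositions reconstruct the input exactly by construction.

        import numpy as np

        def down(s):   # 2:1 decimation after a [1,2,1]/4 low-pass
            return np.convolve(s, [0.25, 0.5, 0.25], mode="same")[::2]

        def up(s):     # zero-stuffing followed by linear interpolation
            y = np.zeros(2 * len(s))
            y[::2] = s
            return np.convolve(y, [0.5, 1.0, 0.5], mode="same")

        x = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.1 * np.random.randn(256)
        base = down(x)
        enh = x - up(base)                 # enhancement layer (LP residual)
        base_upd = base - down(enh)        # subtract its low-frequency part
        enh_upd = x - up(base_upd)         # new, decorrelated enhancement layer
        print(np.allclose(x, up(base_upd) + enh_upd))  # True: still reconstructs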

  13. Scalable Atomistic Simulation Algorithms for Materials Research

    Directory of Open Access Journals (Sweden)

    Aiichiro Nakano

    2002-01-01

Full Text Available A suite of scalable atomistic simulation programs has been developed for materials research based on space-time multiresolution algorithms. Design and analysis of parallel algorithms are presented for molecular dynamics (MD) simulations and quantum-mechanical (QM) calculations based on the density functional theory. Performance tests have been carried out on 1,088-processor Cray T3E and 1,280-processor IBM SP3 computers. The linear-scaling algorithms have enabled 6.44-billion-atom MD and 111,000-atom QM calculations on 1,024 SP3 processors with parallel efficiency well over 90%. Production-quality programs also feature wavelet-based computational-space decomposition for adaptive load balancing, spacefilling-curve-based adaptive data compression with user-defined error bound for scalable I/O, and octree-based fast visibility culling for immersive and interactive visualization of massive simulation data.
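
    At the heart of the MD programs described above is a symplectic integrator such as velocity Verlet. A minimal sketch, with a toy harmonic force standing in for the real interatomic potentials:

        import numpy as np

        def forces(x, kspring=1.0):
            return -kspring * x                  # toy harmonic restoring force

        def verlet_step(x, v, dt, m=1.0):
            """One velocity-Verlet step: half-kick, drift, half-kick."""
            v_half = v + 0.5 * dt * forces(x) / m
            x_new = x + dt * v_half
            v_new = v_half + 0.5 * dt * forces(x_new) / m
            return x_new, v_new

        x, v = np.array([1.0, -0.5]), np.zeros(2)
        for _ in range(100):
            x, v = verlet_step(x, v, dt=0.05)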

  14. Scalable DeNoise-and-Forward in Bidirectional Relay Networks

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Krigslund, Rasmus; Popovski, Petar

    2010-01-01

    In this paper a scalable relaying scheme is proposed based on an existing concept called DeNoise-and-Forward, DNF. We call it Scalable DNF, S-DNF, and it targets the scenario with multiple communication flows through a single common relay. The idea of the scheme is to combine packets at the relay...... in order to save transmissions. To ensure decodability at the end-nodes, a priori information about the content of the combined packets must be available. This is gathered during the initial transmissions to the relay. The trade-off between decodability and number of necessary transmissions is analysed...
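
    The packet-combining idea can be illustrated with the classic XOR example from two-way relaying: one broadcast of the combined packet replaces two separate transmissions, and each end-node decodes using its own packet as the a priori information. DNF applies a denoising map rather than a plain XOR, so the sketch below is only an editorial analogue of the principle.

        # Two-way relay: A and B each send a packet; the relay broadcasts
        # their XOR, and each node recovers the other's packet from its own.
        pkt_a = b"hello"                     # packet from node A
        pkt_b = b"world"                     # packet from node B
        combined = bytes(x ^ y for x, y in zip(pkt_a, pkt_b))  # relay broadcast
        recovered_b = bytes(x ^ y for x, y in zip(combined, pkt_a))  # at node A
        assert recovered_b == pkt_b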

  15. Weight loss efficacy of a novel mobile Diabetes Prevention Program delivery platform with human coaching

    Science.gov (United States)

    Michaelides, Andreas; Raby, Christine; Wood, Meghan; Farr, Kit

    2016-01-01

Objective To evaluate the weight loss efficacy of a novel mobile platform delivering the Diabetes Prevention Program. Research Design and Methods 43 overweight or obese adult participants with a diagnosis of prediabetes signed up to receive a 24-week virtual Diabetes Prevention Program with human coaching, delivered through a mobile platform. Weight loss and engagement were the main outcomes, evaluated by repeated measures analysis of variance, backward regression, and mediation regression. Results Weight loss at 16 and 24 weeks was significant, with 56% of starters and 64% of completers losing over 5% body weight. Mean weight loss at 24 weeks was 6.58% in starters and 7.5% in completers. Participants were highly engaged, with 84% of the sample completing 9 lessons or more. In-app actions related to self-monitoring significantly predicted weight loss. Conclusions Our findings support the effectiveness of a uniquely mobile prediabetes intervention, producing weight loss comparable to studies with high engagement, with potential for scalable population health management. PMID:27651911

16. An Application-Based Performance Evaluation of NASA's Nebula Cloud Computing Platform

    Science.gov (United States)

    Saini, Subhash; Heistand, Steve; Jin, Haoqiang; Chang, Johnny; Hood, Robert T.; Mehrotra, Piyush; Biswas, Rupak

    2012-01-01

The high performance computing (HPC) community has shown tremendous interest in exploring cloud computing because of its high potential. In this paper, we examine the feasibility, performance, and scalability of production quality scientific and engineering applications of interest to NASA on NASA's cloud computing platform, called Nebula, hosted at Ames Research Center. This work represents a comprehensive evaluation of Nebula using NUTTCP, HPCC, NPB, I/O, and MPI function benchmarks as well as four applications representative of the NASA HPC workload. Specifically, we compare Nebula performance on some of these benchmarks and applications to that of NASA's Pleiades supercomputer, a traditional HPC system. We also investigate the impact of virtIO and jumbo frames on interconnect performance. Overall results indicate that on Nebula (i) virtIO and jumbo frames improve network bandwidth by a factor of 5x, (ii) there is a significant virtualization layer overhead of about 10% to 25%, (iii) write performance is lower by a factor of 25x, (iv) latency for short MPI messages is very high, and (v) overall performance is 15% to 48% lower than that on Pleiades for NASA HPC applications. We also comment on the usability of the cloud platform.

  17. A robust and scalable TCR-based reporter cell assay to measure HIV-1 Nef-mediated T cell immune evasion.

    Science.gov (United States)

    Anmole, Gursev; Kuang, Xiaomei T; Toyoda, Mako; Martin, Eric; Shahid, Aniqa; Le, Anh Q; Markle, Tristan; Baraki, Bemuluyigza; Jones, R Brad; Ostrowski, Mario A; Ueno, Takamasa; Brumme, Zabrina L; Brockman, Mark A

    2015-11-01

    HIV-1 evades cytotoxic T cell responses through Nef-mediated downregulation of HLA class I molecules from the infected cell surface. Methods to quantify the impact of Nef on T cell recognition typically employ patient-derived T cell clones; however, these assays are limited by the cost and effort required to isolate and maintain primary cell lines. The variable activity of different T cell clones and the limited number of cells generated by re-stimulation can also hinder assay reproducibility and scalability. Here, we describe a heterologous T cell receptor reporter assay and use it to study immune evasion by Nef. Induction of NFAT-driven luciferase following co-culture with peptide-pulsed or virus-infected target cells serves as a rapid, quantitative and antigen-specific measure of T cell recognition of its cognate peptide/HLA complex. We demonstrate that Nef-mediated downregulation of HLA on target cells correlates inversely with T cell receptor-dependent luminescent signal generated by effector cells. This method provides a robust, flexible and scalable platform that is suitable for studies to measure Nef function in the context of different viral peptide/HLA antigens, to assess the function of patient-derived Nef alleles, or to screen small molecule libraries to identify novel Nef inhibitors. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. Constraint Solver Techniques for Implementing Precise and Scalable Static Program Analysis

    DEFF Research Database (Denmark)

    Zhang, Ye

Program analyses help developers to build reliable software systems more quickly and with fewer bugs or security defects. While designing and implementing a program analysis remains hard work, making it both scalable and precise is even more challenging. In this dissertation, we show that with a general inclusion constraint solver using unification we could make a program analysis easier to design and implement, much more scalable, and still as precise as expected. We present an inclusion constraint language with explicit equality constructs for specifying program analysis problems, and a parameterized framework… Using data flow analyses for the C language, we demonstrate that a large amount of equivalences could be detected by off-line analyses, which could then be used by a constraint solver to significantly improve the scalability of an analysis without sacrificing any precision.
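
    The equality (unification) constructs mentioned above are typically solved with a union-find structure: terms proven equal collapse into one equivalence class, so the inclusion solver only tracks one representative per class. A minimal editorial sketch:

        class UnionFind:
            """Union-find with path halving, for equality constraints."""
            def __init__(self):
                self.parent = {}

            def find(self, x):
                self.parent.setdefault(x, x)
                while self.parent[x] != x:
                    self.parent[x] = self.parent[self.parent[x]]  # path halving
                    x = self.parent[x]
                return x

            def union(self, a, b):
                ra, rb = self.find(a), self.find(b)
                if ra != rb:
                    self.parent[ra] = rb      # merge the two classes

        uf = UnionFind()
        uf.union("p", "q")
        uf.union("q", "r")
        assert uf.find("p") == uf.find("r")   # p, q, r share one representative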

  19. The Potential Role of Social Media Platforms in Community Awareness of Antibiotic Use in the Gulf Cooperation Council States: Luxury or Necessity?

    Science.gov (United States)

    Zowawi, Hosam Mamoon; Abedalthagafi, Malak; Mar, Florie A; Almalki, Turki; Kutbi, Abdullah H; Harris-Brown, Tiffany; Harbarth, Stephan; Balkhy, Hanan H; Paterson, David L; Hasanain, Rihab Abdalazez

    2015-10-15

    The increasing emergence and spread of antimicrobial resistance (AMR) is a serious public health issue. Increasing the awareness of the general public about appropriate antibiotic use is a key factor for combating this issue. Several public media campaigns worldwide have been launched; however, such campaigns can be costly and the outcomes are variable and difficult to assess. Social media platforms, including Twitter, Facebook, and YouTube, are now frequently utilized to address health-related issues. In many geographical locations, such as the countries of the Gulf Cooperation Council (GCC) States (Saudi Arabia, United Arab Emirates, Kuwait, Oman, Qatar, and Bahrain), these platforms are becoming increasingly popular. The socioeconomic status of the GCC states and their reliable communication and networking infrastructure has allowed the penetration and scalability of these platforms in the region. This might explain why the Saudi Ministry of Health is using social media platforms alongside various other media platforms in a large-scale public awareness campaign to educate at-risk communities about the recently emerged Middle East respiratory syndrome coronavirus (MERS-CoV). This paper discusses the potential for using social media tools as cost-efficient and mass education platforms to raise awareness of appropriate antibiotic use in the general public and in the medical communities of the Arabian Peninsula.

  20. Scalable, full-colour and controllable chromotropic plasmonic printing

    Science.gov (United States)

    Xue, Jiancai; Zhou, Zhang-Kai; Wei, Zhiqiang; Su, Rongbin; Lai, Juan; Li, Juntao; Li, Chao; Zhang, Tengwei; Wang, Xue-Hua

    2015-01-01

Plasmonic colour printing has drawn wide attention as a promising candidate for the next-generation colour-printing technology. However, an efficient approach to realize full colour and scalable fabrication is still lacking, which prevents plasmonic colour printing from reaching practical applications. Here we present a scalable and full-colour plasmonic printing approach by combining conjugate twin-phase modulation with a plasmonic broadband absorber. More importantly, our approach also demonstrates controllable chromotropic capability, that is, the ability to undergo reversible colour transformations. This chromotropic capability affords enormous potential in building functionalized prints for anticounterfeiting, special labels, and high-density data encryption storage. With such excellent performance in functional colour applications, this colour-printing approach could pave the way for plasmonic colour printing in real-world commercial utilization. PMID:26567803

  1. Scalable Domain Decomposed Monte Carlo Particle Transport

    Energy Technology Data Exchange (ETDEWEB)

    O' Brien, Matthew Joseph [Univ. of California, Davis, CA (United States)

    2013-12-05

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.

  2. Novel remote monitoring platform for RES-hydrogen based smart microgrid

    International Nuclear Information System (INIS)

    González, I.; Calderón, A.J.; Andújar, J.M.

    2017-01-01

Highlights: • A remote monitoring platform is developed to monitor an experimental smart microgrid. • Smart microgrid integrates renewable energy sources (solar and wind) and hydrogen. • The platform is implemented using the open-source tool Easy Java/Javascript Simulations. • The remote user accesses graphical/numerical information of all components online. • Results show proper operation of the SMG and prove effective real-time monitoring. - Abstract: In the context of the future power grids – Smart Grids (SGs) – Smart MicroGrids (SMGs) play a paramount role. These are very specific portions of the SGs that deal with the integration of small-rated distributed energy and storage resources closer to the loads – chiefly within the distribution domain. Data acquisition and monitoring are vital functions that must be performed at every stage of the grid for proper operation. This paper presents a remote monitoring platform (RMP) to monitor an experimental SMG. It integrates Renewable Energy Sources (RESs) (solar and wind) and hydrogen to operate in an isolated regime. The RMP has been developed using the open-source authoring tool Easy Java/Javascript Simulations (EJsS). The interface has been designed to be intuitive and easy to use, providing real-time information on all the involved magnitudes over the network. Scalability, easy development, portability and cost effectiveness are the main features of the proposed framework. The microgrid and the proposed monitoring platform are described and the successful results are reported. The remote user executes a ready-to-use file with low computational requirements and can graphically and numerically track the SMG behaviour. These results prove the suitability of the RMP as an effective means for continuous visualization of the coordinated energy flows of a real SMG.

  3. Using the eServices platform for detecting behavior patterns deviation in the elderly assisted living: a case study.

    Science.gov (United States)

    Marcelino, Isabel; Lopes, David; Reis, Michael; Silva, Fernando; Laza, Rosalía; Pereira, António

    2015-01-01

    World's aging population is rising and the elderly are increasingly isolated socially and geographically. As a consequence, in many situations, they need assistance that is not granted in time. In this paper, we present a solution that follows the CRISP-DM methodology to detect the elderly's behavior pattern deviations that may indicate possible risk situations. To obtain these patterns, many variables are aggregated to ensure the alert system reliability and minimize eventual false positive alert situations. These variables comprehend information provided by body area network (BAN), by environment sensors, and also by the elderly's interaction in a service provider platform, called eServices--Elderly Support Service Platform. eServices is a scalable platform aggregating a service ecosystem developed specially for elderly people. This pattern recognition will further activate the adequate response. With the system evolution, it will learn to predict potential danger situations for a specified user, acting preventively and ensuring the elderly's safety and well-being. As the eServices platform is still in development, synthetic data, based on real data sample and empiric knowledge, is being used to populate the initial dataset. The presented work is a proof of concept of knowledge extraction using the eServices platform information. Regardless of not using real data, this work proves to be an asset, achieving a good performance in preventing alert situations.

  4. Using the eServices Platform for Detecting Behavior Patterns Deviation in the Elderly Assisted Living: A Case Study

    Directory of Open Access Journals (Sweden)

    Isabel Marcelino

    2015-01-01

Full Text Available World's aging population is rising and the elderly are increasingly isolated socially and geographically. As a consequence, in many situations, they need assistance that is not granted in time. In this paper, we present a solution that follows the CRISP-DM methodology to detect the elderly's behavior pattern deviations that may indicate possible risk situations. To obtain these patterns, many variables are aggregated to ensure the alert system reliability and minimize eventual false positive alert situations. These variables comprehend information provided by body area network (BAN), by environment sensors, and also by the elderly's interaction in a service provider platform, called eServices—Elderly Support Service Platform. eServices is a scalable platform aggregating a service ecosystem developed specially for elderly people. This pattern recognition will further activate the adequate response. With the system evolution, it will learn to predict potential danger situations for a specified user, acting preventively and ensuring the elderly's safety and well-being. As the eServices platform is still in development, synthetic data, based on real data sample and empiric knowledge, is being used to populate the initial dataset. The presented work is a proof of concept of knowledge extraction using the eServices platform information. Regardless of not using real data, this work proves to be an asset, achieving a good performance in preventing alert situations.

  5. Using scalable vector graphics to evolve art

    NARCIS (Netherlands)

    den Heijer, E.; Eiben, A. E.

    2016-01-01

    In this paper, we describe our investigations of the use of scalable vector graphics as a genotype representation in evolutionary art. We describe the technical aspects of using SVG in evolutionary art, and explain our custom, SVG specific operators initialisation, mutation and crossover. We perform

  6. Paper Skin Multisensory Platform for Simultaneous Environmental Monitoring

    KAUST Repository

    Nassar, Joanna M.

    2016-02-19

    Human skin and hair can simultaneously feel pressure, temperature, humidity, strain, and flow—great inspirations for applications such as artificial skins for burn and acid victims, robotics, and vehicular technology. Previous efforts in this direction use sophisticated materials or processes. Chemically functionalized, inkjet printed or vacuum-technology-processed papers albeit cheap have shown limited functionalities. Thus, performance and/or functionalities per cost have been limited. Here, a scalable “garage” fabrication approach is shown using off-the-shelf inexpensive household elements such as aluminum foil, scotch tapes, sticky-notes, napkins, and sponges to build “paper skin” with simultaneous real-time sensing capability of pressure, temperature, humidity, proximity, pH, and flow. Enabling the basic principles of porosity, adsorption, and dimensions of these materials, a fully functioning distributed sensor network platform is reported, which, for the first time, can sense the vitals of its carrier (body temperature, blood pressure, heart rate, and skin hydration) and the surrounding environment.

  7. Poly(oligoethylene glycol methacrylate) dip-coating: turning cellulose paper into a protein-repellent platform for biosensors.

    Science.gov (United States)

    Deng, Xudong; Smeets, Niels M B; Sicard, Clémence; Wang, Jingyun; Brennan, John D; Filipe, Carlos D M; Hoare, Todd

    2014-09-17

    The passivation of nonspecific protein adsorption to paper is a major barrier to the use of paper as a platform for microfluidic bioassays. Herein we describe a simple, scalable protocol based on adsorption and cross-linking of poly(oligoethylene glycol methacrylate) (POEGMA) derivatives that reduces nonspecific adsorption of a range of proteins to filter paper by at least 1 order of magnitude without significantly changing the fiber morphology or paper macroporosity. A lateral-flow test strip coated with POEGMA facilitates effective protein transport while also confining the colorimetric reporting signal for easier detection, giving improved performance relative to bovine serum albumin (BSA)-blocked paper. Enzyme-linked immunosorbent assays based on POEGMA-coated paper also achieve lower blank values, higher sensitivities, and lower detection limits relative to ones based on paper blocked with BSA or skim milk. We anticipate that POEGMA-coated paper can function as a platform for the design of portable, disposable, and low-cost paper-based biosensors.

  8. Scalable Open Source Smart Grid Simulator (SGSim)

    DEFF Research Database (Denmark)

    Ebeid, Emad Samuel Malki; Jacobsen, Rune Hylsberg; Stefanni, Francesco

    2017-01-01

    This paper presents an open source smart grid simulator (SGSim). The simulator is based on the open source SystemC Network Simulation Library (SCNSL) and aims to model scalable smart grid applications. SGSim has been tested under different smart grid scenarios that contain hundreds of thousands of households...

  9. Scalable quantum memory in the ultrastrong coupling regime.

    Science.gov (United States)

    Kyaw, T H; Felicetti, S; Romero, G; Solano, E; Kwek, L-C

    2015-03-02

    Circuit quantum electrodynamics, consisting of superconducting artificial atoms coupled to on-chip resonators, represents a prime candidate for implementing a scalable quantum computing architecture because of its good tunability and controllability. Furthermore, recent advances have pushed the technology towards the ultrastrong coupling regime of light-matter interaction, where the qubit-resonator coupling strength reaches a considerable fraction of the resonator frequency. Here, we propose a qubit-resonator system operating in that regime as a quantum memory device, and study the storage and retrieval of quantum information in and from the Z2 parity-protected quantum memory within experimentally feasible schemes. We are also convinced that our proposal might pave the way to realizing a scalable quantum random-access memory due to its fast storage and readout performances.

  10. Decentralized control of a scalable photovoltaic (PV)-battery hybrid power system

    International Nuclear Information System (INIS)

    Kim, Myungchin; Bae, Sungwoo

    2017-01-01

    Highlights: • This paper introduces the design and control of a PV-battery hybrid power system. • Reliable and scalable operation of hybrid power systems is achieved. • System and power control are performed without a centralized controller. • Reliability and scalability characteristics are studied in a quantitative manner. • The system control performance is verified using realistic solar irradiation data. - Abstract: This paper presents the design and control of a sustainable standalone photovoltaic (PV)-battery hybrid power system (HPS). The research aims to develop an approach that contributes to an increased level of reliability and scalability for an HPS. To achieve these objectives, a PV-battery HPS with a passively connected battery was studied. A quantitative hardware reliability analysis was performed to assess the effect of the energy storage configuration on the overall system reliability. Instead of requiring feedback control information on load power through a centralized supervisory controller, the power flow in the proposed HPS is managed by a decentralized control approach that takes advantage of the system architecture. Reliable system operation of an HPS is achieved through the proposed control approach without requiring a separate supervisory controller. Furthermore, performance degradation of energy storage can be prevented by selecting the controller gains such that the charge rate does not exceed operational requirements. The performance of the proposed system architecture with the control strategy was verified by simulation results using realistic irradiance data and a battery model in which its temperature effect was considered. To support scalable operation, details on how the proposed design could be applied were also studied so that the HPS could satisfy potential system growth requirements. Such scalability was verified by simulating various cases that involve connection and disconnection of sources and loads.

  11. Scalable optical quantum computer

    International Nuclear Information System (INIS)

    Manykin, E A; Mel'nichenko, E V

    2014-01-01

    A way of designing a scalable optical quantum computer based on the photon echo effect is proposed. Individual rare earth ions Pr³⁺, regularly located in the lattice of the orthosilicate (Y₂SiO₅) crystal, are suggested to be used as optical qubits. Operations with qubits are performed using coherent and incoherent laser pulses. The operation protocol includes both the method of measurement-based quantum computations and the technique of optical computations. Modern hybrid photon echo protocols, which provide a sufficient quantum efficiency when reading recorded states, are considered as most promising for quantum computations and communications. (quantum computer)

  12. Information Integration Platform for Patient-Centric Healthcare Services: Design, Prototype and Dependability Aspects

    Directory of Open Access Journals (Sweden)

    Yohanes Baptista Dafferianto Trinugroho

    2014-03-01

    Full Text Available Technology innovations have pushed today's healthcare sector to an unprecedented new level. Various portable and wearable medical and fitness devices are being sold in the consumer market to provide the self-empowerment of a healthier lifestyle to society. Many vendors provide additional cloud-based services for the devices they manufacture, enabling users to visualize, store and share the gathered information through the Internet. However, most of these services are integrated with the devices in a closed "silo" manner, where the devices can only be used with the provided services. To tackle this issue, an information integration platform (IIP) has been developed to support communications between devices and Internet-based services in an event-driven fashion by adopting service-oriented architecture (SOA) principles and a publish/subscribe messaging pattern. It follows the "Internet of Things" (IoT) idea of connecting everyday objects to various networks and enabling the dissemination of the gathered information to the global information space through the Internet. A patient-centric healthcare service environment is chosen as the target scenario for the deployment of the platform, as this is a domain where IoT can have a direct positive impact on quality-of-life enhancement. This paper describes the developed platform, with emphasis on dependability aspects, including availability, scalability and security.
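
    The event-driven publish/subscribe pattern adopted by the platform can be reduced to a few lines; the toy in-process event bus below only illustrates the decoupling idea, not the actual IIP implementation, which sits on a networked messaging layer.

      from collections import defaultdict

      class EventBus:
          """Minimal in-process publish/subscribe bus illustrating topic-based decoupling."""
          def __init__(self):
              self._subscribers = defaultdict(list)

          def subscribe(self, topic, handler):
              self._subscribers[topic].append(handler)

          def publish(self, topic, message):
              for handler in self._subscribers[topic]:
                  handler(message)  # publishers and subscribers never reference each other

      bus = EventBus()
      bus.subscribe("vitals/heart_rate", lambda m: print("dashboard:", m))
      bus.subscribe("vitals/heart_rate", lambda m: print("alert service:", m))
      bus.publish("vitals/heart_rate", {"patient": "p-042", "bpm": 58})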

  13. SuperLU_DIST: A scalable distributed-memory sparse direct solver for unsymmetric linear systems

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xiaoye S.; Demmel, James W.

    2002-03-27

    In this paper, we present the main algorithmic features in the software package SuperLU_DIST, a distributed-memory sparse direct solver for large sets of linear equations. We give in detail our parallelization strategies, with focus on scalability issues, and demonstrate the parallel performance and scalability on current machines. The solver is based on sparse Gaussian elimination, with an innovative static pivoting strategy proposed earlier by the authors. The main advantage of static pivoting over classical partial pivoting is that it permits a priori determination of data structures and communication pattern for sparse Gaussian elimination, which makes it more scalable on distributed memory machines. Based on this a priori knowledge, we designed highly parallel and scalable algorithms for both LU decomposition and triangular solve and we show that they are suitable for large-scale distributed memory machines.
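
    For a feel of the sparse direct workflow described here, SciPy exposes a sequential SuperLU factorization as scipy.sparse.linalg.splu; SuperLU_DIST itself, with its static pivoting and MPI distribution, is driven through its own C interface, so the snippet below is only the single-node analogue.

      import numpy as np
      from scipy.sparse import csc_matrix
      from scipy.sparse.linalg import splu

      # Small unsymmetric sparse system A x = b
      A = csc_matrix(np.array([[4.0, 1.0, 0.0],
                               [2.0, 5.0, 1.0],
                               [0.0, 3.0, 6.0]]))
      b = np.array([1.0, 2.0, 3.0])

      lu = splu(A)     # sparse LU factorization (SuperLU under the hood)
      x = lu.solve(b)  # triangular solves reuse the factorization
      print(x, np.allclose(A @ x, b))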

  14. Advanced technologies for scalable ATLAS conditions database access on the grid

    CERN Document Server

    Basset, R; Dimitrov, G; Girone, M; Hawkings, R; Nevski, P; Valassi, A; Vaniachine, A; Viegas, F; Walker, R; Wong, A

    2010-01-01

    During massive data reprocessing operations, an ATLAS Conditions Database application must support concurrent access from numerous ATLAS data processing jobs running on the Grid. By simulating realistic workflows, ATLAS database scalability tests provided feedback for Conditions DB software optimization and allowed precise determination of the required distributed database resources. In distributed data processing one must take into account the chaotic nature of Grid computing, characterized by peak loads that can be much higher than average access rates. To validate database performance at peak loads, we tested database scalability at very high concurrent job rates. This was achieved through coordinated database stress tests performed in a series of ATLAS reprocessing exercises at the Tier-1 sites. The goal of the database stress tests is to detect scalability limits of the hardware deployed at the Tier-1 sites, so that server overload conditions can be safely avoided in a production environment. Our analysi...

  15. Scalable architecture for a room temperature solid-state quantum information processor.

    Science.gov (United States)

    Yao, N Y; Jiang, L; Gorshkov, A V; Maurer, P C; Giedke, G; Cirac, J I; Lukin, M D

    2012-04-24

    The realization of a scalable quantum information processor has emerged over the past decade as one of the central challenges at the interface of fundamental science and engineering. Here we propose and analyse an architecture for a scalable, solid-state quantum information processor capable of operating at room temperature. Our approach is based on recent experimental advances involving nitrogen-vacancy colour centres in diamond. In particular, we demonstrate that the multiple challenges associated with operation at ambient temperature, individual addressing at the nanoscale, strong qubit coupling, robustness against disorder and low decoherence rates can be simultaneously achieved under realistic, experimentally relevant conditions. The architecture uses a novel approach to quantum information transfer and includes a hierarchy of control at successive length scales. Moreover, it alleviates the stringent constraints currently limiting the realization of scalable quantum processors and will provide fundamental insights into the physics of non-equilibrium many-body quantum systems.

  16. Scalable force directed graph layout algorithms using fast multipole methods

    KAUST Repository

    Yunis, Enas Abdulrahman

    2012-06-01

    We present an extension to ExaFMM, a Fast Multipole Method library, as a generalized approach for fast and scalable execution of the Force-Directed Graph Layout algorithm. The Force-Directed Graph Layout algorithm is a physics-based approach to graph layout that treats the vertices V as repelling charged particles with the edges E connecting them acting as springs. Traditionally, the amount of work required in applying the Force-Directed Graph Layout algorithm is O(|V|² + |E|) using direct calculations and O(|V| log |V| + |E|) using truncation, filtering, and/or multi-level techniques. Correct application of the Fast Multipole Method allows us to maintain a lower complexity of O(|V| + |E|) while regaining most of the precision lost in other techniques. Solving layout problems for truly large graphs with millions of vertices still requires a scalable algorithm and implementation. We have been able to leverage the scalability and architectural adaptability of the ExaFMM library to create a Force-Directed Graph Layout implementation that runs efficiently on distributed multicore and multi-GPU architectures. © 2012 IEEE.
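
    The force model itself is easy to state. Below is a deliberately naive O(|V|² + |E|) iteration of the charged-particle repulsion and spring attraction (the direct method the record contrasts with); ExaFMM replaces the all-pairs repulsion with an O(|V|) multipole evaluation. Constants are arbitrary.

      import random

      def force_directed_step(pos, edges, repulsion=0.05, spring=0.01):
          """One explicit step: all-pairs repulsion, O(|V|^2), plus spring attraction, O(|E|)."""
          disp = {v: [0.0, 0.0] for v in pos}
          verts = list(pos)
          for i, u in enumerate(verts):  # vertices repel like charged particles
              for v in verts[i + 1:]:
                  dx, dy = pos[u][0] - pos[v][0], pos[u][1] - pos[v][1]
                  f = repulsion / (dx * dx + dy * dy + 1e-9)
                  disp[u][0] += f * dx; disp[u][1] += f * dy
                  disp[v][0] -= f * dx; disp[v][1] -= f * dy
          for u, v in edges:  # edges pull like springs
              dx, dy = pos[u][0] - pos[v][0], pos[u][1] - pos[v][1]
              disp[u][0] -= spring * dx; disp[u][1] -= spring * dy
              disp[v][0] += spring * dx; disp[v][1] += spring * dy
          return {v: (pos[v][0] + disp[v][0], pos[v][1] + disp[v][1]) for v in pos}

      random.seed(0)
      pos = {v: (random.random(), random.random()) for v in "abcd"}
      for _ in range(100):
          pos = force_directed_step(pos, [("a", "b"), ("b", "c"), ("c", "d")])
      print(pos)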

  17. Scalability of voltage-controlled filamentary and nanometallic resistance memory devices.

    Science.gov (United States)

    Lu, Yang; Lee, Jong Ho; Chen, I-Wei

    2017-08-31

    Much effort has been devoted to device and materials engineering to realize nanoscale resistance random access memory (RRAM) for practical applications, but a rational physical basis to be relied on to design scalable devices spanning many length scales is still lacking. In particular, there is no clear criterion for switching control in those RRAM devices in which resistance changes are limited to localized nanoscale filaments that experience concentrated heat, electric current and field. Here, we demonstrate voltage-controlled resistance switching, always at a constant characteristic critical voltage, for macro and nanodevices in both filamentary RRAM and nanometallic RRAM, and the latter switches uniformly and does not require a forming process. As a result, area-scalability can be achieved under a device-area-proportional current compliance for the low resistance state of the filamentary RRAM, and for both the low and high resistance states of the nanometallic RRAM. This finding will help design area-scalable RRAM at the nanoscale. It also establishes an analogy between RRAM and synapses, in which signal transmission is also voltage-controlled.

  18. [INVITED] Nanofabrication of phase-shifted Bragg gratings on the end facet of multimode fiber towards development of optical filters and sensors

    Science.gov (United States)

    Gallego, E. E.; Ascorbe, J.; Del Villar, I.; Corres, J. M.; Matias, I. R.

    2018-05-01

    This work describes the process of nanofabrication of phase-shifted Bragg gratings on the end facet of a multimode optical fiber with a pulsed DC sputtering system based on a single target. Several structures have been explored as a function of parameters such as the number of layers or the phase-shift. The experimental results, corroborated with simulations based on plane-wave propagation in a stack of homogeneous layers, indicate that the phase-shift can be controlled with a high degree of accuracy. The device could be used both in communications, as a filter, or in the sensors domain. As an example of application, a humidity sensor with wavelength shifts of 12 nm in the range of 30 to 90% relative humidity (200 pm/% relative humidity) is presented.
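
    The plane-wave, homogeneous-layer simulation mentioned in this record is conventionally done with the transfer-matrix method; a compact normal-incidence version is sketched below. The indices, layer counts and the half-wave "phase-shift" layer are illustrative assumptions, not the sputtered stack of the paper.

      import numpy as np

      def stack_reflectance(n_layers, d_layers, n_in, n_out, wavelength):
          """Normal-incidence reflectance of a stack of homogeneous layers (transfer matrices)."""
          M = np.eye(2, dtype=complex)
          for n, d in zip(n_layers, d_layers):
              delta = 2 * np.pi * n * d / wavelength  # phase thickness of the layer
              M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                                [1j * n * np.sin(delta), np.cos(delta)]])
          B, C = M @ np.array([1.0, n_out])
          r = (n_in * B - C) / (n_in * B + C)
          return abs(r) ** 2

      lam0, nH, nL = 1550e-9, 2.1, 1.45
      layers = 8 * [nH, nL] + [nH, nH] + 8 * [nL, nH]  # doubled layer acts as the phase shift
      thicknesses = [lam0 / (4 * n) for n in layers]   # quarter-wave layers at 1550 nm
      for lam in (1500e-9, 1550e-9, 1600e-9):
          print(lam, stack_reflectance(layers, thicknesses, 1.0, 1.45, lam))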

  19. Final Scientific/Technical Report for "Enabling Exascale Hardware and Software Design through Scalable System Virtualization"

    Energy Technology Data Exchange (ETDEWEB)

    Dinda, Peter August [Northwestern Univ., Evanston, IL (United States)]

    2015-03-17

    This report describes the activities, findings, and products of the Northwestern University component of the "Enabling Exascale Hardware and Software Design through Scalable System Virtualization" project. The purpose of this project has been to extend the state of the art of systems software for high-end computing (HEC) platforms, and to use systems software to better enable the evaluation of potential future HEC platforms, for example exascale platforms. Such platforms, and their systems software, have the goal of providing scientific computation at new scales, thus enabling new research in the physical sciences and engineering. Over time, the innovations in systems software for such platforms also become applicable to more widely used computing clusters, data centers, and clouds. This was a five-institution project, centered on the Palacios virtual machine monitor (VMM) systems software, a project begun at Northwestern, and originally developed in a previous collaboration between Northwestern University and the University of New Mexico. In this project, Northwestern (including via our subcontract to the University of Pittsburgh) contributed to the continued development of Palacios, along with other team members. We took the leadership role in (1) continued extension of support for emerging Intel and AMD hardware, (2) integration and performance enhancement of overlay networking, (3) connectivity with architectural simulation, (4) binary translation, and (5) support for modern Non-Uniform Memory Access (NUMA) hosts and guests. We also took a supporting role in support for specialized hardware for I/O virtualization, profiling, configurability, and integration with configuration tools. The efforts we led (1-5) were largely successful and executed as expected, with code and papers resulting from them. The project demonstrated the feasibility of a virtualization layer for HEC computing, similar to such layers for cloud or datacenter computing. For effort (3

  20. Scalable fast multipole accelerated vortex methods

    KAUST Repository

    Hu, Qi

    2014-05-01

    The fast multipole method (FMM) is often used to accelerate the calculation of particle interactions in particle-based methods to simulate incompressible flows. To evaluate the most time-consuming kernels (the Biot-Savart equation and the stretching term of the vorticity equation), we mathematically reformulated them so that only two Laplace scalar potentials are used instead of six. This automatically ensures divergence-free far-field computation. Based on this formulation, we developed a new FMM-based vortex method on heterogeneous architectures, which distributes the work between multicore CPUs and GPUs to best utilize the hardware resources and achieve excellent scalability. The algorithm uses new data structures which can dynamically manage inter-node communication and load balance efficiently, with only a small parallel construction overhead. The algorithm can scale to large clusters, showing both strong and weak scalability. Careful error and timing trade-off analyses are also performed for the cutoff functions induced by the vortex particle method. Our implementation can perform one time step of the velocity+stretching calculation for one billion particles on 32 nodes in 55.9 seconds, which yields 49.12 Tflop/s.
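
    The kernel being accelerated is the regularized Biot-Savart sum over vortex particles; the direct O(N²) evaluation below is what the FMM approximates with O(N) work. This is a schematic serial version with an assumed algebraic smoothing, not the paper's heterogeneous implementation.

      import numpy as np

      def biot_savart_direct(x, gamma, sigma=0.1):
          """Direct N^2 Biot-Savart velocity evaluation for vortex particles.

          x:     (N, 3) particle positions
          gamma: (N, 3) vortex strength vectors
          sigma: smoothing radius of the regularized kernel
          """
          u = np.zeros_like(x)
          for i in range(len(x)):
              r = x[i] - x                             # separations to all particles
              d2 = np.sum(r * r, axis=1) + sigma ** 2  # regularized squared distance
              k = 1.0 / (4.0 * np.pi * d2 ** 1.5)
              u[i] = np.sum(np.cross(gamma, r) * k[:, None], axis=0)
          return u

      rng = np.random.default_rng(0)
      positions = rng.random((200, 3))
      strengths = 1e-3 * rng.standard_normal((200, 3))
      print(biot_savart_direct(positions, strengths)[:2])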

  1. Scalability Optimization of Seamless Positioning Service

    Directory of Open Access Journals (Sweden)

    Juraj Machaj

    2016-01-01

    Full Text Available Recently, positioning services have been getting more attention, not only within the research community but also from service providers. From the service provider's point of view, a positioning service that can work seamlessly in all environments, for example indoor, dense urban, and rural, has huge potential to open new markets. However, such a system must not only provide accurate position estimates but also be scalable and resistant to fake positioning requests. In previous work we proposed a modular system which is able to provide seamless positioning in various environments. The system automatically selects the optimal positioning module based on available radio signals. The system currently consists of three positioning modules: GPS, GSM-based positioning, and Wi-Fi-based positioning. In this paper we propose an algorithm which reduces the time needed for position estimation and thus allows higher scalability of the modular system, making it possible to provide positioning services to a larger number of users. Such an improvement is extremely important for real-world applications where a large number of users will require position estimates, since positioning error is affected by the response time of the positioning server.

  2. New Tools for New Research in Psychiatry: A Scalable and Customizable Platform to Empower Data Driven Smartphone Research

    OpenAIRE

    Torous, John; Kiang, Mathew V; Lorme, Jeanette; Onnela, Jukka-Pekka

    2016-01-01

    Background A longstanding barrier to progress in psychiatry, both in clinical settings and research trials, has been the persistent difficulty of accurately and reliably quantifying disease phenotypes. Mobile phone technology combined with data science has the potential to offer medicine a wealth of additional information on disease phenotypes, but the large majority of existing smartphone apps are not intended for use as biomedical research platforms and, as such, do not generate research-qu...

  3. Architecture Knowledge for Evaluating Scalable Databases

    Science.gov (United States)

    2015-01-16

    [Standard report documentation page fields (contract/grant/program element numbers; author field beginning "Nurgaliev...") omitted.] The surviving abstract fragments indicate that the report captures architecture knowledge for evaluating scalable databases, comparing features such as query-language bindings (e.g., Scala, Erlang, JavaScript), cursor-based queries (supported / not supported), JOIN queries (supported / not supported), and complex data types (lists, maps, sets), and that it argues tooling such as machine learning is needed to extract such content from product documentation.

  4. Cooperative Scalable Moving Continuous Query Processing

    DEFF Research Database (Denmark)

    Li, Xiaohui; Karras, Panagiotis; Jensen, Christian S.

    2012-01-01

    ...of the global view and handle the majority of the workload. Meanwhile, moving clients, having basic memory and computation resources, handle small portions of the workload. This model is further enhanced by dynamic region allocation and grid size adjustment mechanisms that reduce the communication and computation cost for both servers and clients. An experimental study demonstrates that our approaches offer better scalability than competitors...

  5. ATLAS Grid Data Processing: system evolution and scalability

    CERN Document Server

    Golubkov, D; The ATLAS collaboration; Klimentov, A; Minaenko, A; Nevski, P; Vaniachine, A; Walker, R

    2012-01-01

    The production system for Grid Data Processing handles petascale ATLAS data reprocessing and Monte Carlo activities. The production system empowered further data processing steps on the Grid performed by dozens of ATLAS physics groups with coordinated access to computing resources worldwide, including additional resources sponsored by regional facilities. The system provides knowledge management of configuration parameters for massive data processing tasks, reproducibility of results, scalable database access, orchestrated workflow and performance monitoring, dynamic workload sharing, automated fault tolerance and petascale data integrity control. The system evolves to accommodate a growing number of users and new requirements from our contacts in ATLAS main areas: Trigger, Physics, Data Preparation and Software & Computing. To assure scalability, the next generation production system architecture development is in progress. We report on scaling up the production system for a growing number of users provi...

  6. Iterative Integration of Visual Insights during Scalable Patent Search and Analysis.

    Science.gov (United States)

    Koch, S; Bosch, H; Giereth, M; Ertl, T

    2011-05-01

    Patents are of growing importance in current economic markets. Analyzing patent information has, therefore, become a common task for many interest groups. As a prerequisite for patent analysis, extensive search for relevant patent information is essential. Unfortunately, the complexity of patent material inhibits a straightforward retrieval of all relevant patent documents and leads to iterative, time-consuming approaches in practice. The sheer amount of patent data to be analyzed already poses challenges with respect to scalability. Further scalability issues arise concerning the diversity of users and the large variety of analysis tasks. With "PatViz", a system for interactive analysis of patent information has been developed that addresses scalability at various levels. PatViz provides a visual environment allowing for interactive reintegration of insights into subsequent search iterations, thereby bridging the gap between search and analytic processes. Because of its extensibility, we expect that the approach we have taken can be employed in different problem domains that require high quality of search results regarding their completeness.

  7. Scalable Video Streaming Relay for Smart Mobile Devices in Wireless Networks.

    Science.gov (United States)

    Kwon, Dongwoo; Je, Huigwang; Kim, Hyeonwoo; Ju, Hongtaek; An, Donghyeok

    2016-01-01

    Recently, smart mobile devices and wireless communication technologies such as WiFi, third generation (3G), and long-term evolution (LTE) have been rapidly deployed. Many smart mobile device users can access the Internet wirelessly, which has increased mobile traffic. In 2014, more than half of the mobile traffic around the world was devoted to satisfying the increased demand for video streaming. In this paper, we propose a scalable video streaming relay scheme. Because many collisions degrade the scalability of video streaming, we first separate networks to prevent excessive contention between devices. In addition, the member device controls the video download rate in order to adapt to video playback. If the data are sufficiently buffered, the member device stops the download. If not, it requests additional video data. We implemented apps to evaluate the proposed scheme and conducted experiments with smart mobile devices. The results showed that our scheme improves the scalability of video streaming in a wireless local area network (WLAN).
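
    The download-rate control described here amounts to a buffer-threshold loop; a schematic version with hypothetical bitrate and thresholds and a fake network call is sketched below.

      PLAYBACK_RATE = 500_000            # bytes consumed per second (assumed bitrate)
      HIGH_WATER = 5 * PLAYBACK_RATE     # stop requesting above ~5 s of buffered video
      CHUNK = 2 * PLAYBACK_RATE

      def stream(download_chunk, ticks=30):
          """Buffer-threshold control: request data only when playback could starve."""
          buffered = 0
          for _ in range(ticks):         # one-second playback ticks, timing elided
              if buffered < HIGH_WATER:  # not enough buffered: request more data
                  buffered += download_chunk(CHUNK)
              buffered = max(0, buffered - PLAYBACK_RATE)  # playback drains the buffer
          return buffered

      print(stream(lambda n: n))         # fake network call delivers what is asked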

  9. Asynchronous Checkpoint Migration with MRNet in the Scalable Checkpoint / Restart Library

    Energy Technology Data Exchange (ETDEWEB)

    Mohror, K; Moody, A; de Supinski, B R

    2012-03-20

    Applications running on today's supercomputers tolerate failures by periodically saving their state in checkpoint files on stable storage, such as a parallel file system. Although this approach is simple, the overhead of writing the checkpoints can be prohibitive, especially for large-scale jobs. In this paper, we present initial results of an enhancement to our Scalable Checkpoint/Restart Library (SCR). We employ MRNet, a tree-based overlay network library, to transfer checkpoints from the compute nodes to the parallel file system asynchronously. This enhancement increases application efficiency by removing the need for an application to block while checkpoints are transferred to the parallel file system. We show that the integration of SCR with MRNet can reduce the time spent in I/O operations by as much as 15x. However, our experiments exposed new scalability issues with our initial implementation. We discuss the sources of the scalability problems and our plans to address them.
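
    The efficiency gain comes from not blocking the application while a checkpoint drains to the parallel file system. A minimal single-process analogue using a background thread is shown below; SCR with MRNet performs this drain across an overlay tree of nodes, which the sketch does not model.

      import os, shutil, tempfile, threading

      def write_checkpoint_async(state_bytes, fast_path, slow_path):
          """Block only for the fast node-local write; drain to slow storage in the background."""
          with open(fast_path, "wb") as f:
              f.write(state_bytes)
          t = threading.Thread(target=shutil.copyfile, args=(fast_path, slow_path))
          t.start()
          return t  # the application resumes computing immediately

      tmp = tempfile.mkdtemp()
      drain = write_checkpoint_async(b"x" * 10_000_000,
                                     os.path.join(tmp, "ckpt.local"),
                                     os.path.join(tmp, "ckpt.pfs"))
      # ... computation continues here while the copy proceeds ...
      drain.join()  # before taking the next checkpoint, ensure the previous drain finished
      print(os.path.getsize(os.path.join(tmp, "ckpt.pfs")))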

  10. Applications of the scalable coherent interface to data acquisition at LHC

    CERN Document Server

    Bogaerts, A; Divià, R; Müller, H; Parkman, C; Ponting, P J; Skaali, B; Midttun, G; Wormald, D; Wikne, J; Falciano, S; Cesaroni, F; Vinogradov, V I; Kristiansen, E H; Solberg, B; Guglielmi, A M; Worm, F H; Bovier, J; Davis, C; CERN. Geneva. Detector Research and Development Committee

    1991-01-01

    We propose to use the Scalable Coherent Interface (SCI) as a very high speed interconnect between LHC detector data buffers and farms of commercial trigger processors. Both the global second and third level trigger can be based on SCI as a reconfigurable and scalable system. SCI is a proposed IEEE standard which uses fast point-to-point links to provide computer-bus like services. It can connect a maximum of 65 536 nodes (memories or processors), providing data transfer rates of up to 1 Gbyte/s. Scalable data acquisition systems can be built using either simple SCI rings or complex switches. The interconnections may be flat cables, coaxial cables, or optical fibres. SCI protocols have been entirely implemented in VLSI, resulting in a significant simplification of data acquisition software. Novel SCI features allow efficient implementation of both data and processor driven readout architectures. In particular, a very efficient implementation of the third level trigger can be achieved by combining SCI's shared ...

  11. Study on scalable Coulombic degradation for estimating the lifetime of organic light-emitting devices

    International Nuclear Information System (INIS)

    Zhang Wenwen; Hou Xun; Wu Zhaoxin; Liang Shixiong; Jiao Bo; Zhang Xinwen; Wang Dawei; Chen Zhijian; Gong Qihuang

    2011-01-01

    The luminance decay of organic light-emitting diodes (OLEDs) is investigated at initial luminances of 1000 to 20 000 cd m⁻² through scalable Coulombic degradation and a stretched exponential decay. We found that the lifetime estimated by scalable Coulombic degradation deviates from the experimental results when the OLEDs operate at high initial luminance. By measuring the device temperature during degradation, we found that higher device temperatures lead to instabilities of the organic materials in the devices, which is expected to account for the difference between the experimental results and the estimate from scalable Coulombic degradation.
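
    For reference, the two empirical forms named in this record are commonly written as follows (notation assumed here; the record itself does not reproduce the equations): a stretched exponential luminance decay and an acceleration law relating lifetime to initial luminance,

      L(t) = L_0 \exp\!\left[-\left(t/\tau\right)^{\beta}\right], \qquad 0 < \beta \le 1,
      \qquad L_0^{\,n}\, t_{1/2} = \mathrm{const},

    where L_0 is the initial luminance and t_{1/2} the time to half the initial luminance; Coulombic degradation ties the ageing to the total charge driven through the device.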

  12. Scalable IC Platform for Smart Cameras

    Directory of Open Access Journals (Sweden)

    Harry Broers

    2005-08-01

    Full Text Available Smart cameras are among the emerging new fields of electronics. The points of interest are in the application areas, software and IC development. In order to reduce cost, it is worthwhile to invest in a single architecture that can be scaled for the various application areas in performance (and resulting power consumption). In this paper, we show that the combination of an SIMD (single-instruction multiple-data) processor and a general-purpose DSP is very advantageous for the image processing tasks encountered in smart cameras. While the SIMD processor provides the very high performance necessary by exploiting the inherent data parallelism found in the pixel-crunching part of the algorithms, the DSP offers a friendly approach to the more complex tasks. The paper goes on to argue that SIMD processors have very convenient scaling properties in silicon, making the complete SIMD-DSP architecture suitable for different application areas without changing the software suite. Analysis of the changes in power consumption due to scaling shows that for typical image processing tasks it is beneficial to scale the SIMD processor to use the maximum level of parallelism available in the algorithm if the IC supply voltage can be lowered. If silicon cost is of importance, the parallelism of the processor should be scaled to just reach the desired performance given the speed of the silicon.

  13. Scalable power selection method for wireless mesh networks

    CSIR Research Space (South Africa)

    Olwal, TO

    2009-01-01

    Full Text Available This paper addresses the problem of scalable dynamic power control (SDPC) for wireless mesh networks (WMNs) based on IEEE 802.11 standards. An SDPC model that accounts for architectural complexities witnessed in multiple radios and hops...

  14. Scalable storage for a DBMS using transparent distribution

    NARCIS (Netherlands)

    J.S. Karlsson; M.L. Kersten (Martin)

    1997-01-01

    Scalable Distributed Data Structures (SDDSs) provide a self-managing and self-organizing data storage of potentially unbounded size. This stands in contrast to common distribution schemas deployed in conventional distributed DBMS. SDDSs, however, have mostly been used in synthetic...

  15. Scalable optical quantum computer

    Energy Technology Data Exchange (ETDEWEB)

    Manykin, E A; Mel'nichenko, E V [Institute for Superconductivity and Solid-State Physics, Russian Research Centre 'Kurchatov Institute', Moscow (Russian Federation)]

    2014-12-31

    A way of designing a scalable optical quantum computer based on the photon echo effect is proposed. Individual rare earth ions Pr³⁺, regularly located in the lattice of the orthosilicate (Y₂SiO₅) crystal, are suggested to be used as optical qubits. Operations with qubits are performed using coherent and incoherent laser pulses. The operation protocol includes both the method of measurement-based quantum computations and the technique of optical computations. Modern hybrid photon echo protocols, which provide a sufficient quantum efficiency when reading recorded states, are considered as most promising for quantum computations and communications. (quantum computer)

  16. Scalable Techniques for Formal Verification

    CERN Document Server

    Ray, Sandip

    2010-01-01

    This book presents state-of-the-art approaches to formal verification techniques to seamlessly integrate different formal verification methods within a single logical foundation. It should benefit researchers and practitioners looking to get a broad overview of the spectrum of formal verification techniques, as well as approaches to combining such techniques within a single framework. Coverage includes a range of case studies showing how such combination is fruitful in developing a scalable verification methodology for industrial designs. This book outlines both theoretical and practical issues...

  17. Scalable Optical-Fiber Communication Networks

    Science.gov (United States)

    Chow, Edward T.; Peterson, John C.

    1993-01-01

    Scalable arbitrary fiber extension network (SAFEnet) is conceptual fiber-optic communication network passing digital signals among variety of computers and input/output devices at rates from 200 Mb/s to more than 100 Gb/s. Intended for use with very-high-speed computers and other data-processing and communication systems in which message-passing delays must be kept short. Inherent flexibility makes it possible to match performance of network to computers by optimizing configuration of interconnections. In addition, interconnections made redundant to provide tolerance to faults.

  18. The Promise of E-Platform Technology in Medical Education.

    Science.gov (United States)

    Dawd, Siraj

    2016-03-01

    Increasing the number as well as improving the capacity and quality of medical professionals to achieve equitable health care for all is a global priority and a global challenge. Developing countries, which face the largest burden of disease, have a great need for more well-trained, competent and dedicated health care providers to achieve this objective. Currently, there is a well-documented shortage of trained health workers globally, with the poorest countries having the greatest shortfalls. The time-tested, traditional approach of training the health care workforce by importing professionals from overseas is not only prohibitively expensive but also insufficient to achieve the scale and pace of the required human capacity building. Considering this fact, distance learning programs, which include m-Health as well as other information technology (IT) platforms and tools, can provide unique, timely, cost-effective, easily scalable and valuable opportunities to expand access to health care workforce training in developing countries where the shortage is critical.

  19. A Microfluidic Platform to design crosslinked Hyaluronic Acid Nanoparticles (cHANPs) for enhanced MRI

    Science.gov (United States)

    Russo, Maria; Bevilacqua, Paolo; Netti, Paolo Antonio; Torino, Enza

    2016-11-01

    Recent advancements in imaging diagnostics have focused on the use of nanostructures that entrap Magnetic Resonance Imaging (MRI) Contrast Agents (CAs) without the need to chemically modify the clinically approved compounds. Nevertheless, the exploitation of microfluidic platforms for their controlled and continuous production is still missing. Here, a microfluidic platform is used to synthesize crosslinked Hyaluronic Acid NanoParticles (cHANPs) in which a clinically relevant MRI-CA, gadolinium diethylenetriamine penta-acetic acid (Gd-DTPA), is entrapped. This microfluidic process facilitates a high degree of control over particle synthesis, enabling the production of monodisperse particles as small as 35 nm. Furthermore, the interference of Gd-DTPA during polymer precipitation is overcome by finely tuning process parameters and leveraging the hydrophilic-lipophilic balance (HLB) of surfactants and pH conditions. For both production strategies proposed to design Gd-loaded cHANPs, a boost in the relaxation rate is observed: a T1 of 1562 is achieved with 10 μM of Gd-loaded cHANPs, while a similar value is reached only with 100 μM of the clinically relevant Gd-DTPA in solution. The advanced microfluidic platform used to synthesize intravascularly injectable and completely biocompatible hydrogel nanoparticles entrapping clinically approved CAs enables the implementation of straightforward and scalable strategies in diagnostics and therapy applications.

  20. Bitcoin-NG: A Scalable Blockchain Protocol

    OpenAIRE

    Eyal, Ittay; Gencer, Adem Efe; Sirer, Emin Gun; van Renesse, Robbert

    2015-01-01

    Cryptocurrencies, based on and led by Bitcoin, have shown promise as infrastructure for pseudonymous online payments, cheap remittance, trustless digital asset exchange, and smart contracts. However, Bitcoin-derived blockchain protocols have inherent scalability limits that trade off throughput against latency and withhold the realization of this potential. This paper presents Bitcoin-NG, a new blockchain protocol designed to scale. Based on Bitcoin's blockchain protocol, Bitcoin-NG is By...

  1. SWAP-Assembler: scalable and efficient genome assembly towards thousands of cores.

    Science.gov (United States)

    Meng, Jintao; Wang, Bingqiang; Wei, Yanjie; Feng, Shengzhong; Balaji, Pavan

    2014-01-01

    There is a widening gap between the throughput of massively parallel sequencing machines and the ability to analyze these sequencing data. Traditional assembly methods, requiring long execution times and large amounts of memory on a single workstation, limit their use on these massive data. This paper presents a highly scalable assembler named SWAP-Assembler for processing massive sequencing data using thousands of cores, where SWAP is an acronym for Small World Asynchronous Parallel model. In the paper, a mathematical description of the multi-step bi-directed graph (MSG) is provided to resolve the computational interdependence on merging edges, and a highly scalable computational framework for SWAP is developed to automatically perform the parallel computation of all operations. Graph cleaning and contig extension are also included for generating contigs with high quality. Experimental results show that SWAP-Assembler scales up to 2048 cores on the Yanhuang dataset, using only 26 minutes, which is better than several other parallel assemblers such as ABySS, Ray, and PASHA. Results also show that SWAP-Assembler can generate high-quality contigs with good N50 size and low error rate; in particular, it generated the longest N50 contig sizes for the Fish and Yanhuang datasets. In this paper, we presented a highly scalable and efficient genome assembly software, SWAP-Assembler. Compared with several other assemblers, it showed very good performance in terms of scalability and contig quality. This software is available at: https://sourceforge.net/projects/swapassembler.
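
    In serial terms, the edge merging that the MSG formulation parallelizes corresponds to compacting the non-branching paths of a de Bruijn graph into contigs. The toy single-threaded illustration below (which ignores reverse complements, sequencing errors and isolated cycles) is an analogue of that step, not the SWAP model itself.

      from collections import defaultdict

      def debruijn_contigs(reads, k=4):
          """Build a k-mer de Bruijn graph and merge its non-branching paths into contigs."""
          succ, indeg = defaultdict(set), defaultdict(int)
          for read in reads:
              for i in range(len(read) - k + 1):
                  a, b = read[i:i + k - 1], read[i + 1:i + k]
                  if b not in succ[a]:
                      succ[a].add(b)
                      indeg[b] += 1
          contigs = []
          for s in [v for v in succ if indeg[v] != 1 or len(succ[v]) > 1]:
              for nxt in succ[s]:              # start one contig per branching out-edge
                  contig, cur = s, nxt
                  while True:
                      contig += cur[-1]
                      if len(succ[cur]) == 1 and indeg[cur] == 1:
                          cur = next(iter(succ[cur]))  # extend through 1-in-1-out nodes
                      else:
                          break
                  contigs.append(contig)
          return contigs

      print(debruijn_contigs(["ACGTACGGACT", "GGACTTT"]))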

  2. GoFFish: Graph-Oriented Framework for Foresight and Insight Using Scalable Heuristics

    Science.gov (United States)

    2015-09-01

    [Abstract not recovered; only bibliographic residue survives, repeatedly citing: A. Biem, E. Bouillet, H. Feng, A. Ranganathan, A. Riabov, O. Verscheure, H. Koutsopoulos, and C. Moran, "IBM InfoSphere Streams for scalable, real-time, intelligent...", and an article in the Journal of Systems and Software, Elsevier, 2013, vol. 86, no. 1, pp. 2-11.]

  3. Platformation: Cloud Computing Tools at the Service of Social Change

    Directory of Open Access Journals (Sweden)

    Anil Patel

    2012-07-01

    Full Text Available The following article establishes some context and definitions for what is termed the “sharing imperative” – a movement or tendency towards sharing information online and in real time that has rapidly transformed several industries. As internet-enabled devices proliferate to all corners of the globe, ways of working and accessing information have changed. Users now expect to be able to access the products, services, and information that they want from anywhere, at any time, on any device. This article addresses how the nonprofit sector might respond to those demands by embracing the sharing imperative. It suggests that how well an organization shares has become one of the most pressing governance questions a nonprofit organization must tackle. Finally, the article introduces Platformation, a project whereby tools that enable better inter and intra-organizational sharing are tested for scalability, affordability, interoperability, and security, all with a non-profit lens.

  4. NeuroFlow: A General Purpose Spiking Neural Network Simulation Platform using Customizable Processors.

    Science.gov (United States)

    Cheung, Kit; Schultz, Simon R; Luk, Wayne

    2015-01-01

    NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation to deliver optimized performance, such as the degree of parallelism to employ. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current or conductance based neuronal models such as integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and can achieve a real-time performance of 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times the speed of an 8-core processor, or 2.83 times the speed of GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation.
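
    A network description in the simulator-independent PyNN style that NeuroFlow compiles is sketched below. This assumes a standard PyNN 0.9-style backend such as pyNN.nest is installed; the record does not give the module name of NeuroFlow's own FPGA backend, which would be substituted for it.

      import pyNN.nest as sim  # assumed backend; NeuroFlow would supply its own

      sim.setup(timestep=0.1)                                  # ms
      exc = sim.Population(100, sim.IF_cond_exp(tau_m=20.0))   # integrate-and-fire cells
      inh = sim.Population(25, sim.Izhikevich())               # Izhikevich cells
      noise = sim.Population(100, sim.SpikeSourcePoisson(rate=10.0))

      sim.Projection(noise, exc, sim.OneToOneConnector(),
                     sim.StaticSynapse(weight=0.02, delay=1.0))
      stdp = sim.STDPMechanism(                                # plastic exc -> inh synapses
          timing_dependence=sim.SpikePairRule(tau_plus=20.0, tau_minus=20.0,
                                              A_plus=0.01, A_minus=0.012),
          weight_dependence=sim.AdditiveWeightDependence(w_min=0.0, w_max=0.04),
          weight=0.02, delay=1.0)
      sim.Projection(exc, inh, sim.FixedProbabilityConnector(0.1), stdp)

      exc.record("spikes")
      sim.run(1000.0)                                          # ms
      sim.end()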

  5. Highly scalable Ab initio genomic motif identification

    KAUST Repository

    Marchand, Benoit; Bajic, Vladimir B.; Kaushik, Dinesh

    2011-01-01

    We present results of scaling an ab initio motif family identification system, Dragon Motif Finder (DMF), to 65,536 processor cores of IBM Blue Gene/P. DMF seeks groups of mutually similar polynucleotide patterns within a set of genomic sequences and builds various motif families from them. Such information is of relevance to many problems in life sciences. Prior attempts to scale such ab initio motif-finding algorithms achieved limited success. We solve the scalability issues using a combination of mixed-mode MPI-OpenMP parallel programming, master-slave work assignment, multi-level workload distribution, multi-level MPI collectives, and serial optimizations. While the scalability of our algorithm was excellent (94% parallel efficiency on 65,536 cores relative to 256 cores on a modest-size problem), the final speedup with respect to the original serial code exceeded 250,000 when serial optimizations are included. This enabled us to carry out many large-scale ab initio motif-finding simulations in a few hours, while the original serial code would have needed decades of execution time. Copyright 2011 ACM.

  6. The intergroup protocols: Scalable group communication for the internet

    Energy Technology Data Exchange (ETDEWEB)

    Berket, Karlo [Univ. of California, Santa Barbara, CA (United States)

    2000-12-04

    Reliable group ordered delivery of multicast messages in a distributed system is a useful service that simplifies the programming of distributed applications. Such a service helps to maintain the consistency of replicated information and to coordinate the activities of the various processes. With the increasing popularity of the Internet, there is an increasing interest in scaling the protocols that provide this service to the environment of the Internet. The InterGroup protocol suite, described in this dissertation, provides such a service, and is intended for the environment of the Internet with scalability to large numbers of nodes and high latency links. The InterGroup protocols approach the scalability problem from various directions. They redefine the meaning of group membership, allow voluntary membership changes, add a receiver-oriented selection of delivery guarantees that permits heterogeneity of the receiver set, and provide a scalable reliability service. The InterGroup system comprises several components, executing at various sites within the system. Each component provides part of the services necessary to implement a group communication system for the wide-area. The components can be categorized as: (1) control hierarchy, (2) reliable multicast, (3) message distribution and delivery, and (4) process group membership. We have implemented a prototype of the InterGroup protocols in Java, and have tested the system performance in both local-area and wide-area networks.

  7. Temporal Scalability through Adaptive M-Band Filter Banks for Robust H.264/MPEG-4 AVC Video Coding

    Directory of Open Access Journals (Sweden)

    Pau G

    2006-01-01

    Full Text Available This paper presents different structures that use adaptive M-band hierarchical filter banks for temporal scalability. Open-loop and closed-loop configurations are introduced and illustrated using existing video codecs. In particular, it is shown that the H.264/MPEG-4 AVC codec allows us to introduce scalability by frame shuffling operations, thus keeping backward compatibility with the standard. The large set of shuffling patterns introduced here can be exploited to adapt the encoding process to the video content features, as well as to the user equipment and transmission channel characteristics. Furthermore, simulation results show that this scalability is obtained with no degradation in terms of subjective and objective quality in error-free environments, while in error-prone channels the scalable versions provide increased robustness.

  8. Adolescent sexuality education: An appraisal of some scalable ...

    African Journals Online (AJOL)

    Adolescent sexuality education: An appraisal of some scalable interventions for the Nigerian context. VC Pam. Abstract. Most issues around sexual intercourse are highly sensitive topics in Nigeria. Despite the disturbingly high adolescent HIV prevalence and teenage pregnancy rate in Nigeria, sexuality education is ...

  9. Impact of multiplexed reading scheme on nanocrossbar memristor memory's scalability

    International Nuclear Information System (INIS)

    Zhu Xuan; Tang Yu-Hua; Wu Jun-Jie; Yi Xun; Wu Chun-Qing

    2014-01-01

    Nanocrossbars are a potential memory architecture for integrating memristors to achieve large-scale, high-density memory. However, with the currently widely adopted parallel reading scheme, the scalability of nanocrossbar memory is limited, since the overhead of the reading circuits is proportional to the size of the nanocrossbar component. In this paper, a multiplexed reading scheme is adopted as the foundation of the discussion. Through HSPICE simulation, we reanalyze the scalability of nanocrossbar memristor memory by investigating the impact of various circuit parameters on the output voltage swing as the memory scales to larger sizes. We find that multiplexed reading maintains a sufficient noise margin in large nanocrossbar memristor memories. In order to improve the scalability of the memory, memristors with nonlinear I-V characteristics and high LRS (low resistive state) resistance should be adopted. (interdisciplinary physics and related areas of science and technology)

  10. Scalable manufacturing processes with soft materials

    OpenAIRE

    White, Edward; Case, Jennifer; Kramer, Rebecca

    2014-01-01

    The emerging field of soft robotics will benefit greatly from new scalable manufacturing techniques for responsive materials. Currently, most of soft robotic examples are fabricated one-at-a-time, using techniques borrowed from lithography and 3D printing to fabricate molds. This limits both the maximum and minimum size of robots that can be fabricated, and hinders batch production, which is critical to gain wider acceptance for soft robotic systems. We have identified electrical structures, ...

  11. Randomized Algorithms for Scalable Machine Learning

    OpenAIRE

    Kleiner, Ariel Jacob

    2012-01-01

    Many existing procedures in machine learning and statistics are computationally intractable in the setting of large-scale data. As a result, the advent of rapidly increasing dataset sizes, which should be a boon yielding improved statistical performance, instead severely blunts the usefulness of a variety of existing inferential methods. In this work, we use randomness to ameliorate this lack of scalability by reducing complex, computationally difficult inferential problems to larger sets o...

  12. Error-Resilient Unequal Error Protection of Fine Granularity Scalable Video Bitstreams

    Science.gov (United States)

    Cai, Hua; Zeng, Bing; Shen, Guobin; Xiong, Zixiang; Li, Shipeng

    2006-12-01

    This paper deals with the optimal packet loss protection issue for streaming the fine granularity scalable (FGS) video bitstreams over IP networks. Unlike many other existing protection schemes, we develop an error-resilient unequal error protection (ER-UEP) method that adds redundant information optimally for loss protection and, at the same time, cancels completely the dependency among bitstream after loss recovery. In our ER-UEP method, the FGS enhancement-layer bitstream is first packetized into a group of independent and scalable data packets. Parity packets, which are also scalable, are then generated. Unequal protection is finally achieved by properly shaping the data packets and the parity packets. We present an algorithm that can optimally allocate the rate budget between data packets and parity packets, together with several simplified versions that have lower complexity. Compared with conventional UEP schemes that suffer from bit contamination (caused by the bit dependency within a bitstream), our method guarantees successful decoding of all received bits, thus leading to strong error-resilience (at any fixed channel bandwidth) and high robustness (under varying and/or unclean channel conditions).
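
    At its core, parity-packet protection builds redundancy packets over groups of data packets. The single-parity XOR toy below recovers any one lost packet per group; the paper's scheme uses stronger scalable codes and an optimal rate allocation between data and parity, which the sketch omits.

      def xor_bytes(a, b):
          return bytes(x ^ y for x, y in zip(a, b))

      def make_parity(packets):
          """One XOR parity packet per group: any single lost packet is recoverable."""
          parity = packets[0]
          for p in packets[1:]:
              parity = xor_bytes(parity, p)
          return parity

      packets = [b"AAAA", b"BBBB", b"CCCC"]
      parity = make_parity(packets)

      # Simulate losing packet 1 and recovering it from the survivors plus parity
      recovered = make_parity([packets[0], packets[2], parity])
      print(recovered == packets[1])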

  13. Photonic crystal ring resonator-based four-channel dense wavelength division multiplexing demultiplexer on silicon on insulator platform: design and analysis

    Science.gov (United States)

    Sreenivasulu, Tupakula; Bhowmick, Kaustav; Samad, Shafeek A.; Yadunath, Thamerassery Illam R.; Badrinarayana, Tarimala; Hegde, Gopalkrishna; Srinivas, Talabattula

    2018-04-01

    A compact photonic crystal (PC) ring-resonator-based channel drop filter, feasible for micro/nanofabrication, has been designed and analyzed for operation in the C and L bands of the communication window. The four-channel demultiplexer consists of ring resonators of holes in a two-dimensional PC slab. The proposed dense wavelength division multiplexing assembly is shown to achieve an optimal quality factor without altering the lattice parameters or resonator size and without including scattering holes. Transmission characteristics are analyzed using the three-dimensional finite-difference time-domain simulation approach. The radiation loss of the ring resonator was minimized by forced cancelation of radiation fields through fine-tuning of the air holes inside the ring resonator. An average cross talk of -34 dB has been achieved between adjacent channels while maintaining an average quality factor of 5000. Demultiplexing is achieved by engineering only the air holes inside the ring, which makes it a simple and fabrication-tolerant design. Also, the device footprint of 500 μm² on a silicon on insulator platform makes the device easy to fabricate using the e-beam lithography technique.

  14. Scalable Robust Principal Component Analysis Using Grassmann Averages

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Enficiaud, Raffi

    2016-01-01

    In large datasets, manual data verification is impossible, and we must expect the number of outliers to increase with data size. While principal component analysis (PCA) can reduce data size, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortu...

  15. MicROS-drt: supporting real-time and scalable data distribution in distributed robotic systems.

    Science.gov (United States)

    Ding, Bo; Wang, Huaimin; Fan, Zedong; Zhang, Pengfei; Liu, Hui

    A primary requirement in distributed robotic software systems is the dissemination of data to all interested collaborative entities in a timely and scalable manner. However, providing such a service in a highly dynamic and resource-limited robotic environment is a challenging task, and existing robot software infrastructure has limitations in this aspect. This paper presents a novel robot software infrastructure, micROS-drt, which supports real-time and scalable data distribution. The solution is based on a loosely coupled data publish-subscribe model with the ability to support various time-related constraints. To realize this model, a mature data distribution standard, the Data Distribution Service for real-time systems (DDS), is adopted as the foundation of the transport layer of this software infrastructure. By elaborately adapting and encapsulating the capability of the underlying DDS middleware, micROS-drt can meet the requirement of real-time and scalable data distribution in distributed robotic systems. Evaluation results in terms of scalability, latency jitter and transport priority, as well as the experiment on real robots, validate the effectiveness of this work.

  16. Parallel processing architecture for H.264 deblocking filter on multi-core platforms

    Science.gov (United States)

    Prasad, Durga P.; Sonachalam, Sekar; Kunchamwar, Mangesh K.; Gunupudi, Nageswara Rao

    2012-03-01

    ...filter for multi-core platforms such as HyperX technology. Parallel techniques such as parallel processing of independent macroblocks, sub-blocks, and pixel rows are examined in this work. The deblocking architecture consists of a basic cell called the deblocking filter unit (DFU) and a dependent data buffer manager (DFM). The DFU can be used in several instances, catering to different performance needs; the DFM serves the data required by the different numbers of DFUs and also manages all the neighboring data required for future data processing by the DFUs. This approach achieves the scalability, flexibility, and performance excellence required in deblocking filters.

  17. A Testbed for Highly-Scalable Mission Critical Information Systems

    National Research Council Canada - National Science Library

    Birman, Kenneth P

    2005-01-01

    ... systems in a networked environment. Headed by Professor Ken Birman, the project is exploring a novel fusion of classical protocols for reliable multicast communication with a new style of peer-to-peer protocol called scalable "gossip...

  18. Scalable electro-photonic integration concept based on polymer waveguides

    Science.gov (United States)

    Bosman, E.; Van Steenberge, G.; Boersma, A.; Wiegersma, S.; Harmsma, P.; Karppinen, M.; Korhonen, T.; Offrein, B. J.; Dangel, R.; Daly, A.; Ortsiefer, M.; Justice, J.; Corbett, B.; Dorrestein, S.; Duis, J.

    2016-03-01

    A novel method for fabricating a single mode optical interconnection platform is presented. The method comprises the miniaturized assembly of optoelectronic single dies, the scalable fabrication of polymer single mode waveguides and the coupling to glass fiber arrays providing the I/Os. The low-cost approach for the polymer waveguide fabrication is based on nano-imprinting of a spin-coated waveguide core layer. The assembly of VCSELs and photodiodes is performed before the waveguide layers are applied. By embedding these components in deep reactive ion etched pockets in the silicon substrate, the planarity of the substrate for subsequent layer processing is guaranteed and the thermal path from chip to substrate is minimized. Optical coupling of the embedded devices to the nano-imprinted waveguides is performed by laser-ablating 45-degree trenches, which act as optical mirrors for 90-degree deviation of the light from VCSEL to waveguide. Laser ablation is also implemented for removing parts of the polymer stack in order to mount a custom fabricated connector containing glass fiber arrays. A demonstration device was built to show the proof of principle of the novel fabrication, packaging and optical coupling principles described above, combined with a set of sub-demonstrators showing the functionality of the different techniques separately. The paper represents a significant part of the electro-photonic integration accomplishments in the European 7th Framework project "Firefly" and discusses not only the development of the different assembly processes described above, but also the efforts toward complete integration of all process approaches into the single device demonstrator.

  19. Cpu/gpu Computing for AN Implicit Multi-Block Compressible Navier-Stokes Solver on Heterogeneous Platform

    Science.gov (United States)

    Deng, Liang; Bai, Hanli; Wang, Fang; Xu, Qingxin

    2016-06-01

    CPU/GPU computing allows scientists to tremendously accelerate their numerical codes. In this paper, we port and optimize a double-precision alternating direction implicit (ADI) solver for the three-dimensional compressible Navier-Stokes equations from our in-house Computational Fluid Dynamics (CFD) software onto a heterogeneous platform. First, we implement a full GPU version of the ADI solver to remove redundant data transfers between CPU and GPU, and then design two fine-grain schemes, namely “one-thread-one-point” and “one-thread-one-line”, to maximize performance. Second, we present a dual-level parallelization scheme using the CPU/GPU collaborative model to exploit the computational resources of both multi-core CPUs and many-core GPUs within the heterogeneous platform. Finally, considering the fact that memory on a single node becomes inadequate when the simulation size grows, we present a tri-level hybrid programming pattern, MPI-OpenMP-CUDA, that merges fine-grain parallelism using OpenMP and CUDA threads with coarse-grain parallelism using MPI for inter-node communication. We also propose a strategy to overlap computation with communication using the advanced features of CUDA and MPI programming. We obtain a speedup of 6.0 for the ADI solver on one Tesla M2050 GPU compared with two Xeon X5670 CPUs. Scalability tests show that our implementation can offer significant performance improvement on a heterogeneous platform.
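
    The computation-communication overlap strategy can be sketched generically with mpi4py and NumPy (assumed available): post non-blocking halo exchanges, update the interior points that need no remote data, then finish the boundary once the halos arrive. This mirrors the idea only; it is not the paper's actual ADI solver.

    ```python
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    n = 1024
    u = np.random.rand(n)                    # this rank's slab of the field
    left = np.empty(1); right = np.empty(1)  # halo receive buffers

    reqs = []
    if rank > 0:
        reqs += [comm.Isend(u[:1], dest=rank - 1),
                 comm.Irecv(left, source=rank - 1)]
    if rank < size - 1:
        reqs += [comm.Isend(u[-1:], dest=rank + 1),
                 comm.Irecv(right, source=rank + 1)]

    interior = 0.5 * (u[2:] + u[:-2])        # needs no remote data: overlaps comm
    MPI.Request.Waitall(reqs)                # halos have arrived by now
    # ...the two slab-boundary points can now be updated using left/right...
    ```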

  20. Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses.

    Science.gov (United States)

    Liu, Bo; Madduri, Ravi K; Sotomayor, Borja; Chard, Kyle; Lacinski, Lukasz; Dave, Utpal J; Li, Jianqiang; Liu, Chunchen; Foster, Ian T

    2014-06-01

    The coming deluge of genome data presents significant challenges for storing and processing large-scale genome data, providing easy access to biomedical analysis tools, and enabling efficient data sharing and retrieval. The variability in data volume results in variable computing and storage requirements; therefore, biomedical researchers are pursuing more reliable, dynamic and convenient methods for conducting sequencing analyses. This paper proposes a Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses, which enables reliable and highly scalable execution of sequencing analysis workflows in a fully automated manner. Our platform extends the existing Galaxy workflow system by adding data management capabilities for transferring large quantities of data efficiently and reliably (via Globus Transfer), domain-specific analysis tools preconfigured for immediate use by researchers (via user-specific tools integration), automatic deployment on the Cloud for on-demand resource allocation and pay-as-you-go pricing (via Globus Provision), a Cloud provisioning tool for auto-scaling (via the HTCondor scheduler), and support for validating the correctness of workflows (via semantic verification tools). Two bioinformatics workflow use cases as well as a performance evaluation are presented to validate the feasibility of the proposed approach. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. Design and thermal performances of a scalable linear Fresnel reflector solar system

    International Nuclear Information System (INIS)

    Zhu, Yanqing; Shi, Jifu; Li, Yujian; Wang, Leilei; Huang, Qizhang; Xu, Gang

    2017-01-01

    Highlights: • A scalable linear Fresnel reflector which can supply different temperatures is proposed. • Inclination design of the mechanical structure is used to reduce the end losses. • The maximum thermal efficiency of 64% is achieved in Guangzhou. - Abstract: This paper proposes a scalable linear Fresnel reflector (SLFR) solar system. The optical mirror field, which contains an array of linear flat mirrors placed close to each other, is designed to eliminate inter-row shading and blocking. A scalable mechanical mirror support, which can hold different numbers of mirrors, is designed to supply different temperatures. The mechanical structure can be inclined to reduce the end losses. Finally, the thermal efficiency of the SLFR with two mirror stages is tested. After adjustment, a maximum thermal efficiency of 64% is obtained, and the mean thermal efficiency is higher than that before adjustment. The results indicate that the end losses have been reduced effectively by the inclination design and that excellent thermal performance can be obtained by the SLFR after adjustment.
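
    For context, collector thermal efficiency is typically computed as useful heat gain over incident solar power, η = ṁ·cp·ΔT / (DNI·A). A back-of-envelope check with assumed (not measured) numbers lands in the same ballpark as the 64% reported above:

    ```python
    # All inputs below are illustrative assumptions, not values from the paper.
    m_dot = 0.05        # kg/s, heat-transfer-fluid mass flow (assumed)
    cp = 4186.0         # J/(kg*K), specific heat of water
    dT = 30.0           # K, fluid temperature rise across the receiver (assumed)
    dni = 800.0         # W/m^2, direct normal irradiance (assumed)
    area = 12.0         # m^2, total mirror aperture area (assumed)

    q_useful = m_dot * cp * dT                 # W, heat delivered to the fluid
    eta = q_useful / (dni * area)
    print(f"thermal efficiency = {eta:.2%}")   # ~65% with these assumptions
    ```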

  2. A Scalable Smart Meter Data Generator Using Spark

    DEFF Research Database (Denmark)

    Iftikhar, Nadeem; Liu, Xiufeng; Danalachi, Sergiu

    2017-01-01

    Today, smart meters are being used worldwide and produce large volumes of data. Thus, it is important for smart meter data management and analytics systems to process petabytes of data. Benchmarking and testing of these systems require scalable data; however, it can ...
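
    A minimal PySpark sketch of such a generator is shown below; the schema, value ranges and output path are assumptions for illustration, whereas the actual generator described above also reproduces patterns learned from real consumption data.

    ```python
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("meter-data-gen").getOrCreate()

    n_meters, hours = 1000, 24
    readings = (spark.range(n_meters * hours)              # one row per reading
                .withColumn("meter_id", F.col("id") % n_meters)
                .withColumn("hour", (F.col("id") / n_meters).cast("int"))
                .withColumn("kwh", F.round(F.rand(seed=42) * 2.5, 3)))

    readings.show(5)
    # Scale-out is inherent: a larger n_meters * hours simply yields more
    # partitions spread across the cluster.
    readings.write.mode("overwrite").parquet("/tmp/synthetic_readings")
    ```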

  3. A Massively Scalable Architecture for Instant Messaging & Presence

    NARCIS (Netherlands)

    Schippers, Jorrit; Remke, Anne Katharina Ingrid; Punt, Henk; Wegdam, M.; Haverkort, Boudewijn R.H.M.; Thomas, N.; Bradley, J.; Knottenbelt, W.; Dingle, N.; Harder, U.

    2010-01-01

    This paper analyzes the scalability of Instant Messaging & Presence (IM&P) architectures. We take a queueing-based modelling and analysis approach to find the bottlenecks of the current IM&P architecture at the Dutch social network Hyves, as well as of alternative architectures. We use the
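
    The simplest building block of such a queueing analysis is the M/M/1 station: with arrival rate λ and service rate μ, utilization is ρ = λ/μ and the mean time in system is W = 1/(μ − λ). A tiny worked example with assumed rates (not Hyves measurements):

    ```python
    lam = 800.0    # message arrivals per second (assumed)
    mu = 1000.0    # messages the server can process per second (assumed)

    rho = lam / mu                 # utilization; must stay below 1 for stability
    W = 1.0 / (mu - lam)           # mean time in system for an M/M/1 queue
    L = lam * W                    # mean number in system (Little's law)
    print(f"utilization={rho:.0%}, mean sojourn={W*1e3:.1f} ms, in system={L:.1f}")
    ```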

  4. Large-scale nanofabrication of periodic nanostructures using nanosphere-related techniques for green technology applications (Conference Presentation)

    Science.gov (United States)

    Yen, Chen-Chung; Wu, Jyun-De; Chien, Yi-Hsin; Wang, Chang-Han; Liu, Chi-Ching; Ku, Chen-Ta; Chen, Yen-Jon; Chou, Meng-Cheng; Chang, Yun-Chorng

    2016-09-01

    Nanotechnology has been developed for decades and many interesting optical properties have been demonstrated. However, the major hurdle for the further development of nanotechnology is finding economical ways to fabricate such nanostructures in large scale. Here, we demonstrate how to achieve low-cost fabrication using nanosphere-related techniques, such as Nanosphere Lithography (NSL) and Nanospherical-Lens Lithography (NLL). NSL is a low-cost nano-fabrication technique that has the ability to fabricate nano-triangle arrays that cover a very large area. NLL is a very similar technique that uses polystyrene nanospheres to focus the incoming ultraviolet light and expose the underlying photoresist (PR) layer. PR hole arrays form after developing. Metal nanodisk arrays can be fabricated following metal evaporation and lift-off processes. Nanodisk or nano-ellipse arrays with various sizes and aspect ratios are routinely fabricated in our research group. We also demonstrate that we can fabricate more complicated nanostructures, such as nanodisk oligomers; by combining several other key technologies, such as angled exposure and deposition, these methods can be modified to obtain various metallic nanostructures. The metallic structures are of high fidelity and in large scale. The metallic nanostructures can be transformed into semiconductor nanostructures and be used in several green technology applications.

  5. Scalable privacy-preserving big data aggregation mechanism

    Directory of Open Access Journals (Sweden)

    Dapeng Wu

    2016-08-01

    Full Text Available As the massive sensor data generated by large-scale Wireless Sensor Networks (WSNs) have recently become an indispensable part of ‘Big Data’, the collection, storage, transmission and analysis of the big sensor data attract considerable attention from researchers. Targeting the privacy requirements of large-scale WSNs and focusing on the energy-efficient collection of big sensor data, a Scalable Privacy-preserving Big Data Aggregation (Sca-PBDA) method is proposed in this paper. Firstly, according to the pre-established gradient topology structure, sensor nodes in the network are divided into clusters. Secondly, sensor data is modified by each node according to the privacy-preserving configuration message received from the sink. Subsequently, intra- and inter-cluster data aggregation is employed during the big sensor data reporting phase to reduce energy consumption. Lastly, aggregated results are recovered by the sink to complete the privacy-preserving big data aggregation. Simulation results validate the efficacy and scalability of Sca-PBDA and show that the big sensor data generated by large-scale WSNs is efficiently aggregated to reduce network resource consumption and the sensor data privacy is effectively protected to meet the ever-growing application requirements.
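
    One standard way to realize sink-recoverable, privacy-preserving additive aggregation (a sketch of the general idea, not necessarily the exact Sca-PBDA construction) is to have each node perturb its reading with a pseudorandom mask derived from a seed shared with the sink; relays only ever see and sum masked values, and the sink subtracts the total mask:

    ```python
    import random

    readings = {1: 23.4, 2: 17.9, 3: 30.2}         # node_id -> true sensor value
    seeds = {nid: 1000 + nid for nid in readings}  # per-node seeds shared with sink

    def mask(seed: int) -> float:
        return random.Random(seed).uniform(-100.0, 100.0)

    # Each node reports only its masked value; relays sum what they receive.
    masked_sum = sum(v + mask(seeds[nid]) for nid, v in readings.items())

    # The sink knows every seed, so it can strip the aggregate mask.
    recovered = masked_sum - sum(mask(s) for s in seeds.values())
    assert abs(recovered - sum(readings.values())) < 1e-9
    print(f"recovered aggregate = {recovered:.1f}")
    ```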

  6. NPTool: Towards Scalability and Reliability of Business Process Management

    Science.gov (United States)

    Braghetto, Kelly Rosa; Ferreira, João Eduardo; Pu, Calton

    Currently, one important challenge in business process management is to provide scalability and reliability of business process executions at the same time. This difficulty becomes more pronounced when the execution control must handle countless complex business processes. This work presents NavigationPlanTool (NPTool), a tool to control the execution of business processes. NPTool is supported by Navigation Plan Definition Language (NPDL), a language for business process specification that uses process algebra as its formal foundation. NPTool implements the NPDL language as a SQL extension. The main contribution of this paper is a description of NPTool showing how process algebra features combined with a relational database model can be used to provide scalable and reliable control of the execution of business processes. The next steps for NPTool include reuse of control-flow patterns and support for data flow management.

  7. Developing Scalable Information Security Systems

    Directory of Open Access Journals (Sweden)

    Valery Konstantinovich Ablekov

    2013-06-01

    Full Text Available Existing physical security systems have a wide range of shortcomings, including high cost, a large number of vulnerabilities, and problems with modification and support. This paper addresses the problem of developing systems without these drawbacks. The paper presents the architecture of an information security system which operates over the TCP/IP network protocol, including the ability to connect different types of devices and to integrate with existing security systems. The main advantage is a significant increase in system reliability and scalability, both vertical and horizontal, at minimal cost in both financial and time resources.

  8. Accounting Fundamentals and the Variation of Stock Price: Factoring in the Investment Scalability

    OpenAIRE

    Sumiyana, Sumiyana; Baridwan, Zaki; Sugiri, Slamet; Hartono, Jogiyanto

    2010-01-01

    This study develops a new return model with respect to accounting fundamentals. The new return model is based on Chen and Zhang (2007). This study takes into account the investment scalability information. Specifically, this study splits the scale of the firm’s operations into short-run and long-run investment scalabilities. We document that five accounting fundamentals explain the variation of annual stock return. The factors comprise book value, earnings yield, short-run and long-run investment s...

  9. Cascaded column generation for scalable predictive demand side management

    NARCIS (Netherlands)

    Toersche, Hermen; Molderink, Albert; Hurink, Johann L.; Smit, Gerardus Johannes Maria

    2014-01-01

    We propose a nested Dantzig-Wolfe decomposition, combined with dynamic programming, for the distributed scheduling of a large heterogeneous fleet of residential appliances with nonlinear behavior. A cascaded column generation approach gives a scalable optimization strategy, provided that the problem

  10. Impact of packet losses in scalable 3D holoscopic video coding

    Science.gov (United States)

    Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.

    2014-05-01

    Holoscopic imaging has become a prospective glasses-free 3D technology for providing more natural 3D viewing experiences to the end user. Additionally, holoscopic systems allow new post-production degrees of freedom, such as controlling the plane of focus or the viewing angle presented to the user. However, to successfully introduce this technology into the consumer market, a display-scalable coding approach is essential to achieve backward compatibility with legacy 2D and 3D displays. Moreover, to effectively transmit 3D holoscopic content over error-prone networks, e.g., wireless networks or the Internet, error resilience techniques are required to mitigate the impact of data impairments on the user's quality perception. Therefore, it is essential to deeply understand the impact of packet losses on decoded video quality for the specific case of 3D holoscopic content, notably when a scalable approach is used. In this context, this paper studies the impact of packet losses when using a previously proposed three-layer display-scalable 3D holoscopic video coding architecture, where each layer represents a different level of display scalability (i.e., L0 - 2D, L1 - stereo or multiview, and L2 - full 3D holoscopic). For this, a simple error concealment algorithm is used, which makes use of the inter-layer redundancy between multiview and 3D holoscopic content and the inherent correlation of the 3D holoscopic content to estimate lost data. Furthermore, a study of the influence of the 2D view generation parameters used in lower layers on the performance of the error concealment algorithm is also presented.

  11. A repeatable and scalable fabrication method for sharp, hollow silicon microneedles

    Science.gov (United States)

    Kim, H.; Theogarajan, L. S.; Pennathur, S.

    2018-03-01

    Scalability and manufacturability are impeding the mass commercialization of microneedles in the medical field. Specifically, microneedle geometries need to be sharp, beveled, and completely controllable, which is difficult to achieve with microelectromechanical fabrication techniques. In this work, we performed a parametric study using silicon etch chemistries to optimize the fabrication of scalable and manufacturable beveled hollow silicon microneedles. We theoretically verified our parametric results with diffusion-reaction equations and created a design guideline for a varied set of microneedles (80-160 µm needle base width, 100-1000 µm pitch, 40-50 µm inner bore diameter, and 150-350 µm height) to show the repeatability, scalability, and manufacturability of our process. As a result, hollow silicon microneedles of any dimensions can be fabricated with less than 2% non-uniformity across a wafer and 5% deviation between different process runs. The key to achieving such high uniformity and consistency is a non-agitated HF-HNO3 bath, silicon nitride masks, and surrounding silicon filler materials with well-defined dimensions. Our proposed method is not labor intensive, is well described by theory, and is straightforward for wafer-scale mass production, opening doors to a plethora of potential medical and biosensing applications.

  12. Fabrication of Scalable Indoor Light Energy Harvester and Study for Agricultural IoT Applications

    International Nuclear Information System (INIS)

    Watanabe, M; Nakamura, A; Kunii, A; Kusano, K; Futagawa, M

    2015-01-01

    A scalable indoor light energy harvester was fabricated by microelectromechanical system (MEMS) and printing hybrid technology and evaluated for agricultural IoT applications under different environmental input power density conditions, such as outdoor farming under the sun, greenhouse farming under scattered lighting, and a plant factory under LEDs. We fabricated and evaluated a dye-sensitized-type solar cell (DSC) as a low-cost and “scalable” optical harvester device. We developed a transparent conductive oxide (TCO)-less process with a honeycomb metal mesh substrate fabricated by MEMS technology. In terms of the electrical and optical properties, we achieved scalable harvester output power by cell area sizing. Second, we evaluated the dependence of the input power scalable characteristics on the input light intensity, spectrum distribution, and light inlet direction angle, because harvested environmental input power is unstable. The TiO2 fabrication relied on nanoimprint technology, which was designed for optical optimization and fabrication, and we confirmed that the harvesters are robust to a variety of environments. Finally, we studied optical energy harvesting applications for agricultural IoT systems. These scalable indoor light harvesters could be used in many applications and situations in smart agriculture. (paper)

  13. Fabrication of Scalable Indoor Light Energy Harvester and Study for Agricultural IoT Applications

    Science.gov (United States)

    Watanabe, M.; Nakamura, A.; Kunii, A.; Kusano, K.; Futagawa, M.

    2015-12-01

    A scalable indoor light energy harvester was fabricated by microelectromechanical system (MEMS) and printing hybrid technology and evaluated for agricultural IoT applications under different environmental input power density conditions, such as outdoor farming under the sun, greenhouse farming under scattered lighting, and a plant factory under LEDs. We fabricated and evaluated a dye-sensitized-type solar cell (DSC) as a low-cost and “scalable” optical harvester device. We developed a transparent conductive oxide (TCO)-less process with a honeycomb metal mesh substrate fabricated by MEMS technology. In terms of the electrical and optical properties, we achieved scalable harvester output power by cell area sizing. Second, we evaluated the dependence of the input power scalable characteristics on the input light intensity, spectrum distribution, and light inlet direction angle, because harvested environmental input power is unstable. The TiO2 fabrication relied on nanoimprint technology, which was designed for optical optimization and fabrication, and we confirmed that the harvesters are robust to a variety of environments. Finally, we studied optical energy harvesting applications for agricultural IoT systems. These scalable indoor light harvesters could be used in many applications and situations in smart agriculture.

  14. NiftyPET: a High-throughput Software Platform for High Quantitative Accuracy and Precision PET Imaging and Analysis.

    Science.gov (United States)

    Markiewicz, Pawel J; Ehrhardt, Matthias J; Erlandsson, Kjell; Noonan, Philip J; Barnes, Anna; Schott, Jonathan M; Atkinson, David; Arridge, Simon R; Hutton, Brian F; Ourselin, Sebastien

    2018-01-01

    We present a standalone, scalable and high-throughput software platform for PET image reconstruction and analysis. We focus on high fidelity modelling of the acquisition processes to provide high accuracy and precision quantitative imaging, especially for large axial field of view scanners. All the core routines are implemented using parallel computing available from within the Python package NiftyPET, enabling easy access, manipulation and visualisation of data at any processing stage. The pipeline of the platform starts from MR and raw PET input data and is divided into the following processing stages: (1) list-mode data processing; (2) accurate attenuation coefficient map generation; (3) detector normalisation; (4) exact forward and back projection between sinogram and image space; (5) estimation of reduced-variance random events; (6) high accuracy fully 3D estimation of scatter events; (7) voxel-based partial volume correction; (8) region- and voxel-level image analysis. We demonstrate the advantages of this platform using an amyloid brain scan where all the processing is executed from a single and uniform computational environment in Python. The high accuracy acquisition modelling is achieved through span-1 (no axial compression) ray tracing for true, random and scatter events. Furthermore, the platform offers uncertainty estimation of any image derived statistic to facilitate robust tracking of subtle physiological changes in longitudinal studies. The platform also supports the development of new reconstruction and analysis algorithms through restricting the axial field of view to any set of rings covering a region of interest and thus performing fully 3D reconstruction and corrections using real data significantly faster. All the software is available as open source with the accompanying wiki-page and test data.

  15. BUILDING A COMPLETE FREE AND OPEN SOURCE GIS INFRASTRUCTURE FOR HYDROLOGICAL COMPUTING AND DATA PUBLICATION USING GIS.LAB AND GISQUICK PLATFORMS

    Directory of Open Access Journals (Sweden)

    M. Landa

    2017-07-01

    Full Text Available Building a complete free and open source GIS computing and data publication platform can be a relatively easy task. This paper describes an automated deployment of such a platform using two open source software projects – GIS.lab and Gisquick. GIS.lab (http://web.gislab.io) is a project for rapid deployment of a complete, centrally managed and horizontally scalable GIS infrastructure in a local area network, data center or cloud. It provides a comprehensive set of free geospatial software seamlessly integrated into one easy-to-use system. A platform for GIS computing (in our case demonstrated on hydrological data processing) requires core components such as a geoprocessing server, a map server, and a computation engine, e.g. GRASS GIS, SAGA, or other similar GIS software. All these components can be rapidly and automatically deployed by the GIS.lab platform. In our demonstrated solution, PyWPS is used for serving WPS processes built on top of the GRASS GIS computation platform. GIS.lab can be easily extended by other components running in Docker containers. This approach is demonstrated by the seamless integration of Gisquick. Gisquick (http://gisquick.org) is an open source platform for publishing geospatial data in the sense of rapid sharing of QGIS projects on the web. The platform consists of a QGIS plugin, a Django-based server application, a QGIS server, and web/mobile clients. This paper shows how to easily deploy a complete open source GIS infrastructure supporting all required operations: data preparation on the desktop, data sharing, and geospatial computation as a service. It also includes data publication in the sense of OGC Web Services and, importantly, as interactive web mapping applications.

  16. Exploiting light chains for the scalable generation and platform purification of native human bispecific IgG

    Science.gov (United States)

    Fischer, Nicolas; Elson, Greg; Magistrelli, Giovanni; Dheilly, Elie; Fouque, Nicolas; Laurendon, Amélie; Gueneau, Franck; Ravn, Ulla; Depoisier, Jean-François; Moine, Valery; Raimondi, Sylvain; Malinge, Pauline; Di Grazia, Laura; Rousseau, François; Poitevin, Yves; Calloud, Sébastien; Cayatte, Pierre-Alexis; Alcoz, Mathias; Pontini, Guillemette; Fagète, Séverine; Broyer, Lucile; Corbier, Marie; Schrag, Delphine; Didelot, Gérard; Bosson, Nicolas; Costes, Nessie; Cons, Laura; Buatois, Vanessa; Johnson, Zoe; Ferlin, Walter; Masternak, Krzysztof; Kosco-Vilbois, Marie

    2015-01-01

    Bispecific antibodies enable unique therapeutic approaches but it remains a challenge to produce them at the industrial scale, and the modifications introduced to achieve bispecificity often have an impact on stability and risk of immunogenicity. Here we describe a fully human bispecific IgG devoid of any modification, which can be produced at the industrial scale, using a platform process. This format, referred to as a κλ-body, is assembled by co-expressing one heavy chain and two different light chains, one κ and one λ. Using ten different targets, we demonstrate that light chains can play a dominant role in mediating specificity and high affinity. The κλ-bodies support multiple modes of action, and their stability and pharmacokinetic properties are indistinguishable from therapeutic antibodies. Thus, the κλ-body represents a unique, fully human format that exploits light-chain variable domains for antigen binding and light-chain constant domains for robust downstream processing, to realize the potential of bispecific antibodies. PMID:25672245

  17. EvAg: A Scalable Peer-to-Peer Evolutionary Algorithm

    NARCIS (Netherlands)

    Laredo, J.L.J.; Eiben, A.E.; van Steen, M.R.; Merelo, J.J.

    2010-01-01

    This paper studies the scalability of an Evolutionary Algorithm (EA) whose population is structured by means of a gossiping protocol and where the evolutionary operators act exclusively within the local neighborhoods. This makes the algorithm inherently suited for parallel execution in a

  18. GPU-based Scalable Volumetric Reconstruction for Multi-view Stereo

    Energy Technology Data Exchange (ETDEWEB)

    Kim, H; Duchaineau, M; Max, N

    2011-09-21

    We present a new scalable volumetric reconstruction algorithm for multi-view stereo using a graphics processing unit (GPU). It is an effectively parallelized GPU algorithm that simultaneously uses a large number of GPU threads, each of which performs voxel carving, in order to integrate depth maps with images from multiple views. Each depth map, triangulated from pair-wise semi-dense correspondences, represents a view-dependent surface of the scene. This algorithm also provides scalability for large-scale scene reconstruction in a high resolution voxel grid by utilizing streaming and parallel computation. The output is a photo-realistic 3D scene model in a volumetric or point-based representation. We demonstrate the effectiveness and the speed of our algorithm with a synthetic scene and real urban/outdoor scenes. Our method can also be integrated with existing multi-view stereo algorithms such as PMVS2 to fill holes or gaps in textureless regions.

  19. Parallel scalability and efficiency of vortex particle method for aeroelasticity analysis of bluff bodies

    Science.gov (United States)

    Tolba, Khaled Ibrahim; Morgenthal, Guido

    2018-01-01

    This paper presents an analysis of the scalability and efficiency of a simulation framework based on the vortex particle method. The code is applied to the numerical aerodynamic analysis of line-like structures. The numerical code runs on multicore CPU and GPU architectures using the OpenCL framework. The focus of this paper is the analysis of the parallel efficiency and scalability of the method applied to an engineering test case, specifically the aeroelastic response of a long-span bridge girder at the construction stage. The target is to assess the optimal configuration and the required computer architecture, such that it becomes feasible to efficiently utilise the method within the computational resources available to a regular engineering office. The simulations and the scalability analysis are performed on a regular gaming-type computer.

  20. A scalable infrastructure for CMS data analysis based on OpenStack Cloud and Gluster file system

    Science.gov (United States)

    Toor, S.; Osmani, L.; Eerola, P.; Kraemer, O.; Lindén, T.; Tarkoma, S.; White, J.

    2014-06-01

    The challenge of providing a resilient and scalable computational and data management solution for massive scale research environments requires continuous exploration of new technologies and techniques. In this project the aim has been to design a scalable and resilient infrastructure for CERN HEP data analysis. The infrastructure is based on OpenStack components for structuring a private Cloud with the Gluster File System. We integrate the state-of-the-art Cloud technologies with the traditional Grid middleware infrastructure. Our test results show that the adopted approach provides a scalable and resilient solution for managing resources without compromising on performance and high availability.

  1. A scalable infrastructure for CMS data analysis based on OpenStack Cloud and Gluster file system

    International Nuclear Information System (INIS)

    Toor, S; Eerola, P; Kraemer, O; Lindén, T; Osmani, L; Tarkoma, S; White, J

    2014-01-01

    The challenge of providing a resilient and scalable computational and data management solution for massive scale research environments requires continuous exploration of new technologies and techniques. In this project the aim has been to design a scalable and resilient infrastructure for CERN HEP data analysis. The infrastructure is based on OpenStack components for structuring a private Cloud with the Gluster File System. We integrate the state-of-the-art Cloud technologies with the traditional Grid middleware infrastructure. Our test results show that the adopted approach provides a scalable and resilient solution for managing resources without compromising on performance and high availability.

  2. Scalable Performance Measurement and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Gamblin, Todd [Univ. of North Carolina, Chapel Hill, NC (United States)

    2009-01-01

    Concurrency levels in large-scale, distributed-memory supercomputers are rising exponentially. Modern machines may contain 100,000 or more microprocessor cores, and the largest of these, IBM's Blue Gene/L, contains over 200,000 cores. Future systems are expected to support millions of concurrent tasks. In this dissertation, we focus on efficient techniques for measuring and analyzing the performance of applications running on very large parallel machines. Tuning the performance of large-scale applications can be a subtle and time-consuming task because application developers must measure and interpret data from many independent processes. While the volume of the raw data scales linearly with the number of tasks in the running system, the number of tasks is growing exponentially, and data for even small systems quickly becomes unmanageable. Transporting performance data from so many processes over a network can perturb application performance and make measurements inaccurate, and storing such data would require a prohibitive amount of space. Moreover, even if it were stored, analyzing the data would be extremely time-consuming. In this dissertation, we present novel methods for reducing performance data volume. The first draws on multi-scale wavelet techniques from signal processing to compress systemwide, time-varying load-balance data. The second uses statistical sampling to select a small subset of running processes to generate low-volume traces. A third approach combines sampling and wavelet compression to stratify performance data adaptively at run-time and to reduce further the cost of sampled tracing. We have integrated these approaches into Libra, a toolset for scalable load-balance analysis. We present Libra and show how it can be used to analyze data from large scientific applications scalably.

  3. Heat-treated stainless steel felt as scalable anode material for bioelectrochemical systems.

    Science.gov (United States)

    Guo, Kun; Soeriyadi, Alexander H; Feng, Huajun; Prévoteau, Antonin; Patil, Sunil A; Gooding, J Justin; Rabaey, Korneel

    2015-11-01

    This work reports a simple and scalable method to convert stainless steel (SS) felt into an effective anode for bioelectrochemical systems (BESs) by means of heat treatment. X-ray photoelectron spectroscopy and cyclic voltammetry elucidated that the heat treatment generated an iron-oxide-rich layer on the SS felt surface. The iron oxide layer dramatically enhanced electroactive biofilm formation on the SS felt surface in BESs. Consequently, the sustained current densities achieved on the treated electrodes (1 cm²) were around 1.5±0.13 mA/cm², seven times higher than on the untreated electrodes (0.22±0.04 mA/cm²). To test the scalability of this material, the heat-treated SS felt was scaled up to 150 cm² and a similar current density (1.5 mA/cm²) was achieved on the larger electrode. The low cost, straightforwardness of the treatment, high conductivity and high bioelectrocatalytic performance make heat-treated SS felt a scalable anodic material for BESs. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Payment Platform

    DEFF Research Database (Denmark)

    Hjelholt, Morten; Damsgaard, Jan

    2012-01-01

    thoroughly and substitute current payment standards in the decades to come. This paper portrays how digital payment platforms evolve in socio-technical niches and how various technological platforms aim for institutional attention in their attempt to challenge earlier platforms and standards. The paper ... applies a co-evolutionary multilevel perspective to model the interplay and processes between technology and society wherein digital payment platforms potentially will substitute other payment platforms just as the credit card negated the check. On this basis this paper formulates a multilevel conceptual ...

  5. Scalable Frequent Subgraph Mining

    KAUST Repository

    Abdelhamid, Ehab

    2017-06-19

    A graph is a data structure that contains a set of nodes and a set of edges connecting these nodes. Nodes represent objects while edges model relationships among these objects. Graphs are used in various domains due to their ability to model complex relations among several objects. Given an input graph, the Frequent Subgraph Mining (FSM) task finds all subgraphs with frequencies exceeding a given threshold. FSM is crucial for graph analysis, and it is an essential building block in a variety of applications, such as graph clustering and indexing. FSM is computationally expensive, and its existing solutions are extremely slow. Consequently, these solutions are incapable of mining modern large graphs. This slowness is caused by the underlying approaches of these solutions, which require finding and storing an excessive number of subgraph matches. This dissertation proposes a scalable solution for FSM that avoids the limitations of previous work. This solution is composed of four components. The first component is a single-threaded technique which, for each candidate subgraph, needs to find only a minimal number of matches. The second component is a scalable parallel FSM technique that utilizes a novel two-phase approach. The first phase quickly builds an approximate search space, which is then used by the second phase to optimize and balance the workload of the FSM task. The third component focuses on accelerating frequency evaluation, which is a critical step in FSM. To do so, a machine learning model is employed to predict the type of each graph node, and accordingly, an optimized method is selected to evaluate that node. The fourth component focuses on mining dynamic graphs, such as social networks. To this end, an incremental index is maintained during the dynamic updates. Only this index is processed and updated for the majority of graph updates. Consequently, search space is significantly pruned and efficiency is improved. The empirical evaluation shows that the

  6. Design for scalability in 3D computer graphics architectures

    DEFF Research Database (Denmark)

    Holten-Lund, Hans Erik

    2002-01-01

    This thesis describes useful methods and techniques for designing scalable hybrid parallel rendering architectures for 3D computer graphics. Various techniques for utilizing parallelism in a pipelines system are analyzed. During the Ph.D study a prototype 3D graphics architecture named Hybris has...

  7. PTaaS: Platform for Providing Software Developing Applications and Tools as a Service

    DEFF Research Database (Denmark)

    Chauhan, Muhammad Aufeef; Babar, Muhammad Ali

    2014-01-01

    Cloud computing has become an established paradigm for enabling organizations to build scalable software systems and to meet the challenges of rapid demand for computing and storage resources. There has been significant success in building cloud-enabled applications for many disciplines, ranging from ... technological support for it that is not limited to one specific tool or a particular phase of the software development life cycle. In this thesis, we have explored the possibility of offering software development applications and tools as services that can be acquired on demand according to the software ... with process. Information gained from the review of literature on GSD tools and processes is used to extract functional requirements for the middleware platform for provisioning of software development applications and tools as services. Findings from the review of literature on architecture solutions for cloud ...

  8. Scalable Content Authentication in H.264/SVC Videos Using Perceptual Hashing based on Dempster-Shafer theory

    Directory of Open Access Journals (Sweden)

    Ye Dengpan

    2012-09-01

    Full Text Available The content authenticity of multimedia delivery is an important issue given the rapid development and wide use of multimedia technology. Many authentication solutions have been proposed to date, such as cryptography- and watermarking-based methods. However, in the latest heterogeneous networks, video streams are transmitted in scalably coded form, such as H.264/SVC, for which there is still no good authentication solution. In this paper, we first summarize related work and then propose a scalable content authentication scheme using ratio-of-different-energy (RDE)-based perceptual hashing in the Q/S dimension, which uses Dempster-Shafer theory and is combined with the latest scalable video coding (H.264/SVC) construction. The idea of “sign once and verify in a scalable way” can thus be realized. Compared with previous methods, the proposed perceptual-hashing-based scheme outperforms previous works in uncertainty (robustness) and efficiency for H.264/SVC video streams. Finally, the experimental results verify the performance of our scheme.
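
    A stripped-down illustration of a ratio-of-energy perceptual hash with Hamming-distance verification follows; it omits the Q/S-dimension handling and Dempster-Shafer fusion described above, and the block size and threshold are assumptions:

    ```python
    import numpy as np

    def perceptual_hash(frame: np.ndarray, block: int = 8) -> np.ndarray:
        """One bit per block pair: is the left block's energy larger?"""
        h, w = (d - d % block for d in frame.shape)
        blocks = frame[:h, :w].reshape(h // block, block, w // block, block)
        energy = (blocks.astype(np.float64) ** 2).sum(axis=(1, 3)).ravel()
        return (energy[:-1] > energy[1:]).astype(np.uint8)

    def matches(h1: np.ndarray, h2: np.ndarray, tau: float = 0.1) -> bool:
        return np.mean(h1 != h2) < tau    # normalized Hamming distance test

    rng = np.random.default_rng(0)
    frame = rng.integers(0, 256, (64, 64))
    noisy = np.clip(frame + rng.normal(0, 2, frame.shape), 0, 255)  # mild re-encoding noise
    print(matches(perceptual_hash(frame), perceptual_hash(noisy)))  # expect True
    ```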

  9. Towards Bandwidth Scalable Transceiver Technology for Optical Metro-Access Networks

    DEFF Research Database (Denmark)

    Spolitis, Sandis; Bobrovs, Vjaceslavs; Wagner, Christoph

    2015-01-01

    Massive fiber-to-the-home network deployment is creating a challenge for telecommunications network operators: an exponential increase of the power consumption at the central offices and a never-ending quest for equipment upgrades operating at higher bandwidth. In this paper, we report on a flexible ... signal slicing technique, which allows transmission of high-bandwidth signals via low-bandwidth electrical and optoelectrical equipment. The presented signal slicing technique is highly scalable in terms of bandwidth, which is determined by the number of slices used. In this paper, the performance of a scalable sliceable transceiver for a 1 Gbit/s non-return-to-zero (NRZ) signal sliced into two slices is presented. Digital signal processing (DSP) power consumption and latency values for the proposed sliceable transceiver technique are also discussed. In this research, post-FEC (7% overhead) error-free transmission has ...

  10. Proof of Stake Blockchain: Performance and Scalability for Groupware Communications

    DEFF Research Database (Denmark)

    Spasovski, Jason; Eklund, Peter

    2017-01-01

    A blockchain is a distributed transaction ledger, a disruptive technology that creates new possibilities for digital ecosystems. The blockchain ecosystem maintains an immutable transaction record to support many types of digital services. This paper compares the performance and scalability of a web-based groupware communication application using both non-blockchain and blockchain technologies. Scalability is measured where message load is synthesized over two typical communication topologies. The first is a 1-to-n network -- a typical client-server or star topology with a central vertex (server) receiving all messages from the remaining n - 1 vertices (clients). The second is a more naturally occurring scale-free network topology, where multiple communication hubs are distributed throughout the network. System performance is tested with both blockchain and non-blockchain solutions using multiple cloud computing ...

  11. SCALABLE TIME SERIES CHANGE DETECTION FOR BIOMASS MONITORING USING GAUSSIAN PROCESS

    Data.gov (United States)

    National Aeronautics and Space Administration — Scalable Time Series Change Detection for Biomass Monitoring Using Gaussian Process. Varun Chandola and Ranga Raju Vatsavai. Abstract: Biomass monitoring,...

  12. Big data integration: scalability and sustainability

    KAUST Repository

    Zhang, Zhang

    2016-01-26

    Integration of various types of omics data is indispensable for addressing the most important and complex biological questions. In the era of big data, however, data integration becomes increasingly tedious, time-consuming and expensive, posing a significant obstacle to fully exploiting the wealth of big biological data. Here we propose a scalable and sustainable architecture that integrates big omics data through community-contributed modules. Community modules are contributed and maintained by different committed groups; each module corresponds to a specific data type, deals with data collection, processing and visualization, and delivers data on demand via web services. Based on this community-based architecture, we build Information Commons for Rice (IC4R; http://ic4r.org), a rice knowledgebase that integrates a variety of rice omics data from multiple community modules, including genome-wide expression profiles derived entirely from RNA-Seq data, resequencing-based genomic variations obtained from re-sequencing data of thousands of rice varieties, plant homologous genes covering multiple diverse plant species, post-translational modifications, rice-related literature, and community annotations. Taken together, such an architecture achieves integration of different types of data from multiple community-contributed modules and accordingly features scalable, sustainable and collaborative integration of big data as well as low costs for database update and maintenance, making it helpful for building IC4R into a comprehensive knowledgebase covering all aspects of rice data and beneficial for both basic and translational research.

  13. Programming Scala Scalability = Functional Programming + Objects

    CERN Document Server

    Wampler, Dean

    2009-01-01

    Learn how to be more productive with Scala, a new multi-paradigm language for the Java Virtual Machine (JVM) that integrates features of both object-oriented and functional programming. With this book, you'll discover why Scala is ideal for highly scalable, component-based applications that support concurrency and distribution. Programming Scala clearly explains the advantages of Scala as a JVM language. You'll learn how to leverage the wealth of Java class libraries to meet the practical needs of enterprise and Internet projects more easily. Packed with code examples, this book provides us

  14. Grassmann Averages for Scalable Robust PCA

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Black, Michael J.

    2014-01-01

    As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can ... to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...
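
    The core iteration behind this family of methods can be sketched compactly: repeatedly average the sign-aligned observations, robustified here with SciPy's trimmed mean. This is a simplified reading of the published algorithm (no convergence test, assumed trim fraction):

    ```python
    import numpy as np
    from scipy.stats import trim_mean

    def robust_first_component(X, trim=0.2, iters=30, seed=0):
        """Trimmed sign-aligned averaging for a robust leading component."""
        rng = np.random.default_rng(seed)
        u = rng.standard_normal(X.shape[1])
        u /= np.linalg.norm(u)
        for _ in range(iters):
            signs = np.sign(X @ u)              # align every observation with u
            signs[signs == 0] = 1.0
            m = trim_mean(signs[:, None] * X, trim, axis=0)  # per-coordinate trimmed mean
            u = m / np.linalg.norm(m)
        return u

    X = np.random.default_rng(1).standard_normal((500, 20))
    X[:10] *= 50                                # plant a few gross outliers
    print(robust_first_component(X)[:5])        # largely unaffected by the outliers
    ```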

  15. Domain decomposition method of stochastic PDEs: a two-level scalable preconditioner

    International Nuclear Information System (INIS)

    Subber, Waad; Sarkar, Abhijit

    2012-01-01

    For uncertainty quantification in many practical engineering problems, the stochastic finite element method (SFEM) may be computationally challenging. In SFEM, the size of the algebraic linear system grows rapidly with the spatial mesh resolution and the order of the stochastic dimension. In this paper, we describe a non-overlapping domain decomposition method, namely the iterative substructuring method, to tackle the large-scale linear system arising in the SFEM. The SFEM is based on domain decomposition in the geometric space and a polynomial chaos expansion in the probabilistic space. In particular, a two-level scalable preconditioner is proposed for the iterative solver of the interface problem for the stochastic systems. The preconditioner is equipped with a coarse problem which globally connects the subdomains both in the geometric and probabilistic spaces via their corner nodes. This coarse problem propagates the information quickly across the subdomains, leading to a scalable preconditioner. For numerical illustrations, a two-dimensional stochastic elliptic partial differential equation (SPDE) with spatially varying non-Gaussian random coefficients is considered. The numerical scalability of the preconditioner is investigated with respect to the mesh size, subdomain size, fixed problem size per subdomain and order of polynomial chaos expansion. The numerical experiments are performed on a Linux cluster using MPI and PETSc parallel libraries.
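
    The flavor of a two-level preconditioner can be illustrated on a plain deterministic 1D Laplacian (not the stochastic interface problem above): exact non-overlapping subdomain solves form the fine level, and a piecewise-constant coarse space supplies the global coupling that keeps iteration counts scalable. A schematic SciPy sketch, with assumed problem and subdomain sizes:

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import LinearOperator, cg, splu

    n, nsub = 256, 8                      # assumed global size and subdomain count
    size = n // nsub
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

    # Fine level: exact solves on non-overlapping diagonal blocks (block Jacobi).
    local_lu = [splu(A[i*size:(i+1)*size, i*size:(i+1)*size].tocsc())
                for i in range(nsub)]

    # Coarse level: one piecewise-constant basis function per subdomain.
    R0 = sp.kron(sp.eye(nsub), np.ones((1, size)) / size).tocsc()
    A0_lu = splu((R0 @ A @ R0.T).tocsc())

    def apply_M(r):
        z = np.zeros_like(r)
        for i in range(nsub):                         # independent local solves
            z[i*size:(i+1)*size] = local_lu[i].solve(r[i*size:(i+1)*size])
        return z + R0.T @ A0_lu.solve(R0 @ r)         # additive coarse correction

    b = np.ones(n)
    x, info = cg(A, b, M=LinearOperator((n, n), matvec=apply_M))
    print("converged" if info == 0 else f"cg info = {info}")
    ```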

  16. Embedded DCT and wavelet methods for fine granular scalable video: analysis and comparison

    Science.gov (United States)

    van der Schaar-Mitrea, Mihaela; Chen, Yingwei; Radha, Hayder

    2000-04-01

    Video transmission over bandwidth-varying networks is becoming increasingly important due to emerging applications such as streaming of video over the Internet. The fundamental obstacle in designing such systems resides in the varying characteristics of the Internet (i.e. bandwidth variations and packet-loss patterns). In MPEG-4, a new SNR scalability scheme, called Fine-Granular-Scalability (FGS), is currently under standardization, which is able to adapt in real-time (i.e. at transmission time) to Internet bandwidth variations. The FGS framework consists of a non-scalable motion-predicted base-layer and an intra-coded fine-granular scalable enhancement layer. For example, the base layer can be coded using a DCT-based MPEG-4 compliant, highly efficient video compression scheme. Subsequently, the difference between the original and decoded base-layer is computed, and the resulting FGS-residual signal is intra-frame coded with an embedded scalable coder. In order to achieve high coding efficiency when compressing the FGS enhancement layer, it is crucial to analyze the nature and characteristics of residual signals common to the SNR scalability framework (including FGS). In this paper, we present a thorough analysis of SNR residual signals by evaluating its statistical properties, compaction efficiency and frequency characteristics. The signal analysis revealed that the energy compaction of the DCT and wavelet transforms is limited and the frequency characteristic of SNR residual signals decay rather slowly. Moreover, the blockiness artifacts of the low bit-rate coded base-layer result in artificial high frequencies in the residual signal. Subsequently, a variety of wavelet and embedded DCT coding techniques applicable to the FGS framework are evaluated and their results are interpreted based on the identified signal properties. As expected from the theoretical signal analysis, the rate-distortion performances of the embedded wavelet and DCT-based coders are very

  17. Scalability and efficiency of genetic algorithms for geometrical applications

    NARCIS (Netherlands)

    Dijk, van S.F.; Thierens, D.; Berg, de M.; Schoenauer, M.

    2000-01-01

    We study the scalability and efficiency of a GA that we developed earlier to solve the practical cartographic problem of labeling a map with point features. We argue that the special characteristics of our GA make that it fits in well with theoretical models predicting the optimal population size

  18. Estimates of the Sampling Distribution of Scalability Coefficient H

    Science.gov (United States)

    Van Onna, Marieke J. H.

    2004-01-01

    Coefficient "H" is used as an index of scalability in nonparametric item response theory (NIRT). It indicates the degree to which a set of items rank orders examinees. Theoretical sampling distributions, however, have only been derived asymptotically and only under restrictive conditions. Bootstrap methods offer an alternative possibility to…

  19. Data Mining Based on Cloud-Computing Technology

    Directory of Open Access Journals (Sweden)

    Ren Ying

    2016-01-01

    Full Text Available There are performance bottlenecks and scalability problems when a traditional data-mining system is used in cloud computing. In this paper, we present a data-mining platform based on cloud computing. Compared with a traditional data-mining system, this platform is highly scalable, has massive data processing capacity, is service-oriented, and has low hardware cost. This platform can support the design and application of a wide range of distributed data-mining systems.

  20. LoRa Scalability: A Simulation Model Based on Interference Measurements.

    Science.gov (United States)

    Haxhibeqiri, Jetmir; Van den Abeele, Floris; Moerman, Ingrid; Hoebeke, Jeroen

    2017-05-23

    LoRa is a long-range, low power, low bit rate and single-hop wireless communication technology. It is intended to be used in Internet of Things (IoT) applications involving battery-powered devices with low throughput requirements. A LoRaWAN network consists of multiple end nodes that communicate with one or more gateways. These gateways act like a transparent bridge towards a common network server. The amount of end devices and their throughput requirements will have an impact on the performance of the LoRaWAN network. This study investigates the scalability in terms of the number of end devices per gateway of single-gateway LoRaWAN deployments. First, we determine the intra-technology interference behavior with two physical end nodes, by checking the impact of an interfering node on a transmitting node. Measurements show that even under concurrent transmission, one of the packets can be received under certain conditions. Based on these measurements, we create a simulation model for assessing the scalability of a single gateway LoRaWAN network. We show that when the number of nodes increases up to 1000 per gateway, the losses will be up to 32%. In such a case, pure Aloha will have around 90% losses. However, when the duty cycle of the application layer becomes lower than the allowed radio duty cycle of 1%, losses will be even lower. We also show network scalability simulation results for some IoT use cases based on real data.
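
    The pure-Aloha baseline quoted above is easy to reproduce with a small Monte-Carlo simulation: a packet is lost whenever another node starts transmitting within one airtime of its own start. The airtime, population and reporting period below are assumptions chosen to land near the ~90% figure; real LoRa behaves better thanks to orthogonal spreading factors and capture effects.

    ```python
    import random

    def aloha_loss(n_nodes=1000, t_air=0.1, period=80.0, trials=20):
        """Fraction of packets lost to collisions under pure Aloha."""
        loss = 0.0
        for _ in range(trials):
            starts = sorted(random.uniform(0.0, period) for _ in range(n_nodes))
            collided = sum(
                1 for i, s in enumerate(starts)
                if (i > 0 and s - starts[i - 1] < t_air)
                or (i < n_nodes - 1 and starts[i + 1] - s < t_air)
            )
            loss += collided / n_nodes
        return loss / trials

    # 1000 nodes, 100 ms airtime (assumed), one packet per node per 80 s:
    print(f"pure-Aloha loss ~ {aloha_loss():.0%}")   # ~90%
    ```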

  1. qPortal: A platform for data-driven biomedical research.

    Science.gov (United States)

    Mohr, Christopher; Friedrich, Andreas; Wojnar, David; Kenar, Erhan; Polatkan, Aydin Can; Codrea, Marius Cosmin; Czemmel, Stefan; Kohlbacher, Oliver; Nahnsen, Sven

    2018-01-01

    Modern biomedical research aims at drawing biological conclusions from large, highly complex biological datasets. It has become common practice to make extensive use of high-throughput technologies that produce big amounts of heterogeneous data. In addition to the ever-improving accuracy, methods are getting faster and cheaper, resulting in a steadily increasing need for scalable data management and easily accessible means of analysis. We present qPortal, a platform providing users with an intuitive way to manage and analyze quantitative biological data. The backend leverages a variety of concepts and technologies, such as relational databases, data stores, data models and means of data transfer, as well as front-end solutions to give users access to data management and easy-to-use analysis options. Users are empowered to conduct their experiments from the experimental design to the visualization of their results through the platform. Here, we illustrate the feature-rich portal by simulating a biomedical study based on publicly available data. We demonstrate the software's strength in supporting the entire project life cycle. The software supports the project design and registration, empowers users to do all-digital project management and finally provides means to perform analysis. We compare our approach to Galaxy, one of the most widely used scientific workflow and analysis platforms in computational biology. Application of both systems to a small case study shows the differences between a data-driven approach (qPortal) and a workflow-driven approach (Galaxy). qPortal, a one-stop-shop solution for biomedical projects, offers up-to-date analysis pipelines, quality control workflows, and visualization tools. Through intensive user interactions, appropriate data models have been developed. These models build the foundation of our biological data management system and provide possibilities to annotate data, query metadata for statistics and future re-analysis on

  2. Product Platform Performance

    DEFF Research Database (Denmark)

    Munk, Lone

    The aim of this research is to improve understanding of platform-based product development by studying platform performance in relation to internal effects in companies. Platform-based product development makes it possible to deliver product variety and at the same time reduce the needed resources ... engaging in platform-based product development. Similarly platform assessment criteria lack empirical verification regarding relevance and sufficiency. The thesis focuses on • the process of identifying and estimating internal effects, • verification of performance of product platforms, (i ... experienced representatives from the different life systems phase systems of the platform products. The effects are estimated and modeled within different scenarios, taking into account financial and real option aspects. The model illustrates and supports estimation and quantification of internal platform...

  3. Scalable Light Module for Low-Cost, High-Efficiency Light- Emitting Diode Luminaires

    Energy Technology Data Exchange (ETDEWEB)

    Tarsa, Eric [Cree, Inc., Goleta, CA (United States)

    2015-08-31

    During this two-year program Cree developed a scalable, modular optical architecture for low-cost, high-efficacy light emitting diode (LED) luminaires. Stated simply, the goal of this architecture was to efficiently and cost-effectively convey light from LEDs (point sources) to broad luminaire surfaces (area sources). By simultaneously developing warm-white LED components and low-cost, scalable optical elements, a high system optical efficiency resulted. To meet program goals, Cree evaluated novel approaches to improve LED component efficacy at high color quality while not sacrificing LED optical efficiency relative to conventional packages. Meanwhile, efficiently coupling light from LEDs into modular optical elements, followed by optimally distributing and extracting this light, were challenges that were addressed via novel optical design coupled with frequent experimental evaluations. Minimizing luminaire bill of materials and assembly costs were two guiding principles for all design work, in the effort to achieve luminaires with significantly lower normalized cost ($/klm) than existing LED fixtures. Chief project accomplishments included the achievement of >150 lm/W warm-white LEDs having primary optics compatible with low-cost modular optical elements. In addition, a prototype Light Module optical efficiency of over 90% was measured, demonstrating the potential of this scalable architecture for ultra-high-efficacy LED luminaires. Since the project ended, Cree has continued to evaluate optical element fabrication and assembly methods in an effort to rapidly transfer this scalable, cost-effective technology to Cree production development groups. The Light Module concept is likely to make a strong contribution to the development of new cost-effective, high-efficacy luminaires, thereby accelerating widespread adoption of energy-saving SSL in the U.S.

  4. NCI's Transdisciplinary High Performance Scientific Data Platform

    Science.gov (United States)

    Evans, Ben; Antony, Joseph; Bastrakova, Irina; Car, Nicholas; Cox, Simon; Druken, Kelsey; Evans, Bradley; Fraser, Ryan; Ip, Alex; Kemp, Carina; King, Edward; Minchin, Stuart; Larraondo, Pablo; Pugh, Tim; Richards, Clare; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2016-04-01

    The Australian National Computational Infrastructure (NCI) manages Earth Systems data collections sourced from several domains and organisations onto a single High Performance Data (HPD) Node to further Australia's national priority research and innovation agenda. The NCI HPD Node has rapidly established its value, currently managing over 10 PBytes of datasets from collections that span a wide range of disciplines including climate, weather, environment, geoscience, geophysics, water resources and social sciences. Importantly, in order to facilitate broad user uptake, maximise reuse and enable transdisciplinary access through software and standardised interfaces, the datasets, associated information systems and processes have been incorporated into the design and operation of a unified platform that NCI has called the National Environmental Research Data Interoperability Platform (NERDIP). The key goal of the NERDIP is to regularise data access so that it is easily discoverable, interoperable for different domains and enabled for high performance methods. It adopts and implements international standards and data conventions, and promotes scientific integrity within a high performance computing and data analysis environment. NCI has established a rich and flexible computing environment to access this data, through the NCI supercomputer; a private cloud that supports both domain-focused virtual laboratories and in-common interactive analysis interfaces; as well as remotely through scalable data services. Data collections of this importance must be managed with careful consideration of both their current use and the needs of the end-communities, as well as their future potential use, such as transitioning to more advanced software and improved methods. It is therefore critical that the data platform is both well-managed and trusted for stable production use (including transparency and reproducibility), agile enough to incorporate new technological advances and

  5. Scalable force directed graph layout algorithms using fast multipole methods

    KAUST Repository

    Yunis, Enas Abdulrahman; Yokota, Rio; Ahmadia, Aron

    2012-01-01

    We present an extension to ExaFMM, a Fast Multipole Method library, as a generalized approach for fast and scalable execution of the Force-Directed Graph Layout algorithm. The Force-Directed Graph Layout algorithm is a physics-based approach
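
    The abstract is truncated in this record; for orientation, one iteration of a naive force-directed layout looks like the sketch below (illustrative NumPy code, not the ExaFMM implementation). The all-pairs repulsion is the O(N^2) term that a Fast Multipole Method evaluates in near-linear time:

```python
import numpy as np

def layout_step(pos, edges, k=1.0, dt=0.05):
    """One naive force-directed iteration (O(N^2) repulsion).

    pos   : (N, 2) array of node coordinates
    edges : list of (i, j) index pairs
    """
    diff = pos[:, None, :] - pos[None, :, :]          # pairwise displacements
    dist2 = (diff ** 2).sum(-1) + 1e-9                # avoid div-by-zero on i == j
    force = (k * k * diff / dist2[..., None]).sum(1)  # repulsion ~ k^2 / d
    for i, j in edges:                                # spring attraction ~ d^2 / k
        d = pos[j] - pos[i]
        f = np.linalg.norm(d) * d / k
        force[i] += f
        force[j] -= f
    return pos + dt * force

# Toy usage: relax a 4-cycle from random positions.
pos = np.random.default_rng(0).normal(size=(4, 2))
for _ in range(100):
    pos = layout_step(pos, [(0, 1), (1, 2), (2, 3), (3, 0)])
```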

  6. ESVD: An Integrated Energy Scalable Framework for Low-Power Video Decoding Systems

    Directory of Open Access Journals (Sweden)

    Wen Ji

    2010-01-01

    Full Text Available Video applications on mobile wireless devices are challenging due to the limited capacity of batteries. The complex functionality of video decoding imposes high resource requirements, so power-efficient control has become a critical design concern for devices integrating complex video processing techniques. Previous work on power-efficient control in video decoding systems often aims at low-complexity design, does not explicitly consider the scalable impact of subfunctions in the decoding process, and seldom considers the characteristics of the compressed video data. This paper is dedicated to developing an energy-scalable video decoding (ESVD) strategy for energy-limited mobile terminals. First, ESVD can dynamically adapt to variable energy resources through a device-aware technique. Second, ESVD combines decoder control with the decoded data by classifying the data into different partition profiles according to its characteristics. Third, it introduces utility-theoretic analysis into the resource allocation process, so as to maximize resource utilization. Finally, it adapts to different energy budgets and generates scalable video decoding output on energy-limited systems. Experimental results demonstrate the efficiency of the proposed approach.

  7. Visualization on supercomputing platform level II ASC milestone (3537-1B) results from Sandia.

    Energy Technology Data Exchange (ETDEWEB)

    Geveci, Berk (Kitware, Inc., Clifton Park, NY); Fabian, Nathan; Marion, Patrick (Kitware, Inc., Clifton Park, NY); Moreland, Kenneth D.

    2010-09-01

    This report provides documentation for the completion of the Sandia portion of the ASC Level II Visualization on the platform milestone. This ASC Level II milestone is a joint milestone between Sandia National Laboratories and Los Alamos National Laboratories. This milestone contains functionality required for performing visualization directly on a supercomputing platform, which is necessary for peta-scale visualization. Sandia's contribution concerns in-situ visualization, running a visualization in tandem with a solver. Visualization and analysis of petascale data is limited by several factors which must be addressed as ACES delivers the Cielo platform. Two primary difficulties are: (1) Performance of interactive rendering, which is the most computationally intensive portion of the visualization process. For terascale platforms, commodity clusters with graphics processors (GPUs) have been used for interactive rendering. For petascale platforms, visualization and rendering may be able to run efficiently on the supercomputer platform itself. (2) I/O bandwidth, which limits how much information can be written to disk. If we simply analyze the sparse information that is saved to disk we miss the opportunity to analyze the rich information produced every timestep by the simulation. For the first issue, we are pursuing in-situ analysis, in which simulations are coupled directly with analysis libraries at runtime. This milestone will evaluate the visualization and rendering performance of current and next generation supercomputers in contrast to GPU-based visualization clusters, and evaluate the performance of common analysis libraries coupled with the simulation that analyze and write data to disk during a running simulation. This milestone will explore, evaluate and advance the maturity level of these technologies and their applicability to problems of interest to the ASC program. Scientific simulation on parallel supercomputers is traditionally performed in four

  8. The SIP express router: An open source SIP platform: Presentation held at EVOLUTE - seamlEss multimedia serVices Over alL IP-based infrastrUcTurEs Workshop, 10. November 2003, Guildford, UK

    OpenAIRE

    Rebahi, Y.; Sisalem, D.; Kuthan, J.; Pelinescu-Oncicul, A.; Iancu, B.; Janak, J.; Mierla, D.C.

    2003-01-01

    The session initiation protocol (SIP) is constantly gaining in popularity and acceptance as the signaling protocol for next-generation multimedia communication. This paper describes a scalable and reliable open source SIP platform called the SIP Express Router (SER). SER supports not only basic SIP features but also advanced features such as messaging and presence, translation between SIP and SMS or Jabber, as well as full-featured application programming interfaces. In this paper we will ...

  9. Ultracold molecules: vehicles to scalable quantum information processing

    International Nuclear Information System (INIS)

    Brickman Soderberg, Kathy-Anne; Gemelke, Nathan; Chin Cheng

    2009-01-01

    In this paper, we describe a novel scheme to implement scalable quantum information processing using Li-Cs molecular states to entangle ⁶Li and ¹³³Cs ultracold atoms held in independent optical lattices. The ⁶Li atoms will act as quantum bits to store information and ¹³³Cs atoms will serve as messenger bits that aid in quantum gate operations and mediate entanglement between distant qubit atoms. Each atomic species is held in a separate optical lattice and the atoms can be overlapped by translating the lattices with respect to each other. When the messenger and qubit atoms are overlapped, targeted single-spin operations and entangling operations can be performed by coupling the atomic states to a molecular state with radio-frequency pulses. By controlling the frequency and duration of the radio-frequency pulses, entanglement can be either created or swapped between a qubit-messenger pair. We estimate operation fidelities for entangling two distant qubits and discuss scalability of this scheme and constraints on the optical lattice lasers. Finally we demonstrate experimental control of the optical potentials sufficient to translate atoms in the lattice.

  10. The Platformization of the Web: Making Web Data Platform Ready

    NARCIS (Netherlands)

    Helmond, A.

    2015-01-01

    In this article, I inquire into Facebook's development as a platform by situating it within the transformation of social network sites into social media platforms. I explore this shift with a historical perspective on what I refer to as platformization, or the rise of the platform as the dominant

  11. New Region-Scalable Discriminant and Fitting Energy Functional for Driving Geometric Active Contours in Medical Image Segmentation

    Directory of Open Access Journals (Sweden)

    Xuchu Wang

    2014-01-01

    Full Text Available This paper proposes a new model that uses region-scalable discriminant and fitting energy functional for handling the intensity inhomogeneity and weak boundary problems in medical image segmentation. The region-scalable discriminant and fitting energy functional is defined to capture the image intensity characteristics in local and global regions for driving the evolution of the active contour. The discriminant term in the model aims at separating background and foreground in scalable regions, while the fitting term tends to fit the intensity in these regions. This model is then transformed into a variational level set formulation with a level set regularization term for accurate computation. The new model utilizes intensity information in the local and global regions as much as possible; so it not only handles intensity inhomogeneity better, but also is more robust to noise and allows more flexible initialization in comparison to the original global-region and region-scalable based models. Experimental results for synthetic and real medical image segmentation show the advantages of the proposed method in terms of accuracy and robustness.

  12. Earth system modelling on system-level heterogeneous architectures: EMAC (version 2.42) on the Dynamical Exascale Entry Platform (DEEP)

    Science.gov (United States)

    Christou, Michalis; Christoudias, Theodoros; Morillo, Julián; Alvarez, Damian; Merx, Hendrik

    2016-09-01

    We examine an alternative approach to heterogeneous cluster computing in the many-core era for Earth system models, using the European Centre for Medium-Range Weather Forecasts Hamburg (ECHAM)/Modular Earth Submodel System (MESSy) Atmospheric Chemistry (EMAC) model as a pilot application on the Dynamical Exascale Entry Platform (DEEP). A set of interconnected autonomous coprocessors, called Booster, complements a conventional HPC Cluster and increases its computing performance, offering extra flexibility to expose multiple levels of parallelism and achieve better scalability. The EMAC model atmospheric chemistry code (Module Efficiently Calculating the Chemistry of the Atmosphere (MECCA)) was taskified with an offload mechanism implemented using OmpSs directives. The model was ported to the MareNostrum 3 supercomputer to allow testing with Intel Xeon Phi accelerators on a production-size machine. The changes proposed in this paper are expected to contribute to the eventual adoption of the Cluster-Booster division and Many Integrated Core (MIC) accelerated architectures in presently available implementations of Earth system models, towards exploiting the potential of a fully Exascale-capable platform.

  13. Scalable Tensor Factorizations with Missing Data

    DEFF Research Database (Denmark)

    Acar, Evrim; Dunlavy, Daniel M.; Kolda, Tamara G.

    2010-01-01

    … of missing data, many important data sets will be discarded or improperly analyzed. Therefore, we need a robust and scalable approach for factorizing multi-way arrays (i.e., tensors) in the presence of missing data. We focus on one of the most well-known tensor factorizations, CANDECOMP/PARAFAC (CP) … is shown to successfully factor tensors with noise and up to 70% missing data. Moreover, our approach is significantly faster than the leading alternative and scales to larger problems. To show the real-world usefulness of CP-WOPT, we illustrate its applicability on a novel EEG (electroencephalogram …

  14. High-Throughput Computing on High-Performance Platforms: A Case Study

    Energy Technology Data Exchange (ETDEWEB)

    Oleynik, D [University of Texas at Arlington; Panitkin, S [Brookhaven National Laboratory (BNL); Matteo, Turilli [Rutgers University; Angius, Alessio [Rutgers University; Oral, H Sarp [ORNL; De, K [University of Texas at Arlington; Klimentov, A [Brookhaven National Laboratory (BNL); Wells, Jack C. [ORNL; Jha, S [Rutgers University

    2017-10-01

    The computing systems used by LHC experiments have historically consisted of the federation of hundreds to thousands of distributed resources, ranging from small to mid-size. In spite of the impressive scale of the existing distributed computing solutions, the federation of small to mid-size resources will be insufficient to meet projected future demands. This paper is a case study of how the ATLAS experiment has embraced Titan, a DOE leadership facility, in conjunction with traditional distributed high-throughput computing to reach sustained production scales of approximately 52M core-hours a year. The three main contributions of this paper are: (i) a critical evaluation of design and operational considerations to support the sustained, scalable and production usage of Titan; (ii) a preliminary characterization of a next-generation executor for PanDA to support new workloads and advanced execution modes; and (iii) early lessons on how current and future experimental and observational systems can be integrated with production supercomputers and other platforms in a general and extensible manner.

  15. Semantic Models for Scalable Search in the Internet of Things

    Directory of Open Access Journals (Sweden)

    Dennis Pfisterer

    2013-03-01

    Full Text Available The Internet of Things is anticipated to connect billions of embedded devices equipped with sensors to perceive their surroundings. Thereby, the state of the real world will be available online and in real time, and can be combined with other data and services in the Internet to realize novel applications such as Smart Cities, Smart Grids, or Smart Healthcare. This requires an open representation of sensor data and scalable search over data from diverse sources including sensors. In this paper we show how the Semantic Web technologies RDF (an open semantic data format) and SPARQL (a query language for RDF-encoded data) can be used to address those challenges. In particular, we describe how prediction models can be employed for scalable sensor search, how these prediction models can be encoded as RDF, and how the models can be queried by means of SPARQL.
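
    As an illustration of the RDF/SPARQL pattern described above, the sketch below stores toy sensor descriptions (with predicted values standing in for a prediction model's output) and searches them with a SPARQL filter. The ex: vocabulary is invented for the example and is not the paper's schema:

```python
from rdflib import Graph

# A toy sensor description in Turtle; the ex: terms are made up for illustration.
TTL = """
@prefix ex: <http://example.org/iot#> .
ex:sensor1 ex:observes ex:Temperature ; ex:locatedIn ex:Room42 ;
           ex:predictedValue 21.5 .
ex:sensor2 ex:observes ex:Humidity    ; ex:locatedIn ex:Room42 ;
           ex:predictedValue 0.40 .
"""

g = Graph()
g.parse(data=TTL, format="turtle")

# Find temperature sensors whose predicted reading exceeds 20 degrees;
# the search runs over the prediction model's output rather than
# polling every physical sensor.
q = """
PREFIX ex: <http://example.org/iot#>
SELECT ?s ?v WHERE {
  ?s ex:observes ex:Temperature ;
     ex:predictedValue ?v .
  FILTER (?v > 20)
}
"""
for sensor, value in g.query(q):
    print(sensor, value)
```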

  16. LoRa Scalability: A Simulation Model Based on Interference Measurements

    Directory of Open Access Journals (Sweden)

    Jetmir Haxhibeqiri

    2017-05-01

    Full Text Available LoRa is a long-range, low-power, low-bit-rate, single-hop wireless communication technology. It is intended for Internet of Things (IoT) applications involving battery-powered devices with low throughput requirements. A LoRaWAN network consists of multiple end nodes that communicate with one or more gateways, which act as transparent bridges to a common network server. The number of end devices and their throughput requirements affect the performance of the LoRaWAN network. This study investigates the scalability of single-gateway LoRaWAN deployments in terms of the number of end devices per gateway. First, we determine the intra-technology interference behavior with two physical end nodes by measuring the impact of an interfering node on a transmitting node. Measurements show that even under concurrent transmission, one of the packets can be received under certain conditions. Based on these measurements, we create a simulation model for assessing the scalability of a single-gateway LoRaWAN network. We show that when the number of nodes increases to 1000 per gateway, losses reach up to 32%. In such a case, pure Aloha will have around 90% losses. However, when the duty cycle of the application layer is lower than the allowed radio duty cycle of 1%, losses will be even lower. We also show network scalability simulation results for several IoT use cases based on real data.

  17. Scalable quality assurance for large SNOMED CT hierarchies using subject-based subtaxonomies.

    Science.gov (United States)

    Ochs, Christopher; Geller, James; Perl, Yehoshua; Chen, Yan; Xu, Junchuan; Min, Hua; Case, James T; Wei, Zhi

    2015-05-01

    Standards terminologies may be large and complex, making their quality assurance challenging. Some terminology quality assurance (TQA) methodologies are based on abstraction networks (AbNs), compact terminology summaries. We have tested AbNs and the performance of related TQA methodologies on small terminology hierarchies. However, some standards terminologies, for example, SNOMED, are composed of very large hierarchies. Scaling AbN TQA techniques to such hierarchies poses a significant challenge. We present a scalable subject-based approach for AbN TQA. An innovative technique is presented for scaling TQA by creating a new kind of subject-based AbN called a subtaxonomy for large hierarchies. New hypotheses about concentrations of erroneous concepts within the AbN are introduced to guide scalable TQA. We test the TQA methodology for a subject-based subtaxonomy for the Bleeding subhierarchy in SNOMED's large Clinical finding hierarchy. To test the error concentration hypotheses, three domain experts reviewed a sample of 300 concepts. A consensus-based evaluation identified 87 erroneous concepts. The subtaxonomy-based TQA methodology was shown to uncover statistically significantly more erroneous concepts when compared to a control sample. The scalability of TQA methodologies is a challenge for large standards systems like SNOMED. We demonstrated innovative subject-based TQA techniques by identifying groups of concepts with a higher likelihood of having errors within the subtaxonomy. Scalability is achieved by reviewing a large hierarchy by subject. An innovative methodology for scaling the derivation of AbNs and a TQA methodology was shown to perform successfully for the largest hierarchy of SNOMED.
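
    To make the statistical comparison concrete, a standard two-proportion z-test contrasts the 87/300 error rate with a control sample. The control figures below are invented for illustration; the abstract does not give the paper's actual control data:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for H0: p1 == p2, using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# 87/300 erroneous concepts in the subtaxonomy-guided sample (from the
# abstract); the 45/300 control figure below is hypothetical.
z = two_proportion_z(87, 300, 45, 300)
print(f"z = {z:.2f}")   # |z| > 1.96 -> significant at the 5% level
```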

  18. SRC: FenixOS - A Research Operating System Focused on High Scalability and Reliability

    DEFF Research Database (Denmark)

    Passas, Stavros; Karlsson, Sven

    2011-01-01

    Computer systems keep increasing in size. Systems scale in the number of processing units, memories and peripheral devices. This creates many and diverse architectural trade-offs that the existing operating systems are not able to address. We are designing and implementing FenixOS, a new operating system that aims to improve the state of the art in scalability and reliability. We achieve scalability through limiting data sharing when possible, and through extensive use of lock-free data structures. Reliability is addressed with a careful re-design of the programming interface and structure of the operating system.

  19. A Scalable Data Access Layer to Manage Structured Heterogeneous Biomedical Data.

    Directory of Open Access Journals (Sweden)

    Giovanni Delussu

    Full Text Available This work presents a scalable data access layer, called PyEHR, designed to support the implementation of data management systems for secondary use of structured heterogeneous biomedical and clinical data. PyEHR adopts the openEHR's formalisms to guarantee the decoupling of data descriptions from implementation details and exploits structure indexing to accelerate searches. Data persistence is guaranteed by a driver layer with a common driver interface. Interfaces for two NoSQL Database Management Systems are already implemented: MongoDB and Elasticsearch. We evaluated the scalability of PyEHR experimentally through two types of tests, called "Constant Load" and "Constant Number of Records", with queries of increasing complexity on synthetic datasets of ten million records each, containing very complex openEHR archetype structures, distributed on up to ten computing nodes.
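
    The driver-layer idea can be sketched as a common interface with per-backend implementations. The method names below are illustrative, not PyEHR's actual API; the point is that query logic is written once against the interface and each NoSQL backend supplies its own implementation:

```python
from abc import ABC, abstractmethod

class DriverInterface(ABC):
    """Common driver interface in the spirit of PyEHR's driver layer
    (illustrative names, not the project's real API)."""

    @abstractmethod
    def connect(self): ...

    @abstractmethod
    def add_record(self, record: dict) -> str: ...

    @abstractmethod
    def execute_query(self, query: dict) -> list: ...

class MongoDriver(DriverInterface):
    """One concrete backend; an Elasticsearch driver would mirror it."""

    def connect(self):
        from pymongo import MongoClient               # real client library
        self._coll = MongoClient()["ehr"]["records"]  # db/collection names assumed

    def add_record(self, record):
        return str(self._coll.insert_one(record).inserted_id)

    def execute_query(self, query):
        return list(self._coll.find(query))
```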

  20. A Scalable Data Access Layer to Manage Structured Heterogeneous Biomedical Data

    Science.gov (United States)

    Lianas, Luca; Frexia, Francesca; Zanetti, Gianluigi

    2016-01-01

    This work presents a scalable data access layer, called PyEHR, designed to support the implementation of data management systems for secondary use of structured heterogeneous biomedical and clinical data. PyEHR adopts the openEHR’s formalisms to guarantee the decoupling of data descriptions from implementation details and exploits structure indexing to accelerate searches. Data persistence is guaranteed by a driver layer with a common driver interface. Interfaces for two NoSQL Database Management Systems are already implemented: MongoDB and Elasticsearch. We evaluated the scalability of PyEHR experimentally through two types of tests, called “Constant Load” and “Constant Number of Records”, with queries of increasing complexity on synthetic datasets of ten million records each, containing very complex openEHR archetype structures, distributed on up to ten computing nodes. PMID:27936191

  1. A Scalable Data Access Layer to Manage Structured Heterogeneous Biomedical Data.

    Science.gov (United States)

    Delussu, Giovanni; Lianas, Luca; Frexia, Francesca; Zanetti, Gianluigi

    2016-01-01

    This work presents a scalable data access layer, called PyEHR, designed to support the implementation of data management systems for secondary use of structured heterogeneous biomedical and clinical data. PyEHR adopts the openEHR's formalisms to guarantee the decoupling of data descriptions from implementation details and exploits structure indexing to accelerate searches. Data persistence is guaranteed by a driver layer with a common driver interface. Interfaces for two NoSQL Database Management Systems are already implemented: MongoDB and Elasticsearch. We evaluated the scalability of PyEHR experimentally through two types of tests, called "Constant Load" and "Constant Number of Records", with queries of increasing complexity on synthetic datasets of ten million records each, containing very complex openEHR archetype structures, distributed on up to ten computing nodes.

  2. Homemade Buckeye-Pi: A Learning Many-Node Platform for High-Performance Parallel Computing

    Science.gov (United States)

    Amooie, M. A.; Moortgat, J.

    2017-12-01

    We report on the "Buckeye-Pi" cluster, a supercomputer developed in The Ohio State University School of Earth Sciences from 128 inexpensive Raspberry Pi (RPi) 3 Model B single-board computers. Each RPi is equipped with a fast quad-core 1.2 GHz ARMv8 64-bit processor, 1 GB of RAM, and a 32 GB microSD card for local storage. The cluster therefore has a total of 128 GB of RAM distributed across the individual nodes and a flash capacity of 4 TB across its 512 processor cores, while benefiting from low power consumption, easy portability, and low total cost. The cluster uses the Message Passing Interface protocol to manage the communications between nodes. These features render our platform the most powerful RPi supercomputer to date and suitable for educational applications in high-performance computing (HPC) and the handling of large datasets. In particular, we use the Buckeye-Pi to implement optimized parallel codes in our in-house simulator for subsurface media flows, with the goal of achieving a massively parallelized scalable code. We present benchmarking results for the computational performance across various numbers of RPi nodes. We believe our project could inspire scientists and students to consider the proposed unconventional cluster architecture as a mainstream and feasible learning platform for challenging engineering and scientific problems.
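
    As a flavor of the MPI programming model such a cluster teaches, here is a minimal mpi4py sketch (our example, not the in-house simulator's code) that splits a numerical integration across ranks:

```python
# Launch with e.g.:  mpiexec -n 512 python pi_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank integrates its slice of 4/(1+x^2) on [0, 1] to estimate pi.
n = 10_000_000
h = 1.0 / n
local = sum(4.0 / (1.0 + ((i + 0.5) * h) ** 2)
            for i in range(rank, n, size)) * h

pi = comm.reduce(local, op=MPI.SUM, root=0)   # combine partial sums on rank 0
if rank == 0:
    print(f"pi ~= {pi:.10f} using {size} ranks")
```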

  3. Containment Domains: A Scalable, Efficient and Flexible Resilience Scheme for Exascale Systems

    Directory of Open Access Journals (Sweden)

    Jinsuk Chung

    2013-01-01

    Full Text Available This paper describes and evaluates a scalable and efficient resilience scheme based on the concept of containment domains. Containment domains are a programming construct that enables applications to express resilience needs and to interact with the system to tune and specialize error detection, state preservation and restoration, and recovery schemes. Containment domains have weak transactional semantics and are nested to take advantage of the machine and application hierarchies and to enable hierarchical state preservation, restoration, and recovery. We evaluate the scalability and efficiency of containment domains using generalized trace-driven simulation and analytical modeling, and show that containment domains are superior to both checkpoint-restart and redundant execution approaches.
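
    The construct can be approximated in ordinary code as preserve-on-entry, detect, restore, re-execute. The sketch below is a loose Python analogue of that semantics, not the paper's API:

```python
import copy

class ContainmentDomain:
    """Loose analogue of a containment domain: preserve state on entry,
    run the body, and on a detected error restore and re-execute."""

    def __init__(self, state: dict, retries: int = 3):
        self.state, self.retries = state, retries

    def run(self, body):
        for _ in range(self.retries):
            snapshot = copy.deepcopy(self.state)   # hierarchical state preservation
            try:
                return body(self.state)            # body may nest further domains
            except RuntimeError:                   # stand-in for a detected fault
                self.state.clear()
                self.state.update(snapshot)        # restore, then re-execute
        raise RuntimeError("containment domain exhausted its retries")

# Usage: a transient fault is injected once and recovered locally.
injected = []
def work(s):
    s["step"] = s.get("step", 0) + 1
    if not injected:
        injected.append(True)
        raise RuntimeError("transient fault")
    return s["step"]

print(ContainmentDomain({}).run(work))   # prints 1 after one recovery
```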

  4. Scalable Metadata Management for a Large Multi-Source Seismic Data Repository

    Energy Technology Data Exchange (ETDEWEB)

    Gaylord, J. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Dodge, D. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Magana-Zook, S. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Barno, J. G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Knapp, D. R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Thomas, J. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sullivan, D. S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ruppert, S. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Mellors, R. J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-05-26

    In this work, we implemented the key metadata management components of a scalable seismic data ingestion framework to address limitations in our existing system, and to position it for anticipated growth in volume and complexity.

  5. A Numerical Study of Scalable Cardiac Electro-Mechanical Solvers on HPC Architectures

    Directory of Open Access Journals (Sweden)

    Piero Colli Franzone

    2018-04-01

    Full Text Available We introduce and study some scalable domain decomposition preconditioners for cardiac electro-mechanical 3D simulations on parallel HPC (High Performance Computing) architectures. The electro-mechanical model of the cardiac tissue is composed of four coupled sub-models: (1) the static finite elasticity equations for the transversely isotropic deformation of the cardiac tissue; (2) the active tension model describing the dynamics of the intracellular calcium, cross-bridge binding and myofilament tension; (3) the anisotropic Bidomain model describing the evolution of the intra- and extra-cellular potentials in the deforming cardiac tissue; and (4) the ionic membrane model describing the dynamics of ionic currents, gating variables, ionic concentrations and stretch-activated channels. This strongly coupled electro-mechanical model is discretized in time with a splitting semi-implicit technique and in space with isoparametric finite elements. The resulting scalable parallel solver is based on Multilevel Additive Schwarz preconditioners for the solution of the Bidomain system and on BDDC preconditioned Newton-Krylov solvers for the non-linear finite elasticity system. The results of several 3D parallel simulations show the scalability of both linear and non-linear solvers and their application to the study of both physiological excitation-contraction cardiac dynamics and re-entrant waves in the presence of different mechano-electrical feedbacks.

  6. In Plant Activation: An Inducible, Hyperexpression Platform for Recombinant Protein Production in Plants

    Science.gov (United States)

    Dugdale, Benjamin; Mortimer, Cara L.; Kato, Maiko; James, Tess A.; Harding, Robert M.; Dale, James L.

    2013-01-01

    In this study, we describe a novel protein production platform that provides both activation and amplification of transgene expression in planta. The In Plant Activation (INPACT) system is based on the replication machinery of tobacco yellow dwarf mastrevirus (TYDV) and is essentially transient gene expression from a stably transformed plant, thus combining the advantages of both means of expression. The INPACT cassette is uniquely arranged such that the gene of interest is split and only reconstituted in the presence of the TYDV-encoded Rep/RepA proteins. Rep/RepA expression is placed under the control of the AlcA:AlcR gene switch, which is responsive to trace levels of ethanol. Transgenic tobacco (Nicotiana tabacum cv Samsun) plants containing an INPACT cassette encoding the β-glucuronidase (GUS) reporter had negligible background expression but accumulated very high GUS levels (up to 10% total soluble protein) throughout the plant, within 3 d of a 1% ethanol application. The GUS reporter was replaced with a gene encoding a lethal ribonuclease, barnase, demonstrating that the INPACT system provides exquisite control of transgene expression and can be adapted to potentially toxic or inhibitory compounds. The INPACT gene expression platform is scalable, not host-limited, and has been used to express both a therapeutic and an industrial protein. PMID:23839786

  7. IAServ: An Intelligent Home Care Web Services Platform in a Cloud for Aging-in-Place

    Directory of Open Access Journals (Sweden)

    Chang-Yu Chiang

    2013-11-01

    Full Text Available As the elderly population has been rapidly expanding and the core tax-paying population has been shrinking, the need for adequate elderly health and housing services continues to grow while the resources to provide such services are becoming increasingly scarce. Thus, increasing the efficiency of the delivery of healthcare services through the use of modern technology is a pressing issue. The seamless integration of such enabling technologies as ontology, intelligent agents, web services, and cloud computing is transforming healthcare from hospital-based treatments to home-based self-care and preventive care. A ubiquitous healthcare platform based on this technological integration, which synergizes service providers with patients' needs, has to be developed to provide personalized healthcare services at the right time, in the right place, and in the right manner. This paper presents the development and overall architecture of IAServ (the Intelligent Aging-in-place Home care Web Services Platform) to provide personalized healthcare service ubiquitously in a cloud computing setting and to support the most desirable and cost-efficient method of care for the aged: aging in place. IAServ is expected to offer intelligent, pervasive, accurate and contextually-aware personal care services. Architecturally, the implemented IAServ leverages web services and cloud computing to provide economic, scalable, and robust healthcare services over the Internet.

  8. IAServ: an intelligent home care web services platform in a cloud for aging-in-place.

    Science.gov (United States)

    Su, Chuan-Jun; Chiang, Chang-Yu

    2013-11-12

    As the elderly population has been rapidly expanding and the core tax-paying population has been shrinking, the need for adequate elderly health and housing services continues to grow while the resources to provide such services are becoming increasingly scarce. Thus, increasing the efficiency of the delivery of healthcare services through the use of modern technology is a pressing issue. The seamless integration of such enabling technologies as ontology, intelligent agents, web services, and cloud computing is transforming healthcare from hospital-based treatments to home-based self-care and preventive care. A ubiquitous healthcare platform based on this technological integration, which synergizes service providers with patients' needs, has to be developed to provide personalized healthcare services at the right time, in the right place, and in the right manner. This paper presents the development and overall architecture of IAServ (the Intelligent Aging-in-place Home care Web Services Platform) to provide personalized healthcare service ubiquitously in a cloud computing setting and to support the most desirable and cost-efficient method of care for the aged: aging in place. IAServ is expected to offer intelligent, pervasive, accurate and contextually-aware personal care services. Architecturally, the implemented IAServ leverages web services and cloud computing to provide economic, scalable, and robust healthcare services over the Internet.

  9. Cross-Platform Technologies

    Directory of Open Access Journals (Sweden)

    Maria Cristina ENACHE

    2017-04-01

    Full Text Available Cross-platform is a concept that has become increasingly common in recent years, especially in the development of mobile apps, though it has also long been applied to conventional desktop applications. The notion of cross-platform software (multi-platform or platform-independent) refers to a software application that can run on more than one operating system or computing architecture. Thus, a cross-platform application can operate independently of the software or hardware platform on which it is executed. Since this generic definition covers a wide range of meanings, for the purposes of this paper we narrow it as follows: a cross-platform application is a software application that can run on more than one operating system (desktop or mobile) in an identical or similar way.

  10. Platform Performance and Challenges - using Platforms in Lego Company

    DEFF Research Database (Denmark)

    Munk, Lone; Mortensen, Niels Henrik

    2009-01-01

    … needs focus on the incentive of using the platform. This problem lacks attention in the literature as well as in industry, where assessment criteria do not cover this aspect. Therefore, we recommend including user incentive in platform assessment criteria to address these challenges. Concrete solution elements … ensuring user incentive in platforms is an object for future research …

  11. Using self-similarity compensation for improving inter-layer prediction in scalable 3D holoscopic video coding

    Science.gov (United States)

    Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.

    2013-09-01

    Holoscopic imaging, also known as integral imaging, has recently been attracting the attention of the research community as a promising glassless 3D technology, due to its ability to create a more realistic depth illusion than the current stereoscopic or multiview solutions. However, in order to gradually introduce this technology into the consumer market and to efficiently deliver 3D holoscopic content to end-users, backward compatibility with legacy displays is essential. Consequently, to enable 3D holoscopic content to be delivered and presented on legacy displays, a display scalable 3D holoscopic coding approach is required. Hence, this paper presents a display scalable architecture for 3D holoscopic video coding with a three-layer approach, where each layer represents a different level of display scalability: Layer 0 - a single 2D view; Layer 1 - 3D stereo or multiview; and Layer 2 - the full 3D holoscopic content. In this context, a prediction method is proposed, which combines inter-layer prediction, aiming to exploit the existing redundancy between the multiview and the 3D holoscopic layers, with self-similarity compensated prediction (previously proposed by the authors for non-scalable 3D holoscopic video coding), aiming to exploit the spatial redundancy inherent to the 3D holoscopic enhancement layer. Experimental results show that the proposed combined prediction can significantly improve the rate-distortion performance of scalable 3D holoscopic video coding with respect to the authors' previously proposed solutions, where only inter-layer or only self-similarity prediction is used.

  12. Platform Constellations

    DEFF Research Database (Denmark)

    Staykova, Kalina Stefanova; Damsgaard, Jan

    2016-01-01

    This research paper presents an initial attempt to introduce and explain the emergence of a new phenomenon, which we refer to as platform constellations. Functioning as highly modular systems, the platform constellations are collections of highly connected platforms which co-exist in parallel and … users' acquisition and users' engagement rates as well as unlock new sources of value creation and diversify revenue streams.

  13. Scalable fast multipole methods for vortex element methods

    KAUST Repository

    Hu, Qi

    2012-11-01

    We use a particle-based method to simulate incompressible flows, where the Fast Multipole Method (FMM) is used to accelerate the calculation of particle interactions. The most time-consuming kernels, the Biot-Savart equation and the stretching term of the vorticity equation, are mathematically reformulated so that only two Laplace scalar potentials are used instead of six, while automatically ensuring divergence-free far-field computation. Based on this formulation, and on our previous work for a scalar heterogeneous FMM algorithm, we develop a new FMM-based vortex method capable of simulating general flows including turbulence on heterogeneous architectures, which distributes the work between multi-core CPUs and GPUs to best utilize the hardware resources and achieve excellent scalability. The algorithm also uses new data structures which can dynamically manage inter-node communication and load balance efficiently but with only a small parallel construction overhead. This algorithm can scale to large-sized clusters showing both strong and weak scalability. Careful error and timing trade-off analysis are also performed for the cutoff functions induced by the vortex particle method. Our implementation can perform one time step of the velocity+stretching calculation for one billion particles on 32 nodes in 55.9 seconds, which yields 49.12 Tflop/s. © 2012 IEEE.
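
    For reference, the direct Biot-Savart summation that an FMM accelerates can be written in a few lines of NumPy. This is our illustrative sketch: the algebraic smoothing is a generic choice, and sign and regularization conventions vary between vortex-method formulations:

```python
import numpy as np

def biot_savart_velocity(targets, sources, gamma, sigma=0.1):
    """Direct O(N*M) Biot-Savart sum over vortex particles.

    targets : (N, 3) evaluation points
    sources : (M, 3) vortex particle positions
    gamma   : (M, 3) vector circulation strengths
    """
    r = targets[:, None, :] - sources[None, :, :]       # (N, M, 3) separations
    r3 = (np.sum(r * r, -1) + sigma ** 2) ** 1.5        # smoothed |r|^3
    cross = np.cross(gamma[None, :, :], r)              # Gamma x r per pair
    return np.sum(cross / r3[..., None], axis=1) / (4 * np.pi)

# Toy check: velocity induced at 4 points by 2 random vortex particles.
rng = np.random.default_rng(1)
print(biot_savart_velocity(rng.normal(size=(4, 3)),
                           rng.normal(size=(2, 3)),
                           rng.normal(size=(2, 3))))
```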

  14. GeoChronos: An On-line Collaborative Platform for Earth Observation Scientists

    Science.gov (United States)

    Gamon, J. A.; Kiddle, C.; Curry, R.; Markatchev, N.; Zonta-Pastorello, G., Jr.; Rivard, B.; Sanchez-Azofeifa, G. A.; Simmonds, R.; Tan, T.

    2009-12-01

    Recent advances in cyberinfrastructure are offering new solutions to the growing challenges of managing and sharing large data volumes. Web 2.0 and social networking technologies provide the means for scientists to collaborate and share information more effectively. Cloud computing technologies can provide scientists with transparent and on-demand access to applications served over the Internet in a dynamic and scalable manner. Semantic Web technologies allow for data to be linked together in a manner understandable by machines, enabling greater automation. Combining these technologies enables the creation of very powerful platforms. GeoChronos (http://geochronos.org/), part of a CANARIE Network Enabled Platforms project, is an online collaborative platform that incorporates these technologies to enable members of the earth observation science community to share data and scientific applications and to collaborate more effectively. The GeoChronos portal is built on an open source social networking platform called Elgg. Elgg provides a full set of social networking functionalities similar to Facebook including blogs, tags, media/document sharing, wikis, friends/contacts, groups, discussions, message boards, calendars, status, activity feeds and more. An underlying cloud computing infrastructure enables scientists to access dynamically provisioned applications via the portal for visualizing and analyzing data. Users are able to access and run the applications from any computer that has a Web browser and Internet connectivity and do not need to manage and maintain the applications themselves. Semantic Web technologies, such as the Resource Description Framework (RDF), are being employed for relating and linking together spectral, satellite, meteorological and other data. Social networking functionality plays an integral part in facilitating the sharing of data and applications. Examples of recent GeoChronos users during the early testing phase have

  15. Temporal Scalability of Dynamic Volume Data using Mesh Compensated Wavelet Lifting.

    Science.gov (United States)

    Schnurrer, Wolfgang; Pallast, Niklas; Richter, Thomas; Kaup, Andre

    2017-10-12

    Due to their high resolution, dynamic medical 2D+t and 3D+t volumes from computed tomography (CT) and magnetic resonance tomography (MR) reach a size which makes them very unwieldy for teleradiologic applications. A lossless scalable representation offers the advantage of a down-scaled version which can be used for orientation or previewing, while the remaining information for reconstructing the full resolution is transmitted on demand. The wavelet transform offers the desired scalability. A very high quality of the lowpass sub-band is crucial in order to use it as a down-scaled representation. We propose an approach based on compensated wavelet lifting for obtaining a scalable representation of dynamic CT and MR volumes with very high quality. Mesh compensation is feasible for modeling the displacement in dynamic volumes, which is mainly given by expansion and contraction of tissue over time. To achieve this, we propose an optimized estimation of the mesh compensation parameters tailored to dynamic volumes. Within the lifting structure, the inversion of the motion compensation in the update step is crucial. We propose to take this inversion directly into account during the estimation step, and can thereby improve the quality of the lowpass sub-band by 0.63 dB and 0.43 dB on average for our tested dynamic CT and MR volumes, at the cost of a rate increase of 2.4% and 1.2% on average.
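
    The lifting structure is easiest to see without compensation. The sketch below performs plain Haar lifting along the time axis and marks in comments where a mesh-based warp and its inverse would hook in; it is a simplified illustration, not the paper's scheme:

```python
import numpy as np

def haar_lift_forward(frames):
    """Uncompensated Haar wavelet lifting along the time axis.

    frames : (2T, H, W) array, an even number of frames.
    Returns (lowpass, highpass), each (T, H, W).  In a compensated
    scheme, predict() would warp even frames toward odd ones and
    update() would apply the inverse warp; identity warps here.
    """
    even, odd = frames[0::2], frames[1::2]
    high = odd - even          # predict step
    low = even + high / 2.0    # update step: low == (even + odd) / 2
    return low, high

def haar_lift_inverse(low, high):
    even = low - high / 2.0
    odd = high + even
    frames = np.empty((2 * len(low),) + low.shape[1:], low.dtype)
    frames[0::2], frames[1::2] = even, odd
    return frames

# Perfect reconstruction on a random 8-frame toy volume.
vol = np.random.default_rng(2).random((8, 4, 4))
lo, hi = haar_lift_forward(vol)
assert np.allclose(haar_lift_inverse(lo, hi), vol)
```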

  16. Sustainability and scalability of a volunteer-based primary care intervention (Health TAPESTRY): a mixed-methods analysis.

    Science.gov (United States)

    Kastner, Monika; Sayal, Radha; Oliver, Doug; Straus, Sharon E; Dolovich, Lisa

    2017-08-01

    Chronic diseases are a significant public health concern, particularly in older adults. To address the delivery of health care services to optimally meet the needs of older adults with multiple chronic diseases, Health TAPESTRY (Teams Advancing Patient Experience: Strengthening Quality) uses a novel approach that involves patient home visits by trained volunteers to collect and transmit relevant health information using e-health technology to inform appropriate care from an inter-professional healthcare team. Health TAPESTRY was implemented, pilot tested, and evaluated in a randomized controlled trial (analysis underway). Knowledge translation (KT) interventions such as Health TAPESTRY should include an investigation of their sustainability and scalability determinants to inform further implementation. However, this is seldom considered in research, or considered early enough, so the objectives of this study were to assess the sustainability and scalability potential of Health TAPESTRY from the perspective of the team who developed and pilot-tested it. Our objectives were addressed using a sequential mixed-methods approach involving the administration of a validated sustainability survey developed by the National Health Service (NHS) to all members of the Health TAPESTRY team who were actively involved in the development, implementation and pilot evaluation of the intervention (Phase 1: n = 38). Mean sustainability scores were calculated to identify the best potential for improvement across sustainability factors. Phase 2 was a qualitative study of interviews with purposively selected Health TAPESTRY team members to gain a more in-depth understanding of the factors that influence the sustainability and scalability of Health TAPESTRY. Two independent reviewers coded transcribed interviews and completed a multi-step thematic analysis. Outcomes were participant perceptions of the determinants influencing the sustainability and scalability of Health TAPESTRY. Twenty

  17. Computational scalability of large size image dissemination

    Science.gov (United States)

    Kooper, Rob; Bajcsy, Peter

    2011-01-01

    We have investigated the computational scalability of image pyramid building needed for dissemination of very large image data. The sources of large images include high-resolution microscopes and telescopes, remote sensing and airborne imaging, and high-resolution scanners. The term 'large' is understood from a user perspective, meaning either larger than a display size or larger than the memory/disk available to hold the image data. The application drivers for our work are digitization projects such as the Lincoln Papers project (each image scan is about 100-150 MB, or about 5000x8000 pixels, with the total number of scans around 200,000) and the UIUC library scanning project for historical maps from the 17th and 18th centuries (a smaller number of larger images). The goal of our work is to understand the computational scalability of web-based dissemination using image pyramids for these large image scans, as well as the preservation aspects of the data. We report our computational benchmarks for (a) building image pyramids to be disseminated using the Microsoft Seadragon library, (b) a computation execution approach using hyper-threading to generate image pyramids and to utilize the underlying hardware, and (c) an image pyramid preservation approach using various hard drive configurations of Redundant Array of Independent Disks (RAID) drives for input/output operations. The benchmarks are obtained with a map (334.61 MB, JPEG format, 17591x15014 pixels). The discussion combines the speed and preservation objectives.
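
    The pyramid-building task being benchmarked reduces to repeated downsampling. A minimal Pillow sketch follows; the filename and flat level layout are illustrative, and the Seadragon format adds per-level tiling on top of this:

```python
from PIL import Image

def build_pyramid(path, min_side=256):
    """Halve the image repeatedly until its short side reaches min_side.

    levels[0] is the full-resolution image; each later level is half
    the previous one, the structure a pyramid viewer streams from.
    """
    img = Image.open(path)
    levels = [img]
    while min(img.size) > min_side:
        img = img.resize((img.width // 2, img.height // 2), Image.LANCZOS)
        levels.append(img)
    return levels

# Hypothetical input scan; each level is written out as its own JPEG.
for i, level in enumerate(build_pyramid("map_scan.jpg")):
    level.save(f"pyramid_level_{i}.jpg", quality=90)
```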

  18. Continuous Platform Development

    DEFF Research Database (Denmark)

    Nielsen, Ole Fiil

    … low risks and investments but also with relatively fuzzy results. When looking for new platform projects, it is important to make sure that the company and market are ready for the introduction of platforms, and to make sure that people from marketing and sales, product development, and downstream … but continuous product family evolution challenges this strategy. The concept of continuous platform development is based on the fact that platform development should not be a one-time experience but rather an ongoing process of developing new platforms and updating existing ones, so that product family …

  19. Blind Cooperative Routing for Scalable and Energy-Efficient Internet of Things

    KAUST Repository

    Bader, Ahmed

    2016-02-26

    Multihop networking is promoted in this paper for energy-efficient and highly scalable Internet of Things (IoT) deployments. Recognizing concerns related to the scalability of classical multihop routing and medium access techniques, the use of blind cooperation in conjunction with multihop communications is advocated. Blind cooperation, however, is shown to be inefficient unless power control is applied, where efficiency is measured as the transport rate normalized to energy consumption. To that end, an uncoordinated power control mechanism is proposed whereby each device in a blind cooperative cluster randomly adjusts its transmit power level. An upper bound is derived for the mean transmit power that must be observed at each device. Finally, the uncoordinated power control mechanism is demonstrated to consistently outperform the simple point-to-point routing case. © 2015 IEEE.
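
    The uncoordinated mechanism, where every device draws its transmit power independently subject to a bound on the mean, can be illustrated with a small Monte Carlo. The exponential distribution and the 100 mW bound below are our assumptions for the sketch, not the paper's:

```python
import random

def mean_cluster_power(n_devices: int, power_bound: float,
                       trials: int = 10_000) -> float:
    """Average total transmit power of a blind cooperative cluster when
    each device draws its power independently (exponential draw with
    mean equal to the derived bound; no coordination between devices)."""
    total = 0.0
    for _ in range(trials):
        total += sum(random.expovariate(1.0 / power_bound)
                     for _ in range(n_devices))
    return total / trials

# An 8-device cluster with a hypothetical 100 mW mean-power bound:
print(f"{mean_cluster_power(8, 0.100):.3f} W average total transmit power")
```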

  20. VPLS: an effective technology for building scalable transparent LAN services

    Science.gov (United States)

    Dong, Ximing; Yu, Shaohua

    2005-02-01

    Virtual Private LAN Service (VPLS) is generating considerable interest with enterprises and service providers as it offers multipoint transparent LAN service (TLS) over MPLS networks. This paper describes an effective technology, VPLS, which links virtual switch instances (VSIs) through MPLS to form an emulated Ethernet switch and build scalable transparent LAN services. It first focuses on the architecture of VPLS, with Ethernet bridging at the edge and MPLS at the core, then elucidates the data forwarding mechanism within a VPLS domain, including learning and aging MAC addresses on a per-LSP basis, flooding of unknown frames, and replication of unknown, multicast, and broadcast frames. The loop-avoidance mechanism, known as split-horizon forwarding, is also analyzed. Another important aspect of the VPLS service, its basic operation, including autodiscovery and signaling, is discussed. From the perspective of efficiency and scalability, the paper compares two important signaling mechanisms, BGP and LDP, which are used to set up a PW between the PEs and bind the PWs to a particular VSI. As a VPLS deployment grows, the full mesh of PWs between PE devices (n*(n-1)/2 PWs in all, an O(n^2) scaling problem) means a VPLS instance can have a large number of remote PE associations, resulting in inefficient use of network bandwidth and system resources, as the ingress PE has to replicate each frame and append MPLS labels for each remote PE. So the latter part of this paper focuses on the scalability issue: Hierarchical VPLS. Within the H-VPLS architecture, this paper addresses two ways to cope with a possibly large number of MAC addresses, which make VPLS operate more efficiently.
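
    The scaling argument is simple arithmetic: a flat full mesh needs n*(n-1)/2 pseudowires, while H-VPLS meshes only the hub PEs. A quick comparison (the 10-hub split is an illustrative topology, not one from the paper):

```python
def full_mesh_pws(n_pe: int) -> int:
    """Pseudowires for a flat VPLS full mesh: n*(n-1)/2."""
    return n_pe * (n_pe - 1) // 2

def hierarchical_pws(n_hubs: int, spokes_per_hub: int) -> int:
    """H-VPLS: full mesh among hub PEs plus one spoke PW per access PE."""
    return full_mesh_pws(n_hubs) + n_hubs * spokes_per_hub

# 100 PEs flat vs. the same 100 PEs as 10 hubs x 10 spokes:
print(full_mesh_pws(100))          # 4950 PWs
print(hierarchical_pws(10, 10))    # 45 + 100 = 145 PWs
```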

  1. Jenkins-CI, an Open-Source Continuous Integration System, as a Scientific Data and Image-Processing Platform

    Science.gov (United States)

    Moutsatsos, Ioannis K.; Hossain, Imtiaz; Agarinis, Claudia; Harbinski, Fred; Abraham, Yann; Dobler, Luc; Zhang, Xian; Wilson, Christopher J.; Jenkins, Jeremy L.; Holway, Nicholas; Tallarico, John; Parker, Christian N.

    2016-01-01

    High-throughput screening generates large volumes of heterogeneous data that require a diverse set of computational tools for management, processing, and analysis. Building integrated, scalable, and robust computational workflows for such applications is challenging but highly valuable. Scientific data integration and pipelining facilitate standardized data processing, collaboration, and reuse of best practices. We describe how Jenkins-CI, an “off-the-shelf,” open-source, continuous integration system, is used to build pipelines for processing images and associated data from high-content screening (HCS). Jenkins-CI provides numerous plugins for standard compute tasks, and its design allows the quick integration of external scientific applications. Using Jenkins-CI, we integrated CellProfiler, an open-source image-processing platform, with various HCS utilities and a high-performance Linux cluster. The platform is web-accessible, facilitates access and sharing of high-performance compute resources, and automates previously cumbersome data and image-processing tasks. Imaging pipelines developed using the desktop CellProfiler client can be managed and shared through a centralized Jenkins-CI repository. Pipelines and managed data are annotated to facilitate collaboration and reuse. Limitations with Jenkins-CI (primarily around the user interface) were addressed through the selection of helper plugins from the Jenkins-CI community. PMID:27899692

  2. Jenkins-CI, an Open-Source Continuous Integration System, as a Scientific Data and Image-Processing Platform.

    Science.gov (United States)

    Moutsatsos, Ioannis K; Hossain, Imtiaz; Agarinis, Claudia; Harbinski, Fred; Abraham, Yann; Dobler, Luc; Zhang, Xian; Wilson, Christopher J; Jenkins, Jeremy L; Holway, Nicholas; Tallarico, John; Parker, Christian N

    2017-03-01

    High-throughput screening generates large volumes of heterogeneous data that require a diverse set of computational tools for management, processing, and analysis. Building integrated, scalable, and robust computational workflows for such applications is challenging but highly valuable. Scientific data integration and pipelining facilitate standardized data processing, collaboration, and reuse of best practices. We describe how Jenkins-CI, an "off-the-shelf," open-source, continuous integration system, is used to build pipelines for processing images and associated data from high-content screening (HCS). Jenkins-CI provides numerous plugins for standard compute tasks, and its design allows the quick integration of external scientific applications. Using Jenkins-CI, we integrated CellProfiler, an open-source image-processing platform, with various HCS utilities and a high-performance Linux cluster. The platform is web-accessible, facilitates access and sharing of high-performance compute resources, and automates previously cumbersome data and image-processing tasks. Imaging pipelines developed using the desktop CellProfiler client can be managed and shared through a centralized Jenkins-CI repository. Pipelines and managed data are annotated to facilitate collaboration and reuse. Limitations with Jenkins-CI (primarily around the user interface) were addressed through the selection of helper plugins from the Jenkins-CI community.
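
    A typical compute step wrapped by such a Jenkins job is a headless CellProfiler run. The sketch below uses CellProfiler's documented batch flags, though they should be verified against the installed version, and the file and directory names are placeholders:

```python
import subprocess

def run_cellprofiler(pipeline: str, image_dir: str, output_dir: str) -> None:
    """Invoke CellProfiler headless on one batch of images."""
    cmd = [
        "cellprofiler",
        "-c",              # run without the GUI
        "-r",              # run the pipeline immediately
        "-p", pipeline,    # .cppipe file exported from the desktop client
        "-i", image_dir,   # input image directory
        "-o", output_dir,  # output directory for measurements
    ]
    subprocess.run(cmd, check=True)   # a non-zero exit fails the Jenkins build

# Hypothetical pipeline and plate paths:
run_cellprofiler("segment_nuclei.cppipe", "plate_001/", "results/plate_001/")
```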

  3. Scalable Adaptive Graphics Environment (SAGE) Software for the Visualization of Large Data Sets on a Video Wall

    Science.gov (United States)

    Jedlovec, Gary; Srikishen, Jayanthi; Edwards, Rita; Cross, David; Welch, Jon; Smith, Matt

    2013-01-01

    The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of "big data" available from new high resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. This abstract and subsequent presentation describes a new collaborative visualization system installed for NASA's Short-term Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a 14' x 7' video display. The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd Generation Intel Core 4.1 GHz processor, 32 GB memory, and an AMD FirePro W600 video card with 6 Mini DisplayPort connections. Six Mini DisplayPort-to-dual-DVI cables are used to connect the 12 individual video monitors. The open source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high resolution still and moving graphics in a variety of formats, on tiled display walls of any size. The Ubuntu operating system supports the open source Scalable Adaptive Graphics Environment (SAGE) software which provides a common environment, or framework, enabling its users to access, display and share a variety of data-intensive information. This information can be digital-cinema animations, high-resolution images, high-definition video…

  4. Scalable Adaptive Graphics Environment (SAGE) Software for the Visualization of Large Data Sets on a Video Wall

    Science.gov (United States)

    Jedlovec, G.; Srikishen, J.; Edwards, R.; Cross, D.; Welch, J. D.; Smith, M. R.

    2013-12-01

    The use of collaborative scientific visualization systems for the analysis, visualization, and sharing of 'big data' available from new high resolution remote sensing satellite sensors or four-dimensional numerical model simulations is propelling the wider adoption of ultra-resolution tiled display walls interconnected by high speed networks. These systems require a globally connected and well-integrated operating environment that provides persistent visualization and collaboration services. This abstract and subsequent presentation describes a new collaborative visualization system installed for NASA's Short-term Prediction Research and Transition (SPoRT) program at Marshall Space Flight Center and its use for Earth science applications. The system consists of a 3 x 4 array of 1920 x 1080 pixel thin bezel video monitors mounted on a wall in a scientific collaboration lab. The monitors are physically and virtually integrated into a 14' x 7' video display. The display of scientific data on the video wall is controlled by a single Alienware Aurora PC with a 2nd Generation Intel Core 4.1 GHz processor, 32 GB memory, and an AMD FirePro W600 video card with 6 Mini DisplayPort connections. Six Mini DisplayPort-to-dual-DVI cables are used to connect the 12 individual video monitors. The open source Scalable Adaptive Graphics Environment (SAGE) windowing and media control framework, running on top of the Ubuntu 12 Linux operating system, allows several users to simultaneously control the display and storage of high resolution still and moving graphics in a variety of formats, on tiled display walls of any size. The Ubuntu operating system supports the open source Scalable Adaptive Graphics Environment (SAGE) software which provides a common environment, or framework, enabling its users to access, display and share a variety of data-intensive information. This information can be digital-cinema animations, high-resolution images, high-definition video…

  5. Heterogeneous scalable framework for multiphase flows

    Energy Technology Data Exchange (ETDEWEB)

    Morris, Karla Vanessa [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2013-09-01

    Two categories of challenges confront the developer of computational spray models: those related to the computation and those related to the physics. Regarding the computation, the trend towards heterogeneous, multi- and many-core platforms will require considerable re-engineering of codes written for the current supercomputing platforms. Regarding the physics, accurate methods for transferring mass, momentum and energy from the dispersed phase onto the carrier fluid grid have so far eluded modelers. Significant challenges also lie at the intersection between these two categories. To be competitive, any physics model must be expressible in a parallel algorithm that performs well on evolving computer platforms. This work created an application based on a software architecture in which the physics and software concerns are separated in a way that adds flexibility to both. The developed spray-tracking package includes an application programming interface (API) that abstracts away the platform-dependent parallelization concerns, enabling the scientific programmer to write serial code that the API resolves into parallel processes and threads of execution. The project also developed the infrastructure required to provide similar APIs to other applications. The API allows object-oriented Fortran applications to interact directly with Trilinos to support memory management of distributed objects on central processing unit (CPU) and graphics processing unit (GPU) nodes for applications using C++.

  6. Scalable graphene production: perspectives and challenges of plasma applications

    Science.gov (United States)

    Levchenko, Igor; Ostrikov, Kostya (Ken); Zheng, Jie; Li, Xingguo; Keidar, Michael; B. K. Teo, Kenneth

    2016-05-01

    Graphene, a newly discovered and extensively investigated material, has many unique and extraordinary properties which promise major technological advances in fields ranging from electronics to mechanical engineering and food production. Unfortunately, complex techniques and high production costs hinder commonplace applications. Scaling of existing graphene production techniques to the industrial level without compromising its properties is a current challenge. This article focuses on the scalability, equipment, and technological challenges and perspectives of the plasma-based techniques, which offer many unique possibilities for the synthesis of graphene and graphene-containing products. The plasma-based processes are amenable to scaling and could also be useful to enhance the controllability of the conventional chemical vapour deposition method and some other techniques, and to ensure a good quality of the produced graphene. We examine the unique features of the plasma-enhanced graphene production approaches, including the techniques based on inductively-coupled and arc discharges, in the context of their potential scaling to mass production following the generic scaling approaches applicable to the existing processes and systems. This work analyses a large amount of the recent literature on graphene production by various techniques and summarizes the results in tabular form to provide a simple and convenient comparison of several available techniques. Our analysis reveals a significant potential of scalability for plasma-based technologies, based on the scaling-related process characteristics. Among other processes, a yield of 1 g × h^-1 m^-2 was reached for the arc discharge technology, whereas the other plasma-based techniques show process yields comparable to the neutral-gas based methods. Selected plasma-based techniques show lower energy consumption than in thermal CVD processes, and the ability to produce graphene flakes of various…

  7. Scalable graphene production: perspectives and challenges of plasma applications.

    Science.gov (United States)

    Levchenko, Igor; Ostrikov, Kostya Ken; Zheng, Jie; Li, Xingguo; Keidar, Michael; B K Teo, Kenneth

    2016-05-19

    Graphene, a newly discovered and extensively investigated material, has many unique and extraordinary properties which promise major technological advances in fields ranging from electronics to mechanical engineering and food production. Unfortunately, complex techniques and high production costs hinder commonplace applications. Scaling of existing graphene production techniques to the industrial level without compromising its properties is a current challenge. This article focuses on the scalability, equipment, and technological challenges and perspectives of the plasma-based techniques, which offer many unique possibilities for the synthesis of graphene and graphene-containing products. The plasma-based processes are amenable to scaling and could also be useful to enhance the controllability of the conventional chemical vapour deposition method and some other techniques, and to ensure a good quality of the produced graphene. We examine the unique features of the plasma-enhanced graphene production approaches, including the techniques based on inductively-coupled and arc discharges, in the context of their potential scaling to mass production following the generic scaling approaches applicable to the existing processes and systems. This work analyses a large amount of the recent literature on graphene production by various techniques and summarizes the results in tabular form to provide a simple and convenient comparison of several available techniques. Our analysis reveals a significant potential of scalability for plasma-based technologies, based on the scaling-related process characteristics. Among other processes, a yield of 1 g × h^-1 m^-2 was reached for the arc discharge technology, whereas the other plasma-based techniques show process yields comparable to the neutral-gas based methods. Selected plasma-based techniques show lower energy consumption than in thermal CVD processes, and the ability to produce graphene flakes of…

  8. Advanced technologies for scalable ATLAS conditions database access on the grid

    International Nuclear Information System (INIS)

    Basset, R; Canali, L; Girone, M; Hawkings, R; Valassi, A; Viegas, F; Dimitrov, G; Nevski, P; Vaniachine, A; Walker, R; Wong, A

    2010-01-01

    During massive data reprocessing operations an ATLAS Conditions Database application must support concurrent access from numerous ATLAS data processing jobs running on the Grid. By simulating realistic workflows, ATLAS database scalability tests provided feedback for Conditions DB software optimization and allowed precise determination of the required distributed database resources. In distributed data processing one must take into account the chaotic nature of Grid computing, characterized by peak loads that can be much higher than the average access rate. To validate database performance at peak loads, we tested database scalability at very high concurrent job rates. This was achieved through coordinated database stress tests performed in a series of ATLAS reprocessing exercises at the Tier-1 sites. The goal of the database stress tests is to detect the scalability limits of the hardware deployed at the Tier-1 sites, so that server overload conditions can be safely avoided in a production environment. Our analysis of server performance under stress tests indicates that Conditions DB data access is limited by the disk I/O throughput. An unacceptable side-effect of disk I/O saturation is a degradation of the WLCG 3D Services that update Conditions DB data at all ten ATLAS Tier-1 sites using the technology of Oracle Streams. To avoid such bottlenecks we prototyped and tested a novel approach for database peak load avoidance in Grid computing. Our approach is based upon the proven idea of pilot job submission on the Grid: instead of the actual query, an ATLAS utility library first sends a pilot query to the database server.

  9. Scalable graphene aptasensors for drug quantification

    Science.gov (United States)

    Vishnubhotla, Ramya; Ping, Jinglei; Gao, Zhaoli; Lee, Abigail; Saouaf, Olivia; Vrudhula, Amey; Johnson, A. T. Charlie

    2017-11-01

    Simpler and more rapid approaches for therapeutic drug-level monitoring are highly desirable to enable use at the point-of-care. We have developed an all-electronic approach for detection of the HIV drug tenofovir based on scalable fabrication of arrays of graphene field-effect transistors (GFETs) functionalized with a commercially available DNA aptamer. The shift in the Dirac voltage of the GFETs varied systematically with the concentration of tenofovir in deionized water, with a detection limit less than 1 ng/mL. Tests against a set of negative controls confirmed the specificity of the sensor response. This approach offers the potential for further development into a rapid and convenient point-of-care tool with clinically relevant performance.
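
    The concentration dependence of a Dirac-voltage shift like the one reported here is typically summarized by fitting a Hill/Langmuir binding curve. A minimal sketch of such a calibration fit follows; since the abstract provides no raw numbers, the "measurements" are synthetic data generated from the model itself, and all parameter values are hypothetical:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def hill(c, dv_max, kd, n):
        # Dirac-voltage shift vs. analyte concentration (Hill/Langmuir form)
        return dv_max * c**n / (kd**n + c**n)

    rng = np.random.default_rng(1)
    c = np.logspace(-1, 3, 9)                                    # ng/mL, hypothetical range
    dv = hill(c, 60.0, 20.0, 1.0) + rng.normal(0, 1.0, c.size)   # synthetic "data"

    popt, _ = curve_fit(hill, c, dv, p0=[50.0, 10.0, 1.0])
    print(dict(zip(("dv_max_mV", "kd_ng_per_mL", "n"), popt)))
    ```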

  10. Scalable quantum search using trapped ions

    International Nuclear Information System (INIS)

    Ivanov, S. S.; Ivanov, P. A.; Linington, I. E.; Vitanov, N. V.

    2010-01-01

    We propose a scalable implementation of Grover's quantum search algorithm in a trapped-ion quantum information processor. The system is initialized in an entangled Dicke state by using adiabatic techniques. The inversion-about-average and oracle operators take the form of single off-resonant laser pulses. This is made possible by utilizing the physical symmetries of the trapped-ion linear crystal. The physical realization of the algorithm represents a dramatic simplification: each logical iteration (oracle and inversion about average) requires only two physical interaction steps, in contrast to the large number of concatenated gates required by previous approaches. This not only facilitates the implementation but also increases the overall fidelity of the algorithm.
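
    The logical structure of each Grover iteration (an oracle phase flip followed by inversion about the average) is easy to verify numerically. Below is a small state-vector simulation of the algorithm itself; it illustrates the iteration count and success probability only and is not tied to the trapped-ion implementation described above:

    ```python
    import numpy as np

    def grover(n_qubits, marked):
        N = 2 ** n_qubits
        state = np.full(N, 1.0 / np.sqrt(N))        # uniform superposition
        for _ in range(int(np.pi / 4 * np.sqrt(N))):
            state[marked] *= -1                     # oracle: phase flip on the target
            state = 2 * state.mean() - state        # inversion about the average
        return int(np.argmax(state**2)), float(state[marked]**2)

    print(grover(10, 123))   # -> (123, ~0.9995) after 25 iterations on N = 1024
    ```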

  11. Spatiotemporal Stochastic Modeling of IoT Enabled Cellular Networks: Scalability and Stability Analysis

    KAUST Repository

    Gharbieh, Mohammad; Elsawy, Hesham; Bader, Ahmed; Alouini, Mohamed-Slim

    2017-01-01

    The Internet of Things (IoT) is large-scale by nature, which is manifested by the massive number of connected devices as well as their vast spatial existence. Cellular networks, which provide ubiquitous, reliable, and efficient wireless access, will play a fundamental role in delivering the first-mile access for the data tsunami to be generated by the IoT. However, cellular networks may have scalability problems in providing uplink connectivity to massive numbers of connected things. To characterize the scalability of the cellular uplink in the context of IoT networks, this paper develops a traffic-aware spatiotemporal mathematical model for IoT devices supported by cellular uplink connectivity. The developed model is based on stochastic geometry and queueing theory to account for the traffic requirement per IoT device, the different transmission strategies, and the mutual interference between the IoT devices. To this end, the developed model is utilized to characterize the extent to which cellular networks can accommodate IoT traffic as well as to assess and compare three different transmission strategies that incorporate a combination of transmission persistency, backoff, and power-ramping. The analysis and the results clearly illustrate the scalability problem imposed by the IoT on cellular networks and offer insights into effective scenarios for each transmission strategy.

  12. Spatiotemporal Stochastic Modeling of IoT Enabled Cellular Networks: Scalability and Stability Analysis

    KAUST Repository

    Gharbieh, Mohammad

    2017-05-02

    The Internet of Things (IoT) is large-scale by nature, which is manifested by the massive number of connected devices as well as their vast spatial existence. Cellular networks, which provide ubiquitous, reliable, and efficient wireless access, will play a fundamental role in delivering the first-mile access for the data tsunami to be generated by the IoT. However, cellular networks may have scalability problems in providing uplink connectivity to massive numbers of connected things. To characterize the scalability of the cellular uplink in the context of IoT networks, this paper develops a traffic-aware spatiotemporal mathematical model for IoT devices supported by cellular uplink connectivity. The developed model is based on stochastic geometry and queueing theory to account for the traffic requirement per IoT device, the different transmission strategies, and the mutual interference between the IoT devices. To this end, the developed model is utilized to characterize the extent to which cellular networks can accommodate IoT traffic as well as to assess and compare three different transmission strategies that incorporate a combination of transmission persistency, backoff, and power-ramping. The analysis and the results clearly illustrate the scalability problem imposed by the IoT on cellular networks and offer insights into effective scenarios for each transmission strategy.
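
    The qualitative trade-off between persistent and backoff transmission strategies can be illustrated with a toy slotted collision-channel simulation. This is a deliberate simplification for intuition only; the paper's actual model couples stochastic geometry with queueing theory, which this sketch does not attempt:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate(n_devices, p_tx, slots=10000, arrival=0.01):
        """Toy slotted uplink: a slot succeeds only if exactly one
        backlogged device transmits (collision channel)."""
        backlog = np.zeros(n_devices, dtype=bool)
        delivered = 0
        for _ in range(slots):
            backlog |= rng.random(n_devices) < arrival      # new packets arrive
            tx = backlog & (rng.random(n_devices) < p_tx)   # transmission attempts
            if tx.sum() == 1:                               # success iff no collision
                backlog[np.argmax(tx)] = False
                delivered += 1
        return delivered / slots

    # persistent transmission (p = 1) collapses under heavy load, while a
    # randomized backoff (p = 0.05) sustains nonzero throughput:
    for p in (1.0, 0.05):
        print(p, simulate(n_devices=200, p_tx=p))
    ```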

  13. A collaborative smartphone sensing platform for detecting and tracking hostile drones

    Science.gov (United States)

    Boddhu, Sanjay K.; McCartney, Matt; Ceccopieri, Oliver; Williams, Robert L.

    2013-05-01

    In recent years, not only the United States Armed Services but also other law-enforcement agencies have shown increasing interest in employing drones for various surveillance and reconnaissance purposes. Further, recent advancements in autonomous drone control and navigation technology have tremendously increased the geographic extent of drone-based missions beyond the conventional line-of-sight coverage. Without any sophisticated requirement on data links to control them remotely (human-in-the-loop), drones are proving to be a reliable and effective means of securing personnel and soldiers operating in hostile environments. However, this autonomous breed of drones can potentially prove to be a significant threat when acquired by antisocial groups who wish to target property and life in urban settlements. To further escalate the issue, standard detection techniques such as RADAR and RF data-link signature scanners prove futile, as the drones are small enough to evade detection by RADAR-based systems in urban environments and, being autonomous, can operate without a traceable active data link (RF). Hence, to investigate practical solutions to the issue, the research team at AFRL's Tec^Edge Labs under the SATE and YATE programs has developed a highly scalable, geographically distributable and easily deployable smartphone-based collaborative platform that can aid in detecting and tracking unidentified hostile drones. In its current state, this collaborative platform, built on the paradigm of "Human-as-Sensors", consists primarily of an intelligent smartphone application that leverages appropriate sensors on the device to capture a drone's attributes (flight direction, orientation, shape, color, etc.) with real-time collaboration capabilities through a highly composable sensor cloud, and an intelligent processing module (based on a probabilistic model) that can estimate and predict the possible flight path of a hostile drone…

  14. Programming time-multiplexed reconfigurable hardware using a scalable neuromorphic compiler.

    Science.gov (United States)

    Minkovich, Kirill; Srinivasa, Narayan; Cruz-Albrecht, Jose M; Cho, Youngkwan; Nogin, Aleksey

    2012-06-01

    Scalability and connectivity are two key challenges in designing neuromorphic hardware that can match biological levels. In this paper, we describe a neuromorphic system architecture design that addresses an approach to meet these challenges using traditional complementary metal-oxide-semiconductor (CMOS) hardware. A key requirement in realizing such neural architectures in hardware is the ability to automatically configure the hardware to emulate any neural architecture or model. The focus for this paper is to describe the details of such a programmable front-end. This programmable front-end is composed of a neuromorphic compiler and a digital memory, and is designed based on the concept of synaptic time-multiplexing (STM). The neuromorphic compiler automatically translates any given neural architecture to hardware switch states and these states are stored in digital memory to enable desired neural architectures. STM enables our proposed architecture to address scalability and connectivity using traditional CMOS hardware. We describe the details of the proposed design and the programmable front-end, and provide examples to illustrate its capabilities. We also provide perspectives for future extensions and potential applications.

  15. GPU-FS-kNN: a software tool for fast and scalable kNN computation using GPUs.

    Directory of Open Access Journals (Sweden)

    Ahmed Shamsul Arefin

    Full Text Available BACKGROUND: The analysis of biological networks has become a major challenge due to the recent development of high-throughput techniques that are rapidly producing very large data sets. The exploding volumes of biological data are craving for extreme computational power and special computing facilities (i.e., supercomputers). An inexpensive solution, such as General Purpose computation based on Graphics Processing Units (GPGPU), can be adapted to tackle this challenge, but the limitation of the device internal memory can pose a new problem of scalability. An efficient data and computational parallelism with partitioning is required to provide a fast and scalable solution to this problem. RESULTS: We propose an efficient parallel formulation of the k-Nearest Neighbour (kNN) search problem, which is a popular method for classifying objects in several fields of research, such as pattern recognition, machine learning and bioinformatics. Being very simple and straightforward, the performance of the kNN search degrades dramatically for large data sets, since the task is computationally intensive. The proposed approach is not only fast but also scalable to large-scale instances. Based on our approach, we implemented a software tool GPU-FS-kNN (GPU-based Fast and Scalable k-Nearest Neighbour) for CUDA-enabled GPUs. The basic approach is simple and adaptable to other available GPU architectures. We observed speed-ups of 50-60 times compared with the CPU implementation on a well-known breast microarray study and its associated data sets. CONCLUSION: Our GPU-based Fast and Scalable k-Nearest Neighbour search technique (GPU-FS-kNN) provides a significant performance improvement for nearest neighbour computation in large-scale networks. Source code and the software tool are available under the GNU Public License (GPL) at https://sourceforge.net/p/gpufsknn/.
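
    The core scalability device described here, partitioning the distance computation so each chunk fits in limited device memory, can be sketched in a few lines of NumPy as a CPU stand-in for the CUDA kernels (names and chunk size are illustrative):

    ```python
    import numpy as np

    def knn_chunked(data, queries, k, chunk=1024):
        """Brute-force kNN, processing queries in chunks so the distance
        matrix fits in limited (device) memory; assumes k < len(data)."""
        idx = np.empty((len(queries), k), dtype=np.int64)
        for s in range(0, len(queries), chunk):
            q = queries[s:s + chunk]
            d = ((q[:, None, :] - data[None, :, :]) ** 2).sum(-1)  # squared Euclidean
            idx[s:s + chunk] = np.argpartition(d, k, axis=1)[:, :k]  # unordered k nearest
        return idx
    ```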

  16. Mobile platform security

    CERN Document Server

    Asokan, N; Dmitrienko, Alexandra

    2013-01-01

    Recently, mobile security has garnered considerable interest in both the research community and industry due to the popularity of smartphones. The current smartphone platforms are open systems that allow application development, also for malicious parties. To protect the mobile device, its user, and other mobile ecosystem stakeholders such as network operators, application execution is controlled by a platform security architecture. This book explores how such mobile platform security architectures work. We present a generic model for mobile platform security architectures: the model illustrat

  17. A Scalable Communication Architecture for Advanced Metering Infrastructure

    OpenAIRE

    Ngo Hoang , Giang; Liquori , Luigi; Nguyen Chan , Hung

    2013-01-01

    Advanced Metering Infrastructure (AMI), seen as the foundation for overall grid modernization, is an integration of many technologies that provides an intelligent connection between consumers and system operators [ami 2008]. One of the biggest challenges that AMI faces is to scalably collect and manage a huge amount of data from a large number of customers. In our paper, we address this challenge by introducing a mixed peer-to-peer (P2P) and client-server communication architecture for AMI in whic...

  18. Neutron generators with size scalability, ease of fabrication and multiple ion source functionalities

    Science.gov (United States)

    Elizondo-Decanini, Juan M

    2014-11-18

    A neutron generator is provided with a flat, rectilinear geometry and surface mounted metallizations. This construction provides scalability and ease of fabrication, and permits multiple ion source functionalities.

  19. Scalable and Anonymous Group Communication with MTor

    Directory of Open Access Journals (Sweden)

    Lin Dong

    2016-04-01

    Full Text Available This paper presents MTor, a low-latency anonymous group communication system. We construct MTor as an extension to Tor, allowing the construction of multi-source multicast trees on top of the existing Tor infrastructure. MTor does not depend on an external service to broker the group communication, and avoids central points of failure and trust. MTor's substantial bandwidth savings and graceful scalability enable new classes of anonymous applications that are currently too bandwidth-intensive to be viable through traditional unicast Tor communication, e.g., group file transfer, collaborative editing, streaming video, and real-time audio conferencing.

  20. Techno-economic analysis of a transient plant-based platform for monoclonal antibody production

    Science.gov (United States)

    Nandi, Somen; Kwong, Aaron T.; Holtz, Barry R.; Erwin, Robert L.; Marcel, Sylvain; McDonald, Karen A.

    2016-01-01

    Plant-based biomanufacturing of therapeutic proteins is a relatively new platform with a small number of commercial-scale facilities, but offers advantages of linear scalability, reduced upstream complexity, reduced time to market, and potentially lower capital and operating costs. In this study we present a detailed process simulation model for a large-scale new “greenfield” biomanufacturing facility that uses transient agroinfiltration of Nicotiana benthamiana plants grown hydroponically indoors under light-emitting diode lighting for the production of a monoclonal antibody. The model was used to evaluate the total capital investment, annual operating cost, and cost of goods sold as a function of mAb expression level in the plant (g mAb/kg fresh weight of the plant) and production capacity (kg mAb/year). For the Base Case design scenario (300 kg mAb/year, 1 g mAb/kg fresh weight, and 65% recovery in downstream processing), the model predicts a total capital investment of $122 million and cost of goods sold of $121/g including depreciation. Compared with traditional biomanufacturing platforms that use mammalian cells grown in bioreactors, the model predicts significant reductions in capital investment and >50% reduction in cost of goods compared with published values at similar production scales. The simulation model can be modified or adapted by others to assess the profitability of alternative designs, implement different process assumptions, and help guide process development and optimization. PMID:27559626

  1. Techno-economic analysis of a transient plant-based platform for monoclonal antibody production.

    Science.gov (United States)

    Nandi, Somen; Kwong, Aaron T; Holtz, Barry R; Erwin, Robert L; Marcel, Sylvain; McDonald, Karen A

    Plant-based biomanufacturing of therapeutic proteins is a relatively new platform with a small number of commercial-scale facilities, but offers advantages of linear scalability, reduced upstream complexity, reduced time to market, and potentially lower capital and operating costs. In this study we present a detailed process simulation model for a large-scale new "greenfield" biomanufacturing facility that uses transient agroinfiltration of Nicotiana benthamiana plants grown hydroponically indoors under light-emitting diode lighting for the production of a monoclonal antibody. The model was used to evaluate the total capital investment, annual operating cost, and cost of goods sold as a function of mAb expression level in the plant (g mAb/kg fresh weight of the plant) and production capacity (kg mAb/year). For the Base Case design scenario (300 kg mAb/year, 1 g mAb/kg fresh weight, and 65% recovery in downstream processing), the model predicts a total capital investment of $122 million and cost of goods sold of $121/g including depreciation. Compared with traditional biomanufacturing platforms that use mammalian cells grown in bioreactors, the model predicts significant reductions in capital investment and >50% reduction in cost of goods compared with published values at similar production scales. The simulation model can be modified or adapted by others to assess the profitability of alternative designs, implement different process assumptions, and help guide process development and optimization.
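
    The headline economics reduce to simple arithmetic that is easy to reproduce. A back-of-envelope sketch using only the figures quoted in the abstract (the $/g value implies the annual cost; nothing beyond the quoted numbers is assumed):

    ```python
    def cogs_per_gram(annual_cost_usd, output_kg_per_year):
        """Cost of goods sold, $/g = total annual production cost / grams produced."""
        return annual_cost_usd / (output_kg_per_year * 1000.0)

    # Base Case: $121/g at 300 kg/year implies ~$36.3M/year in production cost
    implied_annual_cost = 121.0 * 300.0 * 1000.0
    print(cogs_per_gram(implied_annual_cost, 300.0))   # -> 121.0
    ```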

  2. Interoperability of remote handling control system software modules at Divertor Test Platform 2 using middleware

    International Nuclear Information System (INIS)

    Tuominen, Janne; Rasi, Teemu; Mattila, Jouni; Siuko, Mikko; Esque, Salvador; Hamilton, David

    2013-01-01

    Highlights: ► The prototype DTP2 remote handling control system is a heterogeneous collection of subsystems, each realizing a functional area of responsibility. ► Middleware provides well-known, reusable solutions to problems, such as heterogeneity, interoperability, security and dependability. ► A middleware solution was selected and integrated with the DTP2 RH control system. The middleware was successfully used to integrate all relevant subsystems and functionality was demonstrated. -- Abstract: This paper focuses on the inter-subsystem communication channels in a prototype distributed remote handling control system at Divertor Test Platform 2 (DTP2). The subsystems are responsible for specific tasks and, over the years, their development has been carried out using various platforms and programming languages. The communication channels between subsystems have different priorities, e.g. very high messaging rate and deterministic timing or high reliability in terms of individual messages. Generally, a control system's communication infrastructure should provide interoperability, scalability, performance and maintainability. An attractive approach to accomplish this is to use a standardized and proven middleware implementation. The selection of a middleware can have a major cost impact in future integration efforts. In this paper we present development done at DTP2 using the Object Management Group's (OMG) standard specification for Data Distribution Service (DDS) for ensuring communications interoperability. DDS has gained a stable foothold especially in the military field. It lacks a centralized broker, thereby avoiding a single-point-of-failure. It also includes an extensive set of Quality of Service (QoS) policies. The standard defines a platform- and programming language independent model and an interoperability wire protocol that enables DDS vendor interoperability, allowing software developers to avoid vendor lock-in situations

  3. Optimized bit extraction using distortion modeling in the scalable extension of H.264/AVC.

    Science.gov (United States)

    Maani, Ehsan; Katsaggelos, Aggelos K

    2009-09-01

    The newly adopted scalable extension of H.264/AVC video coding standard (SVC) demonstrates significant improvements in coding efficiency in addition to an increased degree of supported scalability relative to the scalable profiles of prior video coding standards. Due to the complicated hierarchical prediction structure of the SVC and the concept of key pictures, content-aware rate adaptation of SVC bit streams to intermediate bit rates is a nontrivial task. The concept of quality layers has been introduced in the design of the SVC to allow for fast content-aware prioritized rate adaptation. However, existing quality layer assignment methods are suboptimal and do not consider all network abstraction layer (NAL) units from different layers for the optimization. In this paper, we first propose a technique to accurately and efficiently estimate the quality degradation resulting from discarding an arbitrary number of NAL units from multiple layers of a bitstream by properly taking drift into account. Then, we utilize this distortion estimation technique to assign quality layers to NAL units for a more efficient extraction. Experimental results show that a significant gain can be achieved by the proposed scheme.
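
    A simplified view of quality-layer assignment is a greedy ordering of NAL units by estimated distortion reduction per bit. The paper's contribution is making those distortion estimates drift-aware across layers; the toy ranking below does not model drift and is a sketch only (data structure and bucket rule are assumptions):

    ```python
    def assign_quality_layers(nal_units, n_layers):
        """nal_units: iterable of (nal_id, est_distortion_reduction, bits).
        Rank by utility (distortion reduction per bit) and bucket into layers;
        layer 0 is the most important to keep during bitstream extraction."""
        ranked = sorted(nal_units, key=lambda u: u[1] / u[2], reverse=True)
        per_layer = max(1, len(ranked) // n_layers)
        return {nal_id: min(i // per_layer, n_layers - 1)
                for i, (nal_id, _, _) in enumerate(ranked)}

    print(assign_quality_layers([("a", 9.0, 3), ("b", 2.0, 4), ("c", 5.0, 1)], 2))
    # -> {'c': 0, 'a': 1, 'b': 1}
    ```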

  4. Platform-based production development

    DEFF Research Database (Denmark)

    Bossen, Jacob; Brunoe, Thomas Ditlev; Nielsen, Kjeld

    2015-01-01

    Platforms as a means for applying modular thinking in product development are relatively well studied, but platforms in the production system have until now not been given much attention. With the emerging concept of platform-based co-development, the importance of production platforms is though…

  5. Scalable Generation of Universal Platelets from Human Induced Pluripotent Stem Cells

    Directory of Open Access Journals (Sweden)

    Qiang Feng

    2014-11-01

    Full Text Available Human induced pluripotent stem cells (iPSCs provide a potentially replenishable source for the production of transfusable platelets. Here, we describe a method to generate megakaryocytes (MKs and functional platelets from iPSCs in a scalable manner under serum/feeder-free conditions. The method also permits the cryopreservation of MK progenitors, enabling a rapid “surge” capacity when large numbers of platelets are needed. Ultrastructural/morphological analyses show no major differences between iPSC platelets and human blood platelets. iPSC platelets form aggregates, lamellipodia, and filopodia after activation and circulate in macrophage-depleted animals and incorporate into developing mouse thrombi in a manner identical to human platelets. By knocking out the β2-microglobulin gene, we have generated platelets that are negative for the major histocompatibility antigens. The scalable generation of HLA-ABC-negative platelets from a renewable cell source represents an important step toward generating universal platelets for transfusion as well as a potential strategy for the management of platelet refractoriness.

  6. Scalable electrophysiology in intact small animals with nanoscale suspended electrode arrays

    Science.gov (United States)

    Gonzales, Daniel L.; Badhiwala, Krishna N.; Vercosa, Daniel G.; Avants, Benjamin W.; Liu, Zheng; Zhong, Weiwei; Robinson, Jacob T.

    2017-07-01

    Electrical measurements from large populations of animals would help reveal fundamental properties of the nervous system and neurological diseases. Small invertebrates are ideal for these large-scale studies; however, patch-clamp electrophysiology in microscopic animals typically requires invasive dissections and is low-throughput. To overcome these limitations, we present nano-SPEARs: suspended electrodes integrated into a scalable microfluidic device. Using this technology, we have made the first extracellular recordings of body-wall muscle electrophysiology inside an intact roundworm, Caenorhabditis elegans. We can also use nano-SPEARs to record from multiple animals in parallel and even from other species, such as Hydra littoralis. Furthermore, we use nano-SPEARs to establish the first electrophysiological phenotypes for C. elegans models for amyotrophic lateral sclerosis and Parkinson's disease, and show a partial rescue of the Parkinson's phenotype through drug treatment. These results demonstrate that nano-SPEARs provide the core technology for microchips that enable scalable, in vivo studies of neurobiology and neurological diseases.

  7. Omnidirectional holonomic platforms

    International Nuclear Information System (INIS)

    Pin, F.G.; Killough, S.M.

    1994-01-01

    This paper presents the concepts for a new family of wheeled platforms which feature full omnidirectionality with simultaneous and independently controlled rotational and translational motion capabilities. The authors first present the orthogonal-wheels concept and the two major wheel assemblies on which these platforms are based. They then describe how a combination of these assemblies with appropriate control can be used to generate an omnidirectional capability for mobile robot platforms. The design and control of two prototype platforms are then presented and their respective characteristics with respect to rotational and translational motion control are discussed
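
    For a concrete sense of how independently controlled translation and rotation are mixed into wheel commands, here is the standard inverse kinematics for a generic three-omniwheel platform. This is illustrative only: the orthogonal-wheel assemblies in the paper use a different mechanical arrangement, and the geometry parameters are assumed:

    ```python
    import numpy as np

    def wheel_speeds(vx, vy, omega, R=0.15, angles=(0, 2*np.pi/3, 4*np.pi/3)):
        """Each wheel's rim speed is the chassis velocity projected onto its
        drive direction plus the rotational component (R = wheel distance
        from the platform center, angles = wheel mounting positions)."""
        return [-np.sin(a) * vx + np.cos(a) * vy + R * omega for a in angles]

    # simultaneous translation in x and rotation, computed independently:
    print(wheel_speeds(0.5, 0.0, 1.0))
    ```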

  8. Platform decommissioning costs

    International Nuclear Information System (INIS)

    Rodger, David

    1998-01-01

    There are over 6500 platforms worldwide contributing to the offshore oil and gas production industry. In the North Sea there are around 500 platforms in place. There are many factors to be considered in planning for platform decommissioning and the evaluation of options for removal and disposal. The environmental impact, technical feasibility, safety and cost factors all have to be considered. This presentation considers what information is available about the overall decommissioning costs for the North Sea and the costs of different removal and disposal options for individual platforms. 2 figs., 1 tab

  9. Particle Communication and Domain Neighbor Coupling: Scalable Domain Decomposed Algorithms for Monte Carlo Particle Transport

    Energy Technology Data Exchange (ETDEWEB)

    O' Brien, M. J.; Brantley, P. S.

    2015-01-20

    In order to run Monte Carlo particle transport calculations on new supercomputers with hundreds of thousands or millions of processors, care must be taken to implement scalable algorithms. This means that the algorithms must continue to perform well as the processor count increases. In this paper, we examine the scalability of: (1) globally resolving the particle locations on the correct processor, (2) deciding that particle streaming communication has finished, and (3) efficiently coupling neighbor domains together with different replication levels. We have run domain decomposed Monte Carlo particle transport on up to 2^21 = 2,097,152 MPI processes on the IBM BG/Q Sequoia supercomputer and observed scalable results that agree with our theoretical predictions. These calculations were carefully constructed to have the same amount of work on every processor, i.e. the calculation is already load balanced. We also examine load imbalanced calculations where each domain's replication level is proportional to its particle workload. In this case we show how to efficiently couple together adjacent domains to maintain within-workgroup load balance and minimize memory usage.
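
    Point (2), deciding that particle streaming has finished, amounts to a global agreement that no particles remain alive or in flight. A minimal mpi4py sketch of that check follows; the production algorithm described in the paper is more careful about messages still in transit, which this simplification ignores:

    ```python
    from mpi4py import MPI

    comm = MPI.COMM_WORLD

    def streaming_finished(n_alive_local, n_in_flight_local):
        """Global termination test: sum outstanding work over all ranks;
        streaming is done only when every rank contributes zero."""
        outstanding = comm.allreduce(n_alive_local + n_in_flight_local, op=MPI.SUM)
        return outstanding == 0
    ```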

  10. Product Platform Replacements

    DEFF Research Database (Denmark)

    Sköld, Martin; Karlsson, Christer

    2012-01-01

    To shed light on this unexplored and growing managerial concern, the purpose of this explorative study is to identify operational challenges to management when product platforms are replaced. Design/methodology/approach – The study uses a longitudinal field-study approach. Two companies, Gamma and Omega, … replacement was chosen in each company. Findings – The study shows that platform replacements primarily challenge managers' existing knowledge about platform architectures. A distinction can be made between "width" and "height" in platform replacements, and it is crucial that managers observe this in order … to challenge their existing knowledge about platform architectures. Issues on technologies, architectures, components and processes as well as on segments, applications and functions are identified. Practical implications – Practical implications are summarized and discussed in relation to a framework…

  11. Scalable video on demand adaptive Internet-based distribution

    CERN Document Server

    Zink, Michael

    2013-01-01

    In recent years, the proliferation of available video content and the popularity of the Internet have encouraged service providers to develop new ways of distributing content to clients. Increasing video scaling ratios and advanced digital signal processing techniques have led to Internet Video-on-Demand applications, but these currently lack efficiency and quality. Scalable Video on Demand: Adaptive Internet-based Distribution examines how current video compression and streaming can be used to deliver high-quality applications over the Internet. In addition to analysing the problems

  12. Parallelism and Scalability in an Image Processing Application

    DEFF Research Database (Denmark)

    Rasmussen, Morten Sleth; Stuart, Matthias Bo; Karlsson, Sven

    2008-01-01

    The recent trends in processor architecture show that parallel processing is moving into new areas of computing in the form of many-core desktop processors and multi-processor system-on-chip. This means that parallel processing is required in application areas that traditionally have not used parallel programs. This paper investigates parallelism and scalability of an embedded image processing application. The major challenges faced when parallelizing the application were to extract enough parallelism from the application and to reduce load imbalance. The application has limited immediately…

  13. Parallelism and Scalability in an Image Processing Application

    DEFF Research Database (Denmark)

    Rasmussen, Morten Sleth; Stuart, Matthias Bo; Karlsson, Sven

    2009-01-01

    The recent trends in processor architecture show that parallel processing is moving into new areas of computing in the form of many-core desktop processors and multi-processor system-on-chips. This means that parallel processing is required in application areas that traditionally have not used parallel programs. This paper investigates parallelism and scalability of an embedded image processing application. The major challenges faced when parallelizing the application were to extract enough parallelism from the application and to reduce load imbalance. The application has limited immediately…

  14. A highly scalable peptide-based assay system for proteomics.

    Directory of Open Access Journals (Sweden)

    Igor A Kozlov

    Full Text Available We report a scalable and cost-effective technology for generating and screening high-complexity customizable peptide sets. The peptides are made as peptide-cDNA fusions by in vitro transcription/translation from pools of DNA templates generated by microarray-based synthesis. This approach enables large custom sets of peptides to be designed in silico, manufactured cost-effectively in parallel, and assayed efficiently in a multiplexed fashion. The utility of our peptide-cDNA fusion pools was demonstrated in two activity-based assays designed to discover protease and kinase substrates. In the protease assay, cleaved peptide substrates were separated from uncleaved and identified by digital sequencing of their cognate cDNAs. We screened the 3,011 amino acid HCV proteome for susceptibility to cleavage by the HCV NS3/4A protease and identified all 3 known trans cleavage sites with high specificity. In the kinase assay, peptide substrates phosphorylated by tyrosine kinases were captured and identified by sequencing of their cDNAs. We screened a pool of 3,243 peptides against Abl kinase and showed that phosphorylation events detected were specific and consistent with the known substrate preferences of Abl kinase. Our approach is scalable and adaptable to other protein-based assays.

  15. Scalable Nernst thermoelectric power using a coiled galfenol wire

    Science.gov (United States)

    Yang, Zihao; Codecido, Emilio A.; Marquez, Jason; Zheng, Yuanhua; Heremans, Joseph P.; Myers, Roberto C.

    2017-09-01

    The Nernst thermopower usually is considered far too weak in most metals for waste heat recovery. However, its transverse orientation gives it an advantage over the Seebeck effect on non-flat surfaces. Here, we experimentally demonstrate the scalable generation of a Nernst voltage in an air-cooled metal wire coiled around a hot cylinder. In this geometry, a radial temperature gradient generates an azimuthal electric field in the coil. A Galfenol (Fe0.85Ga0.15) wire is wrapped around a cartridge heater, and the voltage drop across the wire is measured as a function of axial magnetic field. As expected, the Nernst voltage scales linearly with the length of the wire. Based on heat conduction and fluid dynamic equations, finite-element method is used to calculate the temperature gradient across the Galfenol wire and determine the Nernst coefficient. A giant Nernst coefficient of -2.6 μV/(K·T) at room temperature is estimated, in agreement with measurements on bulk Galfenol. We expect that the giant Nernst effect in Galfenol arises from its magnetostriction, presumably through enhanced magnon-phonon coupling. Our results demonstrate the feasibility of a transverse thermoelectric generator capable of scalable output power from non-flat heat sources.

  16. Scalable Nernst thermoelectric power using a coiled galfenol wire

    Directory of Open Access Journals (Sweden)

    Zihao Yang

    2017-09-01

    Full Text Available The Nernst thermopower usually is considered far too weak in most metals for waste heat recovery. However, its transverse orientation gives it an advantage over the Seebeck effect on non-flat surfaces. Here, we experimentally demonstrate the scalable generation of a Nernst voltage in an air-cooled metal wire coiled around a hot cylinder. In this geometry, a radial temperature gradient generates an azimuthal electric field in the coil. A Galfenol (Fe0.85Ga0.15) wire is wrapped around a cartridge heater, and the voltage drop across the wire is measured as a function of axial magnetic field. As expected, the Nernst voltage scales linearly with the length of the wire. Based on heat conduction and fluid dynamic equations, finite-element method is used to calculate the temperature gradient across the Galfenol wire and determine the Nernst coefficient. A giant Nernst coefficient of -2.6 μV/(K·T) at room temperature is estimated, in agreement with measurements on bulk Galfenol. We expect that the giant Nernst effect in Galfenol arises from its magnetostriction, presumably through enhanced magnon-phonon coupling. Our results demonstrate the feasibility of a transverse thermoelectric generator capable of scalable output power from non-flat heat sources.
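
    The geometry-dependent scaling quoted here follows from the transverse Nernst field E = N · B · ∇T integrated along the wire. A quick estimate using the abstract's coefficient is shown below; the field, gradient, and wire length are assumed operating values, not figures from the paper:

    ```python
    def nernst_voltage(N_coeff, B, dT_dr, wire_length):
        """V = N * B * (dT/dr) * L: the azimuthal Nernst field integrated
        over the total length of the coiled wire."""
        return N_coeff * B * dT_dr * wire_length

    # N = -2.6e-6 V/(K*T) from the abstract; B = 0.5 T, gradient 200 K/m,
    # and 2 m of wire are hypothetical operating values:
    print(nernst_voltage(-2.6e-6, 0.5, 200.0, 2.0))   # -> -5.2e-4 V
    ```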

  17. Implementation of a Big Data Accessing and Processing Platform for Medical Records in Cloud.

    Science.gov (United States)

    Yang, Chao-Tung; Liu, Jung-Chun; Chen, Shuo-Tsung; Lu, Hsin-Wen

    2017-08-18

    Big Data analysis has become a key factor of being innovative and competitive. Along with population growth worldwide and the trend of population aging in developed countries, the rate of national medical care usage has been increasing. Because individual medical data are usually scattered across different institutions and their data formats vary, integrating these ever-growing data is challenging. For these data platforms to have scalable load capacity, they must be built on a sound platform architecture. Several issues must be considered in order to use cloud computing to quickly integrate big medical data into a database for easy analysis, search, and filtering, so that valuable information can be obtained. This work builds a cloud storage system with HBase on Hadoop for storing and analyzing big data of medical records, and improves the performance of importing data into the database. The data of medical records are stored in an HBase database platform for big data analysis. The system performs distributed computing on medical records through Hadoop MapReduce programming, and provides functions including keyword search, data filtering, and basic statistics on the HBase database. The system uses the Put operation with a single-threaded method and the CompleteBulkload mechanism to import medical data. From the experimental results, we find that when the file size is less than 300 MB, the Put with single-threaded method is used, and when the file size is larger than 300 MB, the CompleteBulkload mechanism is used to improve the performance of data import into the database. The system provides a web interface that allows users to search data, filter out meaningful information through the web, and analyze and convert data into suitable forms that will be helpful for medical staff and institutions.
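
    The 300 MB decision rule described above can be wrapped in a small dispatcher. A hedged sketch using the happybase client for the Put path follows; the table name, column family, and staging path are placeholders, and the bulk-load branch assumes HFiles have already been generated (e.g. by a MapReduce job) before handing them to HBase's completebulkload tool:

    ```python
    import os
    import subprocess
    import happybase

    THRESHOLD = 300 * 1024 * 1024   # the abstract's crossover point: 300 MB

    def import_records(path, table_name, host="hbase-master"):
        if os.path.getsize(path) < THRESHOLD:
            # small files: buffered single-threaded Puts via the Thrift API
            table = happybase.Connection(host).table(table_name)
            with table.batch(batch_size=1000) as b:
                for i, line in enumerate(open(path, "rb")):
                    b.put(b"row-%08d" % i, {b"record:raw": line.rstrip()})
        else:
            # large files: hand pre-built HFiles to HBase's bulk-load tool
            subprocess.run(["hbase", "completebulkload",
                            "/staging/hfiles", table_name], check=True)
    ```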

  18. Continuous flow photocyclization of stilbenes – scalable synthesis of functionalized phenanthrenes and helicenes

    Directory of Open Access Journals (Sweden)

    Quentin Lefebvre

    2013-09-01

    Full Text Available A continuous flow oxidative photocyclization of stilbene derivatives has been developed which allows the scalable synthesis of backbone functionalized phenanthrenes and helicenes of various sizes in good yields.

  19. Scalable Multi-group Key Management for Advanced Metering Infrastructure

    OpenAIRE

    Benmalek , Mourad; Challal , Yacine; Bouabdallah , Abdelmadjid

    2015-01-01

    International audience; Advanced Metering Infrastructure (AMI) is composed of systems and networks to incorporate changes for modernizing the electricity grid, reduce peak loads, and meet energy efficiency targets. AMI is a privileged target for security attacks with potentially great damage against infrastructures and privacy. For this reason, Key Management has been identified as one of the most challenging topics in AMI development. In this paper, we propose a new Scalable multi-group key ...

  20. Product Platform Modeling

    DEFF Research Database (Denmark)

    Pedersen, Rasmus

    This PhD thesis has the title Product Platform Modelling. The thesis is about product platforms and visual product platform modelling. Product platforms have gained an increasing attention in industry and academia in the past decade. The reasons are many, yet the increasing globalisation … for customisation of products. In many companies these changes in the business environment have created a controversy between the need for a wide variety of products offered to the marketplace and a desire to reduce variation within the company in order to increase efficiency. Many companies use the concept … other. These groups can be varied and combined to form different product variants without increasing the internal variety in the company. Based on the Theory of Domains, the concept of encapsulation in the organ domain is introduced, and organs are formulated as platform elements. Included …

  1. Towards Scalable Strain Gauge-Based Joint Torque Sensors

    Science.gov (United States)

    D’Imperio, Mariapaola; Cannella, Ferdinando; Caldwell, Darwin G.; Cuschieri, Alfred

    2017-01-01

    During recent decades, strain gauge-based joint torque sensors have been commonly used to provide high-fidelity torque measurements in robotics. Although measurement of joint torque/force is often required in engineering research and development, the gluing and wiring of strain gauges used as torque sensors pose difficulties during integration within the restricted space available in small joints. The problem is compounded by the need for a scalable geometric design to measure joint torque. In this communication, we describe a novel design of a strain gauge-based mono-axial torque sensor referred to as the square-cut torque sensor (SCTS), the significant features of which are a high degree of linearity, symmetry, and high scalability in terms of both size and measuring range. Most importantly, the SCTS provides easy access for gluing and wiring of the strain gauges on the sensor surface despite the limited available space. We demonstrated that the SCTS was better in terms of symmetry (clockwise and counterclockwise rotation) and more linear. These capabilities have been shown through finite element modeling (ANSYS) and confirmed by data obtained in load-testing experiments. The high performance of the SCTS was confirmed by studies involving changes in size, material, and/or wing width and thickness. Finally, we demonstrated that the SCTS can be successfully implemented inside the hip joints of the miniaturized hydraulically actuated quadruped robot MiniHyQ. This communication is based on work presented at the 18th International Conference on Climbing and Walking Robots (CLAWAR). PMID:28820446

  2. Scalable parallel prefix solvers for discrete ordinates transport

    International Nuclear Information System (INIS)

    Pautz, S.; Pandya, T.; Adams, M.

    2009-01-01

    The well-known 'sweep' algorithm for inverting the streaming-plus-collision term in first-order deterministic radiation transport calculations has some desirable numerical properties. However, it suffers from parallel scaling issues caused by a lack of concurrency. The maximum degree of concurrency, and thus the maximum parallelism, grows more slowly than the problem size for sweep-based solvers. We investigate a new class of parallel algorithms that involves recasting the streaming-plus-collision problem in prefix form and solving via cyclic reduction. This method, although computationally more expensive at low levels of parallelism than the sweep algorithm, offers better theoretical scalability properties. Previous work has demonstrated this approach for one-dimensional calculations; we show how to extend it to multidimensional calculations. Notably, for multiple dimensions it appears that this approach is limited to long-characteristics discretizations; other discretizations cannot be cast in prefix form. We implement two variants of the algorithm within the radlib/SCEPTRE transport code library at Sandia National Laboratories and show results on two different massively parallel systems. Both the 'forward' and 'symmetric' solvers behave similarly, scaling well to larger degrees of parallelism than sweep-based solvers. We do observe some issues at the highest levels of parallelism (relative to the system size) and discuss possible causes. We conclude that this approach shows good potential for future parallel systems, but the parallel scalability will depend heavily on the architecture of the communication networks of these systems. (authors)
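
    The prefix recasting works because the streaming-plus-collision update along a characteristic is an affine recurrence x_i = a_i x_{i-1} + b_i, and affine maps compose associatively, so the solve becomes a parallel scan. A NumPy sketch of that scan (written as a Hillis-Steele loop on one array; a real solver would distribute the steps across processors):

    ```python
    import numpy as np

    def affine_scan(a, b):
        """Solve x[i] = a[i]*x[i-1] + b[i] with x[-1] = 0 via an associative
        scan: pairs compose as (a2*a1, a2*b1 + b2), giving log-depth parallelism."""
        A = np.asarray(a, dtype=float).copy()
        B = np.asarray(b, dtype=float).copy()
        step = 1
        while step < len(A):
            A_prev, B_prev = A.copy(), B.copy()
            A[step:] = A_prev[step:] * A_prev[:-step]
            B[step:] = A_prev[step:] * B_prev[:-step] + B_prev[step:]
            step *= 2
        return B

    print(affine_scan([2, 3, 4], [1, 4, 2]))   # -> [1, 7, 30]
    ```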

  3. Vocal activity as a low cost and scalable index of seabird colony size.

    Science.gov (United States)

    Borker, Abraham L; McKown, Matthew W; Ackerman, Joshua T; Eagles-Smith, Collin A; Tershy, Bernie R; Croll, Donald A

    2014-08-01

    Although wildlife conservation actions have increased globally in number and complexity, the lack of scalable, cost-effective monitoring methods limits adaptive management and the evaluation of conservation efficacy. Automated sensors and computer-aided analyses provide a scalable and increasingly cost-effective tool for conservation monitoring. A key assumption of automated acoustic monitoring of birds is that measures of acoustic activity at colony sites are correlated with the relative abundance of nesting birds. We tested this assumption for nesting Forster's terns (Sterna forsteri) in San Francisco Bay for 2 breeding seasons. Sensors recorded ambient sound at 7 colonies that had 15-111 nests in 2009 and 2010. Colonies were spaced at least 250 m apart and ranged from 36 to 2,571 m². We used spectrogram cross-correlation to automate the detection of tern calls from recordings. We calculated mean seasonal call rate and compared it with mean active nest count at each colony. Acoustic activity explained 71% of the variation in nest abundance between breeding sites and 88% of the change in colony size between years. These results validate a primary assumption of acoustic indices; that is, for terns, acoustic activity is correlated to relative abundance, a fundamental step toward designing rigorous and scalable acoustic monitoring programs to measure the effectiveness of conservation actions for colonial birds and other acoustically active wildlife. © 2014 Society for Conservation Biology.
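
    The detector itself is conceptually simple: slide a call template's spectrogram across the recording's spectrogram and threshold the correlation. A simplified SciPy sketch is below (real pipelines normalize per window and screen candidates, which this does not; it assumes both spectrograms share the same frequency bins):

    ```python
    import numpy as np
    from scipy import signal

    def detect_calls(audio, template, fs, threshold=0.6):
        _, _, S = signal.spectrogram(audio, fs=fs)        # recording spectrogram
        _, _, T = signal.spectrogram(template, fs=fs)     # call template spectrogram
        corr = signal.correlate2d(S, T, mode="valid")[0]  # same freq bins -> one row
        peak = np.abs(corr).max()
        corr = corr / peak if peak > 0 else corr
        return np.where(corr > threshold)[0]              # time bins of candidate calls
    ```

    A seasonal call rate, the index compared against nest counts above, is then just the number of detections divided by the recording duration.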

  4. Design for Scalability: A Case Study of the River City Curriculum

    Science.gov (United States)

    Clarke, Jody; Dede, Chris

    2009-01-01

    One-size-fits-all educational innovations do not work because they ignore contextual factors that determine an intervention's efficacy in a particular local situation. This paper presents a framework on how to design educational innovations for scalability through enhancing their adaptability for effective usage in a wide variety of settings. The…

  5. Introducing Platform Interactions Model for Studying Multi-Sided Platforms

    DEFF Research Database (Denmark)

    Staykova, Kalina; Damsgaard, Jan

    2018-01-01

    Multi-Sided Platforms (MSPs) function as socio-technical entities that facilitate direct interactions between the various constituencies affiliated with them by developing and managing IT architecture. In this paper, we aim to explain the nature of platform interactions as a key characteristic o...

  6. Interactive segmentation: a scalable superpixel-based method

    Science.gov (United States)

    Mathieu, Bérengère; Crouzil, Alain; Puel, Jean-Baptiste

    2017-11-01

    This paper addresses the problem of interactive multiclass segmentation of images. We propose a fast and efficient new interactive segmentation method called superpixel α fusion (SαF). From a few strokes drawn by a user over an image, this method extracts relevant semantic objects. To get a fast calculation and an accurate segmentation, SαF uses superpixel oversegmentation and support vector machine classification. We compare SαF with competing algorithms by evaluating its performance on reference benchmarks. We also suggest four new datasets to evaluate the scalability of interactive segmentation methods, using images ranging from a few thousand to several million pixels. We conclude with two applications of SαF.
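
    A minimal sketch of the superpixel-plus-SVM pipeline described above follows; the mean-colour features, RBF kernel, and parameter values are illustrative assumptions, and SαF's α-fusion step is omitted.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.svm import SVC

def segment(image, stroke_mask):
    """image: HxWx3 float array; stroke_mask: 0 = unlabelled, k > 0 = class k."""
    sp = slic(image, n_segments=1000, compactness=10, start_label=0)
    n = sp.max() + 1
    # One feature vector per superpixel: mean colour (a deliberate simplification).
    feats = np.array([image[sp == i].mean(axis=0) for i in range(n)])
    labels = np.zeros(n, dtype=int)
    for i in range(n):                     # a superpixel inherits the majority
        strokes = stroke_mask[sp == i]     # stroke label drawn across it
        strokes = strokes[strokes > 0]
        if strokes.size:
            labels[i] = np.bincount(strokes).argmax()
    clf = SVC(kernel="rbf").fit(feats[labels > 0], labels[labels > 0])
    pred = clf.predict(feats)
    pred[labels > 0] = labels[labels > 0]  # keep user-labelled superpixels fixed
    return pred[sp]                        # map superpixel labels back to pixels
```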

  7. Scalability of a Low-Cost Multi-Teraflop Linux Cluster for High-End Classical Atomistic and Quantum Mechanical Simulations

    Science.gov (United States)

    Kikuchi, Hideaki; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya; Shimojo, Fuyuki; Saini, Subhash

    2003-01-01

    Scalability of a low-cost, Intel Xeon-based, multi-Teraflop Linux cluster is tested for two high-end scientific applications: classical atomistic simulation based on the molecular dynamics method and quantum mechanical calculation based on density functional theory. These scalable parallel applications use space-time multiresolution algorithms and feature computational-space decomposition, wavelet-based adaptive load balancing, and space-filling-curve-based data compression for scalable I/O. Comparative performance tests are performed on a 1,024-processor Linux cluster and a conventional higher-end parallel supercomputer, the 1,184-processor IBM SP4. The results show that the performance of the Linux cluster is comparable to that of the SP4. We also study various effects, such as the sharing of memory and L2 cache among processors, on the performance.
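
    The space-filling-curve idea mentioned above is easy to illustrate: a Morton (Z-order) key interleaves coordinate bits so that cells that are close in space tend to stay close in the one-dimensional ordering, which helps blocked, compressed I/O. The bit width and grid below are assumptions, not details of the paper's scheme.

```python
def morton2d(x, y, bits=16):
    """Interleave the bits of x and y into a single Z-order key."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i) | ((y >> i) & 1) << (2 * i + 1)
    return key

# Cells that are neighbours in 2-D stay close in the 1-D ordering,
# which improves locality when data are written in contiguous blocks.
cells = sorted(((x, y) for x in range(4) for y in range(4)),
               key=lambda c: morton2d(*c))
print(cells)
```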

  8. On eliminating synchronous communication in molecular simulations to improve scalability

    Science.gov (United States)

    Straatsma, T. P.; Chavarría-Miranda, Daniel G.

    2013-12-01

    Molecular dynamics simulation, as a complementary tool to experimentation, has become an important methodology for the understanding and design of molecular systems as it provides access to properties that are difficult, impossible or prohibitively expensive to obtain experimentally. Many of the available software packages have been parallelized to take advantage of modern massively concurrent processing resources. The challenge in achieving parallel efficiency is commonly attributed to the fact that molecular dynamics algorithms are communication intensive. This paper illustrates how an appropriately chosen data distribution and asynchronous one-sided communication approach can be used to effectively deal with the data movement within the Global Arrays/ARMCI programming model framework. A new put_notify capability is presented here, allowing the implementation of the molecular dynamics algorithm without any explicit global or local synchronization or global data reduction operations. In addition, this push-data model is shown to very effectively allow hiding data communication behind computation. Rather than data movement or explicit global reductions, the implicit synchronization of the algorithm becomes the primary challenge for scalability. Without any explicit synchronous operations, the scalability of molecular simulations is shown to depend only on the ability to evenly balance computational load.
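
    The put_notify primitive belongs to Global Arrays/ARMCI and has no standard Python binding, but the push-data pattern it enables can be sketched with mpi4py one-sided operations (an analogue, not the paper's implementation): the producer puts the payload and then a notify flag, and the consumer polls the flag locally instead of entering a barrier or global reduction. Buffer sizes and names are assumptions; run with two MPI ranks.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
data = np.zeros(100)
flag = np.zeros(1)
win_d = MPI.Win.Create(data, comm=comm)   # window exposing the payload buffer
win_f = MPI.Win.Create(flag, comm=comm)   # window exposing the notify flag

if rank == 0:
    payload = np.arange(100.0)
    one = np.ones(1)
    win_d.Lock(1); win_d.Put(payload, 1); win_d.Unlock(1)  # push the data...
    win_f.Lock(1); win_f.Put(one, 1); win_f.Unlock(1)      # ...then notify
elif rank == 1:
    while True:                      # poll locally; no barrier, no reduction
        win_f.Lock(1, lock_type=MPI.LOCK_SHARED)  # self-lock syncs the window
        arrived = flag[0] != 0.0
        win_f.Unlock(1)
        if arrived:
            break
    win_d.Lock(1, lock_type=MPI.LOCK_SHARED); win_d.Unlock(1)  # sync payload
    assert data[50] == 50.0

win_d.Free()
win_f.Free()
```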

  9. Toward an ultra-high resolution community climate system model for the BlueGene platform

    International Nuclear Information System (INIS)

    Dennis, John M; Jacob, Robert; Vertenstein, Mariana; Craig, Tony; Loy, Raymond

    2007-01-01

    Global climate models need to simulate several small, regional-scale processes which affect the global circulation in order to accurately simulate the climate. This is particularly important in the ocean, where small-scale features such as oceanic eddies are currently represented with ad hoc parameterizations. There is also a need for higher resolution to provide climate predictions at small, regional scales. New high-performance computing platforms such as the IBM BlueGene can provide the necessary computational power to perform ultra-high resolution climate model integrations. We have begun to investigate the scaling of the individual components of the Community Climate System Model to prepare it for integrations on BlueGene and similar platforms. Our investigations show that it is possible to successfully utilize O(32K) processors. We describe the scalability of five models: the Parallel Ocean Program (POP), the Community Ice CodE (CICE), the Community Land Model (CLM), and the new CCSM sequential coupler (CPL7), which are components of the next generation Community Climate System Model (CCSM); as well as the High-Order Method Modeling Environment (HOMME), which is a dynamical core currently being evaluated within the Community Atmospheric Model. For our studies we concentrate on 1/10° resolution for the CICE, POP, and CLM models and 1/4° resolution for HOMME. The ability to simulate high resolutions on the massively parallel petascale systems that will dominate high-performance computing for the foreseeable future is essential to the advancement of climate science.

  10. Development of a scalable suspension culture for cardiac differentiation from human pluripotent stem cells

    Directory of Open Access Journals (Sweden)

    Vincent C. Chen

    2015-09-01

    Full Text Available To meet the need for a large quantity of hPSC-derived cardiomyocytes (CM) for pre-clinical and clinical studies, a robust and scalable differentiation system for CM production is essential. With a human pluripotent stem cell (hPSC) aggregate suspension culture system we established previously, we developed a matrix-free, scalable, and GMP-compliant process for directing hPSC differentiation to CM in suspension culture by modulating Wnt pathways with small molecules. By optimizing critical process parameters including cell aggregate size, small molecule concentrations, induction timing, and agitation rate, we were able to consistently differentiate hPSCs to >90% CM purity with an average yield of 1.5 to 2 × 10⁹ CM/L at scales up to 1 L spinner flasks. CM generated from the suspension culture displayed typical genetic, morphological, and electrophysiological cardiac cell characteristics. This suspension culture system allows seamless transition from hPSC expansion to CM differentiation in a continuous suspension culture. It not only provides a cost- and labor-effective scalable process for large-scale CM production, but also provides a bioreactor prototype for automation of cell manufacturing, which will accelerate the advance of hPSC research towards therapeutic applications.

  11. A micro-spectroscopy study on the influence of chemical residues from nanofabrication on the nitridation chemistry of Al nanopatterns

    Energy Technology Data Exchange (ETDEWEB)

    Qi, B., E-mail: bing@raunvis.hi.is [Physics Department, Science Institute, University of Iceland, Dunhaga 3,107 Reykjavik (Iceland); Olafsson, S. [Physics Department, Science Institute, University of Iceland, Dunhaga 3,107 Reykjavik (Iceland); Zakharov, A.A. [MAX-lab, Lund University, S-22100 Lund (Sweden); Agnarsson, B. [Physics Department, Science Institute, University of Iceland, Dunhaga 3,107 Reykjavik (Iceland); Department of Applied Physics, Chalmers University of Technology, S-41296 Gothenburg (Sweden); Gislason, H.P. [Physics Department, Science Institute, University of Iceland, Dunhaga 3,107 Reykjavik (Iceland); Goethelid, M. [Materialfysik, MAP, ICT, KTH, ELECTRUM 229, 16440 Kista (Sweden)

    2012-03-01

    We applied spatially resolved photoelectron spectroscopy, implemented with X-ray photoemission electron microscopy (XPEEM) using soft X-ray synchrotron radiation, to identify the compositional and morphological inhomogeneities of a SiO₂/Si substrate surface nanopatterned with Al before and after nitridation. The nanofabrication was conducted by polymethylmethacrylate (PMMA)-based e-beam lithography and fluorine-based reactive ion etching (RIE), followed by Al metallization and acetone lift-off. Three types of chemical residues were identified before nitridation: (1) fluorocarbons produced and accumulated mainly during the RIE process on the sidewalls of the nanopatterns; (2) a thick Al-bearing PMMA layer and/or (3) a thin PMMA residue layer owing to unsuccessful or partial lift-off of the e-beam unexposed PMMA between the nanopatterns. The fluorocarbons actively influenced the surface chemical composition of the nanopatterns by forming Al-F compounds. After nitridation, in the PMMA residue-free area, the Al-F compounds on the sidewalls were decomposed and transformed to AlN. The PMMA residues between the nanopatterns had no obvious influence on the surface chemical composition and nitridation properties of the Al nanopatterns; they were only partially decomposed by the nitridation. The regional surface morphology of the nanopatterns revealed by the secondary electron XPEEM was consistent with the scanning electron microscopy results.

  12. Tipping solutions: emerging 3D nano-fabrication/ -imaging technologies

    Directory of Open Access Journals (Sweden)

    Seniutinas Gediminas

    2017-06-01

    Full Text Available The evolution of optical microscopy from an imaging technique into a tool for materials modification and fabrication is now being repeated with other characterization techniques, including scanning electron microscopy (SEM), focused ion beam (FIB) milling/imaging, and atomic force microscopy (AFM). Fabrication and in situ imaging of materials undergoing three-dimensional (3D) nano-structuring within a 1−100 nm resolution window are required for future manufacturing of devices. This level of precision is critical in enabling the cross-over between different device platforms (e.g., from electronics to micro-/nano-fluidics and/or photonics) within future devices that will be interfacing with biological and molecular systems in a 3D fashion. Prospective trends in electron-, ion-, and nano-tip-based fabrication techniques are presented.

  13. A cloud platform for remote diagnosis of breast cancer in mammography by fusion of machine and human intelligence

    Science.gov (United States)

    Jiang, Guodong; Fan, Ming; Li, Lihua

    2016-03-01

    Mammography is the gold standard for breast cancer screening, reducing mortality by about 30%. The application of a computer-aided detection (CAD) system to assist a single radiologist is important to further improve mammographic sensitivity for breast cancer detection. In this study, the design and realization of a prototype for a cloud-based remote diagnosis system in mammography are proposed. To build this system, technologies including medical image information construction, cloud infrastructure, and a human-machine diagnosis model were utilized. Specifically, on one hand, the web platform for remote diagnosis was established with J2EE web technology, and the back end was realized with the Hadoop open-source framework. On the other hand, the storage system was built with Hadoop distributed file system (HDFS) technology, which enables users to easily develop and run applications on massive data and exploits the advantages of cloud computing, which is characterized by high efficiency, scalability, and low cost. In addition, the CAD system was realized through the MapReduce framework. The diagnosis module in this system implemented algorithms fusing machine and human intelligence. Specifically, we combined diagnoses drawn from doctors' experience with traditional CAD results using a man-machine intelligent fusion model based on alpha-integration and a multi-agent algorithm. Finally, applications of this system on different levels of the platform were also discussed. This diagnosis system will be of great importance for balancing health resources, lowering medical expenses, and improving diagnostic accuracy in basic medical institutions.
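
    As a hedged illustration of the fusion idea, the sketch below combines a radiologist's score with a CAD score using a weighted alpha-mean in the spirit of alpha-integration; the calibration, weights, and alpha value are assumptions, not the system's actual model.

```python
import numpy as np

def alpha_mean(scores, weights, alpha):
    """Amari-style alpha-integration of probability scores:
    f(x) = x**((1 - alpha) / 2) for alpha != 1, with f = log as alpha -> 1."""
    s, w = np.asarray(scores, float), np.asarray(weights, float)
    if alpha == 1.0:
        return float(np.exp((w * np.log(s)).sum() / w.sum()))
    p = (1.0 - alpha) / 2.0
    return float(((w * s**p).sum() / w.sum()) ** (1.0 / p))

doctor_score, cad_score = 0.70, 0.40       # assumed calibrated probabilities
print(alpha_mean([doctor_score, cad_score], [0.6, 0.4], alpha=-1.0))  # 0.58
```

    With alpha = -1 the rule reduces to a weighted arithmetic mean; moving alpha toward 1 shifts it toward a geometric mean, which penalises disagreement between the two sources more strongly.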

  14. GISpark: A Geospatial Distributed Computing Platform for Spatiotemporal Big Data

    Science.gov (United States)

    Wang, S.; Zhong, E.; Wang, E.; Zhong, Y.; Cai, W.; Li, S.; Gao, S.

    2016-12-01

    Geospatial data are growing exponentially because of the proliferation of cost-effective and ubiquitous positioning technologies such as global remote-sensing satellites and location-based devices. Analyzing large amounts of geospatial data can provide great value for both industrial and scientific applications. The data- and compute-intensive characteristics inherent in geospatial big data increasingly pose great challenges to technologies for storing, computing on, and analyzing data. Such challenges require a scalable and efficient architecture that can store, query, analyze, and visualize large-scale spatiotemporal data. Therefore, we developed GISpark, a geospatial distributed computing platform for processing large-scale vector, raster, and stream data. GISpark is constructed on the latest virtualized computing infrastructures and distributed computing architecture. OpenStack and Docker are used to build a multi-user hosting cloud computing infrastructure for GISpark. Virtual storage systems such as HDFS, Ceph, and MongoDB are combined and adopted for spatiotemporal data storage management. A Spark-based algorithm framework is developed for efficient parallel computing. Within this framework, SuperMap GIScript and various open-source GIS libraries can be integrated into GISpark. GISpark can also be integrated with scientific computing environments (e.g., Anaconda), interactive computing web applications (e.g., Jupyter Notebook), and machine learning tools (e.g., TensorFlow/Orange). The associated geospatial facilities of GISpark, in conjunction with the scientific computing environment, exploratory spatial data analysis tools, and temporal data management and analysis systems, make up a powerful geospatial computing tool. GISpark not only provides spatiotemporal big data processing capacity in the geospatial field, but also provides a spatiotemporal computational model and advanced geospatial visualization tools that serve other domains with spatial properties. We
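
    The kind of distributed aggregation such a platform performs can be sketched in a few lines of PySpark (GISpark's own APIs are considerably richer); the grid resolution and sample points below are illustrative assumptions.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("grid-count").getOrCreate()
points = spark.sparkContext.parallelize([
    (116.39, 39.91), (116.40, 39.92), (121.47, 31.23),   # (lon, lat) samples
])
counts = (points
          .map(lambda p: ((round(p[0], 1), round(p[1], 1)), 1))  # 0.1-degree cell
          .reduceByKey(lambda a, b: a + b))                      # distributed count
print(counts.collect())
spark.stop()
```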

  15. Development of a high-throughput microscale cell disruption platform for Pichia pastoris in rapid bioprocess design.

    Science.gov (United States)

    Bláha, Benjamin A F; Morris, Stephen A; Ogonah, Olotu W; Maucourant, Sophie; Crescente, Vincenzo; Rosenberg, William; Mukhopadhyay, Tarit K

    2018-01-01

    The time and cost benefits of miniaturized fermentation platforms can only be gained by employing complementary techniques facilitating high throughput at small sample volumes. Microbial cell disruption is a major bottleneck in experimental throughput and is often restricted to large processing volumes. Moreover, for rigid yeast species, such as Pichia pastoris, no effective high-throughput disruption methods exist. The development of an automated, miniaturized, high-throughput, noncontact, scalable platform based on adaptive focused acoustics (AFA) to disrupt P. pastoris and recover intracellular heterologous protein is described. Augmented modes of AFA were established by investigating vessel designs and a novel enzymatic pretreatment step. Three different modes of AFA were studied and compared to the performance of high-pressure homogenization. For each of these modes of cell disruption, response models were developed to account for five different performance criteria. Using multiple responses not only demonstrated that different operating parameters are required for different response optima, with the highest product purity requiring suboptimal values for other criteria, but also allowed AFA-based methods to mimic large-scale homogenization processes. These results demonstrate that AFA-mediated cell disruption can be used for a wide range of applications including buffer development, strain selection, fermentation process development, and whole bioprocess integration. © 2017 American Institute of Chemical Engineers Biotechnol. Prog., 34:130-140, 2018. © 2017 American Institute of Chemical Engineers.

  16. Economical and scalable synthesis of 6-amino-2-cyanobenzothiazole

    Directory of Open Access Journals (Sweden)

    Jacob R. Hauser

    2016-09-01

    Full Text Available 2-Cyanobenzothiazoles (CBTs) are useful building blocks for: (1) luciferin derivatives for bioluminescent imaging; and (2) handles for bioorthogonal ligations. A particularly versatile CBT is 6-amino-2-cyanobenzothiazole (ACBT), which has an amine handle for straightforward derivatisation. Here we present an economical and scalable synthesis of ACBT based on a cyanation catalysed by 1,4-diazabicyclo[2.2.2]octane (DABCO), and discuss its advantages for scale-up over previously reported routes.

  17. Architectural Techniques to Enable Reliable and Scalable Memory Systems

    OpenAIRE

    Nair, Prashant J.

    2017-01-01

    High capacity and scalable memory systems play a vital role in enabling our desktops, smartphones, and pervasive technologies like Internet of Things (IoT). Unfortunately, memory systems are becoming increasingly prone to faults. This is because we rely on technology scaling to improve memory density, and at small feature sizes, memory cells tend to break easily. Today, memory reliability is seen as the key impediment towards using high-density devices, adopting new technologies, and even bui...

  18. Efficient Delivery of Scalable Video Using a Streaming Class Model

    Directory of Open Access Journals (Sweden)

    Jason J. Quinlan

    2018-03-01

    Full Text Available When we couple the rise in video streaming with the growing number of portable devices (smart phones, tablets, laptops), we see an ever-increasing demand for high-definition video online while on the move. Wireless networks are inherently characterised by restricted shared bandwidth and relatively high error loss rates, thus presenting a challenge for the efficient delivery of high quality video. Additionally, mobile devices can support/demand a range of video resolutions and qualities. This demand for mobile streaming highlights the need for adaptive video streaming schemes that can adjust to available bandwidth and heterogeneity, and can provide graceful changes in video quality, all while respecting viewing satisfaction. In this context, the use of well-known scalable/layered media streaming techniques, commonly known as scalable video coding (SVC), is an attractive solution. SVC encodes a number of video quality levels within a single media stream. This has been shown to be an especially effective and efficient solution, but it fares badly in the presence of datagram losses. While multiple description coding (MDC) can reduce the effects of packet loss on scalable video delivery, the increased delivery cost is counterproductive for constrained networks. This situation is accentuated in cases where only the lower quality level is required. In this paper, we assess these issues and propose a new approach called Streaming Classes (SC), through which we can define a key set of quality levels, each of which can be delivered in a self-contained manner. This facilitates efficient delivery, yielding reduced transmission byte-cost for devices requiring lower quality, relative to MDC and Adaptive Layer Distribution (ALD) (42% and 76% reductions, respectively, for layer 2), while also maintaining high levels of consistent quality. We also illustrate how a selective packetisation technique can further reduce the effects of packet loss on viewable quality by
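
    The contrast between cumulative SVC layers and self-contained streaming classes can be sketched as a simple rate-selection rule; the bitrates below are illustrative assumptions, not figures from the paper.

```python
def svc_level(bw_kbps, layer_rates=(500, 700, 1300, 2000)):
    """SVC: layers are cumulative, so level L costs sum(layer_rates[:L+1])."""
    total, best = 0, 0
    for level, rate in enumerate(layer_rates):
        total += rate
        if total <= bw_kbps:
            best = level
    return best

def sc_level(bw_kbps, class_rates=(500, 1100, 2300, 3800)):
    """Streaming Classes: each level is self-contained, so only its own
    rate must fit, which simplifies delivery for lower-quality devices."""
    return max((lvl for lvl, r in enumerate(class_rates) if r <= bw_kbps),
               default=0)

assert svc_level(1500) == 1 and sc_level(1500) == 1
```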

  19. Towards scalable quantum communication and computation: Novel approaches and realizations

    Science.gov (United States)

    Jiang, Liang

    Quantum information science involves exploration of fundamental laws of quantum mechanics for information processing tasks. This thesis presents several new approaches towards scalable quantum information processing. First, we consider a hybrid approach to scalable quantum computation, based on an optically connected network of few-qubit quantum registers. Specifically, we develop a novel scheme for scalable quantum computation that is robust against various imperfections. To justify that nitrogen-vacancy (NV) color centers in diamond can be a promising realization of the few-qubit quantum register, we show how to isolate a few proximal nuclear spins from the rest of the environment and use them for the quantum register. We also demonstrate experimentally that the nuclear spin coherence is only weakly perturbed under optical illumination, which allows us to implement quantum logical operations that use the nuclear spins to assist the repetitive-readout of the electronic spin. Using this technique, we demonstrate more than two-fold improvement in signal-to-noise ratio. Apart from direct application to enhance the sensitivity of the NV-based nano-magnetometer, this experiment represents an important step towards the realization of robust quantum information processors using electronic and nuclear spin qubits. We then study realizations of quantum repeaters for long distance quantum communication. Specifically, we develop an efficient scheme for quantum repeaters based on atomic ensembles. We use dynamic programming to optimize various quantum repeater protocols. In addition, we propose a new protocol of quantum repeater with encoding, which efficiently uses local resources (about 100 qubits) to identify and correct errors, to achieve fast one-way quantum communication over long distances. Finally, we explore quantum systems with topological order. Such systems can exhibit remarkable phenomena such as quasiparticles with anyonic statistics and have been proposed as

  20. Parallel scalability of Hartree-Fock calculations

    Science.gov (United States)

    Chow, Edmond; Liu, Xing; Smelyanskiy, Mikhail; Hammond, Jeff R.

    2015-03-01

    Quantum chemistry is increasingly performed using large cluster computers consisting of multiple interconnected nodes. For a fixed molecular problem, the efficiency of a calculation usually decreases as more nodes are used, due to the cost of communication between the nodes. This paper empirically investigates the parallel scalability of Hartree-Fock calculations. The construction of the Fock matrix and the density matrix calculation are analyzed separately. For the former, we use a parallelization of Fock matrix construction based on a static partitioning of work followed by a work stealing phase. For the latter, we use density matrix purification from the linear scaling methods literature, but without using sparsity. When using large numbers of nodes for moderately sized problems, density matrix computations are network-bandwidth bound, making purification methods potentially faster than eigendecomposition methods.
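
    The purification step can be illustrated with a McWeeny iteration on a small test matrix; the toy Hamiltonian, occupation count, and initial-guess scaling below are assumptions (the eigendecomposition appears only to set up the demo's initial guess, not in the iteration itself).

```python
import numpy as np

def mcweeny_purify(D, iters=50):
    """McWeeny iteration D <- 3D^2 - 2D^3 drives eigenvalues to 0 or 1,
    using only matrix multiplications (no eigendecomposition)."""
    for _ in range(iters):
        D2 = D @ D
        D = 3.0 * D2 - 2.0 * (D2 @ D)
    return D

rng = np.random.default_rng(0)
H = rng.standard_normal((6, 6)); H = (H + H.T) / 2   # toy symmetric 'Fock' matrix
n_occ = 3
evals = np.linalg.eigvalsh(H)          # demo only: fixes mu for the initial guess
mu = (evals[n_occ - 1] + evals[n_occ]) / 2
lam = 0.5 / max(evals[-1] - mu, mu - evals[0])
D0 = 0.5 * np.eye(6) - lam * (H - mu * np.eye(6))    # spectrum mapped into [0, 1]
D = mcweeny_purify(D0)
print(np.trace(D))                     # ~= n_occ: projector onto occupied states
```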