WorldWideScience

Sample records for scalable nanofabrication platform

  1. Tip-Based Nanofabrication for Scalable Manufacturing

    Directory of Open Access Journals (Sweden)

    Huan Hu

    2017-03-01

    Full Text Available Tip-based nanofabrication (TBN) is a family of emerging nanofabrication techniques that use a nanometer-scale tip to fabricate nanostructures. In this review, we first introduce the history and development of TBN technology. We then briefly review various TBN techniques that use different physical or chemical mechanisms to fabricate features and discuss some of the state-of-the-art techniques. Subsequently, we focus on those TBN methods that have demonstrated potential to scale up the manufacturing throughput. Finally, we discuss several research directions that are essential for making TBN a scalable nano-manufacturing technology.

  2. Scalable nanofabrication of U-shaped nanowire resonators with tunable optical magnetism.

    Science.gov (United States)

    Zhou, Fan; Wang, Chen; Dong, Biqin; Chen, Xiangfan; Zhang, Zhen; Sun, Cheng

    2016-03-21

    Split-ring resonators have been studied extensively as a means of reconstituting magnetism, which diminishes at high electromagnetic frequencies in natural materials. However, the linear scaling of artificial magnetism breaks down at near-infrared frequencies, mainly due to the increasing contribution of self-inductance as the dimensions of the resonators shrink. Although alternative designs have enabled artificial magnetism at optical frequencies, their sophisticated configurations and fabrication procedures do not lend themselves to easy implementation. Here, we report scalable nanofabrication of U-shaped nanowire resonators (UNWRs) using the high-throughput nanotransfer printing method. By providing ample area for conducting the oscillating electric current, UNWRs overcome the saturation of the geometric scaling of artificial magnetism. We experimentally demonstrated coarse and fine tuning of LC resonances over a wide wavelength range, from 748 nm to 1600 nm. The added flexibility in transferring to other substrates makes the UNWR a versatile building block for creating functional metamaterials in three dimensions.
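
    The scaling breakdown described here can be summarized with the standard LC-circuit picture of a split-ring-type resonator; the equations below are a textbook illustration of the argument, not taken from the paper.

```latex
% Ideal geometric scaling for a resonator of characteristic size s:
% the geometric inductance and the capacitance both scale with s,
% so L_geom * C scales as s^2 and \lambda_res scales linearly with s.
\omega_{\mathrm{res}} = \frac{1}{\sqrt{(L_{\mathrm{geom}} + L_{\mathrm{self}})\,C}},
\qquad
\lambda_{\mathrm{res}} = \frac{2\pi c}{\omega_{\mathrm{res}}}
% The self-inductance term L_self does not scale away as s shrinks,
% so L saturates, and with it \lambda_res: this is the breakdown of
% linear scaling at near-infrared frequencies noted in the abstract.
```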

  3. Corfu: A Platform for Scalable Consistency

    OpenAIRE

    Wei, Michael

    2017-01-01

    Corfu is a platform for building systems which are extremely scalable, strongly consistent and robust. Unlike other systems which weaken guarantees to provide better performance, we have built Corfu with a resilient fabric tuned and engineered for scalability and strong consistency at its core: the Corfu shared log. On top of the Corfu log, we have built a layer of advanced data services which leverage the properties of the Corfu log. Today, Corfu is already replacing data platforms in commer...

  4. MOSDEN: A Scalable Mobile Collaborative Platform for Opportunistic Sensing Applications

    Directory of Open Access Journals (Sweden)

    Prem Prakash Jayaraman

    2014-05-01

    Full Text Available Mobile smartphones along with embedded sensors have become an efficient enabler for various mobile applications including opportunistic sensing. The hi-tech advances in smartphones are opening up a world of possibilities. This paper proposes a mobile collaborative platform called MOSDEN that enables and supports opportunistic sensing at run time. MOSDEN captures and shares sensor data across multiple apps, smartphones and users. MOSDEN supports the emerging trend of separating sensors from application-specific processing, storing and sharing. MOSDEN promotes reuse and re-purposing of sensor data, hence reducing the effort of developing novel opportunistic sensing applications. MOSDEN has been implemented on Android-based smartphones and tablets. Experimental evaluations validate the scalability and energy efficiency of MOSDEN and its suitability for real-world applications. The results of the evaluation and lessons learned are presented and discussed in this paper.

  5. Scalability of DL_POLY on High Performance Computing Platform

    Directory of Open Access Journals (Sweden)

    Mabule Samuel Mabakane

    2017-12-01

    Full Text Available This paper presents a case study on the scalability of several versions of the molecular dynamics code (DL_POLY) performed on South Africa's Centre for High Performance Computing e1350 IBM Linux cluster, Sun system and Lengau supercomputers. Within this study, different problem sizes were designed and the same chosen systems were employed in order to test the performance of DL_POLY using weak and strong scalability. It was found that the speed-up results for the small systems were better than for large systems on both Ethernet and Infiniband networks. However, simulations of large systems in DL_POLY performed well using the Infiniband network on the Lengau cluster as compared to the e1350 and Sun supercomputers.
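
    The weak and strong scalability tests mentioned above correspond to two standard efficiency metrics; a minimal sketch in Python, with hypothetical timings rather than the paper's measurements:

```python
def strong_scaling_efficiency(t1, tn, n):
    """Strong scaling: fixed total problem size spread over n cores.
    Ideal speedup is n, so efficiency = t1 / (n * tn)."""
    return t1 / (n * tn)

def weak_scaling_efficiency(t1, tn):
    """Weak scaling: problem size grows with n (fixed work per core).
    Ideally tn == t1, so efficiency = t1 / tn."""
    return t1 / tn

# Hypothetical DL_POLY timings (seconds) on 1 and 64 cores:
print(strong_scaling_efficiency(t1=3600.0, tn=75.0, n=64))  # 0.75
print(weak_scaling_efficiency(t1=3600.0, tn=4200.0))        # ~0.86
```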

  6. Through a Window, Brightly: A Review of Selected Nanofabricated Thin-Film Platforms for Spectroscopy, Imaging, and Detection.

    Science.gov (United States)

    Dwyer, Jason R; Harb, Maher

    2017-09-01

    We present a review of the use of selected nanofabricated thin films to deliver a host of capabilities and insights spanning bioanalytical and biophysical chemistry, materials science, and fundamental molecular-level research. We discuss approaches where thin films have been vital, enabling experimental studies using a variety of optical spectroscopies across the visible and infrared spectral range, electron microscopies, and related techniques such as electron energy loss spectroscopy, X-ray photoelectron spectroscopy, and single molecule sensing. We anchor this broad discussion by highlighting two particularly exciting exemplars: a thin-walled nanofluidic sample cell concept that has advanced the discovery horizons of ultrafast spectroscopy and of electron microscopy investigations of in-liquid samples; and a unique class of thin-film-based nanofluidic devices, designed around a nanopore, with expansive prospects for single molecule sensing. Free-standing, low-stress silicon nitride membranes are a canonical structural element for these applications, and we elucidate the fabrication and resulting features (including mechanical stability, optical properties, X-ray and electron scattering properties, and chemical nature) of this material in this format. We also outline design and performance principles and include a discussion of underlying material preparations and properties suitable for understanding the use of alternative thin-film materials such as graphene.

  7. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy

    Science.gov (United States)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
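
    The Amdahl's-law analysis cited above is easy to reproduce; a minimal sketch of the classic form of the law (the paper applies the symmetric-multicore variant), with the 64-core projection being purely illustrative:

```python
def amdahl_speedup(f, n):
    """Amdahl's law: speedup on n cores when a fraction f of the
    sequential runtime is perfectly parallelizable."""
    return 1.0 / ((1.0 - f) + f / n)

def parallel_fraction(speedup, n):
    """Invert Amdahl's law to estimate f from a measured speedup."""
    return (1.0 / speedup - 1.0) / (1.0 / n - 1.0)

# A 12-fold speedup on 12 cores implies an essentially fully
# parallelizable workload (f -> 1.0):
print(parallel_fraction(12.0, 12))   # 1.0
# Projected speedup on 64 cores if f were 0.99 (illustrative only):
print(amdahl_speedup(0.99, 64))      # ~39.3
```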

  8. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy.

    Science.gov (United States)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-19

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.

  9. Wolfram technologies as an integrated scalable platform for interactive learning

    Science.gov (United States)

    Kaurov, Vitaliy

    2012-02-01

    We rely on technology profoundly with the prospect of even greater integration in the future. Well known challenges in education are a technology-inadequate curriculum and many software platforms that are difficult to scale or interconnect. We'll review an integrated technology, much of it free, that addresses these issues for individuals and small schools as well as for universities. Topics include: Mathematica, a programming environment that offers a diverse range of functionality; natural language programming for getting started quickly and accessing data from Wolfram|Alpha; quick and easy construction of interactive courseware and scientific applications; partnering with publishers to create interactive e-textbooks; course assistant apps for mobile platforms; the computable document format (CDF); teacher-student and student-student collaboration on interactive projects and web publishing at the Wolfram Demonstrations site.

  10. A QFD-based optimization method for a scalable product platform

    Science.gov (United States)

    Luo, Xinggang; Tang, Jiafu; Kwong, C. K.

    2010-02-01

    In order to incorporate the customer into the early phase of the product development cycle and to better satisfy customers' requirements, this article adopts quality function deployment (QFD) for the optimal design of a scalable product platform. A five-step QFD-based method is proposed to determine the optimal values for platform engineering characteristics (ECs) and non-platform ECs of the products within a product family. First of all, the houses of quality (HoQs) for all product variants are developed and a QFD-based optimization approach is used to determine the optimal ECs for each product variant. Sensitivity analysis is performed for each EC with respect to overall customer satisfaction (OCS). Based on the obtained sensitivity indices of the ECs, a mathematical model is established to simultaneously optimize the values of the platform and the non-platform ECs. Finally, by comparing and analysing the optimal solutions with different numbers of platform ECs, the ECs with which the worst OCS loss can be avoided are selected as platform ECs. An illustrative example is used to demonstrate the feasibility of this method, and a comparison between the proposed method and a two-step approach is conducted on the example. The comparison shows that, as a kind of single-stage approach, the proposed method yields a better average degree of customer satisfaction due to the simultaneous optimization of platform and non-platform ECs.
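
    As an illustration of the final selection step, here is a sketch of choosing platform ECs by smallest sensitivity to overall customer satisfaction; the EC names and sensitivity values are invented, and the paper's actual method solves a joint optimization model rather than this greedy shortcut:

```python
# Hypothetical sensitivity indices d(OCS)/d(EC deviation) per EC.
# Commonalizing an EC across the family costs some OCS; ECs with
# low sensitivity are the cheapest to turn into platform ECs.
sensitivities = {
    "motor_power": 0.42,
    "housing_size": 0.08,
    "battery_capacity": 0.31,
    "display_size": 0.05,
}

def pick_platform_ecs(sens, k):
    """Choose the k ECs whose commonalization hurts OCS the least,
    i.e., those with the smallest sensitivity indices."""
    return sorted(sens, key=sens.get)[:k]

print(pick_platform_ecs(sensitivities, k=2))
# ['display_size', 'housing_size']
```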

  11. Energy Harvesting Using PVDF Piezoelectric Nanofabric

    Science.gov (United States)

    Shafii, Chakameh

    Energy harvesting using piezoelectric nanomaterials provides an opportunity for advancement towards self-powered electronics. The fabrication complexities and limited power output of these nano/micro generators have hindered these advancements thus far. This thesis presents a fabrication technique based on electrospinning using a grounded cylinder as the collector. This method addresses the difficulties with the production and scalability of the nanogenerators. The non-aligned nanofibers are woven into a textile form on the cylindrical drum, from which the fabric can be easily removed. The electrical poling and mechanical stretching induced by the electric field and the drum rotation increase the concentration of the piezoelectric beta phase in the PVDF nanofabric. The nanofabric is placed between two layers of polyethylene terephthalate (PET) that have interdigitated electrodes painted on them with silver paint. Applying a continuous load onto the flexible PVDF nanofabric at 35 Hz produces a peak voltage of 320 mV and a maximum power of 2200 pW/cm².

  12. The EDRN knowledge environment: an open source, scalable informatics platform for biological sciences research

    Science.gov (United States)

    Crichton, Daniel; Mahabal, Ashish; Anton, Kristen; Cinquini, Luca; Colbert, Maureen; Djorgovski, S. George; Kincaid, Heather; Kelly, Sean; Liu, David

    2017-05-01

    We describe here the knowledge environment of the Early Detection Research Network (EDRN) for Cancer. It is an open source platform built by NASA's Jet Propulsion Laboratory with contributions from the California Institute of Technology and the Geisel School of Medicine at Dartmouth. It uses tools like Apache OODT, Plone, and Solr, and borrows heavily from the ontological infrastructure of JPL's Planetary Data System. It has accumulated data on hundreds of thousands of biospecimens and serves over 1300 registered users across the National Cancer Institute (NCI). The scalable computing infrastructure is built so that we can reach out to other agencies, provide homogeneous access, and provide seamless analytics support and bioinformatics tools through community engagement.

  13. Nanofabrication beyond electronics.

    Science.gov (United States)

    Wang, YuHuang; Mirkin, Chad A; Park, So-Jung

    2009-05-26

    This Nano Focus article reviews recent developments in nanofabrication based on invited talks given at the "Chemical Methods of Nanofabrication" symposium, which was organized by the authors and presented at the 237th ACS National Meeting and Exhibition as one of seven symposia within the meeting theme, "Nanoscience: Challenges for the Future". The three-day symposium included 25 experts from academia, national laboratories, and industry from around the world, to discuss current progress and future directions in nanofabrication. We highlight several of the key results discussed and future directions and challenges in this rapidly changing field.

  14. Application of a Scalable Plant Transient Gene Expression Platform for Malaria Vaccine Development.

    Science.gov (United States)

    Spiegel, Holger; Boes, Alexander; Voepel, Nadja; Beiss, Veronique; Edgue, Gueven; Rademacher, Thomas; Sack, Markus; Schillberg, Stefan; Reimann, Andreas; Fischer, Rainer

    2015-01-01

    Despite decades of intensive research efforts there is currently no vaccine that provides sustained sterile immunity against malaria. In this context, a large number of targets from the different stages of the Plasmodium falciparum life cycle have been evaluated as vaccine candidates. None of these candidates has fulfilled expectations, and as long as we lack a single target that induces strain-transcending protective immune responses, combining key antigens from different life cycle stages seems to be the most promising route toward the development of efficacious malaria vaccines. After the identification of potential targets using approaches such as omics-based technology and reverse immunology, the rapid expression, purification, and characterization of these proteins, as well as the generation and analysis of fusion constructs combining different promising antigens or antigen domains before committing to expensive and time-consuming clinical development, represents one of the bottlenecks in the vaccine development pipeline. The production of recombinant proteins by transient gene expression in plants is a robust and versatile alternative to cell-based microbial and eukaryotic production platforms. The transfection of plant tissues and/or whole plants using Agrobacterium tumefaciens offers a low technical entry barrier, low costs, and a high degree of flexibility embedded within a rapid and scalable workflow. Recombinant proteins can easily be targeted to different subcellular compartments according to their physicochemical requirements, including post-translational modifications, to ensure optimal yields of high quality product, and to support simple and economical downstream processing. Here, we demonstrate the use of a plant transient expression platform based on transfection with A. tumefaciens as an essential component of a malaria vaccine development workflow involving screens for expression, solubility, and stability using fluorescent fusion proteins.

  15. Application of a scalable plant transient gene expression platform for malaria vaccine development

    Directory of Open Access Journals (Sweden)

    Holger eSpiegel

    2015-12-01

    Full Text Available Despite decades of intensive research efforts there is currently no vaccine that provides sustained sterile immunity against malaria. In this context, a large number of targets from the different stages of the Plasmodium falciparum life cycle have been evaluated as vaccine candidates. None of these candidates has fulfilled expectations, and as long as we lack a single target that induces strain-transcending protective immune responses, combining key antigens from different life cycle stages seems to be the most promising route towards the development of efficacious malaria vaccines. After the identification of potential targets using approaches such as omics-based technology and reverse immunology, the rapid expression, purification and characterization of these proteins, as well as the generation and analysis of fusion constructs combining different promising antigens or antigen domains before committing to expensive and time-consuming clinical development, represents one of the bottlenecks in the vaccine development pipeline. The production of recombinant proteins by transient gene expression in plants is a robust and versatile alternative to cell-based microbial and eukaryotic production platforms. The transfection of plant tissues and/or whole plants using Agrobacterium tumefaciens offers a low technical entry barrier, low costs and a high degree of flexibility embedded within a rapid and scalable workflow. Recombinant proteins can easily be targeted to different subcellular compartments according to their physicochemical requirements, including post-translational modifications, to ensure optimal yields of high quality product, and to support simple and economical downstream processing. Here we demonstrate the use of a plant transient expression platform based on transfection with A. tumefaciens as an essential component of a malaria vaccine development workflow involving screens for expression, solubility and stability using fluorescent fusion proteins.

  16. Scalable, high-performance 3D imaging software platform: system architecture and application to virtual colonoscopy.

    Science.gov (United States)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin

    2012-01-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. In this work, we have developed a software platform that is designed to support high-performance 3D medical image processing for a wide range of applications using increasingly available and affordable commodity computing systems: multi-core, clusters, and cloud computing systems. To achieve scalable, high-performance computing, our platform (1) employs size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D image processing algorithms; (2) supports task scheduling for efficient load distribution and balancing; and (3) consists of layered parallel software libraries that allow a wide range of medical applications to share the same functionalities. We evaluated the performance of our platform by applying it to an electronic cleansing system in virtual colonoscopy, with initial experimental results showing a 10-fold performance improvement on an 8-core workstation over the original sequential implementation of the system.
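
    A minimal sketch of the platform's core idea, size-adaptive distributable block volumes feeding a task pool; this assumes nothing about the authors' actual implementation and substitutes a trivial kernel for a real 3D image processing algorithm:

```python
import numpy as np
from multiprocessing import Pool

def split_into_blocks(volume, block_shape):
    """Yield (origin, block) pairs tiling the volume; edge blocks adapt
    their size to whatever remains, so any volume shape is covered."""
    bz, by, bx = block_shape
    Z, Y, X = volume.shape
    for z in range(0, Z, bz):
        for y in range(0, Y, by):
            for x in range(0, X, bx):
                yield (z, y, x), volume[z:z+bz, y:y+by, x:x+bx]

def process_block(args):
    origin, block = args
    return origin, block.mean()  # stand-in for a real image processing kernel

if __name__ == "__main__":
    vol = np.random.rand(256, 256, 256).astype(np.float32)
    with Pool() as pool:  # tasks are distributed over available cores
        results = pool.map(process_block, split_into_blocks(vol, (64, 64, 64)))
    print(len(results), "blocks processed")  # 64 blocks
```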

  17. An FPGA Scalable Software Defined Radio Platform Design for Educational and Research Purposes

    Directory of Open Access Journals (Sweden)

    Marcos Hervás

    2016-06-01

    Full Text Available In a digital modem design, the integration of the Analog to Digital Converters (ADCs) and Digital to Analog Converters (DACs) with the core processor is usually a major issue for the designer. In this paper, an FPGA-based scalable Software Defined Radio platform built around a Spartan-6 as the control unit is presented, developed for both educational and research purposes, which can fit different application requirements in terms of analog front-end performance, processing unit, and cost. The resolution and sampling frequency of the analog front-end are its main adjustable parameters. The processing core requirements involve the FPGA and the communication ports. A multidisciplinary working group was required to design a high performance system for both the analog front-end and the digital processing core in terms of signal integrity and electromagnetic compatibility. The platform has 5 different peripheral ports ranging from 16 kbps to 2.5 Gbps. The communication ports allow our students to develop a wide range of applications for both on-site and online courses, applying a learning-by-doing teaching methodology on a real system that also helps them acquire transversal skills.
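
    A quick way to see why the peripheral ports must span such a wide range is to compute the raw front-end data rate; a small sketch with hypothetical ADC configurations (the port range is from the abstract, the configurations are not):

```python
def frontend_throughput_bps(sample_rate_hz, resolution_bits, channels=1):
    """Raw data rate an ADC front-end pushes into the FPGA; the chosen
    peripheral port must sustain at least this rate."""
    return sample_rate_hz * resolution_bits * channels

# Hypothetical front-ends against the platform's 16 kbps - 2.5 Gbps ports:
print(frontend_throughput_bps(125e6, 14))  # 1.75e9 -> needs the multi-Gbps port
print(frontend_throughput_bps(48e3, 16))   # 768000 -> a low-rate port suffices
```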

  18. Flash NanoPrecipitation as a scalable platform for the production of structured and hybrid nanocolloids

    Science.gov (United States)

    Lee, Victoria; Sosa, Chris; Liu, Rui; Prud'Homme, Robert; Priestley, Rodney

    Geometrically-structured polymer nanocolloids have been widely investigated for their unique properties, which are derived from their anisotropy. Decoration with inorganic nanoparticles in a controlled manner could induce another level of functionality into structured nanocolloids that could enable applications in fields such as rewritable electronics and biphasic catalysis. Here, Flash NanoPrecipitation (FNP) is demonstrated as a one-step, scalable process platform for manufacturing hybrid polymer-inorganic nanocolloids in which one phase is selectively decorated with a metal nanocatalyst by tuning the interactions between the feed ingredients. For instance, by modifying polymer end-group functionality, we are able to tune the location of the metal nanocatalyst, including placement at the Janus nanocolloid circumference. Moreover, the addition of surfactant to the system is shown to transform the Janus nanocolloid structure from spherical to dumbbell or snowman while still maintaining control over nanocatalyst location. Considering the flexibility and continuous nature of the FNP process, it offers an industrial-scale platform for manufacturing nanomaterials that are anticipated to impact many technologies.

  19. Scalable Indium Phosphide Thin-Film Nanophotonics Platform for Photovoltaic and Photoelectrochemical Devices.

    Science.gov (United States)

    Lin, Qingfeng; Sarkar, Debarghya; Lin, Yuanjing; Yeung, Matthew; Blankemeier, Louis; Hazra, Jubin; Wang, Wei; Niu, Shanyuan; Ravichandran, Jayakanth; Fan, Zhiyong; Kapadia, Rehan

    2017-05-23

    Recent developments in nanophotonics have provided a clear roadmap for improving the efficiency of photonic devices through control over absorption and emission of devices. These advances could prove transformative for a wide variety of devices, such as photovoltaics, photoelectrochemical devices, photodetectors, and light-emitting diodes. However, it is often challenging to physically create the nanophotonic designs required to engineer the optical properties of devices. Here, we present a platform based on crystalline indium phosphide that enables thin-film nanophotonic structures with physical morphologies that are impossible to achieve through conventional state-of-the-art material growth techniques. Specifically, nanostructured InP thin films have been demonstrated on non-epitaxial alumina inverted nanocone (i-cone) substrates via a low-cost and scalable thin-film vapor-liquid-solid growth technique. In this process, indium films are first evaporated onto the i-cone structures in the desired morphology, followed by a high-temperature step that causes a phase transformation of the indium into indium phosphide, preserving the original morphology of the deposited indium. Through this approach, a wide variety of nanostructured film morphologies are accessible using only control over evaporation process variables. Critically, the as-grown nanotextured InP thin films demonstrate excellent optoelectronic properties, suggesting this platform is promising for future high-performance nanophotonic devices.

  20. Development of a scalable generic platform for adaptive optics real time control

    Science.gov (United States)

    Surendran, Avinash; Burse, Mahesh P.; Ramaprakash, A. N.; Parihar, Padmakar

    2015-06-01

    The main objective of the present project is to explore the viability of an adaptive optics control system based exclusively on Field Programmable Gate Arrays (FPGAs), making strong use of their parallel processing capability. In an Adaptive Optics (AO) system, the generation of the Deformable Mirror (DM) control voltages from the Wavefront Sensor (WFS) measurements is usually through the multiplication of the wavefront slopes with a predetermined reconstructor matrix. The ability to access several hundred hard multipliers and memories concurrently in an FPGA allows performance far beyond that of a modern CPU or GPU for tasks with a well-defined structure such as Adaptive Optics control. The target of the current project is to generate a signal for real-time wavefront correction from the signals coming from a Wavefront Sensor, with the system flexible enough to accommodate all current wavefront sensing techniques and also the different methods used for wavefront compensation. The system should also accommodate different data transmission protocols (such as Ethernet, USB, and IEEE 1394) for transmitting data to and from the FPGA device, thus providing a more flexible platform for Adaptive Optics control. Preliminary simulation results for the formulation of the platform and a design of a fully scalable slope computer are presented.
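
    The real-time core of such a system reduces to a matrix-vector product, which is what maps so well onto FPGA hard multipliers; a numpy sketch of the reference computation, with hypothetical WFS and DM dimensions:

```python
import numpy as np

# Hypothetical 20x20-subaperture Shack-Hartmann WFS (x and y slopes)
# driving a 241-actuator deformable mirror.
n_slopes, n_actuators = 2 * 20 * 20, 241
R = np.random.randn(n_actuators, n_slopes).astype(np.float32)  # placeholder reconstructor
s = np.random.randn(n_slopes).astype(np.float32)               # measured slope vector

# On the FPGA, the rows of R are evaluated concurrently by hard DSP
# slices; in numpy the same computation is a single matrix product.
v = R @ s
print(v.shape)  # (241,) DM command vector
```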

  1. Design and Evaluation of a Scalable and Reconfigurable Multi-Platform System for Acoustic Imaging

    Directory of Open Access Journals (Sweden)

    Alberto Izquierdo

    2016-10-01

    Full Text Available This paper proposes a scalable and multi-platform framework for signal acquisition and processing, which allows for the generation of acoustic images using planar arrays of MEMS (Micro-Electro-Mechanical Systems) microphones with low development and deployment costs. Acoustic characterization of MEMS sensors was performed, and the beam pattern of a module based on an 8 × 8 planar array, and of several clusters of modules, was obtained. A flexible framework, formed by an FPGA, an embedded processor, a desktop computer, and a graphics processing unit, was defined. The processing times of the algorithms used to obtain the acoustic images, including signal processing and wideband beamforming via FFT, were evaluated in each subsystem of the framework. Based on this analysis, three frameworks are proposed, defined by the specific subsystems used and the algorithms shared. Finally, a set of acoustic images obtained from sound reflected from a person is presented as a case study in the field of biometric identification. These results reveal the feasibility of the proposed system.
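
    A minimal sketch of frequency-domain (FFT-based) delay-and-sum beamforming for a planar array like the 8 × 8 module described; the array pitch and all signal parameters are illustrative assumptions, not the paper's values:

```python
import numpy as np

# Geometry: 8 x 8 planar MEMS array with an assumed 10 mm pitch.
c = 343.0                                    # speed of sound, m/s
pitch = 0.01
mx, my = np.meshgrid(np.arange(8), np.arange(8))
mic_xy = np.stack([mx.ravel(), my.ravel()], axis=1) * pitch   # (64, 2) positions

def steer_power(X, freqs, theta, phi):
    """Wideband delay-and-sum output power for one steering direction.
    X: (64, n_bins) FFT of one frame per microphone; theta/phi in radians."""
    u = np.array([np.sin(theta) * np.cos(phi), np.sin(theta) * np.sin(phi)])
    delays = mic_xy @ u / c                                    # (64,) seconds
    W = np.exp(2j * np.pi * freqs[None, :] * delays[:, None])  # phase alignment
    return np.abs((W * X).sum(axis=0)).sum()  # sum over mics, then over bins

# One pixel of an acoustic image = steer_power at one (theta, phi):
fs, nfft = 48000, 256
frames = np.fft.rfft(np.random.randn(64, nfft), axis=1)       # stand-in data
freqs = np.fft.rfftfreq(nfft, 1.0 / fs)
print(steer_power(frames, freqs, theta=0.3, phi=0.0))
```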

  2. Nanofabrication with Pulsed Lasers

    Directory of Open Access Journals (Sweden)

    Kabashin AV

    2010-01-01

    Full Text Available An overview of pulsed laser-assisted methods for nanofabrication, which are currently developed in our Institute (LP3), is presented. The methods encompass a variety of possibilities for material nanostructuring offered by laser-matter interactions and involve either the nanostructuring of the laser-illuminated surface itself, as in cases of direct laser ablation or laser plasma-assisted treatment of semiconductors to form light-absorbing and light-emitting nano-architectures as well as periodic nanoarrays, or the laser-assisted production of nanoclusters and their controlled growth in a gaseous or liquid medium to form nanostructured films or colloidal nanoparticles. Nanomaterials synthesized by laser-assisted methods have a variety of unique properties, not reproducible by any other route, and are of importance for photovoltaics, optoelectronics, biological sensing, imaging and therapeutics.

  3. Nanofabrication principles, capabilities and limits

    CERN Document Server

    Cui, Zheng

    2017-01-01

    This second edition of Nanofabrication is one of the most comprehensive introductions to nanofabrication technologies and processes. A practical guide and reference, this book introduces readers to all of the developed technologies that are capable of making structures below 100 nm. The principle of each technology is introduced and illustrated with minimal mathematics. Also analyzed are the capabilities of each technology in making sub-100 nm structures, and the limits that prevent a technology from going further down the dimensional scale. This book provides readers with a toolkit that will help with any of their nanofabrication challenges.

  4. Silicon Micro- and Nanofabrication for Medicine

    Science.gov (United States)

    Fine, Daniel; Goodall, Randy; Bansal, Shyam S.; Chiappini, Ciro; Hosali, Sharath; van de Ven, Anne L.; Srinivasan, Srimeenkashi; Liu, Xuewu; Godin, Biana; Brousseau, Louis; Yazdi, Iman K.; Fernandez-Moure, Joseph; Tasciotti, Ennio; Wu, Hung-Jen; Hu, Ye; Klemm, Steve; Ferrari, Mauro

    2013-01-01

    This manuscript constitutes a review of several innovative biomedical technologies fabricated using the precision and accuracy of silicon micro- and nanofabrication. The technologies to be reviewed are subcutaneous nanochannel drug delivery implants for the continuous tunable zero-order release of therapeutics, multi-stage logic embedded vectors for the targeted systemic distribution of both therapeutic and imaging contrast agents, silicon and porous silicon nanowires for investigating cellular interactions and processes as well as for molecular and drug delivery applications, porous silicon (pSi) as inclusions into biocomposites for tissue engineering, especially as it applies to bone repair and regrowth, and porous silica chips for proteomic profiling. In the case of the biocomposites, the specifically designed pSi inclusions not only add to the structural robustness, but can also promote tissue and bone regrowth, fight infection, and reduce pain by releasing stimulating factors and other therapeutic agents stored within their porous network. The common material thread throughout all of these constructs, silicon and its associated dielectrics (silicon dioxide, silicon nitride, etc.), can be precisely and accurately machined using the same scalable micro- and nanofabrication protocols that are ubiquitous within the semiconductor industry. These techniques lend themselves to the high throughput production of exquisitely defined and monodispersed nanoscale features that should eliminate architectural randomness as a source of experimental variation thereby potentially leading to more rapid clinical translation. PMID:23584841

  5. CIC portal: a Collaborative and Scalable Integration Platform for High Availability Grid Operations

    OpenAIRE

    Cavalli, Alessandro; Cordier, Hélène; L'Orphelin, Cyril; Reynaud, Sylvain; Mathieu, Gilles; Pagano, Alfredo; Aidel, Osman

    2016-01-01

    EGEE, along with its sister project LCG, manages the world's largest Grid production infrastructure, which today spans over 260 sites in more than 40 countries. Just as building such a system requires novel approaches, its management also requires innovation. From an operational point of view, the first challenge we face is to provide scalable procedures and tools able to monitor the ever expanding infrastructure and the constant evolution of the needs. The se...

  6. New Tools for New Research in Psychiatry: A Scalable and Customizable Platform to Empower Data Driven Smartphone Research.

    Science.gov (United States)

    Torous, John; Kiang, Mathew V; Lorme, Jeanette; Onnela, Jukka-Pekka

    2016-05-05

    A longstanding barrier to progress in psychiatry, both in clinical settings and research trials, has been the persistent difficulty of accurately and reliably quantifying disease phenotypes. Mobile phone technology combined with data science has the potential to offer medicine a wealth of additional information on disease phenotypes, but the large majority of existing smartphone apps are not intended for use as biomedical research platforms and, as such, do not generate research-quality data. Our aim is not the creation of yet another app per se but rather the establishment of a platform to collect research-quality smartphone raw sensor and usage pattern data. Our ultimate goal is to develop statistical, mathematical, and computational methodology to enable us and others to extract biomedical and clinical insights from smartphone data. We report on the development and early testing of Beiwe, a research platform featuring a study portal, smartphone app, database, and data modeling and analysis tools designed and developed specifically for transparent, customizable, and reproducible biomedical research use, in particular for the study of psychiatric and neurological disorders. We also outline a proposed study using the platform for patients with schizophrenia. We demonstrate the passive data capabilities of the Beiwe platform and early results of its analytical capabilities. Smartphone sensors and phone usage patterns, when coupled with appropriate statistical learning tools, are able to capture various social and behavioral manifestations of illnesses, in naturalistic settings, as lived and experienced by patients. The ubiquity of smartphones makes this type of moment-by-moment quantification of disease phenotypes highly scalable and, when integrated within a transparent research platform, presents tremendous opportunities for research, discovery, and patient health.

  7. Automata-based Optimization of Interaction Protocols for Scalable Multicore Platforms (Technical Report)

    NARCIS (Netherlands)

    S.-S.T.Q. Jongmans (Sung-Shik); S. Halle; F. Arbab (Farhad)

    2014-01-01

    Multicore platforms offer the opportunity for utilizing massively parallel resources. However, programming them is challenging. We need good compilers that optimize commonly occurring synchronization/interaction patterns. To facilitate optimization, a programming language must convey

  8. Molecular engineering with artificial atoms: designing a material platform for scalable quantum spintronics and photonics

    Science.gov (United States)

    Doty, Matthew F.; Ma, Xiangyu; Zide, Joshua M. O.; Bryant, Garnett W.

    2017-09-01

    Self-assembled InAs Quantum Dots (QDs) are often called "artificial atoms" and have long been of interest as components of quantum photonic and spintronic devices. Although there has been substantial progress in demonstrating optical control of both single spins confined to a single QD and entanglement between two separated QDs, the path toward scalable quantum photonic devices based on spins remains challenging. Quantum Dot Molecules, which consist of two closely-spaced InAs QDs, have unique properties that can be engineered with the solid state analog of molecular engineering in which the composition, size, and location of both the QDs and the intervening barrier are controlled during growth. Moreover, applied electric, magnetic, and optical fields can be used to modulate, in situ, both the spin and optical properties of the molecular states. We describe how the unique photonic properties of engineered Quantum Dot Molecules can be leveraged to overcome long-standing challenges to the creation of scalable quantum devices that manipulate single spins via photonics.

  9. JobCenter: an open source, cross-platform, and distributed job queue management system optimized for scalability and versatility.

    Science.gov (United States)

    Jaschob, Daniel; Riffle, Michael

    2012-07-30

    Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. JobCenter is a client-server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or "in the cloud") and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/.
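
    The client-driven communication model described above can be sketched as a simple polling loop; the endpoint names and payloads here are hypothetical stand-ins, not JobCenter's actual API (the Python requests package is assumed for brevity):

```python
import time
import requests

SERVER = "https://jobcenter.example.org/api"        # hypothetical server URL
CAPABILITIES = ["blast", "alignment"]  # job types this worker is configured to run

def run(job):
    print("running job", job["id"], job["type"])    # stand-in for real execution

# Worker-initiated polling: no inbound connections are ever needed, so the
# worker can sit behind a firewall or in the cloud, and the server hands
# out work to whichever worker asks first (inherent load balancing).
while True:
    resp = requests.post(f"{SERVER}/request-job", json={"types": CAPABILITIES})
    job = resp.json()
    if job:                                          # server handed us work
        run(job)
        requests.post(f"{SERVER}/complete-job", json={"id": job["id"]})
    else:
        time.sleep(30)                               # idle: poll again later
```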

  10. JobCenter: an open source, cross-platform, and distributed job queue management system optimized for scalability and versatility

    Directory of Open Access Journals (Sweden)

    Jaschob Daniel

    2012-07-01

    Full Text Available Abstract Background: Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. Results: JobCenter is a client–server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or “in the cloud”) and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. Conclusions: JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/.

  11. MOLNs: A Cloud Platform for Interactive, Reproducible, and Scalable Spatial Stochastic Computational Experiments in Systems Biology Using PyURDME

    Science.gov (United States)

    Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas

    2017-01-01

    Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments. PMID:28190948

  12. A Universal and Robust Integrated Platform for the Scalable Production of Human Cardiomyocytes From Pluripotent Stem Cells.

    Science.gov (United States)

    Fonoudi, Hananeh; Ansari, Hassan; Abbasalizadeh, Saeed; Larijani, Mehran Rezaei; Kiani, Sahar; Hashemizadeh, Shiva; Zarchi, Ali Sharifi; Bosman, Alexis; Blue, Gillian M; Pahlavan, Sara; Perry, Matthew; Orr, Yishay; Mayorchak, Yaroslav; Vandenberg, Jamie; Talkhabi, Mahmood; Winlaw, David S; Harvey, Richard P; Aghdami, Nasser; Baharvand, Hossein

    2015-12-01

    Recent advances in the generation of cardiomyocytes (CMs) from human pluripotent stem cells (hPSCs), in conjunction with the promising outcomes from preclinical and clinical studies, have raised new hopes for cardiac cell therapy. We report the development of a scalable, robust, and integrated differentiation platform for large-scale production of hPSC-CM aggregates in a stirred suspension bioreactor as a single-unit operation. Precise modulation of the differentiation process by small molecule activation of WNT signaling, followed by inactivation of transforming growth factor-β and WNT signaling and activation of sonic hedgehog signaling in hPSCs as size-controlled aggregates led to the generation of approximately 100% beating CM spheroids containing virtually pure (∼90%) CMs in 10 days. Moreover, the developed differentiation strategy was universal, as demonstrated by testing multiple hPSC lines (5 human embryonic stem cell lines and 4 human induced pluripotent stem cell lines) without cell sorting or selection. The produced hPSC-CMs successfully expressed canonical lineage-specific markers and showed high functionality, as demonstrated by microelectrode array and electrophysiology tests. This robust and universal platform could become a valuable tool for the mass production of functional hPSC-CMs as a prerequisite for realizing their promising potential for therapeutic and industrial applications, including drug discovery and toxicity assays. Recent advances in the generation of cardiomyocytes (CMs) from human pluripotent stem cells (hPSCs) and the development of novel cell therapy strategies using hPSC-CMs (e.g., cardiac patches) in conjunction with promising preclinical and clinical studies, have raised new hopes for patients with end-stage cardiovascular disease, which remains the leading cause of morbidity and mortality globally. In this study, a simplified, scalable, robust, and integrated differentiation platform was developed to generate clinical grade hPSC-CMs as cell

  13. Scalable Predictive Analysis in Critically Ill Patients Using a Visual Open Data Analysis Platform

    Science.gov (United States)

    Poucke, Sven Van; Zhang, Zhongheng; Schmitz, Martin; Vukicevic, Milan; Laenen, Margot Vander; Celi, Leo Anthony; Deyne, Cathy De

    2016-01-01

    With the accumulation of large amounts of health related data, predictive analytics could stimulate the transformation of reactive medicine towards Predictive, Preventive and Personalized (PPPM) Medicine, ultimately affecting both cost and quality of care. However, the high dimensionality and high complexity of the data involved prevent data-driven methods from easy translation into clinically relevant models. Additionally, the application of cutting edge predictive methods and data manipulation require substantial programming skills, limiting their direct exploitation by medical domain experts. This leaves a gap between potential and actual data usage. In this study, the authors address this problem by focusing on open, visual environments, suited to be applied by the medical community. Moreover, we review code-free applications of big data technologies. As a showcase, a framework was developed for the meaningful use of data from critical care patients by integrating the MIMIC-II database in a data mining environment (RapidMiner) supporting scalable predictive analytics using visual tools (RapidMiner’s Radoop extension). Guided by the CRoss-Industry Standard Process for Data Mining (CRISP-DM), the ETL process (Extract, Transform, Load) was initiated by retrieving data from the MIMIC-II tables of interest. As a use case, the correlation of platelet count and ICU survival was quantitatively assessed. Using visual tools for ETL on Hadoop and predictive modeling in RapidMiner, we developed robust processes for automatic building, parameter optimization and evaluation of various predictive models, under different feature selection schemes. Because these processes can be easily adopted in other projects, this environment is attractive for scalable predictive analytics in health research. PMID:26731286

  14. Scalable Predictive Analysis in Critically Ill Patients Using a Visual Open Data Analysis Platform.

    Directory of Open Access Journals (Sweden)

    Sven Van Poucke

    Full Text Available With the accumulation of large amounts of health related data, predictive analytics could stimulate the transformation of reactive medicine towards Predictive, Preventive and Personalized (PPPM) Medicine, ultimately affecting both cost and quality of care. However, the high dimensionality and high complexity of the data involved prevent data-driven methods from easy translation into clinically relevant models. Additionally, the application of cutting edge predictive methods and data manipulation require substantial programming skills, limiting their direct exploitation by medical domain experts. This leaves a gap between potential and actual data usage. In this study, the authors address this problem by focusing on open, visual environments, suited to be applied by the medical community. Moreover, we review code-free applications of big data technologies. As a showcase, a framework was developed for the meaningful use of data from critical care patients by integrating the MIMIC-II database in a data mining environment (RapidMiner) supporting scalable predictive analytics using visual tools (RapidMiner's Radoop extension). Guided by the CRoss-Industry Standard Process for Data Mining (CRISP-DM), the ETL process (Extract, Transform, Load) was initiated by retrieving data from the MIMIC-II tables of interest. As a use case, the correlation of platelet count and ICU survival was quantitatively assessed. Using visual tools for ETL on Hadoop and predictive modeling in RapidMiner, we developed robust processes for automatic building, parameter optimization and evaluation of various predictive models, under different feature selection schemes. Because these processes can be easily adopted in other projects, this environment is attractive for scalable predictive analytics in health research.

  15. Scalable Predictive Analysis in Critically Ill Patients Using a Visual Open Data Analysis Platform.

    Science.gov (United States)

    Van Poucke, Sven; Zhang, Zhongheng; Schmitz, Martin; Vukicevic, Milan; Laenen, Margot Vander; Celi, Leo Anthony; De Deyne, Cathy

    2016-01-01

    With the accumulation of large amounts of health related data, predictive analytics could stimulate the transformation of reactive medicine towards Predictive, Preventive and Personalized (PPPM) Medicine, ultimately affecting both cost and quality of care. However, the high dimensionality and high complexity of the data involved prevent data-driven methods from easy translation into clinically relevant models. Additionally, the application of cutting edge predictive methods and data manipulation require substantial programming skills, limiting their direct exploitation by medical domain experts. This leaves a gap between potential and actual data usage. In this study, the authors address this problem by focusing on open, visual environments, suited to be applied by the medical community. Moreover, we review code-free applications of big data technologies. As a showcase, a framework was developed for the meaningful use of data from critical care patients by integrating the MIMIC-II database in a data mining environment (RapidMiner) supporting scalable predictive analytics using visual tools (RapidMiner's Radoop extension). Guided by the CRoss-Industry Standard Process for Data Mining (CRISP-DM), the ETL process (Extract, Transform, Load) was initiated by retrieving data from the MIMIC-II tables of interest. As a use case, the correlation of platelet count and ICU survival was quantitatively assessed. Using visual tools for ETL on Hadoop and predictive modeling in RapidMiner, we developed robust processes for automatic building, parameter optimization and evaluation of various predictive models, under different feature selection schemes. Because these processes can be easily adopted in other projects, this environment is attractive for scalable predictive analytics in health research.
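
    The showcase analysis was built code-free in RapidMiner/Radoop; for readers who want the same use case in code, a rough Python equivalent with a synthetic stand-in for the MIMIC-II extract (not the study's data or exact pipeline):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: platelet counts (x10^9/L) with survival probability
# rising with platelet count, loosely mimicking the use case above.
rng = np.random.default_rng(0)
n = 5000
platelets = rng.normal(220, 80, n).clip(5, 600)
survived = (rng.random(n) < 1 / (1 + np.exp(-(platelets - 120) / 40))).astype(int)
df = pd.DataFrame({"platelet_count": platelets, "icu_survival": survived})

# Cross-validated discriminative power of platelet count alone:
model = LogisticRegression()
scores = cross_val_score(model, df[["platelet_count"]], df["icu_survival"],
                         cv=5, scoring="roc_auc")
print(scores.mean())
```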

  16. High-flux ionic diodes, ionic transistors and ionic amplifiers based on external ion concentration polarization by an ion exchange membrane: a new scalable ionic circuit platform.

    Science.gov (United States)

    Sun, Gongchen; Senapati, Satyajyoti; Chang, Hsueh-Chia

    2016-04-07

    A microfluidic ion exchange membrane hybrid chip is fabricated using polymer-based, lithography-free methods to achieve ionic diode, transistor and amplifier functionalities with the same four-terminal design. The high ionic flux (>100 μA) feature of the chip can enable a scalable integrated ionic circuit platform for micro-total-analytical systems.

  17. Harnessing Disorder in Compression Based Nanofabrication

    Science.gov (United States)

    Engel, Clifford John

    The future of nanotechnologies depends on the successful development of versatile, low-cost techniques for patterning micro- and nanoarchitectures. While most approaches to nanofabrication have focused primarily on making periodic structures at ever smaller length scales with an ultimate goal of massively scaling their production, I have focused on introducing control into relatively disordered nanofabrication systems. Well-ordered patterns are increasingly unnecessary for a growing range of applications, from anti-biofouling coatings to light trapping to omniphobic surfaces. The ability to manipulate disorder, at will and over multiple length scales, starting with the nanoscale, can open new prospects for textured substrates and unconventional applications. Taking advantage of features previously considered defects, I have been able to develop nanofabrication techniques with potential for massive scalability and incorporation into a wide range of potential applications. This thesis first describes the manipulation of the non-Newtonian properties of liquid Ga and Ga alloys to confine the metal and metal alloys in gratings with sub-wavelength periodicities. Through a solid-to-liquid phase change, I was able to access the superior plasmonic properties of liquid Ga for the generation of surface plasmon polaritons (SPPs). The switching contrast between solid and liquid Ga confined in the nanogratings allowed for reversible manipulation of SPP properties through heating and cooling around the relatively low melting temperature of Ga (29.8 °C). The remaining chapters focus on the development and characterization of an all-polymer wrinkle material system. Wrinkles, spontaneous disordered features that are produced in response to compressive force, are ideal for a growing number of applications where fine feature control is no longer the main motivation. However, the mechanical limitations of many wrinkle systems have restricted the potential applications of wrinkled surfaces.

  18. The SNPlex genotyping system: a flexible and scalable platform for SNP genotyping.

    Science.gov (United States)

    Tobler, Andreas R; Short, Sabine; Andersen, Mark R; Paner, Teodoro M; Briggs, Jason C; Lambert, Stephen M; Wu, Priscilla P; Wang, Yiwen; Spoonde, Alexander Y; Koehler, Ryan T; Peyret, Nicolas; Chen, Caifu; Broomer, Adam J; Ridzon, Dana A; Zhou, Hui; Hoo, Bradley S; Hayashibara, Kathleen C; Leong, Lilley N; Ma, Congcong N; Rosenblum, Barnet B; Day, Joseph P; Ziegle, Janet S; De La Vega, Francisco M; Rhodes, Michael D; Hennessy, Kevin M; Wenz, H Michael

    2005-12-01

    We developed the SNPlex Genotyping System to address the need for accurate genotyping data, high sample throughput, study design flexibility, and cost efficiency. The system uses oligonucleotide ligation/polymerase chain reaction and capillary electrophoresis to analyze bi-allelic single nucleotide polymorphism genotypes. It is well suited for single nucleotide polymorphism genotyping efforts in which throughput and cost efficiency are essential. The SNPlex Genotyping System offers a high degree of flexibility and scalability, allowing the selection of custom-defined sets of SNPs for medium- to high-throughput genotyping projects. It is therefore suitable for a broad range of study designs. In this article we describe the principle and applications of the SNPlex Genotyping System, as well as a set of single nucleotide polymorphism selection tools and validated assay resources that accelerate the assay design process. We developed the control pool, an oligonucleotide ligation probe set for training and quality-control purposes, which interrogates 48 SNPs simultaneously. We present performance data from this control pool obtained by testing genomic DNA samples from 44 individuals. In addition, we present data from a study that analyzed 521 SNPs in 92 individuals. Combined, both studies show the SNPlex Genotyping System to have a 99.32% overall call rate, 99.95% precision, and 99.84% concordance with genotypes analyzed by TaqMan probe-based assays. The SNPlex Genotyping System is an efficient and reliable tool for a broad range of genotyping applications, supported by applications for study design, data analysis, and data management.
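
    The reported call rate and concordance are straightforward to compute from genotype call matrices; a small sketch with toy data (samples × SNPs), using "NC" as a no-call sentinel:

```python
import numpy as np

# Toy genotype calls and reference (e.g., TaqMan) calls; "NC" = no call.
calls = np.array([["AA", "AG", "NC"], ["GG", "NC", "AG"]])
ref   = np.array([["AA", "AG", "AG"], ["GG", "AA", "AG"]])

made = calls != "NC"                              # attempted genotypes that were called
call_rate = made.mean()                           # fraction of genotypes called
concordance = (calls[made] == ref[made]).mean()   # agreement with reference calls
print(call_rate, concordance)                     # 0.666... 1.0
```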

  19. A Scalable Gene Synthesis Platform Using High-Fidelity DNA Microchips

    Science.gov (United States)

    Kosuri, Sriram; Eroshenko, Nikolai; LeProust, Emily; Super, Michael; Way, Jeffrey; Li, Jin Billy; Church, George M.

    2010-01-01

    Development of cheap, high-throughput, and reliable gene synthesis methods will broadly stimulate progress in biology and biotechnology [1]. Currently, the reliance on column-synthesized oligonucleotides as a source of DNA limits further cost reductions in gene synthesis [2]. Oligonucleotides from DNA microchips can reduce costs by at least an order of magnitude [3,4,5], yet efforts to scale their use have been largely unsuccessful due to the high error rates and complexity of the oligonucleotide mixtures. Here we use high-fidelity DNA microchips, selective oligonucleotide pool amplification, optimized gene assembly protocols, and enzymatic error correction to develop a highly parallel gene synthesis platform. We tested our platform by assembling 47 genes, including 42 challenging therapeutic antibody sequences, encoding a total of ~35 kilo-basepairs of DNA. These assemblies were performed from a complex background containing 13,000 oligonucleotides encoding ~2.5 megabases of DNA, which is at least 50 times larger than previously published attempts. PMID:21113165
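
    The importance of the per-base error rate is easy to make concrete. The back-of-the-envelope sketch below (my own illustrative arithmetic, not the paper's data) shows why the fraction of error-free assemblies collapses with gene length unless errors are suppressed, e.g. by enzymatic error correction:

```python
# Illustrative arithmetic (not the paper's data): the chance that a
# full-length assembly is error-free decays exponentially with length.

def fraction_perfect(error_rate_per_base, length_bp):
    return (1 - error_rate_per_base) ** length_bp

for err in (1 / 200, 1 / 2000):   # raw chip oligos vs. error-corrected
    print(f"err=1/{int(1/err)}: "
          f"{fraction_perfect(err, 1000):.1%} of 1 kb assemblies perfect")
# err=1/200:  0.7% perfect  -> why naive chip oligos fail at scale
# err=1/2000: 60.6% perfect -> why enzymatic error correction helps
```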

  20. Scalable Device for Automated Microbial Electroporation in a Digital Microfluidic Platform.

    Science.gov (United States)

    Madison, Andrew C; Royal, Matthew W; Vigneault, Frederic; Chen, Liji; Griffin, Peter B; Horowitz, Mark; Church, George M; Fair, Richard B

    2017-09-15

    Electrowetting-on-dielectric (EWD) digital microfluidic laboratory-on-a-chip platforms demonstrate excellent performance in automating labor-intensive protocols. When coupled with an on-chip electroporation capability, these systems hold promise for streamlining cumbersome processes such as multiplex automated genome engineering (MAGE). We integrated a single Ti:Au electroporation electrode into an otherwise standard parallel-plate EWD geometry to enable high-efficiency transformation of Escherichia coli with reporter plasmid DNA in a 200 nL droplet. Test devices exhibited robust operation with more than 10 transformation experiments performed per device without cross-contamination or failure. Despite intrinsic electric-field nonuniformity present in the EP/EWD device, the peak on-chip transformation efficiency was measured to be 8.6 ± 1.0 × 10⁸ cfu·μg⁻¹ for an average applied electric field strength of 2.25 ± 0.50 kV·mm⁻¹. Cell survival and transformation fractions at this electroporation pulse strength were found to be 1.5 ± 0.3 and 2.3 ± 0.1%, respectively. Our work expands the EWD toolkit to include on-chip microbial electroporation and opens the possibility of scaling advanced genome engineering methods, like MAGE, into the submicroliter regime.
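
    For readers unfamiliar with the cfu·μg⁻¹ figure of merit, here is the standard transformation-efficiency arithmetic as a short sketch; the colony count, plated fraction, and DNA mass below are hypothetical, chosen only to land near the reported order of magnitude:

```python
# Hedged sketch of the standard transformation-efficiency arithmetic
# behind the cfu/ug figure quoted above; all numbers are hypothetical.

def transformation_efficiency(colonies, dna_ug, plated_fraction=1.0,
                              dilution_factor=1):
    """cfu/ug: colonies counted, scaled up by any dilution and by the
    fraction of the recovery volume that was actually plated."""
    return colonies * dilution_factor / (dna_ug * plated_fraction)

# e.g. 86 colonies after plating 10% of cells transformed with 1 pg
# (1e-6 ug) of plasmid DNA:
eff = transformation_efficiency(colonies=86, dna_ug=1e-6,
                                plated_fraction=0.1)
print(f"{eff:.1e} cfu/ug")   # 8.6e+08
```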

  1. Scalable, Secure Analysis of Social Sciences Data on the Azure Platform

    Energy Technology Data Exchange (ETDEWEB)

    Simmhan, Yogesh; Deng, Litao; Kumbhare, Alok; Redekopp, Mark; Prasanna, Viktor

    2012-05-07

    Human activity and interaction data is beginning to be collected at population scales through the pervasiveness of social media and the willingness of people to volunteer information. This can allow social science researchers to understand and model human behavior with better accuracy and prediction power. Political and social scientists are starting to correlate such large-scale social media datasets with events that impact society, as evidence abounds of virtual and physical public spaces intersecting and influencing each other [1,2]. Managers of Cyber Physical Systems such as Smart Power Grid utilities are investigating the impact of consumer behavior on power consumption, and the possibility of influencing the usage profile [3]. Data collection is also made easier through technology such as mobile apps, social media sites and search engines that directly collect data, and sensors such as smart meters and room occupancy sensors that indirectly measure human activity. These technology platforms also provide a convenient framework for “human sensors” to record and broadcast data for behavioral studies, as a form of crowd-sourced citizen science. This has the added advantage of engaging the broader public in STEM activities and helping to influence public policy.

  2. Monitoring WLCG with lambda-architecture: a new scalable data store and analytics platform for monitoring at petabyte scale.

    CERN Document Server

    Magnoni, L; Cordeiro, C; Georgiou, M; Andreeva, J; Khan, A; Smith, D R

    2015-01-01

    Monitoring the WLCG infrastructure requires the gathering and analysis of a high volume of heterogeneous data (e.g. data transfers, job monitoring, site tests) coming from different services and experiment-specific frameworks to provide a uniform and flexible interface for scientists and sites. The current architecture, where relational database systems are used to store, to process and to serve monitoring data, has limitations in coping with the foreseen increase in the volume (e.g. higher LHC luminosity) and the variety (e.g. new data-transfer protocols and new resource-types, as cloud-computing) of WLCG monitoring events. This paper presents a new scalable data store and analytics platform designed by the Support for Distributed Computing (SDC) group, at the CERN IT department, which uses a variety of technologies each one targeting specific aspects of big-scale distributed data-processing (commonly referred to as the lambda-architecture approach). Results of data processing on Hadoop for WLCG data activities monitoring are presented.

  3. Monitoring WLCG with lambda-architecture: a new scalable data store and analytics platform for monitoring at petabyte scale.

    Science.gov (United States)

    Magnoni, L.; Suthakar, U.; Cordeiro, C.; Georgiou, M.; Andreeva, J.; Khan, A.; Smith, D. R.

    2015-12-01

    Monitoring the WLCG infrastructure requires the gathering and analysis of a high volume of heterogeneous data (e.g. data transfers, job monitoring, site tests) coming from different services and experiment-specific frameworks to provide a uniform and flexible interface for scientists and sites. The current architecture, where relational database systems are used to store, to process and to serve monitoring data, has limitations in coping with the foreseen increase in the volume (e.g. higher LHC luminosity) and the variety (e.g. new data-transfer protocols and new resource-types, as cloud-computing) of WLCG monitoring events. This paper presents a new scalable data store and analytics platform designed by the Support for Distributed Computing (SDC) group, at the CERN IT department, which uses a variety of technologies each one targeting specific aspects of big-scale distributed data-processing (commonly referred to as the lambda-architecture approach). Results of data processing on Hadoop for WLCG data activities monitoring are presented, showing how the new architecture can easily analyze hundreds of millions of transfer logs in a few minutes. Moreover, a comparison of data partitioning, compression and file format (e.g. CSV, Avro) is presented, with particular attention given to how the file structure impacts the overall MapReduce performance. In conclusion, the evolution of the current implementation, which focuses on data storage and batch processing, towards a complete lambda-architecture is discussed, with consideration of candidate technology for the serving layer (e.g. Elasticsearch) and a description of a proof of concept implementation, based on Apache Spark and Esper, for the real-time part, which compensates for batch-processing latency and automates problem and failure detection.
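
    To make the batch layer of such a lambda architecture concrete, the sketch below shows the kind of Spark job it describes: aggregating transfer logs into per-site statistics. This is my own minimal illustration, not CERN's code; the field names (src_site, dst_site, bytes, ok) and paths are hypothetical, and the "avro" source assumes the spark-avro package is available:

```python
# Minimal batch-layer sketch (illustration only): aggregate transfer-log
# records stored on HDFS into per-site statistics with Spark.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("wlcg-transfer-stats").getOrCreate()

# Schema-aware formats such as Avro avoid re-parsing every field, which
# is one reason file-format choice affects MapReduce/Spark performance.
logs = spark.read.format("avro").load("hdfs:///wlcg/transfers/2015/")

stats = (logs.groupBy("src_site", "dst_site")
             .agg(F.count("*").alias("transfers"),
                  F.sum("bytes").alias("bytes_moved"),
                  F.avg(F.col("ok").cast("double")).alias("success_rate")))

# Write results where a serving layer (e.g. Elasticsearch) can pick them up.
stats.write.mode("overwrite").parquet("hdfs:///wlcg/serving/transfer_stats/")
```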

  4. Regular nanofabrics in emerging technologies

    CERN Document Server

    Jamaa, M Haykel Ben

    2011-01-01

    ""Regular Nanofabrics in Emerging Technologies"" gives a deep insight into both fabrication and design aspects of emerging semiconductor technologies, that represent potential candidates for the post-CMOS era. Its approach is unique, across different fields, and it offers a synergetic view for a public of different communities ranging from technologists, to circuit designers, and computer scientists. The book presents two technologies as potential candidates for future semiconductor devices and systems and it shows how fabrication issues can be addressed at the design level and vice versa. The

  5. Nanofabrication: conventional and nonconventional methods.

    Science.gov (United States)

    Chen, Y; Pépin, A

    2001-01-01

    Nanofabrication is playing an ever increasing role in science and technology on the nanometer scale and will soon allow us to build systems of the same complexity as found in nature. Conventional methods that emerged from microelectronics are now used for the fabrication of structures for integrated circuits, microelectro-mechanical systems, microoptics and microanalytical devices. Nonconventional or alternative approaches have changed the way we pattern very fine structures and have brought about a new appreciation of simple and low-cost techniques. We present an overview of some of these methods, paying particular attention to those which enable large-scale production of lithographic patterns. We preface the review with a brief primer on lithography and pattern transfer concepts. After reviewing the various patterning techniques, we discuss some recent application issues in the fields of microelectronics, optoelectronics, magnetism as well as in biology and biochemistry.

  6. Multiplexed, high density electrophysiology with nanofabricated neural probes.

    Directory of Open Access Journals (Sweden)

    Jiangang Du

    Full Text Available Extracellular electrode arrays can reveal the neuronal network correlates of behavior with single-cell, single-spike, and sub-millisecond resolution. However, implantable electrodes are inherently invasive, and efforts to scale up the number and density of recording sites must compromise on device size in order to connect the electrodes. Here, we report on silicon-based neural probes employing nanofabricated, high-density electrical leads. Furthermore, we address the challenge of reading out multichannel data with an application-specific integrated circuit (ASIC) performing signal amplification, band-pass filtering, and multiplexing functions. We demonstrate high spatial resolution extracellular measurements with a fully integrated, low noise 64-channel system weighing just 330 mg. The on-chip multiplexers make possible recordings with substantially fewer external wires than the number of input channels. By combining nanofabricated probes with ASICs we have implemented a system for performing large-scale, high-density electrophysiology in small, freely behaving animals that is both minimally invasive and highly scalable.

  7. Multiplexed, high density electrophysiology with nanofabricated neural probes.

    Science.gov (United States)

    Du, Jiangang; Blanche, Timothy J; Harrison, Reid R; Lester, Henry A; Masmanidis, Sotiris C

    2011-01-01

    Extracellular electrode arrays can reveal the neuronal network correlates of behavior with single-cell, single-spike, and sub-millisecond resolution. However, implantable electrodes are inherently invasive, and efforts to scale up the number and density of recording sites must compromise on device size in order to connect the electrodes. Here, we report on silicon-based neural probes employing nanofabricated, high-density electrical leads. Furthermore, we address the challenge of reading out multichannel data with an application-specific integrated circuit (ASIC) performing signal amplification, band-pass filtering, and multiplexing functions. We demonstrate high spatial resolution extracellular measurements with a fully integrated, low noise 64-channel system weighing just 330 mg. The on-chip multiplexers make possible recordings with substantially fewer external wires than the number of input channels. By combining nanofabricated probes with ASICs we have implemented a system for performing large-scale, high-density electrophysiology in small, freely behaving animals that is both minimally invasive and highly scalable.
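
    The wire-count argument can be illustrated with a short sketch: when the ASIC time-interleaves 64 channels onto one output line, the recording software recovers per-channel traces by de-interleaving. This is my own illustration, not the authors' firmware; the data are placeholders:

```python
# Illustrative sketch of why on-chip multiplexing cuts wire count:
# 64 channels are time-interleaved onto a single output line, then
# de-interleaved in software after acquisition.

import numpy as np

N_CHANNELS = 64

# One wire's worth of raw ADC samples: ch0, ch1, ..., ch63, ch0, ch1, ...
# (a counting ramp stands in for real neural data)
raw = np.arange(10 * N_CHANNELS, dtype=np.int16)

# De-interleave: each row is one full scan across all 64 channels.
frames = raw.reshape(-1, N_CHANNELS)     # shape (10, 64)
channel_5 = frames[:, 5]                 # continuous trace for channel 5

print(frames.shape, channel_5[:3])       # (10, 64) [  5  69 133]
```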

  8. Development of flash nanoprecipitation as a scalable platform for production of hybrid polymer-inorganic Janus particles

    Science.gov (United States)

    Lee, Victoria E.; Prud'Homme, Robert K.; Priestley, Rodney D.

    Polymer Janus particles, containing two or more distinct domains, can act as supports for inorganic nanoparticles, stabilizing them against aggregation and templating anisotropic functionalization of the microparticles. This anisotropy can be advantageous for applications such as biofuel upgrading, bionanosensors, and responsive materials. Here, we introduce flash nanoprecipitation (FNP) as a scalable, fast process to create hybrid polymer-inorganic Janus particles with control of particle size and anisotropy. During FNP, polymer Janus particles form by rapid intermixing of a polymer solution with a poor solvent, inducing polymer precipitation and phase separation. Inorganic nanoparticles are then adsorbed selectively onto one domain of the polymer support by exploiting electrostatic interactions between the charged particles. By tuning polymer concentration and ratio in the feed stream, the particle size and anisotropy can be controlled. We further demonstrate that these hybrid particles can simultaneously stabilize emulsions and selectively catalyze the degradation of dye in one phase. With support from the Princeton Imaging Analysis Center.

  9. Flexible, Scalable and Energy Efficient Bio-Signals Processing on the PULP Platform: A Case Study on Seizure Detection

    Directory of Open Access Journals (Sweden)

    Fabio Montagna

    2017-06-01

    Full Text Available Ultra-low power operation and extreme energy efficiency are strong requirements for a number of high-growth application areas requiring near-sensor processing, including elaboration of biosignals. Parallel near-threshold computing is emerging as an approach to achieve significant improvements in energy efficiency while overcoming the performance degradation typical of low-voltage operations. In this paper, we demonstrate the capabilities of the PULP (Parallel Ultra-Low Power) platform on an algorithm for seizure detection, representative of a wide range of EEG signal processing applications. Starting from the 28-nm FD-SOI (Fully Depleted Silicon On Insulator) technology implementation of the third embodiment of the PULP architecture, we analyze the energy-efficient implementation of the seizure detection algorithm on PULP. The proposed parallel implementation exploits the dynamic voltage and frequency scaling capabilities, as well as the embedded power knobs of the PULP platform, reducing energy consumption for a seizure detection by up to 10× with respect to a sequential implementation at the nominal supply voltage and by 4.2× with respect to a sequential implementation with voltage scaling. Moreover, we analyze the trans-precision optimization of the algorithm on PULP, by means of a hybrid fixed- and floating-point implementation. This approach reduces the energy consumption by up to 43% with respect to the plain fixed-point and floating-point implementations, leveraging the requirements in terms of the precision of the kernels composing the processing chain to improve energy efficiency. Thanks to the proposed architecture and system-level approach for optimization, we demonstrate that PULP reduces energy consumption by up to 140× with respect to commercial low-power microcontrollers, being able to satisfy the real-time constraints typical of bio-medical applications, breaking the barrier of microwatts for a 50-ms complete seizure detection.
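
    As a hedged illustration of the trans-precision idea, the sketch below quantizes a toy signal to Q1.15 fixed point and measures the worst-case error; the Q-format and signal are my assumptions, not the paper's actual kernels:

```python
# Hedged illustration of trans-precision: run parts of a processing
# chain in fixed point where the precision budget allows.

import numpy as np

def to_q15(x):
    """Quantize floats in [-1, 1) to Q1.15 fixed point (int16)."""
    return np.clip(np.round(x * 2**15), -2**15, 2**15 - 1).astype(np.int16)

def from_q15(q):
    return q.astype(np.float64) / 2**15

t = np.linspace(0, 1, 1000)
eeg_like = 0.5 * np.sin(2 * np.pi * 8 * t)   # toy 8 Hz "EEG" component

q = to_q15(eeg_like)
err = np.abs(from_q15(q) - eeg_like).max()
print(f"max quantization error: {err:.2e}")  # ~1.5e-05, often tolerable
```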

  10. Nanofabrication strategies for advanced electrode materials

    Directory of Open Access Journals (Sweden)

    Chen Kunfeng

    2017-08-01

    Full Text Available The development of advanced electrode materials for high-performance energy storage devices is becoming more and more important to meet the growing demand for portable electronics and electric vehicles. To speed up this process, rapid screening of exceptional materials among the various morphologies, structures and sizes of candidate materials is urgently needed. Benefitting from advances in nanotechnology, tremendous efforts have been devoted to the development of various nanofabrication strategies for advanced electrode materials. This review focuses on the analysis of novel nanofabrication strategies and progress in the field of fast screening of advanced electrode materials. The basic design principles for chemical reaction, crystallization and electrochemical reaction to control the composition and nanostructure of final electrodes are reviewed. Novel fast nanofabrication strategies, such as burning and electrochemical exfoliation, and their basic principles are also summarized. More importantly, a colloid system serving as an up-front design can skip over materials synthesis, accelerating the screening rate of high-performance electrodes. This work encourages us to create innovative design ideas for rapid screening of highly active electrode materials for applications in energy-related fields and beyond.

  11. Scalable devices

    KAUST Repository

    Krüger, Jens J.

    2014-01-01

    In computer science in general, and in the field of high performance computing and supercomputing in particular, the term scalable plays an important role. It indicates that a piece of hardware, a concept, an algorithm, or an entire system scales with the size of the problem, i.e., it can not only be used in a very specific setting but is applicable to a wide range of problems, from small scenarios to possibly very large settings. In this spirit, there exist a number of fixed areas of research on scalability. There are works on scalable algorithms and scalable architectures, but what are scalable devices? In the context of this chapter, we are interested in a whole range of display devices, ranging from small-scale hardware such as tablet computers, pads, and smart-phones, up to large tiled display walls. What interests us most is not so much the hardware setup, but rather the visualization algorithms behind these display systems that scale from your average smart phone up to the largest gigapixel display walls.

  12. Towards a Scalable and Adaptive Application Support Platform for Large-Scale Distributed E-Sciences in High-Performance Network Environments

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Chase Qishi [New Jersey Inst. of Technology, Newark, NJ (United States); Univ. of Memphis, TN (United States); Zhu, Michelle Mengxia [Southern Illinois Univ., Carbondale, IL (United States)

    2016-06-06

    The advent of large-scale collaborative scientific applications has demonstrated the potential for broad scientific communities to pool globally distributed resources to produce unprecedented data acquisition, movement, and analysis. System resources including supercomputers, data repositories, computing facilities, network infrastructures, storage systems, and display devices have been increasingly deployed at national laboratories and academic institutes. These resources are typically shared by large communities of users over Internet or dedicated networks and hence exhibit an inherent dynamic nature in their availability, accessibility, capacity, and stability. Scientific applications using either experimental facilities or computation-based simulations with various physical, chemical, climatic, and biological models feature diverse scientific workflows, as simple as linear pipelines or as complex as directed acyclic graphs, which must be executed and supported over wide-area networks with massively distributed resources. Application users oftentimes need to manually configure their computing tasks over networks in an ad hoc manner, hence significantly limiting the productivity of scientists and constraining the utilization of resources. The success of these large-scale distributed applications requires a highly adaptive and massively scalable workflow platform that provides automated and optimized computing and networking services. This project is to design and develop a generic Scientific Workflow Automation and Management Platform (SWAMP), which contains a web-based user interface specially tailored for a target application, a set of user libraries, and several easy-to-use computing and networking toolkits for application scientists to conveniently assemble, execute, monitor, and control complex computing workflows in heterogeneous high-performance network environments. SWAMP will enable the automation and management of the entire process of scientific workflows.
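
    As a toy illustration of the core abstraction described above, a workflow as simple as a linear pipeline or as complex as a DAG can be executed in dependency order. The sketch below is my own, not SWAMP code, and uses a hypothetical four-task pipeline:

```python
# Toy sketch: a scientific workflow as a directed acyclic graph of
# tasks, executed in dependency order.

from graphlib import TopologicalSorter  # Python 3.9+

def run(task):
    print(f"running {task}")

# Hypothetical pipeline: acquire -> (calibrate, transfer) -> analyze
workflow = {
    "acquire":   set(),
    "calibrate": {"acquire"},
    "transfer":  {"acquire"},
    "analyze":   {"calibrate", "transfer"},
}

for task in TopologicalSorter(workflow).static_order():
    run(task)   # a real engine would dispatch to remote resources here
```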

  13. Microchannel Reactors for ISRU Applications Using Nanofabricated Catalysts Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Makel Engineering, Inc. (MEI) and USRA propose to develop microchannel reactors for In-Situ Resources Utilization (ISRU) using nanofabricated catalysts. The proposed...

  14. Programmable solid state atom sources for nanofabrication

    Science.gov (United States)

    Han, Han; Imboden, Matthias; Stark, Thomas; Del Corro, Pablo G.; Pardo, Flavio; Bolle, Cristian A.; Lally, Richard W.; Bishop, David J.

    2015-06-01

    In this paper we discuss the development of a MEMS-based solid state atom source that can provide controllable atom deposition ranging over eight orders of magnitude, from ten atoms per square micron up to hundreds of atomic layers, on a target ~1 mm away. Using a micron-scale silicon plate as a thermal evaporation source we demonstrate the deposition of indium, silver, gold, copper, iron, aluminum, lead and tin. Because of their small sizes and rapid thermal response times, pulse width modulation techniques are a powerful way to control the atomic flux. Pulsing the source with precise voltages and timing provides control in terms of when and how many atoms get deposited. By arranging many of these devices into an array, one has a multi-material, programmable solid state evaporation source. These micro atom sources are a complementary technology that can enhance the capability of a variety of nano-fabrication techniques. Electronic supplementary information (ESI) available: a document containing further information about device characterization.
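
    The pulse-width-modulation control described above amounts to simple dose arithmetic, sketched below; the peak flux value is hypothetical, chosen only to span the quoted eight orders of magnitude:

```python
# Back-of-the-envelope sketch of PWM dose control: deposited dose
# scales with duty cycle times on-time. The flux value is hypothetical.

PEAK_FLUX = 1e4   # atoms per um^2 per second while the source is hot

def deposited_dose(duty_cycle, seconds, peak_flux=PEAK_FLUX):
    """Atoms per square micron delivered by a PWM-driven source."""
    return peak_flux * duty_cycle * seconds

# From ~10 atoms/um^2 up to 1e8 atoms/um^2 (eight orders of magnitude),
# just by changing duty cycle and exposure time:
print(deposited_dose(duty_cycle=0.001, seconds=1))      # 10.0
print(deposited_dose(duty_cycle=1.0, seconds=10_000))   # 1e8
```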

  15. High resolution UV spectroscopy and laser-focused nanofabrication

    NARCIS (Netherlands)

    Myszkiewicz, G.

    2005-01-01

    This thesis combines two techniques that at first glance appear quite different: High Resolution Laser Induced Fluorescence Spectroscopy (LIF) of small aromatic molecules and Laser Focusing of atoms for Nanofabrication. The thesis starts with the introduction to the high resolution LIF technique for small aromatic molecules.

  16. Recent advances in nanofabrication techniques for SERS substrates and their applications in food safety analysis.

    Science.gov (United States)

    Xie, Xiaohui; Pu, Hongbin; Sun, Da-Wen

    2017-06-30

    The ability to analyze food safety and quality in a quick, sensitive, and reliable manner is of high importance in the food industry. Surface-enhanced Raman scattering (SERS), which is popular for its significant enhancement, excellent sensitivity, and fingerprinting ability to identify specific molecules, has shown vast potential for rapid detection of chemical constituents, chemical contaminants, and pathogens in food samples. For SERS, the enhancement of Raman signals is related not only to the SERS-active substrates, but also to the interactions between sample and substrates. In the current review, colloidal and solid surface-based substrates are briefly described, fabrication techniques for SERS substrates are presented, and applications of SERS to food matrices and the correlation between substrates and food samples are also introduced. Finally, some outlook on further developments is presented. The current review is therefore intended to provide a comprehensive overview of the nanofabrication of SERS substrates and the potential of applying SERS as an important food analysis platform.

  17. Inclined nanoimprinting lithography-based 3D nanofabrication

    Science.gov (United States)

    Liu, Zhan; Bucknall, David G.; Allen, Mark G.

    2011-06-01

    We report a 'top-down' 3D nanofabrication approach combining non-conventional inclined nanoimprint lithography (INIL) with reactive ion etching (RIE), contact molding and 3D metal nanotransfer printing (nTP). This integration of processes enables the production and conformal transfer of 3D polymer nanostructures of varying heights to a variety of other materials including a silicon-based substrate, a silicone stamp and a metal gold (Au) thin film. The process demonstrates the potential for reduced fabrication cost and complexity compared to existing methods. Various 3D nanostructures in technologically useful materials have been fabricated, including symmetric and asymmetric nanolines, nanocircles and nanosquares. Such 3D nanostructures have potential applications such as angle-resolved photonic crystals, plasmonic crystals and biomimicking anisotropic surfaces. This integrated INIL-based strategy shows great promise for 3D nanofabrication in the fields of photonics, plasmonics and surface tribology.

  18. Nanofabrication of Plasmonic Circuits Containing Single Photon Sources

    OpenAIRE

    Siampour, Hamidreza; Kumar, Shailesh; Bozhevolnyi, Sergey I.

    2017-01-01

    Nanofabrication of photonic components based on dielectric-loaded surface plasmon-polariton waveguides (DLSPPWs) excited by single nitrogen vacancy (NV) centers in nanodiamonds is demonstrated. DLSPPW circuits are built around NV containing nanodiamonds, which are certified to be single-photon emitters, using electron-beam lithography of hydrogen silsesquioxane (HSQ) resist on silver-coated silicon substrates. A propagation length of ~20 μm for the NV single-photon emission is measured with DLSPPWs.

  19. Nanofabrication of Diffractive Soft X-ray Optics

    OpenAIRE

    Lindblom, Magnus

    2009-01-01

    This thesis summarizes the present status of the nanofabrication of diffractive optics, i.e. zone plates, and test objects for soft x-ray microscopy at KTH. The emphasis is on new and improved fabrication processes for nickel and germanium zone plates. A new concept in which nickel and germanium are combined in a zone plate is also presented. The main techniques used in the fabrication are electron beam lithography for the patterning, followed by plasma etching and electroplating for the stru...

  20. Protein-Based Nanofabrics for Multifunctional Air Filtering

    Science.gov (United States)

    Souzandeh, Hamid

    With rapid economic and population growth, air pollution is getting worse and has become a great concern worldwide. The release of chemicals, particulates and biological materials into air can lead to various diseases or discomfort in humans and other living organisms, alongside other serious impacts on the environment. Improving indoor air quality using various air filters is therefore a critical need, because people stay inside buildings most of the day. However, current air filters using traditional polymers can only remove particles from the polluted air, and disposing of the huge number of used air filters can cause serious secondary environmental pollution. The development of environmentally friendly, multifunctional air-filter materials is therefore significant. For this purpose, we developed "green" protein-based multifunctional air-filtering materials. The outstanding performance of these green materials in removing multiple species of pollutants simultaneously, including particulate matter, toxic chemicals, and biological hazards, will greatly facilitate the development of next-generation air-filtration systems. First and foremost, we developed high-performance protein-based nanofabric air-filter mats. It was found that the protein nanofabrics possess high-efficiency, multifunctional air-filtering properties for both particles and various species of chemical gases. The high-performance natural protein-based nanofabrics were then enhanced both mechanically and functionally by a textured cellulose paper towel. Interestingly, it was discovered that the textured cellulose paper towel can act not only as a flexible mechanical support, but also as a type of airflow regulator that improves pollutant-nanofilter interactions. Furthermore, the protein-based nanofabrics were crosslinked in order to enhance the environmental stability of the filters. It was found that the crosslinked protein nanofabrics can significantly improve the structure

  1. Scalable Transactions for Web Applications in the Cloud

    NARCIS (Netherlands)

    Zhou, W.; Pierre, G.E.O.; Chi, C.-H.

    2009-01-01

    Cloud Computing platforms provide scalability and high availability properties for web applications but they sacrifice data consistency at the same time. However, many applications cannot afford any data inconsistency. We present a scalable transaction manager for NoSQL cloud database services to

  2. Rational design and nanofabrication of gecko-inspired fibrillar adhesives.

    Science.gov (United States)

    Hu, Shihao; Xia, Zhenhai

    2012-08-20

    Gecko feet integrate many intriguing functions such as strong adhesion, easy detachment, and self-cleaning. Mimicking gecko toe pad structure leads to the development of new types of fibrillar adhesives useful for various applications. In this Concept article, in addition to the design of adhesive mimics by replicating gecko geometric features, we show a new trend of rational design by adding other physical, chemical, and biological principles on to the geometric merits, for enhancing robustness, responsive control, and durability. Current challenges and future directions are highlighted in the design and nanofabrication of biomimetic fibrillar adhesives. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Nanofabrication of Plasmonic Circuits Containing Single Photon Sources

    DEFF Research Database (Denmark)

    Siampour, Hamidreza; Kumar, Shailesh; Bozhevolnyi, Sergey I.

    2017-01-01

    Nanofabrication of photonic components based on dielectric loaded surface plasmon polariton waveguides (DLSPPWs) excited by single nitrogen vacancy (NV) centers in nanodiamonds is demonstrated. DLSPPW circuits are built around NV containing nanodiamonds, which are certified to be single-photon emitters, using electron-beam lithography of hydrogen silsesquioxane (HSQ) resist on silver-coated silicon substrates. A propagation length of 20 ± 5 μm for the NV single-photon emission is measured with DLSPPWs. A 5-fold enhancement in the total decay rate, and 58% coupling efficiency to the DLSPPW mode...

  4. Realization of a scalable airborne radar

    NARCIS (Netherlands)

    Halsema, D. van; Jongh, R.V. de; Es, J. van; Otten, M.P.G.; Vermeulen, B.C.B.; Liempt, L.J. van

    2008-01-01

    Modern airborne ground surveillance radar systems are increasingly based on Active Electronically Scanned Array (AESA) antennas. Efficient use of array technology and the need for radar solutions for various airborne platforms, manned and unmanned, leads to the design of scalable radar systems.

  5. Scalable IC Platform for Smart Cameras

    Directory of Open Access Journals (Sweden)

    Harry Broers

    2005-08-01

    Full Text Available Smart cameras are among the emerging new fields of electronics. The points of interest are in the application areas, software and IC development. In order to reduce cost, it is worthwhile to invest in a single architecture that can be scaled for the various application areas in performance (and resulting power consumption). In this paper, we show that the combination of an SIMD (single-instruction multiple-data) processor and a general-purpose DSP is very advantageous for the image processing tasks encountered in smart cameras. While the SIMD processor gives the very high performance necessary by exploiting the inherent data parallelism found in the pixel-crunching part of the algorithms, the DSP offers a friendly approach to the more complex tasks. The paper goes on to show that SIMD processors have very convenient scaling properties in silicon, making the complete SIMD-DSP architecture suitable for different application areas without changing the software suite. Analysis of the changes in power consumption due to scaling shows that for typical image processing tasks, it is beneficial to scale the SIMD processor to use the maximum level of parallelism available in the algorithm if the IC supply voltage can be lowered. If silicon cost is of importance, the parallelism of the processor should be scaled to just reach the desired performance given the speed of the silicon.
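
    The data-parallelism argument can be made concrete with a small sketch in which NumPy's vectorized operations stand in for SIMD lanes; the threshold kernel is an arbitrary example of the per-pixel "crunching" the abstract refers to:

```python
# Sketch of data parallelism: a per-pixel kernel maps naturally onto
# SIMD lanes. NumPy vectorization stands in for the SIMD processor.

import numpy as np

frame = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)

# Scalar view of the kernel (what DSP-style code would express):
#   for each pixel p: out = 255 if p > 128 else 0
# SIMD view: one operation applied across all pixels in lock-step.
binary = np.where(frame > 128, 255, 0).astype(np.uint8)

print(binary.shape, binary.dtype)   # (480, 640) uint8
```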

  6. Applications of sample nanofabrication in diamond-anvil cell experiments

    Science.gov (United States)

    Pigott, J. S.; Fischer, R. A.; Hrubiak, R.; Scott, H. P.; Panero, W. R.

    2015-12-01

    We use electron gun evaporation, sputter deposition, and photolithography to fabricate samples for laser-heated diamond anvil cell experiments. With complementary thermal modeling, the sample geometry can be optimized and tailored to the experimental application. Here we highlight equation of state studies using nanofabricated double hot-plate samples. The homogeneous samples produced by our methods lead to exceptionally even heating, both spatially and temporally, which produced high-quality equations of state for nickel and stishovite. The mutual equations of state of Fe and Pt can thus be well characterized, and we show recent progress in fabricating samples consisting of a layered stack of Pt/SiO2/Fe/SiO2, in which the SiO2 serves to prevent the alloying of Fe and Pt. Finally, by exploiting state-of-the-art nanofabrication techniques, we explore a wider range of the potential applications of such samples, including high-pressure, high-temperature diffusion, melting, and thermal conductivity. Using the TempDAC code, we investigate the ideal sizes and ratios of the sample, heating laser diameter, and x-ray spot size, while quantifying the effect of x-ray misalignment.

  7. HPC - Platforms Penta Chart

    Energy Technology Data Exchange (ETDEWEB)

    Trujillo, Angelina Michelle [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-10-08

    Strategy, Planning, Acquiring: very large-scale computing platforms come and go, and planning for immensely scalable machines often precedes actual procurement by 3 years; procurement can take another year or more. Integration: after acquisition, machines must be integrated into the computing environments at LANL, connected to scalable storage via large-scale storage networking, and assured of correct and secure operation. Management and Utilization: ongoing operations, maintenance, and troubleshooting of the hardware and systems software at massive scale are required.

  8. Green chemistry and nanofabrication in a levitated Leidenfrost drop

    Science.gov (United States)

    Abdelaziz, Ramzy; Disci-Zayed, Duygu; Hedayati, Mehdi Keshavarz; Pöhls, Jan-Hendrik; Zillohu, Ahnaf Usman; Erkartal, Burak; Chakravadhanula, Venkata Sai Kiran; Duppel, Viola; Kienle, Lorenz; Elbahri, Mady

    2013-10-01

    Green nanotechnology focuses on the development of new and sustainable methods of creating nanoparticles, their localized assembly and integration into useful systems and devices in a cost-effective, simple and eco-friendly manner. Here we present our experimental findings on the use of the Leidenfrost drop as an overheated and charged green chemical reactor. Employing a droplet of aqueous solution on hot substrates, this method is capable of fabricating nanoparticles, creating nanoscale coatings on complex objects and designing porous metal in suspension and foam form, all in a levitated Leidenfrost drop. As examples of the potential applications of the Leidenfrost drop, fabrication of nanoporous black gold as a plasmonic wideband superabsorber, and synthesis of superhydrophilic and thermal resistive metal-polymer hybrid foams are demonstrated. We believe that the presented nanofabrication method may be a promising strategy towards the sustainable production of functional nanomaterials.

  9. Electrochemical fountain pen nanofabrication of vertically grown platinum nanowires

    Science.gov (United States)

    Suryavanshi, Abhijit P.; Yu, Min-Feng

    2007-03-01

    Local electrochemical deposition of freestanding platinum nanowires was demonstrated with a new approach—electrochemical fountain pen nanofabrication (ec-FPN). The ec-FPN exploits the meniscus formed between an electrolyte-filled nanopipette ('the fountain pen') and a conductive substrate to serve as a confined electrochemical cell for reducing and depositing metal ions. Freestanding Pt nanowires were continuously grown off the substrate by moving the nanopipette away from the substrate while maintaining a stable meniscus between the nanopipette and the nanowire growth front. High quality and high aspect-ratio polycrystalline Pt nanowires with diameter of ~150 nm and length over 30 µm were locally grown with ec-FPN. The ec-FPN technique is shown to be an efficient and clean technique for localized fabrication of a variety of vertically grown metal nanowires and can potentially be used for fabricating freeform 3D nanostructures.

  10. NANOFILM - New metallic nanocomposites for micro and nanofabrication

    DEFF Research Database (Denmark)

    Fischer, Søren Vang

    ... or catalysts. The possibility to effectively structure the nanocomposites is, however, a limiting factor. In this project, gold and silver nanocomposites of the UV-sensitive photoresist SU-8 have been fabricated, which can be deposited and structured using standard micro- and nanofabrication processes. A technique called pre-grafting, in which the nanoparticles are formed in the presence of PVP-VA, was found to be most effective for the stabilisation process. Of the two metal nanocomposites attempted, only the gold SU-8 nanocomposite was successfully and reproducibly produced. ... was then chosen as co-solvent, which resulted in a successfully structured in situ silver SU-8 nanocomposite. For both the ex situ and in situ nanocomposites, structuring was initially found to be troublesome; complete cross-linking of the SU-8 was only found to be possible after removing the filter from ...

  11. Safety Profile of TiO2-Based Photocatalytic Nanofabrics for Indoor Formaldehyde Degradation

    OpenAIRE

    Cui, Guixin; Xin, Yan; Jiang, Xin; Dong, Mengqi; Li, Junling; Wang, Peng; Zhai, Shumei; Dong, Yongchun; Jia, Jianbo; Yan, Bing

    2015-01-01

    Anatase TiO2 nanoparticles (TNPs) are synthesized using the sol-gel method and loaded onto the surface of polyester-cotton (65/35) fabrics. The nanofabrics degrade formaldehyde at an efficiency of 77% in eight hours with visible light irradiation or 97% with UV light. The loaded TNPs display very little release from nanofabrics (~0.0%) during a standard fastness to rubbing test. Assuming TNPs may fall off nanofabrics during their life cycles, we also examine the possible toxicity of TNPs to human cells...

  12. Nanofabrication and characterization of high-line-density x-ray transmission gratings

    DEFF Research Database (Denmark)

    Zhu, Xiaoli; Li, Hailiang; Cao, Leifeng

    2017-01-01

    We report the nanofabrication and characterization of x-ray transmission gratings with a high aspect ratio and a feature size of down to 65 nm. Two nanofabrication methods, the combination of electron beam and optical lithography and the combination of electron beam, x-ray, and optical lithography ... the development of x-ray diffractive optical elements. (C) 2017 Society of Photo-Optical Instrumentation Engineers (SPIE)

  13. Metal oxide multilayer hard mask system for 3D nanofabrication

    Science.gov (United States)

    Han, Zhongmei; Salmi, Emma; Vehkamäki, Marko; Leskelä, Markku; Ritala, Mikko

    2018-02-01

    We demonstrate the preparation and exploitation of multilayer metal oxide hard masks for lithography and 3D nanofabrication. Atomic layer deposition (ALD) and focused ion beam (FIB) technologies are applied for mask deposition and mask patterning, respectively. A combination of ALD and FIB was used and a patterning procedure was developed to avoid the ion beam defects commonly met when using FIB alone for microfabrication. ALD grown Al2O3/Ta2O5/Al2O3 thin film stacks were FIB milled with 30 keV gallium ions and chemically etched in 5% tetramethylammonium hydroxide at 50 °C. With metal evaporation, multilayers consisting of the amorphous oxides Al2O3 and Ta2O5 can be tailored for use in 2D lift-off processing, in the preparation of embedded sub-100 nm metal lines, and for multilevel electrical contacts. Good pattern transfer was achieved by a lift-off process from the 2D hard mask for micro- and nanoscale fabrication. As a demonstration of the applicability of this method to 3D structures, self-supporting 3D Ta2O5 masks were made from a film stack on gold particles. Finally, thin film resistors were fabricated by utilizing controlled stiction of suspended Ta2O5 structures.

  14. Stamping Techniques for Micro and Nanofabrication: Methods and Applications

    Science.gov (United States)

    Rogers, John A.

    This chapter highlights some recent advances in high resolution printing methods, in which a "stamp" forms a pattern of "ink" on the surface it contacts. It focuses on two approaches whose capabilities, level of development, and demonstrated applications indicate a strong potential for widespread use, especially in areas where conventional methods are unsuitable. The first of these, known as microcontact printing, uses a high resolution rubber stamp to print patterns of chemical inks, mainly those that lead to the formation of organic self-assembled monolayers (SAMs). These printed SAMs can be used either as resists in selective wet etching, or as templates in selective deposition to form structures of a variety of materials. The other approach, referred to as nanotransfer printing, uses similar high resolution stamps, but ones inked with solid thin film materials. In this case, SAMs, or other types of surface chemistries, bond these films to a substrate that the stamp contacts. The material transfer that results upon removal of the stamp forms a pattern in the geometry of the relief features, in a purely additive fashion. In addition to providing detailed descriptions of these micro/nanoprinting techniques, this chapter illustrates their use in some areas where these methods may provide attractive alternatives to more established lithographic methods. The demonstrator applications span fields as diverse as biotechnology (intravascular stents), fiber optics (tunable fiber devices), nanoanalytical chemistry (high resolution nuclear magnetic resonance), plastic electronics (paper-like displays), and integrated optics (distributed feedback lasers). The growing interest in nanoscience and nanotechnology motivates research and the development of new methods that can be used for nanofabricating the relevant test structures or devices. The attractive capabilities of the techniques described here, together with the interesting and subtle materials science, chemistry, and

  15. PKI Scalability Issues

    OpenAIRE

    Slagell, Adam J.; Bonilla, Rafael

    2004-01-01

    This report surveys different PKI technologies such as PKIX and SPKI and the issues of PKI that affect scalability. Much focus is spent on certificate revocation methodologies and status verification systems such as CRLs, Delta-CRLs, CRS, Certificate Revocation Trees, Windowed Certificate Revocation, OCSP, SCVP and DVCS.

  16. Nanostructured biosensing platform-shadow edge lithography for high-throughput nanofabrication.

    Science.gov (United States)

    Bai, John G; Yeo, Woon-Hong; Chung, Jae-Hyun

    2009-02-07

    One of the critical challenges in nanostructured biosensors is to manufacture an addressable array of nanopatterns at low cost. The addressable array (1) provides multiplexing for biomolecule detection and (2) enables direct detection of biomolecules without labeling and amplification. To fabricate such an array of nanostructures, current nanolithography methods are limited by the lack of either high throughput or high resolution. This paper presents a high-resolution and high-throughput nanolithography method using the compensated shadow effect in high-vacuum evaporation. The approach enables the fabrication of uniform nanogaps down to 20 nm in width across a 100 mm silicon wafer. The nanogap pattern is used as a template for the routine fabrication of zero-, one-, and two-dimensional nanostructures with a high yield. The method can facilitate the fabrication of nanostructured biosensors on a wafer scale at a low manufacturing cost.
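
    The geometric intuition behind shadow-effect evaporation can be sketched with a one-line relation: a step of height h, tilted at angle θ to the evaporation flux, casts a shadow of width roughly h·tan(θ). The sketch below is my own simplification and does not capture the paper's "compensated" scheme; the numbers are illustrative:

```python
# Purely geometric sketch of shadow evaporation: the shadow cast by a
# step of height h at tilt angle theta sets the nanogap dimension.

import math

def shadow_width_nm(step_height_nm, tilt_deg):
    return step_height_nm * math.tan(math.radians(tilt_deg))

# e.g. a 100 nm step at ~11 degrees gives a ~20 nm gap:
print(f"{shadow_width_nm(100, 11.3):.1f} nm")   # ~20.0 nm
```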

  17. Safety Profile of TiO2-Based Photocatalytic Nanofabrics for Indoor Formaldehyde Degradation

    Science.gov (United States)

    Cui, Guixin; Xin, Yan; Jiang, Xin; Dong, Mengqi; Li, Junling; Wang, Peng; Zhai, Shumei; Dong, Yongchun; Jia, Jianbo; Yan, Bing

    2015-01-01

    Anatase TiO2 nanoparticles (TNPs) are synthesized using the sol-gel method and loaded onto the surface of polyester-cotton (65/35) fabrics. The nanofabrics degrade formaldehyde at an efficiency of 77% in eight hours with visible light irradiation or 97% with UV light. The loaded TNPs display very little release from nanofabrics (~0.0%) during a standard fastness to rubbing test. Assuming TNPs may fall off nanofabrics during their life cycles, we also examine the possible toxicity of TNPs to human cells. We found that up to a concentration of 220 μg/mL, they do not affect viability of human acute monocytic leukemia cell line THP-1 macrophages and human liver and kidney cells. PMID:26610470

  18. Safety Profile of TiO₂-Based Photocatalytic Nanofabrics for Indoor Formaldehyde Degradation.

    Science.gov (United States)

    Cui, Guixin; Xin, Yan; Jiang, Xin; Dong, Mengqi; Li, Junling; Wang, Peng; Zhai, Shumei; Dong, Yongchun; Jia, Jianbo; Yan, Bing

    2015-11-19

    Anatase TiO₂ nanoparticles (TNPs) are synthesized using the sol-gel method and loaded onto the surface of polyester-cotton (65/35) fabrics. The nanofabrics degrade formaldehyde at an efficiency of 77% in eight hours with visible light irradiation or 97% with UV light. The loaded TNPs display very little release from nanofabrics (~0.0%) during a standard fastness to rubbing test. Assuming TNPs may fall off nanofabrics during their life cycles, we also examine the possible toxicity of TNPs to human cells. We found that up to a concentration of 220 μg/mL, they do not affect viability of human acute monocytic leukemia cell line THP-1 macrophages and human liver and kidney cells.

  19. Safety Profile of TiO2-Based Photocatalytic Nanofabrics for Indoor Formaldehyde Degradation

    Directory of Open Access Journals (Sweden)

    Guixin Cui

    2015-11-01

    Full Text Available Anatase TiO2 nanoparticles (TNPs) are synthesized using the sol-gel method and loaded onto the surface of polyester-cotton (65/35) fabrics. The nanofabrics degrade formaldehyde at an efficiency of 77% in eight hours with visible light irradiation or 97% with UV light. The loaded TNPs display very little release from nanofabrics (~0.0%) during a standard fastness to rubbing test. Assuming TNPs may fall off nanofabrics during their life cycles, we also examine the possible toxicity of TNPs to human cells. We found that up to a concentration of 220 μg/mL, they do not affect viability of human acute monocytic leukemia cell line THP-1 macrophages and human liver and kidney cells.

  20. Nanofabrication Technology for Production of Quantum Nano-Electronic Devices Integrating Niobium Electrodes and Optically Transparent Gates

    Science.gov (United States)

    2018-01-01

    TECHNICAL REPORT 3086, January 2018: Nanofabrication Technology for Production of Quantum Nano-electronic Devices Integrating Niobium Electrodes and Optically Transparent Gates. The work described in this report was performed by the Advanced Concepts and Applied Research Branch (Code 71730) and the Science and Technology ... Applied Sciences Division. EXECUTIVE SUMMARY: This technical report demonstrates nanofabrication technology for Niobium heterostructures and ...

  1. Improving the Performance Scalability of the Community Atmosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    Mirin, Arthur [Lawrence Livermore National Laboratory (LLNL); Worley, Patrick H [ORNL

    2012-01-01

    The Community Atmosphere Model (CAM), which serves as the atmosphere component of the Community Climate System Model (CCSM), is the most computationally expensive CCSM component in typical configurations. On current and next-generation leadership class computing systems, the performance of CAM is tied to its parallel scalability. Improving performance scalability in CAM has been a challenge, due largely to algorithmic restrictions necessitated by the polar singularities in its latitude-longitude computational grid. Nevertheless, through a combination of exploiting additional parallelism, implementing improved communication protocols, and eliminating scalability bottlenecks, we have been able to more than double the maximum throughput rate of CAM on production platforms. We describe these improvements and present results on the Cray XT5 and IBM BG/P. The approaches taken are not specific to CAM and may inform similar scalability enhancement activities for other codes.

  2. Scalable Resolution Display Walls

    KAUST Repository

    Leigh, Jason

    2013-01-01

    This article will describe the progress since 2000 on research and development in 2-D and 3-D scalable resolution display walls that are built from tiling individual lower resolution flat panel displays. The article will describe approaches and trends in display hardware construction, middleware architecture, and user-interaction design. The article will also highlight examples of use cases and the benefits the technology has brought to their respective disciplines. © 1963-2012 IEEE.

  3. Scalable electro-photonic integration concept based on polymer waveguides

    NARCIS (Netherlands)

    Bosman, E.; Steenberge, G. van; Boersma, A.; Wiegersma, S.; Harmsma, P.J.; Karppinen, M.; Korhonen, T.; Offrein, B.J.; Dangel, R.; Daly, A.; Ortsiefer, M.; Justice, J.; Corbett, B.; Dorrestein, S.; Duis, J.

    2016-01-01

    A novel method for fabricating a single mode optical interconnection platform is presented. The method comprises the miniaturized assembly of optoelectronic single dies, the scalable fabrication of polymer single mode waveguides, and the coupling to glass fiber arrays providing the I/O's.

  4. Sub-10 nm silicon ridge nanofabrication by advanced edge lithography for NIL applications

    NARCIS (Netherlands)

    Zhao, Yiping; Berenschot, Johan W.; Jansen, Henricus V.; Tas, Niels Roelof; Huskens, Jurriaan; Elwenspoek, Michael Curt

    A new nanofabrication scheme is presented to form stamps useful in thermal nanoimprint lithography (T-NIL). The stamp is created in <110> single crystalline silicon using a full-wet etching procedure including local oxidation of silicon (LOCOS) and employing an adapted edge lithography technique on

  5. From Lab to Fab: Developing a Nanoscale Delivery Tool for Scalable Nanomanufacturing

    Science.gov (United States)

    Safi, Asmahan A.

    The emergence of nanomaterials with unique properties at the nanoscale over the past two decades carries a capacity to impact society and transform or create new industries ranging from nanoelectronics to nanomedicine. However, a gap in nanomanufacturing technologies has prevented the translation of nanomaterials into real-world commercialized products. Bridging this gap requires a paradigm shift in methods for fabricating structured devices with nanoscale resolution in a repeatable fashion. This thesis explores new paradigms for fabricating nanoscale structures, devices, and systems for high-throughput, high-registration applications. We present a robust and scalable nanoscale delivery platform, the Nanofountain Probe (NFP), for parallel direct-write of functional materials. The design and microfabrication of the NFP are presented. The new generation addresses the challenges of throughput, resolution, and ink replenishment that characterize tip-based nanomanufacturing. To achieve these goals, an optimized probe geometry is integrated into the process, along with channel sealing and cantilever bending. The capabilities of the newly fabricated probes are demonstrated through two types of delivery: protein nanopatterning and single-cell nanoinjection. The broad applications of the NFP for single-cell delivery are investigated. An external microfluidic packaging is developed to enable delivery in a liquid environment. The system is integrated with a combined atomic force microscope and inverted fluorescence microscope. Intracellular delivery is demonstrated by injecting a fluorescent dextran into HeLa cells in vitro while monitoring the injection forces. Such developments enable in vitro cellular delivery for single-cell studies and high-throughput gene expression. The nanomanufacturing capabilities of NFPs are explored. Nanofabrication of carbon nanotube-based electronics presents all the manufacturing challenges characteristic of assembling nanomaterials precisely onto devices.

  6. Scalable Reliable SD Erlang Design

    OpenAIRE

    Chechina, Natalia; Trinder, Phil; Ghaffari, Amir; Green, Rickard; Lundin, Kenneth; Virding, Robert

    2014-01-01

    This technical report presents the design of Scalable Distributed (SD) Erlang: a set of language-level changes that aims to enable Distributed Erlang to scale for server applications on commodity hardware with at most 100,000 cores. We cover a number of aspects, specifically anticipated architecture, anticipated failures, scalable data structures, and scalable computation. Two other components that guided us in the design of SD Erlang are design principles and typical Erlang applications.

  7. Scalable photoreactor for hydrogen production

    KAUST Repository

    Takanabe, Kazuhiro

    2017-04-06

    Provided herein are scalable photoreactors that can include a membrane-free water- splitting electrolyzer and systems that can include a plurality of membrane-free water- splitting electrolyzers. Also provided herein are methods of using the scalable photoreactors provided herein.

  8. Multiscale modeling and experimental validation for nanochannel depth control in atomic force microscopy-based nanofabrication

    Energy Technology Data Exchange (ETDEWEB)

    Ren, Jiaqi; Liu, Pinkuan, E-mail: pkliu@sjtu.edu.cn; Zhu, Xiaobo; Zhang, Fan; Chen, Guozhen [State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai 200240 (China)

    2014-08-21

    Nanochannels are essential features of many microelectronic and biomedical devices. To date, the most commonly employed method to fabricate these nanochannels is atomic force microscopy (AFM). However, there is presently a very poor understanding on the fundamental principles underlying this process, which limits its reliability and controllability. In this study, we present a comprehensive multiscale model by incorporating strain gradient plasticity and strain gradient elasticity theories, which can predict nanochannel depths during AFM-based nanofabrication. The modeling results are directly verified with experiments performed on Cu and Pt substrates. As this model can also be extended to include many additional conditions, it has broad applicability in a wide range of AFM-based nanofabrication applications.

  9. Tipping solutions: emerging 3D nano-fabrication/ -imaging technologies

    Directory of Open Access Journals (Sweden)

    Seniutinas Gediminas

    2017-06-01

    Full Text Available The evolution of optical microscopy from an imaging technique into a tool for materials modification and fabrication is now being repeated with other characterization techniques, including scanning electron microscopy (SEM), focused ion beam (FIB) milling/imaging, and atomic force microscopy (AFM). Fabrication and in situ imaging of materials undergoing a three-dimensional (3D) nano-structuring within a 1−100 nm resolution window is required for future manufacturing of devices. This level of precision is critical in enabling the cross-over between different device platforms (e.g. from electronics to micro-/nano-fluidics and/or photonics) within future devices that will be interfacing with biological and molecular systems in a 3D fashion. Prospective trends in electron, ion, and nano-tip based fabrication techniques are presented.

  10. Tipping solutions: emerging 3D nano-fabrication/ -imaging technologies

    Science.gov (United States)

    Seniutinas, Gediminas; Balčytis, Armandas; Reklaitis, Ignas; Chen, Feng; Davis, Jeffrey; David, Christian; Juodkazis, Saulius

    2017-06-01

    The evolution of optical microscopy from an imaging technique into a tool for materials modification and fabrication is now being repeated with other characterization techniques, including scanning electron microscopy (SEM), focused ion beam (FIB) milling/imaging, and atomic force microscopy (AFM). Fabrication and in situ imaging of materials undergoing a three-dimensional (3D) nano-structuring within a 1-100 nm resolution window is required for future manufacturing of devices. This level of precision is critical in enabling the cross-over between different device platforms (e.g. from electronics to micro-/nano-fluidics and/or photonics) within future devices that will be interfacing with biological and molecular systems in a 3D fashion. Prospective trends in electron, ion, and nano-tip based fabrication techniques are presented.

  11. Manual, In situ, Real-Time Nanofabrication using Cracking through Indentation

    Science.gov (United States)

    Nam, Koo Hyun; Suh, Young D.; Yeo, Junyeob; Woo, Deokha

    2016-01-01

    Nanofabrication is in increasing demand for applications in many fields of science and technology, but it still requires relatively difficult, time-consuming, and expensive processes. Here we report a simple but very effective one-dimensional (1D) nano-patterning technology that suggests a new nanofabrication method. This technique involves the control of naturally propagating cracks initiated through simple, manually generated indentation, obviating the need for complicated equipment and elaborate experimental environments such as clean rooms, high vacuums, and the fastidious maintenance of processing temperatures. The channel fabricated with this technique can be as narrow as 10 nm with unlimited length and a very high cross-sectional aspect ratio, an accomplishment difficult even for a state-of-the-art technology such as e-beam lithography. More interestingly, the fabrication speed can be controlled, down to as little as several hundred micrometers per second. Along with the simplicity and real-time fabrication capability of the technique, this tunable fabrication speed makes the method introduced here an authentic nanofabrication route for in situ experiments.

  12. Scalable Frequent Subgraph Mining

    KAUST Repository

    Abdelhamid, Ehab

    2017-06-19

    A graph is a data structure that contains a set of nodes and a set of edges connecting these nodes. Nodes represent objects while edges model relationships among these objects. Graphs are used in various domains due to their ability to model complex relations among several objects. Given an input graph, the Frequent Subgraph Mining (FSM) task finds all subgraphs with frequencies exceeding a given threshold. FSM is crucial for graph analysis, and it is an essential building block in a variety of applications, such as graph clustering and indexing. FSM is computationally expensive, and its existing solutions are extremely slow. Consequently, these solutions are incapable of mining modern large graphs. This slowness is caused by the underlying approaches of these solutions which require finding and storing an excessive amount of subgraph matches. This dissertation proposes a scalable solution for FSM that avoids the limitations of previous work. This solution is composed of four components. The first component is a single-threaded technique which, for each candidate subgraph, needs to find only a minimal number of matches. The second component is a scalable parallel FSM technique that utilizes a novel two-phase approach. The first phase quickly builds an approximate search space, which is then used by the second phase to optimize and balance the workload of the FSM task. The third component focuses on accelerating frequency evaluation, which is a critical step in FSM. To do so, a machine learning model is employed to predict the type of each graph node, and accordingly, an optimized method is selected to evaluate that node. The fourth component focuses on mining dynamic graphs, such as social networks. To this end, an incremental index is maintained during the dynamic updates. Only this index is processed and updated for the majority of graph updates. Consequently, search space is significantly pruned and efficiency is improved. The empirical evaluation shows that the
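
    As a concrete illustration of the frequency-evaluation step this dissertation targets, below is a minimal single-threaded sketch (not the dissertation's code) that computes an MNI-style support of one candidate subgraph using networkx; the toy graphs and pattern are assumptions chosen only for illustration.

    ```python
    # Hedged sketch: minimum-image-based (MNI) frequency evaluation of one
    # candidate subgraph, the primitive that dominates FSM cost.
    import networkx as nx
    from networkx.algorithms import isomorphism

    def support(G: nx.Graph, pattern: nx.Graph) -> int:
        """For each pattern node, count the distinct graph nodes it can map to;
        the support is the minimum of these counts over all pattern nodes."""
        images = {p: set() for p in pattern.nodes}
        gm = isomorphism.GraphMatcher(G, pattern)
        for mapping in gm.subgraph_isomorphisms_iter():  # maps G-nodes -> pattern-nodes
            for g_node, p_node in mapping.items():
                images[p_node].add(g_node)
        return min(len(s) for s in images.values())

    G = nx.path_graph(5)        # toy input graph
    pattern = nx.path_graph(3)  # candidate subgraph
    print(support(G, pattern))  # prints 3: the MNI support of the pattern
    ```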

  13. Scalable Nanomanufacturing—A Review

    Directory of Open Access Journals (Sweden)

    Khershed Cooper

    2017-01-01

    Full Text Available This article describes the field of scalable nanomanufacturing, its importance and need, and its research activities and achievements. The National Science Foundation is taking a leading role in fostering basic research in scalable nanomanufacturing (SNM). From this effort several novel nanomanufacturing approaches have been proposed, studied and demonstrated, including scalable nanopatterning. This paper will discuss SNM research areas in materials, processes and applications, scale-up methods with project examples, and manufacturing challenges that need to be addressed to move nanotechnology discoveries closer to the marketplace.

  14. Current parallel I/O limitations to scalable data analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Mascarenhas, Ajith Arthur; Pebay, Philippe Pierre

    2011-07-01

    This report describes the limitations to parallel scalability which we have encountered when applying our otherwise optimally scalable parallel statistical analysis tool kit to large data sets distributed across the parallel file system of the current premier DOE computational facility. This report describes our study to evaluate the effect of parallel I/O on the overall scalability of a parallel data analysis pipeline using our scalable parallel statistics tool kit [PTBM11]. To this end, we tested it using the Jaguar-pf DOE/ORNL peta-scale platform on large combustion simulation data under a variety of process counts and domain decomposition scenarios. In this report we have recalled the foundations of the parallel statistical analysis tool kit which we have designed and implemented, with the specific double intent of reproducing typical data analysis workflows, and achieving optimal design for scalable parallel implementations. We have briefly reviewed those earlier results and publications which allow us to conclude that we have achieved both goals. However, in this report we have further established that, when used in conjunction with a state-of-the-art parallel I/O system, as can be found on the premier DOE peta-scale platform, the scaling properties of the overall analysis pipeline comprising parallel data access routines degrade rapidly. This finding is problematic and must be addressed if peta-scale data analysis is to be made scalable, or even possible. In order to attempt to address these parallel I/O limitations, we will investigate the use of the Adaptable IO System (ADIOS) [LZL+10] to improve I/O performance, while maintaining flexibility for a variety of IO options, such as MPI IO and POSIX IO. This system is developed at ORNL and other collaborating institutions, and is being tested extensively on Jaguar-pf. Simulation code being developed on these systems will also use ADIOS to output the data thereby making it easier for other systems, such as ours, to
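
    To make the "analysis scales, I/O does not" argument concrete, here is a hedged sketch of the one-pass parallel-statistics pattern such a tool kit uses (not the [PTBM11] code): each rank summarizes its local block in a single pass, and only a few scalars per rank cross the interconnect, combined with the numerically stable pairwise-update formula.

    ```python
    # Hedged sketch of parallel descriptive statistics with mpi4py; the data
    # block is random stand-in, not the combustion simulation output.
    import numpy as np
    from mpi4py import MPI

    def combine(a, b):
        """Pairwise update (Chan et al.) for (count, mean, sum of squared deviations)."""
        n_a, mu_a, m2_a = a
        n_b, mu_b, m2_b = b
        n = n_a + n_b
        delta = mu_b - mu_a
        mu = mu_a + delta * n_b / n
        m2 = m2_a + m2_b + delta * delta * n_a * n_b / n
        return (n, mu, m2)

    comm = MPI.COMM_WORLD
    local = np.random.rand(1_000_000)          # stand-in for one domain block
    part = (local.size, local.mean(), ((local - local.mean()) ** 2).sum())

    parts = comm.gather(part, root=0)          # a few scalars per rank, not the data
    if comm.rank == 0:
        n, mu, m2 = parts[0]
        for p in parts[1:]:
            n, mu, m2 = combine((n, mu, m2), p)
        print(f"global mean={mu:.6f} variance={m2 / n:.6f}")
    ```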

  15. Scalable Gravity Offload System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — A scalable gravity offload device simulates reduced gravity for the testing of various surface system elements such as mobile robots, excavators, habitats, and...

  16. Scalable resource management in high performance computers.

    Energy Technology Data Exchange (ETDEWEB)

    Frachtenberg, E. (Eitan); Petrini, F. (Fabrizio); Fernandez Peinador, J. (Juan); Coll, S. (Salvador)

    2002-01-01

    Clusters of workstations have emerged as an important platform for building cost-effective, scalable and highly-available computers. Although many hardware solutions are available today, the largest challenge in making large-scale clusters usable lies in the system software. In this paper we present STORM, a resource management tool designed to provide scalability, low overhead and the flexibility necessary to efficiently support and analyze a wide range of job scheduling algorithms. STORM achieves these feats by closely integrating the management daemons with the low-level features that are common in state-of-the-art high-performance system area networks. The architecture of STORM is based on three main technical innovations. First, a sizable part of the scheduler runs in the thread processor located on the network interface. Second, we use hardware collectives that are highly scalable both for implementing control heartbeats and to distribute the binary of a parallel job in near-constant time, irrespective of job and machine sizes. Third, we use an I/O bypass protocol that allows fast data movements from the file system to the communication buffers in the network interface and vice versa. The experimental results show that STORM can launch a job with a binary of 12MB on a 64 processor/32 node cluster in less than 0.25 sec on an empty network, in less than 0.45 sec when all the processors are busy computing other jobs, and in less than 0.65 sec when the network is flooded with a background traffic. This paper provides experimental and analytical evidence that these results scale to a much larger number of nodes. To the best of our knowledge, STORM is at least two orders of magnitude faster than existing production schedulers in launching jobs, performing resource management tasks and gang scheduling.
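
    STORM's near-constant-time binary distribution relies on hardware collectives in the network interface; as a software analogue (an assumption for illustration, not STORM's implementation), the sketch below broadcasts a mock 12 MB binary with MPI's collective Bcast, which typically runs in logarithmic tree depth, and times it.

    ```python
    # Illustrative analogue of STORM's job-binary distribution step.
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    size = np.empty(1, dtype=np.int64)
    if rank == 0:
        # mock 12 MB job binary, matching the size quoted in the abstract
        binary = np.random.default_rng(0).integers(0, 256, 12 * 2**20, dtype=np.uint8)
        size[0] = binary.size
    else:
        binary = None

    t0 = MPI.Wtime()
    comm.Bcast(size, root=0)                  # announce the payload length
    if rank != 0:
        binary = np.empty(size[0], dtype=np.uint8)
    comm.Bcast(binary, root=0)                # ship the image to every node
    comm.Barrier()
    if rank == 0:
        print(f"broadcast {size[0]} bytes in {MPI.Wtime() - t0:.4f} s")
    ```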

  17. Scalable Quantum Photonics with Single Color Centers in Silicon Carbide.

    Science.gov (United States)

    Radulaski, Marina; Widmann, Matthias; Niethammer, Matthias; Zhang, Jingyuan Linda; Lee, Sang-Yun; Rendler, Torsten; Lagoudakis, Konstantinos G; Son, Nguyen Tien; Janzén, Erik; Ohshima, Takeshi; Wrachtrup, Jörg; Vučković, Jelena

    2017-03-08

    Silicon carbide is a promising platform for single photon sources, quantum bits (qubits), and nanoscale sensors based on individual color centers. Toward this goal, we develop a scalable array of nanopillars incorporating single silicon vacancy centers in 4H-SiC, readily available for efficient interfacing with free-space objective and lensed-fibers. A commercially obtained substrate is irradiated with 2 MeV electron beams to create vacancies. Subsequent lithographic process forms 800 nm tall nanopillars with 400-1400 nm diameters. We obtain high collection efficiency of up to 22 kcounts/s optical saturation rates from a single silicon vacancy center while preserving the single photon emission and the optically induced electron-spin polarization properties. Our study demonstrates silicon carbide as a readily available platform for scalable quantum photonics architecture relying on single photon sources and qubits.

  18. Scalable Quantum Photonics with Single Color Centers in Silicon Carbide

    Science.gov (United States)

    Radulaski, Marina; Widmann, Matthias; Niethammer, Matthias; Zhang, Jingyuan Linda; Lee, Sang-Yun; Rendler, Torsten; Lagoudakis, Konstantinos G.; Son, Nguyen Tien; Janzén, Erik; Ohshima, Takeshi; Wrachtrup, Jörg; Vučković, Jelena

    2017-03-01

    Silicon carbide is a promising platform for single photon sources, quantum bits (qubits) and nanoscale sensors based on individual color centers. Towards this goal, we develop a scalable array of nanopillars incorporating single silicon vacancy centers in 4H-SiC, readily available for efficient interfacing with free-space objective and lensed-fibers. A commercially obtained substrate is irradiated with 2 MeV electron beams to create vacancies. Subsequent lithographic process forms 800 nm tall nanopillars with 400-1,400 nm diameters. We obtain high collection efficiency, up to 22 kcounts/s optical saturation rates from a single silicon vacancy center, while preserving the single photon emission and the optically induced electron-spin polarization properties. Our study demonstrates silicon carbide as a readily available platform for scalable quantum photonics architecture relying on single photon sources and qubits.

  19. Towards Scalable Graph Computation on Mobile Devices

    Science.gov (United States)

    Chen, Yiqi; Lin, Zhiyuan; Pienta, Robert; Kahng, Minsuk; Chau, Duen Horng

    2015-01-01

    Mobile devices have become increasingly central to our everyday activities, due to their portability, multi-touch capabilities, and ever-improving computational power. Such attractive features have spurred research interest in leveraging mobile devices for computation. We explore a novel approach that aims to use a single mobile device to perform scalable graph computation on large graphs that do not fit in the device's limited main memory, opening up the possibility of performing on-device analysis of large datasets, without relying on the cloud. Based on the familiar memory mapping capability provided by today's mobile operating systems, our approach to scale up computation is powerful and intentionally kept simple to maximize its applicability across the iOS and Android platforms. Our experiments demonstrate that an iPad mini can perform fast computation on large real graphs with as many as 272 million edges (Google+ social graph), at a speed that is only a few times slower than a 13-inch MacBook Pro. Through creating a real-world iOS app with this technique, we demonstrate the strong potential for scalable graph computation on a single mobile device using our approach. PMID:25859564
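
    A minimal sketch of the memory-mapping idea described above, assuming an int32 (source, destination) edge list already written to a file named graph_edges.bin (both the file name and layout are illustrative): the operating system pages in only the slices the loop touches, so the computation works even when the graph exceeds RAM.

    ```python
    # Hedged sketch: out-of-core degree counting over a memory-mapped edge list.
    import numpy as np

    edges = np.memmap("graph_edges.bin", dtype=np.int32, mode="r").reshape(-1, 2)
    n_nodes = int(edges.max()) + 1             # one full (paged) scan to size arrays

    degree = np.zeros(n_nodes, dtype=np.int64)
    chunk = 1 << 20                            # touch ~1M edges at a time
    for start in range(0, edges.shape[0], chunk):
        block = edges[start:start + chunk]     # only this slice needs to be resident
        degree += np.bincount(block.ravel(), minlength=n_nodes)

    print("max degree (in+out):", degree.max())
    ```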

  20. From nanofabrication to self-fabrication--tailored chemistry for control of single molecule electronic devices

    DEFF Research Database (Denmark)

    Moth-Poulsen, Kasper; Bjørnholm, Thomas

    2010-01-01

    Single molecule electronics is a field of research focused on the use of single molecules as electronics components. During the past 15 years the field has concentrated on development of test beds for measurements on single molecules. Bottom-up approaches to single molecule devices are emerging as alternatives to the dominant top-down nanofabrication techniques. One example is solution-based self-assembly of a molecule enclosed by two gold nanorod electrodes. This article will discuss recent attempts to control the self-assembly process by the use of supramolecular chemistry and how to tailor the electronic properties of a single molecule by chemical design.

  1. Template-free electrochemical nanofabrication of polyaniline nanobrush and hybrid polyaniline with carbon nanohorns for supercapacitors

    Energy Technology Data Exchange (ETDEWEB)

    Wei Di; Andrew, Piers; Ryhaenen, Tapani [Nokia Research Centre Cambridge, Broers Building, 21 J J Thomson Avenue, Cambridge CB3 0FA (United Kingdom); Wang, Haolan; Hiralal, Pritesh; Amaratunga, Gehan A J [Electrical Engineering Division, Department of Engineering, University of Cambridge, 9 J J Thomson Avenue, Cambridge CB3 0FA (United Kingdom); Hayashi, Yasuhiko, E-mail: di.wei@nokia.com, E-mail: gaja1@cam.ac.uk [Department of Materials Science, Nagoya Institute of Technology, Nagoya 466-8555 (Japan)

    2010-10-29

    Polyaniline (PANI) nanobrushes were synthesized by template-free electrochemical galvanostatic methods. When the same method was applied to a carbon nanohorn (CNH) solution containing aniline monomers, a hybrid nanostructure containing PANI and CNHs was formed after electropolymerization. This is the first report of a template-free method for making PANI nanobrushes and a homogeneous hybrid of soft matter (PANI) with carbon nanoparticles. Raman spectroscopy was used to analyze the interaction between CNHs and PANI. Electrochemical nanofabrication offers simplicity and good control when used to make electronic devices. Both of these materials were applied in supercapacitors, and an improvement in capacitive current with the hybrid material was observed.

  2. Nanoscience and nanofabrication at Argonne National Laboratory: The art of making small

    Science.gov (United States)

    Ocola, Leonidas E.

    2014-03-01

    Over a decade ago the Department of Energy started the design and construction of five Nanoscale Science Research Centers at different national laboratories with the objective of providing research opportunities in nanoscience for the scientific community worldwide. The Center for Nanoscale Materials (CNM) at Argonne National Laboratory was constructed in 2006, and opened its doors to the user community in 2007. Currently the CNM hosts over 400 user proposals a year. There are six research groups at the CNM that work in nanophotonics, electronic and magnetic materials and devices, nanobio interfaces, nanofabrication and devices, x-ray nanoscale microscopy, and theory and modeling. I work in the Nanofabrication and Devices group, and my research career has covered the use of x-rays, electrons and ions in the pursuit of making ever smaller structures and devices. At the CNM I have been able to push the limits of electron beam lithography, and expand the use of ion beams to large area nanofabrication. Some of our accomplishments include determining liquid-polymer interactions as a function of temperature, redefining proximity effect correction at the nanoscale (NanoPEC), measuring to less than 0.5% error the backscatter range for 100 kV electron beams and finding that the range is a function of the density of the substrate, fabrication of plasmonic slit waveguides, and using ions to create complex three dimensional structures for use in fluidics. None of these accomplishments are possible without detailed understanding of the physics and chemistry mechanisms involved during fabrication. This requires extensive theory and simulation work to validate our experimental results. The fruit of our work then is a full understanding of "why" we use certain processes for nanofabrication and not just a simple set of process recipes. A summary of all these activities will be discussed at the presentation. This work was supported by the Department of Energy under

  3. Scalable algorithms for contact problems

    CERN Document Server

    Dostál, Zdeněk; Sadowská, Marie; Vondrák, Vít

    2016-01-01

    This book presents a comprehensive and self-contained treatment of the authors’ newly developed scalable algorithms for the solutions of multibody contact problems of linear elasticity. The brand new feature of these algorithms is theoretically supported numerical scalability and parallel scalability demonstrated on problems discretized by billions of degrees of freedom. The theory supports solving multibody frictionless contact problems, contact problems with possibly orthotropic Tresca’s friction, and transient contact problems. It covers BEM discretization, jumping coefficients, floating bodies, mortar non-penetration conditions, etc. The exposition is divided into four parts, the first of which reviews appropriate facets of linear algebra, optimization, and analysis. The most important algorithms and optimality results are presented in the third part of the volume. The presentation is complete, including continuous formulation, discretization, decomposition, optimality results, and numerical experimen...

  4. Scalability of DL_POLY on High Performance Computing Platform

    CSIR Research Space (South Africa)

    Mabakane, Mabule S

    2017-12-01

    Full Text Available … running on up to 100 processors (cores) (W. Smith & Todorov, 2006). DL_POLY_3 (including 3.09) utilises a static/equi-spacial domain decomposition parallelisation strategy in which the simulation cell (comprising the atoms, ions or molecules) is divided […] (…, 1997; Lange et al., 2011). Traditionally, it is expected that codes should scale linearly when one increases computational resources such as compute nodes or servers (Chamberlain, Chace, & Patil, 1998; Gropp & Snir, 2009). However, several studies…

  5. Scalable Testing Platform for CMOS Read In Integrated Circuits

    Science.gov (United States)

    2016-03-31

    Hernandez, Jonathan Dickason, Peyman Barakshan, Nick Waite, Rodney McGee, Fouad Kiamilev (Electrical and Computer Engineering Department, University of…). Work supported by the Test & Evaluation/Science & Technology (T&E/S&T) Program and/or the US Army Program Executive Office for Simulation, Training and Instrumentation (PEO STRI) under Contract No. W91ZLK-06-C-0006.

  6. Content Integration: Creating a Scalable Common Platform for Information Resources

    Science.gov (United States)

    Berenstein, Max; Katz, Demian

    2012-01-01

    Academic, government, and corporate librarians organize and leverage internal resources and content through institutional repositories and library catalogs. Getting more value and usage from the content they license is a key goal. However, the ever-growing amount of content and shifting user demands for new materials or features have made the…

  7. Dense Plasma Focus-Based Nanofabrication of III–V Semiconductors: Unique Features and Recent Advances

    Directory of Open Access Journals (Sweden)

    Onkar Mangla

    2015-12-01

    Full Text Available The hot and dense plasma formed in a modified dense plasma focus (DPF) device has been used worldwide for the nanofabrication of several materials. In this paper, we summarize the fabrication of III–V semiconductor nanostructures using the high-fluence material ions produced by the hot, dense and extremely non-equilibrium plasma generated in a modified DPF device. In addition, we present recent results on the fabrication of porous nano-gallium arsenide (GaAs). The details of the morphological, structural and optical properties of the fabricated nano-GaAs are provided. The effect of rapid thermal annealing on the above properties of porous nano-GaAs is studied. The study reveals that it is possible to tailor the size of the pores with annealing temperature. The optical properties of these porous nano-GaAs also confirm the possibility of tailoring the pore sizes upon annealing. Possible applications of the fabricated and subsequently annealed porous nano-GaAs in transmission-type photo-cathodes and visible optoelectronic devices are discussed. These results suggest that the modified DPF is an effective tool for the nanofabrication of continuous and porous III–V semiconductor nanomaterials. Further opportunities for using the modified DPF device for the fabrication of novel nanostructures are discussed as well.

  8. Nanofabrication of a gold fiducial array on specimen support for electron tomography

    Energy Technology Data Exchange (ETDEWEB)

    Koning, Roman I., E-mail: r.i.koning@lumc.nl [Leiden University Medical Center, Department of Molecular Cell Biology, Section Electron Microscopy, P.O. Box 9600, 2300 RC Leiden (Netherlands); Kutchoukov, Vladimir G.; Hagen, Cornelis W. [Delft University of Technology, Faculty of Applied Sciences, Charged Particle Optics, Lorentzweg 1, 2628 CJ Delft (Netherlands); Koster, Abraham J. [Leiden University Medical Center, Department of Molecular Cell Biology, Section Electron Microscopy, P.O. Box 9600, 2300 RC Leiden (Netherlands)

    2013-12-15

    Here we describe the production, using lithography and micro-engineering technologies, of patterned arrays of nanofabricated gold dots on a thin Si₃N₄ electron transparent layer, supported by silicon. We illustrate that the support with a patterned structure of nanosized gold can be exploited for (cryo) electron tomography applications as a specimen support with predefined alignment markers. This nanogold patterned support has several advantages. The Si₃N₄ window provides a 50 nm thin, strong and flat support with a ∼0.7 mm² electron-beam transparent window. The nanogold pattern has a user-defined size and density, and is highly regular and stable. This facilitates accurate tracking during tilt series acquisition, provides sufficient contrast for accurate alignment during the image reconstruction step and avoids an uneven lateral distribution and movement of individual fiducials. We showed that the support is suitable for electron tomography on plastic sections. - Highlights: • We nanofabricated gold arrays on a thin electron transparent silicon nitride support. • The position and size of nanopatterned gold fiducial clusters can be controlled. • The gold fiducials can be used for alignment and tracking in electron tomography. • We recorded electron tomographic data of stained plastic sections of fixed cells.

  9. Scalable shared-memory multiprocessing

    CERN Document Server

    Lenoski, Daniel E

    1995-01-01

    Dr. Lenoski and Dr. Weber have experience with leading-edge research and practical issues involved in implementing large-scale parallel systems. They were key contributors to the architecture and design of the DASH multiprocessor. Currently, they are involved with commercializing scalable shared-memory technology.

  10. Scalability study of solid xenon

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, J.; Cease, H.; Jaskierny, W. F.; Markley, D.; Pahlka, R. B.; Balakishiyeva, D.; Saab, T.; Filipenko, M.

    2015-04-01

    We report a demonstration of the scalability of optically transparent xenon in the solid phase for use as a particle detector above the kilogram scale. We employed a cryostat cooled by liquid nitrogen combined with a xenon purification and chiller system. A modified Bridgman technique reproducibly yields large-scale optically transparent solid xenon.

  11. Scalable parallel communications

    Science.gov (United States)

    Maly, K.; Khanna, S.; Overstreet, C. M.; Mukkamala, R.; Zubair, M.; Sekhar, Y. S.; Foudriat, E. C.

    1992-01-01

    Coarse-grain parallelism in networking (that is, the use of multiple protocol processors running replicated software sending over several physical channels) can be used to provide gigabit communications for a single application. Since parallel network performance is highly dependent on real issues such as hardware properties (e.g., memory speeds and cache hit rates), operating system overhead (e.g., interrupt handling), and protocol performance (e.g., effect of timeouts), we have performed detailed simulations studies of both a bus-based multiprocessor workstation node (based on the Sun Galaxy MP multiprocessor) and a distributed-memory parallel computer node (based on the Touchstone DELTA) to evaluate the behavior of coarse-grain parallelism. Our results indicate: (1) coarse-grain parallelism can deliver multiple 100 Mbps with currently available hardware platforms and existing networking protocols (such as Transmission Control Protocol/Internet Protocol (TCP/IP) and parallel Fiber Distributed Data Interface (FDDI) rings); (2) scale-up is near linear in n, the number of protocol processors, and channels (for small n and up to a few hundred Mbps); and (3) since these results are based on existing hardware without specialized devices (except perhaps for some simple modifications of the FDDI boards), this is a low cost solution to providing multiple 100 Mbps on current machines. In addition, from both the performance analysis and the properties of these architectures, we conclude: (1) multiple processors providing identical services and the use of space division multiplexing for the physical channels can provide better reliability than monolithic approaches (it also provides graceful degradation and low-cost load balancing); (2) coarse-grain parallelism supports running several transport protocols in parallel to provide different types of service (for example, one TCP handles small messages for many users, other TCP's running in parallel provide high bandwidth

  12. Scalability improvements to NRLMOL for DFT calculations of large molecules

    Science.gov (United States)

    Diaz, Carlos Manuel

    Advances in high performance computing (HPC) have provided a way to treat large, computationally demanding tasks using thousands of processors. With the development of more powerful HPC architectures, the need to create efficient and scalable code has grown more important. Electronic structure calculations are valuable in understanding experimental observations and are routinely used for new materials predictions. For electronic structure calculations, the computation time grows with the number of atoms, and memory requirements scale as N², where N is the number of atoms. While recent advances in HPC offer platforms with large numbers of cores, the limited amount of memory available on a given node and the poor scalability of the electronic structure code hinder efficient usage of these platforms. This thesis presents developments to overcome these bottlenecks in order to study large systems. These developments, which are implemented in the NRLMOL electronic structure code, involve the use of sparse matrix storage formats and linear algebra using sparse and distributed matrices. These developments, along with other related work, now allow ground state density functional calculations using up to 25,000 basis functions and excited state calculations using up to 17,000 basis functions while utilizing all cores on a node. An example on a light-harvesting triad molecule is described. Finally, future plans to further improve the scalability are presented.
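
    To illustrate the sparse-storage development in concrete terms, the snippet below contrasts a dense N² matrix with SciPy's compressed sparse row (CSR) format on a mock overlap-like matrix; the size and sparsity are illustrative assumptions, not NRLMOL's actual data.

    ```python
    # Hedged sketch: memory footprint of dense vs. sparse storage for an
    # N x N matrix with ~1% significant off-diagonal entries.
    import numpy as np
    from scipy import sparse

    n = 2000                                   # basis functions (illustrative)
    rng = np.random.default_rng(0)
    dense = np.where(rng.random((n, n)) < 0.01, rng.random((n, n)), 0.0)
    np.fill_diagonal(dense, 1.0)               # diagonal is always significant

    s = sparse.csr_matrix(dense)               # keep only the nonzero entries
    print(f"dense : {dense.nbytes / 1e6:8.1f} MB")
    print(f"sparse: {(s.data.nbytes + s.indices.nbytes + s.indptr.nbytes) / 1e6:8.1f} MB")
    ```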

  13. Network selection, Information filtering and Scalable computation

    Science.gov (United States)

    Ye, Changqing

    -complete factorizations, possibly with a high percentage of missing values. This promotes additional sparsity beyond rank reduction. Computationally, we design methods based on a "decomposition and combination" strategy, to break large-scale optimization into many small subproblems to solve in a recursive and parallel manner. On this basis, we implement the proposed methods through multi-platform shared-memory parallel programming, and through Mahout, a library for scalable machine learning and data mining, for MapReduce computation. For example, our methods are scalable to a dataset consisting of three billion observations on a single machine with sufficient memory, with good timings. Both theoretical and numerical investigations show that the proposed methods exhibit significant improvement in accuracy over state-of-the-art scalable methods.

  14. On the Scalability of Time-predictable Chip-Multiprocessing

    DEFF Research Database (Denmark)

    Puffitsch, Wolfgang; Schoeberl, Martin

    2012-01-01

    Real-time systems need a time-predictable execution platform to be able to determine the worst-case execution time statically. In order to be time-predictable, several advanced processor features, such as out-of-order execution and other forms of speculation, have to be avoided. However, just using simple processors is not an option for embedded systems with high demands on computing power. In order to provide high performance and predictability we argue to use multiprocessor systems with a time-predictable memory interface. In this paper we present the scalability of a Java chip-multiprocessor system that is designed to be time-predictable. Adding time-predictable caches is mandatory to achieve scalability with a shared memory multi-processor system. As Java bytecode retains information about the nature of memory accesses, it is possible to implement a memory hierarchy that takes...

  15. Scientific visualization uncertainty, multifield, biomedical, and scalable visualization

    CERN Document Server

    Chen, Min; Johnson, Christopher; Kaufman, Arie; Hagen, Hans

    2014-01-01

    Based on the seminar that took place in Dagstuhl, Germany in June 2011, this contributed volume studies four important topics within the scientific visualization field: uncertainty visualization, multifield visualization, biomedical visualization and scalable visualization. • Uncertainty visualization deals with uncertain data from simulations or sampled data, uncertainty due to the mathematical processes operating on the data, and uncertainty in the visual representation. • Multifield visualization addresses the need to depict multiple data at individual locations and the combination of multiple datasets. • Biomedical visualization is a vast field, with selected subtopics ranging from scanning methodologies to structural and biological applications. • Scalability in scientific visualization is critical as data grows and computational devices range from hand-held mobile devices to exascale computational platforms. Scientific Visualization will be useful to practitioners of scientific visualization, ...

  16. Potential of Scalable Vector Graphics (SVG) for Ocean Science Research

    Science.gov (United States)

    Sears, J. R.

    2002-12-01

    Scalable Vector Graphics (SVG), a graphic format encoded in Extensible Markup Language (XML), is a recent W3C standard. SVG is text-based and platform-neutral, allowing interoperability and a rich array of features that offer significant promise for the presentation and publication of ocean and earth science research. This presentation (a) provides a brief introduction to SVG with real-world examples; (b) reviews browsers, editors, and other SVG tools; and (c) talks about some of the more powerful capabilities of SVG that might be important for ocean and earth science data presentation, such as searchability, animation and scripting, interactivity, accessibility, dynamic SVG, layers, scalability, SVG Text, SVG Audio, server-side SVG, and embedding metadata and data. A list of useful SVG resources is also given.
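
    Because SVG is plain text, a publication-quality vector figure can be emitted straight from a script with no plotting library, which is part of its appeal for data presentation. A tiny hedged example follows; the depth and temperature values are invented for illustration.

    ```python
    # Hedged sketch: writing a minimal SVG polyline of a (depth, temperature)
    # profile directly as text. All values and scalings are illustrative.
    depths = [0, 50, 100, 200, 400, 800]         # m
    temps = [22.0, 21.5, 18.0, 12.0, 8.0, 4.5]   # degrees C

    pts = " ".join(f"{20 + t * 10:.1f},{20 + d / 2:.1f}" for d, t in zip(depths, temps))
    svg = f"""<svg xmlns="http://www.w3.org/2000/svg" width="300" height="450">
      <title>Temperature profile (illustrative)</title>
      <polyline points="{pts}" fill="none" stroke="black" stroke-width="1.5"/>
      <text x="10" y="445" font-size="10">x: temperature (scaled); y: depth (scaled)</text>
    </svg>"""
    with open("profile.svg", "w") as f:
        f.write(svg)                             # viewable in any modern browser
    ```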

  17. Microscopic droplet formation and energy transport analysis of condensation on scalable superhydrophobic nanostructured copper oxide surfaces.

    Science.gov (United States)

    Li, GuanQiu; Alhosani, Mohamed H; Yuan, ShaoJun; Liu, HaoRan; Ghaferi, Amal Al; Zhang, TieJun

    2014-12-09

    Utilization of nanotechnologies in condensation has been recognized as one opportunity to improve the efficiency of large-scale thermal power and desalination systems. High-performance and stable dropwise condensation in widely-used copper heat exchangers is appealing for energy and water industries. In this work, a scalable and low-cost nanofabrication approach was developed to fabricate superhydrophobic copper oxide (CuO) nanoneedle surfaces to promote dropwise condensation and even jumping-droplet condensation. By conducting systematic surface characterization and in situ environmental scanning electron microscope (ESEM) condensation experiments, we were able to probe the microscopic formation physics of droplets on irregular nanostructured surfaces. At the early stages of condensation process, the interfacial surface tensions at the edge of CuO nanoneedles were found to influence both the local energy barriers for microdroplet growth and the advancing contact angles when droplets undergo depinning. Local surface roughness also has a significant impact on the volume of the condensate within the nanostructures and overall heat transfer from the vapor to substrate. Both our theoretical analysis and in situ ESEM experiments have revealed that the liquid condensate within the nanostructures determines the amount of the work of adhesion and kinetic energy associated with droplet coalescence and jumping. Local and global droplet growth models were also proposed to predict how the microdroplet morphology within nanostructures affects the heat transfer performance of early-stage condensation. Our quantitative analysis of microdroplet formation and growth within irregular nanostructures provides the insight to guide the anodization-based nanofabrication for enhancing dropwise and jumping-droplet condensation performance.

  18. Quality scalable video data stream

    OpenAIRE

    Wiegand, T.; Kirchhoffer, H.; Schwarz, H

    2008-01-01

    An apparatus for generating a quality-scalable video data stream (36) is described which comprises means (42) for coding a video signal (18) using block-wise transformation to obtain transform blocks (146, 148) of transformation coefficient values for a picture (140) of the video signal, a predetermined scan order (154, 156, 164, 166) with possible scan positions being defined among the transformation coefficient values within the transform blocks so that in each transform block, for each pos...

  19. The Nanofabrication and Application of Substrates for Surface-Enhanced Raman Scattering

    Directory of Open Access Journals (Sweden)

    Xian Zhang

    2012-01-01

    Full Text Available Surface-enhanced Raman scattering (SERS was discovered in 1974 and impacted Raman spectroscopy and surface science. Although SERS has not been developed to be an applicable detection tool so far, nanotechnology has promoted its development in recent decades. The traditional SERS substrates, such as silver electrode, metal island film, and silver colloid, cannot be applied because of their enhancement factor or stability, but newly developed substrates, such as electrochemical deposition surface, Ag porous film, and surface-confined colloids, have better sensitivity and stability. Surface enhanced Raman scattering is applied in other fields such as detection of chemical pollutant, biomolecules, DNA, bacteria, and so forth. In this paper, the development of nanofabrication and application of surface-enhanced Ramans scattering substrate are discussed.

  20. Colloidal Inorganic Nanocrystal Based Nanocomposites: Functional Materials for Micro and Nanofabrication

    Directory of Open Access Journals (Sweden)

    Marinella Striccoli

    2010-02-01

    Full Text Available The unique size- and shape-dependent electronic properties of nanocrystals (NCs make them extremely attractive as novel structural building blocks for constructing a new generation of innovative materials and solid-state devices. Recent advances in material chemistry has allowed the synthesis of colloidal NCs with a wide range of compositions, with a precise control on size, shape and uniformity as well as specific surface chemistry. By incorporating such nanostructures in polymers, mesoscopic materials can be achieved and their properties engineered by choosing NCs differing in size and/or composition, properly tuning the interaction between NCs and surrounding environment. In this contribution, different approaches will be presented as effective opportunities for conveying colloidal NC properties to nanocomposite materials for micro and nanofabrication. Patterning of such nanocomposites either by conventional lithographic techniques and emerging patterning tools, such as ink jet printing and nanoimprint lithography, will be illustrated, pointing out their technological impact on developing new optoelectronic and sensing devices.

  1. Colloidal Inorganic Nanocrystal Based Nanocomposites: Functional Materials for Micro and Nanofabrication

    Science.gov (United States)

    Ingrosso, Chiara; Panniello, AnnaMaria; Comparelli, Roberto; Curri, Maria Lucia; Striccoli, Marinella

    2010-01-01

    The unique size- and shape-dependent electronic properties of nanocrystals (NCs) make them extremely attractive as novel structural building blocks for constructing a new generation of innovative materials and solid-state devices. Recent advances in material chemistry has allowed the synthesis of colloidal NCs with a wide range of compositions, with a precise control on size, shape and uniformity as well as specific surface chemistry. By incorporating such nanostructures in polymers, mesoscopic materials can be achieved and their properties engineered by choosing NCs differing in size and/or composition, properly tuning the interaction between NCs and surrounding environment. In this contribution, different approaches will be presented as effective opportunities for conveying colloidal NC properties to nanocomposite materials for micro and nanofabrication. Patterning of such nanocomposites either by conventional lithographic techniques and emerging patterning tools, such as ink jet printing and nanoimprint lithography, will be illustrated, pointing out their technological impact on developing new optoelectronic and sensing devices.

  2. The possibility of multi-layer nanofabrication via atomic force microscope-based pulse electrochemical nanopatterning

    Science.gov (United States)

    Kim, Uk Su; Morita, Noboru; Lee, Deug Woo; Jun, Martin; Park, Jeong Woo

    2017-05-01

    Pulse electrochemical nanopatterning, a non-contact scanning probe lithography process using ultrashort voltage pulses, is based primarily on an electrochemical machining process using localized electrochemical oxidation between a sharp tool tip and the sample surface. In this study, nanoscale oxide patterns were formed on silicon Si (100) wafer surfaces via electrochemical surface nanopatterning, by supplying external pulsed currents through non-contact atomic force microscopy. Nanoscale oxide width and height were controlled by modulating the applied pulse duration. Additionally, protruding nanoscale oxides were removed completely by simple chemical etching, showing a depressed pattern on the sample substrate surface. Nanoscale two-dimensional oxides, prepared by a localized electrochemical reaction, can be defined easily by controlling physical and electrical variables, before proceeding further to a layer-by-layer nanofabrication process.

  3. Scalable Multifunction RF Systems: Combined vs. Separate Transmit and Receive Arrays

    NARCIS (Netherlands)

    Huizing, A.G.

    2008-01-01

    A scalable multifunction RF (SMRF) system allows the RF functionality (radar, electronic warfare and communications) to be easily extended and the RF performance to be scaled to the requirements of different missions and platforms. This paper presents the results of a trade-off study with respect to

  4. Design issues of an open scalable architecture for active phased array radars

    NARCIS (Netherlands)

    Huizing, A.G.

    2003-01-01

    An open scalable architecture will make it easier and quicker to adapt active phased array radar to new missions and platforms. This will provide radar manufacturers with larger markets, more commonality in radar systems, and a better continuity in radar production lines. The procurement of open

  5. Wideband vs. Multiband Trade-offs for a Scalable Multifunction RF system

    NARCIS (Netherlands)

    Huizing, A.G.

    2005-01-01

    This paper presents a concept for a scalable multifunction RF (SMRF) system that allows the RF functionality (radar, electronic warfare and communications) to be easily extended and the RF performance to be scaled to the requirements of different missions and platforms. A trade-off analysis is

  6. Communication and service platform for public safety personnel

    NARCIS (Netherlands)

    Schmidt, J.R.

    2005-01-01

    This paper describes a communication and service platform for public safety personnel. The platform demonstrates just in time provisioning of data and scalable communication services and operates in a heterogeneous network environment with high survivability. As an example use case the design is

  7. Scalable Techniques for Formal Verification

    CERN Document Server

    Ray, Sandip

    2010-01-01

    This book presents state-of-the-art approaches to formal verification techniques to seamlessly integrate different formal verification methods within a single logical foundation. It should benefit researchers and practitioners looking to get a broad overview of the spectrum of formal verification techniques, as well as approaches to combining such techniques within a single framework. Coverage includes a range of case studies showing how such a combination is fruitful in developing a scalable verification methodology for industrial designs. This book outlines both theoretical and practical issue

  8. Flexible scalable photonic manufacturing method

    Science.gov (United States)

    Skunes, Timothy A.; Case, Steven K.

    2003-06-01

    A process for flexible, scalable photonic manufacturing is described. Optical components are actively pre-aligned and secured to precision mounts. In a subsequent operation, the mounted optical components are passively placed onto a substrate known as an Optical Circuit Board (OCB). The passive placement may be either manual for low volume applications or with a pick-and-place robot for high volume applications. Mating registration features on the component mounts and the OCB facilitate accurate optical alignment. New photonic circuits may be created by changing the layout of the OCB. Predicted yield data from Monte Carlo tolerance simulations for two fiber optic photonic circuits is presented.
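
    Below is a hedged sketch of the kind of Monte Carlo tolerance simulation mentioned above: draw random lateral placement errors for a passively placed component and count the fraction of assemblies whose coupling loss stays within a budget. The Gaussian-mode overlap model and every number are assumptions for illustration, not the paper's actual tolerances.

    ```python
    # Hedged sketch: Monte Carlo yield prediction for passive placement.
    import numpy as np

    rng = np.random.default_rng(1)
    n_trials = 100_000
    sigma = 0.5            # um, 1-sigma placement error per axis (assumed)
    w = 5.0                # um, mode field radius (assumed, SMF-like)
    loss_budget_db = 0.5   # maximum acceptable coupling loss (assumed)

    dx = rng.normal(0, sigma, n_trials)
    dy = rng.normal(0, sigma, n_trials)
    # power coupling of two identical Gaussian modes with lateral offset
    eta = np.exp(-(dx**2 + dy**2) / w**2)
    loss_db = -10 * np.log10(eta)
    print(f"predicted yield: {np.mean(loss_db <= loss_budget_db):.3%}")
    ```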

  9. Event metadata records as a testbed for scalable data mining

    Science.gov (United States)

    van Gemmeren, P.; Malon, D.

    2010-04-01

    At a data rate of 200 hertz, event metadata records ("TAGs," in ATLAS parlance) provide fertile grounds for development and evaluation of tools for scalable data mining. It is easy, of course, to apply HEP-specific selection or classification rules to event records and to label such an exercise "data mining," but our interest is different. Advanced statistical methods and tools such as classification, association rule mining, and cluster analysis are common outside the high energy physics community. These tools can prove useful, not for discovery physics, but for learning about our data, our detector, and our software. A fixed and relatively simple schema makes TAG export to other storage technologies such as HDF5 straightforward. This simplifies the task of exploiting very-large-scale parallel platforms such as Argonne National Laboratory's BlueGene/P, currently the largest supercomputer in the world for open science, in the development of scalable tools for data mining. Using a domain-neutral scientific data format may also enable us to take advantage of existing data mining components from other communities. There is, further, a substantial literature on the topic of one-pass algorithms and stream mining techniques, and such tools may be inserted naturally at various points in the event data processing and distribution chain. This paper describes early experience with event metadata records from ATLAS simulation and commissioning as a testbed for scalable data mining tool development and evaluation.
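
    As an illustration of how a fixed, simple schema makes export straightforward, the sketch below writes mock event metadata records to HDF5 with h5py; the field names are invented for illustration, not the actual ATLAS TAG schema.

    ```python
    # Hedged sketch: fixed-schema event metadata records exported to HDF5,
    # ready to be scanned by data-mining tools on another platform.
    import numpy as np
    import h5py

    tag_dtype = np.dtype([("run", np.int32), ("event", np.int64),
                          ("n_muon", np.int16), ("met_gev", np.float32)])

    records = np.zeros(1000, dtype=tag_dtype)
    records["run"] = 167776                       # illustrative run number
    records["event"] = np.arange(1000)
    records["met_gev"] = np.random.default_rng(2).exponential(30.0, 1000)

    with h5py.File("tags.h5", "w") as f:
        f.create_dataset("tags", data=records, chunks=True, compression="gzip")

    with h5py.File("tags.h5", "r") as f:          # a downstream scan over the table
        sel = f["tags"][...]
        print("events with MET > 100 GeV:", (sel["met_gev"] > 100).sum())
    ```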

  10. Scalable Dynamic Instrumentation for BlueGene/L

    Energy Technology Data Exchange (ETDEWEB)

    Schulz, M; Ahn, D; Bernat, A; de Supinski, B R; Ko, S Y; Lee, G; Rountree, B

    2005-09-08

    Dynamic binary instrumentation for performance analysis on new, large scale architectures such as the IBM Blue Gene/L system (BG/L) poses new challenges. Their scale--with potentially hundreds of thousands of compute nodes--requires new, more scalable mechanisms to deploy and to organize binary instrumentation and to collect the resulting data gathered by the inserted probes. Further, many of these new machines don't support full operating systems on the compute nodes; rather, they rely on light-weight custom compute kernels that do not support daemon-based implementations. We describe the design and current status of a new implementation of the DPCL (Dynamic Probe Class Library) API for BG/L. DPCL provides an easy to use layer for dynamic instrumentation on parallel MPI applications based on the DynInst dynamic instrumentation mechanism for sequential platforms. Our work includes modifying DynInst to control instrumentation from remote I/O nodes and porting DPCL's communication to use MRNet, a scalable data reduction network for collecting performance data. We describe extensions to the DPCL API that support instrumentation of task subsets and aggregation of collected performance data. Overall, our implementation provides a scalable infrastructure that provides efficient binary instrumentation on BG/L.

  11. Event metadata records as a testbed for scalable data mining

    Energy Technology Data Exchange (ETDEWEB)

    Gemmeren, P van; Malon, D, E-mail: gemmeren@anl.go [Argonne National Laboratory, Argonne, Illinois 60439 (United States)

    2010-04-01

    At a data rate of 200 hertz, event metadata records ('TAGs,' in ATLAS parlance) provide fertile grounds for development and evaluation of tools for scalable data mining. It is easy, of course, to apply HEP-specific selection or classification rules to event records and to label such an exercise 'data mining,' but our interest is different. Advanced statistical methods and tools such as classification, association rule mining, and cluster analysis are common outside the high energy physics community. These tools can prove useful, not for discovery physics, but for learning about our data, our detector, and our software. A fixed and relatively simple schema makes TAG export to other storage technologies such as HDF5 straightforward. This simplifies the task of exploiting very-large-scale parallel platforms such as Argonne National Laboratory's BlueGene/P, currently the largest supercomputer in the world for open science, in the development of scalable tools for data mining. Using a domain-neutral scientific data format may also enable us to take advantage of existing data mining components from other communities. There is, further, a substantial literature on the topic of one-pass algorithms and stream mining techniques, and such tools may be inserted naturally at various points in the event data processing and distribution chain. This paper describes early experience with event metadata records from ATLAS simulation and commissioning as a testbed for scalable data mining tool development and evaluation.

  12. Highly Scalable Matching Pursuit Signal Decomposition Algorithm

    Data.gov (United States)

    National Aeronautics and Space Administration — In this research, we propose a variant of the classical Matching Pursuit Decomposition (MPD) algorithm with significantly improved scalability and computational...
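
    For reference, the classical MPD that this variant improves on can be stated in a few lines: greedily project the residual onto the best-correlated dictionary atom and subtract, until the residual is small. A minimal sketch with a random, purely illustrative dictionary:

    ```python
    # Hedged sketch of classical Matching Pursuit Decomposition (not NASA's variant).
    import numpy as np

    def matching_pursuit(x, D, n_iter=50, tol=1e-6):
        """x: signal (m,); D: dictionary (m, k) with unit-norm columns."""
        residual = x.copy()
        coef = np.zeros(D.shape[1])
        for _ in range(n_iter):
            corr = D.T @ residual
            j = np.argmax(np.abs(corr))        # best-matching atom
            coef[j] += corr[j]
            residual -= corr[j] * D[:, j]      # subtract its contribution
            if np.linalg.norm(residual) < tol:
                break
        return coef, residual

    rng = np.random.default_rng(3)
    D = rng.normal(size=(128, 512))
    D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
    x = 2.0 * D[:, 7] - 1.0 * D[:, 42]         # sparse ground truth
    coef, r = matching_pursuit(x, D)
    print("recovered atoms:", np.nonzero(np.abs(coef) > 0.1)[0])
    ```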

  13. Perceptual compressive sensing scalability in mobile video

    Science.gov (United States)

    Bivolarski, Lazar

    2011-09-01

    Scalability features embedded within video sequences allow for streaming over heterogeneous networks to a variety of end devices. Compressive sensing techniques that lower the complexity and increase the robustness of video scalability are reviewed. Human visual system models are often used in establishing perceptual metrics that evaluate the quality of video. The combination of perceptual and compressive sensing approaches is outlined from recent investigations. The performance and the complexity of different scalability techniques are evaluated. Application of perceptual models to the evaluation of the quality of compressive sensing scalability is considered in the near perceptually lossless case, and the appropriate coding schemes are reviewed.

  14. Final Report. Center for Scalable Application Development Software

    Energy Technology Data Exchange (ETDEWEB)

    Mellor-Crummey, John [Rice Univ., Houston, TX (United States)

    2014-10-26

    The Center for Scalable Application Development Software (CScADS) was established as a partnership between Rice University, Argonne National Laboratory, University of California Berkeley, University of Tennessee – Knoxville, and University of Wisconsin – Madison. CScADS pursued an integrated set of activities with the aim of increasing the productivity of DOE computational scientists by catalyzing the development of systems software, libraries, compilers, and tools for leadership computing platforms. Principal Center activities were workshops to engage the research community in the challenges of leadership computing, research and development of open-source software, and work with computational scientists to help them develop codes for leadership computing platforms. This final report summarizes CScADS activities at Rice University in these areas.

  15. Platform Constellations

    DEFF Research Database (Denmark)

    Staykova, Kalina Stefanova; Damsgaard, Jan

    2016-01-01

    This research paper presents an initial attempt to introduce and explain the emergence of a new phenomenon, which we refer to as platform constellations. Functioning as highly modular systems, the platform constellations are collections of highly connected platforms which co-exist in parallel...

  16. High aspect ratio nano-fabrication of photonic crystal structures on glass wafers using chrome as hard mask.

    Science.gov (United States)

    Hossain, Md Nazmul; Justice, John; Lovera, Pierre; McCarthy, Brendan; O'Riordan, Alan; Corbett, Brian

    2014-09-05

    Wafer-scale nano-fabrication of silicon nitride (Si x N y ) photonic crystal (PhC) structures on glass (quartz) substrates is demonstrated using a thin (30 nm) chromium (Cr) layer as the hard mask for transferring the electron beam lithography (EBL) defined resist patterns. The use of the thin Cr layer not only solves the charging effect during the EBL on the insulating substrate, but also facilitates high aspect ratio PhCs by acting as a hard mask while deep etching into the Si x N y . A very high aspect ratio of 10:1 on a 60 nm wide grating structure has been achieved while preserving the quality of the flat top of the narrow lines. The presented nano-fabrication method provides PhC structures necessary for a high quality optical response. Finally, we fabricated a refractive index based PhC sensor which shows a sensitivity of 185 nm per RIU.

  17. Scalable rendering on PC clusters

    Energy Technology Data Exchange (ETDEWEB)

    Wylie, Brian N.; Lewis, Vasily; Shirley, David Noyes; Pavlakos, Constantine

    2000-04-25

    This case study presents initial results from research targeted at the development of cost-effective scalable visualization and rendering technologies. The implementations of two 3D graphics libraries based on the popular sort-last and sort-middle parallel rendering techniques are discussed. An important goal of these implementations is to provide scalable rendering capability for extremely large datasets (>> 5 million polygons). Applications can use these libraries for either run-time visualization, by linking to an existing parallel simulation, or for traditional post-processing by linking to an interactive display program. The use of parallel, hardware-accelerated rendering on commodity hardware is leveraged to achieve high performance. Current performance results show that, using current hardware (a small 16-node cluster), they can utilize up to 85% of the aggregate graphics performance and achieve rendering rates in excess of 20 million polygons/second using OpenGL® with lighting, Gouraud shading, and individually specified triangles (not t-stripped).
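
    The sort-last technique mentioned above merges full frames by per-pixel depth comparison: each node renders only its share of the polygons into its own color and depth buffers, and the buffers are then composited. A toy numpy sketch of that compositing step (mock 4x4 frames, not the libraries' GPU path):

    ```python
    # Hedged sketch: sort-last depth compositing of two nodes' frames.
    import numpy as np

    h, w = 4, 4
    rng = np.random.default_rng(4)

    def node_frame():
        """Mock render result: (color, depth) for one node's geometry subset."""
        color = rng.integers(0, 256, (h, w, 3), dtype=np.uint8)
        depth = rng.random((h, w)).astype(np.float32)   # smaller = closer
        return color, depth

    c0, z0 = node_frame()
    c1, z1 = node_frame()

    closer = z1 < z0                                    # pixels where node 1 wins
    composited = np.where(closer[..., None], c1, c0)    # pick the nearer fragment
    z_out = np.minimum(z0, z1)                          # merged depth buffer
    print("composited frame shape:", composited.shape)
    ```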

  18. Silicon nanophotonics for scalable quantum coherent feedback networks

    Energy Technology Data Exchange (ETDEWEB)

    Sarovar, Mohan; Brif, Constantin [Sandia National Laboratories, Livermore, CA (United States); Soh, Daniel B.S. [Sandia National Laboratories, Livermore, CA (United States); Stanford University, Edward L. Ginzton Laboratory, Stanford, CA (United States); Cox, Jonathan; DeRose, Christopher T.; Camacho, Ryan; Davids, Paul [Sandia National Laboratories, Albuquerque, NM (United States)

    2016-12-15

    The emergence of coherent quantum feedback control (CQFC) as a new paradigm for precise manipulation of dynamics of complex quantum systems has led to the development of efficient theoretical modeling and simulation tools and opened avenues for new practical implementations. This work explores the applicability of the integrated silicon photonics platform for implementing scalable CQFC networks. If proven successful, on-chip implementations of these networks would provide scalable and efficient nanophotonic components for autonomous quantum information processing devices and ultra-low-power optical processing systems at telecommunications wavelengths. We analyze the strengths of the silicon photonics platform for CQFC applications and identify the key challenges to both the theoretical formalism and experimental implementations. In particular, we determine specific extensions to the theoretical CQFC framework (which was originally developed with bulk-optics implementations in mind), required to make it fully applicable to modeling of linear and nonlinear integrated optics networks. We also report the results of a preliminary experiment that studied the performance of an in situ controllable silicon nanophotonic network of two coupled cavities and analyze the properties of this device using the CQFC formalism.

  19. Ultra-high aspect ratio high-resolution nanofabrication for hard X-ray diffractive optics

    Science.gov (United States)

    Chang, Chieh; Sakdinawat, Anne

    2014-06-01

    Although diffractive optics have played a major role in nanoscale soft X-ray imaging, high-resolution and high-efficiency diffractive optics have largely been unavailable for hard X-rays where many scientific, technological and biomedical applications exist. This is owing to the long-standing challenge of fabricating ultra-high aspect ratio high-resolution dense nanostructures. Here we report significant progress in ultra-high aspect ratio nanofabrication of high-resolution, dense silicon nanostructures using vertical directionality controlled metal-assisted chemical etching. The resulting structures have very smooth sidewalls and can be used to pattern arbitrary features, not limited to linear or circular. We focus on the application of X-ray zone plate fabrication for high-efficiency, high-resolution diffractive optics, and demonstrate the process with linear, circular, and spiral zone plates. X-ray measurements demonstrate high efficiency in the critical outer layers. This method has broad applications including patterning for thermoelectric materials, battery anodes and sensors among others.

  20. Nanofabrication of planar split ring resonators for negative refractive index metamaterials in the infrared range

    Directory of Open Access Journals (Sweden)

    ZORAN JAKSIC

    2006-06-01

    Full Text Available Experimental nanofabrication of planar structures for one-dimensional metamaterials designed to achieve a negative effective refractive index in the mid-infrared range (5–10 micrometers) was performed. Double split ring and complementary double split ring resonators (SRR and CSRR) with square and circular geometries were chosen to be fabricated, since these are the basic building blocks for achieving a negative effective dielectric permittivity and magnetic permeability. Scanning probe nanolithography with z-scanner movement was used to fabricate straight-line and curvilinear segments with a line width of 80–120 nm. The geometries were delineated in 20 nm thin silver layers sputter-deposited on a positive photoresist substrate spin-coated on polished single crystal silicon wafers, as well as on polycarbonate slabs. The morphology of the structures was characterized by atomic force microscopy. The feature repeatability was 60–150 nm, depending on the process conditions and the feature complexity. The nanolithographic groove depth in different samples ranged from 4 nm to 80 nm.

  1. Shrinking-hole colloidal lithography: self-aligned nanofabrication of complex plasmonic nanoantennas.

    Science.gov (United States)

    Syrenova, Svetlana; Wadell, Carl; Langhammer, Christoph

    2014-05-14

    Plasmonic nanoantennas create locally strongly enhanced electric fields in so-called hot spots. Placing a relevant nanoobject with high accuracy in such a hot spot is crucial to fully capitalize on the potential of nanoantennas to control, detect, and enhance processes at the nanoscale. For state-of-the-art nanofabrication this is a grand challenge, in particular when several materials are to be used, small gaps between antenna elements are sought, and large surface areas are to be patterned. Here we introduce self-aligned, bottom-up and self-assembly based Shrinking-Hole Colloidal Lithography (SHCL), which (i) provides unique control of the size and position of the subsequently deposited particles forming the nanoantenna itself, and (ii) allows delivery of nanoobjects consisting of a material of choice to the antenna hot spot, all in a single lithography step and, if desired, uniformly covering several square centimeters of surface. We illustrate the functionality of SHCL nanoantenna arrangements by (i) an optical hydrogen sensor exploiting the polarization dependent sensitivity of an Au-Pd nanoantenna ensemble; and (ii) single particle hydrogen sensing with an Au dimer nanoantenna with a small Pd nanoparticle in the hot spot.

  2. Scalable Performance Measurement and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Gamblin, Todd [Univ. of North Carolina, Chapel Hill, NC (United States)

    2009-01-01

    Concurrency levels in large-scale, distributed-memory supercomputers are rising exponentially. Modern machines may contain 100,000 or more microprocessor cores, and the largest of these, IBM's Blue Gene/L, contains over 200,000 cores. Future systems are expected to support millions of concurrent tasks. In this dissertation, we focus on efficient techniques for measuring and analyzing the performance of applications running on very large parallel machines. Tuning the performance of large-scale applications can be a subtle and time-consuming task because application developers must measure and interpret data from many independent processes. While the volume of the raw data scales linearly with the number of tasks in the running system, the number of tasks is growing exponentially, and data for even small systems quickly becomes unmanageable. Transporting performance data from so many processes over a network can perturb application performance and make measurements inaccurate, and storing such data would require a prohibitive amount of space. Moreover, even if it were stored, analyzing the data would be extremely time-consuming. In this dissertation, we present novel methods for reducing performance data volume. The first draws on multi-scale wavelet techniques from signal processing to compress systemwide, time-varying load-balance data. The second uses statistical sampling to select a small subset of running processes to generate low-volume traces. A third approach combines sampling and wavelet compression to stratify performance data adaptively at run-time and to reduce further the cost of sampled tracing. We have integrated these approaches into Libra, a toolset for scalable load-balance analysis. We present Libra and show how it can be used to analyze data from large scientific applications scalably.
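
    The wavelet-based reduction described above can be illustrated with a single-level Haar transform that keeps only the largest detail coefficients; this is a schematic stand-in for Libra's multi-scale pipeline, and the keep_fraction parameter is invented for illustration:

        import numpy as np

        def haar_compress(signal, keep_fraction=0.1):
            """One-level Haar transform; keep only the largest detail coefficients."""
            s = np.asarray(signal, dtype=float)
            assert len(s) % 2 == 0, "use an even-length trace"
            avg = (s[0::2] + s[1::2]) / np.sqrt(2)   # approximation coefficients
            det = (s[0::2] - s[1::2]) / np.sqrt(2)   # detail coefficients
            k = max(1, int(keep_fraction * len(det)))
            thresh = np.sort(np.abs(det))[-k]
            det[np.abs(det) < thresh] = 0.0          # sparsify: most details dropped
            return avg, det

        def haar_reconstruct(avg, det):
            s = np.empty(2 * len(avg))
            s[0::2] = (avg + det) / np.sqrt(2)
            s[1::2] = (avg - det) / np.sqrt(2)
            return s

        # Example: a smooth load-balance trace compresses well.
        t = np.linspace(0, 1, 256)
        load = np.sin(2 * np.pi * t) + 0.01 * np.random.randn(256)
        avg, det = haar_compress(load)
        print("max error:", np.max(np.abs(load - haar_reconstruct(avg, det))))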

  3. Myria: Scalable Analytics as a Service

    Science.gov (United States)

    Howe, B.; Halperin, D.; Whitaker, A.

    2014-12-01

    At the UW eScience Institute, we're working to empower non-experts, especially in the sciences, to write and use data-parallel algorithms. To this end, we are building Myria, a web-based platform for scalable analytics and data-parallel programming. Myria's internal model of computation is the relational algebra extended with iteration, such that every program is inherently data-parallel, just as every query in a database is inherently data-parallel. But unlike databases, iteration is a first-class concept, allowing us to express machine learning tasks, graph traversal tasks, and more. Programs can be expressed in a number of languages and can be executed on a number of execution environments, but we emphasize a particular language called MyriaL that supports both imperative and declarative styles and a particular execution engine called MyriaX that uses an in-memory column-oriented representation and asynchronous iteration. We deliver Myria over the web as a service, providing an editor, performance analysis tools, and catalog browsing features in a single environment. We find that this web-based "delivery vector" is critical in reaching non-experts: they are insulated from the irrelevant technical work associated with installation, configuration, and resource management. The MyriaX backend, one of several execution runtimes we support, is a main-memory, column-oriented, RDBMS-on-the-worker system that supports cyclic data flows as a first-class citizen and has been shown to outperform competitive systems on 100-machine cluster sizes. I will describe the Myria system, give a demo, and present some new results in large-scale oceanographic microbiology.
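
    Iteration as a first-class relational construct is what lets graph traversals be expressed as queries. A hypothetical Python sketch of the idea (not MyriaL syntax), computing reachability by semi-naive iteration, where each round joins only the newly discovered frontier against the edge relation:

        def reachable(edges, source):
            """Transitive closure from `source` by semi-naive iteration.

            Each round joins the newly discovered frontier with the edge
            relation, mirroring an iterative relational-algebra plan.
            """
            edges = set(edges)
            found = {source}
            frontier = {source}
            while frontier:                       # iterate until fixpoint
                # join frontier(x) with edges(x, y) -> y, minus known results
                frontier = {y for (x, y) in edges if x in frontier} - found
                found |= frontier
            return found

        print(reachable({(1, 2), (2, 3), (3, 1), (4, 5)}, 1))  # {1, 2, 3}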

  4. The COMET Sleep Research Platform.

    Science.gov (United States)

    Nichols, Deborah A; DeSalvo, Steven; Miller, Richard A; Jónsson, Darrell; Griffin, Kara S; Hyde, Pamela R; Walsh, James K; Kushida, Clete A

    2014-01-01

    The Comparative Outcomes Management with Electronic Data Technology (COMET) platform is extensible and designed for facilitating multicenter electronic clinical research. Our research goals were the following: (1) to conduct a comparative effectiveness trial (CET) for two obstructive sleep apnea treatments: positive airway pressure versus oral appliance therapy; and (2) to establish a new electronic network infrastructure that would support this study and other clinical research studies. The COMET platform was created to satisfy the needs of CET with a focus on creating a platform that provides comprehensive toolsets, multisite collaboration, and end-to-end data management. The platform also provides medical researchers the ability to visualize and interpret data using business intelligence (BI) tools. COMET is a research platform that is scalable and extensible, and which, in a future version, can accommodate big data sets and enable efficient and effective research across multiple studies and medical specialties. The COMET platform components were designed for an eventual move to a cloud computing infrastructure that enhances sustainability, overall cost effectiveness, and return on investment.

  5. Long Coherence Length 193 nm Laser for High-Resolution Nano-Fabrication

    Science.gov (United States)

    2008-06-27

    ArF excimer lasers. In this Phase I STTR we have successfully designed and modeled a solid-state laser source that exceeds our target specifications ... briefly in section 6. More details of the fiber modeling and analysis work are given in the appendix. The solid-state 196 nm laser system is discussed ... will limit the pulse energy. Efficiency of the system is another potential problem, as is scalability in power. A third system would use a 946 nm

  6. NeuroFlow: A General Purpose Spiking Neural Network Simulation Platform using Customizable Processors

    National Research Council Canada - National Science Library

    Cheung, Kit; Schultz, Simon R; Luk, Wayne

    2015-01-01

    NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs...

  7. Quality Scalability Aware Watermarking for Visual Content.

    Science.gov (United States)

    Bhowmik, Deepayan; Abhayaratne, Charith

    2016-11-01

    Scalable coding-based content adaptation poses serious challenges to traditional watermarking algorithms, which do not consider the scalable coding structure and hence cannot guarantee correct watermark extraction in the media consumption chain. In this paper, we propose a novel concept of scalable blind watermarking that ensures more robust watermark extraction at various compression ratios while not affecting the visual quality of the host media. The proposed algorithm generates a scalable and robust watermarked image code-stream that allows the user to constrain embedding distortion for target content adaptations. The watermarked image code-stream consists of hierarchically nested joint distortion-robustness coding atoms. The code-stream is generated by proposing a new wavelet domain blind watermarking algorithm guided by a quantization based binary tree. The code-stream can be truncated at any distortion-robustness atom to generate the watermarked image with the desired distortion-robustness requirements. A blind extractor is capable of extracting watermark data from the watermarked images. The algorithm is further extended to incorporate a bit-plane discarding-based quantization model used in scalable coding-based content adaptation, e.g., JPEG2000. This improves the robustness against quality scalability of JPEG2000 compression. The simulation results verify the feasibility of the proposed concept, its applications, and its improved robustness against quality scalable content adaptation. Our proposed algorithm also outperforms existing methods, showing a 35% improvement. In terms of robustness to quality scalable video content adaptation using Motion JPEG2000 and wavelet-based scalable video coding, the proposed method shows major improvement for video watermarking.
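
    The quantization-guided embedding can be illustrated with plain quantization index modulation (QIM): each coefficient is snapped to one of two interleaved quantizer lattices depending on the bit, and a blind extractor picks the nearer lattice. This generic sketch is not the authors' exact algorithm:

        import numpy as np

        def qim_embed(coeffs, bits, step=8.0):
            """Embed one bit per coefficient by choosing between two
            interleaved quantizer lattices (offset 0 or step/2)."""
            c = np.asarray(coeffs, dtype=float)
            offset = np.asarray(bits) * (step / 2.0)
            return np.round((c - offset) / step) * step + offset

        def qim_extract(coeffs, step=8.0):
            """Recover bits blindly: pick the lattice closer to each coefficient."""
            c = np.asarray(coeffs, dtype=float)
            d0 = np.abs(c - np.round(c / step) * step)
            d1 = np.abs(c - (np.round((c - step / 2) / step) * step + step / 2))
            return (d1 < d0).astype(int)

        rng = np.random.default_rng(0)
        coeffs = rng.normal(0, 20, 64)          # stand-in for wavelet coefficients
        bits = rng.integers(0, 2, 64)
        marked = qim_embed(coeffs, bits)
        assert np.array_equal(qim_extract(marked), bits)

    The step size controls the distortion-robustness trade-off: larger steps survive stronger requantization (noise up to step/4 is tolerated) at the cost of higher embedding distortion.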

  8. Complementary Platforms

    NARCIS (Netherlands)

    van Cayseele, P.; Reynaerts, J.

    2007-01-01

    We introduce an analytical framework close to the canonical model of platform competition investigated by Rochet and Tirole (2006) to study pricing decisions in two-sided markets when two or more platforms are needed simultaneously for the successful completion of a transaction. The model developed

  9. Grassmann Averages for Scalable Robust PCA

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Black, Michael J.

    2014-01-01

    As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results ... making it scalable to “big noisy data.” We demonstrate TGA for background modeling, video restoration, and shadow removal. We show scalability by performing robust PCA on the entire Star Wars IV movie.
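
    The underlying idea, replacing the eigenvector computation with an average over the 1-D subspaces spanned by the data, reduces to a sign-aligned averaging loop. A minimal sketch of the basic weighted Grassmann average for the leading component; the trimmed variant (TGA) would swap the mean for a trimmed mean:

        import numpy as np

        def grassmann_average(X, iters=50, seed=0):
            """Leading robust component via the Grassmann average.

            X: (n, d) data matrix, rows assumed centered.
            Each row spans a 1-D subspace; we average unit directions,
            flipping signs so they agree with the current estimate q.
            """
            rng = np.random.default_rng(seed)
            norms = np.linalg.norm(X, axis=1, keepdims=True)
            U = X / np.maximum(norms, 1e-12)       # unit directions
            w = norms.ravel()                      # weight by magnitude
            q = rng.normal(size=X.shape[1])
            q /= np.linalg.norm(q)
            for _ in range(iters):
                signs = np.sign(U @ q)
                signs[signs == 0] = 1.0
                q_new = (signs * w) @ U            # weighted, sign-aligned mean
                q_new /= np.linalg.norm(q_new)
                if np.allclose(q_new, q):
                    break
                q = q_new
            return q

        X = np.random.default_rng(1).normal(size=(500, 2)) * [5.0, 0.5]
        print(grassmann_average(X))   # close to +/- [1, 0]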

  10. CASTOR: Widely Distributed Scalable Infospaces

    Science.gov (United States)

    2008-11-01

    as the application builder technology provided by Microsoft in their Indigo platform for Windows Vista. Tempest then automatically introduces ... below. In RMTP, the sender and the receivers for a topic form a tree. Within this tree, every subset of nodes consisting of a parent and its child nodes represents a separate local recovery group. The child nodes in every such group send their local ACK/NAK information to the parent node, which

  11. Fast and scalable inequality joins

    KAUST Repository

    Khayyat, Zuhair

    2016-09-07

    Inequality joins, which join relations on inequality conditions, are used in various applications. Optimizing joins has been the subject of intensive research, ranging from efficient join algorithms such as sort-merge join, to the use of efficient indices such as the B+-tree, R*-tree and Bitmap. However, inequality joins have received little attention, and queries containing such joins are notably very slow. In this paper, we introduce fast inequality join algorithms based on sorted arrays and space-efficient bit-arrays. We further introduce a simple method to estimate the selectivity of inequality joins, which is then used to optimize multiple-predicate queries and multi-way joins. Moreover, we study an incremental inequality join algorithm to handle scenarios where data keeps changing. We have implemented a centralized version of these algorithms on top of PostgreSQL, a distributed version on top of Spark SQL, and an existing data cleaning system, Nadeef. By comparing our algorithms against well-known optimization techniques for inequality joins, we show our solution is more scalable and several orders of magnitude faster.
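
    A baseline form of the sorted-array idea: sorting one relation on the join attribute lets each probe tuple find all of its matches with a single binary search instead of a full scan (the paper's algorithms add bit-arrays and multi-predicate handling on top). A sketch with hypothetical inputs r and s:

        import bisect

        def inequality_join(r, s):
            """All pairs (x, y) with r.x < s.y, via one sort + binary searches.

            r, s: lists of numbers. Nested loops cost O(|r|*|s|); sorting s
            and probing with bisect costs O((|r|+|s|) log |s|) plus output.
            """
            s_sorted = sorted(s)
            out = []
            for x in r:
                i = bisect.bisect_right(s_sorted, x)   # first y strictly greater
                out.extend((x, y) for y in s_sorted[i:])
            return out

        print(inequality_join([3, 7], [1, 5, 9]))  # [(3, 5), (3, 9), (7, 9)]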

  12. Scalable encryption using alpha rooting

    Science.gov (United States)

    Wharton, Eric J.; Panetta, Karen A.; Agaian, Sos S.

    2008-04-01

    Full and partial encryption methods are important for subscription based content providers, such as internet and cable TV pay channels. Providers need to be able to protect their products while at the same time being able to provide demonstrations to attract new customers without giving away the full value of the content. If an algorithm were introduced which could provide any level of full or partial encryption in a fast and cost-effective manner, the applications to real-time commercial implementation would be numerous. In this paper, we present a novel application of alpha rooting, using it to achieve fast and straightforward scalable encryption with a single algorithm. We further present use of the measure of enhancement, the Logarithmic AME, to select optimal parameters for the partial encryption. When parameters are selected using the measure, the output image achieves a balance between protecting the important data in the image while still containing a good overall representation of the image. We will show results for this encryption method on a number of images, using histograms to evaluate the effectiveness of the encryption.
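
    Alpha rooting itself is a frequency-domain operation: take a 2-D transform, raise the coefficient magnitudes to a power alpha while keeping the phases, and invert; applying the reciprocal exponent restores the image, so alpha can act as the (scalable) key. A minimal sketch of the transform pair, without the paper's AME-based parameter selection:

        import numpy as np

        def alpha_root(image, alpha):
            """Scale FFT magnitudes as |X|^alpha while preserving phase."""
            X = np.fft.fft2(image.astype(float))
            mag, phase = np.abs(X), np.angle(X)
            Y = (mag ** alpha) * np.exp(1j * phase)
            return np.real(np.fft.ifft2(Y))

        rng = np.random.default_rng(0)
        img = rng.random((64, 64))
        enc = alpha_root(img, 0.6)          # distorted ("encrypted") rendition
        dec = alpha_root(enc, 1.0 / 0.6)    # invert with the reciprocal key
        print(np.max(np.abs(img - dec)))    # ~0: the round trip is exact

    Intermediate alpha values yield partial encryption: the image remains recognizable in outline while fine detail is destroyed, which matches the demonstration use case described above.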

  13. Finite Element Modeling on Scalable Parallel Computers

    Science.gov (United States)

    Cwik, T.; Zuffada, C.; Jamnejad, V.; Katz, D.

    1995-01-01

    A coupled finite element-integral equation technique was developed to model fields scattered from inhomogeneous, three-dimensional objects of arbitrary shape. This paper outlines how to implement the software on a scalable parallel processor.

  14. Visual analytics in scalable visualization environments

    OpenAIRE

    Yamaoka, So

    2011-01-01

    Visual analytics is an interdisciplinary field that facilitates the analysis of the large volume of data through interactive visual interface. This dissertation focuses on the development of visual analytics techniques in scalable visualization environments. These scalable visualization environments offer a high-resolution, integrated virtual space, as well as a wide-open physical space that affords collaborative user interaction. At the same time, the sheer scale of these environments poses ...

  15. High Throughput Nanofabrication of Silicon Nanowire and Carbon Nanotube Tips on AFM Probes by Stencil-Deposited Catalysts

    DEFF Research Database (Denmark)

    Engstrøm, Daniel Southcott; Savu, Veronica; Zhu, Xueni

    2011-01-01

    A new and versatile technique for the wafer-scale nanofabrication of silicon nanowire (SiNW) and multiwalled carbon nanotube (MWNT) tips on atomic force microscope (AFM) probes is presented. Catalyst material for the SiNW and MWNT growth was deposited on prefabricated AFM probes using aligned wafer-scale nanostencil lithography. Individual vertical SiNWs were grown epitaxially by a catalytic vapor−liquid−solid (VLS) process, and MWNTs were grown by a plasma-enhanced chemical vapor deposition (PECVD) process on the AFM probes. The AFM probes were tested for imaging micrometers-deep trenches, where...

  16. Nanofabrication technique based on localized photocatalytic reactions using a TiO2-coated atomic force microscopy probe

    Science.gov (United States)

    Shibata, Takayuki; Iio, Naohiro; Furukawa, Hiromi; Nagai, Moeto

    2017-02-01

    We performed a fundamental study on the photocatalytic degradation of fluorescently labeled DNA molecules immobilized on titanium dioxide (TiO2) thin films under ultraviolet irradiation. The films were prepared by the electrochemical anodization of Ti thin films sputtered on silicon substrates. We also confirmed that the photocurrent arising from the photocatalytic oxidation of DNA molecules can be detected during this process. We then demonstrated an atomic force microscopy (AFM)-based nanofabrication technique by employing TiO2-coated AFM probes to penetrate living cell membranes under near-physiological conditions for minimally invasive intracellular delivery.

  17. Fully scalable video coding in multicast applications

    Science.gov (United States)

    Lerouge, Sam; De Sutter, Robbie; Lambert, Peter; Van de Walle, Rik

    2004-01-01

    The increasing diversity of the characteristics of the terminals and networks that are used to access multimedia content through the internet introduces new challenges for the distribution of multimedia data. Scalable video coding will be one of the elementary solutions in this domain. This type of coding allows an encoded video sequence to be adapted to the limitations of the network or the receiving device by means of very basic operations. Algorithms for creating fully scalable video streams, in which multiple types of scalability are offered at the same time, are becoming mature. On the other hand, research on applications that use such bitstreams has only recently emerged. In this paper, we introduce a mathematical model for describing such bitstreams. In addition, we show how we can model applications that use scalable bitstreams by means of definitions that are built on top of this model. In particular, we chose to describe a multicast protocol that is targeted at scalable bitstreams. This way, we demonstrate that it is possible to define an abstract model for scalable bitstreams that can be used as a tool for reasoning about such bitstreams and related applications.

  18. ENDEAVOUR: A Scalable SDN Architecture for Real-World IXPs

    KAUST Repository

    Antichi, Gianni

    2017-10-25

    Innovation in interdomain routing has remained stagnant for over a decade. Recently, IXPs have emerged as economically-advantageous interconnection points for reducing path latencies and exchanging ever increasing traffic volumes among, possibly, hundreds of networks. Given their far-reaching implications on interdomain routing, IXPs are the ideal place to foster network innovation and extend the benefits of SDN to the interdomain level. In this paper, we present, evaluate, and demonstrate ENDEAVOUR, an SDN platform for IXPs. ENDEAVOUR can be deployed on a multi-hop IXP fabric, supports a large number of use cases, and is highly-scalable while avoiding broadcast storms. Our evaluation with real data from one of the largest IXPs, demonstrates the benefits and scalability of our solution: ENDEAVOUR requires around 70% fewer rules than alternative SDN solutions thanks to our rule partitioning mechanism. In addition, by providing an open source solution, we invite everyone from the community to experiment (and improve) our implementation as well as adapt it to new use cases.

  19. Center for Programming Models for Scalable Parallel Computing

    Energy Technology Data Exchange (ETDEWEB)

    John Mellor-Crummey

    2008-02-29

    Rice University's achievements as part of the Center for Programming Models for Scalable Parallel Computing include: (1) design and implementation of cafc, the first multi-platform CAF compiler for distributed and shared-memory machines, (2) performance studies of the efficiency of programs written using the CAF and UPC programming models, (3) a novel technique to analyze explicitly-parallel SPMD programs that facilitates optimization, (4) design, implementation, and evaluation of new language features for CAF, including communication topologies, multi-version variables, and distributed multithreading to simplify development of high-performance codes in CAF, and (5) a synchronization strength reduction transformation for automatically replacing barrier-based synchronization with more efficient point-to-point synchronization. The prototype Co-array Fortran compiler cafc developed in this project is available as open source software from http://www.hipersoft.rice.edu/caf.

  20. Scalable, ultra-resistant structural colors based on network metamaterials

    CERN Document Server

    Galinski, Henning; Dong, Hao; Gongora, Juan S Totero; Favaro, Grégory; Döbeli, Max; Spolenak, Ralph; Fratalocchi, Andrea; Capasso, Federico

    2016-01-01

    Structural colours have drawn wide attention for their potential as a future printing technology for various applications, ranging from biomimetic tissues to adaptive camouflage materials. However, an efficient approach to realise robust colours with a scalable fabrication technique is still lacking, hampering the realisation of practical applications with this platform. Here we develop a new approach based on large-scale network metamaterials, which combine dealloyed subwavelength structures at the nanoscale with lossless, ultra-thin dielectric coatings. By using theory and experiments, we show how sub-wavelength dielectric coatings control a mechanism of resonant light coupling with epsilon-near-zero (ENZ) regions generated in the metallic network, manifesting the formation of highly saturated structural colours that cover a wide portion of the spectrum. Ellipsometry measurements report the efficient observation of these colours even at angles of 70 degrees. The network-like architecture of these nanoma...

  1. MR-Tree - A Scalable MapReduce Algorithm for Building Decision Trees

    Directory of Open Access Journals (Sweden)

    Vasile PURDILĂ

    2014-03-01

    Full Text Available Learning decision trees against very large amounts of data is not practical on single-node computers due to the huge amount of computation required by this process. Apache Hadoop is a large-scale distributed computing platform that runs on commodity hardware clusters and can be used successfully for data mining tasks against very large datasets. This work presents a parallel decision tree learning algorithm expressed in the MapReduce programming model that runs on the Apache Hadoop platform and scales very well with dataset size.
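
    The MapReduce step when growing a tree node is the parallel computation of split statistics: mappers emit class counts keyed by (feature, value) for their data shard, and reducers sum them so the driver can score candidate splits. A schematic single-process imitation (not the paper's code):

        from collections import Counter
        from itertools import chain

        def map_phase(shard):
            """Emit ((feature_index, value, label), 1) pairs for one data shard."""
            for features, label in shard:
                for j, v in enumerate(features):
                    yield (j, v, label), 1

        def reduce_phase(pairs):
            """Sum counts per key, as Hadoop's shuffle + reduce would."""
            counts = Counter()
            for key, n in pairs:
                counts[key] += n
            return counts

        # Two "shards" of (features, label) records.
        shard1 = [(['sunny', 'hot'], 'no'), (['rainy', 'mild'], 'yes')]
        shard2 = [(['rainy', 'hot'], 'yes')]
        counts = reduce_phase(chain(map_phase(shard1), map_phase(shard2)))
        print(counts[(0, 'rainy', 'yes')])   # 2

    From these aggregated counts the driver can compute the information gain of every candidate split without re-scanning the data, which is what makes one tree level cost a single MapReduce pass.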

  2. Relationship between Length and Surface-Enhanced Raman Spectroscopy Signal Strength in Metal Nanoparticle Chains: Ideal Models versus Nanofabrication

    Directory of Open Access Journals (Sweden)

    Kristen D. Alexander

    2012-01-01

    Full Text Available We have employed capillary force deposition on ion beam patterned substrates to fabricate chains of 60 nm gold nanospheres ranging in length from 1 to 9 nanoparticles. Measurements of the surface-averaged SERS enhancement factor strength for these chains were then compared to the numerical predictions. The SERS enhancement conformed to theoretical predictions in the case of only a few chains, with the vast majority of chains tested not matching such behavior. Although all of the nanoparticle chains appear identical under electron microscope observation, the extreme sensitivity of the SERS enhancement to nanoscale morphology renders current nanofabrication methods insufficient for consistent production of coupled nanoparticle chains. Notwithstanding this fact, the aggregate data also confirmed that nanoparticle dimers offer a large improvement over the monomer enhancement while conclusively showing that, within the limitations imposed by current state-of-the-art nanofabrication techniques, chains comprising more than two nanoparticles provide only a marginal signal boost over the already considerable dimer enhancement.

  3. Wanted: Scalable Tracers for Diffusion Measurements

    Science.gov (United States)

    2015-01-01

    Scalable tracers are potentially a useful tool to examine diffusion mechanisms and to predict diffusion coefficients, particularly for hindered diffusion in complex, heterogeneous, or crowded systems. Scalable tracers are defined as a series of tracers varying in size but with the same shape, structure, surface chemistry, deformability, and diffusion mechanism. Both chemical homology and constant dynamics are required. In particular, branching must not vary with size, and there must be no transition between ordinary diffusion and reptation. Measurements using scalable tracers yield the mean diffusion coefficient as a function of size alone; measurements using nonscalable tracers yield the variation due to differences in the other properties. Candidate scalable tracers are discussed for two-dimensional (2D) diffusion in membranes and three-dimensional diffusion in aqueous solutions. Correlations to predict the mean diffusion coefficient of globular biomolecules from molecular mass are reviewed briefly. Specific suggestions for the 3D case include the use of synthetic dendrimers or random hyperbranched polymers instead of dextran and the use of core–shell quantum dots. Another useful tool would be a series of scalable tracers varying in deformability alone, prepared by varying the density of crosslinking in a polymer to make say “reinforced Ficoll” or “reinforced hyperbranched polyglycerol.”

  4. Scalable L-infinite coding of meshes.

    Science.gov (United States)

    Munteanu, Adrian; Cernea, Dan C; Alecu, Alin; Cornelis, Jan; Schelkens, Peter

    2010-01-01

    The paper investigates the novel concept of local-error control in mesh geometry encoding. In contrast to traditional mesh-coding systems that use the mean-square error as target distortion metric, this paper proposes a new L-infinite mesh-coding approach, for which the target distortion metric is the L-infinite distortion. In this context, a novel wavelet-based L-infinite-constrained coding approach for meshes is proposed, which ensures that the maximum error between the vertex positions in the original and decoded meshes is lower than a given upper bound. Furthermore, the proposed system achieves scalability in the L-infinite sense, that is, any decoding of the input stream will correspond to a perfectly predictable L-infinite distortion upper bound. An instantiation of the proposed L-infinite-coding approach is demonstrated for MESHGRID, which is a scalable 3D object encoding system, part of MPEG-4 AFX. In this context, the advantages of scalable L-infinite coding over L-2-oriented coding are experimentally demonstrated. One concludes that the proposed L-infinite mesh-coding approach guarantees an upper bound on the local error in the decoded mesh, it enables a fast real-time implementation of the rate allocation, and it preserves all the scalability features and animation capabilities of the employed scalable mesh codec.
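
    The contrast with mean-square-error coding is easiest to see at the quantizer: a uniform quantizer with step 2*eps bounds every reconstructed coordinate within eps of the original, an L-infinite guarantee that holds no matter how the error is distributed across vertices. A minimal sketch of such an eps-bounded vertex codec (not the MESHGRID codec itself):

        import numpy as np

        def encode(vertices, eps):
            """Quantize coordinates with step 2*eps -> max error <= eps."""
            return np.round(np.asarray(vertices) / (2 * eps)).astype(np.int64)

        def decode(indices, eps):
            return indices * (2 * eps)

        V = np.random.default_rng(0).uniform(-1, 1, size=(1000, 3))
        eps = 1e-3
        V_hat = decode(encode(V, eps), eps)
        print(np.max(np.abs(V - V_hat)) <= eps)   # True: L-infinite bound holds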

  5. Space platforms

    Science.gov (United States)

    1983-04-01

    The expanded scientific capabilities available by interfacing an orbital, free-flying experiments platform with Shuttle tending are outlined. The platform would be lifted to orbit by the Shuttle, and modularly increased in size on subsequent flights. Science packages could be left on the 26,000 lb space platform for up to six months. Component sections would include electrical and thermal control systems, berthing ports for payloads and an Orbiter, and an attitude control, communications, and data handling subsection. A 12 kW solar array would furnish power, and interconnect with Spacelab would further enhance the operations range. All berthed science packages would have individual pointing ability and grapples for the Orbiter RMS. Eventual evolution to include facilities for a human crew and a 25 kW solar array is projected.

  6. Scalable Density-Based Subspace Clustering

    DEFF Research Database (Denmark)

    Müller, Emmanuel; Assent, Ira; Günnemann, Stephan

    2011-01-01

    For knowledge discovery in high dimensional databases, subspace clustering detects clusters in arbitrary subspace projections. Scalability is a crucial issue, as the number of possible projections is exponential in the number of dimensions. We propose a scalable density-based subspace clustering method that steers mining to a few selected subspace clusters. Our novel steering technique reduces subspace processing by identifying and clustering promising subspaces and their combinations directly. Thereby, it narrows down the search space while maintaining accuracy. Thorough experiments on real and synthetic databases show that steering is efficient and scalable, with high quality results. For future work, our steering paradigm for density-based subspace clustering opens research potential for speeding up other subspace clustering approaches as well.

  7. Scalable Open Source Smart Grid Simulator (SGSim)

    DEFF Research Database (Denmark)

    Ebeid, Emad Samuel Malki; Jacobsen, Rune Hylsberg; Quaglia, Davide

    2017-01-01

    The future smart power grid will consist of an unlimited number of smart devices that communicate with control units to maintain the grid’s sustainability, efficiency, and balancing. In order to build and verify such controllers over a large grid, a scalable simulation environment is needed. This paper presents an open source smart grid simulator (SGSim). The simulator is based on the open source SystemC Network Simulation Library (SCNSL) and aims to model scalable smart grid applications. SGSim has been tested under different smart grid scenarios that contain hundreds of thousands of households and appliances. By using SGSim, different smart grid control strategies and protocols can be tested, validated and evaluated in a scalable environment.

  8. Advanced Nanofabrication Process Development for Self-Powered System-on-Chip

    KAUST Repository

    Rojas, Jhonathan Prieto

    2010-11-01

    In this work the development of a Self-Powered System-on-Chip is explored by examining two components of process development from different perspectives. On one side, an energy component is approached from a biochemical standpoint: a Microbial Fuel Cell (MFC) is built with standard microfabrication techniques, featuring a novel electrode based on Carbon Nanotubes (CNTs). The fabrication process involves the formation of a micrometric chamber that hosts an enhanced CNT-based anode. Preliminary results are promising, showing a high current density (113.6 mA/m2) compared with other similar cells. Nevertheless, many improvements can be made to the main design, and further characterization of the anode will give a more complete understanding and bring the device closer to a practical implementation. From a second point of view, nano-patterning through silicon nitride spacer width control is developed, aimed at producing alternative sub-100 nm device fabrication with the potential of further scaling thanks to nanowire-based structures. These nanostructures are formed from a nano-pattern template using a bottom-up fabrication scheme. Uniformity and scalability of the process are demonstrated and its potential described. An estimated area of 0.120 μm2 for a 6T-SRAM (Static Random Access Memory) bitcell (6 devices) can be achieved. In summary, by using a novel sustainable energy component and scalable nano-patterning for logic and computing modules, this work has successfully collected the essential base knowledge and joined two different elements that synergistically will contribute to the future implementation of a Self-Powered System-on-Chip.

  9. An intermittent rocking platform for integrated expansion and differentiation of human pluripotent stem cells to cardiomyocytes in suspended microcarrier cultures

    Directory of Open Access Journals (Sweden)

    Sherwin Ting

    2014-09-01

    In conclusion, we have developed a simple, robust and scalable platform that integrates both human embryonic stem cell (hESC) expansion and cardiomyocyte (CM) differentiation in one unit process, which is capable of meeting the need for large amounts of CMs.

  10. Platform computing

    CERN Multimedia

    2002-01-01

    "Platform Computing releases first grid-enabled workload management solution for IBM eServer Intel and UNIX high performance computing clusters. This Out-of-the-box solution maximizes the performance and capability of applications on IBM HPC clusters" (1/2 page) .

  11. Payment Platform

    DEFF Research Database (Denmark)

    Hjelholt, Morten; Damsgaard, Jan

    2012-01-01

    Payment transactions through the use of physical coins, bank notes or credit cards have for centuries been the standard formats of exchanging money. Recently, online and mobile digital payment platforms have entered the stage as contenders to this position and could possibly penetrate societies...

  12. From Digital Disruption to Business Model Scalability

    DEFF Research Database (Denmark)

    Nielsen, Christian; Lund, Morten; Thomsen, Peter Poulsen

    2017-01-01

    a long time to replicate, business model scalability can be cornered into four dimensions. In many corporate restructuring exercises and Mergers and Acquisitions there is a tendency to look for synergies in the form of cost reductions, lean workflows and market segments. However, this state of mind ... This article discusses the terms disruption, digital disruption, business models and business model scalability. It illustrates how managers should be using these terms for the benefit of their business by developing business models capable of achieving exponentially increasing returns to scale

  13. From Digital Disruption to Business Model Scalability

    DEFF Research Database (Denmark)

    Nielsen, Christian; Lund, Morten; Thomsen, Peter Poulsen

    2017-01-01

    as a response to digital disruption. A series of case studies illustrate that besides frequent existing messages in the business literature relating to the importance of creating agile businesses, both in growing and declining economies, as well as hard to copy value propositions or value propositions that take ... This article discusses the terms disruption, digital disruption, business models and business model scalability. It illustrates how managers should be using these terms for the benefit of their business by developing business models capable of achieving exponentially increasing returns to scale ... will seldom lead to business model scalability capable of competing with digital disruption(s).

  14. An Extensible Sensing and Control Platform for Building Energy Management

    Energy Technology Data Exchange (ETDEWEB)

    Rowe, Anthony [Carnegie Mellon Univ., Pittsburgh, PA (United States); Berges, Mario [Carnegie Mellon Univ., Pittsburgh, PA (United States); Martin, Christopher [Robert Bosch LLC, Anderson, SC (United States)

    2016-04-03

    The goal of this project is to develop Mortar.io, an open-source building automation system (BAS) platform designed to simplify data collection, archiving, event scheduling and coordination of cross-system interactions. Mortar.io is optimized for (1) robustness to network outages, (2) ease of installation using plug-and-play, and (3) scalable support for small to large buildings and campuses.

  15. Content-Aware Scalability-Type Selection for Rate Adaptation of Scalable Video

    Directory of Open Access Journals (Sweden)

    Tekalp A Murat

    2007-01-01

    Full Text Available Scalable video coders provide different scaling options, such as temporal, spatial, and SNR scalabilities, where rate reduction by discarding enhancement layers of different scalability types results in different kinds and/or levels of visual distortion depending on the content and bitrate. This dependency between scalability type, video content, and bitrate is not well investigated in the literature. To this effect, we first propose an objective function that quantifies the flatness, blockiness, blurriness, and temporal jerkiness artifacts caused by rate reduction through spatial size, frame rate, and quantization parameter scaling. Next, the weights of this objective function are determined for different content (shot) types and different bitrates using a training procedure with subjective evaluation. Finally, a method is proposed for choosing, for each temporal segment, the scaling type that results in minimum visual distortion according to this objective function, given the content type of the temporal segment. Two subjective tests have been performed to validate the proposed procedure for content-aware selection of the best scalability type on soccer videos. Soccer videos scaled from 600 kbps to 100 kbps by the proposed content-aware selection of scalability type have been found visually superior to those that are scaled using a single scalability option over the whole sequence.
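
    Once the artifact measures and the content-dependent weights are in hand, the selection step is an argmin over the scaling options. A schematic of that final step with made-up measures and weights (the real weights come from the subjective training procedure):

        def best_scaling(options, weights):
            """Pick the scaling type minimizing the weighted artifact score.

            options: {name: {artifact: measured_value}}
            weights: {artifact: weight} for the current shot type and bitrate.
            """
            def score(measures):
                return sum(weights[a] * measures[a] for a in weights)
            return min(options, key=lambda name: score(options[name]))

        # Hypothetical artifact measurements for one temporal segment.
        options = {
            'spatial':  {'flatness': 0.2, 'blockiness': 0.1, 'blur': 0.7, 'jerkiness': 0.1},
            'temporal': {'flatness': 0.1, 'blockiness': 0.1, 'blur': 0.1, 'jerkiness': 0.8},
            'SNR':      {'flatness': 0.3, 'blockiness': 0.6, 'blur': 0.2, 'jerkiness': 0.1},
        }
        weights = {'flatness': 1.0, 'blockiness': 1.5, 'blur': 1.0, 'jerkiness': 0.5}
        print(best_scaling(options, weights))   # 'temporal' wins under this low-motion weighting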

  16. Nanofabrication of lateral quantum dots for spin qubit optimization

    Science.gov (United States)

    Camirand Lemyre, Julien

    This work presents a new type of spin qubit whose performance relies on the properties of a single electron in a double quantum dot. The strong dipole moment of the double dot, combined with a large variation of the magnetic field between the two dots, would allow logic operations to be performed faster than in a single quantum dot. To maximize the magnetic field variation, a micromagnet is placed as close as possible to one of the two dots. To this end, a GaAs/AlGaAs heterostructure with aluminum gates deposited on its surface was used to form a lateral double quantum dot. Single-electron occupation of the double dot is confirmed by low-temperature electrical transport measurements and by the observation of spin blockade. In addition, a gate oxidation process based on an oxygen plasma was developed. A study of the properties of the oxide formed by this method shows that a micromagnet can be placed directly on the heterostructure surface without affecting the electrical insulation between the gates. This new approach makes it possible to produce even stronger magnetic fields than in previous experiments, in which the micromagnet was placed much farther from the surface. The entire fabrication process, from photolithography to electron-beam lithography, was developed during this work in the cleanrooms of the electrical engineering department and of the physics department of the Université de Sherbrooke. This work is an important step toward higher-performance spin qubits in lateral quantum dots. Keywords: quantum information, spin, ultra-fast rotations, lateral quantum dots, micromagnets, plasma oxidation, nanofabrication.

  17. The Scalable Reasoning System: Lightweight Visualization for Distributed Analytics

    Energy Technology Data Exchange (ETDEWEB)

    Pike, William A.; Bruce, Joseph R.; Baddeley, Robert L.; Best, Daniel M.; Franklin, Lyndsey; May, Richard A.; Rice, Douglas M.; Riensche, Roderick M.; Younkin, Katarina

    2009-03-01

    A central challenge in visual analytics is the creation of accessible, widely distributable analysis applications that bring the benefits of visual discovery to as broad a user base as possible. Moreover, to support the role of visualization in the knowledge creation process, it is advantageous to allow users to describe the reasoning strategies they employ while interacting with analytic environments. We introduce an application suite called the Scalable Reasoning System (SRS), which provides web-based and mobile interfaces for visual analysis. The service-oriented analytic framework that underlies SRS provides a platform for deploying pervasive visual analytic environments across an enterprise. SRS represents a “lightweight” approach to visual analytics whereby thin client analytic applications can be rapidly deployed in a platform-agnostic fashion. Client applications support multiple coordinated views while giving analysts the ability to record evidence, assumptions, hypotheses and other reasoning artifacts. We describe the capabilities of SRS in the context of a real-world deployment at a regional law enforcement organization.

  18. On-chip detection of non-classical light by scalable integration of single-photon detectors.

    Science.gov (United States)

    Najafi, Faraz; Mower, Jacob; Harris, Nicholas C; Bellei, Francesco; Dane, Andrew; Lee, Catherine; Hu, Xiaolong; Kharel, Prashanta; Marsili, Francesco; Assefa, Solomon; Berggren, Karl K; Englund, Dirk

    2015-01-09

    Photonic-integrated circuits have emerged as a scalable platform for complex quantum systems. A central goal is to integrate single-photon detectors to reduce optical losses, latency and wiring complexity associated with off-chip detectors. Superconducting nanowire single-photon detectors (SNSPDs) are particularly attractive because of high detection efficiency, sub-50-ps jitter and nanosecond-scale reset time. However, while single detectors have been incorporated into individual waveguides, the system detection efficiency of multiple SNSPDs in one photonic circuit-required for scalable quantum photonic circuits-has been limited to classical light.

  19. [Application of a life sciences platform based on Oracle to biomedical information].

    Science.gov (United States)

    Zhao, Zhi-Yun; Li, Tai-Huan; Yang, Hong-Qiao

    2008-03-01

    A life sciences platform based on Oracle database technology is introduced in this paper. By providing powerful data access, integrating a variety of data types, and managing vast quantities of data, the software presents a flexible, safe and scalable management platform for biomedical data processing.

  20. Scalable Detection and Isolation of Phishing

    NARCIS (Netherlands)

    Moreira Moura, Giovane; Pras, Aiko

    2009-01-01

    This paper presents a proposal for scalable detection and isolation of phishing. The main ideas are to move the protection from end users towards the network provider and to employ the novel bad neighborhood concept, in order to detect and isolate both phishing e-mail senders and phishing web
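
    The bad neighborhood concept can be sketched as aggregation of observed offenders per network prefix, so that a subnet, rather than an individual host, accumulates a score; the /24 prefix length and threshold below are illustrative choices, not the paper's parameters:

        from collections import Counter
        import ipaddress

        def bad_neighborhoods(offender_ips, prefixlen=24, threshold=3):
            """Score /prefixlen subnets by the number of offending hosts seen."""
            scores = Counter()
            for ip in offender_ips:
                net = ipaddress.ip_network(f"{ip}/{prefixlen}", strict=False)
                scores[net] += 1
            return {net: n for net, n in scores.items() if n >= threshold}

        observed = ["198.51.100.7", "198.51.100.9", "198.51.100.200", "203.0.113.5"]
        print(bad_neighborhoods(observed))   # {IPv4Network('198.51.100.0/24'): 3}

    Scoring prefixes instead of hosts is what makes network-side protection scalable: one blocklist entry covers a whole neighborhood of likely-compromised machines.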

  1. Scalable Open Source Smart Grid Simulator (SGSim)

    DEFF Research Database (Denmark)

    Ebeid, Emad Samuel Malki; Jacobsen, Rune Hylsberg; Stefanni, Francesco

    2017-01-01

    ... This paper presents an open source smart grid simulator (SGSim). The simulator is based on the open source SystemC Network Simulation Library (SCNSL) and aims to model scalable smart grid applications. SGSim has been tested under different smart grid scenarios that contain hundreds of thousands of households...

  2. Scalable Domain Decomposed Monte Carlo Particle Transport

    Energy Technology Data Exchange (ETDEWEB)

    O'Brien, Matthew Joseph [Univ. of California, Davis, CA (United States)

    2013-12-05

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.

  3. Subjective comparison of temporal and quality scalability

    DEFF Research Database (Denmark)

    Korhonen, Jari; Reiter, Ulrich; You, Junyong

    2011-01-01

    and quality scalability. The practical experiments with low-resolution video sequences show that, in general, distortion is a more crucial factor for the perceived subjective quality than frame rate. However, the results also depend on the content. Moreover, we discuss the role of other different influence...

  4. Bubble pump: scalable strategy for in-plane liquid routing.

    Science.gov (United States)

    Oskooei, Ali; Günther, Axel

    2015-07-07

    We present an on-chip liquid routing technique intended for application in well-based microfluidic systems that require long-term active pumping at low to medium flow rates. Our technique requires only one fluidic feature layer and one pneumatic control line, and does not rely on flexible membranes or mechanical moving parts. The presented bubble pump is therefore compatible with both elastomeric and rigid substrate materials and the associated scalable manufacturing processes. Directed liquid flow was achieved in a microchannel by an in-series configuration of two previously described "bubble gates", i.e., gas-bubble enabled miniature gate valves. Only one time-dependent pressure signal is required: it initiates a reciprocating bubble motion at the upstream (active) bubble gate, while a time-constant gas pressure level is applied at the downstream (passive) gate. In its rest state, the passive gate remains closed and only temporarily opens while the liquid pressure rises due to the active gate's reciprocating bubble motion. We have designed, fabricated and consistently operated our bubble pump with a variety of working liquids for >72 hours. Flow rates of 0-5.5 μl/min were obtained and depended on the selected geometric dimensions, working fluids and actuation frequencies. The maximum operational pressure was 2.9-9.1 kPa, depending on the interfacial tension of the working fluids. Attainable flow rates compared favorably with those of available micropumps. We achieved flow rate enhancements of 30-100% by operating two bubble pumps in tandem and demonstrated scalability of the concept in a multi-well format with 12 individually and uniformly perfused microchannels (variation in flow rate ...). The bubble pump may provide active flow control for analytical and point-of-care diagnostic devices, as well as for microfluidic cell culture and organ-on-chip platforms.

  5. ParaText : scalable text modeling and analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Dunlavy, Daniel M.; Stanton, Eric T.; Shead, Timothy M.

    2010-06-01

    Automated processing, modeling, and analysis of unstructured text (news documents, web content, journal articles, etc.) is a key task in many data analysis and decision making applications. As data sizes grow, scalability is essential for deep analysis. In many cases, documents are modeled as term or feature vectors and latent semantic analysis (LSA) is used to model latent, or hidden, relationships between documents and terms appearing in those documents. LSA supplies conceptual organization and analysis of document collections by modeling high-dimension feature vectors in many fewer dimensions. While past work on the scalability of LSA modeling has focused on the SVD, the goal of our work is to investigate the use of distributed memory architectures for the entire text analysis process, from data ingestion to semantic modeling and analysis. ParaText is a set of software components for distributed processing, modeling, and analysis of unstructured text. The ParaText source code is available under a BSD license, as an integral part of the Titan toolkit. ParaText components are chained together into data-parallel pipelines that are replicated across processes on distributed-memory architectures. Individual components can be replaced or rewired to explore different computational strategies and implement new functionality. ParaText functionality can be embedded in applications on any platform using the native C++ API, Python, or Java. The ParaText MPI Process provides a 'generic' text analysis pipeline in a command-line executable that can be used for many serial and parallel analysis tasks. ParaText can also be deployed as a web service accessible via a RESTful (HTTP) API. In the web service configuration, any client can access the functionality provided by ParaText using commodity protocols, from standard web browsers to custom clients written in any language.
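
    The LSA stage is essentially a truncated SVD of the term-document matrix; a compact serial sketch of the projection (ParaText distributes this work across processes):

        import numpy as np

        def lsa(term_doc, k):
            """Project documents into a k-dimensional latent semantic space.

            term_doc: (terms, docs) count or tf-idf matrix.
            Returns (docs, k) coordinates: rows of V_k scaled by singular values.
            """
            U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
            return Vt[:k].T * s[:k]             # document coordinates

        # Four tiny "documents" over a five-term vocabulary.
        A = np.array([[2, 1, 0, 0],
                      [1, 2, 0, 0],
                      [0, 0, 1, 2],
                      [0, 0, 2, 1],
                      [1, 0, 0, 1]], dtype=float)
        docs2d = lsa(A, k=2)
        print(np.round(docs2d, 2))   # docs 0-1 cluster apart from docs 2-3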

  6. A scalable parallel black oil simulator on distributed memory parallel computers

    Science.gov (United States)

    Wang, Kun; Liu, Hui; Chen, Zhangxin

    2015-11-01

    This paper presents our work on developing a parallel black oil simulator for distributed memory computers based on our in-house parallel platform. The parallel simulator is designed to overcome the performance issues of common simulators that are implemented for personal computers and workstations. The finite difference method is applied to discretize the black oil model. In addition, some advanced techniques are employed to strengthen the robustness and parallel scalability of the simulator, including an inexact Newton method, matrix decoupling methods, and algebraic multigrid methods. A new multi-stage preconditioner is proposed to accelerate the solution of linear systems from the Newton methods. Numerical experiments show that our simulator is scalable and efficient, and is capable of simulating extremely large-scale black oil problems with tens of millions of grid blocks using thousands of MPI processes on parallel computers.
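
    An inexact Newton method solves each linear Newton system only loosely, trading inner-solver work for outer iterations. A small dense sketch with the inner solve truncated at a relative residual eta; the simulator itself pairs this with decoupling and algebraic multigrid preconditioned solvers, whereas the plain steepest-descent inner loop here is purely illustrative:

        import numpy as np

        def inexact_newton(F, J, x0, eta=0.1, tol=1e-8, max_iter=50):
            """Newton iteration with the inner solve truncated at ||r|| <= eta*||F||."""
            x = np.asarray(x0, dtype=float)
            for _ in range(max_iter):
                Fx = F(x)
                if np.linalg.norm(Fx) < tol:
                    break
                A, b = J(x), -Fx
                s = np.zeros_like(x)
                r = b.copy()
                while np.linalg.norm(r) > eta * np.linalg.norm(b):  # loose solve
                    alpha = (r @ r) / (r @ (A @ r))   # steepest descent (SPD A)
                    s += alpha * r
                    r = b - A @ s
                x = x + s
            return x

        # Example: F(x) = x^2 - c componentwise, with root sqrt(c).
        c = np.array([4.0, 9.0])
        F = lambda x: x * x - c
        J = lambda x: np.diag(2 * x)
        print(inexact_newton(F, J, np.array([1.0, 1.0])))   # approx [2, 3]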

  7. Design and Implementation of a Scalable Membership Service for Supercomputer Resiliency-Aware Runtime

    Energy Technology Data Exchange (ETDEWEB)

    Tock, Yoav [IBM Corporation, Haifa Research Center; Mandler, Benjamin [IBM Corporation, Haifa Research Center; Moreira, Jose [IBM T. J. Watson Research Center; Jones, Terry R [ORNL

    2013-01-01

    As HPC systems and applications get bigger and more complex, we are approaching an era in which resiliency and run-time elasticity concerns become paramount. We offer a building block for an alternative resiliency approach in which computations will be able to make progress while components fail, in addition to enabling a dynamic set of nodes throughout a computation lifetime. The core of our solution is a hierarchical scalable membership service providing eventual consistency semantics. An attribute replication service is used for hierarchy organization, and is exposed to external applications. Our solution is based on P2P technologies and provides resiliency and elastic runtime support at ultra large scales. The resulting middleware is general purpose while exploiting HPC platform unique features and architecture. We have implemented and tested this system on BlueGene/P with Linux, and using worst-case analysis, evaluated the service scalability as effective for up to 1M nodes.

  8. Integration of an intelligent systems behavior simulator and a scalable soldier-machine interface

    Science.gov (United States)

    Johnson, Tony; Manteuffel, Chris; Brewster, Benjamin; Tierney, Terry

    2007-04-01

    As the Army's Future Combat Systems (FCS) introduce emerging technologies and new force structures to the battlefield, soldiers will increasingly face new challenges in workload management. The next generation warfighter will be responsible for effectively managing robotic assets in addition to performing other missions. Studies of future battlefield operational scenarios involving the use of automation, including the specification of existing and proposed technologies, will provide significant insight into potential problem areas regarding soldier workload. The US Army Tank Automotive Research, Development, and Engineering Center (TARDEC) is currently executing an Army technology objective program to analyze and evaluate the effect of automated technologies and their associated control devices with respect to soldier workload. The Human-Robotic Interface (HRI) Intelligent Systems Behavior Simulator (ISBS) is a human performance measurement simulation system that allows modelers to develop constructive simulations of military scenarios with various deployments of interface technologies in order to evaluate operator effectiveness. One such interface is TARDEC's Scalable Soldier-Machine Interface (SMI). The scalable SMI provides a configurable machine interface application that is capable of adapting to several hardware platforms by recognizing the physical space limitations of the display device. This paper describes the integration of the ISBS and Scalable SMI applications, which will ultimately benefit both systems. The ISBS will be able to use the Scalable SMI to visualize the behaviors of virtual soldiers performing HRI tasks, such as route planning, and the scalable SMI will benefit from stimuli provided by the ISBS simulation environment. The paper describes the background of each system and details of the system integration approach.

  9. A cracking-assisted micro-/nanofluidic fabrication platform for silver nanobelt arrays and nanosensors.

    Science.gov (United States)

    Kim, Dong-Joo; Ha, Dogyeong; Zhou, Qitao; Thokchom, Ashish Kumar; Lim, Ji Won; Lee, Jongwan; Park, Jun Gyu; Kim, Taesung

    2017-07-13

    Nanowires (NWs) with a high surface-to-volume ratio are advantageous for bio- or chemical sensor applications with high sensitivity, high selectivity, rapid response, and low power consumption. However, NWs are typically fabricated by combining several nanofabrication and even microfabrication processes, resulting in drawbacks such as high fabrication cost, extensive labor, and long processing time. Here, we show a novel NW fabrication platform based on "crack-photolithography" to produce a micro-/nanofluidic channel network. Solutions were loaded along the microchannel, while chemical synthesis was performed in the nanoslit-like nanochannels for fabricating silver nanobelts (AgNBs). In addition, the NW/NB fabrication platform not only made it possible to produce AgNBs in a repeatable, high-throughput, and low-cost manner but also allowed the simultaneous synthesis and alignment of AgNBs on a chip, eliminating the need for special micro- and/or nanofabrication equipment and dramatically reducing the processing time, labor, and cost. Finally, we demonstrated that the AgNBs can be used as chemical sensors, either as prepared or when integrated in a flexible substrate, to detect target analytes such as hydrogen peroxide.

  10. Scalable Atomistic Simulation Algorithms for Materials Research

    Directory of Open Access Journals (Sweden)

    Aiichiro Nakano

    2002-01-01

Full Text Available A suite of scalable atomistic simulation programs has been developed for materials research based on space-time multiresolution algorithms. Design and analysis of parallel algorithms are presented for molecular dynamics (MD) simulations and quantum-mechanical (QM) calculations based on the density functional theory. Performance tests have been carried out on 1,088-processor Cray T3E and 1,280-processor IBM SP3 computers. The linear-scaling algorithms have enabled 6.44-billion-atom MD and 111,000-atom QM calculations on 1,024 SP3 processors with parallel efficiency well over 90%. The production-quality programs also feature wavelet-based computational-space decomposition for adaptive load balancing, space-filling-curve-based adaptive data compression with user-defined error bound for scalable I/O, and octree-based fast visibility culling for immersive and interactive visualization of massive simulation data.
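
    The linear scaling reported above rests on locality: once interactions have a finite cutoff, a linked-cell (cell-list) decomposition finds all interacting pairs in O(N) rather than O(N^2) time. The sketch below is a minimal NumPy illustration of that device under assumptions of our own (a periodic box, at least three cells per dimension); it is not code from the authors' suite, whose algorithms go well beyond this.

```python
# A minimal cell-list pair search, the standard device behind O(N) molecular
# dynamics force evaluation; illustrative only, not the authors' code.
import numpy as np

def cell_list_pairs(pos, box, rcut):
    """Return all pairs (i, j), i < j, with |pos[i] - pos[j]| < rcut.

    pos: (N, 3) coordinates; box: (3,) periodic box lengths.
    Assumes at least 3 cells per dimension so each pair is seen once."""
    ncell = np.maximum((box / rcut).astype(int), 1)  # cells per dimension
    width = box / ncell
    cells = (pos // width).astype(int) % ncell       # cell index of each atom
    buckets = {}                                     # cell tuple -> atom ids
    for i, c in enumerate(map(tuple, cells)):
        buckets.setdefault(c, []).append(i)
    pairs = []
    for c, atoms in buckets.items():
        for off in np.ndindex(3, 3, 3):              # scan the 27 neighbor cells
            nb = tuple((np.array(c) + np.array(off) - 1) % ncell)
            for i in atoms:
                for j in buckets.get(nb, []):
                    if j <= i:
                        continue                     # count each pair once
                    d = pos[i] - pos[j]
                    d -= box * np.round(d / box)     # minimum-image convention
                    if d @ d < rcut * rcut:
                        pairs.append((i, j))
    return pairs
```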

  11. Declarative and Scalable Selection for Map Visualizations

    DEFF Research Database (Denmark)

    Kefaloukos, Pimin Konstantin Balic

supports the PostgreSQL dialect of SQL. The prototype implementation is a compiler that translates CVL into SQL and stored procedures. (c) TileHeat is a framework and basic algorithm for partial materialization of hot tile sets for scalable map distribution. The framework predicts future map workloads......, there are indications that the method is scalable for databases that contain millions of records, especially if the target language of the compiler is substituted by a cluster-ready variant of SQL. While several realistic use cases for maps have been implemented in CVL, additional non-geographic data visualization uses...... goal. The results for TileHeat show that the prediction method offers a substantial improvement over the current method used by the Danish Geodata Agency. Thus, a large amount of computation can potentially be saved by this public institution, which is responsible for the distribution of government...

  12. A Scalability Model for ECS's Data Server

    Science.gov (United States)

    Menasce, Daniel A.; Singhal, Mukesh

    1998-01-01

    This report presents in four chapters a model for the scalability analysis of the Data Server subsystem of the Earth Observing System Data and Information System (EOSDIS) Core System (ECS). The model analyzes if the planned architecture of the Data Server will support an increase in the workload with the possible upgrade and/or addition of processors, storage subsystems, and networks. The approaches in the report include a summary of the architecture of ECS's Data server as well as a high level description of the Ingest and Retrieval operations as they relate to ECS's Data Server. This description forms the basis for the development of the scalability model of the data server and the methodology used to solve it.

  13. Stencil Lithography for Scalable Micro- and Nanomanufacturing

    Directory of Open Access Journals (Sweden)

    Ke Du

    2017-04-01

    Full Text Available In this paper, we review the current development of stencil lithography for scalable micro- and nanomanufacturing as a resistless and reusable patterning technique. We first introduce the motivation and advantages of stencil lithography for large-area micro- and nanopatterning. Then we review the progress of using rigid membranes such as SiNx and Si as stencil masks as well as stacking layers. We also review the current use of flexible membranes including a compliant SiNx membrane with springs, polyimide film, polydimethylsiloxane (PDMS layer, and photoresist-based membranes as stencil lithography masks to address problems such as blurring and non-planar surface patterning. Moreover, we discuss the dynamic stencil lithography technique, which significantly improves the patterning throughput and speed by moving the stencil over the target substrate during deposition. Lastly, we discuss the future advancement of stencil lithography for a resistless, reusable, scalable, and programmable nanolithography method.

  14. SPRNG Scalable Parallel Random Number Generator LIbrary

    Energy Technology Data Exchange (ETDEWEB)

    2010-03-16

    This revision corrects some errors in SPRNG 1. Users of newer SPRNG versions can obtain the corrected files and build their version with it. This version also improves the scalability of some of the application-based tests in the SPRNG test suite. It also includes an interface to a parallel Mersenne Twister, so that if users install the Mersenne Twister, then they can test this generator with the SPRNG test suite and also use some SPRNG features with that generator.
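
    The library's central contract, independent and reproducible random streams for each parallel process, can be illustrated with NumPy's seed-spawning machinery. This is an analogy of our own choosing, not the SPRNG API.

```python
# Independent, reproducible per-rank random streams: the property SPRNG
# provides to MPI codes, sketched here with NumPy rather than SPRNG itself.
import numpy as np

def make_streams(master_seed, nranks):
    root = np.random.SeedSequence(master_seed)
    return [np.random.default_rng(s) for s in root.spawn(nranks)]

streams = make_streams(2010, 4)              # one generator per "rank"
print([rng.random() for rng in streams])     # statistically independent draws
```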

  15. Bitcoin-NG: A Scalable Blockchain Protocol

    OpenAIRE

Eyal, Ittay; Gencer, Adem Efe; Sirer, Emin Gun; Renesse, Robbert

    2015-01-01

Cryptocurrencies, based on and led by Bitcoin, have shown promise as infrastructure for pseudonymous online payments, cheap remittance, trustless digital asset exchange, and smart contracts. However, Bitcoin-derived blockchain protocols have inherent scalability limits that trade off throughput against latency and withhold the realization of this potential. This paper presents Bitcoin-NG, a new blockchain protocol designed to scale. Based on Bitcoin's blockchain protocol, Bitcoin-NG is By...

  16. Stencil Lithography for Scalable Micro- and Nanomanufacturing

    OpenAIRE

    Ke Du; Junjun Ding; Yuyang Liu; Ishan Wathuthanthri; Chang-Hwan Choi

    2017-01-01

    In this paper, we review the current development of stencil lithography for scalable micro- and nanomanufacturing as a resistless and reusable patterning technique. We first introduce the motivation and advantages of stencil lithography for large-area micro- and nanopatterning. Then we review the progress of using rigid membranes such as SiNx and Si as stencil masks as well as stacking layers. We also review the current use of flexible membranes including a compliant SiNx membrane with spring...

  17. Scalable robotic biofabrication of tissue spheroids

    Energy Technology Data Exchange (ETDEWEB)

Mehesz, A Nagy; Hajdu, Z; Visconti, R P; Markwald, R R; Mironov, V [Advanced Tissue Biofabrication Center, Department of Regenerative Medicine and Cell Biology, Medical University of South Carolina, Charleston, SC (United States)]; Brown, J [Department of Mechanical Engineering, Clemson University, Clemson, SC (United States)]; Beaver, W [York Technical College, Rock Hill, SC (United States)]; Da Silva, J V L, E-mail: mironovv@musc.edu [Renato Archer Information Technology Center-CTI, Campinas (Brazil)]

    2011-06-15

    Development of methods for scalable biofabrication of uniformly sized tissue spheroids is essential for tissue spheroid-based bioprinting of large size tissue and organ constructs. The most recent scalable technique for tissue spheroid fabrication employs a micromolded recessed template prepared in a non-adhesive hydrogel, wherein the cells loaded into the template self-assemble into tissue spheroids due to gravitational force. In this study, we present an improved version of this technique. A new mold was designed to enable generation of 61 microrecessions in each well of a 96-well plate. The microrecessions were seeded with cells using an EpMotion 5070 automated pipetting machine. After 48 h of incubation, tissue spheroids formed at the bottom of each microrecession. To assess the quality of constructs generated using this technology, 600 tissue spheroids made by this method were compared with 600 spheroids generated by the conventional hanging drop method. These analyses showed that tissue spheroids fabricated by the micromolded method are more uniform in diameter. Thus, use of micromolded recessions in a non-adhesive hydrogel, combined with automated cell seeding, is a reliable method for scalable robotic fabrication of uniform-sized tissue spheroids.

  18. DISP: Optimizations towards Scalable MPI Startup

    Energy Technology Data Exchange (ETDEWEB)

Fu, Huansong [Florida State University, Tallahassee]; Pophale, Swaroop S [ORNL]; Gorentla Venkata, Manjunath [ORNL]; Yu, Weikuan [Florida State University, Tallahassee]

    2016-01-01

Despite the popularity of MPI for high performance computing, the startup of MPI programs faces a scalability challenge as both the execution time and memory consumption increase drastically at scale. We have examined this problem using the collective modules of Cheetah and Tuned in Open MPI as representative implementations. Previous improvements for collectives have focused on algorithmic advances and hardware off-load. In this paper, we examine the startup cost of the collective module within a communicator and explore various techniques to improve its efficiency and scalability. Accordingly, we have developed a new scalable startup scheme with three internal techniques, namely Delayed Initialization, Module Sharing and Prediction-based Topology Setup (DISP). Our DISP scheme greatly benefits the collective initialization of the Cheetah module. At the same time, it helps boost the performance of non-collective initialization in the Tuned module. We evaluate the performance of our implementation on the Titan supercomputer at ORNL with up to 4096 processes. The results show that our delayed initialization can speed up the startup of Tuned and Cheetah by an average of 32.0% and 29.2%, respectively, our module sharing can reduce the memory consumption of Tuned and Cheetah by up to 24.1% and 83.5%, respectively, and our prediction-based topology setup can speed up the startup of Cheetah by up to 80%.

  19. A scalable distributed RRT for motion planning

    KAUST Repository

    Jacobs, Sam Ade

    2013-05-01

Rapidly-exploring Random Tree (RRT), like other sampling-based motion planning methods, has been very successful in solving motion planning problems. Even so, sampling-based planners cannot solve all problems of interest efficiently, so attention is increasingly turning to parallelizing them. However, one challenge in parallelizing RRT is the global computation and communication overhead of nearest neighbor search, a key operation in RRTs. This is a critical issue as it limits the scalability of previous algorithms. We present two parallel algorithms to address this problem. The first algorithm extends existing work by introducing a parameter that adjusts how much local computation is done before a global update. The second algorithm radially subdivides the configuration space into regions, constructs a portion of the tree in each region in parallel, and connects the subtrees, removing cycles if they exist. By subdividing the space, we increase computation locality, enabling a scalable result. We show that our approaches are scalable. We present results demonstrating almost linear scaling to hundreds of processors on a Linux cluster and a Cray XE6 machine. © 2013 IEEE.
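
    As a reference point for what is being parallelized, the sketch below is a minimal serial RRT step (our own toy version, with no obstacle checking). The nearest-neighbor search in the `argmin` line is exactly the global operation whose cost the radial-subdivision algorithm localizes.

```python
# A minimal serial RRT growing in the box [lo, hi]^d; illustrative only.
import numpy as np

def rrt(start, n_iters, step, lo, hi, seed=0):
    rng = np.random.default_rng(seed)
    nodes, parent = [np.asarray(start, float)], {0: None}
    for _ in range(n_iters):
        q_rand = rng.uniform(lo, hi, size=len(start))  # random sample
        near = int(np.argmin([np.linalg.norm(q_rand - q) for q in nodes]))
        d = q_rand - nodes[near]                       # steer toward sample
        q_new = nodes[near] + step * d / (np.linalg.norm(d) + 1e-12)
        parent[len(nodes)] = near                      # grow the tree
        nodes.append(q_new)
    return nodes, parent

nodes, parent = rrt([0.0, 0.0], n_iters=500, step=0.1, lo=-1.0, hi=1.0)
```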

  20. Numeric Analysis for Relationship-Aware Scalable Streaming Scheme

    Directory of Open Access Journals (Sweden)

    Heung Ki Lee

    2014-01-01

    Full Text Available Frequent packet loss of media data is a critical problem that degrades the quality of streaming services over mobile networks. Packet loss invalidates frames containing lost packets and other related frames at the same time. Indirect loss caused by losing packets decreases the quality of streaming. A scalable streaming service can decrease the amount of dropped multimedia resulting from a single packet loss. Content providers typically divide one large media stream into several layers through a scalable streaming service and then provide each scalable layer to the user depending on the mobile network. Also, a scalable streaming service makes it possible to decode partial multimedia data depending on the relationship between frames and layers. Therefore, a scalable streaming service provides a way to decrease the wasted multimedia data when one packet is lost. However, the hierarchical structure between frames and layers of scalable streams determines the service quality of the scalable streaming service. Even if whole packets of layers are transmitted successfully, they cannot be decoded as a result of the absence of reference frames and layers. Therefore, the complicated relationship between frames and layers in a scalable stream increases the volume of abandoned layers. For providing a high-quality scalable streaming service, we choose a proper relationship between scalable layers as well as the amount of transmitted multimedia data depending on the network situation. We prove that a simple scalable scheme outperforms a complicated scheme in an error-prone network. We suggest an adaptive set-top box (AdaptiveSTB to lower the dependency between scalable layers in a scalable stream. Also, we provide a numerical model to obtain the indirect loss of multimedia data and apply it to various multimedia streams. Our AdaptiveSTB enhances the quality of a scalable streaming service by removing indirect loss.
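
    The indirect-loss accounting at the heart of this argument fits in a few lines: a layer contributes to decoded quality only if every frame or layer it references also arrived. A sketch, with dependency edges that are illustrative rather than taken from the paper:

```python
# Which received layers are actually decodable, given reference dependencies?
# One lost base layer invalidates the whole dependent subtree (indirect loss).
def decodable(received, deps):
    """received: set of layer ids that arrived; deps: layer -> referenced layers."""
    ok = set()
    def check(layer):
        if layer in ok:
            return True
        if layer not in received:
            return False
        if all(check(d) for d in deps.get(layer, [])):
            ok.add(layer)
            return True
        return False
    for layer in received:
        check(layer)
    return ok

deps = {"E1": ["B"], "E2": ["E1"]}           # enhancement layers reference down
print(decodable({"E1", "E2"}, deps))         # base "B" lost -> set(): all wasted
print(decodable({"B", "E1", "E2"}, deps))    # everything decodable
```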

  1. Scalable and balanced dynamic hybrid data assimilation

    Science.gov (United States)

    Kauranne, Tuomo; Amour, Idrissa; Gunia, Martin; Kallio, Kari; Lepistö, Ahti; Koponen, Sampsa

    2017-04-01

    Scalability of complex weather forecasting suites is dependent on the technical tools available for implementing highly parallel computational kernels, but to an equally large extent also on the dependence patterns between various components of the suite, such as observation processing, data assimilation and the forecast model. Scalability is a particular challenge for 4D variational assimilation methods that necessarily couple the forecast model into the assimilation process and subject this combination to an inherently serial quasi-Newton minimization process. Ensemble based assimilation methods are naturally more parallel, but large models force ensemble sizes to be small and that results in poor assimilation accuracy, somewhat akin to shooting with a shotgun in a million-dimensional space. The Variational Ensemble Kalman Filter (VEnKF) is an ensemble method that can attain the accuracy of 4D variational data assimilation with a small ensemble size. It achieves this by processing a Gaussian approximation of the current error covariance distribution, instead of a set of ensemble members, analogously to the Extended Kalman Filter EKF. Ensemble members are re-sampled every time a new set of observations is processed from a new approximation of that Gaussian distribution which makes VEnKF a dynamic assimilation method. After this a smoothing step is applied that turns VEnKF into a dynamic Variational Ensemble Kalman Smoother VEnKS. In this smoothing step, the same process is iterated with frequent re-sampling of the ensemble but now using past iterations as surrogate observations until the end result is a smooth and balanced model trajectory. In principle, VEnKF could suffer from similar scalability issues as 4D-Var. However, this can be avoided by isolating the forecast model completely from the minimization process by implementing the latter as a wrapper code whose only link to the model is calling for many parallel and totally independent model runs, all of them
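
    Schematically, the cycle described above amounts to an EKF-style analysis of the Gaussian approximation followed by re-sampling of the ensemble from it. In generic Kalman filter notation (ours, not necessarily the authors' formulation):

```latex
\begin{align}
\mathbf{K}   &= \mathbf{P}^{f}\mathbf{H}^{\top}
               \left(\mathbf{H}\mathbf{P}^{f}\mathbf{H}^{\top}+\mathbf{R}\right)^{-1}, &
\mathbf{x}^{a} &= \mathbf{x}^{f} + \mathbf{K}\left(\mathbf{y}-\mathbf{H}\mathbf{x}^{f}\right),\\
\mathbf{P}^{a} &= \left(\mathbf{I}-\mathbf{K}\mathbf{H}\right)\mathbf{P}^{f}, &
\mathbf{x}_{i} &\sim \mathcal{N}\!\left(\mathbf{x}^{a},\mathbf{P}^{a}\right),
\quad i = 1,\dots,N .
\end{align}
```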

  2. High-flux ionic diodes, ionic transistors and ionic amplifiers based on external ion concentration polarization by an ion exchange membrane: a new scalable ionic circuit platform†

    Science.gov (United States)

    Sun, Gongchen; Senapati, Satyajyoti

    2016-01-01

    A microfluidic-ion exchange membrane hybrid chip is fabricated by polymer-based, lithography-free methods to achieve ionic diode, transistor and amplifier functionalities with the same four-terminal design. The high ionic flux (> 100 μA) feature of the chip can enable a scalable integrated ionic circuit platform for micro-total-analytical systems. PMID:26960551

  3. Scalable TCP-friendly Video Distribution for Heterogeneous Clients

    Science.gov (United States)

    Zink, Michael; Griwodz, Carsten; Schmitt, Jens; Steinmetz, Ralf

    2003-01-01

This paper investigates an architecture and implementation for the use of a TCP-friendly protocol in a scalable video distribution system for hierarchically encoded layered video. The design supports a variety of heterogeneous clients, because recent developments have shown that access network and client capabilities differ widely in today's Internet. The distribution system presented here consists of video servers, proxy caches and clients that make use of TCP-friendly rate control (TFRC) to perform congestion-controlled streaming of layer-encoded video. The data transfer protocol of the system is RTP compliant, yet it integrates protocol elements for congestion control with protocol elements for the retransmission that is necessary for lossless transfer of content into proxy caches. The control protocol RTSP is used to negotiate capabilities, such as support for congestion control or retransmission. Through tests performed with our experimental platform, both in the lab and over the Internet, we show that congestion-controlled streaming of layer-encoded video through proxy caches is a valid means of supporting heterogeneous clients. We show that filtering of layers depending on a TFRC-controlled permissible bandwidth allows the preferred delivery of the most relevant layers to end-systems while additional layers can be delivered to the cache server. We also experiment with uncontrolled delivery from the proxy cache to the client, which may result in random loss and bandwidth waste but also a higher goodput, and compare these two approaches.

  4. Scalable and responsive event processing in the cloud

    Science.gov (United States)

    Suresh, Visalakshmi; Ezhilchelvan, Paul; Watson, Paul

    2013-01-01

    Event processing involves continuous evaluation of queries over streams of events. Response-time optimization is traditionally done over a fixed set of nodes and/or by using metrics measured at query-operator levels. Cloud computing makes it easy to acquire and release computing nodes as required. Leveraging this flexibility, we propose a novel, queueing-theory-based approach for meeting specified response-time targets against fluctuating event arrival rates by drawing only the necessary amount of computing resources from a cloud platform. In the proposed approach, the entire processing engine of a distinct query is modelled as an atomic unit for predicting response times. Several such units hosted on a single node are modelled as a multiple class M/G/1 system. These aspects eliminate intrusive, low-level performance measurements at run-time, and also offer portability and scalability. Using model-based predictions, cloud resources are efficiently used to meet response-time targets. The efficacy of the approach is demonstrated through cloud-based experiments. PMID:23230164
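
    The single-class version of such a closed-form prediction is the classical Pollaczek-Khinchine result for an M/G/1 queue; the paper's multi-class M/G/1 model refines this per query class. A sketch:

```python
# Mean response time of an M/G/1 queue (Pollaczek-Khinchine): the kind of
# model-based prediction that replaces intrusive run-time measurement.
def mg1_response_time(lam, es, es2):
    """lam: arrival rate; es: E[S], mean service time; es2: E[S^2]."""
    rho = lam * es                        # utilization, must stay below 1
    assert rho < 1, "node saturated: draw another node from the cloud"
    wait = lam * es2 / (2 * (1 - rho))    # mean queueing delay
    return es + wait                      # response time = service + wait

# 50 events/s, 10 ms mean service, exponential service (E[S^2] = 2 E[S]^2):
print(mg1_response_time(50, 0.010, 2 * 0.010**2))   # -> ~0.020 s
```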

  5. Scalable, ultra-resistant structural colors based on network metamaterials

    KAUST Repository

    Galinski, Henning

    2017-05-05

    Structural colors have drawn wide attention for their potential as a future printing technology for various applications, ranging from biomimetic tissues to adaptive camouflage materials. However, an efficient approach to realize robust colors with a scalable fabrication technique is still lacking, hampering the realization of practical applications with this platform. Here, we develop a new approach based on large-scale network metamaterials that combine dealloyed subwavelength structures at the nanoscale with lossless, ultra-thin dielectric coatings. By using theory and experiments, we show how subwavelength dielectric coatings control a mechanism of resonant light coupling with epsilon-near-zero regions generated in the metallic network, generating the formation of saturated structural colors that cover a wide portion of the spectrum. Ellipsometry measurements support the efficient observation of these colors, even at angles of 70°. The network-like architecture of these nanomaterials allows for high mechanical resistance, which is quantified in a series of nano-scratch tests. With such remarkable properties, these metastructures represent a robust design technology for real-world, large-scale commercial applications.

  6. SAChES: Scalable Adaptive Chain-Ensemble Sampling.

    Energy Technology Data Exchange (ETDEWEB)

Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]; Ray, Jaideep [Sandia National Lab. (SNL-CA), Livermore, CA (United States)]; Ebeida, Mohamed Salah [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]; Huang, Maoyi [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)]; Hou, Zhangshuan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)]; Bao, Jie [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)]; Ren, Huiying [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)]

    2017-08-01

We present the development of a parallel Markov Chain Monte Carlo (MCMC) method called SAChES, Scalable Adaptive Chain-Ensemble Sampling. This capability is targeted at Bayesian calibration of computationally expensive simulation models. SAChES involves a hybrid of two methods: Differential Evolution Monte Carlo followed by Adaptive Metropolis. Both methods involve parallel chains. Differential evolution allows one to explore high-dimensional parameter spaces using loosely coupled (i.e., largely asynchronous) chains. Loose coupling allows the use of large chain ensembles, with far more chains than the number of parameters to explore. This reduces the per-chain sampling burden and enables high-dimensional inversions and the use of computationally expensive forward models. The large number of chains can also ameliorate the impact of silent errors, which may affect only a few chains. The chain ensemble can also be sampled to provide an initial condition when an aberrant chain is re-spawned. Adaptive Metropolis takes the best points from the differential evolution and efficiently hones in on the posterior density. The multitude of chains in SAChES is leveraged to (1) enable efficient exploration of the parameter space; and (2) ensure robustness to silent errors, which may be unavoidable in the extreme-scale computational platforms of the future. This report outlines SAChES, describes four papers that are the result of the project, and discusses some additional results.
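
    A minimal sketch of the Differential Evolution Monte Carlo stage, in which each chain proposes a jump along the difference of two other randomly chosen chains, shown on a toy Gaussian target (the real targets are expensive simulation models, and all names here are illustrative):

```python
# One Differential Evolution Monte Carlo sweep over an ensemble of chains.
import numpy as np

def demc_step(chains, log_post, gamma, rng):
    n, d = chains.shape
    for i in range(n):
        a, b = rng.choice([k for k in range(n) if k != i], 2, replace=False)
        prop = chains[i] + gamma * (chains[a] - chains[b]) + rng.normal(0, 1e-4, d)
        if np.log(rng.random()) < log_post(prop) - log_post(chains[i]):
            chains[i] = prop                    # Metropolis accept/reject
    return chains

rng = np.random.default_rng(1)
log_post = lambda x: -0.5 * x @ x               # toy standard-normal posterior
chains = rng.normal(size=(16, 3))               # far more chains than parameters
for _ in range(100):
    chains = demc_step(chains, log_post, gamma=2.38 / np.sqrt(2 * 3), rng=rng)
```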

  7. Microfluidic platforms for lab-on-a-chip applications.

    Science.gov (United States)

    Haeberle, Stefan; Zengerle, Roland

    2007-09-01

We review microfluidic platforms that enable the miniaturization, integration and automation of biochemical assays. Nowadays a nearly unmanageable variety of alternative approaches exists that can do this in principle. Here we focus only on those platforms that allow the performance of a set of microfluidic functions (defined as microfluidic unit operations) which can be easily combined within a well-defined and consistent fabrication technology to implement application-specific biochemical assays in an easy, flexible and ideally monolithic way. The microfluidic platforms discussed in the following are capillary test strips, also known as lateral flow assays, the "microfluidic large scale integration" approach, centrifugal microfluidics, the electrokinetic platform, pressure driven droplet based microfluidics, electrowetting based microfluidics, SAW driven microfluidics and, last but not least, "free scalable non-contact dispensing". The microfluidic unit operations discussed within those platforms are fluid transport, metering, mixing, switching, incubation, separation, droplet formation, droplet splitting, nL and pL dispensing, and detection.

  8. Progression in the Fountain Pen Approach: From 2D Writing to 3D Free-Form Micro/Nanofabrication.

    Science.gov (United States)

    Je, Jung Ho; Kim, Jong-Man; Jaworski, Justyn

    2017-01-01

    The fountain pen approach, as a means for transferring materials to substrates, has shown numerous incarnations in recent years for creating 2D micro/nanopatterns and even generating 3D free-form nanostructures using a variety of material "inks". While the idea of filled reservoirs used to deliver material to a substrate via a capillary remains unchanged since antiquity, the advent of precise micromanipulation systems and functional material "inks" allows the extension of this mechanism to more high-tech applications. Herein, the recent growth in meniscus guided fountain pen approaches for benchtop micro/nanofabrication, which has occurred in the last decade, is discussed. Particular attention is given to the theory, equipment, and experimentation encompassing this unique direct writing approach. A detailed exploration of the diverse ink systems and functional device applications borne from this strategy is put forth to reveal its rapid expansion to a broad range of scientific and engineering disciplines. As such, this informative review is provided for researchers considering adoption of this recent advancement of a familiar technology. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Size-controlled conformal nanofabrication of biotemplated three-dimensional TiO2 and ZnO nanonetworks

    Science.gov (United States)

    Ceylan, Hakan; Ozgit-Akgun, Cagla; Erkal, Turan S.; Donmez, Inci; Garifullin, Ruslan; Tekinay, Ayse B.; Usta, Hakan; Biyikli, Necmi; Guler, Mustafa O.

    2013-01-01

A solvent-free fabrication of TiO2 and ZnO nanonetworks is demonstrated by using supramolecular nanotemplates with high coating conformity, uniformity, and atomic scale size control. Deposition of TiO2 and ZnO on a three-dimensional nanofibrous network template is accomplished. Ultrafine control over nanotube diameter allows robust and systematic evaluation of the electrochemical properties of TiO2 and ZnO nanonetworks in terms of the size-function relationship. We observe a hypsochromic shift in the UV absorbance maxima correlated with decreasing wall thickness of the nanotubes. The photocatalytic activities of anatase TiO2 and hexagonal wurtzite ZnO nanonetworks are found to depend on both the wall thickness, which governs the band gap energies and hence the photoexcitation properties, and the total surface area per unit of mass. The present work is a successful example of nanofabrication of intact three-dimensional semiconductor nanonetworks with controlled band gap energies. PMID:23892593

  10. The conquest of middle-earth: combining top-down and bottom-up nanofabrication for constructing nanoparticle based devices.

    Science.gov (United States)

    Diaz Fernandez, Yuri A; Gschneidtner, Tina A; Wadell, Carl; Fornander, Louise H; Lara Avila, Samuel; Langhammer, Christoph; Westerlund, Fredrik; Moth-Poulsen, Kasper

    2014-12-21

    The development of top-down nanofabrication techniques has opened many possibilities for the design and realization of complex devices based on single molecule phenomena such as e.g. single molecule electronic devices. These impressive achievements have been complemented by the fundamental understanding of self-assembly phenomena, leading to bottom-up strategies to obtain hybrid nanomaterials that can be used as building blocks for more complex structures. In this feature article we highlight some relevant published work as well as present new experimental results, illustrating the versatility of self-assembly methods combined with top-down fabrication techniques for solving relevant challenges in modern nanotechnology. We present recent developments on the use of hierarchical self-assembly methods to bridge the gap between sub-nanometer and micrometer length scales. By the use of non-covalent self-assembly methods, we show that we are able to control the positioning of nanoparticles on surfaces, and to address the deterministic assembly of nano-devices with potential applications in plasmonic sensing and single-molecule electronics experiments.

  11. Nanofabrication of 10-nm T-shaped gates using a double patterning process with electron beam lithography and dry etch

    Science.gov (United States)

    Shao, Jinhai; Deng, Jianan; Lu, W.; Chen, Yifang

    2017-07-01

A process to fabricate T-shaped gates with the footprint scaling down to 10 nm using a double patterning procedure is reported. One of the keys in this process is to separate the definition of the footprint from that of the gate-head, so that the proximity effect originating from electron forward scattering in the resist is significantly minimized, enabling us to achieve a foot width as narrow as 10 nm. Furthermore, in contrast to the reported technique for a 10-nm T-shaped profile in resist, this process utilizes a metallic film with a nanoslit as an etch mask to form a well-defined 10-nm-wide foot in a SiNx layer by reactive ion etch. Such a double patterning process has demonstrated enhanced reliability. The detailed process is comprehensively described, and its advantages and limitations are discussed. Nanofabrication of InP-based high-electron-mobility transistors using the developed process for 10- to 20-nm T-shaped gates is currently under way.

  12. Combined Scalable Video Coding Method for Wireless Transmission

    Directory of Open Access Journals (Sweden)

    Achmad Affandi

    2011-08-01

Full Text Available Mobile video streaming is one of the multimedia services that has developed very rapidly. Recently, bandwidth utilization for wireless transmission has been the main problem in the field of multimedia communications. In this research, we offer a combination of scalable methods as the most attractive solution to this problem. A scalable method for wireless communication should adapt to the input video sequence. The ITU (International Telecommunication Union) standard Joint Scalable Video Model (JSVM) is employed to produce a combined scalable video coding (CSVC) method that matches the required quality of video streaming services for wireless transmission. The investigation in this paper shows that the combined scalable technique outperforms the non-scalable one in its use of bit rate capacity at a given layer.

  13. Towards a Scalable, Biomimetic, Antibacterial Coating

    Science.gov (United States)

    Dickson, Mary Nora

Corneal afflictions are the second leading cause of blindness worldwide. When a corneal transplant is unavailable or contraindicated, an artificial cornea device is the only chance to save sight. Bacterial or fungal biofilm build-up on artificial cornea devices can lead to serious complications, including the need for systemic antibiotic treatment and even explantation. As a result, much emphasis has been placed on anti-adhesion chemical coatings and antibiotic-leaching coatings. These methods are not long-lasting, and microorganisms can eventually circumvent these measures. Thus, I have developed a surface topographical antimicrobial coating. Various surface structures, including rough surfaces, superhydrophobic surfaces, and the natural surfaces of insects' wings and sharks' skin, are promising anti-biofilm candidates; however, none meet the criteria necessary for implementation on the surface of an artificial cornea device. In this thesis I: 1) developed scalable fabrication protocols for a library of biomimetic nanostructure polymer surfaces; 2) assessed the potential of these poly(methyl methacrylate) nanopillars to kill or prevent formation of biofilm by E. coli bacteria and species of Pseudomonas and Staphylococcus bacteria, and improved upon a proposed mechanism for the rupture of Gram-negative bacterial cell walls; 3) developed a scalable, commercially viable method for producing antibacterial nanopillars on a curved, PMMA artificial cornea device; and 4) developed scalable fabrication protocols for producing antibacterial nanopatterned surfaces on thermoplastic polyurethane materials, commonly used in catheter tubings. This project constitutes a first step towards fabrication of the first entirely PMMA artificial cornea device. The major finding of this work is that by precisely controlling the topography of a polymer surface at the nano-scale, we can kill adherent bacteria and prevent biofilm formation by certain pathogenic bacteria.

  14. Programming Scala Scalability = Functional Programming + Objects

    CERN Document Server

    Wampler, Dean

    2009-01-01

    Learn how to be more productive with Scala, a new multi-paradigm language for the Java Virtual Machine (JVM) that integrates features of both object-oriented and functional programming. With this book, you'll discover why Scala is ideal for highly scalable, component-based applications that support concurrency and distribution. Programming Scala clearly explains the advantages of Scala as a JVM language. You'll learn how to leverage the wealth of Java class libraries to meet the practical needs of enterprise and Internet projects more easily. Packed with code examples, this book provides us

  15. Scalable and Anonymous Group Communication with MTor

    Directory of Open Access Journals (Sweden)

    Lin Dong

    2016-04-01

Full Text Available This paper presents MTor, a low-latency anonymous group communication system. We construct MTor as an extension to Tor, allowing the construction of multi-source multicast trees on top of the existing Tor infrastructure. MTor does not depend on an external service to broker the group communication, and avoids central points of failure and trust. MTor's substantial bandwidth savings and graceful scalability enable new classes of anonymous applications that are currently too bandwidth-intensive to be viable through traditional unicast Tor communication, e.g., group file transfer, collaborative editing, streaming video, and real-time audio conferencing.

  16. Scalable conditional induction variables (CIV) analysis

    DEFF Research Database (Denmark)

    Oancea, Cosmin Eugen; Rauchwerger, Lawrence

    2015-01-01

representation. Our technique requires no modifications of our dependence tests, which are agnostic to the original shape of the subscripts, and is more powerful than previously reported dependence tests that rely on the pairwise disambiguation of read-write references. We have implemented the CIV analysis in our... parallelizing compiler and evaluated its impact on five Fortran benchmarks. We have found that there are many important loops using CIV subscripts and that our analysis can lead to their scalable parallelization. This in turn has led to the parallelization of the benchmark programs they appear in....

  17. A New, Scalable and Low Cost Multi-Channel Monitoring System for Polymer Electrolyte Fuel Cells

    Directory of Open Access Journals (Sweden)

    Antonio José Calderón

    2016-03-01

    Full Text Available In this work a new, scalable and low cost multi-channel monitoring system for Polymer Electrolyte Fuel Cells (PEFCs has been designed, constructed and experimentally validated. This developed monitoring system performs non-intrusive voltage measurement of each individual cell of a PEFC stack and it is scalable, in the sense that it is capable to carry out measurements in stacks from 1 to 120 cells (from watts to kilowatts. The developed system comprises two main subsystems: hardware devoted to data acquisition (DAQ and software devoted to real-time monitoring. The DAQ subsystem is based on the low-cost open-source platform Arduino and the real-time monitoring subsystem has been developed using the high-level graphical language NI LabVIEW. Such integration can be considered a novelty in scientific literature for PEFC monitoring systems. An original amplifying and multiplexing board has been designed to increase the Arduino input port availability. Data storage and real-time monitoring have been performed with an easy-to-use interface. Graphical and numerical visualization allows a continuous tracking of cell voltage. Scalability, flexibility, easy-to-use, versatility and low cost are the main features of the proposed approach. The system is described and experimental results are presented. These results demonstrate its suitability to monitor the voltage in a PEFC at cell level.

  18. A New, Scalable and Low Cost Multi-Channel Monitoring System for Polymer Electrolyte Fuel Cells

    Science.gov (United States)

    Calderón, Antonio José; González, Isaías; Calderón, Manuel; Segura, Francisca; Andújar, José Manuel

    2016-01-01

    In this work a new, scalable and low cost multi-channel monitoring system for Polymer Electrolyte Fuel Cells (PEFCs) has been designed, constructed and experimentally validated. This developed monitoring system performs non-intrusive voltage measurement of each individual cell of a PEFC stack and it is scalable, in the sense that it is capable to carry out measurements in stacks from 1 to 120 cells (from watts to kilowatts). The developed system comprises two main subsystems: hardware devoted to data acquisition (DAQ) and software devoted to real-time monitoring. The DAQ subsystem is based on the low-cost open-source platform Arduino and the real-time monitoring subsystem has been developed using the high-level graphical language NI LabVIEW. Such integration can be considered a novelty in scientific literature for PEFC monitoring systems. An original amplifying and multiplexing board has been designed to increase the Arduino input port availability. Data storage and real-time monitoring have been performed with an easy-to-use interface. Graphical and numerical visualization allows a continuous tracking of cell voltage. Scalability, flexibility, easy-to-use, versatility and low cost are the main features of the proposed approach. The system is described and experimental results are presented. These results demonstrate its suitability to monitor the voltage in a PEFC at cell level. PMID:27005630

  19. Scalable Engineering of Quantum Optical Information Processing Architectures (SEQUOIA)

    Science.gov (United States)

    2016-12-13

Final R&D status report (13 December 2016) for "Scalable Engineering of Quantum Optical Information-Processing Architectures (SEQUOIA)", contract number W31-P4Q-15-C-0045, covering a scalable architecture for LOQC and cluster-state quantum computing (ballistic or non-ballistic) with parametric nonlinearities (Kerr, chi-2, ...).

  20. Big data integration: scalability and sustainability

    KAUST Repository

    Zhang, Zhang

    2016-01-26

Integration of various types of omics data is critically indispensable for addressing most important and complex biological questions. In the era of big data, however, data integration becomes increasingly tedious, time-consuming and expensive, posing a significant obstacle to fully exploiting the wealth of big biological data. Here we propose a scalable and sustainable architecture that integrates big omics data through community-contributed modules. Community modules are contributed and maintained by different committed groups; each module corresponds to a specific data type, deals with data collection, processing and visualization, and delivers data on demand via web services. Based on this community-based architecture, we build Information Commons for Rice (IC4R; http://ic4r.org), a rice knowledgebase that integrates a variety of rice omics data from multiple community modules, including genome-wide expression profiles derived entirely from RNA-Seq data, resequencing-based genomic variations obtained from re-sequencing data of thousands of rice varieties, plant homologous genes covering multiple diverse plant species, post-translational modifications, rice-related literature, and community annotations. Taken together, such an architecture achieves integration of different types of data from multiple community-contributed modules and accordingly features scalable, sustainable and collaborative integration of big data as well as low costs for database update and maintenance, thus being helpful for building IC4R into a comprehensive knowledgebase covering all aspects of rice data and beneficial for both basic and translational research.

  1. Using MPI to Implement Scalable Libraries

    Science.gov (United States)

    Lusk, Ewing

MPI is an instantiation of a general-purpose programming model, and high-performance implementations of the MPI standard have provided scalability for a wide range of applications. Ease of use was not an explicit goal of the MPI design process, which emphasized completeness, portability, and performance. Thus it is not surprising that MPI is occasionally criticized for being inconvenient to use and thus a drag on software developer productivity. One approach to the productivity issue is to use MPI to implement simpler programming models. Such models may limit the range of parallel algorithms that can be expressed, yet provide sufficient generality to benefit a significant number of applications, even from different domains. We illustrate this concept with the ADLB (Asynchronous, Dynamic Load-Balancing) library, which can be used to express manager/worker algorithms in such a way that their execution is scalable, even on the largest machines. ADLB makes sophisticated use of MPI functionality while providing an extremely simple API for the application programmer. We will describe it in the context of solving Sudoku puzzles and a nuclear physics Monte Carlo application currently running on tens of thousands of processors.
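
    The pattern that ADLB packages behind its very small API looks roughly like the mpi4py sketch below. This is an illustration of manager/worker load balancing only, not ADLB's actual interface (ADLB is a C library, and its implementation is far more sophisticated).

```python
# Manager/worker task farming with MPI; run with e.g. `mpiexec -n 4 python x.py`.
from mpi4py import MPI

WORK, STOP = 0, 1
comm = MPI.COMM_WORLD

if comm.rank == 0:                               # manager: hand out tasks
    tasks, active = list(range(20)), comm.size - 1
    status = MPI.Status()
    while active:
        comm.recv(source=MPI.ANY_SOURCE, status=status)   # result or hello
        if tasks:
            comm.send(tasks.pop(), dest=status.source, tag=WORK)
        else:
            comm.send(None, dest=status.source, tag=STOP)
            active -= 1
else:                                            # worker: request, compute, repeat
    comm.send(None, dest=0)                      # announce readiness
    while True:
        status = MPI.Status()
        task = comm.recv(source=0, status=status)
        if status.tag == STOP:
            break
        comm.send(task * task, dest=0)           # "solve" task, report back
```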

  2. Scalable fast multipole accelerated vortex methods

    KAUST Repository

    Hu, Qi

    2014-05-01

The fast multipole method (FMM) is often used to accelerate the calculation of particle interactions in particle-based methods that simulate incompressible flows. To evaluate the most time-consuming kernels (the Biot-Savart equation and the stretching term of the vorticity equation), we mathematically reformulated them so that only two Laplace scalar potentials are used instead of six, automatically ensuring divergence-free far-field computation. Based on this formulation, we developed a new FMM-based vortex method on heterogeneous architectures, which distributes the work between multicore CPUs and GPUs to best utilize the hardware resources and achieve excellent scalability. The algorithm uses new data structures which can dynamically manage inter-node communication and load balance efficiently, with only a small parallel construction overhead. This algorithm can scale to large-sized clusters, showing both strong and weak scalability. Careful error and timing trade-off analysis is also performed for the cutoff functions induced by the vortex particle method. Our implementation can perform one time step of the velocity+stretching calculation for one billion particles on 32 nodes in 55.9 seconds, which yields 49.12 Tflop/s.
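
    For reference, the two kernels named above, written for N vortex particles with strengths α_j (this is the generic unregularized form; smoothing and sign conventions vary, and the authors' two-potential reformulation is not reproduced here):

```latex
\begin{align}
\mathbf{u}(\mathbf{x}_i) &= -\frac{1}{4\pi}\sum_{j=1}^{N}
  \frac{(\mathbf{x}_i-\mathbf{x}_j)\times\boldsymbol{\alpha}_j}
       {\left|\mathbf{x}_i-\mathbf{x}_j\right|^{3}}
  && \text{(Biot-Savart velocity)},\\
\frac{d\boldsymbol{\alpha}_i}{dt} &=
  \left(\boldsymbol{\alpha}_i\cdot\nabla\right)\mathbf{u}(\mathbf{x}_i)
  && \text{(vortex stretching)}.
\end{align}
```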

  3. Using the scalable nonlinear equations solvers package

    Energy Technology Data Exchange (ETDEWEB)

    Gropp, W.D.; McInnes, L.C.; Smith, B.F.

    1995-02-01

SNES (Scalable Nonlinear Equations Solvers) is a software package for the numerical solution of large-scale systems of nonlinear equations on both uniprocessors and parallel architectures. SNES also contains a component for the solution of unconstrained minimization problems, called SUMS (Scalable Unconstrained Minimization Solvers). Newton-like methods, which are known for their efficiency and robustness, constitute the core of the package. As part of the multilevel PETSc library, SNES incorporates many features and options from other parts of PETSc. In keeping with the spirit of the PETSc library, the nonlinear solution routines are data-structure-neutral, making them flexible and easily extensible. This user's guide contains a detailed description of uniprocessor usage of SNES, with some added comments regarding multiprocessor usage. At this time the parallel version is undergoing refinement and extension, as we work toward a common interface for the uniprocessor and parallel cases. Thus, forthcoming versions of the software will contain additional features, and changes to the parallel interface may occur at any time. The new parallel version will employ the MPI (Message Passing Interface) standard for interprocessor communication. Since most of these details will be hidden, users will need to perform only minimal message-passing programming.
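
    For intuition about the Newton-like core that the guide documents, here is a toy dense-Jacobian damped Newton iteration of our own; SNES itself is data-structure-neutral, parallel, and driven through the PETSc API rather than anything resembling this sketch.

```python
# A toy damped Newton solver for F(x) = 0 with an analytic Jacobian J.
import numpy as np

def newton(F, J, x, tol=1e-10, max_iter=50):
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        dx = np.linalg.solve(J(x), -r)            # Newton direction
        lam = 1.0
        while np.linalg.norm(F(x + lam * dx)) > np.linalg.norm(r) and lam > 1e-4:
            lam *= 0.5                            # crude line-search damping
        x = x + lam * dx
    return x

F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] - x[1]])
J = lambda x: np.array([[2 * x[0], 1.0], [1.0, -1.0]])
print(newton(F, J, np.array([1.0, 0.0])))         # converges to a root
```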

  4. Scalability Optimization of Seamless Positioning Service

    Directory of Open Access Journals (Sweden)

    Juraj Machaj

    2016-01-01

Full Text Available Recently, positioning services have been receiving more attention, not only within the research community but also from service providers. From the service provider's point of view, a positioning service that is able to work seamlessly in all environments, for example, indoor, dense urban, and rural, has a huge potential to open new markets. However, such a system not only needs to provide accurate position estimates but also has to be scalable and resistant to fake positioning requests. In previous works we have proposed a modular system which is able to provide seamless positioning in various environments. The system automatically selects the optimal positioning module based on available radio signals. The system currently consists of three positioning modules: GPS, GSM based positioning, and Wi-Fi based positioning. In this paper we propose an algorithm which reduces the time needed for position estimation and thus improves the scalability of the modular system, allowing positioning services to be provided to a larger number of users. Such an improvement is extremely important for real-world applications where a large number of users will request position estimates, since positioning error is affected by the response time of the positioning server.
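
    The module-selection rule described above reduces to a priority scan over whatever radio signals are currently observable. A sketch with illustrative priorities:

```python
# Pick the best available positioning module: GPS first, then Wi-Fi, then GSM.
PRIORITY = ["GPS", "WiFi", "GSM"]                # illustrative accuracy order

def select_module(available_signals):
    for module in PRIORITY:
        if module in available_signals:
            return module
    raise RuntimeError("no positioning signal available")

print(select_module({"GSM", "WiFi"}))            # indoors, no GPS fix -> "WiFi"
```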

  5. An Open Infrastructure for Scalable, Reconfigurable Analysis

    Energy Technology Data Exchange (ETDEWEB)

    de Supinski, B R; Fowler, R; Gamblin, T; Mueller, F; Ratn, P; Schulz, M

    2008-05-15

    Petascale systems will have hundreds of thousands of processor cores so their applications must be massively parallel. Effective use of petascale systems will require efficient interprocess communication through memory hierarchies and complex network topologies. Tools to collect and analyze detailed data about this communication would facilitate its optimization. However, several factors complicate tool design. First, large-scale runs on petascale systems will be a precious commodity, so scalable tools must have almost no overhead. Second, the volume of performance data from petascale runs could easily overwhelm hand analysis and, thus, tools must collect only data that is relevant to diagnosing performance problems. Analysis must be done in-situ, when available processing power is proportional to the data. We describe a tool framework that overcomes these complications. Our approach allows application developers to combine existing techniques for measurement, analysis, and data aggregation to develop application-specific tools quickly. Dynamic configuration enables application developers to select exactly the measurements needed and generic components support scalable aggregation and analysis of this data with little additional effort.

  6. Highly scalable Ab initio genomic motif identification

    KAUST Repository

    Marchand, Benoit

    2011-01-01

We present results of scaling an ab initio motif family identification system, Dragon Motif Finder (DMF), to 65,536 processor cores of IBM Blue Gene/P. DMF seeks groups of mutually similar polynucleotide patterns within a set of genomic sequences and builds various motif families from them. Such information is of relevance to many problems in life sciences. Prior attempts to scale such ab initio motif-finding algorithms achieved limited success. We solve the scalability issues using a combination of mixed-mode MPI-OpenMP parallel programming, master-slave work assignment, multi-level workload distribution, multi-level MPI collectives, and serial optimizations. While the scalability of our algorithm was excellent (94% parallel efficiency on 65,536 cores relative to 256 cores on a modest-size problem), the final speedup with respect to the original serial code exceeded 250,000 when serial optimizations are included. This enabled us to carry out many large-scale ab initio motif-finding simulations in a few hours while the original serial code would have needed decades of execution time. Copyright 2011 ACM.

  7. Nanofabrication for On-Chip Optical Levitation, Atom-Trapping, and Superconducting Quantum Circuits

    Science.gov (United States)

    Norte, Richard Alexander

a final value of Q_m = 5.8(1.1) x 10^5, representing more than an order of magnitude improvement over the conventional limits of SiO2 for a pendulum geometry. Our technique may enable new opportunities for mechanical sensing and facilitate observations of quantum behavior in this class of mechanical systems. We then give a detailed overview of the techniques used to produce high-aspect-ratio nanostructures with applications in a wide range of quantum optics experiments. The ability to fabricate such nanodevices with high precision opens the door to a vast array of experiments which integrate macroscopic optical setups with lithographically engineered nanodevices. Coupled with atom-trapping experiments in the Kimble Lab, we use these techniques to realize a new waveguide chip designed to address ultra-cold atoms along lithographically patterned nanobeams which have large atom-photon coupling and nearly 4π steradian optical access for cooling and trapping atoms. We describe a fully integrated and scalable design where cold atoms are spatially overlapped with the nanostring cavities in order to observe a resonant optical depth of d_0 ≈ 0.15. The nanodevice illuminates new possibilities for integrating atoms into photonic circuits and engineering quantum states of atoms and light on a microscopic scale. We then describe our work with superconducting microwave resonators coupled to a phononic cavity towards the goal of building an integrated device for quantum-limited microwave-to-optical wavelength conversion. We give an overview of our characterizations of several types of substrates for fabricating a low-loss high-frequency electromechanical system. We describe our electromechanical system fabricated on a SiN membrane which consists of a 12 GHz superconducting LC resonator coupled capacitively to the high-frequency localized modes of a phononic nanobeam. Using our suspended membrane geometry we isolate our system from substrates with significant loss tangents

  8. Development, Verification and Validation of Parallel, Scalable Volume of Fluid CFD Program for Propulsion Applications

    Science.gov (United States)

    West, Jeff; Yang, H. Q.

    2014-01-01

There are many instances involving liquid/gas interfaces and their dynamics in the design of liquid engine powered rockets such as the Space Launch System (SLS). Some examples of these applications are: propellant tank draining and slosh, subcritical condition injector analysis for gas generators, preburners and thrust chambers, water deluge mitigation for launch-induced environments, and even solid rocket motor liquid slag dynamics. Commercially available CFD programs simulating gas/liquid interfaces using the Volume of Fluid approach are currently limited in their parallel scalability. In 2010, for instance, an internal NASA/MSFC review of three commercial tools revealed that parallel scalability was seriously compromised at 8 CPUs and no additional speedup was possible after 32 CPUs. Other non-interface CFD applications at the time were demonstrating useful parallel scalability up to 4,096 processors or more. Based on this review, NASA/MSFC initiated an effort to implement a Volume of Fluid capability within the unstructured mesh, pressure-based algorithm CFD program, Loci-STREAM. After verification was achieved by comparing results to the commercial CFD program CFD-Ace+, and validation by direct comparison with data, Loci-STREAM-VoF is now the production CFD tool for propellant slosh force and slosh damping rate simulations at NASA/MSFC. On these applications, good parallel scalability has been demonstrated for problem sizes of tens of millions of cells and thousands of CPU cores. Ongoing efforts are focused on the application of Loci-STREAM-VoF to predict the transient flow patterns of water on the SLS Mobile Launch Platform in order to support the phasing of water for launch environment mitigation so that detrimental effects on the vehicle are not realized.
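
    The equation that distinguishes a Volume of Fluid code such as Loci-STREAM-VoF from a single-phase solver is the advection of the liquid volume fraction, written generically (with the per-cell property mixing rule alongside):

```latex
\begin{equation}
\frac{\partial \alpha}{\partial t} + \nabla\cdot\left(\alpha\,\mathbf{u}\right) = 0,
\qquad
\rho = \alpha\,\rho_{l} + (1-\alpha)\,\rho_{g},
\qquad \alpha \in [0,1].
\end{equation}
```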

  9. geoKepler Workflow Module for Computationally Scalable and Reproducible Geoprocessing and Modeling

    Science.gov (United States)

    Cowart, C.; Block, J.; Crawl, D.; Graham, J.; Gupta, A.; Nguyen, M.; de Callafon, R.; Smarr, L.; Altintas, I.

    2015-12-01

The NSF-funded WIFIRE project has developed an open-source, online geospatial workflow platform for unifying geoprocessing tools and models for fire and other geospatially dependent modeling applications. It is a product of WIFIRE's objective to build an end-to-end cyberinfrastructure for real-time and data-driven simulation, prediction and visualization of wildfire behavior. geoKepler includes a set of reusable GIS components, or actors, for the Kepler Scientific Workflow System (https://kepler-project.org). Actors exist for reading and writing GIS data in formats such as Shapefile, GeoJSON, and KML, and for using OGC web services such as WFS. The actors also allow for calling geoprocessing tools in other packages such as GDAL and GRASS. Kepler integrates functions from multiple platforms and file formats into one framework, thus enabling optimal GIS interoperability, model coupling, and scalability. Products of the GIS actors can be fed directly to models such as FARSITE and WRF. Kepler's ability to schedule and scale processes using Hadoop and Spark also makes geoprocessing ultimately extensible and computationally scalable. The reusable workflows in geoKepler can be made to run automatically when alerted by real-time environmental conditions. Here, we show breakthroughs in the speed of creating complex data for hazard assessments with this platform. We also demonstrate geoKepler workflows that use Data Assimilation to ingest real-time weather data into wildfire simulations, and data mining techniques to gain insight into environmental conditions affecting fire behavior. Existing machine learning tools and libraries such as R and MLlib are being leveraged for this purpose in Kepler, as well as Kepler's Distributed Data Parallel (DDP) capability to provide a framework for scalable processing. geoKepler workflows can be executed via an iPython notebook as a part of a Jupyter hub at UC San Diego for sharing and reporting of the scientific analysis and results from

  10. Layer-by-layer assembly as a versatile bottom-up nanofabrication technique for exploratory research and realistic application.

    Science.gov (United States)

    Ariga, Katsuhiko; Hill, Jonathan P; Ji, Qingmin

    2007-05-21

    The layer-by-layer (LbL) adsorption technique offers an easy and inexpensive process for multilayer formation and allows a variety of materials to be incorporated within the film structures. Therefore, the LbL assembly method can be regarded as a versatile bottom-up nanofabrication technique. Research fields concerned with LbL assembly have developed rapidly but some important physicochemical aspects remain uninvestigated. In this review, we will introduce several examples from physicochemical investigations regarding the basics of this method to advanced research aimed at practical applications. These are selected mostly from recent reports and should stimulate many physical chemists and chemical physicists in the further development of LbL assembly. In order to further understand the mechanism of the LbL assembly process, theoretical work, including thermodynamics calculations, has been conducted. Additionally, the use of molecular dynamics simulation has been proposed. Recently, many kinds of physicochemical molecular interactions, including hydrogen bonding, charge transfer interactions, and stereo-complex formation, have been used. The combination of the LbL method with other fabrication techniques such as spin-coating, spraying, and photolithography has also been extensively researched. These improvements have enabled preparation of LbL films composed of various materials contained in well-designed nanostructures. The resulting structures can be used to investigate basic physicochemical phenomena where relative distances between interacting groups is of great importance. Similarly, LbL structures prepared by such advanced techniques are used widely for development of functional systems for physical applications from photovoltaic devices and field effect transistors to biochemical applications including nano-sized reactors and drug delivery systems.

  11. New Complexity Scalable MPEG Encoding Techniques for Mobile Applications

    Directory of Open Access Journals (Sweden)

    Stephan Mietens

    2004-03-01

    Full Text Available Complexity scalability offers the advantage of one-time design of video applications for a large product family, including mobile devices, without the need to redesign the applications on the algorithmic level to meet the requirements of the different products. In this paper, we present complexity-scalable MPEG encoding in which the core modules are modified for scalability. The interdependencies of the scalable modules and the system performance are evaluated. Experimental results show that scalability gives a smooth change in complexity and corresponding video quality. Scalability is basically achieved by varying the number of computed DCT coefficients and the number of evaluated motion vectors, but other modules are designed such that they scale with the previous parameters. In the experiments using the “Stefan” sequence, the elapsed execution time of the scalable encoder, reflecting the computational complexity, can be gradually reduced to roughly 50% of its original execution time. The video quality scales between 20 dB and 48 dB PSNR with unity quantizer setting, and between 21.5 dB and 38.5 dB PSNR for different sequences targeting 1500 kbps. The implemented encoder and the scalability techniques can be successfully applied in mobile systems based on MPEG video compression.
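
    For reference, the PSNR values quoted above are computed from the mean squared error between the reference and encoded frames. A minimal Python/NumPy sketch on synthetic 8-bit frames (not the paper's encoder):

        import numpy as np

        def psnr(ref, test, peak=255.0):
            # Peak signal-to-noise ratio in dB for 8-bit frames.
            mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
            return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

        rng = np.random.default_rng(0)
        ref = rng.integers(0, 256, size=(288, 352), dtype=np.uint8)  # CIF-sized frame
        test = np.clip(ref + rng.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
        print(f"{psnr(ref, test):.1f} dB")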

  12. Scalable DeNoise-and-Forward in Bidirectional Relay Networks

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Krigslund, Rasmus; Popovski, Petar

    2010-01-01

    In this paper a scalable relaying scheme is proposed based on an existing concept called DeNoise-and-Forward (DNF). We call it Scalable DNF (S-DNF), and it targets the scenario with multiple communication flows through a single common relay. The idea of the scheme is to combine packets at the relay...
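
    The combining idea can be illustrated with the XOR trick used in the simplest two-way relaying schemes. Note that DNF itself operates on received signals at the physical layer, so this byte-level Python sketch only conveys the intuition of merging two flows into one relay transmission:

        def xor_combine(pkt_a: bytes, pkt_b: bytes) -> bytes:
            # The relay broadcasts A XOR B; each end node XORs out its own
            # packet to recover the other's (packets padded to equal length).
            return bytes(x ^ y for x, y in zip(pkt_a, pkt_b))

        a, b = b"hello from A", b"hi from B   "
        combined = xor_combine(a, b)
        assert xor_combine(combined, a) == b
        assert xor_combine(combined, b) == a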

  13. Building scalable apps with Redis and Node.js

    CERN Document Server

    Johanan, Joshua

    2014-01-01

    If the phrase scalability sounds alien to you, then this is an ideal book for you. You will not need much Node.js experience as each framework is demonstrated in a way that requires no previous knowledge of the framework. You will be building scalable Node.js applications in no time! Knowledge of JavaScript is required.

  14. BASSET: Scalable Gateway Finder in Large Graphs

    Energy Technology Data Exchange (ETDEWEB)

    Tong, H; Papadimitriou, S; Faloutsos, C; Yu, P S; Eliassi-Rad, T

    2010-11-03

    Given a social network, who is the best person to introduce you to, say, Chris Ferguson, the poker champion? Or, given a network of people and skills, who is the best person to help you learn about, say, wavelets? The goal is to find a small group of 'gateways': persons who are close enough to us, as well as close enough to the target (person, or skill) or, in other words, are crucial in connecting us to the target. The main contributions are the following: (a) we show how to formulate this problem precisely; (b) we show that it is sub-modular and thus it can be solved near-optimally; (c) we give fast, scalable algorithms to find such gateways. Experiments on real data sets validate the effectiveness and efficiency of the proposed methods, achieving up to 6,000,000x speedup.
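
    Submodularity is what makes a greedy strategy near-optimal (within a 1 - 1/e factor for monotone objectives). Below is a generic greedy sketch in Python, with a toy coverage objective standing in for the paper's gateway score; all names and data are invented for illustration:

        def greedy_max(candidates, gain, budget):
            # Repeatedly add the candidate with the largest marginal gain.
            chosen = set()
            for _ in range(budget):
                best = max((c for c in candidates if c not in chosen),
                           key=lambda c: gain(chosen | {c}) - gain(chosen))
                chosen.add(best)
            return chosen

        # Toy objective: how many of the target's contacts a gateway set reaches.
        neighbors = {"a": {1, 2, 3}, "b": {3, 4}, "c": {5}, "d": {1, 4, 5, 6}}

        def coverage(selected):
            reached = set()
            for s in selected:
                reached |= neighbors[s]
            return len(reached)

        print(greedy_max(neighbors, coverage, budget=2))  # {'d', 'a'} reaches all six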

  15. The Concept of Business Model Scalability

    DEFF Research Database (Denmark)

    Nielsen, Christian; Lund, Morten

    2015-01-01

    The power of business models lies in their ability to visualize and clarify how firms may configure their value creation processes. Among the key aspects of business model thinking are a focus on what the customer values, how this value is best delivered to the customer, and how strategic partners are leveraged in this value creation, delivery and realization exercise. Central to the mainstream understanding of business models is the value proposition towards the customer, and the hypothesis generated is that if the firm delivers to the customer what he/she requires, then there is a good foundation for a long-term profitable business. However, the message conveyed in this article is that while providing a good value proposition may help the firm ‘get by’, the really successful businesses of today are those able to reach the sweet-spot of business model scalability. This article introduces and discusses…

  16. Towards scalable Byzantine fault-tolerant replication

    Science.gov (United States)

    Zbierski, Maciej

    2017-08-01

    Byzantine fault-tolerant (BFT) replication is a powerful technique, enabling distributed systems to remain available and correct even in the presence of arbitrary faults. Unfortunately, existing BFT replication protocols are mostly load-unscalable, i.e. they fail to respond with adequate performance increase whenever new computational resources are introduced into the system. This article proposes a universal architecture facilitating the creation of load-scalable distributed services based on BFT replication. The suggested approach exploits parallel request processing to fully utilize the available resources, and uses a load balancer module to dynamically adapt to the properties of the observed client workload. The article additionally provides a discussion on selected deployment scenarios, and explains how the proposed architecture could be used to increase the dependability of contemporary large-scale distributed systems.

  17. A graph algebra for scalable visual analytics.

    Science.gov (United States)

    Shaverdian, Anna A; Zhou, Hao; Michailidis, George; Jagadish, Hosagrahar V

    2012-01-01

    Visual analytics (VA), which combines analytical techniques with advanced visualization features, is fast becoming a standard tool for extracting information from graph data. Researchers have developed many tools for this purpose, suggesting a need for formal methods to guide these tools' creation. Increased data demands on computing require redesigning VA tools to consider performance and reliability in the context of analysis of exascale datasets. Furthermore, visual analysts need a way to document their analyses for reuse and results justification. A VA graph framework encapsulated in a graph algebra helps address these needs. Its atomic operators include selection and aggregation. The framework employs a visual operator and supports dynamic attributes of data to enable scalable visual exploration of data.

  18. Declarative and Scalable Selection for Map Visualizations

    DEFF Research Database (Denmark)

    Kefaloukos, Pimin Konstantin Balic

    …foreground layers is merited. (2) The typical map making professional has changed from a GIS specialist to a busy person with map making as a secondary skill. Today, thematic maps are produced by journalists, aid workers, amateur data enthusiasts, and scientists alike. Therefore it is crucial that this diverse group of map makers is provided with easy-to-use and expressive thematic map design tools. Such tools should support customized selection of data for maps in scenarios where developer time is a scarce resource. (3) The Web provides access to massive data repositories for thematic maps… …based on an access log of recent requests. The results show that Glossy SQL and CVL can be used to compute cartographic selection by processing one or more complex queries in a relational database. The scalability of the approach has been verified up to half a million objects in the database. Furthermore…

  19. Privacy-Preserving and Scalable Service Recommendation Based on SimHash in a Distributed Cloud Environment

    Directory of Open Access Journals (Sweden)

    Yanwei Xu

    2017-01-01

    Full Text Available With the increasing volume of web services in the cloud environment, Collaborative Filtering (CF)-based service recommendation has become one of the most effective techniques to alleviate the heavy burden on the service selection decisions of a target user. However, the service recommendation bases, that is, historical service usage data, are often distributed across different cloud platforms. Two challenges are present in such a cross-cloud service recommendation scenario. First, a cloud platform is often not willing to share its data with other cloud platforms due to privacy concerns, which severely decreases the feasibility of cross-cloud service recommendation. Second, the historical service usage data recorded in each cloud platform may update over time, which reduces the recommendation scalability significantly. In view of these two challenges, a novel privacy-preserving and scalable service recommendation approach based on SimHash, named SerRecSimHash, is proposed in this paper. Finally, through a set of experiments deployed on a real distributed service quality dataset, WS-DREAM, we validate the feasibility of our proposal in terms of recommendation accuracy and efficiency while guaranteeing privacy-preservation.
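
    SimHash maps a weighted feature set to a short fingerprint whose Hamming distance to other fingerprints approximates the similarity of the underlying sets, which is what enables comparing users across clouds without exchanging raw usage records. A minimal Python sketch; the encoding of service-usage observations as features is invented for illustration:

        import hashlib

        def simhash(features, bits=64):
            # Classic SimHash: weighted bitwise voting over per-feature hashes.
            votes = [0] * bits
            for feature, weight in features.items():
                h = int.from_bytes(hashlib.md5(feature.encode()).digest()[:8], "big")
                for i in range(bits):
                    votes[i] += weight if (h >> i) & 1 else -weight
            return sum(1 << i for i in range(bits) if votes[i] > 0)

        def hamming(a, b):
            return bin(a ^ b).count("1")

        u1 = simhash({"svcA:fast": 3, "svcB:slow": 1, "svcC:fast": 2})
        u2 = simhash({"svcA:fast": 3, "svcB:slow": 2, "svcD:fast": 1})
        print("Hamming distance:", hamming(u1, u2))  # small distance = similar users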

  20. Scalable and Media Aware Adaptive Video Streaming over Wireless Networks

    Science.gov (United States)

    Tizon, Nicolas; Pesquet-Popescu, Béatrice

    2008-12-01

    This paper proposes an advanced video streaming system based on scalable video coding in order to optimize resource utilization in wireless networks with retransmission mechanisms at radio protocol level. The key component of this system is a packet scheduling algorithm which operates on the different substreams of a main scalable video stream and which is implemented in a so-called media aware network element. The concerned type of transport channel is a dedicated channel subject to parameters (bitrate, loss rate) variations on the long run. Moreover, we propose a combined scalability approach in which common temporal and SNR scalability features can be used jointly with a partitioning of the image into regions of interest. Simulation results show that our approach provides substantial quality gain compared to classical packet transmission methods and they demonstrate how ROI coding combined with SNR scalability allows to improve again the visual quality.

  1. Characterization of the Jason Multiagent Platform on Multicore Processors

    Directory of Open Access Journals (Sweden)

    Pascual Pérez-Carro

    2014-01-01

    Full Text Available Multiagent platforms need to be evaluated focusing on the underlying computer architecture in order to allow developers to exploit the parallelism available in multicore processors. This paper presents the characterization of Jason, a well-known Java-based multiagent platform, when executed on distributed shared memory architectures. Since this kind of architecture is already present in current multicore processors, this should be the first step for the characterization of this platform on distributed systems. To this end, we propose the execution of a set of benchmarks recently proposed for evaluating multiagent platforms. The results obtained show that Jason can be used to program CPU-intensive multiagent applications without losing the Java scalability over multicore processors. However, Jason's performance for communication-intensive applications depends on the traffic pattern generated by the agents, the layout of the cores, and the selected execution mode (i.e., synchronous or asynchronous).

  2. Heterogeneous scalable framework for multiphase flows

    Energy Technology Data Exchange (ETDEWEB)

    Morris, Karla Vanessa [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2013-09-01

    Two categories of challenges confront the developer of computational spray models: those related to the computation and those related to the physics. Regarding the computation, the trend towards heterogeneous, multi- and many-core platforms will require considerable re-engineering of codes written for the current supercomputing platforms. Regarding the physics, accurate methods for transferring mass, momentum and energy from the dispersed phase onto the carrier fluid grid have so far eluded modelers. Significant challenges also lie at the intersection between these two categories. To be competitive, any physics model must be expressible in a parallel algorithm that performs well on evolving computer platforms. This work created an application based on a software architecture in which the physics and software concerns are separated in a way that adds flexibility to both. The developed spray-tracking package includes an application programming interface (API) that abstracts away the platform-dependent parallelization concerns, enabling the scientific programmer to write serial code that the API resolves into parallel processes and threads of execution. The project also developed the infrastructure required to provide similar APIs to other applications. The API allows object-oriented Fortran applications direct interaction with Trilinos to support memory management of distributed objects on central processing unit (CPU) and graphics processing unit (GPU) nodes for applications using C++.

  3. New platforms for multi-functional ocular lenses: engineering double-sided functionalized nano-coatings

    DEFF Research Database (Denmark)

    Mehta, Prina; Justo, Lucas; Walsh, Susannah

    2015-01-01

    A scalable platform to prepare multi-functional ocular lenses is demonstrated. Using rapidly dissolving polyvinylpyrrolidone (PVP) as the active stabilizing matrix, both sides of ocular lenses were coated using a modified scaled-up masking electrohydrodynamic atomization (EHDA) technique (flow...... controlled release strategies) suggests several therapeutic platforms for ocular lenses can be further developed at ambient temperature and pressure. These provide multi-functional properties (in personalized delivery, nanomedicine and nanosensors) from a single drug delivery device....

  4. A survey on platforms for big data analytics.

    Science.gov (United States)

    Singh, Dilpreet; Reddy, Chandan K

    The primary purpose of this paper is to provide an in-depth analysis of different platforms available for performing big data analytics. This paper surveys different hardware platforms available for big data analytics and assesses the advantages and drawbacks of each of these platforms based on various metrics such as scalability, data I/O rate, fault tolerance, real-time processing, data size supported and iterative task support. In addition to the hardware, a detailed description of the software frameworks used within each of these platforms is also discussed along with their strengths and drawbacks. Some of the critical characteristics described here can potentially aid the readers in making an informed decision about the right choice of platforms depending on their computational needs. Using a star ratings table, a rigorous qualitative comparison between different platforms is also discussed for each of the six characteristics that are critical for the algorithms of big data analytics. In order to provide more insights into the effectiveness of each of the platforms in the context of big data analytics, specific implementation-level details of the widely used k-means clustering algorithm on various platforms are also described in the form of pseudocode.
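
    As a point of reference for the k-means discussion above, a minimal serial implementation in Python/NumPy is sketched below; the assignment step (the distance computation) is the part the surveyed platforms parallelize, whether across cores, GPUs, or cluster nodes:

        import numpy as np

        def kmeans(X, k, iters=100, seed=0):
            rng = np.random.default_rng(seed)
            centers = X[rng.choice(len(X), size=k, replace=False)]
            for _ in range(iters):
                # Assignment step: nearest center per point (parallelizable).
                dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
                labels = dists.argmin(axis=1)
                # Update step: each center moves to the mean of its cluster.
                new = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                                else centers[j] for j in range(k)])
                if np.allclose(new, centers):
                    break
                centers = new
            return centers, labels

        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(c, 0.5, size=(100, 2))
                       for c in ([0, 0], [5, 5], [0, 5])])
        centers, labels = kmeans(X, k=3)
        print(np.round(centers, 2))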

  5. Oracle database performance and scalability a quantitative approach

    CERN Document Server

    Liu, Henry H

    2011-01-01

    A data-driven, fact-based, quantitative text on Oracle performance and scalability With database concepts and theories clearly explained in Oracle's context, readers quickly learn how to fully leverage Oracle's performance and scalability capabilities at every stage of designing and developing an Oracle-based enterprise application. The book is based on the author's more than ten years of experience working with Oracle, and is filled with dependable, tested, and proven performance optimization techniques. Oracle Database Performance and Scalability is divided into four parts that enable reader

  6. A novel 3D scalable video compression algorithm

    Science.gov (United States)

    Somasundaram, Siva; Subbalakshmi, Koduvayur P.

    2003-05-01

    In this paper we propose a scalable video coding scheme that utilizes the embedded block coding with optimal truncation (EBCOT) compression algorithm. Three-dimensional spatio-temporal decomposition of the video sequence, followed by compression using EBCOT, generates an SNR- and resolution-scalable bit stream. The proposed video coding algorithm not only performs close to the MPEG-4 video coding standard in compression efficiency but also provides better SNR and resolution scalability. Experimental results show that the proposed algorithm outperforms the 3-D SPIHT (Set Partitioning in Hierarchical Trees) algorithm by 1.5 dB.

  7. Segway robotic mobility platform

    Science.gov (United States)

    Nguyen, Hoa G.; Morrell, John; Mullens, Katherine D.; Burmeister, Aaron B.; Miles, Susan; Farrington, Nathan; Thomas, Kari M.; Gage, Douglas W.

    2004-12-01

    The Segway Robotic Mobility Platform (RMP) is a new mobile robotic platform based on the self-balancing Segway Human Transporter (HT). The Segway RMP is faster, cheaper, and more agile than existing comparable platforms. It is also rugged, has a small footprint, a zero turning radius, and yet can carry a greater payload. The new geometry of the platform presents researchers with an opportunity to examine novel topics, including people-height sensing and actuation modalities. This paper describes the history and development of the platform, its characteristics, and a summary of current research projects involving the platform at various institutions across the United States.

  8. A Scalable, Open Source Platform for Data Processing, Archiving and Dissemination

    Science.gov (United States)

    2016-01-01

    (Only table-of-contents and slide fragments of this report are recoverable: it describes an Extract, Transform, and Load (ETL) process for the Summer Workshop Challenges, following a template ETL process defined with Kiva, with Python extractors for Traceroutes and Edgescape data, the ETL wrapped in a workflow, and a survey of the many different but related approaches to performing the analytic steps.)

  9. Learning Analytics Platform, towards an Open Scalable Streaming Solution for Education

    Science.gov (United States)

    Lewkow, Nicholas; Zimmerman, Neil; Riedesel, Mark; Essa, Alfred

    2015-01-01

    Next generation digital learning environments require delivering "just-in-time feedback" to learners and those who support them. Unlike traditional business intelligence environments, streaming data requires resilient infrastructure that can move data at scale from heterogeneous data sources, process the data quickly for use across…

  10. Scalable geocomputation: evolving an environmental model building platform from single-core to supercomputers

    Science.gov (United States)

    Schmitz, Oliver; de Jong, Kor; Karssenberg, Derek

    2017-04-01

    There is an increasing demand to run environmental models on a big scale: simulations over large areas at high resolution. The heterogeneity of available computing hardware, such as multi-core CPUs, GPUs, or supercomputers, potentially provides significant computing power to fulfil this demand. However, this requires detailed knowledge of the underlying hardware, parallel algorithm design, and the implementation thereof in an efficient system programming language. Domain scientists such as hydrologists or ecologists often lack this specific software engineering knowledge; their emphasis is (and should be) on exploratory building and analysis of simulation models. As a result, models constructed by domain specialists mostly do not take full advantage of the available hardware. A promising solution is to separate the model building activity from software engineering by offering domain specialists a model building framework with pre-programmed building blocks that they combine to construct a model. The model building framework, consequently, needs built-in capabilities to make full use of the available hardware. Developing such a framework that provides understandable code for domain scientists while remaining runtime-efficient poses several challenges for its developers. For example, optimisations can be performed on individual operations or on the whole model, and tasks need to be generated for a well-balanced execution without explicitly knowing the complexity of the domain problem provided by the modeller. Ideally, a modelling framework supports the optimal use of available hardware whichever combination of model building blocks scientists use. We present our ongoing work on developing parallel algorithms for spatio-temporal modelling and demonstrate 1) PCRaster, an environmental software framework (http://www.pcraster.eu) providing spatio-temporal model building blocks, and 2) the parallelisation of about 50 of these building blocks using the new Fern library (https://github.com/geoneric/fern/), an independent generic raster processing library. Fern is a highly generic software library and its algorithms can be configured according to the configuration of a modelling framework. With manageable programming effort (e.g. matching data types between the programming and domain languages) we created a binding between Fern and PCRaster. The resulting PCRaster Python multicore module can be used to execute existing PCRaster models without having to make any changes to the model code. We show initial results on synthetic and geoscientific models indicating significant runtime improvements provided by parallel local and focal operations. We further outline challenges in improving the remaining algorithms, such as flow operations over digital elevation maps, and further potential improvements like enhancing disk I/O.
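
    The distinction between operation classes matters for the parallelisation described above: local operations are embarrassingly parallel, while focal operations read a neighbourhood and therefore require halo exchange between workers. A schematic NumPy illustration of the two classes (not PCRaster or Fern code; the toy model is invented):

        import numpy as np

        def local_op(a, b):
            # Local operation: per-cell arithmetic, trivially data-parallel.
            return np.maximum(a - b, 0.0)

        def focal_mean(a):
            # Focal operation: 3x3 neighbourhood mean. Each output cell reads
            # adjacent cells, so a distributed version must exchange borders.
            p = np.pad(a, 1, mode="edge")
            rows, cols = a.shape
            return sum(p[i:i + rows, j:j + cols]
                       for i in range(3) for j in range(3)) / 9.0

        rain = np.random.default_rng(0).random((512, 512), dtype=np.float32)
        infiltration = np.full_like(rain, 0.3)
        runoff = focal_mean(local_op(rain, infiltration))
        print(float(runoff.mean()))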

  11. Design, Fabrication, and Testing of a Scalable Series Augmented Railgun Research Platform

    Science.gov (United States)

    2006-03-01

    …higher velocity shots was grainy but retained the aluminum metallic tone, whereas the root radius of the non-augmented shots was obscured by blackened deposits. Although the current levels experienced in this testing are far less than the 900 kA threshold for root-radius melting observed by…

  12. Load Generation for Investigating Game System Scalability

    OpenAIRE

    Halvorsen, Stig Magnus

    2014-01-01

    Video games have proven to be an interesting platform for computer scientists, as many games demand the latest technology, fast response times and effective utilization of hardware. Video games have been used both as a topic of and a tool for computer science (CS). Finding the right games to perform experiments on is however difficult. An important reason is the lack of suitable games for research. Open source games are attractive candidates as their availability and openness is crucial to pr...

  13. Scalable, remote administration of Windows NT.

    Energy Technology Data Exchange (ETDEWEB)

    Gomberg, M.; Stacey, C.; Sayre, J.

    1999-06-08

    In the UNIX community there is an overwhelming perception that NT is impossible to manage remotely and that NT administration doesn't scale. This was essentially true with earlier versions of the operating system. Even today, out of the box, NT is difficult to manage remotely. Many tools, however, now make remote management of NT not only possible, but under some circumstances very easy. In this paper we discuss how we at Argonne's Mathematics and Computer Science Division manage all our NT machines remotely from a single console, with minimum locally installed software overhead. We also present NetReg, which is a locally developed tool for scalable registry management. NetReg allows us to apply a registry change to a specified set of machines. It is a command line utility that can be run in either interactive or batch mode and is written in Perl for Win32, taking heavy advantage of the Win32::TieRegistry module.
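
    NetReg itself is written in Perl for Win32. Purely as an illustration of the same batch idea, the sketch below uses Python's standard winreg module to apply one registry change across a set of machines; the hostnames and key are hypothetical, and the Remote Registry service must be running on each target:

        import winreg  # Windows-only standard-library module

        MACHINES = [r"\\lab-nt-01", r"\\lab-nt-02"]            # hypothetical hosts
        SUBKEY, NAME, VALUE = r"SOFTWARE\MCS\Policy", "LogLevel", 2

        for machine in MACHINES:
            # Open the remote machine's HKLM hive and apply the change.
            hive = winreg.ConnectRegistry(machine, winreg.HKEY_LOCAL_MACHINE)
            key = winreg.CreateKey(hive, SUBKEY)
            winreg.SetValueEx(key, NAME, 0, winreg.REG_DWORD, VALUE)
            winreg.CloseKey(key)
            winreg.CloseKey(hive)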

  14. Scalable conditional induction variables (CIV) analysis

    KAUST Repository

    Oancea, Cosmin E.

    2015-02-01

    Subscripts using induction variables that cannot be expressed as a formula in terms of the enclosing-loop indices appear in the low-level implementation of common programming abstractions such as Alter or stack operations, and pose significant challenges to automatic parallelization. Because the complexity of such induction variables is often due to their conditional evaluation across the iteration space of loops, we name them Conditional Induction Variables (CIV). This paper presents a flow-sensitive technique that summarizes both such CIV-based and affine subscripts to program level, using the same representation. Our technique requires no modifications of our dependence tests, which are agnostic to the original shape of the subscripts, and is more powerful than previously reported dependence tests that rely on the pairwise disambiguation of read-write references. We have implemented the CIV analysis in our parallelizing compiler and evaluated its impact on five Fortran benchmarks. We have found that there are many important loops using CIV subscripts and that our analysis can lead to their scalable parallelization. This in turn has led to the parallelization of the benchmark programs they appear in.

  15. Scalable Notch Antenna System for Multiport Applications

    Directory of Open Access Journals (Sweden)

    Abdurrahim Toktas

    2016-01-01

    Full Text Available A novel and compact scalable antenna system is designed for multiport applications. The basic design is built on a square patch with an electrical size of 0.82λ0×0.82λ0 (at 2.4 GHz) on a dielectric substrate. The design consists of four symmetrical and orthogonal triangular notches with circular feeding slots at the corners of the common patch. The 4-port antenna can be simply rearranged to 8-port and 12-port systems. The operating band of the system can be tuned by scaling (S) the size of the system while fixing the thickness of the substrate. The antenna system with S: 1/1, with a size of 103.5×103.5 mm2, operates in the frequency band of 2.3–3.0 GHz. By scaling the antenna with S: 1/2.3, a system of 45×45 mm2 is achieved, and thus the operating band is tuned to 4.7–6.1 GHz with the same scattering characteristic. A parametric study is also conducted to investigate the effects of changing the notch dimensions. The performance of the antenna is verified in terms of the antenna characteristics as well as diversity and multiplexing parameters. The antenna system can be tuned by scaling so that it is applicable to multiport WLAN, WiMAX, and LTE devices with port upgradability.

  16. Scalable inference for stochastic block models

    KAUST Repository

    Peng, Chengbin

    2017-12-08

    Community detection in graphs is widely used in social and biological networks, and the stochastic block model is a powerful probabilistic tool for describing graphs with community structures. However, in the era of "big data," traditional inference algorithms for such a model are increasingly limited due to their high time complexity and poor scalability. In this paper, we propose a multi-stage maximum likelihood approach to recover the latent parameters of the stochastic block model, in time linear with respect to the number of edges. We also propose a parallel algorithm based on message passing. Our algorithm can overlap communication and computation, providing speedup without compromising accuracy as the number of processors grows. For example, to process a real-world graph with about 1.3 million nodes and 10 million edges, our algorithm requires about 6 seconds on 64 cores of a contemporary commodity Linux cluster. Experiments demonstrate that the algorithm can produce high quality results on both benchmark and real-world graphs. An example of finding more meaningful communities is illustrated consequently in comparison with a popular modularity maximization algorithm.

  17. A Programmable, Scalable-Throughput Interleaver

    Directory of Open Access Journals (Sweden)

    Rijshouwer EJC

    2010-01-01

    Full Text Available The interleaver stages of digital communication standards show a surprisingly large variation in throughput, state sizes, and permutation functions. Furthermore, data rates for 4G standards such as LTE-Advanced will exceed typical baseband clock frequencies of handheld devices. Multistream operation for Software Defined Radio and iterative decoding algorithms will call for ever higher interleave data rates. Our interleave machine is built around 8 single-port SRAM banks and can be programmed to generate up to 8 addresses every clock cycle. The scalable architecture combines SIMD and VLIW concepts with an efficient resolution of bank conflicts. A wide range of cellular, connectivity, and broadcast interleavers have been mapped on this machine, with throughputs of up to more than 0.5 Gsymbol/second. Although it was designed for channel interleaving, the application domain of the interleaver extends also to Turbo interleaving. The presented configuration of the architecture is designed as part of a programmable outer receiver on a prototype board. It offers (near) universal programmability to enable the implementation of new interleavers. The interleaver measures 2.09 mm2 in 65 nm CMOS (including memories) and proves functional on silicon.

  18. SCTP as scalable video coding transport

    Science.gov (United States)

    Ortiz, Jordi; Graciá, Eduardo Martínez; Skarmeta, Antonio F.

    2013-12-01

    This study presents an evaluation of the Stream Control Transmission Protocol (SCTP) for the transport of Scalable Video Coding (SVC), proposed by MPEG as an extension to H.264/AVC. The two technologies fit together well. On the one hand, SVC makes it easy to split the bitstream into substreams carrying different video layers, each with a different importance for the reconstruction of the complete video sequence at the receiver end. On the other hand, SCTP includes features, such as multi-streaming and multi-homing capabilities, that permit the SVC layers to be transported robustly and efficiently. Several transmission strategies supported on baseline SCTP and its concurrent multipath transfer (CMT) extension are compared with the classical solutions based on the Transmission Control Protocol (TCP) and the Real-time Transport Protocol (RTP). Using ns-2 simulations, it is shown that CMT-SCTP outperforms TCP and RTP in error-prone networking environments. The comparison is established according to several performance measurements, including delay, throughput, packet loss, and peak signal-to-noise ratio of the received video.

  19. Scalable Combinatorial Tools for Health Disparities Research

    Directory of Open Access Journals (Sweden)

    Michael A. Langston

    2014-10-01

    Full Text Available Despite staggering investments made in unraveling the human genome, current estimates suggest that as much as 90% of the variance in cancer and chronic diseases can be attributed to factors outside an individual’s genetic endowment, particularly to environmental exposures experienced across his or her life course. New analytical approaches are clearly required as investigators turn to complicated systems theory and ecological, place-based and life-history perspectives in order to understand more clearly the relationships between social determinants, environmental exposures and health disparities. While traditional data analysis techniques remain foundational to health disparities research, they are easily overwhelmed by the ever-increasing size and heterogeneity of available data needed to illuminate latent gene x environment interactions. This has prompted the adaptation and application of scalable combinatorial methods, many from genome science research, to the study of population health. Most of these powerful tools are algorithmically sophisticated, highly automated and mathematically abstract. Their utility motivates the main theme of this paper, which is to describe real applications of innovative transdisciplinary models and analyses in an effort to help move the research community closer toward identifying the causal mechanisms and associated environmental contexts underlying health disparities. The public health exposome is used as a contemporary focus for addressing the complex nature of this subject.

  20. Scalability and interoperability within glideinWMS

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, D.; /Wisconsin U., Madison; Sfiligoi, I.; /Fermilab; Padhi, S.; /UC, San Diego; Frey, J.; /Wisconsin U., Madison; Tannenbaum, T.; /Wisconsin U., Madison

    2010-01-01

    Physicists have access to thousands of CPUs in grid federations such as OSG and EGEE. With the start-up of the LHC, it is essential for individuals or groups of users to wrap together available resources from multiple sites across multiple grids under a higher user-controlled layer in order to provide a homogeneous pool of available resources. One such system is glideinWMS, which is based on the Condor batch system. A general discussion of glideinWMS can be found elsewhere. Here, we focus on recent advances in extending its reach: scalability and integration of heterogeneous compute elements. We demonstrate that the new developments exceed the design goal of over 10,000 simultaneous running jobs under a single Condor schedd, using strong security protocols across global networks, and sustaining a steady-state job completion rate of a few Hz. We also show interoperability across heterogeneous computing elements achieved using client-side methods. We discuss this technique and the challenges in direct access to NorduGrid and CREAM compute elements, in addition to Globus based systems.

  1. Venus Aerial Platform Study

    Science.gov (United States)

    Cutts, J. A.

    2017-11-01

    A Venus Aerial Platform Study, which was underway in early 2017, is assessing the science and technologies for exploring Venus with aerial vehicles in order to develop a Venus Aerial Platform Roadmap for the future exploration of the planet.

  2. ARC Code TI: Block-GP: Scalable Gaussian Process Regression

    Data.gov (United States)

    National Aeronautics and Space Administration — Block GP is a Gaussian Process regression framework for multimodal data, that can be an order of magnitude more scalable than existing state-of-the-art nonlinear...

  3. Scalable pattern recognition algorithms applications in computational biology and bioinformatics

    CERN Document Server

    Maji, Pradipta

    2014-01-01

    Reviews the development of scalable pattern recognition algorithms for computational biology and bioinformatics Includes numerous examples and experimental results to support the theoretical concepts described Concludes each chapter with directions for future research and a comprehensive bibliography

  4. Scalability of telecom cloud architectures for live-TV distribution

    OpenAIRE

    Asensio Carmona, Adrian; Contreras, Luis Miguel; Ruiz Ramírez, Marc; López Álvarez, Victor; Velasco Esteban, Luis Domingo

    2015-01-01

    A hierarchical distributed telecom cloud architecture for live-TV distribution exploiting flexgrid networking and SBVTs is proposed. Its scalability is compared to that of a centralized architecture. Cost savings as high as 32 % are shown. Peer Reviewed

  5. Evaluating the Scalability of Enterprise JavaBeans Technology

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yan (Jenny); Gorton, Ian; Liu, Anna; Chen, Shiping; Paul A Strooper; Pornsiri Muenchaisri

    2002-12-04

    One of the major problems in building large-scale distributed systems is to anticipate the performance of the eventual solution before it has been built. This problem is especially germane to Internet-based e-business applications, where failure to provide high performance and scalability can lead to application and business failure. The fundamental software engineering problem is compounded by many factors, including individual application diversity, software architecture trade-offs, COTS component integration requirements, and differences in performance of various software and hardware infrastructures. In this paper, we describe the results of an empirical investigation into the scalability of a widely used distributed component technology, Enterprise JavaBeans (EJB). A benchmark application is developed and tested to measure the performance of a system as both the client load and component infrastructure are scaled up. A scalability metric from the literature is then applied to analyze the scalability of the EJB component infrastructure under two different architectural solutions.

  6. Scalable RFCMOS Model for 90 nm Technology

    Directory of Open Access Journals (Sweden)

    Ah Fatt Tong

    2011-01-01

    Full Text Available This paper presents the formation of the parasitic components that exist in the RF MOSFET structure during its high-frequency operation. The parasitic components are extracted from the transistor's S-parameter measurements, and their geometry dependence is studied with respect to the layout structure. Physical geometry equations are proposed to represent these parasitic components, and by implementing them into the RF model, a scalable RFCMOS model valid up to 49.85 GHz is demonstrated. A new verification technique is proposed to verify the quality of the developed scalable RFCMOS model. The proposed technique can shorten the verification time of the scalable RFCMOS model and ensure that the coded scalable model file is error-free and thus more reliable to use.

  7. Scalable-to-lossless transform domain distributed video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Ukhanova, Ann; Veselov, Anton

    2010-01-01

    Distributed video coding (DVC) is a novel approach providing new features such as low-complexity encoding by mainly exploiting the source statistics at the decoder, based on the availability of decoder side information. In this paper, scalable-to-lossless DVC is presented based on extending a lossy Transform Domain Wyner-Ziv (TDWZ) distributed video codec with feedback. The lossless coding is obtained by using a reversible integer DCT. Experimental results show that the performance of the proposed scalable-to-lossless TDWZ video codec can outperform alternatives based on the JPEG 2000 standard. The TDWZ codec provides frame-by-frame encoding. Comparing lossless coding efficiency, the proposed scalable-to-lossless TDWZ video codec can save up to 5%-13% in bits compared to JPEG-LS and H.264 Intra-frame lossless coding, while remaining scalable-to-lossless.

  8. Parallelism and Scalability in an Image Processing Application

    DEFF Research Database (Denmark)

    Rasmussen, Morten Sleth; Stuart, Matthias Bo; Karlsson, Sven

    2008-01-01

    parallel programs. This paper investigates parallelism and scalability of an embedded image processing application. The major challenges faced when parallelizing the application were to extract enough parallelism from the application and to reduce load imbalance. The application has limited immediately...

  9. Scalable Multiple-Description Image Coding Based on Embedded Quantization

    Directory of Open Access Journals (Sweden)

    Moerman Ingrid

    2007-01-01

    Full Text Available Scalable multiple-description (MD) coding allows for fine-grain rate adaptation as well as robust coding of the input source. In this paper, we present a new approach for scalable MD coding of images, which couples the multiresolution nature of the wavelet transform with the robustness and scalability features provided by embedded multiple-description scalar quantization (EMDSQ). Two coding systems are proposed that rely on quadtree coding to compress the side descriptions produced by EMDSQ. The proposed systems are capable of dynamically adapting the bitrate to the available bandwidth while providing robustness to data losses. Experiments performed under different simulated network conditions demonstrate the effectiveness of the proposed scalable MD approach for image streaming over error-prone channels.

  10. Scalable Multiple-Description Image Coding Based on Embedded Quantization

    Directory of Open Access Journals (Sweden)

    Augustin I. Gavrilescu

    2007-02-01

    Full Text Available Scalable multiple-description (MD) coding allows for fine-grain rate adaptation as well as robust coding of the input source. In this paper, we present a new approach for scalable MD coding of images, which couples the multiresolution nature of the wavelet transform with the robustness and scalability features provided by embedded multiple-description scalar quantization (EMDSQ). Two coding systems are proposed that rely on quadtree coding to compress the side descriptions produced by EMDSQ. The proposed systems are capable of dynamically adapting the bitrate to the available bandwidth while providing robustness to data losses. Experiments performed under different simulated network conditions demonstrate the effectiveness of the proposed scalable MD approach for image streaming over error-prone channels.

  11. Mobile platform security

    CERN Document Server

    Asokan, N; Dmitrienko, Alexandra

    2013-01-01

    Recently, mobile security has garnered considerable interest in both the research community and industry due to the popularity of smartphones. The current smartphone platforms are open systems that allow application development, also for malicious parties. To protect the mobile device, its user, and other mobile ecosystem stakeholders such as network operators, application execution is controlled by a platform security architecture. This book explores how such mobile platform security architectures work. We present a generic model for mobile platform security architectures: the model illustrat

  12. Coalescent: an open-source and scalable framework for exact calculations in coalescent theory

    Directory of Open Access Journals (Sweden)

    Tewari Susanta

    2012-10-01

    Full Text Available Background: Currently, there is no open-source, cross-platform and scalable framework for coalescent analysis in population genetics. There is no scalable GUI-based user application either. Such a framework and application would not only drive the creation of more complex and realistic models but also make them truly accessible. Results: As a first attempt, we built a framework and user application for the domain of exact calculations in coalescent analysis. The framework provides an API with the concepts of model, data, statistic, phylogeny, gene tree and recursion. Infinite-alleles and infinite-sites models are considered. It defines pluggable computations such as counting and listing all the ancestral configurations and genealogies and computing the exact probability of data. It can visualize a gene tree, trace and visualize the internals of the recursion algorithm for further improvement, and attach dynamically a number of output processors. The user application defines jobs in a plug-in-like manner so that they can be activated, deactivated, installed or uninstalled on demand. Multiple jobs can be run and their inputs edited. Job inputs are persisted across restarts and running jobs can be cancelled where applicable. Conclusions: Coalescent theory plays an increasingly important role in analysing molecular population genetic data. Models involved are mathematically difficult and computationally challenging. An open-source, scalable framework that lets users immediately take advantage of the progress made by others will enable exploration of yet more difficult and realistic models. As models become more complex and mathematically less tractable, the need for an integrated computational approach is obvious. Object-oriented designs, though they have upfront costs, are practical now and can provide such an integrated approach.

  13. Coalescent: an open-source and scalable framework for exact calculations in coalescent theory.

    Science.gov (United States)

    Tewari, Susanta; Spouge, John L

    2012-10-03

    Currently, there is no open-source, cross-platform and scalable framework for coalescent analysis in population genetics. There is no scalable GUI-based user application either. Such a framework and application would not only drive the creation of more complex and realistic models but also make them truly accessible. As a first attempt, we built a framework and user application for the domain of exact calculations in coalescent analysis. The framework provides an API with the concepts of model, data, statistic, phylogeny, gene tree and recursion. Infinite-alleles and infinite-sites models are considered. It defines pluggable computations such as counting and listing all the ancestral configurations and genealogies and computing the exact probability of data. It can visualize a gene tree, trace and visualize the internals of the recursion algorithm for further improvement, and attach dynamically a number of output processors. The user application defines jobs in a plug-in-like manner so that they can be activated, deactivated, installed or uninstalled on demand. Multiple jobs can be run and their inputs edited. Job inputs are persisted across restarts and running jobs can be cancelled where applicable. Coalescent theory plays an increasingly important role in analysing molecular population genetic data. Models involved are mathematically difficult and computationally challenging. An open-source, scalable framework that lets users immediately take advantage of the progress made by others will enable exploration of yet more difficult and realistic models. As models become more complex and mathematically less tractable, the need for an integrated computational approach is obvious. Object-oriented designs, though they have upfront costs, are practical now and can provide such an integrated approach.

  14. TriG: Next Generation Scalable Spaceborne GNSS Receiver

    Science.gov (United States)

    Tien, Jeffrey Y.; Okihiro, Brian Bachman; Esterhuizen, Stephan X.; Franklin, Garth W.; Meehan, Thomas K.; Munson, Timothy N.; Robison, David E.; Turbiner, Dmitry; Young, Lawrence E.

    2012-01-01

    TriG is the next-generation NASA scalable space GNSS science receiver. It will track all GNSS and additional signals (i.e., GPS, GLONASS, Galileo, Compass, and DORIS). Its scalable 3U architecture is fully software- and firmware-reconfigurable, enabling optimization to meet specific mission requirements. The TriG GNSS EM is currently undergoing testing and is expected to complete full performance testing later this year.

  15. SDC: Scalable description coding for adaptive streaming media

    OpenAIRE

    Quinlan, Jason J.; Zahran, Ahmed H.; Sreenan, Cormac J.

    2012-01-01

    Video compression techniques enable adaptive media streaming over heterogeneous links to end-devices. Scalable Video Coding (SVC) and Multiple Description Coding (MDC) represent well-known techniques for video compression with distinct characteristics in terms of bandwidth efficiency and resiliency to packet loss. In this paper, we present Scalable Description Coding (SDC), a technique to compromise the tradeoff between bandwidth efficiency and error resiliency without sacrificing user-percei...

  16. ITS Platform North Denmark

    DEFF Research Database (Denmark)

    Lahrmann, Harry; Agerholm, Niels; Juhl, Jens

    2012-01-01

    This paper presents the project entitled “ITS Platform North Denmark” which is used as a test platform for Intelligent Transportation System (ITS) solutions. The platform consists of a newly developed GNSS/GPRS On Board Unit (OBU) to be installed in 500 cars, a backend server and a specially...

  17. Scalable persistent identifier systems for dynamic datasets

    Science.gov (United States)

    Golodoniuc, P.; Cox, S. J. D.; Klump, J. F.

    2016-12-01

    Reliable and persistent identification of objects, whether tangible or not, is essential in information management. Many Internet-based systems have been developed to identify digital data objects, e.g., PURL, LSID, Handle, ARK. These were largely designed for identification of static digital objects. The amount of data made available online has grown exponentially over the last two decades and fine-grained identification of dynamically generated data objects within large datasets using conventional systems (e.g., PURL) has become impractical. We have compared capabilities of various technological solutions to enable resolvability of data objects in dynamic datasets, and developed a dataset-centric approach to resolution of identifiers. This is particularly important in Semantic Linked Data environments where dynamic frequently changing data is delivered live via web services, so registration of individual data objects to obtain identifiers is impractical. We use identifier patterns and pattern hierarchies for identification of data objects, which allows relationships between identifiers to be expressed, and also provides means for resolving a single identifier into multiple forms (i.e. views or representations of an object). The latter can be implemented through (a) HTTP content negotiation, or (b) use of URI querystring parameters. The pattern and hierarchy approach has been implemented in the Linked Data API supporting the United Nations Spatial Data Infrastructure (UNSDI) initiative and later in the implementation of geoscientific data delivery for the Capricorn Distal Footprints project using International Geo Sample Numbers (IGSN). This enables flexible resolution of multi-view persistent identifiers and provides a scalable solution for large heterogeneous datasets.
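
    The pattern-and-hierarchy idea can be sketched as a small resolver: one registered pattern covers every object that matches it, so individual data objects never need registration, and the requested media type selects among representations. A minimal Python sketch; all patterns and URLs below are hypothetical:

        import re

        # One pattern resolves a whole family of sample identifiers.
        PATTERNS = [
            (re.compile(r"^igsn:(?P<prefix>[A-Z]{2})(?P<sample>\w+)$"),
             "https://samples.example.org/{prefix}/{sample}"),
        ]

        def resolve(identifier, accept="text/html"):
            for pattern, template in PATTERNS:
                match = pattern.match(identifier)
                if match:
                    url = template.format(**match.groupdict())
                    # Content negotiation: one identifier, several representations.
                    return url + (".json" if accept == "application/json" else "")
            raise KeyError(identifier)

        print(resolve("igsn:CSRWA1234", accept="application/json"))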

  18. Microscopic Characterization of Scalable Coherent Rydberg Superatoms

    Directory of Open Access Journals (Sweden)

    Johannes Zeiher

    2015-08-01

    Full Text Available Strong interactions can amplify quantum effects such that they become important on macroscopic scales. Controlling these coherently on a single-particle level is essential for the tailored preparation of strongly correlated quantum systems and opens up new prospects for quantum technologies. Rydberg atoms offer such strong interactions, which lead to extreme nonlinearities in laser-coupled atomic ensembles. As a result, multiple excitation of a micrometer-sized cloud can be blocked while the light-matter coupling becomes collectively enhanced. The resulting two-level system, often called a “superatom,” is a valuable resource for quantum information, providing a collective qubit. Here, we report on the preparation of 2 orders of magnitude scalable superatoms utilizing the large interaction strength provided by Rydberg atoms combined with precise control of an ensemble of ultracold atoms in an optical lattice. The latter is achieved with sub-shot-noise precision by local manipulation of a two-dimensional Mott insulator. We microscopically confirm the superatom picture by in situ detection of the Rydberg excitations and observe the characteristic square-root scaling of the optical coupling with the number of atoms. Enabled by the full control over the atomic sample, including the motional degrees of freedom, we infer the overlap of the produced many-body state with a W state from the observed Rabi oscillations and deduce the presence of entanglement. Finally, we investigate the breakdown of the superatom picture when two Rydberg excitations are present in the system, which leads to dephasing and a loss of coherence.

  19. Microscopic Characterization of Scalable Coherent Rydberg Superatoms

    Science.gov (United States)

    Zeiher, Johannes; Schauß, Peter; Hild, Sebastian; Macrì, Tommaso; Bloch, Immanuel; Gross, Christian

    2015-07-01

    Strong interactions can amplify quantum effects such that they become important on macroscopic scales. Controlling these coherently on a single-particle level is essential for the tailored preparation of strongly correlated quantum systems and opens up new prospects for quantum technologies. Rydberg atoms offer such strong interactions, which lead to extreme nonlinearities in laser-coupled atomic ensembles. As a result, multiple excitation of a micrometer-sized cloud can be blocked while the light-matter coupling becomes collectively enhanced. The resulting two-level system, often called a "superatom," is a valuable resource for quantum information, providing a collective qubit. Here, we report on the preparation of 2 orders of magnitude scalable superatoms utilizing the large interaction strength provided by Rydberg atoms combined with precise control of an ensemble of ultracold atoms in an optical lattice. The latter is achieved with sub-shot-noise precision by local manipulation of a two-dimensional Mott insulator. We microscopically confirm the superatom picture by in situ detection of the Rydberg excitations and observe the characteristic square-root scaling of the optical coupling with the number of atoms. Enabled by the full control over the atomic sample, including the motional degrees of freedom, we infer the overlap of the produced many-body state with a W state from the observed Rabi oscillations and deduce the presence of entanglement. Finally, we investigate the breakdown of the superatom picture when two Rydberg excitations are present in the system, which leads to dephasing and a loss of coherence.

  20. Physical principles for scalable neural recording.

    Science.gov (United States)

    Marblestone, Adam H; Zamft, Bradley M; Maguire, Yael G; Shapiro, Mikhail G; Cybulski, Thaddeus R; Glaser, Joshua I; Amodei, Dario; Stranges, P Benjamin; Kalhor, Reza; Dalrymple, David A; Seo, Dongjin; Alon, Elad; Maharbiz, Michel M; Carmena, Jose M; Rabaey, Jan M; Boyden, Edward S; Church, George M; Kording, Konrad P

    2013-01-01

    Simultaneously measuring the activities of all neurons in a mammalian brain at millisecond resolution is a challenge beyond the limits of existing techniques in neuroscience. Entirely new approaches may be required, motivating an analysis of the fundamental physical constraints on the problem. We outline the physical principles governing brain activity mapping using optical, electrical, magnetic resonance, and molecular modalities of neural recording. Focusing on the mouse brain, we analyze the scalability of each method, concentrating on the limitations imposed by spatiotemporal resolution, energy dissipation, and volume displacement. Based on this analysis, all existing approaches require orders of magnitude improvement in key parameters. Electrical recording is limited by the low multiplexing capacity of electrodes and their lack of intrinsic spatial resolution, optical methods are constrained by the scattering of visible light in brain tissue, magnetic resonance is hindered by the diffusion and relaxation timescales of water protons, and the implementation of molecular recording is complicated by the stochastic kinetics of enzymes. Understanding the physical limits of brain activity mapping may provide insight into opportunities for novel solutions. For example, unconventional methods for delivering electrodes may enable unprecedented numbers of recording sites, embedded optical devices could allow optical detectors to be placed within a few scattering lengths of the measured neurons, and new classes of molecularly engineered sensors might obviate cumbersome hardware architectures. We also study the physics of powering and communicating with microscale devices embedded in brain tissue and find that, while radio-frequency electromagnetic data transmission suffers from a severe power-bandwidth tradeoff, communication via infrared light or ultrasound may allow high data rates due to the possibility of spatial multiplexing. The use of embedded local recording and

  1. Memory-Scalable GPU Spatial Hierarchy Construction.

    Science.gov (United States)

    Qiming Hou; Xin Sun; Kun Zhou; Lauterbach, C; Manocha, D

    2011-04-01

    Recent GPU algorithms for constructing spatial hierarchies have achieved promising performance for moderately complex models by using the breadth-first search (BFS) construction order. While being able to exploit the massive parallelism on the GPU, the BFS order also consumes excessive GPU memory, which becomes a serious issue for interactive applications involving very complex models with more than a few million triangles. In this paper, we propose to use the partial breadth-first search (PBFS) construction order to control memory consumption while maximizing performance. We apply the PBFS order to two hierarchy construction algorithms. The first algorithm is for kd-trees that automatically balances between the level of parallelism and intermediate memory usage. With PBFS, peak memory consumption during construction can be efficiently controlled without costly CPU-GPU data transfer. We also develop memory allocation strategies to effectively limit memory fragmentation. The resulting algorithm scales well with GPU memory and constructs kd-trees of models with millions of triangles at interactive rates on GPUs with 1 GB memory. Compared with existing algorithms, our algorithm is an order of magnitude more scalable for a given GPU memory bound. The second algorithm is for out-of-core bounding volume hierarchy (BVH) construction for very large scenes based on the PBFS construction order. At each iteration, all constructed nodes are dumped to the CPU memory, and the GPU memory is freed for the next iteration's use. In this way, the algorithm is able to build trees that are too large to be stored in the GPU memory. Experiments show that our algorithm can construct BVHs for scenes with up to 20 M triangles, several times larger than previous GPU algorithms.
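
    The memory-bounding idea behind the PBFS order can be sketched in a few lines of generic code. This is a toy illustration under assumed names (`expand`, `max_front`), not the paper's GPU implementation:

    ```python
    from collections import deque

    def construct_pbfs(root, expand, max_front=1024):
        """Toy sketch of partial breadth-first (PBFS) construction order:
        like BFS, but at most `max_front` nodes are expanded per iteration,
        and their children are processed before the deferred remainder,
        which bounds the size of the active working set. `expand(node)`
        returns the node's children (empty for leaves)."""
        front = deque([root])
        leaves = []
        while front:
            # Expand at most max_front nodes in this (parallelizable) batch.
            batch = [front.popleft() for _ in range(min(max_front, len(front)))]
            new_nodes = []
            for node in batch:
                kids = expand(node)
                if kids:
                    new_nodes.extend(kids)
                else:
                    leaves.append(node)  # leaf: construction finished here
            # Children go ahead of deferred nodes: finish subtrees early
            # instead of holding an entire level in memory at once.
            front.extendleft(reversed(new_nodes))
        return leaves
    ```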

  2. Computing Platforms for Big Biological Data Analytics: Perspectives and Challenges.

    Science.gov (United States)

    Yin, Zekun; Lan, Haidong; Tan, Guangming; Lu, Mian; Vasilakos, Athanasios V; Liu, Weiguo

    2017-01-01

    The last decade has witnessed an explosion in the amount of available biological sequence data, due to the rapid progress of high-throughput sequencing projects. However, the volume of biological data is becoming so great that traditional data analysis platforms and methods can no longer meet the need to rapidly perform data analysis tasks in the life sciences. As a result, both biologists and computer scientists are facing the challenge of gaining a profound insight into the deepest biological functions from big biological data. This in turn requires massive computational resources. Therefore, high performance computing (HPC) platforms are highly needed, as well as efficient and scalable algorithms that can take advantage of these platforms. In this paper, we survey the state-of-the-art HPC platforms for big biological data analytics. We first list the characteristics of big biological data and popular computing platforms. Then we provide a taxonomy of different biological data analysis applications and a survey of the way they have been mapped onto various computing platforms. After that, we present a case study to compare the efficiency of different computing platforms for handling the classical biological sequence alignment problem. Finally, we discuss the open issues in big biological data analytics.

  3. Scalable Parallel Distributed Coprocessor System for Graph Searching Problems with Massive Data

    Directory of Open Access Journals (Sweden)

    Wanrong Huang

    2017-01-01

    Full Text Available Internet applications, such as network searching, electronic commerce, and modern medical applications, produce and process massive data. Considerable data parallelism exists in the computation processes of data-intensive applications. The traversal algorithm breadth-first search (BFS) is fundamental in many graph processing applications and metrics as a graph grows in scale. A variety of scientific programming methods have been proposed for accelerating and parallelizing BFS because of the poor temporal and spatial locality caused by inherently irregular memory access patterns. However, new parallel hardware can provide better improvement for scientific methods. To address small-world graph problems, we propose a scalable and novel field-programmable gate array-based heterogeneous multicore system for scientific programming. Each core is multithreaded for stream processing, and the InfiniBand communication network is adopted for scalability. We design a binary-search-based address-mapping algorithm to unify all processor addresses. Within the limits permitted by the Graph500 test bench, after testing a 1D parallel hybrid BFS algorithm, our 8-core, 8-thread-per-core system achieved superior performance and efficiency compared with prior work under the same degree of parallelism. Our system is efficient not as a special acceleration unit but as a processor platform that deals with graph searching applications.
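
    For reference, the traversal pattern that such systems accelerate is level-synchronous BFS. A minimal sequential sketch (the `adj` dictionary and its partitioning across cores are illustrative assumptions, not the FPGA design itself):

    ```python
    def bfs_levels(adj, source):
        """Minimal level-synchronous BFS, the traversal pattern behind
        Graph500-style benchmarks. `adj` maps vertex -> iterable of
        neighbours; returns parent pointers. In a 1D-partitioned parallel
        variant, `frontier` would be split across processors and
        `next_front` merged at each level."""
        parent = {source: source}
        frontier = [source]
        while frontier:
            next_front = []
            for u in frontier:
                for v in adj[u]:
                    if v not in parent:      # first visit wins
                        parent[v] = u
                        next_front.append(v)
            frontier = next_front
        return parent

    # Example: a tiny graph with a cycle
    adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
    print(bfs_levels(adj, 0))
    ```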

  4. Digital quantum simulators in a scalable architecture of hybrid spin-photon qubits.

    Science.gov (United States)

    Chiesa, Alessandro; Santini, Paolo; Gerace, Dario; Raftery, James; Houck, Andrew A; Carretta, Stefano

    2015-11-13

    Resolving quantum many-body problems represents one of the greatest challenges in physics and physical chemistry, due to the prohibitively large computational resources that would be required by using classical computers. A solution has been foreseen by directly simulating the time evolution through sequences of quantum gates applied to arrays of qubits, i.e. by implementing a digital quantum simulator. Superconducting circuits and resonators are emerging as an extremely promising platform for quantum computation architectures, but a digital quantum simulator proposal that is straightforwardly scalable, universal, and realizable with state-of-the-art technology is presently lacking. Here we propose a viable scheme to implement a universal quantum simulator with hybrid spin-photon qubits in an array of superconducting resonators, which is intrinsically scalable and allows for local control. As representative examples we consider the transverse-field Ising model, a spin-1 Hamiltonian, and the two-dimensional Hubbard model and we numerically simulate the scheme by including the main sources of decoherence.
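
    The gate-sequence simulation strategy described here rests on the Trotter decomposition. In its standard first-order form (a textbook relation, not specific to this proposal):

    ```latex
    % Approximating continuous evolution under H = \sum_k H_k by a
    % repeated sequence of short evolutions under each term:
    e^{-iHt} \;=\; \Bigl(\,\prod_{k} e^{-iH_k t/n}\Bigr)^{n} \;+\; O\!\bigl(t^{2}/n\bigr),
    \qquad H = \sum_{k} H_k ,
    % with each factor realized as a short sequence of one- and
    % two-qubit gates on the hybrid spin-photon qubits.
    ```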

  5. Scalability of Several Asynchronous Many-Task Models for In Situ Statistical Analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Pebay, Philippe Pierre [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Bennett, Janine Camille [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Kolla, Hemanth [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Borghesi, Giulio [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2017-05-01

    This report is a sequel to [PB16], in which we provided a first progress report on research and development towards a scalable, asynchronous many-task, in situ statistical analysis engine using the Legion runtime system. This earlier work included a prototype implementation of a proposed solution, using a proxy mini-application as a surrogate for a full-scale scientific simulation code. The first scalability studies were conducted with the above on modestly-sized experimental clusters. In contrast, in the current work we have integrated our in situ analysis engines with a full-size scientific application (S3D, using the Legion-SPMD model), and have conducted numerical tests on the largest computational platform currently available for DOE science applications. We also provide details regarding the design and development of a light-weight asynchronous collectives library. We describe how this library is utilized within our SPMD-Legion S3D workflow, and compare the data aggregation technique deployed herein to the approach taken within our previous work.

  6. A Hybrid MPI-OpenMP Scheme for Scalable Parallel Pseudospectral Computations for Fluid Turbulence

    Science.gov (United States)

    Rosenberg, D. L.; Mininni, P. D.; Reddy, R. N.; Pouquet, A.

    2010-12-01

    A hybrid scheme that utilizes MPI for distributed memory parallelism and OpenMP for shared memory parallelism is presented. The work is motivated by the desire to achieve exceptionally high Reynolds numbers in pseudospectral computations of fluid turbulence on emerging petascale, high core-count, massively parallel processing systems. The hybrid implementation derives from and augments a well-tested scalable MPI-parallelized pseudospectral code. The hybrid paradigm leads to a new picture for the domain decomposition of the pseudospectral grids, which is helpful in understanding, among other things, the 3D transpose of the global data that is necessary for the parallel fast Fourier transforms that are the central component of the numerical discretizations. Details of the hybrid implementation are provided, and performance tests illustrate the utility of the method. It is shown that the hybrid scheme achieves near ideal scalability up to ~20000 compute cores with a maximum mean efficiency of 83%. Data are presented that demonstrate how to choose the optimal number of MPI processes and OpenMP threads in order to optimize code performance on two different platforms.
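
    The global transpose of the distributed data is the communication heart of such pseudospectral codes. A minimal 2D sketch using mpi4py (sizes, names, and the slab layout are illustrative assumptions; the shared-memory half of the hybrid scheme is delegated here to a threaded FFT library rather than explicit OpenMP):

    ```python
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    P, rank = comm.Get_size(), comm.Get_rank()
    N = 512                              # global grid, assumed divisible by P
    rows = N // P
    slab = np.random.rand(rows, N) + 0j  # this rank's slab of the N x N grid

    # Transform the locally complete axis (threads inside the FFT library
    # play the OpenMP role in the hybrid scheme).
    slab = np.fft.fft(slab, axis=1)

    # Global transpose via Alltoall so the other axis becomes local.
    send = np.ascontiguousarray(
        slab.reshape(rows, P, N // P).transpose(1, 0, 2))
    recv = np.empty_like(send)
    comm.Alltoall(send, recv)
    slab_t = recv.transpose(2, 0, 1).reshape(N // P, N)

    # Transform the remaining axis, now locally complete.
    slab_t = np.fft.fft(slab_t, axis=1)
    ```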

  7. Continuous Platform Development

    DEFF Research Database (Denmark)

    Nielsen, Ole Fiil

    low risks and investments but also with relatively fuzzy results. When looking for new platform projects, it is important to make sure that the company and market is ready for the introduction of platforms, and to make sure that people from marketing and sales, product development, and downstream...... departments are all consulted. Platform ideas originate primarily from experienced workers, managers, or platform thinkers. Platform projects are presented regularly for future users and decision makers in structured presentations consisting of both quantitative estimates of costs and benefits and qualitative...

  8. A Low-Power Scalable Stream Compute Accelerator for General Matrix Multiply (GEMM

    Directory of Open Access Journals (Sweden)

    Antony Savich

    2014-01-01

    play an important role in determining the performance of such applications. This paper proposes a novel, efficient, highly scalable hardware accelerator that matches the performance of a 2 GHz quad-core PC but can be used in low-power applications targeting embedded systems requiring high-performance computation. Power, performance, and resource consumption are demonstrated on a fully-functional prototype. The proposed hardware accelerator is 36× more energy efficient per unit of computation than a state-of-the-art Xeon processor of equal vintage, and 14× more efficient as a stand-alone platform with equivalent performance. An important comparison between simulated system estimates and real system performance is carried out.
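
    The data reuse that makes GEMM amenable to such streaming accelerators comes from tiling. A pure-Python sketch of the tiling pattern (the tile size and names are illustrative; this is not the paper's hardware design):

    ```python
    import numpy as np

    def gemm_tiled(A, B, tile=64):
        """Naive tiled GEMM, C = A @ B. A streaming accelerator moves
        tiles of this size through local buffers so each operand element
        is reused many times per off-chip fetch; the tile size here is an
        arbitrary illustrative choice."""
        n, k = A.shape
        k2, m = B.shape
        assert k == k2
        C = np.zeros((n, m), dtype=A.dtype)
        for i0 in range(0, n, tile):
            for j0 in range(0, m, tile):
                for l0 in range(0, k, tile):
                    # One tile-sized multiply-accumulate: the unit of work
                    # a streaming compute engine would pipeline.
                    C[i0:i0+tile, j0:j0+tile] += (
                        A[i0:i0+tile, l0:l0+tile] @ B[l0:l0+tile, j0:j0+tile])
        return C

    A, B = np.random.rand(128, 96), np.random.rand(96, 160)
    assert np.allclose(gemm_tiled(A, B), A @ B)
    ```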

  9. Scalable ionic gelation synthesis of chitosan nanoparticles for drug delivery in static mixers.

    Science.gov (United States)

    Dong, Yuancai; Ng, Wai Kiong; Shen, Shoucang; Kim, Sanggu; Tan, Reginald B H

    2013-05-15

    The purpose of this study is to synthesize chitosan (CS) nanoparticles (NPs) by ionic gelation with tripolyphosphate (TPP) as crosslinker in static mixers. The proposed static mixing technique showed good control over the ionic gelation process, and 152-376 nm CS NPs were achieved in a continuous and scalable mode. Increasing the flow rates of the CS:TPP solution streams, decreasing the CS concentration, or reducing the CS:TPP solution volume ratio led to smaller particles. Salicylic acid (SA) was used as a model drug and successfully loaded into the CS NPs during the fabrication process. Our work demonstrates that ionic gelation-static mixing is a robust platform for continuous and large-scale production of CS NPs for drug delivery. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. Scalable collision detection using p-partition fronts on many-core processors.

    Science.gov (United States)

    Zhang, Xinyu; Kim, Young J

    2014-03-01

    We present a new parallel algorithm for collision detection using many-core computing platforms of CPUs or GPUs. Based on the notion of a p-partition front, our algorithm is able to evenly partition and distribute the workload of BVH traversal among multiple processing cores without the need for dynamic balancing, while minimizing the memory overhead inherent to state-of-the-art parallel collision detection algorithms. We demonstrate the scalability of our algorithm on different benchmarking scenarios with and without using temporal coherence, including dynamic simulation of rigid bodies, cloth simulation, and random collision courses. In these experiments, we observe nearly linear performance improvement in terms of the number of processing cores on the CPUs and GPUs.
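
    The even, balance-free workload split can be illustrated with a toy front partitioner. This is a simplified stand-in for the paper's p-partition front, with illustrative names and no actual BVH:

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def aabb_overlap(a, b):
        """Axis-aligned box test; each box is ((xmin,ymin,zmin),(xmax,ymax,zmax))."""
        return all(a[0][d] <= b[1][d] and b[0][d] <= a[1][d] for d in range(3))

    def check_chunk(pairs):
        return [p for p in pairs if aabb_overlap(*p)]

    def parallel_front_check(front, p=4):
        """Evenly split a traversal front (a list of box pairs) into p
        chunks and test them concurrently -- a static, even partition in
        the spirit of a p-partition front, with no dynamic balancing."""
        chunk = (len(front) + p - 1) // p
        chunks = [front[i:i+chunk] for i in range(0, len(front), chunk)]
        with ThreadPoolExecutor(max_workers=p) as pool:
            results = pool.map(check_chunk, chunks)
        return [hit for part in results for hit in part]
    ```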

  11. CHORUS – providing a scalable solution for public access to scholarly research

    Directory of Open Access Journals (Sweden)

    Howard Ratner

    2014-03-01

    Full Text Available CHORUS (Clearinghouse for the Open Research of the United States) offers an open technology platform in response to the public access requirements of US federal funding agencies, researchers, institutions and the public. It is focused on five principal sets of functions: 'identification', 'preservation', 'discovery', 'access', and 'compliance'. CHORUS facilitates public access to peer-reviewed publications, after a determined embargo period (where applicable) for each discipline and agency. By leveraging existing tools such as CrossRef, FundRef and ORCID, CHORUS allows a greater proportion of funding to remain focused on research. CHORUS identifies articles that report on federally funded research and enables a reader to access the 'best available version' free of charge, via the publisher. It is a scalable solution that offers maximum efficiency for all parties by automating as much of the process as is possible. CHORUS launched in pilot phase in September 2013, and the production phase will begin in early 2014.

  12. A scalable silicon photonic chip-scale optical switch for high performance computing systems.

    Science.gov (United States)

    Yu, Runxiang; Cheung, Stanley; Li, Yuliang; Okamoto, Katsunari; Proietti, Roberto; Yin, Yawei; Yoo, S J B

    2013-12-30

    This paper discusses the architecture and provides performance studies of a silicon photonic chip-scale optical switch for scalable interconnect network in high performance computing systems. The proposed switch exploits optical wavelength parallelism and wavelength routing characteristics of an Arrayed Waveguide Grating Router (AWGR) to allow contention resolution in the wavelength domain. Simulation results from a cycle-accurate network simulator indicate that, even with only two transmitter/receiver pairs per node, the switch exhibits lower end-to-end latency and higher throughput at high (>90%) input loads compared with electronic switches. On the device integration level, we propose to integrate all the components (ring modulators, photodetectors and AWGR) on a CMOS-compatible silicon photonic platform to ensure a compact, energy efficient and cost-effective device. We successfully demonstrate proof-of-concept routing functions on an 8 × 8 prototype fabricated using foundry services provided by OpSIS-IME.

  13. Cross-Platform Technologies

    Directory of Open Access Journals (Sweden)

    Maria Cristina ENACHE

    2017-04-01

    Full Text Available Cross-platform is a concept that has become increasingly common in recent years, especially in the development of mobile apps, but it has also been applied consistently over time in the development of conventional desktop applications. The notion of cross-platform software (multi-platform or platform-independent) refers to a software application that can run on more than one operating system or computing architecture. Thus, a cross-platform application can operate independently of the software or hardware platform on which it is executed. Since this generic definition admits a wide range of meanings, for the purposes of this paper we narrow it and use the following working definition: a cross-platform application is a software application that can run on more than one operating system (desktop or mobile) in an identical or similar way.

  14. Product Platform Modeling

    DEFF Research Database (Denmark)

    Pedersen, Rasmus

    as important assets in a product platform, yet activities, working patterns, processes and knowledge can also be reused in a platform approach. Encapsulation is seen as a process in which the different elements of a platform are grouped into well defined and self-contained units which are decoupled from each......This PhD thesis has the title Product Platform Modelling. The thesis is about product platforms and visual product platform modelling. Product platforms have gained an increasing attention in industry and academia in the past decade. The reasons are many, yet the increasing globalisation...... and the change in the global economy seem to be major factors. Manufacturing companies have experienced an intensifying competition and many companies face increasing demands for reductions in costs and lead times in development and production. At the same time many customers have raised their demands...

  15. Use of the NetBeans Platform for NASA Robotic Conjunction Assessment Risk Analysis

    Science.gov (United States)

    Sabey, Nickolas J.

    2014-01-01

    The latest Java and JavaFX technologies are very attractive software platforms for customers involved in space mission operations such as those of NASA and the US Air Force. For NASA Robotic Conjunction Assessment Risk Analysis (CARA), the NetBeans platform provided an environment in which scalable software solutions could be developed quickly and efficiently. Both Java 8 and the NetBeans platform are in the process of simplifying CARA development in secure environments by providing a significant amount of capability in a single accredited package, where accreditation alone can account for 6-8 months for each library or software application. Capabilities either in use or being investigated by CARA include: 2D and 3D displays with JavaFX, parallelization with the new Streams API, and scalability through the NetBeans plugin architecture.

  16. Robust, small-scale cultivation platform for Streptomyces coelicolor

    DEFF Research Database (Denmark)

    Sohoni, Sujata Vijay; Bapat, Prashant Madhusudan; Lantz, Anna Eliasson

    2012-01-01

    on approximate estimation of scalable physiological traits. Microtiter plate (MTP) based screening platforms have lately become an attractive alternative to shake flasks, mainly because of the ease of automation. However, there are very few reports on applications for filamentous organisms, as well as efforts....... The MTP cultivations were found to behave similarly to bench-scale in terms of growth rate, productivity and substrate uptake rate, as was the onset of antibiotic synthesis. Shake flask cultivations, however, showed discrepancies with respect to morphology and had considerably reduced volumetric production...... rates of antibiotics. CONCLUSION: We observed good agreement of the physiological data obtained in the developed MTP platform with bench-scale. Hence, the described MTP-based screening platform has a high potential for investigation of secondary metabolite biosynthesis in Streptomycetes and other...

  17. Building a Hybrid Experimental Platform for Mobile Botnet Research

    Directory of Open Access Journals (Sweden)

    Apostolos Malatras

    2016-03-01

    Full Text Available Mobile botnets are an emerging security threat that aims at exploiting the wide penetration of mobile devices and systems and their vulnerabilities, in the same spirit as traditional botnets. Mobile botmasters take advantage of infected mobile devices and issue command and control operations on them to extract personal information, cause denial of service or gain financially. To date, research on countering such attacks or studying their effects has been conducted in a sporadic manner that hinders the repetition of experiments and thus limits their validity. We present here our work on a hybrid experimental platform for mobile botnets that supports the execution and monitoring of related scenarios concerning their infection, attack vectors, propagation, etc. The platform is based on principles of flexibility and extensibility, and facilitates the setup of scalable experiments utilising both real and emulated mobile systems. We also discuss a novel method of estimating the active bot population in a botnet and illustrate its deployment on the experimental platform.

  18. D2.3 - ENCOURAGE platform reference architecture

    DEFF Research Database (Denmark)

    Ferreira, Luis Lino; Pinho, Luis Miguel; Albano, Michele

    2012-01-01

    of heterogeneous (both new and legacy) systems, by abstracting from the technologies within the buildings and supporting multiple independent gateways, creating the notion of a single abstract interface. The document defines the overall architecture of the ENCOURAGE platform, presenting the structure......This document describes the reference architecture of the ENCOURAGE platform, together with the interconnection of the platform with the external environment. The document is the outcome of task 2.3 (Design of System Architecture) of the ENCOURAGE project, and sets, together with the remaining...... and standardization initiatives, in the areas addressed by ENCOURAGE. Also, related existing architectures are analysed for consistency and state-of-the-art survey. This allows for ENCOURAGE to build on current practice, innovating in its modularity, scalability and support for seamless interoperability...

  19. Making Spatial Statistics Service Accessible On Cloud Platform

    Science.gov (United States)

    Mu, X.; Wu, J.; Li, T.; Zhong, Y.; Gao, X.

    2014-04-01

    Web services can bring together applications running on diverse platforms; users can access and share various data, information and models more effectively and conveniently from a web service platform. Cloud computing emerges as a paradigm of Internet computing in which dynamic, scalable and often virtualized resources are provided as services. With the rapid growth of massive data and the restrictions of networks, traditional web service platforms face prominent problems in development, such as computational efficiency, maintenance cost and data security. In this paper, we offer a spatial statistics service based on the Microsoft cloud. An experiment was carried out to evaluate the availability and efficiency of this service. The results show that this spatial statistics service is conveniently accessible to the public, with high processing efficiency.

  20. Scalable quantum information processing with photons and atoms

    Science.gov (United States)

    Pan, Jian-Wei

    Over the past three decades, the promises of super-fast quantum computing and secure quantum cryptography have spurred a world-wide interest in quantum information, generating fascinating quantum technologies for coherent manipulation of individual quantum systems. However, the distance of fiber-based quantum communications is limited due to intrinsic fiber loss and the degradation of entanglement quality. Moreover, probabilistic single-photon and entanglement sources demand exponentially increased overheads for scalable quantum information processing. To overcome these problems, we are taking two paths in parallel: quantum repeaters and satellites. We used the decoy-state QKD protocol to close the loophole of imperfect photon sources, and used the measurement-device-independent QKD protocol to close the loophole of imperfect photon detectors--the two main loopholes in quantum cryptography. Based on these techniques, we are now building the world's biggest quantum secure communication backbone, from Beijing to Shanghai, with a distance exceeding 2000 km. Meanwhile, we are developing practically useful quantum repeaters that combine entanglement swapping, entanglement purification, and quantum memory for ultra-long-distance quantum communication. The second line is satellite-based global quantum communication, taking advantage of the negligible photon loss and decoherence in the atmosphere. We realized teleportation and entanglement distribution over 100 km, and later on a rapidly moving platform. We are also making efforts toward the generation of multiphoton entanglement and its use in teleportation of multiple properties of a single quantum particle, topological error correction, quantum algorithms for solving systems of linear equations, and machine learning. Finally, I will talk about our recent experiments on quantum simulations with ultracold atoms. On the one hand, by applying an optical Raman lattice technique, we realized a two-dimensional spin-orbit (SO

  1. On Using Cloud Platforms in a Software Architecture for Smart Energy Grids

    Energy Technology Data Exchange (ETDEWEB)

    Simmhan, Yogesh [Univ. of Southern California, Los Angeles, CA (United States); Giakkoupis, Michail [Univ. of Southern California, Los Angeles, CA (United States); Cao, Baohua [Univ. of Southern California, Los Angeles, CA (United States); Prasanna, Viktor K. [Univ. of Southern California, Los Angeles, CA (United States)

    2010-11-30

    Increasing concern about energy consumption is leading to infrastructure that continuously monitors consumer energy usage and allows power utilities to provide dynamic feedback to curtail peak power load. Smart Grid infrastructure being deployed globally needs scalable software platforms to rapidly integrate and analyze information streaming from millions of smart meters, forecast power usage, and respond to operational events. Cloud platforms are well suited to support such data- and compute-intensive, always-on applications. We examine opportunities and challenges of using cloud platforms for such applications in the emerging domain of energy informatics.

  2. Nanopipette combined with quartz tuning fork-atomic force microscope for force spectroscopy/microscopy and liquid delivery-based nanofabrication

    Science.gov (United States)

    An, Sangmin; Lee, Kunyoung; Kim, Bongsu; Noh, Haneol; Kim, Jongwoo; Kwon, Soyoung; Lee, Manhee; Hong, Mun-Heon; Jhe, Wonho

    2014-03-01

    This paper introduces a nanopipette combined with a quartz tuning fork-atomic force microscope system (nanopipette/QTF-AFM), and describes experimental and theoretical investigations of the nanoscale materials used. The system offers several advantages over conventional cantilever-based AFM and QTF-AFM systems, including simple control of the quality factor based on the contact position of the QTF, easy variation of the effective tip diameter, electrical detection, on-demand delivery and patterning of various solutions, and in situ surface characterization after patterning. This tool enables nanoscale liquid delivery and nanofabrication processes without damaging the apex of the tip in various environments, and also offers force spectroscopy and microscopy capabilities.

  4. Platform switching and bone platform switching.

    Science.gov (United States)

    Carinci, Francesco; Brunelli, Giorgio; Danza, Matteo

    2009-01-01

    Bone platform switching involves an inward bone ring in the coronal part of the implant that is in continuity with the alveolar bone crest. Bone platform switching is obtained by using a dental fixture with a reverse conical neck. A retrospective study was performed to evaluate the effectiveness of conventional vs reverse conical neck implants. In the period between May 2004 and November 2007, 86 patients (55 females and 31 males; median age, 53 years) were operated on and 234 implants were inserted: 40 were conventional and 194 were reverse conical neck implants. The Kaplan-Meier algorithm and Cox regression were used to detect those variables associated with the clinical outcome. No differences in survival and success rates were detected between conventional vs reverse conical neck implants, alone or in combination with any of the studied variables. Although bone platform switching leads to several advantages, no statistical difference in alveolar crest resorption is detected in comparison with reverse conical neck implants. We suppose that the proximity of the implant-abutment junction to the alveolar crestal bone gives no protection against the microflora contained in the microgap. Additional studies on larger series and a combination of platform switching and bone platform switching could lead to improved clinical outcomes.

  5. ADMS Evaluation Platform

    Energy Technology Data Exchange (ETDEWEB)

    2018-01-23

    Deploying an ADMS or looking to optimize its value? NREL offers a low-cost, low-risk evaluation platform for assessing ADMS performance. The National Renewable Energy Laboratory (NREL) has developed a vendor-neutral advanced distribution management system (ADMS) evaluation platform and is expanding its capabilities. The platform uses actual grid-scale hardware, large-scale distribution system models, and advanced visualization to simulate real-world conditions for the most accurate ADMS evaluation and experimentation.

  6. Affordable and Scalable Manufacturing of Wearable Multi-Functional Sensory “Skin” for Internet of Everything Applications

    KAUST Repository

    Nassar, Joanna M.

    2017-10-01

    Demand for wearable electronics is expected to at least triple by 2020, embracing all sorts of Internet of Everything (IoE) applications, such as activity tracking, environmental mapping, and advanced healthcare monitoring, with the purpose of enhancing the quality of life. This entails the wide availability of free-form multifunctional sensory systems (i.e., "skin" platforms) that can conform to a variety of uneven surfaces, providing the intimate contact and adhesion with the skin necessary for localized and enhanced sensing capabilities. However, current wearable devices tend to be bulky, rigid and not convenient for continuous wear in everyday life, hindering their implementation in advanced and unexplored applications beyond fitness tracking. Besides, they retail at high prices, which puts them beyond the reach of at least half of the world's population. Hence, form factor (physical flexibility and/or stretchability), cost, and accessibility become the key drivers for further developments. To support this need for affordable and adaptive wearables and drive academic developments in "skin" platforms into practical and functional consumer devices, compatibility and integration into a high-performance yet low-power system is crucial to sustain the high data rates and large data management driven by IoE. Likewise, scalability becomes essential for batch fabrication and precision. Therefore, I propose to develop three distinct but necessary "skin" platforms using scalable and cost-effective manufacturing techniques. My first approach is the fabrication of a CMOS-compatible "silicon skin", crucial for any truly autonomous and conformal wearable device, where monolithic integration between a heterogeneous material-based sensory platform and system components is a challenge yet to be addressed. My second approach displays an even more affordable and accessible "paper skin", using recyclable and off-the-shelf materials, targeting environmental

  7. Quality Scalability Compression on Single-Loop Solution in HEVC

    Directory of Open Access Journals (Sweden)

    Mengmeng Zhang

    2014-01-01

    Full Text Available This paper proposes a quality scalable extension design for the upcoming high efficiency video coding (HEVC) standard. In the proposed design, the single-loop decoder solution is extended into the proposed scalable scenario. A novel interlayer intra/interprediction is added to reduce the number of bits needed for representation by exploiting the correlation between coding layers. The experimental results indicate that an average Bjøntegaard delta rate decrease of 20.50% can be gained compared with simulcast encoding. The proposed technique achieved a 47.98% Bjøntegaard delta rate reduction compared with the scalable video coding extension of H.264/AVC. Consequently, significant rate savings confirm that the proposed method achieves better performance.

  8. Technical Report: Scalable Parallel Algorithms for High Dimensional Numerical Integration

    Energy Technology Data Exchange (ETDEWEB)

    Masalma, Yahya [Universidad del Turabo; Jiao, Yu [ORNL

    2010-10-01

    We implemented a scalable parallel quasi-Monte Carlo method for high-dimensional numerical integration over tera-scale data points. The implemented algorithm uses Sobol's quasi-random sequences to generate samples. Sobol's sequence was used to avoid clustering effects in the generated samples and to produce low-discrepancy samples which cover the entire integration domain. The performance of the algorithm was tested. The obtained results prove the scalability and accuracy of the implemented algorithm. The implemented algorithm could be used in different applications where a huge data volume is generated and numerical integration is required. We suggest using the hybrid MPI and OpenMP programming model to improve the performance of the algorithms. If the mixed model is used, attention should be paid to scalability and accuracy.
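
    A minimal serial sketch of the sampling scheme using SciPy's Sobol generator (function names and sizes are illustrative; the MPI/OpenMP layer of the actual implementation is omitted):

    ```python
    import numpy as np
    from scipy.stats import qmc

    def qmc_integrate(f, dim, m=16, seed=0):
        """Estimate the integral of f over the unit hypercube [0,1]^dim
        with 2**m scrambled Sobol points. Low-discrepancy samples cover
        the domain evenly, avoiding the clustering of plain pseudo-random
        draws. In a parallel version, each MPI rank would consume a
        disjoint block of the sequence."""
        sampler = qmc.Sobol(d=dim, scramble=True, seed=seed)
        pts = sampler.random_base2(m=m)   # 2**m points, shape (2**m, dim)
        return np.mean(f(pts))

    # Example: integral of sum(x_i^2) over [0,1]^6 (exact value: 6/3 = 2)
    est = qmc_integrate(lambda x: np.sum(x**2, axis=1), dim=6)
    print(est)
    ```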

  9. Natural product synthesis in the age of scalability.

    Science.gov (United States)

    Kuttruff, Christian A; Eastgate, Martin D; Baran, Phil S

    2014-04-01

    The ability to procure useful quantities of a molecule by simple, scalable routes is emerging as an important goal in natural product synthesis. Approaches to molecules that yield substantial material enable collaborative investigations (such as SAR studies or eventual commercial production) and inherently spur innovation in chemistry. As such, when evaluating a natural product synthesis, scalability is becoming an increasingly important factor. In this Highlight, we discuss recent examples of natural product synthesis from our laboratory and others, where the preparation of gram-scale quantities of a target compound or a key intermediate allowed for a deeper understanding of biological activities or enabled further investigational collaborations.

  10. Providing scalable system software for high-end simulations

    Energy Technology Data Exchange (ETDEWEB)

    Greenberg, D. [Sandia National Labs., Albuquerque, NM (United States)

    1997-12-31

    Detailed, full-system, complex physics simulations have been shown to be feasible on systems containing thousands of processors. In order to manage these computer systems it has been necessary to create scalable system services. In this talk Sandia's research on scalable systems will be described. The key concepts of low overhead data movement through portals and of flexible services through multi-partition architectures will be illustrated in detail. The talk will conclude with a discussion of how these techniques can be applied outside of the standard monolithic MPP system.

  11. Scalable and Hybrid Radio Resource Management for Future Wireless Networks

    DEFF Research Database (Denmark)

    Mino, E.; Luo, Jijun; Tragos, E.

    2007-01-01

    The concept of a ubiquitous and scalable system is applied in the IST WINNER II [1] project to deliver optimum performance for different deployment scenarios, from local area to wide area wireless networks. The integration in a unique radio system of cellular and local area type networks supposes a great advantage for the final user and for the operator, compared with the current situation, with disconnected systems, usually with different subscriptions, radio interfaces and terminals. To be a ubiquitous wireless system, the IST project WINNER II has defined three system modes. This contribution describes a proposal for scalable and hybrid radio resource management to efficiently integrate the different WINNER system modes.

  12. Scalability limitations of VIA-based technologies in supporting MPI

    Energy Technology Data Exchange (ETDEWEB)

    BRIGHTWELL,RONALD B.; MACCABE,ARTHUR BERNARD

    2000-04-17

    This paper analyzes the scalability limitations of networking technologies based on the Virtual Interface Architecture (VIA) in supporting the runtime environment needed for an implementation of the Message Passing Interface. The authors present an overview of the important characteristics of VIA and an overview of the runtime system being developed as part of the Computational Plant (Cplant) project at Sandia National Laboratories. They discuss the characteristics of VIA that prevent implementations based on this system to meet the scalability and performance requirements of Cplant.

  13. A Scalable Smart Meter Data Generator Using Spark

    DEFF Research Database (Denmark)

    Iftikhar, Nadeem; Liu, Xiufeng; Danalachi, Sergiu

    2017-01-01

    Today, smart meters are being used worldwide, and they produce large volumes of data. Thus, it is important for smart meter data management and analytics systems to be able to process petabytes of data. Benchmarking and testing of these systems require scalable data; however, it can...... be challenging to get large data sets due to privacy and/or data protection regulations. This paper presents a scalable smart meter data generator using Spark that can generate realistic data sets. The proposed data generator is based on a supervised machine learning method that can generate data of any size...
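
    The generation pattern is easy to sketch with PySpark. Note that the uniform noise below is a placeholder for the paper's supervised machine-learning model, and all names and sizes are illustrative:

    ```python
    from pyspark.sql import SparkSession
    import pyspark.sql.functions as F

    spark = SparkSession.builder.appName("meter-data-gen").getOrCreate()

    n_meters, n_hours = 1_000_000, 24 * 7   # one week of hourly readings

    # One row per (meter, hour); Spark parallelizes the generation, which
    # is what lets the approach scale to very large benchmark data sets.
    readings = (
        spark.range(0, n_meters * n_hours)
             .withColumn("meter_id", F.col("id") % n_meters)
             .withColumn("hour", (F.col("id") / n_meters).cast("long"))
             # Uniform noise stands in for the learned consumption model.
             .withColumn("kwh", F.round(F.rand(seed=42) * 2.5, 3))
             .drop("id"))

    readings.write.mode("overwrite").parquet("/tmp/synthetic_readings")
    ```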

  14. Investigating methods of supporting dynamically linked executables on high performance computing platforms.

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, Suzanne Marie; Laros, James H., III; Pedretti, Kevin Thomas Tauke; Levenhagen, Michael J.

    2009-09-01

    Shared libraries have become ubiquitous and are used to achieve great resource efficiencies on many platforms. The same properties that enable efficiencies on time-shared computers and convenience on small clusters prove to be great obstacles to scalability on large clusters and High Performance Computing platforms. In addition, Light Weight operating systems such as Catamount have historically not supported the use of shared libraries, specifically because they hinder scalability. In this report we outline the methods of supporting shared libraries on High Performance Computing platforms using Light Weight kernels that we investigated. The considerations necessary to evaluate utility in this area are many and sometimes conflicting. While our initial path forward has been determined based on this evaluation, we consider this effort ongoing and remain prepared to re-evaluate any technology that might provide a scalable solution. This report is an evaluation of a range of possible methods of supporting dynamically linked executables on capability-class High Performance Computing platforms. Efforts are ongoing and extensive testing at scale is necessary to evaluate performance. While performance is a critical driving factor, supporting whatever method is used in a production environment is an equally important and challenging task.

  15. Platform development supportedby gaming

    DEFF Research Database (Denmark)

    Mikkola, Juliana Hsuan; Hansen, Poul H. Kyvsgård

    2007-01-01

    , possibly increasing the strategic risks for the firm. This paper reports preliminary findings on the platform management process at LEGO, a Danish toy company. Specifically, we report the process of applying games combined with simulations and workshops in the platform development. We also propose a framework...

  16. Green factory: plants as bioproduction platforms for recombinant proteins.

    Science.gov (United States)

    Xu, Jianfeng; Dolan, Maureen C; Medrano, Giuliana; Cramer, Carole L; Weathers, Pamela J

    2012-01-01

    Molecular farming, long considered a promising strategy to produce valuable recombinant proteins not only for human and veterinary medicine, but also for agriculture and industry, now has some commercially available products. Various plant-based production platforms including whole-plants, aquatic plants, plant cell suspensions, and plant tissues (hairy roots) have been compared in terms of their advantages and limits. Effective recombinant strategies are summarized along with descriptions of scalable culture systems and examples of commercial progress and success. Copyright © 2011 Elsevier Inc. All rights reserved.

  17. Fable: Design of a Modular Robotic Playware Platform

    DEFF Research Database (Denmark)

    Pacheco, Moises; Moghadam, Mikael; Magnússon, Arnþór

    2013-01-01

    We are developing the Fable modular robotic system as a playware platform that will enable non-expert users to develop robots ranging from advanced robotic toys to robotic solutions to problems encountered in their daily lives. This paper presents the mechanical design of Fable: a chain-based system composed of reconfigurable heterogeneous modules with a reliable and scalable connector. Furthermore, this paper describes tests where the connector design is tested with children, and presents examples of a moving snake and a quadruped robot, as well as an interactive upper humanoid torso.

  18. A holistic approach to SIM platform and its application to early-warning satellite system

    Science.gov (United States)

    Sun, Fuyu; Zhou, Jianping; Xu, Zheyao

    2018-01-01

    This study proposes a new simulation platform named Simulation Integrated Management (SIM) for the analysis of parallel and distributed systems. The platform eases the process of designing and testing both applications and architectures. The main characteristics of SIM are flexibility, scalability, and expandability. To improve the efficiency of project development, new models of an early-warning satellite system were designed based on the SIM platform. Finally, through a series of experiments, the correctness of the SIM platform and the aforementioned early-warning satellite models was validated, and systematic analyses of the orbit determination precision of a ballistic missile during its entire flight process are presented, as well as the deviation of the launch/landing point. Furthermore, the causes of the deviation and methods for its prevention are fully explained. The simulation platform and the models will lay the foundations for further validations of autonomy technology in space attack-defense architecture research.

  19. Final Scientific/Technical Report for "Enabling Exascale Hardware and Software Design through Scalable System Virtualization"

    Energy Technology Data Exchange (ETDEWEB)

    Dinda, Peter August [Northwestern Univ., Evanston, IL (United States)

    2015-03-17

    This report describes the activities, findings, and products of the Northwestern University component of the "Enabling Exascale Hardware and Software Design through Scalable System Virtualization" project. The purpose of this project has been to extend the state of the art of systems software for high-end computing (HEC) platforms, and to use systems software to better enable the evaluation of potential future HEC platforms, for example exascale platforms. Such platforms, and their systems software, have the goal of providing scientific computation at new scales, thus enabling new research in the physical sciences and engineering. Over time, the innovations in systems software for such platforms also become applicable to more widely used computing clusters, data centers, and clouds. This was a five-institution project, centered on the Palacios virtual machine monitor (VMM) systems software, a project begun at Northwestern, and originally developed in a previous collaboration between Northwestern University and the University of New Mexico. In this project, Northwestern (including via our subcontract to the University of Pittsburgh) contributed to the continued development of Palacios, along with other team members. We took the leadership role in (1) continued extension of support for emerging Intel and AMD hardware, (2) integration and performance enhancement of overlay networking, (3) connectivity with architectural simulation, (4) binary translation, and (5) support for modern Non-Uniform Memory Access (NUMA) hosts and guests. We also took a supporting role in support for specialized hardware for I/O virtualization, profiling, configurability, and integration with configuration tools. The efforts we led (1-5) were largely successful and executed as expected, with code and papers resulting from them. The project demonstrated the feasibility of a virtualization layer for HEC computing, similar to such layers for cloud or datacenter computing. For effort (3

  20. Scalable Track Initiation for Optical Space Surveillance

    Science.gov (United States)

    Schumacher, P.; Wilkins, M. P.

    2012-09-01

    least cubic and commonly quartic or higher. Therefore, practical implementations require attention to the scalability of the algorithms, when one is dealing with the very large number of observations from large surveillance telescopes. We address two broad categories of algorithms. The first category includes and extends the classical methods of Laplace and Gauss, as well as the more modern method of Gooding, in which one solves explicitly for the apparent range to the target in terms of the given data. In particular, recent ideas offered by Mortari and Karimi allow us to construct a family of range-solution methods that can be scaled to many processors efficiently. We find that the orbit solutions (data association hypotheses) can be ranked by means of a concept we call persistence, in which a simple statistical measure of likelihood is based on the frequency of occurrence of combinations of observations in consistent orbit solutions. Of course, range-solution methods can be expected to perform poorly if the orbit solutions of most interest are not well conditioned. The second category of algorithms addresses this difficulty. Instead of solving for range, these methods attach a set of range hypotheses to each measured line of sight. Then all pair-wise combinations of observations are considered and the family of Lambert problems is solved for each pair. These algorithms also have polynomial complexity, though now the complexity is quadratic in the number of observations and also quadratic in the number of range hypotheses. We offer a novel type of admissible-region analysis, constructing partitions of the orbital element space and deriving rigorous upper and lower bounds on the possible values of the range for each partition. This analysis allows us to parallelize with respect to the element partitions and to reduce the number of range hypotheses that have to be considered in each processor simply by making the partitions smaller. Naturally, there are many ways to

  2. Bioinformatics on the cloud computing platform Azure.

    Science.gov (United States)

    Shanahan, Hugh P; Owen, Anne M; Harrison, Andrew P

    2014-01-01

    We discuss the applicability of the Microsoft cloud computing platform, Azure, for bioinformatics. We focus on the usability of the resource rather than its performance. We provide an example of how R can be used on Azure to analyse a large amount of microarray expression data deposited at the public database ArrayExpress. We provide a walk through to demonstrate explicitly how Azure can be used to perform these analyses in Appendix S1 and we offer a comparison with a local computation. We note that the use of the Platform as a Service (PaaS) offering of Azure can represent a steep learning curve for bioinformatics developers who will usually have a Linux and scripting language background. On the other hand, the presence of an additional set of libraries makes it easier to deploy software in a parallel (scalable) fashion and explicitly manage such a production run with only a few hundred lines of code, most of which can be incorporated from a template. We propose that this environment is best suited for running stable bioinformatics software by users not involved with its development.

  3. A simple force platform.

    Science.gov (United States)

    Bonde-Petersen, F

    1975-01-01

    The force platform consists of a sandwich of steel, Rockwool and concrete plates, about 900 × 700 mm in surface. Four steel rings were bolted to the underside of the steel plate, one in each corner. Each steel ring was furnished with a single strain gauge; two of the gauges were placed on the outer side and two on the inner side of their rings. The four strain gauges were connected to a measuring bridge. Before mounting the rings on the steel plate, the pressure sensitivity of each ring was adjusted so that all rings were similar. Because of this, the platform responded with a signal which was independent of where pressure was applied within the surface of the platform. The platform showed a rectilinear response for static forces up to 500 kp, with a stable zero value. In response to dynamic forces the platform showed a resonance frequency of about 50 Hz, with a damping factor of 0.15. Calibration for dynamic forces was carried out by calculating the forces during a vertical jump and comparing them with what would be expected from the time of flight, also registered by the platform-measuring-bridge-ink-writer set-up. The time of flight was significantly higher (11%) than expected from the time-force relations before take-off. This was explained partly by the relatively low damping factor in the system, and partly by the subjects not extending their knees on landing on the platform.
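
    The time-of-flight check used for dynamic calibration rests on elementary projectile relations (standard mechanics, not quoted from the paper):

    ```latex
    % For a jump with flight time t_f, take-off speed and jump height
    % follow from symmetric free flight:
    v_0 = \tfrac{1}{2}\, g\, t_f , \qquad h = \frac{g\, t_f^{2}}{8} .
    % The same v_0 must match the impulse recorded by the platform over
    % the contact interval [0, T] before take-off:
    m\, v_0 = \int_{0}^{T} \bigl(F(t) - m g\bigr)\, dt .
    ```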

  4. Quicksilver: Middleware for Scalable Self-Regenerative Systems

    Science.gov (United States)

    2006-04-01

    standard best practice in the area, and hence helped us identify problems that can be justified in terms of real user needs. Our own group may write a...semantics, generally lack efficient, scalable implementations. Systems approaches usually lack a precise formal specification, limiting the

  5. Scalable learning of probabilistic latent models for collaborative filtering

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre

    2015-01-01

    Collaborative filtering has emerged as a popular way of making user recommendations, but with the increasing sizes of the underlying databases, scalability is becoming a crucial issue. In this paper we focus on a recently proposed probabilistic collaborative filtering model that explicitly
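
    As background, a common scalable (non-probabilistic) baseline for the same task is alternating least squares on a latent-factor model. A minimal sketch follows; this is explicitly not the probabilistic model the paper studies:

    ```python
    import numpy as np

    def als(R, k=8, reg=0.1, iters=20, seed=0):
        """Alternating least squares for a latent-factor model R ~ U @ V.T.
        R holds ratings with 0 = missing. A standard collaborative
        filtering baseline, not the paper's probabilistic model."""
        rng = np.random.default_rng(seed)
        n_users, n_items = R.shape
        U = rng.normal(scale=0.1, size=(n_users, k))
        V = rng.normal(scale=0.1, size=(n_items, k))
        mask = R > 0
        I = np.eye(k)
        for _ in range(iters):
            for u in range(n_users):      # each row solve is independent,
                idx = mask[u]             # hence trivially parallelizable
                if idx.any():
                    Vi = V[idx]
                    U[u] = np.linalg.solve(Vi.T @ Vi + reg * I, Vi.T @ R[u, idx])
            for i in range(n_items):
                idx = mask[:, i]
                if idx.any():
                    Ui = U[idx]
                    V[i] = np.linalg.solve(Ui.T @ Ui + reg * I, Ui.T @ R[idx, i])
        return U, V
    ```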

  6. PSOM2—partitioning-based scalable ontology matching using ...

    Indian Academy of Sciences (India)

    B Sathiya

    2017-11-16

    The growth and use of the semantic web has led to a drastic increase in the size, heterogeneity and number of ontologies that are available on the web. Correspondingly, scalable ontology matching algorithms that will eliminate the heterogeneity among large ontologies have become a necessity.

  7. Cognition-inspired Descriptors for Scalable Cover Song Retrieval

    NARCIS (Netherlands)

    van Balen, J.M.H.; Bountouridis, D.; Wiering, F.; Veltkamp, R.C.

    2014-01-01

    Inspired by representations used in music cognition studies and computational musicology, we propose three simple and interpretable descriptors for use in mid- to high-level computational analysis of musical audio and applications in content-based retrieval. We also argue that the task of scalable

  8. Scalable Directed Self-Assembly Using Ultrasound Waves

    Science.gov (United States)

    2015-09-04

    at Aberdeen Proving Grounds (APG), to discuss a possible collaboration. The idea is to integrate the ultrasound directed self-assembly technique ...difference between the ultrasound technology studied in this project, and other directed self-assembly techniques is its scalability and...deliverable: A scientific tool to predict particle organization, pattern, and orientation, based on the operating and design parameters of the ultrasound

  9. Scalable Robust Principal Component Analysis Using Grassmann Averages.

    Science.gov (United States)

    Hauberg, Søren; Feragen, Aasa; Enficiaud, Raffi; Black, Michael J

    2016-11-01

    In large datasets, manual data verification is impossible, and we must expect the number of outliers to increase with data size. While principal component analysis (PCA) can reduce data size, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortunately, state-of-the-art approaches for robust PCA are not scalable. We note that in a zero-mean dataset, each observation spans a one-dimensional subspace, giving a point on the Grassmann manifold. We show that the average subspace corresponds to the leading principal component for Gaussian data. We provide a simple algorithm for computing this Grassmann Average (GA), and show that the subspace estimate is less sensitive to outliers than PCA for general distributions. Because averages can be efficiently computed, we immediately gain scalability. We exploit robust averaging to formulate the Robust Grassmann Average (RGA) as a form of robust PCA. The resulting Trimmed Grassmann Average (TGA) is appropriate for computer vision because it is robust to pixel outliers. The algorithm has linear computational complexity and minimal memory requirements. We demonstrate TGA for background modeling, video restoration, and shadow removal. We show scalability by performing robust PCA on the entire Star Wars IV movie; a task beyond any current method. Source code is available online.
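
    The sign-aligned averaging at the core of the GA can be sketched compactly. This is a minimal reading of the published idea for the first component, not the authors' code:

    ```python
    import numpy as np

    def grassmann_average(X, iters=50, seed=0):
        """First Grassmann Average component of zero-mean data X (n x d):
        each observation spans a 1-D subspace (a sign-free direction), so
        we iterate a sign-aligned average until it stabilizes. For
        Gaussian data this converges to the leading principal component;
        robust variants (e.g., TGA) replace the mean with a trimmed mean."""
        rng = np.random.default_rng(seed)
        q = rng.normal(size=X.shape[1])
        q /= np.linalg.norm(q)
        for _ in range(iters):
            signs = np.sign(X @ q)        # flip each sample toward q
            signs[signs == 0] = 1.0
            q_new = (signs[:, None] * X).mean(axis=0)
            q_new /= np.linalg.norm(q_new)
            converged = np.abs(q_new @ q) > 1 - 1e-10
            q = q_new
            if converged:
                break
        return q

    X = np.random.default_rng(1).normal(size=(500, 10))
    X -= X.mean(axis=0)
    print(grassmann_average(X))
    ```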

  10. Coilable Crystalline Fiber (CCF) Lasers and their Scalability

    Science.gov (United States)

    2014-03-01

    highly power scalable, nearly diffraction-limited output laser. ... lasers, but their composition (glass) poses significant disadvantages in pump absorption, gain, and thermal conductivity. All-crystalline fiber lasers

  11. Efficient Enhancement for Spatial Scalable Video Coding Transmission

    Directory of Open Access Journals (Sweden)

    Mayada Khairy

    2017-01-01

    Full Text Available Scalable Video Coding (SVC) is an international standard technique for video compression. It is an extension of H.264 Advanced Video Coding (AVC). In the encoding of video streams by SVC, it is suitable to employ the macroblock (MB) mode because it affords superior coding efficiency. However, the exhaustive mode decision technique that is usually used for SVC increases the computational complexity, resulting in a longer encoding time (ET). Many other algorithms were proposed to solve this problem, at the cost of increased transmission time (TT) across the network. To minimize the ET and TT, this paper introduces four efficient algorithms based on spatial scalability. The algorithms utilize the mode-distribution correlation between the base layer (BL) and enhancement layers (ELs) and interpolation between the EL frames. The proposed algorithms are of two categories. Those of the first category are based on interlayer residual SVC spatial scalability. They employ two methods, namely, interlayer interpolation (ILIP) and the interlayer base mode (ILBM) method, and enable ET and TT savings of up to 69.3% and 83.6%, respectively. The algorithms of the second category are based on full-search SVC spatial scalability. They utilize two methods, namely, full interpolation (FIP) and the full-base mode (FBM) method, and enable ET and TT savings of up to 55.3% and 76.6%, respectively.
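
    As a hypothetical illustration of the base-mode reuse idea behind methods such as ILBM (the function and argument names below are invented, not taken from the paper), the co-located base-layer decision is inherited instead of exhaustively evaluating every enhancement-layer macroblock mode, trading a small rate-distortion loss for large encoding-time savings:

        def choose_el_mode(bl_mode, el_macroblock, candidate_modes, rd_cost):
            # Inherit the co-located base-layer (BL) mode when one exists,
            # skipping the exhaustive enhancement-layer (EL) mode search.
            if bl_mode is not None:
                return bl_mode
            # Fallback: exhaustive rate-distortion search over all candidate modes.
            return min(candidate_modes, key=lambda mode: rd_cost(el_macroblock, mode))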

  12. Scalable power selection method for wireless mesh networks

    CSIR Research Space (South Africa)

    Olwal, TO

    2009-01-01

    Full Text Available This paper addresses the problem of a scalable dynamic power control (SDPC) for wireless mesh networks (WMNs) based on IEEE 802.11 standards. An SDPC model that accounts for architectural complexities witnessed in multiple radios and hops...

  13. Estimates of the Sampling Distribution of Scalability Coefficient H

    Science.gov (United States)

    Van Onna, Marieke J. H.

    2004-01-01

    Coefficient "H" is used as an index of scalability in nonparametric item response theory (NIRT). It indicates the degree to which a set of items rank orders examinees. Theoretical sampling distributions, however, have only been derived asymptotically and only under restrictive conditions. Bootstrap methods offer an alternative possibility to…

  14. The vacuum platform

    Science.gov (United States)

    McNab, A.

    2017-10-01

    This paper describes GridPP’s Vacuum Platform for managing virtual machines (VMs), which has been used to run production workloads for WLCG and other HEP experiments. The platform provides a uniform interface between VMs and the sites they run at, whether the site is organised as an Infrastructure-as-a-Service cloud system such as OpenStack, or an Infrastructure-as-a-Client system such as Vac. The paper describes our experience in using this platform, in developing and operating VM lifecycle managers Vac and Vcycle, and in interacting with VMs provided by LHCb, ATLAS, ALICE, CMS, and the GridPP DIRAC service to run production workloads.

  15. Product Platform Replacements

    DEFF Research Database (Denmark)

    Sköld, Martin; Karlsson, Christer

    2012-01-01

    Purpose – It is argued in this article that too little is known about product platforms and how to deal with them from a manager's point of view. Specifically, little information exists regarding when old established platforms are replaced by new generations in R&D and production environments… to challenge their existing knowledge about platform architectures. Issues on technologies, architectures, components and processes as well as on segments, applications and functions are identified. Practical implications – Practical implications are summarized and discussed in relation to a framework…

  16. Ladder attachment platform

    Science.gov (United States)

    Swygert, Richard W [Springfield, SC]

    2012-08-28

    A ladder attachment platform is provided that includes a base for attachment to a ladder that has first and second side rails and a plurality of rungs that extend between them in a lateral direction. Also included is a user platform, carried by the base, for a user to stand on. The user platform may be positioned with respect to the ladder so that it is not located between a first plane that extends through the first side rail and is perpendicular to the lateral direction and a second plane that extends through the second side rail and is perpendicular to the lateral direction.

  17. Evaluation of 3D printed anatomically scalable transfemoral prosthetic knee.

    Science.gov (United States)

    Ramakrishnan, Tyagi; Schlafly, Millicent; Reed, Kyle B

    2017-07-01

    This case study compares a transfemoral amputee's gait while using the existing Ossur Total Knee 2000 and our novel 3D printed anatomically scalable transfemoral prosthetic knee. The anatomically scalable transfemoral prosthetic knee is 3D printed out of a carbon-fiber and nylon composite that has a gear-mesh coupling with a hard-stop weight-actuated locking mechanism aided by a cross-linked four-bar spring mechanism. This design can be scaled using anatomical dimensions of a human femur and tibia to have a unique fit for each user. The transfemoral amputee who was tested is high functioning and walked on the Computer Assisted Rehabilitation Environment (CAREN) at a self-selected pace. The motion capture and force data collected showed distinct differences in gait dynamics. The data were used to compute the Combined Gait Asymmetry Metric (CGAM), whose scores revealed that the gait on the Ossur Total Knee was overall more asymmetric than on the anatomically scalable transfemoral prosthetic knee. The anatomically scalable transfemoral prosthetic knee had higher peak knee flexion that caused a large step time asymmetry. This made walking on the anatomically scalable transfemoral prosthetic knee more strenuous due to the compensatory movements in adapting to the different dynamics. This can be overcome by tuning the cross-linked spring mechanism to better emulate the dynamics of the subject. The subject stated that the knee would be good for daily use and has the potential to be adapted as a running knee.

  18. Library of graphic symbols for power equipment in the scalable vector graphics format

    Directory of Open Access Journals (Sweden)

    A.G. Yuferov

    2016-03-01

    Full Text Available This paper describes the results of developing and using a library of graphic symbols for components of power equipment under the state standards GOST 21.403-80 “Power Equipment” and GOST 2.789-74 “Heat Exchangers”. The library is implemented in the SVG (Scalable Vector Graphics) format. The obtained solutions are in line with the well-known studies on creating libraries of parametrical fragments of symbols for elements of diagrams and drawings in design systems for various industrial applications. The SVG format is intended for use in web applications, so the creation of SVG codes for power equipment under GOST 21.403-80 and GOST 2.789-74 is an essential stage in the development of web programs for the thermodynamic optimization of power plants. One of the major arguments in favor of the SVG format is that it can be integrated with codes. So, in process control systems developed based on a web platform, scalable vector graphics provides for a dynamic user interface, functionality of mimic panels and changeability of their components depending on the availability and status of equipment. An important reason for the acquisition and use of the SVG format is also that it is becoming the basis (recommended for the time being, and mandatory in the future) for electronic document management in the sphere of design documentation as part of international efforts to standardize and harmonize data exchange formats. In a specific context, the effectiveness of the SVG format for the power equipment arrangement has been shown. The library is intended for solving specific production problems involving an analysis of the power plant thermal circuits and for training power engineering students. The library and related materials are publicly available through the Internet. A number of proposals on the future evolution of the library have been formulated.
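
    A toy illustration of the parametric-symbol idea (the glyph below is invented and is not an actual GOST 21.403-80 symbol): a Python helper emits a reusable SVG <symbol> whose geometry is driven by parameters, and a document instantiates it at any size via <use>:

        def vessel_symbol(width=40, height=40, margin=5):
            # Illustrative parametric SVG fragment: a vessel outline whose
            # geometry scales with the width/height parameters.
            return (
                f'<symbol id="vessel" viewBox="0 0 {width} {height}">'
                f'<rect x="{margin}" y="{margin}" width="{width - 2 * margin}" '
                f'height="{height - 2 * margin}" rx="{margin}" fill="none" stroke="black"/>'
                '</symbol>'
            )

        svg = ('<svg xmlns="http://www.w3.org/2000/svg">' + vessel_symbol() +
               '<use href="#vessel" width="80" height="80"/></svg>')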

  19. BIGSdb: Scalable analysis of bacterial genome variation at the population level

    Directory of Open Access Journals (Sweden)

    Maiden Martin CJ

    2010-12-01

    Full Text Available Abstract Background The opportunities for bacterial population genomics that are being realised by the application of parallel nucleotide sequencing require novel bioinformatics platforms. These must be capable of the storage, retrieval, and analysis of linked phenotypic and genotypic information in an accessible, scalable and computationally efficient manner. Results The Bacterial Isolate Genome Sequence Database (BIGSDB) is a scalable, open source, web-accessible database system that meets these needs, enabling phenotype and sequence data, which can range from a single sequence read to whole genome data, to be efficiently linked for a limitless number of bacterial specimens. The system builds on the widely used mlstdbNet software, developed for the storage and distribution of multilocus sequence typing (MLST) data, and incorporates the capacity to define and identify any number of loci and genetic variants at those loci within the stored nucleotide sequences. These loci can be further organised into 'schemes' for isolate characterisation or for evolutionary or functional analyses. Isolates and loci can be indexed by multiple names and any number of alternative schemes can be accommodated, enabling cross-referencing of different studies and approaches. LIMS functionality of the software enables linkage to and organisation of laboratory samples. The data are easily linked to external databases and fine-grained authentication of access permits multiple users to participate in community annotation by setting up or contributing to different schemes within the database. Some of the applications of BIGSDB are illustrated with the genera Neisseria and Streptococcus. The BIGSDB source code and documentation are available at http://pubmlst.org/software/database/bigsdb/. Conclusions Genomic data can be used to characterise bacterial isolates in many different ways but it can also be efficiently exploited for evolutionary or functional studies. BIGSDB

  20. USA Hire Testing Platform

    Data.gov (United States)

    Office of Personnel Management — The USA Hire Testing Platform delivers tests used in hiring for positions in the Federal Government. To safeguard the integrity of the hiring processes and ensure...

  1. Product Platform Performance

    DEFF Research Database (Denmark)

    Munk, Lone

    , and the subject has gained increased attention in industry and academia over the past decade. Literature on platform-based product development is often based on single case studies and it is sparsely verified if expected effects are achieved. This makes it difficult to put forward realistic expectations for companies… experienced representatives from the different life systems phase systems of the platform products. The effects are estimated and modeled within different scenarios, taking into account financial and real option aspects. The model illustrates and supports estimation and quantification of internal platform… effects. The model empirically verifies findings in literature and received moderate support from industry in the validation study. The research findings document that product platforms achieve significant internal effects in terms of • reduced development time (often around 25 %), • reduced number…

  2. Platform-based production development

    DEFF Research Database (Denmark)

    Bossen, Jacob; Brunoe, Thomas Ditlev; Nielsen, Kjeld

    2015-01-01

    Platforms as a means for applying modular thinking in product development are relatively well studied, but platforms in the production system have until now not been given much attention. With the emerging concept of platform-based co-development, the importance of production platforms is, though, indisputable. This paper presents state-of-the-art literature on platform research related to production platforms and investigates gaps in the literature. The paper concludes on findings by proposing future research directions.

  3. Seqcrawler: biological data indexing and browsing platform

    Directory of Open Access Journals (Sweden)

    Sallou Olivier

    2012-07-01

    Full Text Available Abstract Background Seqcrawler takes its roots in software like SRS or Lucegene. It provides an indexing platform to ease the search of data and meta-data in biological banks and it can scale to face the current flow of data. While many biological bank search tools are available on the Internet, mainly provided by large organizations to search their data, there is a lack of free and open source solutions to browse one’s own set of data with a flexible query system and able to scale from a single computer to a cloud system. A personal index platform will help labs and bioinformaticians to search their meta-data but also to build a larger information system with custom subsets of data. Results The software is scalable from a single computer to a cloud-based infrastructure. It has been successfully tested in a private cloud with 3 index shards (pieces of index) hosting ~400 million sequence records (whole GenBank, UniProt, PDB and others) for a total size of 600 GB in a fault tolerant architecture (high-availability). It has also been successfully integrated with software to add extra meta-data from blast results to enhance users’ result analysis. Conclusions Seqcrawler provides a complete open source search and store solution for labs or platforms needing to manage large amounts of data/meta-data with a flexible and customizable web interface. All components (search engine, visualization and data storage), though independent, share a common and coherent data system that can be queried with a simple HTTP interface. The solution scales easily and can also provide a high availability infrastructure.

  4. Seqcrawler: biological data indexing and browsing platform.

    Science.gov (United States)

    Sallou, Olivier; Bretaudeau, Anthony; Roult, Aurelien

    2012-07-24

    Seqcrawler takes its roots in software like SRS or Lucegene. It provides an indexing platform to ease the search of data and meta-data in biological banks and it can scale to face the current flow of data. While many biological bank search tools are available on the Internet, mainly provided by large organizations to search their data, there is a lack of free and open source solutions to browse one's own set of data with a flexible query system and able to scale from a single computer to a cloud system. A personal index platform will help labs and bioinformaticians to search their meta-data but also to build a larger information system with custom subsets of data. The software is scalable from a single computer to a cloud-based infrastructure. It has been successfully tested in a private cloud with 3 index shards (pieces of index) hosting ~400 million sequence records (whole GenBank, UniProt, PDB and others) for a total size of 600 GB in a fault tolerant architecture (high-availability). It has also been successfully integrated with software to add extra meta-data from blast results to enhance users' result analysis. Seqcrawler provides a complete open source search and store solution for labs or platforms needing to manage large amounts of data/meta-data with a flexible and customizable web interface. All components (search engine, visualization and data storage), though independent, share a common and coherent data system that can be queried with a simple HTTP interface. The solution scales easily and can also provide a high availability infrastructure.
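
    The abstract only promises "a simple HTTP interface", so the sketch below is an assumption about its shape (the /search path and the q/start/rows parameters are invented, not the documented API); it shows the kind of one-call query a lab script might issue against a local index:

        import requests  # third-party HTTP client

        def search(host, query, start=0, rows=10):
            # Query a Seqcrawler-style index over HTTP and return parsed JSON.
            resp = requests.get(f"http://{host}/search",
                                params={"q": query, "start": start, "rows": rows},
                                timeout=30)
            resp.raise_for_status()
            return resp.json()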

  5. Temporal scalability comparison of the H.264/SVC and distributed video codec

    DEFF Research Database (Denmark)

    Huang, Xin; Ukhanova, Ann; Belyaev, Evgeny

    2009-01-01

    The problem of the multimedia scalable video streaming is a current topic of interest. There exist many methods for scalable video coding. This paper is focused on the scalable extension of H.264/AVC (H.264/SVC) and distributed video coding (DVC). The paper presents an efficiency comparison of SVC...

  6. ARM for Platform Application

    Science.gov (United States)

    Patte, Mathieu; Poupat, Jean-Luc; Le Meur, Patrick

    2015-09-01

    The activities described in this paper are part of the CNES R&T “Study of a Cortex-R ARM based architecture” performed by Airbus DS Space System & Electronics in 2014. With the support of CNES, Airbus DS has performed the porting of a representative space application software on an ARM based demonstration platform. This paper presents the platform itself, the activities performed at software level and the first results on this evaluation study.

  7. Floating Ocean Platform

    Science.gov (United States)

    2003-08-15

    [Abstract not indexed; the record contains only citation fragments: Bisht, RS; Jain, AK, "Wind and wave induced behaviour of offshore guyed tower", J. Struct. Eng.-ASCE, 1998; "Prestressed concrete offshore platforms", Houille Blanche-Rev. Int., 1995; "Wind loading of towers and offshore platforms - Discussion", J. Wind Eng. Ind. Aerodyn.]

  8. National Community Solar Platform

    Energy Technology Data Exchange (ETDEWEB)

    Rupert, Bart [Clean Energy Collective, Louisville, CO (United States)]

    2016-06-30

    This project was created to provide a National Community Solar Platform (NCSP) portal known as Community Solar Hub, that is available to any entity or individual who wants to develop community solar. This has been done by providing a comprehensive portal to make CEC’s solutions, and other proven community solar solutions, externally available for everyone to access – making the process easy through proven platforms to protect subscribers, developers and utilities. The successful completion of this project provides these tools via a web platform and integration APIs, a wide spectrum of community solar projects included in the platform, multiple groups of customers (utilities, EPCs, and advocates) using the platform to develop community solar, and open access to anyone interested in community solar. CEC’s Incubator project includes web-based informational resources, integrated systems for project information and billing systems, and engagement with customers and users by community solar experts. The combined effort externalizes much of Clean Energy Collective’s industry-leading expertise, allowing third parties to develop community solar without duplicating expensive start-up efforts. The availability of this platform creates community solar projects that are cheaper to build and cheaper to participate in, furthering the goals of DOE’s SunShot Initiative.

  9. I(Re)2-WiNoC: Exploring scalable wireless on-chip micronetworks for heterogeneous embedded many-core SoCs

    Directory of Open Access Journals (Sweden)

    Dan Zhao

    2015-02-01

    In this work, an irregular and reconfigurable WiNoC platform is proposed to tackle ever-increasing complexity, density and heterogeneity challenges. A flexible RF infrastructure is established where RF nodes are properly distributed and IP cores are clustered. Consequently, a performance-cost effective topology is formed. A region-aided routing scheme is further designed and implemented to realize loop-free, minimum path cost and high scalability for the irregular WiNoC infrastructure. To implement the data transmission protocol, the RF microarchitecture of WiNoC is developed where the RF nodes are designed to fulfill the functions of distributed table routing, multi-channel arbitration, virtual output queuing, and distributed flow control. Our simulation studies based on synthetic traffics demonstrate the network efficiency and scalability of WiNoC.

  10. Carbon nanotube-based electrochemical biosensing platforms: fundamentals, applications, and future possibilities.

    Science.gov (United States)

    Luong, John H T; Male, Keith B; Hrapovic, Sabahudin

    2007-01-01

    Biosensors can be considered as a most plausible and exciting application area for nanobiotechnology. The recent bloom of nanofabrication technology and biofunctionalization methods of carbon nanotubes (CNTs) has stimulated significant research interest to develop CNT-based biosensors for monitoring biorecognition events and biocatalytic processes. The unique properties of CNTs, rolled-up sheets of carbon atoms with a diameter less than 1 nm, offer excellent prospects for interfacing biological recognition events with electronic signal transduction. CNT-based biosensors could be developed to sense only a few or even a single molecule of a chemical or biological agent. Both hydrogen peroxide and NADH, two by-products of over 300 oxidoreductases, are efficiently oxidized by CNT-modified electrodes at significantly lower potentials with minimal surface fouling. This appealing feature enables the development of useful biosensors for diversified applications. Aligned CNT "forests" can act as molecular wires to allow efficient electron transfer between the detecting electrode and the redox centers of enzymes to fabricate reagentless biosensors. Electrochemical sensing for DNA can greatly benefit from the use of CNT based platforms since guanine, one of the four bases, can be detected with significantly enhanced sensitivity. CNTs fluoresce, or emit light after absorbing light, in the near infrared region and retain their ability to fluoresce over time. This feature will allow CNT-based sensors to transmit information from inside the body. The combination of micro/nanofabrication and chemical functionalization, particularly nanoelectrode assembly interfaced with biomolecules, is expected to pave the way to fabricate improved biosensors for proteins, chemicals, and pathogens. However, several technical challenges need to be overcome to tightly integrate CNT-based platforms with sampling, fluidic handling, separation, and other detection principles. The biosensing platform

  11. The Platformization of the Web: Making Web Data Platform Ready

    NARCIS (Netherlands)

    Helmond, A.

    2015-01-01

    In this article, I inquire into Facebook’s development as a platform by situating it within the transformation of social network sites into social media platforms. I explore this shift with a historical perspective on, what I refer to as, platformization, or the rise of the platform as the dominant

  12. Highly scalable, uniform, and sensitive biosensors based on top-down indium oxide nanoribbons and electronic enzyme-linked immunosorbent assay.

    Science.gov (United States)

    Aroonyadet, Noppadol; Wang, Xiaoli; Song, Yan; Chen, Haitian; Cote, Richard J; Thompson, Mark E; Datar, Ram H; Zhou, Chongwu

    2015-03-11

    Nanostructure field-effect transistor (FET) biosensors have shown great promise for ultrasensitive biomolecular detection. Top-down assembly of these sensors increases scalability and device uniformity but faces fabrication challenges in achieving the small dimensions needed for sensitivity. We report top-down fabricated indium oxide (In2O3) nanoribbon FET biosensors using highly scalable radio frequency (RF) sputtering to create uniform channel thicknesses ranging from 50 to 10 nm. We combine this scalable sensing platform with amplification from electronic enzyme-linked immunosorbent assay (ELISA) to achieve high sensitivity to target analytes such as streptavidin and human immunodeficiency virus type 1 (HIV-1) p24 proteins. Our approach circumvents Debye screening in ionic solutions and detects p24 protein at 20 fg/mL (about 250 viruses/mL, or about 3 orders of magnitude lower than commercial ELISA) with a 35% conduction change in human serum. The In2O3 nanoribbon biosensors have 100% device yield and use a simple two-mask photolithography process. The electrical properties of 50 In2O3 nanoribbon FETs showed good uniformity in on-state current, on/off current ratio, mobility, and threshold voltage. In addition, the sensors show excellent pH sensitivity over a broad range (pH 4 to 9) as well as over the physiologically relevant pH range (pH 6.8 to 8.2). With the demonstrated sensitivity, scalability, and uniformity, the In2O3 nanoribbon sensor platform makes great progress toward clinical testing, such as for early diagnosis of acquired immunodeficiency syndrome (AIDS).

  13. On-field validation of the new platform for magnetic measurements at CERN

    CERN Document Server

    Arpaia, P; Inglese, V; Spiezia, G

    2009-01-01

    A new platform for magnetic measurements at the European Organization for Nuclear Research (CERN) is presented. The key concepts and the architecture of its main components, a multi-purpose digitizer (fast digital integrator – FDI) and a flexible software framework (flexible framework for magnetic measurements – FFMM), are detailed. The experimental results of the metrological characterization of FDI exhibit a significant advance with respect to the state of the art. Furthermore, the FFMM implementation is shown to provide a reusable and scalable measurement software. Finally, the results of the on-field validation of the platform for field measurements on superconducting magnets in the CERN test facility are discussed.

  14. Scalable graphene coatings for enhanced condensation heat transfer.

    Science.gov (United States)

    Preston, Daniel J; Mafra, Daniela L; Miljkovic, Nenad; Kong, Jing; Wang, Evelyn N

    2015-05-13

    Water vapor condensation is commonly observed in nature and routinely used as an effective means of transferring heat with dropwise condensation on nonwetting surfaces exhibiting heat transfer improvement compared to filmwise condensation on wetting surfaces. However, state-of-the-art techniques to promote dropwise condensation rely on functional hydrophobic coatings that either have challenges with chemical stability or are so thick that any potential heat transfer improvement is negated due to the added thermal resistance of the coating. In this work, we show the effectiveness of ultrathin scalable chemical vapor deposited (CVD) graphene coatings to promote dropwise condensation while offering robust chemical stability and maintaining low thermal resistance. Heat transfer enhancements of 4× were demonstrated compared to filmwise condensation, and the robustness of these CVD coatings was superior to typical hydrophobic monolayer coatings. Our results indicate that graphene is a promising surface coating to promote dropwise condensation of water in industrial conditions with the potential for scalable application via CVD.

  15. Continuity-Aware Scheduling Algorithm for Scalable Video Streaming

    Directory of Open Access Journals (Sweden)

    Atinat Palawan

    2016-05-01

    Full Text Available The consumer demand for retrieving and delivering visual content through consumer electronic devices has increased rapidly in recent years. The quality of video in packet networks is susceptible to certain traffic characteristics: average bandwidth availability, loss, delay and delay variation (jitter). This paper presents a scheduling algorithm that modifies the stream of scalable video to combat jitter. The algorithm provides unequal look-ahead by safeguarding the base layer (without the need for overhead) of the scalable video. The results of the experiments show that our scheduling algorithm reduces the number of frames with a violated deadline and significantly improves the continuity of the video stream without compromising the average Y Peak Signal-to-Noise Ratio (PSNR).
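
    A minimal sketch of the unequal-look-ahead idea under assumed packet metadata (the deadline and layer fields are invented): within the look-ahead window, base-layer packets are sent first, so that late packets cost enhancement quality rather than playback continuity:

        def schedule_window(packets, now, lookahead=0.5):
            # Keep packets whose deadline falls inside the look-ahead window,
            # then order them: base layer first, earliest deadline first.
            window = [p for p in packets if p["deadline"] <= now + lookahead]
            window.sort(key=lambda p: (p["layer"] != "base", p["deadline"]))
            return window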

  16. Scalable metagenomic taxonomy classification using a reference genome database.

    Science.gov (United States)

    Ames, Sasha K; Hysom, David A; Gardner, Shea N; Lloyd, G Scott; Gokhale, Maya B; Allen, Jonathan E

    2013-09-15

    Deep metagenomic sequencing of biological samples has the potential to recover otherwise difficult-to-detect microorganisms and accurately characterize biological samples with limited prior knowledge of sample contents. Existing metagenomic taxonomic classification algorithms, however, do not scale well to analyze large metagenomic datasets, and balancing classification accuracy with computational efficiency presents a fundamental challenge. A method is presented to shift computational costs to an off-line computation by creating a taxonomy/genome index that supports scalable metagenomic classification. Scalable performance is demonstrated on real and simulated data to show accurate classification in the presence of novel organisms on samples that include viruses, prokaryotes, fungi and protists. Taxonomic classification of the previously published 150 giga-base Tyrolean Iceman dataset was found to take contents of the sample. Software was implemented in C++ and is freely available at http://sourceforge.net/projects/lmat. Contact: allen99@llnl.gov. Supplementary data are available at Bioinformatics online.
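
    A minimal sketch of index-based read classification in the spirit of the paper (the real LMAT index construction and scoring are far more elaborate): each k-mer of a read votes for the taxon it maps to in a precomputed k-mer-to-taxon table built off-line:

        from collections import Counter

        def classify_read(read, kmer_index, k=20):
            # Look up every k-mer of the read in the off-line index; assign the
            # read to the taxon that collects the most k-mer votes.
            votes = Counter()
            for i in range(len(read) - k + 1):
                taxon = kmer_index.get(read[i:i + k])
                if taxon is not None:
                    votes[taxon] += 1
            return votes.most_common(1)[0][0] if votes else None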

  17. Semantic Models for Scalable Search in the Internet of Things

    Directory of Open Access Journals (Sweden)

    Dennis Pfisterer

    2013-03-01

    Full Text Available The Internet of Things is anticipated to connect billions of embedded devices equipped with sensors to perceive their surroundings. Thereby, the state of the real world will be available online and in real-time and can be combined with other data and services in the Internet to realize novel applications such as Smart Cities, Smart Grids, or Smart Healthcare. This requires an open representation of sensor data and scalable search over data from diverse sources including sensors. In this paper we show how the Semantic Web technologies RDF (an open semantic data format) and SPARQL (a query language for RDF-encoded data) can be used to address those challenges. In particular, we describe how prediction models can be employed for scalable sensor search, how these prediction models can be encoded as RDF, and how the models can be queried by means of SPARQL.
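
    A small example of the kind of SPARQL query the paper envisions for sensor search (the vocabulary under the ex: prefix is invented for illustration); a search engine could answer it from prediction models instead of contacting every sensor:

        # Find sensors whose (predicted) latest temperature reading exceeds 25 degrees.
        SENSOR_QUERY = """
        PREFIX ex: <http://example.org/sensors#>
        SELECT ?sensor ?value WHERE {
          ?sensor ex:observes    ex:Temperature ;
                  ex:latestValue ?value .
          FILTER (?value > 25.0)
        }
        """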

  18. Scalable, flexible and high resolution patterning of CVD graphene.

    Science.gov (United States)

    Hofmann, Mario; Hsieh, Ya-Ping; Hsu, Allen L; Kong, Jing

    2014-01-07

    The unique properties of graphene make it a promising material for interconnects in flexible and transparent electronics. To increase the commercial impact of graphene in those applications, a scalable and economical method for producing graphene patterns is required. The direct synthesis of graphene from an area-selectively passivated catalyst substrate can generate patterned graphene of high quality. We here present a solution-based method for producing patterned passivation layers. Various deposition methods such as ink-jet deposition and microcontact printing were explored, that can satisfy application demands for low cost, high resolution and scalable production of patterned graphene. The demonstrated high quality and nanometer precision of grown graphene establishes the potential of this synthesis approach for future commercial applications of graphene. Finally, the ability to transfer high resolution graphene patterns onto complex three-dimensional surfaces affords the vision of graphene-based interconnects in novel electronics.

  19. Transactional Network Platform: Applications

    Energy Technology Data Exchange (ETDEWEB)

    Katipamula, Srinivas; Lutes, Robert G.; Ngo, Hung; Underhill, Ronald M.

    2013-10-31

    In FY13, Pacific Northwest National Laboratory (PNNL) with funding from the Department of Energy’s (DOE’s) Building Technologies Office (BTO) designed, prototyped and tested a transactional network platform to support energy, operational and financial transactions between any networked entities (equipment, organizations, buildings, grid, etc.). Initially, in FY13, the concept demonstrated transactions between packaged rooftop air conditioners and heat pump units (RTUs) and the electric grid using applications or "agents" that reside on the platform, on the equipment, on a local building controller or in the Cloud. The transactional network project is a multi-lab effort with Oakridge National Laboratory (ORNL) and Lawrence Berkeley National Laboratory (LBNL) also contributing to the effort. PNNL coordinated the project and also was responsible for the development of the transactional network (TN) platform and three different applications associated with RTUs. This document describes two applications or "agents" in detail, and also summarizes the platform. The TN platform details are described in another companion document.

  20. Nanocalorimeter platform for in situ specific heat measurements and x-ray diffraction at low temperature

    Science.gov (United States)

    Willa, K.; Diao, Z.; Campanini, D.; Welp, U.; Divan, R.; Hudl, M.; Islam, Z.; Kwok, W.-K.; Rydh, A.

    2017-12-01

    Recent advances in electronics and nanofabrication have enabled membrane-based nanocalorimetry for measurements of the specific heat of microgram-sized samples. We have integrated a nanocalorimeter platform into a 4.5 T split-pair vertical-field magnet to allow for the simultaneous measurement of the specific heat and x-ray scattering in magnetic fields and at temperatures as low as 4 K. This multi-modal approach empowers researchers to directly correlate scattering experiments with insights from thermodynamic properties including structural, electronic, orbital, and magnetic phase transitions. The use of a nanocalorimeter sample platform enables numerous technical advantages: precise measurement and control of the sample temperature, quantification of beam heating effects, fast and precise positioning of the sample in the x-ray beam, and fast acquisition of x-ray scans over a wide temperature range without the need for time-consuming re-centering and re-alignment. Furthermore, on an YBa2Cu3O7-δ crystal and a copper foil, we demonstrate a novel approach to x-ray absorption spectroscopy by monitoring the change in sample temperature as a function of incident photon energy. Finally, we illustrate the new insights that can be gained from in situ structural and thermodynamic measurements by investigating the superheated state occurring at the first-order magneto-elastic phase transition of Fe2P, a material that is of interest for magnetocaloric applications.

  1. Scalable Deployment of Advanced Building Energy Management Systems

    Science.gov (United States)

    2013-06-01

    rooms, classrooms, a quarterdeck with a two-story atrium and office spaces, and a large cafeteria/galley. Buildings 7113 and 7114 are functionally...similar (include barracks, classroom, cafeteria, etc.) and share a common central chilled water plant. 3.1.1 Building 7230 The drill hall (Building...scalability of the proposed approach, and expanded the capabilities developed for a single building to a building campus at Naval Station Great Lakes

  2. Scalable, Self Aligned Printing of Flexible Graphene Micro Supercapacitors (Postprint)

    Science.gov (United States)

    2017-05-11

    reduced graphene oxide: 0.4 mF cm−2)[11,39–41] prepared by conventional microfabrication techniques, the printed MSCs offer distinct advantages in...AFRL-RX-WP-JA-2017-0318 SCALABLE, SELF-ALIGNED PRINTING OF FLEXIBLE GRAPHENE MICRO-SUPERCAPACITORS (POSTPRINT) Woo Jin Hyun, Chang-Hyun Kim...

  3. Scalable Power-Component Models for Concept Testing

    Science.gov (United States)

    2011-08-16

    Technology: Permanent Magnet Brushless DC machine • Model: Self-generating torque-speed-efficiency map • Future improvements: Induction machine ...system to the standard driveline – Example: BAS System – 3 kW system ISG Block, Rev. 2.0 • Four quadrant • PM Brushless Machine • Speed...and systems engineering. • Scope: Scalable, generic MATLAB/Simulink models in three areas: – Electromechanical machines (Integrated Starter

  4. Scalable privacy-preserving big data aggregation mechanism

    OpenAIRE

    Dapeng Wu; Boran Yang; Ruyan Wang

    2016-01-01

    As the massive sensor data generated by large-scale Wireless Sensor Networks (WSNs) have recently become an indispensable part of ‘Big Data’, the collection, storage, transmission and analysis of the big sensor data attract considerable attention from researchers. Targeting the privacy requirements of large-scale WSNs and focusing on the energy-efficient collection of big sensor data, a Scalable Privacy-preserving Big Data Aggregation (Sca-PBDA) method is proposed in this paper. Firstly, according...

  5. Fast & scalable pattern transfer via block copolymer nanolithography

    DEFF Research Database (Denmark)

    Li, Tao; Wang, Zhongli; Schulte, Lars

    2015-01-01

    A fully scalable and efficient pattern transfer process based on block copolymer (BCP) self-assembling directly on various substrates is demonstrated. PS-rich and PDMS-rich poly(styrene-b-dimethylsiloxane) (PS-b-PDMS) copolymers are used to give monolayer sphere morphology after spin-casting of s… on long-range lateral order, including fabrication of substrates for catalysis, solar cells, sensors, ultrafiltration membranes and templating of semiconductors or metals.

  6. Economical and scalable synthesis of 6-amino-2-cyanobenzothiazole

    Directory of Open Access Journals (Sweden)

    Jacob R. Hauser

    2016-09-01

    Full Text Available 2-Cyanobenzothiazoles (CBTs) are useful building blocks for: 1) luciferin derivatives for bioluminescent imaging; and 2) handles for bioorthogonal ligations. A particularly versatile CBT is 6-amino-2-cyanobenzothiazole (ACBT), which has an amine handle for straight-forward derivatisation. Here we present an economical and scalable synthesis of ACBT based on a cyanation catalysed by 1,4-diazabicyclo[2.2.2]octane (DABCO), and discuss its advantages for scale-up over previously reported routes.

  7. Scalable Cluster-based Routing in Large Wireless Sensor Networks

    OpenAIRE

    Jiandong Li; Xuelian Cai; Jin Yang; Lina Zhu

    2012-01-01

    Large control overhead is the leading factor limiting the scalability of wireless sensor networks (WSNs). Clustering network nodes is an efficient solution, and Passive Clustering (PC) is one of the most efficient clustering methods. In this letter, we propose an improved PC-based route building scheme, named Route Reply (RREP) Broadcast with Passive Clustering (in short RBPC). Through broadcasting RREP packets on an expanding ring to build routes, sensor nodes cache their route to the sink no...

  8. Semantic Models for Scalable Search in the Internet of Things

    OpenAIRE

    Dennis Pfisterer; Kay Römer; Richard Mietz; Sven Groppe

    2013-01-01

    The Internet of Things is anticipated to connect billions of embedded devices equipped with sensors to perceive their surroundings. Thereby, the state of the real world will be available online and in real-time and can be combined with other data and services in the Internet to realize novel applications such as Smart Cities, Smart Grids, or Smart Healthcare. This requires an open representation of sensor data and scalable search over data from diverse sources including sensors. In this paper...

  9. Scalable Coverage Maintenance for Dense Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Jun Lu

    2007-06-01

    Full Text Available Owing to numerous potential applications, wireless sensor networks have been attracting significant research effort recently. The critical challenge that wireless sensor networks often face is to sustain long-term operation on limited battery energy. Coverage maintenance schemes can effectively prolong network lifetime by selecting and employing a subset of sensors in the network to provide sufficient sensing coverage over a target region. We envision future wireless sensor networks composed of a vast number of miniaturized sensors in exceedingly high density. Therefore, the key issue of coverage maintenance for future sensor networks is the scalability to sensor deployment density. In this paper, we propose a novel coverage maintenance scheme, scalable coverage maintenance (SCOM), which is scalable to sensor deployment density in terms of communication overhead (i.e., number of transmitted and received beacons) and computational complexity (i.e., time and space complexity). In addition, SCOM achieves high energy efficiency and load balancing over different sensors. We have validated our claims through both analysis and simulations.

  10. Design and Implementation of Ceph: A Scalable Distributed File System

    Energy Technology Data Exchange (ETDEWEB)

    Weil, S A; Brandt, S A; Miller, E L; Long, D E; Maltzahn, C

    2006-04-19

    File system designers continue to look to new architectures to improve scalability. Object-based storage diverges from server-based (e.g. NFS) and SAN-based storage systems by coupling processors and memory with disk drives, delegating low-level allocation to object storage devices (OSDs) and decoupling I/O (read/write) from metadata (file open/close) operations. Even recent object-based systems inherit decades-old architectural choices going back to early UNIX file systems, however, limiting their ability to effectively scale to hundreds of petabytes. We present Ceph, a distributed file system that provides excellent performance and reliability with unprecedented scalability. Ceph maximizes the separation between data and metadata management by replacing allocation tables with a pseudo-random data distribution function (CRUSH) designed for heterogeneous and dynamic clusters of unreliable OSDs. We leverage OSD intelligence to distribute data replication, failure detection and recovery with semi-autonomous OSDs running a specialized local object storage file system (EBOFS). Finally, Ceph is built around a dynamic distributed metadata management cluster that provides extremely efficient metadata management that seamlessly adapts to a wide range of general purpose and scientific computing file system workloads. We present performance measurements under a variety of workloads that show superior I/O performance and scalable metadata management (more than a quarter million metadata ops/sec).
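
    CRUSH itself additionally honors device weights and failure domains, but the rendezvous-hashing stand-in below conveys the core idea: any client computes an object's placement from the object name and the device list alone, with no allocation table:

        import hashlib

        def place(object_id, osds, replicas=3):
            # Rank every OSD by a hash of (object, osd); the top-ranked OSDs hold
            # the replicas. Adding or removing an OSD remaps only a small
            # fraction of objects, the property needed for dynamic clusters.
            ranked = sorted(osds,
                            key=lambda osd: hashlib.sha1(f"{object_id}:{osd}".encode()).digest())
            return ranked[:replicas]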

  11. The intergroup protocols: Scalable group communication for the internet

    Energy Technology Data Exchange (ETDEWEB)

    Berket, Karlo [Univ. of California, Santa Barbara, CA (United States)

    2000-12-04

    Reliable group ordered delivery of multicast messages in a distributed system is a useful service that simplifies the programming of distributed applications. Such a service helps to maintain the consistency of replicated information and to coordinate the activities of the various processes. With the increasing popularity of the Internet, there is an increasing interest in scaling the protocols that provide this service to the environment of the Internet. The InterGroup protocol suite, described in this dissertation, provides such a service, and is intended for the environment of the Internet with scalability to large numbers of nodes and high latency links. The InterGroup protocols approach the scalability problem from various directions. They redefine the meaning of group membership, allow voluntary membership changes, add a receiver-oriented selection of delivery guarantees that permits heterogeneity of the receiver set, and provide a scalable reliability service. The InterGroup system comprises several components, executing at various sites within the system. Each component provides part of the services necessary to implement a group communication system for the wide-area. The components can be categorized as: (1) control hierarchy, (2) reliable multicast, (3) message distribution and delivery, and (4) process group membership. We have implemented a prototype of the InterGroup protocols in Java, and have tested the system performance in both local-area and wide-area networks.

  12. Scalable MPEG-4 Encoder on FPGA Multiprocessor SOC

    Directory of Open Access Journals (Sweden)

    Kulmala Ari

    2006-01-01

    Full Text Available High computational requirements combined with rapidly evolving video coding algorithms and standards are a great challenge for contemporary encoder implementations. Rapid specification changes prefer full programmability and configurability both for software and hardware. This paper presents a novel scalable MPEG-4 video encoder on an FPGA-based multiprocessor system-on-chip (MPSOC). The MPSOC architecture is truly scalable and is based on a vendor-independent intellectual property (IP) block interconnection network. The scalability in video encoding is achieved by spatial parallelization where images are divided into horizontal slices. A case design is presented with up to four synthesized processors on an Altera Stratix 1S40 device. A truly portable ANSI-C implementation that supports an arbitrary number of processors gives 11 QCIF frames/s at 50 MHz without processor-specific optimizations. The parallelization efficiency is 97% for two processors and 93% with three. The FPGA utilization is 70%, requiring 28 797 logic elements. The implementation effort is significantly lower compared to traditional multiprocessor implementations.

  13. Scalable MPEG-4 Encoder on FPGA Multiprocessor SOC

    Directory of Open Access Journals (Sweden)

    Marko Hännikäinen

    2006-10-01

    Full Text Available High computational requirements combined with rapidly evolving video coding algorithms and standards are a great challenge for contemporary encoder implementations. Rapid specification changes prefer full programmability and configurability both for software and hardware. This paper presents a novel scalable MPEG-4 video encoder on an FPGA-based multiprocessor system-on-chip (MPSOC). The MPSOC architecture is truly scalable and is based on a vendor-independent intellectual property (IP) block interconnection network. The scalability in video encoding is achieved by spatial parallelization where images are divided into horizontal slices. A case design is presented with up to four synthesized processors on an Altera Stratix 1S40 device. A truly portable ANSI-C implementation that supports an arbitrary number of processors gives 11 QCIF frames/s at 50 MHz without processor-specific optimizations. The parallelization efficiency is 97% for two processors and 93% with three. The FPGA utilization is 70%, requiring 28 797 logic elements. The implementation effort is significantly lower compared to traditional multiprocessor implementations.

  14. Scalable force directed graph layout algorithms using fast multipole methods

    KAUST Repository

    Yunis, Enas Abdulrahman

    2012-06-01

    We present an extension to ExaFMM, a Fast Multipole Method library, as a generalized approach for fast and scalable execution of the Force-Directed Graph Layout algorithm. The Force-Directed Graph Layout algorithm is a physics-based approach to graph layout that treats the vertices V as repelling charged particles with the edges E connecting them acting as springs. Traditionally, the amount of work required in applying the Force-Directed Graph Layout algorithm is O(|V|² + |E|) using direct calculations and O(|V| log |V| + |E|) using truncation, filtering, and/or multi-level techniques. Correct application of the Fast Multipole Method allows us to maintain a lower complexity of O(|V| + |E|) while regaining most of the precision lost in other techniques. Solving layout problems for truly large graphs with millions of vertices still requires a scalable algorithm and implementation. We have been able to leverage the scalability and architectural adaptability of the ExaFMM library to create a Force-Directed Graph Layout implementation that runs efficiently on distributed multicore and multi-GPU architectures. © 2012 IEEE.
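
    For reference, one direct O(|V|² + |E|) step of the charged-particles-and-springs model (constants and the integration scheme are illustrative); the fast multipole method replaces the all-pairs repulsion sum with an O(|V|) approximation:

        import numpy as np

        def layout_step(pos, edges, dt=0.01, k_rep=1.0, k_spr=0.1):
            # Vertices repel like charged particles; edges pull like springs.
            n = pos.shape[0]
            diff = pos[:, None, :] - pos[None, :, :]      # pairwise displacements
            dist2 = (diff ** 2).sum(axis=-1) + np.eye(n)  # +eye avoids divide-by-zero
            disp = k_rep * (diff / dist2[..., None]).sum(axis=1)
            for i, j in edges:                            # spring attraction
                d = pos[j] - pos[i]
                disp[i] += k_spr * d
                disp[j] -= k_spr * d
            return pos + dt * disp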

  15. Scalable Video Coding with Interlayer Signal Decorrelation Techniques

    Directory of Open Access Journals (Sweden)

    Yang Wenxian

    2007-01-01

    Full Text Available Scalability is one of the essential requirements in the compression of visual data for present-day multimedia communications and storage. The basic building block for providing the spatial scalability in the scalable video coding (SVC) standard is the well-known Laplacian pyramid (LP). An LP achieves the multiscale representation of the video as a base-layer signal at lower resolution together with several enhancement-layer signals at successive higher resolutions. In this paper, we propose to improve the coding performance of the enhancement layers through efficient interlayer decorrelation techniques. We first show that, with nonbiorthogonal upsampling and downsampling filters, the base layer and the enhancement layers are correlated. We investigate two structures to reduce this correlation. The first structure updates the base-layer signal by subtracting from it the low-frequency component of the enhancement-layer signal. The second structure modifies the prediction in order that the low-frequency component in the new enhancement layer is diminished. The second structure is integrated in the JSVM 4.0 codec with suitable modifications in the prediction modes. Experimental results with some standard test sequences demonstrate coding gains up to 1 dB for I pictures and up to 0.7 dB for both I and P pictures.
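
    A one-dimensional sketch of the Laplacian pyramid decomposition described above, with a simple pair-averaging filter standing in for the SVC filters (assumes an even-length signal); the paper's first structure would additionally subtract the low-frequency part of the enhancement layer from the base layer:

        import numpy as np

        def lp_decompose(x):
            # Base layer: low-resolution signal. Enhancement layer: full-resolution
            # residual between the input and the upsampled base.
            base = 0.5 * (x[0::2] + x[1::2])      # downsample by pair averaging
            enhancement = x - np.repeat(base, 2)  # nearest-neighbour upsampling
            return base, enhancement

        def lp_reconstruct(base, enhancement):
            # Exact inverse for this filter pair.
            return np.repeat(base, 2) + enhancement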

  16. Windows Azure Platform

    CERN Document Server

    Redkar, Tejaswi

    2010-01-01

    The Azure Services Platform is a brand-new cloud-computing technology from Microsoft. It is composed of four core components: Windows Azure, .NET Services, SQL Services, and Live Services, each with a unique role in the functioning of your cloud service. It is the goal of this book to show you how to use these components, both separately and together, to build flawless cloud services. At its heart Windows Azure Platform is a down-to-earth, code-centric book. This book aims to show you precisely how the components are employed and to demonstrate the techniques and best practices you need to know

  17. Planetary Web Resource Platform

    Science.gov (United States)

    Xing, Z.

    2016-12-01

    In this presentation, we would like to discuss our recent work on a web-based data platform that can simplify the use of planetary mission products and unify the operation of key applications. This platform is extensible and flexible. Products and applications can be added to or removed from it in a distributed fashion. It is built on top of known and proven information technologies for data exposure and discovery. Live examples of the end-to-end web services and in-browser clients for current planetary missions will be demonstrated.

  18. Wireless sensor platform

    Energy Technology Data Exchange (ETDEWEB)

    Joshi, Pooran C.; Killough, Stephen M.; Kuruganti, Phani Teja

    2017-08-08

    A wireless sensor platform and methods of manufacture are provided. The platform involves providing a plurality of wireless sensors, where each of the sensors is fabricated on flexible substrates using printing techniques and low temperature curing. Each of the sensors can include planar sensor elements and planar antennas defined using the printing and curing. Further, each of the sensors can include a communications system configured to encode the data from the sensors into a spread spectrum code sequence that is transmitted to a central computer(s) for use in monitoring an area associated with the sensors.

  19. Windows Azure Platform

    CERN Document Server

    Redkar, Tejaswi

    2011-01-01

    The Windows Azure Platform has rapidly established itself as one of the most sophisticated cloud computing platforms available. With Microsoft working to continually update their product and keep it at the cutting edge, the future looks bright - if you have the skills to harness it. In particular, new features such as remote desktop access, dynamic content caching and secure content delivery using SSL make the latest version of Azure a more powerful solution than ever before. It's widely agreed that cloud computing has produced a paradigm shift in traditional architectural concepts by providin

  20. Scalability of Sustainable Business Models in Hybrid Organizations

    Directory of Open Access Journals (Sweden)

    Adam Jabłoński

    2016-02-01

    Full Text Available The dynamics of change in modern business create new mechanisms for company management to determine their pursuit and the achievement of their high performance. This performance, maintained over a long period of time, becomes a source of ensuring business continuity by companies. A business model that enables the adoption of such assumptions is one that has the ability to generate results in every possible market situation and, moreover, has the features of permanent adaptability. A feature that describes the adaptability of the business model is its scalability. As a property that ensures more work, and more efficient work, with an increasing number of components, scalability can be applied to the concept of business models as the company’s ability to maintain similar or higher performance through it. Ensuring the company’s performance in the long term helps to build the so-called sustainable business model that often balances the objectives of stakeholders and shareholders, and that is created by the implemented principles of value-based management and corporate social responsibility. This perception of business paves the way for building hybrid organizations that integrate business activities with pro-social ones. The combination of an approach typical of hybrid organizations in designing and implementing sustainable business models pursuant to the scalability criterion seems interesting from the cognitive point of view. Today, hybrid organizations are great spaces for building effective and efficient mechanisms for dialogue between business and society. This requires the appropriate business model. The purpose of the paper is to present the conceptualization and operationalization of scalability of sustainable business models that determine the performance of a hybrid organization in the network environment. The paper presents the original concept of applying scalability in sustainable business models with detailed

  1. Center for Programming Models for Scalable Parallel Computing - Towards Enhancing OpenMP for Manycore and Heterogeneous Nodes

    Energy Technology Data Exchange (ETDEWEB)

    Barbara Chapman

    2012-02-01

    OpenMP was not well recognized at the beginning of the project, around year 2003, because of its limited use in DoE production applications and the immature hardware support for an efficient implementation. Yet in recent years, it has been gradually adopted both in HPC applications, mostly in the form of MPI+OpenMP hybrid code, and in mid-scale desktop applications for scientific and experimental studies. We have observed this trend and worked diligently to improve our OpenMP compiler and runtimes, as well as to work with the OpenMP standard organization to make sure OpenMP evolves in a direction close to DoE missions. In the Center for Programming Models for Scalable Parallel Computing project, the HPCTools team at the University of Houston (UH), directed by Dr. Barbara Chapman, has been working with project partners, external collaborators and hardware vendors to increase the scalability and applicability of OpenMP for multi-core (and future manycore) platforms and for distributed memory systems by exploring different programming models, language extensions, compiler optimizations, as well as runtime library support.

  2. Platform Performance and Challenges - using Platforms in Lego Company

    DEFF Research Database (Denmark)

    Munk, Lone; Mortensen, Niels Henrik

    2009-01-01

    This article studies the performance and challenges of using nine implemented product platforms in LEGO Company. Most of these do produce results, but do not meet their goals due to challenges in their usage in daily product development. The main challenges are that the platforms are not being used… by the product-defining users (product developers) and platform erosion. When the platforms are not used it is due to: unsuitable calculation models, lack of goals, rewards or benefits from management, unattractive tradeoffs and difficulties in understanding the platform. This indicates that platform design…

  3. Creative Platform Learning (CPL)

    DEFF Research Database (Denmark)

    Christensen, Jonna Langeland; Hansen, Søren

    Creative Platform Learning (CPL) is a pedagogical method that fosters enterprising and innovative pupils who can apply their creativity to learning new things. According to the new school reform, innovation and entrepreneurship must be made explicit in all subjects. In CPL this is an integrated part of the teaching...

  4. Games and Platform Decisions

    DEFF Research Database (Denmark)

    Hansen, Poul H. Kyvsgård; Mikkola, Juliana Hsuan

    2007-01-01

    is the application of on-line games to train decision makers and to generate an overview of the implications of platform decisions. However, games have to be placed in context with other methods, and we argue that a mixture of games, workshops, and simulations can provide improved...

  5. Education Platforms for America

    Science.gov (United States)

    District Administration, 2012

    2012-01-01

    What is at stake for K12 education in next month's presidential election? Both President Barack Obama (Democratic Party) and Gov. Mitt Romney (Republican Party) say improving education will be a top priority in their administrations, but their policies and initiatives would likely be quite different. While political platforms rarely offer detailed…

  6. Postgraduate programmes as platform

    NARCIS (Netherlands)

    drs Ben Smit; Prof.Dr. Petra Ponte; dr Jacqueline van Swet

    2007-01-01

    Typical of postgraduate courses for experienced teachers is the wealth of professional experience that the students bring with them. Such students can examine their own practice, for which they are fully responsible. Authors from diverse backgrounds address important aspects of the platform, such as

  7. CERN Neutrino Platform Hardware

    CERN Document Server

    Nelson, Kevin

    2017-01-01

    My summer research was broadly in CERN's neutrino platform hardware efforts. This project had two main components: detector assembly and data analysis work for ICARUS. Specifically, I worked on assembly for the ProtoDUNE project and monitored the safety of ICARUS as it was transported to Fermilab by analyzing the accelerometer data from its move.

  8. The Creative Platform

    DEFF Research Database (Denmark)

    Byrge, Christian; Hansen, Søren

    whether you consider third-grade teaching, human-resource development, or radical new thinking in product development in a company. The Creative Platform was developed at Aalborg University through a series of research-and-development activities in collaboration with educational institutions and private...

  9. Building a reliable, scalable and affordable RTC for AO instruments on ELTs

    Science.gov (United States)

    Gratadour, Damien; Sevin, Arnaud; Perret, Denis; Brule, Julien

    2013-12-01

    Addressing the unprecedented amount of computing power needed by the real-time controllers (RTCs) of the ELTs' AO instruments is one of the key technological developments required for the design of the next generation of AO systems. Throughput-oriented architectures such as GPUs, providing orders of magnitude greater computational performance than high-end CPUs, have recently emerged as attractive and economically viable candidates since the fast emergence of devices capable of general-purpose computing. However, using for real-time applications an I/O device that can be neither scheduled nor controlled internally by the operating system, but is sent commands through a closed-source driver, comes with a number of challenges. Building on the experience of almost-real-time end-to-end simulations using GPUs, and relying on the development of the COMPASS platform, a unified and optimized framework for AO simulations and real-time control, our team has engaged in the development of a scalable, heterogeneous GPU-based prototype for an AO RTC. In this paper, we review the main challenges arising when utilizing GPUs in real-time systems for AO and rank them in terms of impact significance and available solutions. We present our strategy to mitigate these issues, including the general architecture of our prototype, the real-time core and additional dedicated components for data acquisition and distribution. Finally, we discuss the expected performance in terms of latency and jitter on the basis of realistic benchmarks, focusing on the dimensioning of the MICADO AO module RTC.

  10. Toward continuous and scalable production of colloidal nanocrystals by switching from batch to droplet reactors.

    Science.gov (United States)

    Niu, Guangda; Ruditskiy, Aleksey; Vara, Madeline; Xia, Younan

    2015-08-21

    Colloidal nanocrystals are finding use in a wide variety of applications ranging from catalysis to photonics, electronics, energy harvesting/conversion/storage, environmental protection, information storage, and biomedicine. Despite the large number of successful demonstrations, there still exists a significant gap between academic studies and industrial applications, owing to the lack of an ability to produce colloidal nanocrystals in large quantities without losing control over their properties. Droplet reactors have shown great potential for the continuous and scalable production of colloidal nanocrystals with uniform and well-controlled sizes, shapes, structures, and compositions. In this tutorial review, we begin with rationales for the use of droplet reactors as a new platform to scale up the production of colloidal nanocrystals, followed by discussions of the general concepts and technical challenges in applying droplet reactors to the synthesis of nanocrystals, including droplet formation, introduction and mixing of reagents, management of gaseous species, and interfacial adsorption. At the end, we use a set of examples to highlight the unique capabilities of droplet reactors for the high-volume production of colloidal nanocrystals in the setting of both homogeneous nucleation and seed-mediated growth.

  11. Low-cost scalable quartz crystal microbalance array for environmental sensing

    Energy Technology Data Exchange (ETDEWEB)

    Anazagasty, Cristain [University of Puerto Rico; Hianik, Tibor [Comenius University, Bratislava, Slovakia; Ivanov, Ilia N [ORNL

    2016-01-01

    Proliferation of environmental sensors for Internet of Things (IoT) applications has increased the need for low-cost platforms capable of accommodating multiple sensors. Quartz crystal microbalance (QCM) crystals coated with nanometer-thin sensor films are suitable for use in high-resolution (~1 ng) selective gas sensor applications. We demonstrate a scalable array for measuring the frequency response of six QCM sensors controlled by low-cost Arduino microcontrollers and a USB multiplexer. Gas pulses and data acquisition were controlled by a LabVIEW user interface. We test the sensor array by measuring the frequency shift of crystals coated with different compositions of polymer composites based on poly(3,4-ethylenedioxythiophene):polystyrene sulfonate (PEDOT:PSS) while the films are exposed to water vapor and oxygen inside a controlled environmental chamber. Our sensor array exhibits performance comparable to that of a commercial QCM system, while enabling high-throughput testing of six QCMs for under $1,000. We use deep neural network structures to process the sensor response and demonstrate that the QCM array is suitable for gas sensing, environmental monitoring, and electronic-nose applications.
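
    The abstract specifies the measurement architecture (six QCMs behind a USB multiplexer, driven by Arduino microcontrollers and a LabVIEW front end) but not its wire protocol. As a rough illustration only, the Python sketch below shows what such a six-channel frequency polling loop could look like; the serial port name, channel-select command and reply format are invented for the example and are not the authors' firmware.

        # Hypothetical polling loop for a six-channel QCM array behind a serial
        # multiplexer. Port, commands and reply format are assumptions, not the
        # actual Arduino firmware protocol used in the paper.
        import time
        import serial  # pyserial

        def read_qcm_frequencies(port="/dev/ttyACM0", n_channels=6, settle_s=0.5):
            """Select each QCM through the multiplexer and read its frequency in Hz."""
            freqs = []
            with serial.Serial(port, 115200, timeout=2) as link:
                for ch in range(n_channels):
                    link.write(f"SEL {ch}\n".encode())  # assumed channel-select command
                    time.sleep(settle_s)                # let the oscillator settle
                    link.write(b"FREQ?\n")              # assumed frequency query
                    freqs.append(float(link.readline().decode().strip()))
            return freqs

        if __name__ == "__main__":
            for ch, f in enumerate(read_qcm_frequencies()):
                print(f"QCM {ch}: {f:.1f} Hz")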

  12. Possibly scalable solar hydrogen generation with quasi-artificial leaf approach.

    Science.gov (United States)

    Patra, Kshirodra Kumar; Bhuskute, Bela D; Gopinath, Chinnakonda S

    2017-07-26

    Any solar energy harvesting technology must provide a net positive energy balance, and the artificial leaf concept provides a platform for solar water splitting (SWS) towards that goal. However, device stability, high photocurrent generation, and scalability are the major challenges. A wireless device based on the quasi-artificial leaf concept (QuAL), comprising Au on a porous TiO2 electrode sensitized by PbS and CdS quantum dots (QDs), was demonstrated to show sustainable solar hydrogen evolution (490 ± 25 µmol/h, corresponding to 12 ml H2 h-1) from ~2 mg of photoanode material coated over a 1 cm2 area with an aqueous hole (S2-/SO32-) scavenger. A linear extrapolation of these results corresponds to a hydrogen production of 6 L/h.g over an area of ~23 × 23 cm2. Under one-sun conditions, 4.3 mA/cm2 photocurrent generation, 5.6% power conversion efficiency, and spontaneous H2 generation were observed at no applied potential (see S1). Direct coupling among all the components enhances light absorption across the entire visible and NIR region as well as charge utilization. The thin-film approach, as in DSSCs, combined with porous titania networks all the components of the device and efficiently converts solar into chemical energy in a sustainable manner.
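
    The quoted figures are mutually consistent, as a quick unit check confirms; the molar volume below is the ideal-gas value near room temperature (~24.5 L/mol) and is an assumption, since the abstract does not state the reference conditions.

        # Unit check of the hydrogen-rate figures quoted above. The ideal-gas
        # molar volume near 25 degC is an assumption; the abstract does not
        # state the reference conditions.
        MOLAR_VOLUME = 24.5          # L/mol, ideal gas near room temperature
        rate_umol_h = 490            # measured H2 evolution, umol/h
        mass_g = 2e-3                # ~2 mg of photoanode material over 1 cm^2

        ml_per_h = rate_umol_h * 1e-6 * MOLAR_VOLUME * 1e3
        print(f"{ml_per_h:.0f} mL/h")                  # ~12 mL/h, as quoted

        l_per_h_g = rate_umol_h * 1e-6 * MOLAR_VOLUME / mass_g
        print(f"{l_per_h_g:.0f} L/(h*g)")              # ~6 L/h per gram, as quoted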

  13. Response of human dental pulp cells to a silver-containing PLGA/TCP-nanofabric as a potential antibacterial regenerative pulp-capping material.

    Science.gov (United States)

    Cvikl, Barbara; Hess, Samuel C; Miron, Richard J; Agis, Hermann; Bosshardt, Dieter; Attin, Thomas; Schmidlin, Patrick R; Lussi, Adrian

    2017-02-27

    Damage or exposure of the dental pulp requires immediate therapeutic intervention. This study assessed the biocompatibility of a silver-containing PLGA/TCP-nanofabric scaffold (PLGA/Ag-TCP) in two in vitro models, i.e. the material adapted on pre-cultured cells and cells directly cultured on the material, respectively. Collagen scaffolds with and without hyaluronic acid (Coll-HA; Coll) using both cell culturing methods, and cells growing on culture plates, served as references. Cell viability and proliferation were assessed after 24, 48, and 72 h based on formazan formation and BrdU incorporation. Scaffolds were harvested. Gene expression of interleukin (IL)-6, tumor necrosis factor (TNF)-alpha, and alkaline phosphatase (AP) was assessed 24 h after stimulation. In both models, formazan formation and BrdU incorporation in dental pulp cells were reduced by PLGA/Ag-TCP, while no significant reduction was found in cells with Coll and Coll-HA. Cells cultured with PLGA/Ag-TCP for 72 h showed relative BrdU incorporation similar to that of cells stimulated with Coll and Coll-HA. A prominent increase in the pro-inflammatory genes IL-6 and TNF-α was observed when cells were cultured with PLGA/Ag-TCP compared to the other groups. This increase was paralleled by a slight increase in AP expression. Overall, no differences between the two culture methods were observed. PLGA/Ag-TCP decreased the viability and proliferation rate of human dental pulp cells and increased the pro-inflammatory capacity and alkaline phosphatase expression. Whether these cellular responses observed in vitro translate into pulp regeneration in vivo will be assessed in further studies.

  14. Proba-V Mission Exploitation Platform

    Science.gov (United States)

    Goor, Erwin; Dries, Jeroen

    2017-04-01

    VITO and partners developed the Proba-V Mission Exploitation Platform (MEP) as an end-to-end solution to drastically improve the exploitation of the Proba-V (a Copernicus contributing mission) EO-data archive (http://proba-v.vgt.vito.be/), the past SPOT-VEGETATION mission and derived vegetation parameters by researchers, service providers and end-users. The platform addresses the analysis of time series of data (+1PB) as well as large-scale on-demand processing of near-real-time data on a powerful and scalable processing environment. Data from the Copernicus Global Land Service is also in scope. From November 2015 an operational Proba-V MEP environment, as an ESA operations service, has been gradually deployed at the VITO data center with direct access to the complete data archive. Since autumn 2016 the platform has been operational, and several applications have already been released to users, e.g. - A time series viewer, showing the evolution of Proba-V bands and derived vegetation parameters from the Copernicus Global Land Service for any area of interest. - Full-resolution viewing services for the complete data archive. - On-demand processing chains on a powerful Hadoop/Spark backend, e.g. for the calculation of N-daily composites. - Virtual Machines with access to the data archive and tools to work with the data, e.g. various toolboxes (GDAL, QGIS, GrassGIS, SNAP toolbox, …) and support for R and Python. This allows users to work with the data immediately, without having to install tools or download data, as well as to design, debug and test applications on the platform. - A prototype of Jupyter Notebooks with worked examples showing the potential of the data. Today the platform is used by several third-party projects to perform R&D activities on the data and to develop/host data analysis toolboxes. In parallel the platform is being further improved and extended. From the MEP PROBA-V, access to Sentinel-2 and Landsat data will

  15. Mobile Platforms and Development Environments

    CERN Document Server

    Helal, Sumi; Li, Wengdong

    2012-01-01

    Mobile platform development has lately become a technological war zone with extremely dynamic and fluid movement, especially in the smart phone and tablet market space. This Synthesis lecture is a guide to the latest developments of the key mobile platforms that are shaping the mobile platform industry. The book covers the three currently dominant native platforms -- iOS, Android and Windows Phone -- along with the device-agnostic HTML5 mobile web platform. The lecture also covers location-based services (LBS) which can be considered as a platform in its own right. The lecture utilizes a sampl

  16. Highly scalable multichannel mesh electronics for stable chronic brain electrophysiology

    Science.gov (United States)

    Fu, Tian-Ming; Hong, Guosong; Viveros, Robert D.; Zhou, Tao

    2017-01-01

    Implantable electrical probes have led to advances in neuroscience, brain−machine interfaces, and treatment of neurological diseases, yet they remain limited in several key aspects. Ideally, an electrical probe should be capable of recording from large numbers of neurons across multiple local circuits and, importantly, allow stable tracking of the evolution of these neurons over the entire course of study. Silicon probes based on microfabrication can yield large-scale, high-density recording but face challenges of chronic gliosis and instability due to mechanical and structural mismatch with the brain. Ultraflexible mesh electronics, on the other hand, have demonstrated negligible chronic immune response and stable long-term brain monitoring at single-neuron level, although, to date, it has been limited to 16 channels. Here, we present a scalable scheme for highly multiplexed mesh electronics probes to bridge the gap between scalability and flexibility, where 32 to 128 channels per probe were implemented while the crucial brain-like structure and mechanics were maintained. Combining this mesh design with multisite injection, we demonstrate stable 128-channel local field potential and single-unit recordings from multiple brain regions in awake restrained mice over 4 mo. In addition, the newly integrated mesh is used to validate stable chronic recordings in freely behaving mice. This scalable scheme for mesh electronics together with demonstrated long-term stability represent important progress toward the realization of ideal implantable electrical probes allowing for mapping and tracking single-neuron level circuit changes associated with learning, aging, and neurodegenerative diseases. PMID:29109247

  17. Highly Scalable Trip Grouping for Large Scale Collective Transportation Systems

    DEFF Research Database (Denmark)

    Gidofalvi, Gyozo; Pedersen, Torben Bach; Risch, Tore

    2008-01-01

    Transportation-related problems, like road congestion, parking, and pollution, are increasing in most cities. In order to reduce traffic, recent work has proposed methods for vehicle sharing, for example for sharing cabs by grouping "closeby" cab requests and thus minimizing transportation cost...... and utilizing cab space. However, the methods published so far do not scale to large data volumes, which is necessary to facilitate large-scale collective transportation systems, e.g., ride-sharing systems for large cities. This paper presents highly scalable trip grouping algorithms, which generalize previous...

  18. A scalable pairwise class interaction framework for multidimensional classification

    DEFF Research Database (Denmark)

    Arias, Jacinto; Gámez, Jose A.; Nielsen, Thomas Dyhre

    2016-01-01

    We present a general framework for multidimensional classification that captures the pairwise interactions between class variables. The pairwise class interactions are encoded using a collection of base classifiers (Phase 1), for which the class predictions are combined in a Markov random field...... inference methods in the second phase. We describe the basic framework and its main properties, as well as strategies for ensuring the scalability of the framework. We include a detailed experimental evaluation based on a range of publicly available databases. Here we analyze the overall performance...

  19. SAR++: A Multi-Channel Scalable and Reconfigurable SAR System

    DEFF Research Database (Denmark)

    Høeg, Flemming; Christensen, Erik Lintz

    2002-01-01

    SAR++ is a technology program aiming at developing the know-how and technology needed to design the next generation of civilian SAR systems. Technology has reached a state which allows major parts of the digital subsystem to be built using commercial off-the-shelf (COTS) components. A design goal...... is to design a modular, scalable and reconfigurable SAR system using such components, in order to ensure maximum flexibility for the users of the actual system and for future system updates. Having these aspects in mind, the SAR++ system is presented with focus on the digital subsystem architecture...

  20. Scalable brain network construction on white matter fibers

    Science.gov (United States)

    Chung, Moo K.; Adluru, Nagesh; Dalton, Kim M.; Alexander, Andrew L.; Davidson, Richard J.

    2011-03-01

    DTI offers a unique opportunity to characterize the structural connectivity of the human brain non-invasively by tracing white matter fiber tracts. Whole-brain tractography studies routinely generate up to half a million tracts per brain, which serve as edges in an extremely large 3D graph with up to half a million edges. Currently there is no agreed-upon method for constructing brain structural network graphs out of such large numbers of white matter tracts. In this paper, we present a scalable iterative framework called the ɛ-neighbor method for building a network graph and apply it to testing abnormal connectivity in autism.
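
    The abstract names the ɛ-neighbor method without spelling it out. The sketch below is one plausible reading, assuming each tract contributes its two endpoints, an endpoint is merged into the nearest existing node within distance ɛ (otherwise it becomes a new node), and an edge joins the two resulting nodes; the authors' implementation may differ in detail.

        # Illustrative eps-neighbor graph construction over tract endpoints.
        # This is an interpretation of the method named in the abstract, not
        # the authors' reference implementation.
        import numpy as np

        def build_network(tracts, eps):
            """tracts: iterable of (p, q) endpoint pairs, each a 3-vector."""
            nodes, edges = [], set()

            def node_for(point):
                point = np.asarray(point, float)
                if nodes:
                    d = np.linalg.norm(np.asarray(nodes) - point, axis=1)
                    i = int(d.argmin())
                    if d[i] <= eps:        # reuse a nearby existing node
                        return i
                nodes.append(point)        # otherwise start a new node
                return len(nodes) - 1

            for p, q in tracts:
                i, j = node_for(p), node_for(q)
                if i != j:
                    edges.add((min(i, j), max(i, j)))
            return np.array(nodes), edges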

  1. Using overlay network architectures for scalable video distribution

    Science.gov (United States)

    Patrikakis, Charalampos Z.; Despotopoulos, Yannis; Fafali, Paraskevi; Cha, Jihun; Kim, Kyuheon

    2004-11-01

    Within the last years, the enormous growth of Internet-based communication as well as the rapid increase in available processing power has led to the widespread use of multimedia streaming as a means to convey information. This work aims at providing an open architecture designed to support scalable streaming to a large number of clients using application-layer multicast. The architecture is based on media relay nodes that can be deployed transparently to any existing media distribution scheme which supports media streamed using the RTP and RTSP protocols. It builds on overlay networks at the application level, featuring rate adaptation mechanisms for responding to network congestion.

  2. Design for scalability in 3D computer graphics architectures

    DEFF Research Database (Denmark)

    Holten-Lund, Hans Erik

    2002-01-01

    This thesis describes useful methods and techniques for designing scalable hybrid parallel rendering architectures for 3D computer graphics. Various techniques for utilizing parallelism in a pipelined system are analyzed. During the Ph.D. study a prototype 3D graphics architecture named Hybris has...... been developed. Hybris is a prototype rendering architecture which can be tailored to many specific 3D graphics applications and implemented in various ways. Parallel software implementations for both single- and multi-processor Windows 2000 systems have been demonstrated. Working hardware...... as a case study and an application of the Hybris graphics architecture....

  3. Scalable web services for the PSIPRED Protein Analysis Workbench.

    Science.gov (United States)

    Buchan, Daniel W A; Minneci, Federico; Nugent, Tim C O; Bryson, Kevin; Jones, David T

    2013-07-01

    Here, we present the new UCL Bioinformatics Group's PSIPRED Protein Analysis Workbench. The Workbench unites all of our previously available analysis methods into a single web-based framework. The new web portal provides a greatly streamlined user interface with a number of new features to allow users to better explore their results. We offer a number of additional services to enable computationally scalable execution of our prediction methods; these include SOAP and XML-RPC web server access and new Hadoop packages. All software and services are available via the UCL Bioinformatics Group website at http://bioinf.cs.ucl.ac.uk/.

  4. A Scalable Framework to Detect Personal Health Mentions on Twitter.

    Science.gov (United States)

    Yin, Zhijun; Fabbri, Daniel; Rosenbloom, S Trent; Malin, Bradley

    2015-06-05

    Biomedical research has traditionally been conducted via surveys and the analysis of medical records. However, these resources are limited in their content, such that non-traditional domains (eg, online forums and social media) have an opportunity to supplement the view of an individual's health. The objective of this study was to develop a scalable framework to detect personal health status mentions on Twitter and assess the extent to which such information is disclosed. We collected more than 250 million tweets via the Twitter streaming API over a 2-month period in 2014. The corpus was filtered down to approximately 250,000 tweets, stratified across 34 high-impact health issues, based on guidance from the Medical Expenditure Panel Survey. We created a labeled corpus of several thousand tweets via a survey, administered over Amazon Mechanical Turk, that documents when terms correspond to mentions of personal health issues or an alternative (eg, a metaphor). We engineered a scalable classifier for personal health mentions via feature selection and assessed its potential over the health issues. We further investigated the utility of the tweets by determining the extent to which Twitter users disclose personal health status. Our investigation yielded several notable findings. First, we find that tweets from a small subset of the health issues can train a scalable classifier to detect health mentions. Specifically, training on 2000 tweets from four health issues (cancer, depression, hypertension, and leukemia) yielded a classifier with precision of 0.77 on all 34 health issues. Second, Twitter users disclosed personal health status for all health issues. Notably, personal health status was disclosed over 50% of the time for 11 out of 34 (33%) investigated health issues. Third, the disclosure rate was dependent on the health issue in a statistically significant manner. Overall, these results indicate that personal health mentions can be detected on Twitter in a scalable manner, and that these mentions correspond to the health issues of the Twitter users
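
    As a toy illustration of the classification step, a bag-of-words model of the kind described could be trained as below; the tiny corpus, features and classifier are generic stand-ins, not the feature-selected pipeline the authors engineered.

        # Toy personal-health-mention classifier; data and model are generic
        # stand-ins for the paper's feature-selected pipeline.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression

        tweets = [
            "fighting my depression again today",               # personal mention
            "my doctor says my hypertension is under control",  # personal mention
            "I was diagnosed with cancer last year",            # personal mention
            "the economy is a cancer on society",               # metaphor
            "donate to leukemia research this weekend",         # not about the user
            "this deadline gives me a headache lol",            # figurative
        ]
        labels = [1, 1, 1, 0, 0, 0]   # 1 = personal health status mention

        vec = TfidfVectorizer(ngram_range=(1, 2))
        clf = LogisticRegression().fit(vec.fit_transform(tweets), labels)
        print(clf.predict(vec.transform(["my leukemia treatment starts monday"])))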

  5. A Scalable Architecture of a Structured LDPC Decoder

    Science.gov (United States)

    Lee, Jason Kwok-San; Lee, Benjamin; Thorpe, Jeremy; Andrews, Kenneth; Dolinar, Sam; Hamkins, Jon

    2004-01-01

    We present a scalable decoding architecture for a certain class of structured LDPC codes. The codes are designed using a small (n,r) protograph that is replicated Z times to produce a decoding graph for a (Z x n, Z x r) code. Using this architecture, we have implemented a decoder for a (4096,2048) LDPC code on a Xilinx Virtex-II 2000 FPGA, and achieved decoding speeds of 31 Mbps with 10 fixed iterations. The implemented message-passing algorithm uses an optimized 3-bit non-uniform quantizer that operates with 0.2dB implementation loss relative to a floating point decoder.
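
    The protograph replication can be made concrete with a standard quasi-cyclic lifting, in which each edge of the small base graph is expanded into a Z×Z circulant permutation; the random shifts below are illustrative, as the abstract does not list the permutations actually used.

        # Lift an r x n protograph into a (Z*r) x (Z*n) parity-check matrix by
        # replacing each edge with a circulant permutation. Random shifts are
        # illustrative; the paper's actual permutations are not given.
        import numpy as np

        def lift_protograph(B, Z, rng=np.random.default_rng(0)):
            r, n = B.shape
            H = np.zeros((Z * r, Z * n), dtype=np.uint8)
            I = np.eye(Z, dtype=np.uint8)
            for i in range(r):
                for j in range(n):
                    if B[i, j]:
                        shift = int(rng.integers(Z))
                        H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(I, shift, axis=1)
            return H

        B = np.array([[1, 1, 1, 0],    # toy base graph: 2 checks, 4 variables
                      [0, 1, 1, 1]])
        H = lift_protograph(B, Z=4)    # 8 x 16 parity-check matrix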

  6. Scalable implementation of boson sampling with trapped ions.

    Science.gov (United States)

    Shen, C; Zhang, Z; Duan, L-M

    2014-02-07

    Boson sampling solves a classically intractable problem by sampling from a probability distribution given by matrix permanents. We propose a scalable implementation of boson sampling using local transverse phonon modes of trapped ions to encode the bosons. The proposed scheme allows deterministic preparation and high-efficiency readout of the bosons in the Fock states and universal mode mixing. With the state-of-the-art trapped ion technology, it is feasible to realize boson sampling with tens of bosons by this scheme, which would outperform the most powerful classical computers and constitute an effective disproof of the famous extended Church-Turing thesis.
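
    The classical intractability invoked here comes from the matrix permanent: even the fastest known exact algorithm, Ryser's formula, is exponential in the matrix size. A direct small-n transcription makes the point concrete:

        # Ryser's formula for the matrix permanent: exact but O(2^n * n^2) in
        # this naive transcription, which is why boson sampling is classically hard.
        from itertools import combinations

        def permanent(A):
            n = len(A)
            total = 0.0
            for k in range(1, n + 1):
                for cols in combinations(range(n), k):
                    prod = 1.0
                    for row in A:
                        prod *= sum(row[j] for j in cols)
                    total += (-1) ** k * prod
            return (-1) ** n * total

        print(permanent([[1, 2], [3, 4]]))  # 1*4 + 2*3 = 10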

  7. Scalable video on demand adaptive Internet-based distribution

    CERN Document Server

    Zink, Michael

    2013-01-01

    In recent years, the proliferation of available video content and the popularity of the Internet have encouraged service providers to develop new ways of distributing content to clients. Increasing video scaling ratios and advanced digital signal processing techniques have led to Internet Video-on-Demand applications, but these currently lack efficiency and quality. Scalable Video on Demand: Adaptive Internet-based Distribution examines how current video compression and streaming can be used to deliver high-quality applications over the Internet. In addition to analysing the problems

  8. Empirical Evaluation of Superposition Coded Multicasting for Scalable Video

    KAUST Repository

    Chun Pong Lau

    2013-03-01

    In this paper we investigate cross-layer superposition coded multicast (SCM). Previous studies have proven its effectiveness in exploiting better channel capacity and service granularities via both analytical and simulation approaches. However, it has never been practically implemented using a commercial 4G system. This paper demonstrates our prototype achieving SCM using a standard 802.16-based testbed for scalable video transmissions. In particular, to implement superposition coded (SPC) modulation, we take advantage of a novel software approach, namely logical SPC (L-SPC), which aims to mimic physical-layer superposition coded modulation. The emulation results show improved throughput compared with the generic multicast method.

  9. Common tester platform concept.

    Energy Technology Data Exchange (ETDEWEB)

    Hurst, Michael James

    2008-05-01

    This report summarizes the results of a case study on the doctrine of a common tester platform, a concept of a standardized platform that can be applied across the broad spectrum of testing requirements throughout the various stages of a weapons program, as well as across the various weapons programs. The common tester concept strives to define an affordable, next-generation design that will meet testing requirements with the flexibility to grow and expand, supporting the initial development stages of a weapons program through to the final production and surveillance stages. This report discusses a concept combining key leveraging technologies and operational concepts with prototype tester-development experiences and practical lessons gleaned from past weapons programs.

  10. Available: motorised platform

    CERN Multimedia

    The COMPASS collaboration

    2014-01-01

    The COMPASS collaboration would like to offer to a new owner the following useful and fully operational piece of equipment, which is due to be replaced with better adapted equipment. Please contact Erwin Bielert (erwin.bielert@cern.ch or 160539) for further information. Motorized platform (FOR FREE): Fabricated by ACL (Alfredo Cardoso & Cia Ltd) in Portugal. The model number is MeXs 5-30. Specifications: 5 m wide, 1 m deep, adjustable height (1.5 m if folded). Maximum working floor height: 4 m. Conforms to CERN regulations, number LV158. Type LD500, capacity 500 kg and weight 2000 kg. If no interested party is found before December 2014, the platform will be thrown away.

  11. Cloud Robotics Platforms

    Directory of Open Access Journals (Sweden)

    Busra Koken

    2015-01-01

    Full Text Available Cloud robotics is a rapidly evolving field that allows robots to offload computation-intensive and storage-intensive jobs into the cloud. Robots are limited in terms of computational capacity, memory and storage. The cloud provides unlimited computation power, memory, storage and, especially, opportunities for collaboration. Cloud-enabled robots are divided into two categories: standalone and networked robots. This article surveys cloud robotic platforms and standalone and networked robotic works such as grasping, simultaneous localization and mapping (SLAM) and monitoring.

  12. "Platform switching": serendipity.

    Science.gov (United States)

    Kalavathy, N; Sridevi, J; Gehlot, Roshni; Kumar, Santosh

    2014-01-01

    Implant dentistry is the latest developing field in terms of clinical techniques, research, material science and oral rehabilitation. Extensive work is being done to improve the design of implants in order to achieve better esthetics and function. The main drawback with respect to implant restoration is achieving good osseointegration along with satisfactory stress distribution, which in turn improves the prognosis of an implant prosthesis by reducing crestal bone loss. Many concepts have been developed with reference to surface coating of implants, surgical techniques for implant placement, immediate and delayed loading, the platform switching concept, etc. This article attempts to review the concept of platform switching, which was in fact revealed accidentally due to the nonavailability of an abutment appropriate to the size of the implant placed. A few aspects of platform switching, an emerging idea for reducing crestal bone loss, have been covered. Data were located and prepared using textbooks, Google searches and related articles.

  13. "Platform switching": Serendipity

    Directory of Open Access Journals (Sweden)

    N Kalavathy

    2014-01-01

    Full Text Available Implant dentistry is the latest developing field in terms of clinical techniques, research, material science and oral rehabilitation. Extensive work is being done to improve the design of implants in order to achieve better esthetics and function. The main drawback with respect to implant restoration is achieving good osseointegration along with satisfactory stress distribution, which in turn improves the prognosis of an implant prosthesis by reducing crestal bone loss. Many concepts have been developed with reference to surface coating of implants, surgical techniques for implant placement, immediate and delayed loading, the platform switching concept, etc. This article attempts to review the concept of platform switching, which was in fact revealed accidentally due to the nonavailability of an abutment appropriate to the size of the implant placed. A few aspects of platform switching, an emerging idea for reducing crestal bone loss, have been covered. Data were located and prepared using textbooks, Google searches and related articles.

  14. ANALYZING AVIATION SAFETY REPORTS: FROM TOPIC MODELING TO SCALABLE MULTI-LABEL CLASSIFICATION

    Data.gov (United States)

    National Aeronautics and Space Administration — ANALYZING AVIATION SAFETY REPORTS: FROM TOPIC MODELING TO SCALABLE MULTI-LABEL CLASSIFICATION AMRUDIN AGOVIC*, HANHUAI SHAN, AND ARINDAM BANERJEE Abstract. The...

  15. Implementing a hardware-friendly wavelet entropy codec for scalable video

    Science.gov (United States)

    Eeckhaut, Hendrik; Christiaens, Mark; Devos, Harald; Stroobandt, Dirk

    2005-11-01

    In the RESUME project (Reconfigurable Embedded Systems for Use in Multimedia Environments) we explore the benefits of implementing scalable multimedia applications using reconfigurable hardware by building an FPGA implementation of a scalable wavelet-based video decoder. The term "scalable" refers to a design that can easily accommodate changes in quality of service with minimal computational overhead. This is important for portable devices that have different Quality of Service (QoS) requirements and varying power restrictions. The scalable video decoder consists of three major blocks: a Wavelet Entropy Decoder (WED), an Inverse Discrete Wavelet Transformer (IDWT) and a Motion Compensator (MC). The WED decodes entropy-encoded parts of the video stream into wavelet-transformed frames. These frames are decoded bit layer by bit layer; the more bit layers are decoded, the higher the image quality (quality scalability). Resolution scalability is obtained as an inherent property of the IDWT. Finally, frame rate scalability is achieved through hierarchical motion compensation. In this article we present the results of our investigation into the hardware implementation of such a scalable video codec. In particular, we found that the implementation of the entropy codec is a significant bottleneck. We present an alternative, hardware-friendly algorithm for entropy coding with excellent data locality (both temporal and spatial), streaming capabilities, a high degree of parallelism, a smaller memory footprint and state-of-the-art compression, while maintaining all required scalability properties. These claims are supported by an effective hardware implementation on an FPGA.

  16. SCALABLE TIME SERIES CHANGE DETECTION FOR BIOMASS MONITORING USING GAUSSIAN PROCESS

    Data.gov (United States)

    National Aeronautics and Space Administration — SCALABLE TIME SERIES CHANGE DETECTION FOR BIOMASS MONITORING USING GAUSSIAN PROCESS VARUN CHANDOLA AND RANGA RAJU VATSAVAI Abstract. Biomass monitoring,...

  17. Traffic and Quality Characterization of the H.264/AVC Scalable Video Coding Extension

    Directory of Open Access Journals (Sweden)

    Geert Van der Auwera

    2008-01-01

    Full Text Available The recent scalable video coding (SVC) extension to the H.264/AVC video coding standard has unprecedented compression efficiency while supporting a wide range of scalability modes, including temporal, spatial, and quality (SNR) scalability, as well as combined spatiotemporal SNR scalability. The traffic characteristics, especially the bit rate variabilities, of the individual layer streams critically affect their network transport. We study the SVC traffic statistics, including the bit rate distortion and bit rate variability distortion, with long CIF-resolution video sequences and compare them with the corresponding MPEG-4 Part 2 traffic statistics. We consider (i) temporal scalability with three temporal layers, (ii) spatial scalability with a QCIF base layer and a CIF enhancement layer, as well as (iii) the quality scalability modes FGS and MGS. We find that the significant improvement in RD efficiency of SVC is accompanied by substantially higher traffic variabilities as compared to the equivalent MPEG-4 Part 2 streams. We find that separately analyzing the traffic of temporal-scalability-only encodings gives reasonable estimates of the traffic statistics of the temporal layers embedded in combined spatiotemporal encodings and in the base layer of combined FGS-temporal encodings. Overall, we find that SVC achieves significantly higher compression ratios than MPEG-4 Part 2, but produces unprecedented levels of traffic variability, thus presenting new challenges for the network transport of scalable video.

  18. Platform computing powers enterprise grid

    CERN Multimedia

    2002-01-01

    Platform Computing today announced that the Stanford Linear Accelerator Center is using Platform LSF 5 to carry out groundbreaking research into the origins of the universe. Platform LSF 5 will deliver the mammoth computing power that SLAC's Linear Accelerator needs to process the data associated with intense high-energy physics research (1 page).

  19. Product Platform Screening at LEGO

    DEFF Research Database (Denmark)

    Mortensen, Niels Henrik; Steen Jensen, Thomas; Nielsen, Ole Fiil

    2012-01-01

    Product platforms offer great benefits to companies developing new products in highly competitive markets. Literature describes how a single platform can be designed from a technical point of view, but rarely mentions how the process begins. How do companies identify possible platform candidates...

  20. Contemporary Internet of Things platforms

    OpenAIRE

    Mineraud, Julien; Mazhelis, Oleksiy; Su, Xiang; Tarkoma, Sasu

    2015-01-01

    This document regroups a representative, but non-exhaustive, list of contemporary IoT platforms. The platforms are ordered alphabetically. The aim of this document is to provide a quick review of current IoT platforms, as well as relevant information.

  1. NCI's Transdisciplinary High Performance Scientific Data Platform

    Science.gov (United States)

    Evans, Ben; Antony, Joseph; Bastrakova, Irina; Car, Nicholas; Cox, Simon; Druken, Kelsey; Evans, Bradley; Fraser, Ryan; Ip, Alex; Kemp, Carina; King, Edward; Minchin, Stuart; Larraondo, Pablo; Pugh, Tim; Richards, Clare; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2016-04-01

    The Australian National Computational Infrastructure (NCI) manages Earth Systems data collections sourced from several domains and organisations on a single High Performance Data (HPD) Node to further Australia's national priority research and innovation agenda. The NCI HPD Node has rapidly established its value, currently managing over 10 PBytes of datasets from collections that span a wide range of disciplines including climate, weather, environment, geoscience, geophysics, water resources and social sciences. Importantly, in order to facilitate broad user uptake, maximise reuse and enable transdisciplinary access through software and standardised interfaces, the datasets, associated information systems and processes have been incorporated into the design and operation of a unified platform that NCI has called the National Environmental Research Data Interoperability Platform (NERDIP). The key goal of the NERDIP is to regularise data access so that it is easily discoverable, interoperable across domains and enabled for high-performance methods. It adopts and implements international standards and data conventions, and promotes scientific integrity within a high-performance computing and data analysis environment. NCI has established a rich and flexible computing environment for accessing this data: through the NCI supercomputer; through a private cloud that supports both domain-focused virtual laboratories and common interactive analysis interfaces; and remotely through scalable data services. Data collections of this importance must be managed with careful consideration of both their current use and the needs of the end-communities, as well as their future potential use, such as transitioning to more advanced software and improved methods. It is therefore critical that the data platform is both well-managed and trusted for stable production use (including transparency and reproducibility), and agile enough to incorporate new technological advances and

  2. Scalable and Fault Tolerant Failure Detection and Consensus

    Energy Technology Data Exchange (ETDEWEB)

    Katti, Amogh [University of Reading, UK; Di Fatta, Giuseppe [University of Reading, UK; Naughton III, Thomas J [ORNL; Engelmann, Christian [ORNL

    2015-01-01

    Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a fault tolerant failure detection and consensus algorithm. This paper presents and compares two novel failure detection and consensus algorithms. The proposed algorithms are based on Gossip protocols and are inherently fault-tolerant and scalable. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in both algorithms the number of Gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus.
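
    The reported logarithmic scaling in the number of gossip cycles is characteristic of push-style gossip dissemination, which a minimal simulation illustrates (this is generic push gossip, not the paper's exact failure-detection protocol):

        # Cycles of push gossip until a rumor started at node 0 reaches all n
        # nodes; generic gossip, not the paper's exact protocol.
        import math, random

        def gossip_cycles(n, rng=random.Random(0)):
            informed = {0}
            cycles = 0
            while len(informed) < n:
                for _ in list(informed):
                    informed.add(rng.randrange(n))  # each informed node pushes once
                cycles += 1
            return cycles

        for n in (64, 256, 1024, 4096):
            print(n, gossip_cycles(n), "cycles; log2(n) =", round(math.log2(n)))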

  3. Performance and Scalability Evaluation of the Ceph Parallel File System

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Feiyi [ORNL; Nelson, Mark [Inktank Storage, Inc.; Oral, H Sarp [ORNL; Settlemyer, Bradley W [ORNL; Atchley, Scott [ORNL; Caldwell, Blake A [ORNL; Hill, Jason J [ORNL

    2013-01-01

    Ceph is an open-source and emerging parallel distributed file and storage system technology. By design, Ceph assumes it is running on unreliable, commodity storage and network hardware, and provides reliability and fault tolerance through controlled object placement and data replication. We evaluated the Ceph technology for scientific high-performance computing (HPC) environments. This paper presents our evaluation methodology, experiments, results and observations, mostly from parallel I/O performance and scalability perspectives. Our work made two unique contributions. First, our evaluation was performed under a realistic setup for a large-scale capability HPC environment using a commercial high-end storage system. Second, our path of investigation, tuning efforts, and findings made direct contributions to Ceph's development and improved its code quality, scalability, and performance. These changes should benefit both the Ceph and HPC communities at large. Throughout the evaluation, we observed that Ceph is still an evolving technology under fast-paced development, showing great promise.

  4. The Node Monitoring Component of a Scalable Systems Software Environment

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Samuel James [Iowa State Univ., Ames, IA (United States)

    2006-01-01

    This research describes Fountain, a suite of programs used to monitor the resources of a cluster. A cluster is a collection of individual computers that are connected via a high-speed communication network. They are traditionally used by users who desire more resources, such as processing power and memory, than any single computer can provide. A common drawback to effectively utilizing such a large-scale system is the management infrastructure, which often does not scale well as the system grows. Large-scale parallel systems provide new research challenges in the area of systems software, the programs or tools that manage the system from boot-up to running a parallel job. The approach presented in this thesis utilizes a collection of separate components that communicate with each other to achieve a common goal. While systems software comprises a broad array of components, this thesis focuses on the design choices for a node monitoring component. We describe Fountain, an implementation of the Scalable Systems Software (SSS) node monitor specification. It is targeted at aggregate node monitoring for clusters, with both scalability and fault tolerance as its design goals. It leverages widely used technologies such as XML and HTTP to present an interface to other components in the SSS environment.

  5. Towards Scalable Strain Gauge-Based Joint Torque Sensors.

    Science.gov (United States)

    Khan, Hamza; D'Imperio, Mariapaola; Cannella, Ferdinando; Caldwell, Darwin G; Cuschieri, Alfred; Semini, Claudio

    2017-08-18

    During recent decades, strain gauge-based joint torque sensors have been commonly used to provide high-fidelity torque measurements in robotics. Although measurement of joint torque/force is often required in engineering research and development, the gluing and wiring of strain gauges used as torque sensors pose difficulties during integration within the restricted space available in small joints. The problem is compounded by the need for a scalable geometric design to measure joint torque. In this communication, we describe a novel design of a strain gauge-based mono-axial torque sensor referred to as the square-cut torque sensor (SCTS), the significant features of which are a high degree of linearity, symmetry, and high scalability in terms of both size and measuring range. Most importantly, the SCTS provides easy access for gluing and wiring of the strain gauges on the sensor surface despite the limited available space. We demonstrated that the SCTS performs better in terms of symmetry (clockwise and counterclockwise rotation) and linearity. These capabilities have been shown through finite element modeling (ANSYS) and confirmed by data obtained in load-testing experiments. The high performance of the SCTS was confirmed by studies involving changes in size, material and/or wing width and thickness. Finally, we demonstrated that the SCTS can be successfully implemented inside the hip joints of the miniaturized hydraulically actuated quadruped robot MiniHyQ. This communication is based on work presented at the 18th International Conference on Climbing and Walking Robots (CLAWAR).

  6. Developing a scalable artificial photosynthesis technology through nanomaterials by design.

    Science.gov (United States)

    Lewis, Nathan S

    2016-12-06

    An artificial photosynthetic system that directly produces fuels from sunlight could provide an approach to scalable energy storage and a technology for the carbon-neutral production of high-energy-density transportation fuels. A variety of designs are currently being explored to create a viable artificial photosynthetic system, and the most technologically advanced systems are based on semiconducting photoelectrodes. Here, I discuss the development of an approach that is based on an architecture, first conceived around a decade ago, that combines arrays of semiconducting microwires with flexible polymeric membranes. I highlight the key steps that have been taken towards delivering a fully functional solar fuels generator, which have exploited advances in nanotechnology at all hierarchical levels of device construction, and include the discovery of earth-abundant electrocatalysts for fuel formation and materials for the stabilization of light absorbers. Finally, I consider the remaining scientific and engineering challenges facing the fulfilment of an artificial photosynthetic system that is simultaneously safe, robust, efficient and scalable.

  7. A highly scalable peptide-based assay system for proteomics.

    Directory of Open Access Journals (Sweden)

    Igor A Kozlov

    Full Text Available We report a scalable and cost-effective technology for generating and screening high-complexity customizable peptide sets. The peptides are made as peptide-cDNA fusions by in vitro transcription/translation from pools of DNA templates generated by microarray-based synthesis. This approach enables large custom sets of peptides to be designed in silico, manufactured cost-effectively in parallel, and assayed efficiently in a multiplexed fashion. The utility of our peptide-cDNA fusion pools was demonstrated in two activity-based assays designed to discover protease and kinase substrates. In the protease assay, cleaved peptide substrates were separated from uncleaved and identified by digital sequencing of their cognate cDNAs. We screened the 3,011 amino acid HCV proteome for susceptibility to cleavage by the HCV NS3/4A protease and identified all 3 known trans cleavage sites with high specificity. In the kinase assay, peptide substrates phosphorylated by tyrosine kinases were captured and identified by sequencing of their cDNAs. We screened a pool of 3,243 peptides against Abl kinase and showed that phosphorylation events detected were specific and consistent with the known substrate preferences of Abl kinase. Our approach is scalable and adaptable to other protein-based assays.

  8. Scalable privacy-preserving big data aggregation mechanism

    Directory of Open Access Journals (Sweden)

    Dapeng Wu

    2016-08-01

    Full Text Available As the massive sensor data generated by large-scale Wireless Sensor Networks (WSNs) have recently become an indispensable part of ‘Big Data’, the collection, storage, transmission and analysis of big sensor data attract considerable attention from researchers. Targeting the privacy requirements of large-scale WSNs and focusing on the energy-efficient collection of big sensor data, a Scalable Privacy-preserving Big Data Aggregation (Sca-PBDA) method is proposed in this paper. Firstly, according to the pre-established gradient topology structure, sensor nodes in the network are divided into clusters. Secondly, sensor data is modified by each node according to the privacy-preserving configuration message received from the sink. Subsequently, intra- and inter-cluster data aggregation is employed during the big sensor data reporting phase to reduce energy consumption. Lastly, aggregated results are recovered by the sink to complete the privacy-preserving big data aggregation. Simulation results validate the efficacy and scalability of Sca-PBDA and show that the big sensor data generated by large-scale WSNs is efficiently aggregated to reduce network resource consumption and that sensor data privacy is effectively protected to meet ever-growing application requirements.
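
    The abstract describes per-node modification of readings followed by in-network aggregation and recovery at the sink, without giving the transform. The sketch below uses a generic additive-masking scheme to convey the idea; the per-node seeds shared with the sink are an assumption for illustration, not the actual Sca-PBDA construction.

        # Generic additive-masking aggregation: nodes hide readings with
        # pseudorandom masks the sink can regenerate and subtract. This is an
        # illustration of the idea, not the Sca-PBDA construction itself.
        import random

        def mask(node_id, epoch, secret="shared-with-sink"):
            # hash() keeps the demo deterministic within a single run
            return random.Random(hash((secret, node_id, epoch))).uniform(-1e3, 1e3)

        def masked_reading(node_id, value, epoch):
            return value + mask(node_id, epoch)        # sent towards the cluster head

        def sink_recover(agg_sum, node_ids, epoch):
            return agg_sum - sum(mask(n, epoch) for n in node_ids)

        readings = {n: 20.0 + n for n in range(10)}    # toy sensor values
        agg = sum(masked_reading(n, v, epoch=7) for n, v in readings.items())
        print(sink_recover(agg, readings.keys(), epoch=7), sum(readings.values()))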

  9. The dust acoustic waves in three dimensional scalable complex plasma

    CERN Document Server

    Zhukhovitskii, D I

    2015-01-01

    Dust acoustic waves in the bulk of a dust cloud in the complex plasma of a low-pressure gas discharge under microgravity conditions are considered. The dust component of the complex plasma is assumed to be a scalable system that conforms to the ionization equation of state (IEOS) developed in our previous study. We find singular points of this IEOS that determine the behavior of the sound velocity in different regions of the cloud. The fluid approach is utilized to deduce the wave equation, which includes the neutral drag term. It is shown that the sound velocity is fully defined by the particle compressibility, which is calculated on the basis of the scalable IEOS. The sound velocities and damping rates calculated for different 3D complex plasmas in both ac and dc discharges demonstrate a good correlation with experimental data within the limits of validity of the theory. The theory provides an interpretation for the observed independence of the sound velocity from the coordinate and for a weak dependence on the particle ...

  10. Developing a scalable artificial photosynthesis technology through nanomaterials by design

    Science.gov (United States)

    Lewis, Nathan S.

    2016-12-01

    An artificial photosynthetic system that directly produces fuels from sunlight could provide an approach to scalable energy storage and a technology for the carbon-neutral production of high-energy-density transportation fuels. A variety of designs are currently being explored to create a viable artificial photosynthetic system, and the most technologically advanced systems are based on semiconducting photoelectrodes. Here, I discuss the development of an approach that is based on an architecture, first conceived around a decade ago, that combines arrays of semiconducting microwires with flexible polymeric membranes. I highlight the key steps that have been taken towards delivering a fully functional solar fuels generator, which have exploited advances in nanotechnology at all hierarchical levels of device construction, and include the discovery of earth-abundant electrocatalysts for fuel formation and materials for the stabilization of light absorbers. Finally, I consider the remaining scientific and engineering challenges facing the fulfilment of an artificial photosynthetic system that is simultaneously safe, robust, efficient and scalable.

  11. Scalable Nernst thermoelectric power using a coiled galfenol wire

    Directory of Open Access Journals (Sweden)

    Zihao Yang

    2017-09-01

    Full Text Available The Nernst thermopower usually is considered far too weak in most metals for waste heat recovery. However, its transverse orientation gives it an advantage over the Seebeck effect on non-flat surfaces. Here, we experimentally demonstrate the scalable generation of a Nernst voltage in an air-cooled metal wire coiled around a hot cylinder. In this geometry, a radial temperature gradient generates an azimuthal electric field in the coil. A Galfenol (Fe0.85Ga0.15) wire is wrapped around a cartridge heater, and the voltage drop across the wire is measured as a function of axial magnetic field. As expected, the Nernst voltage scales linearly with the length of the wire. Based on heat conduction and fluid dynamic equations, finite-element method is used to calculate the temperature gradient across the Galfenol wire and determine the Nernst coefficient. A giant Nernst coefficient of -2.6 μV/KT at room temperature is estimated, in agreement with measurements on bulk Galfenol. We expect that the giant Nernst effect in Galfenol arises from its magnetostriction, presumably through enhanced magnon-phonon coupling. Our results demonstrate the feasibility of a transverse thermoelectric generator capable of scalable output power from non-flat heat sources.

  12. Scalable Nernst thermoelectric power using a coiled galfenol wire

    Science.gov (United States)

    Yang, Zihao; Codecido, Emilio A.; Marquez, Jason; Zheng, Yuanhua; Heremans, Joseph P.; Myers, Roberto C.

    2017-09-01

    The Nernst thermopower usually is considered far too weak in most metals for waste heat recovery. However, its transverse orientation gives it an advantage over the Seebeck effect on non-flat surfaces. Here, we experimentally demonstrate the scalable generation of a Nernst voltage in an air-cooled metal wire coiled around a hot cylinder. In this geometry, a radial temperature gradient generates an azimuthal electric field in the coil. A Galfenol (Fe0.85Ga0.15) wire is wrapped around a cartridge heater, and the voltage drop across the wire is measured as a function of axial magnetic field. As expected, the Nernst voltage scales linearly with the length of the wire. Based on heat conduction and fluid dynamic equations, finite-element method is used to calculate the temperature gradient across the Galfenol wire and determine the Nernst coefficient. A giant Nernst coefficient of -2.6 μV/KT at room temperature is estimated, in agreement with measurements on bulk Galfenol. We expect that the giant Nernst effect in Galfenol arises from its magnetostriction, presumably through enhanced magnon-phonon coupling. Our results demonstrate the feasibility of a transverse thermoelectric generator capable of scalable output power from non-flat heat sources.
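
    Because the Nernst field is transverse (E = N·B·∇T), the coil voltage is simply that field integrated along the wire, which is why the output scales linearly with wire length as the abstract notes. A back-of-the-envelope estimate with the quoted coefficient (the field, gradient and length below are illustrative assumptions, not the experimental conditions):

        # Order-of-magnitude Nernst voltage for a coiled wire: V = N * B * dT/dr * L.
        # B, dT/dr and L are illustrative assumptions, not the paper's conditions.
        N = 2.6e-6       # |Nernst coefficient| of Galfenol, V/(K*T), as quoted
        B = 1.0          # axial magnetic field, T (assumed)
        grad_T = 1e3     # radial temperature gradient across the wire, K/m (assumed)
        L = 10.0         # total wire length in the coil, m (assumed)

        print(f"V = {N * B * grad_T * L * 1e3:.0f} mV")   # ~26 mV for these numbers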

  13. Scalable fast multipole methods for vortex element methods

    KAUST Repository

    Hu, Qi

    2012-11-01

    We use a particle-based method to simulate incompressible flows, where the Fast Multipole Method (FMM) is used to accelerate the calculation of particle interactions. The most time-consuming kernels, the Biot-Savart equation and the stretching term of the vorticity equation, are mathematically reformulated so that only two Laplace scalar potentials are used instead of six, while automatically ensuring divergence-free far-field computation. Based on this formulation, and on our previous work on a scalar heterogeneous FMM algorithm, we develop a new FMM-based vortex method capable of simulating general flows, including turbulence, on heterogeneous architectures, which distributes the work between multi-core CPUs and GPUs to best utilize the hardware resources and achieve excellent scalability. The algorithm also uses new data structures which can dynamically manage inter-node communication and load balance efficiently, with only a small parallel construction overhead. This algorithm can scale to large-sized clusters, showing both strong and weak scalability. Careful error and timing trade-off analyses are also performed for the cutoff functions induced by the vortex particle method. Our implementation can perform one time step of the velocity+stretching calculation for one billion particles on 32 nodes in 55.9 seconds, which yields 49.12 Tflop/s. © 2012 IEEE.

  14. Towards Scalable Strain Gauge-Based Joint Torque Sensors

    Science.gov (United States)

    D’Imperio, Mariapaola; Cannella, Ferdinando; Caldwell, Darwin G.; Cuschieri, Alfred

    2017-01-01

    During recent decades, strain gauge-based joint torque sensors have been commonly used to provide high-fidelity torque measurements in robotics. Although measurement of joint torque/force is often required in engineering research and development, the gluing and wiring of strain gauges used as torque sensors pose difficulties during integration within the restricted space available in small joints. The problem is compounded by the need for a scalable geometric design to measure joint torque. In this communication, we describe a novel design of a strain gauge-based mono-axial torque sensor referred to as the square-cut torque sensor (SCTS), the significant features of which are a high degree of linearity, symmetry, and high scalability in terms of both size and measuring range. Most importantly, the SCTS provides easy access for gluing and wiring of the strain gauges on the sensor surface despite the limited available space. We demonstrated that the SCTS performs better in terms of symmetry (clockwise and counterclockwise rotation) and linearity. These capabilities have been shown through finite element modeling (ANSYS) confirmed by observed data obtained from load testing experiments. The high performance of the SCTS was confirmed by studies involving changes in size, material, and/or wing width and thickness. Finally, we demonstrated that the SCTS can be successfully implemented inside the hip joints of the miniaturized hydraulically actuated quadruped robot MiniHyQ. This communication is based on work presented at the 18th International Conference on Climbing and Walking Robots (CLAWAR). PMID:28820446

  15. The Geohazards Exploitation Platform

    Science.gov (United States)

    Laur, Henri; Casu, Francesco; Bally, Philippe; Caumont, Hervé; Pinto, Salvatore

    2016-04-01

    The Geohazards Exploitation Platform, or Geohazards TEP (GEP), is an ESA-originated R&D activity of the EO ground segment to demonstrate the benefit of new technologies for large-scale processing of EO data. This encompasses on-demand processing for specific user needs, systematic processing to address common information needs of the geohazards community, and integration of newly developed processors for scientists and other expert users. The platform supports the geohazards community's objectives as defined in the context of the International Forum on Satellite EO and Geohazards organised by ESA and GEO in Santorini in 2012. The GEP is a follow-on to the Supersites Exploitation Platform (SSEP), an ESA initiative to support the Geohazards Supersites & Natural Laboratories initiative (GSNL). Today the GEP allows users to exploit more than 70 terabytes of ERS and ENVISAT archive data and the Copernicus Sentinel-1 data available online. The platform has already engaged 22 European early adopters in a validation activity initiated in March 2015. Since September, this validation has reached 29 single-user projects. Each project is concerned with either integrating an application, running on-demand processing, or systematically generating a product collection using an application available in the platform. The users primarily include 15 geoscience centres and universities based in Europe: British Geological Survey (UK), University of Leeds (UK), University College London (UK), ETH University of Zurich (CH), INGV (IT), CNR-IREA and CNR-IRPI (IT), University of L'Aquila (IT), NOA (GR), Univ. Blaise Pascal & CNRS (FR), Ecole Normale Supérieure (FR), ISTERRE / University of Grenoble-Alpes (FR). In addition, there are users from Africa and North America with the University of Rabat (MA) and the University of Miami (US). Furthermore two space agencies and four private companies are involved: the German Space Research Centre DLR (DE), the European Space Agency (ESA), Altamira Information (ES

  16. RemoteLabs Platform

    OpenAIRE

    Nils Crabeel; Betina Campos Neves; Benedita Malheiro

    2012-01-01

    This paper reports on a first step towards the implementation of a framework for remote experimentation with electric machines – the RemoteLabs platform. This project was focused on the development of two main modules: the Web-based user interface and the electric machines interface. The Web application provides the user with a front-end and interacts with the back-end – the user and experiment persistent data. The electric machines interface is implemented as a distributed client-server application...

  17. Analysis of establishing back-end system for mobile devices IOS and Android on Google App Engine platform

    OpenAIRE

    Kambič, Dušan

    2013-01-01

    The following thesis discusses the Google App Engine (GAE) in relation to Google Cloud Endpoints (GCE). GAE is an example of a cloud computing model called Platform as a Service (PaaS). It enables developers to develop and host scalable applications on Google's infrastructure. Companies can rent the right amount of server resources for their current needs, thereby reducing business costs. For developed GAE applications we can automatically generate libraries for Android, iOS and Javasc...

  18. A Framework for Managing Access of Large-Scale Distributed Resources in a Collaborative Platform

    Directory of Open Access Journals (Sweden)

    Su Chen

    2009-01-01

    Full Text Available In an e-Science environment, large-scale distributed resources in autonomous domains are aggregated by unified collaborative platforms to support scientific research across organizational boundaries. In order to enhance the scalability of access management, an integrated approach for decentralizing the task from resource owners to administrators on the platform is needed. We propose an extensible access management framework to meet this requirement by supporting an administrative delegation policy. This feature allows administrators on the platform to make new policies based on the original policies made by resource owners. An access protocol that merges SAML and XACML is also included in the framework. It defines how distributed parties operate with each other to make decentralized authorization decisions.

  19. Evaluation of secure capability-based access control in the M2M local cloud platform

    DEFF Research Database (Denmark)

    Anggorojati, Bayu; Prasad, Neeli R.; Prasad, Ramjee

    2016-01-01

    Managing access to and protecting resources is one of the important aspects of managing security, especially in a distributed computing system such as Machine-to-Machine (M2M). One such platform, known as the M2M local cloud platform and referring to the BETaaS architecture [1], conceptually consists of multiple distributed M2M gateways, creating new challenges in access control. Some existing access control systems lack the scalability and flexibility to manage access from users or entities that belong to different authorization domains, or fail to provide fine-grained and flexible access right delegation. Recently, capability-based access control has been considered as a method to manage access in the Internet of Things (IoT) or M2M domain. In this paper, the implementation and evaluation of a proposed secure capability-based access control in the M2M local cloud platform is presented.

  20. Globus Nexus: A Platform-as-a-Service provider of research identity, profile, and group management

    Energy Technology Data Exchange (ETDEWEB)

    Chard, Kyle; Lidman, Mattias; McCollam, Brendan; Bryan, Josh; Ananthakrishnan, Rachana; Tuecke, Steven; Foster, Ian

    2016-03-01

    Globus Nexus is a professionally hosted Platform-as-a-Service that provides identity, profile and group management functionality for the research community. Many collaborative e-Science applications need to manage large numbers of user identities, profiles, and groups. However, developing and maintaining such capabilities is often challenging given the complexity of modern security protocols and requirements for scalable, robust, and highly available implementations. By outsourcing this functionality to Globus Nexus, developers can leverage best-practice implementations without incurring development and operations overhead. Users benefit from enhanced capabilities such as identity federation, flexible profile management, and user-oriented group management. In this paper we present Globus Nexus, describe its capabilities and architecture, summarize how several e-Science applications leverage these capabilities, and present results that characterize its scalability, reliability, and availability.

  1. A Real-Time de novo DNA Sequencing Assembly Platform Based on an FPGA Implementation.

    Science.gov (United States)

    Hu, Yuanqi; Georgiou, Pantelis

    2016-01-01

    This paper presents an FPGA-based DNA comparison platform which can be run concurrently with the sensing phase of DNA sequencing and shortens the overall time needed for de novo DNA assembly. A hybrid overlap searching algorithm is applied which is scalable and can deal with incremental detection of new bases. To handle the incomplete data set which gradually grows during sequencing time, all-against-all comparisons are broken down into successive window-against-window comparison phases and executed using a novel dynamic suffix comparison algorithm combined with a partitioned dynamic programming method. The complete system has been designed to facilitate parallel processing in hardware, which allows real-time comparison and full scalability as well as a decrease in the number of computations required. A base pair comparison rate of 51.2 G/s is achieved when implemented on an FPGA, with successful DNA comparison on data sets from real genomes.

  2. Cloud Computing Platform for an Online Model Library System

    Directory of Open Access Journals (Sweden)

    Mingang Chen

    2013-01-01

    Full Text Available The rapid development of the digital content industry calls for online model libraries. For the efficiency, user experience, and reliability merits of the model library, this paper designs a Web 3D model library system based on a cloud computing platform. Taking into account complex models, which cause difficulties in real-time 3D interaction, we adopt model simplification and size-adaptive adjustment methods to make interaction with the system more efficient. Meanwhile, a cloud-based architecture is developed to ensure the reliability and scalability of the system. The 3D model library system is intended to be accessible by online users with a good interactive experience. The feasibility of the solution has been tested by experiments.

  3. Large Scale Simulation Platform for NODES Validation Study

    Energy Technology Data Exchange (ETDEWEB)

    Sotorrio, P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Qin, Y. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Min, L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-04-27

    This report summarizes the Large Scale (LS) simulation platform created for the Eaton NODES project. The simulation environment consists of both a wholesale market simulator and a distribution simulator and includes the CAISO wholesale market model and a PG&E footprint of 25-75 feeders to validate the scalability under a scenario of 33% RPS in California with an additional 17% of DERs coming from distribution and customers. The simulator can generate hourly unit commitment, 5-minute economic dispatch, and 4-second AGC regulation signals. The simulator is also capable of simulating more than 10,000 individual controllable devices. Simulated DERs include water heaters, EVs, residential and light commercial HVAC/buildings, and residential-level battery storage. Feeder-level voltage regulators and capacitor banks are also simulated for feeder-level real and reactive power management and Volt/VAR control.

  4. Photonic Integration on the Hybrid Silicon Evanescent Device Platform

    Directory of Open Access Journals (Sweden)

    Hyundai Park

    2008-01-01

    Full Text Available This paper reviews the recent progress of hybrid silicon evanescent devices. The hybrid silicon evanescent device structure consists of III-V epitaxial layers transferred to silicon waveguides through a low-temperature wafer bonding process to achieve optical gain, absorption, and modulation efficiently on a silicon photonics platform. The low-temperature wafer bonding process enables fusion of two different material systems without degradation of material quality and is scalable to wafer-level bonding. Lasers, amplifiers, photodetectors, and modulators have been demonstrated with this hybrid structure, and the integration of these individual components for improved optical functionality is also presented. This approach provides a unique way to build photonic active devices on silicon and should allow application of silicon photonic integrated circuits to optical telecommunication and optical interconnects.

  5. Platformation: Cloud Computing Tools at the Service of Social Change

    Directory of Open Access Journals (Sweden)

    Anil Patel

    2012-07-01

    Full Text Available The following article establishes some context and definitions for what is termed the “sharing imperative” – a movement or tendency towards sharing information online and in real time that has rapidly transformed several industries. As internet-enabled devices proliferate to all corners of the globe, ways of working and accessing information have changed. Users now expect to be able to access the products, services, and information that they want from anywhere, at any time, on any device. This article addresses how the nonprofit sector might respond to those demands by embracing the sharing imperative. It suggests that how well an organization shares has become one of the most pressing governance questions a nonprofit organization must tackle. Finally, the article introduces Platformation, a project whereby tools that enable better inter and intra-organizational sharing are tested for scalability, affordability, interoperability, and security, all with a non-profit lens.

  6. Homomorphic encryption experiments on IBM's cloud quantum computing platform

    Science.gov (United States)

    Huang, He-Liang; Zhao, You-Wei; Li, Tan; Li, Feng-Guang; Du, Yu-Tao; Fu, Xiang-Qun; Zhang, Shuo; Wang, Xiang; Bao, Wan-Su

    2017-02-01

    Quantum computing has undergone rapid development in recent years. Owing to limitations on scalability, personal quantum computers still seem slightly unrealistic in the near future. The first practical quantum computer for ordinary users is likely to be on the cloud. However, the adoption of cloud computing is possible only if security is ensured. Homomorphic encryption is a cryptographic protocol that allows computation to be performed on encrypted data without decrypting them, so it is well suited to cloud computing. Here, we first applied homomorphic encryption on IBM's cloud quantum computer platform. In our experiments, we successfully implemented a quantum algorithm for linear equations while protecting our privacy. This demonstration opens a feasible path to the next stage of development of cloud quantum information technology.

  7. CRISP: a flexible integrated development platform for RFID systems

    Science.gov (United States)

    Jamali, Behnam

    2008-12-01

    In this paper we present an introduction to the Cognitive RFID Integrated System Platform (CRISP), a framework for the development and implementation of RFID communication protocols. The framework enables advanced research in the area of RFID wireless communication protocols and algorithms by interfacing a large class of experimental medium access control (MAC) implementations with custom physical layer (PHY) implementations. As such, CRISP provides a flexible, scalable, configurable and high-performance RFID research tool. The low-level protocol handling routines are written in VHDL and higher-level functions are programmed in C and targeted to an embedded MicroBlaze soft-core processor within the Xilinx Virtex 5 class of FPGAs. Furthermore, an online open-access repository from The University of Adelaide is available to document and share different architectures and designs with other researchers in the field.

  8. Facing Challenges in Real-Life Application of Surface-Enhanced Raman Scattering: Design and Nanofabrication of Surface-Enhanced Raman Scattering Substrates for Rapid Field Test of Food Contaminants.

    Science.gov (United States)

    Shi, Ruyi; Liu, Xiangjiang; Ying, Yibin

    2017-11-16

    Surface-enhanced Raman scattering (SERS) is capable of detecting a single molecule with high specificity and has become a promising technique for rapid chemical analysis of agricultural products and foods. With a deeper understanding of the SERS effect and advances in nanofabrication technology, SERS is now on the edge of moving out of the laboratory and becoming a sophisticated analytical tool for various real-world tasks. This review focuses on the challenges that SERS has met in this progress, such as how to obtain a reliable SERS signal, improve the sensitivity and specificity in a complex sample matrix, develop simple and user-friendly practical sensing approaches, reduce the running cost, etc. This review highlights new thoughts on the design and nanofabrication of SERS-active substrates for solving these challenges and introduces recent advances in SERS applications in this area. We hope that our discussion will encourage more researchers to address these challenges and eventually help to bring SERS technology out of the laboratory.

  9. Engineering soya bean seeds as a scalable platform to produce cyanovirin-N, a non-ARV microbicide against HIV.

    Science.gov (United States)

    O'Keefe, Barry R; Murad, André M; Vianna, Giovanni R; Ramessar, Koreen; Saucedo, Carrie J; Wilson, Jennifer; Buckheit, Karen W; da Cunha, Nicolau B; Araújo, Ana Claudia G; Lacorte, Cristiano C; Madeira, Luisa; McMahon, James B; Rech, Elibio L

    2015-09-01

    There is an urgent need to provide effective anti-HIV microbicides to resource-poor areas worldwide. Some of the most promising microbicide candidates are biotherapeutics targeting viral entry. To provide biotherapeutics to poorer areas, it is vital to reduce the cost. Here, we report the production of biologically active recombinant cyanovirin-N (rCV-N), an antiviral protein, in genetically engineered soya bean seeds. Pure, biologically active rCV-N was isolated with a yield of 350 μg/g of dry seed weight. The observed amino acid sequence of rCV-N matched the expected sequence of native CV-N, as did the mass of rCV-N (11 009 Da). Purified rCV-N from soya is active in anti-HIV assays with an EC50 of 0.82-2.7 nM (compared to 0.45-1.8 nM for E. coli-produced CV-N). Standard industrial processing of soya bean seeds to harvest soya bean oil does not diminish the antiviral activity of recovered rCV-N, allowing the use of industrial soya bean processing to generate both soya bean oil and a recombinant protein for anti-HIV microbicide development. © 2015 The Authors. Plant Biotechnology Journal published by Society for Experimental Biology and The Association of Applied Biologists and John Wiley & Sons Ltd.

  10. CloudTPS: Scalable Transactions for Web Applications in the Cloud

    NARCIS (Netherlands)

    Zhou, W.; Pierre, G.E.O.; Chi, C.-H.

    2010-01-01

    NoSQL Cloud data services provide scalability and high availability properties for web applications but at the same time they sacrifice data consistency. However, many applications cannot afford any data inconsistency. CloudTPS is a scalable transaction manager to allow cloud database services to

  11. Scalable nanostructuring on polymer by a SiC stamp: optical and wetting effects

    DEFF Research Database (Denmark)

    Argyraki, Aikaterini; Lu, Weifang; Petersen, Paul Michael

    2015-01-01

    A method for fabricating scalable antireflective nanostructures on polymer surfaces (polycarbonate) is demonstrated. The transition from small scale fabrication of nanostructures to a scalable replication technique can be quite challenging. In this work, an area per print corresponding to a 2-inch...

  12. Extending JPEG-LS for low-complexity scalable video coding

    DEFF Research Database (Denmark)

    Ukhanova, Anna; Sergeev, Anton; Forchhammer, Søren

    2011-01-01

    JPEG-LS, the well-known international standard for lossless and near-lossless image compression, was originally designed for non-scalable applications. In this paper we propose a scalable modification of JPEG-LS and compare it with the leading image and video coding standards JPEG2000 and H.264/SVC...

  13. Scalable Multifunction Active Phased Array Systems: from concept to implementation; 2006BU1-IS

    NARCIS (Netherlands)

    LaMana, M.; Huizing, A.

    2006-01-01

    The SMRF (Scalable Multifunction Radio Frequency Systems) concept has been launched in the WEAG (Western European Armament Group) context, recently restructured into the EDA (European Defence Agency). A derived concept is introduced here, namely the SMRF-APAS (Scalable Multifunction Radio

  14. A NEaT Design for reliable and scalable network stacks

    NARCIS (Netherlands)

    Hruby, Tomas; Giuffrida, Cristiano; Sambuc, Lionel; Bos, Herbert; Tanenbaum, Andrew S.

    2016-01-01

    Operating systems provide a wide range of services, which are crucial for the increasingly high reliability and scalability demands of modern applications. Providing both reliability and scalability at the same time is hard. Commodity OS architectures simply lack the design abstractions to do so for

  15. Final Report: Center for Programming Models for Scalable Parallel Computing

    Energy Technology Data Exchange (ETDEWEB)

    Mellor-Crummey, John [William Marsh Rice University

    2011-09-13

    As part of the Center for Programming Models for Scalable Parallel Computing, Rice University collaborated with project partners in the design, development and deployment of language, compiler, and runtime support for parallel programming models to support application development for the “leadership-class” computer systems at DOE national laboratories. Work over the course of this project has focused on the design, implementation, and evaluation of a second-generation version of Coarray Fortran. Research and development efforts of the project have focused on the CAF 2.0 language, compiler, runtime system, and supporting infrastructure. This has involved working with the teams that provide infrastructure for CAF that we rely on, implementing new language and runtime features, producing an open source compiler that enabled us to evaluate our ideas, and evaluating our design and implementation through the use of benchmarks. The report details the research, development, findings, and conclusions from this work.

  16. Optimization of Hierarchical Modulation for Use of Scalable Media

    Directory of Open Access Journals (Sweden)

    Heneghan Conor

    2010-01-01

    Full Text Available This paper studies Hierarchical Modulation, a transmission strategy for delivering emerging scalable multimedia over frequency-selective fading channels with improved perceptible quality. An optimization strategy for Hierarchical Modulation and convolutional encoding, which can achieve the target bit error rates with minimum global signal-to-noise ratio in a single-user scenario, is suggested. This strategy allows applications to make a free choice of the relationship between Higher Priority (HP) and Lower Priority (LP) stream delivery. A similar optimization can be used in the multiuser scenario. An image transport task and a transport task of an H.264/MPEG4 AVC video embedding both QVGA and VGA resolutions are simulated as implementation examples of this optimization strategy, and demonstrate savings in SNR and improvement in Peak Signal-to-Noise Ratio (PSNR) for the particular examples shown.

  17. Hierarchical Sets: Analyzing Pangenome Structure through Scalable Set Visualizations

    DEFF Research Database (Denmark)

    Pedersen, Thomas Lin

    2017-01-01

    information to increase in knowledge. As the pangenome data structure is essentially a collection of sets, we explore the potential for scalable set visualization as a tool for pangenome analysis. We present a new hierarchical clustering algorithm based on set arithmetics that optimizes the intersection sizes...... along the branches. The intersection and union sizes along the hierarchy are visualized using a composite dendrogram and icicle plot, which, in the pangenome context, shows the evolution of pangenome and core size along the evolutionary hierarchy. Outlying elements, i.e. elements whose presence pattern do...... of hierarchical sets by applying it to a pangenome based on 113 Escherichia and Shigella genomes and find it provides a powerful addition to pangenome analysis. The described clustering algorithm and visualizations are implemented in the hierarchicalSets R package available from CRAN (https...
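
    As a toy illustration of clustering driven by set arithmetic (our own sketch, not the hierarchicalSets implementation), one can greedily merge the pair of gene-family sets with the largest intersection, so intersections play the role of core size and unions the role of pangenome size:

        from itertools import combinations

        def greedy_set_hierarchy(genomes):
            """Agglomerate genomes by repeatedly merging the two clusters
            whose gene-family sets share the largest intersection."""
            clusters = [(frozenset(genes), name) for name, genes in genomes.items()]
            merges = []
            while len(clusters) > 1:
                i, j = max(combinations(range(len(clusters)), 2),
                           key=lambda p: len(clusters[p[0]][0] & clusters[p[1]][0]))
                (a, na), (b, nb) = clusters[i], clusters[j]
                merges.append((na, nb, len(a & b), len(a | b)))  # core-like, pangenome-like sizes
                clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
                clusters.append((a | b, "(%s,%s)" % (na, nb)))
            return merges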

  18. Scalable Spectrum Sharing Mechanism for Local Area Networks Deployment

    DEFF Research Database (Denmark)

    Da Costa, Gustavo Wagner Oliveira; Cattoni, Andrea Fabio; Kovacs, Istvan Zsolt

    2010-01-01

    The availability on the market of powerful and lightweight mobile devices has led to a fast diffusion of mobile services for end users, and the trend is shifting from voice-based services to multimedia content distribution. The current access networks are, however, able to support only relatively low data rates with limited Quality of Service (QoS). In order to extend the access to high data rate services to wireless users, the International Telecommunication Union (ITU) established new requirements for future wireless communication technologies of up to 1 Gbps in low mobility and up to 100 Mbps... management (RRM) functionalities in a CR framework, able to minimize the inter-OLA interferences. A Game Theory-inspired scalable algorithm is introduced to enable a distributed resource allocation in competitive radio environments. The proof-of-concept simulation results demonstrate the effectiveness

  19. Scalable Domain Decomposition Preconditioners for Heterogeneous Elliptic Problems

    Directory of Open Access Journals (Sweden)

    Pierre Jolivet

    2014-01-01

    Full Text Available Domain decomposition methods are, alongside multigrid methods, one of the dominant paradigms in contemporary large-scale partial differential equation simulation. In this paper, a lightweight implementation of a theoretically and numerically scalable preconditioner is presented in the context of overlapping methods. The performance of this work is assessed by numerical simulations executed on thousands of cores, solving various highly heterogeneous elliptic problems in both 2D and 3D with billions of degrees of freedom. Such problems arise in computational science and engineering, in solid and fluid mechanics. While focusing on overlapping domain decomposition methods might seem too restrictive, it will be shown how this work can be applied to a variety of other methods, such as non-overlapping methods and abstract deflation-based preconditioners. It is also shown how multilevel preconditioners can be used to avoid communication during an iterative process such as a Krylov method.
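
    For context, the classic one-level overlapping (additive Schwarz) preconditioner that such work builds on can be written in standard notation (a textbook sketch, not this paper's specific construction) as

        M_{\mathrm{ASM}}^{-1} \;=\; \sum_{i=1}^{N} R_i^{T} A_i^{-1} R_i,
        \qquad A_i = R_i \, A \, R_i^{T},

    where R_i restricts a global vector to the i-th overlapping subdomain. Scalability to thousands of cores hinges on augmenting this one-level method with a coarse second-level correction, which is where the deflation-based preconditioners mentioned above come in.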

  20. Scalable Fabrication of 2D Semiconducting Crystals for Future Electronics

    Directory of Open Access Journals (Sweden)

    Jiantong Li

    2015-12-01

    Full Text Available Two-dimensional (2D) layered materials are anticipated to be promising for future electronics. However, their electronic applications are severely restricted by the availability of such materials with high quality and at a large scale. In this review, we systematically introduce versatile, scalable synthesis techniques from the literature for high-crystallinity, large-area 2D semiconducting materials, especially transition metal dichalcogenides, and for 2D material-based advanced structures, such as 2D alloys, 2D heterostructures and 2D material devices engineered at the wafer scale. A systematic comparison among different techniques is conducted with respect to device performance. The present status and the perspective for future electronics are discussed.

  1. A Modular, Scalable, Extensible, and Transparent Optical Packet Buffer

    Science.gov (United States)

    Small, Benjamin A.; Shacham, Assaf; Bergman, Keren

    2007-04-01

    We introduce a novel optical packet switching buffer architecture that is composed of multiple building-block modules, allowing for a large degree of scalability. The buffer supports independent and simultaneous read and write processes without packet rejection or misordering and can be considered a fully functional packet buffer. It can easily be programmed to support two prioritization schemes: first-in first-out (FIFO) and last-in first-out (LIFO). Because the system leverages semiconductor optical amplifiers as switching elements, wideband packets can be routed transparently. The operation of the system is discussed with illustrative packet sequences, which are then verified on an actual implementation composed of conventional fiber-optic componentry.
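
    The two supported prioritization schemes are easy to picture with a software analogy (a hypothetical sketch, not a model of the optical hardware): a single double-ended queue acts as a FIFO or LIFO buffer depending on which end is read:

        from collections import deque

        class PacketBuffer:
            """Toy buffer: FIFO or LIFO read-out from one double-ended queue."""
            def __init__(self, mode="FIFO"):
                self.mode, self.q = mode, deque()

            def write(self, packet):
                self.q.append(packet)        # packets always enter at the tail

            def read(self):
                if self.mode == "FIFO":
                    return self.q.popleft()  # oldest packet leaves first
                return self.q.pop()          # newest packet leaves first (LIFO)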

  2. Vortex Filaments in Grids for Scalable, Fine Smoke Simulation.

    Science.gov (United States)

    Meng, Zhang; Weixin, Si; Yinling, Qian; Hanqiu, Sun; Jing, Qin; Heng, Pheng-Ann

    2015-01-01

    Vortex modeling can produce attractive visual effects of dynamic fluids, which are widely applicable to dynamic media, computer games, special effects, and virtual reality systems. However, it is challenging to effectively simulate intensive and finely detailed fluids such as smoke with rapidly increasing vortex filaments and smoke particles. The authors propose a novel vortex-filaments-in-grids scheme in which the uniform grids dynamically bridge the vortex filaments and smoke particles for scalable, fine smoke simulation with macroscopic vortex structures. Using the vortex model, their approach supports the trade-off between simulation speed and scale of details. After computing the whole velocity, external control can be easily exerted on the embedded grid to guide the vortex-based smoke motion. The experimental results demonstrate the efficiency of using the proposed scheme for a visually plausible smoke simulation with macroscopic vortex structures.

  3. Photonic Architecture for Scalable Quantum Information Processing in Diamond

    Directory of Open Access Journals (Sweden)

    Kae Nemoto

    2014-08-01

    Full Text Available Physics and information are intimately connected, and the ultimate information processing devices will be those that harness the principles of quantum mechanics. Many physical systems have been identified as candidates for quantum information processing, but none of them are immune from errors. The challenge remains to find a path from the experiments of today to a reliable and scalable quantum computer. Here, we develop an architecture based on a simple module comprising an optical cavity containing a single negatively charged nitrogen vacancy center in diamond. Modules are connected by photons propagating in a fiber-optical network and collectively used to generate a topological cluster state, a robust substrate for quantum information processing. In principle, all processes in the architecture can be deterministic, but current limitations lead to processes that are probabilistic but heralded. We find that the architecture enables large-scale quantum information processing with existing technology.

  4. Scalable Creation of Long-Lived Multipartite Entanglement

    Science.gov (United States)

    Kaufmann, H.; Ruster, T.; Schmiegelow, C. T.; Luda, M. A.; Kaushal, V.; Schulz, J.; von Lindenfels, D.; Schmidt-Kaler, F.; Poschinger, U. G.

    2017-10-01

    We demonstrate the deterministic generation of multipartite entanglement based on scalable methods. Four qubits are encoded in 40Ca+, stored in a microstructured segmented Paul trap. These qubits are sequentially entangled by laser-driven pairwise gate operations. Between these, the qubit register is dynamically reconfigured via ion shuttling operations, where ion crystals are separated and merged, and ions are moved in and out of a fixed laser interaction zone. A sequence consisting of three pairwise entangling gates yields a four-ion Greenberger-Horne-Zeilinger state |ψ⟩ = (1/√2)(|0000⟩ + |1111⟩), and full quantum state tomography reveals a state fidelity of 94.4(3)%. We analyze the decoherence of this state and employ dynamical decoupling on the spatially distributed constituents to maintain 69(5)% coherence at a storage time of 1.1 s.

  5. A Practical and Scalable Tool to Find Overlaps between Sequences

    Science.gov (United States)

    Haj Rachid, Maan

    2015-01-01

    The evolution of the next generation sequencing technology increases the demand for efficient solutions, in terms of space and time, for several bioinformatics problems. This paper presents a practical and easy-to-implement solution for one of these problems, namely, the all-pairs suffix-prefix problem, using a compact prefix tree. The paper demonstrates an efficient construction of this time-efficient and space-economical tree data structure. The paper presents techniques for parallel implementations of the proposed solution. Experimental evaluation indicates superior results in terms of space and time over existing solutions. Results also show that the proposed technique is highly scalable in a parallel execution environment. PMID:25961045

  6. A Practical and Scalable Tool to Find Overlaps between Sequences

    Directory of Open Access Journals (Sweden)

    Maan Haj Rachid

    2015-01-01

    Full Text Available The evolution of the next generation sequencing technology increases the demand for efficient solutions, in terms of space and time, for several bioinformatics problems. This paper presents a practical and easy-to-implement solution for one of these problems, namely, the all-pairs suffix-prefix problem, using a compact prefix tree. The paper demonstrates an efficient construction of this time-efficient and space-economical tree data structure. The paper presents techniques for parallel implementations of the proposed solution. Experimental evaluation indicates superior results in terms of space and time over existing solutions. Results also show that the proposed technique is highly scalable in a parallel execution environment.
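
    To make the problem concrete, here is a naive quadratic baseline for the all-pairs suffix-prefix problem (illustrative only; the paper's compact prefix tree exists precisely to avoid this cost):

        def longest_suffix_prefix(a, b, min_len=1):
            """Length of the longest suffix of a equal to a prefix of b."""
            for k in range(min(len(a), len(b)), min_len - 1, -1):
                if a[-k:] == b[:k]:
                    return k
            return 0

        def all_pairs_overlaps(reads, min_len=3):
            """Naive all-against-all comparison over O(n^2) read pairs."""
            return {(i, j): longest_suffix_prefix(reads[i], reads[j], min_len)
                    for i in range(len(reads))
                    for j in range(len(reads)) if i != j}

        print(all_pairs_overlaps(["ACGTAC", "TACGGA", "GGATTT"]))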

  7. CloudETL: Scalable Dimensional ETL for Hadoop and Hive

    DEFF Research Database (Denmark)

    Xiufeng, Liu; Thomsen, Christian; Pedersen, Torben Bach

    Extract-Transform-Load (ETL) programs process data from sources into data warehouses (DWs). Due to the rapid growth of data volumes, there is an increasing demand for systems that can scale on demand. Recently, much attention has been given to MapReduce, which is a framework for highly parallel handling of massive data sets in cloud environments. The MapReduce-based Hive has been proposed as a DBMS-like system for DWs and provides good and scalable analytical features. It is, however, still challenging to do proper dimensional ETL processing with Hive; for example, UPDATEs are not supported, which makes handling of slowly changing dimensions (SCDs) very difficult. To remedy this, we here present the cloud-enabled ETL framework CloudETL. CloudETL uses the open source MapReduce implementation Hadoop to parallelize the ETL execution and to process data into Hive. The user defines the ETL process...

  8. A Software and Hardware IPTV Architecture for Scalable DVB Distribution

    Directory of Open Access Journals (Sweden)

    Georg Acher

    2009-01-01

    Full Text Available Many standards and even more proprietary technologies deal with IP-based television (IPTV). But none of them can transparently map popular public broadcast services such as DVB or ATSC to IPTV with acceptable effort. In this paper we explain why we believe that such a mapping using a lightweight framework is an important step towards all-IP multimedia. We then present the NetCeiver architecture: it is based on well-known standards such as IPv6, and it allows zero configuration. The use of multicast streaming makes NetCeiver highly scalable. We also describe a low-cost FPGA implementation of the proposed NetCeiver architecture, which can concurrently stream services from up to six full transponders.

  9. Adaptive Streaming of Scalable Videos over P2PTV

    Directory of Open Access Journals (Sweden)

    Youssef Lahbabi

    2015-01-01

    Full Text Available In this paper, we propose a new Scalable Video Coding (SVC) quality-adaptive peer-to-peer television (P2PTV) system executed at the peers and at the network. The quality adaptation mechanisms are developed as follows: on one hand, the Layer Level Initialization (LLI) is used for adapting the video quality to the static resources at the peers in order to avoid long startup times. On the other hand, the Layer Level Adjustment (LLA) is invoked periodically to adjust the SVC layer to the fluctuation of the network conditions, with the aim of predicting possible stalls before their occurrence. Our results demonstrate that our mechanisms allow quickly adapting the video quality to various system changes while providing the best Quality of Experience (QoE) that matches the current resources of the peer devices and the instantaneous throughput available at the network.
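
    The Layer Level Adjustment idea, selecting the highest SVC layer that the measured throughput can sustain, can be sketched as follows (the layer bitrates and safety margin are invented for illustration, not taken from the paper):

        def adjust_layer(cumulative_kbps, throughput_kbps, margin=0.8):
            """Pick the highest SVC layer whose cumulative bitrate fits
            within a safety margin of the measured throughput."""
            best = 0
            for i, rate in enumerate(cumulative_kbps):
                if rate <= margin * throughput_kbps:
                    best = i      # this layer still fits; keep climbing
                else:
                    break         # higher layers would risk stalls
            return best

        # base layer + two enhancement layers, 1300 kbps measured
        print(adjust_layer([400, 900, 1800], 1300))  # -> 1 (first enhancement layer)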

  10. Scalable Quantum Circuit and Control for a Superconducting Surface Code

    Science.gov (United States)

    Versluis, R.; Poletto, S.; Khammassi, N.; Tarasinski, B.; Haider, N.; Michalak, D. J.; Bruno, A.; Bertels, K.; DiCarlo, L.

    2017-09-01

    We present a scalable scheme for executing the error-correction cycle of a monolithic surface-code fabric composed of fast-flux-tunable transmon qubits with nearest-neighbor coupling. An eight-qubit unit cell forms the basis for repeating both the quantum hardware and coherent control, enabling spatial multiplexing. This control uses three fixed frequencies for all single-qubit gates and a unique frequency-detuning pattern for each qubit in the cell. By pipelining the interaction and readout steps of ancilla-based X- and Z-type stabilizer measurements, we can engineer detuning patterns that avoid all second-order transmon-transmon interactions except those exploited in controlled-phase gates, regardless of fabric size. Our scheme is applicable to defect-based and planar logical qubits, including lattice surgery.

  11. Scalable and Flexible SLA Management Approach for Cloud

    Directory of Open Access Journals (Sweden)

    SHAUKAT MEHMOOD

    2017-01-01

    Full Text Available Cloud computing is a cutting-edge technology in the market nowadays. In a cloud computing environment, the customer pays to use computing resources. Resource allocation is a primary task in a cloud environment. The significance of resource allocation and availability increases manyfold because the income of the cloud depends on how efficiently it provides the rented services to clients. An SLA (Service Level Agreement) is signed between the cloud services provider and the cloud services consumer to maintain the stipulated QoS (Quality of Service). It is noted that SLAs are violated due to several reasons. These may include system malfunctions and changes in workload conditions. Elastic and adaptive approaches are required to prevent SLA violations. We propose a novel application-level monitoring scheme to prevent SLA violations. It is based on elastic and scalable characteristics. It is easy to deploy and use. It focuses on application-level monitoring.

  12. A Scalable Framework and Prototype for CAS e-Science

    Directory of Open Access Journals (Sweden)

    Yuanchun Zhou

    2007-07-01

    Full Text Available Based on the Small-World model of CAS e-Science and the power law of the Internet, this paper presents a scalable CAS e-Science Grid framework based on virtual regions, called the Virtual Region Grid Framework (VRGF). VRGF takes the virtual region and layer as logical management units. In VRGF, the intra-virtual-region mode is pure P2P, while the inter-virtual-region mode is centralized. Therefore, VRGF is a decentralized framework with some P2P properties. Furthermore, VRGF is able to achieve satisfactory performance in resource organizing and locating at a small cost, and is well adapted to the complicated and dynamic features of scientific collaborations. We have implemented a demonstration VRGF-based Grid prototype, SDG.

  13. MSDLSR: Margin Scalable Discriminative Least Squares Regression for Multicategory Classification.

    Science.gov (United States)

    Wang, Lingfeng; Zhang, Xu-Yao; Pan, Chunhong

    2016-12-01

    In this brief, we propose a new margin scalable discriminative least squares regression (MSDLSR) model for multicategory classification. The main motivation behind the MSDLSR is to explicitly control the margin of the DLSR model. We first prove that the DLSR is a relaxation of the traditional L2-support vector machine. Based on this fact, we further provide a theorem on the margin of DLSR. With this theorem, we add an explicit constraint on DLSR to restrict the number of zeros of dragging values, so as to control the margin of DLSR. The new model is called MSDLSR. Theoretically, we analyze the determination of the margin and support vectors of MSDLSR. Extensive experiments illustrate that our method outperforms the current state-of-the-art approaches on various machine learning and real-world data sets.
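
    For reference, DLSR relaxes classification into regression with ε-dragging; in standard notation (a hedged summary from the published DLSR literature, not quoted from this brief) the objective is

        \min_{W,\, b,\, M \ge 0}\;
        \bigl\lVert X W + \mathbf{1} b^{T} - (Y + B \odot M) \bigr\rVert_F^2
        + \lambda \lVert W \rVert_F^2,

    where B_ij = +1 if sample i belongs to class j and -1 otherwise, and M collects the nonnegative dragging values whose number of zeros MSDLSR explicitly restricts in order to control the margin.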

  14. Optimization of Hierarchical Modulation for Use of Scalable Media

    Science.gov (United States)

    Liu, Yongheng; Heneghan, Conor

    2010-12-01

    This paper studies Hierarchical Modulation, a transmission strategy for delivering emerging scalable multimedia over frequency-selective fading channels with improved perceptible quality. An optimization strategy for Hierarchical Modulation and convolutional encoding, which can achieve the target bit error rates with minimum global signal-to-noise ratio in a single-user scenario, is suggested. This strategy allows applications to make a free choice of the relationship between Higher Priority (HP) and Lower Priority (LP) stream delivery. A similar optimization can be used in the multiuser scenario. An image transport task and a transport task of an H.264/MPEG4 AVC video embedding both QVGA and VGA resolutions are simulated as implementation examples of this optimization strategy, and demonstrate savings in SNR and improvement in Peak Signal-to-Noise Ratio (PSNR) for the particular examples shown.

  15. Simplifying Scalable Graph Processing with a Domain-Specific Language

    KAUST Repository

    Hong, Sungpack

    2014-01-01

    Large-scale graph processing, with its massive data sets, requires distributed processing. However, conventional frameworks for distributed graph processing, such as Pregel, use non-traditional programming models that are well-suited for parallelism and scalability but inconvenient for implementing non-trivial graph algorithms. In this paper, we use Green-Marl, a Domain-Specific Language for graph analysis, to intuitively describe graph algorithms and extend its compiler to generate equivalent Pregel implementations. Using the semantic information captured by Green-Marl, the compiler applies a set of transformation rules that convert imperative graph algorithms into Pregel's programming model. Our experiments show that the Pregel programs generated by the Green-Marl compiler perform similarly to manually coded Pregel implementations of the same algorithms. The compiler is even able to generate a Pregel implementation of a complicated graph algorithm for which a manual Pregel implementation is very challenging.

  16. Scalable load-balance measurement for SPMD codes

    Energy Technology Data Exchange (ETDEWEB)

    Gamblin, T; de Supinski, B R; Schulz, M; Fowler, R; Reed, D

    2008-08-05

    Good load balance is crucial on very large parallel systems, but the most sophisticated algorithms introduce dynamic imbalances through adaptation in domain decomposition or use of adaptive solvers. To observe and diagnose imbalance, developers need system-wide, temporally-ordered measurements from full-scale runs. This potentially requires data collection from multiple code regions on all processors over the entire execution. Doing this instrumentation naively can, in combination with the application itself, exceed available I/O bandwidth and storage capacity, and can induce severe behavioral perturbations. We present and evaluate a novel technique for scalable, low-error load balance measurement. This uses a parallel wavelet transform and other parallel encoding methods. We show that our technique collects and reconstructs system-wide measurements with low error. Compression time scales sublinearly with system size and data volume is several orders of magnitude smaller than the raw data. The overhead is low enough for online use in a production environment.
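
    The wavelet-compression idea can be sketched with the PyWavelets package (wavelet choice, decomposition level, and threshold are our illustrative assumptions, not the paper's tuned parameters):

        import numpy as np
        import pywt

        def compress_trace(trace, wavelet="db4", level=3, keep=0.1):
            """Wavelet-transform a per-process load time series and zero all
            but the largest coefficients, giving a compact, low-error summary."""
            coeffs = pywt.wavedec(trace, wavelet, level=level)
            cutoff = np.quantile(np.abs(np.concatenate(coeffs)), 1.0 - keep)
            return [np.where(np.abs(c) >= cutoff, c, 0.0) for c in coeffs]

        def reconstruct(coeffs, wavelet="db4"):
            """Invert the transform to recover an approximate trace."""
            return pywt.waverec(coeffs, wavelet)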

  17. Fabrication of heterogeneous nanomaterial array by programmable heating and chemical supply within microfluidic platform towards multiplexed gas sensing application

    Science.gov (United States)

    Yang, Daejong; Kang, Kyungnam; Kim, Donghwan; Li, Zhiyong; Park, Inkyu

    2015-01-01

    A facile top-down/bottom-up hybrid nanofabrication process based on programmable temperature control and parallel chemical supply within microfluidic platform has been developed for the all liquid-phase synthesis of heterogeneous nanomaterial arrays. The synthesized materials and locations can be controlled by local heating with integrated microheaters and guided liquid chemical flow within microfluidic platform. As proofs-of-concept, we have demonstrated the synthesis of two types of nanomaterial arrays: (i) parallel array of TiO2 nanotubes, CuO nanospikes and ZnO nanowires, and (ii) parallel array of ZnO nanowire/CuO nanospike hybrid nanostructures, CuO nanospikes and ZnO nanowires. The laminar flow with negligible ionic diffusion between different precursor solutions as well as localized heating was verified by numerical calculation and experimental result of nanomaterial array synthesis. The devices made of heterogeneous nanomaterial array were utilized as a multiplexed sensor for toxic gases such as NO2 and CO. This method would be very useful for the facile fabrication of functional nanodevices based on highly integrated arrays of heterogeneous nanomaterials. PMID:25634814

  18. Fabrication of heterogeneous nanomaterial array by programmable heating and chemical supply within microfluidic platform towards multiplexed gas sensing application

    Science.gov (United States)

    Yang, Daejong; Kang, Kyungnam; Kim, Donghwan; Li, Zhiyong; Park, Inkyu

    2015-01-01

    A facile top-down/bottom-up hybrid nanofabrication process based on programmable temperature control and parallel chemical supply within microfluidic platform has been developed for the all liquid-phase synthesis of heterogeneous nanomaterial arrays. The synthesized materials and locations can be controlled by local heating with integrated microheaters and guided liquid chemical flow within microfluidic platform. As proofs-of-concept, we have demonstrated the synthesis of two types of nanomaterial arrays: (i) parallel array of TiO2 nanotubes, CuO nanospikes and ZnO nanowires, and (ii) parallel array of ZnO nanowire/CuO nanospike hybrid nanostructures, CuO nanospikes and ZnO nanowires. The laminar flow with negligible ionic diffusion between different precursor solutions as well as localized heating was verified by numerical calculation and experimental result of nanomaterial array synthesis. The devices made of heterogeneous nanomaterial array were utilized as a multiplexed sensor for toxic gases such as NO2 and CO. This method would be very useful for the facile fabrication of functional nanodevices based on highly integrated arrays of heterogeneous nanomaterials.

  19. Nanocalorimeter platform for in situ specific heat measurements and x-ray diffraction at low temperature

    Energy Technology Data Exchange (ETDEWEB)

    Willa, K. [Materials Science Division, Argonne National Laboratory, 9700 South Cass Avenue, Argonne, Illinois 60439, USA; Diao, Z. [Department of Physics, Stockholm University, SE-106 91 Stockholm, Sweden; Laboratory of Mathematics, Physics and Electrical Engineering, Halmstad University, P.O. Box 823, SE-301 18 Halmstad, Sweden; Campanini, D. [Department of Physics, Stockholm University, SE-106 91 Stockholm, Sweden; Welp, U. [Materials Science Division, Argonne National Laboratory, 9700 South Cass Avenue, Argonne, Illinois 60439, USA; Divan, R. [Center for Nanoscale Materials, Argonne National Laboratory, 9700 South Cass Avenue, Argonne, Illinois 60439, USA; Hudl, M. [Department of Physics, Stockholm University, SE-106 91 Stockholm, Sweden; Islam, Z. [X-ray Science Division, Argonne National Laboratory, 9700 South Cass Avenue, Argonne, Illinois 60439, USA; Kwok, W. -K. [Materials Science Division, Argonne National Laboratory, 9700 South Cass Avenue, Argonne, Illinois 60439, USA; Rydh, A. [Department of Physics, Stockholm University, SE-106 91 Stockholm, Sweden

    2017-12-01

    Recent advances in electronics and nanofabrication have enabled membrane-based nanocalorimetry for measurements of the specific heat of microgram-sized samples. We have integrated a nanocalorimeter platform into a 4.5 T split-pair vertical-field magnet to allow for the simultaneous measurement of the specific heat and x-ray scattering in magnetic fields and at temperatures as low as 4 K. This multi-modal approach empowers researchers to directly correlate scattering experiments with insights from thermodynamic properties including structural, electronic, orbital, and magnetic phase transitions. The use of a nanocalorimeter sample platform enables numerous technical advantages: precise measurement and control of the sample temperature, quantification of beam heating effects, fast and precise positioning of the sample in the x-ray beam, and fast acquisition of x-ray scans over a wide temperature range without the need for time-consuming re-centering and re-alignment. Furthermore, on an YBa2Cu3O7-δ crystal and a copper foil, we demonstrate a novel approach to x-ray absorption spectroscopy by monitoring the change in sample temperature as a function of incident photon energy. Finally, we illustrate the new insights that can be gained from in situ structural and thermodynamic measurements by investigating the superheated state occurring at the first-order magneto-elastic phase transition of Fe2P, a material that is of interest for magnetocaloric applications.

  20. fastBMA: scalable network inference and transitive reduction.

    Science.gov (United States)

    Hung, Ling-Hong; Shi, Kaiyuan; Wu, Migao; Young, William Chad; Raftery, Adrian E; Yeung, Ka Yee

    2017-10-01

    Inferring genetic networks from genome-wide expression data is extremely demanding computationally. We have developed fastBMA, a distributed, parallel, and scalable implementation of Bayesian model averaging (BMA) for this purpose. fastBMA also includes a computationally efficient module for eliminating redundant indirect edges in the network by mapping the transitive reduction to an easily solved shortest-path problem. We evaluated the performance of fastBMA on synthetic data and experimental genome-wide time series yeast and human datasets. When using a single CPU core, fastBMA is up to 100 times faster than the next fastest method, LASSO, with increased accuracy. It is a memory-efficient, parallel, and distributed application that scales to human genome-wide expression data. A 10,000-gene regulation network can be obtained in a matter of hours using a 32-core cloud cluster (2 nodes of 16 cores). fastBMA is a significant improvement over its predecessor ScanBMA. It is more accurate and orders of magnitude faster than other fast network inference methods such as the one based on LASSO. The improved scalability allows it to calculate networks from genome-scale data in a reasonable time frame. The transitive reduction method can improve accuracy in denser networks. fastBMA is available as code (M.I.T. license) from GitHub (https://github.com/lhhunghimself/fastBMA), as part of the updated networkBMA Bioconductor package (https://www.bioconductor.org/packages/release/bioc/html/networkBMA.html) and as ready-to-deploy Docker images (https://hub.docker.com/r/biodepot/fastbma/). © The Authors 2017. Published by Oxford University Press.
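
    The transitive-reduction-as-shortest-path idea generalizes beyond fastBMA; a hedged networkx sketch (assuming a weighted directed graph, and illustrating the principle rather than fastBMA's own implementation) is:

        import networkx as nx

        def prune_indirect_edges(g):
            """Remove each edge (u, v) that an indirect u->v path explains
            at no greater total weight (g is a weighted nx.DiGraph)."""
            for u, v, w in list(g.edges(data="weight")):
                g.remove_edge(u, v)              # test without the direct edge
                try:
                    indirect = nx.shortest_path_length(g, u, v, weight="weight")
                except nx.NetworkXNoPath:
                    indirect = float("inf")
                if indirect > w:
                    g.add_edge(u, v, weight=w)   # keep: no cheaper indirect route
            return g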

  1. Performance and complexity of color gamut scalable coding

    Science.gov (United States)

    He, Yuwen; Ye, Yan; Xiu, Xiaoyu

    2015-09-01

    A wide color gamut such as BT.2020 allows pictures to be rendered with sharper details and more vivid colors. It is considered an essential video parameter for next-generation content creation and has recently drawn significant commercial interest. As the upgrade cycles of the content production workflow and consumer displays take place, current-generation and next-generation video content are expected to co-exist. Thus, maintaining backward compatibility becomes an important consideration for efficient content delivery systems. The scalable extension of HEVC (SHVC) was recently finalized in the second version of the HEVC specifications. SHVC provides a color mapping tool to improve scalable coding efficiency when the base layer and the enhancement layer video signals are in different color gamuts. The SHVC color mapping tool uses a 3D Look-Up Table (3D LUT) based cross-color linear model to efficiently convert the video in the base layer color gamut into the enhancement layer color gamut. Due to complexity concerns, certain limitations, including limiting the maximum 3D LUT size to 8x2x2, were applied to the color mapping process in SHVC. In this paper, we investigate the complexity and performance trade-off of the 3D LUT based color mapping process. Specifically, we explore the performance benefits of enlarging the size of the 3D LUT with various linear models. In order to reduce computational complexity, a simplified linear model is used within each 3D LUT partition. Simulation results are provided to detail the various performance vs. complexity trade-offs achievable in the proposed design.
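
    At the heart of any 3D LUT color mapping is interpolation between lattice points; a self-contained trilinear sketch (LUT shape and normalization are illustrative, not the SHVC bit-exact process with its per-partition cross-color linear models) is:

        import numpy as np

        def apply_3d_lut(lut, ycbcr):
            """Trilinear lookup: lut has shape (Ny, Ncb, Ncr, 3); ycbcr is a
            triplet of components normalized to [0, 1]."""
            out = np.zeros(3)
            cells = []
            for comp, size in zip(ycbcr, lut.shape[:3]):
                pos = comp * (size - 1)
                i0 = min(int(pos), size - 2)     # cell origin, clamped to the grid
                cells.append((i0, pos - i0))     # (index, fractional offset)
            (iy, fy), (ib, fb), (ir, fr) = cells
            for dy in (0, 1):                    # blend the 8 surrounding lattice points
                for db in (0, 1):
                    for dr in (0, 1):
                        w = ((fy if dy else 1 - fy) *
                             (fb if db else 1 - fb) *
                             (fr if dr else 1 - fr))
                        out += w * lut[iy + dy, ib + db, ir + dr]
            return out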

  2. A scalable method for computing quadruplet wave-wave interactions

    Science.gov (United States)

    Van Vledder, Gerbrant

    2017-04-01

    Non-linear four-wave interactions are a key physical process in the evolution of wind-generated ocean waves. The present generation of operational wave models uses the Discrete Interaction Approximation (DIA), but its accuracy is poor. It is now generally acknowledged that the DIA should be replaced with a more accurate method to improve predicted spectral shapes and derived parameters. The search for such a method is challenging, as one should find a balance between accuracy and computational requirements. Such a method is presented here in the form of a scalable and adaptive method that can mimic both the time-consuming exact Snl4 approach and the fast but inaccurate DIA, and everything in between. The method provides an elegant approach to improve the DIA, not by including more arbitrarily shaped wave number configurations, but by a mathematically consistent reduction of an exact method, viz. the WRT method. The adaptiveness lies in adapting the abscissa of the locus integrand in relation to the magnitude of the known terms. The adaptiveness is extended to the highest level of the WRT method to select interacting wavenumber configurations in a hierarchical way in relation to their importance. This adaptiveness results in a speed-up of one to three orders of magnitude, depending on the measure of accuracy. This definition of accuracy should not be expressed in terms of the quality of the transfer integral for academic spectra but rather in terms of wave model performance in a dynamic run. This has consequences for the balance between the required accuracy and the computational workload for evaluating these interactions. The performance of the scalable method on different scales is illustrated with results from academic spectra and simple growth curves to more complicated field cases using a 3G wave model.

  3. BBCA: Improving the scalability of *BEAST using random binning.

    Science.gov (United States)

    Zimmermann, Théo; Mirarab, Siavash; Warnow, Tandy

    2014-01-01

    Species tree estimation can be challenging in the presence of gene tree conflict due to incomplete lineage sorting (ILS), which can occur when the time between speciation events is short relative to the population size. Of the many methods that have been developed to estimate species trees in the presence of ILS, *BEAST, a Bayesian method that co-estimates the species tree and gene trees given sequence alignments on multiple loci, has generally been shown to have the best accuracy. However, *BEAST is extremely computationally intensive so that it cannot be used with large numbers of loci; hence, *BEAST is not suitable for genome-scale analyses. We present BBCA (boosted binned coalescent-based analysis), a method that can be used with *BEAST (and other such co-estimation methods) to improve scalability. BBCA partitions the loci randomly into subsets, uses *BEAST on each subset to co-estimate the gene trees and species tree for the subset, and then combines the newly estimated gene trees together using MP-EST, a popular coalescent-based summary method. We compare time-restricted versions of BBCA and *BEAST on simulated datasets, and show that BBCA is at least as accurate as *BEAST, and achieves better convergence rates for large numbers of loci. Phylogenomic analysis using *BEAST is currently limited to datasets with a small number of loci, and analyses with even just 100 loci can be computationally challenging. BBCA uses a very simple divide-and-conquer approach that makes it possible to use *BEAST on datasets containing hundreds of loci. This study shows that BBCA provides excellent accuracy and is highly scalable.
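    The divide-and-conquer step is simple enough to sketch. Below, run_starbeast and run_mpest are hypothetical stand-ins for the external *BEAST and MP-EST analyses; only the random binning itself executes:

```python
import random

def bbca_partition(loci, bin_size, seed=0):
    """Randomly partition loci into bins of roughly bin_size each."""
    rng = random.Random(seed)
    shuffled = loci[:]
    rng.shuffle(shuffled)
    return [shuffled[i:i + bin_size] for i in range(0, len(shuffled), bin_size)]

loci = [f"locus_{i}" for i in range(100)]
bins = bbca_partition(loci, bin_size=25)
print([len(b) for b in bins])  # -> [25, 25, 25, 25]

# Each bin is then analyzed independently (and in parallel), and the
# per-bin gene trees are combined by a coalescent-based summary method:
#   gene_trees = [run_starbeast(b) for b in bins]   # hypothetical wrapper
#   species_tree = run_mpest(gene_trees)            # hypothetical wrapper
```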

  4. Reimagining the microscope in the 21st century using the scalable adaptive graphics environment

    Science.gov (United States)

    Mateevitsi, Victor; Patel, Tushar; Leigh, Jason; Levy, Bruce

    2015-01-01

    Background: Whole-slide imaging (WSI), while technologically mature, remains in the early adopter phase of the technology adoption lifecycle. One reason is that current methods of visualizing and using WSI closely follow long-existing workflows for glass slides. We set out to “reimagine” the digital microscope in the era of cloud computing by combining WSI with the rich collaborative environment of the Scalable Adaptive Graphics Environment (SAGE). SAGE is a cross-platform, open-source visualization and collaboration tool that enables users to access, display and share a variety of data-intensive information, in a variety of resolutions and formats, from multiple sources, on display walls of arbitrary size. Methods: A prototype of a WSI viewer app in the SAGE environment was created. While not full featured, it enabled the testing of our hypothesis that these technologies could be blended together to change the essential nature of how microscopic images are utilized for patient care, medical education, and research. Results: Using the newly created WSI viewer app, demonstration scenarios were created for patient care and medical education. This included a live demonstration of a pathology consultation at the International Academy of Digital Pathology meeting in Boston in November 2014. Conclusions: SAGE is well suited to display, manipulate and collaborate using WSIs, along with other images and data, for a variety of purposes. It goes beyond how glass slides and current WSI viewers are being used today, changing the nature of digital pathology in the process. A fully developed WSI viewer app within SAGE has the potential to encourage the wider adoption of WSI throughout pathology. PMID:26110092

  5. Reimagining the microscope in the 21(st) century using the scalable adaptive graphics environment.

    Science.gov (United States)

    Mateevitsi, Victor; Patel, Tushar; Leigh, Jason; Levy, Bruce

    2015-01-01

    Whole-slide imaging (WSI), while technologically mature, remains in the early adopter phase of the technology adoption lifecycle. One reason is that current methods of visualizing and using WSI closely follow long-existing workflows for glass slides. We set out to "reimagine" the digital microscope in the era of cloud computing by combining WSI with the rich collaborative environment of the Scalable Adaptive Graphics Environment (SAGE). SAGE is a cross-platform, open-source visualization and collaboration tool that enables users to access, display and share a variety of data-intensive information, in a variety of resolutions and formats, from multiple sources, on display walls of arbitrary size. A prototype of a WSI viewer app in the SAGE environment was created. While not full featured, it enabled the testing of our hypothesis that these technologies could be blended together to change the essential nature of how microscopic images are utilized for patient care, medical education, and research. Using the newly created WSI viewer app, demonstration scenarios were created for patient care and medical education. This included a live demonstration of a pathology consultation at the International Academy of Digital Pathology meeting in Boston in November 2014. SAGE is well suited to display, manipulate and collaborate using WSIs, along with other images and data, for a variety of purposes. It goes beyond how glass slides and current WSI viewers are being used today, changing the nature of digital pathology in the process. A fully developed WSI viewer app within SAGE has the potential to encourage the wider adoption of WSI throughout pathology.

  6. Web Platform Application

    Energy Technology Data Exchange (ETDEWEB)

    Paulsworth, Ashley [Sunvestment Group, Frederick, MD (United States); Kurtz, Jim [Sunvestment Group, Frederick, MD (United States); Brun de Pontet, Stephanie [Sunvestment Group, Frederick, MD (United States)

    2016-06-15

    Sunvestment Energy Group (previously called Sunvestment Group) was established to create a web application that brings together site hosts (those who will obtain the energy from the solar array) with project developers and funders, including affinity investors. Sunvestment Energy Group (SEG) uses a community-based model that engages with investors who have some affinity with the site host organization. In addition to a financial return, these investors receive non-financial value from their investments and are therefore willing to offer lower cost capital. This enables the site host to enjoy more savings from solar through these less expensive Community Power Purchase Agreements (CPPAs). The purpose of this award was to develop an online platform to bring site hosts and investors together virtually.

  7. Energy Tracking Software Platform

    Energy Technology Data Exchange (ETDEWEB)

    Ryan Davis; Nathan Bird; Rebecca Birx; Hal Knowles

    2011-04-04

    Acceleration has created an interactive energy tracking and visualization platform that supports decreasing electric, water, and gas usage. Homeowners have access to tools that allow them to gauge their use and track progress toward a smaller energy footprint. Real estate agents have access to consumption data, allowing them to share comparisons with potential home buyers. Home builders have the opportunity to compare their neighborhood's energy efficiency with competitors'. Home energy raters have a tool for gauging the progress of their clients after efficiency changes. And social groups are able to encourage members to reduce their energy bills and help their environment. EnergyIT.com is the business umbrella for all energy tracking solutions and is designed to provide information about our energy tracking software and promote sales. CompareAndConserve.com (Gainesville-Green.com) helps homeowners conserve energy through education and competition. ToolsForTenants.com helps renters factor energy usage into their housing decisions.

  8. DNA Based Molecular Scale Nanofabrication

    Science.gov (United States)

    2015-12-04

    high temperatures and corrosive chemicals. However, DNA is a soft, chemically labile material that has limited thermal and chemical stability. Due to... interfacial properties, such as biofouling, adhesion, friction and electrochemistry. Part of this result was recently published in four peer... graphite" Nature Mater., 2013, 12, 925-931 2. Zhou, F.; Li, Z.; Shenoy, G. J.; Lei, L.; Liu, H. "Enhanced Room-Temperature Corrosion of Copper in the

  9. Power Quality Indices Estimation Platform

    Directory of Open Access Journals (Sweden)

    Eliana I. Arango-Zuluaga

    2013-11-01

    Full Text Available An interactive platform for estimating power quality indices in single-phase electric power systems is presented. It follows the IEEE 1459-2010 standard recommendations. The platform was developed to support teaching and research activities in electric power quality. It estimates the power quality indices from voltage and current signals using three different algorithms based on the fast Fourier transform (FFT), the wavelet packet transform (WPT) and the least squares method. The results show that the implemented algorithms estimate the power quality indices efficiently and that the platform can be used according to the established objectives.
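    To give a flavor of the FFT-based analysis, the sketch below computes one familiar power quality index, total harmonic distortion (THD), from a synthetic voltage signal. The IEEE 1459-2010 indices estimated by the platform go further, combining voltage and current spectra into power terms; the sampling rate, amplitudes, and harmonic content here are illustrative.

```python
import numpy as np

fs, f0 = 3200.0, 50.0                       # sampling and fundamental frequency
t = np.arange(0, 0.2, 1 / fs)               # ten fundamental cycles
v = 230 * np.sqrt(2) * np.sin(2 * np.pi * f0 * t) \
    + 15 * np.sin(2 * np.pi * 3 * f0 * t)   # fundamental plus a 3rd harmonic

# Single-sided amplitude spectrum; bins align exactly with k * 5 Hz here.
spectrum = np.abs(np.fft.rfft(v)) * 2 / len(v)
freqs = np.fft.rfftfreq(len(v), 1 / fs)

fund = spectrum[np.argmin(np.abs(freqs - f0))]
harmonics = [spectrum[np.argmin(np.abs(freqs - k * f0))] for k in range(2, 11)]
thd = np.sqrt(sum(h ** 2 for h in harmonics)) / fund
print(f"THD = {100 * thd:.2f} %")           # ~4.6 % for this test signal
```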

  10. Flexible experimental FPGA based platform

    DEFF Research Database (Denmark)

    Andersen, Karsten Holm; Nymand, Morten

    2016-01-01

    This paper presents an experimental flexible Field Programmable Gate Array (FPGA) based platform for testing and verifying digitally controlled dc-dc converters. The platform supports different types of control strategies, dc-dc converter topologies and switching frequencies. The controller platform...... interface supporting configuration and reading of setup parameters, controller status and the acquisition memory in a simple way. The FPGA based platform provides an easy way within education or research to use different digital control strategies and different converter topologies controlled by an FPGA...
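    As an example of the kind of digital control strategy such a platform hosts, the sketch below steps a discrete-time PI voltage-mode controller for a dc-dc converter. All gains, limits, and rates are illustrative, and a real FPGA implementation would run this update in fixed-point arithmetic at the switching frequency.

```python
def make_pi_controller(kp, ki, ts, d_min=0.05, d_max=0.95):
    """Discrete PI controller returning a clamped duty-cycle command."""
    state = {"integral": 0.0}
    def step(v_ref, v_out):
        error = v_ref - v_out
        state["integral"] += ki * ts * error
        # Clamp the integrator to the duty-cycle range (anti-windup).
        state["integral"] = min(max(state["integral"], 0.0), d_max)
        duty = kp * error + state["integral"]
        return min(max(duty, d_min), d_max)
    return step

pi = make_pi_controller(kp=0.01, ki=50.0, ts=1 / 100e3)  # 100 kHz loop
print(pi(v_ref=12.0, v_out=11.5))  # duty-cycle command for this sample
```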

  11. Data portability among online platforms

    Directory of Open Access Journals (Sweden)

    Barbara Engels

    2016-06-01

    Full Text Available This paper examines the competition effects of data portability among online platforms, providing policy recommendations for the preservation of innovative, undistorted competitive markets. Based on a platform-data model, the paper illustrates how a platform's users, data and products are related. Platform markets that entail an especially high risk of market power abuse are identified. It is concluded that the right to data portability as in the EU’s General Data Protection Regulation has to be interpreted in a nuanced fashion in order to avoid adverse effects on competition and innovation.

  12. Tier-Scalable Reconnaissance Missions for Autonomous Exploration and Spatio-Temporal Monitoring of Climate Change with Particular Application to Glaciers and their Environs

    Science.gov (United States)

    Fink, W.; Tarbell, M. A.; Furfaro, R.; Kargel, J. S.

    2010-12-01

    Spatio-temporal monitoring of climate change and its impacts is needed globally and thus requires satellite-based observations and analysis. However, the needed ground truth can only be obtained in situ. In situ exploration of extreme and often hazardous environments can pose a significant challenge to human access. We propose the use of a disruptive exploration paradigm, introduced earlier for autonomous robotic space exploration, termed Tier-Scalable Reconnaissance (PSS 2005; SCIENCE 2010). Tier-scalable reconnaissance utilizes orbital, aerial, and surface/subsurface robotic platforms working in concert, enabling event-driven and integrated global to regional to local reconnaissance capabilities. We report on the development of a robotic test bed for Tier-scalable Reconnaissance at the University of Arizona and Caltech (SCIENCE 2010) for distributed and science-driven autonomous exploration, mapping, and spatio-temporal monitoring of climate change in hazardous or inaccessible environments. We focus in particular on glaciers and their environs, especially glacier lakes, which can pose a significant natural hazard to inhabited areas and economies downstream. The test bed currently comprises several robotic surface vehicles: rovers equipped with cameras, and boats equipped with cameras and side-scanning sonar technology for bathymetry and the characterization of subsurface structures in glacier lakes and other water bodies. To achieve a fully operational Tier-scalable Reconnaissance test bed, aerial platforms will be integrated in short order. Automated mapping and spatio-temporal monitoring of glaciers and their environs necessitate increasing degrees of operational autonomy: (1) automatic mapping of an operational area from different vantages (i.e., airborne, surface, subsurface); (2) automatic sensor deployment and sensor data gathering; (3) automatic feature extraction and region-of-interest/anomaly identification within the mapped

  13. GASPRNG: GPU accelerated scalable parallel random number generator library

    Science.gov (United States)

    Gao, Shuang; Peterson, Gregory D.

    2013-04-01

    Graphics processors represent a promising technology for accelerating computational science applications. Many computational science applications require fast and scalable random number generation with good statistical properties, so they use the Scalable Parallel Random Number Generators library (SPRNG). We present the GPU Accelerated SPRNG library (GASPRNG) to accelerate SPRNG in GPU-based high performance computing systems. GASPRNG includes code for a host CPU and CUDA code for execution on NVIDIA graphics processing units (GPUs) along with a programming interface to support various usage models for pseudorandom numbers and computational science applications executing on the CPU, GPU, or both. This paper describes the implementation approach used to produce high performance and also describes how to use the programming interface. The programming interface allows a user to use GASPRNG the same way as SPRNG on traditional serial or parallel computers as well as to develop tightly coupled programs executing primarily on the GPU. We also describe how to install GASPRNG and use it. To help illustrate linking with GASPRNG, various demonstration codes are included for the different usage models. GASPRNG on a single GPU shows up to 280x speedup over SPRNG on a single CPU core and is able to scale for larger systems in the same manner as SPRNG. Because GASPRNG generates identical streams of pseudorandom numbers as SPRNG, users can be confident about the quality of GASPRNG for scalable computational science applications. Catalogue identifier: AEOI_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOI_v1_0.html. Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland. Licensing provisions: UTK license. No. of lines in distributed program, including test data, etc.: 167900. No. of bytes in distributed program, including test data, etc.: 1422058. Distribution format: tar.gz. Programming language: C and CUDA. Computer: Any PC or
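    GASPRNG's own CPU/GPU programming interface is documented with the library and is not reproduced here. As a conceptual stand-in, the sketch below shows the same usage model, one statistically independent stream per parallel worker, using numpy's SeedSequence spawning, which likewise guarantees non-overlapping streams:

```python
import numpy as np

root = np.random.SeedSequence(42)
streams = [np.random.default_rng(s) for s in root.spawn(4)]  # 4 "workers"

# Each worker draws from its own stream; the draws are reproducible and
# independent of scheduling order, which is the property that lets users
# validate parallel results against a serial run.
for rank, rng in enumerate(streams):
    print(rank, rng.random(3))
```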

  14. Computational and experimental platform for understanding and optimizing water flux and salt rejection in nanoporous membranes.

    Energy Technology Data Exchange (ETDEWEB)

    Rempe, Susan B.

    2010-09-01

    Affordable clean water is both a global and a national security issue, as lack of it can cause death, disease, and international tension. Furthermore, efficient water filtration reduces the demand for energy, another national issue. The best current solution to clean water lies in reverse osmosis (RO) membranes that remove salts from water with applied pressure, but widely used polymeric membrane technology is energy intensive and produces water depleted in useful electrolytes. Moreover, incremental improvements, based on engineering solutions rather than new materials, have yielded only modest gains in performance over the last 25 years. We have pursued a creative and innovative new approach to membrane design and development for cheap desalination membranes by approaching the problem at the molecular level of pore design. Our inspiration comes from natural biological channels, which permit faster water transport than current reverse osmosis membranes and selectively pass healthy ions. Aiming for an order-of-magnitude improvement over mature polymer technology carries significant inherent risks. The success of our fundamental research effort lies in our exploiting, extending, and integrating recent advances by our team in theory, modeling, nano-fabrication and platform development. A combined theoretical and experimental platform has been developed to understand the interplay between water flux and ion rejection in precisely-defined nano-channels. Our innovative functionalization of solid state nanoporous membranes with organic protein-mimetic polymers achieves a 3-fold improvement in water flux over commercial RO membranes and has yielded a pending patent and industrial interest. Our success has generated useful contributions to energy storage, nanoscience, and membrane technology research and development important for national health and prosperity.

  15. A hybrid rule and machine learning based generic alerting platform for smart environments.

    Science.gov (United States)

    Rafferty, Joseph; Synnott, Jonathan; Nugent, Chris

    2016-08-01

    Existing smart environment based alert solutions have adopted relatively complex and tailored approaches to supporting individuals. These solutions have involved sensor-based monitoring, activity recognition and assistance provisioning. Traditionally, they have suffered from a number of issues, rooted in scalability and performance, associated with complex activity recognition processes. This paper introduces a generic approach to realizing an alerting platform within a smart environment. The core concept of this approach is presented and placed within the context of related work. A description of the approach is provided, followed by an evaluation. This evaluation shows the approach offers reasonable accuracy; future work will focus on increasing accuracy.
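    The paper does not spell out its rule set or models, so the sketch below is only a hypothetical illustration of how a hybrid rule/machine-learning alerting flow can be wired: explicit rules handle clear-cut sensor events, and a learned scorer covers everything the rules leave inconclusive. The thresholds, features, and stand-in model are all invented for illustration.

```python
import math

def rule_alert(event):
    if event["temperature_c"] > 60:         # e.g. possible fire
        return "ALERT: high temperature"
    if event["inactivity_min"] > 720:       # e.g. no movement for 12 h
        return "ALERT: prolonged inactivity"
    return None                             # rules are inconclusive

def ml_alert(event, model):
    score = model([event["temperature_c"], event["inactivity_min"]])
    return "ALERT: anomalous pattern" if score > 0.5 else None

# Stand-in "model": a fixed logistic scorer rather than a trained classifier.
toy_model = lambda x: 1 / (1 + math.exp(-(0.05 * x[0] + 0.002 * x[1] - 4)))

event = {"temperature_c": 45, "inactivity_min": 600}
print(rule_alert(event) or ml_alert(event, toy_model) or "no alert")
```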

  16. SciCloud: A Scientific Cloud and Management Platform for Smart City Data

    DEFF Research Database (Denmark)

    Liu, Xiufeng; Nielsen, Per Sieverts; Heller, Alfred

    2017-01-01

    The pervasive use of Internet of Things and smart meter technologies in smart cities increases the complexity of managing the data, due to their sizes, diversity, and privacy issues. This requires an innovative solution to process and manage the data effectively. This paper presents an elastic...... private scientific cloud, SciCloud, to tackle these grand challenges. SciCloud provides on-demand computing resource provisions, a scalable data management platform and an in-place data analytics environment to support the scientific research using smart city data....

  18. Scalable production of biliverdin IXα by Escherichia coli.

    Science.gov (United States)

    Chen, Dong; Brown, Jason D; Kawasaki, Yukie; Bommer, Jerry; Takemoto, Jon Y

    2012-11-23

    Biliverdin IXα is produced when heme undergoes reductive ring cleavage at the α-methene bridge catalyzed by heme oxygenase. It is subsequently reduced by biliverdin reductase to bilirubin IXα, which is a potent endogenous antioxidant. Biliverdin IXα, through interaction with biliverdin reductase, also initiates signaling pathways leading to anti-inflammatory responses and suppression of cellular pro-inflammatory events. The use of biliverdin IXα as a cytoprotective therapeutic has been suggested, but its clinical development and use is currently limited by insufficient quantity, uncertain purity, and derivation from mammalian materials. To address these limitations, methods to produce, recover and purify biliverdin IXα from bacterial cultures of Escherichia coli were investigated and developed. Recombinant E. coli strains BL21(HO1) and BL21(mHO1) expressing the cyanobacterial heme oxygenase gene ho1 and a sequence-modified version (mho1) optimized for E. coli expression, respectively, were constructed and shown to produce biliverdin IXα in batch and fed-batch bioreactor cultures. Strain BL21(mHO1) produced roughly twice the amount of biliverdin IXα as did strain BL21(HO1). Lactose either alone or in combination with glycerol supported consistent biliverdin IXα production by strain BL21(mHO1) (up to an average of 23.5 mg L(-1) culture) in fed-batch mode, and production by strain BL21(HO1) in batch mode was scalable to 100 L bioreactor culture volumes. Synthesis of the modified ho1 gene protein product was determined, and the identity of the enzyme reaction product as biliverdin IXα was confirmed by spectroscopic and chromatographic analyses and its ability to serve as a substrate for human biliverdin reductase A. Methods for the scalable production, recovery, and purification of biliverdin IXα by E. coli were developed based on expression of a cyanobacterial ho1 gene. The purity of the produced biliverdin IXα and its ability to serve as substrate for human

  19. Scalable production of biliverdin IXα by Escherichia coli

    Directory of Open Access Journals (Sweden)

    Chen Dong

    2012-11-01

    Full Text Available Abstract Background Biliverdin IXα is produced when heme undergoes reductive ring cleavage at the α-methene bridge catalyzed by heme oxygenase. It is subsequently reduced by biliverdin reductase to bilirubin IXα, which is a potent endogenous antioxidant. Biliverdin IXα, through interaction with biliverdin reductase, also initiates signaling pathways leading to anti-inflammatory responses and suppression of cellular pro-inflammatory events. The use of biliverdin IXα as a cytoprotective therapeutic has been suggested, but its clinical development and use is currently limited by insufficient quantity, uncertain purity, and derivation from mammalian materials. To address these limitations, methods to produce, recover and purify biliverdin IXα from bacterial cultures of Escherichia coli were investigated and developed. Results Recombinant E. coli strains BL21(HO1) and BL21(mHO1) expressing the cyanobacterial heme oxygenase gene ho1 and a sequence-modified version (mho1) optimized for E. coli expression, respectively, were constructed and shown to produce biliverdin IXα in batch and fed-batch bioreactor cultures. Strain BL21(mHO1) produced roughly twice the amount of biliverdin IXα as did strain BL21(HO1). Lactose either alone or in combination with glycerol supported consistent biliverdin IXα production by strain BL21(mHO1) (up to an average of 23.5 mg L-1 culture) in fed-batch mode, and production by strain BL21(HO1) in batch mode was scalable to 100 L bioreactor culture volumes. Synthesis of the modified ho1 gene protein product was determined, and the identity of the enzyme reaction product as biliverdin IXα was confirmed by spectroscopic and chromatographic analyses and its ability to serve as a substrate for human biliverdin reductase A. Conclusions Methods for the scalable production, recovery, and purification of biliverdin IXα by E. coli were developed based on expression of a cyanobacterial ho1 gene. The purity of the produced biliverdin IXα and

  20. A Scalable Gaussian Process Analysis Algorithm for Biomass Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Chandola, Varun [ORNL; Vatsavai, Raju [ORNL

    2011-01-01

    Biomass monitoring is vital for studying the carbon cycle of earth's ecosystem and has several significant implications, especially in the context of understanding climate change and its impacts. Recently, several change detection methods have been proposed to identify land cover changes in temporal profiles (time series) of vegetation collected using remote sensing instruments, but they do not satisfy one or both of the two requirements of the biomass monitoring problem, i.e., operating in online mode and handling periodic time series. In this paper, we adapt Gaussian process regression to detect changes in such time series in an online fashion. While Gaussian processes (GPs) have been widely used as a kernel-based learning method for regression and classification, their applicability to massive spatio-temporal data sets, such as remote sensing data, has been limited owing to the high computational costs involved. We focus on addressing the scalability issues associated with the proposed GP based change detection algorithm. This paper makes several significant contributions. First, we propose a GP based online time series change detection algorithm and demonstrate its effectiveness in detecting different types of changes in Normalized Difference Vegetation Index (NDVI) data obtained from a study area in Iowa, USA. Second, we propose an efficient Toeplitz matrix based solution which significantly improves the computational complexity and memory requirements of the proposed GP based method. Specifically, the proposed solution can analyze a time series of length t in O(t^2) time while maintaining an O(t) memory footprint, compared to the O(t^3) time and O(t^2) memory requirement of standard matrix manipulation based methods. Third, we describe a parallel version of the proposed solution which can be used to simultaneously analyze a large number of time series. We study three different parallel implementations: using threads, MPI, and a
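    The Toeplitz speed-up rests on a simple observation: with a stationary kernel and regularly spaced observations, the covariance matrix has constant diagonals and is fully described by its first column, so the linear solve at the heart of GP regression can use Levinson recursion. A minimal sketch, assuming a squared-exponential kernel and synthetic NDVI-like data (all parameters illustrative):

```python
import numpy as np
from scipy.linalg import solve_toeplitz

# Regularly sampled, noisy periodic series standing in for an NDVI profile.
t = 500
times = np.arange(t, dtype=float)
y = np.sin(2 * np.pi * times / 46) + 0.1 * np.random.default_rng(0).standard_normal(t)

# Stationary squared-exponential kernel: K is symmetric Toeplitz, so its
# first column defines the whole matrix.
length_scale, noise = 10.0, 0.1
first_col = np.exp(-0.5 * (times / length_scale) ** 2)
first_col[0] += noise ** 2                  # observation noise on the diagonal

# Levinson recursion: O(t^2) time and O(t) extra memory for K @ alpha = y,
# versus O(t^3) time and O(t^2) memory for a dense factorization of K.
alpha = solve_toeplitz(first_col, y)

# Predictive mean at the training points; the dense cross-kernel is built
# here only to check the fit and is not needed by the online detector.
K = np.exp(-0.5 * ((times[:, None] - times[None, :]) / length_scale) ** 2)
print(np.abs(K @ alpha - y).mean())         # differs from y at about the noise level
```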