WorldWideScience

Sample records for highly scalable udp-based

  1. Embedded High Performance Scalable Computing Systems

    National Research Council Canada - National Science Library

    Ngo, David

    2003-01-01

    The Embedded High Performance Scalable Computing Systems (EHPSCS) program is a cooperative agreement between Sanders, A Lockheed Martin Company and DARPA that ran for three years, from Apr 1995 - Apr 1998...

  2. Highly scalable Ab initio genomic motif identification

    KAUST Repository

    Marchand, Benoit; Bajic, Vladimir B.; Kaushik, Dinesh

    2011-01-01

    We present results of scaling an ab initio motif family identification system, Dragon Motif Finder (DMF), to 65,536 processor cores of IBM Blue Gene/P. DMF seeks groups of mutually similar polynucleotide patterns within a set of genomic sequences and builds various motif families from them. Such information is of relevance to many problems in life sciences. Prior attempts to scale such ab initio motif-finding algorithms achieved limited success. We solve the scalability issues using a combination of mixed-mode MPI-OpenMP parallel programming, master-slave work assignment, multi-level workload distribution, multi-level MPI collectives, and serial optimizations. While the scalability of our algorithm was excellent (94% parallel efficiency on 65,536 cores relative to 256 cores on a modest-size problem), the final speedup with respect to the original serial code exceeded 250,000 once serial optimizations were included. This enabled us to carry out many large-scale ab initio motif-finding simulations in a few hours, whereas the original serial code would have needed decades of execution time. Copyright 2011 ACM.
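
    The mixed-mode MPI-OpenMP master-slave strategy described above can be illustrated with a minimal mpi4py sketch of the work-assignment layer (illustrative only, not the DMF code; the chunk count and per-chunk search are hypothetical placeholders, and the OpenMP threading inside each rank is omitted):

      # Minimal master-worker work distribution with mpi4py.
      # Run: mpiexec -n 4 python master_worker.py   (assumes n_chunks >= workers)
      from mpi4py import MPI

      TAG_WORK, TAG_DONE = 1, 2

      def process_chunk(chunk_id):
          # Placeholder for the per-chunk motif search a worker would perform.
          return chunk_id * chunk_id

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()
      n_chunks = 100

      if rank == 0:
          next_chunk, results, active = 0, [], size - 1
          for w in range(1, size):              # seed each worker with one chunk
              comm.send(next_chunk, dest=w, tag=TAG_WORK)
              next_chunk += 1
          status = MPI.Status()
          while active > 0:
              res = comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG, status=status)
              results.append(res)
              w = status.Get_source()
              if next_chunk < n_chunks:         # hand the idle worker more work
                  comm.send(next_chunk, dest=w, tag=TAG_WORK)
                  next_chunk += 1
              else:
                  comm.send(None, dest=w, tag=TAG_DONE)
                  active -= 1
          print("collected", len(results), "results")
      else:
          status = MPI.Status()
          while True:
              chunk = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
              if status.Get_tag() == TAG_DONE:
                  break
              comm.send(process_chunk(chunk), dest=0, tag=TAG_WORK)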

  3. Scalability of DL_POLY on High Performance Computing Platform

    Directory of Open Access Journals (Sweden)

    Mabule Samuel Mabakane

    2017-12-01

    Full Text Available This paper presents a case study on the scalability of several versions of the molecular dynamics code DL_POLY, performed on South Africa's Centre for High Performance Computing e1350 IBM Linux cluster, Sun system and Lengau supercomputers. Within this study, different problem sizes were designed and the same chosen systems were employed in order to test the performance of DL_POLY using weak and strong scalability. It was found that the speed-up results for the small systems were better than those for the large systems on both the Ethernet and Infiniband networks. However, simulations of large systems in DL_POLY performed well using the Infiniband network on the Lengau cluster as compared to the e1350 and Sun supercomputers.
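
    Weak and strong scalability, as tested in this study, reduce to simple runtime ratios; a short sketch of both metrics (generic formulas with invented timings, not the paper's measurements):

      def strong_scaling(t_base, t_n, n_base, n):
          """Speedup and parallel efficiency going from n_base to n cores."""
          speedup = t_base / t_n
          efficiency = speedup / (n / n_base)
          return speedup, efficiency

      def weak_scaling_efficiency(t_base, t_n):
          """Weak scaling: problem size grows with cores; ideal runtime is flat."""
          return t_base / t_n

      # Hypothetical timings: 256 -> 1024 cores, runtime 100 s -> 30 s.
      s, e = strong_scaling(100.0, 30.0, 256, 1024)
      print(f"speedup {s:.2f}x, efficiency {e:.0%}")   # 3.33x, 83%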

  4. Highly Scalable Trip Grouping for Large Scale Collective Transportation Systems

    DEFF Research Database (Denmark)

    Gidofalvi, Gyozo; Pedersen, Torben Bach; Risch, Tore

    2008-01-01

    Transportation-related problems, like road congestion, parking, and pollution, are increasing in most cities. In order to reduce traffic, recent work has proposed methods for vehicle sharing, for example for sharing cabs by grouping "closeby" cab requests and thus minimizing transportation cost...... and utilizing cab space. However, the methods published so far do not scale to large data volumes, which is necessary to facilitate large-scale collective transportation systems, e.g., ride-sharing systems for large cities. This paper presents highly scalable trip grouping algorithms, which generalize previous...
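
    A toy version of the core idea, grouping "close-by" requests so they can share a vehicle, can be sketched by bucketing pickups into spatial grid cells (a hypothetical simplification; the paper's algorithms generalize well beyond this):

      from collections import defaultdict

      def group_requests(requests, cell_size=0.01, capacity=4):
          """requests: list of (lat, lon) pickups; returns groups of <= capacity."""
          buckets = defaultdict(list)
          for r in requests:
              key = (int(r[0] / cell_size), int(r[1] / cell_size))
              buckets[key].append(r)             # same cell = close-by
          groups = []
          for cell in buckets.values():
              for i in range(0, len(cell), capacity):
                  groups.append(cell[i:i + capacity])
          return groups

      reqs = [(55.671, 12.561), (55.672, 12.562), (55.699, 12.510), (55.673, 12.563)]
      print(group_requests(reqs))                # three nearby pickups share a group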

  5. A highly scalable peptide-based assay system for proteomics.

    Directory of Open Access Journals (Sweden)

    Igor A Kozlov

    Full Text Available We report a scalable and cost-effective technology for generating and screening high-complexity customizable peptide sets. The peptides are made as peptide-cDNA fusions by in vitro transcription/translation from pools of DNA templates generated by microarray-based synthesis. This approach enables large custom sets of peptides to be designed in silico, manufactured cost-effectively in parallel, and assayed efficiently in a multiplexed fashion. The utility of our peptide-cDNA fusion pools was demonstrated in two activity-based assays designed to discover protease and kinase substrates. In the protease assay, cleaved peptide substrates were separated from uncleaved and identified by digital sequencing of their cognate cDNAs. We screened the 3,011 amino acid HCV proteome for susceptibility to cleavage by the HCV NS3/4A protease and identified all 3 known trans cleavage sites with high specificity. In the kinase assay, peptide substrates phosphorylated by tyrosine kinases were captured and identified by sequencing of their cDNAs. We screened a pool of 3,243 peptides against Abl kinase and showed that phosphorylation events detected were specific and consistent with the known substrate preferences of Abl kinase. Our approach is scalable and adaptable to other protein-based assays.

  6. High-performance, scalable optical network-on-chip architectures

    Science.gov (United States)

    Tan, Xianfang

    The rapid advance of technology enables a large number of processing cores to be integrated into a single chip, which is called a Chip Multiprocessor (CMP) or a Multiprocessor System-on-Chip (MPSoC) design. The on-chip interconnection network, which is the communication infrastructure for these processing cores, plays a central role in a many-core system. With the continuously increasing complexity of many-core systems, traditional metallic wired electronic networks-on-chip (NoC) became a bottleneck because of the unbearable latency in data transmission and extremely high energy consumption on chip. Optical networks-on-chip (ONoC) have been proposed as a promising alternative paradigm to electronic NoC, with the benefits of optical signaling such as extremely high bandwidth, negligible latency, and low power consumption. This dissertation focuses on the design of high-performance and scalable ONoC architectures, and the contributions are highlighted as follows: 1. A micro-ring resonator (MRR)-based Generic Wavelength-routed Optical Router (GWOR) is proposed, along with a method for developing a GWOR of any size. GWOR is a scalable non-blocking ONoC architecture with a simple structure, low cost and high power efficiency compared to existing ONoC designs. 2. To expand the bandwidth and improve the fault tolerance of the GWOR, a redundant GWOR architecture is designed by cascading different types of GWORs into one network. 3. A redundant GWOR built with MRR-based comb switches is proposed. Comb switches can expand the bandwidth while keeping the topology of the GWOR unchanged by replacing the general MRRs with comb switches. 4. A butterfly fat tree (BFT)-based hybrid optoelectronic NoC (HONoC) architecture is developed, in which GWORs are used for global communication and electronic routers are used for local communication. The proposed HONoC uses fewer electronic routers and links than its electronic BFT-based NoC counterpart. It takes the advantages of...

  7. A Testbed for Highly-Scalable Mission Critical Information Systems

    National Research Council Canada - National Science Library

    Birman, Kenneth P

    2005-01-01

    ... systems in a networked environment. Headed by Professor Ken Birman, the project is exploring a novel fusion of classical protocols for reliable multicast communication with a new style of peer-to-peer protocol called scalable "gossip...
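
    The gossip style of dissemination referred to above spreads information in roughly logarithmically many rounds; a minimal round-based push-gossip simulation (a generic illustration, not the project's protocols):

      import random

      def gossip_rounds(n_nodes=1000, seed_node=0):
          informed = {seed_node}
          rounds = 0
          while len(informed) < n_nodes:
              for node in list(informed):        # every informed node pushes once
                  peer = random.randrange(n_nodes)
                  informed.add(peer)
              rounds += 1
          return rounds

      random.seed(1)
      print("all nodes informed after", gossip_rounds(), "rounds")  # O(log n) w.h.p.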

  8. High-performance scalable Information Service for the ATLAS experiment

    CERN Document Server

    Kolos, S; The ATLAS collaboration; Hauser, R

    2012-01-01

    The ATLAS experiment is operated by a highly distributed computing system which constantly produces a large amount of status information used to monitor the experiment's operational conditions as well as to assess the quality of the physics data being taken. For example, the ATLAS High Level Trigger (HLT) algorithms are executed on the online computing farm consisting of about 1500 nodes. Each HLT algorithm produces a few thousand histograms, which have to be integrated over the whole farm and carefully analyzed in order to properly tune the event rejection. In order to handle such non-physics data, the Information Service (IS) facility has been developed in the scope of the ATLAS TDAQ project. The IS provides a high-performance scalable solution for information exchange in a distributed environment. In the course of an ATLAS data taking session the IS handles about a hundred gigabytes of information which is constantly updated with the update interval varying from a second to a few tens of seconds. IS ...

  9. Highly scalable and robust rule learner: performance evaluation and comparison.

    Science.gov (United States)

    Kurgan, Lukasz A; Cios, Krzysztof J; Dick, Scott

    2006-02-01

    Business intelligence and bioinformatics applications increasingly require the mining of datasets consisting of millions of data points, or crafting real-time enterprise-level decision support systems for large corporations and drug companies. In all cases, there needs to be an underlying data mining system, and this mining system must be highly scalable. To this end, we describe a new rule learner called DataSqueezer. The learner belongs to the family of inductive supervised rule extraction algorithms. DataSqueezer is a simple, greedy rule builder that generates a set of production rules from labeled input data. In spite of its relative simplicity, DataSqueezer is a very effective learner. The rules generated by the algorithm are compact, comprehensible, and have accuracy comparable to rules generated by other state-of-the-art rule extraction algorithms. The main advantages of DataSqueezer are very high efficiency and missing data resistance. DataSqueezer exhibits log-linear asymptotic complexity with the number of training examples, and it is faster than other state-of-the-art rule learners. The learner is also robust to large quantities of missing data, as verified by extensive experimental comparison with the other learners. DataSqueezer is thus well suited to modern data mining and business intelligence tasks, which commonly involve huge datasets with a large fraction of missing data.
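
    DataSqueezer itself is specified in the paper; as a generic illustration of the inductive supervised rule-extraction family it belongs to, here is a bare-bones sequential-covering rule inducer (a textbook-style sketch with hypothetical data, not the published algorithm):

      def learn_rules(rows, labels, target):
          """rows: dicts of attr->value; returns rules as lists of (attr, value)."""
          pos = [r for r, y in zip(rows, labels) if y == target]
          neg = [r for r, y in zip(rows, labels) if y != target]
          rules = []
          while pos:
              rule, cov_pos, cov_neg = [], list(pos), list(neg)
              while cov_neg and cov_pos:         # grow the rule until it is pure
                  tests = {(a, v) for r in cov_pos for a, v in r.items()}
                  best = max(tests, key=lambda t:
                             sum(r.get(t[0]) == t[1] for r in cov_pos)
                             - sum(r.get(t[0]) == t[1] for r in cov_neg))
                  new_neg = [r for r in cov_neg if r.get(best[0]) == best[1]]
                  if len(new_neg) == len(cov_neg):
                      break                      # no test separates any further
                  rule.append(best)
                  cov_pos = [r for r in cov_pos if r.get(best[0]) == best[1]]
                  cov_neg = new_neg
              remaining = [r for r in pos if any(r.get(a) != v for a, v in rule)]
              if not rule or len(remaining) == len(pos):
                  break                          # cannot make further progress
              rules.append(rule)
              pos = remaining
          return rules

      rows = [{"outlook": "sunny", "windy": "no"},
              {"outlook": "sunny", "windy": "yes"},
              {"outlook": "rainy", "windy": "no"}]
      print(learn_rules(rows, ["play", "play", "stay"], "play"))
      # -> [[("outlook", "sunny")]]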

  10. Highly scalable parallel processing of extracellular recordings of Multielectrode Arrays.

    Science.gov (United States)

    Gehring, Tiago V; Vasilaki, Eleni; Giugliano, Michele

    2015-01-01

    Technological advances of Multielectrode Arrays (MEAs) used for multisite, parallel electrophysiological recordings lead to an ever increasing amount of raw data being generated. Arrays with hundreds up to a few thousand electrodes are slowly seeing widespread use, and the expectation is that more sophisticated arrays will become available in the near future. In order to process the large data volumes resulting from MEA recordings there is a pressing need for new software tools able to process many data channels in parallel. Here we present a new tool for processing MEA data recordings that makes use of new programming paradigms and recent technology developments to unleash the power of modern highly parallel hardware, such as multi-core CPUs with vector instruction sets or GPGPUs. Our tool builds on and complements existing MEA data analysis packages. It shows high scalability and can be used to speed up some performance-critical pre-processing steps such as data filtering and spike detection, helping to make the analysis of larger data sets tractable.
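
    The per-channel pre-processing that dominates such pipelines is embarrassingly parallel across electrodes; an illustrative numpy/multiprocessing sketch of parallel threshold-based spike detection (an invented -4 sigma rule on synthetic noise, not the package's implementation):

      import numpy as np
      from multiprocessing import Pool

      def detect_spikes(channel):
          """Return sample indices where the signal crosses -4 sigma downward."""
          sigma = np.median(np.abs(channel)) / 0.6745   # robust noise estimate
          below = channel < -4.0 * sigma
          return np.flatnonzero(below[1:] & ~below[:-1]) + 1

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          data = rng.standard_normal((64, 100_000))     # 64 channels of noise
          with Pool() as pool:                          # one task per channel
              spikes = pool.map(detect_spikes, list(data))
          print(sum(len(s) for s in spikes), "spike onsets detected")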

  11. High-Performance Scalable Information Service for the ATLAS Experiment

    International Nuclear Information System (INIS)

    Kolos, S; Boutsioukis, G; Hauser, R

    2012-01-01

    The ATLAS[1] experiment is operated by a highly distributed computing system which constantly produces a large amount of status information used to monitor the experiment's operational conditions as well as to assess the quality of the physics data being taken. For example, the ATLAS High Level Trigger (HLT) algorithms are executed on the online computing farm consisting of about 1500 nodes. Each HLT algorithm produces a few thousand histograms, which have to be integrated over the whole farm and carefully analyzed in order to properly tune the event rejection. In order to handle such non-physics data, the Information Service (IS) facility has been developed in the scope of the ATLAS Trigger and Data Acquisition (TDAQ)[2] project. The IS provides a high-performance scalable solution for information exchange in a distributed environment. In the course of an ATLAS data taking session the IS handles about a hundred gigabytes of information which is constantly updated with the update interval varying from a second to a few tens of seconds. IS provides access to any information item on request as well as distributing notifications to all the information subscribers. In the latter case IS subscribers receive information within a few milliseconds after it was updated. IS can handle arbitrary types of information, including histograms produced by the HLT applications, and provides C++, Java and Python APIs. The Information Service is a unique source of information for the majority of the online monitoring analysis and GUI applications used to control and monitor the ATLAS experiment. The Information Service provides streaming functionality allowing efficient replication of all or part of the managed information. This functionality is used to duplicate a subset of the ATLAS monitoring data to the CERN public network with a latency of a few milliseconds, allowing efficient real-time monitoring of the data taking from outside the protected ATLAS network. Each information ...
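
    The publish/subscribe pattern at the heart of the IS can be illustrated with a minimal in-process sketch (a conceptual toy; the real IS is a distributed TDAQ service with C++, Java and Python APIs):

      from collections import defaultdict

      class InfoService:
          def __init__(self):
              self._items = {}                   # name -> latest value
              self._subs = defaultdict(list)     # name -> subscriber callbacks

          def publish(self, name, value):
              self._items[name] = value
              for callback in self._subs[name]:  # notify subscribers on update
                  callback(name, value)

          def subscribe(self, name, callback):
              self._subs[name].append(callback)

          def get(self, name):
              return self._items[name]           # access on request

      info = InfoService()
      info.subscribe("HLT.rate", lambda n, v: print(f"update: {n} = {v}"))
      info.publish("HLT.rate", 1250.0)           # subscriber fires immediately
      print(info.get("HLT.rate"))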

  12. Scalable devices

    KAUST Repository

    Krüger, Jens J.; Hadwiger, Markus

    2014-01-01

    In computer science in general and in particular the field of high performance computing and supercomputing the term scalable plays an important role. It indicates that a piece of hardware, a concept, an algorithm, or an entire system scales ...

  13. Scalable devices

    KAUST Repository

    Krüger, Jens J.

    2014-01-01

    In computer science in general and in particular the field of high performance computing and supercomputing the term scalable plays an important role. It indicates that a piece of hardware, a concept, an algorithm, or an entire system scales with the size of the problem, i.e., it can not only be used in a very specific setting but is applicable to a wide range of problems, from small scenarios to possibly very large settings. In this spirit, there exist a number of fixed areas of research on scalability. There are works on scalable algorithms and scalable architectures, but what are scalable devices? In the context of this chapter, we are interested in a whole range of display devices, ranging from small scale hardware such as tablet computers, pads, smart-phones etc. up to large tiled display walls. What interests us is not so much the hardware setup but the visualization algorithms behind these display systems, which scale from your average smart phone up to the largest gigapixel display walls.

  14. Scalable Algorithms for Large High-Resolution Terrain Data

    DEFF Research Database (Denmark)

    Mølhave, Thomas; Agarwal, Pankaj K.; Arge, Lars Allan

    2010-01-01

    In this paper we demonstrate that the technology required to perform typical GIS computations on very large high-resolution terrain models has matured enough to be ready for use by practitioners. We also demonstrate the impact that high-resolution data has on common problems. To our knowledge, so...

  15. Scalable domain decomposition solvers for stochastic PDEs in high performance computing

    International Nuclear Information System (INIS)

    Desai, Ajit; Pettit, Chris; Poirel, Dominique; Sarkar, Abhijit

    2017-01-01

    Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems, and though these algorithms exhibit excellent scalability, significant algorithmic and implementation challenges exist in extending them to solve extreme-scale stochastic systems on emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix-vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation with a spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.
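
    To make the domain decomposition idea concrete, here is a minimal overlapping Schwarz iteration for a deterministic 1D Poisson problem (a toy multiplicative sweep; the paper's solvers are parallel, preconditioned, and handle stochastic systems):

      import numpy as np

      # -u'' = 1 on (0,1), u(0) = u(1) = 0, swept over overlapping subdomains.
      n, subs, overlap = 100, 4, 4
      h = 1.0 / (n + 1)
      f = np.full(n, h * h)                      # scaled right-hand side
      u = np.zeros(n)

      size = n // subs
      bounds = [(max(0, s * size - overlap), min(n, (s + 1) * size + overlap))
                for s in range(subs)]

      def local_solve(lo, hi):
          m = hi - lo
          A = 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
          rhs = f[lo:hi].copy()
          if lo > 0:
              rhs[0] += u[lo - 1]                # boundary data from current iterate
          if hi < n:
              rhs[-1] += u[hi]
          return np.linalg.solve(A, rhs)

      for sweep in range(50):                    # Schwarz sweeps over subdomains
          for lo, hi in bounds:
              u[lo:hi] = local_solve(lo, hi)

      x = np.linspace(h, 1 - h, n)
      print("max error:", np.abs(u - 0.5 * x * (1 - x)).max())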

  16. WOMBAT: A Scalable and High-performance Astrophysical Magnetohydrodynamics Code

    Energy Technology Data Exchange (ETDEWEB)

    Mendygral, P. J.; Radcliffe, N.; Kandalla, K. [Cray Inc., St. Paul, MN 55101 (United States); Porter, D. [Minnesota Supercomputing Institute for Advanced Computational Research, Minneapolis, MN USA (United States); O’Neill, B. J.; Nolting, C.; Donnert, J. M. F.; Jones, T. W. [School of Physics and Astronomy, University of Minnesota, Minneapolis, MN 55455 (United States); Edmon, P., E-mail: pjm@cray.com, E-mail: nradclif@cray.com, E-mail: kkandalla@cray.com, E-mail: oneill@astro.umn.edu, E-mail: nolt0040@umn.edu, E-mail: donnert@ira.inaf.it, E-mail: twj@umn.edu, E-mail: dhp@umn.edu, E-mail: pedmon@cfa.harvard.edu [Institute for Theory and Computation, Center for Astrophysics, Harvard University, Cambridge, MA 02138 (United States)

    2017-02-01

    We present a new code for astrophysical magnetohydrodynamics specifically designed and optimized for high performance and scaling on modern and future supercomputers. We describe a novel hybrid OpenMP/MPI programming model that emerged from a collaboration between Cray, Inc. and the University of Minnesota. This design utilizes MPI-RMA optimized for thread scaling, which allows the code to run extremely efficiently at very high thread counts ideal for the latest generation of multi-core and many-core architectures. Such performance characteristics are needed in the era of “exascale” computing. We describe and demonstrate our high-performance design in detail with the intent that it may be used as a model for other, future astrophysical codes intended for applications demanding exceptional performance.
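
    The MPI-RMA (one-sided) communication pattern the design relies on looks as follows in miniature with mpi4py (generic MPI usage for illustration; WOMBAT itself is an OpenMP/MPI code, not Python):

      # Run: mpiexec -n 2 python rma_demo.py
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()
      assert comm.Get_size() >= 2

      local = np.zeros(4, dtype='d')         # memory exposed to remote ranks
      win = MPI.Win.Create(local, comm=comm)

      win.Fence()                            # open an RMA epoch
      if rank == 0:
          payload = np.arange(4, dtype='d')
          win.Put(payload, 1)                # one-sided write into rank 1's window
      win.Fence()                            # close epoch; data now visible

      if rank == 1:
          print("rank 1 received", local)
      win.Free()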

  17. WOMBAT: A Scalable and High-performance Astrophysical Magnetohydrodynamics Code

    International Nuclear Information System (INIS)

    Mendygral, P. J.; Radcliffe, N.; Kandalla, K.; Porter, D.; O’Neill, B. J.; Nolting, C.; Donnert, J. M. F.; Jones, T. W.; Edmon, P.

    2017-01-01

    We present a new code for astrophysical magnetohydrodynamics specifically designed and optimized for high performance and scaling on modern and future supercomputers. We describe a novel hybrid OpenMP/MPI programming model that emerged from a collaboration between Cray, Inc. and the University of Minnesota. This design utilizes MPI-RMA optimized for thread scaling, which allows the code to run extremely efficiently at very high thread counts ideal for the latest generation of multi-core and many-core architectures. Such performance characteristics are needed in the era of “exascale” computing. We describe and demonstrate our high-performance design in detail with the intent that it may be used as a model for other, future astrophysical codes intended for applications demanding exceptional performance.

  18. Scalable Intersample Interpolation Architecture for High-channel-count Beamformers

    DEFF Research Database (Denmark)

    Tomov, Borislav Gueorguiev; Nikolov, Svetoslav I; Jensen, Jørgen Arendt

    2011-01-01

    Modern ultrasound scanners utilize digital beamformers that operate on sampled and quantized echo signals. Timing precision is of the essence for achieving good focusing. The direct way to achieve it is through the use of high sampling rates, but that is not economical, so interpolation between echo samples is used. This paper presents a beamformer architecture that combines a band-pass filter-based interpolation algorithm with the dynamic delay-and-sum focusing of a digital beamformer. The reduction in the number of multiplications relative to a linear per-channel interpolation architecture and a band-pass per-channel interpolation architecture is 58% and 75%, respectively, for a 256-channel beamformer using 4-tap filters. The approach allows building high channel count beamformers while maintaining high image quality due to the use of sophisticated intersample interpolation.
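
    The operation being optimized, delay-and-sum focusing with intersample interpolation, fits in a few lines of numpy (a toy using plain linear interpolation on synthetic data; the paper's contribution is a cheaper band-pass filter-based scheme in hardware):

      import numpy as np

      def das_beamform(rf, delays):
          """rf: (channels, samples) echo data; delays: fractional sample delays."""
          n_ch, n_s = rf.shape
          out = np.zeros(n_s)
          for c in range(n_ch):
              i = int(np.floor(delays[c]))
              frac = delays[c] - i
              idx = np.clip(np.arange(n_s) + i, 0, n_s - 2)
              # Linear interpolation between neighboring echo samples.
              out += (1 - frac) * rf[c, idx] + frac * rf[c, idx + 1]
          return out / n_ch

      rng = np.random.default_rng(0)
      rf = rng.standard_normal((256, 2048))         # 256-channel synthetic data
      delays = rng.uniform(0, 8, size=256)          # per-channel focusing delays
      print(das_beamform(rf, delays).shape)         # (2048,)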

  19. Scalable Nearest Neighbor Algorithms for High Dimensional Data.

    Science.gov (United States)

    Muja, Marius; Lowe, David G

    2014-11-01

    For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called the Fast Library for Approximate Nearest Neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching.
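
    Since FLANN ships inside OpenCV, the randomized k-d forest described above can be exercised directly (the index and search parameters below are arbitrary example values):

      import numpy as np
      import cv2

      rng = np.random.default_rng(0)
      train = rng.random((10_000, 128)).astype(np.float32)   # database vectors
      query = rng.random((5, 128)).astype(np.float32)

      FLANN_INDEX_KDTREE = 1
      index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=8)  # k-d forest
      search_params = dict(checks=64)          # accuracy/speed trade-off

      matcher = cv2.FlannBasedMatcher(index_params, search_params)
      matches = matcher.knnMatch(query, train, k=2)
      for i, (m1, m2) in enumerate(matches):
          print(f"query {i}: nn={m1.trainIdx} dist={m1.distance:.3f}")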

  20. Pursuit of a scalable high performance multi-petabyte database

    CERN Document Server

    Hanushevsky, A

    1999-01-01

    When the BaBar experiment at the Stanford Linear Accelerator Center starts in April 1999, it will generate approximately 200 TB/year of data at a rate of 10 MB/sec for 10 years. A mere six years later, CERN, the European Laboratory for Particle Physics, will start an experiment whose data storage requirements are two orders of magnitude larger. In both experiments, all of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). The quantity and rate at which the data is produced require the use of a high performance hierarchical mass storage system in place of a standard Unix file system. Furthermore, the distributed nature of the experiment, involving scientists from 80 institutions in 10 countries, also requires an extended security infrastructure not commonly found in standard Unix file systems. The combination of challenges that must be overcome in order to effectively deal with a multi-petabyte object oriented database is substantial. Our particular approach...

  1. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy.

    Science.gov (United States)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-19

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that needs to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
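
    The Amdahl's-law analysis mentioned at the end reduces to a one-line formula; a quick sketch (the parallel fractions are made-up examples; 12 cores mirrors the workstation in the abstract):

      def amdahl_speedup(parallel_fraction, n_cores):
          serial = 1.0 - parallel_fraction
          return 1.0 / (serial + parallel_fraction / n_cores)

      for f in (0.90, 0.95, 0.99):
          print(f"f={f}: {amdahl_speedup(f, 12):.1f}x on 12 cores")
      # f=0.9: 5.7x   f=0.95: 7.7x   f=0.99: 10.8x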

  2. Scalable Light Module for Low-Cost, High-Efficiency Light-Emitting Diode Luminaires

    Energy Technology Data Exchange (ETDEWEB)

    Tarsa, Eric [Cree, Inc., Goleta, CA (United States)

    2015-08-31

    During this two-year program Cree developed a scalable, modular optical architecture for low-cost, high-efficacy light emitting diode (LED) luminaires. Stated simply, the goal of this architecture was to efficiently and cost-effectively convey light from LEDs (point sources) to broad luminaire surfaces (area sources). By simultaneously developing warm-white LED components and low-cost, scalable optical elements, a high system optical efficiency resulted. To meet program goals, Cree evaluated novel approaches to improve LED component efficacy at high color quality while not sacrificing LED optical efficiency relative to conventional packages. Meanwhile, efficiently coupling light from LEDs into modular optical elements, followed by optimally distributing and extracting this light, were challenges that were addressed via novel optical design coupled with frequent experimental evaluations. Minimizing luminaire bill of materials and assembly costs were two guiding principles for all design work, in the effort to achieve luminaires with significantly lower normalized cost ($/klm) than existing LED fixtures. Chief project accomplishments included the achievement of >150 lm/W warm-white LEDs having primary optics compatible with low-cost modular optical elements. In addition, a prototype Light Module optical efficiency of over 90% was measured, demonstrating the potential of this scalable architecture for ultra-high-efficacy LED luminaires. Since the project ended, Cree has continued to evaluate optical element fabrication and assembly methods in an effort to rapidly transfer this scalable, cost-effective technology to Cree production development groups. The Light Module concept is likely to make a strong contribution to the development of new cost-effective, high-efficacy luminaires, thereby accelerating widespread adoption of energy-saving SSL in the U.S.

  3. Palacios and Kitten: high performance operating systems for scalable virtualized and native supercomputing.

    Energy Technology Data Exchange (ETDEWEB)

    Widener, Patrick (University of New Mexico); Jaconette, Steven (Northwestern University); Bridges, Patrick G. (University of New Mexico); Xia, Lei (Northwestern University); Dinda, Peter (Northwestern University); Cui, Zheng.; Lange, John (Northwestern University); Hudson, Trammell B.; Levenhagen, Michael J.; Pedretti, Kevin Thomas Tauke; Brightwell, Ronald Brian

    2009-09-01

    Palacios and Kitten are new open source tools that enable applications, whether ported or not, to achieve scalable high performance on large machines. They provide a thin layer over the hardware to support both full-featured virtualized environments and native code bases. Kitten is an OS under development at Sandia that implements a lightweight kernel architecture to provide predictable behavior and increased flexibility on large machines, while also providing Linux binary compatibility. Palacios is a VMM that is under development at Northwestern University and the University of New Mexico. Palacios, which can be embedded into Kitten and other OSes, supports existing, unmodified applications and operating systems by using virtualization that leverages hardware technologies. We describe the design and implementation of both Kitten and Palacios. Our benchmarks show that they provide near native, scalable performance. Palacios and Kitten provide an incremental path to using supercomputer resources that is not performance-compromised.

  4. Scalable Clustering of High-Dimensional Data Technique Using SPCM with Ant Colony Optimization Intelligence

    Directory of Open Access Journals (Sweden)

    Thenmozhi Srinivasan

    2015-01-01

    Full Text Available Clustering techniques for high-dimensional data are emerging in response to the challenges of noisy, poor-quality data. This paper has been developed to cluster data using high-dimensional similarity-based PCM (SPCM) with ant colony optimization intelligence, which is effective in clustering nonspatial data without requiring knowledge of the cluster number from the user. The PCM becomes similarity based by using the mountain method with it. Though this is efficient clustering, it is checked for optimization using the ant colony algorithm with swarm intelligence. Thus the scalable clustering technique is obtained, and the evaluation results are checked with synthetic datasets.

  5. Scalable Active Optical Access Network Using Variable High-Speed PLZT Optical Switch/Splitter

    Science.gov (United States)

    Ashizawa, Kunitaka; Sato, Takehiro; Tokuhashi, Kazumasa; Ishii, Daisuke; Okamoto, Satoru; Yamanaka, Naoaki; Oki, Eiji

    This paper proposes a scalable active optical access network using a high-speed Plumbum Lanthanum Zirconate Titanate (PLZT) optical switch/splitter. The Active Optical Network, called ActiON, using PLZT switching technology has been presented to increase the number of subscribers and the maximum transmission distance, compared to the Passive Optical Network (PON). ActiON supports the multicast slot allocation realized by running the PLZT switch elements in the splitter mode, which forces the switch to behave as an optical splitter. However, the previous ActiON creates a tradeoff between the network scalability and the power loss experienced by the optical signal to each user. It does not use the optical power efficiently because the optical power is simply divided 0.5/0.5 without considering the transmission distance from the OLT to each ONU. The proposed network adopts PLZT switch elements in the variable splitter mode, which controls the split ratio of the optical power considering the transmission distance from the OLT to each ONU, in addition to PLZT switch elements in the existing two modes, the switching mode and the splitter mode. The proposed network introduces flexible multicast slot allocation according to the transmission distance from the OLT to each user and the number of required users using the three modes, while keeping the advantages of ActiON, which are to support scalable and secure access services. Numerical results show that the proposed network dramatically reduces the required number of slots, supports high bandwidth efficiency services, and extends the coverage of the access network, compared to the previous ActiON, and the required computation time for selecting multicast users is less than 30 ms, which is acceptable for on-demand broadcast services.

  6. The Open Connectome Project Data Cluster: Scalable Analysis and Vision for High-Throughput Neuroscience

    Science.gov (United States)

    Burns, Randal; Roncal, William Gray; Kleissas, Dean; Lillaney, Kunal; Manavalan, Priya; Perlman, Eric; Berger, Daniel R.; Bock, Davi D.; Chung, Kwanghun; Grosenick, Logan; Kasthuri, Narayanan; Weiler, Nicholas C.; Deisseroth, Karl; Kazhdan, Michael; Lichtman, Jeff; Reid, R. Clay; Smith, Stephen J.; Szalay, Alexander S.; Vogelstein, Joshua T.; Vogelstein, R. Jacob

    2013-01-01

    We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3-d electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes - neural connectivity maps of the brain - using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems - reads to parallel disk arrays and writes to solid-state storage - to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effectiveness of spatial data organization. PMID:24401992
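
    Partitioning by a spatial index, as described above, is commonly done by linearizing 3-d coordinates onto a space-filling key and sharding by key range; a minimal Morton (Z-order) sketch (a generic technique, not necessarily the project's exact scheme):

      def morton3(x, y, z, bits=10):
          """Interleave the low `bits` bits of x, y, z into one Z-order key."""
          key = 0
          for i in range(bits):
              key |= ((x >> i) & 1) << (3 * i)
              key |= ((y >> i) & 1) << (3 * i + 1)
              key |= ((z >> i) & 1) << (3 * i + 2)
          return key

      def node_for(x, y, z, n_nodes, bits=10):
          """Map a voxel coordinate to a cluster node by Z-order key range."""
          return morton3(x, y, z, bits) * n_nodes >> (3 * bits)

      print(node_for(512, 300, 7, n_nodes=16))   # nearby voxels land together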

  7. Scalable creation of gold nanostructures on high performance engineering polymeric substrate

    Science.gov (United States)

    Jia, Kun; Wang, Pan; Wei, Shiliang; Huang, Yumin; Liu, Xiaobo

    2017-12-01

    The article reveals a facile protocol for scalable production of gold nanostructures on a high performance engineering thermoplastic substrate made of polyarylene ether nitrile (PEN) for the first time. First, gold thin films with thicknesses of 2 nm, 4 nm and 6 nm were evaporated onto a spin-coated PEN substrate on a glass slide in vacuum. Next, the as-evaporated samples were thermally annealed around the glass transition temperature of the PEN substrate, creating gold nanostructures with island-like morphology. Moreover, it was found that the initial gold evaporation thickness and the annealing atmosphere played an important role in determining the morphology and plasmonic properties of the resulting Au NPs. Interestingly, we discovered that isotropic Au NPs can be easily fabricated on a freestanding PEN substrate prepared by a cost-effective polymer solution casting method. More specifically, monodispersed Au nanospheres with an average size of ∼60 nm were obtained after annealing a 4 nm gold film on a cast PEN substrate at 220 °C for 2 h in oxygen. Therefore, the scalable production of Au NPs with controlled morphology on PEN substrates opens the way for the development of robust flexible nanosensors and optical devices using high performance engineering polyarylene ethers.

  8. Space-Filling Supercapacitor Carpets: Highly scalable fractal architecture for energy storage

    Science.gov (United States)

    Tiliakos, Athanasios; Trefilov, Alexandra M. I.; Tanasǎ, Eugenia; Balan, Adriana; Stamatin, Ioan

    2018-04-01

    Revamping ground-breaking ideas from fractal geometry, we propose an alternative micro-supercapacitor configuration realized by laser-induced graphene (LIG) foams produced via laser pyrolysis of inexpensive commercial polymers. The Space-Filling Supercapacitor Carpet (SFSC) architecture introduces the concept of nested electrodes based on the pre-fractal Peano space-filling curve, arranged in a symmetrical equilateral setup that incorporates multiple parallel capacitor cells sharing common electrodes for maximum efficiency and optimal length-to-area distribution. We elucidate the theoretical foundations of the SFSC architecture, and we introduce innovations (high-resolution vector-mode printing) in the LIG method that allow for the realization of flexible and scalable devices based on low iterations of the Peano algorithm. SFSCs exhibit distributed capacitance properties, leading to capacitance, energy, and power ratings proportional to the number of nested electrodes (up to 4.3 mF, 0.4 μWh, and 0.2 mW for the largest tested model of low iteration using aqueous electrolytes), with competitively high energy and power densities. This can pave the road for full scalability in energy storage, reaching beyond the scale of micro-supercapacitors for incorporation into larger and more demanding applications.
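
    Pre-fractal space-filling curves of the kind used for the nested electrodes are generated by iterated rewriting; the sketch below expands an L-system (shown with the compact Hilbert-curve rules for illustration, not the Peano rules the SFSC actually uses; the iteration principle is the same):

      def expand(axiom, rules, iterations):
          """Expand an L-system string the given number of iterations."""
          s = axiom
          for _ in range(iterations):
              s = "".join(rules.get(ch, ch) for ch in s)
          return s

      HILBERT = {"A": "-BF+AFA+FB-", "B": "+AF-BFB-FA+"}  # F=draw, +/-=turn
      path = expand("A", HILBERT, 3)
      print(f"iteration 3: {path.count('F')} segments")   # grows as 4**n - 1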

  9. The Open Connectome Project Data Cluster: Scalable Analysis and Vision for High-Throughput Neuroscience.

    Science.gov (United States)

    Burns, Randal; Roncal, William Gray; Kleissas, Dean; Lillaney, Kunal; Manavalan, Priya; Perlman, Eric; Berger, Daniel R; Bock, Davi D; Chung, Kwanghun; Grosenick, Logan; Kasthuri, Narayanan; Weiler, Nicholas C; Deisseroth, Karl; Kazhdan, Michael; Lichtman, Jeff; Reid, R Clay; Smith, Stephen J; Szalay, Alexander S; Vogelstein, Joshua T; Vogelstein, R Jacob

    2013-01-01

    We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3-d electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes - neural connectivity maps of the brain - using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems - reads to parallel disk arrays and writes to solid-state storage - to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effectiveness of spatial data organization.

  10. Construction of a smart medication dispenser with high degree of scalability and remote manageability.

    Science.gov (United States)

    Pak, JuGeon; Park, KeeHyun

    2012-01-01

    We propose a smart medication dispenser having a high degree of scalability and remote manageability. We construct the dispenser to have extensible hardware architecture for achieving scalability, and we install an agent program in it for achieving remote manageability. The dispenser operates as follows: when the real-time clock reaches the predetermined medication time and the user presses the dispense button at that time, the predetermined medication is dispensed from the medication dispensing tray (MDT). In the proposed dispenser, the medication for each patient is stored in an MDT. One smart medication dispenser contains mainly one MDT; however, the dispenser can be extended to include more MDTs in order to support multiple users using one dispenser. For remote management, the proposed dispenser transmits the medication status and the system configurations to the monitoring server. In the case of a specific event such as a shortage of medication, memory overload, software error, or non-adherence, the event is transmitted immediately. All these operations are performed automatically without the intervention of patients, through the agent program installed in the dispenser. Results of implementation and verification show that the proposed dispenser operates normally and performs the management operations from the medication monitoring server suitably.

  11. Construction of a Smart Medication Dispenser with High Degree of Scalability and Remote Manageability

    Directory of Open Access Journals (Sweden)

    JuGeon Pak

    2012-01-01

    Full Text Available We propose a smart medication dispenser having a high degree of scalability and remote manageability. We construct the dispenser to have extensible hardware architecture for achieving scalability, and we install an agent program in it for achieving remote manageability. The dispenser operates as follows: when the real-time clock reaches the predetermined medication time and the user presses the dispense button at that time, the predetermined medication is dispensed from the medication dispensing tray (MDT). In the proposed dispenser, the medication for each patient is stored in an MDT. One smart medication dispenser contains mainly one MDT; however, the dispenser can be extended to include more MDTs in order to support multiple users using one dispenser. For remote management, the proposed dispenser transmits the medication status and the system configurations to the monitoring server. In the case of a specific event such as a shortage of medication, memory overload, software error, or non-adherence, the event is transmitted immediately. All these operations are performed automatically without the intervention of patients, through the agent program installed in the dispenser. Results of implementation and verification show that the proposed dispenser operates normally and performs the management operations from the medication monitoring server suitably.
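
    The dispense-on-schedule behavior described above maps onto a small polling loop; a hypothetical sketch (times, thresholds and function names are invented; the real device couples hardware MDTs with an agent reporting to the monitoring server):

      import time
      from datetime import datetime

      SCHEDULE = {"08:00", "20:00"}          # predetermined medication times
      doses_left, last_dispense = 10, None

      def button_pressed():                  # stands in for the hardware button
          return True

      while True:
          now = datetime.now().strftime("%H:%M")
          if now in SCHEDULE and now != last_dispense and button_pressed() \
                  and doses_left > 0:
              doses_left -= 1
              last_dispense = now
              print("dispensing from MDT; doses left:", doses_left)
              if doses_left < 3:             # event pushed to monitoring server
                  print("reporting: medication shortage")
          time.sleep(30)                     # poll the real-time clock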

  12. Frontier: High Performance Database Access Using Standard Web Components in a Scalable Multi-Tier Architecture

    International Nuclear Information System (INIS)

    Kosyakov, S.; Kowalkowski, J.; Litvintsev, D.; Lueking, L.; Paterno, M.; White, S.P.; Autio, Lauri; Blumenfeld, B.; Maksimovic, P.; Mathis, M.

    2004-01-01

    A high performance system has been assembled using standard web components to deliver database information to a large number of broadly distributed clients. The CDF Experiment at Fermilab is establishing processing centers around the world, imposing a high demand on their database repository. For delivering read-only data, such as calibrations, trigger information, and run conditions data, we have abstracted the interface that clients use to retrieve data objects. A middle tier is deployed that translates client requests into database-specific queries and returns the data to the client as XML datagrams. The database connection management, request translation, and data encoding are accomplished in servlets running under Tomcat. Squid proxy caching layers are deployed near the Tomcat servers, as well as close to the clients, to significantly reduce the load on the database and provide a scalable deployment model. Details of the system's construction and use are presented, including its architecture, design, interfaces, administration, performance measurements, and deployment plan.
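
    The scaling comes from read-through caches layered between clients and the database; a toy model of that hierarchy (class names and the two-tier layout are illustrative stand-ins for the Squid proxies and Tomcat servlets):

      class Tier:
          def __init__(self, name, backend):
              self.name, self.backend, self.store = name, backend, {}

          def get(self, key):
              if key in self.store:
                  print(f"{self.name}: HIT  {key}")
                  return self.store[key]
              print(f"{self.name}: MISS {key}")
              value = self.backend.get(key)      # fall through to the next tier
              self.store[key] = value            # cache for subsequent requests
              return value

      class Database:
          def get(self, key):
              print(f"database query for {key}")
              return f"<xml>{key}</xml>"         # served as an XML datagram

      server_proxy = Tier("server-side cache", Database())
      client_proxy = Tier("client-side cache", server_proxy)

      client_proxy.get("calibration/run42")      # miss, miss, database query
      client_proxy.get("calibration/run42")      # served from client-side cache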

  13. Scalable High Performance Message Passing over InfiniBand for Open MPI

    Energy Technology Data Exchange (ETDEWEB)

    Friedley, A; Hoefler, T; Leininger, M L; Lumsdaine, A

    2007-10-24

    InfiniBand (IB) is a popular network technology for modern high-performance computing systems. MPI implementations traditionally support IB using a reliable, connection-oriented (RC) transport. However, per-process resource usage that grows linearly with the number of processes makes this approach prohibitive for large-scale systems. IB provides an alternative in the form of a connectionless unreliable datagram transport (UD), which allows for near-constant resource usage and initialization overhead as the process count increases. This paper describes a UD-based implementation for IB in Open MPI as a scalable alternative to existing RC-based schemes. We use the software reliability capabilities of Open MPI to provide the guaranteed delivery semantics required by MPI. Results show that UD not only requires fewer resources at scale, but also allows for shorter MPI startup times. A connectionless model also improves performance for applications that tend to send small messages to many different processes.
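
    The division of labor here, an unreliable datagram transport underneath with delivery guaranteed by a software layer above, is easy to see with plain UDP sockets; a miniature ack/retransmit sketch (generic sockets code, not Open MPI's actual reliability protocol):

      import socket

      def reliable_send(sock, data, addr, timeout=0.2, retries=5):
          """Send one datagram and retransmit until an ACK arrives."""
          sock.settimeout(timeout)
          for attempt in range(retries):
              sock.sendto(data, addr)            # fire the datagram
              try:
                  ack, _ = sock.recvfrom(16)
                  if ack == b"ACK":
                      return True                # delivery confirmed
              except socket.timeout:
                  continue                       # lost or delayed: retransmit
          return False

      def serve_once(port=9999):
          """Receiver side (run in another process): acknowledge one datagram."""
          s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          s.bind(("127.0.0.1", port))
          data, addr = s.recvfrom(65536)
          s.sendto(b"ACK", addr)
          return data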

  14. A scalable synthesis of highly stable and water dispersible Ag44(SR)30 nanoclusters

    KAUST Repository

    AbdulHalim, Lina G.; Ashraf, Sumaira; Katsiev, Khabiboulakh; Kirmani, Ahmad R.; Kothalawala, Nuwan; Anjum, Dalaver H.; Abbas, Sikandar Zameer; Amassian, Aram; Stellacci, Francesco; Dass, Amala; Hussain, Irshad; Bakr, Osman

    2013-01-01

    We report the synthesis of atomically monodisperse thiol-protected silver nanoclusters [Ag44(SR)30]m (SR = 5-mercapto-2-nitrobenzoic acid), in which the product nanocluster is highly stable, in contrast to previous preparation methods. The method is one-pot, scalable, and produces nanoclusters that are stable in aqueous solution for at least 9 months at room temperature under ambient conditions, with very little degradation of their unique UV-Vis optical absorption spectrum. The composition, size, and monodispersity were determined by electrospray ionization mass spectrometry and analytical ultracentrifugation. The produced nanoclusters are likely to be in a superatom charge-state of m = 4-, due to the fact that their optical absorption spectrum shares most of the unique features of the intense and broadly absorbing nanoparticles identified as [Ag44(SR)30]4- by Harkness et al. (Nanoscale, 2012, 4, 4269). A protocol to transfer the nanoclusters to organic solvents is also described. Using the disperse nanoclusters in organic media, we fabricated solid-state films of [Ag44(SR)30]m that retained all the distinct features of the optical absorption spectrum of the nanoclusters in solution. The films were studied by X-ray diffraction and photoelectron spectroscopy in order to investigate their crystallinity, atomic composition and valence band structure. The stability, scalability, and the film fabrication method demonstrated in this work pave the way towards the crystallization of [Ag44(SR)30]m and its full structural determination by single crystal X-ray diffraction. Moreover, due to their unique and attractive optical properties with multiple optical transitions, we anticipate these clusters to find practical applications in light-harvesting, such as photovoltaics and photocatalysis, which have been hindered so far by the instability of previous generations of the cluster. © 2013 The Royal Society of Chemistry.

  15. Scalable Sub-micron Patterning of Organic Materials Toward High Density Soft Electronics.

    Science.gov (United States)

    Kim, Jaekyun; Kim, Myung-Gil; Kim, Jaehyun; Jo, Sangho; Kang, Jingu; Jo, Jeong-Wan; Lee, Woobin; Hwang, Chahwan; Moon, Juhyuk; Yang, Lin; Kim, Yun-Hi; Noh, Yong-Young; Jaung, Jae Yun; Kim, Yong-Hoon; Park, Sung Kyu

    2015-09-28

    The success of silicon-based high density integrated circuits ignited the explosive expansion of microelectronics. Although inorganic semiconductors have shown superior carrier mobilities for conventional high speed switching devices, the emergence of unconventional applications, such as flexible electronics, highly sensitive photosensors, large area sensor arrays, and tailored optoelectronics, brought intensive research on next generation electronic materials. Rationally designed multifunctional soft electronic materials, organic and carbon-based semiconductors, are demonstrated with low-cost solution processing, exceptional mechanical stability, and on-demand optoelectronic properties. Unfortunately, the industrial implementation of soft electronic materials has been hindered by the lack of scalable fine-patterning methods. In this report, we demonstrate a facile general route for high throughput sub-micron patterning of soft materials, using spatially selective deep-ultraviolet irradiation. For organic and carbon-based materials, highly energetic photons (e.g. deep-ultraviolet rays) enable direct photo-conversion from the conducting/semiconducting to the insulating state through molecular dissociation and disordering, with spatial resolution down to the sub-μm scale. The successful demonstration of organic semiconductor circuitry promises to proliferate industrial adoption of soft materials for next generation electronics.

  16. Development of modular scalable pulsed power systems for high power magnetized plasma experiments

    Science.gov (United States)

    Bean, I. A.; Weber, T. E.; Adams, C. S.; Henderson, B. R.; Klim, A. J.

    2017-10-01

    New pulsed power switches and trigger drivers are being developed in order to explore higher energy regimes in the Magnetic Shock Experiment (MSX) at Los Alamos National Laboratory. To achieve the required plasma velocities, high-power (approx. 100 kV, 100s of kA), high charge transfer (approx. 1 C), low-jitter (few ns) gas switches are needed. A study has been conducted on the effects of various electrode geometries and materials, dielectric media, and triggering strategies; resulting in the design of a low-inductance annular field-distortion switch, optimized for use with dry air at 90 psig, and triggered by a low-jitter, rapid rise-time solid-state Linear Transformer Driver. The switch geometry and electrical characteristics are designed to be compatible with Syllac style capacitors, and are intended to be deployed in modular configurations. The scalable nature of this approach will enable the rapid design and implementation of a wide variety of high-power magnetized plasma experiments. This work is supported by the U.S. Department of Energy, National Nuclear Security Administration. Approved for unlimited release, LA-UR-17-2578.

  17. Scalable 2D Mesoporous Silicon Nanosheets for High-Performance Lithium-Ion Battery Anode.

    Science.gov (United States)

    Chen, Song; Chen, Zhuo; Xu, Xingyan; Cao, Chuanbao; Xia, Min; Luo, Yunjun

    2018-03-01

    Constructing unique mesoporous 2D Si nanostructures to shorten the lithium-ion diffusion pathway, facilitate interfacial charge transfer, and enlarge the electrode-electrolyte interface offers exciting opportunities for future high-performance lithium-ion batteries. However, simultaneous realization of 2D and mesoporous structures for Si material is quite difficult due to its non-van der Waals structure. Here, the coexistence of both mesoporous and 2D ultrathin nanosheets in the Si anodes and a considerably high surface area (381.6 m2 g-1) are successfully achieved by a scalable and cost-efficient method. After being encapsulated with the homogeneous carbon layer, the Si/C nanocomposite anodes achieve outstanding reversible capacity, high cycle stability, and excellent rate capability. In particular, the reversible capacity reaches 1072.2 mA h g-1 at 4 A g-1 even after 500 cycles. The obvious enhancements can be attributed to the synergistic effect between the unique 2D mesoporous nanostructure and the carbon encapsulation. Furthermore, full-cell evaluations indicate that the unique Si/C nanostructures have great potential in next-generation lithium-ion batteries. These findings not only greatly improve the electrochemical performance of the Si anode, but also shed some light on designing unique nanomaterials for various energy devices. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Prodigious Effects of Concentration Intensification on Nanoparticle Synthesis: A High-Quality, Scalable Approach

    KAUST Repository

    Williamson, Curtis B.

    2015-12-23

    © 2015 American Chemical Society. Realizing the promise of nanoparticle-based technologies demands more efficient, robust synthesis methods (i.e., process intensification) that consistently produce large quantities of high-quality nanoparticles (NPs). We explored NP synthesis via the heat-up method in a regime of previously unexplored high concentrations near the solubility limit of the precursors. We discovered that in this highly concentrated and viscous regime the NP synthesis parameters are less sensitive to experimental variability and thereby provide a robust, scalable, and size-focusing NP synthesis. Specifically, we synthesize high-quality metal sulfide NPs (<7% relative standard deviation for Cu2-xS and CdS), and demonstrate a 10-1000-fold increase in Cu2-xS NP production (>200 g) relative to the current field of large-scale (0.1-5 g yields) and laboratory-scale (<0.1 g) efforts. Compared to conventional synthesis methods (hot injection with dilute precursor concentration) characterized by rapid growth and low yield, our highly concentrated NP system supplies remarkably controlled growth rates and a 10-fold increase in NP volumetric production capacity (86 g/L). The controlled growth, high yield, and robust nature of highly concentrated solutions can facilitate large-scale nanomanufacturing of NPs by relaxing the synthesis requirements to achieve monodisperse products. Mechanistically, our investigation of the thermal and rheological properties and growth rates reveals that this high concentration regime has reduced mass diffusion (a 5-fold increase in solution viscosity), is stable to thermal perturbations (64% increase in heat capacity), and is resistant to Ostwald ripening.

  19. StagBL : A Scalable, Portable, High-Performance Discretization and Solver Layer for Geodynamic Simulation

    Science.gov (United States)

    Sanan, P.; Tackley, P. J.; Gerya, T.; Kaus, B. J. P.; May, D.

    2017-12-01

    StagBL is an open-source parallel solver and discretization library for geodynamic simulation, encapsulating and optimizing operations essential to staggered-grid finite volume Stokes flow solvers. It provides a parallel staggered-grid abstraction with a high-level interface in C and Fortran. On top of this abstraction, tools are available to define boundary conditions and interact with particle systems. Tools and examples to efficiently solve Stokes systems defined on the grid are provided in small (direct solver), medium (simple preconditioners), and large (block factorization and multigrid) model regimes. By working directly with leading application codes (StagYY, I3ELVIS, and LaMEM) and providing an API and examples to integrate with others, StagBL aims to become a community tool supplying scalable, portable, reproducible performance toward novel science in regional- and planet-scale geodynamics and planetary science. By implementing kernels used by many research groups beneath a uniform abstraction layer, the library will enable optimization for modern hardware, thus reducing community barriers to large- or extreme-scale parallel simulation on modern architectures. In particular, the library will include CPU-, Manycore-, and GPU-optimized variants of matrix-free operators and multigrid components. The common layer provides a framework upon which to introduce innovative new tools. StagBL will leverage p4est to provide distributed adaptive meshes, and incorporate a multigrid convergence analysis tool. These options, in addition to a wealth of solver options provided by an interface to PETSc, will make the most modern solution techniques available from a common interface. StagBL in turn provides a PETSc interface, DMStag, to its central staggered grid abstraction. We present public version 0.5 of StagBL, including preliminary integration with application codes and demonstrations with its own demonstration application, StagBLDemo. Central to StagBL is the notion of an ...

  20. Facile and scalable fabrication of polymer-ceramic composite electrolyte with high ceramic loadings

    Science.gov (United States)

    Pandian, Amaresh Samuthira; Chen, X. Chelsea; Chen, Jihua; Lokitz, Bradley S.; Ruther, Rose E.; Yang, Guang; Lou, Kun; Nanda, Jagjit; Delnick, Frank M.; Dudney, Nancy J.

    2018-06-01

    Solid state electrolytes are a promising alternative to flammable liquid electrolytes for high-energy lithium battery applications. In this work a polymer-ceramic composite electrolyte membrane with high ceramic loading (greater than 60 vol%) is fabricated using a model polymer electrolyte, poly(ethylene oxide) + lithium trifluoromethane sulfonate, and a lithium-conducting ceramic powder. The effects of processing methods, choice of plasticizer and varying composition on the ionic conductivity of the composite electrolyte are thoroughly investigated. The physical, structural and thermal properties of the composites are exhaustively characterized. We demonstrate that aqueous spray coating followed by hot pressing is a scalable and inexpensive technique to obtain composite membranes that are remarkably dense and uniform. The ionic conductivity of composites fabricated using this protocol is at least one order of magnitude higher than that of those made by dry milling and solution casting. The introduction of tetraethylene glycol dimethyl ether further increases the ionic conductivity. The composite electrolyte's interfacial compatibility with metallic lithium and good cyclability are verified by constructing lithium symmetrical cells. A high Li+ transference number of 0.79 is measured for the composite electrolyte.

  1. Technical Report: Toward a Scalable Algorithm to Compute High-Dimensional Integrals of Arbitrary Functions

    International Nuclear Information System (INIS)

    Snyder, Abigail C.; Jiao, Yu

    2010-01-01

    Neutron experiments at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL) frequently generate large amounts of data (on the order of 10⁶-10¹² data points). Hence, traditional data analysis tools run on a single CPU take too long to be practical and scientists are unable to efficiently analyze all data generated by experiments. Our goal is to develop a scalable algorithm to efficiently compute high-dimensional integrals of arbitrary functions. This algorithm can then be used to evaluate the four-dimensional integrals that arise as part of modeling intensity from the experiments at the SNS. Here, three different one-dimensional numerical integration solvers from the GNU Scientific Library were modified and implemented to solve four-dimensional integrals. The results of these solvers on a final integrand provided by scientists at the SNS can be compared to the results of other methods, such as quasi-Monte Carlo methods, computing the same integral. A parallelized version of the most efficient method can allow scientists the opportunity to more effectively analyze all experimental data.
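
    As a hedged sketch of the comparison described above, the following Python fragment evaluates a four-dimensional integral both by nested one-dimensional quadrature (via scipy, standing in for the GSL solvers) and by plain Monte Carlo; the integrand is a stand-in, not the SNS one.

        # Four-dimensional integral by nested 1D quadrature vs. Monte Carlo.
        import numpy as np
        from scipy import integrate

        f = lambda x, y, z, w: np.exp(-(x**2 + y**2 + z**2 + w**2))

        # Nested quadrature over the unit hypercube [0, 1]^4
        quad_val, quad_err = integrate.nquad(f, [[0, 1]] * 4)

        # Monte Carlo estimate with the same integrand
        rng = np.random.default_rng(0)
        pts = rng.random((100_000, 4))
        mc_val = f(*pts.T).mean()          # volume of [0, 1]^4 is 1

        print(quad_val, mc_val)            # the two estimates should agree closely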

  2. Pre-coating of LSCM perovskite with metal catalyst for scalable high performance anodes

    KAUST Repository

    Boulfrad, Samir

    2013-07-01

    In this work, a highly scalable technique is proposed as an alternative to the lab-scale impregnation method. LSCM-CGO powders were pre-coated with 5 wt% of Ni from nitrates. After appropriate mixing and adequate heat treatment, the coated powders were dispersed into organic-based vehicles to form a screen-printable ink, which was deposited and fired to form SOFC anode layers. Electrochemical tests show a considerable enhancement of the pre-coated anode performance under 50 ml/min wet H2 flow, with the polarization resistance decreased from about 0.60 Ω cm² to 0.38 Ω cm² at 900 °C and from 6.70 Ω cm² to 1.37 Ω cm² at 700 °C. This is most likely due to the pre-coating process resulting in nano-scaled Ni particles with two typical sizes: from 50 to 200 nm and from 10 to 40 nm. Converging indications suggest that the latter type of particle comes from solid solution of Ni in the LSCM phase under oxidizing conditions and exsolution as nanoparticles under reducing atmospheres. Copyright © 2013, Hydrogen Energy Publications, LLC. Published by Elsevier Ltd. All rights reserved.

  3. Highly scalable ZIF-based mixed-matrix hollow fiber membranes for advanced hydrocarbon separations

    KAUST Repository

    Zhang, Chen

    2014-05-29

    ZIF-8/6FDA-DAM, a proven mixed-matrix material that demonstrated remarkably enhanced C3H6/C3H8 selectivity in dense film geometry, was extended to scalable hollow fiber geometry in the current work. We successfully formed dual-layer ZIF-8/6FDA-DAM mixed-matrix hollow fiber membranes with ZIF-8 nanoparticle loading up to 30 wt % using the conventional dry-jet/wet-quench fiber spinning technique. The mixed-matrix hollow fibers showed significantly enhanced C3H6/C3H8 selectivity that was consistent with mixed-matrix dense films. Critical variables controlling successful formation of mixed-matrix hollow fiber membranes with desirable morphology and attractive transport properties were discussed. Furthermore, the effects of coating materials on selectivity recovery of partially defective fibers were investigated. To the best of our knowledge, this is the first article reporting successful formation of high-loading mixed-matrix hollow fiber membranes with significantly enhanced selectivity for separation of condensable olefin/paraffin mixtures. Therefore, it represents a major step in the research area of advanced mixed-matrix membranes. © 2014 American Institute of Chemical Engineers.

  4. A Highly Parallel and Scalable Motion Estimation Algorithm with GPU for HEVC

    Directory of Open Access Journals (Sweden)

    Yun-gang Xue

    2017-01-01

    Full Text Available We propose a highly parallel and scalable motion estimation algorithm, named multilevel resolution motion estimation (MLRME for short), combining the advantages of local full search and downsampling. By subsampling a video frame, a large amount of computation is saved, while the local full-search method can exploit massive parallelism and make full use of powerful modern many-core accelerators, such as GPUs and the Intel Xeon Phi. We implemented the proposed MLRME in HM12.0, and the experimental results showed that the encoding quality of the MLRME method is close to that of the fast motion estimation in HEVC, declining by less than 1.5%. We also implemented MLRME with CUDA, obtaining a 30-60x speed-up compared to the serial algorithm on a single CPU. Specifically, the parallel implementation of MLRME on a GTX 460 GPU meets the real-time coding requirement at about 25 fps for the 2560×1600 video format, while for 832×480 the performance exceeds 100 fps.
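
    A toy Python sketch of the coarse-to-fine idea (downsample both frames, run a local full search at low resolution, then scale the winning motion vector back up); it is illustrative only and unrelated to the actual HM12.0/CUDA code.

        # Coarse block matching by SAD on downsampled frames; every (dx, dy)
        # candidate is independent, which is what makes the search parallel.
        import numpy as np

        def sad(a, b):
            return np.abs(a.astype(np.int32) - b.astype(np.int32)).sum()

        def coarse_full_search(cur, ref, bx, by, bsize=8, radius=4, factor=2):
            curs, refs = cur[::factor, ::factor], ref[::factor, ::factor]
            bx, by, bs = bx // factor, by // factor, bsize // factor
            block = curs[by:by + bs, bx:bx + bs]
            best, best_mv = None, (0, 0)
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y and 0 <= x and y + bs <= refs.shape[0] and x + bs <= refs.shape[1]:
                        cost = sad(block, refs[y:y + bs, x:x + bs])
                        if best is None or cost < best:
                            best, best_mv = cost, (dx * factor, dy * factor)
            return best_mv            # motion vector scaled back to full resolution

        cur = np.random.randint(0, 255, (64, 64), dtype=np.uint8)
        ref = np.roll(cur, (2, -4), axis=(0, 1))    # known synthetic motion
        print(coarse_full_search(cur, ref, bx=16, by=16))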

  5. A highly scalable massively parallel fast marching method for the Eikonal equation

    Science.gov (United States)

    Yang, Jianming; Stern, Frederick

    2017-03-01

    The fast marching method is a widely used numerical method for solving the Eikonal equation arising in a variety of scientific and engineering fields. It has long been deemed inherently sequential, and an efficient parallel algorithm applicable to large-scale practical applications has not been available in the literature. In this study, we present a highly scalable massively parallel implementation of the fast marching method using a domain decomposition approach. Central to this algorithm is a novel restarted narrow band approach that coordinates the frequency of communications and the amount of computation beyond that of a sequential run, achieving an unprecedented parallel performance. Within each restart, the narrow band fast marching method is executed; simple synchronous local exchanges and global reductions are adopted for communicating updated data in the overlapping regions between neighboring subdomains and getting the latest front status, respectively. The independence of front characteristics is exploited through special data structures and augmented status tags to extract the masked parallelism within the fast marching method. The efficiency, flexibility, and applicability of the parallel algorithm are demonstrated through several examples. These problems are extensively tested on six grids with up to 1 billion points using different numbers of processes ranging from 1 to 65536. Remarkable parallel speedups are achieved using tens of thousands of processes. Detailed pseudo-codes for both the sequential and parallel algorithms are provided to illustrate the simplicity of the parallel implementation and its similarity to the sequential narrow band fast marching algorithm.
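
    A minimal serial fast marching sketch in Python for |∇T| = 1 on a 2D grid; this is the building block that the paper parallelizes with restarts, and the implementation below is an illustration, not the authors' code.

        # Sequential fast marching: accept the smallest tentative arrival time
        # from a heap, then update its neighbors with a first-order upwind solve.
        import heapq
        import numpy as np

        def fast_marching(shape, sources):
            INF = np.inf
            T = np.full(shape, INF)
            accepted = np.zeros(shape, dtype=bool)
            heap = []
            for s in sources:
                T[s] = 0.0
                heapq.heappush(heap, (0.0, s))
            while heap:
                t, (i, j) = heapq.heappop(heap)
                if accepted[i, j]:
                    continue
                accepted[i, j] = True
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < shape[0] and 0 <= nj < shape[1] and not accepted[ni, nj]:
                        tx = min(T[ni, nj - 1] if nj > 0 else INF,
                                 T[ni, nj + 1] if nj < shape[1] - 1 else INF)
                        ty = min(T[ni - 1, nj] if ni > 0 else INF,
                                 T[ni + 1, nj] if ni < shape[0] - 1 else INF)
                        a, b = sorted((tx, ty))
                        # quadratic upwind update (grid spacing h = 1)
                        new = a + 1.0 if b - a >= 1.0 else 0.5 * (a + b + np.sqrt(2 - (a - b) ** 2))
                        if new < T[ni, nj]:
                            T[ni, nj] = new
                            heapq.heappush(heap, (new, (ni, nj)))
            return T

        print(fast_marching((5, 5), [(0, 0)])[4, 4])   # approximate distance to far corner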

  6. Scalable Approach to Highly Efficient and Rapid Capacitive Deionization with CNT-Thread As Electrodes.

    Science.gov (United States)

    Moronshing, Maku; Subramaniam, Chandramouli

    2017-11-22

    A scalable route to highly efficient purification of water through capacitive deionization (CDI) is reported using CNT-thread as electrodes. The electro-sorption capacity (q_e) of 139 mg g⁻¹ and average salt-adsorption rate (ASAR) of 2.78 mg g⁻¹ min⁻¹ achieved here are the highest among all known electrode materials and nonmembrane techniques, indicating efficient and rapid deionization. Such exceptional performance is achieved with feedstock concentrations (≤1000 ppm) where conventional techniques such as reverse osmosis and electrodialysis prove ineffective. Further, both cations (Na⁺, K⁺, Mg²⁺, and Ca²⁺) and anions (Cl⁻, SO₄²⁻ and NO₃⁻) are removed with equally high efficiency (∼80%). Synergism between electrical conductivity (∼25 S cm⁻¹), high specific surface area (∼900 m² g⁻¹), porosity (0.7 nm, 3 nm) and hydrophilicity (contact angle ∼25°) in the CNT-thread electrode enables superior contact with water, rapid formation of an extensive electrical double layer and consequently efficient deionization. The tunable capacitance of the device (0.4-120 mF) and its high specific capacitance (∼27.2 F g⁻¹) enable exceptional performance across a wide range of saline concentrations (50-1000 ppm). Facile regeneration of the electrode and reusability of the device are achieved over several cycles. The device demonstrated can desalinate water as it trickles down its surface under gravity, thereby eliminating the requirement of any water pumping system. Finally, its portable adaptability is demonstrated by operating the device with an AA battery.

  7. A 1T Dynamic Random Access Memory Cell Based on Gated Thyristor with Surrounding Gate Structure for High Scalability.

    Science.gov (United States)

    Kim, Hyungjin; Kim, Sihyun; Kim, Hyun-Min; Lee, Kitae; Kim, Sangwan; Pak, Byung-Gook

    2018-09-01

    In this study, we investigate a one-transistor (1T) dynamic random access memory (DRAM) cell based on a gated-thyristor device that exploits voltage-driven bistability to enable high-speed operation. The structural feature of the surrounding gate using a sidewall provides high scalability with regard to constructing an array architecture from the proposed devices. In addition, the operation mechanism, I-V characteristics, DRAM operations, and bias dependence are analyzed using a commercial device simulator. Unlike conventional 1T DRAM cells utilizing the floating body effect, the excess carriers that must be stored to create the two distinct states are not generated but injected from the n+ cathode region, giving the device high-speed operation capabilities. The findings here indicate that the proposed DRAM cell offers distinct advantages in terms of scalability and high-speed operation.

  8. SRC: FenixOS - A Research Operating System Focused on High Scalability and Reliability

    DEFF Research Database (Denmark)

    Passas, Stavros; Karlsson, Sven

    2011-01-01

    Computer systems keep increasing in size. Systems scale in the number of processing units, memories and peripheral devices. This creates many and diverse architectural trade-offs that existing operating systems are not able to address. We are designing and implementing FenixOS, a new operating system that aims to improve the state of the art in scalability and reliability. We achieve scalability through limiting data sharing when possible, and through extensive use of lock-free data structures. Reliability is addressed with a careful re-design of the programming interface and structure of the operating system.

  9. Personalised Prescription of Scalable High Intensity Interval Training to Inactive Female Adults of Different Ages.

    Directory of Open Access Journals (Sweden)

    Jacqueline L Mair

    Full Text Available Stepping is a convenient form of scalable high-intensity interval training (HIIT) that may lead to health benefits. However, the accurate personalised prescription of stepping is hampered by a lack of evidence on optimal stepping cadences and step heights for various populations. This study examined the acute physiological responses to stepping exercise at various heights and cadences in young (n = 14) and middle-aged (n = 14) females in order to develop an equation that facilitates prescription of stepping at targeted intensities. Participants completed a step test protocol consisting of randomised three-minute bouts at different step cadences (80, 90, 100, 110 steps·min⁻¹) and step heights (17, 25, 30, 34 cm). Aerobic demand and heart rate values were measured throughout. Resting metabolic rate was measured in order to develop female-specific metabolic equivalents (METs) for stepping. Results revealed significant differences between age groups for METs and heart rate reserve, and within-group differences for METs, heart rate, and metabolic cost, at different step heights and cadences. At a given step height and cadence, middle-aged females were required to work at an intensity on average 1.9 ± 0.26 METs greater than the younger females. A prescriptive equation was developed to assess energy cost in METs using multilevel regression analysis with factors of step height, step cadence and age. Considering recent evidence supporting accumulated bouts of HIIT exercise for health benefits, this equation, which allows HIIT to be personally prescribed to inactive and sedentary women, has potential impact as a public health exercise prescription tool.
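
    A purely hypothetical Python sketch of how a prescriptive equation of this form is used; the linear shape and every coefficient below are invented placeholders, not the fitted values from the study.

        # Forward model: METs from step height, cadence, and age; and its
        # inversion to prescribe a cadence for a target intensity.
        # All coefficients are made-up placeholders, NOT the published equation.
        def stepping_mets(step_height_cm, cadence_spm, age_years,
                          b0=-1.0, b_height=0.12, b_cadence=0.05, b_age=0.02):
            """Predicted intensity (METs) for a given height, cadence and age."""
            return (b0 + b_height * step_height_cm
                       + b_cadence * cadence_spm
                       + b_age * age_years)

        def cadence_for_target(target_mets, step_height_cm, age_years,
                               b0=-1.0, b_height=0.12, b_cadence=0.05, b_age=0.02):
            """Invert the linear model to prescribe a cadence."""
            return (target_mets - b0 - b_height * step_height_cm
                    - b_age * age_years) / b_cadence

        print(stepping_mets(25, 100, 45))        # predicted METs
        print(cadence_for_target(6.0, 25, 45))   # steps per minute to hit 6 METs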

  10. Enabling Highly-Scalable Remote Memory Access Programming with MPI-3 One Sided

    Directory of Open Access Journals (Sweden)

    Robert Gerstenberger

    2014-01-01

    Full Text Available Modern interconnects offer remote direct memory access (RDMA) features. Yet, most applications rely on explicit message passing for communication despite its unwanted overheads. The MPI-3.0 standard defines a programming interface for exploiting RDMA networks directly; however, its scalability and practicality have to be demonstrated in practice. In this work, we develop scalable bufferless protocols that implement the MPI-3.0 specification. Our protocols support scaling to millions of cores with negligible memory consumption while providing the highest performance and minimal overheads. To arm programmers, we provide a spectrum of performance models for all critical functions and demonstrate the usability of our library and models with several application studies with up to half a million processes. We show that our design is comparable to, or better than, UPC and Fortran Coarrays in terms of latency, bandwidth and message rate. We also demonstrate application performance improvements with comparable programming complexity.
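
    A small mpi4py sketch of the MPI-3 one-sided style the paper targets (run with, e.g., mpiexec -n 2 python rma.py); this uses the standard MPI RMA interface, not the authors' protocol implementation.

        # One-sided Put: rank 0 writes directly into rank 1's exposed memory,
        # with no matching receive call on the target.
        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()
        assert comm.Get_size() >= 2, "run with at least 2 ranks"

        buf = np.zeros(1, dtype='i')
        win = MPI.Win.Create(buf, comm=comm)   # expose local memory to peers

        win.Fence()                            # open an access epoch
        if rank == 0:
            data = np.array([42], dtype='i')
            win.Put(data, target_rank=1)       # remote write, no recv needed
        win.Fence()                            # close the epoch

        if rank == 1:
            print("rank 1 received", buf[0])   # prints 42
        win.Free()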

  11. Computational Fluid Dynamics (CFD) Computations With Zonal Navier-Stokes Flow Solver (ZNSFLOW) Common High Performance Computing Scalable Software Initiative (CHSSI) Software

    National Research Council Canada - National Science Library

    Edge, Harris

    1999-01-01

    ...), computational fluid dynamics (CFD) 6 project. Under the project, a proven zonal Navier-Stokes solver was rewritten for scalable parallel performance on both shared memory and distributed memory high performance computers...

  12. Highly Scalable Asynchronous Computing Method for Partial Differential Equations: A Path Towards Exascale

    Science.gov (United States)

    Konduri, Aditya

    Many natural and engineering systems are governed by nonlinear partial differential equations (PDEs) which result in multiscale phenomena, e.g. turbulent flows. Numerical simulations of these problems are computationally very expensive and demand extreme levels of parallelism. At realistic conditions, simulations are carried out on massively parallel computers with hundreds of thousands of processing elements (PEs). It has been observed that communication between PEs as well as their synchronization at these extreme scales take up a significant portion of the total simulation time and result in poor scalability of codes. This issue is likely to pose a bottleneck in the scalability of codes on future Exascale systems. In this work, we propose an asynchronous computing algorithm based on widely used finite difference methods to solve PDEs in which synchronization between PEs due to communication is relaxed at a mathematical level. We show that while stability is conserved when schemes are used asynchronously, accuracy is greatly degraded. Since message arrivals at PEs are random processes, so is the behavior of the error. We propose a new statistical framework in which we show that average errors always drop to first order regardless of the original scheme. We propose new asynchrony-tolerant schemes that maintain accuracy when synchronization is relaxed. The quality of the solution is shown to depend not only on the physical phenomena and numerical schemes, but also on the characteristics of the computing machine. A novel algorithm using remote memory access communications has been developed to demonstrate excellent scalability of the method for large-scale computing. Finally, we present a path to extend this method to solving complex multi-scale problems on Exascale machines.
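
    A toy numpy illustration of relaxed synchronization: a 1D diffusion stencil where one halo value may be stale by a few steps, mimicking late message arrival between two subdomains; it is a conceptual sketch, not the proposed asynchrony-tolerant schemes.

        # 1D heat equation with an occasionally stale halo value at the
        # interface between the "left" and "right" halves of the domain.
        import numpy as np
        import random

        nx, dt, dx, alpha, steps = 64, 0.1, 1.0, 0.25, 200
        u = np.zeros(nx)
        u[nx // 2] = 1.0
        stale_left_halo = u[nx // 2 - 1]        # value the right half sees

        for n in range(steps):
            if random.random() < 0.7:           # the "message" arrives only sometimes
                stale_left_halo = u[nx // 2 - 1]
            un = u.copy()
            for i in range(1, nx - 1):
                left = stale_left_halo if i == nx // 2 else u[i - 1]
                un[i] = u[i] + alpha * dt / dx**2 * (left - 2 * u[i] + u[i + 1])
            u = un

        print(u.sum())   # the scheme stays stable; staleness degrades accuracy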

  13. Scalable coherent interface

    International Nuclear Information System (INIS)

    Alnaes, K.; Kristiansen, E.H.; Gustavson, D.B.; James, D.V.

    1990-01-01

    The Scalable Coherent Interface (IEEE P1596) is establishing an interface standard for very high performance multiprocessors, supporting a cache-coherent-memory model scalable to systems with up to 64K nodes. This Scalable Coherent Interface (SCI) will supply a peak bandwidth per node of 1 GigaByte/second. The SCI standard should facilitate assembly of processor, memory, I/O and bus bridge cards from multiple vendors into massively parallel systems with throughput far above what is possible today. The SCI standard encompasses two levels of interface, a physical level and a logical level. The physical level specifies electrical, mechanical and thermal characteristics of connectors and cards that meet the standard. The logical level describes the address space, data transfer protocols, cache coherence mechanisms, synchronization primitives and error recovery. In this paper we address logical level issues such as packet formats, packet transmission, transaction handshake, flow control, and cache coherence. 11 refs., 10 figs

  14. Scalable High-Performance Ultraminiature Graphene Micro-Supercapacitors by a Hybrid Technique Combining Direct Writing and Controllable Microdroplet Transfer.

    Science.gov (United States)

    Shen, Daozhi; Zou, Guisheng; Liu, Lei; Zhao, Wenzheng; Wu, Aiping; Duley, Walter W; Zhou, Y Norman

    2018-02-14

    Miniaturization of energy storage devices can significantly decrease the overall size of electronic systems. However, this miniaturization is limited by the reduction of electrode dimensions and the reproducible transfer of small electrolyte drops. This paper first reports a simple, scalable direct-writing method for the production of ultraminiature micro-supercapacitor (MSC) electrodes, based on femtosecond-laser-reduced graphene oxide (fsrGO) interlaced pads. These pads, separated by 2 μm spacing, are 100 μm long and 8 μm wide. A second stage involves the accurate transfer of an electrolyte microdroplet on top of each individual electrode, which avoids any interference of the electrolyte with other electronic components. Abundant in-plane mesopores in fsrGO induced by the fs laser together with the ultrashort interelectrode spacing enable the MSCs to exhibit a high specific capacitance (6.3 mF cm⁻² and 105 F cm⁻³) and ∼100% retention after 1000 cycles. An all-graphene resistor-capacitor (RC) filter is also constructed by combining the MSC and a fsrGO resistor, which is confirmed to exhibit highly enhanced performance characteristics. This new hybrid technique combining fs laser direct writing and precise microdroplet transfer readily enables scalable production of ultraminiature MSCs, which is believed to be significant for practical application of micro-supercapacitors in microelectronic systems.

  15. Scalable, incremental learning with MapReduce parallelization for cell detection in high-resolution 3D microscopy data

    KAUST Repository

    Sung, Chul

    2013-08-01

    Accurate estimation of neuronal count and distribution is central to the understanding of the organization and layout of cortical maps in the brain, and changes in the cell population induced by brain disorders. High-throughput 3D microscopy techniques such as Knife-Edge Scanning Microscopy (KESM) are enabling whole-brain survey of neuronal distributions. Data from such techniques pose serious challenges to quantitative analysis due to the massive, growing, and sparsely labeled nature of the data. In this paper, we present a scalable, incremental learning algorithm for cell body detection that can address these issues. Our algorithm is computationally efficient (linear mapping, non-iterative) and does not require retraining (unlike gradient-based approaches) or retention of old raw data (unlike instance-based learning). We tested our algorithm on our rat brain Nissl data set, showing superior performance compared to an artificial neural network-based benchmark, and also demonstrated robust performance in a scenario where the data set is rapidly growing in size. Our algorithm is also highly parallelizable due to its incremental nature, and we demonstrated this empirically using a MapReduce-based implementation of the algorithm. We expect our scalable, incremental learning approach to be widely applicable to medical imaging domains where there is a constant flux of new data. © 2013 IEEE.
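
    A hedged sketch of the incremental, non-iterative flavor of learning described above: a least-squares model whose sufficient statistics are updated per batch, so neither retraining nor retention of old raw data is needed; this is a generic illustration, not the authors' cell detector.

        # Incremental linear least squares: accumulate X^T X and X^T y per
        # batch, then solve in closed form whenever weights are needed.
        import numpy as np

        d = 16                                  # feature dimension
        XtX = np.zeros((d, d))
        Xty = np.zeros(d)

        def update(X_batch, y_batch):           # called as new data arrives
            global XtX, Xty
            XtX += X_batch.T @ X_batch          # no pass over old data
            Xty += X_batch.T @ y_batch

        def weights(ridge=1e-3):                # closed-form, non-iterative solve
            return np.linalg.solve(XtX + ridge * np.eye(d), Xty)

        rng = np.random.default_rng(1)
        for _ in range(5):                      # five incoming data chunks
            X = rng.normal(size=(100, d))
            update(X, X @ np.arange(d) + rng.normal(size=100))
        print(weights()[:3])                    # approaches [0, 1, 2]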

  16. SAME4HPC: A Promising Approach in Building a Scalable and Mobile Environment for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Karthik, Rajasekar [ORNL

    2014-01-01

    In this paper, an architecture for building a Scalable And Mobile Environment for High-Performance Computing with spatial capabilities, called SAME4HPC, is described using cutting-edge technologies and standards such as Node.js, HTML5, ECMAScript 6, and PostgreSQL 9.4. Mobile devices are increasingly becoming powerful enough to run high-performance apps. At the same time, there exists a significant number of low-end and older devices that rely heavily on the server or the cloud infrastructure to do the heavy lifting. Our architecture aims to support both of these types of devices to provide high performance and a rich user experience. A cloud infrastructure consisting of OpenStack with Ubuntu, GeoServer, and high-performance JavaScript frameworks are some of the key open-source and industry-standard components that have been adopted in this architecture.

  17. Slot-die Coating of a High Performance Copolymer in a Readily Scalable Roll Process for Polymer Solar Cells

    DEFF Research Database (Denmark)

    Helgesen, Martin; Carlé, Jon Eggert; Krebs, Frederik C

    2013-01-01

    reported compact coating/printing machine, which enables the preparation of PSCs that are directly scalable with full roll-to-roll processing. The positioning of the side-chains on the thiophene units proves to be very significant in terms of solubility of the polymers and consequently has a major impact on the device yield and process control. The most successful processing is accomplished with the polymer PDTSTTz-4, which has the side-chains situated in the 4-position on the thiophene units. Inverted PSCs based on PDTSTTz-4 demonstrate high fill factors, up to 59%, even with active layer thicknesses well above 200 nm. Power conversion efficiencies of up to 3.5% can be reached with the roll-coated PDTSTTz-4:PCBM solar cells, which, together with good process control and high device yield, designate PDTSTTz-4 as a convincing candidate for high-throughput roll-to-roll production of PSCs.

  18. A Highly Scalable Data Service (HSDS) using Cloud-based Storage Technologies for Earth Science Data

    Science.gov (United States)

    Michaelis, A.; Readey, J.; Votava, P.; Henderson, J.; Willmore, F.

    2017-12-01

    Cloud-based infrastructure may offer several key benefits of scalability, built-in redundancy, security mechanisms and reduced total cost of ownership as compared with a traditional data center approach. However, most of the tools and legacy software systems developed for online data repositories within the federal government were not developed with a cloud-based infrastructure in mind and do not fully take advantage of commonly available cloud-based technologies. Moreover, services based on object storage are well established and provided through all the leading cloud service providers (Amazon Web Services, Microsoft Azure, Google Cloud, etc.), which can often provide unmatched "scale-out" capabilities and data availability to a large and growing consumer base at a price point unachievable with in-house solutions. We describe a system that utilizes object storage rather than traditional file-system-based storage to vend earth science data. The system described is not only cost effective, but shows a performance advantage for running many different analytics tasks in the cloud. To enable compatibility with existing tools and applications, we outline client libraries that are API compatible with existing libraries for HDF5 and NetCDF4. Performance of the system is demonstrated using cloud services running on Amazon Web Services.
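
    A short sketch of the h5py-compatible access pattern described, using the open-source h5pyd client library; the domain path, dataset name, and endpoint below are placeholders, not real service addresses.

        # Read a slice of a remote dataset through an h5py-style interface
        # backed by object storage; only the needed chunks travel over HTTP.
        import h5pyd  # drop-in replacement for h5py, server-backed

        f = h5pyd.File("/shared/nasa/merra2/sample.h5", "r",
                       endpoint="http://hsds.example.org")   # placeholder endpoint
        dset = f["/temperature"]          # behaves like an h5py Dataset
        print(dset.shape, dset.dtype)
        tile = dset[0, 100:200, 100:200]  # server reads only the needed chunks
        f.close()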

  19. A Scalable and Highly Configurable Cache-Aware Hybrid Flash Translation Layer

    Directory of Open Access Journals (Sweden)

    Jalil Boukhobza

    2014-03-01

    Full Text Available This paper presents a cache-aware configurable hybrid flash translation layer (FTL), named CACH-FTL. It was designed based on the observation that most state-of-the-art flash-specific cache systems above FTLs flush groups of pages belonging to the same data block. CACH-FTL relies on this characteristic to optimize the placement of flash write operations: large groups of pages are flushed to a block-mapped region, named BMR, whereas small groups are buffered into a page-mapped region, named PMR. Page group placement is based on a configurable threshold defining the limit under which it is more cost-effective to use page mapping (PMR) and wait for more pages to be grouped before flushing to the BMR. CACH-FTL is scalable in terms of mapping table size and flexible in terms of Input/Output (I/O) workload support. CACH-FTL performs very well, as the performance difference from the ideal page-mapped FTL is less than 15% in most cases, with a mean of 4% for the best CACH-FTL configurations, while using at least 78% less RAM for mapping table storage.
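
    A minimal Python sketch of the threshold-based placement rule described above; names, the threshold value, and the regrouping policy are illustrative simplifications, not the paper's implementation.

        # Small page groups are buffered in the page-mapped region (PMR) and
        # regrouped; groups at or above the threshold go straight to the
        # block-mapped region (BMR).
        THRESHOLD = 4                      # pages; the configurable knob

        pmr_buffer, bmr_blocks = [], []

        def flush(page_group):
            if len(page_group) < THRESHOLD:
                pmr_buffer.extend(page_group)          # buffer and wait to regroup
                if len(pmr_buffer) >= THRESHOLD:       # enough pages gathered
                    bmr_blocks.append(pmr_buffer[:THRESHOLD])
                    del pmr_buffer[:THRESHOLD]
            else:
                bmr_blocks.append(list(page_group))    # cheap block-mapped write

        for group in ([1], [2, 3], [4, 5, 6, 7, 8], [9]):
            flush(group)
        print(pmr_buffer, bmr_blocks)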

  20. Churchill: an ultra-fast, deterministic, highly scalable and balanced parallelization strategy for the discovery of human genetic variation in clinical and population-scale genomics.

    Science.gov (United States)

    Kelly, Benjamin J; Fitch, James R; Hu, Yangqiu; Corsmeier, Donald J; Zhong, Huachun; Wetzel, Amy N; Nordquist, Russell D; Newsom, David L; White, Peter

    2015-01-20

    While advances in genome sequencing technology make population-scale genomics a possibility, current approaches for analysis of these data rely upon parallelization strategies that have limited scalability, are complex to implement, and lack reproducibility. Churchill, a balanced regional parallelization strategy, overcomes these challenges, fully automating the multiple steps required to go from raw sequencing reads to variant discovery. Through implementation of novel deterministic parallelization techniques, Churchill allows computationally efficient analysis of a high-depth whole genome sample in less than two hours. The method is highly scalable, enabling full analysis of the 1000 Genomes raw sequence dataset in a week using cloud resources. http://churchill.nchri.org/.
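
    A small Python illustration of balanced regional parallelization in this spirit: chromosomes split into fixed-size regions dealt round-robin to workers; the chromosome sizes, region length, and worker count are made up, and this is not Churchill's code.

        # Cut each chromosome into equal regions, then deal them out so every
        # worker gets a comparable share of the genome.
        chrom_sizes = {"chr1": 248_956_422, "chr2": 242_193_529}
        region_len, n_workers = 50_000_000, 8

        regions = [(c, start, min(start + region_len, size))
                   for c, size in chrom_sizes.items()
                   for start in range(0, size, region_len)]

        assignments = {w: regions[w::n_workers] for w in range(n_workers)}
        for w, regs in assignments.items():
            print(w, regs)                 # deterministic, reproducible split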

  1. PM2006: a highly scalable urban planning management information system--Case study: Suzhou Urban Planning Bureau

    Science.gov (United States)

    Jing, Changfeng; Liang, Song; Ruan, Yong; Huang, Jie

    2008-10-01

    During the urbanization process, when facing the complex requirements of city development, ever-growing urban data, rapid development of planning business and increasing planning complexity, a scalable, extensible urban planning management information system is urgently needed. PM2006 is such a system that can deal with these problems. In response to the status and problems in urban planning, the scalability and extensibility of PM2006 are introduced, encompassing business-oriented workflow extensibility, the scalability of its DLL-based architecture, flexibility with respect to GIS and database platforms, scalability of data updating and maintenance, and so on. It is verified that the PM2006 system has good extensibility and scalability, meeting the requirements of all levels of administrative divisions and adapting to ever-growing changes in urban planning business. At the end of this paper, the application of PM2006 in the Urban Planning Bureau of Suzhou city is described.

  2. Scalability of a Low-Cost Multi-Teraflop Linux Cluster for High-End Classical Atomistic and Quantum Mechanical Simulations

    Science.gov (United States)

    Kikuchi, Hideaki; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya; Shimojo, Fuyuki; Saini, Subhash

    2003-01-01

    Scalability of a low-cost, Intel Xeon-based, multi-Teraflop Linux cluster is tested for two high-end scientific applications: Classical atomistic simulation based on the molecular dynamics method and quantum mechanical calculation based on the density functional theory. These scalable parallel applications use space-time multiresolution algorithms and feature computational-space decomposition, wavelet-based adaptive load balancing, and spacefilling-curve-based data compression for scalable I/O. Comparative performance tests are performed on a 1,024-processor Linux cluster and a conventional higher-end parallel supercomputer, 1,184-processor IBM SP4. The results show that the performance of the Linux cluster is comparable to that of the SP4. We also study various effects, such as the sharing of memory and L2 cache among processors, on the performance.

  3. Cactus and Visapult: A case study of ultra-high performance distributed visualization using connectionless protocols

    Energy Technology Data Exchange (ETDEWEB)

    Shalf, John; Bethel, E. Wes

    2002-05-07

    This past decade has seen rapid growth in the size, resolution, and complexity of Grand Challenge simulation codes. Many such problems still require interactive visualization tools to make sense of multi-terabyte data stores. Visapult is a parallel volume rendering tool that employs distributed components, latency tolerant algorithms, and high performance network I/O for effective remote visualization of massive datasets. In this paper we discuss using connectionless protocols to accelerate Visapult network I/O and interfacing Visapult to the Cactus General Relativity code to enable scalable remote monitoring and steering capabilities. With these modifications, network utilization has moved from 25 percent of line-rate using tuned multi-streamed TCP to sustaining 88 percent of line rate using the new UDP-based transport protocol.
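
    A toy Python sketch of the connectionless idea: UDP datagrams carrying sequenced payload with no per-packet acknowledgment, so the sender is never throttled by round-trip latency; this is a toy protocol, not Visapult's transport.

        # Blast sequenced UDP datagrams and count what arrives; gaps are
        # tolerated rather than retransmitted, unlike TCP.
        import socket, struct

        ADDR = ("127.0.0.1", 9999)

        recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        recv.bind(ADDR)
        recv.setblocking(False)

        send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        for seq in range(100):                       # no handshake, no ACKs
            payload = struct.pack("!I", seq) + b"x" * 1024
            send.sendto(payload, ADDR)

        got = set()
        try:
            while True:
                data, _ = recv.recvfrom(2048)
                got.add(struct.unpack("!I", data[:4])[0])
        except BlockingIOError:                      # receive queue drained
            pass
        print(f"received {len(got)}/100; gaps are tolerated, not retransmitted")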

  4. High performance SDN enabled flat data center network architecture based on scalable and flow-controlled optical switching system

    NARCIS (Netherlands)

    Calabretta, N.; Miao, W.; Dorren, H.J.S.

    2015-01-01

    We demonstrate a reconfigurable virtual datacenter network by utilizing statistical multiplexing offered by scalable and flow-controlled optical switching system. Results show QoS guarantees by the priority assignment and load balancing for applications in virtual networks.

  5. SciSpark: Highly Interactive and Scalable Model Evaluation and Climate Metrics for Scientific Data and Analysis

    Data.gov (United States)

    National Aeronautics and Space Administration — We will construct SciSpark, a scalable system for interactive model evaluation and for the rapid development of climate metrics and analyses. SciSpark directly...

  6. Scalable shear-exfoliation of high-quality phosphorene nanoflakes with reliable electrochemical cycleability in nano batteries

    International Nuclear Information System (INIS)

    Xu, Feng; Min, Huihua; Zhu, Chongyang; Xia, Weiwei; Li, Zhengrui; Li, Shengli; Yu, Kaihao; Sun, Litao; Ge, Binghui; Chen, Jing; Cui, Yiping; Nathan, Arokia; Xin, Linhuo L; Ma, Hongyu; Wu, Lijun; Zhu, Yimei

    2016-01-01

    Atomically thin black phosphorus (called phosphorene) holds great promise as an alternative to graphene and other two-dimensional transition-metal dichalcogenides as an anode material for lithium-ion batteries (LIBs). However, bulk black phosphorus (BP) suffers from rapid capacity fading and poor rechargeable performance. This work reports for the first time the use of in situ transmission electron microscopy (TEM) to construct nanoscale phosphorene LIBs. This enables direct visualization of the mechanisms underlying capacity fading in thick multilayer phosphorene through real-time capture of delithiation-induced structural decomposition, which serves to reduce electrical conductivity thus causing irreversibility of the lithiated phases. We further demonstrate that few-layer-thick phosphorene successfully circumvents the structural decomposition and holds superior structural restorability, even when subject to multi-cycle lithiation/delithiation processes and concomitant huge volume expansion. This finding provides breakthrough insights into thickness-dependent lithium diffusion kinetics in phosphorene. More importantly, a scalable liquid-phase shear exfoliation route has been developed to produce high-quality ultrathin phosphorene using simple means such as a high-speed shear mixer or even a household kitchen blender with the shear rate threshold of ∼1.25 × 10⁴ s⁻¹. The results reported here will pave the way for industrial-scale applications of rechargeable phosphorene LIBs. (paper)

  7. A peripheral component interconnect express-based scalable and highly integrated pulsed spectrometer for solution state dynamic nuclear polarization

    Energy Technology Data Exchange (ETDEWEB)

    He, Yugui; Liu, Chaoyang, E-mail: chyliu@wipm.ac.cn [Wuhan National Laboratory for Optoelectronics, School of Optical and Electronic Information, Huazhong University of Science and Technology, Wuhan 430074 (China); State Key Laboratory of Magnet Resonance and Atomic and Molecular Physics, Wuhan Institute of Physics and Mathematics, Chinese Academy of Sciences, Wuhan 430071 (China); Feng, Jiwen; Wang, Dong; Chen, Fang; Liu, Maili [State Key Laboratory of Magnet Resonance and Atomic and Molecular Physics, Wuhan Institute of Physics and Mathematics, Chinese Academy of Sciences, Wuhan 430071 (China); Zhang, Zhi; Wang, Chao [State Key Laboratory of Magnet Resonance and Atomic and Molecular Physics, Wuhan Institute of Physics and Mathematics, Chinese Academy of Sciences, Wuhan 430071 (China); University of Chinese Academy of Sciences, Beijing 100048 (China)

    2015-08-15

    High sensitivity, high data rates, fast pulses, and accurate synchronization all represent challenges for modern nuclear magnetic resonance spectrometers, which make any expansion or adaptation of these devices to new techniques and experiments difficult. Here, we present a Peripheral Component Interconnect Express (PCIe)-based highly integrated distributed digital architecture pulsed spectrometer that is implemented with electron and nucleus double resonances and is scalable specifically for broad dynamic nuclear polarization (DNP) enhancement applications, including DNP-magnetic resonance spectroscopy/imaging (DNP-MRS/MRI). The distributed modularized architecture can implement more transceiver channels flexibly to meet a variety of MRS/MRI instrumentation needs. The proposed PCIe bus with high data rates can significantly improve data transmission efficiency and communication reliability and allow precise control of pulse sequences. An external high speed double data rate memory chip is used to store acquired data and pulse sequence elements, which greatly accelerates the execution of the pulse sequence, reduces the TR (time of repetition) interval, and improves the accuracy of TR in imaging sequences. Using clock phase-shift technology, we can produce digital pulses accurately with high timing resolution of 1 ns and narrow widths of 4 ns to control the microwave pulses required by pulsed DNP and ensure overall system synchronization. The proposed spectrometer is proved to be both feasible and reliable by observation of a maximum signal enhancement factor of approximately −170 for ¹H, and a high quality water image was successfully obtained by DNP-enhanced spin-echo ¹H MRI at 0.35 T.

  8. Scalable preparation of high purity rutin fatty acid esters following enzymatic synthesis

    DEFF Research Database (Denmark)

    Lue, Bena-Marie; Guo, Zheng; Xu, Xuebing

    2010-01-01

    Investigations into expanded uses of modified flavonoids are often limited by the availability of these high-purity compounds. As such, a simple, effective and relatively fast method for isolation of gram quantities of both long and medium chain fatty acid esters of rutin following scaled-up biosynthesis reactions was established. Acylation reactions of rutin and palmitic or lauric acids were efficient in systems containing dried acetone and molecular sieves, yielding 70-77% bioconversion after 96 h. Thereafter, high-purity isolates (>97%) were easily obtained in significant quantities.

  9. Scalable Upcycling Silicon from Waste Slicing Sludge for High-performance Lithium-ion Battery Anodes

    International Nuclear Information System (INIS)

    Bao, Qi; Huang, Yao-Hui; Lan, Chun-Kai; Chen, Bing-Hong; Duh, Jenq-Gong

    2015-01-01

    Silicon (Si) has been perceived as a promising next-generation anode material for lithium ion batteries (LIBs) due to its superior theoretical capacity. Despite the natural abundance of this element on Earth, large-scale production of high-purity Si nanomaterials in a green and energy-efficient way is yet to become an industrial reality. Spray-drying methods have been exploited to recover Si particles from low-value sludge produced in the photovoltaic industry, providing a massive and cost-effective Si resource for fabricating anode materials. To address drawbacks such as volume expansion, low electrical and Li⁺ conductivity, and unstable solid electrolyte interphase (SEI) formation, the recycled silicon particles have been downsized to the nanoscale and shielded by a highly conductive and protective graphene multilayer through high-energy ball milling. Cyclic voltammetry and electrochemical impedance spectroscopy measurements have revealed that the graphene wrapping and size reduction approach significantly improve the electrochemical performance. The material delivers an excellent reversible capacity of 1,138 mA h g⁻¹ and a long cycle life with 73% capacity retention over 150 cycles at a high current of 450 mA g⁻¹. The plentiful waste conversion methodology also provides considerable opportunities for developing additional rechargeable devices as well as ceramic, powder metallurgy and silane/siloxane products

  10. Development of scalable high throughput fermentation approaches for physiological characterisation of yeast and filamentous fungi

    DEFF Research Database (Denmark)

    Knudsen, Peter Boldsen

    producing the heterologous model polyketide, 6-methylsalicylic acid (6-MSA). An automated methodology for high throughput screening focusing on growth rates, together with a fully automated method for quantitative physiological characterisation in microtiter plates, was established for yeast. Full...

  11. Scalable optical packet switch architecture for low latency and high load computer communication networks

    NARCIS (Netherlands)

    Calabretta, N.; Di Lucente, S.; Nazarathy, Y.; Raz, O.; Dorren, H.J.S.

    2011-01-01

    High performance computer and data-centers require PetaFlop/s processing speed and Petabyte storage capacity with thousands of low-latency short link interconnections between computers nodes. Switch matrices that operate transparently in the optical domain are a potential way to efficiently

  12. An accurate, fast, and scalable solver for high-frequency wave propagation

    Science.gov (United States)

    Zepeda-Núñez, L.; Taus, M.; Hewett, R.; Demanet, L.

    2017-12-01

    In many science and engineering applications, solving time-harmonic high-frequency wave propagation problems quickly and accurately is of paramount importance. For example, in geophysics, particularly in oil exploration, such problems can be the forward problem in an iterative process for solving the inverse problem of subsurface inversion. It is important to solve these wave propagation problems accurately in order to efficiently obtain meaningful solutions of the inverse problems: low order forward modeling can hinder convergence. Additionally, due to the volume of data and the iterative nature of most optimization algorithms, the forward problem must be solved many times. Therefore, a fast solver is necessary to make solving the inverse problem feasible. For time-harmonic high-frequency wave propagation, obtaining both speed and accuracy is historically challenging. Recently, there have been many advances in the development of fast solvers for such problems, including methods which have linear complexity with respect to the number of degrees of freedom. While most methods scale optimally only in the context of low-order discretizations and smooth wave speed distributions, the method of polarized traces has been shown to retain optimal scaling for high-order discretizations, such as hybridizable discontinuous Galerkin methods, and for highly heterogeneous (and even discontinuous) wave speeds. The resulting fast and accurate solver is consequently highly attractive for geophysical applications. To date, this method relies on a layered domain decomposition together with a preconditioner applied in a sweeping fashion, which limits straightforward parallelization. In this work, we introduce a new version of the method of polarized traces which reveals more parallel structure than previous versions while preserving all of its other advantages. We achieve this by further decomposing each layer and applying the preconditioner to these new components separately and

  13. Scalable preparation of porous micron-SnO2/C composites as high performance anode material for lithium ion battery

    Science.gov (United States)

    Wang, Ming-Shan; Lei, Ming; Wang, Zhi-Qiang; Zhao, Xing; Xu, Jun; Yang, Wei; Huang, Yun; Li, Xing

    2016-03-01

    Nano tin dioxide-carbon (SnO2/C) composites prepared with various carbon materials, such as carbon nanotubes, porous carbon, and graphene, have attracted extensive attention in a wide range of fields. However, undesirable characteristics of nanoparticles, including high surface area, low tap density, and self-agglomeration, have greatly restricted their large-scale practical applications. In this study, novel porous micron-SnO2/C (p-SnO2/C) composites are scalably prepared by a simple hydrothermal approach using glucose as a carbon source and Pluronic F127 as a pore-forming agent/soft template. The SnO2 nanoparticles were homogeneously dispersed in micron carbon spheres by assembly with F127/glucose. The continuous three-dimensional porous carbon networks effectively provide strain relaxation for SnO2 volume expansion/shrinkage during lithium insertion/extraction. In addition, the carbon matrix can largely minimize the direct exposure of SnO2 to the electrolyte, thus ensuring the formation of stable solid electrolyte interface films. Moreover, the porous structure also creates efficient channels for the fast transport of lithium ions. As a consequence, the p-SnO2/C composites exhibit stable cycle performance, with a high capacity retention of over 96% for 100 cycles at a current density of 200 mA g⁻¹ and a long cycle life of up to 800 cycles at a higher current density of 1000 mA g⁻¹.

  14. Homogenous 96-plex PEA immunoassay exhibiting high sensitivity, specificity, and excellent scalability

    DEFF Research Database (Denmark)

    Assarsson, Erika; Lundberg, Martin; Holmquist, Göran

    2014-01-01

    reporters, shown potential to relieve the shortcomings of antibodies and their inherent cross-reactivity in multiplex protein quantification applications. The aim of the present study was to develop a robust 96-plex immunoassay based on the proximity extension assay (PEA) for improved high throughput detection of protein biomarkers. This was enabled by: (1) a modified design leading to a reduced number of pipetting steps compared to the existing PEA protocol, as well as improved intra-assay precision; (2) a new enzymatic system that uses a hyper-thermostable enzyme, Pwo, for uniting the two probes …, such as serum and plasma, and also in xenografted mice and resuspended dried blood spots, consuming only 1 µL of sample per test. All in all, the development of the current multiplex technique is a step toward robust high throughput protein marker discovery and research.

  15. Scalable Computational Methods for the Analysis of High-Throughput Biological Data

    Energy Technology Data Exchange (ETDEWEB)

    Langston, Michael A. [Univ. of Tennessee, Knoxville, TN (United States)

    2012-09-06

    The primary focus of this research project is elucidating genetic regulatory mechanisms that control an organism's responses to low-dose ionizing radiation. Although low doses (at most ten centigrays) are not lethal to humans, they elicit a highly complex physiological response, with the ultimate outcome in terms of risk to human health unknown. The tools of molecular biology and computational science will be harnessed to study coordinated changes in gene expression that orchestrate the mechanisms a cell uses to manage the radiation stimulus. High-performance implementations of novel algorithms that exploit the principles of fixed-parameter tractability will be used to extract gene sets suggestive of co-regulation. Genomic mining will be performed to scrutinize, winnow and highlight the most promising gene sets for more detailed investigation. The overall goal is to increase our understanding of the health risks associated with exposures to low levels of radiation.

  16. Performance and scalability of isolated DC-DC converter topologies in low voltage, high current applications

    Energy Technology Data Exchange (ETDEWEB)

    Vaisanen, V.

    2012-07-01

    Fuel cells are a promising alternative for clean and efficient energy production. A fuel cell is probably the most demanding of all distributed generation power sources. It resembles a solar cell in many ways, but sets strict limits to current ripple, common mode voltages and load variations. The typically low output voltage from the fuel cell stack needs to be boosted to a higher voltage level for grid interfacing. Due to the high electrical efficiency of the fuel cell, there is a need for high efficiency power converters, and in the case of low voltage, high current and galvanic isolation, the implementation of such converters is not a trivial task. This thesis presents galvanically isolated DC-DC converter topologies that have favorable characteristics for fuel cell usage and reviews the topologies from the viewpoint of electrical efficiency and cost efficiency. The focus is on evaluating the design issues when considering a single converter module having large current stresses. The dominating loss mechanism in low voltage, high current applications is conduction losses. In the case of MOSFETs, the conduction losses can be efficiently reduced by paralleling, but in the case of diodes, the effectiveness of paralleling depends strongly on the semiconductor material, diode parameters and output configuration. The transformer winding losses can be a major source of losses if the windings are not optimized according to the topology and the operating conditions. Transformer prototyping can be expensive and time consuming, and thus it is preferable to utilize various calculation methods during the design process in order to evaluate the performance of the transformer. This thesis reviews calculation methods for solid wire, litz wire and copper foil winding losses, and in order to evaluate the applicability of the methods, the calculations are compared against measurements and FEM simulations. By selecting a proper calculation method for each winding type, the winding

  17. Versatile, High Quality and Scalable Continuous Flow Production of Metal-Organic Frameworks

    Science.gov (United States)

    Rubio-Martinez, Marta; Batten, Michael P.; Polyzos, Anastasios; Carey, Keri-Constanti; Mardel, James I.; Lim, Kok-Seng; Hill, Matthew R.

    2014-01-01

    Further deployment of Metal-Organic Frameworks in applied settings requires their ready preparation at scale. Expansion of typical batch processes can lead to unsuccessful or low quality synthesis for some systems. Here we report how continuous flow chemistry can be adapted as a versatile route to a range of MOFs, by emulating conditions of lab-scale batch synthesis. This delivers ready synthesis of three different MOFs, with surface areas that closely match theoretical maxima, with production rates of 60 g/h at extremely high space-time yields. PMID:24962145

  18. Scalable high-affinity stabilization of magnetic iron oxide nanostructures by a biocompatible antifouling homopolymer

    KAUST Repository

    Luongo, Giovanni

    2017-10-12

    Iron oxide nanostructures have been widely developed for biomedical applications, due to their magnetic properties and biocompatibility. In clinical application, the stabilization of these nanostructures against aggregation and non-specific interactions is typically achieved using weakly anchored polysaccharides, with better-defined and more strongly anchored synthetic polymers not commercially adopted due to complexity of synthesis and use. Here, we show for the first time stabilization and biocompatibilization of iron oxide nanoparticles by a synthetic homopolymer with strong surface anchoring and a history of clinical use in other applications, poly(2-methacryloyloxyethyl phosphorylcholine) (poly(MPC)). For the commercially important case of spherical particles, binding of poly(MPC) to iron oxide surfaces and highly effective individualization of magnetite nanoparticles (20 nm) are demonstrated. Next-generation high-aspect-ratio nanowires (both magnetite/maghemite and core-shell iron/iron oxide) are furthermore stabilized by poly(MPC) coating, with nanowire cytotoxicity at large concentrations significantly reduced. The flexibility of the synthesis approach is demonstrated by incorporating functionality into the poly(MPC) chain via random copolymerization with an alkyne-containing monomer for click chemistry. Taking these results together, poly(MPC) homopolymers and random copolymers offer a significant improvement over current iron oxide nanoformulations, combining straightforward synthesis, strong surface-anchoring and well-defined molecular weight.

  19. Amphiphilic ligand exchange reaction-induced supercapacitor electrodes with high volumetric and scalable areal capacitances

    Science.gov (United States)

    Nam, Donghyeon; Heo, Yeongbeom; Cheong, Sanghyuk; Ko, Yongmin; Cho, Jinhan

    2018-05-01

    We introduce high-performance supercapacitor electrodes with ternary components prepared from consecutive amphiphilic ligand-exchange-based layer-by-layer (LbL) assembly among amine-functionalized multi-walled carbon nanotubes (NH2-MWCNTs) in alcohol, oleic acid-stabilized Fe3O4 nanoparticles (OA-Fe3O4 NPs) in toluene, and semiconducting polymers (PEDOT:PSS) in water. The periodic insertion of semiconducting polymers within the (OA-Fe3O4 NP/NH2-MWCNT)n multilayer-coated indium tin oxide (ITO) electrode enhanced the volumetric and areal capacitances up to 408 ± 4 F cm⁻³ and 8.79 ± 0.06 mF cm⁻² at 5 mV s⁻¹, respectively, allowing excellent cycling stability (98.8% of the initial capacitance after 5000 cycles) and good rate capability. These values were higher than those of the OA-Fe3O4 NP/NH2-MWCNT multilayered electrode without semiconducting polymer linkers (volumetric capacitance ∼241 ± 4 F cm⁻³ and areal capacitance ∼1.95 ± 0.03 mF cm⁻²) at the same scan rate. Furthermore, when the asymmetric supercapacitor cells (ASCs) were prepared using OA-Fe3O4 NP- and OA-MnO NP-based ternary component electrodes, they displayed high volumetric energy (0.36 mW h cm⁻³) and power densities (820 mW cm⁻³).

  20. Scalable high-affinity stabilization of magnetic iron oxide nanostructures by a biocompatible antifouling homopolymer

    KAUST Repository

    Luongo, Giovanni; Campagnolo, Paola; Perez, Jose E.; Kosel, Jü rgen; Georgiou, Theoni K.; Regoutz, Anna; Payne, David J; Stevens, Molly M.; Ryan, Mary P.; Porter, Alexandra E; Dunlop, Iain E

    2017-01-01

    Iron oxide nanostructures have been widely developed for biomedical applications, due to their magnetic properties and biocompatibility. In clinical application, the stabilization of these nanostructures against aggregation and non-specific interactions is typically achieved using weakly anchored polysaccharides, with better-defined and more strongly anchored synthetic polymers not commercially adopted due to complexity of synthesis and use. Here, we show for the first time stabilization and biocompatibilization of iron oxide nanoparticles by a synthetic homopolymer with strong surface anchoring and a history of clinical use in other applications, poly(2-methacryloyloxyethyl phosphorylcholine) (poly(MPC)). For the commercially important case of spherical particles, binding of poly(MPC) to iron oxide surfaces and highly effective individualization of magnetite nanoparticles (20 nm) are demonstrated. Next-generation high-aspect-ratio nanowires (both magnetite/maghemite and core-shell iron/iron oxide) are furthermore stabilized by poly(MPC) coating, with nanowire cytotoxicity at large concentrations significantly reduced. The flexibility of the synthesis approach is demonstrated by incorporating functionality into the poly(MPC) chain via random copolymerization with an alkyne-containing monomer for click chemistry. Taking these results together, poly(MPC) homopolymers and random copolymers offer a significant improvement over current iron oxide nanoformulations, combining straightforward synthesis, strong surface-anchoring and well-defined molecular weight.

  1. Hyperelastic "bone": A highly versatile, growth factor-free, osteoregenerative, scalable, and surgically friendly biomaterial.

    Science.gov (United States)

    Jakus, Adam E; Rutz, Alexandra L; Jordan, Sumanas W; Kannan, Abhishek; Mitchell, Sean M; Yun, Chawon; Koube, Katie D; Yoo, Sung C; Whiteley, Herbert E; Richter, Claus-Peter; Galiano, Robert D; Hsu, Wellington K; Stock, Stuart R; Hsu, Erin L; Shah, Ramille N

    2016-09-28

    Despite substantial attention given to the development of osteoregenerative biomaterials, severe deficiencies remain in current products. These limitations include an inability to adequately, rapidly, and reproducibly regenerate new bone; high costs and limited manufacturing capacity; and lack of surgical ease of handling. To address these shortcomings, we generated a new, synthetic osteoregenerative biomaterial, hyperelastic "bone" (HB). HB, which is composed of 90 weight % (wt %) hydroxyapatite and 10 wt % polycaprolactone or poly(lactic-co-glycolic acid), could be rapidly three-dimensionally (3D) printed (up to 275 cm³/hour) from room temperature extruded liquid inks. The resulting 3D-printed HB exhibited elastic mechanical properties (~32 to 67% strain to failure, ~4 to 11 MPa elastic modulus), was highly absorbent (50% material porosity), supported cell viability and proliferation, and induced osteogenic differentiation of bone marrow-derived human mesenchymal stem cells cultured in vitro over 4 weeks without any osteo-inducing factors in the medium. We evaluated HB in vivo in a mouse subcutaneous implant model for material biocompatibility (7 and 35 days), in a rat posterolateral spinal fusion model for new bone formation (8 weeks), and in a large, non-human primate calvarial defect case study (4 weeks). HB did not elicit a negative immune response, became vascularized, quickly integrated with surrounding tissues, and rapidly ossified and supported new bone growth without the need for added biological factors. Copyright © 2016, American Association for the Advancement of Science.

  2. A scalable-low cost architecture for high gain beamforming antennas

    KAUST Repository

    Bakr, Omar

    2010-10-01

    Many state-of-the-art wireless systems, such as long distance mesh networks and high bandwidth networks using mm-wave frequencies, require high gain antennas to overcome adverse channel conditions. These networks could be greatly aided by adaptive beamforming antenna arrays, which can significantly simplify the installation and maintenance costs (e.g., by enabling automatic beam alignment). However, building large, low cost beamforming arrays is very complicated. In this paper, we examine the main challenges presented by large arrays, starting from electromagnetic and antenna design and proceeding to the signal processing and algorithms domain. We propose 3-dimensional antenna structures and hybrid RF/digital radio architectures that can significantly reduce the complexity and improve the power efficiency of adaptive array systems. We also present signal processing techniques based on adaptive filtering methods that enhance the robustness of these architectures. Finally, we present computationally efficient vector quantization techniques that significantly improve the interference cancellation capabilities of analog beamforming architectures. © 2010 IEEE.
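
    For context, a small numpy sketch of narrowband delay-and-sum beamforming for a uniform linear array, illustrating the array-gain principle that such architectures exploit; it is not the paper's hybrid RF/digital design, and all parameters are made up.

        # Steering weights for a uniform linear array are a phase ramp across
        # elements; on-target signals add coherently, off-target ones do not.
        import numpy as np

        n_elem, spacing = 16, 0.5            # element spacing in wavelengths
        theta = np.deg2rad(20.0)             # desired look direction

        k = 2 * np.pi * spacing * np.arange(n_elem) * np.sin(theta)
        w = np.exp(1j * k) / n_elem          # delay-and-sum (phase-shift) weights

        def array_response(angle):
            a = np.exp(1j * 2 * np.pi * spacing * np.arange(n_elem) * np.sin(angle))
            return abs(np.vdot(w, a))        # vdot conjugates the weights

        print(array_response(theta))                 # ~1.0: full gain on target
        print(array_response(np.deg2rad(-40.0)))     # small: off-target suppressed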

  3. A scalable-low cost architecture for high gain beamforming antennas

    KAUST Repository

    Bakr, Omar; Johnson, Mark; Jungdong Park,; Adabi, Ehsan; Jones, Kevin; Niknejad, Ali

    2010-01-01

    Many state-of-the-art wireless systems, such as long distance mesh networks and high bandwidth networks using mm-wave frequencies, require high gain antennas to overcome adverse channel conditions. These networks could be greatly aided by adaptive beamforming antenna arrays, which can significantly simplify the installation and maintenance costs (e.g., by enabling automatic beam alignment). However, building large, low cost beamforming arrays is very complicated. In this paper, we examine the main challenges presented by large arrays, starting from electromagnetic and antenna design and proceeding to the signal processing and algorithms domain. We propose 3-dimensional antenna structures and hybrid RF/digital radio architectures that can significantly reduce the complexity and improve the power efficiency of adaptive array systems. We also present signal processing techniques based on adaptive filtering methods that enhance the robustness of these architectures. Finally, we present computationally efficient vector quantization techniques that significantly improve the interference cancellation capabilities of analog beamforming architectures. © 2010 IEEE.

  4. A scalable pipeline for highly effective genetic modification of a malaria parasite

    KAUST Repository

    Pfander, Claudia

    2011-10-23

    In malaria parasites, the systematic experimental validation of drug and vaccine targets by reverse genetics is constrained by the inefficiency of homologous recombination and by the difficulty of manipulating adenine and thymine (A+T)-rich DNA of most Plasmodium species in Escherichia coli. We overcame these roadblocks by creating a high-integrity library of Plasmodium berghei genomic DNA (>77% A+T content) in a bacteriophage N15-based vector that can be modified efficiently using the lambda Red method of recombineering. We built a pipeline for generating P. berghei genetic modification vectors at genome scale in serial liquid cultures on 96-well plates. Vectors have long homology arms, which increase recombination frequency up to tenfold over conventional designs. The feasibility of efficient genetic modification at scale will stimulate collaborative, genome-wide knockout and tagging programs for P. berghei. © 2011 Nature America, Inc. All rights reserved.

  5. A scalable pipeline for highly effective genetic modification of a malaria parasite

    KAUST Repository

    Pfander, Claudia; Anar, Burcu; Schwach, Frank; Otto, Thomas D.; Brochet, Mathieu; Volkmann, Katrin; Quail, Michael A.; Pain, Arnab; Rosen, Barry; Skarnes, William; Rayner, Julian C.; Billker, Oliver

    2011-01-01

    In malaria parasites, the systematic experimental validation of drug and vaccine targets by reverse genetics is constrained by the inefficiency of homologous recombination and by the difficulty of manipulating adenine and thymine (A+T)-rich DNA of most Plasmodium species in Escherichia coli. We overcame these roadblocks by creating a high-integrity library of Plasmodium berghei genomic DNA (>77% A+T content) in a bacteriophage N15-based vector that can be modified efficiently using the lambda Red method of recombineering. We built a pipeline for generating P. berghei genetic modification vectors at genome scale in serial liquid cultures on 96-well plates. Vectors have long homology arms, which increase recombination frequency up to tenfold over conventional designs. The feasibility of efficient genetic modification at scale will stimulate collaborative, genome-wide knockout and tagging programs for P. berghei. © 2011 Nature America, Inc. All rights reserved.

  6. PIConGPU - A highly-scalable particle-in-cell implementation for GPU clusters

    Energy Technology Data Exchange (ETDEWEB)

    Bussmann, Michael; Burau, Heiko; Debus, Alexander; Huebl, Axel; Kluge, Thomas; Pausch, Richard; Schmeisser, Nils; Steiniger, Klaus; Widera, Rene; Wyderka, Nikolai; Schramm, Ulrich; Cowan, Thomas [HZDR, Dresden (Germany); Schneider, Benjamin [HZDR, Dresden (Germany); TU Dresden (Germany); Schmitt, Felix [NVIDIA, Austin, TX (United States); Grottel, Sebastian; Gumhold, Stefan [TU Dresden (Germany); Juckeland, Guido; Angel, Wolfgang [TU Dresden (Germany); ZIH, Dresden (Germany)

    2013-07-01

    PIConGPU can handle large-scale simulations of laser plasma and astrophysical plasma dynamics on GPU clusters with thousands of GPUs. High data throughput makes it possible to conduct large parameter surveys, but it also makes it necessary to rethink data analysis and look for new ways of analyzing large simulation data sets. The speedup seen on GPUs enables scientists to add physical effects to their code that until recently had been too computationally demanding. We present recent results obtained with PIConGPU, discuss scaling behaviour and the most important building blocks of the code, and describe new physics modules recently added. In addition, we give an outlook on data analysis, resilience and load balancing with PIConGPU.

  7. Scalable whole-exome sequencing of cell-free DNA reveals high concordance with metastatic tumors.

    Science.gov (United States)

    Adalsteinsson, Viktor A; Ha, Gavin; Freeman, Samuel S; Choudhury, Atish D; Stover, Daniel G; Parsons, Heather A; Gydush, Gregory; Reed, Sarah C; Rotem, Denisse; Rhoades, Justin; Loginov, Denis; Livitz, Dimitri; Rosebrock, Daniel; Leshchiner, Ignaty; Kim, Jaegil; Stewart, Chip; Rosenberg, Mara; Francis, Joshua M; Zhang, Cheng-Zhong; Cohen, Ofir; Oh, Coyin; Ding, Huiming; Polak, Paz; Lloyd, Max; Mahmud, Sairah; Helvie, Karla; Merrill, Margaret S; Santiago, Rebecca A; O'Connor, Edward P; Jeong, Seong H; Leeson, Rachel; Barry, Rachel M; Kramkowski, Joseph F; Zhang, Zhenwei; Polacek, Laura; Lohr, Jens G; Schleicher, Molly; Lipscomb, Emily; Saltzman, Andrea; Oliver, Nelly M; Marini, Lori; Waks, Adrienne G; Harshman, Lauren C; Tolaney, Sara M; Van Allen, Eliezer M; Winer, Eric P; Lin, Nancy U; Nakabayashi, Mari; Taplin, Mary-Ellen; Johannessen, Cory M; Garraway, Levi A; Golub, Todd R; Boehm, Jesse S; Wagle, Nikhil; Getz, Gad; Love, J Christopher; Meyerson, Matthew

    2017-11-06

    Whole-exome sequencing of cell-free DNA (cfDNA) could enable comprehensive profiling of tumors from blood but the genome-wide concordance between cfDNA and tumor biopsies is uncertain. Here we report ichorCNA, software that quantifies tumor content in cfDNA from 0.1× coverage whole-genome sequencing data without prior knowledge of tumor mutations. We apply ichorCNA to 1439 blood samples from 520 patients with metastatic prostate or breast cancers. In the earliest tested sample for each patient, 34% of patients have ≥10% tumor-derived cfDNA, sufficient for standard coverage whole-exome sequencing. Using whole-exome sequencing, we validate the concordance of clonal somatic mutations (88%), copy number alterations (80%), mutational signatures, and neoantigens between cfDNA and matched tumor biopsies from 41 patients with ≥10% cfDNA tumor content. In summary, we provide methods to identify patients eligible for comprehensive cfDNA profiling, revealing its applicability to many patients, and demonstrate high concordance of cfDNA and metastatic tumor whole-exome sequencing.
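
    For readers who want to see how a headline concordance figure of this kind can be computed from two call sets, a minimal sketch follows; the data layout and the example mutations are hypothetical and are not taken from the paper.

        # Illustrative sketch (not the authors' code): concordance of clonal
        # somatic mutation calls between cfDNA and a matched tumor biopsy.

        def mutation_concordance(cfdna_calls, tumor_calls):
            """Fraction of tumor-biopsy clonal mutations also found in cfDNA."""
            cfdna = set(cfdna_calls)
            tumor = set(tumor_calls)
            if not tumor:
                return float("nan")
            return len(cfdna & tumor) / len(tumor)

        # Hypothetical calls keyed by (chromosome, position, ref, alt):
        tumor = [("17", 7577120, "C", "T"), ("3", 178936091, "G", "A")]
        cfdna = [("17", 7577120, "C", "T")]
        print(mutation_concordance(cfdna, tumor))  # 0.5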

  8. High-throughput miniaturized bioreactors for cell culture process development: reproducibility, scalability, and control.

    Science.gov (United States)

    Rameez, Shahid; Mostafa, Sigma S; Miller, Christopher; Shukla, Abhinav A

    2014-01-01

    Decreasing the timeframe for cell culture process development has been a key goal toward accelerating biopharmaceutical development. Advanced Microscale Bioreactors (ambr™) is an automated micro-bioreactor system of miniature single-use bioreactors with a 10-15 mL working volume, controlled by an automated workstation. This system was compared to conventional bioreactor systems in terms of its performance for the production of a monoclonal antibody in a recombinant Chinese Hamster Ovary cell line. The miniaturized bioreactor system was found to produce cell culture profiles that matched across scales to 3 L, 15 L, and 200 L stirred tank bioreactors. The processes used in this article involve complex feed formulations, perturbations, and strict process control within the design space, in line with processes used for commercial-scale manufacturing of biopharmaceuticals. Changes to important process parameters in ambr™ resulted in predictable cell growth, viability and titer changes, which were in good agreement with data from the conventional larger-scale bioreactors. ambr™ was found to successfully reproduce variations in temperature, dissolved oxygen (DO), and pH conditions similar to the larger bioreactor systems. Additionally, the miniature bioreactors were found to react well to perturbations in pH and DO through adjustments to the Proportional and Integral control loop. The data presented here demonstrate the utility of the ambr™ system as a high-throughput system for cell culture process development. © 2014 American Institute of Chemical Engineers.
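
    The closing remark about Proportional and Integral control can be made concrete with a toy loop; the gains, the dissolved-oxygen plant response, and all names below are illustrative assumptions, not ambr™ internals.

        # Minimal sketch of a proportional-integral (PI) control loop of the
        # kind used to hold DO or pH at a setpoint in a bioreactor.

        def pi_step(setpoint, measured, integral, kp=0.8, ki=0.05, dt=1.0):
            """One PI update; returns (actuator_output, updated_integral)."""
            error = setpoint - measured
            integral += error * dt
            return kp * error + ki * integral, integral

        do_setpoint, do_value, integral = 40.0, 35.0, 0.0   # % dissolved oxygen
        for _ in range(10):
            sparge, integral = pi_step(do_setpoint, do_value, integral)
            do_value += 0.2 * sparge          # toy first-order plant response
        print(round(do_value, 1))             # converges toward the setpoint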

  9. A power scalable PLL frequency synthesizer for high-speed ΔΣ ADC

    International Nuclear Information System (INIS)

    Han Siyang; Chi Baoyong; Zhang Xinwang; Wang Zhihua

    2014-01-01

    A 35–130 MHz/300–360 MHz phase-locked loop frequency synthesizer for ΔΣ analog-to-digital converter (ADC) in 65 nm CMOS is presented. The frequency synthesizer can work in low phase-noise mode (300–360 MHz) or in low-power mode (35–130 MHz) to satisfy the ADC's requirements. To switch between these two modes, a high frequency GHz LC VCO followed by a divided-by-four frequency divider and a low frequency ring VCO followed by a divided-by-two frequency divider are integrated on-chip. The measured results show that the frequency synthesizer achieves a phase-noise of −132 dBc/Hz at 1 MHz offset and an integrated RMS jitter of 1.12 ps with 1.74 mW power consumption from a 1.2 V power supply in low phase-noise mode. In low-power mode, the frequency synthesizer achieves a phase-noise of −112 dBc/Hz at 1 MHz offset and an integrated RMS jitter of 7.23 ps with 0.92 mW power consumption from a 1.2 V power supply. (semiconductor integrated circuits)

  10. VISPA2: a scalable pipeline for high-throughput identification and annotation of vector integration sites.

    Science.gov (United States)

    Spinozzi, Giulio; Calabria, Andrea; Brasca, Stefano; Beretta, Stefano; Merelli, Ivan; Milanesi, Luciano; Montini, Eugenio

    2017-11-25

    Bioinformatics tools designed to identify lentiviral or retroviral vector insertion sites in the genome of host cells are used to address the safety and long-term efficacy of hematopoietic stem cell gene therapy applications and to study the clonal dynamics of hematopoietic reconstitution. The increasing number of gene therapy clinical trials, combined with the increasing amount of Next Generation Sequencing data aimed at identifying integration sites, requires computational software that is both highly accurate and efficient, able to correctly process "big data" in a reasonable computational time. Here we present VISPA2 (Vector Integration Site Parallel Analysis, version 2), the latest optimized computational pipeline for integration site identification and analysis, with the following features: (1) sequence analysis for integration site processing that is fully compliant with paired-end reads and includes a sequence quality filter before and after alignment on the target genome; (2) a heuristic algorithm that reduces false-positive integration sites at the nucleotide level, limiting the impact of Polymerase Chain Reaction or trimming/alignment artifacts; (3) a classification and annotation module for integration sites; (4) a user-friendly web interface as the researcher front-end, allowing integration site analyses to be performed without computational skills; (5) time speedup of all steps through parallelization (Hadoop free). We tested VISPA2 performance using simulated and real datasets of lentiviral vector integration sites, previously obtained from patients enrolled in a hematopoietic stem cell gene therapy clinical trial, and compared the results with other preexisting tools for integration site analysis. On the computational side, VISPA2 showed a >6-fold speedup and improved precision and recall metrics (1 and 0.97, respectively) compared to previously developed computational pipelines. These performances indicate that VISPA2 is a fast, reliable and user-friendly tool for integration site analysis.
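
    Feature (2) above can be illustrated with a small sketch of the general idea: putative integration sites only a few nucleotides apart (possible PCR or trimming/alignment artifacts) are folded into the best-supported position. The window size and data layout are assumptions of this sketch, not VISPA2's actual code.

        from collections import Counter

        def merge_integration_sites(sites, window=7):
            """sites: iterable of (chromosome, position); returns consensus sites."""
            merged = []
            for (chrom, pos), count in sorted(Counter(sites).items()):
                if merged and merged[-1][0] == chrom and pos - merged[-1][1] <= window:
                    # fold into the better-supported position, pooling read support
                    if count > merged[-1][2]:
                        merged[-1][1] = pos
                    merged[-1][2] += count
                else:
                    merged.append([chrom, pos, count])
            return [(c, p) for c, p, _ in merged]

        print(merge_integration_sites([("chr1", 1000)] * 5 + [("chr1", 1003)]))
        # -> [('chr1', 1000)]  (the 1-read site at 1003 folds into 1000)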

  11. Towards Scalable Cost-Effective Service and Survivability Provisioning in Ultra High Speed Networks

    Energy Technology Data Exchange (ETDEWEB)

    Bin Wang

    2006-12-01

    Optical transport networks based on wavelength division multiplexing (WDM) are considered to be the most appropriate choice for future Internet backbone. On the other hand, future DOE networks are expected to have the ability to dynamically provision on-demand survivable services to suit the needs of various high performance scientific applications and remote collaboration. Since a failure in a WDM network such as a cable cut may result in a tremendous amount of data loss, efficient protection of data transport in WDM networks is therefore essential. As the backbone network is moving towards GMPLS/WDM optical networks, the unique requirement to support DOE’s science mission results in challenging issues that are not directly addressed by existing networking techniques and methodologies. The objectives of this project were to develop cost effective protection and restoration mechanisms based on dedicated path, shared path, preconfigured cycle (p-cycle), and so on, to deal with single failure, dual failure, and shared risk link group (SRLG) failure, under different traffic and resource requirement models; to devise efficient service provisioning algorithms that deal with application specific network resource requirements for both unicast and multicast; to study various aspects of traffic grooming in WDM ring and mesh networks to derive cost effective solutions while meeting application resource and QoS requirements; to design various diverse routing and multi-constrained routing algorithms, considering different traffic models and failure models, for protection and restoration, as well as for service provisioning; to propose and study new optical burst switched architectures and mechanisms for effectively supporting dynamic services; and to integrate research with graduate and undergraduate education. All objectives have been successfully met. This report summarizes the major accomplishments of this project. The impact of the project manifests in many aspects: First

  12. Highly Efficient and Scalable Compound Decomposition of Two-Electron Integral Tensor and Its Application in Coupled Cluster Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Peng, Bo [William R. Wiley Environmental; Kowalski, Karol [William R. Wiley Environmental

    2017-08-11

    The representation and storage of two-electron integral tensors are vital in large-scale applications of accurate electronic structure methods. Low-rank representation and efficient storage strategies for integral tensors can significantly reduce the numerical overhead and consequently the time-to-solution of these methods. In this paper, by combining pivoted incomplete Cholesky decomposition (CD) with a follow-up truncated singular value decomposition (SVD), we develop a decomposition strategy to approximately represent the two-electron integral tensor in terms of low-rank vectors. A systematic benchmark test on a series of 1-D, 2-D, and 3-D carbon-hydrogen systems demonstrates the high efficiency and scalability of the compound two-step decomposition of the two-electron integral tensor in our implementation. For the size of the atomic basis set N_b ranging from ~100 up to ~2,000, the observed numerical scaling of our implementation shows O(N_b^{2.5~3}) versus the O(N_b^{3~4}) of single CD in most other implementations. More importantly, this decomposition strategy can significantly reduce the storage requirement of the atomic-orbital (AO) two-electron integral tensor from O(N_b^4) to O(N_b^2 log_{10}(N_b)) with moderate decomposition thresholds. The accuracy tests have been performed using ground- and excited-state formulations of the coupled-cluster formalism employing single and double excitations (CCSD) on several benchmark systems, including the C_{60} molecule described by nearly 1,400 basis functions. The results show that the decomposition thresholds can generally be set to 10^{-4} to 10^{-3} to give an acceptable compromise between efficiency and accuracy.
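
    A compact numerical illustration of the two-step scheme (pivoted incomplete Cholesky of a positive semidefinite matrix, followed by a truncated SVD of the resulting factor) is sketched below; the small dense random matrix stands in for the reshaped AO integral tensor and is an assumption of the sketch, not the paper's implementation.

        import numpy as np

        def pivoted_cholesky(A, tol=1e-6):
            """Incomplete Cholesky with diagonal pivoting: A ~= L @ L.T (low rank)."""
            n = A.shape[0]
            d = np.diag(A).astype(float).copy()
            perm = list(range(n))
            L = np.zeros((n, n))
            for k in range(n):
                j = k + int(np.argmax([d[p] for p in perm[k:]]))
                perm[k], perm[j] = perm[j], perm[k]      # pivot on largest diagonal
                p = perm[k]
                if d[p] <= tol:
                    return L[:, :k]                      # rank-k factor found
                L[p, k] = np.sqrt(d[p])
                for r in perm[k + 1:]:
                    L[r, k] = (A[r, p] - L[r, :k] @ L[p, :k]) / L[p, k]
                    d[r] -= L[r, k] ** 2
            return L

        rng = np.random.default_rng(0)
        B = rng.standard_normal((60, 8))
        A = B @ B.T                                      # PSD stand-in, rank 8
        L = pivoted_cholesky(A)                          # step 1: pivoted CD
        U, s, _ = np.linalg.svd(L, full_matrices=False)  # step 2: truncated SVD
        keep = s > 1e-8 * s[0]
        V = U[:, keep] * s[keep]                         # final low-rank vectors
        print(L.shape, V.shape, np.allclose(A, V @ V.T))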

  13. A highly scalable particle tracking algorithm using partitioned global address space (PGAS) programming for extreme-scale turbulence simulations

    Science.gov (United States)

    Buaria, D.; Yeung, P. K.

    2017-12-01

    A new parallel algorithm utilizing a partitioned global address space (PGAS) programming model to achieve high scalability is reported for particle tracking in direct numerical simulations of turbulent fluid flow. The work is motivated by the desire to obtain Lagrangian information necessary for the study of turbulent dispersion at the largest problem sizes feasible on current and next-generation multi-petaflop supercomputers. A large population of fluid particles is distributed among parallel processes dynamically, based on instantaneous particle positions such that all of the interpolation information needed for each particle is available either locally on its host process or neighboring processes holding adjacent sub-domains of the velocity field. With cubic splines as the preferred interpolation method, the new algorithm is designed to minimize the need for communication, by transferring between adjacent processes only those spline coefficients determined to be necessary for specific particles. This transfer is implemented very efficiently as a one-sided communication, using Co-Array Fortran (CAF) features which facilitate small data movements between different local partitions of a large global array. The cost of monitoring transfer of particle properties between adjacent processes for particles migrating across sub-domain boundaries is found to be small. Detailed benchmarks are obtained on the Cray petascale supercomputer Blue Waters at the University of Illinois, Urbana-Champaign. For operations on the particles in an 8192³ simulation (0.55 trillion grid points) on 262,144 Cray XE6 cores, the new algorithm is found to be orders of magnitude faster relative to a prior algorithm in which each particle is tracked by the same parallel process at all times. This large speedup reduces the additional cost of tracking of order 300 million particles to just over 50% of the cost of computing the Eulerian velocity field at this scale. Improving support of PGAS models on
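
    A serial Python sketch of the owner-computes idea behind the algorithm follows: particles are assigned to the process owning the sub-domain that contains them, and velocities are interpolated from that local grid partition. Grid sizes, the toy velocity field, and the use of linear interpolation (the paper uses cubic splines, and one-sided PGAS/CAF transfers instead of this loop) are all assumptions of the sketch.

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        nproc_per_dim, box = 4, 2 * np.pi                  # 4 x 4 x 4 = 64 "processes"
        rng = np.random.default_rng(1)
        particles = rng.uniform(0, box, size=(1000, 3))

        # owner rank from the instantaneous particle position (regular decomposition)
        ijk = np.floor(particles / (box / nproc_per_dim)).astype(int)
        owner = (ijk[:, 0] * nproc_per_dim + ijk[:, 1]) * nproc_per_dim + ijk[:, 2]

        # toy velocity field on a 64^3 grid; linear interpolation stands in
        # for the cubic splines used in the paper
        x = np.linspace(0, box, 64)
        u = np.sin(x)[:, None, None] * np.cos(x)[None, :, None] * np.ones(64)
        interp = RegularGridInterpolator((x, x, x), u)

        # each "process" interpolates only the particles it owns
        velocities = {int(r): interp(particles[owner == r]) for r in np.unique(owner)}
        print(len(velocities), sum(v.size for v in velocities.values()))  # owners, 1000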

  14. Investigating the Role of Biogeochemical Processes in the Northern High Latitudes on Global Climate Feedbacks Using an Efficient Scalable Earth System Model

    Energy Technology Data Exchange (ETDEWEB)

    Jain, Atul K. [Univ. of Illinois, Urbana-Champaign, IL (United States)

    2016-09-14

    The overall objective of this DOE-funded project is to combine scientific and computational challenges in climate modeling by expanding our understanding of biogeophysical-biogeochemical processes and their interactions in the northern high latitudes (NHLs) using an earth system modeling (ESM) approach, and by adopting an adaptive parallel runtime system in an ESM to achieve efficient and scalable climate simulations through improved load-balancing algorithms.

  15. Scalable Directed Assembly of Highly Crystalline 2,7-Dioctyl[1]benzothieno[3,2-b][1]benzothiophene (C8-BTBT) Films.

    Science.gov (United States)

    Chai, Zhimin; Abbasi, Salman A; Busnaina, Ahmed A

    2018-05-30

    Assembly of organic semiconductors with ordered crystal structure has been actively pursued for electronics applications such as organic field-effect transistors (OFETs). Among various film deposition methods, solution-based film growth from small molecule semiconductors is preferable because of its low material and energy consumption, low cost, and scalability. Here, we show scalable and controllable directed assembly of highly crystalline 2,7-dioctyl[1]benzothieno[3,2-b][1]benzothiophene (C8-BTBT) films via a dip-coating process. Self-aligned stripe patterns with tunable thickness and morphology over a centimeter scale are obtained by adjusting two governing parameters: the pulling speed of a substrate and the solution concentration. OFETs are fabricated using the C8-BTBT films assembled at various conditions. A field-effect hole mobility up to 3.99 cm² V⁻¹ s⁻¹ is obtained. Owing to the highly scalable crystalline film formation, the dip-coating directed assembly process could be a great candidate for manufacturing next-generation electronics. Meanwhile, the film formation mechanism discussed in this paper could provide a general guideline to prepare other organic semiconducting films from small molecule solutions.

  16. Scalable cloud without dedicated storage

    Science.gov (United States)

    Batkovich, D. V.; Kompaniets, M. V.; Zarochentsev, A. K.

    2015-05-01

    We present a prototype of a scalable computing cloud. It is intended to be deployed on a cluster without separate dedicated storage, with the dedicated storage replaced by distributed software storage. In addition, all cluster nodes are used both as computing nodes and as storage nodes. This solution increases utilization of the cluster resources as well as improving the fault tolerance and performance of the distributed storage. Another advantage of this solution is high scalability with relatively low initial and maintenance costs. The solution is built from open source components such as OpenStack, CEPH, etc.

  17. Highly nitrogen-doped carbon capsules: scalable preparation and high-performance applications in fuel cells and lithium ion batteries.

    Science.gov (United States)

    Hu, Chuangang; Xiao, Ying; Zhao, Yang; Chen, Nan; Zhang, Zhipan; Cao, Minhua; Qu, Liangti

    2013-04-07

    Highly nitrogen-doped carbon capsules (hN-CCs) have been successfully prepared by using inexpensive melamine and glyoxal as precursors via solvothermal reaction and carbonization. With great promise for large-scale production, the hN-CCs, having a large surface area and high-level nitrogen content (N/C atomic ratio of ca. 13%), possess superior crossover resistance, selective activity and catalytic stability towards the oxygen reduction reaction for fuel cells in alkaline medium. As a new anode material in lithium-ion batteries, hN-CCs also exhibit excellent cycle performance and high rate capacity, with a reversible capacity as high as 1046 mA h g⁻¹ at a current density of 50 mA g⁻¹ after 50 cycles. These features make the hN-CCs developed in this study promising substitutes for the expensive noble metal catalysts in next-generation alkaline fuel cells, and advanced electrode materials in lithium-ion batteries.

  18. Fast volume reconstruction in positron emission tomography: Implementation of four algorithms on a high-performance scalable parallel platform

    International Nuclear Information System (INIS)

    Egger, M.L.; Scheurer, A.H.; Joseph, C.

    1996-01-01

    The issue of long reconstruction times in PET has been addressed from several points of view, resulting in an affordable dedicated system capable of handling routine 3D reconstruction in a few minutes per frame: on the hardware side using fast processors and a parallel architecture, and on the software side, using efficient implementations of computationally less intensive algorithms. Execution times obtained for the PRT-1 data set on a parallel system of five hybrid nodes, each combining an Alpha processor for computation and a transputer for communication, are the following (256 sinograms of 96 views by 128 radial samples): Ramp algorithm 56 s, Favor 81 s and reprojection algorithm of Kinahan and Rogers 187 s. The implementation of fast rebinning algorithms has shown our hardware platform to become communications-limited; they execute faster on a conventional single-processor Alpha workstation: single-slice rebinning 7 s, Fourier rebinning 22 s, 2D filtered backprojection 5 s. The scalability of the system has been demonstrated, and a saturation effect at network sizes above ten nodes has become visible; new T9000-based products lifting most of the constraints on network topology and link throughput are expected to result in improved parallel efficiency and scalability properties
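
    For readers unfamiliar with the algorithms being timed, the following few lines sketch ramp-filtered 2D backprojection using scikit-image as a modern stand-in (the 96-view geometry loosely mirrors the PRT-1 data set; this is not the 1996 parallel implementation).

        import numpy as np
        from skimage.data import shepp_logan_phantom
        from skimage.transform import radon, iradon

        image = shepp_logan_phantom()                          # 400x400 test object
        theta = np.linspace(0.0, 180.0, 96, endpoint=False)    # 96 views
        sinogram = radon(image, theta=theta)                   # forward projection
        reconstruction = iradon(sinogram, theta=theta, filter_name="ramp")
        print(reconstruction.shape)                            # (400, 400)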

  19. High performance flexible metal oxide/silver nanowire based transparent conductive films by a scalable lamination-assisted solution method

    Directory of Open Access Journals (Sweden)

    Hua Yu

    2017-03-01

    Flexible MoO3/silver nanowire (AgNW)/MoO3/TiO2/Epoxy electrodes with performance comparable to ITO were fabricated by a scalable solution-processed method with lamination assistance for transparent and conductive applications. Silver nanoparticle-based electrodes were also prepared for comparison. Using a simple spin-coating and lamination-assisted planarization method, a full solution-based approach allows preparation of AgNW-based composite electrodes at temperatures as low as 140 °C. The resulting flexible AgNW-based electrodes exhibit a higher transmittance of 82% at 550 nm and a lower sheet resistance of about 12–15 Ω sq⁻¹, in comparison with the values of 68% and 22–25 Ω sq⁻¹, respectively, for AgNP-based electrodes. Scanning electron microscopy (SEM) and atomic force microscopy (AFM) reveal that the multi-stacked metal-oxide layers embedded with the AgNWs possess low surface roughness (<15 nm). The AgNW/MoO3 composite network could enhance the charge transport and collection efficiency by broadening the lateral conduction range, owing to the formation of an efficient charge-transport network from long nanowires. In consideration of the manufacturing cost, the lamination-assisted solution-processed method is cost-effective and scalable, which is desirable for large-area fabrication. In view of the materials cost and comparable performance, these AgNW-based transparent and conductive electrodes are a potential alternative to ITO for various optoelectronic applications.

  20. Scalable Production of the Silicon-Tin Yin-Yang Hybrid Structure with Graphene Coating for High Performance Lithium-Ion Battery Anodes.

    Science.gov (United States)

    Jin, Yan; Tan, Yingling; Hu, Xiaozhen; Zhu, Bin; Zheng, Qinghui; Zhang, Zijiao; Zhu, Guoying; Yu, Qian; Jin, Zhong; Zhu, Jia

    2017-05-10

    Alloy anodes possessed of high theoretical capacity show great potential for next-generation advanced lithium-ion battery. Even though huge volume change during lithium insertion and extraction leads to severe problems, such as pulverization and an unstable solid-electrolyte interphase (SEI), various nanostructures including nanoparticles, nanowires, and porous networks can address related challenges to improve electrochemical performance. However, the complex and expensive fabrication process hinders the widespread application of nanostructured alloy anodes, which generate an urgent demand of low-cost and scalable processes to fabricate building blocks with fine controls of size, morphology, and porosity. Here, we demonstrate a scalable and low-cost process to produce a porous yin-yang hybrid composite anode with graphene coating through high energy ball-milling and selective chemical etching. With void space to buffer the expansion, the produced functional electrodes demonstrate stable cycling performance of 910 mAh g⁻¹ over 600 cycles at a rate of 0.5C for Si-graphene "yin" particles and 750 mAh g⁻¹ over 300 cycles at 0.2C for Sn-graphene "yang" particles. Therefore, we open up a new approach to fabricate alloy anode materials at low-cost, low-energy consumption, and large scale. This type of porous silicon or tin composite with graphene coating can also potentially play a significant role in thermoelectrics and optoelectronics applications.

  1. Performance improvement and better scalability of wet-recessed and wet-oxidized AlGaN/GaN high electron mobility transistors

    Science.gov (United States)

    Takhar, Kuldeep; Meer, Mudassar; Upadhyay, Bhanu B.; Ganguly, Swaroop; Saha, Dipankar

    2017-05-01

    We have demonstrated that a thin layer of Al2O3 grown by wet-oxidation of a wet-recessed AlGaN barrier layer in an AlGaN/GaN heterostructure can significantly improve the performance of GaN based high electron mobility transistors (HEMTs). The wet-etching leads to a damage-free recession of the gate region and compensates for the decreased gate capacitance and increased gate leakage. The performance improvement is manifested as an increase in the saturation drain current, transconductance, and unity current gain frequency (fT). This is further augmented with a large decrease in the subthreshold current. The performance improvement is primarily ascribed to an increase in the effective velocity in the two-dimensional electron gas without sacrificing gate capacitance, which makes the wet-recessed gate oxide-HEMTs much more scalable in comparison to their conventional counterparts. The improved scalability leads to an increase in the product of unity current gain frequency and gate length (fT × Lg).

  2. Scalable fabrication of high-power graphene micro-supercapacitors for flexible and on-chip energy storage

    Science.gov (United States)

    El-Kady, Maher F.; Kaner, Richard B.

    2013-02-01

    The rapid development of miniaturized electronic devices has increased the demand for compact on-chip energy storage. Microscale supercapacitors have great potential to complement or replace batteries and electrolytic capacitors in a variety of applications. However, conventional micro-fabrication techniques have proven to be cumbersome in building cost-effective micro-devices, thus limiting their widespread application. Here we demonstrate a scalable fabrication of graphene micro-supercapacitors over large areas by direct laser writing on graphite oxide films using a standard LightScribe DVD burner. More than 100 micro-supercapacitors can be produced on a single disc in 30 min or less. The devices are built on flexible substrates for flexible electronics and on-chip uses that can be integrated with MEMS or CMOS in a single chip. Remarkably, miniaturizing the devices to the microscale results in enhanced charge-storage capacity and rate capability. These micro-supercapacitors demonstrate a power density of ~200 W cm⁻³, which is among the highest values achieved for any supercapacitor.

  3. Scalable synthesis of interconnected porous silicon/carbon composites by the Rochow reaction as high-performance anodes of lithium ion batteries.

    Science.gov (United States)

    Zhang, Zailei; Wang, Yanhong; Ren, Wenfeng; Tan, Qiangqiang; Chen, Yunfa; Li, Hong; Zhong, Ziyi; Su, Fabing

    2014-05-12

    Despite the promising application of porous Si-based anodes in future Li ion batteries, the large-scale synthesis of these materials is still a great challenge. A scalable synthesis of porous Si materials is presented by the Rochow reaction, which is commonly used to produce organosilane monomers for synthesizing organosilane products in the chemical industry. Commercial Si microparticles reacted with CH3Cl gas over various Cu-based catalyst particles to substantially create macropores within the unreacted Si, accompanied by carbon deposition to generate porous Si/C composites. Taking advantage of the interconnected porous structure and conductive carbon-coated layer after simple post-treatment, these composites as anodes exhibit high reversible capacity and long cycle life. It is expected that by integrating the organosilane synthesis process and controlling reaction conditions, the manufacture of porous Si-based anodes on an industrial scale is highly possible. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Crickets are not a free lunch: protein capture from scalable organic side-streams via high-density populations of Acheta domesticus.

    Directory of Open Access Journals (Sweden)

    Mark E Lundy

    It has been suggested that the ecological impact of crickets as a source of dietary protein is less than that of conventional forms of livestock due to their comparatively efficient feed conversion and ability to consume organic side-streams. This study measured the biomass output and feed conversion ratios of house crickets (Acheta domesticus) reared on diets that varied in quality, ranging from grain-based to highly cellulosic diets. The measurements were made at a much greater population scale and density than any previously reported in the scientific literature. The biomass accumulation was strongly influenced by the quality of the diet; populations reared on the most cellulosic diets experienced >99% mortality without reaching a harvestable size. Therefore, the potential for A. domesticus to sustainably supplement the global protein supply, beyond what is currently produced via grain-fed chickens, will depend on capturing regionally scalable organic side-streams of relatively high quality that are not currently being used for livestock production.

  5. Facile and Scalable Synthesis Method for High-Quality Few-Layer Graphene through Solution-Based Exfoliation of Graphite.

    Science.gov (United States)

    Wee, Boon-Hong; Wu, Tong-Fei; Hong, Jong-Dal

    2017-02-08

    Here we describe a facile and scalable method for preparing defect-free graphene sheets exfoliated from graphite using the positively charged polyelectrolyte precursor poly(p-phenylenevinylene) (PPV-pre) as a stabilizer in an aqueous solution. The graphene exfoliated by PPV-pre was apparently stabilized in the solution as a form of graphene/PPV-pre (denoted GPPV-pre), which remains in a homogeneous dispersion for over a year. The thickness values of 76% of 300 selected GPPV-pre flakes ranged from 1 to 10 nm, corresponding to between one and a few layers of graphene, with lateral dimensions of 1 to 2 μm. Furthermore, this approach was expected to yield a marked decrease in the density of defects in the electronic conjugation of graphene compared to that of graphene oxide (GO) obtained by Hummers' method. The positively charged GPPV-pre was employed to fabricate a poly(ethylene terephthalate) (PET) electrode layer-by-layer with negatively charged GO, yielding a (GPPV-pre/GO)n film electrode. The PPV-pre and GO in the (GPPV-pre/GO)n films were simultaneously converted using hydroiodic acid vapor to fully conjugated PPV and reduced graphene oxide (RGO), respectively. The electrical conductivity of the (GPPV/RGO)23 multilayer films was 483 S/cm, about three times greater than that of the (PPV/RGO)23 multilayer films (166 S/cm) comprising RGO prepared by Hummers' method. Furthermore, the superior electrical properties of GPPV were made evident when comparing the capacitive performances of two supercapacitor systems: (polyaniline (PANi)/RGO)30/(GPPV/RGO)23/PET (volumetric capacitance = 216 F/cm³; energy density = 19 mWh/cm³; maximum power density = 498 W/cm³) and (PANi/RGO)30/(PPV/RGO)23/PET (152 F/cm³; 9 mWh/cm³; 80 W/cm³).

  6. Secure Secondary Use of Clinical Data with Cloud-based NLP Services. Towards a Highly Scalable Research Infrastructure.

    Science.gov (United States)

    Christoph, J; Griebel, L; Leb, I; Engel, I; Köpcke, F; Toddenroth, D; Prokosch, H-U; Laufer, J; Marquardt, K; Sedlmayr, M

    2015-01-01

    The secondary use of clinical data provides large opportunities for clinical and translational research as well as quality assurance projects. For such purposes, it is necessary to provide a flexible and scalable infrastructure that is compliant with privacy requirements. The major goals of the cloud4health project are to define such an architecture, to implement a technical prototype that fulfills these requirements and to evaluate it with three use cases. The architecture provides components for multiple data provider sites such as hospitals to extract free text as well as structured data from local sources and de-identify such data for further anonymous or pseudonymous processing. Free text documentation is analyzed and transformed into structured information by text-mining services, which are provided within a cloud-computing environment. Thus, newly gained annotations can be integrated along with the already available structured data items and the resulting data sets can be uploaded to a central study portal for further analysis. Based on the architecture design, a prototype has been implemented and is under evaluation in three clinical use cases. Data from several hundred patients provided by a University Hospital and a private hospital chain have already been processed. Cloud4health has shown how existing components for secondary use of structured data can be complemented with text-mining in a privacy compliant manner. The cloud-computing paradigm allows a flexible and dynamically adaptable service provision that facilitates the adoption of services by data providers without own investments in respective hardware resources and software tools.
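
    A minimal sketch of the kind of rule-based de-identification performed at the provider site before free text leaves the hospital follows; the regular expressions, tokens, and example record are illustrative assumptions, not cloud4health's actual rule set.

        import re

        RULES = [
            (re.compile(r"\b\d{2}\.\d{2}\.\d{4}\b"), "[DATE]"),        # dd.mm.yyyy
            (re.compile(r"\b[A-Z][a-z]+,\s[A-Z][a-z]+\b"), "[NAME]"),  # Last, First
            (re.compile(r"\b\d{5}\s[A-Z][a-z]+\b"), "[CITY]"),         # zip + city
        ]

        def deidentify(text):
            """Replace direct identifiers with placeholder tokens."""
            for pattern, token in RULES:
                text = pattern.sub(token, text)
            return text

        print(deidentify("Mueller, Hans, geb. 01.02.1960, 91054 Erlangen"))
        # -> [NAME], geb. [DATE], [CITY]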

  7. Numeric Analysis for Relationship-Aware Scalable Streaming Scheme

    Directory of Open Access Journals (Sweden)

    Heung Ki Lee

    2014-01-01

    Frequent packet loss of media data is a critical problem that degrades the quality of streaming services over mobile networks. Packet loss invalidates frames containing lost packets and other related frames at the same time; this indirect loss caused by losing packets decreases the quality of streaming. A scalable streaming service can decrease the amount of dropped multimedia resulting from a single packet loss. Content providers typically divide one large media stream into several layers through a scalable streaming service and then provide each scalable layer to the user depending on the mobile network. A scalable streaming service also makes it possible to decode partial multimedia data depending on the relationship between frames and layers, and therefore provides a way to decrease the multimedia data wasted when one packet is lost. However, the hierarchical structure between frames and layers of scalable streams determines the service quality of the scalable streaming service. Even if all packets of a layer are transmitted successfully, they cannot be decoded in the absence of the reference frames and layers. A complicated relationship between frames and layers in a scalable stream therefore increases the volume of abandoned layers. To provide a high-quality scalable streaming service, we choose a proper relationship between scalable layers, as well as the amount of transmitted multimedia data, depending on the network situation. We prove that a simple scalable scheme outperforms a complicated scheme in an error-prone network. We suggest an adaptive set-top box (AdaptiveSTB) to lower the dependency between scalable layers in a scalable stream. Also, we provide a numerical model to obtain the indirect loss of multimedia data and apply it to various multimedia streams. Our AdaptiveSTB enhances the quality of a scalable streaming service by removing indirect loss.
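
    The flavor of such a numerical model can be sketched as follows; the dependency assumption (layer i of a frame is decodable only if layers 0..i all arrive) and all constants are this sketch's assumptions, not the paper's exact formulation.

        def expected_decodable(layer_sizes, p_loss, packets_per_layer):
            """Expected decodable bytes per frame under independent packet loss."""
            total, p_chain = 0.0, 1.0
            for size in layer_sizes:
                p_layer_ok = (1.0 - p_loss) ** packets_per_layer
                p_chain *= p_layer_ok        # all lower layers must survive too
                total += p_chain * size
            return total

        layers = [40_000, 20_000, 20_000]    # base + two enhancement layers
        for p in (0.01, 0.05, 0.10):
            print(p, round(expected_decodable(layers, p, packets_per_layer=10)))
        # deeper layer dependencies amplify the indirect loss as p grows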

  8. PKI Scalability Issues

    OpenAIRE

    Slagell, Adam J; Bonilla, Rafael

    2004-01-01

    This report surveys different PKI technologies such as PKIX and SPKI and the issues of PKI that affect scalability. Much focus is spent on certificate revocation methodologies and status verification systems such as CRLs, Delta-CRLs, CRS, Certificate Revocation Trees, Windowed Certificate Revocation, OCSP, SCVP and DVCS.

  9. N- and S-doped high surface area carbon derived from soya chunks as scalable and efficient electrocatalysts for oxygen reduction

    Science.gov (United States)

    Rana, Moumita; Arora, Gunjan; Gautam, Ujjal K.

    2015-02-01

    Highly stable, cost-effective electrocatalysts facilitating oxygen reduction are crucial for the commercialization of membrane-based fuel cell and battery technologies. Herein, we demonstrate that protein-rich soya chunks with a high content of N, S and P atoms are an excellent precursor for heteroatom-doped highly graphitized carbon materials. The materials are nanoporous, with a surface area exceeding 1000 m² g⁻¹, and they are tunable in doping quantities. These materials exhibit highly efficient catalytic performance toward oxygen reduction reaction (ORR) with an onset potential of -0.045 V and a half-wave potential of -0.211 V (versus a saturated calomel electrode) in a basic medium, which is comparable to commercial Pt catalysts and is better than other recently developed metal-free carbon-based catalysts. These exhibit complete methanol tolerance and a performance degradation of merely ~5% as compared to ~14% for a commercial Pt/C catalyst after continuous use for 3000 s at the highest reduction current. We found that the fraction of graphitic N increases at a higher graphitization temperature, leading to the near complete reduction of oxygen. It is believed that due to the easy availability of the precursor and the possibility of genetic engineering to homogeneously control the heteroatom distribution, the synthetic strategy is easily scalable, with further improvement in performance.

  10. A proposed scalable parallel open architecture data acquisition system for low to high rate experiments, test beams and all SSC detectors

    International Nuclear Information System (INIS)

    Barsotti, E.; Booth, A.; Bowden, M.; Swoboda, C.; Lockyer, N.; Vanberg, R.

    1990-01-01

    A new era of high-energy physics research is beginning requiring accelerators with much higher luminosities and interaction rates in order to discover new elementary particles. As a consequence, both orders of magnitude higher data rates from the detector and online processing power, well beyond the capabilities of current high energy physics data acquisition systems, are required. This paper describes a proposed new data acquisition system architecture which draws heavily from the communications industry, is totally parallel (i.e., without any bottlenecks), is capable of data rates of hundreds of Gigabytes per second from the detector and into an array of online processors (i.e., processor farm), and uses an open systems architecture to guarantee compatibility with future commercially available online processor farms. The main features of the proposed Scalable Parallel Open Architecture data acquisition system are standard interface ICs to detector subsystems wherever possible, fiber optic digital data transmission from the near-detector electronics, a self-routing parallel event builder, and the use of industry-supported and high-level language programmable processors in the proposed BCD system for both triggers and online filters. A brief status report of an ongoing project at Fermilab to build a prototype of the proposed data acquisition system architecture is given in the paper. The major component of the system, a self-routing parallel event builder, is described in detail
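
    The self-routing idea can be illustrated in a few lines: each fragment computes its destination from its event number, so no central event manager is required. The fragment format and farm size below are assumptions for illustration only, not the proposed hardware design.

        from collections import defaultdict

        N_BUILDERS = 8

        def route(event_number):
            """Each fragment routes itself: destination is a pure function."""
            return event_number % N_BUILDERS

        builders = defaultdict(list)
        fragments = [(evt, det) for evt in range(4) for det in ("calo", "tracker")]
        for event_number, detector in fragments:
            builders[route(event_number)].append((event_number, detector))

        # an event is fully built once all detector fragments have arrived
        print(dict(builders))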

  11. Facile and Scalable Fabrication of Highly Efficient Lead Iodide Perovskite Thin-Film Solar Cells in Air Using Gas Pump Method.

    Science.gov (United States)

    Ding, Bin; Gao, Lili; Liang, Lusheng; Chu, Qianqian; Song, Xiaoxuan; Li, Yan; Yang, Guanjun; Fan, Bin; Wang, Mingkui; Li, Chengxin; Li, Changjiu

    2016-08-10

    Control of the perovskite film formation process to produce high-quality organic-inorganic metal halide perovskite thin films with uniform morphology, high surface coverage, and minimum pinholes is of great importance to highly efficient solar cells. Herein, we report on large-area light-absorbing perovskite film fabrication with a new facile and scalable gas pump method. By decreasing the total pressure in the evaporation environment, the gas pump method can enhance the solvent evaporation rate by 8 times, thereby producing an extremely dense, uniform, and full-coverage perovskite thin film. The resulting planar perovskite solar cells achieve an impressive power conversion efficiency of up to 19.00%, with an average efficiency of 17.38 ± 0.70% for 32 devices with an area of 5 × 2 mm, and 13.91% for devices with a large area up to 1.13 cm². The perovskite films can be easily fabricated in air at a relative humidity of 45-55%, which gives this method a promising prospect in the industrial application of large-area perovskite solar panels.

  12. Developing Scalable Information Security Systems

    Directory of Open Access Journals (Sweden)

    Valery Konstantinovich Ablekov

    2013-06-01

    Existing physical security systems have a wide range of shortcomings, including high cost, a large number of vulnerabilities, and problems with modification and system support. This paper addresses the problem of developing systems without these drawbacks. The paper presents the architecture of an information security system that operates over the TCP/IP network protocol, including the ability to connect different types of devices and to integrate with existing security systems. The main advantage is a significant increase in system reliability and scalability, both vertical and horizontal, with minimal cost in both financial and time resources.
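
    As a hedged illustration of the described TCP/IP-based device integration (my sketch, not the authors' implementation), a central server accepting newline-delimited status messages from heterogeneous devices could look like this; the port and message format are assumptions.

        import socketserver

        class DeviceHandler(socketserver.StreamRequestHandler):
            def handle(self):
                for line in self.rfile:                    # one event per line
                    device_id, _, event = line.decode().strip().partition(" ")
                    print(f"device {device_id}: {event}")  # e.g. log or alert

        if __name__ == "__main__":
            with socketserver.TCPServer(("0.0.0.0", 9500), DeviceHandler) as srv:
                srv.serve_forever()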

  13. A facile and scalable strategy for synthesis of size-tunable NiCo2O4 with nanocoral-like architecture for high-performance supercapacitors

    International Nuclear Information System (INIS)

    Tao, Yan; Ruiyi, Li; Zaijun, Li; Yinjun, Fang

    2014-01-01

    Graphical abstract: We report a facile and scalable strategy for the synthesis of size-tunable NiCo2O4 with a nanocoral-like architecture. The unique structure improves the faradaic redox reaction and mass transfer, so the NiCo2O4 offers excellent electrochemical performance for supercapacitors. - Highlights: • We report a facile and scalable strategy for the synthesis of size-tunable NiCo2O4 with a nanocoral-like architecture. • The combination of microwave heating and tert-butanol as the medium creates ultrathin nickel/cobalt double hydroxide flower clusters. • The method is simple, rapid and efficient, and can be used for large-scale production of nanomaterials. • The size of the NiCo2O4 nanocorals can be controlled by adjusting the calcination temperature. • The unique structure enhances the rates of electron transfer and mass transport, and the NiCo2O4 shows high electrochemical performance. - Abstract: There is a great need to develop high-performance electroactive materials for supercapacitors. This study reports a facile and scalable strategy for the synthesis of size-tunable NiCo2O4 with a nanocoral-like architecture. Cobalt nitrate and nickel nitrate were dissolved in a tert-butanol solution and heated to the reflux state under microwave radiation. Ammonia was added dropwise to the mixed solution to form nickel/cobalt double hydroxides. The reaction completes within 15 min with a yield of 99.9%. The obtained double hydroxides display a flower-cluster-like ultrathin nanostructure. The double hydroxide was calcined into different NiCo2O4 products at different calcination temperatures: 400 °C, 500 °C, 600 °C and 700 °C. The resulting NiCo2O4 has a nanocoral-like architecture. Interestingly, the size of the corals can be easily controlled by adjusting the temperature. The NiCo2O4 prepared at 400 °C gives the minimum building-block size (10.2 nm) and maximum specific surface area (108.8 m² g⁻¹). The unique structure greatly improves the faradaic redox reaction and mass transfer, and the NiCo2O4 offers excellent electrochemical performance for supercapacitors.

  14. Rad-Hard, Miniaturized, Scalable, High-Voltage Switching Module for Power Applications

    Science.gov (United States)

    Adell, Philippe C.; Mojarradi, Mohammad; DelCastillo, Linda Y.; Vo, Tuan A.

    2011-01-01

    A paper discusses the successful development of a miniaturized radiation hardened high-voltage switching module operating at 2.5 kV suitable for space application. The high-voltage architecture was designed, fabricated, and tested using a commercial process that uses a unique combination of 0.25 micrometer CMOS (complementary metal oxide semiconductor) transistors and high-voltage lateral DMOS (diffusion metal oxide semiconductor) device with high breakdown voltage (greater than 650 V). The high-voltage requirements are achieved by stacking a number of DMOS devices within one module, while two modules can be placed in series to achieve higher voltages. Besides the high-voltage requirements, a second generation prototype is currently being developed to provide improved switching capabilities (rise time and fall time for full range of target voltages and currents), the ability to scale the output voltage to a desired value with good accuracy (few percent) up to 10 kV, to cover a wide range of high-voltage applications. In addition, to ensure miniaturization, long life, and high reliability, the assemblies will require intensive high-voltage electrostatic modeling (optimized E-field distribution throughout the module) to complete the proposed packaging approach and test the applicability of using advanced materials in a space-like environment (temperature and pressure) to help prevent potential arcing and corona due to high field regions. Finally, a single-event effect evaluation would have to be performed and single-event mitigation methods implemented at the design and system level or developed to ensure complete radiation hardness of the module.

  15. Environmentally benign and scalable synthesis of LiFePO4 nanoplates with high capacity and excellent rate cycling performance for lithium ion batteries

    International Nuclear Information System (INIS)

    Zhao, Chunsong; Wang, Lu-Ning; Chen, Jitao; Gao, Min

    2017-01-01

    Highlights: •LiFePO4 precursors were successfully prepared in the pure water phase under ambient atmosphere. •LiFePO4 nanostructures were also regenerated by recycling the filtrate. •LiFePO4/C delivers a high discharge capacity of 160 mAh g⁻¹ at 0.2 C and a high rate capacity of 107 mAh g⁻¹ at 20 C. •LiFePO4/C delivers a capacity retention rate close to 97% after 240 cycles at 20 C. -- Abstract: An economical and scalable synthesis route to LiFePO4 nanoplate precursors is successfully developed in the pure water phase under ambient atmosphere, without employing the environmentally toxic surfactants or the high temperatures and pressures of traditional hydrothermal or solvothermal methods; the route also involves recycling the filtrate to regenerate LiFePO4 nanoplate precursors and collecting the by-product Na2SO4. The LiFePO4 precursors present a plate-like morphology with mean thickness and length of 50–100 and 100–300 nm, respectively. After carbon coating, LiFePO4/C nanoparticles with a particle size around 200 nm are obtained, which exhibit a high discharge capacity of 160 mAh g⁻¹ at 0.2 C and 107 mAh g⁻¹ at 20 C. A high capacity retention close to 91% is reached after 500 cycles, even at a high current rate of 20 C, with a coulombic efficiency of 99.5%. This work suggests a simple, economical and environmentally benign method for preparing LiFePO4/C cathode material for power batteries that would be feasible for large-scale industrial production.

  16. Scalable, incremental learning with MapReduce parallelization for cell detection in high-resolution 3D microscopy data

    KAUST Repository

    Sung, Chul; Woo, Jongwook; Goodman, Matthew; Huffman, Todd; Choe, Yoonsuck

    2013-01-01

    Accurate estimation of neuronal count and distribution is central to the understanding of the organization and layout of cortical maps in the brain, and changes in the cell population induced by brain disorders. High-throughput 3D microscopy

  17. Scalable Resolution Display Walls

    KAUST Repository

    Leigh, Jason; Johnson, Andrew; Renambot, Luc; Peterka, Tom; Jeong, Byungil; Sandin, Daniel J.; Talandis, Jonas; Jagodic, Ratko; Nam, Sungwon; Hur, Hyejung; Sun, Yiwen

    2013-01-01

    This article will describe the progress since 2000 on research and development in 2-D and 3-D scalable resolution display walls that are built from tiling individual lower resolution flat panel displays. The article will describe approaches and trends in display hardware construction, middleware architecture, and user-interaction design. The article will also highlight examples of use cases and the benefits the technology has brought to their respective disciplines. © 1963-2012 IEEE.

    LHCb: Performance evaluation and capacity planning for a scalable and highly available virtualization infrastructure for the LHCb experiment

    CERN Multimedia

    Sborzacchi, F; Neufeld, N

    2013-01-01

    Virtualization is often adopted to satisfy different needs: reducing costs, reducing resources, simplifying maintenance and, last but not least, adding flexibility. The use of virtualization in a complex system such as a farm of PCs that control the hardware of an experiment (PLCs, power supplies, gas, magnets, etc.) puts us in a situation where not only high-performance requirements must be carefully considered, but also a deep analysis of strategies to achieve a certain level of high availability. We conducted a performance evaluation on different but comparable storage/network/virtualization platforms. The performance is measured using a series of independent benchmarks, testing the speed and stability of multiple VMs running heavy-load operations on the I/O of virtualized storage and the virtualized network. The results from the benchmark tests allowed us to study and evaluate how the different VM workloads interact with the hardware/software resource layers.

  19. Enhancing Scalability of Sparse Direct Methods

    International Nuclear Information System (INIS)

    Li, Xiaoye S.; Demmel, James; Grigori, Laura; Gu, Ming; Xia, Jianlin; Jardin, Steve; Sovinec, Carl; Lee, Lie-Quan

    2007-01-01

    TOPS is providing high-performance, scalable sparse direct solvers, which have had significant impacts on the SciDAC applications, including fusion simulation (CEMM), accelerator modeling (COMPASS), as well as many other mission-critical applications in DOE and elsewhere. Our recent developments have been focusing on new techniques to overcome scalability bottleneck of direct methods, in both time and memory. These include parallelizing symbolic analysis phase and developing linear-complexity sparse factorization methods. The new techniques will make sparse direct methods more widely usable in large 3D simulations on highly-parallel petascale computers
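
    For orientation, the following shows what a sparse direct solve looks like through SciPy's serial SuperLU interface; the TOPS solvers themselves (e.g., SuperLU_DIST) target the same symbolic-analysis-plus-factorization workflow at massively parallel scale, and the 1-D Poisson matrix below is only a toy stand-in for the fusion and accelerator systems mentioned.

        import numpy as np
        from scipy.sparse import diags
        from scipy.sparse.linalg import splu

        n = 10_000
        A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n)).tocsc()
        lu = splu(A)                 # symbolic analysis + numeric factorization
        x = lu.solve(np.ones(n))     # cheap repeated triangular solves afterward
        print(np.allclose(A @ x, np.ones(n)))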

  20. Scalable Density-Based Subspace Clustering

    DEFF Research Database (Denmark)

    Müller, Emmanuel; Assent, Ira; Günnemann, Stephan

    2011-01-01

    For knowledge discovery in high dimensional databases, subspace clustering detects clusters in arbitrary subspace projections. Scalability is a crucial issue, as the number of possible projections is exponential in the number of dimensions. We propose a scalable density-based subspace clustering method that steers mining to few selected subspace clusters. Our novel steering technique reduces subspace processing by identifying and clustering promising subspaces and their combinations directly. Thereby, it narrows down the search space while maintaining accuracy. Thorough experiments on real and synthetic databases show that steering is efficient and scalable, with high quality results. For future work, our steering paradigm for density-based subspace clustering opens research potential for speeding up other subspace clustering approaches as well.
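
    A minimal sketch of density-based clustering restricted to subspace projections follows, with scikit-learn's DBSCAN standing in for the clustering primitive; the exhaustive enumeration of 2-D subspaces shown here is exactly the exponential search that the paper's steering technique is designed to avoid, and all data and parameters are assumptions of the sketch.

        import numpy as np
        from itertools import combinations
        from sklearn.cluster import DBSCAN

        rng = np.random.default_rng(0)
        X = rng.uniform(size=(300, 5))
        X[:100, 0:2] = rng.normal(0.5, 0.02, size=(100, 2))  # cluster hidden in dims 0,1

        for dims in combinations(range(X.shape[1]), 2):      # candidate subspaces
            labels = DBSCAN(eps=0.05, min_samples=10).fit_predict(X[:, dims])
            n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
            if n_clusters:
                print("subspace", dims, "->", n_clusters, "cluster(s)")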

  1. A low-cost, scalable, current-sensing digital headstage for high channel count μECoG

    Science.gov (United States)

    Trumpis, Michael; Insanally, Michele; Zou, Jialin; Elsharif, Ashraf; Ghomashchi, Ali; Sertac Artan, N.; Froemke, Robert C.; Viventi, Jonathan

    2017-04-01

    Objective. High channel count electrode arrays allow for the monitoring of large-scale neural activity at high spatial resolution. Implantable arrays featuring many recording sites require compact, high bandwidth front-end electronics. In the present study, we investigated the use of a small, light weight, and low cost digital current-sensing integrated circuit for acquiring cortical surface signals from a 61-channel micro-electrocorticographic (μECoG) array. Approach. We recorded both acute and chronic μECoG signal from rat auditory cortex using our novel digital current-sensing headstage. For direct comparison, separate recordings were made in the same anesthetized preparations using an analog voltage headstage. A model of electrode impedance explained the transformation between current- and voltage-sensed signals, and was used to reconstruct cortical potential. We evaluated the digital headstage using several metrics of the baseline and response signals. Main results. The digital current headstage recorded neural signal with similar spatiotemporal statistics and auditory frequency tuning compared to the voltage signal. The signal-to-noise ratio of auditory evoked responses (AERs) was significantly stronger in the current signal. Stimulus decoding based on true and reconstructed voltage signals were not significantly different. Recordings from an implanted system showed AERs that were detectable and decodable for 52 d. The reconstruction filter mitigated the thermal current noise of the electrode impedance and enhanced overall SNR. Significance. We developed and validated a novel approach to headstage acquisition that used current-input circuits to independently digitize 61 channels of μECoG measurements of the cortical field. These low-cost circuits, intended to measure photo-currents in digital imaging, not only provided a signal representing the local cortical field with virtually the same sensitivity and specificity as a traditional voltage headstage but
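
    The reconstruction step can be sketched as frequency-domain deconvolution through an electrode impedance model; the simple series-RC form Z(f) = R + 1/(j 2π f C) and all parameter values below are assumptions for illustration, not the paper's fitted model.

        import numpy as np

        fs, n = 1000.0, 4096                  # sample rate (Hz), samples
        R, C = 50e3, 10e-9                    # toy electrode: 50 kOhm, 10 nF
        t = np.arange(n) / fs
        v_true = np.sin(2 * np.pi * 12 * t)   # "cortical" 12 Hz potential

        f = np.fft.rfftfreq(n, d=1 / fs)
        Z = R + 1.0 / (1j * 2 * np.pi * np.maximum(f, f[1]) * C)  # avoid DC div-by-0
        i_meas = np.fft.irfft(np.fft.rfft(v_true) / Z, n)  # current the headstage sees
        v_rec = np.fft.irfft(np.fft.rfft(i_meas) * Z, n)   # reconstruction filter
        print(np.allclose(v_true, v_rec))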

  2. Facile and scalable preparation of highly wear-resistant superhydrophobic surfaces on wood substrates using silica nanoparticles modified by VTES

    Energy Technology Data Exchange (ETDEWEB)

    Jia, Shanshan; Liu, Ming [College of Materials Science and Engineering, Central South University of Forestry and Technology, Changsha 410004 (China); Wu, Yiqiang, E-mail: wuyq0506@126.com [College of Materials Science and Engineering, Central South University of Forestry and Technology, Changsha 410004 (China); Hunan Provincial Collaborative Innovation Center for High-efficiency Utilization of Wood and Bamboo Resources, Central South University of Forestry and Technology, Changsha 410004 (China); Luo, Sha [College of Materials Science and Engineering, Central South University of Forestry and Technology, Changsha 410004 (China); Qing, Yan, E-mail: qingyan0429@163.com [College of Materials Science and Engineering, Central South University of Forestry and Technology, Changsha 410004 (China); Hunan Provincial Collaborative Innovation Center for High-efficiency Utilization of Wood and Bamboo Resources, Central South University of Forestry and Technology, Changsha 410004 (China); Chen, Haibo [College of Materials Science and Engineering, Central South University of Forestry and Technology, Changsha 410004 (China)

    2016-11-15

    Graphical abstract: A highly wear-resistant superhydrophobic surface on wood substrates was fabricated using silica nanoparticles modified by VTES. - Highlights: • A superhydrophobic surface on wood substrates was efficiently fabricated using nanoparticles modified by VTES. • The superhydrophobic surface exhibited a CA of 154° and an SA close to 0°. • The superhydrophobic surface showed durable and robust wear resistance. - Abstract: In this study, an efficient, facile method has been developed for fabricating superhydrophobic surfaces on wood substrates using silica nanoparticles modified by VTES. The as-prepared superhydrophobic wood surface had a water contact angle of 154° and a water slide angle close to 0°. Simultaneously, this superhydrophobic wood showed highly durable and robust wear resistance after prolonged sandpaper abrasion or scratching with a knife. Even under the extreme conditions of boiling water, the superhydrophobicity of the as-prepared wood composite was preserved. Characterization by scanning electron microscopy, energy-dispersive X-ray spectroscopy, and Fourier transform infrared spectroscopy showed that a typical and tough hierarchical micro/nanostructure was created on the wood substrate, and that vinyltriethoxysilane both prevented the agglomeration of the silica nanoparticles and served as the low-surface-free-energy substance. This superhydrophobic wood is easy to fabricate, mechanically resistant and exhibits long-term stability. It is therefore considered to be of significant importance for the industrial production of functional wood, especially for outdoor applications.

  3. Scalable integration of Li5FeO4 towards robust, high-performance lithium-ion hybrid capacitors.

    Science.gov (United States)

    Park, Min-Sik; Lim, Young-Geun; Hwang, Soo Min; Kim, Jung Ho; Kim, Jeom-Soo; Dou, Shi Xue; Cho, Jaephil; Kim, Young-Jun

    2014-11-01

    Lithium-ion hybrid capacitors have attracted great interest due to their high specific energy relative to conventional electrical double-layer capacitors. Nevertheless, the safety issue still remains a drawback for lithium-ion capacitors in practical operational environments because of the use of metallic lithium. Herein, single-phase Li5FeO4 with an antifluorite structure that acts as an alternative lithium source (instead of metallic lithium) is employed and its potential use for lithium-ion capacitors is verified. Abundant Li(+) amounts can be extracted from Li5FeO4 incorporated in the positive electrode and efficiently doped into the negative electrode during the first electrochemical charging. After the first Li(+) extraction, Li(+) does not return to the Li5FeO4 host structure and is steadily involved in the electrochemical reactions of the negative electrode during subsequent cycling. Various electrochemical and structural analyses support its superior characteristics for use as a promising lithium source. This versatile approach can yield a sufficient Li(+)-doping efficiency of >90% and improved safety as a result of the removal of metallic lithium from the cell. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Si-Carbon Composite Nanofibers with Good Scalability and Favorable Architecture for Highly Reversible Lithium Storage and Superb Kinetics

    International Nuclear Information System (INIS)

    Lee, Youngmin; Heo, Yoon-Uk; Song, Dahye; Shin, Dong Wook; Kang, Yong-Mook

    2014-01-01

    We demonstrate a simple electrospinning route for preparing Si-carbon composite nanofibers (NFs) in which aciniform aggregates of Si particles are well encased by amorphous carbon. The Si-carbon composite NFs exhibit significantly improved electrochemical performance, with a high specific capacity of 1250 mAh·g⁻¹ and superior cycling performance over 50 cycles at a rate of 0.2 C. More importantly, the Si-carbon composite NFs maintain about 70% of their initial capacity at 0.2 C and show excellent cycling stability even at a current density 25 times higher than the initial condition, proving superb kinetics compared to previously reported Si or SiOx materials. The electrochemical superiority of the Si-carbon composite NFs can be attributed to the amorphous carbon framework, which accommodates the inherent volume expansion of Si during lithiation, as well as to the enlarged contact area between active material and conducting agent owing to the morphological characteristics of the one-dimensional (1D) nanostructure.

  5. Designing a Scalable Fault Tolerance Model for High Performance Computational Chemistry: A Case Study with Coupled Cluster Perturbative Triples.

    Science.gov (United States)

    van Dam, Hubertus J J; Vishnu, Abhinav; de Jong, Wibe A

    2011-01-11

    In the past couple of decades, the massive computational power provided by the most modern supercomputers has enabled simulation of higher-order computational chemistry methods previously considered intractable. As system sizes continue to increase, the computational chemistry domain continues this trend using parallel programming models such as the Message Passing Interface (MPI) and Partitioned Global Address Space (PGAS) models such as Global Arrays. The ever increasing scale of these supercomputers comes at the cost of a reduced Mean Time Between Failures (MTBF), currently on the order of days and projected to be on the order of hours for upcoming extreme scale systems. While traditional disk-based checkpointing methods are ubiquitous for storing intermediate solutions, they suffer from the high overhead of writing and recovering from checkpoints. In practice, checkpointing itself often brings the system down. Clearly, methods beyond checkpointing are imperative to handle the aggravating issue of shrinking MTBF. In this paper, we address this challenge by designing and implementing an efficient fault tolerant version of the Coupled Cluster (CC) method within NWChem, using in-memory data redundancy. We present the challenges associated with our design, including an efficient data storage model, maintenance of at least one consistent data copy, and the recovery process. Our performance evaluation without faults shows that the current design exhibits a small overhead. In the presence of a simulated fault, the proposed design incurs negligible overhead in comparison to the state-of-the-art implementation without faults.
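
    A toy sketch of the in-memory redundancy idea, with names and layout invented for illustration: every data chunk lives on two simulated ranks, so losing one rank still leaves a consistent copy to recover from:

      import numpy as np

      class RedundantStore:
          def __init__(self, n_ranks):
              self.mem = [dict() for _ in range(n_ranks)]   # per-rank memory
              self.n = n_ranks

          def put(self, key, chunk):
              home = hash(key) % self.n
              self.mem[home][key] = chunk.copy()
              self.mem[(home + 1) % self.n][key] = chunk.copy()  # mirror copy

          def fail(self, rank):                 # simulate losing one rank
              self.mem[rank].clear()

          def get(self, key):                   # recover from any surviving copy
              for m in self.mem:
                  if key in m:
                      return m[key]
              raise KeyError(key)

      store = RedundantStore(4)
      store.put("t2-amplitudes", np.ones(8))
      store.fail(hash("t2-amplitudes") % 4)     # kill the home rank
      assert store.get("t2-amplitudes").sum() == 8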

  6. High-performance flat data center network architecture based on scalable and flow-controlled optical switching system

    Science.gov (United States)

    Calabretta, Nicola; Miao, Wang; Dorren, Harm

    2016-03-01

    Traffic in data center networks (DCNs) is steadily growing to support various applications and virtualization technologies. Multi-tenancy enabling efficient resource utilization is considered a key requirement for next-generation DCs, resulting from the growing demand for services and applications. Virtualization mechanisms and technologies can leverage statistical multiplexing and fast switch reconfiguration to further extend DC efficiency and agility. We present a novel high-performance flat DCN employing bufferless and distributed fast (sub-microsecond) optical switches with wavelength, space, and time switching operation. The fast optical switches can enhance the performance of DCNs by providing large-capacity switching capability and efficiently sharing the data plane resources by exploiting statistical multiplexing. Benefiting from Software-Defined Networking (SDN) control of the optical switches, virtual DCNs can be flexibly created and reconfigured by the DCN provider. Numerical and experimental investigations of the DCN based on the fast optical switches show the successful setup of virtual network slices for intra-data center interconnections. Experimental results assessing the DCN performance in terms of latency and packet loss show less than 10⁻⁵ packet loss and 640 ns end-to-end latency at 0.4 load with a 16-packet buffer. Numerical investigation of the system's performance when the optical switch is scaled to a 32×32 port count indicates that more than 1000 ToRs, each with a Terabit/s interface, can be interconnected, providing Petabit/s capacity. The roadmap to photonic integration of large-port-count optical switches will also be presented.

  7. Scalable optical quantum computer

    Energy Technology Data Exchange (ETDEWEB)

    Manykin, E A; Mel'nichenko, E V [Institute for Superconductivity and Solid-State Physics, Russian Research Centre 'Kurchatov Institute', Moscow (Russian Federation)

    2014-12-31

    A way of designing a scalable optical quantum computer based on the photon echo effect is proposed. Individual rare earth ions Pr{sup 3+}, regularly located in the lattice of the orthosilicate (Y{sub 2}SiO{sub 5}) crystal, are suggested to be used as optical qubits. Operations with qubits are performed using coherent and incoherent laser pulses. The operation protocol includes both the method of measurement-based quantum computations and the technique of optical computations. Modern hybrid photon echo protocols, which provide a sufficient quantum efficiency when reading recorded states, are considered as most promising for quantum computations and communications. (quantum computer)

  8. Scalable optical quantum computer

    International Nuclear Information System (INIS)

    Manykin, E A; Mel'nichenko, E V

    2014-01-01

    A way of designing a scalable optical quantum computer based on the photon echo effect is proposed. Individual rare earth ions Pr³⁺, regularly located in the lattice of the orthosilicate (Y₂SiO₅) crystal, are suggested to be used as optical qubits. Operations with qubits are performed using coherent and incoherent laser pulses. The operation protocol includes both the method of measurement-based quantum computations and the technique of optical computations. Modern hybrid photon echo protocols, which provide a sufficient quantum efficiency when reading recorded states, are considered as most promising for quantum computations and communications. (quantum computer)

  9. Scalable Transactions for Web Applications in the Cloud

    NARCIS (Netherlands)

    Zhou, W.; Pierre, G.E.O.; Chi, C.-H.

    2009-01-01

    Cloud Computing platforms provide scalability and high availability properties for web applications but they sacrifice data consistency at the same time. However, many applications cannot afford any data inconsistency. We present a scalable transaction manager for NoSQL cloud database services to
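
    As a rough, generic sketch of the guarantee such a transaction manager must provide (a plain two-phase commit over partitioned key-value data; this is not the paper's actual design):

      class Partition:
          def __init__(self):
              self.data, self.staged = {}, {}

          def prepare(self, txid, writes):      # phase 1: stage and vote
              self.staged[txid] = writes
              return True

          def commit(self, txid):               # phase 2: apply atomically
              self.data.update(self.staged.pop(txid))

          def abort(self, txid):
              self.staged.pop(txid, None)

      def run_transaction(partitions, writes, txid):
          voted = [p.prepare(txid, w) for p, w in zip(partitions, writes)]
          if all(voted):
              for p in partitions:
                  p.commit(txid)
              return True
          for p in partitions:                  # any "no" vote rolls back all
              p.abort(txid)
          return False

      ps = [Partition(), Partition()]
      ok = run_transaction(ps, [{"cart:1": "itemA"}, {"stock:itemA": 9}], 1)
      assert ok and ps[0].data["cart:1"] == "itemA"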

  10. Blind Cooperative Routing for Scalable and Energy-Efficient Internet of Things

    KAUST Repository

    Bader, Ahmed; Alouini, Mohamed-Slim

    2016-01-01

    Multihop networking is promoted in this paper for energy-efficient and highly-scalable Internet of Things (IoT). Recognizing concerns related to the scalability of classical multihop routing and medium access techniques, the use of blind cooperation

  11. Up-scalable sheet-to-sheet production of high efficiency perovskite module and solar cells on 6-in. substrate using slot die coating

    NARCIS (Netherlands)

    Di Giacomo, Francesco; Shanmugam, Santhosh; Fledderus, Henri; Bruijnaers, Bardo J.; Verhees, Wiljan J.H.; Dorenkamper, Maarten S.; Veenstra, Sjoerd C.; Qiu, Weiming; Gehlhaar, Robert; Merckx, Tamara; Aernouts, Tom; Andriessen, Ronn; Galagan, Yulia

    2018-01-01

    Scalable sheet-to-sheet slot die coating processes have been demonstrated for perovskite solar cells and modules. The processes have been developed on 6 in. × 6 in. glass/ITO substrates for two functional layers: the perovskite photo-active layer and the Spiro-OMeTAD hole transport layer. Perovskite

  12. Scalable Synthesis of Triple-Core-Shell Nanostructures of TiO2@MnO2@C for High Performance Supercapacitors Using Structure-Guided Combustion Waves.

    Science.gov (United States)

    Shin, Dongjoon; Shin, Jungho; Yeo, Taehan; Hwang, Hayoung; Park, Seonghyun; Choi, Wonjoon

    2018-03-01

    Core-shell nanostructures of metal oxides and carbon-based materials have emerged as outstanding electrode materials for supercapacitors and batteries. However, their synthesis requires complex procedures that incur high costs and long processing times. Herein, a new route is proposed for synthesizing triple-core-shell nanoparticles of TiO₂@MnO₂@C using structure-guided combustion waves (SGCWs), which originate from incomplete combustion inside chemical-fuel-wrapped nanostructures, and their application in supercapacitor electrodes. SGCWs transform TiO₂ to TiO₂@C and TiO₂@MnO₂ to TiO₂@MnO₂@C via the incompletely combusted carbonaceous fuels under an open-air atmosphere, in seconds. The synthesized carbon layers act as templates for MnO₂ shells in TiO₂@C and organic shells of TiO₂@MnO₂@C. The TiO₂@MnO₂@C-based electrodes exhibit a greater specific capacitance (488 F g⁻¹ at 5 mV s⁻¹) and capacitance retention (97.4% after 10 000 cycles at 1.0 V s⁻¹), while the absence of MnO₂ and carbon shells reveals a severe degradation in the specific capacitance and capacitance retention. Because the core-TiO₂ nanoparticles and carbon shell prevent the deformation of the inner and outer sides of the MnO₂ shell, the nanostructures of the TiO₂@MnO₂@C are preserved despite the long-term cycling, giving the superior performance. This SGCW-driven fabrication enables the scalable synthesis of multiple-core-shell structures applicable to diverse electrochemical applications. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Scalable photoreactor for hydrogen production

    KAUST Repository

    Takanabe, Kazuhiro; Shinagawa, Tatsuya

    2017-01-01

    Provided herein are scalable photoreactors that can include a membrane-free water-splitting electrolyzer and systems that can include a plurality of membrane-free water-splitting electrolyzers. Also provided herein are methods of using the scalable photoreactors provided herein.

  14. Scalable photoreactor for hydrogen production

    KAUST Repository

    Takanabe, Kazuhiro

    2017-04-06

    Provided herein are scalable photoreactors that can include a membrane-free water-splitting electrolyzer and systems that can include a plurality of membrane-free water-splitting electrolyzers. Also provided herein are methods of using the scalable photoreactors provided herein.

  15. Scalable Frequent Subgraph Mining

    KAUST Repository

    Abdelhamid, Ehab

    2017-06-19

    A graph is a data structure that contains a set of nodes and a set of edges connecting these nodes. Nodes represent objects while edges model relationships among these objects. Graphs are used in various domains due to their ability to model complex relations among several objects. Given an input graph, the Frequent Subgraph Mining (FSM) task finds all subgraphs with frequencies exceeding a given threshold. FSM is crucial for graph analysis, and it is an essential building block in a variety of applications, such as graph clustering and indexing. FSM is computationally expensive, and its existing solutions are extremely slow. Consequently, these solutions are incapable of mining modern large graphs. This slowness is caused by the underlying approaches of these solutions, which require finding and storing an excessive number of subgraph matches. This dissertation proposes a scalable solution for FSM that avoids the limitations of previous work. This solution is composed of four components. The first component is a single-threaded technique which, for each candidate subgraph, needs to find only a minimal number of matches. The second component is a scalable parallel FSM technique that utilizes a novel two-phase approach. The first phase quickly builds an approximate search space, which is then used by the second phase to optimize and balance the workload of the FSM task. The third component focuses on accelerating frequency evaluation, which is a critical step in FSM. To do so, a machine learning model is employed to predict the type of each graph node, and accordingly, an optimized method is selected to evaluate that node. The fourth component focuses on mining dynamic graphs, such as social networks. To this end, an incremental index is maintained during the dynamic updates. Only this index is processed and updated for the majority of graph updates. Consequently, search space is significantly pruned and efficiency is improved. The empirical evaluation shows that the
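
    As a hedged illustration of the expensive core step, frequency evaluation of one candidate pattern (NetworkX is a stand-in here, not the dissertation's system):

      import networkx as nx
      from networkx.algorithms import isomorphism

      G = nx.erdos_renyi_graph(60, 0.08, seed=1)   # stand-in input graph
      candidate = nx.path_graph(3)                 # a 3-node path pattern

      gm = isomorphism.GraphMatcher(G, candidate)
      # Existence needs only one match; enumerating all matches via
      # gm.subgraph_isomorphisms_iter() is what explodes at scale.
      print("pattern occurs:", gm.subgraph_is_isomorphic())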

  16. Scalable Nanomanufacturing—A Review

    Directory of Open Access Journals (Sweden)

    Khershed Cooper

    2017-01-01

    This article describes the field of scalable nanomanufacturing, its importance and need, its research activities and achievements. The National Science Foundation is taking a leading role in fostering basic research in scalable nanomanufacturing (SNM). From this effort several novel nanomanufacturing approaches have been proposed, studied and demonstrated, including scalable nanopatterning. This paper will discuss SNM research areas in materials, processes and applications, scale-up methods with project examples, and manufacturing challenges that need to be addressed to move nanotechnology discoveries closer to the marketplace.

  17. Scalable Nonlinear Compact Schemes

    Energy Technology Data Exchange (ETDEWEB)

    Ghosh, Debojyoti [Argonne National Lab. (ANL), Argonne, IL (United States); Constantinescu, Emil M. [Univ. of Chicago, IL (United States); Brown, Jed [Univ. of Colorado, Boulder, CO (United States)

    2014-04-01

    In this work, we focus on compact schemes resulting in tridiagonal systems of equations, specifically the fifth-order CRWENO scheme. We propose a scalable implementation of the nonlinear compact schemes by implementing a parallel tridiagonal solver based on the partitioning/substructuring approach. We use an iterative solver for the reduced system of equations; however, we solve this system to machine zero accuracy to ensure that no parallelization errors are introduced. It is possible to achieve machine-zero convergence with few iterations because of the diagonal dominance of the system. The number of iterations is specified a priori instead of a norm-based exit criterion, and collective communications are avoided. The overall algorithm thus involves only point-to-point communication between neighboring processors. Our implementation of the tridiagonal solver differs from and avoids the drawbacks of past efforts in the following ways: it introduces no parallelization-related approximations (multiprocessor solutions are exactly identical to uniprocessor ones), it involves minimal communication, the mathematical complexity is similar to that of the Thomas algorithm on a single processor, and it does not require any communication and computation scheduling.
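
    For reference, the serial Thomas algorithm whose per-processor arithmetic cost the partitioned solver aims to match looks like the following minimal sketch (illustrative, not the authors' implementation):

      import numpy as np

      def thomas(a, b, c, d):
          """Tridiagonal solve; a, b, c are sub-, main-, super-diagonals."""
          n = len(b)
          cp, dp = np.empty(n), np.empty(n)
          cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
          for i in range(1, n):                 # forward elimination
              m = b[i] - a[i] * cp[i - 1]
              cp[i] = c[i] / m if i < n - 1 else 0.0
              dp[i] = (d[i] - a[i] * dp[i - 1]) / m
          x = dp.copy()
          for i in range(n - 2, -1, -1):        # back substitution
              x[i] -= cp[i] * x[i + 1]
          return x

      n = 8
      x = thomas(-np.ones(n), 2 * np.ones(n), -np.ones(n), np.ones(n))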

  18. Towards a Scalable and Adaptive Application Support Platform for Large-Scale Distributed E-Sciences in High-Performance Network Environments

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Chase Qishi [New Jersey Inst. of Technology, Newark, NJ (United States); Univ. of Memphis, TN (United States); Zhu, Michelle Mengxia [Southern Illinois Univ., Carbondale, IL (United States)

    2016-06-06

    The advent of large-scale collaborative scientific applications has demonstrated the potential for broad scientific communities to pool globally distributed resources to produce unprecedented data acquisition, movement, and analysis. System resources including supercomputers, data repositories, computing facilities, network infrastructures, storage systems, and display devices have been increasingly deployed at national laboratories and academic institutes. These resources are typically shared by large communities of users over Internet or dedicated networks and hence exhibit an inherent dynamic nature in their availability, accessibility, capacity, and stability. Scientific applications using either experimental facilities or computation-based simulations with various physical, chemical, climatic, and biological models feature diverse scientific workflows as simple as linear pipelines or as complex as a directed acyclic graphs, which must be executed and supported over wide-area networks with massively distributed resources. Application users oftentimes need to manually configure their computing tasks over networks in an ad hoc manner, hence significantly limiting the productivity of scientists and constraining the utilization of resources. The success of these large-scale distributed applications requires a highly adaptive and massively scalable workflow platform that provides automated and optimized computing and networking services. This project is to design and develop a generic Scientific Workflow Automation and Management Platform (SWAMP), which contains a web-based user interface specially tailored for a target application, a set of user libraries, and several easy-to-use computing and networking toolkits for application scientists to conveniently assemble, execute, monitor, and control complex computing workflows in heterogeneous high-performance network environments. SWAMP will enable the automation and management of the entire process of scientific

  19. Scalable Partitioning Algorithms for FPGAs With Heterogeneous Resources

    National Research Council Canada - National Science Library

    Selvakkumaran, Navaratnasothie; Ranjan, Abhishek; Raje, Salil; Karypis, George

    2004-01-01

    As FPGA densities increase, partitioning-based FPGA placement approaches are becoming increasingly important as they can be used to provide high-quality and computationally scalable placement solutions...

  20. SOL: A Library for Scalable Online Learning Algorithms

    OpenAIRE

    Wu, Yue; Hoi, Steven C. H.; Liu, Chenghao; Lu, Jing; Sahoo, Doyen; Yu, Nenghai

    2016-01-01

    SOL is an open-source library for scalable online learning algorithms, and is particularly suitable for learning with high-dimensional data. The library provides a family of regular and sparse online learning algorithms for large-scale binary and multi-class classification tasks with high efficiency, scalability, portability, and extensibility. SOL was implemented in C++ and provided with a collection of easy-to-use command-line tools, Python wrappers and library calls for users and developers...
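
    A generic sketch of the class of algorithms such a library provides (online logistic regression with sparse per-example updates; the stream format and function names are invented for illustration, not SOL's actual API):

      import numpy as np

      def online_logistic(stream, dim, lr=0.1):
          w = np.zeros(dim)
          for idx, val, y in stream:           # sparse example: indices, values
              p = 1.0 / (1.0 + np.exp(-np.dot(w[idx], val)))
              w[idx] -= lr * (p - y) * val     # touch only the active weights
          return w

      # toy stream in which feature 0 predicts the label
      stream = [([0, 3], np.array([1.0, 0.5]), 1),
                ([0, 7], np.array([-1.0, 0.2]), 0)] * 50
      w = online_logistic(stream, dim=10)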

  1. Scalable Algorithms for Adaptive Statistical Designs

    Directory of Open Access Journals (Sweden)

    Robert Oehmke

    2000-01-01

    We present a scalable, high-performance solution to multidimensional recurrences that arise in adaptive statistical designs. Adaptive designs are an important class of learning algorithms for a stochastic environment, and we focus on the problem of optimally assigning patients to treatments in clinical trials. While adaptive designs have significant ethical and cost advantages, they are rarely utilized because of the complexity of optimizing and analyzing them. Computational challenges include massive memory requirements, few calculations per memory access, and multiply-nested loops with dynamic indices. We analyze the effects of various parallelization options, and while standard approaches do not work well, with effort an efficient, highly scalable program can be developed. This allows us to solve problems thousands of times more complex than those solved previously, which helps make adaptive designs practical. Further, our work applies to many other problems involving neighbor recurrences, such as generalized string matching.
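
    A tiny instance of the kind of recurrence involved (Bayesian dynamic programming for adaptively assigning patients between two Bernoulli treatments; the paper's contribution is scaling such recurrences up, which this toy does not attempt):

      from functools import lru_cache

      H = 20                                   # tiny horizon of patients

      @lru_cache(maxsize=None)
      def value(n, s1, f1, s2, f2):            # successes/failures so far
          if n == 0:
              return 0.0
          p1 = (s1 + 1) / (s1 + f1 + 2)        # posterior means, uniform prior
          p2 = (s2 + 1) / (s2 + f2 + 2)
          v1 = (p1 * (1 + value(n - 1, s1 + 1, f1, s2, f2))
                + (1 - p1) * value(n - 1, s1, f1 + 1, s2, f2))
          v2 = (p2 * (1 + value(n - 1, s1, f1, s2 + 1, f2))
                + (1 - p2) * value(n - 1, s1, f1, s2, f2 + 1))
          return max(v1, v2)                   # optimal adaptive assignment

      print("expected successes:", round(value(H, 0, 0, 0, 0), 3))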

  2. Scalable fabrication of perovskite solar cells

    Energy Technology Data Exchange (ETDEWEB)

    Li, Zhen; Klein, Talysa R.; Kim, Dong Hoe; Yang, Mengjin; Berry, Joseph J.; van Hest, Maikel F. A. M.; Zhu, Kai

    2018-03-27

    Perovskite materials use earth-abundant elements, have low formation energies for deposition and are compatible with roll-to-roll and other high-volume manufacturing techniques. These features make perovskite solar cells (PSCs) suitable for terawatt-scale energy production with low production costs and low capital expenditure. Demonstrations of performance comparable to that of other thin-film photovoltaics (PVs) and improvements in laboratory-scale cell stability have recently made scale up of this PV technology an intense area of research focus. Here, we review recent progress and challenges in scaling up PSCs and related efforts to enable the terawatt-scale manufacturing and deployment of this PV technology. We discuss common device and module architectures, scalable deposition methods and progress in the scalable deposition of perovskite and charge-transport layers. We also provide an overview of device and module stability, module-level characterization techniques and techno-economic analyses of perovskite PV modules.

  3. High-capacity cathodes for lithium-ion batteries from nanostructured LiFePO4 synthesized by highly-flexible and scalable flame spray pyrolysis

    Science.gov (United States)

    Hamid, N. A.; Wennig, S.; Hardt, S.; Heinzel, A.; Schulz, C.; Wiggers, H.

    2012-10-01

    Olivine LiFePO4 is a promising cathode material for lithium-ion batteries due to its low cost, environmental acceptability and high stability. Its low electric conductivity long prevented it from being used in large-scale applications. Decreasing its particle size along with carbon coating significantly improves electronic conductivity and lithium diffusion. With respect to the controlled formation of very small particles with large specific surface, gas-phase synthesis opens an economic and flexible route towards high-quality battery materials. Amorphous FePO4 was synthesized as a precursor material for LiFePO4 by flame spray pyrolysis of a solution of iron acetylacetonate and tributyl phosphate in toluene. The pristine FePO4, with a specific surface of 126-218 m² g⁻¹, was post-processed to LiFePO4/C composite material via a solid-state reaction using Li2CO3 and glucose. The final olivine LiFePO4/C particles still showed a large specific surface of 24 m² g⁻¹ and were characterized using X-ray diffraction (XRD), electron microscopy, X-ray photoelectron spectroscopy (XPS) and elemental analysis. Electrochemical investigations of the final LiFePO4/C composites show reversible capacities of more than 145 mAh g⁻¹ (about 115 mAh g⁻¹ with respect to the total coating mass). The material supports high drain rates of 16 C while delivering 40 mAh g⁻¹, and exhibits excellent cycle stability.

  4. Scalable algorithms for contact problems

    CERN Document Server

    Dostál, Zdeněk; Sadowská, Marie; Vondrák, Vít

    2016-01-01

    This book presents a comprehensive and self-contained treatment of the authors' newly developed scalable algorithms for the solution of multibody contact problems of linear elasticity. The brand-new feature of these algorithms is theoretically supported numerical scalability and parallel scalability demonstrated on problems discretized by billions of degrees of freedom. The theory supports solving multibody frictionless contact problems, contact problems with possibly orthotropic Tresca friction, and transient contact problems. It covers BEM discretization, jumping coefficients, floating bodies, mortar non-penetration conditions, etc. The exposition is divided into four parts, the first of which reviews appropriate facets of linear algebra, optimization, and analysis. The most important algorithms and optimality results are presented in the third part of the volume. The presentation is complete, including continuous formulation, discretization, decomposition, optimality results, and numerical experiments...

  5. Scuba: scalable kernel-based gene prioritization.

    Science.gov (United States)

    Zampieri, Guido; Tran, Dinh Van; Donini, Michele; Navarin, Nicolò; Aiolli, Fabio; Sperduti, Alessandro; Valle, Giorgio

    2018-01-25

    The uncovering of genes linked to human diseases is a pressing challenge in molecular biology and precision medicine. This task is often hindered by the large number of candidate genes and by the heterogeneity of the available information. Computational methods for the prioritization of candidate genes can help to cope with these problems. In particular, kernel-based methods are a powerful resource for the integration of heterogeneous biological knowledge; however, their practical implementation is often precluded by their limited scalability. We propose Scuba, a scalable kernel-based method for gene prioritization. It implements a novel multiple kernel learning approach, based on a semi-supervised perspective and on the optimization of the margin distribution. Scuba is optimized to cope with strongly unbalanced settings where known disease genes are few and large scale predictions are required. Importantly, it is able to deal efficiently both with a large amount of candidate genes and with an arbitrary number of data sources. As a direct consequence of scalability, Scuba also integrates a new efficient strategy to select optimal kernel parameters for each data source. We performed cross-validation experiments and simulated a realistic usage setting, showing that Scuba outperforms a wide range of state-of-the-art methods. Scuba achieves state-of-the-art performance and has enhanced scalability compared to existing kernel-based approaches for genomic data. This method can be useful to prioritize candidate genes, particularly when their number is large or when input data is highly heterogeneous. The code is freely available at https://github.com/gzampieri/Scuba.
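
    As a hedged sketch of the basic multiple-kernel idea such methods build on (fixed kernel weights and a standard SVM stand in for Scuba's margin-distribution optimization):

      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X1 = rng.normal(size=(100, 20))          # data source 1
      X2 = rng.normal(size=(100, 5))           # data source 2
      y = (X1[:, 0] + X2[:, 0] > 0).astype(int)

      K1, K2 = X1 @ X1.T, X2 @ X2.T            # one linear kernel per source
      w = np.array([0.5, 0.5])                 # weights (learned in real MKL)
      K = w[0] * K1 + w[1] * K2

      clf = SVC(kernel="precomputed").fit(K, y)
      print("train accuracy:", clf.score(K, y))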

  6. DISP: Optimizations towards Scalable MPI Startup

    Energy Technology Data Exchange (ETDEWEB)

    Fu, Huansong [Florida State University, Tallahassee; Pophale, Swaroop S [ORNL; Gorentla Venkata, Manjunath [ORNL; Yu, Weikuan [Florida State University, Tallahassee

    2016-01-01

    Despite the popularity of MPI for high performance computing, the startup of MPI programs faces a scalability challenge as both the execution time and memory consumption increase drastically at scale. We have examined this problem using the collective modules of Cheetah and Tuned in Open MPI as representative implementations. Previous improvements for collectives have focused on algorithmic advances and hardware off-load. In this paper, we examine the startup cost of the collective module within a communicator and explore various techniques to improve its efficiency and scalability. Accordingly, we have developed a new scalable startup scheme with three internal techniques, namely Delayed Initialization, Module Sharing and Prediction-based Topology Setup (DISP). Our DISP scheme greatly benefits the collective initialization of the Cheetah module. At the same time, it helps boost the performance of non-collective initialization in the Tuned module. We evaluate the performance of our implementation on Titan supercomputer at ORNL with up to 4096 processes. The results show that our delayed initialization can speed up the startup of Tuned and Cheetah by an average of 32.0% and 29.2%, respectively, our module sharing can reduce the memory consumption of Tuned and Cheetah by up to 24.1% and 83.5%, respectively, and our prediction-based topology setup can speed up the startup of Cheetah by up to 80%.

  7. Scalable and template-free synthesis of nanostructured Na{sub 1.08}V{sub 6}O{sub 15} as high-performance cathode material for lithium-ion batteries

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, Shili, E-mail: slzheng@ipe.ac.cn [National Engineering Laboratory for Hydrometallurgical Cleaner Production Technology, Key Laboratory of Green Process and Engineering, Institute of Process Engineering, Chinese Academy of Sciences, Beijing (China); Wang, Xinran; Yan, Hong [National Engineering Laboratory for Hydrometallurgical Cleaner Production Technology, Key Laboratory of Green Process and Engineering, Institute of Process Engineering, Chinese Academy of Sciences, Beijing (China); University of Chinese Academy of Sciences, No. 19A Yuquan Road, Beijing (China); Du, Hao; Zhang, Yi [National Engineering Laboratory for Hydrometallurgical Cleaner Production Technology, Key Laboratory of Green Process and Engineering, Institute of Process Engineering, Chinese Academy of Sciences, Beijing (China)

    2016-09-15

    Highlights: • Nanostructured Na{sub 1.08}V{sub 6}O{sub 15} was synthesized through an additive-free sol-gel process. • The prepared Na{sub 1.08}V{sub 6}O{sub 15} demonstrated high capacity and sufficient cycling stability. • The reaction temperature was optimized to allow scalable Na{sub 1.08}V{sub 6}O{sub 15} fabrication. - Abstract: Developing high-capacity cathode materials with feasibility and scalability is still challenging for lithium-ion batteries (LIBs). In this study, a high-capacity ternary sodium vanadate compound, nanostructured NaV{sub 6}O{sub 15}, was synthesized template-free through a sol-gel process with high production efficiency. The as-prepared sample was systematically post-treated at different temperatures, and the post-annealing temperature was found to determine the cycling stability and capacity of NaV{sub 6}O{sub 15}. The well-crystallized product exhibited good electrochemical performance with a high specific capacity of 302 mAh g{sup −1} when cycled at a current density of 0.03 mA g{sup −1}. Its relatively long-term cycling stability was characterized by the cell performance at a current density of 1 A g{sup −1}, delivering a reversible capacity of 118 mAh g{sup −1} after 300 cycles with 79% capacity retention and nearly 100% coulombic efficiency: all demonstrating the significant promise of the proposed strategy for large-scale synthesis of NaV{sub 6}O{sub 15} as a high-capacity, high-energy-density cathode for LIBs.

  8. Scalable shared-memory multiprocessing

    CERN Document Server

    Lenoski, Daniel E

    1995-01-01

    Dr. Lenoski and Dr. Weber have experience with leading-edge research and practical issues involved in implementing large-scale parallel systems. They were key contributors to the architecture and design of the DASH multiprocessor. Currently, they are involved with commercializing scalable shared-memory technology.

  9. Scalable and balanced dynamic hybrid data assimilation

    Science.gov (United States)

    Kauranne, Tuomo; Amour, Idrissa; Gunia, Martin; Kallio, Kari; Lepistö, Ahti; Koponen, Sampsa

    2017-04-01

    Scalability of complex weather forecasting suites is dependent on the technical tools available for implementing highly parallel computational kernels, but to an equally large extent also on the dependence patterns between various components of the suite, such as observation processing, data assimilation and the forecast model. Scalability is a particular challenge for 4D variational assimilation methods that necessarily couple the forecast model into the assimilation process and subject this combination to an inherently serial quasi-Newton minimization process. Ensemble based assimilation methods are naturally more parallel, but large models force ensemble sizes to be small and that results in poor assimilation accuracy, somewhat akin to shooting with a shotgun in a million-dimensional space. The Variational Ensemble Kalman Filter (VEnKF) is an ensemble method that can attain the accuracy of 4D variational data assimilation with a small ensemble size. It achieves this by processing a Gaussian approximation of the current error covariance distribution, instead of a set of ensemble members, analogously to the Extended Kalman Filter EKF. Ensemble members are re-sampled every time a new set of observations is processed from a new approximation of that Gaussian distribution which makes VEnKF a dynamic assimilation method. After this a smoothing step is applied that turns VEnKF into a dynamic Variational Ensemble Kalman Smoother VEnKS. In this smoothing step, the same process is iterated with frequent re-sampling of the ensemble but now using past iterations as surrogate observations until the end result is a smooth and balanced model trajectory. In principle, VEnKF could suffer from similar scalability issues as 4D-Var. However, this can be avoided by isolating the forecast model completely from the minimization process by implementing the latter as a wrapper code whose only link to the model is calling for many parallel and totally independent model runs, all of them
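
    For orientation, a bare-bones stochastic ensemble Kalman analysis step, the ingredient VEnKF replaces with a Gaussian covariance approximation plus re-sampling (shapes and values below are illustrative only):

      import numpy as np

      def enkf_analysis(E, y, H, R, rng):
          """E: (n, m) ensemble; y: (p,) obs; H: (p, n); R: (p, p)."""
          n, m = E.shape
          A = E - E.mean(axis=1, keepdims=True)      # ensemble anomalies
          HA = H @ A
          K = (A @ HA.T) @ np.linalg.inv(HA @ HA.T + (m - 1) * R)  # gain
          Yp = y[:, None] + rng.multivariate_normal(
              np.zeros(len(y)), R, size=m).T         # perturbed observations
          return E + K @ (Yp - H @ E)

      rng = np.random.default_rng(0)
      E = rng.normal(size=(4, 50))                   # 4 states, 50 members
      H, R = np.eye(2, 4), 0.1 * np.eye(2)
      E_post = enkf_analysis(E, np.array([1.0, -1.0]), H, R, rng)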

  10. Programming Scala Scalability = Functional Programming + Objects

    CERN Document Server

    Wampler, Dean

    2009-01-01

    Learn how to be more productive with Scala, a new multi-paradigm language for the Java Virtual Machine (JVM) that integrates features of both object-oriented and functional programming. With this book, you'll discover why Scala is ideal for highly scalable, component-based applications that support concurrency and distribution. Programming Scala clearly explains the advantages of Scala as a JVM language. You'll learn how to leverage the wealth of Java class libraries to meet the practical needs of enterprise and Internet projects more easily. Packed with code examples, this book provides us

  11. Scalable Optical-Fiber Communication Networks

    Science.gov (United States)

    Chow, Edward T.; Peterson, John C.

    1993-01-01

    Scalable arbitrary fiber extension network (SAFEnet) is conceptual fiber-optic communication network passing digital signals among variety of computers and input/output devices at rates from 200 Mb/s to more than 100 Gb/s. Intended for use with very-high-speed computers and other data-processing and communication systems in which message-passing delays must be kept short. Inherent flexibility makes it possible to match performance of network to computers by optimizing configuration of interconnections. In addition, interconnections made redundant to provide tolerance to faults.

  12. Computational scalability of large size image dissemination

    Science.gov (United States)

    Kooper, Rob; Bajcsy, Peter

    2011-01-01

    We have investigated the computational scalability of the image pyramid building needed for dissemination of very large image data. The sources of large images include high resolution microscopes and telescopes, remote sensing and airborne imaging, and high resolution scanners. The term 'large' is understood from a user perspective to mean either larger than a display or larger than the memory/disk available to hold the image data. The application drivers for our work are digitization projects such as the Lincoln Papers project (each image scan is about 100-150 MB, or about 5000x8000 pixels, with the total number of scans around 200,000) and the UIUC library scanning project for historical maps from the 17th and 18th centuries (a smaller number of larger images). The goal of our work is to understand the computational scalability of web-based dissemination using image pyramids for these large image scans, as well as the preservation aspects of the data. We report our computational benchmarks for (a) building image pyramids to be disseminated using the Microsoft Seadragon library, (b) a computation execution approach using hyper-threading to generate image pyramids and to utilize the underlying hardware, and (c) an image pyramid preservation approach using various hard drive configurations of Redundant Array of Independent Disks (RAID) drives for input/output operations. The benchmarks are obtained with a map (334.61 MB, JPEG format, 17591x15014 pixels). The discussion combines the speed and preservation objectives.
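
    A hedged sketch of the pyramid-building step (tile size, file naming and JPEG output are illustrative assumptions; this is not the Seadragon tooling itself):

      from PIL import Image

      def build_pyramid(path, out_prefix, tile=256):
          """Write tiles for successively halved resolutions of one image."""
          img = Image.open(path).convert("RGB")
          level = 0
          while img.width >= tile and img.height >= tile:
              for ty in range(0, img.height, tile):
                  for tx in range(0, img.width, tile):
                      box = (tx, ty, min(tx + tile, img.width),
                             min(ty + tile, img.height))
                      img.crop(box).save(f"{out_prefix}_L{level}_{tx}_{ty}.jpg")
              img = img.resize((img.width // 2, img.height // 2))  # next level
              level += 1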

  13. Adolescent sexuality education: An appraisal of some scalable ...

    African Journals Online (AJOL)

    Adolescent sexuality education: An appraisal of some scalable interventions for the Nigerian context. VC Pam. Abstract. Most issues around sexual intercourse are highly sensitive topics in Nigeria. Despite the disturbingly high adolescent HIV prevalence and teenage pregnancy rate in Nigeria, sexuality education is ...

  14. Facile and Scalable Synthesis of Zn3V2O7(OH)2·2H2O Microflowers as a High-Performance Anode for Lithium-Ion Batteries.

    Science.gov (United States)

    Yan, Haowu; Luo, Yanzhu; Xu, Xu; He, Liang; Tan, Jian; Li, Zhaohuai; Hong, Xufeng; He, Pan; Mai, Liqiang

    2017-08-23

    The employment of nanomaterials and nanotechnologies has been widely acknowledged as an effective strategy to enhance the electrochemical performance of lithium-ion batteries (LIBs). However, how to produce nanomaterials effectively on a large scale remains a challenge. Here, highly crystallized Zn₃V₂O₇(OH)₂·2H₂O is synthesized on a large scale through a simple liquid phase method at room temperature, which is easily realized in industry. Through suppressing the reaction dynamics with ethylene glycol, a uniform morphology of microflowers is obtained. Owing to the multiple reaction mechanisms (insertion, conversion, and alloying) during Li insertion/extraction, the prepared electrode delivers a remarkable specific capacity of 1287 mA h g⁻¹ at 0.2 A g⁻¹ after 120 cycles. In addition, a high capacity of 298 mA h g⁻¹ can be obtained at 5 A g⁻¹ after 1400 cycles. The excellent electrochemical performance can be attributed to the high crystallinity and large specific surface area of the active materials. The smaller particles after cycling could facilitate lithium-ion transport and provide more reaction sites. The facile and scalable synthesis process and excellent electrochemical performance make this material a highly promising anode for commercial LIBs.

  15. Quality Scalability Compression on Single-Loop Solution in HEVC

    Directory of Open Access Journals (Sweden)

    Mengmeng Zhang

    2014-01-01

    This paper proposes a quality scalable extension design for the upcoming high efficiency video coding (HEVC) standard. In the proposed design, the single-loop decoder solution is extended into the proposed scalable scenario. A novel interlayer intra/interprediction is added to reduce the number of bits needed for representation by exploiting the correlation between coding layers. The experimental results indicate that an average Bjøntegaard delta rate decrease of 20.50% can be gained compared with simulcast encoding. The proposed technique achieved a 47.98% Bjøntegaard delta rate reduction compared with the scalable video coding extension of H.264/AVC. Consequently, significant rate savings confirm that the proposed method achieves better performance.

  16. Scalable Techniques for Formal Verification

    CERN Document Server

    Ray, Sandip

    2010-01-01

    This book presents state-of-the-art formal verification approaches that seamlessly integrate different formal verification methods within a single logical foundation. It should benefit researchers and practitioners looking to get a broad overview of the spectrum of formal verification techniques, as well as approaches to combining such techniques within a single framework. Coverage includes a range of case studies showing how such combination is fruitful in developing a scalable verification methodology for industrial designs. This book outlines both theoretical and practical issues...

  17. CloudTPS: Scalable Transactions for Web Applications in the Cloud

    NARCIS (Netherlands)

    Zhou, W.; Pierre, G.E.O.; Chi, C.-H.

    2010-01-01

    NoSQL Cloud data services provide scalability and high availability properties for web applications but at the same time they sacrifice data consistency. However, many applications cannot afford any data inconsistency. CloudTPS is a scalable transaction manager to allow cloud database services to

  18. Scalable high-performance algorithm for the simulation of exciton-dynamics. Application to the light harvesting complex II in the presence of resonant vibrational modes

    DEFF Research Database (Denmark)

    Kreisbeck, Christoph; Kramer, Tobias; Aspuru-Guzik, Alán

    2014-01-01

    ...high-performance many-core platforms using the Open Computing Language (OpenCL). For the light-harvesting complex II (LHC II) found in spinach, the HEOM results deviate from the predictions of approximate theories and clarify the time scale of the transfer process. We investigate the impact of resonantly...

  19. Scalable on-chip quantum state tomography

    Science.gov (United States)

    Titchener, James G.; Gräfe, Markus; Heilmann, René; Solntsev, Alexander S.; Szameit, Alexander; Sukhorukov, Andrey A.

    2018-03-01

    Quantum information systems are on a path to vastly exceed the complexity of any classical device. The number of entangled qubits in quantum devices is rapidly increasing, and the information required to fully describe these systems scales exponentially with qubit number. This scaling is the key benefit of quantum systems; however, it also presents a severe challenge. To characterize such systems typically requires an exponentially long sequence of different measurements, becoming highly resource demanding for large numbers of qubits. Here we propose and demonstrate a novel and scalable method for characterizing quantum systems based on expanding a multi-photon state to larger dimensionality. We establish that the complexity of this new measurement technique only scales linearly with the number of qubits, while providing a tomographically complete set of data without a need for reconfigurability. We experimentally demonstrate an integrated photonic chip capable of measuring two- and three-photon quantum states with statistical reconstruction fidelity of 99.71%.

  20. Scalable graphene aptasensors for drug quantification

    Science.gov (United States)

    Vishnubhotla, Ramya; Ping, Jinglei; Gao, Zhaoli; Lee, Abigail; Saouaf, Olivia; Vrudhula, Amey; Johnson, A. T. Charlie

    2017-11-01

    Simpler and more rapid approaches for therapeutic drug-level monitoring are highly desirable to enable use at the point-of-care. We have developed an all-electronic approach for detection of the HIV drug tenofovir based on scalable fabrication of arrays of graphene field-effect transistors (GFETs) functionalized with a commercially available DNA aptamer. The shift in the Dirac voltage of the GFETs varied systematically with the concentration of tenofovir in deionized water, with a detection limit less than 1 ng/mL. Tests against a set of negative controls confirmed the specificity of the sensor response. This approach offers the potential for further development into a rapid and convenient point-of-care tool with clinically relevant performance.
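
    A hypothetical calibration sketch of how such a sensor could quantify an unknown sample (the Hill/Langmuir response model and every number below are invented for illustration):

      import numpy as np
      from scipy.optimize import curve_fit

      def hill(c, dv_max, kd):                 # binding saturates with c
          return dv_max * c / (kd + c)

      conc = np.array([1, 5, 10, 50, 100.0])   # ng/mL standards (made up)
      dvolt = np.array([4, 15, 24, 48, 58.0])  # Dirac-voltage shifts, mV

      (dv_max, kd), _ = curve_fit(hill, conc, dvolt, p0=[60.0, 20.0])
      dv_unknown = 30.0                        # shift measured for a sample
      c_est = kd * dv_unknown / (dv_max - dv_unknown)   # inverted Hill curve
      print(f"estimated concentration: {c_est:.1f} ng/mL")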

  1. Scalable Performance Measurement and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Gamblin, Todd [Univ. of North Carolina, Chapel Hill, NC (United States)

    2009-01-01

    Concurrency levels in large-scale, distributed-memory supercomputers are rising exponentially. Modern machines may contain 100,000 or more microprocessor cores, and the largest of these, IBM's Blue Gene/L, contains over 200,000 cores. Future systems are expected to support millions of concurrent tasks. In this dissertation, we focus on efficient techniques for measuring and analyzing the performance of applications running on very large parallel machines. Tuning the performance of large-scale applications can be a subtle and time-consuming task because application developers must measure and interpret data from many independent processes. While the volume of the raw data scales linearly with the number of tasks in the running system, the number of tasks is growing exponentially, and data for even small systems quickly becomes unmanageable. Transporting performance data from so many processes over a network can perturb application performance and make measurements inaccurate, and storing such data would require a prohibitive amount of space. Moreover, even if it were stored, analyzing the data would be extremely time-consuming. In this dissertation, we present novel methods for reducing performance data volume. The first draws on multi-scale wavelet techniques from signal processing to compress systemwide, time-varying load-balance data. The second uses statistical sampling to select a small subset of running processes to generate low-volume traces. A third approach combines sampling and wavelet compression to stratify performance data adaptively at run-time and to reduce further the cost of sampled tracing. We have integrated these approaches into Libra, a toolset for scalable load-balance analysis. We present Libra and show how it can be used to analyze data from large scientific applications scalably.
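
    As a hedged sketch of the wavelet-compression idea applied to one time-varying trace (PyWavelets is a stand-in; Libra's pipeline additionally stratifies and samples across processes):

      import numpy as np
      import pywt

      rng = np.random.default_rng(0)
      load = np.sin(np.linspace(0, 8 * np.pi, 1024)) + 0.05 * rng.normal(size=1024)

      coeffs = pywt.wavedec(load, "db4", level=5)       # multi-scale transform
      flat = np.concatenate(coeffs)
      thresh = np.quantile(np.abs(flat), 0.95)          # keep top 5% of coeffs
      coeffs = [np.where(np.abs(c) >= thresh, c, 0.0) for c in coeffs]

      recon = pywt.waverec(coeffs, "db4")               # lossy reconstruction
      print("rel. error:", np.linalg.norm(recon - load) / np.linalg.norm(load))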

  2. Highly efficient blue organic light emitting device using indium-free transparent anode Ga:ZnO with scalability for large area coating

    International Nuclear Information System (INIS)

    Wang Liang; Matson, Dean W.; Polikarpov, Evgueni; Swensen, James S.; Bonham, Charles C.; Cosimbescu, Lelia; Gaspar, Daniel J.; Padmaperuma, Asanga B.; Berry, Joseph J.; Ginley, David S.

    2010-01-01

    Organic light emitting devices have been achieved with an indium-free transparent anode, Ga-doped ZnO (GZO). A large-area coating technique (RF magnetron sputtering) was used to deposit the GZO films onto glass. The respective organic light emitting devices exhibited an operational voltage of 3.7 V, an external quantum efficiency of 17%, and a power efficiency of 39 lm/W at a current density of 1 mA/cm². These parameters are well within acceptable standards for blue OLEDs to generate white light with high enough brightness for general lighting applications. It is expected that high-efficiency, long-lifetime, large-area, and cost-effective white OLEDs can be made with these indium-free anode materials.

  3. A scalable parallel open architecture data acquisition system for low to high rate experiments, test beams and all SSC [Superconducting Super Collider] detectors

    International Nuclear Information System (INIS)

    Barsotti, E.; Booth, A.; Bowden, M.; Swoboda, C.; Lockyer, N.; VanBerg, R.

    1989-12-01

    A new era of high-energy physics research is beginning, requiring accelerators with much higher luminosities and interaction rates in order to discover new elementary particles. As a consequence, data rates from the detector orders of magnitude higher, and online processing power well beyond the capabilities of current high energy physics data acquisition systems, are required. This paper describes a new data acquisition system architecture which draws heavily from the communications industry, is totally parallel (i.e., without any bottlenecks), is capable of data rates of hundreds of GigaBytes per second from the detector and into an array of online processors (i.e., a processor farm), and uses an open systems architecture to guarantee compatibility with future commercially available online processor farms. The main features of the system architecture are standard interface ICs to detector subsystems wherever possible, fiber optic digital data transmission from the near-detector electronics, a self-routing parallel event builder, and the use of industry-supported and high-level-language-programmable processors in the proposed BCD system for both triggers and online filters. A brief status report of an ongoing project at Fermilab to build the self-routing parallel event builder is also given. 3 figs., 1 tab

  4. Development of a rapid high-efficiency scalable process for acetylated Sus scrofa cationic trypsin production from Escherichia coli inclusion bodies.

    Science.gov (United States)

    Zhao, Mingzhi; Wu, Feilin; Xu, Ping

    2015-12-01

    Trypsin is one of the most important enzymatic tools in proteomics and biopharmaceutical studies. Here, we describe its complete recombinant expression and purification starting from a trypsinogen expression vector construct. The Sus scrofa cationic trypsin gene with a propeptide sequence was optimized according to Escherichia coli codon-usage bias and chemically synthesized. The gene was inserted into the pET-11c plasmid to yield an expression vector. Using high-density E. coli fed-batch fermentation, trypsinogen was expressed in inclusion bodies at 1.47 g/L. The inclusion bodies were refolded with a high yield of 36%. The purified trypsinogen was then activated to produce trypsin. To address stability problems, the trypsin thus produced was acetylated. The final product was obtained upon gel filtration. The final yield of acetylated trypsin was 182 mg/L from a 5-L fermenter. Our acetylated trypsin product demonstrated higher BAEE activity (30,100 BAEE units/mg) than a commercial product (9500 BAEE units/mg, Promega). It also demonstrated resistance to autolysis. This is the first report of production of acetylated recombinant trypsin that is stable and suitable for scale-up. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. A fast, high-endurance and scalable non-volatile memory device made from asymmetric Ta2O5-x/TaO2-x bilayer structures

    Science.gov (United States)

    Lee, Myoung-Jae; Lee, Chang Bum; Lee, Dongsoo; Lee, Seung Ryul; Chang, Man; Hur, Ji Hyun; Kim, Young-Bae; Kim, Chang-Jung; Seo, David H.; Seo, Sunae; Chung, U.-In; Yoo, In-Kyeong; Kim, Kinam

    2011-08-01

    Numerous candidates attempting to replace Si-based flash memory have failed for a variety of reasons over the years. Oxide-based resistance memory and the related memristor have succeeded in surpassing the specifications for a number of device requirements. However, a material or device structure that satisfies high-density, switching-speed, endurance, retention and most importantly power-consumption criteria has yet to be announced. In this work we demonstrate a TaOx-based asymmetric passive switching device with which we were able to localize resistance switching and satisfy all aforementioned requirements. In particular, the reduction of switching current drastically reduces power consumption and results in extreme cycling endurances of over 10¹². Along with the 10 ns switching times, this allows for possible applications to the working-memory space as well. Furthermore, by combining two such devices each with an intrinsic Schottky barrier we eliminate any need for a discrete transistor or diode in solving issues of stray leakage current paths in high-density crossbar arrays.

  6. Bright conjugated polymer nanoparticles containing a biodegradable shell produced at high yields and with tuneable optical properties by a scalable microfluidic device.

    Science.gov (United States)

    Abelha, T F; Phillips, T W; Bannock, J H; Nightingale, A M; Dreiss, C A; Kemal, E; Urbano, L; deMello, J C; Green, M; Dailey, L A

    2017-02-02

    This study compares the performance of a microfluidic technique and a conventional bulk method to manufacture conjugated polymer nanoparticles (CPNs) embedded within a biodegradable poly(ethylene glycol) methyl ether-block-poly(lactide-co-glycolide) (PEG5K-PLGA55K) matrix. The influence of PEG5K-PLGA55K and the conjugated polymers cyano-substituted poly(p-phenylene vinylene) (CN-PPV) and poly(9,9-dioctylfluorene-2,1,3-benzothiadiazole) (F8BT) on the physicochemical properties of the CPNs was also evaluated. Both techniques enabled CPN production with high end-product yields (∼70-95%). However, while the bulk technique (solvent displacement) under optimal conditions generated small nanoparticles (∼70-100 nm) with similar optical properties (quantum yields ∼35%), the microfluidic approach produced larger CPNs (140-260 nm) with significantly superior quantum yields (49-55%) and tailored emission spectra. CPNs containing CN-PPV showed smaller size distributions and tuneable emission spectra compared to F8BT systems prepared under the same conditions. The presence of PEG5K-PLGA55K did not affect the size or optical properties of the CPNs and provided a neutral net electric charge, as is often required for biomedical applications. The microfluidic flow-based device was successfully used for the continuous preparation of CPNs over a 24-hour period. On the basis of the results presented here, it can be concluded that the microfluidic device used in this study can be used to optimize the production of bright CPNs with tailored properties with good reproducibility.

  7. Final Project Report: DOE Award FG02-04ER25606 Overlay Transit Networking for Scalable, High Performance Data Communication across Heterogeneous Infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Beck, Micah; Moore, Terry

    2007-08-31

    As the flood of data associated with leading edge computational science continues to escalate, the challenge of supporting the distributed collaborations that are now characteristic of it becomes increasingly daunting. The chief obstacles to progress on this front lie less in the synchronous elements of collaboration, which have been reasonably well addressed by new global high performance networks, than in the asynchronous elements, where appropriate shared storage infrastructure seems to be lacking. The recent report from the Department of Energy on the emerging 'data management challenge' captures the multidimensional nature of this problem succinctly: Data inevitably needs to be buffered, for periods ranging from seconds to weeks, in order to be controlled as it moves through the distributed and collaborative research process. To meet the diverse and changing set of application needs that different research communities have, large amounts of non-archival storage are required for transitory buffering, and it needs to be widely dispersed, easily available, and configured to maximize flexibility of use. In today's grid fabric, however, massive storage is mostly concentrated in data centers, available only to those with user accounts and membership in the appropriate virtual organizations, allocated as if its usage were non-transitory, and encapsulated behind legacy interfaces that inhibit the flexibility of use and scheduling. This situation severely restricts the ability of application communities to access and schedule usable storage where and when they need to in order to make their workflow more productive. (p.69f) One possible strategy to deal with this problem lies in creating a storage infrastructure that can be universally shared because it provides only the most generic of asynchronous services. Different user communities then define higher level services as necessary to meet their needs. One model of such a service is a Storage Network

  8. Requirements for Scalable Access Control and Security Management Architectures

    National Research Council Canada - National Science Library

    Keromytis, Angelos D; Smith, Jonathan M

    2005-01-01

    Maximizing local autonomy has led to a scalable Internet. Scalability and the capacity for distributed control have unfortunately not extended well to resource access control policies and mechanisms...

  9. Adaptive format conversion for scalable video coding

    Science.gov (United States)

    Wan, Wade K.; Lim, Jae S.

    2001-12-01

    The enhancement layer in many scalable coding algorithms is composed of residual coding information. There is another type of information that can be transmitted instead of (or in addition to) residual coding. Since the encoder has access to the original sequence, it can utilize adaptive format conversion (AFC) to generate the enhancement layer and transmit the different format conversion methods as enhancement data. This paper investigates the use of adaptive format conversion information as enhancement data in scalable video coding. Experimental results are shown for a wide range of base layer qualities and enhancement bitrates to determine when AFC can improve video scalability. Since the parameters needed for AFC are small compared to residual coding, AFC can provide video scalability at low enhancement layer bitrates that are not possible with residual coding. In addition, AFC can also be used in addition to residual coding to improve video scalability at higher enhancement layer bitrates. Adaptive format conversion has not been studied in detail, but many scalable applications may benefit from it. An example of an application that AFC is well-suited for is the migration path for digital television where AFC can provide immediate video scalability as well as assist future migrations.

  10. Architectural Techniques to Enable Reliable and Scalable Memory Systems

    OpenAIRE

    Nair, Prashant J.

    2017-01-01

    High capacity and scalable memory systems play a vital role in enabling our desktops, smartphones, and pervasive technologies like Internet of Things (IoT). Unfortunately, memory systems are becoming increasingly prone to faults. This is because we rely on technology scaling to improve memory density, and at small feature sizes, memory cells tend to break easily. Today, memory reliability is seen as the key impediment towards using high-density devices, adopting new technologies, and even bui...

  11. ALADDIN - enhancing applicability and scalability

    International Nuclear Information System (INIS)

    Roverso, Davide

    2001-02-01

    The ALADDIN project aims at the study and development of flexible, accurate, and reliable techniques and principles for computerised event classification and fault diagnosis for complex machinery and industrial processes. The main focus of the project is on advanced numerical techniques, such as wavelets, and empirical modelling with neural networks. This document reports on recent important advancements, which significantly widen the practical applicability of the developed principles, both in terms of flexibility of use, and in terms of scalability to large problem domains. In particular, two novel techniques are here described. The first, which we call Wavelet On- Line Pre-processing (WOLP), is aimed at extracting, on-line, relevant dynamic features from the process data streams. This technique allows a system a greater flexibility in detecting and processing transients at a range of different time scales. The second technique, which we call Autonomous Recursive Task Decomposition (ARTD), is aimed at tackling the problem of constructing a classifier able to discriminate among a large number of different event/fault classes, which is often the case when the application domain is a complex industrial process. ARTD also allows for incremental application development (i.e. the incremental addition of new classes to an existing classifier, without the need of retraining the entire system), and for simplified application maintenance. The description of these novel techniques is complemented by reports of quantitative experiments that show in practice the extent of these improvements. (Author)
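
    To make the flavour of the first technique concrete, the sketch below extracts multi-scale wavelet features from a sliding window over a signal. It is an illustration of WOLP-style on-line pre-processing only, not the ALADDIN implementation; it assumes the PyWavelets (pywt) package, and the window and wavelet choices are arbitrary.

```python
# Sketch of WOLP-style multi-scale feature extraction (illustrative only,
# not the ALADDIN implementation). Assumes PyWavelets (pywt) is installed.
import numpy as np
import pywt

def wavelet_features(window, wavelet="db4", level=3):
    """Summarize one window of a process signal at several time scales.

    Returns the energy of the wavelet coefficients at each decomposition
    level, a compact feature vector for a downstream classifier.
    """
    coeffs = pywt.wavedec(window, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

def sliding_features(signal, window_size=256, step=64):
    """Apply the extractor over a stream, as an on-line pre-processor would."""
    return np.array([
        wavelet_features(signal[i:i + window_size])
        for i in range(0, len(signal) - window_size + 1, step)
    ])

# Example: a slow drift plus a fast transient yield different level energies,
# which is what lets transients at different time scales be detected.
t = np.linspace(0, 1, 1024)
signal = np.sin(2 * np.pi * 3 * t) + (t > 0.5) * np.sin(2 * np.pi * 60 * t)
print(sliding_features(signal).shape)
```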

  12. Fast and scalable inequality joins

    KAUST Repository

    Khayyat, Zuhair

    2016-09-07

    Inequality joins, which join relations using inequality conditions, are used in various applications. Optimizing joins has been the subject of intensive research, ranging from efficient join algorithms such as sort-merge join, to the use of efficient indices such as the B+-tree, R-tree and Bitmap. However, inequality joins have received little attention and queries containing such joins are notably very slow. In this paper, we introduce fast inequality join algorithms based on sorted arrays and space-efficient bit-arrays. We further introduce a simple method to estimate the selectivity of inequality joins, which is then used to optimize multiple-predicate queries and multi-way joins. Moreover, we study an incremental inequality join algorithm to handle scenarios where data keeps changing. We have implemented a centralized version of these algorithms on top of PostgreSQL, a distributed version on top of Spark SQL, and an existing data cleaning system, Nadeef. By comparing our algorithms against well-known optimization techniques for inequality joins, we show our solution is more scalable and several orders of magnitude faster. © 2016 Springer-Verlag Berlin Heidelberg
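
    As an illustration of the sorted-array idea (the paper's full IEJoin additionally uses bit-arrays and handles multiple predicates), a minimal single-predicate inequality join can be written as follows; the relation and attribute names are made up.

```python
# Minimal sketch of a sort-based inequality join. Joins r from R with s
# from S where r.a < s.b, in O(|R| log|R| + |S| log|R| + output) time,
# instead of the naive O(|R| * |S|) nested loop.
import bisect

def inequality_join(R, S):
    """Return all pairs (r, s) with r['a'] < s['b']."""
    # Sort R once on the join attribute.
    R_sorted = sorted(R, key=lambda r: r["a"])
    keys = [r["a"] for r in R_sorted]
    result = []
    for s in S:
        # All tuples of R strictly below s['b'] qualify.
        hi = bisect.bisect_left(keys, s["b"])
        result.extend((R_sorted[i], s) for i in range(hi))
    return result

R = [{"a": 1}, {"a": 5}, {"a": 9}]
S = [{"b": 6}, {"b": 2}]
print(len(inequality_join(R, S)))  # pairs: (1,6), (5,6), (1,2) -> 3
```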

  13. Generic algorithms for high performance scalable geocomputing

    Science.gov (United States)

    de Jong, Kor; Schmitz, Oliver; Karssenberg, Derek

    2016-04-01

    During the last decade, the characteristics of computing hardware have changed a lot. For example, instead of a single general purpose CPU core, personal computers nowadays contain multiple cores per CPU and often general purpose accelerators, like GPUs. Additionally, compute nodes are often grouped together to form clusters or a supercomputer, providing enormous amounts of compute power. For existing earth simulation models to be able to use modern hardware platforms, their compute intensive parts must be rewritten. This can be a major undertaking and may involve many technical challenges. Compute tasks must be distributed over CPU cores, offloaded to hardware accelerators, or distributed to different compute nodes. And ideally, all of this should be done in such a way that the compute task scales well with the hardware resources. This presents two challenges: 1) how to make good use of all the compute resources and 2) how to make these compute resources available for developers of simulation models, who may not (want to) have the required technical background for distributing compute tasks. The first challenge requires the use of specialized technology (e.g. threads, OpenMP, MPI, OpenCL, CUDA). The second challenge requires the abstraction of the logic handling the distribution of compute tasks from the model-specific logic, hiding the technical details from the model developer. To assist the model developer, we are developing a C++ software library (called Fern) containing algorithms that can use all CPU cores available in a single compute node (distributing tasks over multiple compute nodes will be done at a later stage). The algorithms are grid-based (finite difference) and include local and spatial operations such as convolution filters. The algorithms handle distribution of the compute tasks to CPU cores internally. In the resulting model, the low-level details of how this is done are separated from the model-specific logic representing the modeled system. This contrasts with practices in which code for distributing compute tasks is mixed with model-specific code, and it results in a more maintainable model. For flexibility and efficiency, the algorithms are configurable at compile-time with respect to the following aspects: data type, value type, no-data handling, input value domain handling, and output value range handling. This makes the algorithms usable in very different contexts, without the need for making intrusive changes to existing models when using them. Applications that benefit from using the Fern library include the construction of forward simulation models in (global) hydrology (e.g. PCR-GLOBWB (Van Beek et al. 2011)), ecology, geomorphology, or land use change (e.g. PLUC (Verstegen et al. 2014)) and manipulation of hyper-resolution land surface data such as digital elevation models and remote sensing data. Using the Fern library, we have also created an add-on to the PCRaster Python Framework (Karssenberg et al. 2010) allowing its users to speed up their spatio-temporal models, sometimes by changing just a single line of Python code in their model. In our presentation we will give an overview of the design of the algorithms, providing examples of different contexts where they can be used to replace existing sequential algorithms, including the PCRaster environmental modeling software (www.pcraster.eu). We will show how the algorithms can be configured to behave differently when necessary. References: Karssenberg, D., Schmitz, O., Salamon, P., De Jong, K. and Bierkens, M.F.P., 2010, A software framework for construction of process-based stochastic spatio-temporal models and data assimilation. Environmental Modelling & Software, 25, pp. 489-502. Best Paper Award 2010: Software and Decision Support. Van Beek, L. P. H., Y. Wada, and M. F. P. Bierkens, 2011. Global monthly water stress: 1. Water balance and water availability. Water Resources Research, 47. Verstegen, J. A., D. Karssenberg, F. van der Hilst, and A. P. C. Faaij, 2014. Identifying a land use change cellular automaton by Bayesian data assimilation. Environmental Modelling & Software, 53, pp. 121-136.
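
    The separation the authors describe, with distribution logic hidden behind a generic algorithm, can be sketched in a few lines. Fern itself is C++; this is a Python analogue under assumed names, using a thread pool to split a local operation over row blocks of a raster.

```python
# Python analogue of the Fern idea (the real library is C++): a generic
# local operation whose multi-core distribution is hidden from the model
# developer.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def _apply_rows(func, grid, row_slice):
    return row_slice, func(grid[row_slice])

def parallel_local_op(func, grid, n_workers=4):
    """Apply an element-wise operation to a raster, split over CPU cores.

    The model developer supplies only `func`; chunking and scheduling are
    handled here, mirroring how Fern separates distribution from model logic.
    """
    bounds = np.linspace(0, grid.shape[0], n_workers + 1, dtype=int)
    out = np.empty_like(grid)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        futures = [pool.submit(_apply_rows, func, grid, slice(lo, hi))
                   for lo, hi in zip(bounds[:-1], bounds[1:])]
        for fut in futures:
            rows, result = fut.result()
            out[rows] = result
    return out

dem = np.random.rand(1000, 1000)           # e.g. a digital elevation model
roughness_proxy = parallel_local_op(np.sqrt, dem)
```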

  14. Evaluation of 3D printed anatomically scalable transfemoral prosthetic knee.

    Science.gov (United States)

    Ramakrishnan, Tyagi; Schlafly, Millicent; Reed, Kyle B

    2017-07-01

    This case study compares a transfemoral amputee's gait while using the existing Ossur Total Knee 2000 and our novel 3D printed anatomically scalable transfemoral prosthetic knee. The anatomically scalable transfemoral prosthetic knee is 3D printed out of a carbon-fiber and nylon composite that has a gear-mesh coupling with a hard-stop weight-actuated locking mechanism aided by a cross-linked four-bar spring mechanism. This design can be scaled using anatomical dimensions of a human femur and tibia to have a unique fit for each user. The transfemoral amputee who was tested is high functioning and walked on the Computer Assisted Rehabilitation Environment (CAREN) at a self-selected pace. The motion capture and force data that was collected showed that there were distinct differences in the gait dynamics. The data was used to perform the Combined Gait Asymmetry Metric (CGAM), where the scores revealed that the overall asymmetry of the gait on the Ossur Total Knee was more asymmetric than the anatomically scalable transfemoral prosthetic knee. The anatomically scalable transfemoral prosthetic knee had higher peak knee flexion that caused a large step time asymmetry. This made walking on the anatomically scalable transfemoral prosthetic knee more strenuous due to the compensatory movements in adapting to the different dynamics. This can be overcome by tuning the cross-linked spring mechanism to emulate the dynamics of the subject better. The subject stated that the knee would be good for daily use and has the potential to be adapted as a running knee.

  15. Scalable single point power extraction for compact mobile and stand-alone solar harvesting power sources based on fully printed organic photovoltaic modules and efficient high voltage DC/DC conversion

    DEFF Research Database (Denmark)

    Garcia Valverde, Rafael; Villarejo, José A.; Hösel, Markus

    2015-01-01

    (AM1.5G, 1000 W m−2). As a demonstration we present a scalable fully integrated and compact power unit for mobile applications comprising solar energy harvesting OPV modules, power conversion and storage. Applications possible include electrical charging of mobile devices, illumination using LED lamps...

  16. Epitaxial Growth of Two-Dimensional Layered Transition-Metal Dichalcogenides: Growth Mechanism, Controllability, and Scalability

    KAUST Repository

    Li, Henan; Li, Ying; Aljarb, Areej; Shi, Yumeng; Li, Lain-Jong

    2017-01-01

    to generate high-quality TMDC layers with scalable size, controllable thickness, and excellent electronic properties suitable for both technological applications and fundamental sciences. The capability to precisely engineer 2D materials by chemical approaches

  17. Study on scalable Coulombic degradation for estimating the lifetime of organic light-emitting devices

    International Nuclear Information System (INIS)

    Zhang Wenwen; Hou Xun; Wu Zhaoxin; Liang Shixiong; Jiao Bo; Zhang Xinwen; Wang Dawei; Chen Zhijian; Gong Qihuang

    2011-01-01

    The luminance decays of organic light-emitting diodes (OLEDs) are investigated with initial luminances of 1000 to 20,000 cd m−2 through a scalable Coulombic degradation and a stretched exponential decay. We found that the lifetime estimated by scalable Coulombic degradation deviates from the experimental results when the OLEDs operate at high initial luminance. By measuring the temperature of the device during degradation, we found that higher device temperatures lead to instabilities of the organic materials in the devices, which is expected to account for the difference between the experimental results and the estimates from the scalable Coulombic degradation.
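
    For readers unfamiliar with the estimation technique: lifetime scaling with initial luminance is commonly written as t50 · L0^n ≈ const, so a half-life measured in an accelerated high-luminance test can be extrapolated to operating luminance. The sketch below uses illustrative numbers, not the paper's data; the deviation the authors report corresponds to this scaling breaking down when self-heating sets in.

```python
# The scalable Coulombic degradation idea in its common accelerated-lifetime
# form: t50 * L0^n ~= const, so a half-life measured at high luminance can
# be extrapolated to operating luminance. n is a device-specific
# acceleration factor; all values here are illustrative, not the paper's.
def extrapolate_half_life(t50_test, L_test, L_operate, n=1.7):
    """Estimate half-life at L_operate from a test at L_test (cd/m^2)."""
    return t50_test * (L_test / L_operate) ** n

# A device lasting 500 h at 5000 cd/m^2 projects to roughly 25,000 h at
# 500 cd/m^2 under this model -- unless self-heating at high luminance
# breaks the scaling, which is the deviation the paper reports.
print(extrapolate_half_life(500, 5000, 500))
```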

  18. On the Scalability of Time-predictable Chip-Multiprocessing

    DEFF Research Database (Denmark)

    Puffitsch, Wolfgang; Schoeberl, Martin

    2012-01-01

    Real-time systems need a time-predictable execution platform to be able to determine the worst-case execution time statically. In order to be time-predictable, several advanced processor features, such as out-of-order execution and other forms of speculation, have to be avoided. However, just using simple processors is not an option for embedded systems with high demands on computing power. In order to provide high performance and predictability we argue to use multiprocessor systems with a time-predictable memory interface. In this paper we present the scalability of a Java chip-multiprocessor system that is designed to be time-predictable. Adding time-predictable caches is mandatory to achieve scalability with a shared memory multi-processor system. As Java bytecode retains information about the nature of memory accesses, it is possible to implement a memory hierarchy that takes

  19. Resource-aware complexity scalability for mobile MPEG encoding

    NARCIS (Netherlands)

    Mietens, S.O.; With, de P.H.N.; Hentschel, C.; Panchanatan, S.; Vasudev, B.

    2004-01-01

    Complexity scalability attempts to scale the required resources of an algorithm with the chosen quality settings, in order to broaden the application range. In this paper, we present complexity-scalable MPEG encoding in which the core processing modules are modified for scalability. Scalability is

  20. Scalable inference for stochastic block models

    KAUST Repository

    Peng, Chengbin

    2017-12-08

    Community detection in graphs is widely used in social and biological networks, and the stochastic block model is a powerful probabilistic tool for describing graphs with community structures. However, in the era of "big data," traditional inference algorithms for such a model are increasingly limited due to their high time complexity and poor scalability. In this paper, we propose a multi-stage maximum likelihood approach to recover the latent parameters of the stochastic block model, in time linear with respect to the number of edges. We also propose a parallel algorithm based on message passing. Our algorithm can overlap communication and computation, providing speedup without compromising accuracy as the number of processors grows. For example, to process a real-world graph with about 1.3 million nodes and 10 million edges, our algorithm requires about 6 seconds on 64 cores of a contemporary commodity Linux cluster. Experiments demonstrate that the algorithm can produce high quality results on both benchmark and real-world graphs. An example of finding more meaningful communities is illustrated consequently in comparison with a popular modularity maximization algorithm.
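
    A minimal sketch of the kind of edge-linear likelihood computation involved: given a candidate community assignment, the maximum-likelihood block connection probabilities of a Bernoulli SBM follow from one pass over the edge list. The multi-stage recovery and message-passing parallelism of the paper are not reproduced here.

```python
# Sketch of the maximum-likelihood step for a stochastic block model given
# a candidate community assignment (illustrative; the paper's multi-stage
# and message-passing machinery is omitted). Runs in time linear in edges.
from collections import defaultdict

def sbm_block_probabilities(edges, assignment, sizes):
    """Estimate p[(r, s)], the edge probability between blocks r and s."""
    counts = defaultdict(int)
    for u, v in edges:                      # one pass over the edge list
        r, s = sorted((assignment[u], assignment[v]))
        counts[(r, s)] += 1
    p = {}
    for (r, s), m in counts.items():
        pairs = sizes[r] * sizes[s] if r != s else sizes[r] * (sizes[r] - 1) / 2
        p[(r, s)] = m / pairs               # MLE under the Bernoulli SBM
    return p

edges = [(0, 1), (1, 2), (0, 2), (3, 4), (2, 3)]
assignment = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1}
print(sbm_block_probabilities(edges, assignment, {0: 3, 1: 2}))
# {(0, 0): 1.0, (1, 1): 1.0, (0, 1): 0.1666...}
```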

  1. CODA: A scalable, distributed data acquisition system

    International Nuclear Information System (INIS)

    Watson, W.A. III; Chen, J.; Heyes, G.; Jastrzembski, E.; Quarrie, D.

    1994-01-01

    A new data acquisition system has been designed for physics experiments scheduled to run at CEBAF starting in the summer of 1994. This system runs on Unix workstations connected via ethernet, FDDI, or other network hardware to multiple intelligent front end crates -- VME, CAMAC or FASTBUS. CAMAC crates may either contain intelligent processors, or may be interfaced to VME. The system is modular and scalable, from a single front end crate and one workstation linked by ethernet, to as many as 32 clusters of front end crates ultimately connected via a high speed network to a set of analysis workstations. The system includes an extensible, device independent slow controls package with drivers for CAMAC, VME, and high voltage crates, as well as a link to CEBAF accelerator controls. All distributed processes are managed by standard remote procedure calls propagating change-of-state requests, or reading and writing program variables. Custom components may be easily integrated. The system is portable to any front end processor running the VxWorks real-time kernel, and to most workstations supplying a few standard facilities such as rsh and X-windows, and Motif and socket libraries. Sample implementations exist for 2 Unix workstation families connected via ethernet or FDDI to VME (with interfaces to FASTBUS or CAMAC), and via ethernet to FASTBUS or CAMAC

  2. Scalable, full-colour and controllable chromotropic plasmonic printing

    Science.gov (United States)

    Xue, Jiancai; Zhou, Zhang-Kai; Wei, Zhiqiang; Su, Rongbin; Lai, Juan; Li, Juntao; Li, Chao; Zhang, Tengwei; Wang, Xue-Hua

    2015-01-01

    Plasmonic colour printing has drawn wide attention as a promising candidate for the next-generation colour-printing technology. However, an efficient approach to realize full colour and scalable fabrication is still lacking, which prevents plasmonic colour printing from practical applications. Here we present a scalable and full-colour plasmonic printing approach by combining conjugate twin-phase modulation with a plasmonic broadband absorber. More importantly, our approach also demonstrates controllable chromotropic capability, that is, the ability to reversibly transform colours. This chromotropic capability affords enormous potential in building functionalized prints for anticounterfeiting, special labels, and high-density data encryption storage. With such excellent performance in functional colour applications, this colour-printing approach could pave the way for plasmonic colour printing in real-world commercial utilization. PMID:26567803

  3. Scalable Combinatorial Tools for Health Disparities Research

    Directory of Open Access Journals (Sweden)

    Michael A. Langston

    2014-10-01

    Full Text Available Despite staggering investments made in unraveling the human genome, current estimates suggest that as much as 90% of the variance in cancer and chronic diseases can be attributed to factors outside an individual’s genetic endowment, particularly to environmental exposures experienced across his or her life course. New analytical approaches are clearly required as investigators turn to complicated systems theory and ecological, place-based and life-history perspectives in order to understand more clearly the relationships between social determinants, environmental exposures and health disparities. While traditional data analysis techniques remain foundational to health disparities research, they are easily overwhelmed by the ever-increasing size and heterogeneity of available data needed to illuminate latent gene x environment interactions. This has prompted the adaptation and application of scalable combinatorial methods, many from genome science research, to the study of population health. Most of these powerful tools are algorithmically sophisticated, highly automated and mathematically abstract. Their utility motivates the main theme of this paper, which is to describe real applications of innovative transdisciplinary models and analyses in an effort to help move the research community closer toward identifying the causal mechanisms and associated environmental contexts underlying health disparities. The public health exposome is used as a contemporary focus for addressing the complex nature of this subject.

  4. Scalable Coverage Maintenance for Dense Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Jun Lu

    2007-06-01

    Full Text Available Owing to numerous potential applications, wireless sensor networks have been attracting significant research effort recently. The critical challenge that wireless sensor networks often face is to sustain long-term operation on limited battery energy. Coverage maintenance schemes can effectively prolong network lifetime by selecting and employing a subset of sensors in the network to provide sufficient sensing coverage over a target region. We envision future wireless sensor networks composed of a vast number of miniaturized sensors in exceedingly high density. Therefore, the key issue of coverage maintenance for future sensor networks is the scalability to sensor deployment density. In this paper, we propose a novel coverage maintenance scheme, scalable coverage maintenance (SCOM, which is scalable to sensor deployment density in terms of communication overhead (i.e., number of transmitted and received beacons and computational complexity (i.e., time and space complexity. In addition, SCOM achieves high energy efficiency and load balancing over different sensors. We have validated our claims through both analysis and simulations.
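
    The core eligibility test behind such schemes can be sketched as follows: a sensor may sleep if every point in its sensing disk is also covered by an active neighbor. The grid approximation and parameters are illustrative, not SCOM's exact off-duty rule.

```python
# Sketch of the core test behind coverage maintenance: a sensor is
# redundant if every point it senses is also sensed by an active neighbor.
# Grid resolution and radii are illustrative, not SCOM's exact rule.
import math

def covers(sensor, point, r_sense):
    return math.dist(sensor, point) <= r_sense

def is_redundant(sensor, neighbors, r_sense, grid_step=0.1):
    """Check the sensor's disk on a grid; True if neighbors cover all of it."""
    x0, y0 = sensor
    steps = int(2 * r_sense / grid_step) + 1
    for i in range(steps):
        for j in range(steps):
            p = (x0 - r_sense + i * grid_step, y0 - r_sense + j * grid_step)
            if covers(sensor, p, r_sense) and \
               not any(covers(n, p, r_sense) for n in neighbors):
                return False
    return True

# False: two slightly offset neighbors leave the rim of the disk uncovered,
# so this sensor must stay awake.
print(is_redundant((0.0, 0.0), [(0.05, 0.0), (-0.05, 0.0)], r_sense=1.0))
```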

  5. SuperLU{_}DIST: A scalable distributed-memory sparse direct solver for unsymmetric linear systems

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xiaoye S.; Demmel, James W.

    2002-03-27

    In this paper, we present the main algorithmic features in the software package SuperLU{_}DIST, a distributed-memory sparse direct solver for large sets of linear equations. We give in detail our parallelization strategies, with focus on scalability issues, and demonstrate the parallel performance and scalability on current machines. The solver is based on sparse Gaussian elimination, with an innovative static pivoting strategy proposed earlier by the authors. The main advantage of static pivoting over classical partial pivoting is that it permits a priori determination of data structures and communication pattern for sparse Gaussian elimination, which makes it more scalable on distributed memory machines. Based on this a priori knowledge, we designed highly parallel and scalable algorithms for both LU decomposition and triangular solve and we show that they are suitable for large-scale distributed memory machines.
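
    SciPy wraps the serial SuperLU, which illustrates the same factor-then-solve workflow that SuperLU_DIST distributes across memory; the sketch below does not exercise the distributed solver or its static pivoting.

```python
# The serial SuperLU that SciPy wraps, showing the LU-factorization workflow
# that SuperLU_DIST parallelizes (this sketch does not use the distributed
# solver or its static pivoting strategy).
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

# A small unsymmetric sparse system A x = b.
A = csc_matrix(np.array([[4.0, 1.0, 0.0],
                         [2.0, 5.0, 1.0],
                         [0.0, 3.0, 6.0]]))
b = np.array([1.0, 2.0, 3.0])

lu = splu(A)          # sparse LU decomposition (SuperLU under the hood)
x = lu.solve(b)       # triangular solves reuse the factorization
print(np.allclose(A @ x, b))
```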

  6. JPEG2000-Compatible Scalable Scheme for Wavelet-Based Video Coding

    Directory of Open Access Journals (Sweden)

    Thomas André

    2007-03-01

    Full Text Available We present a simple yet efficient scalable scheme for wavelet-based video coders, able to provide on-demand spatial, temporal, and SNR scalability, and fully compatible with the still-image coding standard JPEG2000. Whereas hybrid video coders must undergo significant changes in order to support scalability, our coder only requires a specific wavelet filter for temporal analysis, as well as an adapted bit allocation procedure based on models of rate-distortion curves. Our study shows that scalably encoded sequences have the same or almost the same quality as nonscalably encoded ones, without a significant increase in complexity. Full compatibility with Motion JPEG2000, which is emerging as a serious candidate for the compression of high-definition video sequences, is ensured.

  8. The Concept of Business Model Scalability

    DEFF Research Database (Denmark)

    Lund, Morten; Nielsen, Christian

    2018-01-01

    Purpose: The purpose of the article is to define what scalable business models are. Central to the contemporary understanding of business models is the value proposition towards the customer and the hypotheses generated about delivering value to the customer which become a good foundation for a long-term profitable business. However, the main message of this article is that while providing a good value proposition may help the firm 'get by', the really successful businesses of today are those able to reach the sweet-spot of business model scalability. Design/Methodology/Approach: The article is based on a five-year longitudinal action research project of over 90 companies that participated in the International Center for Innovation project aimed at building 10 global network-based business models. Findings: This article introduces and discusses the term scalability from a company-level perspective.

  9. Declarative and Scalable Selection for Map Visualizations

    DEFF Research Database (Denmark)

    Kefaloukos, Pimin Konstantin Balic

    and is itself a source and cause of prolific data creation. This calls for scalable map processing techniques that can handle the data volume and which play well with the predominant data models on the Web. (4) Maps are now consumed around the clock by a global audience. While historical maps were singleuser......-defined constraints as well as custom objectives. The purpose of the language is to derive a target multi-scale database from a source database according to holistic specifications. (b) The Glossy SQL compiler allows Glossy SQL to be scalably executed in a spatial analytics system, such as a spatial relational......, there are indications that the method is scalable for databases that contain millions of records, especially if the target language of the compiler is substituted by a cluster-ready variant of SQL. While several realistic use cases for maps have been implemented in CVL, additional non-geographic data visualization uses...

  10. Space Situational Awareness Data Processing Scalability Utilizing Google Cloud Services

    Science.gov (United States)

    Greenly, D.; Duncan, M.; Wysack, J.; Flores, F.

    Space Situational Awareness (SSA) is a fundamental and critical component of current space operations. The term SSA encompasses the awareness, understanding and predictability of all objects in space. As the population of orbital space objects and debris increases, the number of collision avoidance maneuvers grows and prompts the need for accurate and timely process measures. The SSA mission continually evolves to near real-time assessment and analysis, demanding higher processing capabilities. By conventional methods, meeting these demands requires the integration of new hardware to keep pace with the growing complexity of maneuver planning algorithms. SpaceNav has implemented a highly scalable architecture that will track satellites and debris by utilizing powerful virtual machines on the Google Cloud Platform. SpaceNav algorithms for processing CDMs outpace conventional means. A robust processing environment for tracking data, collision avoidance maneuvers and various other aspects of SSA can be created and deleted on demand. The migration of SpaceNav tools and algorithms into the Google Cloud Platform will be discussed, along with the trials and tribulations involved. Information will be shared on how and why certain cloud products were used, as well as the integration techniques that were implemented. Key items to be presented are: 1. Scientific algorithms and SpaceNav tools integrated into a scalable architecture: a) Maneuver Planning; b) Parallel Processing; c) Monte Carlo Simulations; d) Optimization Algorithms; e) SW Application Development/Integration into the Google Cloud Platform. 2. Compute Engine Processing: a) Application Engine Automated Processing; b) Performance Testing and Performance Scalability; c) Cloud MySQL Databases and Database Scalability; d) Cloud Data Storage; e) Redundancy and Availability
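
    The scaling pattern behind items 1b and 1c is that Monte Carlo trials are independent, so throughput grows with the number of workers (or cloud VMs). A minimal local sketch with an invented toy miss-distance model, not SpaceNav's algorithms:

```python
# Sketch of the scaling pattern behind cloud-hosted Monte Carlo conjunction
# analysis: trials are independent, so throughput grows with worker count.
# The miss-distance model and all numbers are illustrative.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def collision_trials(args):
    n, miss_km, sigma_km, hard_body_km = args
    rng = np.random.default_rng()
    # Sample in-plane miss components with Gaussian position uncertainty.
    dx = rng.normal(miss_km, sigma_km, n)
    dy = rng.normal(0.0, sigma_km, n)
    return int(np.count_nonzero(np.hypot(dx, dy) < hard_body_km))

if __name__ == "__main__":
    workers, n_per = 8, 250_000
    job = (n_per, 0.4, 0.2, 0.05)  # 400 m miss, 200 m sigma, 50 m hard body
    with ProcessPoolExecutor(max_workers=workers) as pool:
        hits = sum(pool.map(collision_trials, [job] * workers))
    print("Pc ~", hits / (workers * n_per))
```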

  11. Superlinearly scalable noise robustness of redundant coupled dynamical systems.

    Science.gov (United States)

    Kohar, Vivek; Kia, Behnam; Lindner, John F; Ditto, William L

    2016-03-01

    We illustrate through theory and numerical simulations that redundant coupled dynamical systems can be extremely robust against local noise in comparison to uncoupled dynamical systems evolving in the same noisy environment. Previous studies have shown that the noise robustness of redundant coupled dynamical systems is linearly scalable and deviations due to noise can be minimized by increasing the number of coupled units. Here, we demonstrate that the noise robustness can actually be scaled superlinearly if some conditions are met and very high noise robustness can be realized with very few coupled units. We discuss these conditions and show that this superlinear scalability depends on the nonlinearity of the individual dynamical units. The phenomenon is demonstrated in discrete as well as continuous dynamical systems. This superlinear scalability not only provides us an opportunity to exploit the nonlinearity of physical systems without being bogged down by noise but may also help us in understanding the functional role of coupled redundancy found in many biological systems. Moreover, engineers can exploit superlinear noise suppression by starting a coupled system near (not necessarily at) the appropriate initial condition.

  12. Event metadata records as a testbed for scalable data mining

    International Nuclear Information System (INIS)

    Gemmeren, P van; Malon, D

    2010-01-01

    At a data rate of 200 hertz, event metadata records ('TAGs,' in ATLAS parlance) provide fertile grounds for development and evaluation of tools for scalable data mining. It is easy, of course, to apply HEP-specific selection or classification rules to event records and to label such an exercise 'data mining,' but our interest is different. Advanced statistical methods and tools such as classification, association rule mining, and cluster analysis are common outside the high energy physics community. These tools can prove useful, not for discovery physics, but for learning about our data, our detector, and our software. A fixed and relatively simple schema makes TAG export to other storage technologies such as HDF5 straightforward. This simplifies the task of exploiting very-large-scale parallel platforms such as Argonne National Laboratory's BlueGene/P, currently the largest supercomputer in the world for open science, in the development of scalable tools for data mining. Using a domain-neutral scientific data format may also enable us to take advantage of existing data mining components from other communities. There is, further, a substantial literature on the topic of one-pass algorithms and stream mining techniques, and such tools may be inserted naturally at various points in the event data processing and distribution chain. This paper describes early experience with event metadata records from ATLAS simulation and commissioning as a testbed for scalable data mining tool development and evaluation.

  13. The intergroup protocols: Scalable group communication for the internet

    Energy Technology Data Exchange (ETDEWEB)

    Berket, Karlo [Univ. of California, Santa Barbara, CA (United States)

    2000-12-04

    Reliable group ordered delivery of multicast messages in a distributed system is a useful service that simplifies the programming of distributed applications. Such a service helps to maintain the consistency of replicated information and to coordinate the activities of the various processes. With the increasing popularity of the Internet, there is an increasing interest in scaling the protocols that provide this service to the environment of the Internet. The InterGroup protocol suite, described in this dissertation, provides such a service, and is intended for the environment of the Internet with scalability to large numbers of nodes and high latency links. The InterGroup protocols approach the scalability problem from various directions. They redefine the meaning of group membership, allow voluntary membership changes, add a receiver-oriented selection of delivery guarantees that permits heterogeneity of the receiver set, and provide a scalable reliability service. The InterGroup system comprises several components, executing at various sites within the system. Each component provides part of the services necessary to implement a group communication system for the wide-area. The components can be categorized as: (1) control hierarchy, (2) reliable multicast, (3) message distribution and delivery, and (4) process group membership. We have implemented a prototype of the InterGroup protocols in Java, and have tested the system performance in both local-area and wide-area networks.

  14. Software performance and scalability a quantitative approach

    CERN Document Server

    Liu, Henry H

    2009-01-01

    Praise from the Reviewers: "The practicality of the subject in a real-world situation distinguishes this book from others available on the market." —Professor Behrouz Far, University of Calgary. "This book could replace the computer organization texts now in use that every CS and CpE student must take. . . . It is much needed, well written, and thoughtful." —Professor Larry Bernstein, Stevens Institute of Technology. A distinctive, educational text on software performance and scalability. This is the first book to take a quantitative approach to the subject of software performance and scalability

  15. From Digital Disruption to Business Model Scalability

    DEFF Research Database (Denmark)

    Nielsen, Christian; Lund, Morten; Thomsen, Peter Poulsen

    2017-01-01

    This article discusses the terms disruption, digital disruption, business models and business model scalability. It illustrates how managers should be using these terms for the benefit of their business by developing business models capable of achieving exponentially increasing returns to scale as a response to digital disruption. A series of case studies illustrate that besides frequent existing messages in the business literature relating to the importance of creating agile businesses, both in growing and declining economies, as well as hard to copy value propositions or value propositions that take... will seldom lead to business model scalability capable of competing with digital disruption(s).

  16. Content-Aware Scalability-Type Selection for Rate Adaptation of Scalable Video

    Directory of Open Access Journals (Sweden)

    Tekalp A Murat

    2007-01-01

    Full Text Available Scalable video coders provide different scaling options, such as temporal, spatial, and SNR scalabilities, where rate reduction by discarding enhancement layers of different scalability types results in different kinds and/or levels of visual distortion depending on the content and bitrate. This dependency between scalability type, video content, and bitrate is not well investigated in the literature. To this effect, we first propose an objective function that quantifies flatness, blockiness, blurriness, and temporal jerkiness artifacts caused by rate reduction by spatial size, frame rate, and quantization parameter scaling. Next, the weights of this objective function are determined for different content (shot) types and different bitrates using a training procedure with subjective evaluation. Finally, a method is proposed for choosing the best scaling type for each temporal segment that results in minimum visual distortion according to this objective function given the content type of temporal segments. Two subjective tests have been performed to validate the proposed procedure for content-aware selection of the best scalability type on soccer videos. Soccer videos scaled from 600 kbps to 100 kbps by the proposed content-aware selection of scalability type have been found visually superior to those that are scaled using a single scalability option over the whole sequence.
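
    The selection rule reduces to an argmin over scaling options of a weighted artifact sum. The sketch below uses placeholder weights and per-option artifact scores; in the paper the weights are trained per shot type from subjective tests.

```python
# Sketch of the selection rule: score each scaling option with a weighted
# sum of artifact measures and keep the minimum. Weights and per-option
# artifact scores are placeholders, not the paper's trained values.
def distortion(artifacts, weights):
    """Weighted objective over flatness, blockiness, blurriness, jerkiness."""
    return sum(weights[k] * artifacts[k] for k in weights)

def best_scaling(options, weights):
    """Pick the scalability type minimizing the distortion objective."""
    return min(options, key=lambda name: distortion(options[name], weights))

weights = {"flatness": 0.2, "blockiness": 0.3, "blurriness": 0.3, "jerkiness": 0.2}
options = {  # hypothetical measurements for one temporal segment at 100 kbps
    "spatial":  {"flatness": 0.1, "blockiness": 0.2, "blurriness": 0.7, "jerkiness": 0.1},
    "temporal": {"flatness": 0.1, "blockiness": 0.1, "blurriness": 0.2, "jerkiness": 0.8},
    "SNR":      {"flatness": 0.5, "blockiness": 0.6, "blurriness": 0.3, "jerkiness": 0.1},
}
print(best_scaling(options, weights))  # -> "temporal" for this made-up segment
```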

  17. A scalable healthcare information system based on a service-oriented architecture.

    Science.gov (United States)

    Yang, Tzu-Hsiang; Sun, Yeali S; Lai, Feipei

    2011-06-01

    Many existing healthcare information systems are composed of a number of heterogeneous systems and face the important issue of system scalability. This paper first describes the comprehensive healthcare information systems used in National Taiwan University Hospital (NTUH) and then presents a service-oriented architecture (SOA)-based healthcare information system (HIS) based on the service standard HL7. The proposed architecture focuses on system scalability, in terms of both hardware and software. Moreover, we describe how scalability is implemented in rightsizing, service groups, databases, and hardware scalability. Although SOA-based systems sometimes display poor performance, our performance evaluation shows that the average response times for the outpatient, inpatient, and emergency HL7Central systems are 0.035, 0.04, and 0.036 s, respectively. The outpatient, inpatient, and emergency WebUI average response times are 0.79, 1.25, and 0.82 s. The scalability of the rightsizing project and our evaluation results provide evidence that the proposed SOA-based HIS can deliver system scalability and sustainability in a highly demanding healthcare information system.

  18. Using scalable vector graphics to evolve art

    NARCIS (Netherlands)

    den Heijer, E.; Eiben, A. E.

    2016-01-01

    In this paper, we describe our investigations of the use of scalable vector graphics as a genotype representation in evolutionary art. We describe the technical aspects of using SVG in evolutionary art, and explain our custom, SVG-specific operators: initialisation, mutation and crossover. We perform

  19. Scalable fast multipole accelerated vortex methods

    KAUST Repository

    Hu, Qi; Gumerov, Nail A.; Yokota, Rio; Barba, Lorena A.; Duraiswami, Ramani

    2014-01-01

    -node communication and load balance efficiently, with only a small parallel construction overhead. This algorithm can scale to large-sized clusters showing both strong and weak scalability. Careful error and timing trade-off analysis are also performed for the cutoff

  20. Scalable Domain Decomposed Monte Carlo Particle Transport

    Energy Technology Data Exchange (ETDEWEB)

    O' Brien, Matthew Joseph [Univ. of California, Davis, CA (United States)

    2013-12-05

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.

  1. Scalable Open Source Smart Grid Simulator (SGSim)

    DEFF Research Database (Denmark)

    Ebeid, Emad Samuel Malki; Jacobsen, Rune Hylsberg; Stefanni, Francesco

    2017-01-01

    This paper presents an open source smart grid simulator (SGSim). The simulator is based on the open source SystemC Network Simulation Library (SCNSL) and aims to model scalable smart grid applications. SGSim has been tested under different smart grid scenarios that contain hundreds of thousands of households...

  2. Cooperative Scalable Moving Continuous Query Processing

    DEFF Research Database (Denmark)

    Li, Xiaohui; Karras, Panagiotis; Jensen, Christian S.

    2012-01-01

    of the global view and handle the majority of the workload. Meanwhile, moving clients, having basic memory and computation resources, handle small portions of the workload. This model is further enhanced by dynamic region allocation and grid size adjustment mechanisms that reduce the communication...... and computation cost for both servers and clients. An experimental study demonstrates that our approaches offer better scalability than competitors...

  3. Scalable optical switches for computing applications

    NARCIS (Netherlands)

    White, I.H.; Aw, E.T.; Williams, K.A.; Wang, Haibo; Wonfor, A.; Penty, R.V.

    2009-01-01

    A scalable photonic interconnection network architecture is proposed whereby a Clos network is populated with broadcast-and-select stages. This enables the efficient exploitation of an emerging class of photonic integrated switch fabric. A low distortion space switch technology based on recently

  4. Scalability of Sustainable Business Models in Hybrid Organizations

    Directory of Open Access Journals (Sweden)

    Adam Jabłoński

    2016-02-01

    Full Text Available The dynamics of change in modern business create new mechanisms for company management to determine their pursuit and the achievement of their high performance. This performance, maintained over a long period of time, becomes a source of ensuring business continuity by companies. An ontological being enabling the adoption of such assumptions is a business model that has the ability to generate results in every possible market situation and, moreover, has the features of permanent adaptability. The feature that describes the adaptability of the business model is its scalability. As a factor ensuring more, and more efficient, work with an increasing number of components, scalability can be applied to the concept of business models as the company's ability to maintain similar or higher performance through it. Ensuring the company's performance in the long term helps to build the so-called sustainable business model, which often balances the objectives of stakeholders and shareholders and is created by implementing the principles of value-based management and corporate social responsibility. This perception of business paves the way for building hybrid organizations that integrate business activities with pro-social ones. Combining an approach typical of hybrid organizations with the design and implementation of sustainable business models under the scalability criterion is interesting from the cognitive point of view. Today, hybrid organizations are great spaces for building effective and efficient mechanisms for dialogue between business and society. This requires the appropriate business model. The purpose of the paper is to present the conceptualization and operationalization of the scalability of sustainable business models that determine the performance of a hybrid organization in the network environment. The paper presents the original concept of applying scalability in sustainable business models with detailed

  5. Responsive, Flexible and Scalable Broader Impacts (Invited)

    Science.gov (United States)

    Decharon, A.; Companion, C.; Steinman, M.

    2010-12-01

    investment of time. Initiated in summer 2010, the webinars are interactive and highly flexible: people can participate from their homes anywhere and can interact according to their comfort levels (i.e., submitting questions in “chat boxes” rather than orally). Expansion - To expand scientists’ research beyond educators attending a workshop or webinar, COSEE-OS uses a blog as an additional mode of communication. Topically focused by concept maps, blogs serve as a forum for scalable content. The varied types of formatting allow scientists to create long-lived resources that remain attributed to them while supporting sustained educator engagement. Blogs are another point of contact and allow educators further asynchronous access to scientists. Based on COSEE-OS evaluations, interacting on a blog was found to be educators’ preferred method of following up with scientists. Sustained engagement of scientists or educators requires a specific return on investment. Workshops and web tools can be used together to maximize scientist impact with a relatively small investment of time. As one educator stated, “It really helps my students’ interest when we discuss concepts and I tell them my knowledge comes directly from a scientist!” [A. deCharon et al. (2009), Online tools help get scientists and educators on the same page, Eos Transactions, American Geophysical Union, 90(34), 289-290.

  6. pcircle - A Suite of Scalable Parallel File System Tools

    Energy Technology Data Exchange (ETDEWEB)

    2015-10-01

    Most software related to file systems is written for conventional local file systems; it is serialized and cannot take advantage of a large-scale parallel file system. "pcircle" builds on top of the ubiquitous MPI in cluster computing environments and the "work-stealing" pattern to provide a scalable, high-performance suite of file system tools. In particular, it implements parallel data copy and parallel data checksumming, with advanced features such as asynchronous progress reporting, checkpoint and restart, and integrity checking.

  7. Scalable video on demand adaptive Internet-based distribution

    CERN Document Server

    Zink, Michael

    2013-01-01

    In recent years, the proliferation of available video content and the popularity of the Internet have encouraged service providers to develop new ways of distributing content to clients. Increasing video scaling ratios and advanced digital signal processing techniques have led to Internet Video-on-Demand applications, but these currently lack efficiency and quality. Scalable Video on Demand: Adaptive Internet-based Distribution examines how current video compression and streaming can be used to deliver high-quality applications over the Internet. In addition to analysing the problems

  8. Scalable error correction in distributed ion trap computers

    International Nuclear Information System (INIS)

    Oi, Daniel K. L.; Devitt, Simon J.; Hollenberg, Lloyd C. L.

    2006-01-01

    A major challenge for quantum computation in ion trap systems is scalable integration of error correction and fault tolerance. We analyze a distributed architecture with rapid high-fidelity local control within nodes and entangled links between nodes alleviating long-distance transport. We demonstrate fault-tolerant operator measurements which are used for error correction and nonlocal gates. This scheme is readily applied to linear ion traps, which cannot be scaled up beyond a few ions per individual trap but which have access to a probabilistic entanglement mechanism. A proof-of-concept system is presented which is within the reach of current experiments

  9. Scalable Nonlinear AUC Maximization Methods

    OpenAIRE

    Khalid, Majdi; Ray, Indrakshi; Chitsaz, Hamidreza

    2017-01-01

    The area under the ROC curve (AUC) is a measure of interest in various machine learning and data mining applications. It has been widely used to evaluate classification performance on heavily imbalanced data. The kernelized AUC maximization machines have established a superior generalization ability compared to linear AUC machines because of their capability in modeling the complex nonlinear structure underlying most real world-data. However, the high training complexity renders the kernelize...
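
    For reference, the quantity being maximized has a simple pairwise form: AUC is the probability that a randomly chosen positive instance is scored above a randomly chosen negative one. A direct O(P·N) computation:

```python
# The quantity being maximized: AUC is the probability that a randomly
# chosen positive is scored above a randomly chosen negative (ties count
# half). This O(P*N) pairwise form is what AUC machines optimize
# surrogates of.
def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc([0.9, 0.8, 0.3, 0.1], [1, 0, 1, 0]))  # 0.75
```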

  10. Towards Scalable Strain Gauge-Based Joint Torque Sensors

    Science.gov (United States)

    D’Imperio, Mariapaola; Cannella, Ferdinando; Caldwell, Darwin G.; Cuschieri, Alfred

    2017-01-01

    During recent decades, strain gauge-based joint torque sensors have been commonly used to provide high-fidelity torque measurements in robotics. Although measurement of joint torque/force is often required in engineering research and development, the gluing and wiring of strain gauges used as torque sensors pose difficulties during integration within the restricted space available in small joints. The problem is compounded by the need for a scalable geometric design to measure joint torque. In this communication, we describe a novel design of a strain gauge-based mono-axial torque sensor referred to as the square-cut torque sensor (SCTS), the significant features of which are a high degree of linearity, symmetry, and high scalability in terms of both size and measuring range. Most importantly, the SCTS provides easy access for gluing and wiring of the strain gauges on the sensor surface despite the limited available space. We demonstrated that the SCTS was better in terms of symmetry (clockwise and counterclockwise rotation) and more linear. These capabilities have been shown through finite element modeling (ANSYS) confirmed by observed data obtained by load testing experiments. The high performance of the SCTS was confirmed by studies involving changes in size, material and/or wing width and thickness. Finally, we demonstrated that the SCTS can be successfully implemented inside the hip joints of the miniaturized hydraulically actuated quadruped robot MiniHyQ. This communication is based on work presented at the 18th International Conference on Climbing and Walking Robots (CLAWAR). PMID:28820446

  11. Scalable Packet Classification with Hash Tables

    Science.gov (United States)

    Wang, Pi-Chung

    In the last decade, the technique of packet classification has been widely deployed in various network devices, including routers, firewalls and network intrusion detection systems. In this work, we improve the performance of packet classification by using multiple hash tables. The existing hash-based algorithms have superior scalability with respect to the required space; however, their search performance may not be comparable to other algorithms. To improve the search performance, we propose a tuple reordering algorithm to minimize the number of accessed hash tables with the aid of bitmaps. We also use pre-computation to ensure the accuracy of our search procedure. Performance evaluation based on both real and synthetic filter databases shows that our scheme is effective and scalable and the pre-computation cost is moderate.
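
    The underlying mechanism, tuple space search, can be sketched briefly: filters sharing a prefix-length combination live in one hash table, and classification probes each table with a correspondingly masked header. The reordering and bitmap optimizations of the article are omitted, and the two-field filters below are illustrative.

```python
# Sketch of hash-based tuple space search: filters sharing a prefix-length
# combination ("tuple") live in one hash table; classification probes each
# table with the correspondingly masked header fields. The reordering and
# bitmap optimizations from the article are omitted.
def mask(addr, plen, width=32):
    """Keep the top `plen` bits of a `width`-bit address."""
    return addr >> (width - plen) if plen else 0

class TupleSpaceClassifier:
    def __init__(self):
        self.tables = {}  # (src_plen, dst_plen) -> {(src_key, dst_key): action}

    def insert(self, src, src_plen, dst, dst_plen, action):
        key = (mask(src, src_plen), mask(dst, dst_plen))
        self.tables.setdefault((src_plen, dst_plen), {})[key] = action

    def classify(self, src, dst):
        for (sp, dp), table in self.tables.items():  # one probe per tuple
            action = table.get((mask(src, sp), mask(dst, dp)))
            if action is not None:
                return action
        return "default"

c = TupleSpaceClassifier()
c.insert(0x0A000000, 8, 0xC0A80000, 16, "drop")  # 10.0.0.0/8 -> 192.168/16
print(c.classify(0x0A010203, 0xC0A80001))         # "drop"
```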

  12. Scalable Atomistic Simulation Algorithms for Materials Research

    Directory of Open Access Journals (Sweden)

    Aiichiro Nakano

    2002-01-01

    Full Text Available A suite of scalable atomistic simulation programs has been developed for materials research based on space-time multiresolution algorithms. Design and analysis of parallel algorithms are presented for molecular dynamics (MD) simulations and quantum-mechanical (QM) calculations based on the density functional theory. Performance tests have been carried out on 1,088-processor Cray T3E and 1,280-processor IBM SP3 computers. The linear-scaling algorithms have enabled 6.44-billion-atom MD and 111,000-atom QM calculations on 1,024 SP3 processors with parallel efficiency well over 90%. The production-quality programs also feature wavelet-based computational-space decomposition for adaptive load balancing, spacefilling-curve-based adaptive data compression with user-defined error bound for scalable I/O, and octree-based fast visibility culling for immersive and interactive visualization of massive simulation data.
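
    As a flavour of curve-based decomposition (here Morton/Z-order rather than the wavelet-based scheme described above), sorting cells by an interleaved-bit key keeps spatially nearby cells in contiguous chunks, so equal chunks of the sorted order give compact, balanced domains:

```python
# Sketch of spacefilling-curve-based decomposition: cells sorted by Morton
# (Z-order) key stay spatially clustered, so splitting the sorted list into
# equal chunks gives compact, load-balanced domains. This is a
# simplification of the adaptive scheme described above.
def morton3d(ix, iy, iz, bits=10):
    """Interleave the bits of integer cell coordinates into one key."""
    key = 0
    for b in range(bits):
        key |= (((ix >> b) & 1) << (3 * b)) \
             | (((iy >> b) & 1) << (3 * b + 1)) \
             | (((iz >> b) & 1) << (3 * b + 2))
    return key

def decompose(cells, n_ranks):
    """Assign grid cells to ranks in contiguous Z-order chunks."""
    ordered = sorted(cells, key=lambda c: morton3d(*c))
    chunk = -(-len(ordered) // n_ranks)  # ceiling division
    return [ordered[i:i + chunk] for i in range(0, len(ordered), chunk)]

cells = [(x, y, z) for x in range(4) for y in range(4) for z in range(4)]
print([len(part) for part in decompose(cells, 4)])  # [16, 16, 16, 16]
```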

  13. Scalable manufacturing processes with soft materials

    OpenAIRE

    White, Edward; Case, Jennifer; Kramer, Rebecca

    2014-01-01

    The emerging field of soft robotics will benefit greatly from new scalable manufacturing techniques for responsive materials. Currently, most soft robotic examples are fabricated one at a time, using techniques borrowed from lithography and 3D printing to fabricate molds. This limits both the maximum and minimum size of robots that can be fabricated, and hinders batch production, which is critical to gain wider acceptance for soft robotic systems. We have identified electrical structures, ...

  14. Architecture Knowledge for Evaluating Scalable Databases

    Science.gov (United States)

    2015-01-16

    Architecture Knowledge for Evaluating Scalable Databases. AUTHOR(S): Nurgaliev... [Report-form fields and a flattened feature-comparison table omitted; the recoverable fragment lists query languages (Scala, Erlang, Javascript) and features rated Supported/Not Supported: cursor-based queries, JOIN queries, and complex data types (lists, maps, sets).] ...is therefore needed, using technology such as machine learning to extract content from product documentation. The terminology used in the database

  15. Randomized Algorithms for Scalable Machine Learning

    OpenAIRE

    Kleiner, Ariel Jacob

    2012-01-01

    Many existing procedures in machine learning and statistics are computationally intractable in the setting of large-scale data. As a result, the advent of rapidly increasing dataset sizes, which should be a boon yielding improved statistical performance, instead severely blunts the usefulness of a variety of existing inferential methods. In this work, we use randomness to ameliorate this lack of scalability by reducing complex, computationally difficult inferential problems to larger sets o...

  16. Bitcoin-NG: A Scalable Blockchain Protocol

    OpenAIRE

    Eyal, Ittay; Gencer, Adem Efe; Sirer, Emin Gun; van Renesse, Robbert

    2015-01-01

    Cryptocurrencies, based on and led by Bitcoin, have shown promise as infrastructure for pseudonymous online payments, cheap remittance, trustless digital asset exchange, and smart contracts. However, Bitcoin-derived blockchain protocols have inherent scalability limits that trade off throughput against latency and withhold the realization of this potential. This paper presents Bitcoin-NG, a new blockchain protocol designed to scale. Based on Bitcoin's blockchain protocol, Bitcoin-NG is By...

  17. A scalable distributed RRT for motion planning

    KAUST Repository

    Jacobs, Sam Ade

    2013-05-01

    Rapidly-exploring Random Tree (RRT), like other sampling-based motion planning methods, has been very successful in solving motion planning problems. Even so, sampling-based planners cannot solve all problems of interest efficiently, so attention is increasingly turning to parallelizing them. However, one challenge in parallelizing RRT is the global computation and communication overhead of nearest neighbor search, a key operation in RRTs. This is a critical issue as it limits the scalability of previous algorithms. We present two parallel algorithms to address this problem. The first algorithm extends existing work by introducing a parameter that adjusts how much local computation is done before a global update. The second algorithm radially subdivides the configuration space into regions, constructs a portion of the tree in each region in parallel, and connects the subtrees, removing cycles if they exist. By subdividing the space, we increase computation locality enabling a scalable result. We show that our approaches are scalable. We present results demonstrating almost linear scaling to hundreds of processors on a Linux cluster and a Cray XE6 machine. © 2013 IEEE.
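
    A toy, single-process rendering of the radial-subdivision idea in Python appears below: the unit square is split into angular regions about a center point, and each region grows its own subtree from a seed on its bisecting ray. In the actual method the regions are grown in parallel and the subtrees are then connected; the parameters and helper names here are illustrative.

      import math, random

      def region_of(p, center, k):
          # Angular sector (0..k-1) of point p around the center.
          angle = math.atan2(p[1] - center[1], p[0] - center[0]) % (2 * math.pi)
          return int(angle / (2 * math.pi / k))

      def grow_subtree(region, center, k, iters=200, step=0.05):
          # Seed the subtree near the center, on the region's bisecting ray.
          theta = (region + 0.5) * 2 * math.pi / k
          nodes = [(center[0] + 0.01 * math.cos(theta),
                    center[1] + 0.01 * math.sin(theta))]
          for _ in range(iters):
              s = (random.random(), random.random())
              if region_of(s, center, k) != region:
                  continue  # only expand toward samples inside this region
              near = min(nodes, key=lambda n: (n[0]-s[0])**2 + (n[1]-s[1])**2)
              d = math.hypot(s[0] - near[0], s[1] - near[1])
              if d > step:  # skip samples already within one step
                  nodes.append((near[0] + step * (s[0]-near[0]) / d,
                                near[1] + step * (s[1]-near[1]) / d))
          return nodes

      center, k = (0.5, 0.5), 4
      subtrees = [grow_subtree(r, center, k) for r in range(k)]  # parallel in practice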

  18. A scalable distributed RRT for motion planning

    KAUST Repository

    Jacobs, Sam Ade; Stradford, Nicholas; Rodriguez, Cesar; Thomas, Shawna; Amato, Nancy M.

    2013-01-01

    Rapidly-exploring Random Tree (RRT), like other sampling-based motion planning methods, has been very successful in solving motion planning problems. Even so, sampling-based planners cannot solve all problems of interest efficiently, so attention is increasingly turning to parallelizing them. However, one challenge in parallelizing RRT is the global computation and communication overhead of nearest neighbor search, a key operation in RRTs. This is a critical issue as it limits the scalability of previous algorithms. We present two parallel algorithms to address this problem. The first algorithm extends existing work by introducing a parameter that adjusts how much local computation is done before a global update. The second algorithm radially subdivides the configuration space into regions, constructs a portion of the tree in each region in parallel, and connects the subtrees, removing cycles if they exist. By subdividing the space, we increase computation locality enabling a scalable result. We show that our approaches are scalable. We present results demonstrating almost linear scaling to hundreds of processors on a Linux cluster and a Cray XE6 machine. © 2013 IEEE.

  19. Scalable robotic biofabrication of tissue spheroids

    International Nuclear Information System (INIS)

    Mehesz, A Nagy; Hajdu, Z; Visconti, R P; Markwald, R R; Mironov, V; Brown, J; Beaver, W; Da Silva, J V L

    2011-01-01

    Development of methods for scalable biofabrication of uniformly sized tissue spheroids is essential for tissue spheroid-based bioprinting of large size tissue and organ constructs. The most recent scalable technique for tissue spheroid fabrication employs a micromolded recessed template prepared in a non-adhesive hydrogel, wherein the cells loaded into the template self-assemble into tissue spheroids due to gravitational force. In this study, we present an improved version of this technique. A new mold was designed to enable generation of 61 microrecessions in each well of a 96-well plate. The microrecessions were seeded with cells using an EpMotion 5070 automated pipetting machine. After 48 h of incubation, tissue spheroids formed at the bottom of each microrecession. To assess the quality of constructs generated using this technology, 600 tissue spheroids made by this method were compared with 600 spheroids generated by the conventional hanging drop method. These analyses showed that tissue spheroids fabricated by the micromolded method are more uniform in diameter. Thus, use of micromolded recessions in a non-adhesive hydrogel, combined with automated cell seeding, is a reliable method for scalable robotic fabrication of uniform-sized tissue spheroids.

  20. Scalable robotic biofabrication of tissue spheroids

    Energy Technology Data Exchange (ETDEWEB)

    Mehesz, A Nagy; Hajdu, Z; Visconti, R P; Markwald, R R; Mironov, V [Advanced Tissue Biofabrication Center, Department of Regenerative Medicine and Cell Biology, Medical University of South Carolina, Charleston, SC (United States); Brown, J [Department of Mechanical Engineering, Clemson University, Clemson, SC (United States); Beaver, W [York Technical College, Rock Hill, SC (United States); Da Silva, J V L, E-mail: mironovv@musc.edu [Renato Archer Information Technology Center-CTI, Campinas (Brazil)

    2011-06-15

    Development of methods for scalable biofabrication of uniformly sized tissue spheroids is essential for tissue spheroid-based bioprinting of large size tissue and organ constructs. The most recent scalable technique for tissue spheroid fabrication employs a micromolded recessed template prepared in a non-adhesive hydrogel, wherein the cells loaded into the template self-assemble into tissue spheroids due to gravitational force. In this study, we present an improved version of this technique. A new mold was designed to enable generation of 61 microrecessions in each well of a 96-well plate. The microrecessions were seeded with cells using an EpMotion 5070 automated pipetting machine. After 48 h of incubation, tissue spheroids formed at the bottom of each microrecession. To assess the quality of constructs generated using this technology, 600 tissue spheroids made by this method were compared with 600 spheroids generated by the conventional hanging drop method. These analyses showed that tissue spheroids fabricated by the micromolded method are more uniform in diameter. Thus, use of micromolded recessions in a non-adhesive hydrogel, combined with automated cell seeding, is a reliable method for scalable robotic fabrication of uniform-sized tissue spheroids.

  1. Optimizing Nanoelectrode Arrays for Scalable Intracellular Electrophysiology.

    Science.gov (United States)

    Abbott, Jeffrey; Ye, Tianyang; Ham, Donhee; Park, Hongkun

    2018-03-20

    Electrode technology for electrophysiology has a long history of innovation, with some decisive steps including the development of the voltage-clamp measurement technique by Hodgkin and Huxley in the 1940s and the invention of the patch clamp electrode by Neher and Sakmann in the 1970s. The high-precision intracellular recording enabled by the patch clamp electrode has since been a gold standard in studying the fundamental cellular processes underlying the electrical activities of neurons and other excitable cells. One logical next step would then be to parallelize these intracellular electrodes, since simultaneous intracellular recording from a large number of cells will benefit the study of complex neuronal networks and will increase the throughput of electrophysiological screening from basic neurobiology laboratories to the pharmaceutical industry. Patch clamp electrodes, however, are not built for parallelization; as of now, only ∼10 patch measurements in parallel are possible. It has long been envisioned that nanoscale electrodes may help meet this challenge. First, nanoscale electrodes were shown to enable intracellular access. Second, because their size scale is within the normal reach of the standard top-down fabrication, the nanoelectrodes can be scaled into a large array for parallelization. Third, such a nanoelectrode array can be monolithically integrated with complementary metal-oxide semiconductor (CMOS) electronics to facilitate the large array operation and the recording of the signals from a massive number of cells. These are some of the central ideas that have motivated the research activity into nanoelectrode electrophysiology, and these past years have seen fruitful developments. This Account aims to synthesize these findings so as to provide a useful reference. Summing up from the recent studies, we will first elucidate the morphology and associated electrical properties of the interface between a nanoelectrode and a cellular membrane

  2. Laplacian embedded regression for scalable manifold regularization.

    Science.gov (United States)

    Chen, Lin; Tsang, Ivor W; Xu, Dong

    2012-06-01

    Semi-supervised learning (SSL), as a powerful tool to learn from a limited number of labeled data and a large number of unlabeled data, has been attracting increasing attention in the machine learning community. In particular, the manifold regularization framework has laid solid theoretical foundations for a large family of SSL algorithms, such as Laplacian support vector machine (LapSVM) and Laplacian regularized least squares (LapRLS). However, most of these algorithms are limited to small scale problems due to the high computational cost of the matrix inversion operation involved in the optimization problem. In this paper, we propose a novel framework called Laplacian embedded regression by introducing an intermediate decision variable into the manifold regularization framework. By using ε-insensitive loss, we obtain the Laplacian embedded support vector regression (LapESVR) algorithm, which inherits the sparse solution from SVR. Also, we derive Laplacian embedded RLS (LapERLS) corresponding to RLS under the proposed framework. Both LapESVR and LapERLS possess a simpler form of a transformed kernel, which is the summation of the original kernel and a graph kernel that captures the manifold structure. The benefits of the transformed kernel are two-fold: (1) we can deal with the original kernel matrix and the graph Laplacian matrix in the graph kernel separately and (2) if the graph Laplacian matrix is sparse, we only need to perform the inverse operation for a sparse matrix, which is much more efficient when compared with that for a dense one. Inspired by kernel principal component analysis, we further propose to project the introduced decision variable into a subspace spanned by a few eigenvectors of the graph Laplacian matrix in order to better reflect the data manifold, as well as accelerate the calculation of the graph kernel, allowing our methods to efficiently and effectively cope with large scale SSL problems. Extensive experiments on both toy and real
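
    For orientation, the manifold regularization objective that LapSVM and LapRLS instantiate is commonly written as follows (a standard form from the literature, not the paper's exact notation), with l labeled points, loss V, ambient norm in the kernel space, and graph Laplacian L:

      \min_{f \in \mathcal{H}_K} \; \frac{1}{l} \sum_{i=1}^{l} V(x_i, y_i, f)
        \;+\; \gamma_A \lVert f \rVert_K^2
        \;+\; \gamma_I \, \mathbf{f}^{\top} L \, \mathbf{f}

    where \mathbf{f} collects the evaluations of f on the labeled and unlabeled points. The transformed kernel mentioned in the abstract then has the generic shape \tilde{K} = K + K_{\text{graph}}, with the graph kernel K_{\text{graph}} capturing the manifold structure.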

  3. Impact of multiplexed reading scheme on nanocrossbar memristor memory's scalability

    International Nuclear Information System (INIS)

    Zhu Xuan; Tang Yu-Hua; Wu Jun-Jie; Yi Xun; Wu Chun-Qing

    2014-01-01

    Nanocrossbar is a potential memory architecture to integrate memristor to achieve large scale and high density memory. However, based on the currently widely-adopted parallel reading scheme, scalability of the nanocrossbar memory is limited, since the overhead of the reading circuits is in proportion with the size of the nanocrossbar component. In this paper, a multiplexed reading scheme is adopted as the foundation of the discussion. Through HSPICE simulation, we reanalyze scalability of the nanocrossbar memristor memory by investigating the impact of various circuit parameters on the output voltage swing as the memory scales to larger size. We find that multiplexed reading maintains sufficient noise margin in large size nanocrossbar memristor memory. In order to improve the scalability of the memory, memristors with nonlinear I-V characteristics and high LRS (low resistive state) resistance should be adopted. (interdisciplinary physics and related areas of science and technology)

  4. Scalable Motion Estimation Processor Core for Multimedia System-on-Chip Applications

    Science.gov (United States)

    Lai, Yeong-Kang; Hsieh, Tian-En; Chen, Lien-Fei

    2007-04-01

    In this paper, we describe a high-throughput and scalable motion estimation processor architecture for multimedia system-on-chip applications. The number of processing elements (PEs) is scalable according to the variable algorithm parameters and the performance required for different applications. By using the PE rings efficiently together with an intelligent memory-interleaving organization, the efficiency of the architecture can be increased. Moreover, using efficient on-chip memories and a data management technique can effectively decrease the power consumption and memory bandwidth. Techniques for reducing the number of interconnections and external memory accesses are also presented. Our results demonstrate that the proposed scalable PE-ringed architecture is a flexible and high-performance processor core in multimedia system-on-chip applications.

  5. A scalable infrastructure for CMS data analysis based on OpenStack Cloud and Gluster file system

    International Nuclear Information System (INIS)

    Toor, S; Eerola, P; Kraemer, O; Lindén, T; Osmani, L; Tarkoma, S; White, J

    2014-01-01

    The challenge of providing a resilient and scalable computational and data management solution for massive scale research environments requires continuous exploration of new technologies and techniques. In this project the aim has been to design a scalable and resilient infrastructure for CERN HEP data analysis. The infrastructure is based on OpenStack components for structuring a private Cloud with the Gluster File System. We integrate the state-of-the-art Cloud technologies with the traditional Grid middleware infrastructure. Our test results show that the adopted approach provides a scalable and resilient solution for managing resources without compromising on performance and high availability.

  6. A scalable infrastructure for CMS data analysis based on OpenStack Cloud and Gluster file system

    Science.gov (United States)

    Toor, S.; Osmani, L.; Eerola, P.; Kraemer, O.; Lindén, T.; Tarkoma, S.; White, J.

    2014-06-01

    The challenge of providing a resilient and scalable computational and data management solution for massive scale research environments requires continuous exploration of new technologies and techniques. In this project the aim has been to design a scalable and resilient infrastructure for CERN HEP data analysis. The infrastructure is based on OpenStack components for structuring a private Cloud with the Gluster File System. We integrate the state-of-the-art Cloud technologies with the traditional Grid middleware infrastructure. Our test results show that the adopted approach provides a scalable and resilient solution for managing resources without compromising on performance and high availability.

  7. Facile, one-pot and scalable synthesis of highly emissive aqueous-based Ag,Ni:ZnCdS/ZnS core/shell quantum dots with high chemical and optical stability

    Science.gov (United States)

    Sahraei, Reza; Soheyli, Ehsan; Faraji, Zahra; Soleiman-Beigi, Mohammad

    2017-11-01

    We report here on a one-pot, mild and low-cost aqueous-based synthetic route for the preparation of colloidally stable and highly luminescent dual-doped Ag,Ni:ZnCdS/ZnS core/shell quantum dots (QDs). The pure dopant emission of the Ni-doped core/shell QDs was found to be highly affected by the presence of a second dopant ion (Ag+). Results showed that the PL emission intensity increases while its peak position experiences an obvious blue shift with an increase in the content of Ag+ ions. Regarding the optical observations, we provide a simple scheme for absorption-recombination processes of the carriers through impurity centers. To obtain optimum conditions with a better emission characteristic, we also study the effect of different reaction parameters, such as refluxing temperature, the pH of the core and shell solution, molar ratio of the dopant ions (Ni:(Zn+Cd) and Ag:(Zn+Cd)), and concentration of the core and shell precursors. Notably, the most effective parameter is the presence of the ZnS shell in a suitable amount to eliminate surface trap states and enhance the emission intensity. The shell can also improve the bio-compatibility of the prepared QDs by restricting the toxic Cd2+ ions inside the core of the QDs. The suggested route also revealed the remarkable optical and chemical stability of the colloidal QDs, which establishes them as a promising nano-scale structure for light-emitting applications, especially in biological technologies. The suggested process also has the potential to be scaled up while maintaining the emission characteristics and structural quality necessary for industrial applications in optoelectronic devices.

  8. The Node Monitoring Component of a Scalable Systems Software Environment

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Samuel James [Iowa State Univ., Ames, IA (United States)

    2006-01-01

    This research describes Fountain, a suite of programs used to monitor the resources of a cluster. A cluster is a collection of individual computers that are connected via a high speed communication network. They are traditionally used by users who desire more resources, such as processing power and memory, than any single computer can provide. A common drawback to effectively utilizing such a large-scale system is the management infrastructure, which often does not scale well as the system grows. Large-scale parallel systems provide new research challenges in the area of systems software, the programs or tools that manage the system from boot-up to running a parallel job. The approach presented in this thesis utilizes a collection of separate components that communicate with each other to achieve a common goal. While systems software comprises a broad array of components, this thesis focuses on the design choices for a node monitoring component. We will describe Fountain, an implementation of the Scalable Systems Software (SSS) node monitor specification. It is targeted at aggregate node monitoring for clusters, focusing on both scalability and fault tolerance as its design goals. It leverages widely used technologies such as XML and HTTP to present an interface to other components in the SSS environment.
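
    The following is a minimal Python sketch of the general idea of a node exposing its status as XML over HTTP; it is not Fountain's actual protocol or the SSS node monitor specification, and the XML element names, port, and metrics are hypothetical (os.getloadavg is Unix-only).

      import http.server, os, time

      class MonitorHandler(http.server.BaseHTTPRequestHandler):
          def do_GET(self):
              # Report load averages as a small XML document.
              load1, load5, _ = os.getloadavg()
              body = ("<node>"
                      f"<timestamp>{int(time.time())}</timestamp>"
                      f"<load1>{load1:.2f}</load1>"
                      f"<load5>{load5:.2f}</load5>"
                      "</node>").encode()
              self.send_response(200)
              self.send_header("Content-Type", "application/xml")
              self.send_header("Content-Length", str(len(body)))
              self.end_headers()
              self.wfile.write(body)

      if __name__ == "__main__":
          # An aggregator would poll many such endpoints across the cluster.
          http.server.HTTPServer(("", 8080), MonitorHandler).serve_forever()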

  9. ENDEAVOUR: A Scalable SDN Architecture for Real-World IXPs

    KAUST Repository

    Antichi, Gianni

    2017-10-25

    Innovation in interdomain routing has remained stagnant for over a decade. Recently, IXPs have emerged as economically-advantageous interconnection points for reducing path latencies and exchanging ever increasing traffic volumes among, possibly, hundreds of networks. Given their far-reaching implications on interdomain routing, IXPs are the ideal place to foster network innovation and extend the benefits of SDN to the interdomain level. In this paper, we present, evaluate, and demonstrate ENDEAVOUR, an SDN platform for IXPs. ENDEAVOUR can be deployed on a multi-hop IXP fabric, supports a large number of use cases, and is highly-scalable while avoiding broadcast storms. Our evaluation with real data from one of the largest IXPs demonstrates the benefits and scalability of our solution: ENDEAVOUR requires around 70% fewer rules than alternative SDN solutions thanks to our rule partitioning mechanism. In addition, by providing an open source solution, we invite everyone from the community to experiment with (and improve) our implementation as well as adapt it to new use cases.

  10. Performance-scalable volumetric data classification for online industrial inspection

    Science.gov (United States)

    Abraham, Aby J.; Sadki, Mustapha; Lea, R. M.

    2002-03-01

    Non-intrusive inspection and non-destructive testing of manufactured objects with complex internal structures typically requires the enhancement, analysis and visualization of high-resolution volumetric data. Given the increasing availability of fast 3D scanning technology (e.g. cone-beam CT), enabling on-line detection and accurate discrimination of components or sub-structures, the inherent complexity of classification algorithms inevitably leads to throughput bottlenecks. Indeed, whereas typical inspection throughput requirements range from 1 to 1000 volumes per hour, depending on density and resolution, current computational capability is one to two orders-of-magnitude less. Accordingly, speeding up classification algorithms requires both reduction of algorithm complexity and acceleration of computer performance. A shape-based classification algorithm, offering algorithm complexity reduction, by using ellipses as generic descriptors of solids-of-revolution, and supporting performance-scalability, by exploiting the inherent parallelism of volumetric data, is presented. A two-stage variant of the classical Hough transform is used for ellipse detection and correlation of the detected ellipses facilitates position-, scale- and orientation-invariant component classification. Performance-scalability is achieved cost-effectively by accelerating a PC host with one or more COTS (Commercial-Off-The-Shelf) PCI multiprocessor cards. Experimental results are reported to demonstrate the feasibility and cost-effectiveness of the data-parallel classification algorithm for on-line industrial inspection applications.

  11. Advanced technologies for scalable ATLAS conditions database access on the grid

    CERN Document Server

    Basset, R; Dimitrov, G; Girone, M; Hawkings, R; Nevski, P; Valassi, A; Vaniachine, A; Viegas, F; Walker, R; Wong, A

    2010-01-01

    During massive data reprocessing operations an ATLAS Conditions Database application must support concurrent access from numerous ATLAS data processing jobs running on the Grid. By simulating realistic work-flow, ATLAS database scalability tests provided feedback for Conditions DB software optimization and allowed precise determination of required distributed database resources. In distributed data processing one must take into account the chaotic nature of Grid computing characterized by peak loads, which can be much higher than average access rates. To validate database performance at peak loads, we tested database scalability at very high concurrent job rates. This has been achieved through coordinated database stress tests performed in a series of ATLAS reprocessing exercises at the Tier-1 sites. The goal of database stress tests is to detect scalability limits of the hardware deployed at the Tier-1 sites, so that the server overload conditions can be safely avoided in a production environment. Our analysi...

  12. Tip-Based Nanofabrication for Scalable Manufacturing

    Directory of Open Access Journals (Sweden)

    Huan Hu

    2017-03-01

    Full Text Available Tip-based nanofabrication (TBN) is a family of emerging nanofabrication techniques that use a nanometer scale tip to fabricate nanostructures. In this review, we first introduce the history of the TBN and the technology development. We then briefly review various TBN techniques that use different physical or chemical mechanisms to fabricate features and discuss some of the state-of-the-art techniques. Subsequently, we focus on those TBN methods that have demonstrated potential to scale up the manufacturing throughput. Finally, we discuss several research directions that are essential for making TBN a scalable nano-manufacturing technology.

  13. Tip-Based Nanofabrication for Scalable Manufacturing

    International Nuclear Information System (INIS)

    Hu, Huan; Somnath, Suhas

    2017-01-01

    Tip-based nanofabrication (TBN) is a family of emerging nanofabrication techniques that use a nanometer scale tip to fabricate nanostructures. In this review, we first introduce the history of the TBN and the technology development. We then briefly review various TBN techniques that use different physical or chemical mechanisms to fabricate features and discuss some of the state-of-the-art techniques. Subsequently, we focus on those TBN methods that have demonstrated potential to scale up the manufacturing throughput. Finally, we discuss several research directions that are essential for making TBN a scalable nano-manufacturing technology.

  14. Towards a Scalable, Biomimetic, Antibacterial Coating

    Science.gov (United States)

    Dickson, Mary Nora

    Corneal afflictions are the second leading cause of blindness worldwide. When a corneal transplant is unavailable or contraindicated, an artificial cornea device is the only chance to save sight. Bacterial or fungal biofilm build up on artificial cornea devices can lead to serious complications including the need for systemic antibiotic treatment and even explantation. As a result, much emphasis has been placed on anti-adhesion chemical coatings and antibiotic-leaching coatings. These methods are not long-lasting, and microorganisms can eventually circumvent these measures. Thus, I have developed a surface topographical antimicrobial coating. Various surface structures including rough surfaces, superhydrophobic surfaces, and the natural surfaces of insects' wings and sharks' skin are promising anti-biofilm candidates; however, none meets the criteria necessary for implementation on the surface of an artificial cornea device. In this thesis I: 1) developed scalable fabrication protocols for a library of biomimetic nanostructure polymer surfaces 2) assessed the potential of these poly(methyl methacrylate) nanopillars to kill or prevent biofilm formation by E. coli bacteria and species of Pseudomonas and Staphylococcus bacteria and improved upon a proposed mechanism for the rupture of Gram-negative bacterial cell walls 3) developed a scalable, commercially viable method for producing antibacterial nanopillars on a curved, PMMA artificial cornea device and 4) developed scalable fabrication protocols for implantation of antibacterial nanopatterned surfaces on the surfaces of thermoplastic polyurethane materials, commonly used in catheter tubings. This project constitutes a first step towards fabrication of the first entirely PMMA artificial cornea device. The major finding of this work is that by precisely controlling the topography of a polymer surface at the nano-scale, we can kill adherent bacteria and prevent biofilm formation of certain pathogenic bacteria

  15. Scalable Tensor Factorizations with Missing Data

    DEFF Research Database (Denmark)

    Acar, Evrim; Dunlavy, Daniel M.; Kolda, Tamara G.

    2010-01-01

    of missing data, many important data sets will be discarded or improperly analyzed. Therefore, we need a robust and scalable approach for factorizing multi-way arrays (i.e., tensors) in the presence of missing data. We focus on one of the most well-known tensor factorizations, CANDECOMP/PARAFAC (CP...... is shown to successfully factor tensors with noise and up to 70% missing data. Moreover, our approach is significantly faster than the leading alternative and scales to larger problems. To show the real-world usefulness of CP-WOPT, we illustrate its applicability on a novel EEG (electroencephalogram...
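
    A toy numpy version of one gradient step for the weighted CP objective underlying CP-WOPT, f(A,B,C) = ½‖W ∗ (X − ⟦A,B,C⟧)‖², is sketched below; the binary tensor W zeroes out missing entries so they do not influence the fit. This is illustrative only, with hypothetical names; the actual CP-WOPT uses scalable first-order optimization rather than this fixed-step update.

      import numpy as np

      def cp_wopt_step(X, W, A, B, C, lr=0.01):
          # Reconstruct the rank-R model tensor from the factor matrices.
          Xhat = np.einsum('ir,jr,kr->ijk', A, B, C)
          # Residual on observed entries only: W (0/1) masks missing data.
          R = W * (X - Xhat)
          # Gradients of 0.5*||W*(X - Xhat)||_F^2 w.r.t. each factor matrix.
          gA = -np.einsum('ijk,jr,kr->ir', R, B, C)
          gB = -np.einsum('ijk,ir,kr->jr', R, A, C)
          gC = -np.einsum('ijk,ir,jr->kr', R, A, B)
          return A - lr * gA, B - lr * gB, C - lr * gC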

  16. Scalable and Anonymous Group Communication with MTor

    Directory of Open Access Journals (Sweden)

    Lin Dong

    2016-04-01

    Full Text Available This paper presents MTor, a low-latency anonymous group communication system. We construct MTor as an extension to Tor, allowing the construction of multi-source multicast trees on top of the existing Tor infrastructure. MTor does not depend on an external service to broker the group communication, and avoids central points of failure and trust. MTor's substantial bandwidth savings and graceful scalability enable new classes of anonymous applications that are currently too bandwidth-intensive to be viable through traditional unicast Tor communication, e.g., group file transfer, collaborative editing, streaming video, and real-time audio conferencing.

  17. Grassmann Averages for Scalable Robust PCA

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Black, Michael J.

    2014-01-01

    As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase: “big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can...... to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...

  18. SWAP-Assembler: scalable and efficient genome assembly towards thousands of cores.

    Science.gov (United States)

    Meng, Jintao; Wang, Bingqiang; Wei, Yanjie; Feng, Shengzhong; Balaji, Pavan

    2014-01-01

    There is a widening gap between the throughput of massive parallel sequencing machines and the ability to analyze these sequencing data. Traditional assembly methods requiring long execution times and large amounts of memory on a single workstation limit their use on such massive data. This paper presents a highly scalable assembler named SWAP-Assembler for processing massive sequencing data using thousands of cores, where SWAP is an acronym for Small World Asynchronous Parallel model. In the paper, a mathematical description of the multi-step bi-directed graph (MSG) is provided to resolve the computational interdependence of merging edges, and a highly scalable computational framework for SWAP is developed to automatically perform the parallel computation of all operations. Graph cleaning and contig extension are also included to generate high-quality contigs. Experimental results show that SWAP-Assembler scales up to 2048 cores on the Yanhuang dataset, finishing in only 26 minutes, which is better than several other parallel assemblers, such as ABySS, Ray, and PASHA. Results also show that SWAP-Assembler can generate high-quality contigs with good N50 sizes and low error rates; in particular, it generated the longest N50 contig sizes for the Fish and Yanhuang datasets. In this paper, we presented highly scalable and efficient genome assembly software, SWAP-Assembler. Compared with several other assemblers, it showed very good performance in terms of scalability and contig quality. This software is available at: https://sourceforge.net/projects/swapassembler.

  19. A lightweight scalable agarose-gel-synthesized thermoelectric composite

    Science.gov (United States)

    Kim, Jin Ho; Fernandes, Gustavo E.; Lee, Do-Joong; Hirst, Elizabeth S.; Osgood, Richard M., III; Xu, Jimmy

    2018-03-01

    Electronic devices are now advancing beyond classical, rigid systems and moving into lightweight, flexible regimes, enabling new applications such as body-wearables and ‘e-textiles’. To support this new electronic platform, composite materials that are highly conductive yet scalable, flexible, and wearable are needed. Materials with high electrical conductivity often have poor thermoelectric properties because their thermal transport is made greater by the same factors as their electronic conductivity. We demonstrate, in proof-of-principle experiments, that a novel binary composite can disrupt thermal (phononic) transport, while maintaining high electrical conductivity, thus yielding promising thermoelectric properties. Highly conductive Multi-Wall Carbon Nanotube (MWCNT) composites are combined with a low-band gap semiconductor, PbS. The work functions of the two materials are closely matched, minimizing the electrical contact resistance within the composite. Disparities in the speed of sound in MWCNTs and PbS help to inhibit phonon propagation, and boundary layer scattering at interfaces between these two materials leads to a large Seebeck coefficient (> 150 μV/K) (Mott N F and Davis E A 1971 Electronic Processes in Non-crystalline Materials (Oxford: Clarendon), p 47) and a power factor as high as 10 μW/(K² m). The overall fabrication process is not only scalable but also conformal and compatible with large-area flexible hosts including metal sheets, films, coatings, possibly arrays of fibers, textiles and fabrics. We explain the behavior of this novel thermoelectric material platform in terms of differing length scales for electrical conductivity and phononic heat transfer, and explore new material configurations for potentially lightweight and flexible thermoelectric devices that could be networked in a textile.

  20. Efficient Delivery of Scalable Video Using a Streaming Class Model

    Directory of Open Access Journals (Sweden)

    Jason J. Quinlan

    2018-03-01

    Full Text Available When we couple the rise in video streaming with the growing number of portable devices (smart phones, tablets, laptops), we see an ever-increasing demand for high-definition video online while on the move. Wireless networks are inherently characterised by restricted shared bandwidth and relatively high error loss rates, thus presenting a challenge for the efficient delivery of high quality video. Additionally, mobile devices can support/demand a range of video resolutions and qualities. This demand for mobile streaming highlights the need for adaptive video streaming schemes that can adjust to available bandwidth and heterogeneity, and can provide graceful changes in video quality, all while respecting viewing satisfaction. In this context, the use of well-known scalable/layered media streaming techniques, commonly known as scalable video coding (SVC), is an attractive solution. SVC encodes a number of video quality levels within a single media stream. This has been shown to be an especially effective and efficient solution, but it fares badly in the presence of datagram losses. While multiple description coding (MDC) can reduce the effects of packet loss on scalable video delivery, the increased delivery cost is counterproductive for constrained networks. This situation is accentuated in cases where only the lower quality level is required. In this paper, we assess these issues and propose a new approach called Streaming Classes (SC) through which we can define a key set of quality levels, each of which can be delivered in a self-contained manner. This facilitates efficient delivery, yielding reduced transmission byte-cost for devices requiring lower quality, relative to MDC and Adaptive Layer Distribution (ALD) (42% and 76% reductions, respectively, for layer 2), while also maintaining high levels of consistent quality. We also illustrate how a selective packetisation technique can further reduce the effects of packet loss on viewable quality by

  1. Scalability Optimization of Seamless Positioning Service

    Directory of Open Access Journals (Sweden)

    Juraj Machaj

    2016-01-01

    Full Text Available Recently, positioning services have been getting more attention, not only within the research community but also from service providers. From the service providers' point of view, a positioning service that can work seamlessly in all environments, for example, indoor, dense urban, and rural, has huge potential to open new markets. However, such a system must not only provide accurate position estimates but also be scalable and resistant to fake positioning requests. In previous work we proposed a modular system which is able to provide seamless positioning in various environments. The system automatically selects the optimal positioning module based on available radio signals. The system currently consists of three positioning modules: GPS, GSM-based positioning, and Wi-Fi-based positioning. In this paper we propose an algorithm which reduces the time needed for position estimation and thus allows higher scalability of the modular system, making it possible to provide positioning services to a larger number of users. Such an improvement is extremely important for real-world applications where a large number of users require position estimates, since positioning error is affected by the response time of the positioning server.

  2. Algorithmic psychometrics and the scalable subject.

    Science.gov (United States)

    Stark, Luke

    2018-04-01

    Recent public controversies, ranging from the 2014 Facebook 'emotional contagion' study to psychographic data profiling by Cambridge Analytica in the 2016 American presidential election, Brexit referendum and elsewhere, signal watershed moments in which the intersecting trajectories of psychology and computer science have become matters of public concern. The entangled history of these two fields grounds the application of applied psychological techniques to digital technologies, and an investment in applying calculability to human subjectivity. Today, a quantifiable psychological subject position has been translated, via 'big data' sets and algorithmic analysis, into a model subject amenable to classification through digital media platforms. I term this position the 'scalable subject', arguing it has been shaped and made legible by algorithmic psychometrics - a broad set of affordances in digital platforms shaped by psychology and the behavioral sciences. In describing the contours of this 'scalable subject', this paper highlights the urgent need for renewed attention from STS scholars on the psy sciences, and on a computational politics attentive to psychology, emotional expression, and sociality via digital media.

  3. Scalable Simulation of Electromagnetic Hybrid Codes

    International Nuclear Information System (INIS)

    Perumalla, Kalyan S.; Fujimoto, Richard; Karimabadi, Dr. Homa

    2006-01-01

    New discrete-event formulations of physics simulation models are emerging that can outperform models based on traditional time-stepped techniques. Detailed simulation of the Earth's magnetosphere, for example, requires execution of sub-models that are at widely differing timescales. In contrast to time-stepped simulation, which requires tightly coupled updates to the entire system state at regular time intervals, the new discrete event simulation (DES) approaches help evolve the states of sub-models on relatively independent timescales. However, parallel execution of DES-based models raises challenges with respect to their scalability and performance. One of the key challenges is to improve the computation granularity to offset synchronization and communication overheads within and across processors. Our previous work was limited in scalability and runtime performance due to the parallelization challenges. Here we report on optimizations we performed on DES-based plasma simulation models to improve parallel performance. The net result is the capability to simulate hybrid particle-in-cell (PIC) models with over 2 billion ion particles using 512 processors on supercomputing platforms

  4. Towards Scalable Graph Computation on Mobile Devices.

    Science.gov (United States)

    Chen, Yiqi; Lin, Zhiyuan; Pienta, Robert; Kahng, Minsuk; Chau, Duen Horng

    2014-10-01

    Mobile devices have become increasingly central to our everyday activities, due to their portability, multi-touch capabilities, and ever-improving computational power. Such attractive features have spurred research interest in leveraging mobile devices for computation. We explore a novel approach that aims to use a single mobile device to perform scalable graph computation on large graphs that do not fit in the device's limited main memory, opening up the possibility of performing on-device analysis of large datasets, without relying on the cloud. Based on the familiar memory mapping capability provided by today's mobile operating systems, our approach to scale up computation is powerful and intentionally kept simple to maximize its applicability across the iOS and Android platforms. Our experiments demonstrate that an iPad mini can perform fast computation on large real graphs with as many as 272 million edges (Google+ social graph), at a speed that is only a few times slower than a 13″ Macbook Pro. Through creating a real world iOS app with this technique, we demonstrate the strong potential for scalable graph computation on a single mobile device using our approach.
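
    A minimal sketch of the memory-mapping idea in Python is shown below: the edge list is kept in a binary file and mapped into the address space, so the operating system pages it in on demand and the working set never has to fit in RAM. The file layout (consecutive little-endian uint32 pairs) and function name are assumptions for illustration, not the paper's format.

      import mmap, struct
      from collections import Counter

      def out_degrees(path):
          """Count out-degrees over a (non-empty) binary edge-list file."""
          degrees = Counter()
          with open(path, "rb") as f:
              buf = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
              try:
                  # Each edge is 8 bytes: two little-endian uint32s (src, dst).
                  for off in range(0, len(buf) - len(buf) % 8, 8):
                      src, _dst = struct.unpack_from("<II", buf, off)
                      degrees[src] += 1
              finally:
                  buf.close()
          return degrees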

  5. Scalable fast multipole accelerated vortex methods

    KAUST Repository

    Hu, Qi

    2014-05-01

    The fast multipole method (FMM) is often used to accelerate the calculation of particle interactions in particle-based methods to simulate incompressible flows. To evaluate the most time-consuming kernels - the Biot-Savart equation and the stretching term of the vorticity equation - we mathematically reformulated them so that only two Laplace scalar potentials are used instead of six, which automatically ensures divergence-free far-field computation. Based on this formulation, we developed a new FMM-based vortex method on heterogeneous architectures, which distributes the work between multicore CPUs and GPUs to best utilize the hardware resources and achieve excellent scalability. The algorithm uses new data structures which can dynamically manage inter-node communication and load balance efficiently, with only a small parallel construction overhead. This algorithm can scale to large-sized clusters showing both strong and weak scalability. Careful error and timing trade-off analyses are also performed for the cutoff functions induced by the vortex particle method. Our implementation can perform one time step of the velocity+stretching calculation for one billion particles on 32 nodes in 55.9 seconds, which yields 49.12 Tflop/s.
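
    For reference, the two kernels named here take the following standard forms (the paper's reformulation into two Laplace scalar potentials is not reproduced):

      \mathbf{u}(\mathbf{x}) = -\frac{1}{4\pi} \int \frac{(\mathbf{x}-\mathbf{y}) \times \boldsymbol{\omega}(\mathbf{y})}{\lvert \mathbf{x}-\mathbf{y} \rvert^{3}} \, d\mathbf{y},
      \qquad
      \frac{D\boldsymbol{\omega}}{Dt} = (\boldsymbol{\omega} \cdot \nabla)\,\mathbf{u} + \nu \nabla^{2} \boldsymbol{\omega}

    where the first right-hand-side term of the vorticity equation is the stretching term.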

  6. Towards Scalable Graph Computation on Mobile Devices

    Science.gov (United States)

    Chen, Yiqi; Lin, Zhiyuan; Pienta, Robert; Kahng, Minsuk; Chau, Duen Horng

    2015-01-01

    Mobile devices have become increasingly central to our everyday activities, due to their portability, multi-touch capabilities, and ever-improving computational power. Such attractive features have spurred research interest in leveraging mobile devices for computation. We explore a novel approach that aims to use a single mobile device to perform scalable graph computation on large graphs that do not fit in the device's limited main memory, opening up the possibility of performing on-device analysis of large datasets, without relying on the cloud. Based on the familiar memory mapping capability provided by today's mobile operating systems, our approach to scale up computation is powerful and intentionally kept simple to maximize its applicability across the iOS and Android platforms. Our experiments demonstrate that an iPad mini can perform fast computation on large real graphs with as many as 272 million edges (Google+ social graph), at a speed that is only a few times slower than a 13″ Macbook Pro. Through creating a real world iOS app with this technique, we demonstrate the strong potential for scalable graph computation on a single mobile device using our approach. PMID:25859564

  7. Big data integration: scalability and sustainability

    KAUST Repository

    Zhang, Zhang

    2016-01-26

    Integration of various types of omics data is critically indispensable for addressing most important and complex biological questions. In the era of big data, however, data integration becomes increasingly tedious, time-consuming and expensive, posing a significant obstacle to fully exploiting the wealth of big biological data. Here we propose a scalable and sustainable architecture that integrates big omics data through community-contributed modules. Community modules are contributed and maintained by different committed groups and each module corresponds to a specific data type, deals with data collection, processing and visualization, and delivers data on-demand via web services. Based on this community-based architecture, we build Information Commons for Rice (IC4R; http://ic4r.org), a rice knowledgebase that integrates a variety of rice omics data from multiple community modules, including genome-wide expression profiles derived entirely from RNA-Seq data, resequencing-based genomic variations obtained from re-sequencing data of thousands of rice varieties, plant homologous genes covering multiple diverse plant species, post-translational modifications, rice-related literature, and community annotations. Taken together, such architecture achieves integration of different types of data from multiple community-contributed modules and accordingly features scalable, sustainable and collaborative integration of big data as well as low costs for database update and maintenance, thus helpful for building IC4R into a comprehensive knowledgebase covering all aspects of rice data and beneficial for both basic and translational research.

  8. Temporal signal energy correction and low-complexity encoder feedback for lossy scalable video coding

    NARCIS (Netherlands)

    Loomans, M.J.H.; Koeleman, C.J.; With, de P.H.N.

    2010-01-01

    In this paper, we address two problems found in embedded implementations of Scalable Video Codecs (SVCs): the temporal signal energy distribution and frame-to-frame quality fluctuations. The unequal energy distribution between the low- and high-pass band with integer-based wavelets leads to

  9. Scalable Video Streaming Adaptive to Time-Varying IEEE 802.11 MAC Parameters

    Science.gov (United States)

    Lee, Kyung-Jun; Suh, Doug-Young; Park, Gwang-Hoon; Huh, Jae-Doo

    This letter proposes a QoS control method for video streaming service over wireless networks. Based on statistical analysis, the time-varying MAC parameters highly related to channel condition are selected to predict available bitrate. Adaptive bitrate control of scalably-encoded video guarantees continuity in streaming service even if the channel condition changes abruptly.

  10. Scalable Multi-Platform Distribution of Spatial 3d Contents

    Science.gov (United States)

    Klimke, J.; Hagedorn, B.; Döllner, J.

    2013-09-01

    Virtual 3D city models provide powerful user interfaces for communication of 2D and 3D geoinformation. Providing high quality visualization of massive 3D geoinformation in a scalable, fast, and cost efficient manner is still a challenging task. Especially for mobile and web-based system environments, software and hardware configurations of target systems differ significantly. This makes it hard to provide fast, visually appealing renderings of 3D data throughout a variety of platforms and devices. Current mobile or web-based solutions for 3D visualization usually require raw 3D scene data such as triangle meshes together with textures delivered from server to client, which strongly limits the size and complexity of the models they can handle. In this paper, we introduce a new approach for provisioning of massive, virtual 3D city models on different platforms, namely web browsers, smartphones or tablets, by means of an interactive map assembled from artificial oblique image tiles. The key concept is to synthesize such images of a virtual 3D city model by a 3D rendering service in a preprocessing step. This service encapsulates model handling and 3D rendering techniques for high quality visualization of massive 3D models. By generating image tiles using this service, the 3D rendering process is shifted from the client side, which provides major advantages: (a) the complexity of the 3D city model data is decoupled from data transfer complexity; (b) the implementation of client applications is simplified significantly, as 3D rendering is encapsulated on the server side; (c) 3D city models can be easily deployed for and used by a large number of concurrent users, leading to a high degree of scalability of the overall approach. All core 3D rendering techniques are performed on a dedicated 3D rendering server, and thin-client applications can be compactly implemented for various devices and platforms.

  11. Overview of the Scalable Coherent Interface, IEEE STD 1596 (SCI)

    International Nuclear Information System (INIS)

    Gustavson, D.B.; James, D.V.; Wiggers, H.A.

    1992-10-01

    The Scalable Coherent Interface standard defines a new generation of interconnection that spans the full range from supercomputer memory 'bus' to campus-wide network. SCI provides bus-like services and a shared-memory software model while using an underlying, packet protocol on many independent communication links. Initially these links are 1 GByte/s (wires) and 1 GBit/s (fiber), but the protocol scales well to future faster or lower-cost technologies. The interconnect may use switches, meshes, and rings. The SCI distributed-shared-memory model is simple and versatile, enabling for the first time a smooth integration of highly parallel multiprocessors, workstations, personal computers, I/O, networking and data acquisition

  12. Scalable Task Assignment for Heterogeneous Multi-Robot Teams

    Directory of Open Access Journals (Sweden)

    Paula García

    2013-02-01

    Full Text Available This work deals with the development of a dynamic task assignment strategy for heterogeneous multi-robot teams in typical real-world scenarios. The strategy must be efficiently scalable to support problems of increasing complexity with minimum designer intervention. To this end, we have selected a very simple auction-based strategy, which has been implemented and analysed in a multi-robot cleaning problem that requires strong coordination and dynamic complex subtask organization. We will show that the selection of a simple auction strategy provides a linear computational cost increase with the number of robots that make up the team and allows highly complex assignment problems to be solved in dynamic conditions by means of a hierarchical sub-auction policy. To coordinate and control the team, a layered behaviour-based architecture has been applied that allows the auction-based strategy to be reused to achieve different coordination levels.
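
    To illustrate why a simple auction scales linearly with team size, here is a toy single-item auction loop in Python: each task is auctioned in turn, every available robot bids its estimated cost, and the lowest bidder wins. All names and the cost model are hypothetical; the paper's hierarchical sub-auction policy would recursively auction complex subtasks in the same manner.

      def auction(tasks, robots, cost):
          """cost(robot, task) -> estimated execution cost for this robot."""
          assignment = {}
          available = set(robots)
          for task in tasks:
              if not available:
                  break
              # Each available robot submits one bid; the lowest bid wins.
              bids = [(cost(robot, task), robot) for robot in available]
              _winning_bid, winner = min(bids)
              assignment[task] = winner
              available.remove(winner)
          return assignment

      # Example: bids are Manhattan distances between robot and task positions.
      robots = {"r1": (0, 0), "r2": (5, 5)}
      tasks = {"clean_hall": (1, 1), "clean_lab": (4, 6)}
      dist = lambda r, t: (abs(robots[r][0] - tasks[t][0])
                           + abs(robots[r][1] - tasks[t][1]))
      print(auction(list(tasks), list(robots), dist))
      # -> {'clean_hall': 'r1', 'clean_lab': 'r2'}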

  13. A Practical and Scalable Tool to Find Overlaps between Sequences

    Directory of Open Access Journals (Sweden)

    Maan Haj Rachid

    2015-01-01

    Full Text Available The evolution of the next generation sequencing technology increases the demand for efficient solutions, in terms of space and time, for several bioinformatics problems. This paper presents a practical and easy-to-implement solution for one of these problems, namely, the all-pairs suffix-prefix problem, using a compact prefix tree. The paper demonstrates an efficient construction of this time-efficient and space-economical tree data structure. The paper presents techniques for parallel implementations of the proposed solution. Experimental evaluation indicates superior results in terms of space and time over existing solutions. Results also show that the proposed technique is highly scalable in a parallel execution environment.
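
    For contrast with the paper's compact prefix tree, the quadratic baseline below states the all-pairs suffix-prefix problem directly in Python: for each ordered pair of strings, report the longest suffix of the first that is a prefix of the second. The reads are made-up examples; the trie-based solution avoids re-scanning shared prefixes and is far more space- and time-efficient.

      def longest_overlap(a, b):
          # Longest k such that the last k characters of `a`
          # equal the first k characters of `b`.
          for k in range(min(len(a), len(b)), 0, -1):
              if a.endswith(b[:k]):
                  return k
          return 0

      reads = ["GATTACA", "TACAGGA", "GGATAG"]
      for a in reads:
          for b in reads:
              if a is not b:
                  print(a, "->", b, longest_overlap(a, b))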

  14. A Software and Hardware IPTV Architecture for Scalable DVB Distribution

    Directory of Open Access Journals (Sweden)

    Georg Acher

    2009-01-01

    Full Text Available Many standards and even more proprietary technologies deal with IP-based television (IPTV). But none of them can transparently map popular public broadcast services such as DVB or ATSC to IPTV with acceptable effort. In this paper we explain why we believe that such a mapping using a lightweight framework is an important step towards all-IP multimedia. We then present the NetCeiver architecture: it is based on well-known standards such as IPv6, and it allows zero configuration. The use of multicast streaming makes NetCeiver highly scalable. We also describe a low cost FPGA implementation of the proposed NetCeiver architecture, which can concurrently stream services from up to six full transponders.

  15. Smartphone based scalable reverse engineering by digital image correlation

    Science.gov (United States)

    Vidvans, Amey; Basu, Saurabh

    2018-03-01

    There is a need for scalable open source 3D reconstruction systems for reverse engineering. This is because most commercially available reconstruction systems are capital and resource intensive. To address this, a novel reconstruction technique is proposed. The technique involves digital image correlation based characterization of surface speeds followed by normalization with respect to angular speed during rigid body rotational motion of the specimen. A proof of concept is demonstrated and validated using simulation and empirical characterization. Towards this, smartphone imaging and inexpensive off-the-shelf components, along with parts fabricated additively from poly-lactic acid polymer on a standard 3D printer, are used. Some sources of error in this reconstruction methodology are discussed. It is seen that high curvatures on the surface suppress reconstruction accuracy; the reasons are traced to the nature of the correlation function. The theoretically achievable resolution during smartphone-based 3D reconstruction by digital image correlation is derived.

  16. Scalable Domain Decomposition Preconditioners for Heterogeneous Elliptic Problems

    Directory of Open Access Journals (Sweden)

    Pierre Jolivet

    2014-01-01

    Full Text Available Domain decomposition methods are, alongside multigrid methods, one of the dominant paradigms in contemporary large-scale partial differential equation simulation. In this paper, a lightweight implementation of a theoretically and numerically scalable preconditioner is presented in the context of overlapping methods. The performance of this work is assessed by numerical simulations executed on thousands of cores, for solving various highly heterogeneous elliptic problems in both 2D and 3D with billions of degrees of freedom. Such problems arise in computational science and engineering, in solid and fluid mechanics. While focusing on overlapping domain decomposition methods might seem too restrictive, it will be shown how this work can be applied to a variety of other methods, such as non-overlapping methods and abstract deflation-based preconditioners. It is also shown how multilevel preconditioners can be used to avoid communication during an iterative process such as a Krylov method.

  17. A Secure and Scalable Data Communication Scheme in Smart Grids

    Directory of Open Access Journals (Sweden)

    Chunqiang Hu

    2018-01-01

    Full Text Available The concept of the smart grid has gained tremendous attention among researchers and utility providers in recent years. Establishing secure communication among smart meters, utility companies, and the service providers is a challenging issue. In this paper, we present a communication architecture for smart grids and propose a scheme to guarantee the security and privacy of data communications among smart meters, utility companies, and data repositories by employing decentralized attribute based encryption. The architecture is highly scalable, and it employs an access control Linear Secret Sharing Scheme (LSSS) matrix to achieve role-based access control. The security analysis demonstrated that the scheme ensures security and privacy. The performance analysis shows that the scheme is efficient in terms of computational cost.

  18. Scalable conditional induction variables (CIV) analysis

    DEFF Research Database (Denmark)

    Oancea, Cosmin Eugen; Rauchwerger, Lawrence

    2015-01-01

    parallelizing compiler and evaluated its impact on five Fortran benchmarks. We have found that there are many important loops using CIV subscripts and that our analysis can lead to their scalable parallelization. This in turn has led to the parallelization of the benchmark programs they appear in.......Subscripts using induction variables that cannot be expressed as a formula in terms of the enclosing-loop indices appear in the low-level implementation of common programming abstractions such as filter, or stack operations and pose significant challenges to automatic parallelization. Because...... the complexity of such induction variables is often due to their conditional evaluation across the iteration space of loops we name them Conditional Induction Variables (CIV). This paper presents a flow-sensitive technique that summarizes both such CIV-based and affine subscripts to program level, using the same

  19. Scalable Faceted Ranking in Tagging Systems

    Science.gov (United States)

    Orlicki, José I.; Alvarez-Hamelin, J. Ignacio; Fierens, Pablo I.

    Web collaborative tagging systems, which allow users to upload, comment on, and recommend content, are growing nowadays. Such systems can be represented as graphs where nodes correspond to users and tagged links to recommendations. In this paper we analyze the problem of computing a ranking of users with respect to a facet described as a set of tags. A straightforward solution is to compute a PageRank-like algorithm on a facet-related graph, but this is not feasible for online computation. We propose an alternative: (i) a ranking for each tag is computed offline on the basis of tag-related subgraphs; (ii) a faceted order is generated online by merging the rankings corresponding to all the tags in the facet. Based on the graph analysis of YouTube and Flickr, we show that step (i) is scalable. We also present efficient algorithms for step (ii), which are evaluated by comparing their results with two gold standards.
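
    A toy sketch of step (ii): per-tag rankings computed offline are merged online, here simply by summing per-tag scores (the data and the merge rule are illustrative, not the paper's exact algorithms):

        # Offline: one ranking (user -> score) per tag.
        tag_rankings = {
            "jazz":  {"alice": 0.9, "bob": 0.4, "carol": 0.2},
            "piano": {"alice": 0.3, "bob": 0.7, "dave": 0.6},
        }

        def faceted_rank(facet):
            # Online merge: accumulate each user's score over the facet's tags.
            scores = {}
            for tag in facet:
                for user, s in tag_rankings.get(tag, {}).items():
                    scores[user] = scores.get(user, 0.0) + s
            return sorted(scores, key=scores.get, reverse=True)

        print(faceted_rank({"jazz", "piano"}))  # ['alice', 'bob', 'dave', 'carol']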

  20. A graph algebra for scalable visual analytics.

    Science.gov (United States)

    Shaverdian, Anna A; Zhou, Hao; Michailidis, George; Jagadish, Hosagrahar V

    2012-01-01

    Visual analytics (VA), which combines analytical techniques with advanced visualization features, is fast becoming a standard tool for extracting information from graph data. Researchers have developed many tools for this purpose, suggesting a need for formal methods to guide these tools' creation. Increased data demands on computing require redesigning VA tools to consider performance and reliability in the context of analysis of exascale datasets. Furthermore, visual analysts need a way to document their analyses for reuse and results justification. A VA graph framework encapsulated in a graph algebra helps address these needs. Its atomic operators include selection and aggregation. The framework employs a visual operator and supports dynamic attributes of data to enable scalable visual exploration of data.
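
    A minimal sketch of the two atomic operators on an edge-list graph (the names and the aggregation rule are illustrative, not the authors' API):

        # Selection filters edges by a predicate; aggregation collapses nodes
        # sharing an attribute into super-nodes and sums edge weights.
        edges = [("a", "b", 3), ("b", "c", 1), ("a", "c", 5)]
        group = {"a": "g1", "b": "g1", "c": "g2"}   # node -> group attribute

        def select(edges, pred):
            return [e for e in edges if pred(e)]

        def aggregate(edges, group):
            agg = {}
            for u, v, w in edges:
                key = (group[u], group[v])
                if key[0] != key[1]:                # drop intra-group edges
                    agg[key] = agg.get(key, 0) + w
            return [(gu, gv, w) for (gu, gv), w in agg.items()]

        print(select(edges, lambda e: e[2] >= 3))   # [('a','b',3), ('a','c',5)]
        print(aggregate(edges, group))              # [('g1', 'g2', 6)]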

  1. Parallel scalability of Hartree-Fock calculations

    Science.gov (United States)

    Chow, Edmond; Liu, Xing; Smelyanskiy, Mikhail; Hammond, Jeff R.

    2015-03-01

    Quantum chemistry is increasingly performed using large cluster computers consisting of multiple interconnected nodes. For a fixed molecular problem, the efficiency of a calculation usually decreases as more nodes are used, due to the cost of communication between the nodes. This paper empirically investigates the parallel scalability of Hartree-Fock calculations. The construction of the Fock matrix and the density matrix calculation are analyzed separately. For the former, we use a parallelization of Fock matrix construction based on a static partitioning of work followed by a work stealing phase. For the latter, we use density matrix purification from the linear scaling methods literature, but without using sparsity. When using large numbers of nodes for moderately sized problems, density matrix computations are network-bandwidth bound, making purification methods potentially faster than eigendecomposition methods.
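
    As a concrete example of a purification scheme, McWeeny's iteration D <- 3D^2 - 2D^3 drives the eigenvalues of a near-idempotent matrix to 0 or 1 using only matrix multiplications, which is exactly the kernel that becomes network-bandwidth bound at scale. The NumPy sketch below is illustrative; the paper's exact purification variant and parallel layout are not reproduced here:

        import numpy as np

        def mcweeny_purify(D, iters=30):
            # Repeated polynomial map: eigenvalues above 1/2 flow to 1,
            # those below 1/2 flow to 0 (assumes the spectrum lies in [0, 1]).
            for _ in range(iters):
                D2 = D @ D
                D = 3.0 * D2 - 2.0 * (D2 @ D)
            return D

        # Toy demo: eigenvalues 0.9 and 0.2 converge to 1 and 0.
        print(np.round(mcweeny_purify(np.diag([0.9, 0.2])), 6))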

  2. iSIGHT-FD scalability test report.

    Energy Technology Data Exchange (ETDEWEB)

    Clay, Robert L.; Shneider, Max S.

    2008-07-01

    The engineering analysis community at Sandia National Laboratories uses a number of internal and commercial software codes and tools, including mesh generators, preprocessors, mesh manipulators, simulation codes, post-processors, and visualization packages. We define an analysis workflow as the execution of an ordered, logical sequence of these tools. Various forms of analysis (and in particular, methodologies that use multiple function evaluations or samples) involve executing parameterized variations of these workflows. As part of the DART project, we are evaluating various commercial workflow management systems, including iSIGHT-FD from Engineous. This report documents the results of a scalability test that was driven by DAKOTA and conducted on a parallel computer (Thunderbird). The purpose of this experiment was to examine the suitability and performance of iSIGHT-FD for large-scale, parameterized analysis workflows. As the results indicate, we found iSIGHT-FD to be suitable for this type of application.

  3. Scalable group level probabilistic sparse factor analysis

    DEFF Research Database (Denmark)

    Hinrich, Jesper Løve; Nielsen, Søren Føns Vind; Riis, Nicolai Andre Brogaard

    2017-01-01

    Many data-driven approaches exist to extract neural representations of functional magnetic resonance imaging (fMRI) data, but most of them lack a proper probabilistic formulation. We propose a scalable group level probabilistic sparse factor analysis (psFA) allowing spatially sparse maps, component pruning using automatic relevance determination (ARD) and subject specific heteroscedastic spatial noise modeling. For task-based and resting state fMRI, we show that the sparsity constraint gives rise to components similar to those obtained by group independent component analysis. The noise modeling shows that noise is reduced in areas typically associated with activation by the experimental design. The psFA model identifies sparse components and the probabilistic setting provides a natural way to handle parameter uncertainties. The variational Bayesian framework easily extends to more complex ...

  4. A versatile scalable PET processing system

    International Nuclear Information System (INIS)

    Dong, H.; Weisenberger, A.; McKisson, J.; Wenze, Xi; Cuevas, C.; Wilson, J.; Zukerman, L.

    2011-01-01

    Positron Emission Tomography (PET) historically has major clinical and preclinical applications in cancerous oncology, neurology, and cardiovascular diseases. Recently, in a new direction, an application specific PET system is being developed at Thomas Jefferson National Accelerator Facility (Jefferson Lab) in collaboration with Duke University, University of Maryland at Baltimore (UMAB), and West Virginia University (WVU) targeted for plant eco-physiology research. The new plant imaging PET system is versatile and scalable such that it could adapt to several plant imaging needs - imaging many important plant organs including leaves, roots, and stems. The mechanical arrangement of the detectors is designed to accommodate the unpredictable and random distribution in space of the plant organs without requiring the plant be disturbed. Prototyping such a system requires a new data acquisition system (DAQ) and data processing system which are adaptable to the requirements of these unique and versatile detectors.

  5. The Concept of Business Model Scalability

    DEFF Research Database (Denmark)

    Nielsen, Christian; Lund, Morten

    2015-01-01

    The power of business models lies in their ability to visualize and clarify how firms may configure their value creation processes. Among the key aspects of business model thinking are a focus on what the customer values, how this value is best delivered to the customer and how strategic partners are leveraged in this value creation, delivery and realization exercise. Central to the mainstream understanding of business models is the value proposition towards the customer, and the hypothesis generated is that if the firm delivers to the customer what he/she requires, then there is a good foundation for a long-term profitable business. However, the message conveyed in this article is that while providing a good value proposition may help the firm ‘get by’, the really successful businesses of today are those able to reach the sweet-spot of business model scalability. This article introduces and discusses ...

  6. The scalable coherent interface, IEEE P1596

    International Nuclear Information System (INIS)

    Gustavson, D.B.

    1990-01-01

    IEEE P1596, the scalable coherent interface (formerly known as SuperBus), is based on experience gained while developing Fastbus (ANSI/IEEE 960-1986, IEC 935), Futurebus (IEEE P896.x) and other modern 32-bit buses. SCI goals include a minimum bandwidth of 1 GByte/sec per processor in multiprocessor systems with thousands of processors; efficient support of a coherent distributed-cache image of distributed shared memory; support for repeaters which interface to existing or future buses; and support for inexpensive small rings as well as for general switched interconnections like Banyan, Omega, or crossbar networks. This paper presents a summary of current directions, reports the status of the work in progress, and suggests some applications in data acquisition and physics.

  7. BASSET: Scalable Gateway Finder in Large Graphs

    Energy Technology Data Exchange (ETDEWEB)

    Tong, H; Papadimitriou, S; Faloutsos, C; Yu, P S; Eliassi-Rad, T

    2010-11-03

    Given a social network, who is the best person to introduce you to, say, Chris Ferguson, the poker champion? Or, given a network of people and skills, who is the best person to help you learn about, say, wavelets? The goal is to find a small group of 'gateways': persons who are close enough to us, as well as close enough to the target (person, or skill) or, in other words, are crucial in connecting us to the target. The main contributions are the following: (a) we show how to formulate this problem precisely; (b) we show that it is sub-modular and thus it can be solved near-optimally; (c) we give fast, scalable algorithms to find such gateways. Experiments on real data sets validate the effectiveness and efficiency of the proposed methods, achieving up to 6,000,000x speedup.
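
    Because the objective is monotone submodular, the standard greedy algorithm achieves the classic (1 - 1/e) approximation guarantee. A toy sketch with a hypothetical coverage-style objective (counting the source-target paths a gateway set touches):

        def greedy_gateways(candidates, gain, k):
            # Repeatedly add the candidate with the largest marginal gain.
            chosen = set()
            for _ in range(k):
                best = max(candidates - chosen,
                           key=lambda c: gain(chosen | {c}) - gain(chosen))
                chosen.add(best)
            return chosen

        paths = [{"p", "q"}, {"q", "r"}, {"r", "s"}]      # hypothetical paths
        gain = lambda S: sum(1 for path in paths if path & S)
        print(greedy_gateways({"p", "q", "r", "s"}, gain, k=2))  # e.g. {'q', 'r'}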

  8. Scalable quantum search using trapped ions

    International Nuclear Information System (INIS)

    Ivanov, S. S.; Ivanov, P. A.; Linington, I. E.; Vitanov, N. V.

    2010-01-01

    We propose a scalable implementation of Grover's quantum search algorithm in a trapped-ion quantum information processor. The system is initialized in an entangled Dicke state by using adiabatic techniques. The inversion-about-average and oracle operators take the form of single off-resonant laser pulses. This is made possible by utilizing the physical symmetries of the trapped-ion linear crystal. The physical realization of the algorithm represents a dramatic simplification: each logical iteration (oracle and inversion about average) requires only two physical interaction steps, in contrast to the large number of concatenated gates required by previous approaches. This not only facilitates the implementation but also increases the overall fidelity of the algorithm.
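
    For scale, the optimal number of Grover iterations for an unstructured search over N items is

        k_{\mathrm{opt}} \approx \frac{\pi}{4}\sqrt{N},

    so with two physical interaction steps per logical iteration, as proposed here, the total pulse count grows only as roughly (\pi/2)\sqrt{N}, instead of with the much larger gate counts of concatenated-gate constructions.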

  9. New Complexity Scalable MPEG Encoding Techniques for Mobile Applications

    Directory of Open Access Journals (Sweden)

    Stephan Mietens

    2004-03-01

    Full Text Available Complexity scalability offers the advantage of one-time design of video applications for a large product family, including mobile devices, without the need to redesign the applications on the algorithmic level to meet the requirements of the different products. In this paper, we present complexity scalable MPEG encoding having core modules with modifications for scalability. The interdependencies of the scalable modules and the system performance are evaluated. Experimental results show scalability giving a smooth change in complexity and corresponding video quality. Scalability is basically achieved by varying the number of computed DCT coefficients and the number of evaluated motion vectors, but other modules are designed such that they scale with the previous parameters. In the experiments using the “Stefan” sequence, the elapsed execution time of the scalable encoder, reflecting the computational complexity, can be gradually reduced to roughly 50% of its original execution time. The video quality scales between 20 dB and 48 dB PSNR with unity quantizer setting, and between 21.5 dB and 38.5 dB PSNR for different sequences targeting 1500 kbps. The implemented encoder and the scalability techniques can be successfully applied in mobile systems based on MPEG video compression.
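
    A sketch of the main complexity knob described above: keep only the first k coefficients of each 8x8 DCT block in (approximately) zigzag order. This SciPy toy reconstructs from a masked transform; a real encoder would skip computing the dropped coefficients altogether:

        import numpy as np
        from scipy.fft import dctn, idctn

        # Antidiagonal order approximates the zigzag scan.
        ZIGZAG = sorted(((i, j) for i in range(8) for j in range(8)),
                        key=lambda ij: (ij[0] + ij[1], ij))

        def scalable_block(block, k):
            coef = dctn(block, norm="ortho")
            mask = np.zeros((8, 8))
            for i, j in ZIGZAG[:k]:
                mask[i, j] = 1.0                 # keep k low-frequency terms
            return idctn(coef * mask, norm="ortho")

        block = np.arange(64, dtype=float).reshape(8, 8)
        for k in (4, 16, 64):
            err = np.abs(scalable_block(block, k) - block).mean()
            print(k, round(err, 3))              # error shrinks as k grows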

  10. Building scalable apps with Redis and Node.js

    CERN Document Server

    Johanan, Joshua

    2014-01-01

    If the phrase scalability sounds alien to you, then this is an ideal book for you. You will not need much Node.js experience as each framework is demonstrated in a way that requires no previous knowledge of the framework. You will be building scalable Node.js applications in no time! Knowledge of JavaScript is required.

  11. Fourier transform based scalable image quality measure.

    Science.gov (United States)

    Narwaria, Manish; Lin, Weisi; McLoughlin, Ian; Emmanuel, Sabu; Chia, Liang-Tien

    2012-08-01

    We present a new image quality assessment (IQA) algorithm based on the phase and magnitude of the 2D (two-dimensional) Discrete Fourier Transform (DFT). The basic idea is to compare the phase and magnitude of the reference and distorted images to compute the quality score. However, it is well known that the Human Visual System's (HVS) sensitivity to different frequency components is not the same. We accommodate this fact via a simple yet effective strategy of nonuniform binning of the frequency components. This process also leads to a reduced-space representation of the image, thereby enabling the reduced-reference (RR) prospects of the proposed scheme. We employ linear regression to integrate the effects of the changes in phase and magnitude. In this way, the required weights are determined via proper training and hence are more convincing and effective. Lastly, using the fact that phase usually conveys more information than magnitude, we use only the phase for RR quality assessment. This provides the crucial advantage of further reduction in the required amount of reference image information. The proposed method is therefore further scalable for RR scenarios. We report extensive experimental results using a total of 9 publicly available databases: 7 image (with a total of 3832 distorted images with diverse distortions) and 2 video databases (totally 228 distorted videos). These show that the proposed method is overall better than several of the existing full-reference (FR) algorithms and two RR algorithms. Additionally, there is a graceful degradation in prediction performance as the amount of reference image information is reduced, thereby confirming its scalability prospects. To enable comparisons and future study, a Matlab implementation of the proposed algorithm is available at http://www.ntu.edu.sg/home/wslin/reduced_phase.rar.
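
    A toy version of the core comparison (the actual method additionally bins frequencies nonuniformly and learns the combining weights by regression, which this sketch omits):

        import numpy as np

        def dft_quality_features(ref, dist):
            F_r, F_d = np.fft.fft2(ref), np.fft.fft2(dist)
            dphi = np.angle(F_r * np.conj(F_d))      # wrapped phase difference
            phase_err = np.abs(dphi).mean()
            mag_err = np.abs(np.abs(F_r) - np.abs(F_d)).mean()
            return phase_err, mag_err                # lower means more similar

        rng = np.random.default_rng(0)
        ref = rng.random((64, 64))
        noisy = ref + 0.1 * rng.standard_normal((64, 64))
        print(dft_quality_features(ref, ref))        # (0.0, 0.0)
        print(dft_quality_features(ref, noisy))      # both grow with distortion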

  12. Improving diabetes medication adherence: successful, scalable interventions

    Directory of Open Access Journals (Sweden)

    Zullig LL

    2015-01-01

    Full Text Available Leah L Zullig,1,2 Walid F Gellad,3,4 Jivan Moaddeb,2,5 Matthew J Crowley,1,2 William Shrank,6 Bradi B Granger,7 Christopher B Granger,8 Troy Trygstad,9 Larry Z Liu,10 Hayden B Bosworth1,2,7,11 1Center for Health Services Research in Primary Care, Durham Veterans Affairs Medical Center, Durham, NC, USA; 2Department of Medicine, Duke University, Durham, NC, USA; 3Center for Health Equity Research and Promotion, Pittsburgh Veterans Affairs Medical Center, Pittsburgh, PA, USA; 4Division of General Internal Medicine, University of Pittsburgh, Pittsburgh, PA, USA; 5Institute for Genome Sciences and Policy, Duke University, Durham, NC, USA; 6CVS Caremark Corporation; 7School of Nursing, Duke University, Durham, NC, USA; 8Department of Medicine, Division of Cardiology, Duke University School of Medicine, Durham, NC, USA; 9North Carolina Community Care Networks, Raleigh, NC, USA; 10Pfizer, Inc., and Weill Medical College of Cornell University, New York, NY, USA; 11Department of Psychiatry and Behavioral Sciences, Duke University School of Medicine, Durham, NC, USA Abstract: Effective medications are a cornerstone of prevention and disease treatment, yet only about half of patients take their medications as prescribed, resulting in a common and costly public health challenge for the US healthcare system. Since poor medication adherence is a complex problem with many contributing causes, there is no one universal solution. This paper describes interventions that were not only effective in improving medication adherence among patients with diabetes, but were also potentially scalable (ie, easy to implement to a large population. We identify key characteristics that make these interventions effective and scalable. This information is intended to inform healthcare systems seeking proven, low resource, cost-effective solutions to improve medication adherence. Keywords: medication adherence, diabetes mellitus, chronic disease, dissemination research

  13. Scalable graphene production: perspectives and challenges of plasma applications

    Science.gov (United States)

    Levchenko, Igor; Ostrikov, Kostya (Ken); Zheng, Jie; Li, Xingguo; Keidar, Michael; B. K. Teo, Kenneth

    2016-05-01

    Graphene, a newly discovered and extensively investigated material, has many unique and extraordinary properties which promise major technological advances in fields ranging from electronics to mechanical engineering and food production. Unfortunately, complex techniques and high production costs hinder commonplace applications. Scaling of existing graphene production techniques to the industrial level without compromising its properties is a current challenge. This article focuses on the perspectives and challenges of scalability, equipment, and technological perspectives of the plasma-based techniques which offer many unique possibilities for the synthesis of graphene and graphene-containing products. The plasma-based processes are amenable for scaling and could also be useful to enhance the controllability of the conventional chemical vapour deposition method and some other techniques, and to ensure a good quality of the produced graphene. We examine the unique features of the plasma-enhanced graphene production approaches, including the techniques based on inductively-coupled and arc discharges, in the context of their potential scaling to mass production following the generic scaling approaches applicable to the existing processes and systems. This work analyses a large amount of the recent literature on graphene production by various techniques and summarizes the results in a tabular form to provide a simple and convenient comparison of several available techniques. Our analysis reveals a significant potential of scalability for plasma-based technologies, based on the scaling-related process characteristics. Among other processes, a greater yield of 1 g × h-1 m-2 was reached for the arc discharge technology, whereas the other plasma-based techniques show process yields comparable to the neutral-gas based methods. Selected plasma-based techniques show lower energy consumption than in thermal CVD processes, and the ability to produce graphene flakes of various

  14. Scalable graphene production: perspectives and challenges of plasma applications.

    Science.gov (United States)

    Levchenko, Igor; Ostrikov, Kostya Ken; Zheng, Jie; Li, Xingguo; Keidar, Michael; B K Teo, Kenneth

    2016-05-19

    Graphene, a newly discovered and extensively investigated material, has many unique and extraordinary properties which promise major technological advances in fields ranging from electronics to mechanical engineering and food production. Unfortunately, complex techniques and high production costs hinder commonplace applications. Scaling of existing graphene production techniques to the industrial level without compromising its properties is a current challenge. This article focuses on the perspectives and challenges of scalability, equipment, and technological perspectives of the plasma-based techniques which offer many unique possibilities for the synthesis of graphene and graphene-containing products. The plasma-based processes are amenable for scaling and could also be useful to enhance the controllability of the conventional chemical vapour deposition method and some other techniques, and to ensure a good quality of the produced graphene. We examine the unique features of the plasma-enhanced graphene production approaches, including the techniques based on inductively-coupled and arc discharges, in the context of their potential scaling to mass production following the generic scaling approaches applicable to the existing processes and systems. This work analyses a large amount of the recent literature on graphene production by various techniques and summarizes the results in a tabular form to provide a simple and convenient comparison of several available techniques. Our analysis reveals a significant potential of scalability for plasma-based technologies, based on the scaling-related process characteristics. Among other processes, a greater yield of 1 g × h(-1) m(-2) was reached for the arc discharge technology, whereas the other plasma-based techniques show process yields comparable to the neutral-gas based methods. Selected plasma-based techniques show lower energy consumption than in thermal CVD processes, and the ability to produce graphene flakes of

  15. Scalable and Media Aware Adaptive Video Streaming over Wireless Networks

    Directory of Open Access Journals (Sweden)

    Béatrice Pesquet-Popescu

    2008-07-01

    Full Text Available This paper proposes an advanced video streaming system based on scalable video coding in order to optimize resource utilization in wireless networks with retransmission mechanisms at the radio protocol level. The key component of this system is a packet scheduling algorithm which operates on the different substreams of a main scalable video stream and which is implemented in a so-called media aware network element. The transport channel considered is a dedicated channel subject to long-run variations of its parameters (bitrate, loss rate). Moreover, we propose a combined scalability approach in which common temporal and SNR scalability features can be used jointly with a partitioning of the image into regions of interest. Simulation results show that our approach provides substantial quality gain compared to classical packet transmission methods, and they demonstrate how ROI coding combined with SNR scalability allows the visual quality to be improved further.

  16. Scalable Photogrammetric Motion Capture System "mosca": Development and Application

    Science.gov (United States)

    Knyaz, V. A.

    2015-05-01

    A wide variety of applications, from industrial to entertainment, need reliable and accurate 3D information about the motion of an object and its parts. The motion is often fast, as in vehicle movement, sports biomechanics, or the animation of cartoon characters. Motion capture systems based on different physical principles are used for these purposes. Vision-based systems have great potential for high accuracy and a high degree of automation owing to progress in image processing and analysis. A scalable, inexpensive motion capture system is developed as a convenient and flexible tool for solving various tasks requiring 3D motion analysis. It is based on photogrammetric techniques of 3D measurement and provides high-speed image acquisition, high accuracy of 3D measurements and highly automated processing of captured data. Depending on the application, the system can be easily modified for different working areas from 100 mm to 10 m. The developed motion capture system uses 2 to 4 machine vision cameras to acquire video sequences of object motion. All cameras operate synchronously at frame rates up to 100 frames per second under the control of a personal computer, providing the possibility of accurate calculation of 3D coordinates of points of interest. The system was used in a set of different application fields and demonstrated high accuracy and a high level of automation.

  17. MicROS-drt: supporting real-time and scalable data distribution in distributed robotic systems.

    Science.gov (United States)

    Ding, Bo; Wang, Huaimin; Fan, Zedong; Zhang, Pengfei; Liu, Hui

    A primary requirement in distributed robotic software systems is the dissemination of data to all interested collaborative entities in a timely and scalable manner. However, providing such a service in a highly dynamic and resource-limited robotic environment is a challenging task, and existing robot software infrastructure has limitations in this aspect. This paper presents a novel robot software infrastructure, micROS-drt, which supports real-time and scalable data distribution. The solution is based on a loosely coupled data publish-subscribe model with the ability to support various time-related constraints. To realize this model, a mature data distribution standard, the Data Distribution Service for real-time systems (DDS), is adopted as the foundation of the transport layer of this software infrastructure. By elaborately adapting and encapsulating the capability of the underlying DDS middleware, micROS-drt can meet the requirement of real-time and scalable data distribution in distributed robotic systems. Evaluation results in terms of scalability, latency jitter and transport priority as well as an experiment on real robots validate the effectiveness of this work.

  18. A Scalable Framework to Detect Personal Health Mentions on Twitter.

    Science.gov (United States)

    Yin, Zhijun; Fabbri, Daniel; Rosenbloom, S Trent; Malin, Bradley

    2015-06-05

    Biomedical research has traditionally been conducted via surveys and the analysis of medical records. However, these resources are limited in their content, such that non-traditional domains (eg, online forums and social media) have an opportunity to supplement the view of an individual's health. The objective of this study was to develop a scalable framework to detect personal health status mentions on Twitter and assess the extent to which such information is disclosed. We collected more than 250 million tweets via the Twitter streaming API over a 2-month period in 2014. The corpus was filtered down to approximately 250,000 tweets, stratified across 34 high-impact health issues, based on guidance from the Medical Expenditure Panel Survey. We created a labeled corpus of several thousand tweets via a survey, administered over Amazon Mechanical Turk, that documents when terms correspond to mentions of personal health issues or an alternative (eg, a metaphor). We engineered a scalable classifier for personal health mentions via feature selection and assessed its potential over the health issues. We further investigated the utility of the tweets by determining the extent to which Twitter users disclose personal health status. Our investigation yielded several notable findings. First, we find that tweets from a small subset of the health issues can train a scalable classifier to detect health mentions. Specifically, training on 2000 tweets from four health issues (cancer, depression, hypertension, and leukemia) yielded a classifier with precision of 0.77 on all 34 health issues. Second, Twitter users disclosed personal health status for all health issues. Notably, personal health status was disclosed over 50% of the time for 11 out of 34 (33%) investigated health issues. Third, the disclosure rate was dependent on the health issue in a statistically significant manner (P<.001). For instance, more than 80% of the tweets about migraines (83/100) and allergies (85

  19. Oracle database performance and scalability a quantitative approach

    CERN Document Server

    Liu, Henry H

    2011-01-01

    A data-driven, fact-based, quantitative text on Oracle performance and scalability With database concepts and theories clearly explained in Oracle's context, readers quickly learn how to fully leverage Oracle's performance and scalability capabilities at every stage of designing and developing an Oracle-based enterprise application. The book is based on the author's more than ten years of experience working with Oracle, and is filled with dependable, tested, and proven performance optimization techniques. Oracle Database Performance and Scalability is divided into four parts that enable reader

  20. Scalable-to-lossless transform domain distributed video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Ukhanova, Ann; Veselov, Anton

    2010-01-01

    Distributed video coding (DVC) is a novel approach providing new features such as low-complexity encoding, by mainly exploiting the source statistics at the decoder based on the availability of decoder side information. In this paper, scalable-to-lossless DVC is presented, based on extending a lossy Transform Domain Wyner-Ziv (TDWZ) video codec. ... The codec provides frame-by-frame encoding. Comparing the lossless coding efficiency, the proposed scalable-to-lossless TDWZ video codec can save up to 5%-13% bits compared to JPEG LS and H.264 Intra frame lossless coding, and does so as a scalable-to-lossless coding.

  1. Design issues for numerical libraries on scalable multicore architectures

    International Nuclear Information System (INIS)

    Heroux, M A

    2008-01-01

    Future generations of scalable computers will rely on multicore nodes for a significant portion of overall system performance. At present, most applications and libraries cannot exploit multiple cores beyond running additional MPI processes per node. In this paper we discuss important multicore architecture issues, programming models, algorithm requirements and software design related to effective use of scalable multicore computers. In particular, we focus on important issues for library research and development, making recommendations for how to effectively develop libraries for future scalable computer systems.

  2. Building Scalable Knowledge Graphs for Earth Science

    Science.gov (United States)

    Ramachandran, R.; Maskey, M.; Gatlin, P. N.; Zhang, J.; Duan, X.; Bugbee, K.; Christopher, S. A.; Miller, J. J.

    2017-12-01

    Estimates indicate that the world's information will grow by 800% in the next five years. In any given field, a single researcher or a team of researchers cannot keep up with this rate of knowledge expansion without the help of cognitive systems. Cognitive computing, defined as the use of information technology to augment human cognition, can help tackle large systemic problems. Knowledge graphs, one of the foundational components of cognitive systems, link key entities in a specific domain with other entities via relationships. Researchers could mine these graphs to make probabilistic recommendations and to infer new knowledge. At this point, however, there is a dearth of tools to generate scalable Knowledge graphs using existing corpus of scientific literature for Earth science research. Our project is currently developing an end-to-end automated methodology for incrementally constructing Knowledge graphs for Earth Science. Semantic Entity Recognition (SER) is one of the key steps in this methodology. SER for Earth Science uses external resources (including metadata catalogs and controlled vocabulary) as references to guide entity extraction and recognition (i.e., labeling) from unstructured text, in order to build a large training set to seed the subsequent auto-learning component in our algorithm. Results from several SER experiments will be presented as well as lessons learned.

  3. Ancestors protocol for scalable key management

    Directory of Open Access Journals (Sweden)

    Dieter Gollmann

    2010-06-01

    Full Text Available Group key management is an important functional building block for any secure multicast architecture and has therefore been extensively studied in the literature. The main proposed protocol is Adaptive Clustering for Scalable Group Key Management (ASGK). According to the ASGK protocol, the multicast group is divided into clusters, where each cluster consists of areas of members. Each cluster uses its own Traffic Encryption Key (TEK). These clusters are updated periodically depending on the dynamism of the members during the secure session. A modified protocol, called the Ancestors protocol, is proposed based on ASGK, with some modifications to balance the number of affected members and the encryption/decryption overhead for any number of areas when a member joins or leaves the group. According to the Ancestors protocol, every area receives the dynamism of the members from its parents. The main objective of the modified protocol is to reduce the number of affected members when members join and leave, thereby reducing the 1-affects-n overhead. A comparative study has been done between the ASGK protocol and the modified protocol. The comparative results show that the modified protocol consistently outperforms the ASGK protocol.

  4. Percolator: Scalable Pattern Discovery in Dynamic Graphs

    Energy Technology Data Exchange (ETDEWEB)

    Choudhury, Sutanay; Purohit, Sumit; Lin, Peng; Wu, Yinghui; Holder, Lawrence B.; Agarwal, Khushbu

    2018-02-06

    We demonstrate Percolator, a distributed system for graph pattern discovery in dynamic graphs. In contrast to conventional mining systems, Percolator advocates efficient pattern mining schemes that (1) support pattern detection with keywords; (2) integrate incremental and parallel pattern mining; and (3) support analytical queries such as trend analysis. The core idea of Percolator is to dynamically decide and verify a small fraction of patterns and their instances that must be inspected in response to buffered updates in dynamic graphs, with a total mining cost independent of graph size. We demonstrate a) the feasibility of incremental pattern mining by walking through each component of Percolator, b) the efficiency and scalability of Percolator over the sheer size of real-world dynamic graphs, and c) how the user-friendly GUI of Percolator interacts with users to support keyword-based queries that detect, browse and inspect trending patterns. We also demonstrate two use cases of Percolator, in social media trend analysis and academic collaboration analysis, respectively.

  5. Scalable Notch Antenna System for Multiport Applications

    Directory of Open Access Journals (Sweden)

    Abdurrahim Toktas

    2016-01-01

    Full Text Available A novel and compact scalable antenna system is designed for multiport applications. The basic design is built on a square patch with an electrical size of 0.82λ0×0.82λ0 (at 2.4 GHz) on a dielectric substrate. The design consists of four symmetrical and orthogonal triangular notches with circular feeding slots at the corners of the common patch. The 4-port antenna can be simply rearranged to 8-port and 12-port systems. The operating band of the system can be tuned by scaling (S) the size of the system while fixing the thickness of the substrate. The antenna system with S: 1/1, with a size of 103.5×103.5 mm2, operates in the frequency band of 2.3–3.0 GHz. By scaling the antenna with S: 1/2.3, a system of 45×45 mm2 is achieved, and thus the operating band is tuned to 4.7–6.1 GHz with the same scattering characteristic. A parametric study is also conducted to investigate the effects of changing the notch dimensions. The performance of the antenna is verified in terms of the antenna characteristics as well as diversity and multiplexing parameters. The antenna system can be tuned by scaling so that it is applicable to multiport WLAN, WIMAX, and LTE devices with port upgradability.
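
    A quick sanity check of the scaling rule: for a fixed electrical size, the resonant frequency is inversely proportional to the physical size, so f_2 ≈ f_1 · (L_1/L_2) = 2.3 GHz × (103.5/45) ≈ 5.3 GHz, which falls inside the reported 4.7–6.1 GHz band; the deviation from an exact 2.3× shift is plausibly because the substrate thickness is deliberately left unscaled.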

  6. Scalable conditional induction variables (CIV) analysis

    KAUST Repository

    Oancea, Cosmin E.

    2015-02-01

    Subscripts using induction variables that cannot be expressed as a formula in terms of the enclosing-loop indices appear in the low-level implementation of common programming abstractions such as filter, or stack operations, and pose significant challenges to automatic parallelization. Because the complexity of such induction variables is often due to their conditional evaluation across the iteration space of loops we name them Conditional Induction Variables (CIV). This paper presents a flow-sensitive technique that summarizes both such CIV-based and affine subscripts to program level, using the same representation. Our technique requires no modifications of our dependence tests, which are agnostic to the original shape of the subscripts, and is more powerful than previously reported dependence tests that rely on the pairwise disambiguation of read-write references. We have implemented the CIV analysis in our parallelizing compiler and evaluated its impact on five Fortran benchmarks. We have found that there are many important loops using CIV subscripts and that our analysis can lead to their scalable parallelization. This in turn has led to the parallelization of the benchmark programs they appear in.

  7. A Programmable, Scalable-Throughput Interleaver

    Directory of Open Access Journals (Sweden)

    E. J. C. Rijshouwer

    2010-01-01

    Full Text Available The interleaver stages of digital communication standards show a surprisingly large variation in throughput, state sizes, and permutation functions. Furthermore, data rates for 4G standards such as LTE-Advanced will exceed typical baseband clock frequencies of handheld devices. Multistream operation for Software Defined Radio and iterative decoding algorithms will call for ever higher interleave data rates. Our interleave machine is built around 8 single-port SRAM banks and can be programmed to generate up to 8 addresses every clock cycle. The scalable architecture combines SIMD and VLIW concepts with an efficient resolution of bank conflicts. A wide range of cellular, connectivity, and broadcast interleavers have been mapped on this machine, with throughputs up to more than 0.5 Gsymbol/second. Although it was designed for channel interleaving, the application domain of the interleaver extends also to Turbo interleaving. The presented configuration of the architecture is designed as part of a programmable outer receiver on a prototype board. It offers (near) universal programmability to enable the implementation of new interleavers. The interleaver measures 2.09 mm2 in 65 nm CMOS (including memories) and proves functional on silicon.

  8. A Programmable, Scalable-Throughput Interleaver

    Directory of Open Access Journals (Sweden)

    Rijshouwer EJC

    2010-01-01

    Full Text Available The interleaver stages of digital communication standards show a surprisingly large variation in throughput, state sizes, and permutation functions. Furthermore, data rates for 4G standards such as LTE-Advanced will exceed typical baseband clock frequencies of handheld devices. Multistream operation for Software Defined Radio and iterative decoding algorithms will call for ever higher interleave data rates. Our interleave machine is built around 8 single-port SRAM banks and can be programmed to generate up to 8 addresses every clock cycle. The scalable architecture combines SIMD and VLIW concepts with an efficient resolution of bank conflicts. A wide range of cellular, connectivity, and broadcast interleavers have been mapped on this machine, with throughputs up to more than 0.5 Gsymbol/second. Although it was designed for channel interleaving, the application domain of the interleaver extends also to Turbo interleaving. The presented configuration of the architecture is designed as part of a programmable outer receiver on a prototype board. It offers (near) universal programmability to enable the implementation of new interleavers. The interleaver measures 2.09 mm2 in 65 nm CMOS (including memories) and proves functional on silicon.

  9. Modular Universal Scalable Ion-trap Quantum Computer

    Science.gov (United States)

    2016-06-02

    Final Report: Modular Universal Scalable Ion-trap Quantum Computer (1-Aug-2010 to 31-Jan-2016). The main goal of the original MUSIQC proposal was to construct and demonstrate a modular and universally-expandable ion-trap quantum computer. Keywords: ion trap quantum computation, scalable modular architectures.

  10. Architectures and Applications for Scalable Quantum Information Systems

    Science.gov (United States)

    2007-01-01

    Final Technical Report AFRL-IF-RS-TR-2007-12, January 2007: Architectures and Applications for Scalable Quantum Information Systems (grant FA8750-01-2-0521).

  11. On the scalability of LISP and advanced overlaid services

    OpenAIRE

    Coras, Florin

    2015-01-01

    In just four decades the Internet has gone from a lab experiment to a worldwide, business critical infrastructure that caters to the communication needs of almost a half of the Earth's population. With these figures on its side, arguing against the Internet's scalability would seem rather unwise. However, the Internet's organic growth is far from finished and, as billions of new devices are expected to be joined in the not so distant future, scalability, or lack thereof, is commonly believed ...

  12. Scalable, full-colour and controllable chromotropic plasmonic printing

    OpenAIRE

    Xue, Jiancai; Zhou, Zhang-Kai; Wei, Zhiqiang; Su, Rongbin; Lai, Juan; Li, Juntao; Li, Chao; Zhang, Tengwei; Wang, Xue-Hua

    2015-01-01

    Plasmonic colour printing has drawn wide attention as a promising candidate for the next-generation colour-printing technology. However, an efficient approach to realize full colour and scalable fabrication is still lacking, which prevents plasmonic colour printing from practical applications. Here we present a scalable and full-colour plasmonic printing approach by combining conjugate twin-phase modulation with a plasmonic broadband absorber. More importantly, our approach also demonstrates ...

  13. Microscopic Characterization of Scalable Coherent Rydberg Superatoms

    Directory of Open Access Journals (Sweden)

    Johannes Zeiher

    2015-08-01

    Full Text Available Strong interactions can amplify quantum effects such that they become important on macroscopic scales. Controlling these coherently on a single-particle level is essential for the tailored preparation of strongly correlated quantum systems and opens up new prospects for quantum technologies. Rydberg atoms offer such strong interactions, which lead to extreme nonlinearities in laser-coupled atomic ensembles. As a result, multiple excitation of a micrometer-sized cloud can be blocked while the light-matter coupling becomes collectively enhanced. The resulting two-level system, often called a “superatom,” is a valuable resource for quantum information, providing a collective qubit. Here, we report on the preparation of superatoms scalable over 2 orders of magnitude in atom number, utilizing the large interaction strength provided by Rydberg atoms combined with precise control of an ensemble of ultracold atoms in an optical lattice. The latter is achieved with sub-shot-noise precision by local manipulation of a two-dimensional Mott insulator. We microscopically confirm the superatom picture by in situ detection of the Rydberg excitations and observe the characteristic square-root scaling of the optical coupling with the number of atoms. Enabled by the full control over the atomic sample, including the motional degrees of freedom, we infer the overlap of the produced many-body state with a W state from the observed Rabi oscillations and deduce the presence of entanglement. Finally, we investigate the breakdown of the superatom picture when two Rydberg excitations are present in the system, which leads to dephasing and a loss of coherence.
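
    The square-root scaling mentioned above follows from the collective coupling of N atoms sharing a single excitation,

        \Omega_N = \sqrt{N}\,\Omega_1,

    so a superatom of, say, N = 100 atoms couples to the light field ten times more strongly than a single atom, which is what the observed Rabi oscillations confirm.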

  14. Myria: Scalable Analytics as a Service

    Science.gov (United States)

    Howe, B.; Halperin, D.; Whitaker, A.

    2014-12-01

    At the UW eScience Institute, we're working to empower non-experts, especially in the sciences, to write and use data-parallel algorithms. To this end, we are building Myria, a web-based platform for scalable analytics and data-parallel programming. Myria's internal model of computation is the relational algebra extended with iteration, such that every program is inherently data-parallel, just as every query in a database is inherently data-parallel. But unlike databases, iteration is a first-class concept, allowing us to express machine learning tasks, graph traversal tasks, and more. Programs can be expressed in a number of languages and can be executed on a number of execution environments, but we emphasize a particular language called MyriaL that supports both imperative and declarative styles and a particular execution engine called MyriaX that uses an in-memory column-oriented representation and asynchronous iteration. We deliver Myria over the web as a service, providing an editor, performance analysis tools, and catalog browsing features in a single environment. We find that this web-based "delivery vector" is critical in reaching non-experts: they are insulated from the irrelevant technical work associated with installation, configuration, and resource management. The MyriaX backend, one of several execution runtimes we support, is a main-memory, column-oriented, RDBMS-on-the-worker system that supports cyclic data flows as a first-class citizen and has been shown to outperform competitive systems on 100-machine cluster sizes. I will describe the Myria system, give a demo, and present some new results in large-scale oceanographic microbiology.

  15. Embedded DCT and wavelet methods for fine granular scalable video: analysis and comparison

    Science.gov (United States)

    van der Schaar-Mitrea, Mihaela; Chen, Yingwei; Radha, Hayder

    2000-04-01

    Video transmission over bandwidth-varying networks is becoming increasingly important due to emerging applications such as streaming of video over the Internet. The fundamental obstacle in designing such systems resides in the varying characteristics of the Internet (i.e. bandwidth variations and packet-loss patterns). In MPEG-4, a new SNR scalability scheme, called Fine-Granular-Scalability (FGS), is currently under standardization, which is able to adapt in real-time (i.e. at transmission time) to Internet bandwidth variations. The FGS framework consists of a non-scalable motion-predicted base-layer and an intra-coded fine-granular scalable enhancement layer. For example, the base layer can be coded using a DCT-based MPEG-4 compliant, highly efficient video compression scheme. Subsequently, the difference between the original and decoded base-layer is computed, and the resulting FGS-residual signal is intra-frame coded with an embedded scalable coder. In order to achieve high coding efficiency when compressing the FGS enhancement layer, it is crucial to analyze the nature and characteristics of residual signals common to the SNR scalability framework (including FGS). In this paper, we present a thorough analysis of SNR residual signals by evaluating its statistical properties, compaction efficiency and frequency characteristics. The signal analysis revealed that the energy compaction of the DCT and wavelet transforms is limited and the frequency characteristic of SNR residual signals decay rather slowly. Moreover, the blockiness artifacts of the low bit-rate coded base-layer result in artificial high frequencies in the residual signal. Subsequently, a variety of wavelet and embedded DCT coding techniques applicable to the FGS framework are evaluated and their results are interpreted based on the identified signal properties. As expected from the theoretical signal analysis, the rate-distortion performances of the embedded wavelet and DCT-based coders are very

  16. Towards Bandwidth Scalable Transceiver Technology for Optical Metro-Access Networks

    DEFF Research Database (Denmark)

    Spolitis, Sandis; Bobrovs, Vjaceslavs; Wagner, Christoph

    2015-01-01

    Massive fiber-to-the-home network deployment is creating a challenge for telecommunications network operators: an exponential increase of the power consumption at the central offices and a never-ending quest for equipment upgrades operating at higher bandwidth. In this paper, we report on a flexible signal slicing technique, which allows transmission of high-bandwidth signals via low-bandwidth electrical and optoelectrical equipment. The presented signal slicing technique is highly scalable in terms of bandwidth, which is determined by the number of slices used. The performance of a scalable sliceable transceiver for a 1 Gbit/s non-return to zero (NRZ) signal sliced into two slices is presented. Digital signal processing (DSP) power consumption and latency values for the proposed sliceable transceiver technique are also discussed. In this research, post-FEC with 7% overhead error-free transmission has ...

  17. Scalable and massively parallel Monte Carlo photon transport simulations for heterogeneous computing platforms.

    Science.gov (United States)

    Yu, Leiming; Nina-Paravecino, Fanny; Kaeli, David; Fang, Qianqian

    2018-01-01

    We present a highly scalable Monte Carlo (MC) three-dimensional photon transport simulation platform designed for heterogeneous computing systems. Through the development of a massively parallel MC algorithm using the Open Computing Language framework, this research extends our existing graphics processing unit (GPU)-accelerated MC technique to a highly scalable vendor-independent heterogeneous computing environment, achieving significantly improved performance and software portability. A number of parallel computing techniques are investigated to achieve portable performance over a wide range of computing hardware. Furthermore, multiple thread-level and device-level load-balancing strategies are developed to obtain efficient simulations using multiple central processing units and GPUs. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  18. GASPRNG: GPU accelerated scalable parallel random number generator library

    Science.gov (United States)

    Gao, Shuang; Peterson, Gregory D.

    2013-04-01

    Graphics processors represent a promising technology for accelerating computational science applications. Many computational science applications require fast and scalable random number generation with good statistical properties, so they use the Scalable Parallel Random Number Generators library (SPRNG). We present the GPU Accelerated SPRNG library (GASPRNG) to accelerate SPRNG in GPU-based high performance computing systems. GASPRNG includes code for a host CPU and CUDA code for execution on NVIDIA graphics processing units (GPUs) along with a programming interface to support various usage models for pseudorandom numbers and computational science applications executing on the CPU, GPU, or both. This paper describes the implementation approach used to produce high performance and also describes how to use the programming interface. The programming interface allows a user to be able to use GASPRNG the same way as SPRNG on traditional serial or parallel computers as well as to develop tightly coupled programs executing primarily on the GPU. We also describe how to install GASPRNG and use it. To help illustrate linking with GASPRNG, various demonstration codes are included for the different usage models. GASPRNG on a single GPU shows up to 280x speedup over SPRNG on a single CPU core and is able to scale for larger systems in the same manner as SPRNG. Because GASPRNG generates identical streams of pseudorandom numbers as SPRNG, users can be confident about the quality of GASPRNG for scalable computational science applications. Catalogue identifier: AEOI_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEOI_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: UTK license. No. of lines in distributed program, including test data, etc.: 167900 No. of bytes in distributed program, including test data, etc.: 1422058 Distribution format: tar.gz Programming language: C and CUDA. Computer: Any PC or

  19. Non-damaging and scalable carbon nanotube synthesis on carbon fibres

    OpenAIRE

    De Luca, H; Anthony, DB; Qian, H; Greenhalgh, E; Bismarck, A; Shaffer, M

    2016-01-01

    The growth of carbon nanotubes (CNTs) on carbon fibres (CFs) to produce a hierarchical fibre with two differing reinforcement length scales, in this instance nanometre and micrometre respectively, is considered a route to improve current state-of-the-art fibre reinforced composites [1]. The scalable production of carbon nanotube-grafted-carbon fibres (CNT-g-CFs) has been limited due to high temperatures, the use of flammable gases and the requirement of inert conditions for CNT synthesis, whi...

  20. Scalable fabrication of strongly textured organic semiconductor micropatterns by capillary force lithography.

    Science.gov (United States)

    Jo, Pil Sung; Vailionis, Arturas; Park, Young Min; Salleo, Alberto

    2012-06-26

    Strongly textured organic semiconductor micropatterns made of the small molecule dioctylbenzothienobenzothiophene (C(8)-BTBT) are fabricated by using a method based on capillary force lithography (CFL). This technique provides the C(8)-BTBT solution with nucleation sites for directional growth, and can be used as a scalable way to produce high quality crystalline arrays in desired regions of a substrate for OFET applications. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Facile and Scalable Synthesis of Monodispersed Spherical Capsules with a Mesoporous Shell

    KAUST Repository

    Qi, Genggeng

    2010-05-11

    Monodispersed HMSs with tunable particle size and shell thickness were successfully synthesized using relatively concentrated polystyrene latex templates and a silica precursor in a weakly basic ethanol/water mixture. The particle size of the capsules can vary from 100 nm to micrometers. These highly engineered monodispersed capsules synthesized by a facile and scalable process may find applications in drug delivery, catalysis, separation, or as biological and chemical microreactors. © 2010 American Chemical Society.

  2. A Scalable Method for Regioselective 3-Acylation of 2-Substituted Indoles under Basic Conditions

    DEFF Research Database (Denmark)

    Johansson, Karl Henrik; Urruticoechea, Andoni; Larsen, Inna

    2015-01-01

    Privileged structures such as 2-arylindoles are recurrent molecular scaffolds in bioactive molecules. We here present an operationally simple, high yielding and scalable method for regioselective 3-acylation of 2-substituted indoles under basic conditions using functionalized acid chlorides. The method shows good tolerance to both electron-withdrawing and donating substituents on the indole scaffold and gives ready access to a variety of functionalized 3-acylindole building blocks suited for further derivatization.

  3. Scalable bonding of nanofibrous polytetrafluoroethylene (PTFE) membranes on microstructures

    Science.gov (United States)

    Mortazavi, Mehdi; Fazeli, Abdolreza; Moghaddam, Saeed

    2018-01-01

    Expanded polytetrafluoroethylene (ePTFE) nanofibrous membranes exhibit high porosity (80%-90%), high gas permeability, chemical inertness, and superhydrophobicity, which makes them a suitable choice in many demanding fields including industrial filtration, medical implants, bio-/nano- sensors/actuators and microanalysis (i.e. lab-on-a-chip). However, one of the major challenges that inhibit implementation of such membranes is their inability to bond to other materials due to their intrinsic low surface energy and chemical inertness. Prior attempts to improve adhesion of ePTFE membranes to other surfaces involved surface chemical treatments which have not been successful due to degradation of the mechanical integrity and the breakthrough pressure of the membrane. Here, we report a simple and scalable method of bonding ePTFE membranes to different surfaces via the introduction of an intermediate adhesive layer. While a variety of adhesives can be used with this technique, the highest bonding performance is obtained for adhesives that have moderate contact angles with the substrate and low contact angles with the membrane. A thin layer of an adhesive can be uniformly applied onto micro-patterned substrates with feature sizes down to 5 µm using a roll-coating process. Membrane-based microchannel and micropillar devices with burst pressures of up to 200 kPa have been successfully fabricated and tested. A thin layer of the membrane remains attached to the substrate after debonding, suggesting that mechanical interlocking through nanofiber engagement is the main mechanism of adhesion.

  4. SAChES: Scalable Adaptive Chain-Ensemble Sampling.

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ray, Jaideep [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ebeida, Mohamed Salah [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Huang, Maoyi [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hou, Zhangshuan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bao, Jie [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Ren, Huiying [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2017-08-01

    We present the development of a parallel Markov Chain Monte Carlo (MCMC) method called SAChES, Scalable Adaptive Chain-Ensemble Sampling. This capability is targeted to Bayesian calibration of computationally expensive simulation models. SAChES involves a hybrid of two methods: Differential Evolution Monte Carlo followed by Adaptive Metropolis. Both methods involve parallel chains. Differential evolution allows one to explore high-dimensional parameter spaces using loosely coupled (i.e., largely asynchronous) chains. Loose coupling allows the use of large chain ensembles, with far more chains than the number of parameters to explore. This reduces the per-chain sampling burden and enables high-dimensional inversions and the use of computationally expensive forward models. The large number of chains can also ameliorate the impact of silent errors, which may affect only a few chains. The chain ensemble can also be sampled to provide an initial condition when an aberrant chain is re-spawned. Adaptive Metropolis takes the best points from the differential evolution and efficiently homes in on the posterior density. The multitude of chains in SAChES is leveraged to (1) enable efficient exploration of the parameter space and (2) ensure robustness to silent errors, which may be unavoidable in extreme-scale computational platforms of the future. This report outlines SAChES, describes four papers that are the result of the project, and discusses some additional results.
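
    To make the chain-ensemble idea concrete, the sketch below implements a plain differential-evolution MCMC proposal and Metropolis sweep in Python/NumPy. The function names and the toy Gaussian target are illustrative; the adaptive-Metropolis stage and the asynchronous parallel decoupling of SAChES are omitted.

```python
import numpy as np

def de_mc_proposal(chains, i, gamma=None, eps_scale=1e-6, rng=np.random):
    """Differential-evolution MCMC proposal for chain i.

    New point = x_i + gamma * (x_a - x_b) + small noise, where a and b are
    two other randomly chosen chains. With many loosely coupled chains,
    this explores high-dimensional spaces without a hand-tuned proposal
    covariance.
    """
    n_chains, dim = chains.shape
    if gamma is None:
        gamma = 2.38 / np.sqrt(2 * dim)  # standard DE-MC scaling
    a, b = rng.choice([j for j in range(n_chains) if j != i],
                      size=2, replace=False)
    return chains[i] + gamma * (chains[a] - chains[b]) \
        + eps_scale * rng.standard_normal(dim)

def de_mc_step(chains, log_post, rng=np.random):
    """One Metropolis-accepted DE-MC sweep over all chains."""
    for i in range(len(chains)):
        prop = de_mc_proposal(chains, i, rng=rng)
        if np.log(rng.uniform()) < log_post(prop) - log_post(chains[i]):
            chains[i] = prop
    return chains

# Toy usage: sample a 5-D standard Gaussian with 32 chains.
rng = np.random.default_rng(0)
chains = rng.standard_normal((32, 5))
log_post = lambda x: -0.5 * np.dot(x, x)
for _ in range(2000):
    de_mc_step(chains, log_post, rng=rng)
print(chains.mean(axis=0))  # should be near zero
```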

  5. Detailed Modeling and Evaluation of a Scalable Multilevel Checkpointing System

    Energy Technology Data Exchange (ETDEWEB)

    Mohror, Kathryn [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Moody, Adam [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bronevetsky, Greg [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); de Supinski, Bronis R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-09-01

    High-performance computing (HPC) systems are growing more powerful by utilizing more components. As the system mean time before failure correspondingly drops, applications must checkpoint frequently to make progress. At scale, however, the cost of checkpointing becomes prohibitive. A solution to this problem is multilevel checkpointing, which employs multiple types of checkpoints in a single run: lightweight checkpoints handle the most common failure modes, while more expensive checkpoints handle severe failures. We designed a multilevel checkpointing library, the Scalable Checkpoint/Restart (SCR) library, that writes lightweight checkpoints to node-local storage in addition to the parallel file system. We present probabilistic Markov models of SCR's performance. We show that on future large-scale systems, SCR can lead to a gain in machine efficiency of up to 35 percent, and reduce the load on the parallel file system by a factor of two. In addition, we predict that checkpoint scavenging, or only writing checkpoints to the parallel file system on application termination, can reduce the load on the parallel file system by 20x on today's systems and still maintain high application efficiency.
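
    As a rough illustration of why multilevel checkpointing pays off, the following back-of-envelope Python model compares a cheap node-local checkpoint against an expensive parallel-file-system checkpoint using the first-order Young/Daly interval. This is a simplification, not SCR's actual Markov model; the costs and MTBF are assumed values.

```python
import math

def daly_optimal_interval(checkpoint_cost, mtbf):
    """Young/Daly first-order optimal checkpoint interval (seconds)."""
    return math.sqrt(2.0 * checkpoint_cost * mtbf)

def machine_efficiency(checkpoint_cost, restart_cost, mtbf, interval):
    """Crude expected efficiency: useful work / wall time, to first order.

    Overhead = checkpoint cost per interval, plus expected rework and
    restart after a failure (half an interval of lost work on average).
    """
    work_fraction = interval / (interval + checkpoint_cost)
    failure_overhead = (interval / 2.0 + restart_cost) / mtbf
    return work_fraction * max(0.0, 1.0 - failure_overhead)

# Compare a fast node-local checkpoint with a slow parallel-file-system one.
mtbf = 4 * 3600.0          # assumed 4-hour system MTBF
for name, cost in [("node-local", 10.0), ("parallel FS", 600.0)]:
    tau = daly_optimal_interval(cost, mtbf)
    eff = machine_efficiency(cost, 300.0, mtbf, tau)
    print(f"{name}: interval={tau:.0f}s efficiency={eff:.3f}")
```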

  6. Toward optimized light utilization in nanowire arrays using scalable nanosphere lithography and selected area growth.

    Science.gov (United States)

    Madaria, Anuj R; Yao, Maoqing; Chi, Chunyung; Huang, Ningfeng; Lin, Chenxi; Li, Ruijuan; Povinelli, Michelle L; Dapkus, P Daniel; Zhou, Chongwu

    2012-06-13

    Vertically aligned, catalyst-free semiconducting nanowires hold great potential for photovoltaic applications, in which achieving scalable synthesis and optimized optical absorption simultaneously is critical. Here, we report combining nanosphere lithography (NSL) and selected area metal-organic chemical vapor deposition (SA-MOCVD) for the first time for scalable synthesis of vertically aligned gallium arsenide nanowire arrays, and surprisingly, we show that such nanowire arrays with patterning defects due to NSL can be as good as highly ordered nanowire arrays in terms of optical absorption and reflection. Wafer-scale patterning for nanowire synthesis was done using a polystyrene nanosphere template as a mask. Nanowires grown from substrates patterned by NSL show similar structural features to those patterned using electron beam lithography (EBL). Reflection of photons from the NSL-patterned nanowire array was used as a measure of the effect of defects present in the structure. Experimentally, we show that GaAs nanowires as short as 130 nm show reflection of <10% over the visible range of the solar spectrum. Our results indicate that a highly ordered nanowire structure is not necessary: despite the "defects" present in NSL-patterned nanowire arrays, their optical performance is similar to "defect-free" structures patterned by more costly, time-consuming EBL methods. Our scalable approach for synthesis of vertical semiconducting nanowires can have application in high-throughput and low-cost optoelectronic devices, including solar cells.

  7. Temporal Scalability of Dynamic Volume Data using Mesh Compensated Wavelet Lifting.

    Science.gov (United States)

    Schnurrer, Wolfgang; Pallast, Niklas; Richter, Thomas; Kaup, Andre

    2017-10-12

    Due to their high resolution, dynamic medical 2D+t and 3D+t volumes from computed tomography (CT) and magnetic resonance tomography (MR) reach a size which makes them unwieldy for teleradiologic applications. A lossless scalable representation offers the advantage of a down-scaled version which can be used for orientation or previewing, while the remaining information for reconstructing the full resolution is transmitted on demand. The wavelet transform offers the desired scalability. A very high quality of the lowpass sub-band is crucial in order to use it as a down-scaled representation. We propose an approach based on compensated wavelet lifting for obtaining a scalable representation of dynamic CT and MR volumes with very high quality. Mesh compensation is well suited to modeling the displacement in dynamic volumes, which is mainly given by expansion and contraction of tissue over time. To achieve this, we propose an optimized estimation of the mesh compensation parameters tailored to dynamic volumes. Within the lifting structure, the inversion of the motion compensation is crucial in the update step. We propose to take this inversion directly into account during the estimation step, and can thereby improve the quality of the lowpass sub-band by 0.63 dB and 0.43 dB on average for our tested dynamic CT and MR volumes, at the cost of a rate increase of 2.4% and 1.2% on average.
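
    The lifting structure itself is easy to sketch. The Python below shows one uncompensated (2,2) lifting level along the time axis, together with its exact inverse; in the paper's setting the predict and update steps would additionally apply (and invert) the mesh compensation, which is omitted here.

```python
import numpy as np

def lifting_forward(x):
    """One level of a (2,2) lifting wavelet transform along the last axis.

    Split -> predict (highpass) -> update (lowpass). In motion/mesh-
    compensated lifting, predict and update would first warp neighbouring
    frames toward the current one; that compensation is omitted here.
    """
    even, odd = x[..., 0::2], x[..., 1::2]
    # Predict: odd samples from the average of even neighbours (periodic).
    detail = odd - 0.5 * (even + np.roll(even, -1, axis=-1))
    # Update: keep the lowpass band mean-preserving.
    approx = even + 0.25 * (detail + np.roll(detail, 1, axis=-1))
    return approx, detail

def lifting_inverse(approx, detail):
    """Exact inverse: undo update, undo predict, interleave."""
    even = approx - 0.25 * (detail + np.roll(detail, 1, axis=-1))
    odd = detail + 0.5 * (even + np.roll(even, -1, axis=-1))
    x = np.empty(approx.shape[:-1] + (2 * approx.shape[-1],))
    x[..., 0::2], x[..., 1::2] = even, odd
    return x

# Perfect-reconstruction check on a toy 2D+t volume (4x4 slices, 8 frames).
vol = np.random.default_rng(1).standard_normal((4, 4, 8))
lo, hi = lifting_forward(vol)
assert np.allclose(lifting_inverse(lo, hi), vol)
```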

  8. On Multiple AER Handshaking Channels Over High-Speed Bit-Serial Bidirectional LVDS Links With Flow-Control and Clock-Correction on Commercial FPGAs for Scalable Neuromorphic Systems.

    Science.gov (United States)

    Yousefzadeh, Amirreza; Jablonski, Miroslaw; Iakymchuk, Taras; Linares-Barranco, Alejandro; Rosado, Alfredo; Plana, Luis A; Temple, Steve; Serrano-Gotarredona, Teresa; Furber, Steve B; Linares-Barranco, Bernabe

    2017-10-01

    Address event representation (AER) is a widely employed asynchronous technique for interchanging "neural spikes" between different hardware elements in neuromorphic systems. Each neuron or cell in a chip or a system is assigned an address (or ID), which is typically communicated through a high-speed digital bus, thus time-multiplexing a high number of neural connections. Conventional AER links use parallel physical wires together with a pair of handshaking signals (request and acknowledge). In this paper, we present a fully serial implementation using bidirectional SATA connectors with a pair of low-voltage differential signaling (LVDS) wires for each direction. The proposed implementation can multiplex a number of conventional parallel AER links for each physical LVDS connection. It uses flow control, clock correction, and byte alignment techniques to transmit 32-bit address events reliably over multiplexed serial connections. The setup has been tested using commercial Spartan6 FPGAs attaining a maximum event transmission speed of 75 Meps (Mega events per second) for 32-bit events at a line rate of 3.0 Gbps. Full HDL codes (vhdl/verilog) and example demonstration codes for the SpiNNaker platform will be made available.

  9. A Scalable Gaussian Process Analysis Algorithm for Biomass Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Chandola, Varun [ORNL; Vatsavai, Raju [ORNL

    2011-01-01

    Biomass monitoring is vital for studying the carbon cycle of earth's ecosystem and has several significant implications, especially in the context of understanding climate change and its impacts. Recently, several change detection methods have been proposed to identify land cover changes in temporal profiles (time series) of vegetation collected using remote sensing instruments, but they do not satisfy one or both of the two requirements of the biomass monitoring problem, i.e., operating in online mode and handling periodic time series. In this paper, we adapt Gaussian process regression to detect changes in such time series in an online fashion. While Gaussian processes (GPs) have been widely used as a kernel-based learning method for regression and classification, their applicability to massive spatio-temporal data sets, such as remote sensing data, has been limited owing to the high computational costs involved. We focus on addressing the scalability issues associated with the proposed GP-based change detection algorithm. This paper makes several significant contributions. First, we propose a GP-based online time series change detection algorithm and demonstrate its effectiveness in detecting different types of changes in Normalized Difference Vegetation Index (NDVI) data obtained from a study area in Iowa, USA. Second, we propose an efficient Toeplitz-matrix-based solution which significantly improves the computational complexity and memory requirements of the proposed GP-based method. Specifically, the proposed solution can analyze a time series of length t in O(t^2) time while maintaining an O(t) memory footprint, compared to the O(t^3) time and O(t^2) memory requirement of standard matrix-manipulation-based methods. Third, we describe a parallel version of the proposed solution which can be used to simultaneously analyze a large number of time series. We study three different parallel implementations: using threads, MPI, and a
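
    The Toeplitz trick is straightforward to demonstrate: for evenly spaced samples and a stationary kernel, the covariance matrix is constant along its diagonals, so a Levinson-type solver applies. A minimal sketch using SciPy's solve_toeplitz follows; the helper name gp_predict_toeplitz and all kernel settings are illustrative, not the paper's.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def gp_predict_toeplitz(y, kernel_row, noise_var, k_star):
    """GP mean prediction for an evenly spaced time series.

    With uniform sampling and a stationary kernel, the covariance matrix
    K is Toeplitz, so (K + noise * I)^{-1} y can be computed by a
    Levinson-type solver in O(t^2) time and O(t) memory, instead of the
    O(t^3) time / O(t^2) memory of a dense Cholesky solve.
    """
    first_col = kernel_row.copy()
    first_col[0] += noise_var
    alpha = solve_toeplitz(first_col, y)   # (K + sigma^2 I)^{-1} y
    return k_star @ alpha

# Toy periodic NDVI-like series with a squared-exponential kernel.
t = np.arange(200)
y = np.sin(2 * np.pi * t / 23) + 0.1 * np.random.default_rng(2).standard_normal(200)
kernel = lambda d: np.exp(-0.5 * (d / 5.0) ** 2)
row = kernel(t.astype(float))              # first row/column of Toeplitz K
mean_at_obs = gp_predict_toeplitz(y, row, 0.01,
                                  kernel(np.abs(t[:, None] - t[None, :])))
print(mean_at_obs[:5])
```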

  10. GPU-based Scalable Volumetric Reconstruction for Multi-view Stereo

    Energy Technology Data Exchange (ETDEWEB)

    Kim, H; Duchaineau, M; Max, N

    2011-09-21

    We present a new scalable volumetric reconstruction algorithm for multi-view stereo using a graphics processing unit (GPU). It is an effectively parallelized GPU algorithm that simultaneously uses a large number of GPU threads, each of which performs voxel carving, in order to integrate depth maps with images from multiple views. Each depth map, triangulated from pair-wise semi-dense correspondences, represents a view-dependent surface of the scene. This algorithm also provides scalability for large-scale scene reconstruction in a high resolution voxel grid by utilizing streaming and parallel computation. The output is a photo-realistic 3D scene model in a volumetric or point-based representation. We demonstrate the effectiveness and the speed of our algorithm with a synthetic scene and real urban/outdoor scenes. Our method can also be integrated with existing multi-view stereo algorithms such as PMVS2 to fill holes or gaps in textureless regions.

  11. A Bit Stream Scalable Speech/Audio Coder Combining Enhanced Regular Pulse Excitation and Parametric Coding

    Science.gov (United States)

    Riera-Palou, Felip; den Brinker, Albertus C.

    2007-12-01

    This paper introduces a new audio and speech broadband coding technique based on the combination of a pulse excitation coder and a standardized parametric coder, namely, MPEG-4 high-quality parametric coder. After presenting a series of enhancements to regular pulse excitation (RPE) to make it suitable for the modeling of broadband signals, it is shown how pulse and parametric codings complement each other and how they can be merged to yield a layered bit stream scalable coder able to operate at different points in the quality bit rate plane. The performance of the proposed coder is evaluated in a listening test. The major result is that the extra functionality of the bit stream scalability does not come at the price of a reduced performance since the coder is competitive with standardized coders (MP3, AAC, SSC).

  12. A Bit Stream Scalable Speech/Audio Coder Combining Enhanced Regular Pulse Excitation and Parametric Coding

    Directory of Open Access Journals (Sweden)

    Albertus C. den Brinker

    2007-01-01

    Full Text Available This paper introduces a new audio and speech broadband coding technique based on the combination of a pulse excitation coder and a standardized parametric coder, namely, the MPEG-4 high-quality parametric coder. After presenting a series of enhancements to regular pulse excitation (RPE) to make it suitable for the modeling of broadband signals, it is shown how pulse and parametric codings complement each other and how they can be merged to yield a layered bit stream scalable coder able to operate at different points in the quality bit rate plane. The performance of the proposed coder is evaluated in a listening test. The major result is that the extra functionality of the bit stream scalability does not come at the price of a reduced performance since the coder is competitive with standardized coders (MP3, AAC, SSC).

  13. A Scalable Parallel PWTD-Accelerated SIE Solver for Analyzing Transient Scattering from Electrically Large Objects

    KAUST Repository

    Liu, Yang

    2015-12-17

    A scalable parallel plane-wave time-domain (PWTD) algorithm for efficient and accurate analysis of transient scattering from electrically large objects is presented. The algorithm produces scalable communication patterns on very large numbers of processors by leveraging two mechanisms: (i) a hierarchical parallelization strategy to evenly distribute the computation and memory loads at all levels of the PWTD tree among processors, and (ii) a novel asynchronous communication scheme to reduce the cost and memory requirement of the communications between the processors. The efficiency and accuracy of the algorithm are demonstrated through its applications to the analysis of transient scattering from a perfect electrically conducting (PEC) sphere with a diameter of 70 wavelengths and a PEC square plate with a dimension of 160 wavelengths. Furthermore, the proposed algorithm is used to analyze transient fields scattered from realistic airplane and helicopter models under high frequency excitation.

  14. Blind Cooperative Routing for Scalable and Energy-Efficient Internet of Things

    KAUST Repository

    Bader, Ahmed

    2016-02-26

    Multihop networking is promoted in this paper for energy-efficient and highly-scalable Internet of Things (IoT). Recognizing concerns related to the scalability of classical multihop routing and medium access techniques, the use of blind cooperation in conjunction with multihop communications is advocated here. Blind cooperation, however, is shown to be inefficient unless power control is applied. Inefficiency in this paper is projected in terms of the transport rate normalized to energy consumption. To that end, an uncoordinated power control mechanism is proposed whereby each device in a blind cooperative cluster randomly adjusts its transmit power level. An upper bound is derived for the mean transmit power that must be observed at each device. Finally, the uncoordinated power control mechanism is demonstrated to consistently outperform the simple point-to-point routing case. © 2015 IEEE.
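
    A toy rendition of the uncoordinated power control idea is sketched below: each node draws its transmit power independently, subject to a cap tied to the mean power bound. The distribution, the cap, and the rate/energy metric are illustrative assumptions, not the paper's derivation.

```python
import numpy as np

def random_power_levels(n_devices, p_mean, rng):
    """Uncoordinated power control: each node in a blind cooperative
    cluster draws its transmit power independently, respecting a bound
    on the mean transmit power. The exponential draw truncated at 4x the
    mean is an assumption for illustration only.
    """
    p = rng.exponential(p_mean, size=n_devices)
    return np.minimum(p, 4.0 * p_mean)

# Transport rate normalized to energy: crude illustration on a toy cluster.
rng = np.random.default_rng(3)
gains = rng.rayleigh(1.0, size=50) ** 2          # fading channel power gains
for label, powers in [("equal power", np.full(50, 1.0)),
                      ("random power", random_power_levels(50, 1.0, rng))]:
    snr = (powers * gains).sum()                 # received sum over the cluster
    rate = np.log2(1.0 + snr)
    print(f"{label}: rate/energy = {rate / powers.sum():.3f}")
```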

  15. The Scalable Coherent Interface and related standards projects

    International Nuclear Information System (INIS)

    Gustavson, D.B.

    1991-09-01

    The Scalable Coherent Interface (SCI) project (IEEE P1596) found a way to avoid the limits that are inherent in bus technology. SCI provides bus-like services by transmitting packets on a collection of point-to-point unidirectional links. The SCI protocols support cache coherence in a distributed-shared-memory multiprocessor model, message passing, I/O, and local-area-network-like communication over fiber optic or wire links. VLSI circuits that operate parallel links at 1000 MByte/s and serial links at 1000 Mbit/s will be available early in 1992. Several ongoing SCI-related projects are applying the SCI technology to new areas or extending it to more difficult problems. P1596.1 defines the architecture of a bridge between SCI and VME; P1596.2 compatibly extends the cache coherence mechanism for efficient operation with kiloprocessor systems; P1596.3 defines new low-voltage (about 0.25 V) differential signals suitable for low power interfaces for CMOS or GaAs VLSI implementations of SCI; P1596.4 defines a high performance memory chip interface using these signals; P1596.5 defines data transfer formats for efficient interprocessor communication in heterogeneous multiprocessor systems. This paper reports the current status of SCI, related standards, and new projects. 16 refs

  16. A scalable platform for biomechanical studies of tissue cutting forces

    International Nuclear Information System (INIS)

    Valdastri, P; Tognarelli, S; Menciassi, A; Dario, P

    2009-01-01

    This paper presents a novel and scalable experimental platform for biomechanical analysis of tissue cutting that exploits a triaxial force-sensitive scalpel and a high resolution vision system. Real-time measurements of cutting forces can be used simultaneously with accurate visual information in order to extract important biomechanical clues in real time that would aid the surgeon during minimally invasive intervention in preserving healthy tissues. Furthermore, the in vivo data gathered can be used for modeling the viscoelastic behavior of soft tissues, which is an important issue in surgical simulator development. Thanks to a modular approach, this platform can be scaled down, thus enabling in vivo real-time robotic applications. Several cutting experiments were conducted with soft porcine tissues (lung, liver and kidney) chosen as ideal candidates for biopsy procedures. The cutting force curves show repeated self-similar units of localized loading followed by unloading. With regards to tissue properties, the depth of cut plays a significant role in the magnitude of the cutting force acting on the blade. Image processing techniques and dedicated algorithms were used to outline the surface of the tissues and estimate the time variation of the depth of cut. The depth of cut was finally used to obtain the normalized cutting force, thus allowing comparative biomechanical analysis

  17. Scalable and cost-effective NGS genotyping in the cloud.

    Science.gov (United States)

    Souilmi, Yassine; Lancaster, Alex K; Jung, Jae-Yoon; Rizzo, Ettore; Hawkins, Jared B; Powles, Ryan; Amzazi, Saaïd; Ghazal, Hassan; Tonellato, Peter J; Wall, Dennis P

    2015-10-15

    While next-generation sequencing (NGS) costs have plummeted in recent years, the cost and complexity of computation remain substantial barriers to the use of NGS in routine clinical care. The clinical potential of NGS will not be realized until robust and routine whole genome sequencing data can be accurately rendered to medically actionable reports within a time window of hours and at scales of economy in the tens of dollars. We take a step towards addressing this challenge by using COSMOS, a cloud-enabled workflow management system, to develop GenomeKey, an NGS whole genome analysis workflow. COSMOS implements complex workflows making optimal use of high-performance compute clusters. Here we show that the Amazon Web Services (AWS) implementation of GenomeKey via COSMOS provides a fast, scalable, and cost-effective analysis of both public benchmarking and large-scale heterogeneous clinical NGS datasets. Our systematic benchmarking reveals important new insights and considerations for optimizing the clinical turn-around of whole genome analysis and workflow management, including strategic batching of individual genomes and efficient cluster resource configuration.

  18. Scalable Joint Models for Reliable Uncertainty-Aware Event Prediction.

    Science.gov (United States)

    Soleimani, Hossein; Hensman, James; Saria, Suchi

    2017-08-21

    Missing data and noisy observations pose significant challenges for reliably predicting events from irregularly sampled multivariate time series (longitudinal) data. Imputation methods, which are typically used for completing the data prior to event prediction, lack a principled mechanism to account for the uncertainty due to missingness. Alternatively, state-of-the-art joint modeling techniques can be used for jointly modeling the longitudinal and event data and compute event probabilities conditioned on the longitudinal observations. These approaches, however, make strong parametric assumptions and do not easily scale to multivariate signals with many observations. Our proposed approach consists of several key innovations. First, we develop a flexible and scalable joint model based upon sparse multiple-output Gaussian processes. Unlike state-of-the-art joint models, the proposed model can explain highly challenging structure including non-Gaussian noise while scaling to large data. Second, we derive an optimal policy for predicting events using the distribution of the event occurrence estimated by the joint model. The derived policy trades-off the cost of a delayed detection versus incorrect assessments and abstains from making decisions when the estimated event probability does not satisfy the derived confidence criteria. Experiments on a large dataset show that the proposed framework significantly outperforms state-of-the-art techniques in event prediction.
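
    The abstaining decision policy can be illustrated in a few lines. The thresholds below are placeholders, whereas the paper derives them from the relative costs of delayed detection and incorrect assessment.

```python
def event_decision(p_event, p_low, p_high):
    """Abstaining decision policy over an estimated event probability.

    Declare the event above p_high, declare 'no event' below p_low, and
    abstain (defer, await more observations) in between. The fixed
    thresholds here are illustrative; a cost-derived policy would set
    them from the trade-off between delay and misclassification.
    """
    if p_event >= p_high:
        return "alarm"
    if p_event <= p_low:
        return "no-event"
    return "abstain"

for p in (0.05, 0.4, 0.9):
    print(p, event_decision(p, p_low=0.1, p_high=0.8))
```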

  19. Elastic pointer directory organization for scalable shared memory multiprocessors

    Institute of Scientific and Technical Information of China (English)

    Yuhang Liu; Mingfa Zhu; Limin Xiao

    2014-01-01

    In the field of supercomputing, one key issue for scalable shared-memory multiprocessors is the design of the directory which denotes the sharing state for a cache block. A good directory design intends to achieve three key attributes: reasonable memory overhead, sharer position precision and implementation complexity. However, researchers often face the problem that gaining one attribute may result in losing another. The paper proposes an elastic pointer directory (EPD) structure based on the analysis of shared-memory applications, taking the fact that the number of sharers for each directory entry is typically small. Analysis results show that for 4 096 nodes, the ratio of memory overhead to the full-map directory is 2.7%. Theoretical analysis and cycle-accurate execution-driven simulations on a 16 and 64-node cache coherence non uniform memory access (CC-NUMA) multiprocessor show that the corresponding pointer overflow probability is reduced significantly. The performance is observed to be better than that of a limited pointers directory and almost identical to the full-map directory, except for the slight implementation complexity. Using the directory cache to explore directory access locality is also studied. The experimental result shows that this is a promising approach to be used in the state-of-the-art high performance computing domain.
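
    For readers unfamiliar with pointer directories, the toy Python class below shows the overflow problem that an elastic scheme mitigates: with a fixed number of pointer slots, an entry that gains too many sharers loses precision and must fall back to broadcast invalidation. EPD's elastic growing and shrinking of the pointer set is not modeled here.

```python
class LimitedPointerDirectory:
    """Directory entry with k sharer pointers and a broadcast overflow bit.

    Illustrates why pointer overflow matters: once more than k nodes
    share a block, sharer positions are lost and invalidations must be
    broadcast to every node. An elastic scheme grows or shrinks the
    pointer set to keep this rare.
    """
    def __init__(self, k):
        self.k = k
        self.sharers = set()
        self.overflow = False

    def add_sharer(self, node):
        if self.overflow:
            return
        self.sharers.add(node)
        if len(self.sharers) > self.k:
            self.sharers.clear()
            self.overflow = True   # fall back to broadcast invalidation

    def invalidation_targets(self, n_nodes):
        return range(n_nodes) if self.overflow else sorted(self.sharers)

entry = LimitedPointerDirectory(k=4)
for node in range(6):
    entry.add_sharer(node)
print(list(entry.invalidation_targets(16)))  # overflowed: all 16 nodes
```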

  20. Parallel peak pruning for scalable SMP contour tree computation

    Energy Technology Data Exchange (ETDEWEB)

    Carr, Hamish A. [Univ. of Leeds (United Kingdom); Weber, Gunther H. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Davis, CA (United States); Sewell, Christopher M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ahrens, James P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-09

    As data sets grow to exascale, automated data analysis and visualisation are increasingly important, to intermediate human understanding and to reduce demands on disk storage via in situ analysis. Trends in the architecture of high performance computing systems necessitate analysis algorithms that make effective use of combinations of massively multicore and distributed systems. One of the principal analytic tools is the contour tree, which analyses relationships between contours to identify features of more than local importance. Unfortunately, the predominant algorithms for computing the contour tree are explicitly serial, and founded on serial metaphors, which has limited the scalability of this form of analysis. While there is some work on distributed contour tree computation, and separately on hybrid GPU-CPU computation, there is no efficient algorithm with strong formal guarantees on performance allied with fast practical performance. In this paper, we report the first shared-memory SMP algorithm for fully parallel contour tree computation, with formal guarantees of O(lg n lg t) parallel steps and O(n lg n) work, and implementations with up to 10x parallel speed up in OpenMP and up to 50x speed up in NVIDIA Thrust.

  1. Microwave assisted scalable synthesis of titanium ferrite nanomaterials

    Science.gov (United States)

    Shukla, Abhishek; Bhardwaj, Abhishek K.; Singh, S. C.; Uttam, K. N.; Gautam, Nisha; Himanshu, A. K.; Shah, Jyoti; Kotnala, R. K.; Gopal, R.

    2018-04-01

    Titanium ferrite magnetic nanomaterials are synthesized by a one-step, one-pot, scalable method assisted by microwave radiation. Effects of titanium content and microwave exposure time on the size, shape, morphology, yield, bonding nature, crystalline structure, and magnetic properties of titanium ferrite nanomaterials are studied. As-synthesized nanomaterials are characterized by X-ray diffraction (XRD), ultraviolet-visible absorption spectroscopy (UV-Vis), attenuated total reflectance Fourier transform infrared spectroscopy (ATR-FTIR), Raman spectroscopy, transmission electron microscopy (TEM), and vibrating sample magnetometer measurements. XRD measurements depict the presence of two phases of titanium ferrite in the same sample, where crystallite size increases from ~33 nm to 37 nm with the increase in titanium concentration. UV-Vis measurement showed a broad spectrum in the spectral range of 250-600 nm, which reveals that its characteristic peaks lie between the ultraviolet and visible regions; ATR-FTIR and Raman measurements predict iron-titanium oxide structures that are consistent with the XRD results. The micrographs of TEM and selected area electron diffraction patterns show the formation of hexagonal shaped particles with a high degree of crystallinity and the presence of multiple phases. Energy dispersive spectroscopy measurements confirm that the Ti:Fe compositional mass ratio can be controlled by tuning synthesis conditions. Increasing Ti defects in the titanium ferrite lattice, either by increasing the titanium precursor or by increasing exposure time, enhances its magnetic properties.

  2. Dynamic superhydrophobic behavior in scalable random textured polymeric surfaces

    Science.gov (United States)

    Moreira, David; Park, Sung-hoon; Lee, Sangeui; Verma, Neil; Bandaru, Prabhakar R.

    2016-03-01

    Superhydrophobic (SH) surfaces, created from hydrophobic materials with micro- or nano- roughness, trap air pockets in the interstices of the roughness, leading, in fluid flow conditions, to shear-free regions with finite interfacial fluid velocity and reduced resistance to flow. Significant attention has been given to SH conditions on ordered, periodic surfaces. However, in practical terms, random surfaces are more applicable due to their relative ease of fabrication. We investigate SH behavior on a novel durable polymeric rough surface created through a scalable roll-coating process with varying micro-scale roughness through velocity and pressure drop measurements. We introduce a new method to construct the velocity profile over SH surfaces with significant roughness in microchannels. Slip length was measured as a function of differing roughness and interstitial air conditions, with roughness and air fraction parameters obtained through direct visualization. The slip length was matched to scaling laws with good agreement. Roughness at high air fractions led to a reduced pressure drop and higher velocities, demonstrating the effectiveness of the considered surface in terms of reduced resistance to flow. We conclude that the observed air fraction under flow conditions is the primary factor determining the response in fluid flow. Such behavior correlated well with the hydrophobic or superhydrophobic response, indicating significant potential for practical use in enhancing fluid flow efficiency.

  3. A scalable coevolutionary multi-objective particle swarm optimizer

    Directory of Open Access Journals (Sweden)

    Xiangwei Zheng

    2010-11-01

    Full Text Available Multi-Objective Particle Swarm Optimizers (MOPSOs) are easily trapped in local optima, cost more function evaluations and suffer from the curse of dimensionality. A scalable cooperative coevolution and ε-dominance based MOPSO (CEPSO) is proposed to address these issues. In CEPSO, Multi-objective Optimization Problems (MOPs) are decomposed in terms of their decision variables and are optimized by cooperative coevolutionary subswarms, and a uniform distribution mutation operator is adopted to avoid premature convergence. All subswarms share an external archive based on ε-dominance, which is also used as a leader set. Collaborators are selected from the archive and used to construct context vectors in order to evaluate particles in a subswarm. CEPSO is tested on several classical MOP benchmark functions and experimental results show that CEPSO can readily escape from local optima and optimize both low and high dimensional problems, while the number of function evaluations only increases linearly with respect to the number of decision variables. Therefore, CEPSO is competitive in solving various MOPs.
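
    The ε-dominance rule behind the shared archive can be sketched compactly. The code below uses a simple additive ε for minimization problems and is illustrative rather than the paper's exact archive-update procedure.

```python
import numpy as np

def eps_dominates(a, b, eps):
    """True if objective vector a epsilon-dominates b (minimization).

    a eps-dominates b when a is no worse than b + eps in every objective
    and strictly better in at least one; the eps box is what keeps the
    external archive bounded and well spread.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.all(a <= b + eps) and np.any(a < b + eps)

def update_archive(archive, candidate, eps):
    """Insert candidate into an eps-dominance archive (list of vectors)."""
    if any(eps_dominates(m, candidate, eps) for m in archive):
        return archive                      # candidate is already covered
    return [m for m in archive
            if not eps_dominates(candidate, m, eps)] + [candidate]

archive = []
for point in [(1.0, 4.0), (1.02, 3.98), (3.0, 1.0), (0.5, 0.5)]:
    archive = update_archive(archive, point, eps=0.1)
print(archive)   # near-duplicates collapse; (0.5, 0.5) dominates the rest
```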

  4. Scalable, ultra-resistant structural colors based on network metamaterials

    KAUST Repository

    Galinski, Henning

    2017-05-05

    Structural colors have drawn wide attention for their potential as a future printing technology for various applications, ranging from biomimetic tissues to adaptive camouflage materials. However, an efficient approach to realize robust colors with a scalable fabrication technique is still lacking, hampering the realization of practical applications with this platform. Here, we develop a new approach based on large-scale network metamaterials that combine dealloyed subwavelength structures at the nanoscale with lossless, ultra-thin dielectric coatings. By using theory and experiments, we show how subwavelength dielectric coatings control a mechanism of resonant light coupling with epsilon-near-zero regions generated in the metallic network, generating the formation of saturated structural colors that cover a wide portion of the spectrum. Ellipsometry measurements support the efficient observation of these colors, even at angles of 70°. The network-like architecture of these nanomaterials allows for high mechanical resistance, which is quantified in a series of nano-scratch tests. With such remarkable properties, these metastructures represent a robust design technology for real-world, large-scale commercial applications.

  5. Scalable Fault-Tolerant Location Management Scheme for Mobile IP

    Directory of Open Access Journals (Sweden)

    JinHo Ahn

    2001-11-01

    Full Text Available As the number of mobile nodes registering with a network rapidly increases in Mobile IP, multiple mobility agents (home or foreign) can be allocated to a network in order to improve performance and availability. Previous fault tolerant schemes (denoted PRT schemes) to mask failures of the mobility agents use passive replication techniques. However, they result in high failure-free latency during the registration process if the number of mobility agents in the same network increases, and force each mobility agent to manage bindings of all the mobile nodes registering with its network. In this paper, we present a new fault-tolerant scheme (denoted the CML scheme) using checkpointing and message logging techniques. The CML scheme achieves low failure-free latency even if the number of mobility agents in a network increases, and improves scalability to a large number of mobile nodes registering with each network compared with the PRT schemes. Additionally, the CML scheme allows each failed mobility agent to recover bindings of the mobile nodes registering with the mobility agent when it is repaired, even if all the other mobility agents in the same network concurrently fail.

  6. Heat-treated stainless steel felt as scalable anode material for bioelectrochemical systems.

    Science.gov (United States)

    Guo, Kun; Soeriyadi, Alexander H; Feng, Huajun; Prévoteau, Antonin; Patil, Sunil A; Gooding, J Justin; Rabaey, Korneel

    2015-11-01

    This work reports a simple and scalable method to convert stainless steel (SS) felt into an effective anode for bioelectrochemical systems (BESs) by means of heat treatment. X-ray photoelectron spectroscopy and cyclic voltammetry elucidated that the heat treatment generated an iron oxide rich layer on the SS felt surface. The iron oxide layer dramatically enhanced the electroactive biofilm formation on SS felt surface in BESs. Consequently, the sustained current densities achieved on the treated electrodes (1 cm(2)) were around 1.5±0.13 mA/cm(2), which was seven times higher than the untreated electrodes (0.22±0.04 mA/cm(2)). To test the scalability of this material, the heat-treated SS felt was scaled up to 150 cm(2) and similar current density (1.5 mA/cm(2)) was achieved on the larger electrode. The low cost, straightforwardness of the treatment, high conductivity and high bioelectrocatalytic performance make heat-treated SS felt a scalable anodic material for BESs. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Differentiation of Human Pluripotent Stem Cells into Functional Endothelial Cells in Scalable Suspension Culture

    Directory of Open Access Journals (Sweden)

    Ruth Olmer

    2018-05-01

    Full Text Available Summary: Endothelial cells (ECs) are involved in a variety of cellular responses. As multifunctional components of vascular structures, endothelial (progenitor) cells have been utilized in cellular therapies and are required as an important cellular component of engineered tissue constructs and in vitro disease models. Although primary ECs from different sources are readily isolated and expanded, cell quantity and quality in terms of functionality and karyotype stability is limited. ECs derived from human induced pluripotent stem cells (hiPSCs) represent an alternative and potentially superior cell source, but traditional culture approaches and 2D differentiation protocols hardly allow for production of large cell numbers. Aiming at the production of ECs, we have developed a robust approach for efficient endothelial differentiation of hiPSCs in scalable suspension culture. The established protocol results in relevant numbers of ECs for regenerative approaches and industrial applications that show in vitro proliferation capacity and a high degree of chromosomal stability. In this article, U. Martin and colleagues show the generation of hiPSC endothelial cells in scalable cultures in up to 100 mL culture volume. The generated ECs show in vitro proliferation capacity and a high degree of chromosomal stability after in vitro expansion. The established protocol allows the generation of hiPSC-derived ECs in relevant numbers for regenerative approaches. Keywords: hiPSC differentiation, endothelial cells, scalable culture

  8. Using Python to Construct a Scalable Parallel Nonlinear Wave Solver

    KAUST Repository

    Mandli, Kyle

    2011-01-01

    Computational scientists seek to provide efficient, easy-to-use tools and frameworks that enable application scientists within a specific discipline to build and/or apply numerical models with up-to-date computing technologies that can be executed on all available computing systems. Although many tools could be useful for groups beyond a specific application, it is often difficult and time consuming to combine existing software, or to adapt it for a more general purpose. Python enables a high-level approach where a general framework can be supplemented with tools written for different fields and in different languages. This is particularly important when a large number of tools are necessary, as is the case for high performance scientific codes. This motivated our development of PetClaw, a scalable distributed-memory solver for time-dependent nonlinear wave propagation, as a case study for how Python can be used as a high-level framework leveraging a multitude of codes, efficient both in the reuse of code and in programmer productivity. We present scaling results for computations on up to four racks of Shaheen, an IBM BlueGene/P supercomputer at King Abdullah University of Science and Technology. One particularly important issue that PetClaw has faced is the overhead associated with dynamic loading, which can lead to catastrophic scaling behavior. We solve this issue with the walla library, which supplants high-cost filesystem calls with MPI operations at a low enough level that developers may avoid any changes to their codes.
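
    PetClaw itself builds on PETSc and Clawpack, but the flavor of a distributed-memory Python wave solver can be conveyed with a minimal mpi4py sketch: a 1D first-order upwind advection step with a periodic halo exchange. The grid size, time step, and the file name in the run comment are illustrative assumptions.

```python
# Run with, e.g.: mpiexec -n 4 python advect1d.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local, dt, a = 100, 0.001, 1.0
dx = 1.0 / (n_local * size)                   # periodic unit domain
x = (rank * n_local + np.arange(n_local) + 0.5) * dx
q = np.exp(-200.0 * (x - 0.5) ** 2)           # initial Gaussian pulse
left, right = (rank - 1) % size, (rank + 1) % size

for _ in range(200):
    # Halo exchange: send my rightmost cell to the right neighbour,
    # receive my left ghost cell from the left neighbour.
    ghost_l = np.empty(1)
    comm.Sendrecv(q[-1:], dest=right, recvbuf=ghost_l, source=left)
    # First-order upwind update for a > 0.
    q = q - a * dt / dx * (q - np.concatenate([ghost_l, q[:-1]]))

total = comm.reduce(q.sum() * dx, op=MPI.SUM, root=0)
if rank == 0:
    print("total mass:", total)   # periodic advection conserves mass
```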

  9. A scalable variational inequality approach for flow through porous media models with pressure-dependent viscosity

    Science.gov (United States)

    Mapakshi, N. K.; Chang, J.; Nakshatrala, K. B.

    2018-04-01

    Mathematical models for flow through porous media typically enjoy the so-called maximum principles, which place bounds on the pressure field. It is highly desirable to preserve these bounds on the pressure field in predictive numerical simulations, that is, one needs to satisfy discrete maximum principles (DMP). Unfortunately, many of the existing formulations for flow through porous media models do not satisfy DMP. This paper presents a robust, scalable numerical formulation based on variational inequalities (VI), to model non-linear flows through heterogeneous, anisotropic porous media without violating DMP. VI is an optimization technique that places bounds on the numerical solutions of partial differential equations. To crystallize the ideas, a modification to Darcy equations by taking into account pressure-dependent viscosity will be discretized using the lowest-order Raviart-Thomas (RT0) and Variational Multi-scale (VMS) finite element formulations. It will be shown that these formulations violate DMP, and, in fact, these violations increase with an increase in anisotropy. It will be shown that the proposed VI-based formulation provides a viable route to enforce DMP. Moreover, it will be shown that the proposed formulation is scalable, and can work with any numerical discretization and weak form. A series of numerical benchmark problems are solved to demonstrate the effects of heterogeneity, anisotropy and non-linearity on DMP violations under the two chosen formulations (RT0 and VMS), and that of non-linearity on solver convergence for the proposed VI-based formulation. Parallel scalability on modern computational platforms will be illustrated through strong-scaling studies, which will prove the efficiency of the proposed formulation in a parallel setting. Algorithmic scalability as the problem size is scaled up will be demonstrated through novel static-scaling studies. The performed static-scaling studies can serve as a guide for users to be able to select
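
    The essence of enforcing bounds through an optimization/VI formulation can be shown with a toy projected-gradient solve of a box-constrained quadratic program: every iterate is projected onto the admissible interval, so the discrete maximum-principle bounds hold by construction. This is a deliberately simple stand-in for the paper's RT0/VMS-based VI solver.

```python
import numpy as np

def projected_gradient(K, f, lower, upper, steps=5000, lr=None):
    """Solve min 0.5 * p^T K p - f^T p subject to lower <= p <= upper.

    The projection (np.clip) at each iterate is what guarantees the
    bounds, independently of mesh, anisotropy, or heterogeneity.
    """
    if lr is None:
        lr = 1.0 / np.linalg.norm(K, 2)      # step below 1/L for convergence
    p = np.clip(np.zeros_like(f), lower, upper)
    for _ in range(steps):
        p = np.clip(p - lr * (K @ p - f), lower, upper)
    return p

# Toy 1D diffusion: tridiagonal Laplacian stiffness, pressure bounds [0, 1].
n = 50
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
f = np.linspace(-0.5, 1.5, n) * 0.01
p = projected_gradient(K, f, lower=0.0, upper=1.0)
print(p.min(), p.max())    # guaranteed within [0, 1] by construction
```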

  10. Volumetric Medical Image Coding: An Object-based, Lossy-to-lossless and Fully Scalable Approach

    Science.gov (United States)

    Danyali, Habibiollah; Mertins, Alfred

    2011-01-01

    In this article, an object-based, highly scalable, lossy-to-lossless 3D wavelet coding approach for volumetric medical image data (e.g., magnetic resonance (MR) and computed tomography (CT)) is proposed. The new method, called 3DOBHS-SPIHT, is based on the well-known set partitioning in the hierarchical trees (SPIHT) algorithm and supports both quality and resolution scalability. The 3D input data is grouped into groups of slices (GOS) and each GOS is encoded and decoded as a separate unit. The symmetric tree definition of the original 3DSPIHT is improved by introducing a new asymmetric tree structure. While preserving the compression efficiency, the new tree structure allows for a small size of each GOS, which not only reduces memory consumption during the encoding and decoding processes, but also facilitates more efficient random access to certain segments of slices. To achieve more compression efficiency, the algorithm only encodes the main object of interest in each 3D data set, which can have any arbitrary shape, and ignores the unnecessary background. The experimental results on some MR data sets show the good performance of the 3DOBHS-SPIHT algorithm for multi-resolution lossy-to-lossless coding. The compression efficiency, full scalability, and object-based features of the proposed approach, beside its lossy-to-lossless coding support, make it a very attractive candidate for volumetric medical image information archiving and transmission applications. PMID:22606653

  11. A Numerical Study of Scalable Cardiac Electro-Mechanical Solvers on HPC Architectures

    Directory of Open Access Journals (Sweden)

    Piero Colli Franzone

    2018-04-01

    Full Text Available We introduce and study some scalable domain decomposition preconditioners for cardiac electro-mechanical 3D simulations on parallel HPC (High Performance Computing) architectures. The electro-mechanical model of the cardiac tissue is composed of four coupled sub-models: (1) the static finite elasticity equations for the transversely isotropic deformation of the cardiac tissue; (2) the active tension model describing the dynamics of the intracellular calcium, cross-bridge binding and myofilament tension; (3) the anisotropic Bidomain model describing the evolution of the intra- and extra-cellular potentials in the deforming cardiac tissue; and (4) the ionic membrane model describing the dynamics of ionic currents, gating variables, ionic concentrations and stretch-activated channels. This strongly coupled electro-mechanical model is discretized in time with a splitting semi-implicit technique and in space with isoparametric finite elements. The resulting scalable parallel solver is based on Multilevel Additive Schwarz preconditioners for the solution of the Bidomain system and on BDDC preconditioned Newton-Krylov solvers for the non-linear finite elasticity system. The results of several 3D parallel simulations show the scalability of both linear and non-linear solvers and their application to the study of both physiological excitation-contraction cardiac dynamics and re-entrant waves in the presence of different mechano-electrical feedbacks.

  12. Scalability of voltage-controlled filamentary and nanometallic resistance memory devices.

    Science.gov (United States)

    Lu, Yang; Lee, Jong Ho; Chen, I-Wei

    2017-08-31

    Much effort has been devoted to device and materials engineering to realize nanoscale resistance random access memory (RRAM) for practical applications, but a rational physical basis to be relied on to design scalable devices spanning many length scales is still lacking. In particular, there is no clear criterion for switching control in those RRAM devices in which resistance changes are limited to localized nanoscale filaments that experience concentrated heat, electric current and field. Here, we demonstrate voltage-controlled resistance switching, always at a constant characteristic critical voltage, for macro and nanodevices in both filamentary RRAM and nanometallic RRAM, and the latter switches uniformly and does not require a forming process. As a result, area-scalability can be achieved under a device-area-proportional current compliance for the low resistance state of the filamentary RRAM, and for both the low and high resistance states of the nanometallic RRAM. This finding will help design area-scalable RRAM at the nanoscale. It also establishes an analogy between RRAM and synapses, in which signal transmission is also voltage-controlled.

  13. ESVD: An Integrated Energy Scalable Framework for Low-Power Video Decoding Systems

    Directory of Open Access Journals (Sweden)

    Wen Ji

    2010-01-01

    Full Text Available Video applications on mobile wireless devices are challenging due to the limited capacity of batteries, while the complex functionality of video decoding imposes high resource requirements. Power-efficient control has therefore become a critical design issue for devices integrating complex video processing techniques. Previous works on power-efficient control in video decoding systems often aim at low-complexity design, do not explicitly consider the scalable impact of subfunctions in the decoding process, and seldom consider the relationship with the features of the compressed video data. This paper is dedicated to developing an energy-scalable video decoding (ESVD) strategy for energy-limited mobile terminals. First, ESVD can dynamically adapt to variable energy resources through a device-aware technique. Second, ESVD combines the decoder control with the decoded data, by classifying the data into different partition profiles according to its characteristics. Third, it introduces utility-theoretical analysis during the resource allocation process, so as to maximize resource utilization. Finally, it adapts to different energy budgets and generates scalable video decoding output under energy-limited systems. Experimental results demonstrate the efficiency of the proposed approach.

  14. Iterative Integration of Visual Insights during Scalable Patent Search and Analysis.

    Science.gov (United States)

    Koch, S; Bosch, H; Giereth, M; Ertl, T

    2011-05-01

    Patents are of growing importance in current economic markets. Analyzing patent information has, therefore, become a common task for many interest groups. As a prerequisite for patent analysis, extensive search for relevant patent information is essential. Unfortunately, the complexity of patent material inhibits a straightforward retrieval of all relevant patent documents and leads to iterative, time-consuming approaches in practice. Already the amount of patent data to be analyzed poses challenges with respect to scalability. Further scalability issues arise concerning the diversity of users and the large variety of analysis tasks. With "PatViz", a system for the interactive analysis of patent information has been developed that addresses scalability at various levels. PatViz provides a visual environment allowing for interactive reintegration of insights into subsequent search iterations, thereby bridging the gap between search and analytic processes. Because of its extensibility, we expect that the approach we have taken can be employed in different problem domains that require high quality of search results regarding their completeness.

  15. Error-Resilient Unequal Error Protection of Fine Granularity Scalable Video Bitstreams

    Science.gov (United States)

    Cai, Hua; Zeng, Bing; Shen, Guobin; Xiong, Zixiang; Li, Shipeng

    2006-12-01

    This paper deals with the optimal packet loss protection issue for streaming fine granularity scalable (FGS) video bitstreams over IP networks. Unlike many other existing protection schemes, we develop an error-resilient unequal error protection (ER-UEP) method that adds redundant information optimally for loss protection and, at the same time, completely cancels the dependency within the bitstream after loss recovery. In our ER-UEP method, the FGS enhancement-layer bitstream is first packetized into a group of independent and scalable data packets. Parity packets, which are also scalable, are then generated. Unequal protection is finally achieved by properly shaping the data packets and the parity packets. We present an algorithm that can optimally allocate the rate budget between data packets and parity packets, together with several simplified versions that have lower complexity. Compared with conventional UEP schemes that suffer from bit contamination (caused by the bit dependency within a bitstream), our method guarantees successful decoding of all received bits, thus leading to strong error-resilience (at any fixed channel bandwidth) and high robustness (under varying and/or unclean channel conditions).
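
    A minimal illustration of parity-packet protection: the sketch below generates a single XOR parity over equal-length data packets and rebuilds one erased packet. A real UEP scheme would use a stronger erasure code with unequal parity allocation across layers; this only shows the basic mechanism.

```python
import numpy as np

def xor_parity(data_packets):
    """Single XOR parity packet over equal-length data packets.

    Any one lost data packet can be rebuilt from the parity plus the
    surviving packets; stronger codes (e.g. Reed-Solomon) generalize
    this to multiple losses and unequal protection.
    """
    return np.bitwise_xor.reduce(np.stack(data_packets), axis=0)

def recover_single_loss(survivors, parity):
    """Rebuild one erased packet by XOR-ing the parity with all survivors."""
    return np.bitwise_xor.reduce(np.stack(survivors + [parity]), axis=0)

rng = np.random.default_rng(4)
packets = [rng.integers(0, 256, size=8, dtype=np.uint8) for _ in range(4)]
parity = xor_parity(packets)
survivors = packets[:2] + packets[3:]            # packet 2 is lost in transit
assert np.array_equal(recover_single_loss(survivors, parity), packets[2])
```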

  16. Applications of the scalable coherent interface to data acquisition at LHC

    CERN Document Server

    Bogaerts, A; Divià, R; Müller, H; Parkman, C; Ponting, P J; Skaali, B; Midttun, G; Wormald, D; Wikne, J; Falciano, S; Cesaroni, F; Vinogradov, V I; Kristiansen, E H; Solberg, B; Guglielmi, A M; Worm, F H; Bovier, J; Davis, C; CERN. Geneva. Detector Research and Development Committee

    1991-01-01

    We propose to use the Scalable Coherent Interface (SCI) as a very high speed interconnect between LHC detector data buffers and farms of commercial trigger processors. Both the global second and third level trigger can be based on SCI as a reconfigurable and scalable system. SCI is a proposed IEEE standard which uses fast point-to-point links to provide computer-bus like services. It can connect a maximum of 65 536 nodes (memories or processors), providing data transfer rates of up to 1 Gbyte/s. Scalable data acquisition systems can be built using either simple SCI rings or complex switches. The interconnections may be flat cables, coaxial cables, or optical fibres. SCI protocols have been entirely implemented in VLSI, resulting in a significant simplification of data acquisition software. Novel SCI features allow efficient implementation of both data and processor driven readout architectures. In particular, a very efficient implementation of the third level trigger can be achieved by combining SCI's shared ...

  17. A repeatable and scalable fabrication method for sharp, hollow silicon microneedles

    Science.gov (United States)

    Kim, H.; Theogarajan, L. S.; Pennathur, S.

    2018-03-01

    Scalability and manufacturability are impeding the mass commercialization of microneedles in the medical field. Specifically, microneedle geometries need to be sharp, beveled, and completely controllable, which is difficult to achieve with microelectromechanical fabrication techniques. In this work, we performed a parametric study using silicon etch chemistries to optimize the fabrication of scalable and manufacturable beveled silicon hollow microneedles. We theoretically verified our parametric results with diffusion reaction equations and created a design guideline for a varied set of microneedles (80-160 µm needle base width, 100-1000 µm pitch, 40-50 µm inner bore diameter, and 150-350 µm height) to show the repeatability, scalability, and manufacturability of our process. As a result, hollow silicon microneedles with any dimensions can be fabricated with less than 2% non-uniformity across a wafer and 5% deviation between different processes. The key to achieving such high uniformity and consistency is a non-agitated HF-HNO3 bath, silicon nitride masks, and surrounding silicon filler materials with well-defined dimensions. Our proposed method is non-labor-intensive, well defined by theory, and straightforward for wafer-scale mass production, opening doors to a plethora of potential medical and biosensing applications.

  18. Scalable Biomarker Discovery for Diverse High-Dimensional Phenotypes

    Science.gov (United States)

    2015-11-23

  19. Enabling Secure, Scalable Microgrids with High Penetration Renewables

    Energy Technology Data Exchange (ETDEWEB)

    Wasynczuk, Oleg [Purdue Univ., West Lafayette, IN (United States). School of Electrical and Computer Engineering; Rashkin, Lee Joshua [Purdue Univ., West Lafayette, IN (United States). School of Electrical and Computer Engineering; Pekarek, Steven D. [Purdue Univ., West Lafayette, IN (United States). School of Electrical and Computer Engineering

    2013-09-01

    In the first section, ac and dc technologies are compared, highlighting their advantages and disadvantages. Since ac and dc systems have both evolved significantly since their introduction in the mid and latter parts of the 19th century, many of the early advantages of ac systems no longer exist or are of less importance today. Consequently, it is useful to provide a brief historical perspective on the evolution of both ac and dc power systems. As in the dc case, there are many potential modes of operation and control strategies for the given system. In ac systems, the situation is more complex since it is necessary to regulate both the amplitude and the frequency of the ac voltage. In the third section, the techniques for controlling and analyzing the stability of ac systems are reviewed.

  20. A Testbed for Highly-Scalable Mission Critical Information Systems

    National Research Council Canada - National Science Library

    Birman, Kenneth P

    2005-01-01

    ...". The basic idea is to implement a communication platform using these new protocols, and then integrate the platform with standard Web Services tools and technologies to achieve a uniquely easy to use...

  1. Robust, Highly Scalable Solar Array System, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Solar array systems currently under development are focused on near-term missions with designs optimized for the 30-50 kW power range. However, NASA has a vital...

  2. Scalability of DL_POLY on High Performance Computing Platform

    CSIR Research Space (South Africa)

    Mabakane, Mabule S

    2017-12-01

    Full Text Available SACJ 29(3) December... when using many processors within the compute nodes of the supercomputer. The type of the processors of the compute nodes and their memory also play an important role in the overall performance of the parallel application running on a supercomputer. DL...

  3. Scalable DeNoise-and-Forward in Bidirectional Relay Networks

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Krigslund, Rasmus; Popovski, Petar

    2010-01-01

    In this paper a scalable relaying scheme is proposed based on an existing concept called DeNoise-and-Forward (DNF). We call it Scalable DNF (S-DNF), and it targets the scenario with multiple communication flows through a single common relay. The idea of the scheme is to combine packets at the relay in order to save transmissions. To ensure decodability at the end-nodes, a priori information about the content of the combined packets must be available. This is gathered during the initial transmissions to the relay. The trade-off between decodability and the number of necessary transmissions is analysed.
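
    The packet-combining idea is easy to picture for the two-flow case: the relay broadcasts the XOR of the two packets, and each end-node removes its own (a priori known) packet. The sketch below shows plain network coding rather than the denoising step of DNF proper.

```python
import numpy as np

def relay_combine(pkt_a, pkt_b):
    """Relay combines two flows' packets into one broadcast (bitwise XOR)."""
    return np.bitwise_xor(pkt_a, pkt_b)

def end_node_decode(combined, own_packet):
    """Each end-node knows its own packet a priori and XORs it back out."""
    return np.bitwise_xor(combined, own_packet)

# Two-way relaying: A and B each learn the other's packet from one broadcast.
rng = np.random.default_rng(5)
a = rng.integers(0, 256, size=16, dtype=np.uint8)   # packet from node A
b = rng.integers(0, 256, size=16, dtype=np.uint8)   # packet from node B
c = relay_combine(a, b)                             # one transmission, not two
assert np.array_equal(end_node_decode(c, a), b)
assert np.array_equal(end_node_decode(c, b), a)
```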

  4. READSCAN: A fast and scalable pathogen discovery program with accurate genome relative abundance estimation

    KAUST Repository

    Naeem, Raeece

    2012-11-28

    Summary: READSCAN is a highly scalable parallel program to identify non-host sequences (of potential pathogen origin) and estimate their genome relative abundance in high-throughput sequence datasets. READSCAN accurately classified human and viral sequences on a 20.1 million reads simulated dataset in <27 min using a small Beowulf compute cluster with 16 nodes (Supplementary Material). Availability: http://cbrc.kaust.edu.sa/readscan Contact: raeece.naeem@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. 2012 The Author(s).

  5. READSCAN: A fast and scalable pathogen discovery program with accurate genome relative abundance estimation

    KAUST Repository

    Naeem, Raeece; Rashid, Mamoon; Pain, Arnab

    2012-01-01

    Summary: READSCAN is a highly scalable parallel program to identify non-host sequences (of potential pathogen origin) and estimate their genome relative abundance in high-throughput sequence datasets. READSCAN accurately classified human and viral sequences on a 20.1 million reads simulated dataset in <27 min using a small Beowulf compute cluster with 16 nodes (Supplementary Material). Availability: http://cbrc.kaust.edu.sa/readscan Contact: raeece.naeem@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. 2012 The Author(s).

  6. Scalable UWB photonic generator based on the combination of doublet pulses.

    Science.gov (United States)

    Moreno, Vanessa; Rius, Manuel; Mora, José; Muriel, Miguel A; Capmany, José

    2014-06-30

    We propose and experimentally demonstrate a scalable and reconfigurable optical scheme to generate high-order UWB pulses. First, various ultra-wideband doublets are created through a process of phase-to-intensity conversion by means of phase modulation and a dispersive medium. In a second stage, the doublets are combined in an optical processing unit that allows the reconfiguration of high-order UWB pulses. Experimental results in both the time and frequency domains are presented, showing good performance in terms of the fractional bandwidth and spectral efficiency parameters.
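
    For intuition, the following numerical sketch (Python/NumPy; all parameters are illustrative, not taken from the experiment) builds a higher-order pulse as a weighted combination of delayed Gaussian doublets and estimates its 10 dB fractional bandwidth:

        import numpy as np

        t = np.linspace(-1e-9, 1e-9, 2001)   # 2 ns observation window
        sigma = 50e-12                        # 50 ps doublet width

        def doublet(t, t0=0.0):
            # Gaussian doublet: second derivative of a Gaussian pulse.
            x = (t - t0) / sigma
            return (x ** 2 - 1.0) * np.exp(-x ** 2 / 2.0)

        # Weighted, delayed doublets combine into a higher-order UWB pulse.
        weights = [1.0, -2.0, 1.0]
        delays = [-60e-12, 0.0, 60e-12]
        pulse = sum(w * doublet(t, d) for w, d in zip(weights, delays))

        # 10 dB band and fractional bandwidth from the pulse spectrum.
        spectrum = np.abs(np.fft.rfft(pulse))
        freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
        band = freqs[spectrum >= spectrum.max() / np.sqrt(10)]
        f_low, f_high = band[0], band[-1]
        print("fractional bandwidth: %.2f" % (2 * (f_high - f_low) / (f_high + f_low)))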

  7. Design for scalability in 3D computer graphics architectures

    DEFF Research Database (Denmark)

    Holten-Lund, Hans Erik

    2002-01-01

    This thesis describes useful methods and techniques for designing scalable hybrid parallel rendering architectures for 3D computer graphics. Various techniques for utilizing parallelism in a pipelines system are analyzed. During the Ph.D study a prototype 3D graphics architecture named Hybris has...

  8. Scalable storage for a DBMS using transparent distribution

    NARCIS (Netherlands)

    J.S. Karlsson; M.L. Kersten (Martin)

    1997-01-01

    textabstractScalable Distributed Data Structures (SDDSs) provide a self-managing and self-organizing data storage of potentially unbounded size. This stands in contrast to common distribution schemas deployed in conventional distributed DBMS. SDDSs, however, have mostly been used in synthetic

  9. Scalable force directed graph layout algorithms using fast multipole methods

    KAUST Repository

    Yunis, Enas Abdulrahman; Yokota, Rio; Ahmadia, Aron

    2012-01-01

    We present an extension to ExaFMM, a Fast Multipole Method library, as a generalized approach for fast and scalable execution of the Force-Directed Graph Layout algorithm. The Force-Directed Graph Layout algorithm is a physics-based approach

  10. Cascaded column generation for scalable predictive demand side management

    NARCIS (Netherlands)

    Toersche, Hermen; Molderink, Albert; Hurink, Johann L.; Smit, Gerardus Johannes Maria

    2014-01-01

    We propose a nested Dantzig-Wolfe decomposition, combined with dynamic programming, for the distributed scheduling of a large heterogeneous fleet of residential appliances with nonlinear behavior. A cascaded column generation approach gives a scalable optimization strategy, provided that the problem

  11. Scalable power selection method for wireless mesh networks

    CSIR Research Space (South Africa)

    Olwal, TO

    2009-01-01

    Full Text Available This paper addresses the problem of a scalable dynamic power control (SDPC) for wireless mesh networks (WMNs) based on IEEE 802.11 standards. An SDPC model that accounts for architectural complexities witnessed in multiple radios and hops...

  12. Efficient Enhancement for Spatial Scalable Video Coding Transmission

    Directory of Open Access Journals (Sweden)

    Mayada Khairy

    2017-01-01

    Full Text Available Scalable Video Coding (SVC) is an international standard technique for video compression. It is an extension of H.264 Advanced Video Coding (AVC). In the encoding of video streams by SVC, it is suitable to employ the macroblock (MB) mode because it affords superior coding efficiency. However, the exhaustive mode decision technique that is usually used for SVC increases the computational complexity, resulting in a longer encoding time (ET). Many other algorithms were proposed to solve this problem, at the cost of increased transmission time (TT) across the network. To minimize the ET and TT, this paper introduces four efficient algorithms based on spatial scalability. The algorithms utilize the mode-distribution correlation between the base layer (BL) and enhancement layers (ELs) and interpolation between the EL frames. The proposed algorithms are of two categories. Those of the first category are based on interlayer residual SVC spatial scalability. They employ two methods, namely, interlayer interpolation (ILIP) and the interlayer base mode (ILBM) method, and enable ET and TT savings of up to 69.3% and 83.6%, respectively. The algorithms of the second category are based on full-search SVC spatial scalability. They utilize two methods, namely, full interpolation (FIP) and the full-base mode (FBM) method, and enable ET and TT savings of up to 55.3% and 76.6%, respectively.

  13. Scalable Robust Principal Component Analysis Using Grassmann Averages

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Enficiaud, Raffi

    2016-01-01

    In large datasets, manual data verification is impossible, and we must expect the number of outliers to increase with data size. While principal component analysis (PCA) can reduce data size, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortu...

  14. A Scalable Smart Meter Data Generator Using Spark

    DEFF Research Database (Denmark)

    Iftikhar, Nadeem; Liu, Xiufeng; Danalachi, Sergiu

    2017-01-01

    Today, smart meters are being used worldwide. As a matter of fact, smart meters produce large volumes of data. Thus, it is important for smart meter data management and analytics systems to process petabytes of data. Benchmarking and testing of these systems require scalable data, however, it can ...
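
    A minimal sketch of the idea in PySpark (schema, volumes, and load distribution are illustrative assumptions, not the paper's design): meter ids are cross-joined with time slots so that row generation parallelizes across the cluster:

        from pyspark.sql import SparkSession
        from pyspark.sql import functions as F

        spark = SparkSession.builder.appName("meter-data-generator").getOrCreate()

        n_meters, n_hours = 1000, 24 * 365   # one year of hourly readings

        # Cross-join meter ids with hourly slots; rows are generated in parallel.
        meters = spark.range(n_meters).withColumnRenamed("id", "meter_id")
        hours = spark.range(n_hours).withColumnRenamed("id", "hour")
        readings = (
            meters.crossJoin(hours)
                  .withColumn("kwh", F.round(F.rand(seed=42) * 2.5, 3))  # synthetic load
        )

        readings.write.mode("overwrite").parquet("synthetic_meter_readings.parquet")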

  15. Scalability and efficiency of genetic algorithms for geometrical applications

    NARCIS (Netherlands)

    Dijk, van S.F.; Thierens, D.; Berg, de M.; Schoenauer, M.

    2000-01-01

    We study the scalability and efficiency of a GA that we developed earlier to solve the practical cartographic problem of labeling a map with point features. We argue that the special characteristics of our GA make it fit well with theoretical models predicting the optimal population size

  16. Scalable electro-photonic integration concept based on polymer waveguides

    NARCIS (Netherlands)

    Bosman, E.; Steenberge, G. van; Boersma, A.; Wiegersma, S.; Harmsma, P.J.; Karppinen, M.; Korhonen, T.; Offrein, B.J.; Dangel, R.; Daly, A.; Ortsiefer, M.; Justice, J.; Corbett, B.; Dorrestein, S.; Duis, J.

    2016-01-01

    A novel method for fabricating a single mode optical interconnection platform is presented. The method comprises the miniaturized assembly of optoelectronic single dies, the scalable fabrication of polymer single mode waveguides and the coupling to glass fiber arrays providing the I/O's. The low

  17. A Massively Scalable Architecture for Instant Messaging & Presence

    NARCIS (Netherlands)

    Schippers, Jorrit; Remke, Anne Katharina Ingrid; Punt, Henk; Wegdam, M.; Haverkort, Boudewijn R.H.M.; Thomas, N.; Bradley, J.; Knottenbelt, W.; Dingle, N.; Harder, U.

    2010-01-01

    This paper analyzes the scalability of Instant Messaging & Presence (IM&P) architectures. We take a queueing-based modelling and analysis approach to find the bottlenecks of the current IM&P architecture at the Dutch social network Hyves, as well as of alternative architectures. We use the

  18. Scalable multifunction RF system concepts for joint operations

    NARCIS (Netherlands)

    Otten, M.P.G.; Wit, J.J.M. de; Smits, F.M.A.; Rossum, W.L. van; Huizing, A.

    2010-01-01

    RF systems based on modular architectures have the potential of better re-use of technology, decreasing development time, and decreasing life cycle cost. Moreover, modular architectures provide scalability, allowing low cost upgrades and adaptability to different platforms. To achieve maximum

  19. Estimates of the Sampling Distribution of Scalability Coefficient H

    Science.gov (United States)

    Van Onna, Marieke J. H.

    2004-01-01

    Coefficient "H" is used as an index of scalability in nonparametric item response theory (NIRT). It indicates the degree to which a set of items rank orders examinees. Theoretical sampling distributions, however, have only been derived asymptotically and only under restrictive conditions. Bootstrap methods offer an alternative possibility to…
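
    As a rough sketch of the bootstrap alternative (Python/NumPy; the data are synthetic, and the H computation follows the standard observed-versus-expected Guttman error definition):

        import numpy as np

        def coefficient_h(X):
            # X: (persons, items) binary response matrix; sort items easiest first.
            n, k = X.shape
            X = X[:, np.argsort(-X.mean(axis=0))]
            obs_err = exp_err = 0.0
            for i in range(k - 1):
                for j in range(i + 1, k):  # item i is at least as easy as item j
                    obs_err += np.sum((X[:, i] == 0) & (X[:, j] == 1))
                    exp_err += n * (1 - X[:, i].mean()) * X[:, j].mean()
            return 1.0 - obs_err / exp_err

        rng = np.random.default_rng(0)
        theta = rng.normal(size=500)                    # latent trait
        difficulty = np.linspace(-1, 1, 6)
        prob = 1 / (1 + np.exp(-(theta[:, None] - difficulty)))
        X = (rng.random((500, 6)) < prob).astype(int)

        # Bootstrap the sampling distribution of H by resampling persons.
        boot = [coefficient_h(X[rng.integers(0, 500, 500)]) for _ in range(200)]
        print("H = %.3f, bootstrap SE = %.3f" % (coefficient_h(X), np.std(boot)))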

  20. Robust and Scalable DTLS Session Establishment

    OpenAIRE

    Tiloca, Marco; Gehrmann, Christian; Seitz, Ludwig

    2016-01-01

    The Datagram Transport Layer Security (DTLS) protocol is highly vulnerable to a form of denial-of-service attack (DoS), aimed at establishing a high number of invalid, half-open, secure sessions. Moreover, even when the efficient pre-shared key provisioning mode is considered, the key storage on the server side scales poorly with the number of clients. SICS Swedish ICT has designed a security architecture that efficiently addresses both issues without breaking the current standard.

  1. Temporal scalability comparison of the H.264/SVC and distributed video codec

    DEFF Research Database (Denmark)

    Huang, Xin; Ukhanova, Ann; Belyaev, Evgeny

    2009-01-01

    The problem of scalable multimedia video streaming is a topic of current interest. There exist many methods for scalable video coding. This paper is focused on the scalable extension of H.264/AVC (H.264/SVC) and distributed video coding (DVC). The paper presents an efficiency comparison of SV...

  2. Graphene-enhanced electrodes for scalable supercapacitors

    OpenAIRE

    Tsai, I-Ling; Cao, Jianyun; Le Fevre, Lewis; Wang, Bin; Todd, Rebecca; Dryfe, Robert; Forsyth, Andrew

    2017-01-01

    A scale-up process for high-rate-capability supercapacitors based on electrochemically exfoliated graphene (EEG) and hybrid activated carbon (AC)/EEG is studied in this work. A comparison of the rate capabilities of large-scale EEG and AC/EEG-based pouch cells and commercial high-power supercapacitors is also presented in this paper. The oxygen content of the EEG used in this work is 9.6 at%, with a C/O ratio of 9.36, and the electrical conductivity is 2.68 × 10⁴ S m⁻¹. The specific capacitan...

  3. Scalability of optical networks : crosstalk limitations

    NARCIS (Netherlands)

    Tafur Monroy, I.

    2000-01-01

    Optical networks represent a promising solution for the future high capacity and flexible transport network. This paper presents a model for the performance evaluation of optical networks with respect to linear crosstalk and accumulated spontaneous emission noise. The proposed model is intended for

  4. Scalable Beaconing for Cooperative Adaptive Cruise Control

    NARCIS (Netherlands)

    van Eenennaam, Martijn

    2013-01-01

    Over the past two hundred years, automotive technology has evolved from mechanised horse carriage to high-tech systems which pack more computing power than the entire space program that put Neil Armstrong on the moon. Hand-in-hand with this evolution came a proliferation of ownership and use of

  5. An ODMG-compatible testbed architecture for scalable management and analysis of physics data

    International Nuclear Information System (INIS)

    Malon, D.M.; May, E.N.

    1997-01-01

    This paper describes a testbed architecture for the investigation and development of scalable approaches to the management and analysis of massive amounts of high energy physics data. The architecture has two components: an interface layer that is compliant with a substantial subset of the ODMG-93 Version 1.2 specification, and a lightweight object persistence manager that provides flexible storage and retrieval services on a variety of single- and multi-level storage architectures, and on a range of parallel and distributed computing platforms

  6. Applications of Scalable Multipoint Video and Audio Using the Public Internet

    Directory of Open Access Journals (Sweden)

    Robert D. Gaglianello

    2000-01-01

    Full Text Available This paper describes a scalable multipoint video system, designed for efficient generation and display of high quality, multiple resolution, multiple compressed video streams over IP-based networks. We present our experiences using the system over the public Internet for several “real-world” applications, including distance learning, virtual theater, and virtual collaboration. The trials were a combined effort of Bell Laboratories and the Gertrude Stein Repertory Theatre (TGSRT). We also present current advances in the conferencing system since the trials, new areas for application and future applications.

  7. More scalability, less pain: A simple programming model and its implementation for extreme computing

    International Nuclear Information System (INIS)

    Lusk, E.L.; Pieper, S.C.; Butler, R.M.

    2010-01-01

    This is the story of a simple programming model, its implementation for extreme computing, and a breakthrough in nuclear physics. A critical issue for the future of high-performance computing is the programming model to use on next-generation architectures. Described here is a promising approach: program very large machines by combining a simplified programming model with a scalable library implementation. The presentation takes the form of a case study in nuclear physics. The chosen application addresses fundamental issues in the origins of our Universe, while the library developed to enable this application on the largest computers may have applications beyond this one.
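
    The flavor of the approach can be sketched with a master/worker work pool; here Python's multiprocessing stands in for the scalable MPI-based library the abstract alludes to, and the work items are illustrative:

        from multiprocessing import Pool

        def do_work(item):
            # A self-contained unit of work; in the real setting each worker
            # would be an MPI rank pulling tasks from a shared work pool.
            return item * item

        if __name__ == "__main__":
            work_items = range(1000)
            with Pool(processes=8) as pool:
                # Dynamic load balancing: idle workers pull the next chunk.
                results = pool.imap_unordered(do_work, work_items, chunksize=16)
                print(sum(results))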

  8. Bioinspired superhydrophobic surfaces, fabricated through simple and scalable roll-to-roll processing.

    Science.gov (United States)

    Park, Sung-Hoon; Lee, Sangeui; Moreira, David; Bandaru, Prabhakar R; Han, InTaek; Yun, Dong-Jin

    2015-10-22

    A simple, scalable, non-lithographic technique for fabricating durable superhydrophobic (SH) surfaces, based on the fingering instabilities associated with non-Newtonian flow and shear tearing, has been developed. The high viscosity of the nanotube/elastomer paste has been exploited for the fabrication. The fabricated SH surfaces had the appearance of bristled shark skin and were robust with respect to mechanical forces. While flow instability is usually regarded as adverse to roll-coating processes for fabricating uniform films, here we exploit precisely this effect to create the SH surface. Along with their durability and self-cleaning capabilities, we have demonstrated the drag reduction effects of the fabricated films through dynamic flow measurements.

  9. Scalable Synthesis of Freestanding Sandwich-structured Graphene/Polyaniline/Graphene Nanocomposite Paper for Flexible All-Solid-State Supercapacitor

    OpenAIRE

    Xiao, Fei; Yang, Shengxiong; Zhang, Zheye; Liu, Hongfang; Xiao, Junwu; Wan, Lian; Luo, Jun; Wang, Shuai; Liu, Yunqi

    2015-01-01

    We report a scalable and modular method to prepare a new type of sandwich-structured graphene-based nanohybrid paper and explore its practical application as a high-performance electrode in flexible supercapacitors. The freestanding and flexible graphene paper was first fabricated by a highly reproducible printing technique and a bubbling delamination method, by which the area and thickness of the graphene paper can be freely adjusted over a wide range. The as-prepared graphene paper possesses a c...

  10. Zinc oxide nanoleaves: A scalable disperser-assisted sonochemical approach for synthesis and an antibacterial application.

    Science.gov (United States)

    Gupta, Anadi; Srivastava, Rohit

    2018-03-01

    The current study reports a new and highly scalable method for the synthesis of novel-structure zinc oxide nanoleaves (ZnO-NLs) using a disperser-assisted sonochemical approach. The synthesis was carried out in different batches from 50 mL to 1 L to ensure the scalability of the method, which produced almost identical results. The use of high-speed (9000 rpm) mechanical dispersion during bath sonication (200 W, 33 kHz) yields 4.4 g of ZnO-NL powder in a 1 L batch reaction within 2 h (>96% yield). The ZnO-NLs show excellent thermal stability even at high temperature (900 °C) and a high surface area. The high antibacterial activity of ZnO-NLs against the disease-causing Gram-positive bacterium Staphylococcus aureus shows a reduction in CFU and morphological changes such as an eightfold reduction in cell size, cell burst, and cellular leakage at 200 µg/mL concentration. This study provides an efficient, cost-effective, and environmentally friendly approach for the synthesis of ZnO-NLs at industrial scale, as well as a new technique to increase the efficiency of the existing sonochemical method. We envisage that this method can be applied to various fields where ZnO is significantly consumed, such as rubber manufacturing, the ceramic industry, and medicine. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Scalable 2D Hierarchical Porous Carbon Nanosheets for Flexible Supercapacitors with Ultrahigh Energy Density.

    Science.gov (United States)

    Yao, Lei; Wu, Qin; Zhang, Peixin; Zhang, Junmin; Wang, Dongrui; Li, Yongliang; Ren, Xiangzhong; Mi, Hongwei; Deng, Libo; Zheng, Zijian

    2018-03-01

    2D carbon nanomaterials, such as graphene and its derivatives, have gained tremendous research interest in energy storage because of their high capacitance and chemical stability. However, scalable synthesis of ultrathin carbon nanosheets with well-defined pore architectures remains a great challenge. Herein, the first synthesis of 2D hierarchical porous carbon nanosheets (2D-HPCs) with rich nitrogen dopants is reported; they are prepared with high scalability through rapid polymerization of a nitrogen-containing thermoset and a subsequent one-step pyrolysis and activation into 2D porous nanosheets. 2D-HPCs, which are typically 1.5 nm thick and 1-3 µm wide, show a high surface area (2406 m² g⁻¹) with hierarchical micro-, meso-, and macropores. This 2D hierarchical porous structure leads to robust flexibility and good energy-storage capability, 139 Wh kg⁻¹ for a symmetric supercapacitor. Flexible supercapacitor devices fabricated from these 2D-HPCs also present an ultrahigh volumetric energy density of 8.4 mWh cm⁻³ at a power density of 24.9 mW cm⁻³, which is retained at 80% even when the power density is increased 20-fold. The devices show very long electrochemical life (96% retention after 10,000 charge/discharge cycles) and excellent mechanical flexibility. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. GSKY: A scalable distributed geospatial data server on the cloud

    Science.gov (United States)

    Rozas Larraondo, Pablo; Pringle, Sean; Antony, Joseph; Evans, Ben

    2017-04-01

    Earth systems, environmental, and geophysical datasets are extremely valuable sources of information about the state and evolution of the Earth. The ability to combine information coming from different geospatial collections is in increasing demand by the scientific community, and requires managing and manipulating data with different formats and performing operations such as map reprojection, resampling, and other transformations. Due to the large data volumes inherent in these collections, storing multiple copies of them is unfeasible, so such data manipulation must be performed on-the-fly using efficient, high-performance techniques. Ideally this should be performed by a trusted data service using common system libraries to ensure wide use and reproducibility. Recent developments in distributed computing based on dynamic access to significant cloud infrastructure open the door for such new ways of processing geospatial data on demand. The National Computational Infrastructure (NCI), hosted at the Australian National University (ANU), has over 10 Petabytes of nationally significant research data collections. Some of these collections, which comprise a variety of observed and modelled geospatial data, are now made available via a highly distributed geospatial data server, called GSKY (pronounced [jee-skee]). GSKY supports on-demand processing of large geospatial data products such as satellite earth observation data as well as numerical weather products, allowing interactive exploration and analysis of the data. It dynamically and efficiently distributes the required computations among cloud nodes, providing a scalable analysis framework that can adapt to serve a large number of concurrent users. Typical geospatial workflows handling different file formats and data types, or blending data in different coordinate projections and spatio-temporal resolutions, are handled transparently by GSKY. This is achieved by decoupling the data ingestion and indexing process as

  13. Scalable and portable visualization of large atomistic datasets

    Science.gov (United States)

    Sharma, Ashish; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya

    2004-10-01

    A scalable and portable code named Atomsviewer has been developed to interactively visualize a large atomistic dataset consisting of up to a billion atoms. The code uses a hierarchical view frustum-culling algorithm based on the octree data structure to efficiently remove atoms outside of the user's field-of-view. Probabilistic and depth-based occlusion-culling algorithms then select atoms which have a high probability of being visible. Finally, a multiresolution algorithm is used to render the selected subset of visible atoms at varying levels of detail. Atomsviewer is written in C++ and OpenGL, and it has been tested on a number of architectures including Windows, Macintosh, and SGI. Atomsviewer has been used to visualize tens of millions of atoms on a standard desktop computer and, in its parallel version, up to a billion atoms. Program summary: Title of program: Atomsviewer; Catalogue identifier: ADUM; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADUM; Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland; Computers for which the program is designed and tested: 2.4 GHz Pentium 4/Xeon with professional graphics card, Apple G4 (867 MHz)/G5 with professional graphics card; Operating systems: Windows 2000/XP, Mac OS 10.2/10.3, SGI IRIX 6.5; Programming languages: C++, C, and OpenGL; Memory required: 1 gigabyte of RAM; High-speed storage required: 60 gigabytes; No. of lines in the distributed program, including test data: 550 241; No. of bytes in the distributed program, including test data: 6 258 245; Number of bits in a word: arbitrary; Number of processors used: 1; Vectorized or parallelized: no; Distribution format: tar gzip file; Nature of physical problem: scientific visualization of atomic systems; Method of solution: rendering of atoms using computer graphics techniques, culling algorithms for data

  14. Scalable fractionation of iron oxide nanoparticles using a CO2 gas-expanded liquid system

    International Nuclear Information System (INIS)

    Vengsarkar, Pranav S.; Xu, Rui; Roberts, Christopher B.

    2015-01-01

    Iron oxide nanoparticles exhibit highly size-dependent physicochemical properties that are important in applications such as catalysis and environmental remediation. In order for these size-dependent properties to be effectively harnessed for industrial applications scalable and cost-effective techniques for size-controlled synthesis or size separation must be developed. The synthesis of monodisperse iron oxide nanoparticles can be a prohibitively expensive process on a large scale. An alternative involves the use of inexpensive synthesis procedures followed by a size-selective processing technique. While there are many techniques available to fractionate nanoparticles, many of the techniques are unable to efficiently fractionate iron oxide nanoparticles in a scalable and inexpensive manner. A scalable apparatus capable of fractionating large quantities of iron oxide nanoparticles into distinct fractions of different sizes and size distributions has been developed. Polydisperse iron oxide nanoparticles (2–20 nm) coated with oleic acid used in this study were synthesized using a simple and inexpensive version of the popular coprecipitation technique. This apparatus uses hexane as a CO 2 gas-expanded liquid to controllably precipitate nanoparticles inside a 1L high-pressure reactor. This paper demonstrates the operation of this new apparatus and for the first time shows the successful fractionation results on a system of metal oxide nanoparticles, with initial nanoparticle concentrations in the gram-scale. The analysis of the obtained fractions was performed using transmission electron microscopy and dynamic light scattering. The use of this simple apparatus provides a pathway to separate large quantities of iron oxide nanoparticles based upon their size for use in various industrial applications.

  15. GPU-FS-kNN: a software tool for fast and scalable kNN computation using GPUs.

    Directory of Open Access Journals (Sweden)

    Ahmed Shamsul Arefin

    Full Text Available BACKGROUND: The analysis of biological networks has become a major challenge due to the recent development of high-throughput techniques that are rapidly producing very large data sets. The exploding volumes of biological data call for extreme computational power and special computing facilities (i.e., supercomputers). An inexpensive solution, such as General Purpose computation based on Graphics Processing Units (GPGPU), can be adapted to tackle this challenge, but the limitation of the device's internal memory can pose a new problem of scalability. Efficient data and computational parallelism with partitioning is required to provide a fast and scalable solution to this problem. RESULTS: We propose an efficient parallel formulation of the k-Nearest Neighbour (kNN) search problem, which is a popular method for classifying objects in several fields of research, such as pattern recognition, machine learning and bioinformatics. Being very simple and straightforward, the performance of the kNN search degrades dramatically for large data sets, since the task is computationally intensive. The proposed approach is not only fast but also scalable to large-scale instances. Based on our approach, we implemented a software tool GPU-FS-kNN (GPU-based Fast and Scalable k-Nearest Neighbour) for CUDA-enabled GPUs. The basic approach is simple and adaptable to other available GPU architectures. We observed speed-ups of 50-60 times compared with the CPU implementation on a well-known breast microarray study and its associated data sets. CONCLUSION: Our GPU-based Fast and Scalable k-Nearest Neighbour search technique (GPU-FS-kNN) provides a significant performance improvement for nearest-neighbour computation in large-scale networks. Source code and the software tool are available under the GNU Public License (GPL) at https://sourceforge.net/p/gpufsknn/.
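
    The chunking idea behind the tool's data and computational partitioning can be sketched on the CPU with NumPy (sizes and data are illustrative; the real implementation runs the distance computations as CUDA kernels):

        import numpy as np

        def chunked_knn(points, k, chunk=256):
            # Brute-force kNN processed in query chunks, so that each distance
            # block fits in limited (e.g. GPU) memory.
            n = len(points)
            idx = np.empty((n, k), dtype=np.int64)
            dst = np.empty((n, k))
            for s in range(0, n, chunk):
                q = points[s:s + chunk]
                d = np.linalg.norm(q[:, None, :] - points[None, :, :], axis=2)
                np.fill_diagonal(d[:, s:s + len(q)], np.inf)  # exclude self-matches
                part = np.argpartition(d, k, axis=1)[:, :k]   # k nearest, unordered
                idx[s:s + len(q)] = part
                dst[s:s + len(q)] = np.take_along_axis(d, part, axis=1)
            return idx, dst

        pts = np.random.default_rng(1).random((2000, 16))
        neighbours, distances = chunked_knn(pts, k=5)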

  16. Scientific visualization uncertainty, multifield, biomedical, and scalable visualization

    CERN Document Server

    Chen, Min; Johnson, Christopher; Kaufman, Arie; Hagen, Hans

    2014-01-01

    Based on the seminar that took place in Dagstuhl, Germany in June 2011, this contributed volume studies four important topics within the scientific visualization field: uncertainty visualization, multifield visualization, biomedical visualization and scalable visualization.
    • Uncertainty visualization deals with uncertain data from simulations or sampled data, uncertainty due to the mathematical processes operating on the data, and uncertainty in the visual representation.
    • Multifield visualization addresses the need to depict multiple data at individual locations and the combination of multiple datasets.
    • Biomedical visualization is a vast field, with select subtopics addressed, from scanning methodologies to structural and biological applications.
    • Scalability in scientific visualization is critical as data grows and computational devices range from hand-held mobile devices to exascale computational platforms.
    Scientific Visualization will be useful to practitioners of scientific visualization, ...

  17. Scalable quantum memory in the ultrastrong coupling regime.

    Science.gov (United States)

    Kyaw, T H; Felicetti, S; Romero, G; Solano, E; Kwek, L-C

    2015-03-02

    Circuit quantum electrodynamics, consisting of superconducting artificial atoms coupled to on-chip resonators, represents a prime candidate for implementing a scalable quantum computing architecture because of its good tunability and controllability. Furthermore, recent advances have pushed the technology towards the ultrastrong coupling regime of light-matter interaction, where the qubit-resonator coupling strength reaches a considerable fraction of the resonator frequency. Here, we propose a qubit-resonator system operating in that regime as a quantum memory device, and study the storage and retrieval of quantum information in and from the Z2 parity-protected quantum memory within experimentally feasible schemes. We are also convinced that our proposal might pave the way to realizing a scalable quantum random-access memory due to its fast storage and readout performances.

  18. Fast & scalable pattern transfer via block copolymer nanolithography

    DEFF Research Database (Denmark)

    Li, Tao; Wang, Zhongli; Schulte, Lars

    2015-01-01

    A fully scalable and efficient pattern transfer process based on block copolymer (BCP) self-assembly directly on various substrates is demonstrated. PS-rich and PDMS-rich poly(styrene-b-dimethylsiloxane) (PS-b-PDMS) copolymers are used to give monolayer sphere morphology after spin-casting... on long-range lateral order, including fabrication of substrates for catalysis, solar cells, sensors, ultrafiltration membranes and templating of semiconductors or metals.

  19. Semantic Models for Scalable Search in the Internet of Things

    Directory of Open Access Journals (Sweden)

    Dennis Pfisterer

    2013-03-01

    Full Text Available The Internet of Things is anticipated to connect billions of embedded devices equipped with sensors to perceive their surroundings. Thereby, the state of the real world will be available online and in real-time and can be combined with other data and services in the Internet to realize novel applications such as Smart Cities, Smart Grids, or Smart Healthcare. This requires an open representation of sensor data and scalable search over data from diverse sources including sensors. In this paper we show how the Semantic Web technologies RDF (an open semantic data format) and SPARQL (a query language for RDF-encoded data) can be used to address those challenges. In particular, we describe how prediction models can be employed for scalable sensor search, how these prediction models can be encoded as RDF, and how the models can be queried by means of SPARQL.
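
    A minimal sketch with Python's rdflib shows the flavor of the approach; the ex: vocabulary and values are made up for illustration. The scalability idea is to query cheap prediction-model outputs first and contact only matching sensors for fresh readings:

        from rdflib import Graph, Literal, Namespace, RDF

        EX = Namespace("http://example.org/iot#")
        g = Graph()

        for sensor_id, predicted in [("s1", 21.5), ("s2", 38.2)]:
            s = EX[sensor_id]
            g.add((s, RDF.type, EX.TemperatureSensor))
            g.add((s, EX.predictedValue, Literal(predicted)))  # prediction model output

        hot = g.query("""
            PREFIX ex: <http://example.org/iot#>
            SELECT ?sensor WHERE {
                ?sensor a ex:TemperatureSensor ;
                        ex:predictedValue ?v .
                FILTER (?v > 30)
            }""")
        for row in hot:
            print(row.sensor)   # only these sensors are contacted for live data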

  20. ATLAS Grid Data Processing: system evolution and scalability

    CERN Document Server

    Golubkov, D; The ATLAS collaboration; Klimentov, A; Minaenko, A; Nevski, P; Vaniachine, A; Walker, R

    2012-01-01

    The production system for Grid Data Processing handles petascale ATLAS data reprocessing and Monte Carlo activities. The production system empowered further data processing steps on the Grid performed by dozens of ATLAS physics groups with coordinated access to computing resources worldwide, including additional resources sponsored by regional facilities. The system provides knowledge management of configuration parameters for massive data processing tasks, reproducibility of results, scalable database access, orchestrated workflow and performance monitoring, dynamic workload sharing, automated fault tolerance and petascale data integrity control. The system evolves to accommodate a growing number of users and new requirements from our contacts in ATLAS main areas: Trigger, Physics, Data Preparation and Software & Computing. To assure scalability, the next generation production system architecture development is in progress. We report on scaling up the production system for a growing number of users provi...

  1. NPTool: Towards Scalability and Reliability of Business Process Management

    Science.gov (United States)

    Braghetto, Kelly Rosa; Ferreira, João Eduardo; Pu, Calton

    Currently, one important challenge in business process management is to provide scalability and reliability of business process executions at the same time. This difficulty becomes more accentuated when the execution control must handle countless complex business processes. This work presents NavigationPlanTool (NPTool), a tool to control the execution of business processes. NPTool is supported by the Navigation Plan Definition Language (NPDL), a language for business process specification that uses process algebra as its formal foundation. NPTool implements the NPDL language as a SQL extension. The main contribution of this paper is a description of NPTool showing how process algebra features combined with a relational database model can be used to provide scalable and reliable control of the execution of business processes. The next steps for NPTool include reuse of control-flow patterns and support for data flow management.

  2. Proof of Stake Blockchain: Performance and Scalability for Groupware Communications

    DEFF Research Database (Denmark)

    Spasovski, Jason; Eklund, Peter

    2017-01-01

    A blockchain is a distributed transaction ledger, a disruptive technology that creates new possibilities for digital ecosystems. The blockchain ecosystem maintains an immutable transaction record to support many types of digital services. This paper compares the performance and scalability of a web-based groupware communication application using both non-blockchain and blockchain technologies. Scalability is measured with message load synthesized over two typical communication topologies. The first is a 1-to-n network -- a typical client-server or star topology with a central vertex (server) receiving all messages from the remaining n - 1 vertices (clients). The second is a more naturally occurring scale-free network topology, where multiple communication hubs are distributed throughout the network. System performance is tested with both blockchain and non-blockchain solutions using multiple cloud computing

  3. Continuity-Aware Scheduling Algorithm for Scalable Video Streaming

    Directory of Open Access Journals (Sweden)

    Atinat Palawan

    2016-05-01

    Full Text Available The consumer demand for retrieving and delivering visual content through consumer electronic devices has increased rapidly in recent years. The quality of video in packet networks is susceptible to certain traffic characteristics: average bandwidth availability, loss, delay, and delay variation (jitter). This paper presents a scheduling algorithm that modifies the stream of scalable video to combat jitter. The algorithm provides unequal look-ahead by safeguarding the base layer (without the need for overhead) of the scalable video. The results of the experiments show that our scheduling algorithm reduces the number of frames with a violated deadline and significantly improves the continuity of the video stream without compromising the average Y Peak Signal-to-Noise Ratio (PSNR).
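
    One way to picture the unequal look-ahead is a priority queue in which base-layer packets get an earlier effective deadline than enhancement-layer packets of the same frame; the sketch below is an illustration of that idea, not the paper's algorithm:

        import heapq

        def schedule(packets, lookahead=5):
            # packets: (frame_no, layer) tuples, layer 0 being the base layer.
            order, window = [], []
            for i, (frame_no, layer) in enumerate(packets):
                # Safeguard the base layer with an earlier effective deadline.
                key = (frame_no - (lookahead if layer == 0 else 0), layer)
                heapq.heappush(window, (key, i, (frame_no, layer)))
                if len(window) > lookahead:
                    order.append(heapq.heappop(window)[2])
            while window:
                order.append(heapq.heappop(window)[2])
            return order

        stream = [(f, l) for f in range(4) for l in (0, 1, 2)]  # 3 layers per frame
        print(schedule(stream))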

  4. Scalable synthesis and energy applications of defect engineeered nano materials

    Science.gov (United States)

    Karakaya, Mehmet

    Nanomaterials and nanotechnologies have attracted a great deal of attention over the past few decades due to novel physical properties such as high aspect ratio, surface morphology, and impurities, which lead to unique chemical, optical and electronic properties. Awareness of the importance of nanomaterials has motivated researchers to develop nanomaterial growth techniques that further control nanostructure properties such as size and surface morphology, which may alter their fundamental behavior. Carbon nanotubes (CNTs) are among the most promising materials, with their rigidity, strength, elasticity and electrical conductivity, for future applications. Despite the excellent properties explored by abundant research work, a big challenge remains in introducing them into the macroscopic world for practical applications. This thesis first gives a brief overview of CNTs; it then goes on to the mechanical and oil absorption properties of macro-scale CNT assemblies, followed by CNT energy storage applications, and finally fundamental studies of defect-introduced graphene systems. Chapter Two focuses on helically coiled carbon nanotube (HCNT) foams in compression. Similarly to other foams, HCNT foams exhibit preconditioning effects in response to cyclic loading; however, their fundamental deformation mechanisms are unique. Bulk HCNT foams exhibit super-compressibility and recover more than 90% of large compressive strains (up to 80%). When subjected to striker impacts, HCNT foams mitigate impact stresses more effectively compared to other CNT foams comprised of non-helical CNTs (~50% improvement). The unique mechanical properties we revealed demonstrate that HCNT foams are ideally suited for applications in packaging, impact protection, and vibration mitigation. The third chapter describes a simple method for the scalable synthesis of three-dimensional, elastic, and recyclable multi-walled carbon nanotube (MWCNT) based lightweight bucky-aerogels (BAGs) that are

  5. A Scalable Heuristic for Viral Marketing Under the Tipping Model

    Science.gov (United States)

    2013-09-01

    Flixster is a social media website that allows users to share reviews and other information about cinema. [35] It was extracted in Dec. 2010. – FourSquare... work of Reichman were developed independently. We also note that Reichman performs no experimental evaluation of the algorithm. A Scalable Heuristic... other diffusion models, such as the independent cascade model [21] and evolutionary graph theory [25] as well as probabilistic variants of the

  6. A Scalable Communication Architecture for Advanced Metering Infrastructure

    OpenAIRE

    Ngo Hoang , Giang; Liquori , Luigi; Nguyen Chan , Hung

    2013-01-01

    Advanced Metering Infrastructure (AMI), seen as the foundation for overall grid modernization, is an integration of many technologies that provides an intelligent connection between consumers and system operators [ami 2008]. One of the biggest challenges that AMI faces is to scalably collect and manage a huge amount of data from a large number of customers. In our paper, we address this challenge by introducing a mixed peer-to-peer (P2P) and client-server communication architecture for AMI in whic...

  7. Scalable Multi-group Key Management for Advanced Metering Infrastructure

    OpenAIRE

    Benmalek , Mourad; Challal , Yacine; Bouabdallah , Abdelmadjid

    2015-01-01

    International audience; Advanced Metering Infrastructure (AMI) is composed of systems and networks to incorporate changes for modernizing the electricity grid, reduce peak loads, and meet energy efficiency targets. AMI is a privileged target for security attacks with potentially great damage against infrastructures and privacy. For this reason, Key Management has been identified as one of the most challenging topics in AMI development. In this paper, we propose a new Scalable multi-group key ...

  8. Economical and scalable synthesis of 6-amino-2-cyanobenzothiazole

    Directory of Open Access Journals (Sweden)

    Jacob R. Hauser

    2016-09-01

    Full Text Available 2-Cyanobenzothiazoles (CBTs) are useful building blocks for: (1) luciferin derivatives for bioluminescent imaging; and (2) handles for bioorthogonal ligations. A particularly versatile CBT is 6-amino-2-cyanobenzothiazole (ACBT), which has an amine handle for straight-forward derivatisation. Here we present an economical and scalable synthesis of ACBT based on a cyanation catalysed by 1,4-diazabicyclo[2.2.2]octane (DABCO), and discuss its advantages for scale-up over previously reported routes.

  9. Scalable force directed graph layout algorithms using fast multipole methods

    KAUST Repository

    Yunis, Enas Abdulrahman

    2012-06-01

    We present an extension to ExaFMM, a Fast Multipole Method library, as a generalized approach for fast and scalable execution of the Force-Directed Graph Layout algorithm. The Force-Directed Graph Layout algorithm is a physics-based approach to graph layout that treats the vertices V as repelling charged particles with the edges E connecting them acting as springs. Traditionally, the amount of work required in applying the Force-Directed Graph Layout algorithm is O(|V|2 + |E|) using direct calculations and O(|V| log |V| + |E|) using truncation, filtering, and/or multi-level techniques. Correct application of the Fast Multipole Method allows us to maintain a lower complexity of O(|V| + |E|) while regaining most of the precision lost in other techniques. Solving layout problems for truly large graphs with millions of vertices still requires a scalable algorithm and implementation. We have been able to leverage the scalability and architectural adaptability of the ExaFMM library to create a Force-Directed Graph Layout implementation that runs efficiently on distributed multicore and multi-GPU architectures. © 2012 IEEE.
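
    For reference, a direct O(|V|² + |E|) iteration of the spring-electric model looks as follows (NumPy; constants are illustrative). The paper's contribution is to evaluate the all-pairs repulsion term with a fast multipole method instead, bringing the cost down to O(|V| + |E|):

        import numpy as np

        def force_directed_step(pos, edges, dt=0.05, k_rep=0.01, k_spring=1.0):
            disp = np.zeros_like(pos)
            # Repulsion: every vertex pair acts as equally charged particles.
            diff = pos[:, None, :] - pos[None, :, :]
            dist2 = (diff ** 2).sum(-1) + np.eye(len(pos))  # avoid self-division
            disp += k_rep * (diff / dist2[..., None]).sum(axis=1)
            # Attraction: edges act as springs pulling endpoints together.
            for u, v in edges:
                d = pos[v] - pos[u]
                disp[u] += k_spring * d
                disp[v] -= k_spring * d
            return pos + dt * disp

        rng = np.random.default_rng(2)
        pos = rng.random((100, 2))
        edges = [(i, (i + 1) % 100) for i in range(100)]    # a ring graph
        for _ in range(200):
            pos = force_directed_step(pos, edges)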

  10. Scalable Video Coding with Interlayer Signal Decorrelation Techniques

    Directory of Open Access Journals (Sweden)

    Yang Wenxian

    2007-01-01

    Full Text Available Scalability is one of the essential requirements in the compression of visual data for present-day multimedia communications and storage. The basic building block for providing spatial scalability in the scalable video coding (SVC) standard is the well-known Laplacian pyramid (LP). An LP achieves the multiscale representation of the video as a base-layer signal at lower resolution together with several enhancement-layer signals at successively higher resolutions. In this paper, we propose to improve the coding performance of the enhancement layers through efficient interlayer decorrelation techniques. We first show that, with non-biorthogonal upsampling and downsampling filters, the base layer and the enhancement layers are correlated. We investigate two structures to reduce this correlation. The first structure updates the base-layer signal by subtracting from it the low-frequency component of the enhancement-layer signal. The second structure modifies the prediction so that the low-frequency component in the new enhancement layer is diminished. The second structure is integrated in the JSVM 4.0 codec with suitable modifications to the prediction modes. Experimental results on standard test sequences demonstrate coding gains up to 1 dB for I pictures and up to 0.7 dB for both I and P pictures.
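
    The first structure can be sketched in one dimension with simple (non-biorthogonal) two-tap filters; the example is illustrative, not the JSVM filter set. The update is lossless by construction: the decoder recovers the signal exactly from the updated pair:

        import numpy as np

        def down2(x):   # low-pass analysis: 2-tap averaging plus 2:1 downsampling
            return 0.5 * (x[0::2] + x[1::2])

        def up2(x):     # synthesis: 2:1 upsampling with linear interpolation
            y = np.repeat(x, 2).astype(float)
            y[1:-1] = 0.5 * (y[1:-1] + y[2:])
            return y

        rng = np.random.default_rng(3)
        signal = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.1 * rng.standard_normal(256)

        base = down2(signal)            # base layer at half resolution
        enh = signal - up2(base)        # enhancement layer (LP residual)

        # Structure 1: subtract the residual's low-frequency component
        # from the base layer before coding.
        base_upd = base - down2(enh)
        enh_upd = signal - up2(base_upd)

        recon = up2(base_upd) + enh_upd
        print("max reconstruction error:", np.abs(recon - signal).max())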

  11. Scalable Computational Chemistry: New Developments and Applications

    Energy Technology Data Exchange (ETDEWEB)

    Alexeev, Yuri [Iowa State Univ., Ames, IA (United States)

    2002-01-01

    The computational part of the thesis is an investigation of titanium(II) chloride as a potential catalyst for the bis-silylation reaction of ethylene with hexachlorodisilane at different levels of theory. Bis-silylation is an important reaction for producing bis(silyl) compounds and new C-Si bonds, which can serve as monomers for silicon-containing polymers and silicon carbides. Ab initio calculations on the steps involved in a proposed mechanism are presented. This choice of reactants allows the authors to study this reaction at reliable levels of theory without compromising accuracy. The calculations indicate that this is a highly exothermic, barrierless reaction. The TiCl2 catalyst removes the 50 kcal/mol activation energy barrier required for the reaction without the catalyst. The first step is the interaction of TiCl2 with ethylene to form an intermediate that is 60 kcal/mol below the energy of the reactants; this is the driving force for the entire reaction. Dynamic correlation plays a significant role because RHF calculations indicate that the net barrier for the catalyzed reaction is 50 kcal/mol. They conclude that divalent Ti has the potential to become an important industrial catalyst for silylation reactions. In the programming part of the thesis, the parallelization of different quantum chemistry methods is presented. Parallelization is becoming an important aspect of quantum chemistry code development. Two trends contribute to this: the overall desire to study large chemical systems and the desire to employ highly correlated methods, which are usually computationally and memory expensive. In the distributed-data algorithms presented, computation is parallelized and the largest arrays are evenly distributed among CPUs. First, the parallelization of the Hartree-Fock self-consistent field (SCF) method is considered. The SCF method is the most common starting point for more accurate calculations. The Fock build (a sub-step of SCF) from AO integrals is

  12. Dynamic Defrosting on Scalable Superhydrophobic Surfaces

    International Nuclear Information System (INIS)

    Murphy, Kevin R.; McClintic, William T.; Lester, Kevin C.; Collier, C. Patrick; Boreyko, Jonathan B.

    2017-01-01

    Recent studies have shown that frost can grow in a suspended Cassie state on nanostructured superhydrophobic surfaces. During defrosting, the melting sheet of Cassie frost spontaneously dewets into quasi-spherical slush droplets that are highly mobile. Promoting Cassie frost would therefore seem advantageous from a defrosting standpoint; however, nobody has systematically compared the efficiency of defrosting Cassie ice versus defrosting conventional surfaces. Here, we characterize the defrosting of an aluminum plate, one-half of which exhibits a superhydrophobic nanostructure while the other half is smooth and hydrophobic. For thick frost sheets (>1 mm), the superhydrophobic surface was able to dynamically shed the meltwater, even at very low tilt angles. In contrast, the hydrophobic surface was unable to shed any appreciable meltwater even at a 90° tilt angle. For thin frost layers (≲1 mm), not even the superhydrophobic surface could mobilize the meltwater. We attribute this to the large apparent contact angle of the meltwater, which for small amounts of frost serves to minimize coalescence events and prevent droplets from approaching the capillary length. Finally, we demonstrate a new mode of dynamic defrosting using an upside-down surface orientation, where the melting frost was able to uniformly detach from the superhydrophobic side and subsequently pull the frost from the hydrophobic side in a chain reaction. Treating surfaces to enable Cassie frost is therefore very desirable for enabling rapid and low-energy thermal defrosting, but only for frost sheets that are sufficiently thick.

  13. Scalable IC Platform for Smart Cameras

    Directory of Open Access Journals (Sweden)

    Harry Broers

    2005-08-01

    Full Text Available Smart cameras are among the emerging new fields of electronics. The points of interest are in the application areas, software and IC development. In order to reduce cost, it is worthwhile to invest in a single architecture that can be scaled for the various application areas in performance (and resulting power consumption). In this paper, we show that the combination of an SIMD (single-instruction multiple-data) processor and a general-purpose DSP is very advantageous for the image processing tasks encountered in smart cameras. While the SIMD processor gives the very high performance necessary by exploiting the inherent data parallelism found in the pixel-crunching part of the algorithms, the DSP offers a friendly approach to the more complex tasks. The paper goes on to argue that SIMD processors have very convenient scaling properties in silicon, making the complete SIMD-DSP architecture suitable for different application areas without changing the software suite. Analysis of the changes in power consumption due to scaling shows that for typical image processing tasks, it is beneficial to scale the SIMD processor to use the maximum level of parallelism available in the algorithm if the IC supply voltage can be lowered. If silicon cost is of importance, the parallelism of the processor should be scaled to just reach the desired performance given the speed of the silicon.

  14. Scalable fabrication of self-aligned graphene transistors and circuits on glass.

    Science.gov (United States)

    Liao, Lei; Bai, Jingwei; Cheng, Rui; Zhou, Hailong; Liu, Lixin; Liu, Yuan; Huang, Yu; Duan, Xiangfeng

    2012-06-13

    Graphene transistors are of considerable interest for radio frequency (rf) applications. High-frequency graphene transistors with the intrinsic cutoff frequency up to 300 GHz have been demonstrated. However, the graphene transistors reported to date only exhibit a limited extrinsic cutoff frequency up to about 10 GHz, and functional graphene circuits demonstrated so far can merely operate in the tens of megahertz regime, far from the potential the graphene transistors could offer. Here we report a scalable approach to fabricate self-aligned graphene transistors with the extrinsic cutoff frequency exceeding 50 GHz and graphene circuits that can operate in the 1-10 GHz regime. The devices are fabricated on a glass substrate through a self-aligned process by using chemical vapor deposition (CVD) grown graphene and a dielectrophoretic assembled nanowire gate array. The self-aligned process allows the achievement of unprecedented performance in CVD graphene transistors with a highest transconductance of 0.36 mS/μm. The use of an insulating substrate minimizes the parasitic capacitance and has therefore enabled graphene transistors with a record-high extrinsic cutoff frequency (> 50 GHz) achieved to date. The excellent extrinsic cutoff frequency readily allows configuring the graphene transistors into frequency doubling or mixing circuits functioning in the 1-10 GHz regime, a significant advancement over previous reports (∼20 MHz). The studies open a pathway to scalable fabrication of high-speed graphene transistors and functional circuits and represent a significant step forward to graphene based radio frequency devices.

  15. Advanced technologies for scalable ATLAS conditions database access on the grid

    International Nuclear Information System (INIS)

    Basset, R; Canali, L; Girone, M; Hawkings, R; Valassi, A; Viegas, F; Dimitrov, G; Nevski, P; Vaniachine, A; Walker, R; Wong, A

    2010-01-01

    During massive data reprocessing operations an ATLAS Conditions Database application must support concurrent access from numerous ATLAS data processing jobs running on the Grid. By simulating realistic work-flow, ATLAS database scalability tests provided feedback for Conditions Db software optimization and allowed precise determination of required distributed database resources. In distributed data processing one must take into account the chaotic nature of Grid computing characterized by peak loads, which can be much higher than average access rates. To validate database performance at peak loads, we tested database scalability at very high concurrent jobs rates. This has been achieved through coordinated database stress tests performed in series of ATLAS reprocessing exercises at the Tier-1 sites. The goal of database stress tests is to detect scalability limits of the hardware deployed at the Tier-1 sites, so that the server overload conditions can be safely avoided in a production environment. Our analysis of server performance under stress tests indicates that Conditions Db data access is limited by the disk I/O throughput. An unacceptable side-effect of the disk I/O saturation is a degradation of the WLCG 3D Services that update Conditions Db data at all ten ATLAS Tier-1 sites using the technology of Oracle Streams. To avoid such bottlenecks we prototyped and tested a novel approach for database peak load avoidance in Grid computing. Our approach is based upon the proven idea of pilot job submission on the Grid: instead of the actual query, an ATLAS utility library sends to the database server a pilot query first.

  16. A Scalable Version of the Navy Operational Global Atmospheric Prediction System Spectral Forecast Model

    Directory of Open Access Journals (Sweden)

    Thomas E. Rosmond

    2000-01-01

    Full Text Available The Navy Operational Global Atmospheric Prediction System (NOGAPS) includes a state-of-the-art spectral forecast model similar to models run at several major operational numerical weather prediction (NWP) centers around the world. The model, developed by the Naval Research Laboratory (NRL) in Monterey, California, has run operationally at the Fleet Numerical Meteorology and Oceanography Center (FNMOC) since 1982, and most recently has been run on a Cray C90 in a multi-tasked configuration. Typically the multi-tasked code runs on 10 to 15 processors with an overall parallel efficiency of about 90%. The operational resolution is T159L30, but other operational and research applications run at significantly lower resolutions. A scalable NOGAPS forecast model has been developed by NRL in anticipation of a FNMOC C90 replacement in about 2001, as well as for current NOGAPS research requirements to run on DOD High-Performance Computing (HPC) scalable systems. The model is designed to run with message passing (MPI). Model design criteria include bit reproducibility for different processor numbers and reasonably efficient performance on fully shared memory, distributed memory, and distributed shared memory systems for a wide range of model resolutions. Results for a wide range of processor numbers, model resolutions, and different vendor architectures are presented. Single-node performance has been disappointing on RISC-based systems, at least compared to vector processor performance. This is a common complaint, and will require careful re-examination of traditional NWP model software design and data organization to fully exploit future scalable architectures.

  17. Scalable quantum computing based on stationary spin qubits in coupled quantum dots inside double-sided optical microcavities.

    Science.gov (United States)

    Wei, Hai-Rui; Deng, Fu-Guo

    2014-12-18

    Quantum logic gates are the key elements in quantum computing. Here we investigate the possibility of achieving a scalable and compact quantum computing based on stationary electron-spin qubits, by using the giant optical circular birefringence induced by quantum-dot spins in double-sided optical microcavities as a result of cavity quantum electrodynamics. We design the compact quantum circuits for implementing universal and deterministic quantum gates for electron-spin systems, including the two-qubit CNOT gate and the three-qubit Toffoli gate. They are compact and economic, and they do not require additional electron-spin qubits. Moreover, our devices have good scalability and are attractive as they both are based on solid-state quantum systems and the qubits are stationary. They are feasible with the current experimental technology, and both high fidelity and high efficiency can be achieved when the ratio of the side leakage to the cavity decay is low.

  18. Scalable multi-grid preconditioning techniques for the even-parity S_N solver in UNIC

    International Nuclear Information System (INIS)

    Mahadevan, Vijay S.; Smith, Michael A.

    2011-01-01

    The even-parity neutron transport equation with FE-S_N discretization is traditionally solved using an SOR-preconditioned CG method at the lowest level of iterations in order to compute criticality in reactor analysis problems. The use of high-order isoparametric finite elements prohibits forming the discrete operator explicitly due to memory constraints on petascale architectures. Hence, an h-p multigrid preconditioner based on linear tessellation of the higher-order mesh is introduced here for the space-angle system and compared against SOR and algebraic multigrid black-box solvers. The performance and scalability of the multigrid scheme were determined for two test problems and found to be competitive in terms of both computational time and memory requirements. The implementation of this preconditioner in an even-parity solver like UNIC from ANL can further enable high-fidelity calculations in a scalable manner on petaflop machines. (author)
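
    A minimal NumPy sketch of the preconditioned conjugate gradient iteration discussed above. A simple Jacobi (diagonal) preconditioner stands in for the paper's h-p multigrid cycle, which would replace apply_preconditioner; the 1D model matrix is only a placeholder for the FE-S_N operator.

        import numpy as np

        def pcg(A, b, apply_preconditioner, tol=1e-8, max_iter=500):
            """Preconditioned conjugate gradients for symmetric positive definite A."""
            x = np.zeros_like(b)
            r = b - A @ x
            z = apply_preconditioner(r)
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol:
                    break
                z = apply_preconditioner(r)
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        # Jacobi preconditioner as a stand-in for the h-p multigrid cycle.
        n = 200
        A = (np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1)
             + np.diag(-np.ones(n - 1), -1))
        b = np.ones(n)
        M_inv = 1.0 / np.diag(A)
        x = pcg(A, b, lambda r: M_inv * r)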

  19. Magnetically anisotropic additive for scalable manufacturing of polymer nanocomposite: iron-coated carbon nanotubes

    International Nuclear Information System (INIS)

    Yamamoto, Namiko; Manohara, Harish; Platzman, Ellen

    2016-01-01

    Novel nanoparticle additives for polymer nanocomposites were prepared by coating carbon nanotubes (CNTs) with ferromagnetic iron (Fe) layers, so that their microstructures can be bulk-controlled by external magnetic field application. Application of magnetic fields is a promising, scalable method to deliver bulk amounts of nanocomposites while maintaining organized nanoparticle assembly throughout the uncured polymer matrix. In this work, Fe layers (∼18 nm thick) were deposited on CNTs (∼38 nm diameter and ∼50 μm length) to form thin films with high aspect ratio, resulting in a dominance of shape anisotropy and thus a high coercivity of ∼50–100 Oe. The Fe-coated CNTs were suspended in water and subjected to a weak magnetic field of only ∼75 G, and preliminary magnetic assembly was nevertheless confirmed. Our results demonstrate that the fabricated Fe-coated CNTs are magnetically anisotropic and effectively respond to magnetic fields that are ∼10³ times smaller than those in other existing work (∼10⁵ G). We anticipate this work will pave the way for effective property enhancement and bulk application of CNT–polymer nanocomposites, through controlled microstructure and scalable manufacturing. (paper)

  20. Scalable Fabrication of Integrated Nanophotonic Circuits on Arrays of Thin Single Crystal Diamond Membrane Windows.

    Science.gov (United States)

    Piracha, Afaq H; Rath, Patrik; Ganesan, Kumaravelu; Kühn, Stefan; Pernice, Wolfram H P; Prawer, Steven

    2016-05-11

    Diamond has emerged as a promising platform for nanophotonic, optical, and quantum technologies. High-quality, single-crystalline substrates of acceptable size are a prerequisite to meet the demanding requirements on low-level impurities and low absorption loss when targeting large photonic circuits. Here, we describe a scalable fabrication method for single crystal diamond membrane windows that achieves three major goals in one process: providing high-quality diamond, as confirmed by Raman spectroscopy; achieving homogeneously thin membranes, enabled by ion implantation; and providing compatibility with established planar fabrication via lithography and vertical etching. On such suspended diamond membranes we demonstrate a suite of photonic components as building blocks for nanophotonic circuits. Monolithic grating couplers are used to efficiently couple light between photonic circuits and optical fibers. In waveguide-coupled optical ring resonators, we find loaded quality factors up to 66,000 at a wavelength of 1560 nm, corresponding to propagation loss below 7.2 dB/cm. Our approach holds promise for the scalable implementation of future diamond quantum photonic technologies and all-diamond photonic metrology tools.

  1. Scalable Production of Si Nanoparticles Directly from Low Grade Sources for Lithium-Ion Battery Anode.

    Science.gov (United States)

    Zhu, Bin; Jin, Yan; Tan, Yingling; Zong, Linqi; Hu, Yue; Chen, Lei; Chen, Yanbin; Zhang, Qiao; Zhu, Jia

    2015-09-09

    Silicon, one of the most promising candidates for lithium-ion battery anodes, has attracted much attention due to its high theoretical capacity, abundance, and mature infrastructure. Recently, Si nanostructure-based lithium-ion battery anodes, with sophisticated structure designs and process development, have made significant progress. However, low-cost, scalable processes to produce these Si nanostructures remain a challenge, which limits their widespread application. Herein, we demonstrate that Si nanoparticles with controlled size can be massively produced directly from low-grade Si sources through a scalable high-energy mechanical milling process. In addition, we systematically studied Si nanoparticles produced from two major low-grade Si sources, metallurgical silicon (∼99 wt % Si, $1/kg) and ferrosilicon (∼83 wt % Si, $0.6/kg). It is found that nanoparticles produced from ferrosilicon sources contain FeSi2, which can serve as a buffer layer to alleviate the mechanical fractures of volume expansion, whereas nanoparticles from metallurgical Si sources have higher capacity and better kinetic properties because of higher purity and better electronic transport properties. Ferrosilicon nanoparticles and metallurgical Si nanoparticles demonstrate over 100 stable deep cycles after carbon coating, with reversible capacities of 1360 mAh g(-1) and 1205 mAh g(-1), respectively. Therefore, our approach provides a new strategy for cost-effective, energy-efficient, large-scale synthesis of functional Si electrode materials.

  2. PanDA Beyond ATLAS : A Scalable Workload Management System For Data Intensive Science

    CERN Document Server

    Borodin, M; The ATLAS collaboration; Jha, S; Golubkov, D; Klimentov, A; Maeno, T; Nilsson, P; Oleynik, D; Panitkin, S; Petrosyan, A; Schovancova, J; Vaniachine, A; Wenaus, T

    2014-01-01

    The LHC experiments are today at the leading edge of large-scale distributed data-intensive computational science. The LHC's ATLAS experiment processes data volumes which are particularly extreme, over 140 PB to date, distributed worldwide at over 120 sites. An important element in the success of the exciting physics results from ATLAS is the highly scalable integrated workflow and dataflow management afforded by the PanDA workload management system, used for all the distributed computing needs of the experiment. The PanDA design is not experiment specific, and PanDA is now being extended to support other data-intensive scientific applications. PanDA was cited as an example of "a high performance, fault tolerant software for fast, scalable access to data repositories of many kinds" during the "Big Data Research and Development Initiative" announcement, a 200 million USD U.S. government investment in tools to handle huge volumes of digital data needed to spur science and engineering discoveries. In this talk...

  3. Hydra: a scalable proteomic search engine which utilizes the Hadoop distributed computing framework.

    Science.gov (United States)

    Lewis, Steven; Csordas, Attila; Killcoyne, Sarah; Hermjakob, Henning; Hoopmann, Michael R; Moritz, Robert L; Deutsch, Eric W; Boyle, John

    2012-12-05

    For shotgun mass spectrometry based proteomics, the most computationally expensive step is matching the spectra against an increasingly large database of sequences and their post-translational modifications with known masses. Each mass spectrometer can generate data at an astonishingly high rate, and the scope of what is searched for is continually increasing. Therefore solutions for improving our ability to perform these searches are needed. We present a sequence database search engine that is specifically designed to run efficiently on the Hadoop MapReduce distributed computing framework. The search engine implements the K-score algorithm, generating comparable output for the same input files as the original implementation. The scalability of the system is shown, and the architecture required for the development of such distributed processing is discussed. The software is scalable in its ability to handle a large peptide database, numerous modifications and large numbers of spectra. Performance scales with the number of processors in the cluster, allowing throughput to expand with the available resources.
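
    The MapReduce decomposition such an engine relies on can be sketched in a single process. The scoring function below is a placeholder and not the K-score algorithm; the shuffle step imitates what Hadoop does between the map and reduce phases.

        from collections import defaultdict

        # Toy inputs: (spectrum_id, precursor_mass) and (peptide, mass) pairs.
        spectra = [("s1", 800.4), ("s2", 1200.7)]
        peptides = [("PEPTIDE", 799.9), ("PROTEIN", 1201.0), ("SCIENCE", 750.0)]

        def mapper(spectrum):
            """Emit (spectrum_id, (peptide, score)) for candidates within tolerance."""
            sid, mass = spectrum
            for pep, pmass in peptides:
                if abs(mass - pmass) < 1.0:                   # precursor-mass filter
                    score = 1.0 / (1e-6 + abs(mass - pmass))  # placeholder, not K-score
                    yield sid, (pep, score)

        def reducer(sid, matches):
            """Keep the best-scoring peptide for each spectrum."""
            return sid, max(matches, key=lambda m: m[1])

        # Shuffle: group mapper output by key, as Hadoop does between the phases.
        groups = defaultdict(list)
        for spectrum in spectra:
            for key, value in mapper(spectrum):
                groups[key].append(value)

        results = [reducer(sid, matches) for sid, matches in groups.items()]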

  4. Scalable Stream Processing with Quality of Service for Smart City Crowdsensing Applications

    Directory of Open Access Journals (Sweden)

    Paolo Bellavista

    2013-12-01

    Full Text Available Crowdsensing is emerging as a powerful paradigm capable of leveraging the collective, though imprecise, monitoring capabilities of common people carrying smartphones or other personal devices, which can effectively become real-time mobile sensors, collecting information about the physical places they live in. This unprecedented amount of information, considered collectively, offers new valuable opportunities to understand more thoroughly the environment in which we live and, more importantly, gives the chance to use this deeper knowledge to act and improve, in a virtuous loop, the environment itself. However, managing this process is a hard technical challenge, spanning several socio-technical issues: here, we focus on the related quality, reliability, and scalability trade-offs by proposing an architecture for crowdsensing platforms that dynamically self-configure and self-adapt depending on application-specific quality requirements. In the context of this general architecture, the paper will specifically focus on the Quasit distributed stream processing middleware, and show how Quasit can be used to process and analyze crowdsensing-generated data flows with differentiated quality requirements in a highly scalable and reliable way.

  5. VPLS: an effective technology for building scalable transparent LAN services

    Science.gov (United States)

    Dong, Ximing; Yu, Shaohua

    2005-02-01

    Virtual Private LAN Service (VPLS) is generating considerable interest with enterprises and service providers, as it offers multipoint transparent LAN service (TLS) over MPLS networks. This paper describes an effective technology, VPLS, which links virtual switch instances (VSIs) through MPLS to form an emulated Ethernet switch and build scalable transparent LAN services. It first focuses on the architecture of VPLS, with Ethernet bridging techniques at the edge and MPLS at the core; it then elucidates the data forwarding mechanism within a VPLS domain, including learning and aging MAC addresses on a per-LSP basis, flooding of unknown frames, and replication for unknown, multicast, and broadcast frames. The loop-avoidance mechanism, known as split-horizon forwarding, is also analyzed. Another important aspect of the VPLS service, its basic operation, including autodiscovery and signaling, is discussed as well. From the perspective of efficiency and scalability, the paper compares two important signaling mechanisms, BGP and LDP, which are used to set up a PW between the PEs and bind the PWs to a particular VSI. As VPLS deployments grow, the full mesh of PWs between PE devices (n(n-1)/2 PWs in all, i.e., O(n^2) growth) means a VPLS instance can have a large number of remote PE associations, resulting in inefficient use of network bandwidth and system resources, since the ingress PE has to replicate each frame and append MPLS labels for each remote PE. So the latter part of this paper focuses on the scalability issue: Hierarchical VPLS (HVPLS). Within the HVPLS architecture, this paper addresses two ways to cope with a possibly large number of MAC addresses, which make VPLS operate more efficiently.

  6. A scalable lock-free hash table with open addressing

    DEFF Research Database (Denmark)

    Nielsen, Jesper Puge; Karlsson, Sven

    2016-01-01

    Concurrent data structures synchronized with locks do not scale well with the number of threads. As more scalable alternatives, concurrent data structures and algorithms based on widely available, however advanced, atomic operations have been proposed. These data structures allow for correct and concurrent operations without any locks. In this paper, we present a new fully lock-free open addressed hash table with a simpler design than prior published work. We split hash table insertions into two atomic phases: first inserting a value ignoring other concurrent operations, then in the second phase...

  7. Scalable web services for the PSIPRED Protein Analysis Workbench.

    Science.gov (United States)

    Buchan, Daniel W A; Minneci, Federico; Nugent, Tim C O; Bryson, Kevin; Jones, David T

    2013-07-01

    Here, we present the new UCL Bioinformatics Group's PSIPRED Protein Analysis Workbench. The Workbench unites all of our previously available analysis methods into a single web-based framework. The new web portal provides a greatly streamlined user interface with a number of new features to allow users to better explore their results. We offer a number of additional services to enable computationally scalable execution of our prediction methods; these include SOAP and XML-RPC web server access and new HADOOP packages. All software and services are available via the UCL Bioinformatics Group website at http://bioinf.cs.ucl.ac.uk/.

  8. A Scalable Architecture of a Structured LDPC Decoder

    Science.gov (United States)

    Lee, Jason Kwok-San; Lee, Benjamin; Thorpe, Jeremy; Andrews, Kenneth; Dolinar, Sam; Hamkins, Jon

    2004-01-01

    We present a scalable decoding architecture for a certain class of structured LDPC codes. The codes are designed using a small (n,r) protograph that is replicated Z times to produce a decoding graph for a (Z x n, Z x r) code. Using this architecture, we have implemented a decoder for a (4096,2048) LDPC code on a Xilinx Virtex-II 2000 FPGA, and achieved decoding speeds of 31 Mbps with 10 fixed iterations. The implemented message-passing algorithm uses an optimized 3-bit non-uniform quantizer that operates with 0.2dB implementation loss relative to a floating point decoder.

  9. Interactive segmentation: a scalable superpixel-based method

    Science.gov (United States)

    Mathieu, Bérengère; Crouzil, Alain; Puel, Jean-Baptiste

    2017-11-01

    This paper addresses the problem of interactive multiclass segmentation of images. We propose a fast and efficient new interactive segmentation method called superpixel α fusion (SαF). From a few strokes drawn by a user over an image, this method extracts relevant semantic objects. To achieve fast computation and accurate segmentation, SαF uses superpixel oversegmentation and support vector machine classification. We compare SαF with competing algorithms by evaluating its performance on reference benchmarks. We also propose four new datasets to evaluate the scalability of interactive segmentation methods, using images ranging from a few thousand to several million pixels. We conclude with two applications of SαF.
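
    A minimal sketch of the superpixel-plus-SVM pattern that SαF builds on (not the authors' implementation), assuming scikit-image and scikit-learn are available; the seed labels and superpixel indices are hypothetical stand-ins for user strokes.

        import numpy as np
        from skimage import data, segmentation
        from sklearn.svm import SVC

        image = data.astronaut()                      # any RGB image
        labels = segmentation.slic(image, n_segments=400, compactness=10,
                                   start_label=0)

        # Mean RGB color of each superpixel as a simple feature vector.
        n_sp = labels.max() + 1
        features = np.array([image[labels == i].mean(axis=0) for i in range(n_sp)])

        # Hypothetical user strokes: superpixel index -> class (0 bg, 1 object);
        # assumes these superpixel indices exist in the oversegmentation.
        seeds = {0: 0, 5: 0, 200: 1, 210: 1}
        clf = SVC(kernel="rbf", gamma="scale")
        clf.fit(features[list(seeds)], list(seeds.values()))

        # Classify every superpixel and paint the result back onto the pixels.
        segmentation_map = clf.predict(features)[labels]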

  10. Robust and scalable optical one-way quantum computation

    International Nuclear Information System (INIS)

    Wang Hefeng; Yang Chuiping; Nori, Franco

    2010-01-01

    We propose an efficient approach for deterministically generating scalable cluster states with photons. This approach involves unitary transformations performed on atoms coupled to optical cavities. Its operation cost scales linearly with the number of qubits in the cluster state, and photon qubits are encoded such that single-qubit operations can be easily implemented by using linear optics. Robust optical one-way quantum computation can be performed since cluster states can be stored in atoms and then transferred to photons that can be easily operated and measured. Therefore, this proposal could help in performing robust large-scale optical one-way quantum computation.

  11. Scalable Brain Network Construction on White Matter Fibers.

    Science.gov (United States)

    Chung, Moo K; Adluru, Nagesh; Dalton, Kim M; Alexander, Andrew L; Davidson, Richard J

    2011-02-12

    DTI offers a unique opportunity to characterize the structural connectivity of the human brain non-invasively by tracing white matter fiber tracts. Whole-brain tractography studies routinely generate up to half a million tracts per brain, which serve as edges in an extremely large 3D graph with up to half a million edges. Currently there is no agreed-upon method for constructing brain structural network graphs out of such large numbers of white matter tracts. In this paper, we present a scalable iterative framework called the ε-neighbor method for building a network graph and apply it to testing abnormal connectivity in autism.
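
    One plausible reading of the ε-neighbor construction, sketched in NumPy (illustrative, not the authors' code): each tract contributes two endpoints; an endpoint within ε of an existing node is merged into that node, otherwise it becomes a new node, and every tract adds an edge between its endpoint nodes. At the half-million-tract scale, a spatial index such as scipy.spatial.cKDTree would replace the linear scan.

        import numpy as np

        def epsilon_neighbor_graph(tracts, eps=5.0):
            """tracts: iterable of (start_xyz, end_xyz). Returns nodes and edges."""
            nodes, edges = [], set()

            def node_for(point):
                # Reuse the first existing node within eps, else create a new one.
                for i, n in enumerate(nodes):
                    if np.linalg.norm(n - point) < eps:
                        return i
                nodes.append(np.asarray(point, dtype=float))
                return len(nodes) - 1

            for start, end in tracts:
                i, j = node_for(start), node_for(end)
                if i != j:
                    edges.add((min(i, j), max(i, j)))
            return np.array(nodes), edges

        rng = np.random.default_rng(0)
        tracts = [(rng.uniform(0, 100, 3), rng.uniform(0, 100, 3)) for _ in range(1000)]
        nodes, edges = epsilon_neighbor_graph(tracts, eps=10.0)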

  12. Parallelism and Scalability in an Image Processing Application

    DEFF Research Database (Denmark)

    Rasmussen, Morten Sleth; Stuart, Matthias Bo; Karlsson, Sven

    2008-01-01

    The recent trends in processor architecture show that parallel processing is moving into new areas of computing in the form of many-core desktop processors and multi-processor system-on-chip. This means that parallel processing is required in application areas that traditionally have not used parallel programs. This paper investigates parallelism and scalability of an embedded image processing application. The major challenges faced when parallelizing the application were to extract enough parallelism from the application and to reduce load imbalance. The application has limited immediately...

  13. Parallelism and Scalability in an Image Processing Application

    DEFF Research Database (Denmark)

    Rasmussen, Morten Sleth; Stuart, Matthias Bo; Karlsson, Sven

    2009-01-01

    The recent trends in processor architecture show that parallel processing is moving into new areas of computing in the form of many-core desktop processors and multi-processor system-on-chips. This means that parallel processing is required in application areas that traditionally have not used parallel programs. This paper investigates parallelism and scalability of an embedded image processing application. The major challenges faced when parallelizing the application were to extract enough parallelism from the application and to reduce load imbalance. The application has limited immediately...

  14. Focal plane array with modular pixel array components for scalability

    Science.gov (United States)

    Kay, Randolph R; Campbell, David V; Shinde, Subhash L; Rienstra, Jeffrey L; Serkland, Darwin K; Holmes, Michael L

    2014-12-09

    A modular, scalable focal plane array is provided as an array of integrated circuit dice, wherein each die includes a given amount of modular pixel array circuitry. The array of dice effectively multiplies the amount of modular pixel array circuitry to produce a larger pixel array without increasing die size. Desired pixel pitch across the enlarged pixel array is preserved by forming die stacks with each pixel array circuitry die stacked on a separate die that contains the corresponding signal processing circuitry. Techniques for die stack interconnections and die stack placement are implemented to ensure that the desired pixel pitch is preserved across the enlarged pixel array.

  15. A scalable parallel algorithm for multiple objective linear programs

    Science.gov (United States)

    Wiecek, Malgorzata M.; Zhang, Hong

    1994-01-01

    This paper presents an ADBASE-based parallel algorithm for solving multiple objective linear programs (MOLPs). Job balance, speedup and scalability are of primary interest in evaluating the efficiency of the new algorithm. Implementation results on Intel iPSC/2 and Paragon multiprocessors show that the algorithm significantly speeds up the process of solving MOLPs, which is understood as generating all or some efficient extreme points and unbounded efficient edges. The algorithm gives especially good results for large and very large problems. Motivation and justification for solving such large MOLPs are also included.

  16. Empirical Evaluation of Superposition Coded Multicasting for Scalable Video

    KAUST Repository

    Chun Pong Lau

    2013-03-01

    In this paper we investigate cross-layer superposition coded multicast (SCM). Previous studies have proven its effectiveness in exploiting better channel capacity and service granularities via both analytical and simulation approaches. However, it had never been practically implemented using a commercial 4G system. This paper demonstrates our prototype, which achieves SCM using a standard 802.16-based testbed for scalable video transmissions. In particular, to implement superposition coded (SPC) modulation, we take advantage of a novel software approach, namely logical SPC (L-SPC), which aims to mimic physical-layer superposition coded modulation. The emulation results show improved throughput compared with the generic multicast method.
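
    The SPC idea can be illustrated with two BPSK layers and successive interference cancellation at the receiver; this NumPy sketch shows the principle only and is not the paper's L-SPC implementation (the power split and noise level are arbitrary assumptions).

        import numpy as np

        rng = np.random.default_rng(1)
        n = 10_000
        base = rng.integers(0, 2, n) * 2 - 1      # BPSK symbols, base layer
        enh = rng.integers(0, 2, n) * 2 - 1       # BPSK symbols, enhancement layer

        p_base, p_enh = 0.9, 0.1                  # power split between the layers
        tx = np.sqrt(p_base) * base + np.sqrt(p_enh) * enh
        rx = tx + 0.05 * rng.standard_normal(n)   # AWGN channel

        # Successive interference cancellation at a strong receiver.
        base_hat = np.sign(rx)                      # base layer dominates: decode first
        residual = rx - np.sqrt(p_base) * base_hat  # subtract the decoded base layer
        enh_hat = np.sign(residual)                 # what remains is the enhancement

        print("base BER:", np.mean(base_hat != base))
        print("enh  BER:", np.mean(enh_hat != enh))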

  17. Scalable and Hybrid Radio Resource Management for Future Wireless Networks

    DEFF Research Database (Denmark)

    Mino, E.; Luo, Jijun; Tragos, E.

    2007-01-01

    The concept of a ubiquitous and scalable system is applied in the IST WINNER II [1] project to deliver optimum performance for different deployment scenarios, from local-area to wide-area wireless networks. The integration of cellular and local-area networks in a single radio system offers a great advantage to the end user and the operator, compared with the current situation of disconnected systems, usually with different subscriptions, radio interfaces and terminals. To be a ubiquitous wireless system, the IST project WINNER II has defined three system modes. This contribution...

  18. Trident: scalable compute archives: workflows, visualization, and analysis

    Science.gov (United States)

    Gopu, Arvind; Hayashi, Soichi; Young, Michael D.; Kotulla, Ralf; Henschel, Robert; Harbeck, Daniel

    2016-08-01

    The Astronomy scientific community has embraced Big Data processing challenges, e.g. those associated with time-domain astronomy, and has come up with a variety of novel and efficient data processing solutions. However, data processing is only a small part of the Big Data challenge. Efficient knowledge discovery and scientific advancement in the Big Data era require new and equally efficient tools: modern user interfaces for searching, identifying and viewing data online without direct access to the data; tracking of data provenance; searching, plotting and analyzing metadata; interactive visual analysis, especially of (time-dependent) image data; and the ability to execute pipelines on supercomputing and cloud resources with minimal user overhead or expertise, even for novice computing users. The Trident project at Indiana University offers a comprehensive web- and cloud-based microservice software suite that enables the straightforward deployment of highly customized Scalable Compute Archive (SCA) systems, including extensive visualization and analysis capabilities, with a minimal amount of additional coding. Trident seamlessly scales up or down in terms of data volumes and computational needs, and allows feature sets within a web user interface to be quickly adapted to meet individual project requirements. Domain experts only have to provide code or business logic about handling and visualizing their domain's data products and about executing their pipelines and application workflows. Trident's microservices architecture is made up of lightweight services connected by a REST API and/or a message bus; web interface elements are built using the NodeJS, AngularJS, and HighCharts JavaScript libraries, among others, while backend services are written in NodeJS, PHP/Zend, and Python. The software suite currently consists of (1) a simple workflow execution framework to integrate, deploy, and execute pipelines and applications, (2) a progress service to monitor work flows and sub

  19. Model-Based Evaluation Of System Scalability: Bandwidth Analysis For Smartphone-Based Biosensing Applications

    DEFF Research Database (Denmark)

    Patou, François; Madsen, Jan; Dimaki, Maria

    2016-01-01

    Scalability is a design principle often valued in the engineering of complex systems. Scalability is the ability of a system to change the current value of one of its specification parameters. Although targeted frameworks are available for the evaluation of scalability for specific digital systems... the re-engineering of five independent system modules, from the replacement of a wireless Bluetooth interface to the revision of the ADC sample-and-hold operation, could help increase system bandwidth.

  20. Scalability of Ferroelectric Tunnel Junctions to Sub-100 nm Dimensions

    Science.gov (United States)

    Abuwasib, Mohammad

    The ferroelectric tunnel junction (FTJ) is an emerging low-power device with potential applications as a non-volatile memory and logic element in beyond-CMOS circuits. As a beyond-CMOS device, it is necessary to investigate the device scaling limit of FTJs to sub-50 nm dimensions. In addition to the fabrication of scaled FTJs, the integration challenges and CMOS compatibility of the device need to be addressed. FTJ device performance, including ON/OFF ratio, memory retention time, switching endurance, write/read speed and power dissipation, needs to be characterized for benchmarking of this emerging device against its charge-based counterparts such as DRAM and NAND/NOR flash, as well as against other emerging memory devices. In this dissertation, a detailed investigation of the scaling of BaTiO3 (BTO) based FTJs was performed, from full-scale integration to electrical characterization. Two types of FTJs with La0.67Sr0.33MnO3 (LSMO) and SrRuO3 (SRO) bottom electrodes were investigated in this work, namely Co/BTO/LSMO and Co/BTO/SRO. A CMOS-compatible fabrication process for integration of Co/BTO/LSMO FTJ devices (∼3×3 μm²) was demonstrated for the first time using standard photolithography and a self-aligned RIE technique. The fabricated FTJ device showed switching behavior; however, degradation of the LSMO contact was observed during the fabrication process. A detailed investigation of the contact properties of bottom electrode materials (LSMO, SRO) for BTO-based FTJs was performed. The process and thermal stability of different contact overlayers (Ti, Pt) was examined to understand the nature of the ohmic contacts for metal-to-SRO and metal-to-LSMO layers. Noble metals were found to form the most stable contacts to SRO for FTJs. Based on this study, a systematic scalability study of Co/BTO/SRO FTJs was carried out from micron (∼3×3 μm²) to submicron (∼200×200 nm²) dimensions. Positive Up Negative Down (PUND) measurement confirms the ferroelectric properties of the BTO

  1. Neutron generators with size scalability, ease of fabrication and multiple ion source functionalities

    Science.gov (United States)

    Elizondo-Decanini, Juan M

    2014-11-18

    A neutron generator is provided with a flat, rectilinear geometry and surface mounted metallizations. This construction provides scalability and ease of fabrication, and permits multiple ion source functionalities.

  2. Traffic and Quality Characterization of the H.264/AVC Scalable Video Coding Extension

    Directory of Open Access Journals (Sweden)

    Geert Van der Auwera

    2008-01-01

    Full Text Available The recent scalable video coding (SVC) extension to the H.264/AVC video coding standard has unprecedented compression efficiency while supporting a wide range of scalability modes, including temporal, spatial, and quality (SNR) scalability, as well as combined spatiotemporal SNR scalability. The traffic characteristics, especially the bit rate variabilities, of the individual layer streams critically affect their network transport. We study the SVC traffic statistics, including the bit rate distortion and bit rate variability distortion, with long CIF-resolution video sequences and compare them with the corresponding MPEG-4 Part 2 traffic statistics. We consider (i) temporal scalability with three temporal layers, (ii) spatial scalability with a QCIF base layer and a CIF enhancement layer, and (iii) the quality scalability modes FGS and MGS. We find that the significant improvement in RD efficiency of SVC is accompanied by substantially higher traffic variabilities as compared to the equivalent MPEG-4 Part 2 streams. We find that separately analyzing the traffic of temporal-scalability-only encodings gives reasonable estimates of the traffic statistics of the temporal layers embedded in combined spatiotemporal encodings and in the base layer of combined FGS-temporal encodings. Overall, we find that SVC achieves significantly higher compression ratios than MPEG-4 Part 2, but produces unprecedented levels of traffic variability, thus presenting new challenges for the network transport of scalable video.

  3. SCALABLE TIME SERIES CHANGE DETECTION FOR BIOMASS MONITORING USING GAUSSIAN PROCESS

    Data.gov (United States)

    National Aeronautics and Space Administration — Scalable Time Series Change Detection for Biomass Monitoring Using Gaussian Process. Varun Chandola and Ranga Raju Vatsavai. Abstract: Biomass monitoring,...

  4. On Scalability and Replicability of Smart Grid Projects—A Case Study

    Directory of Open Access Journals (Sweden)

    Lukas Sigrist

    2016-03-01

    Full Text Available This paper studies the scalability and replicability of smart grid projects. Currently, most smart grid projects are still in the R&D or demonstration phases. The full roll-out of the tested solutions requires a suitable degree of scalability and replicability to prevent project demonstrators from remaining local experimental exercises. Scalability and replicability are the preliminary requisites for performing scaling-up and replication successfully; therefore, scalability and replicability enable, or at least lower the barriers to, the growth and reuse of the results of project demonstrators. The paper proposes factors that influence and condition a project's scalability and replicability. These factors involve technical, economic, regulatory and stakeholder-acceptance-related aspects, and they describe requirements for scalability and replicability. In order to assess and evaluate the identified scalability and replicability factors, data has been collected from European and national smart grid projects by means of a survey, reflecting the projects' views and results. The evaluation of the factors allows quantifying the status quo of ongoing projects with respect to scalability and replicability, i.e., it provides feedback on the extent to which projects take these factors into account and on whether the projects' results and solutions are actually scalable and replicable.

  5. Ultracold molecules: vehicles to scalable quantum information processing

    International Nuclear Information System (INIS)

    Brickman Soderberg, Kathy-Anne; Gemelke, Nathan; Chin Cheng

    2009-01-01

    In this paper, we describe a novel scheme to implement scalable quantum information processing using Li-Cs molecular states to entangle ⁶Li and ¹³³Cs ultracold atoms held in independent optical lattices. The ⁶Li atoms will act as quantum bits to store information, and ¹³³Cs atoms will serve as messenger bits that aid in quantum gate operations and mediate entanglement between distant qubit atoms. Each atomic species is held in a separate optical lattice and the atoms can be overlapped by translating the lattices with respect to each other. When the messenger and qubit atoms are overlapped, targeted single-spin operations and entangling operations can be performed by coupling the atomic states to a molecular state with radio-frequency pulses. By controlling the frequency and duration of the radio-frequency pulses, entanglement can be either created or swapped between a qubit-messenger pair. We estimate operation fidelities for entangling two distant qubits and discuss the scalability of this scheme and constraints on the optical lattice lasers. Finally we demonstrate experimental control of the optical potentials sufficient to translate atoms in the lattice.

  6. Scalable Nernst thermoelectric power using a coiled galfenol wire

    Science.gov (United States)

    Yang, Zihao; Codecido, Emilio A.; Marquez, Jason; Zheng, Yuanhua; Heremans, Joseph P.; Myers, Roberto C.

    2017-09-01

    The Nernst thermopower is usually considered far too weak in most metals for waste heat recovery. However, its transverse orientation gives it an advantage over the Seebeck effect on non-flat surfaces. Here, we experimentally demonstrate the scalable generation of a Nernst voltage in an air-cooled metal wire coiled around a hot cylinder. In this geometry, a radial temperature gradient generates an azimuthal electric field in the coil. A Galfenol (Fe0.85Ga0.15) wire is wrapped around a cartridge heater, and the voltage drop across the wire is measured as a function of axial magnetic field. As expected, the Nernst voltage scales linearly with the length of the wire. Based on heat conduction and fluid dynamics equations, the finite-element method is used to calculate the temperature gradient across the Galfenol wire and determine the Nernst coefficient. A giant Nernst coefficient of -2.6 μV/(K·T) at room temperature is estimated, in agreement with measurements on bulk Galfenol. We expect that the giant Nernst effect in Galfenol arises from its magnetostriction, presumably through enhanced magnon-phonon coupling. Our results demonstrate the feasibility of a transverse thermoelectric generator capable of scalable output power from non-flat heat sources.

  7. Scalable manufacturing of biomimetic moldable hydrogels for industrial applications

    Science.gov (United States)

    Yu, Anthony C.; Chen, Haoxuan; Chan, Doreen; Agmon, Gillie; Stapleton, Lyndsay M.; Sevit, Alex M.; Tibbitt, Mark W.; Acosta, Jesse D.; Zhang, Tony; Franzia, Paul W.; Langer, Robert; Appel, Eric A.

    2016-12-01

    Hydrogels are a class of soft material that is exploited in many, often completely disparate, industrial applications, on account of their unique and tunable properties. Advances in soft material design are yielding next-generation moldable hydrogels that address engineering criteria in several industrial settings such as complex viscosity modifiers, hydraulic or injection fluids, and sprayable carriers. Industrial implementation of these viscoelastic materials requires extreme volumes of material, upwards of several hundred million gallons per year. Here, we demonstrate a paradigm for the scalable fabrication of self-assembled moldable hydrogels using rationally engineered, biomimetic polymer-nanoparticle interactions. Cellulose derivatives are linked together by selective adsorption to silica nanoparticles via dynamic and multivalent interactions. We show that the self-assembly process for gel formation is easily scaled in a linear fashion from 0.5 mL to over 15 L without alteration of the mechanical properties of the resultant materials. The facile and scalable preparation of these materials leveraging self-assembly of inexpensive, renewable, and environmentally benign starting materials, coupled with the tunability of their properties, make them amenable to a range of industrial applications. In particular, we demonstrate their utility as injectable materials for pipeline maintenance and product recovery in industrial food manufacturing as well as their use as sprayable carriers for robust application of fire retardants in preventing wildland fires.

  8. Scalable parallel prefix solvers for discrete ordinates transport

    International Nuclear Information System (INIS)

    Pautz, S.; Pandya, T.; Adams, M.

    2009-01-01

    The well-known 'sweep' algorithm for inverting the streaming-plus-collision term in first-order deterministic radiation transport calculations has some desirable numerical properties. However, it suffers from parallel scaling issues caused by a lack of concurrency. The maximum degree of concurrency, and thus the maximum parallelism, grows more slowly than the problem size for sweep-based solvers. We investigate a new class of parallel algorithms that involves recasting the streaming-plus-collision problem in prefix form and solving via cyclic reduction. This method, although computationally more expensive at low levels of parallelism than the sweep algorithm, offers better theoretical scalability properties. Previous work has demonstrated this approach for one-dimensional calculations; we show how to extend it to multidimensional calculations. Notably, for multiple dimensions it appears that this approach is limited to long-characteristics discretizations; other discretizations cannot be cast in prefix form. We implement two variants of the algorithm within the radlib/SCEPTRE transport code library at Sandia National Laboratories and show results on two different massively parallel systems. Both the 'forward' and 'symmetric' solvers behave similarly, scaling well to larger degrees of parallelism than sweep-based solvers. We do observe some issues at the highest levels of parallelism (relative to the system size) and discuss possible causes. We conclude that this approach shows good potential for future parallel systems, but the parallel scalability will depend heavily on the architecture of the communication networks of these systems. (authors)
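
    The prefix-form idea can be made concrete in one dimension: a sweep is the first-order recurrence x_i = a_i*x_(i-1) + b_i, and the affine maps x -> a*x + b compose associatively, so the whole solve becomes a parallel scan. The sketch below (illustrative, not the radlib/SCEPTRE implementation) shows the associative combine step, with a serial scan standing in for cyclic reduction.

        import numpy as np

        def combine(f, g):
            """Compose affine maps: apply f(x)=a1*x+b1, then g(x)=a2*x+b2."""
            (a1, b1), (a2, b2) = f, g
            return a2 * a1, a2 * b1 + b2

        def prefix_sweep(a, b, x0=0.0):
            """Solve x_i = a_i*x_(i-1) + b_i via a scan of affine maps."""
            out = np.empty(len(a))
            acc = (1.0, 0.0)                  # identity map
            for i, m in enumerate(zip(a, b)): # serial here; cyclic reduction in parallel
                acc = combine(acc, m)
                out[i] = acc[0] * x0 + acc[1]
            return out

        a = np.full(8, 0.5)                   # attenuation per cell (collision term)
        b = np.arange(1.0, 9.0)               # source per cell
        x = prefix_sweep(a, b)

        # Check against the direct (sweep-like) recurrence.
        ref, prev = [], 0.0
        for ai, bi in zip(a, b):
            prev = ai * prev + bi
            ref.append(prev)
        assert np.allclose(x, ref)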

  9. Scalable privacy-preserving big data aggregation mechanism

    Directory of Open Access Journals (Sweden)

    Dapeng Wu

    2016-08-01

    Full Text Available As the massive sensor data generated by large-scale Wireless Sensor Networks (WSNs) have recently become an indispensable part of ‘Big Data’, the collection, storage, transmission and analysis of big sensor data have attracted considerable attention from researchers. Targeting the privacy requirements of large-scale WSNs and focusing on the energy-efficient collection of big sensor data, a Scalable Privacy-preserving Big Data Aggregation (Sca-PBDA) method is proposed in this paper. Firstly, according to the pre-established gradient topology structure, sensor nodes in the network are divided into clusters. Secondly, sensor data is modified by each node according to the privacy-preserving configuration message received from the sink. Subsequently, intra- and inter-cluster data aggregation is employed during the big sensor data reporting phase to reduce energy consumption. Lastly, aggregated results are recovered by the sink to complete the privacy-preserving big data aggregation. Simulation results validate the efficacy and scalability of Sca-PBDA and show that the big sensor data generated by large-scale WSNs is efficiently aggregated to reduce network resource consumption and that sensor data privacy is effectively protected to meet ever-growing application requirements.
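
    A toy sketch of one common trick consistent with the pipeline above (sensor-side modification, in-network aggregation, sink-side recovery): each node adds a mask derived from a seed shared with the sink, cluster heads sum the masked reports, and the sink subtracts the known total mask. The seed-derived additive masks are an assumption for illustration, not necessarily Sca-PBDA's actual modification scheme.

        import random

        NODES = 10
        SEEDS = {i: 1000 + i for i in range(NODES)}  # per-node seeds known to the sink

        def mask(seed):
            return random.Random(seed).uniform(-50, 50)

        # 1) Each node perturbs its reading with its mask before reporting.
        readings = {i: random.uniform(20, 30) for i in range(NODES)}
        reports = {i: readings[i] + mask(SEEDS[i]) for i in range(NODES)}

        # 2) Cluster heads aggregate masked reports; no single reading is exposed.
        cluster_sums = [sum(reports[i] for i in range(0, 5)),
                        sum(reports[i] for i in range(5, 10))]

        # 3) The sink removes the known total mask to recover the true aggregate.
        total = sum(cluster_sums) - sum(mask(SEEDS[i]) for i in range(NODES))
        assert abs(total - sum(readings.values())) < 1e-9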

  10. Scalable fast multipole methods for vortex element methods

    KAUST Repository

    Hu, Qi

    2012-11-01

    We use a particle-based method to simulate incompressible flows, where the Fast Multipole Method (FMM) is used to accelerate the calculation of particle interactions. The most time-consuming kernels, the Biot-Savart equation and the stretching term of the vorticity equation, are mathematically reformulated so that only two Laplace scalar potentials are used instead of six, while automatically ensuring divergence-free far-field computation. Based on this formulation, and on our previous work on a scalar heterogeneous FMM algorithm, we develop a new FMM-based vortex method capable of simulating general flows, including turbulence, on heterogeneous architectures, which distributes the work between multi-core CPUs and GPUs to best utilize the hardware resources and achieve excellent scalability. The algorithm also uses new data structures which can dynamically manage inter-node communication and load balance efficiently, but with only a small parallel construction overhead. This algorithm can scale to large-sized clusters, showing both strong and weak scalability. Careful error and timing trade-off analyses are also performed for the cutoff functions induced by the vortex particle method. Our implementation can perform one time step of the velocity+stretching calculation for one billion particles on 32 nodes in 55.9 seconds, which yields 49.12 Tflop/s. © 2012 IEEE.
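
    For reference, the all-pairs interaction that the FMM accelerates: a direct O(N^2) Biot-Savart evaluation for vortex particles, sketched in NumPy with a regularized kernel (the smoothing parameter delta is an assumption to avoid the singularity).

        import numpy as np

        def biot_savart_direct(x, alpha, delta=1e-3):
            """Velocity induced at points x by vortex particles of strength alpha:
            u_i = -(1/4*pi) * sum_j (x_i - x_j) x alpha_j
                  / (|x_i - x_j|^2 + delta^2)^(3/2)."""
            u = np.zeros_like(x)
            for i in range(len(x)):           # O(N^2) pairwise sum: what FMM avoids
                r = x[i] - x                  # (n, 3) separation vectors
                dist2 = np.einsum("ij,ij->i", r, r) + delta**2
                u[i] = -np.sum(np.cross(r, alpha) / dist2[:, None]**1.5,
                               axis=0) / (4 * np.pi)
            return u

        rng = np.random.default_rng(0)
        x = rng.random((500, 3))                       # particle positions
        alpha = rng.standard_normal((500, 3)) * 1e-3   # vector vortex strengths
        u = biot_savart_direct(x, alpha)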

  11. Silicon nanophotonics for scalable quantum coherent feedback networks

    Energy Technology Data Exchange (ETDEWEB)

    Sarovar, Mohan; Brif, Constantin [Sandia National Laboratories, Livermore, CA (United States); Soh, Daniel B.S. [Sandia National Laboratories, Livermore, CA (United States); Stanford University, Edward L. Ginzton Laboratory, Stanford, CA (United States); Cox, Jonathan; DeRose, Christopher T.; Camacho, Ryan; Davids, Paul [Sandia National Laboratories, Albuquerque, NM (United States)

    2016-12-15

    The emergence of coherent quantum feedback control (CQFC) as a new paradigm for precise manipulation of dynamics of complex quantum systems has led to the development of efficient theoretical modeling and simulation tools and opened avenues for new practical implementations. This work explores the applicability of the integrated silicon photonics platform for implementing scalable CQFC networks. If proven successful, on-chip implementations of these networks would provide scalable and efficient nanophotonic components for autonomous quantum information processing devices and ultra-low-power optical processing systems at telecommunications wavelengths. We analyze the strengths of the silicon photonics platform for CQFC applications and identify the key challenges to both the theoretical formalism and experimental implementations. In particular, we determine specific extensions to the theoretical CQFC framework (which was originally developed with bulk-optics implementations in mind), required to make it fully applicable to modeling of linear and nonlinear integrated optics networks. We also report the results of a preliminary experiment that studied the performance of an in situ controllable silicon nanophotonic network of two coupled cavities and analyze the properties of this device using the CQFC formalism. (orig.)

  12. The TOTEM DAQ based on the Scalable Readout System (SRS)

    Science.gov (United States)

    Quinto, Michele; Cafagna, Francesco S.; Fiergolski, Adrian; Radicioni, Emilio

    2018-02-01

    The TOTEM (TOTal cross section, Elastic scattering and diffraction dissociation Measurement at the LHC) experiment at the LHC has been designed to measure the total proton-proton cross-section and study elastic and diffractive scattering at LHC energies. In order to cope with the increased machine luminosity and the higher statistics required by the extension of the TOTEM physics program approved for the LHC's Run Two phase, the previous VME-based data acquisition system has been replaced with a new one based on the Scalable Readout System. The system features an aggregated data throughput of 2 GB/s towards the online storage system. This makes it possible to sustain a maximum trigger rate of ∼24 kHz, to be compared with the 1 kHz rate of the previous system. The trigger rate is further improved by implementing zero-suppression and second-level hardware algorithms in the Scalable Readout System. The new system fulfils the requirements for increased efficiency, providing higher bandwidth and increasing the purity of the recorded data. Moreover, full compatibility has been guaranteed with the legacy front-end hardware, as well as with the DAQ interface of the CMS experiment and with the LHC's Timing, Trigger and Control distribution system. In this contribution we describe in detail the architecture of the full system and its performance as measured during the commissioning phase at the LHC Interaction Point.

  13. Silicon nanophotonics for scalable quantum coherent feedback networks

    International Nuclear Information System (INIS)

    Sarovar, Mohan; Brif, Constantin; Soh, Daniel B.S.; Cox, Jonathan; DeRose, Christopher T.; Camacho, Ryan; Davids, Paul

    2016-01-01

    The emergence of coherent quantum feedback control (CQFC) as a new paradigm for precise manipulation of dynamics of complex quantum systems has led to the development of efficient theoretical modeling and simulation tools and opened avenues for new practical implementations. This work explores the applicability of the integrated silicon photonics platform for implementing scalable CQFC networks. If proven successful, on-chip implementations of these networks would provide scalable and efficient nanophotonic components for autonomous quantum information processing devices and ultra-low-power optical processing systems at telecommunications wavelengths. We analyze the strengths of the silicon photonics platform for CQFC applications and identify the key challenges to both the theoretical formalism and experimental implementations. In particular, we determine specific extensions to the theoretical CQFC framework (which was originally developed with bulk-optics implementations in mind), required to make it fully applicable to modeling of linear and nonlinear integrated optics networks. We also report the results of a preliminary experiment that studied the performance of an in situ controllable silicon nanophotonic network of two coupled cavities and analyze the properties of this device using the CQFC formalism. (orig.)

  14. Scalable Nernst thermoelectric power using a coiled galfenol wire

    Directory of Open Access Journals (Sweden)

    Zihao Yang

    2017-09-01

    Full Text Available The Nernst thermopower is usually considered far too weak in most metals for waste heat recovery. However, its transverse orientation gives it an advantage over the Seebeck effect on non-flat surfaces. Here, we experimentally demonstrate the scalable generation of a Nernst voltage in an air-cooled metal wire coiled around a hot cylinder. In this geometry, a radial temperature gradient generates an azimuthal electric field in the coil. A Galfenol (Fe0.85Ga0.15) wire is wrapped around a cartridge heater, and the voltage drop across the wire is measured as a function of axial magnetic field. As expected, the Nernst voltage scales linearly with the length of the wire. Based on heat conduction and fluid dynamics equations, the finite-element method is used to calculate the temperature gradient across the Galfenol wire and determine the Nernst coefficient. A giant Nernst coefficient of -2.6 μV/(K·T) at room temperature is estimated, in agreement with measurements on bulk Galfenol. We expect that the giant Nernst effect in Galfenol arises from its magnetostriction, presumably through enhanced magnon-phonon coupling. Our results demonstrate the feasibility of a transverse thermoelectric generator capable of scalable output power from non-flat heat sources.

  15. On eliminating synchronous communication in molecular simulations to improve scalability

    Science.gov (United States)

    Straatsma, T. P.; Chavarría-Miranda, Daniel G.

    2013-12-01

    Molecular dynamics simulation, as a complementary tool to experimentation, has become an important methodology for the understanding and design of molecular systems, as it provides access to properties that are difficult, impossible or prohibitively expensive to obtain experimentally. Many of the available software packages have been parallelized to take advantage of modern massively concurrent processing resources. The challenge in achieving parallel efficiency is commonly attributed to the fact that molecular dynamics algorithms are communication intensive. This paper illustrates how an appropriately chosen data distribution and an asynchronous one-sided communication approach can be used to deal effectively with the data movement within the Global Arrays/ARMCI programming model framework. A new put_notify capability is presented here, allowing the implementation of the molecular dynamics algorithm without any explicit global or local synchronization or global data reduction operations. In addition, this push-data model is shown to be very effective in hiding data communication behind computation. Rather than data movement or explicit global reductions, the implicit synchronization of the algorithm becomes the primary challenge for scalability. Without any explicit synchronous operations, the scalability of molecular simulations is shown to depend only on the ability to evenly balance the computational load.
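
    The push-data pattern can be imitated with Python's multiprocessing primitives; this sketch illustrates the idea only and is not the Global Arrays/ARMCI put_notify API. The producer writes into shared memory and then sets a per-buffer flag; the consumer polls the flag instead of entering a barrier.

        import time
        from multiprocessing import Array, Process, Value

        def producer(buf, flag):
            for i in range(len(buf)):
                buf[i] = float(i)             # one-sided "put" into shared memory
            with flag.get_lock():
                flag.value = 1                # notify: data is ready

        def consumer(buf, flag):
            while flag.value == 0:            # poll the notify flag (no barrier)
                time.sleep(0.001)             # real code would compute here instead
            print("received:", sum(buf))

        if __name__ == "__main__":
            buf = Array("d", 8)               # shared buffer, like a remote window
            flag = Value("i", 0)              # notification word set after the put
            p = Process(target=producer, args=(buf, flag))
            c = Process(target=consumer, args=(buf, flag))
            p.start(); c.start(); p.join(); c.join()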

  16. Scalable Multicasting over Next-Generation Internet Design, Analysis and Applications

    CERN Document Server

    Tian, Xiaohua

    2013-01-01

    Next-generation Internet providers face high expectations, as contemporary users worldwide expect high-quality multimedia functionality in a landscape of ever-expanding network applications. This volume explores the critical research issue of turning today’s greatly enhanced hardware capacity to good use in designing a scalable multicast protocol for supporting large-scale multimedia services. Linking new hardware to improved performance in the Internet’s next incarnation is a research hot spot in the computer communications field. The methodical presentation deals with the key questions in turn: from the mechanics of multicast protocols to current state-of-the-art designs, and from methods of theoretical analysis of these protocols to applying them in the ns2 network simulator, known for being hard to extend. The authors’ years of research in the field inform this thorough treatment, which covers details such as applying the AOM (application-oriented multicast) protocol to IPTV provision and resolving...

  17. Scalable photonic quantum computing assisted by quantum-dot spin in double-sided optical microcavity.

    Science.gov (United States)

    Wei, Hai-Rui; Deng, Fu-Guo

    2013-07-29

    We investigate the possibility of achieving scalable photonic quantum computing by the giant optical circular birefringence induced by a quantum-dot spin in a double-sided optical microcavity, as a result of cavity quantum electrodynamics. We construct a deterministic controlled-NOT gate on two photonic qubits by two single-photon input-output processes and the readout of an electron-medium spin confined in an optical resonant microcavity. This idea could be applied to multi-qubit gates on photonic qubits, and we give the quantum circuit for a three-photon Toffoli gate. High fidelities and high efficiencies can be achieved when the ratio of the side leakage to the cavity loss rate is low. It is worth pointing out that our devices work in both the strong and the weak coupling regimes.

  18. Scalable nanofabrication of U-shaped nanowire resonators with tunable optical magnetism.

    Science.gov (United States)

    Zhou, Fan; Wang, Chen; Dong, Biqin; Chen, Xiangfan; Zhang, Zhen; Sun, Cheng

    2016-03-21

    Split ring resonators have been studied extensively in reconstituting the diminishing magnetism at high electromagnetic frequencies in nature. However, breakdown in the linear scaling of artificial magnetism is found to occur at the near-infrared frequency mainly due to the increasing contribution of self-inductance while reducing dimensions of the resonators. Although alternative designs have enabled artificial magnetism at optical frequencies, their sophisticated configurations and fabrication procedures do not lend themselves to easy implementation. Here, we report scalable nanofabrication of U-shaped nanowire resonators (UNWRs) using the high-throughput nanotransfer printing method. By providing ample area for conducting oscillating electric current, UNWRs overcome the saturation of the geometric scaling of the artificial magnetism. We experimentally demonstrated coarse and fine tuning of LC resonances over a wide wavelength range from 748 nm to 1600 nm. The added flexibility in transferring to other substrates makes UNWR a versatile building block for creating functional metamaterials in three dimensions.

  19. KOLAM: a cross-platform architecture for scalable visualization and tracking in wide-area imagery

    Science.gov (United States)

    Fraser, Joshua; Haridas, Anoop; Seetharaman, Guna; Rao, Raghuveer M.; Palaniappan, Kannappan

    2013-05-01

    KOLAM is an open, cross-platform, interoperable, scalable and extensible framework supporting a novel multi-scale spatiotemporal dual-cache data structure for big data visualization and visual analytics. This paper focuses on the use of KOLAM for target tracking in high-resolution, high-throughput wide-format video, also known as wide-area motion imagery (WAMI). It was originally developed for the interactive visualization of extremely large geospatial imagery of high spatial and spectral resolution. KOLAM is platform, operating system and (graphics) hardware independent, and supports embedded datasets scalable from hundreds of gigabytes to feasibly petabytes in size on clusters, workstations, desktops and mobile computers. In addition to rapid roam, zoom and hyper-jump spatial operations, a large number of simultaneously viewable embedded pyramid layers (also referred to as multiscale or sparse imagery), interactive colormap and histogram enhancement, spherical projection and terrain maps are supported. The KOLAM software architecture was extended to support airborne wide-area motion imagery by organizing spatiotemporal tiles in very large format video frames using a temporal cache of tiled pyramid cached data structures. The current version supports WAMI animation, fast intelligent inspection, trajectory visualization and target tracking (digital tagging); the latter by interfacing with external automatic tracking software. One of the critical needs for working with WAMI is a supervised tracking and visualization tool that allows analysts to digitally tag multiple targets, quickly review and correct tracking results and apply geospatial visual analytic tools to the generated trajectories. One-click manual tracking combined with multiple automated tracking algorithms is available to assist the analyst and increase human effectiveness.
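
    A compact sketch of a tiled-pyramid cache keyed by (frame, level, row, col) with LRU eviction, as one plausible reading of the dual-cache design described above; the names, capacity and stub loader are illustrative assumptions.

        from collections import OrderedDict

        class TileCache:
            """LRU cache for tiles of a spatiotemporal pyramid,
            keyed by (frame, level, row, col)."""
            def __init__(self, loader, capacity=1024):
                self.loader = loader          # fetches a tile from disk or network
                self.capacity = capacity
                self._tiles = OrderedDict()

            def get(self, frame, level, row, col):
                key = (frame, level, row, col)
                if key in self._tiles:
                    self._tiles.move_to_end(key)      # mark as most recently used
                    return self._tiles[key]
                tile = self.loader(*key)              # cache miss: load the tile
                self._tiles[key] = tile
                if len(self._tiles) > self.capacity:
                    self._tiles.popitem(last=False)   # evict least recently used
                return tile

        # Usage with a stub loader; a real loader would read WAMI tiles from storage.
        cache = TileCache(loader=lambda f, l, r, c: f"tile({f},{l},{r},{c})", capacity=2)
        cache.get(0, 3, 10, 12); cache.get(0, 3, 10, 13); cache.get(0, 3, 10, 12)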

  20. WIFIRE: A Scalable Data-Driven Monitoring, Dynamic Prediction and Resilience Cyberinfrastructure for Wildfires

    Science.gov (United States)

    Altintas, I.; Block, J.; Braun, H.; de Callafon, R. A.; Gollner, M. J.; Smarr, L.; Trouve, A.

    2013-12-01

    ... before, during and after a wildfire. Scalability of the WIFIRE approach allows many sensors to be subjected to user-specified data processing algorithms to generate threshold alerts within seconds. Integration of this sensor data into both rapidly available fire image data and models will better enable situational awareness, responses and decision support at local, state, national, and international levels. The products of WIFIRE will initially be disseminated to our collaborators (SDG&E, CAL FIRE, USFS), covering academic, private, and government laboratories, while generating value for emergency officials and, consequently, the general public. WIFIRE may be used by government agencies in the future to save lives and property during wildfire events, to test the effectiveness of response and evacuation scenarios before they occur, and to assess the effectiveness of high-density sensor networks in improving fire and weather predictions. WIFIRE's high-density network, therefore, will serve as a testbed for future applications worldwide.

  1. BAMSI: a multi-cloud service for scalable distributed filtering of massive genome data.

    Science.gov (United States)

    Ausmees, Kristiina; John, Aji; Toor, Salman Z; Hellander, Andreas; Nettelblad, Carl

    2018-06-26

    The advent of next-generation sequencing (NGS) has made whole-genome sequencing of cohorts of individuals a reality. Primary datasets of raw or aligned reads of this sort can get very large. For scientific questions where curated called variants are not sufficient, the sheer size of the datasets makes analysis prohibitively expensive. In order to make re-analysis of such data feasible without access to a large-scale computing facility, we have developed a highly scalable, storage-agnostic framework, an associated API and an easy-to-use web user interface to execute custom filters on large genomic datasets. We present BAMSI, a Software-as-a-Service (SaaS) solution for filtering of the 1000 Genomes phase 3 set of aligned reads, with the possibility of extension and customization to other sets of files. Unique to our solution is the capability of simultaneously utilizing many different mirrors of the data to increase the speed of the analysis. In particular, if the data is available in private or public clouds - an increasingly common scenario for both academic and commercial cloud providers - our framework allows for seamless deployment of filtering workers close to the data. We show results indicating that such a setup improves the horizontal scalability of the system, and present a possible use case of the framework by performing an analysis of structural variation in the 1000 Genomes data set. BAMSI constitutes a framework for efficient filtering of large genomic data sets that is flexible in the use of compute as well as storage resources. The data resulting from the filter is assumed to be greatly reduced in size, and can easily be downloaded or routed into e.g. a Hadoop cluster for subsequent interactive analysis using Hive, Spark or similar tools. In this respect, our framework also suggests a general model for making very large datasets of high scientific value more accessible by offering the possibility for organizations to share the cost of

  2. Towards scalable quantum communication and computation: Novel approaches and realizations

    Science.gov (United States)

    Jiang, Liang

    Quantum information science involves exploration of fundamental laws of quantum mechanics for information processing tasks. This thesis presents several new approaches towards scalable quantum information processing. First, we consider a hybrid approach to scalable quantum computation, based on an optically connected network of few-qubit quantum registers. Specifically, we develop a novel scheme for scalable quantum computation that is robust against various imperfections. To justify that nitrogen-vacancy (NV) color centers in diamond can be a promising realization of the few-qubit quantum register, we show how to isolate a few proximal nuclear spins from the rest of the environment and use them for the quantum register. We also demonstrate experimentally that the nuclear spin coherence is only weakly perturbed under optical illumination, which allows us to implement quantum logical operations that use the nuclear spins to assist the repetitive-readout of the electronic spin. Using this technique, we demonstrate more than two-fold improvement in signal-to-noise ratio. Apart from direct application to enhance the sensitivity of the NV-based nano-magnetometer, this experiment represents an important step towards the realization of robust quantum information processors using electronic and nuclear spin qubits. We then study realizations of quantum repeaters for long distance quantum communication. Specifically, we develop an efficient scheme for quantum repeaters based on atomic ensembles. We use dynamic programming to optimize various quantum repeater protocols. In addition, we propose a new protocol of quantum repeater with encoding, which efficiently uses local resources (about 100 qubits) to identify and correct errors, to achieve fast one-way quantum communication over long distances. Finally, we explore quantum systems with topological order. Such systems can exhibit remarkable phenomena such as quasiparticles with anyonic statistics and have been proposed as

  3. Final Report: Enabling Exascale Hardware and Software Design through Scalable System Virtualization

    Energy Technology Data Exchange (ETDEWEB)

    Bridges, Patrick G.

    2015-02-01

    In this grant, we enhanced the Palacios virtual machine monitor to increase its scalability and suitability for addressing exascale system software design issues. This included a wide range of research on core Palacios features, large-scale system emulation, fault injection, performance monitoring, and VMM extensibility. This research resulted in a large number of high-impact publications in well-known venues, the support of a number of students, and the graduation of two Ph.D. students and one M.S. student. In addition, our enhanced version of the Palacios virtual machine monitor has been adopted as a core element of the Hobbes operating system under active DOE-funded research and development.

  4. ADEpedia: a scalable and standardized knowledge base of Adverse Drug Events using semantic web technology.

    Science.gov (United States)

    Jiang, Guoqian; Solbrig, Harold R; Chute, Christopher G

    2011-01-01

    A source of semantically coded Adverse Drug Event (ADE) data can be useful for identifying common phenotypes related to ADEs. We proposed a comprehensive framework for building a standardized ADE knowledge base (called ADEpedia) by combining an ontology-based approach with semantic web technology. The framework comprises four primary modules: 1) an XML2RDF transformation module; 2) a data normalization module based on the NCBO Open Biomedical Annotator; 3) an RDF-store-based persistence module; and 4) a front-end module based on a Semantic Wiki for review and curation. A prototype is successfully implemented to demonstrate the capability of the system to integrate multiple drug data and ontology resources and open web services for ADE data standardization. A preliminary evaluation is performed to demonstrate the usefulness of the system, including the performance of the NCBO annotator. In conclusion, semantic web technology provides a highly scalable framework for ADE data source integration and a standard query service.
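
    As a rough illustration of the RDF-store-based persistence the record describes, the following Python sketch stores one drug-event triple with rdflib and queries it back through SPARQL; the namespace, terms and code value are hypothetical, not ADEpedia's actual schema.

      from rdflib import Graph, Literal, Namespace

      ADE = Namespace("http://example.org/adepedia#")  # hypothetical namespace

      g = Graph()
      g.add((ADE["warfarin"], ADE["hasAdverseEvent"], ADE["bleeding"]))
      g.add((ADE["bleeding"], ADE["code"], Literal("EXAMPLE-CODE")))

      # A standard SPARQL query, as an RDF-backed query service might expose it.
      q = """
          SELECT ?event WHERE {
              <http://example.org/adepedia#warfarin>
                  <http://example.org/adepedia#hasAdverseEvent> ?event .
          }"""
      for row in g.query(q):
          print(row.event)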

  5. Phonon-based scalable platform for chip-scale quantum computing

    Directory of Open Access Journals (Sweden)

    Charles M. Reinke

    2016-12-01

    Full Text Available We present a scalable phonon-based quantum computer on a phononic crystal platform. Practical schemes involve selective placement of a single acceptor atom in the peak of the strain field in a high-Q phononic crystal cavity that enables coupling of the phonon modes to the energy levels of the atom. We show theoretical optimization of the cavity design and coupling waveguide, along with estimated performance figures of the coupled system. A qubit can be created by entangling a phonon at the resonance frequency of the cavity with the atom states. Qubits based on this half-sound, half-matter quasi-particle, called a phoniton, may outcompete other quantum architectures in terms of combined emission rate, coherence lifetime, and fabrication demands.

  6. NYU3T: teaching, technology, teamwork: a model for interprofessional education scalability and sustainability.

    Science.gov (United States)

    Djukic, Maja; Fulmer, Terry; Adams, Jennifer G; Lee, Sabrina; Triola, Marc M

    2012-09-01

    Interprofessional education is a critical precursor to effective teamwork and the collaboration of health care professionals in clinical settings. Numerous barriers have been identified that preclude scalable and sustainable interprofessional education (IPE) efforts. This article describes NYU3T: Teaching, Technology, Teamwork, a model that uses novel technologies such as Web-based learning, virtual patients, and high-fidelity simulation to overcome some of the common barriers and drive implementation of evidence-based teamwork curricula. It outlines the program's curricular components, implementation strategy, evaluation methods, and lessons learned from the first year of delivery and describes implications for future large-scale IPE initiatives. Copyright © 2012 Elsevier Inc. All rights reserved.

  7. Scalable stacked array piezoelectric deformable mirror for astronomy and laser processing applications

    Energy Technology Data Exchange (ETDEWEB)

    Wlodarczyk, Krystian L., E-mail: K.L.Wlodarczyk@hw.ac.uk; Maier, Robert R. J.; Hand, Duncan P. [Institute of Photonics and Quantum Sciences, Heriot-Watt University, Edinburgh EH14 4AS (United Kingdom); Bryce, Emma; Hutson, David; Kirk, Katherine [School of Engineering and Science, University of the West of Scotland, Paisley PA1 2BE (United Kingdom); Schwartz, Noah; Atkinson, David; Beard, Steven; Baillie, Tom; Parr-Burman, Phil [UK Astronomy Technology Centre, Royal Observatory, Edinburgh EH9 3HJ (United Kingdom); Strachan, Mel [Institute of Photonics and Quantum Sciences, Heriot-Watt University, Edinburgh EH14 4AS (United Kingdom); UK Astronomy Technology Centre, Royal Observatory, Edinburgh EH9 3HJ (United Kingdom)

    2014-02-15

    A prototype of a scalable and potentially low-cost stacked array piezoelectric deformable mirror (SA-PDM) with 35 active elements is presented in this paper. This prototype is characterized by a 2 μm maximum actuator stroke, a 1.4 μm mirror sag (measured for a 14 mm × 14 mm area of the unpowered SA-PDM), and a ±200 nm hysteresis error. The initial proof of concept experiments described here show that this mirror can be successfully used for shaping a high power laser beam in order to improve laser machining performance. Various beam shapes have been obtained with the SA-PDM and examples of laser machining with the shaped beams are presented.

  8. Towards scalable parallelism in Monte Carlo particle transport codes using remote memory access

    Energy Technology Data Exchange (ETDEWEB)

    Romano, Paul K [Los Alamos National Laboratory; Brown, Forrest B [Los Alamos National Laboratory; Forget, Benoit [MIT

    2010-01-01

    One forthcoming challenge in the area of high-performance computing is having the ability to run large-scale problems while coping with less memory per compute node. In this work, the authors investigate a novel data decomposition method that would allow Monte Carlo transport calculations to be performed on systems with limited memory per compute node. In this method, each compute node remotely retrieves a small set of geometry and cross-section data as needed and remotely accumulates local tallies when crossing the boundary of the local spatial domain. Initial results demonstrate that while the method does allow large problems to be run in a memory-limited environment, achieving scalability may be difficult due to inefficiencies in the current implementation of RMA operations.

  9. Toward a scalable flexible-order model for 3D nonlinear water waves

    DEFF Research Database (Denmark)

    Engsig-Karup, Allan Peter; Ducrozet, Guillaume; Bingham, Harry B.

    For marine and coastal applications, current work is directed toward the development of a scalable numerical 3D model for fully nonlinear potential water waves over arbitrary depths. The model is high-order accurate, robust and efficient for large-scale problems, and support will be included for flexibility in the description of structures by the use of curvilinear boundary-fitted meshes. The mathematical equations for potential waves in the physical domain are transformed through σ-mapping(s) to a time-invariant boundary-fitted domain, which then becomes a basis for an efficient solution strategy on a time-invariant mesh. The 3D numerical model is based on a finite difference method as in the original works [Li & Fleming 1997; Bingham & Zhang 2007]. Full details and other aspects of an improved 3D solution can be found in [EBL08]. The new and improved approach for three...

  10. Towards scalable parallelism in Monte Carlo particle transport codes using remote memory access

    International Nuclear Information System (INIS)

    Romano, Paul K.; Forget, Benoit; Brown, Forrest

    2010-01-01

    One forthcoming challenge in the area of high-performance computing is having the ability to run large-scale problems while coping with less memory per compute node. In this work, we investigate a novel data decomposition method that would allow Monte Carlo transport calculations to be performed on systems with limited memory per compute node. In this method, each compute node remotely retrieves a small set of geometry and cross-section data as needed and remotely accumulates local tallies when crossing the boundary of the local spatial domain. Initial results demonstrate that while the method does allow large problems to be run in a memory-limited environment, achieving scalability may be difficult due to inefficiencies in the current implementation of RMA operations. (author)
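
    The tally accumulation described above can be sketched with MPI one-sided (RMA) operations; the following mpi4py fragment is a minimal illustration of the idea, with hypothetical tally sizes and values, not code from the paper.

      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD

      # Each rank owns the tallies of its spatial domain and exposes them
      # through an RMA window, so other ranks can update them remotely.
      local_tallies = np.zeros(1000, dtype="d")
      win = MPI.Win.Create(local_tallies, comm=comm)

      # A particle crossing into rank 0's domain deposits its contribution
      # directly into the owner's memory without a matching receive call.
      owner = 0
      contribution = np.array([1.0])  # accumulates into element 0 of the window
      win.Lock(owner)
      win.Accumulate([contribution, MPI.DOUBLE], owner, op=MPI.SUM)
      win.Unlock(owner)

      win.Free()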

  11. Statistical Analysis of Video Frame Size Distribution Originating from Scalable Video Codec (SVC

    Directory of Open Access Journals (Sweden)

    Sima Ahmadpour

    2017-01-01

    Full Text Available Designing an effective and high-performance network requires an accurate characterization and modeling of network traffic. The modeling of video frame sizes is normally applied in simulation studies and mathematical analysis, and in generating streams for testing and compliance purposes. Besides, video traffic is assumed to be a major source of multimedia traffic in future heterogeneous networks. Therefore, the statistical distribution of video data can be used as an input for performance modeling of networks. The findings of this paper comprise the theoretical definition of the distribution that appears most relevant to the video trace in terms of its statistical properties, with the best distribution identified using both a graphical method and a hypothesis test. The data set used in this article consists of layered video traces generated with the Scalable Video Codec (SVC) video compression technique from three different movies.
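
    The combination of a graphical method and a hypothesis test used in the paper can be reproduced in outline with scipy; the synthetic frame sizes below are hypothetical stand-ins for the SVC traces.

      import numpy as np
      from scipy import stats

      # Hypothetical frame sizes in bytes standing in for a real SVC trace.
      frame_sizes = np.random.default_rng(0).gamma(2.0, 800.0, size=5000)

      # Fit candidate distributions, then rank them with a Kolmogorov-Smirnov test.
      for name in ("gamma", "lognorm", "weibull_min"):
          params = getattr(stats, name).fit(frame_sizes)
          ks_stat, p_value = stats.kstest(frame_sizes, name, args=params)
          print(f"{name}: KS statistic {ks_stat:.4f}, p-value {p_value:.3g}")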

  12. On-chip, photon-number-resolving, telecommunication-band detectors for scalable photonic information processing

    Energy Technology Data Exchange (ETDEWEB)

    Gerrits, Thomas; Lita, Adriana E.; Calkins, Brice; Tomlin, Nathan A.; Fox, Anna E.; Linares, Antia Lamas; Mirin, Richard P.; Nam, Sae Woo [National Institute of Standards and Technology, Boulder, Colorado, 80305 (United States); Thomas-Peter, Nicholas; Metcalf, Benjamin J.; Spring, Justin B.; Langford, Nathan K.; Walmsley, Ian A. [Clarendon Laboratory, University of Oxford, Parks Road, Oxford OX1 3PU (United Kingdom); Gates, James C.; Smith, Peter G. R. [Optoelectronics Research Centre, University of Southampton, Highfield SO17 1BJ (United Kingdom)

    2011-12-15

    Integration is currently the only feasible route toward scalable photonic quantum processing devices that are sufficiently complex to be genuinely useful in computing, metrology, and simulation. Embedded on-chip detection will be critical to such devices. We demonstrate an integrated photon-number-resolving detector, operating in the telecom band at 1550 nm, employing an evanescently coupled design that allows it to be placed at arbitrary locations within a planar circuit. Up to five photons are resolved in the guided optical mode via absorption from the evanescent field into a tungsten transition-edge sensor. The detection efficiency is 7.2 ± 0.5%. The polarization sensitivity of the detector is also demonstrated. Detailed modeling of device designs shows a clear and feasible route to reaching high detection efficiencies.

  13. Scalable Background-Limited Polarization-Sensitive Detectors for mm-wave Applications

    Science.gov (United States)

    Rostem, Karwan; Ali, Aamir; Appel, John W.; Bennett, Charles L.; Chuss, David T.; Colazo, Felipe A.; Crowe, Erik; Denis, Kevin L.; Essinger-Hileman, Tom; Marriage, Tobias A.

    2014-01-01

    We report on the status and development of polarization-sensitive detectors for millimeter-wave applications. The detectors are fabricated on single-crystal silicon, which functions as a low-loss dielectric substrate for the microwave circuitry as well as the supporting membrane for the Transition-Edge Sensor (TES) bolometers. The orthomode transducer (OMT) is realized as a symmetric structure and on-chip filters are employed to define the detection bandwidth. A hybridized integrated enclosure reduces the high-frequency THz mode set that can couple to the TES bolometers. An implementation of the detector architecture at Q-band achieves 90% efficiency in each polarization. The design is scalable in both frequency coverage, 30-300 GHz, and in number of detectors with uniform characteristics. Hence, the detectors are desirable for ground-based or space-borne instruments that require large arrays of efficient background-limited cryogenic detectors.

  14. Use of modeling to assess the scalability of Ethernet networks for the ATLAS second level trigger

    CERN Document Server

    Korcyl, K; Dobinson, Robert W; Saka, F

    1999-01-01

    The second level trigger of LHC's ATLAS experiment has to perform real-time analyses on detector data at 10 GBytes/s. A switching network is required to connect more than a thousand read-out buffers to about a thousand processors that execute the trigger algorithm. We are investigating the use of Ethernet technology to build this large switching network. Ethernet is attractive because of the huge installed base, competitive prices, and the recent introduction of the high-performance Gigabit version. Due to the network's size it has to be constructed as a layered structure of smaller units. To assess the scalability of such a structure we evaluated a single switch unit.

  15. Scalability, Scintillation Readout and Charge Drift in a Kilogram Scale Solid Xenon Particle Detector

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, J. [Fermilab; Cease, H. [Fermilab; Jaskierny, W. F. [Fermilab; Markley, D. [Fermilab; Pahlka, R. B. [Fermilab; Balakishiyeva, D. [Florida U.; Saab, T. [Florida U.; Filipenko, M. [Erlangen - Nuremberg U., ECAP

    2014-10-23

    We report a demonstration of the scalability of optically transparent xenon in the solid phase for use as a particle detector above a kilogram scale. We employ a liquid nitrogen cooled cryostat combined with a xenon purification and chiller system to measure the scintillation light output and electron drift speed from both the solid and liquid phases of xenon. Scintillation light output from sealed radioactive sources is measured by a set of high quantum efficiency photomultiplier tubes suitable for cryogenic applications. We observed a reduced amount of photons in solid phase compared to that in liquid phase. We used a conventional time projection chamber system to measure the electron drift time in a kilogram of solid xenon and observed faster electron drift speed in the solid phase xenon compared to that in the liquid phase.

  16. Scalable and Fully Distributed Localization in Large-Scale Sensor Networks

    Directory of Open Access Journals (Sweden)

    Miao Jin

    2017-06-01

    Full Text Available This work proposes a novel connectivity-based localization algorithm, well suited to large-scale sensor networks with complex shapes and a non-uniform nodal distribution. In contrast to current state-of-the-art connectivity-based localization methods, the proposed algorithm is highly scalable, with linear computation and communication costs with respect to the size of the network, and fully distributed, where each node only needs the information of its neighbors, without a cumbersome partitioning and merging process. The algorithm is theoretically guaranteed and numerically stable. Moreover, the algorithm can be readily extended to the localization of networks with one-hop transmission-range distance measurements, and the propagation of the measurement error at one sensor node is limited within a small area of the network around the node. Extensive simulations and comparisons with other methods under various representative network settings are carried out, showing the superior performance of the proposed algorithm.
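
    For intuition only, a centralized toy version of connectivity-based localization can be written in a few lines: hop counts stand in for distances and classical multidimensional scaling recovers relative coordinates. The paper's algorithm is distributed and linear-cost, so this sketch (with hypothetical network parameters) is a baseline, not the proposed method.

      import networkx as nx
      import numpy as np
      from sklearn.manifold import MDS

      # A random geometric graph as a hypothetical stand-in sensor network.
      G = nx.random_geometric_graph(200, radius=0.15, seed=1)
      n = G.number_of_nodes()

      # Pairwise hop counts approximate distances when only connectivity is known.
      hops = dict(nx.all_pairs_shortest_path_length(G))
      D = np.array([[hops[i].get(j, n) for j in range(n)] for i in range(n)], float)

      # Metric MDS on the hop-count matrix yields coordinates up to rotation/scale.
      coords = MDS(n_components=2, dissimilarity="precomputed",
                   random_state=1).fit_transform(D)
      print(coords[:3])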

  17. The NIDS Cluster: Scalable, Stateful Network Intrusion Detection on Commodity Hardware

    Energy Technology Data Exchange (ETDEWEB)

    Tierney, Brian L; Vallentin, Matthias; Sommer, Robin; Lee, Jason; Leres, Craig; Paxson, Vern; Tierney, Brian

    2007-09-19

    In this work we present a NIDS cluster as a scalable solution for realizing high-performance, stateful network intrusion detection on commodity hardware. The design addresses three challenges: (i) distributing traffic evenly across an extensible set of analysis nodes in a fashion that minimizes the communication required for coordination; (ii) adapting the NIDS's operation to support coordinating its low-level analysis rather than just aggregating alerts; and (iii) validating that the cluster produces sound results. Prototypes of our NIDS cluster now operate at the Lawrence Berkeley National Laboratory and the University of California at Berkeley. In both environments the clusters greatly enhance the power of network security monitoring.
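
    The record does not spell out the traffic-distribution scheme, but a common way to spread flows across analysis nodes with minimal coordination is symmetric flow hashing, sketched below with hypothetical node names and addresses.

      import hashlib

      NODES = ["nids0", "nids1", "nids2"]  # hypothetical analysis nodes

      def assign_node(src_ip, src_port, dst_ip, dst_port, proto="tcp"):
          # Sorting the endpoints makes both directions of a connection hash
          # to the same node, so per-flow state never has to be shared.
          a, b = sorted([(src_ip, src_port), (dst_ip, dst_port)])
          key = f"{a}|{b}|{proto}".encode()
          digest = int.from_bytes(hashlib.sha1(key).digest()[:8], "big")
          return NODES[digest % len(NODES)]

      print(assign_node("10.0.0.5", 51512, "192.0.2.7", 443))
      print(assign_node("192.0.2.7", 443, "10.0.0.5", 51512))  # same node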

  18. Scalable quantum information processing with atomic ensembles and flying photons

    International Nuclear Information System (INIS)

    Mei Feng; Yu Yafei; Feng Mang; Zhang Zhiming

    2009-01-01

    We present a scheme for scalable quantum information processing with atomic ensembles and flying photons. Using the Rydberg blockade, we encode the qubits in collective atomic states, which can be manipulated quickly and easily due to the enhanced interaction in comparison to the single-atom case. We demonstrate that our proposed gating could be applied to the generation of two-dimensional cluster states for measurement-based quantum computation. Moreover, the atomic ensembles also function as quantum repeaters useful for long-distance quantum state transfer. We show that our scheme can work in the bad-cavity or weak-coupling regime, which could much relax the experimental requirements. The efficient coherent operations on the ensemble qubits enable our scheme to be switchable between quantum computation and quantum communication using atomic ensembles.

  19. Final Report. Center for Scalable Application Development Software

    Energy Technology Data Exchange (ETDEWEB)

    Mellor-Crummey, John [Rice Univ., Houston, TX (United States)

    2014-10-26

    The Center for Scalable Application Development Software (CScADS) was established as a partnership between Rice University, Argonne National Laboratory, University of California Berkeley, University of Tennessee – Knoxville, and University of Wisconsin – Madison. CScADS pursued an integrated set of activities with the aim of increasing the productivity of DOE computational scientists by catalyzing the development of systems software, libraries, compilers, and tools for leadership computing platforms. Principal Center activities were workshops to engage the research community in the challenges of leadership computing, research and development of open-source software, and work with computational scientists to help them develop codes for leadership computing platforms. This final report summarizes CScADS activities at Rice University in these areas.

  20. Final Report: Center for Programming Models for Scalable Parallel Computing

    Energy Technology Data Exchange (ETDEWEB)

    Mellor-Crummey, John [William Marsh Rice University

    2011-09-13

    As part of the Center for Programming Models for Scalable Parallel Computing, Rice University collaborated with project partners in the design, development and deployment of language, compiler, and runtime support for parallel programming models to support application development for the “leadership-class” computer systems at DOE national laboratories. Work over the course of this project has focused on the design, implementation, and evaluation of a second-generation version of Coarray Fortran. Research and development efforts of the project have focused on the CAF 2.0 language, compiler, runtime system, and supporting infrastructure. This has involved working with the teams that provide infrastructure for CAF that we rely on, implementing new language and runtime features, producing an open source compiler that enabled us to evaluate our ideas, and evaluating our design and implementation through the use of benchmarks. The report details the research, development, findings, and conclusions from this work.

  1. Vortex Filaments in Grids for Scalable, Fine Smoke Simulation.

    Science.gov (United States)

    Meng, Zhang; Weixin, Si; Yinling, Qian; Hanqiu, Sun; Jing, Qin; Heng, Pheng-Ann

    2015-01-01

    Vortex modeling can produce attractive visual effects of dynamic fluids, which are widely applicable for dynamic media, computer games, special effects, and virtual reality systems. However, it is challenging to effectively simulate intensive and finely detailed fluids such as smoke with fast-increasing vortex filaments and smoke particles. The authors propose a novel vortex-filaments-in-grids scheme in which uniform grids dynamically bridge the vortex filaments and smoke particles for scalable, fine smoke simulation with macroscopic vortex structures. Using the vortex model, their approach supports the trade-off between simulation speed and scale of detail. After computing the whole velocity field, external control can easily be exerted on the embedded grid to guide the vortex-based smoke motion. The experimental results demonstrate the efficiency of using the proposed scheme for a visually plausible smoke simulation with macroscopic vortex structures.

  2. Simplifying Scalable Graph Processing with a Domain-Specific Language

    KAUST Repository

    Hong, Sungpack; Salihoglu, Semih; Widom, Jennifer; Olukotun, Kunle

    2014-01-01

    Large-scale graph processing, with its massive data sets, requires distributed processing. However, conventional frameworks for distributed graph processing, such as Pregel, use non-traditional programming models that are well-suited for parallelism and scalability but inconvenient for implementing non-trivial graph algorithms. In this paper, we use Green-Marl, a Domain-Specific Language for graph analysis, to intuitively describe graph algorithms and extend its compiler to generate equivalent Pregel implementations. Using the semantic information captured by Green-Marl, the compiler applies a set of transformation rules that convert imperative graph algorithms into Pregel's programming model. Our experiments show that the Pregel programs generated by the Green-Marl compiler perform similarly to manually coded Pregel implementations of the same algorithms. The compiler is even able to generate a Pregel implementation of a complicated graph algorithm for which a manual Pregel implementation is very challenging.

  4. Optimization of Hierarchical Modulation for Use of Scalable Media

    Directory of Open Access Journals (Sweden)

    Heneghan Conor

    2010-01-01

    Full Text Available This paper studies Hierarchical Modulation, a transmission strategy for the approaching scalable multimedia over frequency-selective fading channels, aimed at improving perceptible quality. An optimization strategy for Hierarchical Modulation and convolutional encoding, which can achieve the target bit error rates with minimum global signal-to-noise ratio in a single-user scenario, is suggested. This strategy allows applications a free choice of the relationship between Higher Priority (HP) and Lower Priority (LP) stream delivery. A similar optimization can be used in the multiuser scenario. An image transport task and a transport task of an H.264/MPEG4 AVC video embedding both QVGA and VGA resolutions are simulated as implementation examples of this optimization strategy, and demonstrate savings in SNR and improvement in Peak Signal-to-Noise Ratio (PSNR) for the particular examples shown.
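
    The mechanics of Hierarchical Modulation can be illustrated with a non-uniform 16-QAM constellation in which two HP bits pick the quadrant and two LP bits the point within it; the alpha parameter below is a hypothetical priority ratio, not a value from the paper.

      # Hierarchical 16-QAM sketch: alpha > 1 widens the quadrant spacing,
      # protecting the HP stream at the expense of the LP stream.
      def hier_16qam(hp, lp, alpha=2.0):
          hp_i, hp_q = 1 - 2 * hp[0], 1 - 2 * hp[1]  # 0 -> +1, 1 -> -1
          lp_i, lp_q = 1 - 2 * lp[0], 1 - 2 * lp[1]
          # Coarse quadrant offset from HP bits, fine offset from LP bits.
          return complex(hp_i * (alpha + lp_i), hp_q * (alpha + lp_q))

      for hp in ((0, 0), (1, 1)):
          for lp in ((0, 0), (1, 1)):
              print(hp, lp, hier_16qam(hp, lp))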

  5. A Scalable Policy and SNMP Based Network Management Framework

    Institute of Scientific and Technical Information of China (English)

    LIU Su-ping; DING Yong-sheng

    2009-01-01

    Traditional SNMP-based network management cannot deal with the task of managing large-scale distributed networks, while policy-based management is one of the effective solutions for network and distributed systems management. However, cross-vendor hardware compatibility is one of the limitations of policy-based management: devices in current networks mostly support SNMP rather than the Common Open Policy Service (COPS) protocol. By analyzing traditional network management and policy-based network management, a scalable network management framework is proposed. It combines the Internet Engineering Task Force (IETF) framework for policy-based management with SNMP-based network management. By interpreting and translating policy decisions into SNMP messages, policies can be executed on traditional SNMP-based devices.
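
    The policy-to-SNMP translation step can be illustrated with the pysnmp library: a policy decision ultimately becomes an SNMP SET on the managed device. The community string, address and managed object below are hypothetical, and the exact import layout may vary between pysnmp versions.

      from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                                ContextData, ObjectType, ObjectIdentity,
                                OctetString, setCmd)

      # A policy decision rendered as an SNMP SET against a legacy device.
      error_indication, error_status, error_index, var_binds = next(setCmd(
          SnmpEngine(),
          CommunityData("private"),                 # write community (assumed)
          UdpTransportTarget(("192.0.2.10", 161)),  # hypothetical device
          ContextData(),
          ObjectType(ObjectIdentity("SNMPv2-MIB", "sysContact", 0),
                     OctetString("netops@example.org"))))
      print(error_indication or error_status or var_binds)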

  6. MOSDEN: A Scalable Mobile Collaborative Platform for Opportunistic Sensing Applications

    Directory of Open Access Journals (Sweden)

    Prem Prakash Jayaraman

    2014-05-01

    Full Text Available Mobile smartphones along with embedded sensors have become an efficient enabler for various mobile applications including opportunistic sensing. The hi-tech advances in smartphones are opening up a world of possibilities. This paper proposes a mobile collaborative platform called MOSDEN that enables and supports opportunistic sensing at run time. MOSDEN captures and shares sensor data across multiple apps, smartphones and users. MOSDEN supports the emerging trend of separating sensors from application-specific processing, storing and sharing. MOSDEN promotes reuse and re-purposing of sensor data, hence reducing the effort in developing novel opportunistic sensing applications. MOSDEN has been implemented on Android-based smartphones and tablets. Experimental evaluations validate the scalability and energy efficiency of MOSDEN and its suitability for real-world applications. The results of the evaluation and lessons learned are presented and discussed in this paper.

  7. Photonic Architecture for Scalable Quantum Information Processing in Diamond

    Directory of Open Access Journals (Sweden)

    Kae Nemoto

    2014-08-01

    Full Text Available Physics and information are intimately connected, and the ultimate information processing devices will be those that harness the principles of quantum mechanics. Many physical systems have been identified as candidates for quantum information processing, but none of them are immune from errors. The challenge remains to find a path from the experiments of today to a reliable and scalable quantum computer. Here, we develop an architecture based on a simple module comprising an optical cavity containing a single negatively charged nitrogen vacancy center in diamond. Modules are connected by photons propagating in a fiber-optical network and collectively used to generate a topological cluster state, a robust substrate for quantum information processing. In principle, all processes in the architecture can be deterministic, but current limitations lead to processes that are probabilistic but heralded. We find that the architecture enables large-scale quantum information processing with existing technology.

  8. Neuromorphic adaptive plastic scalable electronics: analog learning systems.

    Science.gov (United States)

    Srinivasa, Narayan; Cruz-Albrecht, Jose

    2012-01-01

    Decades of research to build programmable intelligent machines have demonstrated limited utility in complex, real-world environments. Comparing their performance with biological systems, these machines are less efficient by a factor of one million to one billion in complex, real-world environments. The Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program is a multifaceted Defense Advanced Research Projects Agency (DARPA) project that seeks to break the programmable machine paradigm and define a new path for creating useful, intelligent machines. Since real-world systems exhibit infinite combinatorial complexity, electronic neuromorphic machine technology would be preferable in a host of applications, but useful and practical implementations still do not exist. HRL Laboratories LLC has embarked on addressing these challenges, and, in this article, we provide an overview of our project and progress made thus far.

  9. Implementation of the Timepix ASIC in the Scalable Readout System

    Energy Technology Data Exchange (ETDEWEB)

    Lupberger, M., E-mail: lupberger@physik.uni-bonn.de; Desch, K.; Kaminski, J.

    2016-09-11

    We report on the development of electronics hardware, FPGA firmware and software to provide a flexible multi-chip readout of the Timepix ASIC within the framework of the Scalable Readout System (SRS). The system features FPGA-based zero-suppression and the possibility to read out up to 4×8 chips with a single Front End Concentrator (FEC). By operating several FECs in parallel, in principle an arbitrary number of chips can be read out, exploiting the scaling features of SRS. Specifically, we tested the system with a setup consisting of 160 Timepix ASICs, operated as GridPix devices in a large TPC field cage in a 1 T magnetic field at a DESY test beam facility providing an electron beam of up to 6 GeV. We discuss the design choices, the dedicated hardware components, the FPGA firmware as well as the performance of the system in the test beam.

  10. A Scalable Framework and Prototype for CAS e-Science

    Directory of Open Access Journals (Sweden)

    Yuanchun Zhou

    2007-07-01

    Full Text Available Based on the Small-World model of CAS e-Science and the power law of the Internet, this paper presents a scalable CAS e-Science Grid framework based on virtual regions, called the Virtual Region Grid Framework (VRGF). VRGF takes the virtual region and layer as logical management units. In VRGF, the model of the intra-virtual region is pure P2P, and the model of the inter-virtual region is centralized. Therefore, VRGF is a decentralized framework with some P2P properties. Furthermore, VRGF is able to achieve satisfactory performance in organizing and locating resources at a small cost, and is well adapted to the complicated and dynamic features of scientific collaborations. We have implemented a demonstration VRGF-based Grid prototype, SDG.

  11. A scalable implementation of RI-SCF on parallel computers

    International Nuclear Information System (INIS)

    Fruechtl, H.A.; Kendall, R.A.; Harrison, R.J.

    1996-01-01

    In order to avoid the integral bottleneck of conventional SCF calculations, the Resolution of the Identity (RI) method is used to obtain an approximate solution to the Hartree-Fock equations. In this approximation only three-center integrals are needed to build the Fock matrix. It has been implemented as part of the NWChem package of portable and scalable ab initio programs for parallel computers. Utilizing the V-approximation, both the Coulomb and exchange contributions to the Fock matrix can be calculated from a transformed set of three-center integrals which have to be precalculated and stored. A distributed in-core method as well as a disk-based implementation have been programmed. Details of the implementation as well as the parallel programming tools used are described. We also give results and timings from benchmark calculations

  12. Scalable Lunar Surface Networks and Adaptive Orbit Access

    Science.gov (United States)

    Wang, Xudong

    2015-01-01

    Teranovi Technologies, Inc., has developed innovative network architecture, protocols, and algorithms for both lunar surface and orbit access networks. A key component of the overall architecture is a medium access control (MAC) protocol that includes a novel mechanism of overlaying time division multiple access (TDMA) and carrier sense multiple access with collision avoidance (CSMA/CA), ensuring scalable throughput and quality of service. The new MAC protocol is compatible with legacy Institute of Electrical and Electronics Engineers (IEEE) 802.11 networks. Advanced features include efficient power management, adaptive channel-width adjustment, and error control capability. A hybrid routing protocol combines the advantages of ad hoc on-demand distance vector (AODV) routing and disruption/delay-tolerant network (DTN) routing. Performance is significantly better than AODV or DTN and will be particularly effective for wireless networks with intermittent links, such as lunar and planetary surface networks and orbit access networks.

  13. Scalable Creation of Long-Lived Multipartite Entanglement

    Science.gov (United States)

    Kaufmann, H.; Ruster, T.; Schmiegelow, C. T.; Luda, M. A.; Kaushal, V.; Schulz, J.; von Lindenfels, D.; Schmidt-Kaler, F.; Poschinger, U. G.

    2017-10-01

    We demonstrate the deterministic generation of multipartite entanglement based on scalable methods. Four qubits are encoded in 40Ca+, stored in a microstructured segmented Paul trap. These qubits are sequentially entangled by laser-driven pairwise gate operations. Between these, the qubit register is dynamically reconfigured via ion shuttling operations, where ion crystals are separated and merged, and ions are moved in and out of a fixed laser interaction zone. A sequence consisting of three pairwise entangling gates yields a four-ion Greenberger-Horne-Zeilinger state |ψ ⟩=(1 /√{2 })(|0000 ⟩+|1111 ⟩) , and full quantum state tomography reveals a state fidelity of 94.4(3)%. We analyze the decoherence of this state and employ dynamic decoupling on the spatially distributed constituents to maintain 69(5)% coherence at a storage time of 1.1 sec.

  14. Extending JPEG-LS for low-complexity scalable video coding

    DEFF Research Database (Denmark)

    Ukhanova, Anna; Sergeev, Anton; Forchhammer, Søren

    2011-01-01

    JPEG-LS, the well-known international standard for lossless and near-lossless image compression, was originally designed for non-scalable applications. In this paper we propose a scalable modification of JPEG-LS and compare it with the leading image and video coding standards JPEG2000 and H.264/SVC...

  15. Scalable Integrated Region-Based Image Retrieval Using IRM and Statistical Clustering.

    Science.gov (United States)

    Wang, James Z.; Du, Yanping

    Statistical clustering is critical in designing scalable image retrieval systems. This paper presents a scalable algorithm for indexing and retrieving images based on region segmentation. The method uses statistical clustering on region features and IRM (Integrated Region Matching), a measure developed to evaluate overall similarity between images…

  16. An extended systematic mapping study about the scalability of i* Models

    Directory of Open Access Journals (Sweden)

    Paulo Lima

    2016-12-01

    Full Text Available i* models have been used for requirements specification in many domains, such as healthcare, telecommunication, and air traffic control. Managing the scalability and the complexity of such models is an important challenge in Requirements Engineering (RE). Scalability is also one of the most intractable issues in the design of visual notations in general: a well-known problem with visual representations is that they do not scale well. This issue has led us to investigate scalability in i* models and their variants by means of a systematic mapping study. This paper is an extended version of a previous paper on the scalability of i*, now including papers indicated by specialists. Moreover, we also discuss the challenges and open issues regarding the scalability of i* models and their variants. A total of 126 papers were analyzed in order to understand how the RE community perceives scalability and which proposals have considered this topic. We found that scalability issues are indeed perceived as relevant and that further work is still required, even though many potential solutions have already been proposed. This study can be a starting point for researchers aiming to further advance the treatment of scalability in i* models.

  17. Building a scalable event-level metadata service for ATLAS

    International Nuclear Information System (INIS)

    Cranshaw, J; Malon, D; Goosens, L; Viegas, F T A; McGlone, H

    2008-01-01

    The ATLAS TAG Database is a multi-terabyte event-level metadata selection system, intended to allow discovery, selection of and navigation to events of interest to an analysis. The TAG Database encompasses file- and relational-database-resident event-level metadata, distributed across all ATLAS Tiers. An Oracle-hosted global TAG relational database, containing all ATLAS events, will exist at Tier 0. Implementing a system that is both performant and manageable at this scale is a challenge. A 1 TB relational TAG Database has been deployed at Tier 0 using simulated tag data. The database contains one billion events, each described by two hundred event metadata attributes, and is currently undergoing extensive testing in terms of queries, population and manageability. These 1 TB tests aim to demonstrate and optimise the performance and scalability of an Oracle TAG Database on a global scale. Partitioning and indexing strategies are crucial to well-performing queries and manageability of the database and have implications for database population and distribution, so these are investigated. Physics query patterns are anticipated, but a crucial feature of the system must be to support a broad range of queries across all attributes. Concurrently, event tags from ATLAS Computing System Commissioning distributed simulations are accumulated in an Oracle-hosted database at CERN, providing an event-level selection service valuable for user experience and for gathering information about physics query patterns. In this paper we describe the status of the global TAG relational database scalability work and highlight areas of future direction

  18. fastBMA: scalable network inference and transitive reduction.

    Science.gov (United States)

    Hung, Ling-Hong; Shi, Kaiyuan; Wu, Migao; Young, William Chad; Raftery, Adrian E; Yeung, Ka Yee

    2017-10-01

    Inferring genetic networks from genome-wide expression data is extremely demanding computationally. We have developed fastBMA, a distributed, parallel, and scalable implementation of Bayesian model averaging (BMA) for this purpose. fastBMA also includes a computationally efficient module for eliminating redundant indirect edges in the network by mapping the transitive reduction to an easily solved shortest-path problem. We evaluated the performance of fastBMA on synthetic data and experimental genome-wide time series yeast and human datasets. When using a single CPU core, fastBMA is up to 100 times faster than the next fastest method, LASSO, with increased accuracy. It is a memory-efficient, parallel, and distributed application that scales to human genome-wide expression data. A 10 000-gene regulation network can be obtained in a matter of hours using a 32-core cloud cluster (2 nodes of 16 cores). fastBMA is a significant improvement over its predecessor ScanBMA. It is more accurate and orders of magnitude faster than other fast network inference methods such as the one based on LASSO. The improved scalability allows it to calculate networks from genome-scale data in a reasonable time frame. The transitive reduction method can improve accuracy in denser networks. fastBMA is available as code (M.I.T. license) from GitHub (https://github.com/lhhunghimself/fastBMA), as part of the updated networkBMA Bioconductor package (https://www.bioconductor.org/packages/release/bioc/html/networkBMA.html) and as ready-to-deploy Docker images (https://hub.docker.com/r/biodepot/fastbma/). © The Authors 2017. Published by Oxford University Press.
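
    The effect of transitive reduction is easy to demonstrate on a toy directed network; the naive loop below (valid for acyclic graphs only) removes an edge whenever an indirect path makes it redundant, whereas fastBMA solves an equivalent shortest-path formulation far more efficiently.

      import networkx as nx

      # Toy regulatory network with one redundant indirect edge (hypothetical).
      G = nx.DiGraph([("A", "B"), ("B", "C"), ("A", "C")])

      for u, v in list(G.edges()):
          G.remove_edge(u, v)
          if not nx.has_path(G, u, v):  # no indirect route exists,
              G.add_edge(u, v)          # so the edge was essential
      print(sorted(G.edges()))  # [('A', 'B'), ('B', 'C')]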

  19. Joint-layer encoder optimization for HEVC scalable extensions

    Science.gov (United States)

    Tsai, Chia-Ming; He, Yuwen; Dong, Jie; Ye, Yan; Xiu, Xiaoyu; He, Yong

    2014-09-01

    Scalable video coding provides an efficient solution to support video playback on heterogeneous devices with various channel conditions in heterogeneous networks. SHVC is the latest scalable video coding standard, based on the HEVC standard. To improve enhancement layer coding efficiency, inter-layer prediction, including texture and motion information generated from the base layer, is used for enhancement layer coding. However, the overall performance of the SHVC reference encoder is not fully optimized because the rate-distortion optimization (RDO) processes in the base and enhancement layers are considered independently. It is difficult to directly extend the existing joint-layer optimization methods to SHVC due to the complicated coding tree block splitting decisions and the in-loop filtering process (e.g., deblocking and sample adaptive offset (SAO) filtering) in HEVC. To solve those problems, a joint-layer optimization method is proposed that adjusts the quantization parameter (QP) to optimally allocate the bit resource between layers. Furthermore, to allocate resources more appropriately, the proposed method also considers the viewing probability of the base and enhancement layers according to the packet loss rate. Based on the viewing probability, a novel joint-layer RD cost function is proposed for joint-layer RDO encoding. The QP values of those coding tree units (CTUs) belonging to lower layers referenced by higher layers are decreased accordingly, and the QP values of the remaining CTUs are increased to keep the total bits unchanged. Finally, the QP values with minimal joint-layer RD cost are selected to match the viewing probability. The proposed method was applied to the third temporal level (TL-3) pictures in the Random Access configuration. Simulation results demonstrate that the proposed joint-layer optimization method can improve coding performance by 1.3% for these TL-3 pictures compared to the SHVC reference encoder without joint-layer optimization.
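
    The shape of such a viewing-probability-weighted cost can be sketched numerically; all constants below are hypothetical and the true SHVC cost terms are far more detailed.

      # Joint-layer RD cost: with loss probability p_loss only the base layer
      # is decodable, so its distortion is weighted more heavily as losses grow.
      def joint_rd_cost(d_base, d_enh, r_total, lam, p_loss):
          return p_loss * d_base + (1.0 - p_loss) * d_enh + lam * r_total

      # Candidate (QP offset, D_base, D_enh, R_total) tuples -- hypothetical.
      candidates = [(-2, 40.0, 21.0, 1050.0),
                    (0, 46.0, 23.0, 1000.0),
                    (2, 55.0, 26.0, 960.0)]
      lam, p_loss = 0.02, 0.1
      best = min(candidates,
                 key=lambda c: joint_rd_cost(c[1], c[2], c[3], lam, p_loss))
      print("chosen QP offset:", best[0])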

  20. Application of a scalable plant transient gene expression platform for malaria vaccine development

    Directory of Open Access Journals (Sweden)

    Holger eSpiegel

    2015-12-01

    Full Text Available Despite decades of intensive research efforts there is currently no vaccine that provides sustained sterile immunity against malaria. In this context, a large number of targets from the different stages of the Plasmodium falciparum life cycle have been evaluated as vaccine candidates. None of these candidates has fulfilled expectations, and as long as we lack a single target that induces strain-transcending protective immune responses, combining key antigens from different life cycle stages seems to be the most promising route towards the development of efficacious malaria vaccines. After the identification of potential targets using approaches such as omics-based technology and reverse immunology, the rapid expression, purification and characterization of these proteins, as well as the generation and analysis of fusion constructs combining different promising antigens or antigen domains before committing to expensive and time-consuming clinical development, represent one of the bottlenecks in the vaccine development pipeline. The production of recombinant proteins by transient gene expression in plants is a robust and versatile alternative to cell-based microbial and eukaryotic production platforms. The transfection of plant tissues and/or whole plants using Agrobacterium tumefaciens offers a low technical entry barrier, low costs and a high degree of flexibility embedded within a rapid and scalable workflow. Recombinant proteins can easily be targeted to different subcellular compartments according to their physicochemical requirements, including post-translational modifications, to ensure optimal yields of high quality product, and to support simple and economical downstream processing. Here we demonstrate the use of a plant transient expression platform based on transfection with A. tumefaciens as an essential component of a malaria vaccine development workflow involving screens for expression, solubility and stability using fluorescent fusion

  2. Scalable Predictive Analysis in Critically Ill Patients Using a Visual Open Data Analysis Platform.

    Science.gov (United States)

    Van Poucke, Sven; Zhang, Zhongheng; Schmitz, Martin; Vukicevic, Milan; Laenen, Margot Vander; Celi, Leo Anthony; De Deyne, Cathy

    2016-01-01

    With the accumulation of large amounts of health related data, predictive analytics could stimulate the transformation of reactive medicine towards Predictive, Preventive and Personalized (PPPM) Medicine, ultimately affecting both cost and quality of care. However, high-dimensionality and high-complexity of the data involved, prevents data-driven methods from easy translation into clinically relevant models. Additionally, the application of cutting edge predictive methods and data manipulation require substantial programming skills, limiting its direct exploitation by medical domain experts. This leaves a gap between potential and actual data usage. In this study, the authors address this problem by focusing on open, visual environments, suited to be applied by the medical community. Moreover, we review code free applications of big data technologies. As a showcase, a framework was developed for the meaningful use of data from critical care patients by integrating the MIMIC-II database in a data mining environment (RapidMiner) supporting scalable predictive analytics using visual tools (RapidMiner's Radoop extension). Guided by the CRoss-Industry Standard Process for Data Mining (CRISP-DM), the ETL process (Extract, Transform, Load) was initiated by retrieving data from the MIMIC-II tables of interest. As use case, correlation of platelet count and ICU survival was quantitatively assessed. Using visual tools for ETL on Hadoop and predictive modeling in RapidMiner, we developed robust processes for automatic building, parameter optimization and evaluation of various predictive models, under different feature selection schemes. Because these processes can be easily adopted in other projects, this environment is attractive for scalable predictive analytics in health research.

  3. A scalable geometric multigrid solver for nonsymmetric elliptic systems with application to variable-density flows

    Science.gov (United States)

    Esmaily, M.; Jofre, L.; Mani, A.; Iaccarino, G.

    2018-03-01

    A geometric multigrid algorithm is introduced for solving nonsymmetric linear systems resulting from the discretization of the variable density Navier-Stokes equations on nonuniform structured rectilinear grids and high-Reynolds number flows. The restriction operation is defined such that the resulting system on the coarser grids is symmetric, thereby allowing for the use of efficient smoother algorithms. To achieve an optimal rate of convergence, the sequence of interpolation and restriction operations are determined through a dynamic procedure. A parallel partitioning strategy is introduced to minimize communication while maintaining the load balance between all processors. To test the proposed algorithm, we consider two cases: 1) homogeneous isotropic turbulence discretized on uniform grids and 2) turbulent duct flow discretized on stretched grids. Testing the algorithm on systems with up to a billion unknowns shows that the cost varies linearly with the number of unknowns. This O(N) behavior confirms the robustness of the proposed multigrid method regarding ill-conditioning of large systems characteristic of multiscale high-Reynolds number turbulent flows. The robustness of our method to density variations is established by considering cases where density varies sharply in space by a factor of up to 10^4, showing its applicability to two-phase flow problems. Strong and weak scalability studies are carried out, employing up to 30,000 processors, to examine the parallel performance of our implementation. Excellent scalability of our solver is shown for a granularity as low as 10^4 to 10^5 unknowns per processor. At its tested peak throughput, it solves approximately 4 billion unknowns per second employing over 16,000 processors with a parallel efficiency higher than 50%.
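
    The smooth/restrict/correct structure at the heart of any geometric multigrid solver can be shown with a minimal 1D Poisson V-cycle in numpy; the paper's solver is 3D, nonsymmetric and parallel, so this is only a structural sketch with hypothetical grid sizes.

      import numpy as np

      def smooth(u, f, h, iters=3, omega=2/3):
          for _ in range(iters):  # weighted Jacobi for -u'' = f
              u[1:-1] += omega * 0.5 * (u[:-2] + u[2:] + h*h*f[1:-1] - 2*u[1:-1])
          return u

      def residual(u, f, h):
          r = np.zeros_like(u)
          r[1:-1] = f[1:-1] - (2*u[1:-1] - u[:-2] - u[2:]) / (h*h)
          return r

      def v_cycle(u, f, h):
          u = smooth(u, f, h)
          if len(u) <= 3:
              return u
          r = residual(u, f, h)
          rc = r[::2].copy()  # full-weighting restriction to the coarse grid
          rc[1:-1] = 0.25 * (r[1:-2:2] + 2.0 * r[2:-2:2] + r[3::2])
          ec = v_cycle(np.zeros_like(rc), rc, 2*h)
          e = np.zeros_like(u)  # prolong the coarse correction by interpolation
          e[::2] = ec
          e[1:-1:2] = 0.5 * (ec[:-1] + ec[1:])
          return smooth(u + e, f, h)

      n = 129  # 2**k + 1 points keeps every coarsening level exact
      x = np.linspace(0.0, 1.0, n)
      h = x[1] - x[0]
      f = np.pi**2 * np.sin(np.pi * x)  # exact solution: sin(pi*x)
      u = np.zeros(n)
      for _ in range(10):
          u = v_cycle(u, f, h)
      print("max error:", np.abs(u - np.sin(np.pi * x)).max())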

  4. The ELPA library: scalable parallel eigenvalue solutions for electronic structure theory and computational science.

    Science.gov (United States)

    Marek, A; Blum, V; Johanni, R; Havu, V; Lang, B; Auckenthaler, T; Heinecke, A; Bungartz, H-J; Lederer, H

    2014-05-28

    sizes arising in the field of electronic structure theory is demonstrated for current high-performance computer architectures such as Cray or Intel/Infiniband. For a matrix of dimension 260,000, scalability up to 295,000 CPU cores has been shown on BlueGene/P.

  5. SCALABLE PHOTOGRAMMETRIC MOTION CAPTURE SYSTEM “MOSCA”: DEVELOPMENT AND APPLICATION

    Directory of Open Access Journals (Sweden)

    V. A. Knyaz

    2015-05-01

    Full Text Available A wide variety of applications (from industrial to entertainment) have a need for reliable and accurate 3D information about the motion of an object and its parts. Very often the movement is rather fast, as in cases of vehicle motion, sports biomechanics, or the animation of cartoon characters. Motion capture systems based on different physical principles are used for these purposes. Vision-based systems have great potential for high accuracy and a high degree of automation due to progress in image processing and analysis. A scalable, inexpensive motion capture system has been developed as a convenient and flexible tool for solving various tasks requiring 3D motion analysis. It is based on photogrammetric techniques of 3D measurement and provides high-speed image acquisition, high accuracy of 3D measurements and highly automated processing of captured data. Depending on the application, the system can easily be modified for different working areas from 100 mm to 10 m. The developed motion capture system uses from 2 to 4 technical vision cameras for acquiring video sequences of object motion. All cameras work in synchronization mode at a frame rate of up to 100 frames per second under the control of a personal computer, providing the possibility of accurate calculation of the 3D coordinates of points of interest. The system was used in a set of different application fields and demonstrated high accuracy and a high level of automation.
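
    The photogrammetric core of such a system, recovering 3D point coordinates from two synchronized, calibrated views, can be sketched with OpenCV; the projection matrices and image observations below are hypothetical, constructed so that both points lie at depth z = 2.

      import numpy as np
      import cv2  # OpenCV's linear triangulation routine

      # Two calibrated cameras as 3x4 projection matrices P = K [R | t].
      P1 = np.hstack([np.eye(3), np.zeros((3, 1))]).astype(np.float32)
      P2 = np.hstack([np.eye(3),
                      np.array([[-0.2], [0.0], [0.0]])]).astype(np.float32)

      # Matched observations of two marker points in each view (2 x N arrays).
      pts1 = np.array([[0.10, 0.30],   # x coordinates in view 1
                       [0.20, 0.25]], dtype=np.float32)  # y coordinates
      pts2 = np.array([[0.00, 0.20],   # x coordinates in view 2
                       [0.20, 0.25]], dtype=np.float32)  # y coordinates

      X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)  # homogeneous 4 x N
      X = X_h[:3] / X_h[3]  # de-homogenize to 3D points, one per column
      print(X.T)            # both points reconstruct at depth z = 2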

  6. Solution-Processing of Organic Solar Cells: From In Situ Investigation to Scalable Manufacturing

    KAUST Repository

    Abdelsamie, Maged

    2016-12-05

    Photovoltaics provide a feasible route to fulfilling the substantial increase in demand for energy worldwide. Solution-processable organic photovoltaics (OPVs) have attracted attention in the last decade because of the promise of low-cost manufacturing of sufficiently efficient devices at high throughput on large-area rigid or flexible substrates with potentially low energy and carbon footprints. In OPVs, the photoactive layer is made of a bulk heterojunction (BHJ) layer and is typically composed of a blend of an electron-donating (D) and an electron-accepting (A) material which phase separate at the nanoscale and form a heterojunction at the D-A interface that plays a crucial role in the generation of charges. Despite the tremendous progress that has been made in increasing the efficiency of organic photovoltaics over the last few years, with power conversion efficiency increasing from 8% to 13% over the duration of this PhD dissertation, there have been numerous debates on the mechanisms of formation of the crucial BHJ layer and few clues about how to successfully transfer these lessons to scalable processes. This stems in large part from a lack of understanding of how BHJ layers form from solution. This lack of understanding makes it challenging to design BHJs and to control their formation in laboratory-based processes, such as spin-coating, let alone their successful transfer to the scalable processes required for the manufacturing of organic solar cells. Consequently, the OPV community has in recent years sought to better understand the key characteristics of state-of-the-art lab-based organic solar cells and made efforts to shed light on how the BHJ forms in laboratory-based processes as well as in scalable processes. We take the view that understanding the formation of the solution-processed bulk heterojunction (BHJ) photoactive layer, where crucial photovoltaic processes take place, is one of the most crucial steps to developing strategies towards the

  7. A scalable system for production of functional pancreatic progenitors from human embryonic stem cells.

    Directory of Open Access Journals (Sweden)

    Thomas C Schulz

    Full Text Available Development of a human embryonic stem cell (hESC)-based therapy for type 1 diabetes will require the translation of proof-of-principle concepts into a scalable, controlled, and regulated cell manufacturing process. We have previously demonstrated that hESC can be directed to differentiate into pancreatic progenitors that mature into functional glucose-responsive, insulin-secreting cells in vivo. In this study we describe hESC expansion and banking methods and a suspension-based differentiation system, which together underpin an integrated scalable manufacturing process for producing pancreatic progenitors. This system has been optimized for the CyT49 cell line. Accordingly, qualified large-scale single-cell master and working cGMP cell banks of CyT49 have been generated to provide a virtually unlimited starting resource for manufacturing. Upon thaw from these banks, we expanded CyT49 for two weeks in an adherent culture format that achieves 50-100 fold expansion per week. Undifferentiated CyT49 were then aggregated into clusters in dynamic rotational suspension culture, followed by differentiation en masse for two weeks with a four-stage protocol. Numerous scaled differentiation runs generated reproducible and defined population compositions highly enriched for pancreatic cell lineages, as shown by examining mRNA expression at each stage of differentiation and flow cytometry of the final population. Islet-like tissue containing glucose-responsive, insulin-secreting cells was generated upon implantation into mice. By four to five months post-engraftment, mature neo-pancreatic tissue was sufficient to protect against streptozotocin (STZ)-induced hyperglycemia. In summary, we have developed a tractable manufacturing process for the generation of functional pancreatic progenitors from hESC on a scale amenable to clinical entry.

  8. A Cross-Platform Infrastructure for Scalable Runtime Application Performance Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Jack Dongarra; Shirley Moore; Bart Miller; Jeffrey Hollingsworth; Tracy Rafferty

    2005-03-15

    The purpose of this project was to build an extensible cross-platform infrastructure to facilitate the development of accurate and portable performance analysis tools for current and future high performance computing (HPC) architectures. Major accomplishments include tools and techniques for multidimensional performance analysis, as well as improved support for dynamic performance monitoring of multithreaded and multiprocess applications. Previous performance tool development has been limited by the burden of having to re-write a platform-dependent low-level substrate for each architecture/operating system pair in order to obtain the necessary performance data from the system. Manual interpretation of performance data is not scalable for large-scale long-running applications. The infrastructure developed by this project provides a foundation for building portable and scalable performance analysis tools, with the end goal being to provide application developers with the information they need to analyze, understand, and tune the performance of terascale applications on HPC architectures. The backend portion of the infrastructure provides runtime instrumentation capability and access to hardware performance counters, with thread-safety for shared memory environments and a communication substrate to support instrumentation of multiprocess and distributed programs. Front-end interfaces provide tool developers with a well-defined, platform-independent set of calls for requesting performance data. End-user tools have been developed that demonstrate runtime data collection, on-line and off-line analysis of performance data, and multidimensional performance analysis. The infrastructure is based on two underlying performance instrumentation technologies. These technologies are the PAPI cross-platform library interface to hardware performance counters and the cross-platform Dyninst library interface for runtime modification of executable images. The Paradyn and KOJAK
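
As a rough illustration of the layered design described above (a platform-dependent backend supplying counter data to a platform-independent front end), the sketch below separates the two concerns behind a small interface. All names here are hypothetical stand-ins; the project's actual substrates are the real PAPI and Dyninst libraries named in the abstract.

```python
from abc import ABC, abstractmethod

class CounterBackend(ABC):
    """Platform-dependent substrate; one implementation per architecture/OS."""

    @abstractmethod
    def start(self, events: list[str]) -> None: ...

    @abstractmethod
    def stop(self) -> dict[str, int]: ...

class DummyBackend(CounterBackend):
    """Stand-in backend so the sketch runs without hardware counters."""

    def start(self, events):
        self._events = events

    def stop(self):
        return {e: 0 for e in self._events}

def measure(backend: CounterBackend, events, workload):
    """Platform-independent front-end call: run `workload`, return counts."""
    backend.start(events)
    workload()
    return backend.stop()

counts = measure(DummyBackend(), ["TOT_CYC", "TOT_INS"], lambda: sum(range(10**6)))
print(counts)
```

Tool developers code against `measure`-style calls; porting to a new platform then means writing only a new `CounterBackend`, which is the portability argument the abstract makes.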

  9. Bit-depth scalable video coding with new inter-layer prediction

    Directory of Open Access Journals (Sweden)

    Chiang Jui-Chiu

    2011-01-01

    Full Text Available The rapid advances in the capture and display of high-dynamic-range (HDR) image/video content make it imperative to develop efficient compression techniques to deal with the huge amounts of HDR data. Since HDR devices are not yet widespread, compatibility problems must be considered when rendering HDR content on conventional display devices. To this end, in this study, we propose three H.264/AVC-based bit-depth scalable video-coding schemes, called the LH scheme (low bit-depth to high bit-depth), the HL scheme (high bit-depth to low bit-depth), and the combined LH-HL scheme. The schemes efficiently exploit the high correlation between the high and the low bit-depth layers at the macroblock (MB) level. Experimental results demonstrate that the HL scheme outperforms the other two schemes in some scenarios. Moreover, it achieves up to 7 dB improvement over the simulcast approach when the high and low bit-depth representations are 12 bits and 8 bits, respectively.
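
To make the idea of inter-layer prediction concrete: in bit-depth scalable coding, the low bit-depth base layer can predict each high bit-depth macroblock via inverse tone mapping, so the enhancement layer only has to code the residual. The sketch below is a generic illustration of that principle, using a simple linear scaling as the inverse tone map; it is not the specific LH/HL prediction modes proposed in the paper.

```python
import numpy as np

def inverse_tone_map(base_mb: np.ndarray, low_bits: int = 8, high_bits: int = 12):
    """Predict a high bit-depth macroblock from its low bit-depth version
    with a simple linear inverse tone map (a real codec would typically use
    a look-up table signalled in the bitstream)."""
    return base_mb.astype(np.int32) << (high_bits - low_bits)

rng = np.random.default_rng(0)
hdr_mb = rng.integers(0, 2**12, size=(16, 16))   # 12-bit "original" macroblock
base_mb = (hdr_mb >> 4).astype(np.uint8)         # its 8-bit base-layer version
prediction = inverse_tone_map(base_mb)
residual = hdr_mb - prediction                   # what the enhancement layer codes
print("residual range:", residual.min(), residual.max())
```

The better the inter-layer predictor, the smaller and cheaper-to-code the residual, which is where the gain over simulcast comes from.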

  10. Designing Psychological Treatments for Scalability: The PREMIUM Approach.

    Directory of Open Access Journals (Sweden)

    Sukumar Vellakkal

    Full Text Available Lack of access to empirically supported psychological treatments (EPTs) that are contextually appropriate and feasible for delivery by non-specialist health workers (referred to as 'counsellors') is a major barrier to the treatment of mental health problems in resource-poor countries. To address this barrier, the 'Program for Effective Mental Health Interventions in Under-resourced Health Systems' (PREMIUM) designed a method for the development of EPTs for severe depression and harmful drinking. This was implemented over three years in India. This study assessed the relative usefulness and costs of the five 'steps' (systematic reviews, in-depth interviews, key informant surveys, workshops with international experts, and workshops with local experts) in the first phase of identifying the strategies and theoretical model of the treatment, and the two 'steps' (case series with specialists, and case series and pilot trial with counsellors) in the second phase of enhancing the acceptability and feasibility of its delivery by counsellors, with the aim of arriving at a parsimonious set of steps for future investigators to use when developing scalable EPTs. The study used two sources of data: usefulness ratings by the investigators and resource utilization. The usefulness of each of the seven steps was assessed through ratings by the investigators involved in the development of each of the two EPTs, viz. the Healthy Activity Program for severe depression and Counselling for Alcohol Problems for harmful drinking. Quantitative responses were elicited to rate utility (usefulness/influence), followed by open-ended questions explaining the rankings. The resources used by PREMIUM were computed in terms of time (months) and monetary costs. The theoretical cores of the new treatments were consistent with those of EPTs derived from global evidence, viz. Behavioural Activation and Motivational Enhancement for severe depression and harmful drinking, respectively.

  11. Towards Reliable, Scalable, and Energy Efficient Cognitive Radio Systems

    KAUST Repository

    Sboui, Lokman

    2017-11-01

    The cognitive radio (CR) concept is expected to be adopted along with many technologies to meet the requirements of the next generation of wireless and mobile systems, 5G. Consequently, it is important to determine the performance of CR systems with respect to these requirements. In this thesis, after briefly describing the 5G requirements, we present three main directions in which we aim to enhance CR performance. The first direction is reliability. We study the achievable rate of a multiple-input multiple-output (MIMO) relay-assisted CR under two scenarios: an unmanned aerial vehicle (UAV) one-way relaying (OWR) and a fixed two-way relaying (TWR). We propose special linear precoding schemes that enable the secondary user (SU) to take advantage of the primary-free channel eigenmodes. We study the sensitivity of the SU rate to the relay power, the relay gain, the UAV altitude, the number of antennas, and the availability of line of sight. The second direction is scalability. We first study a multiple-access channel (MAC) scenario with multiple SUs. We propose a particular linear precoding and SU selection scheme maximizing their sum-rate. We show that the proposed scheme provides a significant sum-rate improvement as the number of SUs increases. Secondly, we extend our scalability study to cognitive cellular networks. We propose a low-complexity algorithm for base station activation/deactivation and dynamic spectrum management that maximizes the profits of primary and secondary networks subject to green constraints. We show that our proposed algorithms achieve performance close to that obtained with the exhaustive search method. The third direction is energy efficiency (EE). We present a novel power allocation scheme based on maximizing the EE of both single-input single-output (SISO) and MIMO systems. We solve a non-convex problem and derive explicit expressions for the corresponding optimal power. When the instantaneous channel is not available, we
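
For the energy-efficiency direction, a standard formulation (assumed here, since the abstract does not give the exact model) maximizes EE = R(p)/(P_c + p), the ratio of achievable rate to total consumed power. Fractional objectives of this kind are commonly solved with Dinkelbach's algorithm, sketched below for a SISO link with rate R(p) = log2(1 + g*p/n0); the constants are purely illustrative.

```python
import numpy as np

def ee_optimal_power(g, n0, p_circuit, p_max, tol=1e-9):
    """Dinkelbach's algorithm for max_p log2(1 + g*p/n0) / (p_circuit + p)."""
    rate = lambda p: np.log2(1.0 + g * p / n0)
    lam = 0.0
    while True:
        # Inner problem: max_p rate(p) - lam*(p_circuit + p). It is concave
        # in p; a fine grid is used here for clarity instead of closed form.
        grid = np.linspace(0.0, p_max, 100_000)
        obj = rate(grid) - lam * (p_circuit + grid)
        p = grid[np.argmax(obj)]
        if rate(p) - lam * (p_circuit + p) < tol:
            return p, lam   # lam converges to the optimal EE (bits/joule/Hz)
        lam = rate(p) / (p_circuit + p)

p_star, ee_star = ee_optimal_power(g=1.0, n0=1.0, p_circuit=0.1, p_max=10.0)
print(f"optimal transmit power {p_star:.3f} W, EE {ee_star:.3f} bits/J/Hz")
```

Note that the EE-optimal power is generally below p_max: past a point, extra transmit power buys rate more slowly than it costs energy.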

  12. A Scalable Process for Production of Single-walled Carbon Nanotubes (SWNTs) by Catalytic Disproportionation of CO on a Solid Catalyst

    International Nuclear Information System (INIS)

    Resasco, D.E.; Alvarez, W.E.; Pompeo, F.; Balzano, L.; Herrera, J.E.; Kitiyanan, B.; Borgna, A.

    2002-01-01

    Existing single-walled carbon nanotube synthesis methods are not easily scalable, operate under severe conditions, and involve high capital and operating costs. The current cost of SWNTs is exceedingly high. A catalytic method of synthesis has been developed that has shown potential advantages over the existing methods. This method is based on a catalyst formulation that inhibits the formation of undesired forms of carbon; it can be scaled up and may result in lower production costs.
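
For readers outside catalysis: the catalytic disproportionation of CO named in the title is the Boudouard reaction, with the solid catalyst directing the deposited carbon into nanotube growth rather than undesired carbon forms (the SWNT subscript below is an annotation, not standard notation):

```latex
2\,\mathrm{CO}_{(g)} \;\longrightarrow\; \mathrm{C}_{(\mathrm{SWNT})} + \mathrm{CO}_{2\,(g)}
```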

  13. A practical multilayered conducting polymer actuator with scalable work output

    International Nuclear Information System (INIS)

    Ikushima, Kimiya; John, Stephen; Yokoyama, Kazuo; Nagamitsu, Sachio

    2009-01-01

    Household assistance robots are expected to become more prominent in the future and will require inherently safe design. Conducting polymer-based artificial muscle actuators are one potential option for achieving this safety, as they are flexible, lightweight and can be driven using low input voltages, unlike electromagnetic motors; however, practical implementation also requires a scalable structure and stability in air. In this paper we propose and practically implement a multilayer conducting polymer actuator which could achieve these targets using polypyrrole film and ionic liquid-soaked separators. The practical work density of a nine-layer multilayer actuator was 1.4 kJ m-3 at 0.5 Hz, when the volumes of the electrolyte and counter electrodes were included, which approaches the performance of mammalian muscle. To achieve air stability, we analyzed the effect of air-stable ionic liquid gels on actuator displacement using finite element simulation, and it was found that the majority of strain could be retained when the elastic modulus of the gel was kept below 3 kPa. As a result of this work, we have shown that multilayered conducting polymer actuators are a feasible idea for household robotics, as they provide a substantial practical work density in a compact structure and can be easily scaled as required.

  14. Online Hashing for Scalable Remote Sensing Image Retrieval

    Directory of Open Access Journals (Sweden)

    Peng Li

    2018-05-01

    Full Text Available Recently, hashing-based large-scale remote sensing (RS) image retrieval has attracted much attention. Many new hashing algorithms have been developed and successfully applied to fast RS image retrieval tasks. However, there exists an important problem rarely addressed in the research literature on RS image hashing. In practice, RS images are produced in a streaming manner in many real-world applications, which means the data distribution keeps changing over time. Most existing RS image hashing methods are batch-based models whose hash functions are learned once and for all and then kept fixed. Therefore, the pre-trained hash functions might not fit the ever-growing new RS images. Moreover, batch-based models have to load all the training images into memory for model learning, which consumes substantial computing and memory resources. To address the above deficiencies, we propose a new online hashing method, which learns and adapts its hash functions to newly incoming RS images by means of a novel online partial random learning scheme. Our hash model is updated in a sequential mode such that the representative power of the learned binary codes for RS images is improved accordingly. Moreover, benefiting from the online learning strategy, our proposed hashing approach is well suited to scalable real-world remote sensing image retrieval. Extensive experiments on two large-scale RS image databases under an online setting demonstrated the efficacy and effectiveness of the proposed method.
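
As a concrete (and deliberately simplified) picture of online hashing, the sketch below keeps a random-projection hash function and refines the projection matrix with a small gradient step on each incoming batch, so binary codes adapt to the shifting data distribution without retraining on the full archive. It illustrates the online-update idea only; the paper's actual online partial random learning scheme is different.

```python
import numpy as np

class OnlineHasher:
    """Toy online hashing: random projections refined per incoming batch."""

    def __init__(self, dim, n_bits, lr=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((dim, n_bits))
        self.lr = lr

    def encode(self, X):
        return (X @ self.W > 0).astype(np.uint8)   # binary codes

    def update(self, X):
        """One sequential step: reduce the quantization error ||B - XW||^2
        with B = sign(XW) held fixed (an ITQ-style surrogate objective)."""
        proj = X @ self.W
        B = np.sign(proj)
        grad = -2.0 * X.T @ (B - proj) / len(X)
        self.W -= self.lr * grad

rng = np.random.default_rng(1)
hasher = OnlineHasher(dim=128, n_bits=32)
for _ in range(100):                    # stream of feature batches
    batch = rng.standard_normal((256, 128))
    hasher.update(batch)
codes = hasher.encode(rng.standard_normal((5, 128)))
```

Because each update touches only the current batch, memory use stays constant regardless of how large the image archive grows, which is the scalability argument made above.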

  15. Scalable global grid catalogue for Run3 and beyond

    Science.gov (United States)

    Martinez Pedreira, M.; Grigoras, C.; ALICE Collaboration

    2017-10-01

    The AliEn (ALICE Environment) file catalogue is a global unique namespace providing mapping between a UNIX-like logical name structure and the corresponding physical files distributed over 80 storage elements worldwide. Powerful search tools and hierarchical metadata information are integral parts of the system and are used by the Grid jobs as well as local users to store and access all files on the Grid storage elements. The catalogue has been in production since 2005 and over the past 11 years has grown to more than 2 billion logical file names. The backend is a set of distributed relational databases, ensuring smooth growth and fast access. Due to the anticipated rapid future growth, we are looking for ways to enhance performance and scalability by simplifying the catalogue schema while keeping the functionality intact. We investigated different backend solutions, such as distributed key-value stores, as a replacement for the relational database. This contribution covers the architectural changes in the system, together with the technology evaluation, benchmark results and conclusions.
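
To illustrate the kind of schema simplification under evaluation, the sketch below models the catalogue's core mapping, a UNIX-like logical file name resolved to its physical replicas on storage elements, as a flat key-value store. A plain dict stands in for a distributed key-value backend, and the paths and storage-element URLs are made up; this is a conceptual sketch, not AliEn's actual schema.

```python
# Conceptual sketch: the catalogue's core mapping as a key-value store.
catalogue: dict[str, list[str]] = {}

def register(lfn: str, pfn: str) -> None:
    """Map a logical file name to one more physical replica."""
    catalogue.setdefault(lfn, []).append(pfn)

def resolve(lfn: str) -> list[str]:
    """Return all physical replicas for a logical name."""
    return catalogue.get(lfn, [])

def list_dir(prefix: str) -> list[str]:
    """Hierarchical listing emulated by a prefix scan over flat keys."""
    return sorted(k for k in catalogue if k.startswith(prefix.rstrip("/") + "/"))

register("/alice/data/2017/run1/file.root", "root://se1.example.org//data/abc")
register("/alice/data/2017/run1/file.root", "root://se2.example.org//data/abc")
print(resolve("/alice/data/2017/run1/file.root"))
print(list_dir("/alice/data/2017"))
```

The trade-off being benchmarked is visible even here: lookups become single-key operations, while hierarchical queries must be recovered from prefix scans or secondary indexes.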

  16. SVOPME: A Scalable Virtual Organization Privileges Management Environment

    International Nuclear Information System (INIS)

    Garzoglio, Gabriele; Sfiligoi, Igor; Levshina, Tanya; Wang, Nanbor; Ananthan, Balamurali

    2010-01-01

    Grids enable uniform access to resources by implementing standard interfaces to resource gateways. In the Open Science Grid (OSG), privileges are granted on the basis of the user's membership in a Virtual Organization (VO). However, Grid sites are solely responsible for determining and controlling access privileges to resources using users' identities and personal attributes, which are available through Grid credentials. While this guarantees the sites full control over access rights, it makes VO privileges heterogeneous throughout the Grid and fits poorly with the Grid paradigm of uniform access to resources. To address these challenges, we are developing the Scalable Virtual Organization Privileges Management Environment (SVOPME), which provides tools for VOs to define and publish desired privileges and assists sites in providing the appropriate access policies. Moreover, SVOPME provides tools for Grid sites to analyze site access policies for various resources, verify compliance with preferred VO policies, and generate directives for site administrators on how the local access policies can be amended to achieve such compliance without taking control of local configurations away from site administrators. This paper discusses what access policies are of interest to the OSG community and how SVOPME implements privilege management for OSG.
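
The compliance check at the heart of this workflow can be pictured as a diff between the privileges a VO publishes as desired and those a site actually grants, producing directives for the site administrator. The sketch below is a schematic illustration with made-up policy fields; it is not SVOPME's real policy language or tooling.

```python
# Hypothetical policy records: VO-published desired privileges vs. the
# privileges a site's local configuration actually grants.
vo_desired = {"batch_submit": True, "storage_write": True, "preemption_exempt": False}
site_granted = {"batch_submit": True, "storage_write": False}

def compliance_directives(desired: dict, granted: dict) -> list[str]:
    """Compare the two policies and emit directives for the site admin."""
    directives = []
    for privilege, wanted in desired.items():
        have = granted.get(privilege, False)
        if wanted and not have:
            directives.append(f"grant '{privilege}' to the VO's role")
        elif have and not wanted:
            directives.append(f"revoke '{privilege}' from the VO's role")
    return directives

for line in compliance_directives(vo_desired, site_granted):
    print(line)   # -> grant 'storage_write' to the VO's role
```

Crucially, the output is advice to the administrator rather than an automatic configuration change, matching the paper's goal of leaving local control with the sites.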

  17. A Platform for Scalable Satellite and Geospatial Data Analysis

    Science.gov (United States)

    Beneke, C. M.; Skillman, S.; Warren, M. S.; Kelton, T.; Brumby, S. P.; Chartrand, R.; Mathis, M.

    2017-12-01

    At Descartes Labs, we use the commercial cloud to run global-scale machine learning applications over satellite imagery. We have processed over 5 Petabytes of public and commercial satellite imagery, including the full Landsat and Sentinel archives. By combining open-source tools with a FUSE-based filesystem for cloud storage, we have enabled a scalable compute platform that has demonstrated reading over 200 GB/s of satellite imagery into cloud compute nodes. In one application, we generated global 15m Landsat-8, 20m Sentinel-1, and 10m Sentinel-2 composites from 15 trillion pixels, using over 10,000 CPUs. We recently created a public open-source Python client library that can be used to query and access preprocessed public satellite imagery from within our platform, and made this platform available to researchers for non-commercial projects. In this session, we will describe how you can use the Descartes Labs Platform for rapid prototyping and scaling of geospatial analyses and demonstrate examples in land cover classification.

  18. An open, interoperable, and scalable prehospital information technology network architecture.

    Science.gov (United States)

    Landman, Adam B; Rokos, Ivan C; Burns, Kevin; Van Gelder, Carin M; Fisher, Roger M; Dunford, James V; Cone, David C; Bogucki, Sandy

    2011-01-01

    Some of the most intractable challenges in prehospital medicine include response time optimization, inefficiencies at the emergency medical services (EMS)-emergency department (ED) interface, and the difficulty of correlating field interventions with patient outcomes. Information technology (IT) can address these and other concerns by ensuring that system and patient information is received when and where it is needed, is fully integrated with prior and subsequent patient information, and is securely archived. Some EMS agencies have begun adopting information technologies, such as wireless transmission of 12-lead electrocardiograms, but few agencies have developed a comprehensive plan for management of their prehospital information and integration with other electronic medical records. This perspective article highlights the challenges and limitations of integrating IT elements without a strategic plan, and proposes an open, interoperable, and scalable prehospital information technology (PHIT) architecture. The two core components of this PHIT architecture are 1) routers with broadband network connectivity to share data between ambulance devices and EMS system information services and 2) an electronic patient care report to organize and archive all electronic prehospital data. To successfully implement this comprehensive PHIT architecture, data and technology requirements must be based on best available evidence, and the system must adhere to health data standards as well as privacy and security regulations. Recent federal legislation prioritizing health information technology may position federal agencies to help design and fund PHIT architectures.
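
To picture the second core component, the sketch below models a minimal electronic patient care report as a typed record that an ambulance router could serialize and forward to EMS information services. The fields are illustrative only; a real ePCR would follow established health data standards (e.g., NEMSIS) rather than this ad hoc schema.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class PatientCareReport:
    """Minimal illustrative ePCR; real systems follow standards such as NEMSIS."""
    incident_id: str
    unit_id: str
    chief_complaint: str
    interventions: list[str]
    vitals: dict[str, float]
    created_at: str

def to_wire(report: PatientCareReport) -> bytes:
    """Serialize for transmission over the ambulance router's broadband link."""
    return json.dumps(asdict(report)).encode("utf-8")

pcr = PatientCareReport(
    incident_id="2011-000123",
    unit_id="M-42",
    chief_complaint="chest pain",
    interventions=["12-lead ECG", "aspirin 324 mg PO"],
    vitals={"hr": 92.0, "sbp": 138.0, "spo2": 97.0},
    created_at=datetime.now(timezone.utc).isoformat(),
)
payload = to_wire(pcr)   # handed to the router; archived on receipt
```

Structured records like this are what allow field interventions to be linked with subsequent ED and outcome data, the correlation problem the article opens with.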

  19. Scalable transfer of vertical graphene nanosheets for flexible supercapacitor applications

    Science.gov (United States)

    Sahoo, Gopinath; Ghosh, Subrata; Polaki, S. R.; Mathews, Tom; Kamruddin, M.

    2017-10-01

    Vertical graphene nanosheets (VGN) are the material of choice for application in next-generation electronic devices. The growing demand for VGN-based flexible devices for the electronics industry imposes restrictions on the VGN growth temperature. The difficulty associated with the direct growth of VGN on flexible substrates can be overcome by adopting an effective strategy of transferring the well-grown VGN onto arbitrary flexible substrates through a soft chemistry route. In the present study, we report an inexpensive and scalable technique for the polymer-free transfer of VGN onto arbitrary substrates without disrupting its morphology, structure, and properties. After transfer, the morphology, chemical structure, and electrical properties are analyzed by scanning electron microscopy, Raman spectroscopy, x-ray photoelectron spectroscopy, and four-probe resistance measurements, respectively. The wetting properties are studied from water contact angle measurements. The observed results indicate the retention of morphology, surface chemistry, structure, and electronic properties. Furthermore, the storage capacity of the transferred VGN-based binder-free and current collector-free flexible symmetric supercapacitor device is studied. A very low sheet resistance of 670 Ω/□ and excellent supercapacitance of 158 μF cm-2 with 86% retention after 10,000 cycles show the prospect of the damage-free VGN transfer approach for the fabrication of flexible nanoelectronic devices.

  20. Nano-islands Based Charge Trapping Memory: A Scalability Study

    KAUST Repository

    Elatab, Nazek; Saadat, Irfan; Saraswat, Krishna; Nayfeh, Ammar

    2017-01-01

    Zinc-oxide (ZnO) and zirconia (ZrO2) metal oxides have been studied extensively in the past few decades with several potential applications including memory devices. In this work, a scalability study, based on the ITRS roadmap, is conducted on memory devices with ZnO and ZrO2 nano-island charge trapping layers. Both types of nano-islands are deposited using atomic layer deposition (ALD); however, the different sizes, distributions, and properties of the materials result in different memory performance. The results show that at the 32-nm node, a charge trapping memory with 127 ZrO2 nano-islands can provide a 9.4 V memory window, whereas with ZnO only 31 nano-islands can provide a window of 2.5 V. The results indicate that ZrO2 nano-islands are more promising than ZnO in scaled-down devices due to their higher density, higher-k, and absence of quantum confinement effects.
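
The quoted memory windows can be read through the usual first-order charge-trapping relation, in which the threshold-voltage shift is set by the trapped charge over the blocking-oxide capacitance; this is a textbook approximation assumed here for orientation, not a formula given in the paper:

```latex
\Delta V_{\mathrm{th}} \;\approx\; \frac{Q_{\mathrm{trapped}}}{C_{\mathrm{blocking}}}
\;=\; \frac{q \, N_{\mathrm{islands}} \, \bar{n}_{e}}{C_{\mathrm{blocking}}}
```

Here q is the elementary charge, N_islands the areal density of charged nano-islands, n̄_e the mean number of trapped electrons per island, and C_blocking the blocking-oxide capacitance per unit area; it makes plain why the denser ZrO2 islands support the larger window at the same node.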