WorldWideScience

Sample records for providing scalable performance

  1. Scalable Performance Measurement and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Gamblin, Todd [Univ. of North Carolina, Chapel Hill, NC (United States)]

    2009-01-01

    Concurrency levels in large-scale, distributed-memory supercomputers are rising exponentially. Modern machines may contain 100,000 or more microprocessor cores, and the largest of these, IBM's Blue Gene/L, contains over 200,000 cores. Future systems are expected to support millions of concurrent tasks. In this dissertation, we focus on efficient techniques for measuring and analyzing the performance of applications running on very large parallel machines. Tuning the performance of large-scale applications can be a subtle and time-consuming task because application developers must measure and interpret data from many independent processes. While the volume of the raw data scales linearly with the number of tasks in the running system, the number of tasks is growing exponentially, and data for even small systems quickly becomes unmanageable. Transporting performance data from so many processes over a network can perturb application performance and make measurements inaccurate, and storing such data would require a prohibitive amount of space. Moreover, even if it were stored, analyzing the data would be extremely time-consuming. In this dissertation, we present novel methods for reducing performance data volume. The first draws on multi-scale wavelet techniques from signal processing to compress systemwide, time-varying load-balance data. The second uses statistical sampling to select a small subset of running processes to generate low-volume traces. A third approach combines sampling and wavelet compression to stratify performance data adaptively at run-time and to reduce further the cost of sampled tracing. We have integrated these approaches into Libra, a toolset for scalable load-balance analysis. We present Libra and show how it can be used to analyze data from large scientific applications scalably.
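
    As a concrete illustration of the wavelet-based reduction idea, the sketch below compresses a synthetic per-process load profile by decomposing it, keeping only the largest coefficients, and reconstructing an approximation. This is a minimal stand-in, not Libra's actual implementation; it assumes the PyWavelets (pywt) package and invented load data.

    ```python
    import numpy as np
    import pywt

    # Synthetic "load" measured across 1024 processes at one time step (toy data).
    load = np.sin(np.linspace(0, 8 * np.pi, 1024)) + 0.05 * np.random.randn(1024)

    # Multi-level wavelet decomposition of the system-wide load profile.
    coeffs = pywt.wavedec(load, "haar", level=6)

    # Keep only the largest 5% of coefficients (hard thresholding) -- this is the
    # lossy compression step; everything below the threshold is dropped.
    flat = np.concatenate([np.abs(c) for c in coeffs])
    thresh = np.percentile(flat, 95)
    compressed = [pywt.threshold(c, thresh, mode="hard") for c in coeffs]

    # Reconstruct an approximation from the sparse coefficient set.
    approx = pywt.waverec(compressed, "haar")
    print("max reconstruction error:", np.max(np.abs(approx - load)))
    ```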

  2. Scalable resource management in high performance computers.

    Energy Technology Data Exchange (ETDEWEB)

    Frachtenberg, E. (Eitan); Petrini, F. (Fabrizio); Fernandez Peinador, J. (Juan); Coll, S. (Salvador)

    2002-01-01

    Clusters of workstations have emerged as an important platform for building cost-effective, scalable and highly-available computers. Although many hardware solutions are available today, the largest challenge in making large-scale clusters usable lies in the system software. In this paper we present STORM, a resource management tool designed to provide scalability, low overhead and the flexibility necessary to efficiently support and analyze a wide range of job scheduling algorithms. STORM achieves these feats by closely integrating the management daemons with the low-level features that are common in state-of-the-art high-performance system area networks. The architecture of STORM is based on three main technical innovations. First, a sizable part of the scheduler runs in the thread processor located on the network interface. Second, we use hardware collectives that are highly scalable both for implementing control heartbeats and for distributing the binary of a parallel job in near-constant time, irrespective of job and machine sizes. Third, we use an I/O bypass protocol that allows fast data movement from the file system to the communication buffers in the network interface and vice versa. The experimental results show that STORM can launch a job with a binary of 12 MB on a 64-processor/32-node cluster in less than 0.25 sec on an empty network, in less than 0.45 sec when all the processors are busy computing other jobs, and in less than 0.65 sec when the network is flooded with background traffic. This paper provides experimental and analytical evidence that these results scale to a much larger number of nodes. To the best of our knowledge, STORM is at least two orders of magnitude faster than existing production schedulers in launching jobs, performing resource management tasks and gang scheduling.
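
    The near-constant launch times reported above are what one expects from tree-structured, hardware-assisted broadcast: the cost grows with the logarithm of the node count, not linearly. The following rough model is only illustrative (it is not STORM's measurement code, and the latency, bandwidth and fan-out values are assumptions).

    ```python
    import math

    def broadcast_time(num_nodes, binary_bytes, latency_s=5e-6,
                       bandwidth_bytes_per_s=300e6, fanout=4):
        """Rough model of a k-ary tree broadcast: tree depth * (latency + transfer time)."""
        depth = max(1, math.ceil(math.log(num_nodes, fanout)))
        return depth * (latency_s + binary_bytes / bandwidth_bytes_per_s)

    # A 12 MB binary on 32 nodes vs. 4096 nodes: time grows only logarithmically.
    for n in (32, 4096):
        print(n, "nodes:", round(broadcast_time(n, 12e6), 3), "s")
    ```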

  3. Scalable service architecture for providing strong service guarantees

    Science.gov (United States)

    Christin, Nicolas; Liebeherr, Joerg

    2002-07-01

    For the past decade, a lot of Internet research has been devoted to providing different levels of service to applications. Initial proposals for service differentiation provided strong service guarantees, with strict bounds on delays, loss rates, and throughput, but required high overhead in terms of computational complexity and memory, both of which raise scalability concerns. Recently, the interest has shifted to service architectures with low overhead. However, these newer service architectures only provide weak service guarantees, which do not always address the needs of applications. In this paper, we describe a service architecture that supports strong service guarantees, can be implemented with low computational complexity, and requires only a small amount of state information to be maintained. A key feature of the proposed service architecture is that it addresses scheduling and buffer management in a single algorithm. The presented architecture offers no solution for controlling the amount of traffic that enters the network. Instead, we plan on exploiting the feedback mechanisms of TCP congestion control algorithms for the purpose of regulating the traffic entering the network.

  4. Oracle database performance and scalability a quantitative approach

    CERN Document Server

    Liu, Henry H

    2011-01-01

    A data-driven, fact-based, quantitative text on Oracle performance and scalability With database concepts and theories clearly explained in Oracle's context, readers quickly learn how to fully leverage Oracle's performance and scalability capabilities at every stage of designing and developing an Oracle-based enterprise application. The book is based on the author's more than ten years of experience working with Oracle, and is filled with dependable, tested, and proven performance optimization techniques. Oracle Database Performance and Scalability is divided into four parts that enable reader

  5. Providing scalable system software for high-end simulations

    Energy Technology Data Exchange (ETDEWEB)

    Greenberg, D. [Sandia National Labs., Albuquerque, NM (United States)]

    1997-12-31

    Detailed, full-system, complex physics simulations have been shown to be feasible on systems containing thousands of processors. In order to manage these computer systems it has been necessary to create scalable system services. In this talk Sandia's research on scalable systems will be described. The key concepts of low-overhead data movement through portals and of flexible services through multi-partition architectures will be illustrated in detail. The talk will conclude with a discussion of how these techniques can be applied outside of the standard monolithic MPP system.

  6. Performance and Scalability Evaluation of the Ceph Parallel File System

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Feiyi [ORNL]; Nelson, Mark [Inktank Storage, Inc.]; Oral, H Sarp [ORNL]; Settlemyer, Bradley W [ORNL]; Atchley, Scott [ORNL]; Caldwell, Blake A [ORNL]; Hill, Jason J [ORNL]

    2013-01-01

    Ceph is an emerging open-source parallel distributed file and storage system. By design, Ceph assumes it is running on unreliable, commodity storage and network hardware and provides reliability and fault-tolerance through controlled object placement and data replication. We evaluated the Ceph technology for scientific high-performance computing (HPC) environments. This paper presents our evaluation methodology, experiments, results and observations, mostly from parallel I/O performance and scalability perspectives. Our work made two unique contributions. First, our evaluation is performed under a realistic setup for a large-scale capability HPC environment using a commercial high-end storage system. Second, our path of investigation, tuning efforts, and findings made direct contributions to Ceph's development and improved code quality, scalability, and performance. These changes should also benefit both the Ceph and HPC communities at large. Throughout the evaluation, we observed that Ceph is still an evolving technology under fast-paced development, but it shows great promise.

  7. Improving the Performance Scalability of the Community Atmosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    Mirin, Arthur [Lawrence Livermore National Laboratory (LLNL)]; Worley, Patrick H [ORNL]

    2012-01-01

    The Community Atmosphere Model (CAM), which serves as the atmosphere component of the Community Climate System Model (CCSM), is the most computationally expensive CCSM component in typical configurations. On current and next-generation leadership class computing systems, the performance of CAM is tied to its parallel scalability. Improving performance scalability in CAM has been a challenge, due largely to algorithmic restrictions necessitated by the polar singularities in its latitude-longitude computational grid. Nevertheless, through a combination of exploiting additional parallelism, implementing improved communication protocols, and eliminating scalability bottlenecks, we have been able to more than double the maximum throughput rate of CAM on production platforms. We describe these improvements and present results on the Cray XT5 and IBM BG/P. The approaches taken are not specific to CAM and may inform similar scalability enhancement activities for other codes.

  8. Performance and complexity of color gamut scalable coding

    Science.gov (United States)

    He, Yuwen; Ye, Yan; Xiu, Xiaoyu

    2015-09-01

    A wide color gamut such as BT.2020 allows pictures to be rendered with sharper details and more vivid colors. It is considered an essential video parameter for next-generation content generation and has recently drawn significant commercial interest. As the upgrade cycles of the content production workflow and consumer displays take place, current-generation and next-generation video content are expected to co-exist. Thus, maintaining backward compatibility becomes an important consideration for efficient content delivery systems. The scalable extension of HEVC (SHVC) was recently finalized in the second version of the HEVC specifications. SHVC provides a color mapping tool to improve scalable coding efficiency when the base layer and the enhancement layer video signals are in different color gamuts. The SHVC color mapping tool uses a 3D Look-Up Table (3D LUT) based cross-color linear model to efficiently convert the video in the base layer color gamut into the enhancement layer color gamut. Due to complexity concerns, certain limitations, including limiting the maximum 3D LUT size to 8x2x2, were applied to the color mapping process in SHVC. In this paper, we investigate the complexity and performance trade-off of the 3D LUT based color mapping process. Specifically, we explore the performance benefits of enlarging the 3D LUT with various linear models. In order to reduce computational complexity, a simplified linear model is used within each 3D LUT partition. Simulation results are provided to detail the various performance vs. complexity trade-offs achievable in the proposed design.
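
    To make the color mapping step concrete, the sketch below pushes an RGB sample through a small 3D LUT with plain trilinear interpolation. It is a generic stand-in rather than the SHVC-specified cross-color linear model, and the LUT contents are invented.

    ```python
    import numpy as np

    def apply_3d_lut(rgb, lut):
        """Map an RGB triple (components in [0, 1]) through a 3D LUT of shape
        (N1, N2, N3, 3) by trilinear interpolation of the 8 surrounding entries."""
        sizes = np.array(lut.shape[:3]) - 1
        pos = np.clip(np.asarray(rgb, dtype=float), 0.0, 1.0) * sizes
        lo = np.floor(pos).astype(int)
        hi = np.minimum(lo + 1, sizes)
        frac = pos - lo
        out = np.zeros(3)
        for dx in (0, 1):
            for dy in (0, 1):
                for dz in (0, 1):
                    w = ((frac[0] if dx else 1 - frac[0]) *
                         (frac[1] if dy else 1 - frac[1]) *
                         (frac[2] if dz else 1 - frac[2]))
                    out += w * lut[hi[0] if dx else lo[0],
                                   hi[1] if dy else lo[1],
                                   hi[2] if dz else lo[2]]
        return out

    # Toy 8x2x2 LUT (the maximum partitioning allowed in SHVC) mapping base-layer
    # RGB to enhancement-layer RGB; here it is simply filled with random values.
    lut = np.random.rand(8, 2, 2, 3)
    print(apply_3d_lut([0.3, 0.7, 0.5], lut))
    ```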

  9. Performances of the PIPER scalable child human body model in accident reconstruction.

    Science.gov (United States)

    Giordano, Chiara; Li, Xiaogai; Kleiven, Svein

    2017-01-01

    Human body models (HBMs) have the potential to provide significant insights into the pediatric response to impact. This study describes a scalable/posable approach to performing child accident reconstructions using the Position and Personalize Advanced Human Body Models for Injury Prediction (PIPER) scalable child HBM at different ages and in different positions obtained with the PIPER tool. Overall, the PIPER scalable child HBM managed reasonably well to predict the injury severity and location for the children involved in real-life crash scenarios documented in the medical records. The developed methodology and workflow are essential for future work to determine child injury tolerances based on the full Child Advanced Safety Project for European Roads (CASPER) accident reconstruction database. With the workflow presented in this study, the open-source PIPER scalable HBM combined with the PIPER tool is also foreseen to have implications for improved safety designs for better protection of children in traffic accidents.

  10. Scalability of DL_POLY on High Performance Computing Platform

    Directory of Open Access Journals (Sweden)

    Mabule Samuel Mabakane

    2017-12-01

    Full Text Available This paper presents a case study on the scalability of several versions of the molecular dynamics code DL_POLY, performed on South Africa's Centre for High Performance Computing e1350 IBM Linux cluster, Sun system and Lengau supercomputers. Within this study, different problem sizes were designed and the same chosen systems were employed in order to test the performance of DL_POLY using weak and strong scalability. It was found that the speed-up results for the small systems were better than those for large systems on both the Ethernet and InfiniBand networks. However, simulations of large systems in DL_POLY performed well using the InfiniBand network on the Lengau cluster as compared to the e1350 and Sun supercomputers.
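
    For reference, the weak/strong-scaling comparison described above reduces to computing speed-up and parallel efficiency from wall-clock times, as in this small sketch (the timings are invented, not taken from the paper).

    ```python
    def strong_scaling(base_time, times_by_cores):
        """Speed-up S(p) = T(1) / T(p) and parallel efficiency E(p) = S(p) / p."""
        for cores, t in sorted(times_by_cores.items()):
            speedup = base_time / t
            print(f"{cores:5d} cores: speed-up {speedup:6.1f}, efficiency {speedup / cores:5.2f}")

    # Hypothetical wall-clock times (seconds) for one fixed problem size.
    strong_scaling(base_time=1000.0, times_by_cores={24: 52.0, 48: 30.0, 96: 21.0})
    ```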

  11. High-performance scalable Information Service for the ATLAS experiment

    CERN Document Server

    Kolos, S; The ATLAS collaboration; Hauser, R

    2012-01-01

    The ATLAS experiment is operated by a highly distributed computing system which constantly produces a large amount of status information used to monitor the experiment's operational conditions as well as to assess the quality of the physics data being taken. For example, the ATLAS High Level Trigger (HLT) algorithms are executed on the online computing farm consisting of about 1500 nodes. Each HLT algorithm produces a few thousand histograms, which have to be integrated over the whole farm and carefully analyzed in order to properly tune the event rejection. In order to handle such non-physics data, the Information Service (IS) facility has been developed in the scope of the ATLAS TDAQ project. The IS provides a high-performance, scalable solution for information exchange in a distributed environment. In the course of an ATLAS data-taking session the IS handles about a hundred gigabytes of information which is constantly updated, with the update interval varying from a second to a few tens of seconds. IS ...

  12. High Performance Storage System Scalability: Architecture, Implementation, and Experience

    Energy Technology Data Exchange (ETDEWEB)

    Watson, R W

    2005-01-05

    The High Performance Storage System (HPSS) provides scalable hierarchical storage management (HSM), archive, and file system services. Its design, implementation and current dominant use are focused on HSM and archive services. It is also a general-purpose, global, shared, parallel file system, potentially useful in other application domains. When HPSS design and implementation began over a decade ago, scientific computing power and storage capabilities at a site, such as a DOE national laboratory, were measured in a few tens of gigaops, data archived in HSMs in a few tens of terabytes at most, data throughput rates to an HSM in a few megabytes/s, and daily throughput with the HSM in a few gigabytes/day. At that time, the DOE national laboratories and the IBM HPSS design team recognized that we were headed for a data storage explosion driven by computing power rising to teraops/petaops, requiring data stored in HSMs to rise to petabytes and beyond, data transfer rates with the HSM to rise to gigabytes/s and higher, and daily throughput with an HSM to rise to tens of terabytes/day. This paper discusses HPSS architectural, implementation and deployment experiences that contributed to its success in meeting the above orders-of-magnitude scaling targets. We also discuss areas that need additional attention as we continue significant scaling into the future.

  13. High-Performance Scalable Information Service for the ATLAS Experiment

    Science.gov (United States)

    Kolos, S.; Boutsioukis, G.; Hauser, R.

    2012-12-01

    The ATLAS[1] experiment is operated by a highly distributed computing system which constantly produces a large amount of status information used to monitor the experiment operational conditions as well as to assess the quality of the physics data being taken. For example, the ATLAS High Level Trigger (HLT) algorithms are executed on the online computing farm consisting of about 1500 nodes. Each HLT algorithm produces a few thousand histograms, which have to be integrated over the whole farm and carefully analyzed in order to properly tune the event rejection. In order to handle such non-physics data, the Information Service (IS) facility has been developed in the scope of the ATLAS Trigger and Data Acquisition (TDAQ)[2] project. The IS provides a high-performance scalable solution for information exchange in a distributed environment. In the course of an ATLAS data-taking session the IS handles about a hundred gigabytes of information which is constantly updated, with the update interval varying from a second to a few tens of seconds. IS provides access to any information item on request as well as distributing notifications to all the information subscribers. In the latter case IS subscribers receive information within a few milliseconds after it was updated. IS can handle arbitrary types of information, including histograms produced by the HLT applications, and provides C++, Java and Python APIs. The Information Service is a unique source of information for the majority of the online monitoring analysis and GUI applications used to control and monitor the ATLAS experiment. The Information Service provides streaming functionality allowing efficient replication of all or part of the managed information. This functionality is used to duplicate the subset of the ATLAS monitoring data to the CERN public network with a latency of a few milliseconds, allowing efficient real-time monitoring of the data taking from outside the protected ATLAS network. Each information
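
    The publish/subscribe behavior described above can be illustrated with a toy in-process model, sketched below. It is not the actual TDAQ IS API; the class, method and item names are invented.

    ```python
    class InfoService:
        """Toy in-memory information service: publishers update named items,
        subscribers are notified on every update (a stand-in for the ATLAS IS)."""

        def __init__(self):
            self._items = {}
            self._subscribers = {}   # item name -> list of callbacks

        def subscribe(self, name, callback):
            self._subscribers.setdefault(name, []).append(callback)

        def publish(self, name, value):
            self._items[name] = value
            for callback in self._subscribers.get(name, []):
                callback(name, value)

        def get(self, name):
            return self._items[name]

    # Usage: a monitoring GUI subscribes to a (hypothetical) trigger-rate item.
    svc = InfoService()
    svc.subscribe("HLT.rate", lambda n, v: print(f"update: {n} = {v}"))
    svc.publish("HLT.rate", 1500.0)
    ```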

  14. Palacios and Kitten : high performance operating systems for scalable virtualized and native supercomputing.

    Energy Technology Data Exchange (ETDEWEB)

    Widener, Patrick (University of New Mexico); Jaconette, Steven (Northwestern University); Bridges, Patrick G. (University of New Mexico); Xia, Lei (Northwestern University); Dinda, Peter (Northwestern University); Cui, Zheng; Lange, John (Northwestern University); Hudson, Trammell B.; Levenhagen, Michael J.; Pedretti, Kevin Thomas Tauke; Brightwell, Ronald Brian

    2009-09-01

    Palacios and Kitten are new open source tools that enable applications, whether ported or not, to achieve scalable high performance on large machines. They provide a thin layer over the hardware to support both full-featured virtualized environments and native code bases. Kitten is an OS under development at Sandia that implements a lightweight kernel architecture to provide predictable behavior and increased flexibility on large machines, while also providing Linux binary compatibility. Palacios is a VMM that is under development at Northwestern University and the University of New Mexico. Palacios, which can be embedded into Kitten and other OSes, supports existing, unmodified applications and operating systems by using virtualization that leverages hardware technologies. We describe the design and implementation of both Kitten and Palacios. Our benchmarks show that they provide near native, scalable performance. Palacios and Kitten provide an incremental path to using supercomputer resources that is not performance-compromised.

  15. Citizen science provides a reliable and scalable tool to track disease-carrying mosquitoes.

    Science.gov (United States)

    Palmer, John R B; Oltra, Aitana; Collantes, Francisco; Delgado, Juan Antonio; Lucientes, Javier; Delacour, Sarah; Bengoa, Mikel; Eritja, Roger; Bartumeus, Frederic

    2017-10-24

    Recent outbreaks of Zika, chikungunya and dengue highlight the importance of better understanding the spread of disease-carrying mosquitoes across multiple spatio-temporal scales. Traditional surveillance tools are limited by jurisdictional boundaries and cost constraints. Here we show how a scalable citizen science system can solve this problem by combining citizen scientists' observations with expert validation and correcting for sampling effort. Our system provides accurate early warning information about the Asian tiger mosquito (Aedes albopictus) invasion in Spain, well beyond that available from traditional methods, and vital for public health services. It also provides estimates of tiger mosquito risk comparable to those from traditional methods but more directly related to the human-mosquito encounters that are relevant for epidemiological modelling and scalable enough to cover the entire country. These results illustrate how powerful public participation in science can be and suggest citizen science is positioned to revolutionize mosquito-borne disease surveillance worldwide.

  16. Building a Community Infrastructure for Scalable On-Line Performance Analysis Tools around Open|Speedshop

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Barton

    2014-06-30

    Peta-scale computing environments pose significant challenges for both system and application developers, and addressing them requires more than simply scaling up existing tera-scale solutions. Performance analysis tools play an important role in gaining the necessary understanding of application behavior at this scale, but previous monolithic tools with fixed feature sets have not sufficed. Instead, this project worked on the design, implementation, and evaluation of a general, flexible tool infrastructure supporting the construction of performance tools as “pipelines” of high-quality tool building blocks. These tool building blocks provide common performance tool functionality, and are designed for scalability, lightweight data acquisition and analysis, and interoperability. For this project, we built on Open|SpeedShop, a modular and extensible open source performance analysis tool set. The design and implementation of such a general and reusable infrastructure targeted at petascale systems required us to address several challenging research issues. All components needed to be designed for scale, a task made more difficult by the need to provide general modules. The infrastructure needed to support online data aggregation to cope with the large amounts of performance and debugging data. We needed to be able to map any combination of tool components to each target architecture. And we needed to design interoperable tool APIs and workflows that were concrete enough to support the required functionality, yet provide the necessary flexibility to address a wide range of tools. A major result of this project is the ability to use this scalable infrastructure to quickly create tools that match a machine architecture and a performance problem that needs to be understood. Another benefit is the ability for application engineers to use the highly scalable, interoperable version of Open|SpeedShop, which is reassembled from the tool building blocks into a flexible, multi-user set of tools. This set of

  17. Scalable fabrication of nanomaterials based piezoresistivity sensors with enhanced performance

    Science.gov (United States)

    Hoang, Phong Tran

    Nanomaterials are small structures that have at least one dimension less than 100 nanometers. Depending on the number of dimensions that are not confined to the nanoscale range, nanomaterials can be classified into 0D, 1D and 2D types. Due to their small sizes, nanoparticles possess exceptional physical and chemical properties, which opens a unique possibility for the next generation of strain sensors that are cheap, multifunctional, highly sensitive and reliable. Over the years, thanks to the development of new nanomaterials and printing technologies, a number of printing techniques have been developed to fabricate a wide range of electronic devices on diverse substrates. Nanomaterials-based thin film devices can be readily patterned and fabricated in a variety of ways, including printing, spraying and laser direct writing. In this work, we review the piezoresistivity of nanomaterials of different categories and study various printing approaches to utilize their excellent properties in the fabrication of scalable and printable thin film strain gauges. CNT-AgNP composite thin films were fabricated using a solution-based screen printing process. By controlling the concentration ratio of CNTs to AgNPs in the nanocomposites and the supporting substrates, we were able to engineer the crack formation to achieve stable, high-sensitivity sensors. The crack formation in the composite films led to piezoresistive sensors with high gauge factors (GFs) of up to 221.2. Also, with a simple, low-cost, and easily scaled-up fabrication process, they may find use as an alternative to traditional strain sensors. By using a computer-controlled spray coating system, we can achieve uniform, high-quality CNT thin films for the fabrication of strain sensors and transparent/flexible electrodes. A simple diazonium salt treatment of the pristine SWCNT thin film has been identified to be efficient in greatly enhancing the piezoresistive sensitivity of SWCNT thin film based piezoresistive sensors
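
    The sensitivity figure quoted above is the gauge factor, GF = (ΔR/R0)/ε, the relative resistance change per unit strain. A small worked example with invented resistance and strain values:

    ```python
    def gauge_factor(r0_ohm, r_ohm, strain):
        """Gauge factor GF = (delta_R / R0) / strain."""
        return ((r_ohm - r0_ohm) / r0_ohm) / strain

    # Hypothetical reading: resistance rises from 100.0 to 111.06 ohm at 0.05% strain.
    print(gauge_factor(100.0, 111.06, 0.0005))   # ~221, comparable to the GF quoted above
    ```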

  18. Scalable software-defined optical networking with high-performance routing and wavelength assignment algorithms.

    Science.gov (United States)

    Lee, Chankyun; Cao, Xiaoyuan; Yoshikane, Noboru; Tsuritani, Takehiro; Rhee, June-Koo Kevin

    2015-10-19

    The feasibility of software-defined optical networking (SDON) for practical applications critically depends on the scalability of centralized control performance. In this paper, highly scalable routing and wavelength assignment (RWA) algorithms are investigated on an OpenFlow-based SDON testbed for a proof-of-concept demonstration. Efficient RWA algorithms are proposed to achieve high network capacity at reduced computation cost, which is a significant attribute in a scalable centrally controlled SDON. The proposed heuristic RWA algorithms differ in the order in which requests are processed and in the procedures for routing table updates. Combined with a shortest-path-based routing algorithm, a hottest-request-first processing policy that considers demand intensity and end-to-end distance information offers both the highest network throughput and acceptable computation scalability. We further investigate the trade-off between network throughput and computation complexity in the routing table update procedure through a simulation study.
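
    A minimal greedy RWA sketch in the spirit of the approach described above (but not the authors' algorithms) is shown below: shortest-path routing plus first-fit wavelength assignment, with a simple longest-path-first ordering standing in for the hottest-request-first policy. It assumes the networkx package and a toy ring topology.

    ```python
    import networkx as nx

    def first_fit_rwa(graph, requests, num_wavelengths):
        """Route each request on its shortest path and assign the lowest-index
        wavelength that is free on every link of that path (first-fit)."""
        used = {tuple(sorted(e)): set() for e in graph.edges()}
        # Process "hottest" requests first; here approximated by path length.
        ordered = sorted(requests,
                         key=lambda st: nx.shortest_path_length(graph, *st),
                         reverse=True)
        assignment = {}
        for src, dst in ordered:
            path = nx.shortest_path(graph, src, dst)
            links = [tuple(sorted((a, b))) for a, b in zip(path, path[1:])]
            for w in range(num_wavelengths):
                if all(w not in used[link] for link in links):
                    for link in links:
                        used[link].add(w)
                    assignment[(src, dst)] = (path, w)
                    break
            else:
                assignment[(src, dst)] = None   # request blocked
        return assignment

    G = nx.cycle_graph(6)                       # toy 6-node ring topology
    print(first_fit_rwa(G, [(0, 3), (1, 4), (2, 5), (0, 2)], num_wavelengths=4))
    ```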

  19. A scalable silicon photonic chip-scale optical switch for high performance computing systems.

    Science.gov (United States)

    Yu, Runxiang; Cheung, Stanley; Li, Yuliang; Okamoto, Katsunari; Proietti, Roberto; Yin, Yawei; Yoo, S J B

    2013-12-30

    This paper discusses the architecture and provides performance studies of a silicon photonic chip-scale optical switch for scalable interconnect networks in high performance computing systems. The proposed switch exploits optical wavelength parallelism and the wavelength routing characteristics of an Arrayed Waveguide Grating Router (AWGR) to allow contention resolution in the wavelength domain. Simulation results from a cycle-accurate network simulator indicate that, even with only two transmitter/receiver pairs per node, the switch exhibits lower end-to-end latency and higher throughput at high (>90%) input loads compared with electronic switches. On the device integration level, we propose to integrate all the components (ring modulators, photodetectors and AWGR) on a CMOS-compatible silicon photonic platform to ensure a compact, energy-efficient and cost-effective device. We successfully demonstrate proof-of-concept routing functions on an 8 × 8 prototype fabricated using foundry services provided by OpSIS-IME.

  20. Ceph: A Scalable, High-Performance Distributed File System

    OpenAIRE

    Weil, Sage; Brandt, Scott A.; Miller, Ethan L; Long, Darrell D. E.; Maltzahn, Carlos

    2006-01-01

    We have developed Ceph, a distributed file system that provides excellent performance, reliability, and scalability. Ceph maximizes the separation between data and metadata management by replacing allocation tables with a pseudo-random data distribution function (CRUSH) designed for heterogeneous and dynamic clusters of unreliable object storage devices (OSDs). We leverage device intelligence by distributing data replication, failure detection and recovery to semi-autonomous ...
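
    The pseudo-random placement idea behind CRUSH can be illustrated with rendezvous (highest-random-weight) hashing, sketched below. This is a simplification for illustration only; the real CRUSH algorithm additionally accounts for device weights and failure-domain hierarchies.

    ```python
    import hashlib

    def place_object(obj_id, osds, replicas=3):
        """Deterministic, pseudo-random placement: rank OSDs by a hash of
        (object, osd) and take the top `replicas` -- no allocation table needed."""
        def score(osd):
            return int(hashlib.sha1(f"{obj_id}:{osd}".encode()).hexdigest(), 16)
        return sorted(osds, key=score, reverse=True)[:replicas]

    # Any client can compute the same placement independently.
    print(place_object("obj.0001", ["osd.0", "osd.1", "osd.2", "osd.3", "osd.4"]))
    ```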

  1. Scalability and performance analysis of the EGEE information system

    CERN Document Server

    Ehm, F; Schulz, M W

    2008-01-01

    Grid information systems are mission-critical components in today's production grid infrastructures. They provide detailed information about grid services which is needed for job submission, data management and general monitoring of the grid. As the number of services within these infrastructures continues to grow, it must be understood if the current information system used in EGEE has the capacity to handle the extra load. This paper describes the current usage of the EGEE information system obtained by monitoring the existing system. A test framework is described which simulates the existing usage patterns and can be used to measure the performance of information systems. The framework is then used to conduct tests on the existing EGEE information system components to evaluate various performance enhancements. Finally, the framework is used to simulate the performance of the information system if the existing grid would double in size.

  2. Extreme Performance Scalable Operating Systems Final Progress Report (July 1, 2008 - October 31, 2011)

    Energy Technology Data Exchange (ETDEWEB)

    Malony, Allen D; Shende, Sameer

    2011-10-31

    This is the final progress report for the FastOS (Phase 2) (FastOS-2) project with Argonne National Laboratory and the University of Oregon (UO). The project started at UO on July 1, 2008 and ran until April 30, 2010, at which time a six-month no-cost extension began. The FastOS-2 work at UO delivered excellent results in all research work areas: * scalable parallel monitoring * kernel-level performance measurement * parallel I/O system measurement * large-scale and hybrid application performance measurement * online scalable performance data reduction and analysis * binary instrumentation

  3. WOMBAT: A Scalable and High-performance Astrophysical Magnetohydrodynamics Code

    Energy Technology Data Exchange (ETDEWEB)

    Mendygral, P. J.; Radcliffe, N.; Kandalla, K. [Cray Inc., St. Paul, MN 55101 (United States); Porter, D. [Minnesota Supercomputing Institute for Advanced Computational Research, Minneapolis, MN USA (United States); O’Neill, B. J.; Nolting, C.; Donnert, J. M. F.; Jones, T. W. [School of Physics and Astronomy, University of Minnesota, Minneapolis, MN 55455 (United States); Edmon, P., E-mail: pjm@cray.com, E-mail: nradclif@cray.com, E-mail: kkandalla@cray.com, E-mail: oneill@astro.umn.edu, E-mail: nolt0040@umn.edu, E-mail: donnert@ira.inaf.it, E-mail: twj@umn.edu, E-mail: dhp@umn.edu, E-mail: pedmon@cfa.harvard.edu [Institute for Theory and Computation, Center for Astrophysics, Harvard University, Cambridge, MA 02138 (United States)

    2017-02-01

    We present a new code for astrophysical magnetohydrodynamics specifically designed and optimized for high performance and scaling on modern and future supercomputers. We describe a novel hybrid OpenMP/MPI programming model that emerged from a collaboration between Cray, Inc. and the University of Minnesota. This design utilizes MPI-RMA optimized for thread scaling, which allows the code to run extremely efficiently at very high thread counts ideal for the latest generation of multi-core and many-core architectures. Such performance characteristics are needed in the era of “exascale” computing. We describe and demonstrate our high-performance design in detail with the intent that it may be used as a model for other, future astrophysical codes intended for applications demanding exceptional performance.

  4. Scalable File Systems for High Performance Computing Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Brandt, S A

    2007-10-03

    Simulations of mode I interlaminar fracture toughness tests of a carbon-reinforced composite material (BMS 8-212) were conducted with LSDYNA. The fracture toughness tests were performed by U.C. Berkeley. The simulations were performed to investigate the validity and practicality of employing decohesive elements to represent the interlaminar bond failures that are prevalent in carbon-fiber composite structure penetration events. The simulations employed a decohesive element formulation that was verified on a simple two-element model before being employed to perform the full model simulations. Care was required during the simulations to ensure that the explicit time integration of LSDYNA duplicated the near steady-state testing conditions. In general, this study validated the use of decohesive elements to represent the interlaminar bond failures seen in carbon-fiber composite structures, but the practicality of employing the elements to represent the bond failures seen in carbon-fiber composite structures during penetration events was not established.

  5. Performance and Scalability of Blockchain Networks and Smart Contracts

    OpenAIRE

    Scherer, Mattias

    2017-01-01

    The blockchain technology started as the innovation that powered the cryptocurrency Bitcoin. But in recent years, leaders in finance, banking, and many more companies have given this new innovation more attention than ever before. They seek a new technology to replace their systems, which are often inefficient and costly to operate. However, one of the reasons why it is not possible to use a blockchain right away is its poor performance. Public blockchains, where anyone can participate, ...

  6. CHORUS – providing a scalable solution for public access to scholarly research

    Directory of Open Access Journals (Sweden)

    Howard Ratner

    2014-03-01

    Full Text Available CHORUS (Clearinghouse for the Open Research of the United States) offers an open technology platform in response to the public access requirements of US federal funding agencies, researchers, institutions and the public. It is focused on five principal sets of functions: 'identification', 'preservation', 'discovery', 'access', and 'compliance'. CHORUS facilitates public access to peer-reviewed publications, after a determined embargo period (where applicable) for each discipline and agency. By leveraging existing tools such as CrossRef, FundRef and ORCID, CHORUS allows a greater proportion of funding to remain focused on research. CHORUS identifies articles that report on federally funded research and enables a reader to access the ‘best available version’ free of charge, via the publisher. It is a scalable solution that offers maximum efficiency for all parties by automating as much of the process as is possible. CHORUS launched in pilot phase in September 2013, and the production phase will begin in early 2014.

  7. Engineering PFLOTRAN for Scalable Performance on Cray XT and IBM BlueGene Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Mills, Richard T [ORNL]; Sripathi, Vamsi K [ORNL]; Mahinthakumar, Gnanamanika [ORNL]; Hammond, Glenn [Pacific Northwest National Laboratory (PNNL)]; Lichtner, Peter [Los Alamos National Laboratory (LANL)]; Smith, Barry F [Argonne National Laboratory (ANL)]

    2010-01-01

    We describe PFLOTRAN - a code for simulation of coupled hydro-thermal-chemical processes in variably saturated, non-isothermal, porous media - and the approaches we have employed to obtain scalable performance on some of the largest scale supercomputers in the world. We present detailed analyses of I/O and solver performance on Jaguar, the Cray XT5 at Oak Ridge National Laboratory, and Intrepid, the IBM BlueGene/P at Argonne National Laboratory, that have guided our choice of algorithms.

  8. Scalable parallel programming for high performance seismic simulation on petascale heterogeneous supercomputers

    Science.gov (United States)

    Zhou, Jun

    The 1994 Northridge earthquake in Los Angeles, California, killed 57 people, injured over 8,700 and caused an estimated $20 billion in damage. Petascale simulations are needed in California and elsewhere to provide society with a better understanding of the rupture and wave dynamics of the largest earthquakes at shaking frequencies required to engineer safe structures. As heterogeneous supercomputing infrastructures are becoming more common, numerical developments in earthquake system research are particularly challenged by the dependence on accelerator elements to enable "the Big One" simulations with higher frequency and finer resolution. Reducing time to solution and power consumption are two primary focus areas today for the enabling technology of fault rupture dynamics and seismic wave propagation in realistic 3D models of the crust's heterogeneous structure. This dissertation presents scalable parallel programming techniques for high performance seismic simulation running on petascale heterogeneous supercomputers. A real-world earthquake simulation code, AWP-ODC, one of the most advanced earthquake codes to date, was chosen as the base code in this research, and the testbed is based on Titan at Oak Ridge National Laboratory, the world's largest heterogeneous supercomputer. The research work is primarily related to architecture study, computation performance tuning and software system scalability. An earthquake simulation workflow has also been developed to support efficient production sets of simulations. The highlights of the technical development are an aggressive performance optimization focusing on data locality and a notable data communication model that hides the data communication latency. This development results in optimal computation efficiency and throughput for the 13-point stencil code on heterogeneous systems, which can be extended to general high-order stencil codes. Started from scratch, the hybrid CPU/GPU version of AWP
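
    The latency-hiding communication model mentioned above follows the standard pattern of overlapping a non-blocking halo exchange with interior computation. A minimal 1D sketch of that pattern (not the AWP-ODC code; the array size and update rule are invented) using mpi4py:

    ```python
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    n = 1024                                   # local interior points (+2 ghost cells)
    u = np.random.rand(n + 2)
    left = rank - 1 if rank > 0 else MPI.PROC_NULL
    right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

    recv_l, recv_r = np.empty(1), np.empty(1)
    reqs = [comm.Irecv(recv_l, source=left),  comm.Irecv(recv_r, source=right),
            comm.Isend(u[1:2], dest=left),    comm.Isend(u[n:n + 1], dest=right)]

    new = np.empty_like(u)
    # Interior update proceeds while the ghost-cell messages are still in flight.
    new[2:n] = 0.5 * (u[1:n - 1] + u[3:n + 1])

    MPI.Request.Waitall(reqs)                  # then finish the boundary points
    if left != MPI.PROC_NULL:
        u[0] = recv_l[0]
    if right != MPI.PROC_NULL:
        u[n + 1] = recv_r[0]
    new[1] = 0.5 * (u[0] + u[2])
    new[n] = 0.5 * (u[n - 1] + u[n + 1])
    ```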

  9. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy

    Science.gov (United States)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
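
    For reference, the Amdahl's-law estimate used in such a scalability analysis has the form S(p) = 1 / ((1 - f) + f/p), where f is the parallelizable fraction; a tiny sketch with an assumed value of f:

    ```python
    def amdahl_speedup(parallel_fraction, cores):
        """Amdahl's law: S(p) = 1 / ((1 - f) + f / p)."""
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

    # With an assumed 97% parallelizable workload, 12 cores give ~9x, and the
    # speed-up saturates near 1 / (1 - f) ~ 33x no matter how many cores are added.
    for p in (12, 48, 1024):
        print(p, "cores ->", round(amdahl_speedup(0.97, p), 1), "x")
    ```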

  10. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy.

    Science.gov (United States)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-19

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.

  11. Scalable High Performance Computing: Direct and Large-Eddy Turbulent Flow Simulations Using Massively Parallel Computers

    Science.gov (United States)

    Morgan, Philip E.

    2004-01-01

    This final report contains reports of research related to the tasks "Scalable High Performance Computing: Direct and Large-Eddy Turbulent Flow Simulations Using Massively Parallel Computers" and "Develop High-Performance Time-Domain Computational Electromagnetics Capability for RCS Prediction, Wave Propagation in Dispersive Media, and Dual-Use Applications." The discussion of Scalable High Performance Computing reports on three objectives: validate, assess the scalability of, and apply two parallel flow solvers for three-dimensional Navier-Stokes flows; develop and validate a high-order parallel solver for Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES) problems; and investigate and develop a high-order Reynolds-averaged Navier-Stokes turbulence model. The discussion of High-Performance Time-Domain Computational Electromagnetics reports on five objectives: enhance an electromagnetics code (CHARGE) to be able to effectively model antenna problems; utilize lessons learned in the high-order/spectral solution of swirling 3D jets and apply them to the electromagnetics project; transition a high-order fluids code, FDL3DI, to be able to solve Maxwell's Equations using compact differencing; develop and demonstrate improved radiation-absorbing boundary conditions for high-order CEM; and extend the high-order CEM solver to address variable material properties. The report also contains a review of work done by the systems engineer.

  12. Performance, Scalability, and Reliability (PSR) challenges, metrics and tools for web testing : A Case Study

    OpenAIRE

    Magapu, Akshay Kumar; Yarlagadda, Nikhil

    2016-01-01

    Context. Testing of web applications is an important task, as it ensures the functionality and quality of web applications. The quality of a web application comes under non-functional testing. There are many quality attributes such as performance, scalability, reliability, usability, accessibility and security. Among these attributes, PSR are the most important and most commonly used attributes considered in practice. However, there are very few empirical studies conducted on these three attributes. ...

  13. Impact of scan conversion methods on the performance of scalable video coding

    Science.gov (United States)

    Dubois, Eric; Baaziz, Nadia; Matta, Marwan

    1995-04-01

    The ability to flexibly access coded video data at different resolutions or bit rates is referred to as scalability. We are concerned here with the class of methods referred to as pyramidal embedded coding, in which specific subsets of the binary data can be used to decode lower-resolution versions of the video sequence. Two key techniques in such a pyramidal coder are the scan-conversion operations of down-conversion and up-conversion. Down-conversion is required to produce the smaller, lower-resolution versions of the image sequence. Up-conversion is used to perform conditional coding, whereby the coded lower-resolution image is interpolated to the same resolution as the next higher image and used to assist in the encoding of that level. The coding efficiency depends on the accuracy of this up-conversion process. In this paper, techniques for down-conversion and up-conversion are addressed in the context of a two-level pyramidal representation. We first present the pyramidal technique for spatial scalability and review the methods used in MPEG-2. We then discuss some enhanced methods for down- and up-conversion, and evaluate their performance in the context of the two-level scalable system.
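
    To make the two-level pyramid concrete, the sketch below pairs a toy down-conversion (2x2 averaging) with a toy up-conversion (pixel replication) and forms the conditional-coding residual. These are deliberately the simplest possible filters, not the MPEG-2 or paper-specific methods, and the frame data is invented.

    ```python
    import numpy as np

    def down_convert(img):
        """2:1 down-conversion by 2x2 block averaging (a crude anti-aliasing filter)."""
        h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
        return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    def up_convert(img):
        """1:2 up-conversion by pixel replication (bilinear or longer filters do better)."""
        return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

    # Two-level pyramidal (conditional) coding: code the base layer, then code only
    # the residual between the full-resolution frame and the up-converted base.
    frame = np.random.rand(64, 64)
    base = down_convert(frame)
    residual = frame - up_convert(base)   # smaller residual -> better coding efficiency
    print("residual energy:", float(np.mean(residual ** 2)))
    ```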

  14. Scalable devices

    KAUST Repository

    Krüger, Jens J.

    2014-01-01

    In computer science in general, and in the field of high performance computing and supercomputing in particular, the term scalable plays an important role. It indicates that a piece of hardware, a concept, an algorithm, or an entire system scales with the size of the problem, i.e., it can not only be used in a very specific setting but is applicable to a wide range of problems, from small scenarios to possibly very large settings. In this spirit, there exist a number of fixed areas of research on scalability. There are works on scalable algorithms and scalable architectures, but what are scalable devices? In the context of this chapter, we are interested in a whole range of display devices, ranging from small-scale hardware such as tablet computers, pads, smart-phones etc. up to large tiled display walls. What interests us is not so much the hardware setup but rather the visualization algorithms behind these display systems that scale from your average smart phone up to the largest gigapixel display walls.

  15. SAME4HPC: A Promising Approach in Building a Scalable and Mobile Environment for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Karthik, Rajasekar [ORNL]

    2014-01-01

    In this paper, an architecture for building a Scalable And Mobile Environment for High-Performance Computing with spatial capabilities, called SAME4HPC, is described using cutting-edge technologies and standards such as Node.js, HTML5, ECMAScript 6, and PostgreSQL 9.4. Mobile devices are increasingly becoming powerful enough to run high-performance apps. At the same time, there exists a significant number of low-end and older devices that rely heavily on the server or the cloud infrastructure to do the heavy lifting. Our architecture aims to support both of these types of devices to provide high performance and a rich user experience. A cloud infrastructure consisting of OpenStack with Ubuntu, GeoServer, and high-performance JavaScript frameworks is among the key open-source and industry-standard technologies adopted in this architecture.

  16. Scalable creation of gold nanostructures on high performance engineering polymeric substrate

    Science.gov (United States)

    Jia, Kun; Wang, Pan; Wei, Shiliang; Huang, Yumin; Liu, Xiaobo

    2017-12-01

    The article reports a facile protocol for scalable production of gold nanostructures on a high performance engineering thermoplastic substrate made of polyarylene ether nitrile (PEN) for the first time. First, gold thin films with thicknesses of 2 nm, 4 nm and 6 nm were evaporated onto a spin-coated PEN substrate on a glass slide in vacuum. Next, the as-evaporated samples were thermally annealed around the glass transition temperature of the PEN substrate, on which gold nanostructures with island-like morphology were created. Moreover, it was found that the initial gold evaporation thickness and the annealing atmosphere played an important role in determining the morphology and plasmonic properties of the resulting Au nanoparticles (NPs). Interestingly, we discovered that isotropic Au NPs can be easily fabricated on the freestanding PEN substrate, which was fabricated by a cost-effective polymer solution casting method. More specifically, monodispersed Au nanospheres with an average size of ∼60 nm were obtained after annealing a 4 nm gold-film-covered PEN casting substrate at 220 °C for 2 h in oxygen. Therefore, the scalable production of Au NPs with controlled morphology on PEN substrates should open the way for the development of robust flexible nanosensors and optical devices using high performance engineering polyarylene ethers.

  17. Design Considerations for Scalable High-Performance Vision Systems Embedded in Industrial Print Inspection Machines

    Directory of Open Access Journals (Sweden)

    Rössler Peter

    2007-01-01

    Full Text Available This paper describes the design of a scalable high-performance vision system which is used in the application area of optical print inspection. The system is able to process hundreds of megabytes of image data per second coming from several high-speed/high-resolution cameras. Due to performance requirements, some functionality has been implemented on dedicated hardware based on a field programmable gate array (FPGA), which is coupled to a high-end digital signal processor (DSP). The paper discusses design considerations like partitioning of image processing algorithms between hardware and software. The main chapters focus on functionality implemented on the FPGA, including low-level image processing algorithms (flat-field correction, image pyramid generation, neighborhood operations) and advanced processing units (programmable arithmetic unit, geometry unit). Verification issues for the complex system are also addressed. The paper concludes with a summary of the FPGA resource usage and some performance results.

  18. Scalable modeling and performance evaluation of dynamic RED router using fluid-flow approximation

    Science.gov (United States)

    Ohsaki, Hiroyuki; Yamamoto, Hideyuki; Imase, Makoto

    2005-10-01

    In recent years, AQM (Active Queue Management) mechanisms, which support the end-to-end congestion control mechanism of TCP (Transmission Control Protocol), have been widely studied in the literature. An AQM mechanism is a congestion controller at a router for suppressing and stabilizing its queue length (i.e., the number of packets in the buffer) by actively discarding arriving packets. Although a number of AQM mechanisms have been proposed, the behaviors of AQM mechanisms other than RED (Random Early Detection) have not been fully investigated. In this paper, using fluid-flow approximation, we analyze the steady-state behavior of DRED (Dynamic RED), which is designed with a control-theoretic approach. More specifically, we model several network components, such as the congestion control mechanism of TCP, the DRED router, and the link propagation delay, as independent SISO (Single-Input Single-Output) continuous-time systems. By interconnecting those SISO models, we obtain a continuous-time model for the entire network. Unlike other fluid-based modeling approaches, our analytic approach is scalable in terms of the number of TCP connections and DRED routers, since both the input and output of all continuous-time systems are uniformly defined as a packet transmission rate. By performing steady-state analysis, we derive the TCP throughput, the average queue length of the DRED router, and the packet loss probability. Through several numerical examples, we quantitatively show that DRED has an intrinsic problem in high-speed networks; i.e., DRED cannot stabilize its queue length when the bottleneck link bandwidth is high. We also validate the accuracy of our analytic approach by comparing analytic results with simulation ones.
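
    To give a flavor of the fluid-flow approach, the sketch below Euler-integrates a heavily simplified TCP/AQM fluid model with a proportional drop probability. It is not the DRED model analyzed in the paper: feedback delay is ignored and all parameter values are assumptions.

    ```python
    # Toy fluid model: dW/dt = 1/R - (W^2 / 2R) * p,  dq/dt = N*W/R - C (when q > 0),
    # with a proportional drop probability p = k_p * (q - q_ref), clipped to [0, 1].
    N, C, R = 50, 12500.0, 0.1        # flows, link capacity [pkt/s], RTT [s]
    q_ref, k_p = 100.0, 0.001         # target queue [pkt], controller gain (assumed)
    dt, T = 0.001, 20.0               # Euler step and simulated horizon [s]

    W, q = 1.0, 0.0                   # per-flow window [pkt], queue length [pkt]
    for _ in range(int(T / dt)):
        p = min(1.0, max(0.0, k_p * (q - q_ref)))
        dW = 1.0 / R - (W * W / (2.0 * R)) * p
        dq = N * W / R - C if (q > 0 or N * W / R > C) else 0.0
        W = max(W + dt * dW, 1.0)
        q = max(q + dt * dq, 0.0)

    print(f"after {T} simulated s: window ~{W:.1f} pkt, queue ~{q:.1f} pkt")
    ```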

  19. Scalable, high-performance 3D imaging software platform: system architecture and application to virtual colonoscopy.

    Science.gov (United States)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin

    2012-01-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. In this work, we have developed a software platform that is designed to support high-performance 3D medical image processing for a wide range of applications using increasingly available and affordable commodity computing systems: multi-core, cluster, and cloud computing systems. To achieve scalable, high-performance computing, our platform (1) employs size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D image processing algorithms; (2) supports task scheduling for efficient load distribution and balancing; and (3) consists of layered parallel software libraries that allow a wide range of medical applications to share the same functionalities. We evaluated the performance of our platform by applying it to an electronic cleansing system in virtual colonoscopy, with initial experimental results showing a 10-fold performance improvement on an 8-core workstation over the original sequential implementation of the system.

  20. Provider Customer Service Program - Performance Data

    Data.gov (United States)

    U.S. Department of Health & Human Services — CMS is continuously analyzing performance and quality of the Provider Customer Service Programs (PCSPs) of the contractors and will be identifying trends and making...

  1. Pre-coating of LSCM perovskite with metal catalyst for scalable high performance anodes

    KAUST Repository

    Boulfrad, Samir

    2013-07-01

    In this work, a highly scalable technique is proposed as an alternative to the lab-scale impregnation method. LSCM-CGO powders were pre-coated with 5 wt% of Ni from nitrates. After appropriate mixing and adequate heat treatment, the coated powders were then dispersed into organic-based vehicles to form a screen-printable ink, which was deposited and fired to form SOFC anode layers. Electrochemical tests show a considerable enhancement of the pre-coated anode performance under 50 ml/min wet H2 flow, with the polarization resistance decreased from about 0.60 Ω cm2 to 0.38 Ω cm2 at 900 °C and from 6.70 Ω cm2 to 1.37 Ω cm2 at 700 °C. This is most likely due to the pre-coating process resulting in nano-scaled Ni particles with two typical sizes: from 50 to 200 nm and from 10 to 40 nm. Converging indications suggest that the latter type of particle comes from solid-state solution of Ni in the LSCM phase under oxidizing conditions and exsolution as nanoparticles under reducing atmospheres. Copyright © 2013, Hydrogen Energy Publications, LLC. Published by Elsevier Ltd. All rights reserved.

  2. ScalaTrace: Scalable Compression and Replay of Communication Traces for High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Noeth, M; Ratn, P; Mueller, F; Schulz, M; de Supinski, B R

    2008-05-16

    Characterizing the communication behavior of large-scale applications is a difficult and costly task due to code/system complexity and long execution times. While many tools to study this behavior have been developed, these approaches either aggregate information in a lossy way through high-level statistics or produce huge trace files that are hard to handle. We contribute an approach that provides orders of magnitude smaller, if not near-constant size, communication traces regardless of the number of nodes while preserving structural information. We introduce intra- and inter-node compression techniques of MPI events that are capable of extracting an application's communication structure. We further present a replay mechanism for the traces generated by our approach and discuss results of our implementation for BlueGene/L. Given this novel capability, we discuss its impact on communication tuning and beyond. To the best of our knowledge, such a concise representation of MPI traces in a scalable manner combined with deterministic MPI call replay are without any precedent.
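
    The core idea of structure-preserving trace compression can be illustrated with a toy run-length scheme over a stream of MPI event names, sketched below; ScalaTrace's actual intra- and inter-node compression is far more sophisticated (it recovers loop structure and merges traces across ranks).

    ```python
    def rle_compress(events):
        """Collapse immediate repeats into (event, count) pairs -- a toy stand-in
        for structure-preserving trace compression."""
        out = []
        for e in events:
            if out and out[-1][0] == e:
                out[-1] = (e, out[-1][1] + 1)
            else:
                out.append((e, 1))
        return out

    def replay(compressed, emit):
        """Deterministically re-emit the original event stream from the compressed form."""
        for e, n in compressed:
            for _ in range(n):
                emit(e)

    trace = ["MPI_Bcast"] + ["MPI_Send"] * 512 + ["MPI_Recv"] * 512
    packed = rle_compress(trace)      # [('MPI_Bcast', 1), ('MPI_Send', 512), ('MPI_Recv', 512)]
    rebuilt = []
    replay(packed, emit=rebuilt.append)
    assert rebuilt == trace           # lossless: replay reproduces the original order
    ```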

  3. Scalable High Performance Image Registration Framework by Unsupervised Deep Feature Representations Learning

    Science.gov (United States)

    Wu, Guorong; Kim, Minjeong; Wang, Qian; Munsell, Brent C.

    2015-01-01

    Feature selection is a critical step in deformable image registration. In particular, selecting the most discriminative features that accurately and concisely describe complex morphological patterns in image patches improves correspondence detection, which in turn improves image registration accuracy. Furthermore, since more and more imaging modalities are being invented to better identify morphological changes in medical imaging data, the development of a deformable image registration method that scales well to new image modalities or new image applications with little to no human intervention would have a significant impact on the medical image analysis community. To address these concerns, a learning-based image registration framework is proposed that uses deep learning to discover compact and highly discriminative features from observed imaging data. Specifically, the proposed feature selection method uses a convolutional stacked auto-encoder to identify intrinsic deep feature representations in image patches. Since deep learning is an unsupervised learning method, no ground truth label knowledge is required. This makes the proposed feature selection method more flexible to new imaging modalities, since feature representations can be directly learned from the observed imaging data in a very short amount of time. Using the LONI and ADNI imaging datasets, image registration performance was compared to two existing state-of-the-art deformable image registration methods that use handcrafted features. To demonstrate the scalability of the proposed image registration framework, image registration experiments were conducted on 7.0-tesla brain MR images. In all experiments, the results showed that the new image registration framework consistently demonstrated more accurate registration results when compared to the state-of-the-art. PMID:26552069
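
    As a toy illustration of unsupervised feature learning on patches, the NumPy sketch below trains a single-hidden-layer autoencoder on random data; the paper itself uses a convolutional stacked auto-encoder, so the dimensions, learning rate, and sigmoid/linear choices here are illustrative assumptions only.

    # Minimal sketch of unsupervised patch feature learning with a one-hidden-layer
    # autoencoder (NumPy only); not the paper's convolutional stacked auto-encoder.
    import numpy as np

    rng = np.random.default_rng(0)
    patches = rng.random((2000, 8 * 8)).astype(np.float32)   # stand-in for 8x8 image patches
    n_in, n_hid = patches.shape[1], 16                        # 64-D patch -> 16-D feature
    W1 = rng.normal(0, 0.1, (n_in, n_hid)); b1 = np.zeros(n_hid)
    W2 = rng.normal(0, 0.1, (n_hid, n_in)); b2 = np.zeros(n_in)
    lr = 0.01

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    for epoch in range(50):
        h = sigmoid(patches @ W1 + b1)        # encoder
        recon = h @ W2 + b2                   # linear decoder
        err = recon - patches                 # reconstruction error
        # plain gradient descent on the squared reconstruction error
        dW2 = h.T @ err / len(patches); db2 = err.mean(0)
        dh = err @ W2.T * h * (1 - h)
        dW1 = patches.T @ dh / len(patches); db1 = dh.mean(0)
        W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

    features = sigmoid(patches @ W1 + b1)     # learned compact feature per patch
    print(features.shape)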

  4. Durango: Scalable Synthetic Workload Generation for Extreme-Scale Application Performance Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Carothers, Christopher D. [Rensselaer Polytechnic Institute (RPI); Meredith, Jeremy S. [ORNL; Blanco, Marc [Rensselaer Polytechnic Institute (RPI); Vetter, Jeffrey S. [ORNL; Mubarak, Misbah [Argonne National Laboratory; LaPre, Justin [Rensselaer Polytechnic Institute (RPI); Moore, Shirley V. [ORNL

    2017-05-01

    Performance modeling of extreme-scale applications on accurate representations of potential architectures is critical for designing next-generation supercomputing systems, because it is impractical to construct prototype systems at scale with new network hardware in order to explore designs and policies. However, these simulations often rely on static application traces that can be difficult to work with because of their size and lack of flexibility to extend or scale up without rerunning the original application. To address this problem, we have created a new technique for generating scalable, flexible workloads from real applications and implemented a prototype, called Durango, that combines a proven analytical performance modeling language, Aspen, with the massively parallel HPC network modeling capabilities of the CODES framework. Our models are compact, parameterized and representative of real applications with computation events. They are not resource intensive to create and are portable across simulator environments. We demonstrate the utility of Durango by simulating the LULESH application in the CODES simulation environment on several topologies and show that Durango is practical to use for simulation without loss of fidelity, as quantified by simulation metrics. During our validation of Durango's generated communication model of LULESH, we found that the original LULESH miniapp code had a latent bug where the MPI_Waitall operation was used incorrectly. This finding underscores the potential need for a tool such as Durango, beyond its benefits for flexible workload generation and modeling. Additionally, we demonstrate the efficacy of Durango's direct integration approach, which links Aspen into CODES as part of the running network simulation model. Here, Aspen generates the application-level computation timing events, which in turn drive the start of a network communication phase. Results show that Durango's performance scales well when

  5. Corfu: A Platform for Scalable Consistency

    OpenAIRE

    Wei, Michael

    2017-01-01

    Corfu is a platform for building systems which are extremely scalable, strongly consistent and robust. Unlike other systems which weaken guarantees to provide better performance, we have built Corfu with a resilient fabric tuned and engineered for scalability and strong consistency at its core: the Corfu shared log. On top of the Corfu log, we have built a layer of advanced data services which leverage the properties of the Corfu log. Today, Corfu is already replacing data platforms in commer...

  6. Building a Community Infrastructure for Scalable On-Line Performance Analysis Tools around Open|SpeedShop

    Energy Technology Data Exchange (ETDEWEB)

    Galarowicz, James E. [Krell Institute, Ames, IA (United States); Miller, Barton P. [Univ. of Wisconsin, Madison, WI (United States). Computer Sciences Dept.; Hollingsworth, Jeffrey K. [Univ. of Maryland, College Park, MD (United States). Computer Sciences Dept.; Roth, Philip [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Future Technologies Group, Computer Science and Math Division; Schulz, Martin [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing (CASC)

    2013-12-19

    In this project we created a community tool infrastructure for program development tools targeting Petascale-class machines and beyond. This includes tools for performance analysis, debugging, and correctness, as well as tuning and optimization frameworks. The developed infrastructure provides a comprehensive and extensible set of individual tool building components. We started with the basic elements necessary across all tools in such an infrastructure, followed by a set of generic core modules that allow a comprehensive performance analysis at scale. Further, we developed a methodology and workflow that allows others to add or replace modules, to integrate parts into their own tools, or to customize existing solutions. In order to form the core modules, we built on the existing Open|SpeedShop infrastructure and decomposed it into individual modules that match the necessary tool components. At the same time, we addressed the challenges found in performance tools for petascale systems in each module. When assembled, this instantiation of the community tool infrastructure provides an enhanced version of Open|SpeedShop, which, while completely different in its architecture, provides scalable performance analysis for petascale applications through a familiar interface. This project also built upon and enhances the capabilities and reusability of project partner components as specified in the original project proposal. The overall project team's work over the project funding cycle was focused on several areas of research, which are described in the following sections. The remainder of this report also highlights related work as well as preliminary work that supported the project. In addition to the project partners funded by the Office of Science under this grant, the project team included several collaborators who contributed to the overall design of the envisioned tool infrastructure. In particular, the project team worked closely with the other two DOE NNSA

  7. A highly scalable and high-performance storage architecture for multimedia applications

    Science.gov (United States)

    Liu, Zhaobin; Xie, Changsheng; Fu, Xianglin; Cao, Qiang

    2002-12-01

    With the rapid growth of the Internet and high-bandwidth connectivity, more and more multimedia applications are emerging in the digital industry. However, the capacity and real-time behavior of conventional storage architectures cannot meet the requirements of continuous media. The most common storage architectures used in the past are Direct Attached Storage (DAS) and RAID cabinets; more recently, both Network Attached Storage (NAS) and Storage Area Networks (SAN) have become alternative storage network topologies. The characteristics of multimedia, however, demand more storage capacity and more simultaneous streams. In this paper, we introduce the novel concept of a 'Unified Storage Network' (USN) to build an efficient SAN over IP, to bridge the gap between NAS and SAN, and furthermore to resolve the scalability problem of storage for multimedia applications.

  8. Content-Aware Scalability-Type Selection for Rate Adaptation of Scalable Video

    Directory of Open Access Journals (Sweden)

    Tekalp A Murat

    2007-01-01

    Full Text Available Scalable video coders provide different scaling options, such as temporal, spatial, and SNR scalabilities, where rate reduction by discarding enhancement layers of a different scalability type results in different kinds and/or levels of visual distortion depending on the content and bitrate. This dependency between scalability type, video content, and bitrate is not well investigated in the literature. To this effect, we first propose an objective function that quantifies flatness, blockiness, blurriness, and temporal jerkiness artifacts caused by rate reduction through spatial size, frame rate, and quantization parameter scaling. Next, the weights of this objective function are determined for different content (shot types) and different bitrates using a training procedure with subjective evaluation. Finally, a method is proposed for choosing the best scaling type for each temporal segment that results in minimum visual distortion according to this objective function, given the content type of the temporal segments. Two subjective tests have been performed to validate the proposed procedure for content-aware selection of the best scalability type on soccer videos. Soccer videos scaled from 600 kbps to 100 kbps by the proposed content-aware selection of scalability type have been found visually superior to those that are scaled using a single scalability option over the whole sequence.
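
    A minimal sketch of the selection step described above is given below, assuming per-segment artifact measurements and weights are already available; the weight values, option names, and numbers are placeholders, not the paper's trained model.

    # Hedged sketch: a weighted objective combining the four artifact terms named in the
    # abstract, used to pick the least-distorting scalability type for one segment.
    def distortion_score(flatness, blockiness, blurriness, jerkiness, weights):
        w_f, w_bk, w_bl, w_j = weights
        return w_f * flatness + w_bk * blockiness + w_bl * blurriness + w_j * jerkiness

    def best_scaling_option(options, weights):
        """options: {name: (flatness, blockiness, blurriness, jerkiness)} per scalability type."""
        return min(options, key=lambda name: distortion_score(*options[name], weights))

    # Example: illustrative per-segment artifact measurements for each scaling type.
    segment_options = {
        "spatial":  (0.20, 0.10, 0.60, 0.05),
        "temporal": (0.10, 0.05, 0.15, 0.70),
        "SNR":      (0.40, 0.55, 0.20, 0.05),
    }
    print(best_scaling_option(segment_options, weights=(0.2, 0.3, 0.3, 0.2)))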

  9. Scalable photoreactor for hydrogen production

    KAUST Repository

    Takanabe, Kazuhiro

    2017-04-06

    Provided herein are scalable photoreactors that can include a membrane-free water-splitting electrolyzer and systems that can include a plurality of membrane-free water-splitting electrolyzers. Also provided herein are methods of using the scalable photoreactors provided herein.

  10. A Facile Approach Toward Scalable Fabrication of Reversible Shape-Memory Polymers with Bonded Elastomer Microphases as Internal Stress Provider.

    Science.gov (United States)

    Fan, Long Fei; Rong, Min Zhi; Zhang, Ming Qiu; Chen, Xu Dong

    2017-08-01

    The present communication reports a novel strategy to fabricate a reversible shape-memory polymer that operates without the aid of external force, on the basis of a two-phase structure design. The proof-of-concept material, a crosslinked styrene-butadiene-styrene block copolymer (SBS, dispersed phase)/polycaprolactone-based polyurethane (PU, continuous phase) blend, possesses a closely connected microphase separation structure. That is, SBS phases are chemically bonded to crosslinked PU by means of a single crosslinking agent and a two-step crosslinking process to increase the integrity of the system. Miscibility between components in the blend is no longer critical because the reactive blending technique is used. It is found that suitable programming leads to compressed SBS, which as a result serves as an internal expansion stress provider. The desired two-way shape-memory effect is realized by the joint action of the temperature-induced reversible opposite directional deformabilities of the crystalline phase of PU and the compressed SBS, accompanied by melting and oriented recrystallization of the former. Owing to the broad material selection and manufacturing convenience, the proposed approach opens an avenue toward mass production and application of the smart polymer. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. High performance flexible metal oxide/silver nanowire based transparent conductive films by a scalable lamination-assisted solution method

    Directory of Open Access Journals (Sweden)

    Hua Yu

    2017-03-01

    Full Text Available Flexible MoO3/silver nanowire (AgNW)/MoO3/TiO2/Epoxy electrodes with comparable performance to ITO were fabricated by a scalable solution-processed method with lamination assistance for transparent and conductive applications. Silver nanoparticle-based electrodes were also prepared for comparison. Using a simple spin-coating and lamination-assisted planarization method, a full solution-based approach allows preparation of AgNW-based composite electrodes at temperatures as low as 140 °C. The resulting flexible AgNW-based electrodes exhibit a higher transmittance of 82% at 550 nm and a lower sheet resistance of about 12–15 Ω sq−1, compared with values of 68% and 22–25 Ω sq−1, respectively, for AgNP-based electrodes. Scanning electron microscopy (SEM) and atomic force microscopy (AFM) reveal that the multi-stacked metal-oxide layers embedded with the AgNWs possess low surface roughness (<15 nm). The AgNW/MoO3 composite network could enhance the charge transport and collection efficiency by broadening the lateral conduction range, owing to the building of an efficient charge transport network with long nanowires. In consideration of the manufacturing cost, the lamination-assisted solution-processed method is cost-effective and scalable, which is desirable for large-area fabrication. In view of the materials cost and comparable performance, these AgNW-based transparent and conductive electrodes are a potential alternative to ITO for various optoelectronic applications.

  12. A novel 3D scalable video compression algorithm

    Science.gov (United States)

    Somasundaram, Siva; Subbalakshmi, Koduvayur P.

    2003-05-01

    In this paper we propose a scalable video coding scheme that utilizes the embedded block coding with optimal truncation (EBCOT) compression algorithm. Three-dimensional spatio-temporal decomposition of the video sequence, followed by compression using EBCOT, generates an SNR- and resolution-scalable bit stream. The proposed video coding algorithm performs close to the MPEG-4 video coding standard in compression efficiency while also providing better SNR and resolution scalability. Experimental results show that the proposed algorithm outperforms the 3-D SPIHT (Set Partitioning in Hierarchical Trees) algorithm by 1.5 dB.
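
    For illustration, the NumPy sketch below performs a one-level 3D Haar analysis along the temporal and spatial axes, producing the eight spatio-temporal subbands that a codec of this kind would subsequently encode (EBCOT coding itself is not shown); the Haar filter choice is an assumption for simplicity.

    # Illustrative one-level 3-D Haar analysis along (t, y, x); subband labels are built
    # as L/H letters in axis order, so "LLL" is the coarse spatio-temporal approximation.
    import numpy as np

    def haar_1d(a, axis):
        """One-level Haar split along `axis` (length must be even): returns (low, high)."""
        a = np.moveaxis(a, axis, 0)
        low = (a[0::2] + a[1::2]) / np.sqrt(2)
        high = (a[0::2] - a[1::2]) / np.sqrt(2)
        return np.moveaxis(low, 0, axis), np.moveaxis(high, 0, axis)

    def haar_3d(video):
        """video: (frames, height, width) -> dict of 8 spatio-temporal subbands."""
        subbands = {"": video}
        for axis in (0, 1, 2):
            new = {}
            for key, band in subbands.items():
                low, high = haar_1d(band, axis)
                new[key + "L"] = low
                new[key + "H"] = high
            subbands = new
        return subbands

    video = np.random.rand(16, 64, 64).astype(np.float32)
    bands = haar_3d(video)
    print(sorted(bands.keys()), bands["LLL"].shape)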

  13. Architecture-Aware Algorithms for Scalable Performance and Resilience on Heterogeneous Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Dongarra, Jack [Univ. of Tennessee, Knoxville, TN (United States)

    2013-03-14

    There is a widening gap between the peak performance of high performance computers and the performance realized by full applications. Over the next decade, extreme-scale systems will present major new challenges to software development that could widen the gap so much that it prevents the productive use of future DOE Leadership computers.

  14. Evaluating the Scalability of Enterprise JavaBeans Technology

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yan (Jenny); Gorton, Ian; Liu, Anna; Chen, Shiping; Paul A Strooper; Pornsiri Muenchaisri

    2002-12-04

    One of the major problems in building large-scale distributed systems is to anticipate the performance of the eventual solution before it has been built. This problem is especially germane to Internet-based e-business applications, where failure to provide high performance and scalability can lead to application and business failure. The fundamental software engineering problem is compounded by many factors, including individual application diversity, software architecture trade-offs, COTS component integration requirements, and differences in performance of various software and hardware infrastructures. In this paper, we describe the results of an empirical investigation into the scalability of a widely used distributed component technology, Enterprise JavaBeans (EJB). A benchmark application is developed and tested to measure the performance of a system as both the client load and component infrastructure are scaled up. A scalability metric from the literature is then applied to analyze the scalability of the EJB component infrastructure under two different architectural solutions.
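
    As a sketch of the kind of analysis involved, the snippet below computes a generic scaling-efficiency figure from measured throughput at increasing client loads; it is not necessarily the specific scalability metric from the literature that the paper applies, and the numbers are synthetic.

    # Hedged sketch: efficiency compares measured throughput at each client load against
    # ideal linear scaling from the smallest load (1.0 means perfectly linear scaling).
    def scaling_efficiency(loads, throughputs):
        base_load, base_tp = loads[0], throughputs[0]
        return {
            load: tp / (base_tp * load / base_load)
            for load, tp in zip(loads, throughputs)
        }

    # Example: transactions/sec measured as concurrent clients are scaled up (synthetic).
    clients = [100, 200, 400, 800]
    tps = [950, 1800, 3100, 4200]
    for load, eff in scaling_efficiency(clients, tps).items():
        print(f"{load:4d} clients: efficiency {eff:.2f}")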

  15. A High-Performance, Scalable Infrastructure for Large-Scale Active DNS Measurements

    NARCIS (Netherlands)

    van Rijswijk, Roland M.; Jonker, Mattijs; Sperotto, Anna; Pras, Aiko

    The domain name system (DNS) is a core component of the Internet. It performs the vital task of mapping human readable names into machine readable data (such as IP addresses, which hosts handle e-mail, and so on). The content of the DNS reveals a lot about the technical operations of a domain. Thus,

  16. Performance prediction of scalable fuel cell systems for micro-vehicle applications

    Science.gov (United States)

    St. Clair, Jeffrey Glen

    Miniature (fuel cells consuming high energy density fuels. This thesis surveys miniature fuel cell technologies and identifies direct methanol and sodium borohydride technologies as especially promising at small scales. A methodology for estimating overall system-level performance that accounts for the balance of plant (i.e. the extra components like pumps, blowers, etc. necessary to run the fuel cell system) is developed and used to quantify the performance of two direct methanol and one NaBH4 fuel cell systems. Direct methanol systems with water recirculation offer superior specific power (400 mW/g) and specific energy at powers of 20W and system masses of 150g. The NaBH4 fuel cell system is superior at low power (fuel.

  17. Scalable Self-Supported Graphene Foam for High-Performance Electrocatalytic Oxygen Evolution.

    Science.gov (United States)

    Zhu, Yun-Pei; Ran, Jingrun; Qiao, Shi-Zhang

    2017-12-06

    Developing efficient electrocatalysts consisting of earth-abundant elements for oxygen evolution reaction (OER) is crucial for energy devices and technologies. Herein, we report self-supported highly porous nitrogen-doped graphene foam synthesized through the electrochemical expansion of carbon-fiber paper and subsequent nitrogen plasma treatment. A thorough characterization, such as electron microscopy and synchrotron-based near-edge X-ray absorption fine structure, indicates the well-developed porous structures featuring homogeneously doped nitrogen heteroatoms. These merits ensure enriched active sites, an enlarged active surface area, and improved mass/electron transport within the continuous graphene framework, thus leading to an outstanding capability toward electrocatalyzing OER in alkaline media, even competitive with the state-of-the-art noble-/transition-metal and nonmetal electrocatalysts reported to date, from the perspectives of the sharp onset potential, a small Tafel slope, and remarkable durability. Furthermore, a rechargeable Zn-air battery with this self-supported electrocatalyst directly used as the air cathode renders a low charge/discharge overpotential and considerable life span. The finding herein suggests that a rational methodology to synthesize graphene-based materials can significantly enhance the oxygen electrocatalysis, thereby promoting the overall performance of the energy-related system.

  18. Scalable-to-lossless transform domain distributed video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Ukhanova, Ann; Veselov, Anton

    2010-01-01

    Distributed video coding (DVC) is a novel approach providing new features such as low-complexity encoding, by mainly exploiting the source statistics at the decoder based on the availability of decoder side information. In this paper, scalable-to-lossless DVC is presented, based on extending a lossy Transform Domain Wyner-Ziv (TDWZ) distributed video codec with feedback. The lossless coding is obtained by using a reversible integer DCT. Experimental results show that the performance of the proposed scalable-to-lossless TDWZ video codec can outperform alternatives based on the JPEG 2000 standard. The TDWZ codec provides frame-by-frame encoding. Comparing the lossless coding efficiency, the proposed scalable-to-lossless TDWZ video codec can save up to 5%-13% in bits compared to JPEG LS and H.264 Intra frame lossless coding, and does so as a scalable-to-lossless coding.

  19. Scalable Multiple-Description Image Coding Based on Embedded Quantization

    Directory of Open Access Journals (Sweden)

    Moerman Ingrid

    2007-01-01

    Full Text Available Scalable multiple-description (MD) coding allows for fine-grain rate adaptation as well as robust coding of the input source. In this paper, we present a new approach for scalable MD coding of images, which couples the multiresolution nature of the wavelet transform with the robustness and scalability features provided by embedded multiple-description scalar quantization (EMDSQ). Two coding systems are proposed that rely on quadtree coding to compress the side descriptions produced by EMDSQ. The proposed systems are capable of dynamically adapting the bitrate to the available bandwidth while providing robustness to data losses. Experiments performed under different simulated network conditions demonstrate the effectiveness of the proposed scalable MD approach for image streaming over error-prone channels.

  20. Scalable Multiple-Description Image Coding Based on Embedded Quantization

    Directory of Open Access Journals (Sweden)

    Augustin I. Gavrilescu

    2007-02-01

    Full Text Available Scalable multiple-description (MD) coding allows for fine-grain rate adaptation as well as robust coding of the input source. In this paper, we present a new approach for scalable MD coding of images, which couples the multiresolution nature of the wavelet transform with the robustness and scalability features provided by embedded multiple-description scalar quantization (EMDSQ). Two coding systems are proposed that rely on quadtree coding to compress the side descriptions produced by EMDSQ. The proposed systems are capable of dynamically adapting the bitrate to the available bandwidth while providing robustness to data losses. Experiments performed under different simulated network conditions demonstrate the effectiveness of the proposed scalable MD approach for image streaming over error-prone channels.

  1. 5 CFR 9701.407 - Monitoring performance and providing feedback.

    Science.gov (United States)

    2010-01-01

    ... implementing directives and policies, supervisors must— (a) Monitor the performance of their employees and the organization; and (b) Provide timely periodic feedback to employees on their actual performance with respect to their performance expectations, including one or more interim performance reviews during each appraisal...

  2. Performance and scalability of finite-difference and finite-element wave-propagation modeling on Intel's Xeon Phi

    NARCIS (Netherlands)

    Zhebel, E.; Minisini, S.; Kononov, A.; Mulder, W.A.

    2013-01-01

    With the rapid developments in parallel compute architectures, algorithms for seismic modeling and imaging need to be reconsidered in terms of parallelization. The aim of this paper is to compare scalability of seismic modeling algorithms: finite differences, continuous mass-lumped finite elements

  3. Scalable rendering on PC clusters

    Energy Technology Data Exchange (ETDEWEB)

    WYLIE,BRIAN N.; LEWIS,VASILY; SHIRLEY,DAVID NOYES; PAVLAKOS,CONSTANTINE

    2000-04-25

    This case study presents initial results from research targeted at the development of cost-effective scalable visualization and rendering technologies. The implementations of two 3D graphics libraries based on the popular sort-last and sort-middle parallel rendering techniques are discussed. An important goal of these implementations is to provide scalable rendering capability for extremely large datasets (>> 5 million polygons). Applications can use these libraries for either run-time visualization, by linking to an existing parallel simulation, or for traditional post-processing by linking to an interactive display program. The use of parallel, hardware-accelerated rendering on commodity hardware is leveraged to achieve high performance. Current performance results show that, using current hardware (a small 16-node cluster), they can utilize up to 85% of the aggregate graphics performance and achieve rendering rates in excess of 20 million polygons/second using OpenGL® with lighting, Gouraud shading, and individually specified triangles (not t-stripped).

  4. Can Composite Measures Provide a Different Perspective on Provider Performance Than Individual Measures?

    Science.gov (United States)

    Shwartz, Michael; Rosen, Amy K; Burgess, James F

    2017-12-01

    Composite measures, which aggregate performance on individual measures into a summary score, are increasingly being used to evaluate facility performance. There is little understanding of the unique perspective that composite measures provide. To examine whether high/low (ie, high or low) performers on a composite measure are also high/low performers on most of the individual measures that comprise the composite. We used data from 2 previous studies, one involving 5 measures from 632 hospitals and one involving 28 measures from 112 Veterans Health Administration (VA) nursing homes; and new data on hospital readmissions for 3 conditions from 131 VA hospitals. To compare high/low performers on a composite to high/low performers on the component measures, we used 2-dimensional tables to categorize facilities into high/low performance on the composite and on the individual component measures. In the first study, over a third of the 162 hospitals in the top quintile based on the composite were in the top quintile on at most 1 of the 5 individual measures. In the second study, over 40% of the 27 high-performing nursing homes on the composite were high performers on 8 or fewer of the 28 individual measures. In the third study, 20% of the 61 low performers on the composite were low performers on only 1 of the 3 individual measures. Composite measures can identify as high/low performers facilities that perform "pretty well" (or "pretty poorly") across many individual measures but may not be high/low performers on most of them.
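
    The comparison can be sketched as follows with synthetic data: facilities are flagged as top-quintile on an equal-weight composite of standardized measures, and we count on how many individual measures they are also top-quintile. The composite construction and cutoffs here are illustrative assumptions, not the studies' exact methods.

    # Hedged sketch of composite vs. individual-measure classification on synthetic data.
    import numpy as np

    rng = np.random.default_rng(1)
    scores = rng.normal(size=(100, 5))                 # 100 facilities x 5 individual measures
    z = (scores - scores.mean(0)) / scores.std(0)      # standardize each measure
    composite = z.mean(axis=1)                         # simple equal-weight composite

    def top_quintile(x):
        return x >= np.quantile(x, 0.8)

    high_on_composite = top_quintile(composite)
    high_on_measures = np.column_stack([top_quintile(z[:, j]) for j in range(z.shape[1])])

    counts = high_on_measures[high_on_composite].sum(axis=1)
    print("Composite high performers that are top-quintile on <=1 individual measure:",
          int((counts <= 1).sum()), "of", int(high_on_composite.sum()))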

  5. Optical Backplane Based on Ring-Resonators: Scalability and Performance Analysis for 10 Gb/s OOK-NRZ

    Directory of Open Access Journals (Sweden)

    Giuseppe Rizzelli

    2014-05-01

    Full Text Available The use of architectures that implement optical switching without any need for optoelectronic conversion allows us to overcome the limits imposed by today's electronic backplane, such as power consumption and dissipation, as well as power supply and footprint requirements. We propose a ring-resonator-based optical backplane for router line-card interconnection. In particular, we investigate how the scalability of the architecture is affected by the following parameters: number of line cards, switching-element round-trip losses, frequency drifting due to thermal variations, and waveguide-crossing effects. Moreover, to quantify the signal distortions introduced by filtering operations, the bit error rate for the different parameter conditions is shown for an on-off keying non-return-to-zero (OOK-NRZ) input signal at 10 Gb/s.

  6. Scalable Production of the Silicon-Tin Yin-Yang Hybrid Structure with Graphene Coating for High Performance Lithium-Ion Battery Anodes.

    Science.gov (United States)

    Jin, Yan; Tan, Yingling; Hu, Xiaozhen; Zhu, Bin; Zheng, Qinghui; Zhang, Zijiao; Zhu, Guoying; Yu, Qian; Jin, Zhong; Zhu, Jia

    2017-05-10

    Alloy anodes with high theoretical capacity show great potential for next-generation advanced lithium-ion batteries. Even though huge volume changes during lithium insertion and extraction lead to severe problems, such as pulverization and an unstable solid-electrolyte interphase (SEI), various nanostructures including nanoparticles, nanowires, and porous networks can address the related challenges to improve electrochemical performance. However, the complex and expensive fabrication process hinders the widespread application of nanostructured alloy anodes, which generates an urgent demand for low-cost and scalable processes to fabricate building blocks with fine control of size, morphology, and porosity. Here, we demonstrate a scalable and low-cost process to produce a porous yin-yang hybrid composite anode with graphene coating through high-energy ball-milling and selective chemical etching. With void space to buffer the expansion, the produced functional electrodes demonstrate stable cycling performance of 910 mAh g-1 over 600 cycles at a rate of 0.5C for Si-graphene "yin" particles and 750 mAh g-1 over 300 cycles at 0.2C for Sn-graphene "yang" particles. Therefore, we open up a new approach to fabricate alloy anode materials at low cost, low energy consumption, and large scale. This type of porous silicon or tin composite with graphene coating can also potentially play a significant role in thermoelectric and optoelectronic applications.

  7. Perceptual compressive sensing scalability in mobile video

    Science.gov (United States)

    Bivolarski, Lazar

    2011-09-01

    Scalability features embedded within video sequences allow for streaming over heterogeneous networks to a variety of end devices. Compressive sensing techniques that lower complexity and increase the robustness of video scalability are reviewed. Human visual system models are often used to establish perceptual metrics for evaluating video quality. The combination of the perceptual and compressive sensing approaches is outlined based on recent investigations. The performance and the complexity of different scalability techniques are evaluated. The application of perceptual models to evaluating the quality of compressive sensing scalability is considered in the near perceptually lossless case, and the appropriate coding schemes are reviewed.

  8. Network performance and fault analytics for LTE wireless service providers

    CERN Document Server

    Kakadia, Deepak; Gilgur, Alexander

    2017-01-01

    This book describes how to leverage emerging technologies, such as big data analytics and SDN, to address challenges specific to LTE and IP network performance and fault management data, in order to more efficiently manage and operate LTE wireless networks. The proposed integrated solutions permit the LTE network service provider to operate the entire integrated network, from RAN to core, from UE to application service, as one unified system and correspondingly collect and align disparate key metrics and data, using an integrated and holistic approach to network analysis. LTE wireless network performance and fault management involves the performance and management of network elements in the EUTRAN, EPC, and IP transport components, not only as individual components, but also in the nuances of inter-working of these components. The key metrics for EUTRAN include radio access network accessibility, retainability, integrity, availability, and mobility. The key metrics for EPC include MME accessibility, mobility and...

  9. High-performance flat data center network architecture based on scalable and flow-controlled optical switching system

    Science.gov (United States)

    Calabretta, Nicola; Miao, Wang; Dorren, Harm

    2016-03-01

    Traffic in data center networks (DCNs) is steadily growing to support various applications and virtualization technologies. Multi-tenancy enabling efficient resource utilization is considered a key requirement for next-generation DCs, resulting from the growing demands for services and applications. Virtualization mechanisms and technologies can leverage statistical multiplexing and fast switch reconfiguration to further extend DC efficiency and agility. We present a novel high-performance flat DCN employing bufferless and distributed fast (sub-microsecond) optical switches with wavelength, space, and time switching operation. The fast optical switches can enhance the performance of DCNs by providing large-capacity switching capability and efficiently sharing the data plane resources by exploiting statistical multiplexing. Benefiting from the Software-Defined Networking (SDN) control of the optical switches, virtual DCNs can be flexibly created and reconfigured by the DCN provider. Numerical and experimental investigations of the DCN based on the fast optical switches show the successful setup of virtual network slices for intra-data-center interconnections. Experimental results assessing the DCN performance in terms of latency and packet loss show less than 10^-5 packet loss and 640 ns end-to-end latency at 0.4 load with a 16-packet-size buffer. Numerical investigation of the system performance when the port count of the optical switch is scaled to a 32x32 system indicates that more than 1000 ToRs, each with a Terabit/s interface, can be interconnected, providing Petabit/s capacity. The roadmap to photonic integration of large-port-count optical switches will also be presented.

  10. Demonstration of a 100 GBIT/S (GBPS) Scalable Optical Multiprocessor Interconnect System Using Optical Time Division Multiplexing

    National Research Council Canada - National Science Library

    Prucnal, Paul R

    2002-01-01

    ...) broadcast star computer interconnect has been performed. A highly scalable novel node design provides rapid inter-channel switching capability on the order of the single channel bit period (1.6 ns...

  11. Towards a Scalable and Adaptive Application Support Platform for Large-Scale Distributed E-Sciences in High-Performance Network Environments

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Chase Qishi [New Jersey Inst. of Technology, Newark, NJ (United States); Univ. of Memphis, TN (United States); Zhu, Michelle Mengxia [Southern Illinois Univ., Carbondale, IL (United States)

    2016-06-06

    The advent of large-scale collaborative scientific applications has demonstrated the potential for broad scientific communities to pool globally distributed resources to produce unprecedented data acquisition, movement, and analysis. System resources including supercomputers, data repositories, computing facilities, network infrastructures, storage systems, and display devices have been increasingly deployed at national laboratories and academic institutes. These resources are typically shared by large communities of users over Internet or dedicated networks and hence exhibit an inherent dynamic nature in their availability, accessibility, capacity, and stability. Scientific applications using either experimental facilities or computation-based simulations with various physical, chemical, climatic, and biological models feature diverse scientific workflows as simple as linear pipelines or as complex as directed acyclic graphs, which must be executed and supported over wide-area networks with massively distributed resources. Application users oftentimes need to manually configure their computing tasks over networks in an ad hoc manner, hence significantly limiting the productivity of scientists and constraining the utilization of resources. The success of these large-scale distributed applications requires a highly adaptive and massively scalable workflow platform that provides automated and optimized computing and networking services. This project is to design and develop a generic Scientific Workflow Automation and Management Platform (SWAMP), which contains a web-based user interface specially tailored for a target application, a set of user libraries, and several easy-to-use computing and networking toolkits for application scientists to conveniently assemble, execute, monitor, and control complex computing workflows in heterogeneous high-performance network environments. SWAMP will enable the automation and management of the entire process of scientific

  12. Performance of the provider satisfaction inventory to measure provider satisfaction with diabetes care.

    Science.gov (United States)

    Montori, Victor M; Tweedy, Deborah A; Vogelsang, Debra A; Schryver, Patricia G; Naessens, James M; Smith, Steven A

    2002-01-01

    To develop and validate an inventory to measure provider satisfaction with diabetes management. Using the Mayo Clinic Model of Care, a review of the literature, and expert input, we developed a 4-category (chronic disease management, collaborative team practice, outcomes, and supportive environment), 29-item, 7-point-per-item Provider Satisfaction Inventory (PSI). For evaluation of the PSI, we mailed the survey to 192 primary-care and specialized providers from 8 practice sites (of whom 60 primary-care providers were participating in either usual or planned diabetes care). The Cronbach α score was used to assess the instrument's internal reliability. Participating providers indicated satisfaction or dissatisfaction with management of chronic disease by responding to 29 statements. The response rate was 58%. In each category, the Cronbach α score ranged from 0.71 to 0.90. Providers expressed satisfaction with patient-physician relationships, with the contributions of the nurse educator to the team, and with physician leadership. Providers were dissatisfied with their ability to spend adequate time with the patient (3.6 +/- 1.4), their ability to give patients with diabetes necessary personal attention (4.1 +/- 1.2), the efficient passing of communication (4.3 +/- 1.2), and the opportunities for input to change practice (4.3 +/- 1.6). No statistically significant difference (P = 0.12) was found in mean total scores between planned care (5.0 +/- 0.5) and usual care (4.7 +/- 0.6) providers. Moreover, no significant differences were noted across practice sites. The PSI is a reliable and preliminarily valid instrument for measuring provider satisfaction with diabetes care. Use in research and quality improvement activities awaits further validation.

  13. Leveraging Cloud Technology to Provide a Responsive, Reliable and Scalable Backend for the Virtual Ice Sheet Laboratory Using the Ice Sheet System Model and Amazon's Elastic Compute Cloud

    Science.gov (United States)

    Perez, G. L.; Larour, E. Y.; Halkides, D. J.; Cheng, D. L. C.

    2015-12-01

    The Virtual Ice Sheet Laboratory (VISL) is a Cryosphere outreach effort by scientists at the Jet Propulsion Laboratory (JPL) in Pasadena, CA, Earth and Space Research (ESR) in Seattle, WA, and the University of California at Irvine (UCI), with the goal of providing interactive lessons for K-12 and college level students, while conforming to STEM guidelines. At the core of VISL is the Ice Sheet System Model (ISSM), an open-source project developed jointly at JPL and UCI whose main purpose is to model the evolution of the polar ice caps in Greenland and Antarctica. By using ISSM, VISL students have access to state-of-the-art modeling software that is being used to conduct scientific research by users all over the world. However, providing this functionality is by no means simple. The modeling of ice sheets in response to sea and atmospheric temperatures, among many other possible parameters, requires significant computational resources. Furthermore, this service needs to be responsive and capable of handling burst requests produced by classrooms of students. Cloud computing providers represent a burgeoning industry. With major investments by tech giants like Amazon, Google and Microsoft, it has never been easier or more affordable to deploy computational elements on-demand. This is exactly what VISL needs and ISSM is capable of. Moreover, this is a promising alternative to investing in expensive and rapidly devaluing hardware.

  14. Using MPI to Implement Scalable Libraries

    Science.gov (United States)

    Lusk, Ewing

    MPI is an instantiation of a general-purpose programming model, and high-performance implementations of the MPI standard have provided scalability for a wide range of applications. Ease of use was not an explicit goal of the MPI design process, which emphasized completeness, portability, and performance. Thus it is not surprising that MPI is occasionally criticized for being inconvenient to use and thus a drag on software developer productivity. One approach to the productivity issue is to use MPI to implement simpler programming models. Such models may limit the range of parallel algorithms that can be expressed, yet provide sufficient generality to benefit a significant number of applications, even from different domains. We illustrate this concept with the ADLB (Asynchronous, Dynamic Load-Balancing) library, which can be used to express manager/worker algorithms in such a way that their execution is scalable, even on the largest machines. ADLB makes sophisticated use of MPI functionality while providing an extremely simple API for the application programmer. We will describe it in the context of solving Sudoku puzzles and a nuclear physics Monte Carlo application currently running on tens of thousands of processors.
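
    A minimal manager/worker sketch in the spirit of (but far simpler than) ADLB is shown below using mpi4py; the tags, task payloads, and the trivial squaring "work" are assumptions for illustration and do not reflect ADLB's actual API.

    # Minimal manager/worker pattern with mpi4py. Run with e.g.:
    #   mpiexec -n 4 python manager_worker.py
    from mpi4py import MPI

    TAG_WORK, TAG_STOP, TAG_RESULT = 1, 2, 3

    def manager(comm, tasks):
        size = comm.Get_size()
        status = MPI.Status()
        results = []
        it = iter(tasks)
        active = 0
        # hand one task to each worker to start
        for w in range(1, size):
            task = next(it, None)
            if task is None:
                comm.send(None, dest=w, tag=TAG_STOP)
            else:
                comm.send(task, dest=w, tag=TAG_WORK)
                active += 1
        # receive results; reuse the now-idle worker for the next task
        while active:
            result = comm.recv(source=MPI.ANY_SOURCE, tag=TAG_RESULT, status=status)
            results.append(result)
            task = next(it, None)
            if task is None:
                comm.send(None, dest=status.Get_source(), tag=TAG_STOP)
                active -= 1
            else:
                comm.send(task, dest=status.Get_source(), tag=TAG_WORK)
        return results

    def worker(comm):
        status = MPI.Status()
        while True:
            task = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
            if status.Get_tag() == TAG_STOP:
                break
            comm.send(task * task, dest=0, tag=TAG_RESULT)   # trivial stand-in for real work

    if __name__ == "__main__":
        comm = MPI.COMM_WORLD
        if comm.Get_rank() == 0:
            print(sorted(manager(comm, range(20))))
        else:
            worker(comm)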

  15. Grassmann Averages for Scalable Robust PCA

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Black, Michael J.

    2014-01-01

    As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can arbit......, making it scalable to “big noisy data.” We demonstrate TGA for background modeling, video restoration, and shadow removal. We show scalability by performing robust PCA on the entire Star Wars IV movie....

  16. LHCb: Performance evaluation and capacity planning for a scalable and highly available virtualization infrastructure for the LHCb experiment

    CERN Multimedia

    Sborzacchi, F; Neufeld, N

    2013-01-01

    Virtual computing is often used to satisfy different needs: reduce costs, reduce resources, simplify maintenance and, last but not least, add flexibility. The use of virtualization in a complex system, such as a farm of PCs that control the hardware of an experiment (PLCs, power supplies, gas, magnets, ...), puts us in a condition where not only High Performance requirements need to be carefully considered, but also a deep analysis of strategies to achieve a certain level of High Availability. We conducted a performance evaluation on different and comparable storage/network/virtualization platforms. The performance is measured using a series of independent benchmarks, testing the speed and the stability of multiple VMs running heavy-load operations on the I/O of virtualized storage and the virtualized network. The results from the benchmark tests allowed us to study and evaluate how the different workloads of VMs interact with the hardware/software resource layers.

  17. The Effect of Providing Breakfast in Class on Student Performance

    Science.gov (United States)

    Imberman, Scott A.; Kugler, Adriana D.

    2014-01-01

    Many schools have recently experimented with moving breakfast from the cafeteria to the classroom. We examine whether such a program increases achievement, grades, and attendance rates. We exploit quasi-random timing of program implementation that allows for a difference-in-differences identification strategy. We find that providing breakfast in…

  18. A scalable healthcare information system based on a service-oriented architecture.

    Science.gov (United States)

    Yang, Tzu-Hsiang; Sun, Yeali S; Lai, Feipei

    2011-06-01

    Many existing healthcare information systems are composed of a number of heterogeneous systems and face the important issue of system scalability. This paper first describes the comprehensive healthcare information systems used in National Taiwan University Hospital (NTUH) and then presents a service-oriented architecture (SOA)-based healthcare information system (HIS) based on the HL7 service standard. The proposed architecture focuses on system scalability, in terms of both hardware and software. Moreover, we describe how scalability is implemented in rightsizing, service groups, databases, and hardware. Although SOA-based systems sometimes display poor performance, a performance evaluation of our SOA-based HIS shows that the average response times for the outpatient, inpatient, and emergency HL7Central systems are 0.035, 0.04, and 0.036 s, respectively. The outpatient, inpatient, and emergency WebUI average response times are 0.79, 1.25, and 0.82 s. The scalability of the rightsizing project and our evaluation results provide evidence that SOA can deliver system scalability and sustainability in a highly demanding healthcare information system.

  19. Towards Scalable Graph Computation on Mobile Devices

    Science.gov (United States)

    Chen, Yiqi; Lin, Zhiyuan; Pienta, Robert; Kahng, Minsuk; Chau, Duen Horng

    2015-01-01

    Mobile devices have become increasingly central to our everyday activities, due to their portability, multi-touch capabilities, and ever-improving computational power. Such attractive features have spurred research interest in leveraging mobile devices for computation. We explore a novel approach that aims to use a single mobile device to perform scalable graph computation on large graphs that do not fit in the device's limited main memory, opening up the possibility of performing on-device analysis of large datasets, without relying on the cloud. Based on the familiar memory mapping capability provided by today's mobile operating systems, our approach to scale up computation is powerful and intentionally kept simple to maximize its applicability across the iOS and Android platforms. Our experiments demonstrate that an iPad mini can perform fast computation on large real graphs with as many as 272 million edges (Google+ social graph), at a speed that is only a few times slower than a 13″ Macbook Pro. Through creating a real world iOS app with this technique, we demonstrate the strong potential application for scalable graph computation on a single mobile device using our approach. PMID:25859564
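
    The memory-mapping idea can be sketched as follows with NumPy's memmap: the edge list lives in a file, the OS pages it in on demand, and the computation streams over it in chunks. The file name, chunk size, and out-degree computation are illustrative assumptions (the paper targets mobile-OS mmap, not NumPy).

    # Hedged sketch: process an on-disk edge list without loading it all into memory.
    import numpy as np

    # Build a small synthetic edge list on disk: each row is (src, dst) as int32.
    edges = np.random.randint(0, 1_000, size=(100_000, 2)).astype(np.int32)
    edges.tofile("edges.bin")

    # Map the file instead of reading it, then stream over it in chunks.
    edge_map = np.memmap("edges.bin", dtype=np.int32, mode="r").reshape(-1, 2)
    degree = np.zeros(1_000, dtype=np.int64)
    chunk = 10_000
    for start in range(0, edge_map.shape[0], chunk):
        block = edge_map[start:start + chunk]
        degree += np.bincount(block[:, 0], minlength=1_000)   # out-degree per vertex

    print("max out-degree:", degree.max())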

  20. Physical Profiling Performance of Air Force Primary Care Providers

    Science.gov (United States)

    2017-08-09

    Abbreviations: MTF, medical treatment facility; OR, odds ratio; PCP, primary care provider; PHA, Periodic Health Assessment; SE, standard error; SME, subject matter expert. ...ascertain if predictors existed to augment PCP screening. This study was a cross-sectional, retrospective medical records review of active duty U.S. Air Force (AF) members receiving care in an AF medical treatment facility (MTF) between October 31, 2013, and September 30, 2014, who had at least one

  1. A High Performance Computing Study of a Scalable FISST-Based Approach to Multi-Target, Multi-Sensor Tracking

    Science.gov (United States)

    Hussein, I.; Wilkins, M.; Roscoe, C.; Faber, W.; Chakravorty, S.; Schumacher, P.

    2016-09-01

    Finite Set Statistics (FISST) is a rigorous Bayesian multi-hypothesis management tool for the joint detection, classification and tracking of multi-sensor, multi-object systems. Implicit within the approach are solutions to the data association and target label-tracking problems. The full FISST filtering equations, however, are intractable. While FISST-based methods such as the PHD and CPHD filters are tractable, they require heavy moment approximations to the full FISST equations that result in a significant loss of information contained in the collected data. In this paper, we review Smart Sampling Markov Chain Monte Carlo (SSMCMC) that enables FISST to be tractable while avoiding moment approximations. We study the effect of tuning key SSMCMC parameters on tracking quality and computation time. The study is performed on a representative space object catalog with varying numbers of RSOs. The solution is implemented in the Scala computing language at the Maui High Performance Computing Center (MHPCC) facility.

  2. Numeric Analysis for Relationship-Aware Scalable Streaming Scheme

    Directory of Open Access Journals (Sweden)

    Heung Ki Lee

    2014-01-01

    Full Text Available Frequent packet loss of media data is a critical problem that degrades the quality of streaming services over mobile networks. Packet loss invalidates frames containing lost packets and other related frames at the same time. Indirect loss caused by losing packets decreases the quality of streaming. A scalable streaming service can decrease the amount of dropped multimedia resulting from a single packet loss. Content providers typically divide one large media stream into several layers through a scalable streaming service and then provide each scalable layer to the user depending on the mobile network. Also, a scalable streaming service makes it possible to decode partial multimedia data depending on the relationship between frames and layers. Therefore, a scalable streaming service provides a way to decrease the wasted multimedia data when one packet is lost. However, the hierarchical structure between frames and layers of scalable streams determines the service quality of the scalable streaming service. Even if whole packets of layers are transmitted successfully, they cannot be decoded as a result of the absence of reference frames and layers. Therefore, the complicated relationship between frames and layers in a scalable stream increases the volume of abandoned layers. For providing a high-quality scalable streaming service, we choose a proper relationship between scalable layers as well as the amount of transmitted multimedia data depending on the network situation. We prove that a simple scalable scheme outperforms a complicated scheme in an error-prone network. We suggest an adaptive set-top box (AdaptiveSTB to lower the dependency between scalable layers in a scalable stream. Also, we provide a numerical model to obtain the indirect loss of multimedia data and apply it to various multimedia streams. Our AdaptiveSTB enhances the quality of a scalable streaming service by removing indirect loss.
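
    A minimal sketch of the indirect-loss notion, under an assumed layer-dependency graph and loss pattern (both illustrative, not the paper's numerical model), is:

    # Hedged sketch: a layer is decodable only if it arrives AND every layer it references
    # is decodable, so one lost packet can indirectly invalidate dependent layers.
    def decodable(layer, received, deps, memo=None):
        memo = {} if memo is None else memo
        if layer in memo:
            return memo[layer]
        ok = layer in received and all(decodable(d, received, deps, memo) for d in deps.get(layer, ()))
        memo[layer] = ok
        return ok

    # Layers: base B, temporal enhancement T1 depends on B, spatial S1 on B, S2 on S1 and T1.
    deps = {"T1": ["B"], "S1": ["B"], "S2": ["S1", "T1"]}
    received = {"B", "S1", "S2"}          # T1's packet was lost

    indirect_loss = {layer for layer in received if not decodable(layer, received, deps)}
    print("indirectly lost layers:", indirect_loss)   # S2 arrives but cannot be decoded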

  3. Performance Verification of Production-Scalable Energy-Efficient Solutions: Winchester/Camberley Homes Mixed-Humid Climate

    Energy Technology Data Exchange (ETDEWEB)

    Mallay, D. [Partnership for Home Innovation, Upper Marlboro, MD (United States); Wiehagen, J. [Partnership for Home Innovation, Upper Marlboro, MD (United States)

    2014-07-01

    Winchester/Camberley Homes collaborated with the Building America team Partnership for Home Innovation to develop a new set of high performance home designs that could be applicable on a production scale. The new home designs are to be constructed in the mixed humid climate zone and could eventually apply to all of the builder's home designs to meet or exceed future energy codes or performance-based programs. However, the builder recognized that the combination of new wall framing designs and materials, higher levels of insulation in the wall cavity, and more detailed air sealing to achieve lower infiltration rates changes the moisture characteristics of the wall system. In order to ensure long term durability and repeatable successful implementation with few call-backs, the project team demonstrated through measured data that the wall system functions as a dynamic system, responding to changing interior and outdoor environmental conditions within recognized limits of the materials that make up the wall system. A similar investigation was made with respect to the complete redesign of the HVAC systems to significantly improve efficiency while maintaining indoor comfort. Recognizing the need to demonstrate the benefits of these efficiency features, the builder offered a new house model to serve as a test case to develop framing designs, evaluate material selections and installation requirements, changes to work scopes and contractor learning curves, as well as to compare theoretical performance characteristics with measured results.

  4. Performance Verification of Production-Scalable Energy-Efficient Solutions: Winchester/Camberley Homes Mixed-Humid Climate

    Energy Technology Data Exchange (ETDEWEB)

    Mallay, D.; Wiehagen, J.

    2014-07-01

    Winchester/Camberley Homes with the Building America program and its NAHB Research Center Industry Partnership collaborated to develop a new set of high performance home designs that could be applicable on a production scale. The new home designs are to be constructed in the mixed humid climate zone four and could eventually apply to all of the builder's home designs to meet or exceed future energy codes or performance-based programs. However, the builder recognized that the combination of new wall framing designs and materials, higher levels of insulation in the wall cavity, and more detailed air sealing to achieve lower infiltration rates changes the moisture characteristics of the wall system. In order to ensure long term durability and repeatable successful implementation with few call-backs, this report demonstrates through measured data that the wall system functions as a dynamic system, responding to changing interior and outdoor environmental conditions within recognized limits of the materials that make up the wall system. A similar investigation was made with respect to the complete redesign of the heating, cooling, air distribution, and ventilation systems intended to optimize the equipment size and configuration to significantly improve efficiency while maintaining indoor comfort. Recognizing the need to demonstrate the benefits of these efficiency features, the builder offered a new house model to serve as a test case to develop framing designs, evaluate material selections and installation requirements, changes to work scopes and contractor learning curves, as well as to compare theoretical performance characteristics with measured results.

  5. Bioreactor Scalability: Laboratory-Scale Bioreactor Design Influences Performance, Ecology, and Community Physiology in Expanded Granular Sludge Bed Bioreactors.

    Science.gov (United States)

    Connelly, Stephanie; Shin, Seung G; Dillon, Robert J; Ijaz, Umer Z; Quince, Christopher; Sloan, William T; Collins, Gavin

    2017-01-01

    Studies investigating the feasibility of new, or improved, biotechnologies, such as wastewater treatment digesters, inevitably start with laboratory-scale trials. However, it is rarely determined whether laboratory-scale results reflect full-scale performance or microbial ecology. The Expanded Granular Sludge Bed (EGSB) bioreactor, which is a high-rate anaerobic digester configuration, was used as a model to address that knowledge gap in this study. Two laboratory-scale idealizations of the EGSB-a one-dimensional and a three- dimensional scale-down of a full-scale design-were built and operated in triplicate under near-identical conditions to a full-scale EGSB. The laboratory-scale bioreactors were seeded using biomass obtained from the full-scale bioreactor, and, spent water from the distillation of whisky from maize was applied as substrate at both scales. Over 70 days, bioreactor performance, microbial ecology, and microbial community physiology were monitored at various depths in the sludge-beds using 16S rRNA gene sequencing (V4 region), specific methanogenic activity (SMA) assays, and a range of physical and chemical monitoring methods. SMA assays indicated dominance of the hydrogenotrophic pathway at full-scale whilst a more balanced activity profile developed during the laboratory-scale trials. At each scale, Methanobacterium was the dominant methanogenic genus present. Bioreactor performance overall was better at laboratory-scale than full-scale. We observed that bioreactor design at laboratory-scale significantly influenced spatial distribution of microbial community physiology and taxonomy in the bioreactor sludge-bed, with 1-D bioreactor types promoting stratification of each. In the 1-D laboratory bioreactors, increased abundance of Firmicutes was associated with both granule position in the sludge bed and increased activity against acetate and ethanol as substrates. We further observed that stratification in the sludge-bed in 1-D laboratory

  7. Cost-effective scalable synthesis of mesoporous germanium particles via a redox-transmetalation reaction for high-performance energy storage devices.

    Science.gov (United States)

    Choi, Sinho; Kim, Jieun; Choi, Nam-Soon; Kim, Min Gyu; Park, Soojin

    2015-02-24

    Nanostructured germanium is a promising material for high-performance energy storage devices. However, synthesizing it in a cost-effective and simple manner on a large scale remains a significant challenge. Herein, we report a redox-transmetalation reaction-based route for the large-scale synthesis of mesoporous germanium particles from germanium oxide at temperatures of 420-600 °C. We could confirm that a unique redox-transmetalation reaction occurs between Zn(0) and Ge(4+) at approximately 420 °C using temperature-dependent in situ X-ray absorption fine structure analysis. This reaction has several advantages, which include (i) the successful synthesis of germanium particles at a low temperature (∼450 °C), (ii) the accommodation of large volume changes, owing to the mesoporous structure of the germanium particles, and (iii) the ability to synthesize the particles in a cost-effective and scalable manner, as inexpensive metal oxides are used as the starting materials. The optimized mesoporous germanium anode exhibits a reversible capacity of ∼1400 mA h g(-1) after 300 cycles at a rate of 0.5 C (corresponding to the capacity retention of 99.5%), as well as stable cycling in a full cell containing a LiCoO2 cathode with a high energy density (charge capacity = 286.62 mA h cm(-3)).

  8. Scalable analysis of Big pathology image data cohorts using efficient methods and high-performance computing strategies.

    Science.gov (United States)

    Kurc, Tahsin; Qi, Xin; Wang, Daihou; Wang, Fusheng; Teodoro, George; Cooper, Lee; Nalisnik, Michael; Yang, Lin; Saltz, Joel; Foran, David J

    2015-12-01

    We describe a suite of tools and methods that form a core set of capabilities for researchers and clinical investigators to evaluate multiple analytical pipelines and quantify sensitivity and variability of the results while conducting large-scale studies in investigative pathology and oncology. The overarching objective of the current investigation is to address the challenges of large data sizes and high computational demands. The proposed tools and methods take advantage of state-of-the-art parallel machines and efficient content-based image searching strategies. The content-based image retrieval (CBIR) algorithms can quickly detect and retrieve image patches similar to a query patch using a hierarchical analysis approach. The analysis component based on high performance computing can carry out consensus clustering on 500,000 data points using a large shared memory system. Our work demonstrates that efficient CBIR algorithms and high performance computing can be leveraged for efficient analysis of large microscopy images to meet the challenges of clinically salient applications in pathology. These technologies enable researchers and clinical investigators to make more effective use of the rich informational content contained within digitized microscopy specimens.

  9. Highly nitrogen-doped carbon capsules: scalable preparation and high-performance applications in fuel cells and lithium ion batteries.

    Science.gov (United States)

    Hu, Chuangang; Xiao, Ying; Zhao, Yang; Chen, Nan; Zhang, Zhipan; Cao, Minhua; Qu, Liangti

    2013-04-07

    Highly nitrogen-doped carbon capsules (hN-CCs) have been successfully prepared by using inexpensive melamine and glyoxal as precursors via solvothermal reaction and carbonization. With great promise for large-scale production, the hN-CCs, having a large surface area and high-level nitrogen content (N/C atomic ratio of ca. 13%), possess superior crossover resistance, selective activity, and catalytic stability towards the oxygen reduction reaction for fuel cells in alkaline medium. As a new anode material in lithium-ion batteries, hN-CCs also exhibit excellent cycle performance and high rate capacity, with a reversible capacity as high as 1046 mA h g(-1) at a current density of 50 mA g(-1) after 50 cycles. These features make the hN-CCs developed in this study promising as suitable substitutes for the expensive noble metal catalysts in next-generation alkaline fuel cells, and as advanced electrode materials in lithium-ion batteries.

  10. Scalable Robust Principal Component Analysis Using Grassmann Averages.

    Science.gov (United States)

    Hauberg, Søren; Feragen, Aasa; Enficiaud, Raffi; Black, Michael J

    2016-11-01

    In large datasets, manual data verification is impossible, and we must expect the number of outliers to increase with data size. While principal component analysis (PCA) can reduce data size, and scalable solutions exist, it is well known that outliers can arbitrarily corrupt the results. Unfortunately, state-of-the-art approaches for robust PCA are not scalable. We note that in a zero-mean dataset, each observation spans a one-dimensional subspace, giving a point on the Grassmann manifold. We show that the average subspace corresponds to the leading principal component for Gaussian data. We provide a simple algorithm for computing this Grassmann Average (GA), and show that the subspace estimate is less sensitive to outliers than PCA for general distributions. Because averages can be efficiently computed, we immediately gain scalability. We exploit robust averaging to formulate the Robust Grassmann Average (RGA) as a form of robust PCA. The resulting Trimmed Grassmann Average (TGA) is appropriate for computer vision because it is robust to pixel outliers. The algorithm has linear computational complexity and minimal memory requirements. We demonstrate TGA for background modeling, video restoration, and shadow removal. We show scalability by performing robust PCA on the entire Star Wars IV movie, a task beyond any current method. Source code is available online.
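
    The robust-averaging idea above can be sketched in a few lines. The following is a minimal NumPy illustration of the fixed-point iteration, under the assumption that a simple sign-aligned, norm-weighted spherical average is an acceptable stand-in; the function name and convergence settings are ours, not the authors' reference implementation.

    ```python
    import numpy as np

    def grassmann_average(X, n_iter=20, seed=0):
        """Estimate a leading-PC-like direction by averaging 1-D subspaces.

        X : (n_samples, n_features) array, assumed zero-mean.
        Returns a unit vector spanning the average subspace.
        """
        rng = np.random.default_rng(seed)
        weights = np.linalg.norm(X, axis=1)          # observation energies
        U = X / np.maximum(weights[:, None], 1e-12)  # unit directions
        q = rng.standard_normal(X.shape[1])
        q /= np.linalg.norm(q)
        for _ in range(n_iter):
            signs = np.sign(U @ q)                   # align antipodal directions
            signs[signs == 0] = 1.0
            q_new = (signs * weights) @ U            # weighted average direction
            q_new /= np.linalg.norm(q_new)
            if np.allclose(q_new, q):
                break
            q = q_new
        return q

    # Toy usage: the recovered direction should be close to the leading principal component.
    X = np.random.default_rng(1).standard_normal((500, 5)) @ np.diag([5, 2, 1, 1, 1])
    X -= X.mean(axis=0)
    print(grassmann_average(X))
    ```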

  11. Portable and Transparent Message Compression in MPI Libraries to Improve the Performance and Scalability of Parallel Applications

    Energy Technology Data Exchange (ETDEWEB)

    Albonesi, David; Burtscher, Martin

    2009-04-17

    The goal of this project has been to develop a lossless compression algorithm for message-passing libraries that can accelerate HPC systems by reducing the communication time. Because both compression and decompression have to be performed in software in real time, the algorithm has to be extremely fast while still delivering a good compression ratio. During the first half of this project, they designed a new compression algorithm called FPC for scientific double-precision data, made the source code available on the web, and published two papers describing its operation, the first in the proceedings of the Data Compression Conference and the second in the IEEE Transactions on Computers. At comparable average compression ratios, this algorithm compresses and decompresses 10 to 100 times faster than BZIP2, DFCM, FSD, GZIP, and PLMI on the three architectures tested. With prediction tables that fit into the CPU's L1 data cache, FPC delivers a guaranteed throughput of six gigabits per second on a 1.6 GHz Itanium 2 system. The C source code and documentation of FPC are posted online and have already been downloaded hundreds of times. To evaluate FPC, they gathered 13 real-world scientific datasets from around the globe, including satellite data, crash-simulation data, and messages from HPC systems. Based on the large number of requests they received, they also made these datasets available to the community (with permission of the original sources). While FPC represents a great step forward, it soon became clear that its throughput was too slow for the emerging 10 gigabits per second networks. Hence, no speedup can be gained by including this algorithm in an MPI library. They therefore changed the aim of the second half of the project. Instead of implementing FPC in an MPI library, they refocused their efforts to develop a parallel compression algorithm to further boost the throughput. After all, all modern high-end microprocessors contain multiple CPUs on a
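
    The report contains no code, but the flavor of predictive floating-point compression can be conveyed with a simplified sketch, assuming NumPy and a last-value predictor in place of FPC's fcm/dfcm hash tables (so names and compression ratios here are illustrative only): predict each double, XOR the prediction with the actual value, and store only the non-zero tail bytes of the residual plus a one-byte length header.

    ```python
    import struct
    import numpy as np

    def compress(values):
        """Simplified predictive FP compressor: last-value prediction + XOR,
        then drop the leading zero bytes of each 64-bit residual."""
        out = bytearray()
        prev = 0
        for v in np.asarray(values, dtype=np.float64):
            bits = struct.unpack('<Q', struct.pack('<d', float(v)))[0]
            resid = bits ^ prev                      # small when prediction is good
            prev = bits
            tail = resid.to_bytes(8, 'big').lstrip(b'\x00')
            out.append(len(tail))                    # 0..8 length header byte
            out.extend(tail)
        return bytes(out)

    def decompress(blob, count):
        vals, prev, i = [], 0, 0
        for _ in range(count):
            n = blob[i]; i += 1
            resid = int.from_bytes(blob[i:i + n], 'big'); i += n
            prev ^= resid
            vals.append(struct.unpack('<d', struct.pack('<Q', prev))[0])
        return vals

    data = np.linspace(0.0, 1.0, 1000)               # smooth data compresses well
    packed = compress(data)
    assert np.allclose(decompress(packed, len(data)), data)   # lossless round trip
    print(len(packed), 'bytes vs', data.nbytes, 'raw')
    ```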

  12. Scalable Dynamic Instrumentation for BlueGene/L

    Energy Technology Data Exchange (ETDEWEB)

    Schulz, M; Ahn, D; Bernat, A; de Supinski, B R; Ko, S Y; Lee, G; Rountree, B

    2005-09-08

    Dynamic binary instrumentation for performance analysis on new, large-scale architectures such as the IBM Blue Gene/L system (BG/L) poses new challenges. Their scale, with potentially hundreds of thousands of compute nodes, requires new, more scalable mechanisms to deploy and to organize binary instrumentation and to collect the resulting data gathered by the inserted probes. Further, many of these new machines do not support full operating systems on the compute nodes; rather, they rely on light-weight custom compute kernels that do not support daemon-based implementations. We describe the design and current status of a new implementation of the DPCL (Dynamic Probe Class Library) API for BG/L. DPCL provides an easy-to-use layer for dynamic instrumentation of parallel MPI applications based on the DynInst dynamic instrumentation mechanism for sequential platforms. Our work includes modifying DynInst to control instrumentation from remote I/O nodes and porting DPCL's communication to use MRNet, a scalable data reduction network for collecting performance data. We describe extensions to the DPCL API that support instrumentation of task subsets and aggregation of collected performance data. Overall, our implementation provides a scalable infrastructure that enables efficient binary instrumentation on BG/L.

  13. Scalable Synthesis of Triple-Core-Shell Nanostructures of TiO2 @MnO2 @C for High Performance Supercapacitors Using Structure-Guided Combustion Waves.

    Science.gov (United States)

    Shin, Dongjoon; Shin, Jungho; Yeo, Taehan; Hwang, Hayoung; Park, Seonghyun; Choi, Wonjoon

    2018-01-22

    Core-shell nanostructures of metal oxides and carbon-based materials have emerged as outstanding electrode materials for supercapacitors and batteries. However, their synthesis requires complex procedures that incur high costs and long processing times. Herein, a new route is proposed for synthesizing triple-core-shell nanoparticles of TiO2@MnO2@C using structure-guided combustion waves (SGCWs), which originate from incomplete combustion inside chemical-fuel-wrapped nanostructures, and their application in supercapacitor electrodes. SGCWs transform TiO2 to TiO2@C and TiO2@MnO2 to TiO2@MnO2@C via the incompletely combusted carbonaceous fuels under an open-air atmosphere, in seconds. The synthesized carbon layers act as templates for MnO2 shells in TiO2@C and organic shells of TiO2@MnO2@C. The TiO2@MnO2@C-based electrodes exhibit a greater specific capacitance (488 F g-1 at 5 mV s-1) and capacitance retention (97.4% after 10 000 cycles at 1.0 V s-1), while the absence of MnO2 and carbon shells leads to severe degradation in specific capacitance and capacitance retention. Because the core-TiO2 nanoparticles and carbon shell prevent the deformation of the inner and outer sides of the MnO2 shell, the nanostructures of the TiO2@MnO2@C are preserved despite the long-term cycling, giving the superior performance. This SGCW-driven fabrication enables the scalable synthesis of multiple-core-shell structures applicable to diverse electrochemical applications. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Design and Implementation of Ceph: A Scalable Distributed File System

    Energy Technology Data Exchange (ETDEWEB)

    Weil, S A; Brandt, S A; Miller, E L; Long, D E; Maltzahn, C

    2006-04-19

    File system designers continue to look to new architectures to improve scalability. Object-based storage diverges from server-based (e.g., NFS) and SAN-based storage systems by coupling processors and memory with disk drives, delegating low-level allocation to object storage devices (OSDs) and decoupling I/O (read/write) from metadata (file open/close) operations. However, even recent object-based systems inherit decades-old architectural choices going back to early UNIX file systems, limiting their ability to effectively scale to hundreds of petabytes. We present Ceph, a distributed file system that provides excellent performance and reliability with unprecedented scalability. Ceph maximizes the separation between data and metadata management by replacing allocation tables with a pseudo-random data distribution function (CRUSH) designed for heterogeneous and dynamic clusters of unreliable OSDs. We leverage OSD intelligence to distribute data replication, failure detection, and recovery with semi-autonomous OSDs running a specialized local object storage file system (EBOFS). Finally, Ceph is built around a dynamic distributed metadata management cluster that provides extremely efficient metadata management and seamlessly adapts to a wide range of general-purpose and scientific computing file system workloads. We present performance measurements under a variety of workloads that show superior I/O performance and scalable metadata management (more than a quarter million metadata ops/sec).
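
    CRUSH itself is hierarchy- and failure-domain-aware, and the sketch below is not Ceph's actual algorithm; it is a hypothetical rendezvous-hashing example meant only to illustrate the general idea of computing placement from a hash instead of consulting an allocation table.

    ```python
    import hashlib

    def place(object_id, osds, replicas=3):
        """Pick `replicas` OSDs for an object by hashing alone (no lookup table).
        Any client that knows the OSD list computes the same placement."""
        def score(osd):
            digest = hashlib.sha256(f"{object_id}:{osd}".encode()).hexdigest()
            return int(digest, 16)
        return sorted(osds, key=score, reverse=True)[:replicas]

    osds = [f"osd.{i}" for i in range(12)]
    print(place("volume42/block7", osds))   # deterministic, table-free placement
    ```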

  15. The intergroup protocols: Scalable group communication for the internet

    Energy Technology Data Exchange (ETDEWEB)

    Berket, Karlo [Univ. of California, Santa Barbara, CA (United States)

    2000-12-04

    Reliable group ordered delivery of multicast messages in a distributed system is a useful service that simplifies the programming of distributed applications. Such a service helps to maintain the consistency of replicated information and to coordinate the activities of the various processes. With the increasing popularity of the Internet, there is an increasing interest in scaling the protocols that provide this service to the environment of the Internet. The InterGroup protocol suite, described in this dissertation, provides such a service, and is intended for the environment of the Internet with scalability to large numbers of nodes and high latency links. The InterGroup protocols approach the scalability problem from various directions. They redefine the meaning of group membership, allow voluntary membership changes, add a receiver-oriented selection of delivery guarantees that permits heterogeneity of the receiver set, and provide a scalable reliability service. The InterGroup system comprises several components, executing at various sites within the system. Each component provides part of the services necessary to implement a group communication system for the wide-area. The components can be categorized as: (1) control hierarchy, (2) reliable multicast, (3) message distribution and delivery, and (4) process group membership. We have implemented a prototype of the InterGroup protocols in Java, and have tested the system performance in both local-area and wide-area networks.

  16. PKI Scalability Issues

    OpenAIRE

    Slagell, Adam J.; Bonilla, Rafael

    2004-01-01

    This report surveys different PKI technologies such as PKIX and SPKI and the issues of PKI that affect scalability. Much focus is spent on certificate revocation methodologies and status verification systems such as CRLs, Delta-CRLs, CRS, Certificate Revocation Trees, Windowed Certificate Revocation, OCSP, SCVP and DVCS.

  17. A Scalable Strategy To Develop Advanced Anode for Sodium-Ion Batteries: Commercial Fe3O4-Derived Fe3O4@FeS with Superior Full-Cell Performance.

    Science.gov (United States)

    Hou, Bao-Hua; Wang, Ying-Ying; Guo, Jin-Zhi; Zhang, Yu; Ning, Qiu-Li; Yang, Yang; Li, Wen-Hao; Zhang, Jing-Ping; Wang, Xin-Long; Wu, Xing-Long

    2018-01-18

    A novel core-shell Fe3O4@FeS, composed of an Fe3O4 core and an FeS shell with the morphology of regular octahedra, has been prepared via a facile and scalable strategy employing commercial Fe3O4 as the precursor. When used as an anode material for sodium-ion batteries (SIBs), the prepared Fe3O4@FeS combines the merits of FeS and Fe3O4, namely high Na-storage capacity and superior cycling stability, respectively. The optimized Fe3O4@FeS electrode shows ultralong cycle life and outstanding rate capability. For instance, it retains 90.8% of its capacity, with a reversible capacity of 169 mAh g-1 after 750 cycles at 0.2 A g-1, and delivers 151 mAh g-1 at a high current density of 2 A g-1, about 7.5 times the Na-storage capacity of commercial Fe3O4. More importantly, the prepared Fe3O4@FeS also exhibits excellent full-cell performance. The assembled Fe3O4@FeS//Na3V2(PO4)2O2F sodium-ion full battery gives a reversible capacity of 157 mAh g-1 after 50 cycles at 0.5 A g-1 with a capacity retention of 92.3% and a Coulombic efficiency of around 100%, demonstrating its applicability as a promising anode for sodium-ion full batteries. Furthermore, it is also disclosed that such superior electrochemical properties can be attributed to the pseudocapacitive behavior of the FeS shell, as demonstrated by the kinetics studies, as well as to the core-shell structure. In view of the large-scale availability of the commercial precursor and the ease of preparation, this study provides a scalable strategy to develop advanced anode materials for SIBs.

  18. Towards scalable Byzantine fault-tolerant replication

    Science.gov (United States)

    Zbierski, Maciej

    2017-08-01

    Byzantine fault-tolerant (BFT) replication is a powerful technique, enabling distributed systems to remain available and correct even in the presence of arbitrary faults. Unfortunately, existing BFT replication protocols are mostly load-unscalable, i.e. they fail to respond with adequate performance increase whenever new computational resources are introduced into the system. This article proposes a universal architecture facilitating the creation of load-scalable distributed services based on BFT replication. The suggested approach exploits parallel request processing to fully utilize the available resources, and uses a load balancer module to dynamically adapt to the properties of the observed client workload. The article additionally provides a discussion on selected deployment scenarios, and explains how the proposed architecture could be used to increase the dependability of contemporary large-scale distributed systems.

  19. On the Scalability of Time-predictable Chip-Multiprocessing

    DEFF Research Database (Denmark)

    Puffitsch, Wolfgang; Schoeberl, Martin

    2012-01-01

    Real-time systems need a time-predictable execution platform to be able to determine the worst-case execution time statically. In order to be time-predictable, several advanced processor features, such as out-of-order execution and other forms of speculation, have to be avoided. However, just using simple processors is not an option for embedded systems with high demands on computing power. In order to provide high performance and predictability we argue to use multiprocessor systems with a time-predictable memory interface. In this paper we present the scalability of a Java chip-multiprocessor system that is designed to be time-predictable. Adding time-predictable caches is mandatory to achieve scalability with a shared memory multi-processor system. As Java bytecode retains information about the nature of memory accesses, it is possible to implement a memory hierarchy that takes...

  20. Enhanced JPEG2000 Quality Scalability through Block-Wise Layer Truncation

    Directory of Open Access Journals (Sweden)

    Auli-Llinas Francesc

    2010-01-01

    Quality scalability is an important feature of image and video coding systems. In JPEG2000, quality scalability is achieved through the use of quality layers that are formed in the encoder through rate-distortion optimization techniques. Quality layers provide optimal rate-distortion representations of the image when the codestream is transmitted and/or decoded at layer boundaries. Nonetheless, applications such as interactive image transmission, video streaming, or transcoding demand layer fragmentation. The common approach to truncate layers is to keep the initial prefix of the to-be-truncated layer, which may greatly penalize the quality of decoded images, especially when the layer allocation is inadequate. So far, only one method has been proposed in the literature providing enhanced quality scalability for compressed JPEG2000 imagery. However, that method provides quality scalability at the expense of high computational costs, which prevents its application to the aforementioned applications. This paper introduces a Block-Wise Layer Truncation (BWLT) that, requiring negligible computational costs, enhances the quality scalability of compressed JPEG2000 images. The main insight behind BWLT is to dismantle and reassemble the to-be-fragmented layer by selecting the most relevant codestream segments of codeblocks within that layer. The selection process is conceived from a rate-distortion model that finely estimates rate-distortion contributions of codeblocks. Experimental results suggest that BWLT achieves near-optimal performance even when the codestream contains a single quality layer.
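
    The segment-selection step can be pictured as a greedy rate-distortion knapsack. The sketch below uses hypothetical field names and a stand-in distortion estimate rather than the BWLT estimator itself: include the codeblock segments with the best estimated distortion reduction per byte until the byte budget of the truncated layer is exhausted.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Segment:
        codeblock: str      # which codeblock the bytes belong to
        nbytes: int         # rate cost of including this segment
        dist_gain: float    # estimated distortion reduction if included

    def truncate_layer(segments, byte_budget):
        """Greedy rate-distortion truncation: include the segments with the best
        distortion-reduction-per-byte ratio until the byte budget is exhausted."""
        chosen, used = [], 0
        for seg in sorted(segments, key=lambda s: s.dist_gain / s.nbytes, reverse=True):
            if used + seg.nbytes <= byte_budget:
                chosen.append(seg)
                used += seg.nbytes
        return chosen, used

    layer = [Segment("cb0", 120, 30.0), Segment("cb1", 400, 55.0), Segment("cb2", 60, 22.0)]
    print(truncate_layer(layer, byte_budget=200))
    ```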

  1. Advanced technologies for scalable ATLAS conditions database access on the grid

    CERN Document Server

    Basset, R; Dimitrov, G; Girone, M; Hawkings, R; Nevski, P; Valassi, A; Vaniachine, A; Viegas, F; Walker, R; Wong, A

    2010-01-01

    During massive data reprocessing operations an ATLAS Conditions Database application must support concurrent access from numerous ATLAS data processing jobs running on the Grid. By simulating realistic workflows, ATLAS database scalability tests provided feedback for Conditions DB software optimization and allowed precise determination of required distributed database resources. In distributed data processing one must take into account the chaotic nature of Grid computing, characterized by peak loads that can be much higher than average access rates. To validate database performance at peak loads, we tested database scalability at very high concurrent job rates. This has been achieved through coordinated database stress tests performed in a series of ATLAS reprocessing exercises at the Tier-1 sites. The goal of database stress tests is to detect scalability limits of the hardware deployed at the Tier-1 sites, so that server overload conditions can be safely avoided in a production environment. Our analysi...

  2. Scalable Resolution Display Walls

    KAUST Repository

    Leigh, Jason

    2013-01-01

    This article will describe the progress since 2000 on research and development in 2-D and 3-D scalable resolution display walls that are built from tiling individual lower resolution flat panel displays. The article will describe approaches and trends in display hardware construction, middleware architecture, and user-interaction design. The article will also highlight examples of use cases and the benefits the technology has brought to their respective disciplines. © 1963-2012 IEEE.

  3. Scalable Transactions for Web Applications in the Cloud

    NARCIS (Netherlands)

    Zhou, W.; Pierre, G.E.O.; Chi, C.-H.

    2009-01-01

    Cloud Computing platforms provide scalability and high availability properties for web applications but they sacrifice data consistency at the same time. However, many applications cannot afford any data inconsistency. We present a scalable transaction manager for NoSQL cloud database services to

  4. Scalability improvements to NRLMOL for DFT calculations of large molecules

    Science.gov (United States)

    Diaz, Carlos Manuel

    Advances in high performance computing (HPC) have provided a way to treat large, computationally demanding tasks using thousands of processors. With the development of more powerful HPC architectures, the need to create efficient and scalable code has grown more important. Electronic structure calculations are valuable in understanding experimental observations and are routinely used for new materials predictions. For electronic structure calculations, memory and computation time grow with the number of atoms; memory requirements scale as N², where N is the number of atoms. While recent advances in HPC offer platforms with large numbers of cores, the limited amount of memory available on a given node and poor scalability of the electronic structure code hinder efficient usage of these platforms. This thesis presents developments to overcome these bottlenecks in order to study large systems. These developments, which are implemented in the NRLMOL electronic structure code, involve the use of sparse matrix storage formats and linear algebra on sparse and distributed matrices. These developments, along with other related work, now allow ground-state density functional calculations using up to 25,000 basis functions and excited-state calculations using up to 17,000 basis functions while utilizing all cores on a node. An example on a light-harvesting triad molecule is described. Finally, future plans to further improve the scalability are presented.
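
    To illustrate why sparse storage relieves the memory bottleneck (a generic SciPy example, not NRLMOL's own data structures), a matrix with only a few non-zeros per row stored in CSR format keeps just the non-zero values and their indices:

    ```python
    import numpy as np
    from scipy import sparse

    n = 20000                                      # e.g., number of basis functions
    rng = np.random.default_rng(0)
    rows = rng.integers(0, n, size=20 * n)         # ~20 non-zeros per row on average
    cols = rng.integers(0, n, size=20 * n)
    vals = rng.standard_normal(20 * n)

    csr = sparse.coo_matrix((vals, (rows, cols)), shape=(n, n)).tocsr()
    dense_bytes = n * n * 8                        # what a full double-precision matrix would need
    csr_bytes = csr.data.nbytes + csr.indices.nbytes + csr.indptr.nbytes
    print(f"dense: {dense_bytes / 1e9:.1f} GB, CSR: {csr_bytes / 1e6:.1f} MB")
    ```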

  5. Scalable Video Coding with Interlayer Signal Decorrelation Techniques

    Directory of Open Access Journals (Sweden)

    Yang Wenxian

    2007-01-01

    Scalability is one of the essential requirements in the compression of visual data for present-day multimedia communications and storage. The basic building block for providing spatial scalability in the scalable video coding (SVC) standard is the well-known Laplacian pyramid (LP). An LP achieves a multiscale representation of the video as a base-layer signal at lower resolution together with several enhancement-layer signals at successively higher resolutions. In this paper, we propose to improve the coding performance of the enhancement layers through efficient interlayer decorrelation techniques. We first show that, with nonbiorthogonal upsampling and downsampling filters, the base layer and the enhancement layers are correlated. We investigate two structures to reduce this correlation. The first structure updates the base-layer signal by subtracting from it the low-frequency component of the enhancement-layer signal. The second structure modifies the prediction so that the low-frequency component in the new enhancement layer is diminished. The second structure is integrated in the JSVM 4.0 codec with suitable modifications in the prediction modes. Experimental results with some standard test sequences demonstrate coding gains up to 1 dB for I pictures and up to 0.7 dB for both I and P pictures.
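
    The Laplacian pyramid building block can be sketched directly, assuming NumPy and simple 2x average/duplicate filters standing in for the codec's actual up- and downsampling filters:

    ```python
    import numpy as np

    def lp_decompose(img):
        """One Laplacian pyramid level: low-res base layer + full-res residual."""
        base = img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))
        upsampled = np.repeat(np.repeat(base, 2, axis=0), 2, axis=1)
        enhancement = img - upsampled            # detail the base layer cannot represent
        return base, enhancement

    def lp_reconstruct(base, enhancement):
        return np.repeat(np.repeat(base, 2, axis=0), 2, axis=1) + enhancement

    img = np.random.default_rng(0).random((64, 64))
    base, enh = lp_decompose(img)
    assert np.allclose(lp_reconstruct(base, enh), img)   # the LP is perfectly invertible
    ```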

  6. Scalable Detection and Isolation of Phishing

    NARCIS (Netherlands)

    Moreira Moura, Giovane; Pras, Aiko

    2009-01-01

    This paper presents a proposal for scalable detection and isolation of phishing. The main ideas are to move the protection from end users towards the network provider and to employ the novel bad neighborhood concept, in order to detect and isolate both phishing e-mail senders and phishing web

  7. Scalable Reliable SD Erlang Design

    OpenAIRE

    Chechina, Natalia; Trinder, Phil; Ghaffari, Amir; Green, Rickard; Lundin, Kenneth; Virding, Robert

    2014-01-01

    This technical report presents the design of Scalable Distributed (SD) Erlang: a set of language-level changes that aims to enable Distributed Erlang to scale for server applications on commodity hardware with at most 100,000 cores. We cover a number of aspects, specifically anticipated architecture, anticipated failures, scalable data structures, and scalable computation. Two other components that guided us in the design of SD Erlang are design principles and typical Erlang applications. The...

  8. 42 CFR 493.19 - Provider-performed microscopy (PPM) procedures.

    Science.gov (United States)

    2010-10-01

    42 Public Health 5 (2010-10-01). § 493.19 Provider-performed microscopy (PPM) procedures. (a) Requirement. To be categorized as a PPM... ...-field or phase-contrast microscopy. (4) The specimen is labile, or delay in performing the test could...

  9. NWChem: a Comprehensive and Scalable Open-Source Solution for Large Scale Molecular Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Valiev, Marat; Bylaska, Eric J.; Govind, Niranjan; Kowalski, Karol; Straatsma, TP; van Dam, Hubertus JJ; Wang, Dunyou; Nieplocha, Jaroslaw; Apra, Edoardo; Windus, Theresa L.; De Jong, Wibe A.

    2010-09-01

    The NWChem computational chemistry package offers extensive capabilities for large scale simulations of chemical and biological systems. Utilizing a common computational framework, diverse theoretical descriptions can be used to provide the best solution for a given scientific problem. Scalable parallel implementations and modular software design enables efficient utilization of current computational architectures. This paper provides an overview of NWChem focusing primarily on the core theoretical descriptions provided by the code and their parallel performance. In addition, future plans are outlined.

  10. Pay-for-performance for healthcare providers: Design, performance measurement, and (unintended) effects

    NARCIS (Netherlands)

    F. Eijkenaar (Frank)

    2013-01-01

    textabstractHealthcare systems around the world are characterized by a suboptimal delivery of healthcare services. There has been a growing belief among policymakers that many deficiencies (e.g., in the quality of care) stem from flawed provider payment systems creating perverse incentives for

  11. Scalable synthesis of interconnected porous silicon/carbon composites by the Rochow reaction as high-performance anodes of lithium ion batteries.

    Science.gov (United States)

    Zhang, Zailei; Wang, Yanhong; Ren, Wenfeng; Tan, Qiangqiang; Chen, Yunfa; Li, Hong; Zhong, Ziyi; Su, Fabing

    2014-05-12

    Despite the promising application of porous Si-based anodes in future Li-ion batteries, the large-scale synthesis of these materials is still a great challenge. A scalable synthesis of porous Si materials is presented based on the Rochow reaction, which is commonly used to produce organosilane monomers for synthesizing organosilane products in the chemical industry. Commercial Si microparticles were reacted with gaseous CH3Cl over various Cu-based catalyst particles, creating macropores within the unreacted Si, accompanied by carbon deposition, to generate porous Si/C composites. Taking advantage of the interconnected porous structure and the conductive carbon coating obtained after simple post-treatment, these composites exhibit high reversible capacity and long cycle life as anodes. It is expected that, by integrating the organosilane synthesis process and controlling reaction conditions, the manufacture of porous Si-based anodes on an industrial scale is highly possible. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. New Complexity Scalable MPEG Encoding Techniques for Mobile Applications

    Directory of Open Access Journals (Sweden)

    Stephan Mietens

    2004-03-01

    Complexity scalability offers the advantage of one-time design of video applications for a large product family, including mobile devices, without the need to redesign the applications at the algorithmic level to meet the requirements of the different products. In this paper, we present complexity-scalable MPEG encoding whose core modules are modified for scalability. The interdependencies of the scalable modules and the system performance are evaluated. Experimental results show that the scalability gives a smooth change in complexity and corresponding video quality. Scalability is basically achieved by varying the number of computed DCT coefficients and the number of evaluated motion vectors, while other modules are designed such that they scale with these parameters. In the experiments using the “Stefan” sequence, the elapsed execution time of the scalable encoder, reflecting the computational complexity, can be gradually reduced to roughly 50% of its original execution time. The video quality scales between 20 dB and 48 dB PSNR with unity quantizer setting, and between 21.5 dB and 38.5 dB PSNR for different sequences targeting 1500 kbps. The implemented encoder and the scalability techniques can be successfully applied in mobile systems based on MPEG video compression.
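
    As a concrete picture of the "number of computed DCT coefficients" knob (a generic SciPy illustration, not the paper's encoder), one can keep only a k x k low-frequency corner of each 8 x 8 block and observe how the approximation error grows as k shrinks:

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def approx_block(block, k):
        """Keep only the k x k low-frequency DCT coefficients of an 8x8 block."""
        coeffs = dctn(block, norm='ortho')
        mask = np.zeros_like(coeffs)
        mask[:k, :k] = 1.0
        return idctn(coeffs * mask, norm='ortho')

    block = np.random.default_rng(0).random((8, 8))
    for k in (8, 4, 2):                  # fewer coefficients -> lower complexity and quality
        err = np.abs(approx_block(block, k) - block).mean()
        print(f"k={k}: mean abs error {err:.4f}")
    ```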

  13. Performance Measurement for a Logistics Services Provider to the Polymer Industry

    OpenAIRE

    Tok, King Lai

    2007-01-01

    This management project discusses the form of performance measurement system suitable for a logistics services provider who focuses on providing its services to large multinational petrochemical companies in the polymer industry

  14. Compliance Performance: Effects of a Provider Incentive Program and Coding Compliance Plan

    National Research Council Canada - National Science Library

    Tudela, Joseph A

    2004-01-01

    The purpose of this project is to study provider and coder related performance, i.e., provider compliance rate and coder productivity/accuracy rates and average dollar difference between coder and auditor, at Brooke Army Medical Center...

  15. Scalable and Media Aware Adaptive Video Streaming over Wireless Networks

    Science.gov (United States)

    Tizon, Nicolas; Pesquet-Popescu, Béatrice

    2008-12-01

    This paper proposes an advanced video streaming system based on scalable video coding in order to optimize resource utilization in wireless networks with retransmission mechanisms at the radio protocol level. The key component of this system is a packet scheduling algorithm which operates on the different substreams of a main scalable video stream and which is implemented in a so-called media-aware network element. The transport channel considered is a dedicated channel whose parameters (bitrate, loss rate) vary over the long run. Moreover, we propose a combined scalability approach in which common temporal and SNR scalability features can be used jointly with a partitioning of the image into regions of interest. Simulation results show that our approach provides substantial quality gain compared to classical packet transmission methods, and they demonstrate how ROI coding combined with SNR scalability further improves the visual quality.

  16. Scalable parallel communications

    Science.gov (United States)

    Maly, K.; Khanna, S.; Overstreet, C. M.; Mukkamala, R.; Zubair, M.; Sekhar, Y. S.; Foudriat, E. C.

    1992-01-01

    Coarse-grain parallelism in networking (that is, the use of multiple protocol processors running replicated software sending over several physical channels) can be used to provide gigabit communications for a single application. Since parallel network performance is highly dependent on real issues such as hardware properties (e.g., memory speeds and cache hit rates), operating system overhead (e.g., interrupt handling), and protocol performance (e.g., effect of timeouts), we have performed detailed simulation studies of both a bus-based multiprocessor workstation node (based on the Sun Galaxy MP multiprocessor) and a distributed-memory parallel computer node (based on the Touchstone DELTA) to evaluate the behavior of coarse-grain parallelism. Our results indicate: (1) coarse-grain parallelism can deliver multiple 100 Mbps with currently available hardware platforms and existing networking protocols (such as Transmission Control Protocol/Internet Protocol (TCP/IP) and parallel Fiber Distributed Data Interface (FDDI) rings); (2) scale-up is near linear in n, the number of protocol processors and channels (for small n and up to a few hundred Mbps); and (3) since these results are based on existing hardware without specialized devices (except perhaps for some simple modifications of the FDDI boards), this is a low-cost solution to providing multiple 100 Mbps on current machines. In addition, from both the performance analysis and the properties of these architectures, we conclude: (1) multiple processors providing identical services and the use of space division multiplexing for the physical channels can provide better reliability than monolithic approaches (it also provides graceful degradation and low-cost load balancing); (2) coarse-grain parallelism supports running several transport protocols in parallel to provide different types of service (for example, one TCP handles small messages for many users, other TCPs running in parallel provide high bandwidth

  17. Integration of an intelligent systems behavior simulator and a scalable soldier-machine interface

    Science.gov (United States)

    Johnson, Tony; Manteuffel, Chris; Brewster, Benjamin; Tierney, Terry

    2007-04-01

    As the Army's Future Combat Systems (FCS) introduce emerging technologies and new force structures to the battlefield, soldiers will increasingly face new challenges in workload management. The next generation warfighter will be responsible for effectively managing robotic assets in addition to performing other missions. Studies of future battlefield operational scenarios involving the use of automation, including the specification of existing and proposed technologies, will provide significant insight into potential problem areas regarding soldier workload. The US Army Tank Automotive Research, Development, and Engineering Center (TARDEC) is currently executing an Army technology objective program to analyze and evaluate the effect of automated technologies and their associated control devices with respect to soldier workload. The Human-Robotic Interface (HRI) Intelligent Systems Behavior Simulator (ISBS) is a human performance measurement simulation system that allows modelers to develop constructive simulations of military scenarios with various deployments of interface technologies in order to evaluate operator effectiveness. One such interface is TARDEC's Scalable Soldier-Machine Interface (SMI). The scalable SMI provides a configurable machine interface application that is capable of adapting to several hardware platforms by recognizing the physical space limitations of the display device. This paper describes the integration of the ISBS and Scalable SMI applications, which will ultimately benefit both systems. The ISBS will be able to use the Scalable SMI to visualize the behaviors of virtual soldiers performing HRI tasks, such as route planning, and the scalable SMI will benefit from stimuli provided by the ISBS simulation environment. The paper describes the background of each system and details of the system integration approach.

  18. Scalable encryption using alpha rooting

    Science.gov (United States)

    Wharton, Eric J.; Panetta, Karen A.; Agaian, Sos S.

    2008-04-01

    Full and partial encryption methods are important for subscription-based content providers, such as internet and cable TV pay channels. Providers need to be able to protect their products while at the same time being able to provide demonstrations to attract new customers without giving away the full value of the content. If an algorithm were introduced which could provide any level of full or partial encryption in a fast and cost-effective manner, the applications to real-time commercial implementation would be numerous. In this paper, we present a novel application of alpha rooting, using it to achieve fast and straightforward scalable encryption with a single algorithm. We further present the use of a measure of enhancement, the Logarithmic AME, to select optimal parameters for the partial encryption. When parameters are selected using the measure, the output image achieves a balance between protecting the important data in the image while still containing a good overall representation of the image. We show results for this encryption method on a number of images, using histograms to evaluate the effectiveness of the encryption.
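
    Alpha rooting itself is compact to state: keep the phase of a transform and raise its magnitude to a power alpha. Below is a minimal grayscale sketch assuming NumPy; the transform choice and parameter values are illustrative, and the paper's exact encryption/decryption pipeline and its parameter selection via the Logarithmic AME may differ.

    ```python
    import numpy as np

    def alpha_root(img, alpha):
        """Alpha rooting in the 2-D DFT domain: magnitude**alpha, phase preserved.
        Mild alpha enhances detail; aggressive alpha scrambles intensities, which is
        the property exploited here for scalable (partial) encryption."""
        F = np.fft.fft2(img)
        out = np.fft.ifft2(np.abs(F) ** alpha * np.exp(1j * np.angle(F)))
        return np.real(out)

    img = np.random.default_rng(0).random((128, 128))
    light, heavy = alpha_root(img, 0.9), alpha_root(img, 0.3)   # two encryption strengths
    ```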

  19. Scalable Frequent Subgraph Mining

    KAUST Repository

    Abdelhamid, Ehab

    2017-06-19

    A graph is a data structure that contains a set of nodes and a set of edges connecting these nodes. Nodes represent objects while edges model relationships among these objects. Graphs are used in various domains due to their ability to model complex relations among several objects. Given an input graph, the Frequent Subgraph Mining (FSM) task finds all subgraphs with frequencies exceeding a given threshold. FSM is crucial for graph analysis, and it is an essential building block in a variety of applications, such as graph clustering and indexing. FSM is computationally expensive, and its existing solutions are extremely slow. Consequently, these solutions are incapable of mining modern large graphs. This slowness is caused by the underlying approaches of these solutions, which require finding and storing an excessive number of subgraph matches. This dissertation proposes a scalable solution for FSM that avoids the limitations of previous work. This solution is composed of four components. The first component is a single-threaded technique which, for each candidate subgraph, needs to find only a minimal number of matches. The second component is a scalable parallel FSM technique that utilizes a novel two-phase approach. The first phase quickly builds an approximate search space, which is then used by the second phase to optimize and balance the workload of the FSM task. The third component focuses on accelerating frequency evaluation, which is a critical step in FSM. To do so, a machine learning model is employed to predict the type of each graph node, and accordingly, an optimized method is selected to evaluate that node. The fourth component focuses on mining dynamic graphs, such as social networks. To this end, an incremental index is maintained during the dynamic updates. Only this index is processed and updated for the majority of graph updates. Consequently, the search space is significantly pruned and efficiency is improved. The empirical evaluation shows that the
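
    The frequency test at the core of FSM can be illustrated in the transactional setting with a toy networkx example (unrelated to the dissertation's own system): a candidate subgraph is frequent if it occurs in at least a threshold number of the input graphs.

    ```python
    import networkx as nx
    from networkx.algorithms import isomorphism

    def is_frequent(pattern, graphs, min_support):
        """A candidate subgraph is frequent if it appears in at least
        `min_support` of the input graphs."""
        support = sum(
            isomorphism.GraphMatcher(g, pattern).subgraph_is_isomorphic()
            for g in graphs
        )
        return support >= min_support, support

    triangle = nx.cycle_graph(3)
    graphs = [nx.complete_graph(4), nx.path_graph(5), nx.cycle_graph(3)]
    print(is_frequent(triangle, graphs, min_support=2))   # (True, 2)
    ```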

  20. Scalable Nanomanufacturing—A Review

    Directory of Open Access Journals (Sweden)

    Khershed Cooper

    2017-01-01

    This article describes the field of scalable nanomanufacturing, its importance and need, its research activities and achievements. The National Science Foundation is taking a leading role in fostering basic research in scalable nanomanufacturing (SNM). From this effort several novel nanomanufacturing approaches have been proposed, studied and demonstrated, including scalable nanopatterning. This paper will discuss SNM research areas in materials, processes and applications, scale-up methods with project examples, and manufacturing challenges that need to be addressed to move nanotechnology discoveries closer to the marketplace.

  1. DISP: Optimizations towards Scalable MPI Startup

    Energy Technology Data Exchange (ETDEWEB)

    Fu, Huansong [Florida State University, Tallahassee; Pophale, Swaroop S [ORNL; Gorentla Venkata, Manjunath [ORNL; Yu, Weikuan [Florida State University, Tallahassee

    2016-01-01

    Despite the popularity of MPI for high performance computing, the startup of MPI programs faces a scalability challenge as both the execution time and memory consumption increase drastically at scale. We have examined this problem using the collective modules of Cheetah and Tuned in Open MPI as representative implementations. Previous improvements for collectives have focused on algorithmic advances and hardware off-load. In this paper, we examine the startup cost of the collective module within a communicator and explore various techniques to improve its efficiency and scalability. Accordingly, we have developed a new scalable startup scheme with three internal techniques, namely Delayed Initialization, Module Sharing and Prediction-based Topology Setup (DISP). Our DISP scheme greatly benefits the collective initialization of the Cheetah module. At the same time, it helps boost the performance of non-collective initialization in the Tuned module. We evaluate the performance of our implementation on Titan supercomputer at ORNL with up to 4096 processes. The results show that our delayed initialization can speed up the startup of Tuned and Cheetah by an average of 32.0% and 29.2%, respectively, our module sharing can reduce the memory consumption of Tuned and Cheetah by up to 24.1% and 83.5%, respectively, and our prediction-based topology setup can speed up the startup of Cheetah by up to 80%.

  2. Measuring Provider Performance for Physicians Participating in the Merit-Based Incentive Payment System.

    Science.gov (United States)

    Squitieri, Lee; Chung, Kevin C

    2017-07-01

    In 2017, the Centers for Medicare and Medicaid Services began requiring all eligible providers to participate in the Quality Payment Program or face financial reimbursement penalty. The Quality Payment Program outlines two paths for provider participation: the Merit-Based Incentive Payment System and Advanced Alternative Payment Models. For the first performance period beginning in January of 2017, the Centers for Medicare and Medicaid Services estimates that approximately 83 to 90 percent of eligible providers will not qualify for participation in an Advanced Alternative Payment Model and therefore must participate in the Merit-Based Incentive Payment System program. The Merit-Based Incentive Payment System path replaces existing quality-reporting programs and adds several new measures to evaluate providers using four categories of data: (1) quality, (2) cost/resource use, (3) improvement activities, and (4) advancing care information. These categories will be combined to calculate a weighted composite score for each provider or provider group. Composite Merit-Based Incentive Payment System scores based on 2017 performance data will be used to adjust reimbursed payment in 2019. In this article, the authors provide relevant background for understanding value-based provider performance measurement. The authors also discuss Merit-Based Incentive Payment System reporting requirements and scoring methodology to provide plastic surgeons with the necessary information to critically evaluate their own practice capabilities in the context of current performance metrics under the Quality Payment Program.
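
    As a toy illustration of how such a weighted composite is formed, the snippet below uses made-up category scores and placeholder weights, not the CMS-published weights for any performance year.

    ```python
    # Hypothetical category scores on a 0-100 scale and illustrative weights.
    scores  = {"quality": 82, "cost": 70, "improvement_activities": 95, "advancing_care_info": 88}
    weights = {"quality": 0.50, "cost": 0.10, "improvement_activities": 0.15, "advancing_care_info": 0.25}

    composite = sum(scores[c] * weights[c] for c in scores)   # weighted composite, 0-100
    print(f"Composite MIPS-style score: {composite:.1f}")
    ```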

  3. Scalable Gravity Offload System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — A scalable gravity offload device simulates reduced gravity for the testing of various surface system elements such as mobile robots, excavators, habitats, and...

  4. Scalable and template-free synthesis of nanostructured Na{sub 1.08}V{sub 6}O{sub 15} as high-performance cathode material for lithium-ion batteries

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, Shili, E-mail: slzheng@ipe.ac.cn [National Engineering Laboratory for Hydrometallurgical Cleaner Production Technology, Key Laboratory of Green Process and Engineering, Institute of Process Engineering, Chinese Academy of Sciences, Beijing (China); Wang, Xinran; Yan, Hong [National Engineering Laboratory for Hydrometallurgical Cleaner Production Technology, Key Laboratory of Green Process and Engineering, Institute of Process Engineering, Chinese Academy of Sciences, Beijing (China); University of Chinese Academy of Sciences, No. 19A Yuquan Road, Beijing (China); Du, Hao; Zhang, Yi [National Engineering Laboratory for Hydrometallurgical Cleaner Production Technology, Key Laboratory of Green Process and Engineering, Institute of Process Engineering, Chinese Academy of Sciences, Beijing (China)

    2016-09-15

    Highlights: • Nanostructured Na{sub 1.08}V{sub 6}O{sub 15} was synthesized through an additive-free sol-gel process. • The prepared Na{sub 1.08}V{sub 6}O{sub 15} demonstrated high capacity and sufficient cycling stability. • The reaction temperature was optimized to allow scalable Na{sub 1.08}V{sub 6}O{sub 15} fabrication. - Abstract: Developing high-capacity cathode materials with feasibility and scalability is still challenging for lithium-ion batteries (LIBs). In this study, a high-capacity ternary sodium vanadate compound, nanostructured NaV{sub 6}O{sub 15}, was synthesized template-free through a sol-gel process with high production efficiency. The as-prepared sample was systematically post-treated at different temperatures, and the post-annealing temperature was found to determine the cycling stability and capacity of NaV{sub 6}O{sub 15}. The well-crystallized sample exhibited good electrochemical performance with a high specific capacity of 302 mAh g{sup −1} when cycled at a current density of 0.03 mA g{sup −1}. Its relatively long-term cycling stability was characterized by the cell performance under a current density of 1 A g{sup −1}, delivering a reversible capacity of 118 mAh g{sup −1} after 300 cycles with 79% capacity retention and nearly 100% coulombic efficiency, all demonstrating the significant promise of the proposed strategy for large-scale synthesis of NaV{sub 6}O{sub 15} as a high-capacity, high-energy-density cathode for LIBs.

  5. TriG: Next Generation Scalable Spaceborne GNSS Receiver

    Science.gov (United States)

    Tien, Jeffrey Y.; Okihiro, Brian Bachman; Esterhuizen, Stephan X.; Franklin, Garth W.; Meehan, Thomas K.; Munson, Timothy N.; Robison, David E.; Turbiner, Dmitry; Young, Lawrence E.

    2012-01-01

    TriG is the next-generation NASA scalable space GNSS science receiver. It will track all GNSS and additional signals (i.e., GPS, GLONASS, Galileo, Compass, and DORIS). Its scalable 3U architecture is fully software- and firmware-reconfigurable, enabling optimization to meet specific mission requirements. The TriG GNSS EM is currently undergoing testing and is expected to complete full performance testing later this year.

  6. Large performance incentives had the greatest impact on providers whose quality metrics were lowest at baseline.

    Science.gov (United States)

    Greene, Jessica; Hibbard, Judith H; Overton, Valerie

    2015-04-01

    This study examined the impact of Fairview Health Services' primary care provider compensation model, in which 40 percent of compensation was based on clinic-level quality outcomes. Fairview Health Services is a Pioneer accountable care organization in Minnesota. Using publicly reported performance data from 2010 and 2012, we found that Fairview's improvement in quality metrics was not greater than the improvement in other comparable Minnesota medical groups. An analysis of Fairview's administrative data found that the largest predictor of improvement over the first two years of the compensation model was primary care providers' baseline quality performance. Providers whose baseline performance was in the lowest tertile improved three times more, on average, across the three quality metrics studied than those in the middle tertile, and almost six times more than those in the top tertile. As a result, there was a narrowing of variation in performance across all primary care providers at Fairview and a narrowing of the gap in quality between providers who treated the highest-income patient panels and those who treated the lowest-income panels. The large quality incentive fell short of its overall quality improvement aim. However, the results suggest that payment reform may help narrow variation in primary care provider performance, which can translate into narrowing socioeconomic disparities. Project HOPE—The People-to-People Health Foundation, Inc.

  7. Scalable Atomistic Simulation Algorithms for Materials Research

    Directory of Open Access Journals (Sweden)

    Aiichiro Nakano

    2002-01-01

    A suite of scalable atomistic simulation programs has been developed for materials research based on space-time multiresolution algorithms. Design and analysis of parallel algorithms are presented for molecular dynamics (MD) simulations and quantum-mechanical (QM) calculations based on density functional theory. Performance tests have been carried out on 1,088-processor Cray T3E and 1,280-processor IBM SP3 computers. The linear-scaling algorithms have enabled 6.44-billion-atom MD and 111,000-atom QM calculations on 1,024 SP3 processors with parallel efficiency well over 90%. The production-quality programs also feature wavelet-based computational-space decomposition for adaptive load balancing, spacefilling-curve-based adaptive data compression with user-defined error bound for scalable I/O, and octree-based fast visibility culling for immersive and interactive visualization of massive simulation data.
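
    The space-filling-curve idea can be illustrated with a Morton (Z-order) key, which interleaves the bits of 3-D cell indices so that nearby cells map to nearby positions in a 1-D ordering; this is a generic sketch, not the code used in these programs.

    ```python
    def morton3d(x, y, z, bits=10):
        """Interleave the bits of (x, y, z) into a single Z-order key."""
        key = 0
        for i in range(bits):
            key |= ((x >> i) & 1) << (3 * i)
            key |= ((y >> i) & 1) << (3 * i + 1)
            key |= ((z >> i) & 1) << (3 * i + 2)
        return key

    cells = [(i, j, k) for i in range(4) for j in range(4) for k in range(4)]
    ordered = sorted(cells, key=lambda c: morton3d(*c))   # nearby cells stay nearby
    print(ordered[:8])
    ```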

  8. Classification accuracy of claims-based methods for identifying providers failing to meet performance targets

    Science.gov (United States)

    Hubbard, Rebecca A.; Benjamin-Johnson, Rhondee; Onega, Tracy; Smith-Bindman, Rebecca; Zhu, Weiwei; Fenton, Joshua J.

    2014-01-01

    Quality assessment is critical for healthcare reform, but data sources are lacking for the measurement of many important healthcare outcomes. With over 49 million people covered by Medicare as of 2010, Medicare claims data offer a potentially valuable source that could be used in targeted health care quality improvement efforts. However, little is known about the operating characteristics of provider profiling methods using claims-based outcome measures that may estimate provider performance with error. Motivated by the example of screening mammography performance, we compared approaches to identifying providers failing to meet guideline targets using Medicare claims data. We used data from the Breast Cancer Surveillance Consortium and linked Medicare claims to compare claims-based and clinical estimates of cancer detection rate. We then demonstrated the performance of claims-based estimates across a broad range of operating characteristics using simulation studies. We found that identification of poorly performing providers was extremely sensitive to algorithm specificity, with no approach identifying more than 65% of poorly performing providers when claims-based measures had specificity of 0.995 or less. We conclude that claims have the potential to contribute important information on healthcare outcomes to quality improvement efforts. However, to achieve this potential, development of highly accurate claims-based outcome measures should remain a priority. PMID:25302935
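
    Why small amounts of measurement error matter so much is easy to see with a back-of-the-envelope calculation; the prevalence and operating characteristics below are illustrative assumptions, not estimates from the linked data. When poorly performing providers are a minority, even a small false-positive rate swamps the true positives among flagged providers.

    ```python
    # Suppose 10% of providers truly fail the performance target.
    prevalence = 0.10
    sensitivity = 0.80      # the claims-based measure detects 80% of true failures
    for specificity in (0.90, 0.95, 0.995):
        tp = prevalence * sensitivity
        fp = (1 - prevalence) * (1 - specificity)
        ppv = tp / (tp + fp)   # share of flagged providers who truly perform poorly
        print(f"specificity={specificity:.3f}: PPV={ppv:.2f}")
    ```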

  9. Towards Scalable Strain Gauge-Based Joint Torque Sensors.

    Science.gov (United States)

    Khan, Hamza; D'Imperio, Mariapaola; Cannella, Ferdinando; Caldwell, Darwin G; Cuschieri, Alfred; Semini, Claudio

    2017-08-18

    During recent decades, strain gauge-based joint torque sensors have been commonly used to provide high-fidelity torque measurements in robotics. Although measurement of joint torque/force is often required in engineering research and development, the gluing and wiring of strain gauges used as torque sensors pose difficulties during integration within the restricted space available in small joints. The problem is compounded by the need for a scalable geometric design to measure joint torque. In this communication, we describe a novel design of a strain gauge-based mono-axial torque sensor referred to as the square-cut torque sensor (SCTS), the significant features of which are a high degree of linearity, symmetry, and high scalability in terms of both size and measuring range. Most importantly, the SCTS provides easy access for gluing and wiring of the strain gauges on the sensor surface despite the limited available space. We demonstrated that the SCTS was better in terms of symmetry (clockwise and counterclockwise rotation) and linearity. These capabilities have been shown through finite element modeling (ANSYS) and confirmed by data obtained in load testing experiments. The high performance of the SCTS was confirmed by studies involving changes in size, material, and/or wing width and thickness. Finally, we demonstrated that the SCTS can be successfully implemented inside the hip joints of the miniaturized hydraulically actuated quadruped robot MiniHyQ. This communication is based on work presented at the 18th International Conference on Climbing and Walking Robots (CLAWAR).

  10. Towards Scalable Strain Gauge-Based Joint Torque Sensors

    Science.gov (United States)

    D’Imperio, Mariapaola; Cannella, Ferdinando; Caldwell, Darwin G.; Cuschieri, Alfred

    2017-01-01

    During recent decades, strain gauge-based joint torque sensors have been commonly used to provide high-fidelity torque measurements in robotics. Although measurement of joint torque/force is often required in engineering research and development, the gluing and wiring of strain gauges used as torque sensors pose difficulties during integration within the restricted space available in small joints. The problem is compounded by the need for a scalable geometric design to measure joint torque. In this communication, we describe a novel design of a strain gauge-based mono-axial torque sensor referred to as the square-cut torque sensor (SCTS), the significant features of which are a high degree of linearity, symmetry, and high scalability in terms of both size and measuring range. Most importantly, the SCTS provides easy access for gluing and wiring of the strain gauges on the sensor surface despite the limited available space. We demonstrated that the SCTS was better in terms of symmetry (clockwise and counterclockwise rotation) and linearity. These capabilities have been shown through finite element modeling (ANSYS) and confirmed by data obtained in load testing experiments. The high performance of the SCTS was confirmed by studies involving changes in size, material, and/or wing width and thickness. Finally, we demonstrated that the SCTS can be successfully implemented inside the hip joints of the miniaturized hydraulically actuated quadruped robot MiniHyQ. This communication is based on work presented at the 18th International Conference on Climbing and Walking Robots (CLAWAR). PMID:28820446

  11. Factors affecting the performance of maternal health care providers in Armenia

    Directory of Open Access Journals (Sweden)

    Voltero Lauren

    2004-06-01

    Background Over the last five years, international development organizations began to modify and adapt the conventional Performance Improvement Model for use in low-resource settings. This model outlines the five key factors believed to influence performance outcomes: job expectations, performance feedback, environment and tools, motivation and incentives, and knowledge and skills. Each of these factors should be supplied by the organization in which the provider works, and thus organizational support is considered an overarching element for analysis. Little research, domestically or internationally, has been conducted on the actual effects of each of the factors on performance outcomes, and most PI practitioners assume that all the factors are needed in order for performance to improve. This study presents a unique exploration of how the factors, individually as well as in combination, affect the performance of primary reproductive health providers (nurse-midwives) in two regions of Armenia. Methods Two hundred and eighty-five nurses and midwives were observed conducting real or simulated antenatal and postpartum/neonatal care services and interviewed about the presence or absence of the performance factors within their work environment. Results were analyzed to compare average performance with the existence or absence of the factors; then, multiple regression analysis was conducted with the merged datasets to obtain the best models of "predictors" of performance within each clinical service. Results Baseline results revealed that performance was sub-standard in several areas and several performance factors were deficient or nonexistent. The multivariate analysis showed that (a) training in the use of the clinic tools and (b) receiving recognition from the employer or the client/community are factors strongly associated with performance, followed by (c) receiving performance feedback in postpartum care. Other – extraneous

  12. Accurate distortion estimation and optimal bandwidth allocation for scalable H.264 video transmission over MIMO systems.

    Science.gov (United States)

    Jubran, Mohammad K; Bansal, Manu; Kondi, Lisimachos P; Grover, Rohan

    2009-01-01

    In this paper, we propose an optimal strategy for the transmission of scalable video over packet-based multiple-input multiple-output (MIMO) systems. The scalable extension of H.264/AVC that provides combined temporal, quality, and spatial scalability is used. For given channel conditions, we develop a method for the estimation of the distortion of the received video and propose different error concealment schemes. We show the accuracy of our distortion estimation algorithm in comparison with simulated wireless video transmission with packet errors. In the proposed MIMO system, we employ orthogonal space-time block codes (O-STBC) that guarantee independent transmission of different symbols within the block code. In the proposed constrained bandwidth allocation framework, we use the estimated end-to-end decoder distortion to optimally select the application layer parameters, i.e., quantization parameter (QP) and group of pictures (GOP) size, and physical layer parameters, i.e., rate-compatible punctured turbo (RCPT) code rate and symbol constellation. Results show the substantial performance gain by using different symbol constellations across the scalable layers as compared to a fixed constellation.
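
    To make the constrained allocation concrete, the following hedged sketch shows the kind of search such a framework performs: pick the application-layer parameters (QP, GOP size) and physical-layer parameters (code rate, constellation) that minimize an estimated end-to-end distortion subject to a bandwidth budget. The candidate values and the source_rate()/estimate_distortion() models are hypothetical placeholders, not the authors' estimator.

      from itertools import product

      QPS            = [24, 28, 32, 36]          # candidate quantization parameters
      GOP_SIZES      = [8, 16, 32]               # candidate group-of-pictures lengths
      CODE_RATES     = [1/3, 1/2, 2/3]           # candidate channel code rates
      CONSTELLATIONS = {"QPSK": 2, "16QAM": 4}   # bits per symbol

      def source_rate(qp, gop):
          """Toy source-rate model (kbps): lower QP and shorter GOPs cost more bits."""
          return 4000.0 / qp * (1.0 + 8.0 / gop)

      def estimate_distortion(qp, gop, rate, mod, snr_db):
          """Placeholder distortion estimate: grows with QP and with channel losses."""
          channel_penalty = (1.0 - rate) * (4.0 / CONSTELLATIONS[mod]) * max(1.0, 20.0 - snr_db)
          return qp ** 1.5 + 0.5 * gop + 10.0 * channel_penalty

      def best_parameters(bandwidth_kbps, snr_db):
          best = None
          for qp, gop, rate, mod in product(QPS, GOP_SIZES, CODE_RATES, CONSTELLATIONS):
              tx_rate = source_rate(qp, gop) / rate      # channel coding inflates the rate
              if tx_rate > bandwidth_kbps:
                  continue                               # violates the bandwidth budget
              d = estimate_distortion(qp, gop, rate, mod, snr_db)
              if best is None or d < best[0]:
                  best = (d, dict(qp=qp, gop=gop, code_rate=rate, constellation=mod))
          return best

      print(best_parameters(bandwidth_kbps=3000, snr_db=12))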

  13. Procurement risk management practices and supply chain performance among mobile phone service providers in Kenya

    Directory of Open Access Journals (Sweden)

    Emily Adhiambo Okonjo

    2016-02-01

    The aim of this study was to establish the relationship between procurement risk management practices and supply chain performance among mobile phone service providers in Kenya. The study specifically set out to establish the extent to which mobile phone service providers have implemented procurement risk management practices and to determine the relationship between procurement risk management practices and supply chain performance. The study adopted a descriptive study design, collecting data from the four (4) mobile telecommunication companies in Kenya using a self-administered questionnaire. Means, standard deviations, and regression analysis were used to analyze the data collected. The study established that most of the mobile phone service providers in Kenya had implemented procurement risk management practices. It was also clear that there was a highly significant relationship between procurement risk management practices and supply chain performance.

  14. Effect of Prior Cardiopulmonary Resuscitation Knowledge on Compression Performance by Hospital Providers

    Science.gov (United States)

    Burkhardt, Joshua N.; Glick, Joshua E.; Terndrup, Thomas E.

    2014-01-01

    Introduction The purpose of this study was to determine cardiopulmonary resuscitation (CPR) knowledge of hospital providers and whether knowledge affects performance of effective compressions during a simulated cardiac arrest. Methods This cross-sectional study evaluated the CPR knowledge and performance of medical students and ED personnel with current CPR certification. We collected data regarding compression rate, hand placement, depth, and recoil via a questionnaire to determine knowledge, and then we assessed performance using 60 seconds of compressions on a simulation mannequin. Results Data from 200 enrollments were analyzed by evaluators blinded to subject knowledge. Regarding knowledge, 94% of participants correctly identified parameters for rate, 58% for hand placement, 74% for depth, and 94% for recoil. Participants identifying an effective rate of ≥100 performed compressions at a significantly higher rate than participants identifying <100 (μ=117 vs. 94, p<0.001). Participants identifying correct hand placement performed significantly more compressions adherent to guidelines than those identifying incorrect placement (μ=86% vs. 72%, p<0.01). No significant differences were found in depth or recoil performance based on knowledge of guidelines. Conclusion Knowledge of guidelines was variable; however, CPR knowledge significantly impacted certain aspects of performance, namely rate and hand placement, whereas depth and recoil were not affected. Depth of compressions was poor regardless of prior knowledge, and knowledge did not correlate with recoil performance. Overall performance was suboptimal and additional training may be needed to ensure consistent, effective performance and therefore better outcomes after cardiopulmonary arrest. PMID:25035744

  15. Effect of Prior Cardiopulmonary Resuscitation Knowledge on Compression Performance by Hospital Providers

    Directory of Open Access Journals (Sweden)

    Joshua N. Burkhardt

    2014-07-01

    Introduction: The purpose of this study was to determine cardiopulmonary resuscitation (CPR) knowledge of hospital providers and whether knowledge affects performance of effective compressions during a simulated cardiac arrest. Methods: This cross-sectional study evaluated the CPR knowledge and performance of medical students and ED personnel with current CPR certification. We collected data regarding compression rate, hand placement, depth, and recoil via a questionnaire to determine knowledge, and then we assessed performance using 60 seconds of compressions on a simulation mannequin. Results: Data from 200 enrollments were analyzed by evaluators blinded to subject knowledge. Regarding knowledge, 94% of participants correctly identified parameters for rate, 58% for hand placement, 74% for depth, and 94% for recoil. Participants identifying an effective rate of ≥100 performed compressions at a significantly higher rate than participants identifying <100 (µ=117 vs. 94, p<0.001). Participants identifying correct hand placement performed significantly more compressions adherent to guidelines than those identifying incorrect placement (µ=86% vs. 72%, p<0.01). No significant differences were found in depth or recoil performance based on knowledge of guidelines. Conclusion: Knowledge of guidelines was variable; however, CPR knowledge significantly impacted certain aspects of performance, namely rate and hand placement, whereas depth and recoil were not affected. Depth of compressions was poor regardless of prior knowledge, and knowledge did not correlate with recoil performance. Overall performance was suboptimal and additional training may be needed to ensure consistent, effective performance and therefore better outcomes after cardiopulmonary arrest.

  16. Composite Measures of Health Care Provider Performance: A Description of Approaches

    Science.gov (United States)

    Shwartz, Michael; Restuccia, Joseph D; Rosen, Amy K

    2015-01-01

    Context Since the Institute of Medicine’s 2001 report Crossing the Quality Chasm, there has been a rapid proliferation of quality measures used in quality-monitoring, provider-profiling, and pay-for-performance (P4P) programs. Although individual performance measures are useful for identifying specific processes and outcomes for improvement and tracking progress, they do not easily provide an accessible overview of performance. Composite measures aggregate individual performance measures into a summary score. By reducing the amount of data that must be processed, they facilitate (1) benchmarking of an organization’s performance, encouraging quality improvement initiatives to match performance against high-performing organizations, and (2) profiling and P4P programs based on an organization’s overall performance. Methods We describe different approaches to creating composite measures, discuss their advantages and disadvantages, and provide examples of their use. Findings The major issues in creating composite measures are (1) whether to aggregate measures at the patient level through all-or-none approaches or the facility level, using one of the several possible weighting schemes; (2) when combining measures on different scales, how to rescale measures (using z scores, range percentages, ranks, or 5-star categorizations); and (3) whether to use shrinkage estimators, which increase precision by smoothing rates from smaller facilities but also decrease transparency. Conclusions Because provider rankings and rewards under P4P programs may be sensitive to both context and the data, careful analysis is warranted before deciding to implement a particular method. A better understanding of both when and where to use composite measures and the incentives created by composite measures are likely to be important areas of research as the use of composite measures grows. PMID:26626986

  17. Views of mental health care consumers on public reporting of information on provider performance.

    Science.gov (United States)

    Stein, Bradley D; Kogan, Jane N; Essock, Susan; Fudurich, Stephanie

    2009-05-01

    This qualitative study examined consumer preferences regarding the content and use of provider performance data and other provider information to aid in consumers' decision making. Focus groups were conducted with 41 adults who were consumers of mental health care, and discussions were transcribed and analyzed with standard qualitative research methods. Consumers supported trends toward enhancing information about providers and its availability. Several key themes emerged, including the need for easily accessible information and the most and least useful types of information. Current efforts to share provider performance information do not meet consumer preferences. Modest changes in the types of information being shared and the manner in which it is shared may substantially enhance use of such information. Such changes may help consumers to be more informed and empowered in making decisions about care, improve the quality of the care delivered, and support the movement toward a more recovery-focused system of care.

  18. Comparing the performance of English mental health providers in achieving patient outcomes.

    Science.gov (United States)

    Moran, Valerie; Jacobs, Rowena

    2015-09-01

    Evidence on provider payment systems that incorporate patient outcomes is limited for mental health care. In England, funding for mental health care services is changing to a prospective payment system with a future objective of linking some part of provider payment to outcomes. This research examines performance of mental health providers offering hospital and community services, in order to investigate if some are delivering better outcomes. Outcomes are measured using the Health of the Nation Outcome Scales (HoNOS) - a clinician-rated routine outcome measure (CROM) mandated for national use. We use data from the Mental Health Minimum Data Set (MHMDS) - a dataset on specialist mental health care with national coverage - for the years 2011/12 and 2012/13 with a final estimation sample of 305,960 observations with follow-up HoNOS scores. A hierarchical ordered probit model is used and outcomes are risk adjusted with independent variables reflecting demographic, need, severity and social indicators. A hierarchical linear model is also estimated with the follow-up total HoNOS score as the dependent variable and the baseline total HoNOS score included as a risk-adjuster. Provider performance is captured by a random effect that is quantified using Empirical Bayes methods. We find that worse outcomes are associated with severity and better outcomes with older age and social support. After adjusting outcomes for various risk factors, variations in performance are still evident across providers. This suggests that if the intention to link some element of provider payment to outcomes becomes a reality, some providers may gain financially whilst others may lose. The paper contributes to the limited literature on risk adjustment of outcomes and performance assessment of providers in mental health in the context of prospective activity-based payment systems. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Provider performance measures in private and public programs: achieving meaningful alignment with flexibility to innovate.

    Science.gov (United States)

    Higgins, Aparna; Veselovskiy, German; McKown, Lauren

    2013-08-01

    In recent years there has been a significant expansion in the use of provider performance measures for quality improvement, payment, and public reporting. Using data from a survey of health plans, we characterize the use of such performance measures by private payers. We also compare the use of these measures among selected private and public programs. We studied twenty-three health plans with 121 million commercial enrollees--66 percent of the national commercial enrollment. The health plans reported using 546 distinct performance measures. There was much variation in the use of performance measures in both private and public payment and care delivery programs, despite common areas of focus that included cardiovascular conditions, diabetes, and preventive services. We conclude that policy makers and stakeholders who seek less variability in the use of performance measures to increase consistency should balance this goal with the need for flexibility to meet the needs of specific populations and promote innovation.

  20. Does the new conceptual framework provide adequate concepts for reporting relevant information about performance?

    NARCIS (Netherlands)

    Brouwer, A.; Faramarzi, A; Hoogendoorn, M.

    2014-01-01

    The basic question we raise in this paper is whether the 2013 Discussion Paper (DP 2013) on the Conceptual Framework provides adequate principles for reporting an entity’s performance and what improvements could be made in light of both user needs and evidence from academic literature. DP 2013

  1. 16 CFR 1401.5 - Providing performance and technical data to purchasers by labeling.

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 2 2010-01-01 2010-01-01 false Providing performance and technical data to purchasers by labeling. 1401.5 Section 1401.5 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION... identification and warning statement may appear on a firmly affixed tag, tape, card, or sticker or similar...

  2. 16 CFR 1407.3 - Providing performance and technical data to purchasers by labeling.

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 2 2010-01-01 2010-01-01 false Providing performance and technical data to purchasers by labeling. 1407.3 Section 1407.3 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION... the portable generator that cannot be removed without the use of tools, and (B) On a location that is...

  3. The Effect of Performance Feedback Provided to Student-Teachers Working with Multiple Disabilities

    Science.gov (United States)

    Safak, Pinar; Yilmaz, Hatice Cansu; Demiryurek, Pinar; Dogus, Mustafa

    2016-01-01

    The aim of the study was to investigate the effect of performance feedback (PF) provided to student teachers working with students with multiple disabilities and visual impairment (MDVI) on their teaching skills. The study group of the research was composed of 11 student teachers attending to the final year of the Teaching Students with Visual…

  4. Scalable algorithms for contact problems

    CERN Document Server

    Dostál, Zdeněk; Sadowská, Marie; Vondrák, Vít

    2016-01-01

    This book presents a comprehensive and self-contained treatment of the authors’ newly developed scalable algorithms for the solutions of multibody contact problems of linear elasticity. The brand new feature of these algorithms is theoretically supported numerical scalability and parallel scalability demonstrated on problems discretized by billions of degrees of freedom. The theory supports solving multibody frictionless contact problems, contact problems with possibly orthotropic Tresca’s friction, and transient contact problems. It covers BEM discretization, jumping coefficients, floating bodies, mortar non-penetration conditions, etc. The exposition is divided into four parts, the first of which reviews appropriate facets of linear algebra, optimization, and analysis. The most important algorithms and optimality results are presented in the third part of the volume. The presentation is complete, including continuous formulation, discretization, decomposition, optimality results, and numerical experimen...
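
    For orientation, a standard textbook statement (not quoted from this volume) of the discretized frictionless multibody contact problem the book addresses is the quadratic program

      \min_{u}\; \tfrac{1}{2}\, u^{\top} K u - f^{\top} u
      \quad\text{subject to}\quad B_I u \le c_I, \qquad B_E u = c_E,

    where K is the (possibly singular, block-diagonal) stiffness matrix of the bodies, f collects the applied forces, B_I expresses the linearized non-penetration conditions on the contact interfaces, and B_E enforces the gluing conditions between subdomains; floating bodies are what make K singular and the problem challenging to solve scalably.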

  5. Should trained lay providers perform HIV testing? A systematic review to inform World Health Organization guidelines.

    Science.gov (United States)

    Kennedy, C E; Yeh, P T; Johnson, C; Baggaley, R

    2017-12-01

    New strategies for HIV testing services (HTS) are needed to achieve UN 90-90-90 targets, including diagnosis of 90% of people living with HIV. Task-sharing HTS to trained lay providers may alleviate health worker shortages and better reach target groups. We conducted a systematic review of studies evaluating HTS by lay providers using rapid diagnostic tests (RDTs). Peer-reviewed articles were included if they compared HTS using RDTs performed by trained lay providers to HTS by health professionals, or to no intervention. We also reviewed data on end-users' values and preferences around lay providers performing HTS. Searching was conducted through 10 online databases, reviewing reference lists, and contacting experts. Screening and data abstraction were conducted in duplicate using systematic methods. Of 6113 unique citations identified, 5 studies were included in the effectiveness review and 6 in the values and preferences review. One US-based randomized trial found patients' uptake of HTS doubled with lay providers (57% vs. 27%; percent difference: 30, 95% confidence interval: 27-32). Studies from Cambodia, Malawi, and South Africa comparing testing quality between lay providers and laboratory staff found little discordance and high sensitivity and specificity (≥98%). Values and preferences studies generally found support for lay providers conducting HTS, particularly in non-hypothetical scenarios. Based on the evidence supporting the use of trained lay providers, a WHO expert panel recommended lay providers be allowed to conduct HTS using HIV RDTs. Uptake of this recommendation could expand HIV testing to more people globally.

  6. Silicon nanophotonics for scalable quantum coherent feedback networks

    Energy Technology Data Exchange (ETDEWEB)

    Sarovar, Mohan; Brif, Constantin [Sandia National Laboratories, Livermore, CA (United States); Soh, Daniel B.S. [Sandia National Laboratories, Livermore, CA (United States); Stanford University, Edward L. Ginzton Laboratory, Stanford, CA (United States); Cox, Jonathan; DeRose, Christopher T.; Camacho, Ryan; Davids, Paul [Sandia National Laboratories, Albuquerque, NM (United States)

    2016-12-15

    The emergence of coherent quantum feedback control (CQFC) as a new paradigm for precise manipulation of dynamics of complex quantum systems has led to the development of efficient theoretical modeling and simulation tools and opened avenues for new practical implementations. This work explores the applicability of the integrated silicon photonics platform for implementing scalable CQFC networks. If proven successful, on-chip implementations of these networks would provide scalable and efficient nanophotonic components for autonomous quantum information processing devices and ultra-low-power optical processing systems at telecommunications wavelengths. We analyze the strengths of the silicon photonics platform for CQFC applications and identify the key challenges to both the theoretical formalism and experimental implementations. In particular, we determine specific extensions to the theoretical CQFC framework (which was originally developed with bulk-optics implementations in mind), required to make it fully applicable to modeling of linear and nonlinear integrated optics networks. We also report the results of a preliminary experiment that studied the performance of an in situ controllable silicon nanophotonic network of two coupled cavities and analyze the properties of this device using the CQFC formalism. (orig.)

  7. The Medicare Electronic Health Record Incentive Program: provider performance on core and menu measures.

    Science.gov (United States)

    Wright, Adam; Feblowitz, Joshua; Samal, Lipika; McCoy, Allison B; Sittig, Dean F

    2014-02-01

    To measure performance by eligible health care providers on CMS's meaningful use measures. Medicare Electronic Health Record Incentive Program Eligible Professionals Public Use File (PUF), which contains data on meaningful use attestations by 237,267 eligible providers through May 31, 2013. Cross-sectional analysis of the 15 core and 10 menu measures pertaining to use of EHR functions reported in the PUF. Providers in the dataset performed strongly on all core measures, with the most frequent response for each of the 15 measures being 90-100 percent compliance, even when the threshold for a particular measure was lower (e.g., 30 percent). PCPs had higher scores than specialists for computerized order entry, maintaining an active medication list, and documenting vital signs, while specialists had higher scores for maintaining a problem list, recording patient demographics and smoking status, and for providing patients with an after-visit summary. In fact, 90.2 percent of eligible providers claimed at least one exclusion, and half claimed two or more. Providers are successfully attesting to CMS's requirements, and often exceeding the thresholds required by CMS; however, some troubling patterns in exclusions are present. CMS should raise program requirements in future years. © Health Research and Educational Trust.

  8. The influence of system quality characteristics on health care providers' performance: Empirical evidence from Malaysia.

    Science.gov (United States)

    Mohd Salleh, Mohd Idzwan; Zakaria, Nasriah; Abdullah, Rosni

    The Ministry of Health Malaysia initiated the total hospital information system (THIS) as the first national electronic health record system for use in selected public hospitals across the country. Since its implementation 15 years ago, there has been a critical requirement for a systematic evaluation to assess its effectiveness in coping with the current system, task complexity, and rapid technological changes. The study aims to assess system quality factors that predict health care providers' performance with the electronic health record in a single public hospital in Malaysia. Non-probability sampling was employed for data collection among selected providers in a single hospital for two months. Data cleaning and bias checking were performed before final analysis in partial least squares-structural equation modeling. Convergent and discriminant validity assessments satisfied the required criteria in the reflective measurement model. The structural model output revealed that adequate infrastructure, system interoperability, security control, and system compatibility were the significant predictors, with system compatibility being the most critical characteristic influencing an individual health care provider's performance. The previous DeLone and McLean information system success models should be extended to incorporate these technological factors in the medical system research domain to examine the effectiveness of modern electronic health record systems. In this study, care providers' performance was expected when system usage fit with patients' needs, which eventually increased their productivity. Copyright © 2016 King Saud Bin Abdulaziz University for Health Sciences. Published by Elsevier Ltd. All rights reserved.

  9. Enhanced jump performance when providing augmented feedback compared to an external or internal focus of attention.

    Science.gov (United States)

    Keller, Martin; Lauber, Benedikt; Gottschalk, Marius; Taube, Wolfgang

    2015-01-01

    Factors such as an external focus of attention (EF) and augmented feedback (AF) have been shown to improve performance. However, the efficacy of providing AF to enhance motor performance has never been compared with the effects of an EF or an internal focus of attention (IF). Therefore, the aim of the present study was to identify which of the three conditions (AF, EF or IF) leads to the highest performance in a countermovement jump (CMJ). Nineteen volunteers performed 12 series of 8 maximum CMJs. Changes in jump height between conditions and within the series were analysed. Jump heights differed between conditions: higher jump heights at the end of the series in AF (+1.60%) and lower jump heights at the end of the series in EF (-1.79%) and IF (-1.68%) were observed. Muscle activity did not differ between conditions. The differences between conditions and within the series provide evidence that AF leads to higher performance and better progression within one series than EF and IF. Consequently, AF seems to outperform EF and IF when maximising jump height.

  10. Effects of performance measure implementation on clinical manager and provider motivation.

    Science.gov (United States)

    Damschroder, Laura J; Robinson, Claire H; Francis, Joseph; Bentley, Douglas R; Krein, Sarah L; Rosland, Ann-Marie; Hofer, Timothy P; Kerr, Eve A

    2014-12-01

    Clinical performance measurement has been a key element of efforts to transform the Veterans Health Administration (VHA). However, there are a number of signs that current performance measurement systems used within and outside the VHA may be reaching the point of maximum benefit to care and in some settings, may be resulting in negative consequences to care, including overtreatment and diminished attention to patient needs and preferences. Our research group has been involved in a long-standing partnership with the office responsible for clinical performance measurement in the VHA to understand and develop potential strategies to mitigate the unintended consequences of measurement. Our aim was to understand how the implementation of diabetes performance measures (PMs) influences management actions and day-to-day clinical practice. This is a mixed methods study design based on quantitative administrative data to select study facilities and quantitative data from semi-structured interviews. Sixty-two network-level and facility-level executives, managers, front-line providers and staff participated in the study. Qualitative content analyses were guided by a team-based consensus approach using verbatim interview transcripts. A published interpretive motivation theory framework is used to describe potential contributions of local implementation strategies to unintended consequences of PMs. Implementation strategies used by management affect providers' response to PMs, which in turn potentially undermines provision of high-quality patient-centered care. These include: 1) feedback reports to providers that are dissociated from a realistic capability to address performance gaps; 2) evaluative criteria set by managers that are at odds with patient-centered care; and 3) pressure created by managers' narrow focus on gaps in PMs that is viewed as more punitive than motivating. Next steps include working with VHA leaders to develop and test implementation approaches to help

  11. Scalable shared-memory multiprocessing

    CERN Document Server

    Lenoski, Daniel E

    1995-01-01

    Dr. Lenoski and Dr. Weber have experience with leading-edge research and practical issues involved in implementing large-scale parallel systems. They were key contributors to the architecture and design of the DASH multiprocessor. Currently, they are involved with commercializing scalable shared-memory technology.

  12. Scalability study of solid xenon

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, J.; Cease, H.; Jaskierny, W. F.; Markley, D.; Pahlka, R. B.; Balakishiyeva, D.; Saab, T.; Filipenko, M.

    2015-04-01

    We report a demonstration of the scalability of optically transparent xenon in the solid phase for use as a particle detector above a kilogram scale. We employed a cryostat cooled by liquid nitrogen combined with a xenon purification and chiller system. A modified Bridgman technique reproduces large-scale optically transparent solid xenon.

  13. Real-time video communication improves provider performance in a simulated neonatal resuscitation.

    Science.gov (United States)

    Fang, Jennifer L; Carey, William A; Lang, Tara R; Lohse, Christine M; Colby, Christopher E

    2014-11-01

    To determine if a real-time audiovisual link with a neonatologist, termed video-assisted resuscitation or VAR, improves provider performance during a simulated neonatal resuscitation scenario. Using high-fidelity simulation, 46 study participants were presented with a neonatal resuscitation scenario. The control group performed independently, while the intervention group utilized VAR. Time to effective ventilation was compared using Wilcoxon rank sum tests. Providers' use of the corrective steps for ineffective ventilation per the NRP algorithm was compared using Cochran-Armitage trend tests. The time needed to establish effective ventilation was significantly reduced in the intervention group when compared to the control group (mean time 2 min 42 s versus 4 min 11 s). VAR during a simulated neonatal resuscitation scenario significantly reduces the time to establish effective ventilation and improves provider adherence to NRP guidelines. This technology may be a means for regional centers to support local providers during a neonatal emergency to improve patient safety and improve neonatal outcomes. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  14. Scalability Optimization of Seamless Positioning Service

    Directory of Open Access Journals (Sweden)

    Juraj Machaj

    2016-01-01

    Recently, positioning services have been getting more attention not only within the research community but also from service providers. From the service providers' point of view, a positioning service that works seamlessly in all environments, for example, indoor, dense urban, and rural, has huge potential to open new markets. However, such a system must not only provide accurate position estimates but also be scalable and resistant to fake positioning requests. In previous works we proposed a modular system that is able to provide seamless positioning in various environments. The system automatically selects the optimal positioning module based on available radio signals. The system currently consists of three positioning modules—GPS, GSM-based positioning, and Wi-Fi-based positioning. In this paper we propose an algorithm that reduces the time needed for position estimation and thus improves the scalability of the modular system, allowing positioning services to be provided to a larger number of users. Such an improvement is extremely important for real-world applications where a large number of users require position estimates, since positioning error is affected by the response time of the positioning server.
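
    As a hedged illustration of the module-selection idea (not the authors' algorithm), the sketch below picks a positioning module from the radio signals currently visible to the device; the priorities and thresholds are made-up assumptions.

      def select_module(gps_satellites, wifi_ap_count, gsm_cell_count):
          """Return the positioning module expected to give the best estimate."""
          if gps_satellites >= 4:        # outdoors: a GPS fix needs at least four satellites
              return "GPS"
          if wifi_ap_count >= 3:         # indoors/dense urban: enough APs for fingerprinting
              return "WIFI_FINGERPRINT"
          if gsm_cell_count >= 1:        # fallback: coarse cell-based positioning
              return "GSM_CELL"
          return "NONE"

      print(select_module(gps_satellites=0, wifi_ap_count=5, gsm_cell_count=3))  # WIFI_FINGERPRINT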

  15. Myria: Scalable Analytics as a Service

    Science.gov (United States)

    Howe, B.; Halperin, D.; Whitaker, A.

    2014-12-01

    At the UW eScience Institute, we're working to empower non-experts, especially in the sciences, to write and use data-parallel algorithms. To this end, we are building Myria, a web-based platform for scalable analytics and data-parallel programming. Myria's internal model of computation is the relational algebra extended with iteration, such that every program is inherently data-parallel, just as every query in a database is inherently data-parallel. But unlike databases, iteration is a first-class concept, allowing us to express machine learning tasks, graph traversal tasks, and more. Programs can be expressed in a number of languages and can be executed on a number of execution environments, but we emphasize a particular language called MyriaL that supports both imperative and declarative styles and a particular execution engine called MyriaX that uses an in-memory column-oriented representation and asynchronous iteration. We deliver Myria over the web as a service, providing an editor, performance analysis tools, and catalog browsing features in a single environment. We find that this web-based "delivery vector" is critical in reaching non-experts: they are insulated from the irrelevant technical work associated with installation, configuration, and resource management. The MyriaX backend, one of several execution runtimes we support, is a main-memory, column-oriented, RDBMS-on-the-worker system that supports cyclic data flows as a first-class citizen and has been shown to outperform competitive systems on 100-machine cluster sizes. I will describe the Myria system, give a demo, and present some new results in large-scale oceanographic microbiology.
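
    To make "relational algebra extended with iteration" concrete, here is a small, illustrative sketch (plain Python standing in for the data-parallel relational operators; MyriaL syntax is not shown): graph reachability expressed as a join-project-union loop run to a fixed point.

      edges = {(1, 2), (2, 3), (3, 4), (5, 6)}   # relation Edge(src, dst)
      reach = {1}                                 # relation Reach(node), seeded with node 1

      while True:
          # one iteration: Reach := Reach UNION project_dst(Reach JOIN Edge ON node = src)
          new = reach | {dst for (src, dst) in edges if src in reach}
          if new == reach:                        # fixed point: no new tuples were derived
              break
          reach = new

      print(sorted(reach))                        # -> [1, 2, 3, 4]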

  16. The Impact of Order Source Misattribution on Computerized Provider Order Entry (CPOE) Performance Metrics.

    Science.gov (United States)

    Gellert, George A; Catzoela, Linda; Patel, Lajja; Bruner, Kylynn; Friedman, Felix; Ramirez, Ricardo; Saucedo, Lilliana; Webster, S Luke; Gillean, John A

    2017-01-01

    One strategy to foster adoption of computerized provider order entry (CPOE) by physicians is the monthly distribution of a list identifying the number and use rate percentage of orders entered electronically versus on paper by each physician in the facility. Physicians care about CPOE use rate reports because they support the patient safety and quality improvement objectives of CPOE implementation. Certain physician groups are also motivated because they participate in contracted financial and performance arrangements that include incentive payments or financial penalties for meeting (or failing to meet) a specified CPOE use rate target. Misattribution of order sources can hinder accurate measurement of individual physician CPOE use and can thereby undermine providers' confidence in their reported performance, as well as their motivation to utilize CPOE. Misattribution of order sources also has significant patient safety, quality, and medicolegal implications. This analysis sought to evaluate the magnitude and sources of misattribution among hospitalists with high CPOE use and, if misattribution was found, to formulate strategies to prevent and reduce its recurrence, thereby ensuring the integrity and credibility of individual and facility CPOE use rate reporting. A detailed manual order source review and validation of all orders issued by one hospitalist group at a midsize community hospital was conducted for a one-month study period. We found that a small but not dismissible percentage of orders issued by hospitalists (up to 4.18 percent per month; 95 percent confidence interval, 3.84-4.56 percent) were attributed inaccurately. Sources of misattribution by department or function were as follows: nursing, 42 percent; pharmacy, 38 percent; laboratory, 15 percent; unit clerk, 3 percent; and radiology, 2 percent. Order management and protocol were the most common correct order sources that were incorrectly attributed. Order source misattribution can negatively affect

  17. On System Scalability

    Science.gov (United States)

    2006-03-01

    [Liu 04a] Liu, Y. & Gorton, I. "Accuracy of Performance Prediction for EJB Applications: A Statistical Analysis." Software Engineering and Middleware, 185–198. doi.ieeecomputersociety.org/10.1109/APSEC.2002.1182977

  18. Programming Scala Scalability = Functional Programming + Objects

    CERN Document Server

    Wampler, Dean

    2009-01-01

    Learn how to be more productive with Scala, a new multi-paradigm language for the Java Virtual Machine (JVM) that integrates features of both object-oriented and functional programming. With this book, you'll discover why Scala is ideal for highly scalable, component-based applications that support concurrency and distribution. Programming Scala clearly explains the advantages of Scala as a JVM language. You'll learn how to leverage the wealth of Java class libraries to meet the practical needs of enterprise and Internet projects more easily. Packed with code examples, this book provides us

  19. Scalable high-performance algorithm for the simulation of exciton-dynamics. Application to the light harvesting complex II in the presence of resonant vibrational modes

    DEFF Research Database (Denmark)

    Kreisbeck, Christoph; Kramer, Tobias; Aspuru-Guzik, Alán

    2014-01-01

    the exciton dynamics within a density-matrix formalism are known, but are restricted to small systems with less than ten sites due to their computational complexity. To study the excitonic energy transfer in larger systems, we adapt and extend the exact hierarchical equation of motion (HEOM) method to various... high-performance many-core platforms using the Open Computing Language (OpenCL). For the light-harvesting complex II (LHC II) found in spinach, the HEOM results deviate from predictions of approximate theories and clarify the time scale of the transfer process. We investigate the impact of resonantly...

  20. GPU-based Scalable Volumetric Reconstruction for Multi-view Stereo

    Energy Technology Data Exchange (ETDEWEB)

    Kim, H; Duchaineau, M; Max, N

    2011-09-21

    We present a new scalable volumetric reconstruction algorithm for multi-view stereo using a graphics processing unit (GPU). It is an effectively parallelized GPU algorithm that simultaneously uses a large number of GPU threads, each of which performs voxel carving, in order to integrate depth maps with images from multiple views. Each depth map, triangulated from pair-wise semi-dense correspondences, represents a view-dependent surface of the scene. This algorithm also provides scalability for large-scale scene reconstruction in a high resolution voxel grid by utilizing streaming and parallel computation. The output is a photo-realistic 3D scene model in a volumetric or point-based representation. We demonstrate the effectiveness and the speed of our algorithm with a synthetic scene and real urban/outdoor scenes. Our method can also be integrated with existing multi-view stereo algorithms such as PMVS2 to fill holes or gaps in textureless regions.
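
    The sketch below illustrates the core voxel-carving test in plain NumPy (not the authors' GPU implementation): each voxel is projected into every depth map and carved if it lies in the free space in front of the observed surface. The pinhole projection matrices, the depth-map convention, and the tolerance are simplifying assumptions.

      import numpy as np

      def carve(voxel_centers, views, tol=0.01):
          """voxel_centers: (N, 3) world points; views: list of dicts with a 3x4
          projection matrix 'P' and a per-pixel 'depth' map."""
          keep = np.ones(len(voxel_centers), dtype=bool)
          homo = np.hstack([voxel_centers, np.ones((len(voxel_centers), 1))])   # (N, 4)
          for v in views:
              proj = homo @ v["P"].T               # (N, 3): [u*z, w*z, z] in camera coordinates
              z = proj[:, 2]
              valid = z > 1e-6                     # in front of the camera
              u = np.zeros(len(z), dtype=int)
              w = np.zeros(len(z), dtype=int)
              u[valid] = np.round(proj[valid, 0] / z[valid]).astype(int)   # pixel column
              w[valid] = np.round(proj[valid, 1] / z[valid]).astype(int)   # pixel row
              h, wid = v["depth"].shape
              inside = valid & (u >= 0) & (u < wid) & (w >= 0) & (w < h)
              observed = np.full(len(voxel_centers), np.inf)
              observed[inside] = v["depth"][w[inside], u[inside]]
              # carve voxels observed to sit in front of the surface; keep the rest
              keep &= ~inside | (z >= observed - tol)
          return keep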

  1. P2P Video Streaming Strategies based on Scalable Video Coding

    Directory of Open Access Journals (Sweden)

    F.A. López-Fuentes

    2015-02-01

    Video streaming over the Internet has gained significant popularity in recent years, and academia and industry have devoted considerable research effort in this direction. In this scenario, scalable video coding (SVC) has emerged as an important video standard that provides more functionality for video transmission and storage applications. This paper proposes and evaluates two strategies based on scalable video coding for P2P video streaming services. In the first strategy, SVC is used to offer differentiated video quality to peers with heterogeneous capacities. The second strategy uses SVC to reach a homogeneous video quality across different videos from different sources. The obtained results show that the proposed strategies enable a system to improve its performance and introduce benefits such as differentiated video quality for clients with heterogeneous capacities and variable network conditions.
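
    As a minimal sketch of the first strategy's core idea (differentiated quality for heterogeneous peers), the snippet below assigns as many SVC layers to a peer as its downlink capacity can carry; the layer bitrates and peer capacities are made-up numbers, not values from the paper.

      LAYER_KBPS = [400, 600, 900]   # bitrate cost of the base layer and two enhancement layers

      def layers_for_peer(downlink_kbps):
          """Return how many scalable layers a peer can receive within its capacity."""
          total, layers = 0, 0
          for cost in LAYER_KBPS:
              if total + cost > downlink_kbps:
                  break
              total += cost
              layers += 1
          return layers

      for peer, capacity in {"phone": 500, "laptop": 1200, "desktop": 2500}.items():
          print(peer, "->", layers_for_peer(capacity), "layer(s)")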

  2. Scalable overlapping community detection

    NARCIS (Netherlands)

    Elhelw, Ismail; Hofman, Rutger; Li, Wenzhe; Ahn, Sungjin; Welling, Max; Bal, Henri

    2016-01-01

    Recent advancements in machine learning algorithms have transformed the data analytics domain and provided innovative solutions to inherently difficult problems. However, training models at scale over large data sets remains a daunting challenge. One such problem is the detection of overlapping communities.

  3. Real-World Verbal Communication Performance of Children Provided With Cochlear Implants or Hearing Aids.

    Science.gov (United States)

    Meister, Hartmut; Keilmann, Annerose; Leonhard, Katharina; Streicher, Barbara; Müller, Linda; Lang-Roth, Ruth

    2015-07-01

    To compare the real-world verbal communication performance of children provided with cochlear implants (CIs) with their peers with hearing aids (HAs). Cross-sectional study in university tertiary referral centers and at hearing aid dispensers. Verbal communication performance was assessed by the Functioning after Pediatric Cochlear Implantation (FAPCI) instrument. The FAPCI was administered to 38 parents of children using CIs and 62 parents of children with HAs. According to the WHO classification, children with HAs were categorized into three groups (mild-moderate-severe hearing loss). Analysis of variance (ANOVA) was performed on the FAPCI scores, with study group, hearing age (i.e., device experience), and age at hearing intervention as sources of variation. ANOVA showed that hearing age and study group significantly contribute to the FAPCI outcome. In all study groups except the children with mild hearing loss, FAPCI scores increased alongside growing experience with the devices. Children with mild hearing loss using HAs showed higher scores than those with severe hearing loss or implanted children. There were no significant differences between the children with CIs and the children with moderate or severe hearing loss using HAs. Real-world verbal communication abilities of children with CIs are similar to those of children with moderate-to-severe hearing loss using amplification. Because hearing age significantly influences performance, children with moderate-to-severe hearing loss using HAs and implanted children catch up with children with mild hearing loss at a hearing age of approximately 3 years.

  4. A NEaT Design for reliable and scalable network stacks

    NARCIS (Netherlands)

    Hruby, Tomas; Giuffrida, Cristiano; Sambuc, Lionel; Bos, Herbert; Tanenbaum, Andrew S.

    2016-01-01

    Operating systems provide a wide range of services, which are crucial for the increasingly high reliability and scalability demands of modern applications. Providing both reliability and scalability at the same time is hard. Commodity OS architectures simply lack the design abstractions to do so for

  5. Scalable Automated Model Search

    Science.gov (United States)

    2014-05-20

    related to GHOSTFACE is Auto-Weka [38]. As the name suggests, Auto-Weka aims to automate the use of Weka [10] by applying recent derivative-free... algorithm is one of the many optimization algorithms we use as part of GHOSTFACE. However, in contrast to GHOSTFACE, Auto-Weka focuses on single node... performance and does not optimize the parallel execution of algorithms. Moreover, Auto-Weka treats algorithms as black boxes to be executed and

  6. BUSINESS PERFORMANCE OF HEALTH TOURISM SERVICE PROVIDERS IN THE REPUBLIC OF CROATIA.

    Science.gov (United States)

    Vrkljan, Sanela; Hendija, Zvjezdana

    2016-03-01

    Health tourism can be broadly divided into medical, health spa, and wellness tourism. Health spa tourism services are provided in special hospitals for medical rehabilitation and in health resorts, and include the controlled use of natural healing factors and physical therapy under medical supervision in order to improve and preserve health. There are 13 special hospitals for medical rehabilitation and health resorts in Croatia. Most of them are financed through the state budget and, to a lesser extent, by sales on the market. More than half of their accommodation capacity is offered for sale on the market, while the rest is under contract with the Croatian Health Insurance Fund. Domestic overnight stays are several times higher than foreign ones. The aim of this study was to analyze the business performance of special hospitals for medical rehabilitation and health resorts in Croatia in relation to their sources of financing and the structure of their service users. The assumption was that those that are more market-oriented achieve better business performance. To test this assumption, empirical research was conducted. The tested indicators of business performance of the analyzed health spa tourism service providers correlated positively with a higher share of overnight stays realized through sales on the market relative to total overnight stays, with a greater share of foreign overnight stays in total overnight stays, and with a higher share of revenue realized on the market out of total revenue. The results of the research show that special hospitals for medical rehabilitation and health resorts that are more market-oriented are more successful in their business performance. These findings are important for planning health and tourism policies in countries like Croatia.

  7. Performances of sexuality counselling: a framework for provider-client encounters.

    Science.gov (United States)

    van der Kwaak, Anke; Ferris, Kristina; van Kats, Jetty; Dieleman, Marjolein

    2010-12-01

    Adequately assessing quality of care poses enormous challenges. While conducting fieldwork, we were struck by the need for a framework that encapsulates provider-client encounters. Little evidence exists concerning the most effective training, and management of health staff engaged in sexuality, reproductive health and HIV related health services. This paper proposes a framework for analysing these encounters. This paper is based on five studies. Mixed method studies were carried out in Uganda and Kenya. Two additional studies looked into the effect of HIV on health worker performance in Uganda and Zambia. As a result of the findings, a desk review looked into factors affecting provider-client encounters in order to improve the responsiveness of programs. Positive encounters between provider and client are built on trust and respect, consist of communication, practice and process, and are influenced by space, place and context. Combining these facets allows for a better understanding of their interactions. A holistic perspective in which the breadth of dynamics and processes are described should be used when assessing the quality of provider-client encounters. Within training, management and human resource planning, these dynamics need to be utilized to realize the best possible care. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  8. Technical Report: Scalable Parallel Algorithms for High Dimensional Numerical Integration

    Energy Technology Data Exchange (ETDEWEB)

    Masalma, Yahya [Universidad del Turabo; Jiao, Yu [ORNL

    2010-10-01

    We implemented a scalable parallel quasi-Monte Carlo method for numerical high-dimensional integration over tera-scale data points. The implemented algorithm uses Sobol's quasi-sequences to generate random samples. Sobol's sequence was used to avoid clustering effects in the generated samples and to produce low-discrepancy samples that cover the entire integration domain. The performance of the algorithm was tested. The obtained results demonstrate the scalability and accuracy of the implemented algorithms. The implemented algorithm could be used in different applications where a huge data volume is generated and numerical integration is required. We suggest using the hybrid MPI and OpenMP programming model to improve the performance of the algorithms. If the mixed model is used, attention should be paid to scalability and accuracy.
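
    A minimal sketch of the approach, under stated assumptions (this is not the report's code): quasi-Monte Carlo integration with a Sobol sequence in which each worker evaluates a disjoint block of points and the partial sums are combined, mimicking the MPI/OpenMP decomposition. scipy.stats.qmc provides the Sobol generator; the integrand f is an arbitrary example with a known analytic value.

      import numpy as np
      from scipy.stats import qmc

      def f(x):
          """Example integrand on the unit hypercube: product of cosines."""
          return np.prod(np.cos(np.pi * x / 2.0), axis=1)

      def qmc_integrate(dim, n_points, n_workers=4, seed=7):
          sampler = qmc.Sobol(d=dim, scramble=True, seed=seed)
          block = n_points // n_workers
          partial_sums = []
          for _ in range(n_workers):           # stands in for MPI ranks / OpenMP threads
              x = sampler.random(block)        # next contiguous block of the sequence
              partial_sums.append(f(x).sum())
          return sum(partial_sums) / (block * n_workers)

      print(qmc_integrate(dim=6, n_points=2**14))   # analytic value is (2/pi)**6, about 0.066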

  9. Performance evaluation of hospitals that provide care in the public health system, Brazil

    Directory of Open Access Journals (Sweden)

    Marcelo Cristiano de Azevedo Ramos

    2015-01-01

    OBJECTIVE To analyze whether size, administrative level, legal status, type of unit, and educational activity influence hospital network performance in providing services to the Brazilian Unified Health System. METHODS This cross-sectional study evaluated data from the Hospital Information System and the Cadastro Nacional de Estabelecimentos de Saúde (National Registry of Health Facilities), 2012, in Sao Paulo, Southeastern Brazil. We calculated performance indicators, such as: the ratio of hospital employees per bed; mean amount paid for admission; bed occupancy rate; average length of stay; bed turnover index; and hospital mortality rate. Data were expressed as mean and standard deviation. The groups were compared using analysis of variance (ANOVA) and Bonferroni correction. RESULTS The hospital occupancy rate in small hospitals was lower than in medium, large, and special-sized hospitals. A higher hospital occupancy rate and bed turnover index were observed in hospitals that include education in their activities. The hospital mortality rate was lower in specialized hospitals compared to general ones, despite their higher proportion of highly complex admissions. We found no differences between hospitals under direct and indirect administration for most of the indicators analyzed. CONCLUSIONS The study indicated the importance of the scale effect on efficiency, and larger hospitals had a higher performance. Hospitals that include education in their activities had a higher operating performance, albeit with associated importance of using human resources and highly complex structures. Specialized hospitals had a significantly lower rate of mortality than general hospitals, indicating the positive effect of the volume of procedures and technology used on clinical outcomes. The analysis related to the administrative level and legal status did not show any significant performance differences between the categories of public hospitals.

  10. The role of kaizen in creating radical performance results in a logistics service provider

    Directory of Open Access Journals (Sweden)

    Erez Agmoni

    2016-09-01

    Background: This study investigates the role of an incremental change in organizational process in creating radical performance results in a service provider company. The role of Kaizen is established prominently in manufacturing, but is nascent in service applications. This study examines the impact of introducing Kaizen as an ODI tool: how it is applied, how it works, and whether participants believe it helps service groups form more effective working relationships that result in significant performance improvements. Methods: Exploring the evolving role of Kaizen in service contexts, this study examines a variety of facets of human communication in the context of continuous improvement and inter-organizational teamwork. The paper consists of an archival study and an action research case study. A pre-intervention study consisting of observations, interviews, and submission of questionnaires to employees of a manufacturing and air-sea freight firm was conducted. A Kaizen intervention occurred subsequently, and a post-intervention study was then conducted. Results: Radical improvements in both companies, such as 30% financial growth and 81% productivity improvement, are demonstrated in this paper. Conclusions: Findings offer unique insights into the effects of Kaizen in creating radical performance improvements in a service company and its customer. Both qualitative and quantitative results on business, satisfaction, and productivity suggest that time invested in introducing Kaizen into a service organization helps the companies improve relationships and dramatically improve the bottom line.

  11. Scalable fast multipole accelerated vortex methods

    KAUST Repository

    Hu, Qi

    2014-05-01

    The fast multipole method (FMM) is often used to accelerate the calculation of particle interactions in particle-based methods to simulate incompressible flows. To evaluate the most time-consuming kernels - the Biot-Savart equation and the stretching term of the vorticity equation - we mathematically reformulated them so that only two Laplace scalar potentials are used instead of six, which automatically ensures a divergence-free far-field computation. Based on this formulation, we developed a new FMM-based vortex method on heterogeneous architectures, which distributes the work between multicore CPUs and GPUs to best utilize the hardware resources and achieve excellent scalability. The algorithm uses new data structures which can dynamically manage inter-node communication and load balance efficiently, with only a small parallel construction overhead. This algorithm can scale to large-sized clusters showing both strong and weak scalability. Careful error and timing trade-off analysis is also performed for the cutoff functions induced by the vortex particle method. Our implementation can perform one time step of the velocity+stretching calculation for one billion particles on 32 nodes in 55.9 seconds, which yields 49.12 Tflop/s.
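    For orientation, the kernel that dominates the cost is the Biot-Savart summation over all particle pairs; a naive O(N^2) version of that velocity evaluation might look as follows (sign and smoothing conventions vary between vortex-method formulations, so this is an illustration rather than the paper's reformulated kernel):

    import numpy as np

    def biot_savart_direct(x, alpha, sigma=1e-2):
        """Velocity induced at positions x (N,3) by vortex strengths alpha (N,3):
        u_i = (1/4*pi) * sum_j alpha_j x r_ij / (|r_ij|^2 + sigma^2)^(3/2)."""
        u = np.zeros_like(x)
        for i in range(x.shape[0]):
            r = x[i] - x                                        # separation vectors r_ij
            dist2 = np.einsum("ij,ij->i", r, r) + sigma**2      # regularised squared distance
            kernel = r / dist2[:, None]**1.5
            u[i] = np.cross(alpha, kernel).sum(axis=0) / (4.0 * np.pi)
        return u

    rng = np.random.default_rng(0)
    positions = rng.random((500, 3))
    strengths = rng.standard_normal((500, 3)) * 1e-3
    print(biot_savart_direct(positions, strengths)[:3])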

  12. An Open Infrastructure for Scalable, Reconfigurable Analysis

    Energy Technology Data Exchange (ETDEWEB)

    de Supinski, B R; Fowler, R; Gamblin, T; Mueller, F; Ratn, P; Schulz, M

    2008-05-15

    Petascale systems will have hundreds of thousands of processor cores so their applications must be massively parallel. Effective use of petascale systems will require efficient interprocess communication through memory hierarchies and complex network topologies. Tools to collect and analyze detailed data about this communication would facilitate its optimization. However, several factors complicate tool design. First, large-scale runs on petascale systems will be a precious commodity, so scalable tools must have almost no overhead. Second, the volume of performance data from petascale runs could easily overwhelm hand analysis and, thus, tools must collect only data that is relevant to diagnosing performance problems. Analysis must be done in situ, where the available processing power is proportional to the data. We describe a tool framework that overcomes these complications. Our approach allows application developers to combine existing techniques for measurement, analysis, and data aggregation to develop application-specific tools quickly. Dynamic configuration enables application developers to select exactly the measurements needed, and generic components support scalable aggregation and analysis of this data with little additional effort.

  13. Development of a Child Abuse Checklist to Evaluate Prehospital Provider Performance.

    Science.gov (United States)

    Alphonso, Aimee; Auerbach, Marc; Bechtel, Kirsten; Bilodeau, Kyle; Gawel, Marcie; Koziel, Jeannette; Whitfill, Travis; Tiyyagura, Gunjan Kamdar

    2017-01-01

    To develop and provide validity evidence for a performance checklist to evaluate the child abuse screening behaviors of prehospital providers. Checklist Development: We developed the first iteration of the checklist after review of the relevant literature and on the basis of the authors' clinical experience. Next, a panel of six content experts participated in three rounds of Delphi review to reach consensus on the final checklist items. Checklist Validation: Twenty-eight emergency medical services (EMS) providers (16 EMT-Basics, 12 EMT-Paramedics) participated in a standardized simulated case of physical child abuse to an infant followed by one-on-one semi-structured qualitative interviews. Three reviewers scored the videotaped performance using the final checklist. Light's kappa and Cronbach's alpha were calculated to assess inter-rater reliability (IRR) and internal consistency, respectively. The correlation of successful child abuse screening with checklist task completion and with participant characteristics was assessed using Pearson's chi-squared test to gather evidence for construct validity. The Delphi review process resulted in a final checklist that included 24 items classified with trichotomous scoring (done, not done, or not applicable). The overall IRR of the three raters was 0.70 using Light's kappa, indicating substantial agreement. Internal consistency of the checklist was low, with an overall Cronbach's alpha of 0.61. Of 28 participants, only 14 (50%) successfully screened for child abuse in simulation. Participants who successfully screened for child abuse did not differ significantly from those who failed to screen in terms of training level, past experience with child abuse reporting, or self-reported confidence in detecting child abuse (all p > 0.30). Of all 24 tasks, only the task of exposing the infant significantly correlated with successful detection of child abuse (p child abuse checklist that demonstrated strong content validity and
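    The two reliability statistics reported above can be reproduced on synthetic score matrices as follows; Light's kappa is computed here as the mean of all pairwise Cohen's kappas, and the data are made up for illustration only:

    from itertools import combinations
    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    def cronbach_alpha(scores):
        """scores: (n_participants, n_items) matrix of item scores."""
        k = scores.shape[1]
        item_vars = scores.var(axis=0, ddof=1)
        total_var = scores.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    def lights_kappa(ratings):
        """ratings: (n_raters, n_subjects); Light's kappa = mean pairwise Cohen's kappa."""
        pairs = combinations(range(ratings.shape[0]), 2)
        return np.mean([cohen_kappa_score(ratings[a], ratings[b]) for a, b in pairs])

    rng = np.random.default_rng(1)
    item_scores = rng.integers(0, 2, size=(28, 24))    # 28 participants x 24 checklist items
    rater_scores = rng.integers(0, 3, size=(3, 28))    # 3 raters, trichotomous scoring
    print("Cronbach's alpha:", round(cronbach_alpha(item_scores), 2))
    print("Light's kappa:", round(lights_kappa(rater_scores), 2))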

  14. Quality scalable video data stream

    OpenAIRE

    Wiegand, T.; Kirchhoffer, H.; Schwarz, H

    2008-01-01

    An apparatus for generating a quality-scalable video data stream (36) is described which comprises means (42) for coding a video signal (18) using block-wise transformation to obtain transform blocks (146, 148) of transformation coefficient values for a picture (140) of the video signal, a predetermined scan order (154, 156, 164, 166) with possible scan positions being defined among the transformation coefficient values within the transform blocks so that in each transform block, for each pos...

  15. 42 CFR 493.53 - Notification requirements for laboratories issued a certificate for provider-performed microscopy...

    Science.gov (United States)

    2010-10-01

    ... certificate for provider-performed microscopy (PPM) procedures. 493.53 Section 493.53 Public Health CENTERS... CERTIFICATION LABORATORY REQUIREMENTS Registration Certificate, Certificate for Provider-performed Microscopy... certificate for provider-performed microscopy (PPM) procedures. Laboratories issued a certificate for PPM...

  16. Administrative data provide vital research evidence for maximizing health-system performance and outcomes.

    Science.gov (United States)

    Roder, David; Buckley, Elizabeth

    2017-06-01

    Although the quality of administrative data is frequently questioned, these data are vital for health-services evaluation and complement data from trials, other research studies and registries for research. Trials generally provide the strongest evidence of outcomes in research settings but results may not apply in many service environments. High-quality observational research has a complementary role where trials are not applicable and for assessing whether trial results apply to groups excluded from trials. Administrative data have a broader system-wide reach, enabling system-wide health-services research and monitoring of performance markers. Where administrative data raise questions about service outcomes, follow-up enquiry may be required to investigate validity and service implications. Greater use should be made of administrative data for system-wide monitoring and for research on service effectiveness and equity. © 2017 John Wiley & Sons Australia, Ltd.

  17. A scalable method for computing quadruplet wave-wave interactions

    Science.gov (United States)

    Van Vledder, Gerbrant

    2017-04-01

    Non-linear four-wave interactions are a key physical process in the evolution of wind generated ocean waves. The present generation of operational wave models uses the Discrete Interaction Approximation (DIA), but its accuracy is poor. It is now generally acknowledged that the DIA should be replaced with a more accurate method to improve predicted spectral shapes and derived parameters. The search for such a method is challenging, as one should find a balance between accuracy and computational requirements. Such a method is presented here in the form of a scalable and adaptive method that can mimic both the time-consuming exact Snl4 approach and the fast but inaccurate DIA, and everything in between. The method provides an elegant approach to improve the DIA, not by including more arbitrarily shaped wave number configurations, but by a mathematically consistent reduction of an exact method, viz. the WRT method. The adaptiveness lies in adapting the abscissae of the locus integrand in relation to the magnitude of the known terms. The adaptiveness is extended to the highest level of the WRT method to select interacting wavenumber configurations in a hierarchical way in relation to their importance. This adaptiveness results in a speed-up of one to three orders of magnitude depending on the measure of accuracy. This measure of accuracy should not be expressed in terms of the quality of the transfer integral for academic spectra but rather in terms of wave model performance in a dynamic run. This has consequences for the balance between the required accuracy and the computational workload for evaluating these interactions. The performance of the scalable method on different scales is illustrated with results from academic spectra and simple growth curves to more complicated field cases using a 3G-wave model.

  18. GPU-FS-kNN: a software tool for fast and scalable kNN computation using GPUs.

    Directory of Open Access Journals (Sweden)

    Ahmed Shamsul Arefin

    Full Text Available BACKGROUND: The analysis of biological networks has become a major challenge due to the recent development of high-throughput techniques that are rapidly producing very large data sets. The exploding volumes of biological data call for extreme computational power and special computing facilities (i.e., supercomputers). An inexpensive solution, such as General Purpose computation based on Graphics Processing Units (GPGPU), can be adapted to tackle this challenge, but the limitation of the device internal memory can pose a new problem of scalability. An efficient data and computational parallelism with partitioning is required to provide a fast and scalable solution to this problem. RESULTS: We propose an efficient parallel formulation of the k-Nearest Neighbour (kNN) search problem, which is a popular method for classifying objects in several fields of research, such as pattern recognition, machine learning and bioinformatics. Although the method is simple and straightforward, the performance of the kNN search degrades dramatically for large data sets, since the task is computationally intensive. The proposed approach is not only fast but also scalable to large-scale instances. Based on our approach, we implemented a software tool GPU-FS-kNN (GPU-based Fast and Scalable k-Nearest Neighbour) for CUDA-enabled GPUs. The basic approach is simple and adaptable to other available GPU architectures. We observed speed-ups of 50-60 times compared with a CPU implementation on a well-known breast microarray study and its associated data sets. CONCLUSION: Our GPU-based Fast and Scalable k-Nearest Neighbour search technique (GPU-FS-kNN) provides a significant performance improvement for nearest neighbour computation in large-scale networks. Source code and the software tool are available under the GNU Public License (GPL) at https://sourceforge.net/p/gpufsknn/.
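    The partitioning idea that works around limited device memory can be sketched in plain NumPy: compute the distance matrix chunk by chunk and keep only the k best neighbours per chunk. This is a CPU stand-in for the CUDA kernels described in the paper, not the GPU-FS-kNN code itself:

    import numpy as np

    def chunked_knn(data, k=5, chunk=256):
        """Brute-force kNN computed chunk by chunk so the distance matrix stays small."""
        n = data.shape[0]
        sq_norms = (data ** 2).sum(axis=1)
        idx_out = np.empty((n, k), dtype=np.int64)
        dist_out = np.empty((n, k))
        for start in range(0, n, chunk):
            block = data[start:start + chunk]
            # squared Euclidean distances of this chunk against the full data set
            d2 = sq_norms[start:start + chunk, None] - 2.0 * block @ data.T + sq_norms[None, :]
            np.fill_diagonal(d2[:, start:start + chunk], np.inf)   # drop self-matches
            nearest = np.argpartition(d2, k, axis=1)[:, :k]
            rows = np.arange(block.shape[0])[:, None]
            d_near = d2[rows, nearest]
            order = np.argsort(d_near, axis=1)
            idx_out[start:start + chunk] = nearest[rows, order]
            dist_out[start:start + chunk] = np.sqrt(np.maximum(d_near[rows, order], 0.0))
        return idx_out, dist_out

    X = np.random.default_rng(2).random((1000, 16))     # e.g. 1000 probes x 16 expression values
    indices, distances = chunked_knn(X, k=5)
    print(indices[0], distances[0])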

  19. Scalable and Hybrid Radio Resource Management for Future Wireless Networks

    DEFF Research Database (Denmark)

    Mino, E.; Luo, Jijun; Tragos, E.

    2007-01-01

    The concept of a ubiquitous and scalable system is applied in the IST WINNER II [1] project to deliver optimum performance for different deployment scenarios, from local area to wide area wireless networks. The integration in a unique radio system of cellular and local area type networks supposes a great advantage for the final user and for the operator, compared with the current situation of disconnected systems, usually with different subscriptions, radio interfaces and terminals. To be a ubiquitous wireless system, the IST project WINNER II has defined three system modes. This contribution describes a proposal for scalable and hybrid radio resource management to efficiently integrate the different WINNER system modes.

  20. Scalability limitations of VIA-based technologies in supporting MPI

    Energy Technology Data Exchange (ETDEWEB)

    BRIGHTWELL,RONALD B.; MACCABE,ARTHUR BERNARD

    2000-04-17

    This paper analyzes the scalability limitations of networking technologies based on the Virtual Interface Architecture (VIA) in supporting the runtime environment needed for an implementation of the Message Passing Interface. The authors present an overview of the important characteristics of VIA and an overview of the runtime system being developed as part of the Computational Plant (Cplant) project at Sandia National Laboratories. They discuss the characteristics of VIA that prevent implementations based on this architecture from meeting the scalability and performance requirements of Cplant.

  1. Quality Scalability Compression on Single-Loop Solution in HEVC

    Directory of Open Access Journals (Sweden)

    Mengmeng Zhang

    2014-01-01

    Full Text Available This paper proposes a quality scalable extension design for the upcoming high efficiency video coding (HEVC) standard. In the proposed design, the single-loop decoder solution is extended into the proposed scalable scenario. A novel interlayer intra/interprediction is added to reduce the number of bits needed for representation by exploiting the correlation between coding layers. The experimental results indicate that an average Bjøntegaard delta rate decrease of 20.50% can be gained compared with simulcast encoding. The proposed technique achieved a 47.98% Bjøntegaard delta rate reduction compared with the scalable video coding extension of H.264/AVC. Consequently, significant rate savings confirm that the proposed method achieves better performance.
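    The Bjøntegaard delta rate quoted above is conventionally computed by fitting each rate-distortion curve with a cubic polynomial of log-bitrate versus PSNR and integrating the gap over the common PSNR range; a hedged sketch, with made-up example points rather than the paper's results:

    import numpy as np

    def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
        """Average percent bitrate difference of the test curve against the anchor."""
        pa = np.polyfit(psnr_anchor, np.log(rate_anchor), 3)   # log-rate as a cubic in PSNR
        pt = np.polyfit(psnr_test, np.log(rate_test), 3)
        lo = max(min(psnr_anchor), min(psnr_test))             # common PSNR interval
        hi = min(max(psnr_anchor), max(psnr_test))
        ia, it = np.polyint(pa), np.polyint(pt)
        avg_a = (np.polyval(ia, hi) - np.polyval(ia, lo)) / (hi - lo)
        avg_t = (np.polyval(it, hi) - np.polyval(it, lo)) / (hi - lo)
        return (np.exp(avg_t - avg_a) - 1.0) * 100.0           # negative = bitrate saving

    rates_a = [1000, 2000, 4000, 8000]; psnr_a = [33.0, 36.0, 39.0, 42.0]   # anchor curve
    rates_b = [800, 1600, 3300, 6800];  psnr_b = [33.2, 36.1, 39.2, 42.1]   # test curve
    print(f"BD-rate: {bd_rate(rates_a, psnr_a, rates_b, psnr_b):.2f}%")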

  2. A highly scalable, interoperable clinical decision support service.

    Science.gov (United States)

    Goldberg, Howard S; Paterno, Marilyn D; Rocha, Beatriz H; Schaeffer, Molly; Wright, Adam; Erickson, Jessica L; Middleton, Blackford

    2014-02-01

    To create a clinical decision support (CDS) system that is shareable across healthcare delivery systems and settings over large geographic regions. The enterprise clinical rules service (ECRS) realizes nine design principles through a series of enterprise java beans and leverages off-the-shelf rules management systems in order to provide consistent, maintainable, and scalable decision support in a variety of settings. The ECRS is deployed at Partners HealthCare System (PHS) and is in use for a series of trials by members of the CDS consortium, including internally developed systems at PHS, the Regenstrief Institute, and vendor-based systems deployed at locations in Oregon and New Jersey. Performance measures indicate that the ECRS provides sub-second response time when measured apart from services required to retrieve data and assemble the continuity of care document used as input. We consider related work, design decisions, comparisons with emerging national standards, and discuss uses and limitations of the ECRS. ECRS design, implementation, and use in CDS consortium trials indicate that it provides the flexibility and modularity needed for broad use and performs adequately. Future work will investigate additional CDS patterns, alternative methods of data passing, and further optimizations in ECRS performance.
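    The ECRS itself is a Java EE service built on a commercial rules management system; purely as an illustration of the pattern it describes (a stateless service evaluating declarative rules against patient data assembled from a continuity-of-care document), here is a toy sketch with hypothetical rule names:

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Rule:
        name: str
        condition: Callable[[dict], bool]    # predicate over the assembled patient record
        advice: str

    RULES = [                                # hypothetical rules, for illustration only
        Rule("diabetic-a1c-overdue",
             lambda p: "diabetes" in p["problems"] and p["days_since_a1c"] > 180,
             "Order HbA1c test (last result older than 6 months)."),
        Rule("ace-inhibitor-gap",
             lambda p: "heart_failure" in p["problems"] and "lisinopril" not in p["medications"],
             "Consider ACE inhibitor for heart failure."),
    ]

    def evaluate(patient: dict) -> list:
        """Return the advice of every rule whose condition fires for this patient."""
        return [r.advice for r in RULES if r.condition(patient)]

    patient = {"problems": {"diabetes"}, "medications": {"metformin"}, "days_since_a1c": 210}
    print(evaluate(patient))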

  3. Using the scalable nonlinear equations solvers package

    Energy Technology Data Exchange (ETDEWEB)

    Gropp, W.D.; McInnes, L.C.; Smith, B.F.

    1995-02-01

    SNES (Scalable Nonlinear Equations Solvers) is a software package for the numerical solution of large-scale systems of nonlinear equations on both uniprocessors and parallel architectures. SNES also contains a component for the solution of unconstrained minimization problems, called SUMS (Scalable Unconstrained Minimization Solvers). Newton-like methods, which are known for their efficiency and robustness, constitute the core of the package. As part of the multilevel PETSc library, SNES incorporates many features and options from other parts of PETSc. In keeping with the spirit of the PETSc library, the nonlinear solution routines are data-structure-neutral, making them flexible and easily extensible. This users guide contains a detailed description of uniprocessor usage of SNES, with some added comments regarding multiprocessor usage. At this time the parallel version is undergoing refinement and extension, as we work toward a common interface for the uniprocessor and parallel cases. Thus, forthcoming versions of the software will contain additional features, and changes to the parallel interface may occur at any time. The new parallel version will employ the MPI (Message Passing Interface) standard for interprocessor communication. Since most of these details will be hidden, users will need to perform only minimal message-passing programming.
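    The SNES interfaces themselves are in C (with Fortran bindings); as a language-neutral illustration of the Newton-like iteration at the core of such solvers, here is a minimal Newton loop on a small two-equation system (not the SNES API):

    import numpy as np

    def residual(x):
        # F(x) = 0 with F1 = x0^2 + x1^2 - 4 and F2 = exp(x0) + x1 - 1
        return np.array([x[0]**2 + x[1]**2 - 4.0, np.exp(x[0]) + x[1] - 1.0])

    def jacobian(x):
        return np.array([[2.0 * x[0], 2.0 * x[1]],
                         [np.exp(x[0]), 1.0]])

    def newton(x0, tol=1e-10, max_it=50):
        x = np.asarray(x0, dtype=float)
        for _ in range(max_it):
            f = residual(x)
            if np.linalg.norm(f) < tol:
                break
            x = x - np.linalg.solve(jacobian(x), f)   # Newton step: solve J dx = -F
        return x

    print(newton([1.0, 1.0]))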

  4. CloudTPS: Scalable Transactions for Web Applications in the Cloud

    NARCIS (Netherlands)

    Zhou, W.; Pierre, G.E.O.; Chi, C.-H.

    2010-01-01

    NoSQL Cloud data services provide scalability and high availability properties for web applications but at the same time they sacrifice data consistency. However, many applications cannot afford any data inconsistency. CloudTPS is a scalable transaction manager to allow cloud database services to

  5. Scalable electro-photonic integration concept based on polymer waveguides

    NARCIS (Netherlands)

    Bosman, E.; Steenberge, G. van; Boersma, A.; Wiegersma, S.; Harmsma, P.J.; Karppinen, M.; Korhonen, T.; Offrein, B.J.; Dangel, R.; Daly, A.; Ortsiefer, M.; Justice, J.; Corbett, B.; Dorrestein, S.; Duis, J.

    2016-01-01

    A novel method for fabricating a single mode optical interconnection platform is presented. The method comprises the miniaturized assembly of optoelectronic single dies, the scalable fabrication of polymer single mode waveguides and the coupling to glass fiber arrays providing the I/O's. The low

  6. Declarative and Scalable Selection for Map Visualizations

    DEFF Research Database (Denmark)

    Kefaloukos, Pimin Konstantin Balic

    foreground layers is merited. (2) The typical map making professional has changed from a GIS specialist to a busy person with map making as a secondary skill. Today, thematic maps are produced by journalists, aid workers, amateur data enthusiasts, and scientists alike. Therefore it is crucial...... that this diverse group of map makers is provided with easy-to-use and expressive thematic map design tools. Such tools should support customized selection of data for maps in scenarios where developer time is a scarce resource. (3) The Web provides access to massive data repositories for thematic maps...... based on an access log of recent requests. The results show that Glossy SQL and CVL can be used to compute cartographic selection by processing one or more complex queries in a relational database. The scalability of the approach has been verified up to half a million objects in the database. Furthermore...

  7. Scalable Techniques for Formal Verification

    CERN Document Server

    Ray, Sandip

    2010-01-01

    This book presents state-of-the-art approaches to formal verification techniques to seamlessly integrate different formal verification methods within a single logical foundation. It should benefit researchers and practitioners looking to get a broad overview of the spectrum of formal verification techniques, as well as approaches to combining such techniques within a single framework. Coverage includes a range of case studies showing how such combination is fruitful in developing a scalable verification methodology for industrial designs. This book outlines both theoretical and practical issue

  8. Flexible scalable photonic manufacturing method

    Science.gov (United States)

    Skunes, Timothy A.; Case, Steven K.

    2003-06-01

    A process for flexible, scalable photonic manufacturing is described. Optical components are actively pre-aligned and secured to precision mounts. In a subsequent operation, the mounted optical components are passively placed onto a substrate known as an Optical Circuit Board (OCB). The passive placement may be either manual for low volume applications or with a pick-and-place robot for high volume applications. Mating registration features on the component mounts and the OCB facilitate accurate optical alignment. New photonic circuits may be created by changing the layout of the OCB. Predicted yield data from Monte Carlo tolerance simulations for two fiber optic photonic circuits is presented.

  9. An adaptive scan of high frequency subbands for dyadic intra frame in MPEG4-AVC/H.264 scalable video coding

    Science.gov (United States)

    Shahid, Z.; Chaumont, M.; Puech, W.

    2009-01-01

    This paper develops a new adaptive scanning methodology for an intra frame scalable coding framework based on a subband/wavelet (DWTSB) coding approach for MPEG-4 AVC/H.264 scalable video coding (SVC). It attempts to take advantage of prior knowledge of the frequencies which are present in different higher frequency subbands. We propose a dyadic intra frame coding method with adaptive scan (DWTSB-AS) for each subband, as the traditional zigzag scan is not suitable for high-frequency subbands. Thus, simply by modifying the scan order of the intra frame scalable coding framework of H.264, we can obtain better compression. The proposed algorithm has been theoretically justified and is thoroughly evaluated against the current SVC test model JSVM and DWTSB through extensive coding experiments for scalable coding of intra frames. The simulation results show the proposed scanning algorithm consistently outperforms JSVM and DWTSB in PSNR performance. This results in extra compression for intra frames, along with spatial scalability. Thus, image and video coding applications, traditionally serviced by separate coders, can be efficiently provided by an integrated coding system.
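    To make the scan-order idea concrete: the classical zigzag scan assumes energy concentrated at low frequencies, which does not hold for the HL/LH/HH wavelet subbands. The sketch below generates the standard zigzag order and one plausible alternative (column-priority, favouring vertically oriented detail); the paper's actual adaptive scans are derived from subband statistics and are not reproduced here:

    def zigzag_order(n=8):
        """(row, col) pairs of an n x n block in the classical zigzag scan order."""
        coords = [(r, c) for r in range(n) for c in range(n)]
        return sorted(coords, key=lambda rc: (rc[0] + rc[1],
                                              rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

    def column_priority_order(n=8):
        """Scan columns left-to-right, top-to-bottom (favours vertical detail subbands)."""
        return [(r, c) for c in range(n) for r in range(n)]

    print(zigzag_order(4))
    print(column_priority_order(4))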

  10. Current parallel I/O limitations to scalable data analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Mascarenhas, Ajith Arthur; Pebay, Philippe Pierre

    2011-07-01

    This report describes the limitations to parallel scalability which we have encountered when applying our otherwise optimally scalable parallel statistical analysis tool kit to large data sets distributed across the parallel file system of the current premier DOE computational facility. This report describes our study to evaluate the effect of parallel I/O on the overall scalability of a parallel data analysis pipeline using our scalable parallel statistics tool kit [PTBM11]. To this end, we tested it using the Jaguar-pf DOE/ORNL peta-scale platform on large combustion simulation data under a variety of process counts and domain decomposition scenarios. In this report we have recalled the foundations of the parallel statistical analysis tool kit which we have designed and implemented, with the specific double intent of reproducing typical data analysis workflows, and achieving optimal design for scalable parallel implementations. We have briefly reviewed those earlier results and publications which allow us to conclude that we have achieved both goals. However, in this report we have further established that, when used in conjunction with a state-of-the-art parallel I/O system, as can be found on the premier DOE peta-scale platform, the scaling properties of the overall analysis pipeline comprising parallel data access routines degrade rapidly. This finding is problematic and must be addressed if peta-scale data analysis is to be made scalable, or even possible. In order to attempt to address these parallel I/O limitations, we will investigate the use of the Adaptable IO System (ADIOS) [LZL+10] to improve I/O performance, while maintaining flexibility for a variety of IO options, such as MPI IO and POSIX IO. This system is developed at ORNL and other collaborating institutions, and is being tested extensively on Jaguar-pf. Simulation code being developed on these systems will also use ADIOS to output the data thereby making it easier for other systems, such as ours, to

  11. Highly Scalable Matching Pursuit Signal Decomposition Algorithm

    Data.gov (United States)

    National Aeronautics and Space Administration — In this research, we propose a variant of the classical Matching Pursuit Decomposition (MPD) algorithm with significantly improved scalability and computational...

  12. A graph algebra for scalable visual analytics.

    Science.gov (United States)

    Shaverdian, Anna A; Zhou, Hao; Michailidis, George; Jagadish, Hosagrahar V

    2012-01-01

    Visual analytics (VA), which combines analytical techniques with advanced visualization features, is fast becoming a standard tool for extracting information from graph data. Researchers have developed many tools for this purpose, suggesting a need for formal methods to guide these tools' creation. Increased data demands on computing require redesigning VA tools to consider performance and reliability in the context of analysis of exascale datasets. Furthermore, visual analysts need a way to document their analyses for reuse and results justification. A VA graph framework encapsulated in a graph algebra helps address these needs. Its atomic operators include selection and aggregation. The framework employs a visual operator and supports dynamic attributes of data to enable scalable visual exploration of data.
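    A toy rendering of the two atomic operators named above, selection and aggregation, on a small attributed graph; the published algebra is richer (visual operators, dynamic attributes), so this only conveys the flavour:

    import networkx as nx

    G = nx.Graph()
    G.add_nodes_from([("a", {"dept": "bio"}), ("b", {"dept": "bio"}),
                      ("c", {"dept": "cs"}),  ("d", {"dept": "cs"})])
    G.add_edges_from([("a", "b"), ("b", "c"), ("c", "d"), ("a", "d")])

    # Selection: keep only nodes whose attributes satisfy a predicate
    bio = G.subgraph([n for n, data in G.nodes(data=True) if data["dept"] == "bio"])
    print("selected:", list(bio.nodes), list(bio.edges))

    # Aggregation: collapse nodes by an attribute, keeping one weighted super-edge per group pair
    agg = nx.Graph()
    for u, v in G.edges:
        gu, gv = G.nodes[u]["dept"], G.nodes[v]["dept"]
        if gu != gv:
            w = agg.get_edge_data(gu, gv, {"weight": 0})["weight"]
            agg.add_edge(gu, gv, weight=w + 1)
    print("aggregated:", list(agg.edges(data=True)))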

  13. The performance implications of outsourcing customer support to service providers in emerging versus established economies

    NARCIS (Netherlands)

    Raassens, N.; Wuyts, S.H.K.; Geyskens, I.

    Recent discussions in the business press query the contribution of customer-support outsourcing to firm performance. Despite the controversy surrounding its performance implications, customer-support outsourcing is still on the rise, especially to emerging markets. Against this backdrop, we study

  14. Performance assessment in health care providers: a critical review of evidence and current practice.

    Science.gov (United States)

    Hamilton, Karen E Stc; Coates, Vivien; Kelly, Billy; Boore, Jennifer R P; Cundell, Jill H; Gracey, Jacquie; McFetridge, Brian; McGonigle, Mary; Sinclair, Marlene

    2007-11-01

    To evaluate methods of performance assessment through an international literature review and a survey of current practice. Over the past two decades health care organizations have focussed on promoting high quality care in conjunction with retaining motivated staff. Cognisant of such initiatives, we sought to evaluate assessment methods for qualified staff according to their utility in the working environment. A systematic literature search was completed and each paper independently reviewed. All health care organizations in Northern Ireland submitted details of their performance assessments. Each was critically appraised using a utility index. Performance was not universally defined. A broad range of assessments were identified, each method had advantages and disadvantages. Although many lacked rigorous testing, areas of good practice were also noted. No single method is appropriate for assessing clinical performance. Rather, this study endorses proposals for a multi-method strategy to ensure that performance assessment demonstrates all attributes required for effective nursing and midwifery practice.

  15. The Concept of Business Model Scalability

    DEFF Research Database (Denmark)

    Nielsen, Christian; Lund, Morten

    2015-01-01

    The power of business models lies in their ability to visualize and clarify how firms may configure their value creation processes. Among the key aspects of business model thinking are a focus on what the customer values, how this value is best delivered to the customer, and how strategic partners are leveraged in this value creation, delivery and realization exercise. Central to the mainstream understanding of business models is the value proposition towards the customer, and the hypothesis generated is that if the firm delivers to the customer what he/she requires, then there is a good foundation for a long-term profitable business. However, the message conveyed in this article is that while providing a good value proposition may help the firm 'get by', the really successful businesses of today are those able to reach the sweet-spot of business model scalability. This article introduces and discusses...

  16. TDCCREC: AN EFFICIENT AND SCALABLE WEB-BASED RECOMMENDATION SYSTEM

    Directory of Open Access Journals (Sweden)

    K.Latha

    2010-10-01

    Full Text Available Web browsers are provided with a complex information space where the volume of information available is huge. Recommender systems address this by recommending web pages related to the current page, providing the user with further customized reading material. To enhance the performance of recommender systems, we propose an elegant web-based recommendation system, the Truth Discovery based Content and Collaborative RECommender (TDCCREC), which is capable of addressing scalability. Existing approaches such as learning automata deal with usage and navigational patterns of users. On the other hand, Weighted Association Rule is applied for recommending web pages by assigning weights to each page in all the transactions. Both of them have their own disadvantages. The websites recommended by search engines have no guarantee of information correctness and often deliver conflicting information. To solve this, content-based filtering and collaborative filtering techniques are introduced for recommending web pages to the active user, along with the trustworthiness of the website and confidence of facts, which outperforms the existing methods. Our results show how the proposed recommender system performs better in predicting the next request of web users.
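    One common way to realize such a hybrid is a weighted blend of content-based, collaborative and trust scores; the weights and the cosine-similarity content model below are illustrative choices, not the TDCCREC formulation itself:

    import numpy as np

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def hybrid_scores(user_profile, page_vectors, collab_scores, trust, w=(0.5, 0.3, 0.2)):
        """Rank candidate pages by a weighted sum of content, collaborative and trust scores."""
        scores = {}
        for page, vec in page_vectors.items():
            scores[page] = (w[0] * cosine(user_profile, vec)
                            + w[1] * collab_scores[page]
                            + w[2] * trust[page])
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    profile = np.array([0.9, 0.1, 0.4])                      # user's term-interest vector
    pages = {"p1": np.array([1.0, 0.0, 0.3]),
             "p2": np.array([0.1, 0.9, 0.2]),
             "p3": np.array([0.7, 0.2, 0.6])}
    collab = {"p1": 0.4, "p2": 0.8, "p3": 0.5}               # scores from similar users
    trust = {"p1": 0.9, "p2": 0.3, "p3": 0.7}                # website trustworthiness estimate
    print(hybrid_scores(profile, pages, collab, trust))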

  17. Compliance Performance: Effects of a Provider Incentive Program and Coding Compliance Plan

    National Research Council Canada - National Science Library

    Tudela, Joseph A

    2004-01-01

    ...) and select attributes and experience/training variables. For BAMC's provider incentive program, analysis reveals statistical significance for record compliance rates with data dated measures, F(1,103) = 4.74, p = .03...

  18. VISION: a regional performance improvement initiative for HIV health care providers.

    Science.gov (United States)

    Bruno, Theodore O; Hicks, Charles B; Naggie, Susanna; Wohl, David A; Albrecht, Helmut; Thielman, Nathan M; Rabin, Daniel U; Layton, Sherry; Subramaniam, Chitra; Grichnik, Katherine P; Shlien, Amanda; Weyer, Dianne

    2014-01-01

    VISION (HIV Integrated Learning ModuleS: Achieving Performance Improvement through CollaboratiON) was a regional performance improvement (PI) continuing medical education (CME) initiative designed to increase guideline-conforming practice of clinicians who manage patients with HIV infection. The 3-part activity consisted of (1) clinical practice assessment and development of an action plan for practice change, (2) completion of relevant education, and (3) reassessment. The activity did not change practitioners' performance in clinical status monitoring and in patient treatment, in large part because guidelines were being appropriately implemented at baseline as well as after the educational intervention. There was a trend toward improvement, however, in practitioner performance in the area of patient medication adherence (increased from 66% to 74%). Results observed in the VISION initiative were consistent with HIVQUAL metrics. Ongoing education in HIV is important, and VISION demonstrated performance improvement in medication adherence, a critical aspect of health care. © 2014 The Alliance for Continuing Education in the Health Professions, the Society for Academic Continuing Medical Education, and the Council on Continuing Medical Education, Association for Hospital Medical Education.

  19. Providing the Answers Does Not Improve Performance on a College Final Exam

    Science.gov (United States)

    Glass, Arnold Lewis; Sinha, Neha

    2013-01-01

    In the context of an upper-level psychology course, even when students were given an opportunity to refer to text containing the answers and change their exam responses in order to improve their exam scores, their performance on these questions improved slightly or not at all. Four experiments evaluated competing explanations for the students'…

  20. Correlation between provider computer experience and accuracy of electronic anesthesia charting A pilot study and performance improvement project

    Science.gov (United States)

    2017-03-20

    Anesthesia recordkeeping: Accuracy of recall with computerized and manual entry recordkeeping. ...Unexpected increased mortality after implementation of a commercially sold computerized physician...Correlation between provider computer experience and accuracy of electronic anesthesia charting – A pilot study and performance improvement

  1. Performances of sexuality counselling : a framework for provider-client encounters

    NARCIS (Netherlands)

    van der Kwaak, Anke; Ferris, Kristina; van Kats, Jetty; Dieleman, Marjolein

    2010-01-01

    OBJECTIVE: Adequately assessing quality of care poses enormous challenges. While conducting fieldwork, we were struck by the need for a framework that encapsulates provider-client encounters. Little evidence exists concerning the most effective training, and management of health staff engaged in

  2. Effort provides its own reward: endeavors reinforce subjective expectation and evaluation of task performance.

    Science.gov (United States)

    Wang, Lei; Zheng, Jiehui; Meng, Liang

    2017-04-01

    Although many studies have investigated the relationship between the amount of effort invested in a certain task and one's attitude towards the subsequent reward, whether exerted effort would impact one's expectation and evaluation of performance feedback itself still remains to be examined. In the present study, two types of calculation tasks that varied in the required effort were adopted, and we resorted to electroencephalography to probe the temporal dynamics of how exerted effort would affect one's anticipation and evaluation of performance feedback. In the high-effort condition, a more salient stimulus-preceding negativity was detected during the anticipation stage, which was accompanied with a more salient FRN/P300 complex (a more positive P300 and a less negative feedback-related negativity) in response to positive outcomes in the evaluation stage. These results suggested that when more effort was invested, an enhanced anticipatory attention would be paid toward one's task performance feedback and that positive outcomes would be subjectively valued to a greater extent.

  3. Interdepartmental Occupational Standards for Social Service Providers and Their Role in Improving Job Performance

    Directory of Open Access Journals (Sweden)

    Zabrodin Yu.M.,

    2016-04-01

    Full Text Available The paper presents an analysis of occupational standards development abroad and in Russia. It focuses on interdepartmental occupational standards for social service providers. While creating occupational standards for social services as an integrated industry, it is advisable to consider the design of the whole system and its macro-level effects in a document called a "sectoral qualification framework". It is pointed out that (1) real professional activity in the social sphere has a clear humanitarian focus, and its objects are radically different population groups; and (2) the complexity of social work is often associated with interaction between various professionals, and their activity has to be interdepartmentally organized. The author identifies the factors influencing the development and implementation of professional standards in different countries and considers the main strategic directions of development and application of occupational standards of education and social service providers in Russia.

  4. GASPRNG: GPU accelerated scalable parallel random number generator library

    Science.gov (United States)

    Gao, Shuang; Peterson, Gregory D.

    2013-04-01

    Graphics processors represent a promising technology for accelerating computational science applications. Many computational science applications require fast and scalable random number generation with good statistical properties, so they use the Scalable Parallel Random Number Generators library (SPRNG). We present the GPU Accelerated SPRNG library (GASPRNG) to accelerate SPRNG in GPU-based high performance computing systems. GASPRNG includes code for a host CPU and CUDA code for execution on NVIDIA graphics processing units (GPUs) along with a programming interface to support various usage models for pseudorandom numbers and computational science applications executing on the CPU, GPU, or both. This paper describes the implementation approach used to produce high performance and also describes how to use the programming interface. The programming interface allows a user to be able to use GASPRNG the same way as SPRNG on traditional serial or parallel computers as well as to develop tightly coupled programs executing primarily on the GPU. We also describe how to install GASPRNG and use it. To help illustrate linking with GASPRNG, various demonstration codes are included for the different usage models. GASPRNG on a single GPU shows up to 280x speedup over SPRNG on a single CPU core and is able to scale for larger systems in the same manner as SPRNG. Because GASPRNG generates identical streams of pseudorandom numbers as SPRNG, users can be confident about the quality of GASPRNG for scalable computational science applications. Catalogue identifier: AEOI_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEOI_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: UTK license. No. of lines in distributed program, including test data, etc.: 167900 No. of bytes in distributed program, including test data, etc.: 1422058 Distribution format: tar.gz Programming language: C and CUDA. Computer: Any PC or
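    GASPRNG and SPRNG are C/CUDA libraries; as an analogy for the usage model they support (each task drawing from its own statistically independent, reproducible stream), NumPy's SeedSequence spawning provides a similar pattern:

    import numpy as np

    n_tasks = 4
    root = np.random.SeedSequence(12345)
    streams = [np.random.default_rng(s) for s in root.spawn(n_tasks)]

    # Each "task" (an MPI rank, a GPU block, ...) uses only its own generator, so the
    # streams are independent and reproducible regardless of how many tasks run concurrently.
    for rank, rng in enumerate(streams):
        print(f"task {rank}: {rng.random(3)}")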

  5. Towards Reliable, Scalable, and Energy Efficient Cognitive Radio Systems

    KAUST Repository

    Sboui, Lokman

    2017-11-01

    The cognitive radio (CR) concept is expected to be adopted along with many technologies to meet the requirements of the next generation of wireless and mobile systems, the 5G. Consequently, it is important to determine the performance of the CR systems with respect to these requirements. In this thesis, after briefly describing the 5G requirements, we present three main directions in which we aim to enhance the CR performance. The first direction is the reliability. We study the achievable rate of a multiple-input multiple-output (MIMO) relay-assisted CR under two scenarios; an unmanned aerial vehicle (UAV) one-way relaying (OWR) and a fixed two-way relaying (TWR). We propose special linear precoding schemes that enable the secondary user (SU) to take advantage of the primary-free channel eigenmodes. We study the SU rate sensitivity to the relay power, the relay gain, the UAV altitude, the number of antennas and the line of sight availability. The second direction is the scalability. We first study a multiple access channel (MAC) with multiple SUs scenario. We propose a particular linear precoding and SUs selection scheme maximizing their sum-rate. We show that the proposed scheme provides a significant sum-rate improvement as the number of SUs increases. Secondly, we expand our scalability study to cognitive cellular networks. We propose a low-complexity algorithm for base station activation/deactivation and dynamic spectrum management maximizing the profits of primary and secondary networks subject to green constraints. We show that our proposed algorithms achieve performance close to those obtained with the exhaustive search method. The third direction is the energy efficiency (EE). We present a novel power allocation scheme based on maximizing the EE of both single-input and single-output (SISO) and MIMO systems. We solve a non-convex problem and derive explicit expressions of the corresponding optimal power. When the instantaneous channel is not available, we

  6. Evaluation of 3D printed anatomically scalable transfemoral prosthetic knee.

    Science.gov (United States)

    Ramakrishnan, Tyagi; Schlafly, Millicent; Reed, Kyle B

    2017-07-01

    This case study compares a transfemoral amputee's gait while using the existing Ossur Total Knee 2000 and our novel 3D printed anatomically scalable transfemoral prosthetic knee. The anatomically scalable transfemoral prosthetic knee is 3D printed out of a carbon-fiber and nylon composite that has a gear-mesh coupling with a hard-stop weight-actuated locking mechanism aided by a cross-linked four-bar spring mechanism. This design can be scaled using anatomical dimensions of a human femur and tibia to have a unique fit for each user. The transfemoral amputee who was tested is high functioning and walked on the Computer Assisted Rehabilitation Environment (CAREN) at a self-selected pace. The motion capture and force data that were collected showed that there were distinct differences in the gait dynamics. The data were used to compute the Combined Gait Asymmetry Metric (CGAM), whose scores revealed that the gait on the Ossur Total Knee was more asymmetric overall than on the anatomically scalable transfemoral prosthetic knee. The anatomically scalable transfemoral prosthetic knee had higher peak knee flexion that caused a large step time asymmetry. This made walking on the anatomically scalable transfemoral prosthetic knee more strenuous due to the compensatory movements in adapting to the different dynamics. This can be overcome by tuning the cross-linked spring mechanism to emulate the dynamics of the subject better. The subject stated that the knee would be good for daily use and has the potential to be adapted as a running knee.

  7. Mixing students and performance artists to provide innovative ways of communicating scientific research

    Science.gov (United States)

    van Manen, S. M.

    2007-12-01

    In May 2007 the Open University (U.K.) in conjunction with the MK (Milton Keynes) Gallery invited performance artists Noble and Silver to work with a group of students to design innovative methods of disseminating their research to a general audience. The students created a multitude of well-received live and multimedia performances based on their research. Students found they greatly benefited from the artists' and each others' different viewpoints and backgrounds, resulting in improved communication skills and varying interpretations of their own topic of interest. This work focuses on research aimed at identifying precursory activity at volcanoes using temperature, earthquake and ground movement data, to aid improvement of early warning systems. For this project an aspect of the research relevant to the public was chosen: the importance of appropriately timed warnings regarding the possibility of an eruption. If a warning is issued too early it may cause complacency and apathy towards the situation, whereas issuing a warning too late may endanger lives and property. An interactive DVD was produced which leads the user through the events preceding a volcanic eruption. The goal is to warn the public about the impending eruption at the most appropriate time. Data is presented in short film clips, after which questions are posed. Based on the player's answers the consequences or follow-up events of the choices are explored. We aim to improve and expand upon this concept in the near future, as well as making the DVD available to schools for educational purposes.

  8. Advanced technologies for scalable ATLAS conditions database access on the grid

    Science.gov (United States)

    Basset, R.; Canali, L.; Dimitrov, G.; Girone, M.; Hawkings, R.; Nevski, P.; Valassi, A.; Vaniachine, A.; Viegas, F.; Walker, R.; Wong, A.

    2010-04-01

    During massive data reprocessing operations an ATLAS Conditions Database application must support concurrent access from numerous ATLAS data processing jobs running on the Grid. By simulating realistic work-flow, ATLAS database scalability tests provided feedback for Conditions Db software optimization and allowed precise determination of required distributed database resources. In distributed data processing one must take into account the chaotic nature of Grid computing characterized by peak loads, which can be much higher than average access rates. To validate database performance at peak loads, we tested database scalability at very high concurrent jobs rates. This has been achieved through coordinated database stress tests performed in series of ATLAS reprocessing exercises at the Tier-1 sites. The goal of database stress tests is to detect scalability limits of the hardware deployed at the Tier-1 sites, so that the server overload conditions can be safely avoided in a production environment. Our analysis of server performance under stress tests indicates that Conditions Db data access is limited by the disk I/O throughput. An unacceptable side-effect of the disk I/O saturation is a degradation of the WLCG 3D Services that update Conditions Db data at all ten ATLAS Tier-1 sites using the technology of Oracle Streams. To avoid such bottlenecks we prototyped and tested a novel approach for database peak load avoidance in Grid computing. Our approach is based upon the proven idea of pilot job submission on the Grid: instead of the actual query, an ATLAS utility library sends to the database server a pilot query first.
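    The pilot-query idea can be illustrated generically (this is not ATLAS code): time a cheap probe against the server and defer the heavy query while the observed latency suggests overload. The latency budget and backoff values below are arbitrary:

    import sqlite3
    import time

    def run_with_pilot(conn, heavy_sql, probe_sql="SELECT 1", latency_budget=0.05,
                       max_retries=5, backoff=2.0):
        for attempt in range(max_retries):
            t0 = time.perf_counter()
            conn.execute(probe_sql).fetchone()              # lightweight pilot query
            if time.perf_counter() - t0 <= latency_budget:
                return conn.execute(heavy_sql).fetchall()   # server looks healthy: run the real query
            time.sleep(backoff * (attempt + 1))             # looks overloaded: wait and retry
        raise RuntimeError("server stayed overloaded; giving up")

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE conditions (run INTEGER, payload TEXT)")
    conn.executemany("INSERT INTO conditions VALUES (?, ?)", [(i, "x" * 10) for i in range(1000)])
    print(len(run_with_pilot(conn, "SELECT * FROM conditions WHERE run < 100")))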

  9. Developing and Trialling an independent, scalable and repeatable IT-benchmarking procedure for healthcare organisations.

    Science.gov (United States)

    Liebe, J D; Hübner, U

    2013-01-01

    Continuous improvements of IT-performance in healthcare organisations require actionable performance indicators, regularly conducted, independent measurements and meaningful and scalable reference groups. Existing IT-benchmarking initiatives have focussed on the development of reliable and valid indicators, but less on the questions about how to implement an environment for conducting easily repeatable and scalable IT-benchmarks. This study aims at developing and trialling a procedure that meets the afore-mentioned requirements. We chose a well established, regularly conducted (inter-)national IT-survey of healthcare organisations (IT-Report Healthcare) as the environment and offered the participants of the 2011 survey (CIOs of hospitals) to enter a benchmark. The 61 structural and functional performance indicators covered, among others, the implementation status and integration of IT-systems and functions, global user satisfaction and the resources of the IT-department. Healthcare organisations were grouped by size and ownership. The benchmark results were made available electronically and feedback on the use of these results was requested after several months. Fifty-nine hospitals participated in the benchmarking. Reference groups consisted of up to 141 members depending on the number of beds (size) and the ownership (public vs. private). A total of 122 charts showing single indicator frequency views were sent to each participant. The evaluation showed that 94.1% of the CIOs who participated in the evaluation considered this benchmarking beneficial and reported that they would enter again. Based on the feedback of the participants we developed two additional views that provide a more consolidated picture. The results demonstrate that establishing an independent, easily repeatable and scalable IT-benchmarking procedure is possible and was deemed desirable. Based on these encouraging results a new benchmarking round which includes process indicators is currently

  10. Quality Guidelines for Scalable Eco Skills in Vet Delivery

    Directory of Open Access Journals (Sweden)

    Liviu MOLDOVAN

    2016-06-01

    Full Text Available In a sustainable economy, the assessment of Eco skills provided in a Vocational Education and Training (VET) course is performed by the amount of knowledge applied on the job a period after the course ends. To this end, VET providers have a current need for quality guidelines for scalable Eco skills and sustainable training evaluation. The purpose of the paper is to present some results of the project entitled "European Quality Assurance in VET towards new Eco Skills and Environmentally Sustainable Economy", as regards Eco skills categorisation and the elaboration of a scale to be used in the green methodology for learning assessment. In the research methodology, the Eco skills are categorised and exemplified for jobs at different levels into generic skills and specific skills, evidenced from relevant publications in the field. Then the new Eco skills scale is developed, organized into six steps: from "Level 1 - Introductory" up to "Level 6 - Expert". The Eco skills scale is a useful input in the learning assessment process.

  11. Nano-islands Based Charge Trapping Memory: A Scalability Study

    KAUST Repository

    Elatab, Nazek

    2017-10-19

    Zinc-oxide (ZnO) and zirconia (ZrO2) metal oxides have been studied extensively in the past few decades with several potential applications including memory devices. In this work, a scalability study, based on the ITRS roadmap, is conducted on memory devices with ZnO and ZrO2 nano-islands charge trapping layer. Both nano-islands are deposited using atomic layer deposition (ALD), however, the different sizes, distribution and properties of the materials result in different memory performance. The results show that at the 32-nm node charge trapping memory with 127 ZrO2 nano-islands can provide a 9.4 V memory window. However, with ZnO only 31 nano-islands can provide a window of 2.5 V. The results indicate that ZrO2 nano-islands are more promising than ZnO in scaled down devices due to their higher density, higher-k, and absence of quantum confinement effects.

  12. Nutritional Supplement of Hatchery Eggshell Membrane Improves Poultry Performance and Provides Resistance against Endotoxin Stress.

    Directory of Open Access Journals (Sweden)

    S K Makkar

    Full Text Available Eggshells are a significant part of hatchery waste which consist of the calcium carbonate crust, membranes, and proteins and peptides of embryonic origin along with other entrapped contaminants, including microbes. We hypothesized that using this product as a nutritional additive in poultry diet may confer better immunity to the chickens in the paradigm of mammalian milk that enhances immunity. Therefore, we investigated the effect of hatchery eggshell membranes (HESM) as a short term feed supplement on growth performance and immunity of chickens under bacterial lipopolysaccharide (LPS)-challenged conditions. Three studies were conducted to find the effect of HESM supplement on post hatch chickens. In the first study, the chickens were fed either a control diet or diets containing 0.5% whey protein or HESM as supplement and evaluated at 5 weeks of age using growth, hematology, clinical chemistry, plasma immunoglobulins, and corticosterone as variables. The second and third studies were done to compare the effects of LPS on control and HESM fed birds at 5 weeks of age at 4 and 24 h of treatment, where the HESM was also sterilized with ethanol to deplete bacterial factors. HESM supplement caused weight gain in 2 experiments and decreased blood corticosterone concentrations. While LPS caused a significant loss in body weight at 24 h following its administration, the HESM supplemented birds showed significantly less body weight loss compared with the control fed birds. The WBC, heterophil/lymphocyte ratio, and the levels of IgG were low in chickens fed diets with HESM supplement compared with the control diet group. LPS challenge increased the expression of the pro-inflammatory cytokine gene IL-6, but the HESM fed birds showed this effect curtailed, which also favored the up-regulation of anti-inflammatory genes compared with control diet fed chickens. Post hatch supplementation of HESM appears to improve performance, modulate immunity, and increase

  13. Nutritional Supplement of Hatchery Eggshell Membrane Improves Poultry Performance and Provides Resistance against Endotoxin Stress.

    Science.gov (United States)

    Makkar, S K; Rath, N C; Packialakshmi, B; Zhou, Z Y; Huff, G R; Donoghue, A M

    2016-01-01

    Eggshells are a significant part of hatchery waste which consist of the calcium carbonate crust, membranes, and proteins and peptides of embryonic origin along with other entrapped contaminants, including microbes. We hypothesized that using this product as a nutritional additive in poultry diet may confer better immunity to the chickens in the paradigm of mammalian milk that enhances immunity. Therefore, we investigated the effect of hatchery eggshell membranes (HESM) as a short term feed supplement on growth performance and immunity of chickens under bacterial lipopolysaccharide (LPS)-challenged conditions. Three studies were conducted to find the effect of HESM supplement on post hatch chickens. In the first study, the chickens were fed either a control diet or diets containing 0.5% whey protein or HESM as supplement and evaluated at 5 weeks of age using growth, hematology, clinical chemistry, plasma immunoglobulins, and corticosterone as variables. The second and third studies were done to compare the effects of LPS on control and HESM fed birds at 5 weeks of age at 4 and 24 h of treatment, where the HESM was also sterilized with ethanol to deplete bacterial factors. HESM supplement caused weight gain in 2 experiments and decreased blood corticosterone concentrations. While LPS caused a significant loss in body weight at 24 h following its administration, the HESM supplemented birds showed significantly less body weight loss compared with the control fed birds. The WBC, heterophil/lymphocyte ratio, and the levels of IgG were low in chickens fed diets with HESM supplement compared with the control diet group. LPS challenge increased the expression of the pro-inflammatory cytokine gene IL-6, but the HESM fed birds showed this effect curtailed, which also favored the up-regulation of anti-inflammatory genes compared with control diet fed chickens. Post hatch supplementation of HESM appears to improve performance, modulate immunity, and increase resistance of

  14. The Relationship between Environmental Turbulence, Management Support, Organizational Collaboration, Information Technology Solution Realization, and Process Performance, in Healthcare Provider Organizations

    Science.gov (United States)

    Muglia, Victor O.

    2010-01-01

    The Problem: The purpose of this study was to investigate relationships between environmental turbulence, management support, organizational collaboration, information technology solution realization, and process performance in healthcare provider organizations. Method: A descriptive/correlational study of Hospital medical services process…

  15. Assessment of the Nurses Performance in Providing Care to Patients Undergoing Nasogastric Tube in Suez Canal University Hospital

    National Research Council Canada - National Science Library

    Magda Abdelaziz Mohammed; Mahmoud el Prince Mahmoud; Hamdy A Sleem; Mariam Sabry Shehab

    2017-01-01

    .... In general, tube feeding is a technique used for those who are unable to eat on their own. The aim of the present study is to assess nurses' performance in providing care to patients undergoing nasogastric tube...

  16. Feedback providing improvement strategies and reflection on feedback use: Effects on students' writing motivation, process, and performance

    NARCIS (Netherlands)

    Duijnhouwer, H.; Prins, F.J.; Stokking, K.M.

    2012-01-01

    This study investigated the effects of feedback providing improvement strategies and a reflection assignment on students’ writing motivation, process, and performance. Students in the experimental feedback condition (n = 41) received feedback including improvement strategies, whereas students in the

  17. Quality Scalability Aware Watermarking for Visual Content.

    Science.gov (United States)

    Bhowmik, Deepayan; Abhayaratne, Charith

    2016-11-01

    Scalable coding-based content adaptation poses serious challenges to traditional watermarking algorithms, which do not consider the scalable coding structure and hence cannot guarantee correct watermark extraction in the media consumption chain. In this paper, we propose a novel concept of scalable blind watermarking that ensures more robust watermark extraction at various compression ratios while not affecting the visual quality of the host media. The proposed algorithm generates a scalable and robust watermarked image code-stream that allows the user to constrain embedding distortion for target content adaptations. The watermarked image code-stream consists of hierarchically nested joint distortion-robustness coding atoms. The code-stream is generated by a new wavelet-domain blind watermarking algorithm guided by a quantization-based binary tree. The code-stream can be truncated at any distortion-robustness atom to generate the watermarked image with the desired distortion-robustness requirements. A blind extractor is capable of extracting watermark data from the watermarked images. The algorithm is further extended to incorporate a bit-plane discarding-based quantization model used in scalable coding-based content adaptation, e.g., JPEG2000. This improves the robustness against quality scalability of JPEG2000 compression. The simulation results verify the feasibility of the proposed concept, its applications, and its improved robustness against quality scalable content adaptation. Our proposed algorithm also outperforms existing methods, showing a 35% improvement. In terms of robustness to quality scalable video content adaptation using Motion JPEG2000 and wavelet-based scalable video coding, the proposed method shows major improvement for video watermarking.
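
    The embedding step in quantization-based blind watermarking can be illustrated with quantization index modulation (QIM). The sketch below is a generic NumPy illustration on an array standing in for wavelet subband coefficients; it is not the authors' tree-guided, distortion-robustness-atom algorithm, and the step size `delta` is an arbitrary assumption.

```python
import numpy as np

def qim_embed(coeffs, bits, delta=8.0):
    """Embed one bit per coefficient via quantization index modulation (QIM).

    Bit 0 maps coefficients onto the lattice {k*delta + delta/4},
    bit 1 onto {k*delta + 3*delta/4}; a blind extractor only needs delta.
    """
    dither = np.where(bits == 0, delta / 4.0, 3.0 * delta / 4.0)
    return np.floor(coeffs / delta) * delta + dither

def qim_extract(coeffs, delta=8.0):
    """Recover bits by checking which sub-lattice each coefficient lies on."""
    frac = np.mod(coeffs, delta)
    return (np.abs(frac - 3.0 * delta / 4.0) < np.abs(frac - delta / 4.0)).astype(int)

rng = np.random.default_rng(0)
subband = rng.normal(0.0, 20.0, size=64)        # stand-in for wavelet detail coefficients
watermark = rng.integers(0, 2, size=64)
marked = qim_embed(subband, watermark)
assert np.array_equal(qim_extract(marked), watermark)   # blind extraction, no original needed
```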

  18. Access to high-volume surgeons and the opportunity cost of performing radical prostatectomy by low-volume providers.

    Science.gov (United States)

    Barzi, Afsaneh; Klein, Eric A; Daneshmand, Siamak; Gill, Inderbir; Quinn, David I; Sadeghi, Sarmad

    2017-07-01

    Evidence suggests that redirecting surgeries to high-volume providers may be associated with better outcomes and significant societal savings. Whether such referrals are feasible remains unanswered. Medicare Provider Utilization and Payment Data, SEER 18, and US Incidence data were used to determine the geographic distribution and radical prostatectomy volume for providers. Access was defined as availability of a high-volume provider within driving distance of 100 miles. The opportunity cost was defined as the value of benefits achievable by performing the surgery by a high-volume provider that was forgone by not making a referral. The savings per referral were derived from a published Markov model for radical prostatectomy. A total of 14% of providers performed >27% of the radical prostatectomies with >30 cases per year and were designated high-volume providers. Providers with below-median volume (≤16 prostatectomies per year) performed >32% of radical prostatectomies. At least 47% of these were within a 100-mile driving distance (median = 22 miles), and therefore had access to a high-volume provider (>30 prostatectomies per year). This translated into a discounted savings of more than $24 million per year, representing the opportunity cost of not making a referral. The average volume for high- and low-volume providers was 55 and 13, respectively, resulting in an annual experience gap of 43 and a cumulative gap of 125 surgeries over 3 years. In 2014, the number of surgeons performing radical prostatectomy decreased by 5%, while the number of high- and low-volume providers decreased by 25% and 11%, respectively, showing a faster decline in the number of high-volume providers compared with low-volume surgeons. About half of prostatectomies performed by surgeons with below-median annual volume were within a 100-mile driving distance (median of 22 miles) of a high-volume surgeon. Such a referral may result in minimal additional costs and substantially improved outcomes.

  19. A review of atomic layer deposition providing high performance lithium sulfur batteries

    Science.gov (United States)

    Yan, Bo; Li, Xifei; Bai, Zhimin; Song, Xiaosheng; Xiong, Dongbin; Zhao, Mengli; Li, Dejun; Lu, Shigang

    2017-01-01

    Although significant obstacles have been overcome in lithium-sulfur (Li-S) batteries, it remains urgent to accelerate the development of room-temperature Li-S batteries with high energy density and long-term stability. In view of the unique solid-liquid-solid conversion processes of Li-S batteries, however, designing effective strategies to address the insulating nature and volume change of the cathode, the shuttling of soluble polysulfides, and/or the safety hazards of the Li metal anode has been challenging. Atomic layer deposition (ALD) is a representative thin-film technology with exceptional capabilities for developing atomically precise, conformal films. It has been demonstrated to be a promising strategy for solving emerging issues in advanced electrical energy storage (EES) devices via surface modification and/or the fabrication of complex nanostructured materials. In this review, recent developments in how ALD improves the performance of Li-S batteries are discussed in detail. Attention is focused mainly on the various strategies that use ALD to refine electrochemical interfaces and cell configurations. Furthermore, novel opportunities and perspectives associated with ALD for future research directions are summarized. This review may boost the development and application of advanced Li-S batteries using ALD.

  20. A feasibility study of a web-based performance improvement system for substance abuse treatment providers.

    Science.gov (United States)

    Forman, Robert; Crits-Christoph, Paul; Kaynak, Ovgü; Worley, Matt; Hantula, Donald A; Kulaga, Agatha; Rotrosen, John; Chu, Melissa; Gallop, Robert; Potter, Jennifer; Muchowski, Patrice; Brower, Kirk; Strobbe, Stephen; Magruder, Kathy; Chellis, A'Delle H; Clodfelter, Tad; Cawley, Margaret

    2007-12-01

    We report here on the feasibility of implementing a semiautomated performance improvement system-Patient Feedback (PF)-that enables real-time monitoring of patient ratings of therapeutic alliance, treatment satisfaction, and drug/alcohol use in outpatient substance abuse treatment clinics. The study was conducted in six clinics within the National Institute on Drug Abuse Clinical Trials Network. It involved a total of 39 clinicians and 6 clinic supervisors. Throughout the course of the study (consisting of five phases: training period [4 weeks], baseline [4 weeks], intervention [12 weeks], postintervention assessment [4 weeks], sustainability [1 year]), there was an overall collection rate of 75.5% of the clinic patient census. In general, the clinicians in these clinics had very positive treatment satisfaction and alliance ratings throughout the study. However, one clinic had worse drug use scores at baseline than other participating clinics and showed a decrease in self-reported drug use at postintervention. Although the implementation of the PF system proved to be feasible in actual clinical settings, further modifications of the PF system are needed to enhance any potential clinical usefulness.

  1. Fast and scalable inequality joins

    KAUST Repository

    Khayyat, Zuhair

    2016-09-07

    Inequality joins, which join relations on inequality conditions, are used in various applications. Optimizing joins has been the subject of intensive research, ranging from efficient join algorithms such as sort-merge join to the use of efficient indices such as the B+-tree, R-tree, and Bitmap. However, inequality joins have received little attention, and queries containing such joins are notably slow. In this paper, we introduce fast inequality join algorithms based on sorted arrays and space-efficient bit-arrays. We further introduce a simple method to estimate the selectivity of inequality joins, which is then used to optimize multiple-predicate queries and multi-way joins. Moreover, we study an incremental inequality join algorithm to handle scenarios where data keeps changing. We have implemented a centralized version of these algorithms on top of PostgreSQL, a distributed version on top of Spark SQL, and an existing data cleaning system, Nadeef. By comparing our algorithms against well-known optimization techniques for inequality joins, we show that our solution is more scalable and several orders of magnitude faster. © 2016 Springer-Verlag Berlin Heidelberg
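
    The benefit of sorting can be seen in a toy version of an inequality join: instead of comparing every pair of tuples, sort one relation once and binary-search per tuple on the other side. This is only a simplified illustration of the sorted-array idea, not the IEJoin algorithm from the paper; the relations and attributes below are invented.

```python
import bisect

def inequality_join(r, s, key_r, key_s):
    """Return pairs (x, y) with x in r, y in s and key_r(x) < key_s(y).

    Sorting s once and binary-searching per r-tuple replaces the
    O(|r|*|s|) nested-loop comparisons with O(|s| log |s| + |r| log |s|)
    lookups, plus the cost of emitting the output.
    """
    s_sorted = sorted(s, key=key_s)
    s_keys = [key_s(y) for y in s_sorted]
    result = []
    for x in r:
        # first position whose key is strictly greater than key_r(x)
        start = bisect.bisect_right(s_keys, key_r(x))
        result.extend((x, y) for y in s_sorted[start:])
    return result

# hypothetical relations: employees joined to projects with salary < budget
employees = [("ann", 90), ("bob", 120), ("eve", 70)]
offers = [("project-a", 100), ("project-b", 80), ("project-c", 150)]
pairs = inequality_join(employees, offers, key_r=lambda e: e[1], key_s=lambda o: o[1])
print(pairs)  # ann matches a and c, bob matches c, eve matches a, b and c
```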

  2. Creatine co-ingestion with carbohydrate or cinnamon extract provides no added benefit to anaerobic performance.

    Science.gov (United States)

    Islam, Hashim; Yorgason, Nick J; Hazell, Tom J

    2016-09-01

    The insulin response following carbohydrate ingestion enhances creatine transport into muscle. Cinnamon extract is promoted as having insulin-like effects; therefore, this study examined whether creatine co-ingestion with carbohydrates or cinnamon extract improved anaerobic capacity, muscular strength, and muscular endurance. Active young males (n = 25; 23.7 ± 2.5 y) were stratified into 3 groups: (1) creatine only (CRE); (2) creatine + 70 g carbohydrate (CHO); or (3) creatine + 500 mg cinnamon extract (CIN), based on anaerobic capacity (peak power·kg⁻¹) and muscular strength at baseline. Three weeks of supplementation consisted of a 5 d loading phase (20 g/d) and a 16 d maintenance phase (5 g/d). Pre- and post-supplementation measures included a 30-s Wingate and a 30-s maximal running test (on a self-propelled treadmill) for anaerobic capacity. Muscular strength was measured as the one-repetition maximum (1-RM) for chest, back, quadriceps, hamstrings, and leg press. Muscular endurance was measured as the number of repetitions performed in additional sets at 60% 1-RM until fatigue. All three groups significantly improved Wingate relative peak power (CRE: 15.4%, P = .004; CHO: 14.6%, P = .004; CIN: 15.7%, P = .003), and muscular strength for chest (CRE: 6.6%, P < .001; CHO: 6.7%, P < .001; CIN: 6.4%, P < .001), back (CRE: 5.8%, P < .001; CHO: 6.4%, P < .001; CIN: 8.1%, P < .001), and leg press (CRE: 11.7%, P = .013; CHO: 10.0%, P = .007; CIN: 17.3%, P < .001). Only the CRE (10.4%, P = .021) and CIN (15.5%, P < .001) groups improved total muscular endurance. No differences existed between groups post-supplementation. These findings demonstrate that three different methods of creatine ingestion lead to similar changes in anaerobic power, strength, and endurance.

  3. Finite Element Modeling on Scalable Parallel Computers

    Science.gov (United States)

    Cwik, T.; Zuffada, C.; Jamnejad, V.; Katz, D.

    1995-01-01

    A coupled finite element-integral equation was developed to model fields scattered from inhomogeneous, three-dimensional objects of arbitrary shape. This paper outlines how to implement the software on a scalable parallel processor.

  4. Visual analytics in scalable visualization environments

    OpenAIRE

    Yamaoka, So

    2011-01-01

    Visual analytics is an interdisciplinary field that facilitates the analysis of the large volume of data through interactive visual interface. This dissertation focuses on the development of visual analytics techniques in scalable visualization environments. These scalable visualization environments offer a high-resolution, integrated virtual space, as well as a wide-open physical space that affords collaborative user interaction. At the same time, the sheer scale of these environments poses ...

  5. Scalable and cost-effective NGS genotyping in the cloud.

    Science.gov (United States)

    Souilmi, Yassine; Lancaster, Alex K; Jung, Jae-Yoon; Rizzo, Ettore; Hawkins, Jared B; Powles, Ryan; Amzazi, Saaïd; Ghazal, Hassan; Tonellato, Peter J; Wall, Dennis P

    2015-10-15

    While next-generation sequencing (NGS) costs have plummeted in recent years, the cost and complexity of computation remain substantial barriers to the use of NGS in routine clinical care. The clinical potential of NGS will not be realized until robust and routine whole genome sequencing data can be accurately rendered to medically actionable reports within a time window of hours and at scales of economy in the tens of dollars. We take a step towards addressing this challenge by using COSMOS, a cloud-enabled workflow management system, to develop GenomeKey, an NGS whole genome analysis workflow. COSMOS implements complex workflows, making optimal use of high-performance compute clusters. Here we show that the Amazon Web Services (AWS) implementation of GenomeKey via COSMOS provides a fast, scalable, and cost-effective analysis of both public benchmarking and large-scale heterogeneous clinical NGS datasets. Our systematic benchmarking reveals important new insights and considerations for producing clinical turn-around of whole genome analysis, including optimization and workflow management strategies such as strategic batching of individual genomes and efficient cluster resource configuration.

  6. XGet: a highly scalable and efficient file transfer tool for clusters

    Energy Technology Data Exchange (ETDEWEB)

    Greenberg, Hugh [Los Alamos National Laboratory; Ionkov, Latchesar [Los Alamos National Laboratory; Minnich, Ronald [SNL

    2008-01-01

    As clusters rapidly grow in size, transferring files between nodes can no longer be solved by the traditional transfer utilities due to their inherent lack of scalability. In this paper, we describe a new file transfer utility called XGet, which was designed to address the scalability problem of standard tools. We compared XGet against four transfer tools: Bittorrent, Rsync, TFTP, and Udpcast, and our results show that XGet's performance is superior to these utilities in many cases.

  7. Fully scalable video coding in multicast applications

    Science.gov (United States)

    Lerouge, Sam; De Sutter, Robbie; Lambert, Peter; Van de Walle, Rik

    2004-01-01

    The increasing diversity of the characteristics of the terminals and networks that are used to access multimedia content through the internet introduces new challenges for the distribution of multimedia data. Scalable video coding will be one of the elementary solutions in this domain. This type of coding allows an encoded video sequence to be adapted to the limitations of the network or the receiving device by means of very basic operations. Algorithms for creating fully scalable video streams, in which multiple types of scalability are offered at the same time, are becoming mature. On the other hand, research on applications that use such bitstreams has only recently begun to emerge. In this paper, we introduce a mathematical model for describing such bitstreams. In addition, we show how we can model applications that use scalable bitstreams by means of definitions that are built on top of this model. In particular, we chose to describe a multicast protocol that is targeted at scalable bitstreams. In this way, we demonstrate that it is possible to define an abstract model for scalable bitstreams that can be used as a tool for reasoning about such bitstreams and related applications.

  8. Impact of Providing Compassion on Job Performance and Mental Health: The Moderating Effect of Interpersonal Relationship Quality.

    Science.gov (United States)

    Chu, Li-Chuan

    2017-07-01

    To examine the relationships of providing compassion at work with job performance and mental health, as well as to identify the role of interpersonal relationship quality in moderating these relationships. This study adopted a two-stage survey completed by 235 registered nurses employed by hospitals in Taiwan. All hypotheses were tested using hierarchical regression analyses. The results show that providing compassion is an effective predictor of job performance and mental health, whereas interpersonal relationship quality can moderate the relationships of providing compassion with job performance and mental health. When nurses are frequently willing to listen, understand, and help their suffering colleagues, the enhancement engendered by providing compassion can improve the provider's job performance and mental health. Creating high-quality relationships in the workplace can strengthen the positive benefits of providing compassion. Motivating employees to spontaneously exhibit compassion is crucial to an organization. Hospitals can establish value systems, belief systems, and cultural systems that support a compassionate response to suffering. In addition, nurses can internalize altruistic belief systems into their own personal value systems through a long process of socialization in the workplace. © 2017 Sigma Theta Tau International.

  9. Scalable metagenomic taxonomy classification using a reference genome database.

    Science.gov (United States)

    Ames, Sasha K; Hysom, David A; Gardner, Shea N; Lloyd, G Scott; Gokhale, Maya B; Allen, Jonathan E

    2013-09-15

    Deep metagenomic sequencing of biological samples has the potential to recover otherwise difficult-to-detect microorganisms and accurately characterize biological samples with limited prior knowledge of sample contents. Existing metagenomic taxonomic classification algorithms, however, do not scale well to analyze large metagenomic datasets, and balancing classification accuracy with computational efficiency presents a fundamental challenge. A method is presented to shift computational costs to an off-line computation by creating a taxonomy/genome index that supports scalable metagenomic classification. Scalable performance is demonstrated on real and simulated data to show accurate classification in the presence of novel organisms on samples that include viruses, prokaryotes, fungi and protists. Taxonomic classification of the previously published 150 giga-base Tyrolean Iceman dataset was found to take contents of the sample. Software was implemented in C++ and is freely available at http://sourceforge.net/projects/lmat. Contact: allen99@llnl.gov. Supplementary data are available at Bioinformatics online.
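
    The pre-computed taxonomy/genome index idea can be illustrated with a toy k-mer lookup classifier: an off-line step builds a map from k-mers to taxa, and reads are then classified by a majority vote over their k-mers. This is a deliberately simplified sketch of the general approach, not the LMAT data structures; the reference sequences and taxon names are invented.

```python
from collections import Counter

K = 8  # toy k-mer length; real tools use much longer k-mers

def kmers(seq, k=K):
    return (seq[i:i + k] for i in range(len(seq) - k + 1))

def build_index(references):
    """Off-line step: map every k-mer to the set of taxa it occurs in."""
    index = {}
    for taxon, seq in references.items():
        for km in kmers(seq):
            index.setdefault(km, set()).add(taxon)
    return index

def classify(read, index):
    """On-line step: vote over the read's k-mers; return the best taxon or None."""
    votes = Counter()
    for km in kmers(read):
        for taxon in index.get(km, ()):
            votes[taxon] += 1
    return votes.most_common(1)[0][0] if votes else None

# invented reference genome fragments and a simulated read
references = {
    "taxon_A": "ATGCGTACGTTAGCATCGATCGGATCCGTA",
    "taxon_B": "TTGACCGTAGGCTAACGGTTACGATCCGGA",
}
index = build_index(references)
print(classify("GTACGTTAGCATCGAT", index))  # expected: taxon_A
```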

  10. Scalability of Sustainable Business Models in Hybrid Organizations

    Directory of Open Access Journals (Sweden)

    Adam Jabłoński

    2016-02-01

    Full Text Available The dynamics of change in modern business create new mechanisms for company management to determine their pursuit and the achievement of their high performance. This performance maintained over a long period of time becomes a source of ensuring business continuity by companies. An ontological being enabling the adoption of such assumptions is such a business model that has the ability to generate results in every possible market situation and, moreover, it has the features of permanent adaptability. A feature that describes the adaptability of the business model is its scalability. Being a factor ensuring more work and more efficient work with an increasing number of components, scalability can be applied to the concept of business models as the company’s ability to maintain similar or higher performance through it. Ensuring the company’s performance in the long term helps to build the so-called sustainable business model that often balances the objectives of stakeholders and shareholders, and that is created by the implemented principles of value-based management and corporate social responsibility. This perception of business paves the way for building hybrid organizations that integrate business activities with pro-social ones. The combination of an approach typical of hybrid organizations in designing and implementing sustainable business models pursuant to the scalability criterion seems interesting from the cognitive point of view. Today, hybrid organizations are great spaces for building effective and efficient mechanisms for dialogue between business and society. This requires the appropriate business model. The purpose of the paper is to present the conceptualization and operationalization of scalability of sustainable business models that determine the performance of a hybrid organization in the network environment. The paper presents the original concept of applying scalability in sustainable business models with detailed

  11. Continuity-Aware Scheduling Algorithm for Scalable Video Streaming

    Directory of Open Access Journals (Sweden)

    Atinat Palawan

    2016-05-01

    Full Text Available The consumer demand for retrieving and delivering visual content through consumer electronic devices has increased rapidly in recent years. The quality of video in packet networks is susceptible to certain traffic characteristics: average bandwidth availability, loss, delay and delay variation (jitter). This paper presents a scheduling algorithm that modifies the stream of scalable video to combat jitter. The algorithm provides unequal look-ahead by safeguarding the base layer (without the need for overhead) of the scalable video. The results of the experiments show that our scheduling algorithm reduces the number of frames with a violated deadline and significantly improves the continuity of the video stream without compromising the average Y Peak Signal-to-Noise Ratio (PSNR).

  12. Potential of Scalable Vector Graphics (SVG) for Ocean Science Research

    Science.gov (United States)

    Sears, J. R.

    2002-12-01

    Scalable Vector Graphics (SVG), a graphic format encoded in Extensible Markup Language (XML), is a recent W3C standard. SVG is text-based and platform-neutral, allowing interoperability and a rich array of features that offer significant promise for the presentation and publication of ocean and earth science research. This presentation (a) provides a brief introduction to SVG with real-world examples; (b) reviews browsers, editors, and other SVG tools; and (c) talks about some of the more powerful capabilities of SVG that might be important for ocean and earth science data presentation, such as searchability, animation and scripting, interactivity, accessibility, dynamic SVG, layers, scalability, SVG Text, SVG Audio, server-side SVG, and embedding metadata and data. A list of useful SVG resources is also given.

  13. Scalable Notch Antenna System for Multiport Applications

    Directory of Open Access Journals (Sweden)

    Abdurrahim Toktas

    2016-01-01

    Full Text Available A novel and compact scalable antenna system is designed for multiport applications. The basic design is built on a square patch with an electrical size of 0.82λ0×0.82λ0 (at 2.4 GHz) on a dielectric substrate. The design consists of four symmetrical and orthogonal triangular notches with circular feeding slots at the corners of the common patch. The 4-port antenna can be simply rearranged to 8-port and 12-port systems. The operating band of the system can be tuned by scaling (S) the size of the system while fixing the thickness of the substrate. The antenna system with S: 1/1, in size of 103.5×103.5 mm², operates at the frequency band of 2.3–3.0 GHz. By scaling the antenna with S: 1/2.3, a system of 45×45 mm² is achieved, and thus the operating band is tuned to 4.7–6.1 GHz with the same scattering characteristic. A parametric study is also conducted to investigate the effects of changing the notch dimensions. The performance of the antenna is verified in terms of the antenna characteristics as well as diversity and multiplexing parameters. The antenna system can be tuned by scaling so that it is applicable to multiport WLAN, WIMAX, and LTE devices with port upgradability.

  14. SCTP as scalable video coding transport

    Science.gov (United States)

    Ortiz, Jordi; Graciá, Eduardo Martínez; Skarmeta, Antonio F.

    2013-12-01

    This study presents an evaluation of the Stream Control Transmission Protocol (SCTP) for the transport of the scalable video codec (SVC), proposed by MPEG as an extension to H.264/AVC. Both technologies fit together properly. On the one hand, SVC makes it easy to split the bitstream into substreams carrying different video layers, each with different importance for the reconstruction of the complete video sequence at the receiver end. On the other hand, SCTP includes features, such as the multi-streaming and multi-homing capabilities, that permit the SVC layers to be transported robustly and efficiently. Several transmission strategies supported by baseline SCTP and its concurrent multipath transfer (CMT) extension are compared with the classical solutions based on the Transmission Control Protocol (TCP) and the Real-time Transport Protocol (RTP). Using ns-2 simulations, it is shown that CMT-SCTP outperforms TCP and RTP in error-prone networking environments. The comparison is established according to several performance measurements, including delay, throughput, packet loss, and peak signal-to-noise ratio of the received video.

  15. A Hybrid MPI-OpenMP Scheme for Scalable Parallel Pseudospectral Computations for Fluid Turbulence

    Science.gov (United States)

    Rosenberg, D. L.; Mininni, P. D.; Reddy, R. N.; Pouquet, A.

    2010-12-01

    A hybrid scheme that utilizes MPI for distributed memory parallelism and OpenMP for shared memory parallelism is presented. The work is motivated by the desire to achieve exceptionally high Reynolds numbers in pseudospectral computations of fluid turbulence on emerging petascale, high core-count, massively parallel processing systems. The hybrid implementation derives from and augments a well-tested scalable MPI-parallelized pseudospectral code. The hybrid paradigm leads to a new picture for the domain decomposition of the pseudospectral grids, which is helpful in understanding, among other things, the 3D transpose of the global data that is necessary for the parallel fast Fourier transforms that are the central component of the numerical discretizations. Details of the hybrid implementation are provided, and performance tests illustrate the utility of the method. It is shown that the hybrid scheme achieves near ideal scalability up to ~20000 compute cores with a maximum mean efficiency of 83%. Data are presented that demonstrate how to choose the optimal number of MPI processes and OpenMP threads in order to optimize code performance on two different platforms.
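
    The global transpose referred to above can be sketched for a 2D pseudospectral grid with mpi4py: each rank transforms its local slab along one axis, ranks exchange blocks with Alltoall to re-slab the data, and the remaining axis is transformed locally. This is a bare-bones 2D sketch under the assumptions that mpi4py and NumPy are available and that the grid size N is divisible by the number of ranks; it is not the hybrid MPI-OpenMP code described in the abstract.

```python
# run with e.g.: mpirun -np 4 python slab_fft2.py   (hypothetical file name)
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, P = comm.Get_rank(), comm.Get_size()
N = 64                      # toy grid size; must be divisible by P
nl = N // P                 # local slab height

# each rank owns rows [rank*nl, (rank+1)*nl) of a global N x N array
rng = np.random.default_rng(rank)
local = rng.standard_normal((nl, N)) + 0j

# 1) transform along the locally contiguous axis
a = np.fft.fft(local, axis=1)

# 2) global transpose: split columns into P blocks and exchange with Alltoall
send = np.ascontiguousarray(a.reshape(nl, P, nl).transpose(1, 0, 2))  # (P, nl, nl)
recv = np.empty_like(send)
comm.Alltoall(send, recv)
a_t = recv.transpose(2, 0, 1).reshape(nl, N)   # rank now owns nl columns, laid out as rows

# 3) transform along the remaining axis (now local)
result = np.fft.fft(a_t, axis=1)               # this rank's slab of fft2(global).T

# check against a serial FFT on rank 0
full = np.empty((P, nl, N), dtype=complex) if rank == 0 else None
glob = np.empty((P, nl, N), dtype=complex) if rank == 0 else None
comm.Gather(result, full, root=0)
comm.Gather(local, glob, root=0)
if rank == 0:
    assert np.allclose(full.reshape(N, N), np.fft.fft2(glob.reshape(N, N)).T)
    print("distributed transpose FFT matches np.fft.fft2")
```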

  16. The scalability in the mechanochemical syntheses of edge functionalized graphene materials and biomass-derived chemicals.

    Science.gov (United States)

    Blair, Richard G; Chagoya, Katerina; Biltek, Scott; Jackson, Steven; Sinclair, Ashlyn; Taraboletti, Alexandra; Restrepo, David T

    2014-01-01

    Mechanochemical approaches to chemical synthesis offer the promise of improved yields, new reaction pathways and greener syntheses. Scaling these syntheses is a crucial step toward realizing a commercially viable process. Although much work has been performed on laboratory-scale investigations little has been done to move these approaches toward industrially relevant scales. Moving reactions from shaker-type mills and planetary-type mills to scalable solutions can present a challenge. We have investigated scalability through discrete element models, thermal monitoring and reactor design. We have found that impact forces and macroscopic mixing are important factors in implementing a truly scalable process. These observations have allowed us to scale reactions from a few grams to several hundred grams and we have successfully implemented scalable solutions for the mechanocatalytic conversion of cellulose to value-added compounds and the synthesis of edge functionalized graphene.

  17. Scalable and near-optimal design space exploration for embedded systems

    CERN Document Server

    Kritikakou, Angeliki; Goutis, Costas

    2014-01-01

    This book describes scalable and near-optimal, processor-level design space exploration (DSE) methodologies. The authors present design methodologies for data storage and processing in real-time, cost-sensitive, data-dominated embedded systems. Readers will be enabled to reduce time-to-market while satisfying system requirements for performance, area, and energy consumption, thereby minimizing the overall cost of the final design.
    • Describes design space exploration (DSE) methodologies for data storage and processing in embedded systems, which achieve near-optimal solutions with scalable exploration time;
    • Presents a set of principles and the processes which support the development of the proposed scalable and near-optimal methodologies;
    • Enables readers to apply scalable and near-optimal methodologies to the intra-signal in-place optimization step for both regular and irregular memory accesses.

  18. SuperLU_DIST: A scalable distributed-memory sparse direct solver for unsymmetric linear systems

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xiaoye S.; Demmel, James W.

    2002-03-27

    In this paper, we present the main algorithmic features in the software package SuperLU_DIST, a distributed-memory sparse direct solver for large sets of linear equations. We describe our parallelization strategies in detail, with a focus on scalability issues, and demonstrate the parallel performance and scalability on current machines. The solver is based on sparse Gaussian elimination, with an innovative static pivoting strategy proposed earlier by the authors. The main advantage of static pivoting over classical partial pivoting is that it permits a priori determination of data structures and the communication pattern for sparse Gaussian elimination, which makes it more scalable on distributed memory machines. Based on this a priori knowledge, we designed highly parallel and scalable algorithms for both LU decomposition and triangular solve, and we show that they are suitable for large-scale distributed memory machines.
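
    SuperLU is also the library that SciPy wraps for its sparse LU factorization, so the flavor of the solver can be tried in a few lines, though SciPy uses the sequential SuperLU rather than the distributed SuperLU_DIST described here. The test matrix below is an arbitrary unsymmetric tridiagonal system chosen for illustration.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# build a small unsymmetric sparse system (1D convection-diffusion style stencil)
n = 1000
main = 2.0 * np.ones(n)
lower = -1.3 * np.ones(n - 1)   # asymmetry makes the pivoting strategy relevant
upper = -0.7 * np.ones(n - 1)
A = sp.diags([lower, main, upper], offsets=[-1, 0, 1], format="csc")
b = np.ones(n)

lu = splu(A)          # sparse LU factorization via the (sequential) SuperLU library
x = lu.solve(b)
print(np.linalg.norm(A @ x - b))   # residual should be near machine precision
```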

  19. Scalable inference for stochastic block models

    KAUST Repository

    Peng, Chengbin

    2017-12-08

    Community detection in graphs is widely used in social and biological networks, and the stochastic block model is a powerful probabilistic tool for describing graphs with community structures. However, in the era of "big data," traditional inference algorithms for such a model are increasingly limited due to their high time complexity and poor scalability. In this paper, we propose a multi-stage maximum likelihood approach to recover the latent parameters of the stochastic block model, in time linear with respect to the number of edges. We also propose a parallel algorithm based on message passing. Our algorithm can overlap communication and computation, providing speedup without compromising accuracy as the number of processors grows. For example, to process a real-world graph with about 1.3 million nodes and 10 million edges, our algorithm requires about 6 seconds on 64 cores of a contemporary commodity Linux cluster. Experiments demonstrate that the algorithm can produce high quality results on both benchmark and real-world graphs. An example of finding more meaningful communities is illustrated consequently in comparison with a popular modularity maximization algorithm.
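
    The likelihood structure behind such inference is easy to see in a toy setting: if the block memberships were known, the maximum-likelihood estimate of each block-to-block connection probability is simply the observed edge density between those blocks. The sketch below (assuming NetworkX is installed) generates a small two-block SBM and recovers the probability matrix that way; the paper's contribution is recovering the latent memberships themselves at scale, which this sketch does not attempt.

```python
import numpy as np
import networkx as nx

sizes = [300, 300]
p_true = [[0.10, 0.02],
          [0.02, 0.08]]
g = nx.stochastic_block_model(sizes, p_true, seed=1)

# block label of every node (known here; latent in the real inference problem)
labels = np.repeat([0, 1], sizes)

# given the labels, the MLE of each block-to-block probability is edges / possible pairs
counts = np.zeros((2, 2))
for u, v in g.edges():
    a, b = labels[u], labels[v]
    counts[a, b] += 1
    if a != b:
        counts[b, a] += 1

pairs = np.array([[sizes[0] * (sizes[0] - 1) / 2, sizes[0] * sizes[1]],
                  [sizes[0] * sizes[1], sizes[1] * (sizes[1] - 1) / 2]])
print(np.round(counts / pairs, 3))   # should be close to p_true
```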

  20. Scalability and interoperability within glideinWMS

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, D.; /Wisconsin U., Madison; Sfiligoi, I.; /Fermilab; Padhi, S.; /UC, San Diego; Frey, J.; /Wisconsin U., Madison; Tannenbaum, T.; /Wisconsin U., Madison

    2010-01-01

    Physicists have access to thousands of CPUs in grid federations such as OSG and EGEE. With the start-up of the LHC, it is essential for individuals or groups of users to wrap together available resources from multiple sites across multiple grids under a higher user-controlled layer in order to provide a homogeneous pool of available resources. One such system is glideinWMS, which is based on the Condor batch system. A general discussion of glideinWMS can be found elsewhere. Here, we focus on recent advances in extending its reach: scalability and integration of heterogeneous compute elements. We demonstrate that the new developments exceed the design goal of over 10,000 simultaneous running jobs under a single Condor schedd, using strong security protocols across global networks, and sustaining a steady-state job completion rate of a few Hz. We also show interoperability across heterogeneous computing elements achieved using client-side methods. We discuss this technique and the challenges in direct access to NorduGrid and CREAM compute elements, in addition to Globus based systems.

  1. Feedback Providing Improvement Strategies and Reflection on Feedback Use: Effects on Students' Writing Motivation, Process, and Performance

    Science.gov (United States)

    Duijnhouwer, Hendrien; Prins, Frans J.; Stokking, Karel M.

    2012-01-01

    This study investigated the effects of feedback providing improvement strategies and a reflection assignment on students' writing motivation, process, and performance. Students in the experimental feedback condition (n = 41) received feedback including improvement strategies, whereas students in the control feedback condition (n = 41) received…

  2. JPEG2000-Compatible Scalable Scheme for Wavelet-Based Video Coding

    Directory of Open Access Journals (Sweden)

    Thomas André

    2007-03-01

    Full Text Available We present a simple yet efficient scalable scheme for wavelet-based video coders, able to provide on-demand spatial, temporal, and SNR scalability, and fully compatible with the still-image coding standard JPEG2000. Whereas hybrid video coders must undergo significant changes in order to support scalability, our coder only requires a specific wavelet filter for temporal analysis, as well as an adapted bit allocation procedure based on models of rate-distortion curves. Our study shows that scalably encoded sequences have the same or almost the same quality than nonscalably encoded ones, without a significant increase in complexity. A full compatibility with Motion JPEG2000, which tends to be a serious candidate for the compression of high-definition video sequences, is ensured.

  3. JPEG2000-Compatible Scalable Scheme for Wavelet-Based Video Coding

    Directory of Open Access Journals (Sweden)

    André Thomas

    2007-01-01

    Full Text Available We present a simple yet efficient scalable scheme for wavelet-based video coders, able to provide on-demand spatial, temporal, and SNR scalability, and fully compatible with the still-image coding standard JPEG2000. Whereas hybrid video coders must undergo significant changes in order to support scalability, our coder only requires a specific wavelet filter for temporal analysis, as well as an adapted bit allocation procedure based on models of rate-distortion curves. Our study shows that scalably encoded sequences have the same or almost the same quality than nonscalably encoded ones, without a significant increase in complexity. A full compatibility with Motion JPEG2000, which tends to be a serious candidate for the compression of high-definition video sequences, is ensured.

  4. Temporal Scalability through Adaptive M-Band Filter Banks for Robust H.264/MPEG-4 AVC Video Coding

    Directory of Open Access Journals (Sweden)

    Pau G

    2006-01-01

    Full Text Available This paper presents different structures that use adaptive M-band hierarchical filter banks for temporal scalability. Open-loop and closed-loop configurations are introduced and illustrated using existing video codecs. In particular, it is shown that the H.264/MPEG-4 AVC codec allows us to introduce scalability by frame shuffling operations, thus keeping backward compatibility with the standard. The large set of shuffling patterns introduced here can be exploited to adapt the encoding process to the video content features, as well as to the user equipment and transmission channel characteristics. Furthermore, simulation results show that this scalability is obtained with no degradation in terms of subjective and objective quality in error-free environments, while in error-prone channels the scalable versions provide increased robustness.

  5. Wanted: Scalable Tracers for Diffusion Measurements

    Science.gov (United States)

    2015-01-01

    Scalable tracers are potentially a useful tool to examine diffusion mechanisms and to predict diffusion coefficients, particularly for hindered diffusion in complex, heterogeneous, or crowded systems. Scalable tracers are defined as a series of tracers varying in size but with the same shape, structure, surface chemistry, deformability, and diffusion mechanism. Both chemical homology and constant dynamics are required. In particular, branching must not vary with size, and there must be no transition between ordinary diffusion and reptation. Measurements using scalable tracers yield the mean diffusion coefficient as a function of size alone; measurements using nonscalable tracers yield the variation due to differences in the other properties. Candidate scalable tracers are discussed for two-dimensional (2D) diffusion in membranes and three-dimensional diffusion in aqueous solutions. Correlations to predict the mean diffusion coefficient of globular biomolecules from molecular mass are reviewed briefly. Specific suggestions for the 3D case include the use of synthetic dendrimers or random hyperbranched polymers instead of dextran and the use of core–shell quantum dots. Another useful tool would be a series of scalable tracers varying in deformability alone, prepared by varying the density of crosslinking in a polymer to make say “reinforced Ficoll” or “reinforced hyperbranched polyglycerol.” PMID:25319586
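
    The kind of correlation mentioned for globular biomolecules follows from the Stokes-Einstein relation with a hydrodynamic radius estimated from molecular mass. The sketch below computes this under textbook assumptions (water viscosity near room temperature, a typical protein density of about 1.35 g/cm³, no hydration shell); the numbers are illustrative and are not values taken from the paper.

```python
import math

def diffusion_coefficient_from_mass(mass_kda, temperature_k=298.0,
                                    viscosity_pa_s=0.89e-3, density_kg_m3=1350.0):
    """Stokes-Einstein estimate D = kT / (6*pi*eta*R) with R from a compact sphere."""
    k_b = 1.380649e-23                       # Boltzmann constant, J/K
    n_a = 6.02214076e23                      # Avogadro's number, 1/mol
    mass_kg = mass_kda * 1e3 * 1e-3 / n_a    # kDa -> kg per molecule
    radius_m = (3.0 * mass_kg / (4.0 * math.pi * density_kg_m3)) ** (1.0 / 3.0)
    return k_b * temperature_k / (6.0 * math.pi * viscosity_pa_s * radius_m)

# e.g. a ~66 kDa globular protein (roughly serum-albumin sized)
d = diffusion_coefficient_from_mass(66.0)
print(f"Stokes-Einstein estimate: D ~ {d * 1e4:.1e} cm^2/s")   # on the order of 1e-6 cm^2/s
```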

  6. Scalable L-infinite coding of meshes.

    Science.gov (United States)

    Munteanu, Adrian; Cernea, Dan C; Alecu, Alin; Cornelis, Jan; Schelkens, Peter

    2010-01-01

    The paper investigates the novel concept of local-error control in mesh geometry encoding. In contrast to traditional mesh-coding systems that use the mean-square error as target distortion metric, this paper proposes a new L-infinite mesh-coding approach, for which the target distortion metric is the L-infinite distortion. In this context, a novel wavelet-based L-infinite-constrained coding approach for meshes is proposed, which ensures that the maximum error between the vertex positions in the original and decoded meshes is lower than a given upper bound. Furthermore, the proposed system achieves scalability in L-infinite sense, that is, any decoding of the input stream will correspond to a perfectly predictable L-infinite distortion upper bound. An instantiation of the proposed L-infinite-coding approach is demonstrated for MESHGRID, which is a scalable 3D object encoding system, part of MPEG-4 AFX. In this context, the advantages of scalable L-infinite coding over L-2-oriented coding are experimentally demonstrated. One concludes that the proposed L-infinite mesh-coding approach guarantees an upper bound on the local error in the decoded mesh, it enables a fast real-time implementation of the rate allocation, and it preserves all the scalability features and animation capabilities of the employed scalable mesh codec.
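
    The guarantee described above, a hard bound on the maximum vertex displacement, can be contrasted with the mean-square error in a few lines: uniform quantization of vertex coordinates with step 2ε keeps the per-coordinate L-infinite error at or below ε by construction, whatever the mesh. The snippet below is only a numerical illustration of that bound on random stand-in geometry, not the wavelet-based codec itself.

```python
import numpy as np

rng = np.random.default_rng(42)
vertices = rng.uniform(-1.0, 1.0, size=(10_000, 3))   # stand-in mesh geometry

eps = 1e-3                                  # target L-infinite bound on vertex error
step = 2.0 * eps
decoded = np.round(vertices / step) * step  # uniform quantization with step 2*eps

linf_error = np.max(np.abs(decoded - vertices))        # worst-case coordinate displacement
mse = np.mean((decoded - vertices) ** 2)               # the usual L2-oriented metric
print(f"L-infinite error = {linf_error:.2e} (bound {eps:.2e}), MSE = {mse:.2e}")
assert linf_error <= eps + 1e-12
```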

  7. Scalable Coverage Maintenance for Dense Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Jun Lu

    2007-06-01

    Full Text Available Owing to numerous potential applications, wireless sensor networks have been attracting significant research effort recently. The critical challenge that wireless sensor networks often face is to sustain long-term operation on limited battery energy. Coverage maintenance schemes can effectively prolong network lifetime by selecting and employing a subset of sensors in the network to provide sufficient sensing coverage over a target region. We envision future wireless sensor networks composed of a vast number of miniaturized sensors in exceedingly high density. Therefore, the key issue of coverage maintenance for future sensor networks is the scalability to sensor deployment density. In this paper, we propose a novel coverage maintenance scheme, scalable coverage maintenance (SCOM, which is scalable to sensor deployment density in terms of communication overhead (i.e., number of transmitted and received beacons and computational complexity (i.e., time and space complexity. In addition, SCOM achieves high energy efficiency and load balancing over different sensors. We have validated our claims through both analysis and simulations.

  8. Event metadata records as a testbed for scalable data mining

    Science.gov (United States)

    van Gemmeren, P.; Malon, D.

    2010-04-01

    At a data rate of 200 hertz, event metadata records ("TAGs," in ATLAS parlance) provide fertile grounds for development and evaluation of tools for scalable data mining. It is easy, of course, to apply HEP-specific selection or classification rules to event records and to label such an exercise "data mining," but our interest is different. Advanced statistical methods and tools such as classification, association rule mining, and cluster analysis are common outside the high energy physics community. These tools can prove useful, not for discovery physics, but for learning about our data, our detector, and our software. A fixed and relatively simple schema makes TAG export to other storage technologies such as HDF5 straightforward. This simplifies the task of exploiting very-large-scale parallel platforms such as Argonne National Laboratory's BlueGene/P, currently the largest supercomputer in the world for open science, in the development of scalable tools for data mining. Using a domain-neutral scientific data format may also enable us to take advantage of existing data mining components from other communities. There is, further, a substantial literature on the topic of one-pass algorithms and stream mining techniques, and such tools may be inserted naturally at various points in the event data processing and distribution chain. This paper describes early experience with event metadata records from ATLAS simulation and commissioning as a testbed for scalable data mining tool development and evaluation.

  9. Event metadata records as a testbed for scalable data mining

    Energy Technology Data Exchange (ETDEWEB)

    Gemmeren, P van; Malon, D, E-mail: gemmeren@anl.go [Argonne National Laboratory, Argonne, Illinois 60439 (United States)

    2010-04-01

    At a data rate of 200 hertz, event metadata records ('TAGs,' in ATLAS parlance) provide fertile grounds for development and evaluation of tools for scalable data mining. It is easy, of course, to apply HEP-specific selection or classification rules to event records and to label such an exercise 'data mining,' but our interest is different. Advanced statistical methods and tools such as classification, association rule mining, and cluster analysis are common outside the high energy physics community. These tools can prove useful, not for discovery physics, but for learning about our data, our detector, and our software. A fixed and relatively simple schema makes TAG export to other storage technologies such as HDF5 straightforward. This simplifies the task of exploiting very-large-scale parallel platforms such as Argonne National Laboratory's BlueGene/P, currently the largest supercomputer in the world for open science, in the development of scalable tools for data mining. Using a domain-neutral scientific data format may also enable us to take advantage of existing data mining components from other communities. There is, further, a substantial literature on the topic of one-pass algorithms and stream mining techniques, and such tools may be inserted naturally at various points in the event data processing and distribution chain. This paper describes early experience with event metadata records from ATLAS simulation and commissioning as a testbed for scalable data mining tool development and evaluation.

  10. Memory-Scalable GPU Spatial Hierarchy Construction.

    Science.gov (United States)

    Qiming Hou; Xin Sun; Kun Zhou; Lauterbach, C; Manocha, D

    2011-04-01

    Recent GPU algorithms for constructing spatial hierarchies have achieved promising performance for moderately complex models by using the breadth-first search (BFS) construction order. While being able to exploit the massive parallelism on the GPU, the BFS order also consumes excessive GPU memory, which becomes a serious issue for interactive applications involving very complex models with more than a few million triangles. In this paper, we propose to use the partial breadth-first search (PBFS) construction order to control memory consumption while maximizing performance. We apply the PBFS order to two hierarchy construction algorithms. The first algorithm is for kd-trees that automatically balances between the level of parallelism and intermediate memory usage. With PBFS, peak memory consumption during construction can be efficiently controlled without costly CPU-GPU data transfer. We also develop memory allocation strategies to effectively limit memory fragmentation. The resulting algorithm scales well with GPU memory and constructs kd-trees of models with millions of triangles at interactive rates on GPUs with 1 GB memory. Compared with existing algorithms, our algorithm is an order of magnitude more scalable for a given GPU memory bound. The second algorithm is for out-of-core bounding volume hierarchy (BVH) construction for very large scenes based on the PBFS construction order. At each iteration, all constructed nodes are dumped to the CPU memory, and the GPU memory is freed for the next iteration's use. In this way, the algorithm is able to build trees that are too large to be stored in the GPU memory. Experiments show that our algorithm can construct BVHs for scenes with up to 20 M triangles, several times larger than previous GPU algorithms.

  11. Competency in Chaos: Lifesaving Performance of Care Providers Utilizing a Competency-Based, Multi-Actor Emergency Preparedness Training Curriculum

    Science.gov (United States)

    Scott, Lancer A.; Swartzentruber, Derrick A.; Davis, Christopher Ashby; Maddux, P. Tim; Schnellman, Jennifer; Wahlquist, Amy E.

    2015-01-01

    Objective Providing comprehensive emergency preparedness training (EPT) to care providers is important to the future success of disaster operations in the US. Few EPT programs possess both competency-driven goals and metrics to measure performance during a multi-patient simulated disaster. Methods A 1-day (8-hour) EPT course for care providers was developed to enhance provider knowledge, skill, and comfort necessary to save lives during a simulated disaster. Nine learning objectives, 18 competencies, and 34 performance objectives were developed. During the 2-year demonstration of the curriculum, 24 fourth-year medical students and 17 Veterans Hospital Administration (VHA) providers were recruited and volunteered to take the course (two did not fully complete the research materials). An online pre-test, two post-tests, course assessment, didactic and small group content, and a 6-minute clinical casualty scenario were developed. During the scenario, trainees working in teams were confronted with three human simulators and 10 actor patients simultaneously. Unless appropriate performance objectives were met, the simulators “died” and the team was exposed to “anthrax.” After the scenario, team members participated in a facilitator-led debriefing using digital video and then repeated the scenario. Results Trainees (N = 39) included 24 (62%) medical students; seven (18%) physicians; seven (18%) nurses; and one (3%) emergency manager. Forty-seven percent of the VHA providers reported greater than 16 annual hours of disaster training, while 15 (63%) of the medical students reported no annual disaster training. The mean (SD) score for the pre-test was 12.3 (3.8), or 51% correct, and after the training, the mean (SD) score was 18.5 (2.2), or 77% (P <.01). The overall rating for the course was 96 out of 100. Trainee self-assessment of “Overall Skill” increased from 63.3 out of 100 to 83.4 out of 100 and “Overall Knowledge” increased from 49.3 out of 100 to 78

  12. ADEpedia: a scalable and standardized knowledge base of Adverse Drug Events using semantic web technology.

    Science.gov (United States)

    Jiang, Guoqian; Solbrig, Harold R; Chute, Christopher G

    2011-01-01

    A source of semantically coded Adverse Drug Event (ADE) data can be useful for identifying common phenotypes related to ADEs. We proposed a comprehensive framework for building a standardized ADE knowledge base (called ADEpedia) by combining an ontology-based approach with semantic web technology. The framework comprises four primary modules: 1) an XML2RDF transformation module; 2) a data normalization module based on the NCBO Open Biomedical Annotator; 3) an RDF-store-based persistence module; and 4) a front-end module based on a Semantic Wiki for review and curation. A prototype is successfully implemented to demonstrate the capability of the system to integrate multiple drug data and ontology resources and open web services for ADE data standardization. A preliminary evaluation is performed to demonstrate the usefulness of the system, including the performance of the NCBO annotator. In conclusion, semantic web technology provides a highly scalable framework for ADE data source integration and a standard query service.
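
    The first module of such a pipeline, XML-to-RDF transformation, can be sketched with rdflib: parse a made-up ADE record from XML and emit triples under a hypothetical namespace. This assumes rdflib is installed; the element names, namespace, and codes are illustrative only and do not reflect ADEpedia's actual schema.

```python
import xml.etree.ElementTree as ET
from rdflib import Graph, Literal, Namespace, RDF

# hypothetical source record; real inputs would come from drug-label or surveillance feeds
XML_RECORD = """
<adeRecord id="ade-0001">
  <drug name="examplamycin" rxnormCode="999999"/>
  <reaction meddraCode="10019211" label="Headache"/>
</adeRecord>
"""

ADE = Namespace("http://example.org/adepedia/")   # illustrative namespace

def xml_to_rdf(xml_text):
    """Minimal XML -> RDF transformation: one ADE node linked to its drug and reaction."""
    root = ET.fromstring(xml_text)
    g = Graph()
    ade = ADE[root.attrib["id"]]
    g.add((ade, RDF.type, ADE.AdverseDrugEvent))

    drug = root.find("drug")
    g.add((ade, ADE.hasDrug, Literal(drug.attrib["name"])))
    g.add((ade, ADE.rxnormCode, Literal(drug.attrib["rxnormCode"])))

    reaction = root.find("reaction")
    g.add((ade, ADE.hasReaction, Literal(reaction.attrib["label"])))
    g.add((ade, ADE.meddraCode, Literal(reaction.attrib["meddraCode"])))
    return g

print(xml_to_rdf(XML_RECORD).serialize(format="turtle"))
```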

  13. Scalable Density-Based Subspace Clustering

    DEFF Research Database (Denmark)

    Müller, Emmanuel; Assent, Ira; Günnemann, Stephan

    2011-01-01

    For knowledge discovery in high dimensional databases, subspace clustering detects clusters in arbitrary subspace projections. Scalability is a crucial issue, as the number of possible projections is exponential in the number of dimensions. We propose a scalable density-based subspace clustering method that steers mining to a few selected subspace clusters. Our novel steering technique reduces subspace processing by identifying and clustering promising subspaces and their combinations directly. Thereby, it narrows down the search space while maintaining accuracy. Thorough experiments on real and synthetic databases show that steering is efficient and scalable, with high quality results. For future work, our steering paradigm for density-based subspace clustering opens research potential for speeding up other subspace clustering approaches as well.

  14. Scalable Open Source Smart Grid Simulator (SGSim)

    DEFF Research Database (Denmark)

    Ebeid, Emad Samuel Malki; Jacobsen, Rune Hylsberg; Quaglia, Davide

    2017-01-01

    The future smart power grid will consist of an unlimited number of smart devices that communicate with control units to maintain the grid’s sustainability, efficiency, and balancing. In order to build and verify such controllers over a large grid, a scalable simulation environment is needed. This paper presents an open source smart grid simulator (SGSim). The simulator is based on open source SystemC Network Simulation Library (SCNSL) and aims to model scalable smart grid applications. SGSim has been tested under different smart grid scenarios that contain hundreds of thousands of households and appliances. By using SGSim, different smart grid control strategies and protocols can be tested, validated and evaluated in a scalable environment.

  15. Scalable persistent identifier systems for dynamic datasets

    Science.gov (United States)

    Golodoniuc, P.; Cox, S. J. D.; Klump, J. F.

    2016-12-01

    Reliable and persistent identification of objects, whether tangible or not, is essential in information management. Many Internet-based systems have been developed to identify digital data objects, e.g., PURL, LSID, Handle, ARK. These were largely designed for identification of static digital objects. The amount of data made available online has grown exponentially over the last two decades and fine-grained identification of dynamically generated data objects within large datasets using conventional systems (e.g., PURL) has become impractical. We have compared capabilities of various technological solutions to enable resolvability of data objects in dynamic datasets, and developed a dataset-centric approach to resolution of identifiers. This is particularly important in Semantic Linked Data environments where dynamic frequently changing data is delivered live via web services, so registration of individual data objects to obtain identifiers is impractical. We use identifier patterns and pattern hierarchies for identification of data objects, which allows relationships between identifiers to be expressed, and also provides means for resolving a single identifier into multiple forms (i.e. views or representations of an object). The latter can be implemented through (a) HTTP content negotiation, or (b) use of URI querystring parameters. The pattern and hierarchy approach has been implemented in the Linked Data API supporting the United Nations Spatial Data Infrastructure (UNSDI) initiative and later in the implementation of geoscientific data delivery for the Capricorn Distal Footprints project using International Geo Sample Numbers (IGSN). This enables flexible resolution of multi-view persistent identifiers and provides a scalable solution for large heterogeneous datasets.
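
    The pattern-plus-representation idea can be mimicked with a small resolver: an identifier pattern (here a regex) maps to a handler, and a parameter selects among several representations of the same object, standing in for HTTP content negotiation. Everything in this sketch, including the IGSN-like pattern and landing URLs, is invented for illustration and is not the Linked Data API configuration used by the projects mentioned above.

```python
import re

# pattern hierarchy: more specific patterns first; each maps to representation builders
RESOLVERS = [
    (re.compile(r"^igsn:(?P<prefix>[A-Z]{2,5})(?P<local>\w+)$"), {
        "html": lambda m: f"https://samples.example.org/{m['prefix']}{m['local']}",
        "json": lambda m: f"https://api.example.org/samples/{m['prefix']}{m['local']}.json",
    }),
    (re.compile(r"^dataset:(?P<name>[\w-]+)$"), {
        "html": lambda m: f"https://data.example.org/{m['name']}",
    }),
]

def resolve(identifier, representation="html"):
    """Resolve an identifier to one of its representations, or None if no pattern matches."""
    for pattern, views in RESOLVERS:
        m = pattern.match(identifier)
        if m and representation in views:
            return views[representation](m.groupdict())
    return None

print(resolve("igsn:CSRWA1234"))                         # default HTML landing page
print(resolve("igsn:CSRWA1234", representation="json"))  # alternate representation
print(resolve("dataset:capricorn-footprints"))
```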

  16. Microscopic Characterization of Scalable Coherent Rydberg Superatoms

    Directory of Open Access Journals (Sweden)

    Johannes Zeiher

    2015-08-01

    Full Text Available Strong interactions can amplify quantum effects such that they become important on macroscopic scales. Controlling these coherently on a single-particle level is essential for the tailored preparation of strongly correlated quantum systems and opens up new prospects for quantum technologies. Rydberg atoms offer such strong interactions, which lead to extreme nonlinearities in laser-coupled atomic ensembles. As a result, multiple excitation of a micrometer-sized cloud can be blocked while the light-matter coupling becomes collectively enhanced. The resulting two-level system, often called a “superatom,” is a valuable resource for quantum information, providing a collective qubit. Here, we report on the preparation of 2 orders of magnitude scalable superatoms utilizing the large interaction strength provided by Rydberg atoms combined with precise control of an ensemble of ultracold atoms in an optical lattice. The latter is achieved with sub-shot-noise precision by local manipulation of a two-dimensional Mott insulator. We microscopically confirm the superatom picture by in situ detection of the Rydberg excitations and observe the characteristic square-root scaling of the optical coupling with the number of atoms. Enabled by the full control over the atomic sample, including the motional degrees of freedom, we infer the overlap of the produced many-body state with a W state from the observed Rabi oscillations and deduce the presence of entanglement. Finally, we investigate the breakdown of the superatom picture when two Rydberg excitations are present in the system, which leads to dephasing and a loss of coherence.

  17. Microscopic Characterization of Scalable Coherent Rydberg Superatoms

    Science.gov (United States)

    Zeiher, Johannes; Schauß, Peter; Hild, Sebastian; Macrì, Tommaso; Bloch, Immanuel; Gross, Christian

    2015-07-01

    Strong interactions can amplify quantum effects such that they become important on macroscopic scales. Controlling these coherently on a single-particle level is essential for the tailored preparation of strongly correlated quantum systems and opens up new prospects for quantum technologies. Rydberg atoms offer such strong interactions, which lead to extreme nonlinearities in laser-coupled atomic ensembles. As a result, multiple excitation of a micrometer-sized cloud can be blocked while the light-matter coupling becomes collectively enhanced. The resulting two-level system, often called a "superatom," is a valuable resource for quantum information, providing a collective qubit. Here, we report on the preparation of 2 orders of magnitude scalable superatoms utilizing the large interaction strength provided by Rydberg atoms combined with precise control of an ensemble of ultracold atoms in an optical lattice. The latter is achieved with sub-shot-noise precision by local manipulation of a two-dimensional Mott insulator. We microscopically confirm the superatom picture by in situ detection of the Rydberg excitations and observe the characteristic square-root scaling of the optical coupling with the number of atoms. Enabled by the full control over the atomic sample, including the motional degrees of freedom, we infer the overlap of the produced many-body state with a W state from the observed Rabi oscillations and deduce the presence of entanglement. Finally, we investigate the breakdown of the superatom picture when two Rydberg excitations are present in the system, which leads to dephasing and a loss of coherence.

  18. A Scalable and Reliable Message Transport Service for the ATLAS Trigger and Data Acquisition System

    CERN Document Server

    Kazarov, A; The ATLAS collaboration; Kolos, S; Lehmann Miotto, G; Soloviev, I

    2014-01-01

    The ATLAS Trigger and Data Acquisition (TDAQ) is a large distributed computing system composed of several thousand interconnected computers and tens of thousands of applications. During a run, TDAQ applications produce a large number of control and information messages at variable rates, addressed to TDAQ operators or to other applications. Reliable, fast and accurate delivery of the messages is important for the functioning of the whole TDAQ system. The Message Transport Service (MTS) provides facilities for the reliable transport, filtering and routing of the messages, based on a publish-subscribe-notify communication pattern with content-based message filtering. During the ongoing LHC shutdown, the MTS was re-implemented, taking into account important requirements such as reliability, scalability and performance, handling of the slow-subscriber case and also simplicity of the design and the implementation. MTS uses CORBA middleware, a common layer for the TDAQ infrastructure, and provides sending/subscribing APIs i...
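
    As a hedged, generic illustration of the publish-subscribe-notify pattern with content-based filtering on which MTS is based (a minimal Python sketch, not the actual MTS/CORBA interfaces, which are not detailed here):

        class MessageBroker:
            """Tiny content-based publish/subscribe broker (illustrative only)."""

            def __init__(self):
                self._subscribers = []            # list of (predicate, callback) pairs

            def subscribe(self, predicate, callback):
                # predicate: function taking a message dict and returning True if wanted
                self._subscribers.append((predicate, callback))

            def publish(self, message):
                # notify every subscriber whose content filter matches the message
                for predicate, callback in self._subscribers:
                    if predicate(message):
                        callback(message)

        broker = MessageBroker()
        broker.subscribe(lambda m: m["severity"] == "ERROR",
                         lambda m: print("operator console:", m["text"]))
        broker.publish({"severity": "ERROR", "text": "readout application crashed"})
        broker.publish({"severity": "INFO", "text": "run started"})   # filtered out, no callback fires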

  19. Scalable Parallel Distributed Coprocessor System for Graph Searching Problems with Massive Data

    Directory of Open Access Journals (Sweden)

    Wanrong Huang

    2017-01-01

    Full Text Available The Internet applications, such as network searching, electronic commerce, and modern medical applications, produce and process massive data. Considerable data parallelism exists in the computation processes of data-intensive applications. A traversal algorithm, breadth-first search (BFS), is fundamental in many graph processing applications and metrics when a graph grows in scale. A variety of scientific programming methods have been proposed for accelerating and parallelizing BFS because of the poor temporal and spatial locality caused by inherent irregular memory access patterns. However, new parallel hardware could provide better improvement for scientific methods. To address small-world graph problems, we propose a scalable and novel field-programmable gate array-based heterogeneous multicore system for scientific programming. The core is multithreaded for streaming processing, and the InfiniBand communication network is adopted for scalability. We design a binary search algorithm for address mapping to unify all processor addresses. Within the limits permitted by the Graph500 test bench after 1D parallel hybrid BFS algorithm testing, our 8-core and 8-thread-per-core system achieved superior performance and efficiency compared with the prior work under the same degree of parallelism. Our system is efficient not as a special acceleration unit but as a processor platform that deals with graph searching applications.
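
    For readers unfamiliar with the underlying kernel, a generic level-synchronous BFS is sketched below in Python (a serial illustration of the traversal the paper accelerates, not the authors' FPGA implementation):

        from collections import deque

        def bfs_levels(adjacency, source):
            """Level-synchronous breadth-first search over an adjacency-list graph.

            adjacency: dict mapping each vertex to an iterable of neighbour vertices.
            Returns a dict of vertex -> BFS level (hop distance from the source).
            """
            level = {source: 0}
            frontier = deque([source])
            while frontier:
                v = frontier.popleft()
                for w in adjacency.get(v, ()):
                    if w not in level:              # visit each vertex exactly once
                        level[w] = level[v] + 1
                        frontier.append(w)
            return level

        # Small undirected example graph given as adjacency lists.
        graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
        print(bfs_levels(graph, 0))                 # {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}

    The irregular, data-dependent accesses to `level` and `adjacency` in this loop are the source of the poor locality that motivates the hardware designs discussed above.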

  20. Fast 2D Convolutions and Cross-Correlations Using Scalable Architectures.

    Science.gov (United States)

    Carranza, Cesar; Llamocca, Daniel; Pattichis, Marios

    2017-05-01

    The manuscript describes fast and scalable architectures and associated algorithms for computing convolutions and cross-correlations. The basic idea is to map 2D convolutions and cross-correlations to a collection of 1D convolutions and cross-correlations in the transform domain. This is accomplished through the use of the discrete periodic Radon transform for general kernels and the use of singular value decomposition (SVD)-LU decompositions for low-rank kernels. The approach uses scalable architectures that can be fitted into modern FPGA and Zynq-SOC devices. Based on the different types of available resources, 2D convolutions and cross-correlations of P×P blocks can be computed in anywhere from O(P) to O(P²) clock cycles. Thus, there is a trade-off between performance and the required numbers and types of resources. We provide implementations of the proposed architectures using modern programmable devices (Virtex-7 and Zynq-SOC). Based on the amounts and types of required resources, we show that the proposed approaches significantly outperform current methods.
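
    The low-rank route can be illustrated in plain NumPy: factor the kernel with an SVD and replace the single 2D convolution by a short sum of separable column/row 1D convolutions. This is a generic sketch of the principle, not the authors' FPGA architecture:

        import numpy as np

        def _conv_cols(img, f):
            # full 1D convolution of every column with filter f
            return np.stack([np.convolve(img[:, j], f) for j in range(img.shape[1])], axis=1)

        def _conv_rows(img, f):
            # full 1D convolution of every row with filter f
            return np.stack([np.convolve(img[i, :], f) for i in range(img.shape[0])], axis=0)

        def lowrank_conv2d(image, kernel, rank=None):
            """2D convolution expressed as a sum of separable 1D passes via the kernel's SVD."""
            u, s, vt = np.linalg.svd(kernel)
            if rank is None:
                rank = int(np.sum(s > 1e-12 * s[0]))       # numerical rank of the kernel
            out = 0.0
            for k in range(rank):
                out = out + s[k] * _conv_rows(_conv_cols(image, u[:, k]), vt[k, :])
            return out

        # A separable (rank-1) blur kernel needs just one column pass and one row pass.
        image = np.random.rand(64, 64)
        g = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
        blur = lowrank_conv2d(image, np.outer(g, g))       # matches a full 2D convolution here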

  1. CAM-SE: A scalable spectral element dynamical core for the Community Atmosphere Model.

    Energy Technology Data Exchange (ETDEWEB)

    Dennis, John [National Center for Atmospheric Research (NCAR); Edwards, Jim [IBM and National Center for Atmospheric Research; Evans, Kate J [ORNL; Guba, O [Sandia National Laboratories (SNL); Lauritzen, Peter [National Center for Atmospheric Research (NCAR); Mirin, Art [Lawrence Livermore National Laboratory (LLNL); St.-Cyr, Amik [National Center for Atmospheric Research (NCAR); Taylor, Mark [Sandia National Laboratories (SNL); Worley, Patrick H [ORNL

    2012-01-01

    The Community Atmosphere Model (CAM) version 5 includes a spectral element dynamical core option from NCAR's High-Order Method Modeling Environment. It is a continuous Galerkin spectral finite element method designed for fully unstructured quadrilateral meshes. The current configurations in CAM are based on the cubed-sphere grid. The main motivation for including a spectral element dynamical core is to improve the scalability of CAM by allowing quasi-uniform grids for the sphere that do not require polar filters. In addition, the approach provides other state-of-the-art capabilities such as improved conservation properties. Spectral elements are used for the horizontal discretization, while most other aspects of the dynamical core are a hybrid of well-tested techniques from CAM's finite volume and global spectral dynamical core options. Here we first give an overview of the spectral element dynamical core as used in CAM. We then give scalability and performance results from CAM running with three different dynamical core options within the Community Earth System Model, using a pre-industrial time-slice configuration. We focus on high resolution simulations of 1/4 degree, 1/8 degree, and T340 spectral truncation.

  2. Verification of energy dissipation rate scalability in pilot and production scale bioreactors using computational fluid dynamics.

    Science.gov (United States)

    Johnson, Chris; Natarajan, Venkatesh; Antoniou, Chris

    2014-01-01

    Suspension mammalian cell cultures in aerated stirred tank bioreactors are widely used in the production of monoclonal antibodies. Given that production-scale cell culture operations are typically performed in very large bioreactors (≥ 10,000 L), bioreactor scale-down and scale-up become crucial in the development of robust cell-culture processes. For successful scale-up and scale-down of cell culture operations, it is important to understand the scale-dependence of the distribution of the energy dissipation rates in a bioreactor. Computational fluid dynamics (CFD) simulations can provide an additional layer of depth to bioreactor scalability analysis. In this communication, we use CFD analyses of five bioreactor configurations to evaluate energy dissipation rates and Kolmogorov length scale distributions at various scales. The results show that hydrodynamic scalability is achievable as long as major design features (number of baffles, impellers) remain consistent across the scales. Finally, in all configurations, the mean Kolmogorov length scale is substantially higher than the average cell size, indicating that catastrophic cell damage due to mechanical agitation is highly unlikely at all scales. © 2014 American Institute of Chemical Engineers.
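
    For context, the Kolmogorov length scale compared against the cell size above is computed from the local energy dissipation rate and the kinematic viscosity; the numbers below are illustrative placeholders, not values from the study:

        # Kolmogorov length scale: eta = (nu^3 / epsilon)^(1/4)
        nu = 1.0e-6        # kinematic viscosity of a water-like medium, m^2/s (assumed)
        epsilon = 0.01     # local energy dissipation rate, W/kg (assumed)
        eta = (nu**3 / epsilon) ** 0.25
        print(f"Kolmogorov length scale: {eta * 1e6:.0f} micrometres")   # -> 100 micrometres here

    With values of this order, eta sits well above a typical suspended mammalian cell diameter (roughly 10-20 micrometres), which is the comparison the abstract refers to.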

  3. From Digital Disruption to Business Model Scalability

    DEFF Research Database (Denmark)

    Nielsen, Christian; Lund, Morten; Thomsen, Peter Poulsen

    2017-01-01

    a long time to replicate, business model scalability can be cornered into four dimensions. In many corporate restructuring exercises and Mergers and Acquisitions there is a tendency to look for synergies in the form of cost reductions, lean workflows and market segments. However, this state of mind......This article discusses the terms disruption, digital disruption, business models and business model scalability. It illustrates how managers should be using these terms for the benefit of their business by developing business models capable of achieving exponentially increasing returns to scale...

  4. From Digital Disruption to Business Model Scalability

    DEFF Research Database (Denmark)

    Nielsen, Christian; Lund, Morten; Thomsen, Peter Poulsen

    2017-01-01

    as a response to digital disruption. A series of case studies illustrate that besides frequent existing messages in the business literature relating to the importance of creating agile businesses, both in growing and declining economies, as well as hard to copy value propositions or value propositions that take......This article discusses the terms disruption, digital disruption, business models and business model scalability. It illustrates how managers should be using these terms for the benefit of their business by developing business models capable of achieving exponentially increasing returns to scale...... will seldom lead to business model scalability capable of competing with digital disruption(s)....

  5. GSKY: A scalable distributed geospatial data server on the cloud

    Science.gov (United States)

    Rozas Larraondo, Pablo; Pringle, Sean; Antony, Joseph; Evans, Ben

    2017-04-01

    Earth systems, environmental and geophysical datasets are extremely valuable sources of information about the state and evolution of the Earth. Being able to combine information coming from different geospatial collections is in increasing demand by the scientific community, and requires managing and manipulating data with different formats and performing operations such as map reprojections, resampling and other transformations. Due to the large data volume inherent in these collections, storing multiple copies of them is unfeasible and so such data manipulation must be performed on-the-fly using efficient, high performance techniques. Ideally this should be performed using a trusted data service and common system libraries to ensure wide use and reproducibility. Recent developments in distributed computing based on dynamic access to significant cloud infrastructure open the door for such new ways of processing geospatial data on demand. The National Computational Infrastructure (NCI), hosted at the Australian National University (ANU), has over 10 Petabytes of nationally significant research data collections. Some of these collections, which comprise a variety of observed and modelled geospatial data, are now made available via a highly distributed geospatial data server, called GSKY (pronounced [jee-skee]). GSKY supports on demand processing of large geospatial data products such as satellite earth observation data as well as numerical weather products, allowing interactive exploration and analysis of the data. It dynamically and efficiently distributes the required computations among cloud nodes, providing a scalable analysis framework that can adapt to serve a large number of concurrent users. Typical geospatial workflows handling different file formats and data types, or blending data in different coordinate projections and spatio-temporal resolutions, are handled transparently by GSKY. This is achieved by decoupling the data ingestion and indexing process as

  6. Physical principles for scalable neural recording.

    Science.gov (United States)

    Marblestone, Adam H; Zamft, Bradley M; Maguire, Yael G; Shapiro, Mikhail G; Cybulski, Thaddeus R; Glaser, Joshua I; Amodei, Dario; Stranges, P Benjamin; Kalhor, Reza; Dalrymple, David A; Seo, Dongjin; Alon, Elad; Maharbiz, Michel M; Carmena, Jose M; Rabaey, Jan M; Boyden, Edward S; Church, George M; Kording, Konrad P

    2013-01-01

    Simultaneously measuring the activities of all neurons in a mammalian brain at millisecond resolution is a challenge beyond the limits of existing techniques in neuroscience. Entirely new approaches may be required, motivating an analysis of the fundamental physical constraints on the problem. We outline the physical principles governing brain activity mapping using optical, electrical, magnetic resonance, and molecular modalities of neural recording. Focusing on the mouse brain, we analyze the scalability of each method, concentrating on the limitations imposed by spatiotemporal resolution, energy dissipation, and volume displacement. Based on this analysis, all existing approaches require orders of magnitude improvement in key parameters. Electrical recording is limited by the low multiplexing capacity of electrodes and their lack of intrinsic spatial resolution, optical methods are constrained by the scattering of visible light in brain tissue, magnetic resonance is hindered by the diffusion and relaxation timescales of water protons, and the implementation of molecular recording is complicated by the stochastic kinetics of enzymes. Understanding the physical limits of brain activity mapping may provide insight into opportunities for novel solutions. For example, unconventional methods for delivering electrodes may enable unprecedented numbers of recording sites, embedded optical devices could allow optical detectors to be placed within a few scattering lengths of the measured neurons, and new classes of molecularly engineered sensors might obviate cumbersome hardware architectures. We also study the physics of powering and communicating with microscale devices embedded in brain tissue and find that, while radio-frequency electromagnetic data transmission suffers from a severe power-bandwidth tradeoff, communication via infrared light or ultrasound may allow high data rates due to the possibility of spatial multiplexing. The use of embedded local recording and

  7. On the assessment of performance and emissions characteristics of a SI engine provided with a laser ignition system

    Science.gov (United States)

    Birtas, A.; Boicea, N.; Draghici, F.; Chiriac, R.; Croitoru, G.; Dinca, M.; Dascalu, T.; Pavel, N.

    2017-10-01

    Performance and exhaust emissions of spark ignition engines are strongly dependent on the development of the combustion process. Controlling this process in order to improve performance and reduce emissions by ensuring rapid and robust combustion depends on how the ignition stage is achieved. An ignition system that appears capable of providing such an enhanced combustion process is one based on plasma generation using a Q-switched solid-state laser that delivers pulses with high peak power (of MW-order level). The laser-spark devices used in the present investigations were realized using compact diffusion-bonded Nd:YAG/Cr4+:YAG ceramic media. The laser igniter was designed, integrated and built to resemble a classical spark plug and therefore could be mounted directly on the cylinder head of a passenger car engine. This study reports the results obtained using such an ignition system on a K7M 710 engine currently produced by Renault-Dacia, where the standard calibrations were changed towards the lean-mixture combustion zone. Results regarding the performance, the exhaust emissions and the combustion characteristics under optimized spark timing conditions, which demonstrate the potential of such an innovative ignition system, are presented.

  8. Scalable Open Source Smart Grid Simulator (SGSim)

    DEFF Research Database (Denmark)

    Ebeid, Emad Samuel Malki; Jacobsen, Rune Hylsberg; Stefanni, Francesco

    2017-01-01

    . This paper presents an open source smart grid simulator (SGSim). The simulator is based on open source SystemC Network Simulation Library (SCNSL) and aims to model scalable smart grid applications. SGSim has been tested under different smart grid scenarios that contain hundreds of thousands of households...

  9. Realization of a scalable airborne radar

    NARCIS (Netherlands)

    Halsema, D. van; Jongh, R.V. de; Es, J. van; Otten, M.P.G.; Vermeulen, B.C.B.; Liempt, L.J. van

    2008-01-01

    Modern airborne ground surveillance radar systems are increasingly based on Active Electronically Scanned Array (AESA) antennas. Efficient use of array technology and the need for radar solutions for various airborne platforms, manned and unmanned, leads to the design of scalable radar systems. The

  10. Scalable Domain Decomposed Monte Carlo Particle Transport

    Energy Technology Data Exchange (ETDEWEB)

    O'Brien, Matthew Joseph [Univ. of California, Davis, CA (United States)

    2013-12-05

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.

  11. Subjective comparison of temporal and quality scalability

    DEFF Research Database (Denmark)

    Korhonen, Jari; Reiter, Ulrich; You, Junyong

    2011-01-01

    and quality scalability. The practical experiments with low resolution video sequences show that in general, distortion is a more crucial factor for the perceived subjective quality than frame rate. However, the results also depend on the content. Moreover, we discuss the role of other different influence...

  12. What information is provided in transcripts and Medical Student Performance Records from Canadian Medical Schools? A retrospective cohort study.

    Science.gov (United States)

    Robins, Jason A; McInnes, Matthew D F; Esmail, Kaisra

    2014-01-01

    Resident selection committees must rely on information provided by medical schools in order to evaluate candidates. However, this information varies between institutions, limiting its value in comparing individuals and fairly assessing their quality. This study investigates what is included in candidates' documentation, the heterogeneity therein, as well as its objective data. Samples of recent transcripts and Medical Student Performance Records were anonymised prior to evaluation. Data were then extracted by two independent reviewers blinded to the submitting university, assessing for the presence of pre-selected criteria; disagreement was resolved through consensus. The data were subsequently analysed in multiple subgroups. Inter-rater agreement equalled 92%. Inclusion of important criteria varied by school, ranging from 22.2% inclusion to 70.4%; the mean equalled 47.4%. The frequency of specific criteria was highly variable as well. Only 17.7% of schools provided any basis for comparison of academic performance; the majority detailed only status regarding pass or fail, without any further qualification. Considerable heterogeneity exists in the information provided in official medical school documentation, as well as markedly little objective data. Standardization may be necessary in order to facilitate fair comparison of graduates from different institutions. Implementation of objective data may allow more effective intra- and inter-scholastic comparison.

  13. A scalable pairwise class interaction framework for multidimensional classification

    DEFF Research Database (Denmark)

    Arias, Jacinto; Gámez, Jose A.; Nielsen, Thomas Dyhre

    2016-01-01

    We present a general framework for multidimensional classification that captures the pairwise interactions between class variables. The pairwise class interactions are encoded using a collection of base classifiers (Phase 1), for which the class predictions are combined in a Markov random field...... inference methods in the second phase. We describe the basic framework and its main properties, as well as strategies for ensuring the scalability of the framework. We include a detailed experimental evaluation based on a range of publicly available databases. Here we analyze the overall performance...

  14. Using Python to Construct a Scalable Parallel Nonlinear Wave Solver

    KAUST Repository

    Mandli, Kyle

    2011-01-01

    Computational scientists seek to provide efficient, easy-to-use tools and frameworks that enable application scientists within a specific discipline to build and/or apply numerical models with up-to-date computing technologies that can be executed on all available computing systems. Although many tools could be useful for groups beyond a specific application, it is often difficult and time-consuming to combine existing software, or to adapt it for a more general purpose. Python enables a high-level approach where a general framework can be supplemented with tools written for different fields and in different languages. This is particularly important when a large number of tools are necessary, as is the case for high performance scientific codes. This motivated our development of PetClaw, a scalable distributed-memory solver for time-dependent nonlinear wave propagation, as a case-study for how Python can be used as a high-level framework leveraging a multitude of codes, efficient both in the reuse of code and programmer productivity. We present scaling results for computations on up to four racks of Shaheen, an IBM BlueGene/P supercomputer at King Abdullah University of Science and Technology. One particularly important issue that PetClaw has faced is the overhead associated with dynamic loading leading to catastrophic scaling. We use the walla library to solve this issue by supplanting high-cost filesystem calls with MPI operations at a low enough level that developers may avoid any changes to their codes.

  15. Querying Data Providing Web Services

    OpenAIRE

    Sabesan, Manivasakan

    2010-01-01

    Web services are often used for search computing where data is retrieved from servers providing information of different kinds. Such data providing web services return a set of objects for a given set of parameters without any side effects. There is a need to enable general and scalable search capabilities of data from data providing web services, which is the topic of this Thesis. The Web Service MEDiator (WSMED) system automatically provides relational views of any data providing web service ...

  16. Using overlay network architectures for scalable video distribution

    Science.gov (United States)

    Patrikakis, Charalampos Z.; Despotopoulos, Yannis; Fafali, Paraskevi; Cha, Jihun; Kim, Kyuheon

    2004-11-01

    Within the last years, the enormous growth of Internet-based communication as well as the rapid increase of available processing power has led to the widespread use of multimedia streaming as a means to convey information. This work aims at providing an open architecture designed to support scalable streaming to a large number of clients using application layer multicast. The architecture is based on media relay nodes that can be deployed transparently to any existing media distribution scheme, which can support media streamed using the RTP and RTSP protocols. It builds on overlay networks at the application level, featuring rate adaptation mechanisms for responding to network congestion.

  17. Scalable web services for the PSIPRED Protein Analysis Workbench.

    Science.gov (United States)

    Buchan, Daniel W A; Minneci, Federico; Nugent, Tim C O; Bryson, Kevin; Jones, David T

    2013-07-01

    Here, we present the new UCL Bioinformatics Group's PSIPRED Protein Analysis Workbench. The Workbench unites all of our previously available analysis methods into a single web-based framework. The new web portal provides a greatly streamlined user interface with a number of new features to allow users to better explore their results. We offer a number of additional services to enable computationally scalable execution of our prediction methods; these include SOAP and XML-RPC web server access and new HADOOP packages. All software and services are available via the UCL Bioinformatics Group website at http://bioinf.cs.ucl.ac.uk/.

  18. Scalable video on demand adaptive Internet-based distribution

    CERN Document Server

    Zink, Michael

    2013-01-01

    In recent years, the proliferation of available video content and the popularity of the Internet have encouraged service providers to develop new ways of distributing content to clients. Increasing video scaling ratios and advanced digital signal processing techniques have led to Internet Video-on-Demand applications, but these currently lack efficiency and quality. Scalable Video on Demand: Adaptive Internet-based Distribution examines how current video compression and streaming can be used to deliver high-quality applications over the Internet. In addition to analysing the problems

  19. Scalable Implementation of Finite Elements by NASA - Implicit (ScIFEi)

    Science.gov (United States)

    Warner, James E.; Bomarito, Geoffrey F.; Heber, Gerd; Hochhalter, Jacob D.

    2016-01-01

    Scalable Implementation of Finite Elements by NASA (ScIFEN) is a parallel finite element analysis code written in C++. ScIFEN is designed to provide scalable solutions to computational mechanics problems. It supports a variety of finite element types, nonlinear material models, and boundary conditions. This report provides an overview of ScIFEi ("Sci-Fi"), the implicit solid mechanics driver within ScIFEN. A description of ScIFEi's capabilities is provided, including an overview of the tools and features that accompany the software as well as a description of the input and output file formats. Results from several problems are included, demonstrating the efficiency and scalability of ScIFEi by comparing to finite element analysis using a commercial code.

  20. T3: Secure, Scalable, Distributed Data Movement and Remote System Control for Enterprise Level Cyber Security

    Energy Technology Data Exchange (ETDEWEB)

    Thomas, Gregory S.; Nickless, William K.; Thiede, David R.; Gorton, Ian; Pitre, Bill J.; Christy, Jason E.; Faultersack, Elizabeth M.; Mauth, Jeffery A.

    2009-07-20

    Enterprise level cyber security requires the deployment, operation, and monitoring of many sensors across geographically dispersed sites. Communicating with the sensors to gather data and control behavior is a challenging task when the number of sensors is rapidly growing. This paper describes the system requirements, design, and implementation of T3, the third generation of our transport software that performs this task. T3 relies on open source software and open Internet standards. Data is encoded in MIME format messages and transported via NNTP, which provides scalability. OpenSSL and public key cryptography are used to secure the data. Robustness and ease of development are increased by defining an internal cryptographic API, implemented by modules in C, Perl, and Python. We are currently using T3 in a production environment. It is freely available to download and use for other projects.
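
    As a rough sketch of the transport idea (MIME-encoded articles posted over NNTP), the snippet below uses only Python's standard email package and the nntplib module (available up to Python 3.12); the host, newsgroup, headers and payload are hypothetical, and the OpenSSL/public-key layer described above is omitted:

        import io
        import nntplib
        from email.mime.application import MIMEApplication

        # Wrap a raw sensor-data blob in a MIME article (hypothetical payload and headers).
        payload = b"\x00\x01\x02 sensor readings ..."
        msg = MIMEApplication(payload, _subtype="octet-stream")
        msg["Subject"] = "site-42 sensor batch"
        msg["Newsgroups"] = "t3.sensors.site42"            # hypothetical newsgroup name
        msg["From"] = "sensor42@example.org"

        # Post it to an NNTP server (hypothetical host). NNTP gives store-and-forward
        # delivery, so many consumers can later pull the same article independently.
        with nntplib.NNTP("news.example.org") as server:
            server.post(io.BytesIO(msg.as_bytes()))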

  1. PetClaw: A scalable parallel nonlinear wave propagation solver for Python

    KAUST Repository

    Alghamdi, Amal

    2011-01-01

    We present PetClaw, a scalable distributed-memory solver for time-dependent nonlinear wave propagation. PetClaw unifies two well-known scientific computing packages, Clawpack and PETSc, using Python interfaces into both. We rely on Clawpack to provide the infrastructure and kernels for time-dependent nonlinear wave propagation. Similarly, we rely on PETSc to manage distributed data arrays and the communication between them. We describe both the implementation and performance of PetClaw as well as our challenges and accomplishments in scaling a Python-based code to tens of thousands of cores on the BlueGene/P architecture. The capabilities of PetClaw are demonstrated through application to a novel problem involving elastic waves in a heterogeneous medium. Very finely resolved simulations are used to demonstrate the suppression of shock formation in this system.

  2. OWL: A scalable Monte Carlo simulation suite for finite-temperature study of materials

    Science.gov (United States)

    Li, Ying Wai; Yuk, Simuck F.; Cooper, Valentino R.; Eisenbach, Markus; Odbadrakh, Khorgolkhuu

    The OWL suite is a simulation package for performing large-scale Monte Carlo simulations. Its object-oriented, modular design enables it to interface with various external packages for energy evaluations. It is therefore applicable to study the finite-temperature properties for a wide range of systems: from simple classical spin models to materials where the energy is evaluated by ab initio methods. This scheme not only allows for the study of thermodynamic properties based on first-principles statistical mechanics, it also provides a means for massive, multi-level parallelism to fully exploit the capacity of modern heterogeneous computer architectures. We will demonstrate how improved strong and weak scaling is achieved by employing novel, parallel and scalable Monte Carlo algorithms, as well as the applications of OWL to a few selected frontier materials research problems. This research was supported by the Office of Science of the Department of Energy under contract DE-AC05-00OR22725.
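
    As a minimal sketch of the kind of classical spin-model sampling such a suite performs (a plain single-spin-flip Metropolis loop for the 2D Ising model; generic textbook code, not OWL's algorithms or API):

        import numpy as np

        def metropolis_ising(L=32, T=2.27, sweeps=200, seed=0):
            """Single-spin-flip Metropolis sampling of a 2D Ising model (J = kB = 1)."""
            rng = np.random.default_rng(seed)
            spins = rng.choice([-1, 1], size=(L, L))
            for _ in range(sweeps * L * L):
                i, j = rng.integers(L, size=2)
                # Energy change from flipping spin (i, j) with periodic boundaries.
                nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                      + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
                dE = 2.0 * spins[i, j] * nn
                if dE <= 0 or rng.random() < np.exp(-dE / T):
                    spins[i, j] *= -1
            return spins

        m = abs(metropolis_ising().mean())    # magnetisation per spin near the critical temperature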

  3. Scalable Multicasting over Next-Generation Internet Design, Analysis and Applications

    CERN Document Server

    Tian, Xiaohua

    2013-01-01

    Next-generation Internet providers face high expectations, as contemporary users worldwide expect high-quality multimedia functionality in a landscape of ever-expanding network applications. This volume explores the critical research issue of turning today’s greatly enhanced hardware capacity to good use in designing a scalable multicast protocol for supporting large-scale multimedia services. Linking new hardware to improved performance in the Internet’s next incarnation is a research hot-spot in the computer communications field. The methodical presentation deals with the key questions in turn: from the mechanics of multicast protocols to current state-of-the-art designs, and from methods of theoretical analysis of these protocols to applying them in the ns2 network simulator, known for being hard to extend. The authors’ years of research in the field inform this thorough treatment, which covers details such as applying AOM (application-oriented multicast) protocol to IPTV provision and resolving...

  4. Scalable Global Grid catalogue for LHC Run3 and beyond arXiv

    CERN Document Server

    Martinez Pedreira, Miguel

    The AliEn (ALICE Environment) file catalogue is a global unique namespace providing mapping between a UNIX-like logical name structure and the corresponding physical files distributed over 80 storage elements worldwide. Powerful search tools and hierarchical metadata information are integral parts of the system and are used by the Grid jobs as well as local users to store and access all files on the Grid storage elements. The catalogue has been in production since 2005 and over the past 11 years has grown to more than 2 billion logical file names. The backend is a set of distributed relational databases, ensuring smooth growth and fast access. Due to the anticipated fast future growth, we are looking for ways to enhance the performance and scalability by simplifying the catalogue schema while keeping the functionality intact. We investigated different backend solutions, such as distributed key value stores, as replacement for the relational database. This contribution covers the architectural changes in the s...

  5. CASTOR: Widely Distributed Scalable Infospaces

    Science.gov (United States)

    2008-11-01

    as the application builder technology provided by Microsoft in their Indigo platform for Windows Vista. Tempest then automatically introduces...below. In RMTP, the sender and the receivers for a topic form a tree. Within this tree, every subset of nodes consisting of a parent and its child ...nodes represents a separate local recovery group. The child nodes in every such group send their local ACK/NAK information to the parent node, which

  6. Strong Scalability Study of Distributed Memory Parallel Markov Random Fields Using Graph Partitioning

    Science.gov (United States)

    Heinemann, Colleen

    Research in materials science is increasingly reliant on image-based data from experiments, demanding construction of new analysis tools that help scientists discover information from digital images. Because there is such a wide variety of materials and image modalities, detecting different compounds from imaged materials continues to be a challenging task. A vast collection of algorithms for filtering, image segmentation, and texture description have facilitated and improved accuracy for sample measurements (see Chapter 1 Introduction and Literature Review). Despite this, the community still lacks scalable, general purpose, easily configurable image analysis frameworks that allow pattern detection on different imaging modalities across multiple scales. The need for such a framework was the motivation behind the development of a distributed-memory parallel Markov Random Field based framework. Markov Random Field (MRF) algorithms provide the ability to explore contextual information about a given dataset. Given the complexity of such algorithms, however, they are limited by performance when run serially. Thus, running in some sort of parallel fashion is necessary. The effects are twofold. Not only does running the MRF algorithm in parallel provide the ability to run current datasets faster and more efficiently, it also provides the ability for datasets to continue to grow in size and still be able to be run with such frameworks. The variation of the Markov Random Field algorithm utilized in this study first oversegments the given input image and constructs a graph model based on photometric and geometric distances. Next, the resulting graph model is refactored specifically into the MRF model to target image segmentation. Finally, a distributed approach is used for the optimization process to obtain the best labeling for the graph, which is essentially the goal of using a MRF algorithm. Given the concept of using a distributed memory parallel framework, specifically

  7. IEEE 802.15.4 Frame Aggregation Enhancement to Provide High Performance in Life-Critical Patient Monitoring Systems.

    Science.gov (United States)

    Akbar, Muhammad Sajjad; Yu, Hongnian; Cang, Shuang

    2017-01-28

    In wireless body area sensor networks (WBASNs), Quality of Service (QoS) provision for patient monitoring systems in terms of time-critical deadlines, high throughput and energy efficiency is a challenging task. The periodic data from these systems generates a large number of small packets in a short time period, which needs an efficient channel access mechanism. The IEEE 802.15.4 standard is recommended for low power devices and widely used for many wireless sensor network applications. It provides a hybrid channel access mechanism at the Media Access Control (MAC) layer which plays a key role in overall successful transmission in WBASNs. There are many WBASN MAC protocols that use this hybrid channel access mechanism in a variety of sensor applications. However, these protocols are less efficient for patient monitoring systems where life-critical data requires limited delay, high throughput and energy efficient communication simultaneously. To address these issues, this paper proposes a frame aggregation scheme by using the aggregated-MAC protocol data unit (A-MPDU) which works with the IEEE 802.15.4 MAC layer. To implement the scheme accurately, we develop a traffic patterns analysis mechanism to understand the requirements of the sensor nodes in patient monitoring systems, then model the channel access to find the performance gap on the basis of the obtained requirements, and finally propose the design based on the needs of patient monitoring systems. The mechanism is initially verified using numerical modelling and then simulation is conducted using NS2.29, Castalia 3.2 and OMNeT++. The proposed scheme provides the optimal performance considering the required QoS.
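
    A hedged, protocol-agnostic sketch of the aggregation idea follows: pack many small sensor payloads into one aggregate frame up to a size budget so that a single channel access carries several readings. The 1-byte length delimiter and the 127-byte budget are illustrative choices, not the exact A-MPDU or IEEE 802.15.4 frame format:

        def aggregate_frames(payloads, max_aggregate=127):
            """Greedily pack small payloads into aggregates of at most max_aggregate bytes.

            Each subframe is prefixed with a 1-byte length field (illustrative delimiter).
            Returns a list of aggregate frames, each sent with one channel access.
            """
            aggregates, current = [], bytearray()
            for p in payloads:
                sub = bytes([len(p)]) + p                  # assumes payloads under 256 bytes
                if current and len(current) + len(sub) > max_aggregate:
                    aggregates.append(bytes(current))
                    current = bytearray()
                current += sub
            if current:
                aggregates.append(bytes(current))
            return aggregates

        # Ten 12-byte vital-sign samples need two channel accesses instead of ten.
        print(len(aggregate_frames([b"x" * 12] * 10)))     # -> 2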

  8. A comparative study of scalable video coding schemes utilizing wavelet technology

    Science.gov (United States)

    Schelkens, Peter; Andreopoulos, Yiannis; Barbarien, Joeri; Clerckx, Tom; Verdicchio, Fabio; Munteanu, Adrian; van der Schaar, Mihaela

    2004-02-01

    Video transmission over variable-bandwidth networks requires instantaneous bit-rate adaptation at the server site to provide an acceptable decoding quality. For this purpose, recent developments in video coding aim at providing a fully embedded bit-stream with seamless adaptation capabilities in bit-rate, frame-rate and resolution. A new promising technology in this context is wavelet-based video coding. Wavelets have already demonstrated their potential for quality and resolution scalability in still-image coding. This led to the investigation of various schemes for the compression of video, exploiting similar principles to generate embedded bit-streams. In this paper we present scalable wavelet-based video-coding technology with competitive rate-distortion behavior compared to standardized non-scalable technology.

  9. Scalable fast multipole methods for vortex element methods

    KAUST Repository

    Hu, Qi

    2012-11-01

    We use a particle-based method to simulate incompressible flows, where the Fast Multipole Method (FMM) is used to accelerate the calculation of particle interactions. The most time-consuming kernels, the Biot-Savart equation and the stretching term of the vorticity equation, are mathematically reformulated so that only two Laplace scalar potentials are used instead of six, while automatically ensuring divergence-free far-field computation. Based on this formulation, and on our previous work for a scalar heterogeneous FMM algorithm, we develop a new FMM-based vortex method capable of simulating general flows including turbulence on heterogeneous architectures, which distributes the work between multi-core CPUs and GPUs to best utilize the hardware resources and achieve excellent scalability. The algorithm also uses new data structures which can dynamically manage inter-node communication and load balance efficiently but with only a small parallel construction overhead. This algorithm can scale to large-sized clusters showing both strong and weak scalability. Careful error and timing trade-off analyses are also performed for the cutoff functions induced by the vortex particle method. Our implementation can perform one time step of the velocity+stretching for one billion particles on 32 nodes in 55.9 seconds, which yields 49.12 Tflop/s. © 2012 IEEE.
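
    For reference, one common form of the Biot-Savart relation that such vortex particle codes evaluate with the FMM is (a standard identity, not quoted from the paper)

        \mathbf{u}(\mathbf{x}) \;=\; \frac{1}{4\pi} \int
        \frac{\boldsymbol{\omega}(\mathbf{x}') \times (\mathbf{x} - \mathbf{x}')}
             {\lvert \mathbf{x} - \mathbf{x}' \rvert^{3}} \, \mathrm{d}\mathbf{x}' ,

    which recovers the velocity field from the vorticity; the N-body character of its discretized, particle-based form is what makes a fast multipole evaluation attractive.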

  10. Declarative and Scalable Selection for Map Visualizations

    DEFF Research Database (Denmark)

    Kefaloukos, Pimin Konstantin Balic

    supports the PostgreSQL dialect of SQL. The prototype implementation is a compiler that translates CVL into SQL and stored procedures. (c) TileHeat is a framework and basic algorithm for partial materialization of hot tile sets for scalable map distribution. The framework predicts future map workloads......, there are indications that the method is scalable for databases that contain millions of records, especially if the target language of the compiler is substituted by a cluster-ready variant of SQL. While several realistic use cases for maps have been implemented in CVL, additional non-geographic data visualization uses...... goal. The results for Tileheat show that the prediction method offers a substantial improvement over the current method used by the Danish Geodata Agency. Thus, a large amount of computations can potentially be saved by this public institution, who is responsible for the distribution of government...

  11. A Scalability Model for ECS's Data Server

    Science.gov (United States)

    Menasce, Daniel A.; Singhal, Mukesh

    1998-01-01

    This report presents, in four chapters, a model for the scalability analysis of the Data Server subsystem of the Earth Observing System Data and Information System (EOSDIS) Core System (ECS). The model analyzes whether the planned architecture of the Data Server will support an increase in the workload with the possible upgrade and/or addition of processors, storage subsystems, and networks. The approaches in the report include a summary of the architecture of ECS's Data Server as well as a high-level description of the Ingest and Retrieval operations as they relate to ECS's Data Server. This description forms the basis for the development of the scalability model of the Data Server and the methodology used to solve it.

  12. Stencil Lithography for Scalable Micro- and Nanomanufacturing

    Directory of Open Access Journals (Sweden)

    Ke Du

    2017-04-01

    Full Text Available In this paper, we review the current development of stencil lithography for scalable micro- and nanomanufacturing as a resistless and reusable patterning technique. We first introduce the motivation and advantages of stencil lithography for large-area micro- and nanopatterning. Then we review the progress of using rigid membranes such as SiNx and Si as stencil masks as well as stacking layers. We also review the current use of flexible membranes including a compliant SiNx membrane with springs, polyimide film, polydimethylsiloxane (PDMS layer, and photoresist-based membranes as stencil lithography masks to address problems such as blurring and non-planar surface patterning. Moreover, we discuss the dynamic stencil lithography technique, which significantly improves the patterning throughput and speed by moving the stencil over the target substrate during deposition. Lastly, we discuss the future advancement of stencil lithography for a resistless, reusable, scalable, and programmable nanolithography method.

  13. SPRNG Scalable Parallel Random Number Generator LIbrary

    Energy Technology Data Exchange (ETDEWEB)

    2010-03-16

    This revision corrects some errors in SPRNG 1. Users of newer SPRNG versions can obtain the corrected files and build their version with it. This version also improves the scalability of some of the application-based tests in the SPRNG test suite. It also includes an interface to a parallel Mersenne Twister, so that if users install the Mersenne Twister, then they can test this generator with the SPRNG test suite and also use some SPRNG features with that generator.

  14. Bitcoin-NG: A Scalable Blockchain Protocol

    OpenAIRE

    Eyal, Ittay; Gencer, Adem Efe; Sirer, Emin Gun; Renesse, Robbert,

    2015-01-01

    Cryptocurrencies, based on and led by Bitcoin, have shown promise as infrastructure for pseudonymous online payments, cheap remittance, trustless digital asset exchange, and smart contracts. However, Bitcoin-derived blockchain protocols have inherent scalability limits that trade-off between throughput and latency and withhold the realization of this potential. This paper presents Bitcoin-NG, a new blockchain protocol designed to scale. Based on Bitcoin's blockchain protocol, Bitcoin-NG is By...

  15. Stencil Lithography for Scalable Micro- and Nanomanufacturing

    OpenAIRE

    Ke Du; Junjun Ding; Yuyang Liu; Ishan Wathuthanthri; Chang-Hwan Choi

    2017-01-01

    In this paper, we review the current development of stencil lithography for scalable micro- and nanomanufacturing as a resistless and reusable patterning technique. We first introduce the motivation and advantages of stencil lithography for large-area micro- and nanopatterning. Then we review the progress of using rigid membranes such as SiNx and Si as stencil masks as well as stacking layers. We also review the current use of flexible membranes including a compliant SiNx membrane with spring...

  16. Scalable robotic biofabrication of tissue spheroids

    Energy Technology Data Exchange (ETDEWEB)

    Mehesz, A Nagy; Hajdu, Z; Visconti, R P; Markwald, R R; Mironov, V [Advanced Tissue Biofabrication Center, Department of Regenerative Medicine and Cell Biology, Medical University of South Carolina, Charleston, SC (United States); Brown, J [Department of Mechanical Engineering, Clemson University, Clemson, SC (United States); Beaver, W [York Technical College, Rock Hill, SC (United States); Da Silva, J V L, E-mail: mironovv@musc.edu [Renato Archer Information Technology Center-CTI, Campinas (Brazil)

    2011-06-15

    Development of methods for scalable biofabrication of uniformly sized tissue spheroids is essential for tissue spheroid-based bioprinting of large size tissue and organ constructs. The most recent scalable technique for tissue spheroid fabrication employs a micromolded recessed template prepared in a non-adhesive hydrogel, wherein the cells loaded into the template self-assemble into tissue spheroids due to gravitational force. In this study, we present an improved version of this technique. A new mold was designed to enable generation of 61 microrecessions in each well of a 96-well plate. The microrecessions were seeded with cells using an EpMotion 5070 automated pipetting machine. After 48 h of incubation, tissue spheroids formed at the bottom of each microrecession. To assess the quality of constructs generated using this technology, 600 tissue spheroids made by this method were compared with 600 spheroids generated by the conventional hanging drop method. These analyses showed that tissue spheroids fabricated by the micromolded method are more uniform in diameter. Thus, use of micromolded recessions in a non-adhesive hydrogel, combined with automated cell seeding, is a reliable method for scalable robotic fabrication of uniform-sized tissue spheroids.

  17. A scalable distributed RRT for motion planning

    KAUST Repository

    Jacobs, Sam Ade

    2013-05-01

    Rapidly-exploring Random Tree (RRT), like other sampling-based motion planning methods, has been very successful in solving motion planning problems. Even so, sampling-based planners cannot solve all problems of interest efficiently, so attention is increasingly turning to parallelizing them. However, one challenge in parallelizing RRT is the global computation and communication overhead of nearest neighbor search, a key operation in RRTs. This is a critical issue as it limits the scalability of previous algorithms. We present two parallel algorithms to address this problem. The first algorithm extends existing work by introducing a parameter that adjusts how much local computation is done before a global update. The second algorithm radially subdivides the configuration space into regions, constructs a portion of the tree in each region in parallel, and connects the subtrees, removing cycles if they exist. By subdividing the space, we increase computation locality, enabling a scalable result. We show that our approaches are scalable. We present results demonstrating almost linear scaling to hundreds of processors on a Linux cluster and a Cray XE6 machine. © 2013 IEEE.
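
    For readers unfamiliar with the underlying method, a minimal serial RRT follows (a generic Python sketch, not the parallel or subdivided variants proposed above); the nearest-neighbour search inside the loop is exactly the step whose global cost those variants attack:

        import math

        def rrt(start, goal, sample_free, collision_free, step=0.5, iters=5000, goal_tol=0.5):
            """Grow a rapidly-exploring random tree from start towards goal.

            sample_free():        returns a random collision-free configuration (tuple).
            collision_free(a, b): returns True if the straight segment a-b is valid.
            """
            tree = {start: None}                                   # node -> parent
            for _ in range(iters):
                q = sample_free()
                near = min(tree, key=lambda n: math.dist(n, q))    # nearest-neighbour search
                d = math.dist(near, q)
                if d == 0.0:
                    continue
                new = tuple(a + step * (b - a) / d for a, b in zip(near, q)) if d > step else q
                if collision_free(near, new):
                    tree[new] = near
                    if math.dist(new, goal) < goal_tol:
                        break
            return tree

    In the distributed algorithms described above, it is the `min` over the whole tree that is either localized to radial regions or batched between global updates.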

  18. The Node Monitoring Component of a Scalable Systems Software Environment

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Samuel James [Iowa State Univ., Ames, IA (United States)

    2006-01-01

    This research describes Fountain, a suite of programs used to monitor the resources of a cluster. A cluster is a collection of individual computers that are connected via a high speed communication network. They are traditionally used by users who desire more resources, such as processing power and memory, than any single computer can provide. A common drawback to effectively utilizing such a large-scale system is the management infrastructure, which often does not scale well as the system grows. Large-scale parallel systems provide new research challenges in the area of systems software, the programs or tools that manage the system from boot-up to running a parallel job. The approach presented in this thesis utilizes a collection of separate components that communicate with each other to achieve a common goal. While systems software comprises a broad array of components, this thesis focuses on the design choices for a node monitoring component. We will describe Fountain, an implementation of the Scalable Systems Software (SSS) node monitor specification. It is targeted at aggregate node monitoring for clusters, focusing on both scalability and fault tolerance as its design goals. It leverages widely used technologies such as XML and HTTP to present an interface to other components in the SSS environment.

  19. Solution-Processing of Organic Solar Cells: From In Situ Investigation to Scalable Manufacturing

    KAUST Repository

    Abdelsamie, Maged

    2016-12-05

    implementation of organic solar cells with high efficiency and manufacturability. In this dissertation, we investigate the mechanism of the BHJ layer formation during solution processing from common lab-based processes, such as spin-coating, with the aim of understanding the roles of materials, formulations and processing conditions and subsequently using this insight to enable the scalable manufacturing of high efficiency organic solar cells by such methods as wire-bar coating and blade-coating. To do so, we have developed state-of-the-art in situ diagnostics techniques to provide us with insight into the thin film formation process. As a first step, we have developed a modified spin-coater which allows us to perform in situ UV-visible absorption measurements during spin coating and provides key insight into the formation and evolution of polymer aggregates in solution and during the transformation to the solid state. Using this method, we have investigated the formation of organic BHJs made of a blend of poly (3-hexylthiophene) (P3HT) and fullerene, reference materials in the organic solar cell field. We show that process kinetics directly influence the microstructure and morphology of the bulk heterojunction, highlighting the value of in situ measurements. We have investigated the influence of crystallization dynamics of a wide-range of small-molecule donors and their solidification pathways on the processing routes needed for attaining high-performance solar cells. The study revealed the reason behind the need of empirically-adopted processing strategies such as solvent additives or alternatively thermal or solvent vapor annealing for achieving optimal performance. The study has provided a new perspective to materials design linking the need for solvent additives or annealing to the ease of crystallization of small-molecule donors and the presence or absence of transient phases before crystallization. From there, we have extended our investigation to small-molecule (p

  20. Scalable computing for evolutionary genomics.

    Science.gov (United States)

    Prins, Pjotr; Belhachemi, Dominique; Möller, Steffen; Smant, Geert

    2012-01-01

    , BioNode encourages creating free and open source VM images, for multiple targets, through one central project. BioNode can be deployed on Windows, OSX, Linux, and in the Cloud. Next to the downloadable BioNode images, we provide tutorials online, which empower bioinformaticians to install and run BioNode in different environments, as well as information for future initiatives, on creating and building such images.

  1. Scalable and balanced dynamic hybrid data assimilation

    Science.gov (United States)

    Kauranne, Tuomo; Amour, Idrissa; Gunia, Martin; Kallio, Kari; Lepistö, Ahti; Koponen, Sampsa

    2017-04-01

    Scalability of complex weather forecasting suites is dependent on the technical tools available for implementing highly parallel computational kernels, but to an equally large extent also on the dependence patterns between various components of the suite, such as observation processing, data assimilation and the forecast model. Scalability is a particular challenge for 4D variational assimilation methods that necessarily couple the forecast model into the assimilation process and subject this combination to an inherently serial quasi-Newton minimization process. Ensemble based assimilation methods are naturally more parallel, but large models force ensemble sizes to be small and that results in poor assimilation accuracy, somewhat akin to shooting with a shotgun in a million-dimensional space. The Variational Ensemble Kalman Filter (VEnKF) is an ensemble method that can attain the accuracy of 4D variational data assimilation with a small ensemble size. It achieves this by processing a Gaussian approximation of the current error covariance distribution, instead of a set of ensemble members, analogously to the Extended Kalman Filter EKF. Ensemble members are re-sampled every time a new set of observations is processed from a new approximation of that Gaussian distribution which makes VEnKF a dynamic assimilation method. After this a smoothing step is applied that turns VEnKF into a dynamic Variational Ensemble Kalman Smoother VEnKS. In this smoothing step, the same process is iterated with frequent re-sampling of the ensemble but now using past iterations as surrogate observations until the end result is a smooth and balanced model trajectory. In principle, VEnKF could suffer from similar scalability issues as 4D-Var. However, this can be avoided by isolating the forecast model completely from the minimization process by implementing the latter as a wrapper code whose only link to the model is calling for many parallel and totally independent model runs, all of them
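
    For orientation, the Gaussian analysis step that VEnKF approximates with its ensemble is the standard Kalman update (textbook form, not specific to the cited implementation):

        \mathbf{K} = \mathbf{P}^{f}\mathbf{H}^{\mathsf{T}}
                     \left(\mathbf{H}\mathbf{P}^{f}\mathbf{H}^{\mathsf{T}} + \mathbf{R}\right)^{-1},
        \qquad
        \mathbf{x}^{a} = \mathbf{x}^{f} + \mathbf{K}\left(\mathbf{y} - \mathbf{H}\mathbf{x}^{f}\right),
        \qquad
        \mathbf{P}^{a} = \left(\mathbf{I} - \mathbf{K}\mathbf{H}\right)\mathbf{P}^{f},

    after which the ensemble is re-sampled from the updated Gaussian N(x^a, P^a), as described above, rather than carried forward as fixed members.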

  2. Formative evaluation of a telemedicine model for delivering clinical neurophysiology services part I: Utility, technical performance and service provider perspective

    Directory of Open Access Journals (Sweden)

    Breen Patricia

    2010-09-01

    Full Text Available Background: Formative evaluation is conducted in the early stages of system implementation to assess how it works in practice and to identify opportunities for improving technical and process performance. A formative evaluation of a teleneurophysiology service was conducted to examine its technical and sociological dimensions. Methods: A teleneurophysiology service providing routine EEG investigation was established. Service use, technical performance and satisfaction of clinical neurophysiology personnel were assessed qualitatively and quantitatively. These were contrasted with a previously reported analysis of the need for teleneurophysiology, and examination of expectation and satisfaction with clinical neurophysiology services in Ireland. A preliminary cost-benefit analysis was also conducted. Results: Over the course of 40 clinical sessions during 20 weeks, 142 EEG investigations were recorded and stored on a file server at a satellite centre which was 130 miles away from the host clinical neurophysiology department. Using a virtual private network, the EEGs were accessed by a consultant neurophysiologist at the host centre for interpretation. The model resulted in a 5-fold increase in access to EEG services as well as reducing average waiting times for investigation by a half. Technically the model worked well, although a temporary loss of virtual private network connectivity highlighted the need for clarity in terms of responsibility for troubleshooting and repair of equipment problems. Referral quality, communication between host and satellite centres, quality of EEG recordings, and ease of EEG review and reporting indicated that appropriate organisational processes were adopted by the service. Compared to traditional CN service delivery, the teleneurophysiology model resulted in a comparable unit cost per EEG. Conclusion: Observations suggest that when traditional organisational boundaries are crossed challenges associated with the

  3. Parallel scalability and efficiency of vortex particle method for aeroelasticity analysis of bluff bodies

    Science.gov (United States)

    Tolba, Khaled Ibrahim; Morgenthal, Guido

    2018-01-01

    This paper presents an analysis of the scalability and efficiency of a simulation framework based on the vortex particle method. The code is applied to the numerical aerodynamic analysis of line-like structures. The numerical code runs on multicore CPU and GPU architectures using the OpenCL framework. The focus of this paper is the analysis of the parallel efficiency and scalability of the method being applied to an engineering test case, specifically the aeroelastic response of a long-span bridge girder at the construction stage. The target is to assess the optimal configuration and the required computer architecture, such that it becomes feasible to efficiently utilise the method within the computational resources available for a regular engineering office. The simulations and the scalability analysis are performed on a regular gaming-type computer.
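
    For reference, the scaling metrics such a study typically reports are the speedup and parallel efficiency (standard definitions, not values from the paper):

        S(p) = \frac{T_1}{T_p}, \qquad E(p) = \frac{S(p)}{p} = \frac{T_1}{p\,T_p},

    where T_1 is the single-core runtime and T_p the runtime on p cores (or, for the GPU path, on p compute units).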

  4. Community pediatric hospitalists providing care in the emergency department: an analysis of physician productivity and financial performance.

    Science.gov (United States)

    Dudas, Robert A; Monroe, David; McColligan Borger, Melissa

    2011-11-01

    Community hospital pediatric inpatient programs are being threatened by current financial and demographic trends. We describe a model of care and report on the financial implications associated with combining emergency department (ED) and inpatient care of pediatric patients. We determine whether this type of model could generate sufficient revenue to support physician salaries for continuous in-house coverage in community hospitals. Financial productivity and selected performance indicators were obtained from a retrospective review of registration and billing records. Data were obtained from 2 community-based pediatric hospitalist programs, which are part of a single health system and included care delivered in the ED and inpatient settings during a 1-year period from July 1, 2008, to July 1, 2009. Together, the combined programs were able to generate 6079 total relative value units and collections of $244,828 annually per full-time equivalent (FTE). Salary, benefits, and practice expenses totaled $235,674 per FTE. Thus, combined daily revenues exceeded expenses and provided 104% of physician salary, benefits, and practice expenses. However, 1 program generated a net profit of $329,715 ($40,706 per FTE), whereas the other recorded a loss of $207,969 ($39,994 per FTE). Emergency department throughput times and left-without-being-seen rates at both programs were comparable to national benchmarks. Incorporating ED care into a pediatric hospitalist program can be an effective strategy to maintain the financial viability of pediatric services at community hospitals with low inpatient volumes that seek to provide 24-hour pediatric staffing.

  5. Scalable and Fault Tolerant Failure Detection and Consensus

    Energy Technology Data Exchange (ETDEWEB)

    Katti, Amogh [University of Reading, UK; Di Fatta, Giuseppe [University of Reading, UK; Naughton III, Thomas J [ORNL; Engelmann, Christian [ORNL

    2015-01-01

    Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a fault tolerant failure detection and consensus algorithm. This paper presents and compares two novel failure detection and consensus algorithms. The proposed algorithms are based on Gossip protocols and are inherently fault-tolerant and scalable. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in both algorithms the number of Gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus.
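
    The abstract gives no pseudo-code, so the following is a minimal single-process simulation of the core idea of gossip-based dissemination of a failure list: in each cycle every live process pushes its current suspect set to one random peer, and the run ends when all live processes agree. The process counts, detector seeding and convergence check are assumptions for illustration; the paper's two algorithms add fault tolerance and an explicit consensus phase on top of this.

```python
import random

def gossip_failure_consensus(n_procs, failed, seed=0, max_cycles=1000):
    """Push-gossip of a failed-process list among live processes.

    Returns the number of gossip cycles until every live process holds the
    same failure set (an illustrative model, not the paper's algorithms).
    """
    rng = random.Random(seed)
    live = [p for p in range(n_procs) if p not in failed]
    # Each failure starts out known to a single (random) detector process.
    views = {p: set() for p in live}
    for f in failed:
        views[rng.choice(live)].add(f)

    for cycle in range(1, max_cycles + 1):
        # Push phase: every live process sends its current view to one peer.
        for p in live:
            views[rng.choice(live)] |= views[p]
        if all(view == set(failed) for view in views.values()):
            return cycle
    return None

if __name__ == "__main__":
    for n in (64, 256, 1024, 4096):
        cycles = gossip_failure_consensus(n, failed={1, 2, 3}, seed=42)
        print(f"{n:5d} processes -> consensus after {cycles} gossip cycles")
```

    Cycle counts in this toy model grow roughly logarithmically with the number of processes, which is the scaling behaviour the abstract reports for the real algorithms.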

  6. Scalable architecture for a room temperature solid-state quantum information processor.

    Science.gov (United States)

    Yao, N Y; Jiang, L; Gorshkov, A V; Maurer, P C; Giedke, G; Cirac, J I; Lukin, M D

    2012-04-24

    The realization of a scalable quantum information processor has emerged over the past decade as one of the central challenges at the interface of fundamental science and engineering. Here we propose and analyse an architecture for a scalable, solid-state quantum information processor capable of operating at room temperature. Our approach is based on recent experimental advances involving nitrogen-vacancy colour centres in diamond. In particular, we demonstrate that the multiple challenges associated with operation at ambient temperature, individual addressing at the nanoscale, strong qubit coupling, robustness against disorder and low decoherence rates can be simultaneously achieved under realistic, experimentally relevant conditions. The architecture uses a novel approach to quantum information transfer and includes a hierarchy of control at successive length scales. Moreover, it alleviates the stringent constraints currently limiting the realization of scalable quantum processors and will provide fundamental insights into the physics of non-equilibrium many-body quantum systems.

  7. Scalable libraries for solving systems of nonlinear equations and unconstrained minimization problems.

    Energy Technology Data Exchange (ETDEWEB)

    Gropp, W. D.; McInnes, L. C.; Smith, B. F.

    1997-10-27

    Developing portable and scalable software for the solution of large-scale optimization problems presents many challenges that traditional libraries do not adequately meet. Using object-oriented design in conjunction with other innovative techniques, they address these issues within the SNES (Scalable Nonlinear Equation Solvers) and SUMS (Scalable Unconstrained Minimization Solvers) packages, which are part of the multilevel PETSc (Portable, Extensible Toolkit for Scientific Computation) library. This paper focuses on the authors' design philosophy and its benefits in providing a uniform and versatile framework for developing optimization software and solving large-scale nonlinear problems. They also consider a three-dimensional anisotropic Ginzburg-Landau model as a representative application that exploits the packages' flexible interface with user-specified data structures and customized routines for function evaluation and preconditioning.
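
    Libraries of this kind hide variants of Newton's method behind a uniform interface to which the user supplies only a residual function and, optionally, a Jacobian. The sketch below shows the bare Newton iteration that such solvers generalize, applied to a small made-up two-equation system; it is a plain illustration, not the SNES or PETSc API.

```python
import numpy as np

def newton_solve(F, J, x0, tol=1e-10, max_iter=50):
    """Basic Newton iteration for F(x) = 0 with a user-supplied Jacobian J."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            return x, k
        # Solve the linearized system J(x) dx = -F(x) and take the full step.
        dx = np.linalg.solve(J(x), -r)
        x = x + dx
    raise RuntimeError("Newton iteration did not converge")

# Illustrative system: x^2 + y^2 = 4 and e^x + y = 1
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, np.exp(v[0]) + v[1] - 1.0])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]],
                        [np.exp(v[0]), 1.0]])

x, iters = newton_solve(F, J, x0=[1.0, -1.0])
print(x, "found in", iters, "iterations")
```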

  8. Scalable Multifunction RF Systems: Combined vs. Separate Transmit and Receive Arrays

    NARCIS (Netherlands)

    Huizing, A.G.

    2008-01-01

    A scalable multifunction RF (SMRF) system allows the RF functionality (radar, electronic warfare and communications) to be easily extended and the RF performance to be scaled to the requirements of different missions and platforms. This paper presents the results of a trade-off study with respect to

  9. Wideband vs. Multiband Trade-offs for a Scalable Multifunction RF system

    NARCIS (Netherlands)

    Huizing, A.G.

    2005-01-01

    This paper presents a concept for a scalable multifunction RF (SMRF) system that allows the RF functionality (radar, electronic warfare and communications) to be easily extended and the RF performance to be scaled to the requirements of different missions and platforms. A trade-off analysis is

  10. ENDEAVOUR: A Scalable SDN Architecture for Real-World IXPs

    KAUST Repository

    Antichi, Gianni

    2017-10-25

    Innovation in interdomain routing has remained stagnant for over a decade. Recently, IXPs have emerged as economically-advantageous interconnection points for reducing path latencies and exchanging ever increasing traffic volumes among, possibly, hundreds of networks. Given their far-reaching implications on interdomain routing, IXPs are the ideal place to foster network innovation and extend the benefits of SDN to the interdomain level. In this paper, we present, evaluate, and demonstrate ENDEAVOUR, an SDN platform for IXPs. ENDEAVOUR can be deployed on a multi-hop IXP fabric, supports a large number of use cases, and is highly-scalable while avoiding broadcast storms. Our evaluation with real data from one of the largest IXPs, demonstrates the benefits and scalability of our solution: ENDEAVOUR requires around 70% fewer rules than alternative SDN solutions thanks to our rule partitioning mechanism. In addition, by providing an open source solution, we invite everyone from the community to experiment (and improve) our implementation as well as adapt it to new use cases.

  11. Developing a scalable artificial photosynthesis technology through nanomaterials by design.

    Science.gov (United States)

    Lewis, Nathan S

    2016-12-06

    An artificial photosynthetic system that directly produces fuels from sunlight could provide an approach to scalable energy storage and a technology for the carbon-neutral production of high-energy-density transportation fuels. A variety of designs are currently being explored to create a viable artificial photosynthetic system, and the most technologically advanced systems are based on semiconducting photoelectrodes. Here, I discuss the development of an approach that is based on an architecture, first conceived around a decade ago, that combines arrays of semiconducting microwires with flexible polymeric membranes. I highlight the key steps that have been taken towards delivering a fully functional solar fuels generator, which have exploited advances in nanotechnology at all hierarchical levels of device construction, and include the discovery of earth-abundant electrocatalysts for fuel formation and materials for the stabilization of light absorbers. Finally, I consider the remaining scientific and engineering challenges facing the fulfilment of an artificial photosynthetic system that is simultaneously safe, robust, efficient and scalable.

  12. The dust acoustic waves in three dimensional scalable complex plasma

    CERN Document Server

    Zhukhovitskii, D I

    2015-01-01

    Dust acoustic waves in the bulk of a dust cloud in the complex plasma of a low-pressure gas discharge under microgravity conditions are considered. The dust component of the complex plasma is assumed to be a scalable system that conforms to the ionization equation of state (IEOS) developed in our previous study. We find singular points of this IEOS that determine the behavior of the sound velocity in different regions of the cloud. The fluid approach is utilized to deduce the wave equation, which includes the neutral drag term. It is shown that the sound velocity is fully defined by the particle compressibility, which is calculated on the basis of the scalable IEOS. The sound velocities and damping rates calculated for different 3D complex plasmas in both ac and dc discharges demonstrate a good correlation with experimental data that are within the limits of validity of the theory. The theory provides an interpretation for the observed independence of the sound velocity from the coordinate and for a weak dependence on the particle ...

  13. Developing a scalable artificial photosynthesis technology through nanomaterials by design

    Science.gov (United States)

    Lewis, Nathan S.

    2016-12-01

    An artificial photosynthetic system that directly produces fuels from sunlight could provide an approach to scalable energy storage and a technology for the carbon-neutral production of high-energy-density transportation fuels. A variety of designs are currently being explored to create a viable artificial photosynthetic system, and the most technologically advanced systems are based on semiconducting photoelectrodes. Here, I discuss the development of an approach that is based on an architecture, first conceived around a decade ago, that combines arrays of semiconducting microwires with flexible polymeric membranes. I highlight the key steps that have been taken towards delivering a fully functional solar fuels generator, which have exploited advances in nanotechnology at all hierarchical levels of device construction, and include the discovery of earth-abundant electrocatalysts for fuel formation and materials for the stabilization of light absorbers. Finally, I consider the remaining scientific and engineering challenges facing the fulfilment of an artificial photosynthetic system that is simultaneously safe, robust, efficient and scalable.

  14. Center for Programming Models for Scalable Parallel Computing

    Energy Technology Data Exchange (ETDEWEB)

    John Mellor-Crummey

    2008-02-29

    Rice University's achievements as part of the Center for Programming Models for Scalable Parallel Computing include: (1) design and implementation of cafc, the first multi-platform CAF compiler for distributed and shared-memory machines, (2) performance studies of the efficiency of programs written using the CAF and UPC programming models, (3) a novel technique to analyze explicitly-parallel SPMD programs that facilitates optimization, (4) design, implementation, and evaluation of new language features for CAF, including communication topologies, multi-version variables, and distributed multithreading to simplify development of high-performance codes in CAF, and (5) a synchronization strength reduction transformation for automatically replacing barrier-based synchronization with more efficient point-to-point synchronization. The prototype Co-array Fortran compiler cafc developed in this project is available as open source software from http://www.hipersoft.rice.edu/caf.

  15. JobCenter: an open source, cross-platform, and distributed job queue management system optimized for scalability and versatility.

    Science.gov (United States)

    Jaschob, Daniel; Riffle, Michael

    2012-07-30

    Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. JobCenter is a client-server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or "in the cloud") and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/.

  16. JobCenter: an open source, cross-platform, and distributed job queue management system optimized for scalability and versatility

    Directory of Open Access Journals (Sweden)

    Jaschob Daniel

    2012-07-01

    Full Text Available Abstract Background Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. Results JobCenter is a client–server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or “in the cloud”) and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. Conclusions JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/.
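
    The design point both JobCenter records emphasise is that all communication is client-driven: workers pull jobs and push results, so they can sit behind firewalls or in the cloud, and load balancing falls out of the pull model. The sketch below shows that pull-based worker pattern with an in-process queue standing in for the server; the job format and handler table are assumptions for illustration, not JobCenter's actual (Java) protocol.

```python
import queue
import threading

# In-memory stand-in for the job server; a real deployment exposes this over
# HTTP so that workers only ever make *outbound* requests (firewall friendly).
job_queue = queue.Queue()
results = {}
HANDLERS = {"word_count": lambda params: len(params["text"].split())}

def worker():
    """Client-driven worker: repeatedly pull a job, run it, record the result."""
    while True:
        job = job_queue.get()
        if job is None:                      # shutdown signal
            break
        results[job["id"]] = HANDLERS[job["type"]](job["params"])
        job_queue.task_done()

if __name__ == "__main__":
    threads = [threading.Thread(target=worker) for _ in range(3)]
    for t in threads:
        t.start()
    for i, text in enumerate(["one", "one two", "one two three"]):
        job_queue.put({"id": i, "type": "word_count", "params": {"text": text}})
    job_queue.join()                         # wait for all submitted jobs
    for _ in threads:
        job_queue.put(None)                  # one shutdown signal per worker
    for t in threads:
        t.join()
    print(results)                           # {0: 1, 1: 2, 2: 3}
```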

  17. SKILL--A Scalable Internet-Based Teaching and Learning System.

    Science.gov (United States)

    Neumann, Gustaf; Zirvas, Jana

    This paper describes the architecture and discusses implementation issues of a scalable Internet-based teaching and learning system (SKILL) being developed at the University of Essen (Germany). The primary objective of SKILL is to cope with the different knowledge levels and learning preferences of the students, providing them with a collaborative…

  18. Design issues of an open scalable architecture for active phased array radars

    NARCIS (Netherlands)

    Huizing, A.G.

    2003-01-01

    An open scalable architecture will make it easier and quicker to adapt active phased array radar to new missions and platforms. This will provide radar manufacturers with larger markets, more commonality in radar systems, and a better continuity in radar production lines. The procurement of open

  19. A Framework for Distributing Scalable Content over Peer-to-Peer Networks

    NARCIS (Netherlands)

    Eberhard, M.; Kumar, A.; Mignanti, S.; Petrocco, R.; Uitto, M.

    2011-01-01

    Peer-to-Peer systems are nowadays a very popular solution for multimedia distribution, as they provide significant cost benefits compared with traditional server-client distribution. Additionally, the distribution of scalable content enables the consumption of the content in a quality suited for the

  20. Scalable and Flexible SLA Management Approach for Cloud

    Directory of Open Access Journals (Sweden)

    SHAUKAT MEHMOOD

    2017-01-01

    Full Text Available Cloud computing is a cutting-edge technology in today's market. In a cloud computing environment, customers pay for the computing resources they use, so resource allocation is a primary task. The significance of resource allocation and availability increases many-fold because the income of the cloud depends on how efficiently it provides the rented services to its clients. An SLA (Service Level Agreement) is signed between the cloud services provider and the cloud services consumer to maintain a stipulated QoS (Quality of Service). SLAs are nonetheless violated for several reasons, including system malfunctions and changes in workload conditions, so elastic and adaptive approaches are required to prevent SLA violations. We propose a novel application-level monitoring scheme to prevent SLA violations. It is based on elastic and scalable characteristics, and it is easy to deploy and use.

  1. Combined Scalable Video Coding Method for Wireless Transmission

    Directory of Open Access Journals (Sweden)

    Achmad Affandi

    2011-08-01

    Full Text Available Mobile video streaming is one of the multimedia services that has developed very rapidly. Bandwidth utilization for wireless transmission is currently the main problem in the field of multimedia communications. In this research, we offer a combination of scalable methods as the most attractive solution to this problem. A scalable method for wireless communication should adapt to the input video sequence. The standard ITU (International Telecommunication Union) Joint Scalable Video Model (JSVM) is employed to produce a combined scalable video coding (CSVC) method that matches the required quality of video streaming services for wireless transmission. The investigation in this paper shows that the combined scalable technique outperforms the non-scalable one in its use of bit rate capacity at a given layer.

  2. Towards a Scalable, Biomimetic, Antibacterial Coating

    Science.gov (United States)

    Dickson, Mary Nora

    Corneal afflictions are the second leading cause of blindness worldwide. When a corneal transplant is unavailable or contraindicated, an artificial cornea device is the only chance to save sight. Bacterial or fungal biofilm build-up on artificial cornea devices can lead to serious complications including the need for systemic antibiotic treatment and even explantation. As a result, much emphasis has been placed on anti-adhesion chemical coatings and antibiotic-leaching coatings. These methods are not long-lasting, and microorganisms can eventually circumvent these measures. Thus, I have developed a surface topographical antimicrobial coating. Various surface structures including rough surfaces, superhydrophobic surfaces, and the natural surfaces of insects' wings and sharks' skin are promising anti-biofilm candidates; however, none meet the criteria necessary for implementation on the surface of an artificial cornea device. In this thesis I: 1) developed scalable fabrication protocols for a library of biomimetic nanostructure polymer surfaces 2) assessed the potential of these poly(methyl methacrylate) nanopillars to kill or prevent formation of biofilm by E. coli bacteria and species of Pseudomonas and Staphylococcus bacteria and improved upon a proposed mechanism for the rupture of Gram-negative bacterial cell walls 3) developed a scalable, commercially viable method for producing antibacterial nanopillars on a curved, PMMA artificial cornea device and 4) developed scalable fabrication protocols for implantation of antibacterial nanopatterned surfaces on the surfaces of thermoplastic polyurethane materials, commonly used in catheter tubings. This project constitutes a first step towards fabrication of the first entirely PMMA artificial cornea device. The major finding of this work is that by precisely controlling the topography of a polymer surface at the nano-scale, we can kill adherent bacteria and prevent biofilm formation of certain pathogenic bacteria

  3. Scalable and Anonymous Group Communication with MTor

    Directory of Open Access Journals (Sweden)

    Lin Dong

    2016-04-01

    Full Text Available This paper presents MTor, a low-latency anonymous group communication system. We construct MTor as an extension to Tor, allowing the construction of multi-source multicast trees on top of the existing Tor infrastructure. MTor does not depend on an external service to broker the group communication, and avoids central points of failure and trust. MTor’s substantial bandwidth savings and graceful scalability enable new classes of anonymous applications that are currently too bandwidth-intensive to be viable through traditional unicast Tor communication-e.g., group file transfer, collaborative editing, streaming video, and real-time audio conferencing.

  4. Scalable conditional induction variables (CIV) analysis

    DEFF Research Database (Denmark)

    Oancea, Cosmin Eugen; Rauchwerger, Lawrence

    2015-01-01

    representation. Our technique requires no modifications of our dependence tests, which is agnostic to the original shape of the subscripts, and is more powerful than previously reported dependence tests that rely on the pairwise disambiguation of read-write references. We have implemented the CIV analysis in our...... parallelizing compiler and evaluated its impact on five Fortran benchmarks. We have found that there are many important loops using CIV subscripts and that our analysis can lead to their scalable parallelization. This in turn has led to the parallelization of the benchmark programs they appear in....

  5. Tip-Based Nanofabrication for Scalable Manufacturing

    Directory of Open Access Journals (Sweden)

    Huan Hu

    2017-03-01

    Full Text Available Tip-based nanofabrication (TBN) is a family of emerging nanofabrication techniques that use a nanometer-scale tip to fabricate nanostructures. In this review, we first introduce the history of the TBN and the technology development. We then briefly review various TBN techniques that use different physical or chemical mechanisms to fabricate features and discuss some of the state-of-the-art techniques. Subsequently, we focus on those TBN methods that have demonstrated potential to scale up the manufacturing throughput. Finally, we discuss several research directions that are essential for making TBN a scalable nano-manufacturing technology.

  6. Progressive Dictionary Learning with Hierarchical Predictive Structure for Scalable Video Coding.

    Science.gov (United States)

    Dai, Wenrui; Shen, Yangmei; Xiong, Hongkai; Jiang, Xiaoqian; Zou, Junni; Taubman, David

    2017-04-12

    Dictionary learning has emerged as a promising alternative to the conventional hybrid coding framework. However, the rigid structure of sequential training and prediction degrades its performance in scalable video coding. This paper proposes a progressive dictionary learning framework with a hierarchical predictive structure for scalable video coding, especially in the low-bitrate region. For pyramidal layers, sparse representation based on a spatio-temporal dictionary is adopted to improve the coding efficiency of enhancement layers (ELs) with a guarantee of reconstruction performance. The overcomplete dictionary is trained to adaptively capture local structures along motion trajectories as well as exploit the correlations between neighboring layers of resolutions. Furthermore, progressive dictionary learning is developed to enable scalability in the temporal domain and restrict error propagation in a closed-loop predictor. Under the hierarchical predictive structure, online learning is leveraged to guarantee the training and prediction performance with an improved convergence rate. To accommodate the state-of-the-art scalable extension of H.264/AVC and the latest HEVC, standardized codec cores are utilized to encode the base and enhancement layers. Experimental results show that the proposed method outperforms the latest SHVC and HEVC simulcast over extensive test sequences with various resolutions.
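
    For readers less familiar with the machinery, the generic dictionary-learning (sparse coding) objective that such schemes build on can be written as below; this is only the textbook formulation, not the paper's hierarchical, spatio-temporal variant.

```latex
\min_{D,\;\{\alpha_i\}} \;\sum_{i=1}^{N}\Big(\tfrac{1}{2}\,\lVert x_i - D\alpha_i\rVert_2^2
      + \lambda\,\lVert \alpha_i\rVert_1\Big)
\quad \text{s.t.}\quad \lVert d_k\rVert_2 \le 1 \;\;\text{for all } k,
```

    where the x_i are signal patches, D = [d_1, ..., d_K] is the overcomplete dictionary, the alpha_i are sparse coefficient vectors, and lambda trades reconstruction error against sparsity.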

  7. Does training on performance based financing make a difference in performance and quality of health care delivery? Health care provider's perspective in Rungwe Tanzania.

    Science.gov (United States)

    Manongi, Rachel; Mushi, Declare; Kessy, Joachim; Salome, Saria; Njau, Bernard

    2014-04-04

    In recent years, Performance Based Financing (PBF), a form of result-based financing, has attracted global attention in the health systems of developing countries. PBF promotes autonomous health facilities and introduces financial incentives to motivate health facilities and health workers to attain pre-determined targets. To achieve this, the Tanzanian government through the Christian Social Services Commission initiated a PBF pilot project in Rungwe district, Mbeya region. Kilimanjaro Christian Medical Center was given the role of training health workers on PBF principles in Rungwe. The aim of this study was to explore health care providers' perceptions of a three-year training on PBF principles in a PBF pilot project at Rungwe District in Mbeya, Tanzania. This was an explorative qualitative study, which took place in the Rungwe PBF pilot area in October 2012. Twenty-six (26) participants were purposively selected. Six took part in in-depth interviews (IDIs) and twenty (20) in the group discussions (GDs). Both the IDIs and the GDs explored the perceived benefits and challenges of implementing PBF in their workplace. Data were manually analyzed using a content analysis approach. Overall, informants had positive perspectives on the PBF training. Most of the health facilities were able to implement some of the PBF concepts in their workplaces after the training, such as developing job descriptions for their staff, creating quarterly business plans for their facilities, costing their services and entering service agreements with the government, improved record keeping, customer care and involving the community as partners in running their facilities. The most common principle of paying individual performance bonuses was mentioned as a major challenge due to inadequate funding and the poor design of the Rungwe PBF pilot project. Despite poor design and inadequate funding, our findings have shown some promising results after PBF training in the study area. The findings have highlighted

  8. How can information systems provide support to nurses’ hand hygiene performance? Using gamification and indoor location to improve hand hygiene awareness and reduce hospital infections

    National Research Council Canada - National Science Library

    Marques, Rita; Gregório, João; Pinheiro, Fernando; Póvoa, Pedro; da Silva, Miguel Mira; Lapão, Luís Velez

    2017-01-01

    .... To raise awareness regarding hand hygiene compliance, individual behaviour change and performance optimization, we aimed to develop a gamification solution that collects data and provides real-time...

  9. Scalable Engineering of Quantum Optical Information Processing Architectures (SEQUOIA)

    Science.gov (United States)

    2016-12-13

    Final R&D status report (13 December 2016) for “Scalable Engineering of Quantum Optical Information-Processing Architectures (SEQUOIA)”, contract number W31-P4Q-15-C-0045. The work concerns scalable architectures for LOQC and cluster-state quantum computing (ballistic or non-ballistic) with parametric nonlinearities (Kerr, chi-2, ...).

  10. Using common table expressions to build a scalable Boolean query generator for clinical data warehouses.

    Science.gov (United States)

    Harris, Daniel R; Henderson, Darren W; Kavuluru, Ramakanth; Stromberg, Arnold J; Johnson, Todd R

    2014-09-01

    We present a custom, Boolean query generator utilizing common-table expressions (CTEs) that is capable of scaling with big datasets. The generator maps user-defined Boolean queries, such as those interactively created in clinical-research and general-purpose healthcare tools, into SQL. We demonstrate the effectiveness of this generator by integrating our study into the Informatics for Integrating Biology and the Bedside (i2b2) query tool and show that it is capable of scaling. Our custom generator replaces and outperforms the default query generator found within the Clinical Research Chart cell of i2b2. In our experiments, 16 different types of i2b2 queries were identified by varying four constraints: date, frequency, exclusion criteria, and whether selected concepts occurred in the same encounter. We generated nontrivial, random Boolean queries based on these 16 types; the corresponding SQL queries produced by both generators were compared by execution times. The CTE-based solution significantly outperformed the default query generator and provided a much more consistent response time across all query types (M = 2.03, SD = 6.64 versus M = 75.82, SD = 238.88 s). Without costly hardware upgrades, we provide a scalable solution based on CTEs with very promising empirical results centered on performance gains. The evaluation methodology used for this provides a means of profiling clinical data warehouse performance.
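
    As a rough illustration of the approach (not the authors' i2b2 generator), the sketch below maps a nested Boolean expression over concept criteria into SQL in which each leaf criterion becomes a named common table expression and AND/OR/NOT become INTERSECT/UNION/EXCEPT over those CTEs. The observation_fact and patients table names are assumptions, and concept codes are inlined only for brevity; real code would use bind parameters.

```python
def boolean_query_to_sql(expr):
    """Translate a nested Boolean expression over concept codes into SQL
    built from one common table expression (CTE) per leaf criterion.

    expr is a tuple tree, e.g. ("AND", ("concept", "ICD9:250"),
                                        ("NOT", ("concept", "RX:insulin")))
    Assumed schema: observation_fact(patient_num, concept_cd) plus a
    patients(patient_num) table listing everyone (used for NOT).
    """
    ctes = []
    counter = [0]

    def walk(node):
        op = node[0]
        if op == "concept":
            counter[0] += 1
            name = f"c{counter[0]}"
            ctes.append(
                f"{name} AS (SELECT DISTINCT patient_num FROM observation_fact "
                f"WHERE concept_cd = '{node[1]}')"
            )
            return f"SELECT patient_num FROM {name}"
        if op == "NOT":
            return f"SELECT patient_num FROM patients EXCEPT ({walk(node[1])})"
        joiner = " INTERSECT " if op == "AND" else " UNION "   # AND / OR
        return joiner.join(f"({walk(child)})" for child in node[1:])

    body = walk(expr)
    return "WITH " + ",\n     ".join(ctes) + "\nSELECT COUNT(*) FROM (" + body + ") q"

query = ("AND",
         ("concept", "ICD9:250"),
         ("OR", ("concept", "LOINC:4548-4"), ("NOT", ("concept", "RX:insulin"))))
print(boolean_query_to_sql(query))
```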

  11. Wounding patterns and human performance in knife attacks: optimising the protection provided by knife-resistant body armour.

    Science.gov (United States)

    Bleetman, A; Watson, C H; Horsfall, I; Champion, S M

    2003-12-01

    Stab attacks generate high loads, and to defeat them, armour needs to be of a certain thickness and stiffness. Slash attacks produce much lower loads and armour designed to defeat them can be far lighter and more flexible. Phase 1: Human performance in slash attacks: 87 randomly selected students at the Royal Military College of Science were asked to make one slash attack with an instrumented blade on a vertically mounted target. No instructions on how to slash the target were given. The direction, contact forces and velocity of each attack were recorded. Phase 2: Clinical experience with edged weapon attacks: The location and severity of all penetrating injuries in patients attending the Glasgow Royal Infirmary between 1993 and 1996 were charted on anatomical figures. Phase 1: Two types of human slash behaviour were evident: a 'chop and drag' blow and a 'sweep motion' type of attack. 'Chop and drag' attacks had higher peak forces and velocities than sweep attacks. Shoulder-to-waist (diagonal) blows accounted for 82% of attacks; 71% of attackers used a long diagonal slash with an average cut length of 34 cm and 11% used short diagonal attacks with an average cut length of 25 cm. Only 18% of attackers slashed across the body (short horizontal); the average measured cut length of this type was 28 cm. The maximum peak force for the total sample population was 212 N; the maximum velocity was 14.88 m s⁻¹. The 95th percentile force for the total sample population was 181 N and the velocity was 9.89 m s⁻¹. Phase 2: 431 of the 500 patients had been wounded with edged weapons. The average number of wounds sustained by victims in knife assaults was 2.4. The distribution of wounds by frequency and severity is presented. Anti-slash protection is required for the arms, neck, shoulders, and thighs. The clinical experience of knife-attack victims provides information on the relative vulnerabilities of different regions of the body. It is anticipated that designing a tunic

  12. Using Audience Response Technology to provide formative feedback on pharmacology performance for non-medical prescribing students - a preliminary evaluation

    Directory of Open Access Journals (Sweden)

    Mostyn Alison

    2012-11-01

    Full Text Available Abstract Background The use of anonymous audience response technology (ART) to actively engage students in classroom learning has been evaluated positively across multiple settings. To date, however, there has been no empirical evaluation of the use of individualised ART handsets and formative feedback of ART scores. The present study investigates student perceptions of such a system and the relationship between formative feedback results and exam performance. Methods Four successive cohorts of Non-Medical Prescribing students (n=107) had access to the individualised ART system and three of these groups (n=72) completed a questionnaire about their perceptions of using ART. Semi-structured interviews were carried out with a purposive sample of seven students who achieved a range of scores on the formative feedback. Using data from all four cohorts of students, the relationship between mean ART scores and summative pharmacology exam score was examined using a non-parametric correlation. Results Questionnaire and interview data suggested that the use of ART enhanced the classroom environment, motivated students and promoted learning. Questionnaire data demonstrated that students found the formative feedback helpful for identifying their learning needs (95.6%), guiding their independent study (86.8%), and as a revision tool (88.3%). Interviewees particularly valued the objectivity of the individualised feedback which helped them to self-manage their learning. Interviewees' initial anxiety about revealing their level of pharmacology knowledge to the lecturer and to themselves reduced over time as students focused on the learning benefits associated with the feedback. A significant positive correlation was found between students' formative feedback scores and their summative pharmacology exam scores (Spearman's rho = 0.71, N=107, p < 0.001). Conclusions Despite initial anxiety about the use of individualised ART units, students rated the helpfulness of the

  13. Big data integration: scalability and sustainability

    KAUST Repository

    Zhang, Zhang

    2016-01-26

    Integration of various types of omics data is critically indispensable for addressing most important and complex biological questions. In the era of big data, however, data integration becomes increasingly tedious, time-consuming and expensive, posing a significant obstacle to fully exploit the wealth of big biological data. Here we propose a scalable and sustainable architecture that integrates big omics data through community-contributed modules. Community modules are contributed and maintained by different committed groups and each module corresponds to a specific data type, deals with data collection, processing and visualization, and delivers data on-demand via web services. Based on this community-based architecture, we build Information Commons for Rice (IC4R; http://ic4r.org), a rice knowledgebase that integrates a variety of rice omics data from multiple community modules, including genome-wide expression profiles derived entirely from RNA-Seq data, resequencing-based genomic variations obtained from re-sequencing data of thousands of rice varieties, plant homologous genes covering multiple diverse plant species, post-translational modifications, rice-related literatures, and community annotations. Taken together, such architecture achieves integration of different types of data from multiple community-contributed modules and accordingly features scalable, sustainable and collaborative integration of big data as well as low costs for database update and maintenance, thus helpful for building IC4R into a comprehensive knowledgebase covering all aspects of rice data and beneficial for both basic and translational researches.

  14. Highly scalable Ab initio genomic motif identification

    KAUST Repository

    Marchand, Benoit

    2011-01-01

    We present results of scaling an ab initio motif family identification system, Dragon Motif Finder (DMF), to 65,536 processor cores of IBM Blue Gene/P. DMF seeks groups of mutually similar polynucleotide patterns within a set of genomic sequences and builds various motif families from them. Such information is of relevance to many problems in life sciences. Prior attempts to scale such ab initio motif-finding algorithms achieved limited success. We solve the scalability issues using a combination of mixed-mode MPI-OpenMP parallel programming, master-slave work assignment, multi-level workload distribution, multi-level MPI collectives, and serial optimizations. While the scalability of our algorithm was excellent (94% parallel efficiency on 65,536 cores relative to 256 cores on a modest-size problem), the final speedup with respect to the original serial code exceeded 250,000 when serial optimizations are included. This enabled us to carry out many large-scale ab initio motif-finding simulations in a few hours while the original serial code would have needed decades of execution time. Copyright 2011 ACM.

  15. Scalable Domain Decomposition Preconditioners for Heterogeneous Elliptic Problems

    Directory of Open Access Journals (Sweden)

    Pierre Jolivet

    2014-01-01

    Full Text Available Domain decomposition methods are, alongside multigrid methods, one of the dominant paradigms in contemporary large-scale partial differential equation simulation. In this paper, a lightweight implementation of a theoretically and numerically scalable preconditioner is presented in the context of overlapping methods. The performance of this work is assessed by numerical simulations executed on thousands of cores, for solving various highly heterogeneous elliptic problems in both 2D and 3D with billions of degrees of freedom. Such problems arise in computational science and engineering, in solid and fluid mechanics. While focusing on overlapping domain decomposition methods might seem too restrictive, it will be shown how this work can be applied to a variety of other methods, such as non-overlapping methods and abstract deflation based preconditioners. It is also presented how multilevel preconditioners can be used to avoid communication during an iterative process such as a Krylov method.
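
    For context, the one-level overlapping additive Schwarz preconditioner that such methods start from can be written as follows; the scalable preconditioner of the paper adds a coarse (deflation) component on top of this textbook form.

```latex
M_{\mathrm{AS}}^{-1} \;=\; \sum_{i=1}^{N} R_i^{T}\,\bigl(R_i A R_i^{T}\bigr)^{-1} R_i ,
```

    where A is the global matrix and R_i restricts a global vector to the i-th overlapping subdomain; a two-level variant adds a coarse term R_0^T (R_0 A R_0^T)^{-1} R_0 so that iteration counts stay bounded as the number of subdomains N grows.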

  16. Scalable Fabrication of 2D Semiconducting Crystals for Future Electronics

    Directory of Open Access Journals (Sweden)

    Jiantong Li

    2015-12-01

    Full Text Available Two-dimensional (2D) layered materials are anticipated to be promising for future electronics. However, their electronic applications are severely restricted by the availability of such materials with high quality and at a large scale. In this review, we systematically introduce versatile scalable synthesis techniques from the literature for high-crystallinity, large-area 2D semiconducting materials, especially transition metal dichalcogenides, and 2D material-based advanced structures, such as 2D alloys, 2D heterostructures and 2D material devices engineered at the wafer scale. A systematic comparison among different techniques is conducted with respect to device performance. The present status and the perspective for future electronics are discussed.

  17. A Scalable Framework and Prototype for CAS e-Science

    Directory of Open Access Journals (Sweden)

    Yuanchun Zhou

    2007-07-01

    Full Text Available Based on the Small-World model of CAS e-Science and the power law of the Internet, this paper presents a scalable CAS e-Science Grid framework based on virtual regions, called the Virtual Region Grid Framework (VRGF). VRGF takes the virtual region and the layer as its logical management units. In VRGF, the intra-virtual-region mode is pure P2P, while the inter-virtual-region mode is centralized. Therefore, VRGF is a decentralized framework with some P2P properties. Furthermore, VRGF is able to achieve satisfactory performance in resource organization and location at a small cost, and is well adapted to the complicated and dynamic features of scientific collaborations. We have implemented a demonstration VRGF-based Grid prototype, SDG.

  18. Simplifying Scalable Graph Processing with a Domain-Specific Language

    KAUST Repository

    Hong, Sungpack

    2014-01-01

    Large-scale graph processing, with its massive data sets, requires distributed processing. However, conventional frameworks for distributed graph processing, such as Pregel, use non-traditional programming models that are well-suited for parallelism and scalability but inconvenient for implementing non-trivial graph algorithms. In this paper, we use Green-Marl, a Domain-Specific Language for graph analysis, to intuitively describe graph algorithms and extend its compiler to generate equivalent Pregel implementations. Using the semantic information captured by Green-Marl, the compiler applies a set of transformation rules that convert imperative graph algorithms into Pregel's programming model. Our experiments show that the Pregel programs generated by the Green-Marl compiler perform similarly to manually coded Pregel implementations of the same algorithms. The compiler is even able to generate a Pregel implementation of a complicated graph algorithm for which a manual Pregel implementation is very challenging.
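
    To make the target programming model concrete, the following is a tiny single-machine imitation of a Pregel-style vertex program (connected components by minimum-label propagation): in each superstep an active vertex combines its incoming messages, updates its value, sends messages to its neighbours, and otherwise votes to halt. It only illustrates the model; it is neither Green-Marl nor a distributed Pregel runtime.

```python
def pregel_connected_components(edges):
    """Minimal single-machine imitation of a Pregel-style vertex program:
    connected components via min-label propagation over supersteps."""
    # Build an undirected adjacency list.
    neighbours = {}
    for u, v in edges:
        neighbours.setdefault(u, set()).add(v)
        neighbours.setdefault(v, set()).add(u)

    value = {v: v for v in neighbours}          # each vertex starts with its own id
    # Superstep 0: every vertex announces its label to its neighbours.
    messages = {v: [] for v in neighbours}
    for v in neighbours:
        for n in neighbours[v]:
            messages[n].append(value[v])

    supersteps = 1
    while any(messages.values()):
        new_messages = {v: [] for v in neighbours}
        for v, inbox in messages.items():
            if not inbox:
                continue                        # this vertex has voted to halt
            candidate = min(inbox)              # combine incoming messages
            if candidate < value[v]:
                value[v] = candidate            # update local state ...
                for n in neighbours[v]:
                    new_messages[n].append(candidate)   # ... and notify neighbours
        messages = new_messages
        supersteps += 1
    return value, supersteps

components, steps = pregel_connected_components([(1, 2), (2, 3), (4, 5), (6, 6)])
print(components, "after", steps, "supersteps")
```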

  19. Final Report: Center for Programming Models for Scalable Parallel Computing

    Energy Technology Data Exchange (ETDEWEB)

    Mellor-Crummey, John [William Marsh Rice University

    2011-09-13

    As part of the Center for Programming Models for Scalable Parallel Computing, Rice University collaborated with project partners in the design, development and deployment of language, compiler, and runtime support for parallel programming models to support application development for the “leadership-class” computer systems at DOE national laboratories. Work over the course of this project has focused on the design, implementation, and evaluation of a second-generation version of Coarray Fortran. Research and development efforts of the project have focused on the CAF 2.0 language, compiler, runtime system, and supporting infrastructure. This has involved working with the teams that provide infrastructure for CAF that we rely on, implementing new language and runtime features, producing an open source compiler that enabled us to evaluate our ideas, and evaluating our design and implementation through the use of benchmarks. The report details the research, development, findings, and conclusions from this work.

  20. Hierarchical Sets: Analyzing Pangenome Structure through Scalable Set Visualizations

    DEFF Research Database (Denmark)

    Pedersen, Thomas Lin

    2017-01-01

    information to increase in knowledge. As the pangenome data structure is essentially a collection of sets we explore the potential for scalable set visualization as a tool for pangenome analysis. We present a new hierarchical clustering algorithm based on set arithmetics that optimizes the intersection sizes...... along the branches. The intersection and union sizes along the hierarchy are visualized using a composite dendrogram and icicle plot, which, in pangenome context, shows the evolution of pangenome and core size along the evolutionary hierarchy. Outlying elements, i.e. elements whose presence pattern do...... of hierarchical sets by applying it to a pangenome based on 113 Escherichia and Shigella genomes and find it provides a powerful addition to pangenome analysis. The described clustering algorithm and visualizations are implemented in the hierarchicalSets R package available from CRAN (https...

  1. CloudETL: Scalable Dimensional ETL for Hadoop and Hive

    DEFF Research Database (Denmark)

    Xiufeng, Liu; Thomsen, Christian; Pedersen, Torben Bach

    Extract-Transform-Load (ETL) programs process data from sources into data warehouses (DWs). Due to the rapid growth of data volumes, there is an increasing demand for systems that can scale on demand. Recently, much attention has been given to MapReduce which is a framework for highly parallel...... handling of massive data sets in cloud environments. The MapReduce-based Hive has been proposed as a DBMS-like system for DWs and provides good and scalable analytical features. It is,however, still challenging to do proper dimensional ETL processing with Hive; for example, UPDATEs are not supported which...... makes handling of slowly changing dimensions (SCDs) very difficult. To remedy this, we here present the cloud-enabled ETL framework CloudETL. CloudETL uses the open source MapReduce implementation Hadoop to parallelize the ETL execution and to process data into Hive. The user defines the ETL process...

  2. Adaptive Streaming of Scalable Videos over P2PTV

    Directory of Open Access Journals (Sweden)

    Youssef Lahbabi

    2015-01-01

    Full Text Available In this paper, we propose a new Scalable Video Coding (SVC) quality-adaptive peer-to-peer television (P2PTV) system executed at the peers and in the network. The quality adaptation mechanisms are developed as follows: on one hand, Layer Level Initialization (LLI) is used for adapting the video quality to the static resources at the peers in order to avoid long startup times. On the other hand, Layer Level Adjustment (LLA) is invoked periodically to adjust the SVC layer to fluctuations in the network conditions, with the aim of predicting possible stalls before their occurrence. Our results demonstrate that our mechanisms allow the video quality to adapt quickly to various system changes while providing the best Quality of Experience (QoE) that matches the current resources of the peer devices and the instantaneous throughput available in the network.

  3. MSDLSR: Margin Scalable Discriminative Least Squares Regression for Multicategory Classification.

    Science.gov (United States)

    Wang, Lingfeng; Zhang, Xu-Yao; Pan, Chunhong

    2016-12-01

    In this brief, we propose a new margin scalable discriminative least squares regression (MSDLSR) model for multicategory classification. The main motivation behind the MSDLSR is to explicitly control the margin of the DLSR model. We first prove that DLSR is a relaxation of the traditional L2-support vector machine. Based on this fact, we further provide a theorem on the margin of DLSR. With this theorem, we add an explicit constraint on DLSR to restrict the number of zeros of the dragging values, so as to control the margin of DLSR. The new model is called MSDLSR. Theoretically, we analyze the determination of the margin and support vectors of MSDLSR. Extensive experiments illustrate that our method outperforms current state-of-the-art approaches on various machine learning and real-world data sets.

  4. Wired/wireless access integrated RoF-PON with scalable generation of multi-frequency MMWs enabled by polarization multiplexed FWM in SOA.

    Science.gov (United States)

    Xiang, Yu; Chen, Chen; Zhang, Chongfu; Qiu, Kun

    2013-01-14

    In this paper, we propose and demonstrate a novel integrated radio-over-fiber passive optical network (RoF-PON) system for both wired and wireless access. By utilizing the polarization multiplexed four-wave mixing (FWM) effect in a semiconductor optical amplifier (SOA), scalable generation of multi-frequency millimeter-waves (MMWs) can be provided so as to assist the configuration of multi-frequency wireless access for the wire/wireless access integrated ROF-PON system. In order to obtain a better performance, the polarization multiplexed FWM effect is investigated in detail. Simulation results successfully verify the feasibility of our proposed scheme.

  5. 16 CFR 1406.4 - Requirements to provide performance and technical notice to prospective purchasers and purchasers.

    Science.gov (United States)

    2010-01-01

    ... recommends the use of this 2 label format in order to provide more consumer awareness of the operation and... any identification of the manufacturer, brand, model, and similar designations. At the manufacturer's...

  6. A Fault-Tolerant Mobile Computing Model Based On Scalable Replica

    Directory of Open Access Journals (Sweden)

    Meenakshi Sati

    2014-06-01

    Full Text Available The most frequent challenge faced by mobile users is staying connected with online data; while disconnected or poorly connected, they must store replicas of critical data. Nomadic users require replication to store copies of critical data on their mobile machines. Existing replication services do not provide all classes of mobile users with the capabilities they require, which include the ability to synchronize directly between any two replicas, support for large numbers of replicas, and detailed control over what files reside on the local (mobile) replica. Existing peer-to-peer solutions would enable direct communication, but suffer from dramatic scaling problems in the number of replicas, limiting the overall number of users and impacting performance. Roam is a replication system designed to satisfy the requirements of the mobile user. It is based on the Ward Model, a replication architecture for mobile environments. Using the Ward Model and new distributed algorithms, Roam provides a scalable replication solution for the mobile user. We describe the motivation, design, and implementation of Roam and report on its performance. Replication is extremely important in mobile environments because nomadic users require local copies of important data.

  7. A wireless, compact, and scalable bioimpedance measurement system for energy-efficient multichannel body sensor solutions

    Science.gov (United States)

    Ramos, J.; Ausín, J. L.; Lorido, A. M.; Redondo, F.; Duque-Carrillo, J. F.

    2013-04-01

    In this paper, we present the design, realization and evaluation of a multichannel measurement system based on a cost-effective high-performance integrated circuit for electrical bioimpedance (EBI) measurements in the frequency range from 1 kHz to 1 MHz, and a low-cost commercially available radio frequency transceiver device, which provides reliable wireless communication. The resulting on-chip spectrometer provides high measuring EBI capabilities and constitutes the basic node to built EBI wireless sensor networks (EBI-WSNs). The proposed EBI-WSN behaves as a high-performance wireless multichannel EBI spectrometer where the number of nodes, i.e., number of channels, is completely scalable to satisfy specific requirements of body sensor networks. One of its main advantages is its versatility, since each EBI node is independently configurable and capable of working simultaneously. A prototype of the EBI node leads to a very small printed circuit board of approximately 8 cm2 including chip-antenna, which can operate several years on one 3-V coin cell battery. A specifically tailored graphical user interface (GUI) for EBI-WSN has been also designed and implemented in order to configure the operation of EBI nodes and the network topology. EBI analysis parameters, e.g., single-frequency or spectroscopy, time interval, analysis by EBI events, frequency and amplitude ranges of the excitation current, etc., are defined by the GUI.

  8. PyClaw: Accessible, Extensible, Scalable Tools for Wave Propagation Problems

    KAUST Repository

    Ketcheson, David I.

    2012-08-15

    Development of scientific software involves tradeoffs between ease of use, generality, and performance. We describe the design of a general hyperbolic PDE solver that can be operated with the convenience of MATLAB yet achieves efficiency near that of hand-coded Fortran and scales to the largest supercomputers. This is achieved by using Python for most of the code while employing automatically wrapped Fortran kernels for computationally intensive routines, and using Python bindings to interface with a parallel computing library and other numerical packages. The software described here is PyClaw, a Python-based structured grid solver for general systems of hyperbolic PDEs [K. T. Mandli et al., PyClaw Software, Version 1.0, http://numerics.kaust.edu.sa/pyclaw/ (2011)]. PyClaw provides a powerful and intuitive interface to the algorithms of the existing Fortran codes Clawpack and SharpClaw, simplifying code development and use while providing massive parallelism and scalable solvers via the PETSc library. The package is further augmented by use of PyWENO for generation of efficient high-order weighted essentially nonoscillatory reconstruction code. The simplicity, capability, and performance of this approach are demonstrated through application to example problems in shallow water flow, compressible flow, and elasticity.

  9. ParaText : scalable text modeling and analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Dunlavy, Daniel M.; Stanton, Eric T.; Shead, Timothy M.

    2010-06-01

    Automated processing, modeling, and analysis of unstructured text (news documents, web content, journal articles, etc.) is a key task in many data analysis and decision making applications. As data sizes grow, scalability is essential for deep analysis. In many cases, documents are modeled as term or feature vectors and latent semantic analysis (LSA) is used to model latent, or hidden, relationships between documents and terms appearing in those documents. LSA supplies conceptual organization and analysis of document collections by modeling high-dimension feature vectors in many fewer dimensions. While past work on the scalability of LSA modeling has focused on the SVD, the goal of our work is to investigate the use of distributed memory architectures for the entire text analysis process, from data ingestion to semantic modeling and analysis. ParaText is a set of software components for distributed processing, modeling, and analysis of unstructured text. The ParaText source code is available under a BSD license, as an integral part of the Titan toolkit. ParaText components are chained-together into data-parallel pipelines that are replicated across processes on distributed-memory architectures. Individual components can be replaced or rewired to explore different computational strategies and implement new functionality. ParaText functionality can be embedded in applications on any platform using the native C++ API, Python, or Java. The ParaText MPI Process provides a 'generic' text analysis pipeline in a command-line executable that can be used for many serial and parallel analysis tasks. ParaText can also be deployed as a web service accessible via a RESTful (HTTP) API. In the web service configuration, any client can access the functionality provided by ParaText using commodity protocols ... from standard web browsers to custom clients written in any language.
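
    At its core, the LSA step described here is a truncated singular value decomposition of the term-document matrix. The sketch below shows that serial building block on a toy matrix with NumPy; ParaText's contribution is running the whole ingestion-to-analysis pipeline, including this step, in a distributed-memory setting.

```python
import numpy as np

def lsa_document_coords(term_doc, k):
    """Project documents into a k-dimensional latent semantic space via a
    truncated SVD of the term-document matrix (rows = terms, cols = docs)."""
    U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
    return (np.diag(s[:k]) @ Vt[:k]).T        # one row per document

# Toy term-document counts (terms x documents); two HPC docs, two biology docs.
X = np.array([
    [3, 1, 0, 0],   # "scalable"
    [1, 2, 0, 0],   # "parallel"
    [0, 0, 2, 1],   # "protein"
    [0, 0, 1, 1],   # "genome"
], dtype=float)

coords = lsa_document_coords(X, k=2)
unit = coords / np.linalg.norm(coords, axis=1, keepdims=True)
print(np.round(unit @ unit.T, 2))   # docs 0-1 cluster together, as do docs 2-3
```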

  10. Network selection, Information filtering and Scalable computation

    Science.gov (United States)

    Ye, Changqing

    -complete factorizations, possibly with a high percentage of missing values. This promotes additional sparsity beyond rank reduction. Computationally, we design methods based on a ``decomposition and combination'' strategy, to break large-scale optimization into many small subproblems to solve in a recursive and parallel manner. On this basis, we implement the proposed methods through multi-platform shared-memory parallel programming, and through Mahout, a library for scalable machine learning and data mining, for mapReduce computation. For example, our methods are scalable to a dataset consisting of three billions of observations on a single machine with sufficient memory, having good timings. Both theoretical and numerical investigations show that the proposed methods exhibit significant improvement in accuracy over state-of-the-art scalable methods.

  11. Formative evaluation of a telemedicine model for delivering clinical neurophysiology services part I: utility, technical performance and service provider perspective.

    LENUS (Irish Health Repository)

    Breen, Patricia

    2010-01-01

    Formative evaluation is conducted in the early stages of system implementation to assess how it works in practice and to identify opportunities for improving technical and process performance. A formative evaluation of a teleneurophysiology service was conducted to examine its technical and sociological dimensions.

  12. Scalable DeNoise-and-Forward in Bidirectional Relay Networks

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Krigslund, Rasmus; Popovski, Petar

    2010-01-01

    In this paper a scalable relaying scheme is proposed based on an existing concept called DeNoise-and-Forward, DNF. We call it Scalable DNF, S-DNF, and it targets the scenario with multiple communication flows through a single common relay. The idea of the scheme is to combine packets at the relay...

  13. Building scalable apps with Redis and Node.js

    CERN Document Server

    Johanan, Joshua

    2014-01-01

    If the phrase scalability sounds alien to you, then this is an ideal book for you. You will not need much Node.js experience as each framework is demonstrated in a way that requires no previous knowledge of the framework. You will be building scalable Node.js applications in no time! Knowledge of JavaScript is required.

  14. Scalable active learning for multiclass image classification.

    Science.gov (United States)

    Joshi, Ajay J; Porikli, Fatih; Papanikolopoulos, Nikolaos P

    2012-11-01

    Machine learning techniques for computer vision applications like object recognition, scene classification, etc., require a large number of training samples for satisfactory performance. Especially when classification is to be performed over many categories, providing enough training samples for each category is infeasible. This paper describes new ideas in multiclass active learning to deal with the training bottleneck, making it easier to train large multiclass image classification systems. First, we propose a new interaction modality for training which requires only yes-no type binary feedback instead of a precise category label. The modality is especially powerful in the presence of hundreds of categories. For the proposed modality, we develop a Value-of-Information (VOI) algorithm that chooses informative queries while also considering user annotation cost. Second, we propose an active selection measure that works with many categories and is extremely fast to compute. This measure is employed to perform a fast seed search before computing VOI, resulting in an algorithm that scales linearly with dataset size. Third, we use locality sensitive hashing to provide a very fast approximation to active learning, which gives sublinear time scaling, allowing application to very large datasets. The approximation provides up to two orders of magnitude speedups with little loss in accuracy. Thorough empirical evaluation of classification accuracy, noise sensitivity, imbalanced data, and computational performance on a diverse set of image datasets demonstrates the strengths of the proposed algorithms.
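
    The binary-feedback loop can be sketched in a few lines: train on the current labeled pool, pick the most uncertain unlabeled sample, guess its label and ask the oracle only whether the guess is correct. The code below uses scikit-learn and plain uncertainty sampling as a stand-in for the paper's Value-of-Information criterion and hashing-based speedups.

    # Pool-based active learning with yes/no feedback (illustrative stand-in,
    # not the authors' VOI or LSH machinery); scikit-learn is assumed.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=500, n_classes=3, n_informative=6,
                               random_state=0)
    labeled = list(range(30))                        # small labeled seed set
    pool = [i for i in range(len(y)) if i not in labeled]

    for _ in range(20):                              # 20 binary queries
        clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
        proba = clf.predict_proba(X[pool])
        pick = pool[int(np.argmin(proba.max(axis=1)))]   # most uncertain sample
        guess = int(clf.predict(X[pick:pick + 1])[0])
        if guess == y[pick]:                         # oracle answers "yes"
            labeled.append(pick)
        # on "no" a full system would record the excluded label; omitted here
        pool.remove(pick)

    print("labeled set size after querying:", len(labeled))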

  15. 78 FR 69438 - AGOA: Trade and Investment Performance Overview; AGOA: Economic Effects of Providing Duty-Free...

    Science.gov (United States)

    2013-11-19

    ...: Economic Effects of Providing Duty-Free Treatment for Imports; investigation No. 332-545, U.S. AGOA Rules... COMMISSION [Investigation No. 332-542, Investigation No. 332-544, Investigation No. 332-545, Investigation No...-Free Treatment for Imports, U.S. AGOA Rules of Origin: Possible Changes To Promote Regional Integration...

  16. VA residential substance use disorder treatment program providers' perceptions of facilitators and barriers to performance on pre-admission processes.

    Science.gov (United States)

    Ellerbe, Laura S; Manfredi, Luisa; Gupta, Shalini; Phelps, Tyler E; Bowe, Thomas R; Rubinsky, Anna D; Burden, Jennifer L; Harris, Alex H S

    2017-04-04

    In the U.S. Department of Veterans Affairs (VA), residential treatment programs are an important part of the continuum of care for patients with a substance use disorder (SUD). However, a limited number of program-specific measures to identify quality gaps in SUD residential programs exist. This study aimed to: (1) Develop metrics for two pre-admission processes: Wait Time and Engagement While Waiting, and (2) Interview program management and staff about program structures and processes that may contribute to performance on these metrics. The first aim sought to supplement the VA's existing facility-level performance metrics with SUD program-level metrics in order to identify high-value targets for quality improvement. The second aim recognized that not all key processes are reflected in the administrative data, and even when they are, new insight may be gained from viewing these data in the context of day-to-day clinical practice. VA administrative data from fiscal year 2012 were used to calculate pre-admission metrics for 97 programs (63 SUD Residential Rehabilitation Treatment Programs (SUD RRTPs); 34 Mental Health Residential Rehabilitation Treatment Programs (MH RRTPs) with a SUD track). Interviews were then conducted with management and front-line staff to learn what factors may have contributed to high or low performance, relative to the national average for their program type. We hypothesized that speaking directly to residential program staff may reveal innovative practices, areas for improvement, and factors that may explain system-wide variability in performance. Average wait time for admission was 16 days (SUD RRTPs: 17 days; MH RRTPs with a SUD track: 11 days), with 60% of Veterans waiting longer than 7 days. For these Veterans, engagement while waiting occurred in an average of 54% of the waiting weeks (range 3-100% across programs). Fifty-nine interviews representing 44 programs revealed factors perceived to potentially impact performance in

  17. BASSET: Scalable Gateway Finder in Large Graphs

    Energy Technology Data Exchange (ETDEWEB)

    Tong, H; Papadimitriou, S; Faloutsos, C; Yu, P S; Eliassi-Rad, T

    2010-11-03

    Given a social network, who is the best person to introduce you to, say, Chris Ferguson, the poker champion? Or, given a network of people and skills, who is the best person to help you learn about, say, wavelets? The goal is to find a small group of 'gateways': persons who are close enough to us, as well as close enough to the target (person, or skill) or, in other words, are crucial in connecting us to the target. The main contributions are the following: (a) we show how to formulate this problem precisely; (b) we show that it is sub-modular and thus it can be solved near-optimally; (c) we give fast, scalable algorithms to find such gateways. Experiments on real data sets validate the effectiveness and efficiency of the proposed methods, achieving up to 6,000,000x speedup.
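
    A toy version of greedy gateway selection is easy to state: repeatedly add the candidate node that most increases a proximity objective between the source and the target. The objective below (summed inverse shortest-path distances) is a simplified stand-in for the paper's submodular formulation, and networkx is assumed.

    # Greedy selection of k "gateway" nodes on a graph; illustrative objective only.
    import networkx as nx

    def gateway_score(G, nodes, source, target):
        d_s = nx.single_source_shortest_path_length(G, source)
        d_t = nx.single_source_shortest_path_length(G, target)
        return sum(1.0 / (d_s[n] + d_t[n]) for n in nodes if n in d_s and n in d_t)

    def find_gateways(G, source, target, k=3):
        chosen, candidates = [], set(G) - {source, target}
        for _ in range(k):
            best = max(candidates,
                       key=lambda n: gateway_score(G, chosen + [n], source, target))
            chosen.append(best)
            candidates.remove(best)
        return chosen

    G = nx.karate_club_graph()
    print("gateways between node 0 and node 33:", find_gateways(G, 0, 33))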

  18. Interpolating and Estimating Horizontal Diffuse Solar Irradiation to Provide UK-Wide Coverage: Selection of the Best Performing Models

    Directory of Open Access Journals (Sweden)

    Diane Palmer

    2017-02-01

    Full Text Available Plane-of-array (PoA) irradiation data is a requirement to simulate the energetic performance of photovoltaic devices (PVs). Normally, solar data is only available as global horizontal irradiation, for a limited number of locations, and typically in hourly time resolution. One approach to handling this restricted data is to enhance it initially by interpolation to the location of interest; next, it must be translated to PoA data by separately considering the diffuse and the beam components. There are many methods of interpolation. This research selects ordinary kriging as the best performing technique by studying mathematical properties, experimentation and leave-one-out cross validation. Likewise, a number of different translation models have been developed, most of them parameterised for specific measurement setups and locations. The work presented identifies the optimum approach for the UK on a national scale. The global horizontal irradiation will be split into its constituent parts. Diverse separation models were tried. The results of each separation algorithm were checked against measured data distributed across the UK. It became apparent that while there is little difference between procedures (14 Wh/m2 mean bias error (MBE), 12 Wh/m2 root mean square error (RMSE)), the Ridley, Boland, Lauret equation (a universal split algorithm) consistently performed well. The combined interpolation/separation RMSE is 86 Wh/m2.
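
    The interpolate, separate and transpose workflow can be sketched compactly. In the sketch below, inverse-distance weighting stands in for ordinary kriging, the diffuse-fraction curve is a generic logistic placeholder rather than the fitted Ridley-Boland-Lauret coefficients, and an isotropic-sky model is used for the plane-of-array translation; all numbers are illustrative.

    # Interpolate GHI to a site, split it into diffuse and beam parts, then
    # transpose to plane-of-array. Placeholder models throughout.
    import numpy as np

    def idw_interpolate(xy_known, values, xy_query, power=2.0):
        d = np.linalg.norm(xy_known - xy_query, axis=1)
        if np.any(d == 0):
            return float(values[np.argmin(d)])
        w = 1.0 / d ** power
        return float(np.sum(w * values) / np.sum(w))

    def diffuse_fraction(kt):
        """Generic logistic split of global horizontal into its diffuse share."""
        return 1.0 / (1.0 + np.exp(8.0 * (kt - 0.5)))

    def plane_of_array(ghi, kt, tilt_deg, aoi_deg, zenith_deg, albedo=0.2):
        kd = diffuse_fraction(kt)
        dhi, bhi = kd * ghi, (1.0 - kd) * ghi
        tilt, aoi, zen = np.radians([tilt_deg, aoi_deg, zenith_deg])
        beam = bhi * max(np.cos(aoi), 0.0) / max(np.cos(zen), 1e-3)
        sky = dhi * (1.0 + np.cos(tilt)) / 2.0            # isotropic sky diffuse
        ground = ghi * albedo * (1.0 - np.cos(tilt)) / 2.0
        return beam + sky + ground

    stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
    ghi_obs = np.array([420.0, 390.0, 450.0])
    ghi_here = idw_interpolate(stations, ghi_obs, np.array([0.4, 0.3]))
    poa = plane_of_array(ghi_here, kt=0.6, tilt_deg=35, aoi_deg=30, zenith_deg=50)
    print("PoA irradiance (W/m2):", round(poa, 1))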

  19. A scalable method for parallelizing sampling-based motion planning algorithms

    KAUST Repository

    Jacobs, Sam Ade

    2012-05-01

    This paper describes a scalable method for parallelizing sampling-based motion planning algorithms. It subdivides configuration space (C-space) into (possibly overlapping) regions and independently, in parallel, uses standard (sequential) sampling-based planners to construct roadmaps in each region. Next, in parallel, regional roadmaps in adjacent regions are connected to form a global roadmap. By subdividing the space and restricting the locality of connection attempts, we reduce the work and inter-processor communication associated with nearest neighbor calculation, a critical bottleneck for scalability in existing parallel motion planning methods. We show that our method is general enough to handle a variety of planning schemes, including the widely used Probabilistic Roadmap (PRM) and Rapidly-exploring Random Trees (RRT) algorithms. We compare our approach to two other existing parallel algorithms and demonstrate that our approach achieves better and more scalable performance. Our approach achieves almost linear scalability on a 2400 core LINUX cluster and on a 153,216 core Cray XE6 petascale machine. © 2012 IEEE.
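
    A much-simplified version of the regional strategy is sketched below: the planning space is cut into strips, each worker process builds a small PRM-style roadmap inside its strip, and the regional roadmaps would then be stitched along shared boundaries. The circular obstacle and straight-line local planner are placeholders for a real collision checker.

    # Region-subdivided, parallel roadmap construction (toy 2D version).
    import numpy as np
    from multiprocessing import Pool

    OBSTACLES = [((0.5, 0.5), 0.15)]                     # (center, radius) discs

    def collision_free(p, q, steps=20):
        for t in np.linspace(0.0, 1.0, steps):
            x = (1 - t) * np.asarray(p) + t * np.asarray(q)
            if any(np.linalg.norm(x - np.asarray(c)) < r for c, r in OBSTACLES):
                return False
        return True

    def build_regional_roadmap(args):
        x_lo, x_hi, n = args
        rng = np.random.default_rng(int(x_lo * 1000))
        nodes = np.column_stack([rng.uniform(x_lo, x_hi, n), rng.uniform(0, 1, n)])
        edges = [(i, j) for i in range(n) for j in range(i + 1, n)
                 if np.linalg.norm(nodes[i] - nodes[j]) < 0.2
                 and collision_free(nodes[i], nodes[j])]
        return nodes, edges

    if __name__ == "__main__":
        regions = [(i / 4, (i + 1) / 4, 50) for i in range(4)]   # 4 vertical strips
        with Pool(4) as pool:
            roadmaps = pool.map(build_regional_roadmap, regions)
        # connecting nodes near the shared strip boundaries is omitted for brevity
        print("regional roadmap sizes:", [(len(n), len(e)) for n, e in roadmaps])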

  20. Optimal bit allocation for hybrid scalable/multiple-description video transmission over wireless channels

    Science.gov (United States)

    Jubran, Mohammad K.; Bansal, Manu; Kondi, Lisimachos P.

    2006-01-01

    In this paper, we consider the problem of optimal bit allocation for wireless video transmission over fading channels. We use a newly developed hybrid scalable/multiple-description codec that combines the functionality of both scalable and multiple-description codecs. It produces a base layer and multiple-description enhancement layers. Any of the enhancement layers can be decoded (in a non-hierarchical manner) with the base layer to improve the reconstructed video quality. Two different channel coding schemes (Rate-Compatible Punctured Convolutional (RCPC)/Cyclic Redundancy Check (CRC) coding, and product code Reed-Solomon (RS)+RCPC/CRC coding) are used for unequal error protection of the layered bitstream. Optimal allocation of the bitrate between source and channel coding is performed for discrete sets of source coding rates and channel coding rates. Experimental results are presented for a wide range of channel conditions. Also, comparisons with classical scalable coding show the effectiveness of using hybrid scalable/multiple-description coding for wireless transmission.
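
    Because the source and channel coding rates come from small discrete sets, the joint allocation can be found by direct search, as sketched below; the per-layer quality gains and loss probabilities here are invented for illustration, whereas the paper optimizes measured codec distortion over fading-channel statistics.

    # Exhaustive search over discrete (source rate, channel code rate) choices
    # per layer under a total bit budget; all numbers are illustrative.
    from itertools import product

    SOURCE_RATES = [64, 128, 256]                           # kbps options per layer
    CODE_RATES = {1 / 2: 0.01, 2 / 3: 0.05, 3 / 4: 0.12}    # code rate -> residual loss
    BUDGET_KBPS = 700
    LAYERS = 2

    def total_rate(choice):
        return sum(src / code for src, code in choice)      # includes channel redundancy

    def expected_quality(choice):
        # assumed additive quality model weighted by delivery probability
        return sum((1.0 - CODE_RATES[code]) * (10.0 + 0.05 * src)
                   for src, code in choice)

    options = list(product(SOURCE_RATES, CODE_RATES))
    feasible = (c for c in product(options, repeat=LAYERS)
                if total_rate(c) <= BUDGET_KBPS)
    best = max(feasible, key=expected_quality, default=None)
    print("best (source kbps, channel code rate) per layer:", best)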

  1. fastBMA: scalable network inference and transitive reduction.

    Science.gov (United States)

    Hung, Ling-Hong; Shi, Kaiyuan; Wu, Migao; Young, William Chad; Raftery, Adrian E; Yeung, Ka Yee

    2017-10-01

    Inferring genetic networks from genome-wide expression data is extremely demanding computationally. We have developed fastBMA, a distributed, parallel, and scalable implementation of Bayesian model averaging (BMA) for this purpose. fastBMA also includes a computationally efficient module for eliminating redundant indirect edges in the network by mapping the transitive reduction to an easily solved shortest-path problem. We evaluated the performance of fastBMA on synthetic data and experimental genome-wide time series yeast and human datasets. When using a single CPU core, fastBMA is up to 100 times faster than the next fastest method, LASSO, with increased accuracy. It is a memory-efficient, parallel, and distributed application that scales to human genome-wide expression data. A 10 000-gene regulation network can be obtained in a matter of hours using a 32-core cloud cluster (2 nodes of 16 cores). fastBMA is a significant improvement over its predecessor ScanBMA. It is more accurate and orders of magnitude faster than other fast network inference methods such as the one based on LASSO. The improved scalability allows it to calculate networks from genome-scale data in a reasonable time frame. The transitive reduction method can improve accuracy in denser networks. fastBMA is available as code (M.I.T. license) from GitHub (https://github.com/lhhunghimself/fastBMA), as part of the updated networkBMA Bioconductor package (https://www.bioconductor.org/packages/release/bioc/html/networkBMA.html) and as ready-to-deploy Docker images (https://hub.docker.com/r/biodepot/fastbma/). © The Authors 2017. Published by Oxford University Press.
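
    The idea of mapping transitive reduction to a shortest-path test can be sketched with networkx: weight each edge by the negative log of its confidence, and drop an edge whenever some indirect path is at least as "cheap" as the edge itself. This mirrors the mapping described above but is not fastBMA's exact weighting or implementation.

    # Prune redundant indirect edges via a shortest-path test; networkx assumed.
    import math
    import networkx as nx

    edges = [("A", "B", 0.9), ("B", "C", 0.8), ("A", "C", 0.5)]   # (src, dst, confidence)
    G = nx.DiGraph()
    for u, v, conf in edges:
        G.add_edge(u, v, weight=-math.log(conf))

    pruned = G.copy()
    for u, v in list(G.edges):
        direct = G[u][v]["weight"]
        H = G.copy()
        H.remove_edge(u, v)
        try:
            indirect = nx.shortest_path_length(H, u, v, weight="weight")
        except nx.NetworkXNoPath:
            continue
        if indirect <= direct:        # a stronger indirect path explains the edge away
            pruned.remove_edge(u, v)

    print("edges kept after reduction:", sorted(pruned.edges))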

  2. MicROS-drt: supporting real-time and scalable data distribution in distributed robotic systems.

    Science.gov (United States)

    Ding, Bo; Wang, Huaimin; Fan, Zedong; Zhang, Pengfei; Liu, Hui

    A primary requirement in distributed robotic software systems is the dissemination of data to all interested collaborative entities in a timely and scalable manner. However, providing such a service in a highly dynamic and resource-limited robotic environment is a challenging task, and existing robot software infrastructure has limitations in this aspect. This paper presents a novel robot software infrastructure, micROS-drt, which supports real-time and scalable data distribution. The solution is based on a loosely coupled data publish-subscribe model with the ability to support various time-related constraints. And to realize this model, a mature data distribution standard, the data distribution service for real-time systems (DDS), is adopted as the foundation of the transport layer of this software infrastructure. By elaborately adapting and encapsulating the capability of the underlying DDS middleware, micROS-drt can meet the requirement of real-time and scalable data distribution in distributed robotic systems. Evaluation results in terms of scalability, latency jitter and transport priority as well as the experiment on real robots validate the effectiveness of this work.

  3. Bubble pump: scalable strategy for in-plane liquid routing.

    Science.gov (United States)

    Oskooei, Ali; Günther, Axel

    2015-07-07

    We present an on-chip liquid routing technique intended for application in well-based microfluidic systems that require long-term active pumping at low to medium flowrates. Our technique requires only one fluidic feature layer, one pneumatic control line and does not rely on flexible membranes and mechanical or moving parts. The presented bubble pump is therefore compatible with both elastomeric and rigid substrate materials and the associated scalable manufacturing processes. Directed liquid flow was achieved in a microchannel by an in-series configuration of two previously described "bubble gates", i.e., by gas-bubble enabled miniature gate valves. Only one time-dependent pressure signal is required and initiates at the upstream (active) bubble gate a reciprocating bubble motion. At the downstream (passive) gate a time-constant gas pressure level is applied. In its rest state, the passive gate remains closed and only temporarily opens while the liquid pressure rises due to the active gate's reciprocating bubble motion. We have designed, fabricated and consistently operated our bubble pump with a variety of working liquids for >72 hours. Flow rates of 0-5.5 μl min(-1) were obtained and depended on the selected geometric dimensions, working fluids and actuation frequencies. The maximum operational pressure was 2.9 kPa-9.1 kPa and depended on the interfacial tension of the working fluids. Attainable flow rates compared favorably with those of available micropumps. We achieved flow rate enhancements of 30-100% by operating two bubble pumps in tandem and demonstrated scalability of the concept in a multi-well format with 12 individually and uniformly perfused microchannels. The bubble pump may provide active flow control for analytical and point-of-care diagnostic devices, as well as for microfluidic cell culture and organ-on-chip platforms.

  4. BBCA: Improving the scalability of *BEAST using random binning.

    Science.gov (United States)

    Zimmermann, Théo; Mirarab, Siavash; Warnow, Tandy

    2014-01-01

    Species tree estimation can be challenging in the presence of gene tree conflict due to incomplete lineage sorting (ILS), which can occur when the time between speciation events is short relative to the population size. Of the many methods that have been developed to estimate species trees in the presence of ILS, *BEAST, a Bayesian method that co-estimates the species tree and gene trees given sequence alignments on multiple loci, has generally been shown to have the best accuracy. However, *BEAST is extremely computationally intensive so that it cannot be used with large numbers of loci; hence, *BEAST is not suitable for genome-scale analyses. We present BBCA (boosted binned coalescent-based analysis), a method that can be used with *BEAST (and other such co-estimation methods) to improve scalability. BBCA partitions the loci randomly into subsets, uses *BEAST on each subset to co-estimate the gene trees and species tree for the subset, and then combines the newly estimated gene trees together using MP-EST, a popular coalescent-based summary method. We compare time-restricted versions of BBCA and *BEAST on simulated datasets, and show that BBCA is at least as accurate as *BEAST, and achieves better convergence rates for large numbers of loci. Phylogenomic analysis using *BEAST is currently limited to datasets with a small number of loci, and analyses with even just 100 loci can be computationally challenging. BBCA uses a very simple divide-and-conquer approach that makes it possible to use *BEAST on datasets containing hundreds of loci. This study shows that BBCA provides excellent accuracy and is highly scalable.
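
    The random-binning step at the heart of BBCA is straightforward, as the sketch below shows; the per-bin *BEAST co-estimation runs and the MP-EST summary step are external tools and are only marked as comments.

    # Randomly partition loci into fixed-size bins for separate co-estimation runs.
    import random

    def random_bins(loci, bin_size, seed=0):
        rng = random.Random(seed)
        shuffled = loci[:]
        rng.shuffle(shuffled)
        return [shuffled[i:i + bin_size] for i in range(0, len(shuffled), bin_size)]

    loci = [f"locus_{i:03d}" for i in range(100)]
    for b, subset in enumerate(random_bins(loci, bin_size=25)):
        print(f"bin {b}: {len(subset)} loci")   # each bin -> one *BEAST co-estimation run
    # the resulting gene trees from all bins would then be combined with MP-EST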

  5. Scalable hardware verification with symbolic simulation

    CERN Document Server

    Bertacco, Valeria

    2006-01-01

    An innovative presentation of the theory of disjoint support decomposition, presenting novel results and algorithms, plus original and up-to-date techniques in formal verification. Provides an overview of current verification techniques, and unveils the inner workings of symbolic simulation. Focuses on new techniques that narrow the performance gap between the complexity of digital systems and the limited ability to verify them. Addresses key topics in need of future research.

  6. Scalable Track Initiation for Optical Space Surveillance

    Science.gov (United States)

    Schumacher, P.; Wilkins, M. P.

    2012-09-01

    least cubic and commonly quartic or higher. Therefore, practical implementations require attention to the scalability of the algorithms, when one is dealing with the very large number of observations from large surveillance telescopes. We address two broad categories of algorithms. The first category includes and extends the classical methods of Laplace and Gauss, as well as the more modern method of Gooding, in which one solves explicitly for the apparent range to the target in terms of the given data. In particular, recent ideas offered by Mortari and Karimi allow us to construct a family of range-solution methods that can be scaled to many processors efficiently. We find that the orbit solutions (data association hypotheses) can be ranked by means of a concept we call persistence, in which a simple statistical measure of likelihood is based on the frequency of occurrence of combinations of observations in consistent orbit solutions. Of course, range-solution methods can be expected to perform poorly if the orbit solutions of most interest are not well conditioned. The second category of algorithms addresses this difficulty. Instead of solving for range, these methods attach a set of range hypotheses to each measured line of sight. Then all pair-wise combinations of observations are considered and the family of Lambert problems is solved for each pair. These algorithms also have polynomial complexity, though now the complexity is quadratic in the number of observations and also quadratic in the number of range hypotheses. We offer a novel type of admissible-region analysis, constructing partitions of the orbital element space and deriving rigorous upper and lower bounds on the possible values of the range for each partition. This analysis allows us to parallelize with respect to the element partitions and to reduce the number of range hypotheses that have to be considered in each processor simply by making the partitions smaller. Naturally, there are many ways to

  7. Resource Allocation for OFDMA-Based Cognitive Radio Networks with Application to H.264 Scalable Video Transmission

    Directory of Open Access Journals (Sweden)

    Coon JustinP

    2011-01-01

    Full Text Available Resource allocation schemes for orthogonal frequency division multiple access (OFDMA)-based cognitive radio (CR) networks that impose minimum and maximum rate constraints are considered. To demonstrate the practical application of such systems, we consider the transmission of scalable video sequences. An integer programming (IP) formulation of the problem is presented, which provides the optimal solution when solved using common discrete programming methods. Due to the computational complexity involved in such an approach and its unsuitability for dynamic cognitive radio environments, we propose to use the method of lift-and-project to obtain a stronger formulation for the resource allocation problem such that the integrality gap between the integer program and its linear relaxation is reduced. A simple branching operation is then performed that eliminates any noninteger values at the output of the linear program solvers. Simulation results demonstrate that this simple technique results in solutions very close to the optimum.

  8. Building a reliable, scalable and affordable RTC for AO instruments on ELTs

    Science.gov (United States)

    Gratadour, Damien; Sevin, Arnaud; Perret, Denis; Brule, Julien

    2013-12-01

    Addressing the unprecedented amount of computing power needed by the ELTs AO instruments real-time controllers (RTC) is one of the key technological developments required for the design of the next generation AO systems. Throughput oriented architectures such as GPUs, providing orders of magnitude greater computational performance than high-end CPUs, have recently appeared as attractive and economically viable candidates since the fast emergence of devices capable of general purpose computing. However, using for real-time applications an I/O device which cannot be scheduled nor controlled internally by the operating system, but is sent commands through a closed-source driver, comes with a number of challenges. Building on the experience of almost real-time end-to-end simulations using GPUs, and relying on the development of the COMPASS platform, a unified and optimized framework for AO simulations and real-time control, our team has engaged in the development of a scalable, heterogeneous GPU-based prototype for an AO RTC. In this paper, we review the main challenges arising when utilizing GPUs in real-time systems for AO and rank them in terms of impact significance and available solutions. We present our strategy to mitigate these issues, including the general architecture of our prototype, the real-time core and additional dedicated components for data acquisition and distribution. Finally, we discuss the expected performance in terms of latency and jitter on the basis of realistic benchmarks and focusing on the dimensioning of the MICADO AO module RTC.

  9. Scalability of DL_POLY on High Performance Computing Platform

    CSIR Research Space (South Africa)

    Mabakane, Mabule S

    2017-12-01

    Full Text Available running on up to 100 processors (cores) (W. Smith & Todorov, 2006). DL_POLY_3 (including 3.09) utilises a static/equi-spacial Domain Decomposition parallelisation strategy in which the simulation cell (comprising the atoms, ions or molecules) is divided ... (Lange et al., 2011). Traditionally, it is expected that codes should scale linearly when one increases computational resources such as compute nodes or servers (Chamberlain, Chace, & Patil, 1998; Gropp & Snir, 2009). However, several studies...

  10. Band-to-Band Tunneling Transistors: Scalability and Circuit Performance

    Science.gov (United States)

    2013-05-01

    include radar systems, radio astronomy, and cell phone communications. Device Principles: HEMTs use the properties of a heterojunction to form a ...

  11. Trifocal intraocular lenses: a comparison of the visual performance and quality of vision provided by two different lens designs

    Directory of Open Access Journals (Sweden)

    Gundersen KG

    2017-06-01

    Full Text Available Kjell G Gundersen (IFocus Øyeklinikk AS, Haugesund, Norway), Rick Potvin (Science in Vision, Akron, NY, USA). Purpose: To compare two different diffractive trifocal intraocular lens (IOL) designs, evaluating longer-term refractive outcomes, visual acuity (VA) at various distances, low contrast VA and quality of vision. Patients and methods: Patients with binocularly implanted trifocal IOLs of two different designs (FineVision [FV] and Panoptix [PX]) were evaluated 6 months to 2 years after surgery. Best distance-corrected and uncorrected VA were tested at distance (4 m), intermediate (80 and 60 cm) and near (40 cm). A binocular defocus curve was collected with the subject's best distance correction in place. The preferred reading distance was determined along with the VA at that distance. Low contrast VA at distance was also measured. Quality of vision was measured with the National Eye Institute Visual Function Questionnaire near subset and the Quality of Vision questionnaire. Results: Thirty subjects in each group were successfully recruited. The binocular defocus curves differed only at vergences of −1.0 D (FV better, P=0.02) and −1.5 and −2.00 D (PX better, P<0.01 for both). Best distance-corrected and uncorrected binocular vision were significantly better for the PX lens at 60 cm (P<0.01), with no significant differences at other distances. The preferred reading distance was between 42 and 43 cm for both lenses, with the VA at the preferred reading distance slightly better with the PX lens (P=0.04). There were no statistically significant differences by lens for low contrast VA (P=0.1) or for quality of vision measures (P>0.3). Conclusion: Both trifocal lenses provided excellent distance, intermediate and near vision, but several measures indicated that the PX lens provided better intermediate vision at 60 cm. This may be important to users of tablets and other handheld devices. Quality of vision appeared similar between the two lens designs.

  12. A Novel Polyaniline-Coated Bagasse Fiber Composite with Core-Shell Heterostructure Provides Effective Electromagnetic Shielding Performance.

    Science.gov (United States)

    Zhang, Yang; Qiu, Munan; Yu, Ying; Wen, Bianying; Cheng, Lele

    2017-01-11

    A facile route was proposed to synthesize polyaniline (PANI) uniformly deposited on bagasse fiber (BF) via a one-step in situ polymerization of aniline in the dispersed system of BF. Correlations between the structural, electrical, and electromagnetic properties were extensively investigated. Scanning electron microscopy images confirm that the PANI was coated dominantly on the BF surface, indicating that the as-prepared BF/PANI composite adopted the natural and inexpensive BF as its core and the PANI as the shell. Fourier transform infrared spectra suggest significant interactions between the BF and PANI shell, and a high degree of doping in the PANI shell was achieved. X-ray diffraction results reveal that the crystallization of the PANI shell was improved. The dielectric behaviors are analyzed with respect to dielectric constant, loss tangent, and Cole-Cole plots. The BF/PANI composite exhibits superior electrical conductivity (2.01 ± 0.29 S·cm-1), which is higher than that of the pristine PANI with 1.35 ± 0.15 S·cm-1. The complex permittivity, electromagnetic interference (EMI), shielding effectiveness (SE) values, and attenuation constants of the BF/PANI composite were larger than those of the pristine PANI. The EMI shielding mechanisms of the composite were experimentally and theoretically analyzed. The absorption-dominated total EMI SE of 28.8 dB at a thickness of 0.4 mm indicates the usefulness of the composite for electromagnetic shielding. Moreover, detailed comparison of electrical and EMI shielding properties with respect to the BF/PANI, dedoped BF/PANI composite, and the pristine PANI indicate that the enhancement of electromagnetic properties for the BF/PANI composite was due to the improved conductivity and the core-shell architecture. Thus, the composite has potential commercial applications for high-performance electromagnetic shielding materials and also could be used as a conductive filler to endow polymers with electromagnetic shielding

  13. Historical building monitoring using an energy-efficient scalable wireless sensor network architecture.

    Science.gov (United States)

    Capella, Juan V; Perles, Angel; Bonastre, Alberto; Serrano, Juan J

    2011-01-01

    We present a set of novel low power wireless sensor nodes designed for monitoring wooden masterpieces and historical buildings, in order to perform an early detection of pests. Although our previous star-based system configuration has been in operation for more than 13 years, it does not scale well for sensorization of large buildings or when deploying hundreds of nodes. In this paper we demonstrate the feasibility of a cluster-based dynamic-tree hierarchical Wireless Sensor Network (WSN) architecture where realistic assumptions of radio frequency data transmission are applied to cluster construction, and a mix of heterogeneous nodes are used to minimize economic cost of the whole system and maximize power saving of the leaf nodes. Simulation results show that the specialization of a fraction of the nodes by providing better antennas and some energy harvesting techniques can dramatically extend the life of the entire WSN and reduce the cost of the whole system. A demonstration of the proposed architecture with a new routing protocol and applied to termite pest detection has been implemented on a set of new nodes and should last for about 10 years, but it provides better scalability, reliability and deployment properties.

  14. Parallel peak pruning for scalable SMP contour tree computation

    Energy Technology Data Exchange (ETDEWEB)

    Carr, Hamish A. [Univ. of Leeds (United Kingdom); Weber, Gunther H. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Davis, CA (United States); Sewell, Christopher M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ahrens, James P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-09

    As data sets grow to exascale, automated data analysis and visualisation are increasingly important, to intermediate human understanding and to reduce demands on disk storage via in situ analysis. Trends in architecture of high performance computing systems necessitate analysis algorithms to make effective use of combinations of massively multicore and distributed systems. One of the principal analytic tools is the contour tree, which analyses relationships between contours to identify features of more than local importance. Unfortunately, the predominant algorithms for computing the contour tree are explicitly serial, and founded on serial metaphors, which has limited the scalability of this form of analysis. While there is some work on distributed contour tree computation, and separately on hybrid GPU-CPU computation, there is no efficient algorithm with strong formal guarantees on performance allied with fast practical performance. In this paper, we report the first shared-memory (SMP) algorithm for fully parallel contour tree computation, with formal guarantees of O(lg n lg t) parallel steps and O(n lg n) work, and implementations with up to 10x parallel speed up in OpenMP and up to 50x speed up in NVIDIA Thrust.

  15. Provider performance in treating poor patients - factors influencing prescribing practices in lao PDR: a cross-sectional study

    Directory of Open Access Journals (Sweden)

    Petzold Max

    2011-01-01

    Full Text Available Abstract Background: Out-of-pocket payments make up about 80% of medical care spending at hospitals in Laos, thereby putting poor households at risk of catastrophic health expenditure. Social security schemes in the form of community-based health insurance and health equity funds have been introduced in some parts of the country. Drug and Therapeutics Committees (DTCs) have been established to ensure rational use of drugs and improve quality of care. The objective was to assess the appropriateness and expenditure for treatment for poor patients by health care providers at hospitals in three selected provinces of Laos and to explore associated factors. Methods: Cross-sectional study using four tracer conditions. Structured interviews with 828 in-patients at twelve provincial and district hospitals on the subject of insurance protection, income and expenditures for treatment, including informal payment. Evaluation of each patient's medical record for appropriateness of drug use using a checklist of treatment guidelines (maximum score = 10). Results: No significant difference in appropriateness of care for patients at different income levels, but higher expenditures for patients with the highest income level. The score for appropriate drug use in insured patients was significantly higher than in uninsured patients (5.9 vs. 4.9), and the length of stay in days significantly shorter (2.7 vs. 3.7). Insured patients paid significantly less than uninsured patients, both for medicines (USD 14.8 vs. 43.9) and diagnostic tests (USD 5.9 vs. 9.2). On the contrary, the score for appropriateness of drug use in patients making informal payments was significantly lower than in patients not making informal payments (3.5 vs. 5.1), and the length of stay significantly longer (6.8 vs. 3.2), while expenditures were significantly higher both for medicines (USD 124.5 vs. 28.8) and diagnostic tests (USD 14.1 vs. 7.7). Conclusions: The lower expenditure for insured patients can help reduce

  16. A Novel Scalable Deblocking-Filter Architecture for H.264/AVC and SVC Video Codecs

    OpenAIRE

    Cervero, Teresa; Otero Marnotes, Andres; López, S.; Torre Arnanz, Eduardo de la; Gallicó, G.; Sarmiento, Roberto; Riesgo Alcaide, Teresa

    2011-01-01

    A highly parallel and scalable Deblocking Filter (DF) hardware architecture for H.264/AVC and SVC video codecs is presented in this paper. The proposed architecture mainly consists of a coarse grain systolic array obtained by replicating a unique and homogeneous Functional Unit (FU), in which a whole Deblocking-Filter unit is implemented. The proposal is also based on a novel macroblock-level parallelization strategy of the filtering algorithm which improves the final performance by exploitin...

  17. Analytical Assessment of Security Level of Distributed and Scalable Computer Systems

    OpenAIRE

    Zhengbing Hu; Vadym Mukhin; Yaroslav Kornaga; Yaroslav Lavrenko; Oleg Barabash; Oksana Herasymenko

    2016-01-01

    The article deals with the issues of the security of distributed and scalable computer systems based on the risk-based approach. The main existing methods for predicting the consequences of the dangerous actions of the intrusion agents are described. A generalized structural scheme of the job manager in the context of a risk-based approach is shown. The suggested analytical assessments for the security risk level in the distributed computer systems allow performing the c...

  18. Scalable, remote administration of Windows NT.

    Energy Technology Data Exchange (ETDEWEB)

    Gomberg, M.; Stacey, C.; Sayre, J.

    1999-06-08

    In the UNIX community there is an overwhelming perception that NT is impossible to manage remotely and that NT administration doesn't scale. This was essentially true with earlier versions of the operating system. Even today, out of the box, NT is difficult to manage remotely. Many tools, however, now make remote management of NT not only possible, but under some circumstances very easy. In this paper we discuss how we at Argonne's Mathematics and Computer Science Division manage all our NT machines remotely from a single console, with minimum locally installed software overhead. We also present NetReg, which is a locally developed tool for scalable registry management. NetReg allows us to apply a registry change to a specified set of machines. It is a command line utility that can be run in either interactive or batch mode and is written in Perl for Win32, taking heavy advantage of the Win32::TieRegistry module.
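
    A minimal Python analogue of pushing one registry change to a set of machines is shown below using the standard winreg module (NetReg itself is a Perl tool built on Win32::TieRegistry). The machine names and key path are hypothetical examples; remote access additionally requires administrative rights and the Remote Registry service on each target.

    # Apply one registry value to several machines; winreg is in the standard
    # library on Windows. Hosts and key path below are hypothetical.
    import winreg

    MACHINES = [r"\\lab-pc-01", r"\\lab-pc-02"]
    KEY_PATH = r"SOFTWARE\ExampleOrg\Settings"

    def set_value(machine, key_path, name, value):
        hive = winreg.ConnectRegistry(machine, winreg.HKEY_LOCAL_MACHINE)
        try:
            with winreg.CreateKeyEx(hive, key_path, 0, winreg.KEY_SET_VALUE) as key:
                winreg.SetValueEx(key, name, 0, winreg.REG_SZ, value)
        finally:
            winreg.CloseKey(hive)

    if __name__ == "__main__":
        for m in MACHINES:
            try:
                set_value(m, KEY_PATH, "LogLevel", "verbose")
                print("updated", m)
            except OSError as exc:
                print("failed on", m, "->", exc)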

  19. Scalable conditional induction variables (CIV) analysis

    KAUST Repository

    Oancea, Cosmin E.

    2015-02-01

    Subscripts using induction variables that cannot be expressed as a formula in terms of the enclosing-loop indices appear in the low-level implementation of common programming abstractions such as Alter, or stack operations and pose significant challenges to automatic parallelization. Because the complexity of such induction variables is often due to their conditional evaluation across the iteration space of loops we name them Conditional Induction Variables (CIV). This paper presents a flow-sensitive technique that summarizes both such CIV-based and affine subscripts to program level, using the same representation. Our technique requires no modifications of our dependence tests, which are agnostic to the original shape of the subscripts, and is more powerful than previously reported dependence tests that rely on the pairwise disambiguation of read-write references. We have implemented the CIV analysis in our parallelizing compiler and evaluated its impact on five Fortran benchmarks. We have found that there are many important loops using CIV subscripts and that our analysis can lead to their scalable parallelization. This in turn has led to the parallelization of the benchmark programs they appear in.

  20. A Programmable, Scalable-Throughput Interleaver

    Directory of Open Access Journals (Sweden)

    Rijshouwer EJC

    2010-01-01

    Full Text Available The interleaver stages of digital communication standards show a surprisingly large variation in throughput, state sizes, and permutation functions. Furthermore, data rates for 4G standards such as LTE-Advanced will exceed typical baseband clock frequencies of handheld devices. Multistream operation for Software Defined Radio and iterative decoding algorithms will call for ever higher interleave data rates. Our interleave machine is built around 8 single-port SRAM banks and can be programmed to generate up to 8 addresses every clock cycle. The scalable architecture combines SIMD and VLIW concepts with an efficient resolution of bank conflicts. A wide range of cellular, connectivity, and broadcast interleavers have been mapped on this machine, with throughputs up to more than 0.5 Gsymbol/second. Although it was designed for channel interleaving, the application domain of the interleaver extends also to Turbo interleaving. The presented configuration of the architecture is designed as a part of a programmable outer receiver on a prototype board. It offers (near) universal programmability to enable the implementation of new interleavers. The interleaver measures 2.09 mm2 in 65 nm CMOS (including memories) and proves functional on silicon.
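
    A small behavioral model helps show why bank-conflict resolution matters when 8 addresses are issued per cycle against 8 single-port banks. The permutation below is a plain block (row/column) interleaver with a simple modulo bank mapping, not any particular cellular standard or the chip's actual address generators.

    # Count per-cycle bank conflicts for a toy 8-lane interleaver address stream.
    ROWS, COLS, BANKS, LANES = 8, 16, 8, 8

    def block_interleaver_address(i):
        """Write row-wise, read column-wise: the i-th read comes from this address."""
        return (i % ROWS) * COLS + (i // ROWS)

    conflicts = 0
    for cycle_start in range(0, ROWS * COLS, LANES):
        addrs = [block_interleaver_address(i) for i in range(cycle_start, cycle_start + LANES)]
        banks = [a % BANKS for a in addrs]
        conflicts += LANES - len(set(banks))   # extra serialized accesses needed
    print("bank conflicts over one block:", conflicts)

    With this naive modulo mapping every cycle hits a single bank, which is exactly the situation that a smarter bank assignment or conflict-resolution logic has to avoid.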

  1. Scalable Combinatorial Tools for Health Disparities Research

    Directory of Open Access Journals (Sweden)

    Michael A. Langston

    2014-10-01

    Full Text Available Despite staggering investments made in unraveling the human genome, current estimates suggest that as much as 90% of the variance in cancer and chronic diseases can be attributed to factors outside an individual’s genetic endowment, particularly to environmental exposures experienced across his or her life course. New analytical approaches are clearly required as investigators turn to complicated systems theory and ecological, place-based and life-history perspectives in order to understand more clearly the relationships between social determinants, environmental exposures and health disparities. While traditional data analysis techniques remain foundational to health disparities research, they are easily overwhelmed by the ever-increasing size and heterogeneity of available data needed to illuminate latent gene x environment interactions. This has prompted the adaptation and application of scalable combinatorial methods, many from genome science research, to the study of population health. Most of these powerful tools are algorithmically sophisticated, highly automated and mathematically abstract. Their utility motivates the main theme of this paper, which is to describe real applications of innovative transdisciplinary models and analyses in an effort to help move the research community closer toward identifying the causal mechanisms and associated environmental contexts underlying health disparities. The public health exposome is used as a contemporary focus for addressing the complex nature of this subject.

  2. Design and Implementation of a Scalable Membership Service for Supercomputer Resiliency-Aware Runtime

    Energy Technology Data Exchange (ETDEWEB)

    Tock, Yoav [IBM Corporation, Haifa Research Center; Mandler, Benjamin [IBM Corporation, Haifa Research Center; Moreira, Jose [IBM T. J. Watson Research Center; Jones, Terry R [ORNL

    2013-01-01

    As HPC systems and applications get bigger and more complex, we are approaching an era in which resiliency and run-time elasticity concerns become paramount. We offer a building block for an alternative resiliency approach in which computations will be able to make progress while components fail, in addition to enabling a dynamic set of nodes throughout a computation lifetime. The core of our solution is a hierarchical scalable membership service providing eventual consistency semantics. An attribute replication service is used for hierarchy organization, and is exposed to external applications. Our solution is based on P2P technologies and provides resiliency and elastic runtime support at ultra large scales. Resulting middleware is general purpose while exploiting HPC platform unique features and architecture. We have implemented and tested this system on BlueGene/P with Linux, and using worst-case analysis, evaluated the service scalability as effective for up to 1M nodes.

  3. Influence of pay-for-performance programs on information technology use among child health providers: the devil is in the details.

    Science.gov (United States)

    Menachemi, Nir; Struchen-Shellhorn, Wendy; Brooks, Robert G; Simpson, Lisa

    2009-01-01

    Pay-for-performance programs are used to promote improved health care quality, often through increased use of health information technology. However, little is known about whether pay-for-performance programs influence the adoption of health information technology, especially among child health providers. This study explored how various pay-for-performance compensation methods are related to health information technology use. Survey data from 1014 child health providers practicing in Florida were analyzed by using univariate and multivariate techniques. Questions asked about the adoption of electronic health records and personal digital assistants, as well as types of activities that affected child health provider compensation or income. The most common reported method to affect respondents' compensation was traditional productivity or billing (78%). Of the pay-for-performance-related methods of compensation, child health providers indicated that measures of clinical care (41%), patient surveys and experience (34%), the use of health information technology (29%), and quality bonuses or incentives (27%) were a major or minor factor in their compensation. In multivariate logistic regression analyses, only pay-for-performance programs that compensated directly for health information technology use were associated with an increased likelihood of electronic health record system adoption. Pay-for-performance programs linking measures of clinical quality to compensation were positively associated with personal digital assistant use among child health providers. Pay-for-performance programs that do not directly emphasize health information technology use do not influence the adoption of electronic health records among Florida physicians treating children. Understanding how different pay-for-performance compensation methods incentivize health information technology adoption is important for improving quality.

  4. Scalable TCP-friendly Video Distribution for Heterogeneous Clients

    Science.gov (United States)

    Zink, Michael; Griwodz, Carsten; Schmitt, Jens; Steinmetz, Ralf

    2003-01-01

    This paper investigates an architecture and implementation for the use of a TCP-friendly protocol in a scalable video distribution system for hierarchically encoded layered video. The design supports a variety of heterogeneous clients, because recent developments have shown that access network and client capabilities differ widely in today's Internet. The distribution system presented here consists of video servers, proxy caches and clients that make use of a TCP-friendly rate control (TFRC) to perform congestion controlled streaming of layer encoded video. The data transfer protocol of the system is RTP compliant, yet it integrates protocol elements for congestion control with protocol elements for retransmission that are necessary for lossless transfer of contents into proxy caches. The control protocol RTSP is used to negotiate capabilities, such as support for congestion control or retransmission. By tests performed with our experimental platform in a lab setting and over the Internet, we show that congestion controlled streaming of layer encoded video through proxy caches is a valid means of supporting heterogeneous clients. We show that filtering of layers depending on a TFRC-controlled permissible bandwidth allows the preferred delivery of the most relevant layers to end-systems while additional layers can be delivered to the cache server. We experiment with uncontrolled delivery from the proxy cache to the client as well, which may result in random loss and bandwidth waste but also a higher goodput, and compare these two approaches.
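
    The layer-filtering rule described above is simple to sketch: deliver as many of the most relevant (lowest) layers as fit within the TFRC-permitted rate to the end system, and let the remaining layers flow only as far as the proxy cache. The layer bitrates below are invented for illustration.

    # Filter hierarchically encoded layers against a TFRC-permitted rate.
    LAYER_RATES_KBPS = [200, 150, 150, 250]    # base layer first, then enhancements

    def filter_layers(permitted_kbps):
        sent, used = [], 0
        for layer, rate in enumerate(LAYER_RATES_KBPS):
            if used + rate > permitted_kbps:
                break                          # higher layers are dropped first
            sent.append(layer)
            used += rate
        cache_only = list(range(len(sent), len(LAYER_RATES_KBPS)))
        return sent, cache_only

    for bw in (180, 400, 800):
        to_client, to_cache = filter_layers(bw)
        print(f"TFRC rate {bw} kbps -> client layers {to_client}, cache-only {to_cache}")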

  5. Scalable and versatile graphene functionalized with the Mannich condensate.

    Science.gov (United States)

    Liao, Ruijuan; Tang, Zhenghai; Lin, Tengfei; Guo, Baochun

    2013-03-01

    The functionalized graphene (JTPG) is fabricated by chemical conversion of graphene oxide (GO), using tea polyphenols (TP) as the reducer and stabilizer, followed by further derivatization through the Mannich reaction between the pyrogallol groups on TP and Jeffamine M-2070. JTPG exhibits solubility in a broad spectrum of solvents, long-term stability and single-layered dispersion in water and organic solvents, which are substantiated by AFM, TEM, and XRD. The paper-like JTPG hybrids prepared by vacuum-assisted filtration exhibit an unusual combination of high toughness (tensile strength of ~275 MPa and break strain of ~8%) and high electrical conductivity (~700 S/m). Moreover, JTPG is very promising for the fabrication of polymer/graphene composites due to its excellent solubility in solvents with low boiling points and low toxicity. Accordingly, as an example, nitrile rubber/JTPG composites are fabricated by solution compounding in acetone. The resulting composite shows a low percolation threshold at 0.23 vol.% of graphene. This versatility in both dispersibility and performance, together with the scalable process for producing JTPG, enables a new way to scale up the fabrication of graphene-based polymer composites or hybrids with high performance.

  6. Scalability Issues for Remote Sensing Infrastructure: A Case Study.

    Science.gov (United States)

    Liu, Yang; Picard, Sean; Williamson, Carey

    2017-04-29

    For the past decade, a team of University of Calgary researchers has operated a large "sensor Web" to collect, analyze, and share scientific data from remote measurement instruments across northern Canada. This sensor Web receives real-time data streams from over a thousand Internet-connected sensors, with a particular emphasis on environmental data (e.g., space weather, auroral phenomena, atmospheric imaging). Through research collaborations, we had the opportunity to evaluate the performance and scalability of their remote sensing infrastructure. This article reports the lessons learned from our study, which considered both data collection and data dissemination aspects of their system. On the data collection front, we used benchmarking techniques to identify and fix a performance bottleneck in the system's memory management for TCP data streams, while also improving system efficiency on multi-core architectures. On the data dissemination front, we used passive and active network traffic measurements to identify and reduce excessive network traffic from the Web robots and JavaScript techniques used for data sharing. While our results are from one specific sensor Web system, the lessons learned may apply to other scientific Web sites with remote sensing infrastructure.

  7. Scalable Metropolis Monte Carlo for simulation of hard shapes

    Science.gov (United States)

    Anderson, Joshua A.; Eric Irrgang, M.; Glotzer, Sharon C.

    2016-07-01

    We design and implement a scalable hard particle Monte Carlo simulation toolkit (HPMC), and release it open source as part of HOOMD-blue. HPMC runs in parallel on many CPUs and many GPUs using domain decomposition. We employ BVH trees instead of cell lists on the CPU for fast performance, especially with large particle size disparity, and optimize inner loops with SIMD vector intrinsics on the CPU. Our GPU kernel proposes many trial moves in parallel on a checkerboard and uses a block-level queue to redistribute work among threads and avoid divergence. HPMC supports a wide variety of shape classes, including spheres/disks, unions of spheres, convex polygons, convex spheropolygons, concave polygons, ellipsoids/ellipses, convex polyhedra, convex spheropolyhedra, spheres cut by planes, and concave polyhedra. NVT and NPT ensembles can be run in 2D or 3D triclinic boxes. Additional integration schemes permit Frenkel-Ladd free energy computations and implicit depletant simulations. In a benchmark system of a fluid of 4096 pentagons, HPMC performs 10 million sweeps in 10 min on 96 CPU cores on XSEDE Comet. The same simulation would take 7.6 h in serial. HPMC also scales to large system sizes, and the same benchmark with 16.8 million particles runs in 1.4 h on 2048 GPUs on OLCF Titan.
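
    The core Metropolis move for hard particles is easy to illustrate on a single thread: propose a small random displacement and reject it if any pair of disks would overlap. The toy below captures only that kernel; HPMC's checkerboard GPU decomposition, BVH acceleration and extra shape classes are beyond this sketch.

    # Toy single-threaded hard-disk Monte Carlo with periodic boundaries.
    import numpy as np

    L, SIGMA, N_SIDE = 10.0, 1.0, 6            # box size, disk diameter, lattice side
    xs = (np.arange(N_SIDE) + 0.5) * (L / N_SIDE)
    pos = np.array([[x, y] for x in xs for y in xs])    # non-overlapping start
    N = len(pos)
    rng = np.random.default_rng(1)

    def overlaps(i, trial):
        d = pos - trial
        d -= L * np.round(d / L)               # periodic minimum image
        r2 = np.sum(d * d, axis=1)
        r2[i] = np.inf                         # ignore the particle itself
        return bool(np.any(r2 < SIGMA ** 2))

    accepted = 0
    for sweep in range(200):
        for i in range(N):
            trial = (pos[i] + rng.uniform(-0.1, 0.1, 2)) % L
            if not overlaps(i, trial):
                pos[i] = trial
                accepted += 1
    print("acceptance ratio:", round(accepted / (200 * N), 3))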

  8. Scalable Virtual Network Mapping Algorithm for Internet-Scale Networks

    Science.gov (United States)

    Yang, Qiang; Wu, Chunming; Zhang, Min

    The proper allocation of network resources from a common physical substrate to a set of virtual networks (VNs) is one of the key technical challenges of network virtualization. While a variety of state-of-the-art algorithms have been proposed in an attempt to address this issue from different facets, the challenge still remains in the context of large-scale networks, as the existing solutions mainly perform in a centralized manner which requires maintaining the overall and up-to-date information of the underlying substrate network. This implies restricted scalability and computational efficiency when the network scale becomes large. This paper tackles the virtual network mapping problem and proposes a novel hierarchical algorithm in conjunction with a substrate network decomposition approach. By appropriately transforming the underlying substrate network into a collection of sub-networks, the hierarchical virtual network mapping algorithm can be carried out through a global virtual network mapping algorithm (GVNMA) and a local virtual network mapping algorithm (LVNMA) operated in the network central server and within individual sub-networks respectively, with their cooperation and coordination as necessary. The proposed algorithm is assessed against the centralized approaches through a set of numerical simulation experiments for a range of network scenarios. The results show that the proposed hierarchical approach can be about 5-20 times faster for VN mapping tasks than conventional centralized approaches with acceptable communication overhead between the GVNMA and LVNMA for all examined networks, whilst performing almost as well as the centralized solutions.

  9. Detailed Modeling and Evaluation of a Scalable Multilevel Checkpointing System

    Energy Technology Data Exchange (ETDEWEB)

    Mohror, Kathryn [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Moody, Adam [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bronevetsky, Greg [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); de Supinski, Bronis R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-09-01

    High-performance computing (HPC) systems are growing more powerful by utilizing more components. As the system mean time before failure correspondingly drops, applications must checkpoint frequently to make progress. But, at scale, the cost of checkpointing becomes prohibitive. A solution to this problem is multilevel checkpointing, which employs multiple types of checkpoints in a single run. Moreover, lightweight checkpoints can handle the most common failure modes, while more expensive checkpoints can handle severe failures. We designed a multilevel checkpointing library, the Scalable Checkpoint/Restart (SCR) library, that writes lightweight checkpoints to node-local storage in addition to the parallel file system. We present probabilistic Markov models of SCR's performance. We show that on future large-scale systems, SCR can lead to a gain in machine efficiency of up to 35 percent, and reduce the load on the parallel file system by a factor of two. In addition, we predict that checkpoint scavenging, or only writing checkpoints to the parallel file system on application termination, can reduce the load on the parallel file system by 20 × on today's systems and still maintain high application efficiency.
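
    The scheduling idea behind multilevel checkpointing can be conveyed with a toy cost model: cheap node-local checkpoints are taken frequently, and only every Nth checkpoint is written to the parallel file system. The costs and interval below are invented, and the sketch does not use SCR's actual API or Markov models.

    # Toy comparison of multilevel vs PFS-only checkpoint overhead.
    LOCAL_COST_S, PFS_COST_S = 2.0, 60.0       # assumed per-checkpoint write costs
    PFS_EVERY = 10                             # every 10th checkpoint goes to the PFS

    def multilevel_overhead(num_checkpoints):
        pfs = num_checkpoints // PFS_EVERY
        local = num_checkpoints - pfs
        return local * LOCAL_COST_S + pfs * PFS_COST_S

    def pfs_only_overhead(num_checkpoints):
        return num_checkpoints * PFS_COST_S

    for n in (50, 500):
        print(f"{n} checkpoints: multilevel {multilevel_overhead(n):.0f}s "
              f"vs PFS-only {pfs_only_overhead(n):.0f}s")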

  10. Scalable Multicore Motion Planning Using Lock-Free Concurrency.

    Science.gov (United States)

    Ichnowski, Jeffrey; Alterovitz, Ron

    2014-10-01

    We present PRRT (Parallel RRT) and PRRT* (Parallel RRT*), sampling-based methods for feasible and optimal motion planning designed for modern multicore CPUs. We parallelize RRT and RRT* such that all threads concurrently build a single motion planning tree. Parallelization in this manner requires that data structures, such as the nearest neighbor search tree and the motion planning tree, are safely shared across multiple threads. Rather than rely on traditional locks which can result in slowdowns due to lock contention, we introduce algorithms based on lock-free concurrency using atomic operations. We further improve scalability by using partition-based sampling (which shrinks each core's working data set to improve cache efficiency) and parallel work-saving (in reducing the number of rewiring steps performed in PRRT*). Because PRRT and PRRT* are CPU-based, they can be directly integrated with existing libraries. We demonstrate that PRRT and PRRT* scale well as core counts increase, in some cases exhibiting superlinear speedup, for scenarios such as the Alpha Puzzle and Cubicles scenarios and the Aldebaran Nao robot performing a 2-handed task.

  11. Scalability Issues for Remote Sensing Infrastructure: A Case Study

    Directory of Open Access Journals (Sweden)

    Yang Liu

    2017-04-01

    For the past decade, a team of University of Calgary researchers has operated a large “sensor Web” to collect, analyze, and share scientific data from remote measurement instruments across northern Canada. This sensor Web receives real-time data streams from over a thousand Internet-connected sensors, with a particular emphasis on environmental data (e.g., space weather, auroral phenomena, atmospheric imaging). Through research collaborations, we had the opportunity to evaluate the performance and scalability of their remote sensing infrastructure. This article reports the lessons learned from our study, which considered both data collection and data dissemination aspects of their system. On the data collection front, we used benchmarking techniques to identify and fix a performance bottleneck in the system’s memory management for TCP data streams, while also improving system efficiency on multi-core architectures. On the data dissemination front, we used passive and active network traffic measurements to identify and reduce excessive network traffic from the Web robots and JavaScript techniques used for data sharing. While our results are from one specific sensor Web system, the lessons learned may apply to other scientific Web sites with remote sensing infrastructure.

  12. Using a quality improvement model to enhance providers' performance in maternal and newborn health care: a post-only intervention and comparison design.

    Science.gov (United States)

    Ayalew, Firew; Eyassu, Gizachew; Seyoum, Negash; van Roosmalen, Jos; Bazant, Eva; Kim, Young Mi; Tekleberhan, Alemnesh; Gibson, Hannah; Daniel, Ephrem; Stekelenburg, Jelle

    2017-04-12

    The Standards Based Management and Recognition (SBM-R(©)) approach to quality improvement has been implemented in Ethiopia to strengthen routine maternal and newborn health (MNH) services. This evaluation assessed the effect of the intervention on MNH providers' performance of routine antenatal care (ANC), uncomplicated labor and delivery and immediate postnatal care (PNC) services. A post-only evaluation design was conducted at three hospitals and eight health centers implementing SBM-R and the same number of comparison health facilities. Structured checklists were used to observe MNH providers' performance on ANC (236 provider-client interactions), uncomplicated labor and delivery (226 provider-client interactions), and immediate PNC services in the six hours after delivery (232 provider-client interactions); observations were divided equally between intervention and comparison groups. Main outcomes were provider performance scores, calculated as the percentage of essential tasks in each service area completed by providers. Multilevel analysis was used to calculate adjusted mean percentage performance scores and standard errors to compare intervention and comparison groups. There was no statistically significant difference between intervention and comparison facilities in overall mean performance scores for ANC services (63.4% at intervention facilities versus 61.0% at comparison facilities, p = 0.650) or in any specific ANC skill area. MNH providers' overall mean performance score for uncomplicated labor and delivery care was 11.9 percentage points higher in the intervention than in the comparison group (77.5% versus 65.6%; p = 0.002). Overall mean performance scores for immediate PNC were 22.2 percentage points higher at intervention than at comparison facilities (72.8% versus 50.6%; p = 0.001); and there was a significant difference of 22 percentage points between intervention and comparison facilities for each PNC skill area: care for the newborn

  13. ARC Code TI: Block-GP: Scalable Gaussian Process Regression

    Data.gov (United States)

    National Aeronautics and Space Administration — Block GP is a Gaussian Process regression framework for multimodal data, that can be an order of magnitude more scalable than existing state-of-the-art nonlinear...

  14. Scalable pattern recognition algorithms applications in computational biology and bioinformatics

    CERN Document Server

    Maji, Pradipta

    2014-01-01

    Reviews the development of scalable pattern recognition algorithms for computational biology and bioinformatics. Includes numerous examples and experimental results to support the theoretical concepts described. Concludes each chapter with directions for future research and a comprehensive bibliography.

  15. Scalability of telecom cloud architectures for live-TV distribution

    OpenAIRE

    Asensio Carmona, Adrian; Contreras, Luis Miguel; Ruiz Ramírez, Marc; López Álvarez, Victor; Velasco Esteban, Luis Domingo

    2015-01-01

    A hierarchical distributed telecom cloud architecture for live-TV distribution exploiting flexgrid networking and SBVTs is proposed. Its scalability is compared to that of a centralized architecture. Cost savings as high as 32% are shown.

  16. Scalable RFCMOS Model for 90 nm Technology

    Directory of Open Access Journals (Sweden)

    Ah Fatt Tong

    2011-01-01

    This paper presents the formation of the parasitic components that exist in the RF MOSFET structure during its high-frequency operation. The parasitic components are extracted from the transistor's S-parameter measurement, and their geometry dependence is studied with respect to the layout structure. Physical geometry equations are proposed to represent these parasitic components, and by implementing them into the RF model, a scalable RFCMOS model that is valid up to 49.85 GHz is demonstrated. A new verification technique is proposed to verify the quality of the developed scalable RFCMOS model. The proposed technique can shorten the verification time of the scalable RFCMOS model and ensure that the coded scalable model file is error-free and thus more reliable to use.

  17. Parallelism and Scalability in an Image Processing Application

    DEFF Research Database (Denmark)

    Rasmussen, Morten Sleth; Stuart, Matthias Bo; Karlsson, Sven

    2008-01-01

    This paper investigates parallelism and scalability of an embedded image processing application. The major challenges faced when parallelizing the application were to extract enough parallelism from the application and to reduce load imbalance. The application has limited immediately available parallelism.

  18. Parallelizing SLPA for Scalable Overlapping Community Detection

    Directory of Open Access Journals (Sweden)

    Konstantin Kuzmin

    2015-01-01

    Communities in networks are groups of nodes whose connections within the community are stronger than their connections to the rest of the network. Quite often nodes participate in multiple communities; that is, communities can overlap. In this paper, we first analyze what other researchers have done to utilize high performance computing to perform efficient community detection in social, biological, and other networks. We note that detection of overlapping communities is more computationally intensive than disjoint community detection, and the former presents new challenges that algorithm designers have to face. Moreover, the running time of many existing algorithms grows superlinearly with the network size, making them unsuitable for processing large datasets. We use the Speaker-Listener Label Propagation Algorithm (SLPA) as the basis for our parallel overlapping community detection implementation. SLPA provides near linear time overlapping community detection and is well suited for parallelization. We explore the benefits of a multithreaded programming paradigm and show that it yields a significant performance gain over sequential execution while preserving the high quality of community detection. The algorithm was tested on four real-world datasets with up to 5.5 million nodes and 170 million edges. In order to assess the quality of community detection, at least 4 different metrics were used for each of the datasets.
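
    A minimal sequential version of the speaker-listener loop is sketched below to make the algorithm concrete; the paper's contribution is running the listener updates across threads. The graph, the number of iterations T, and the post-processing threshold r are illustrative choices.

```python
# Minimal sequential SLPA sketch. The parallel implementation in the paper runs
# listener updates across threads; here the loop is serial and the parameter
# values (iterations T, threshold r) are illustrative.
import random
from collections import Counter

def slpa(adj, T=20, r=0.1, seed=0):
    random.seed(seed)
    memory = {v: Counter([v]) for v in adj}          # each node starts with its own label
    for _ in range(T):
        for listener in adj:
            heard = Counter()
            for speaker in adj[listener]:
                labels, counts = zip(*memory[speaker].items())
                heard[random.choices(labels, weights=counts)[0]] += 1
            if heard:
                memory[listener][heard.most_common(1)[0][0]] += 1
    # post-processing: keep labels whose frequency in a node's memory exceeds r
    communities = {}
    for v, mem in memory.items():
        total = sum(mem.values())
        for label, cnt in mem.items():
            if cnt / total >= r:
                communities.setdefault(label, set()).add(v)
    return communities

# Two overlapping triangles sharing node 3.
adj = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4, 5], 4: [3, 5], 5: [3, 4]}
for label, members in slpa(adj).items():
    print(label, sorted(members))
```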

  19. A scalable, fully automated process for construction of sequence-ready human exome targeted capture libraries.

    Science.gov (United States)

    Fisher, Sheila; Barry, Andrew; Abreu, Justin; Minie, Brian; Nolan, Jillian; Delorey, Toni M; Young, Geneva; Fennell, Timothy J; Allen, Alexander; Ambrogio, Lauren; Berlin, Aaron M; Blumenstiel, Brendan; Cibulskis, Kristian; Friedrich, Dennis; Johnson, Ryan; Juhn, Frank; Reilly, Brian; Shammas, Ramy; Stalker, John; Sykes, Sean M; Thompson, Jon; Walsh, John; Zimmer, Andrew; Zwirko, Zac; Gabriel, Stacey; Nicol, Robert; Nusbaum, Chad

    2011-01-01

    Genome targeting methods enable cost-effective capture of specific subsets of the genome for sequencing. We present here an automated, highly scalable method for carrying out the Solution Hybrid Selection capture approach that provides a dramatic increase in scale and throughput of sequence-ready libraries produced. Significant process improvements and a series of in-process quality control checkpoints are also added. These process improvements can also be used in a manual version of the protocol.

  20. Blocking Self-avoiding Walks Stops Cyber-epidemics: A Scalable GPU-based Approach

    OpenAIRE

    Nguyen, Hung T.; Cano, Alberto; Vu, Tam; Dinh, Thang N.

    2017-01-01

    Cyber-epidemics, the widespread of fake news or propaganda through social media, can cause devastating economic and political consequences. A common countermeasure against cyber-epidemics is to disable a small subset of suspected social connections or accounts to effectively contain the epidemics. An example is the recent shutdown of 125,000 ISIS-related Twitter accounts. Despite many proposed methods to identify such subset, none are scalable enough to provide high-quality solutions in nowad...

  1. CIC portal: a Collaborative and Scalable Integration Platform for High Availability Grid Operations

    OpenAIRE

    Cavalli, Alessandro; Cordier, Hélène; L'Orphelin, Cyril; Reynaud, Sylvain; Mathieu, Gilles; Pagano, Alfredo; Aidel, Osman

    2016-01-01

    EGEE, along with its sister project LCG, manages the world's largest Grid production infrastructure, which nowadays spans over 260 sites in more than 40 countries. Just as building such a system requires novel approaches, its management also requires innovation. From an operational point of view, the first challenge we face is to provide scalable procedures and tools able to monitor the ever expanding infrastructure and the constant evolution of the needs. The se...

  2. celerite: Scalable 1D Gaussian Processes in C++, Python, and Julia

    Science.gov (United States)

    Foreman-Mackey, Daniel; Agol, Eric; Ambikasaran, Sivaram; Angus, Ruth

    2017-09-01

    celerite provides fast and scalable Gaussian Process (GP) Regression in one dimension and is implemented in C++, Python, and Julia. The celerite API is designed to be familiar to users of george and, like george, celerite is designed to efficiently evaluate the marginalized likelihood of a dataset under a GP model. This can then be used alongside a non-linear optimization or posterior inference library for the best results.
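
    A short usage sketch, following the GP/terms pattern documented for celerite's Python interface, is given below; the kernel choice and all parameter values are arbitrary, and the exact API should be checked against the current celerite documentation.

```python
# Usage sketch for celerite's Python interface, following its documented
# GP/terms pattern; the kernel choice and parameter values are arbitrary.
# (Requires `pip install celerite numpy`.)
import numpy as np
import celerite
from celerite import terms

t = np.sort(np.random.uniform(0, 10, 100))
yerr = 0.1 * np.ones_like(t)
y = np.sin(t) + yerr * np.random.randn(len(t))

kernel = terms.RealTerm(log_a=0.0, log_c=0.0)   # simple exponential kernel
gp = celerite.GP(kernel, mean=0.0)
gp.compute(t, yerr)                             # scalable factorization
print("log likelihood:", gp.log_likelihood(y))  # feed this to an optimizer/sampler
```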

  3. Affordable and Scalable Manufacturing of Wearable Multi-Functional Sensory “Skin” for Internet of Everything Applications

    KAUST Repository

    Nassar, Joanna M.

    2017-10-01

    Demand for wearable electronics is expected to at least triple by 2020, embracing all sorts of Internet of Everything (IoE) applications, such as activity tracking, environmental mapping, and advanced healthcare monitoring, with the purpose of enhancing quality of life. This entails the wide availability of free-form multifunctional sensory systems (i.e., “skin” platforms) that can conform to a variety of uneven surfaces, providing intimate contact and adhesion with the skin, necessary for localized and enhanced sensing capabilities. However, current wearable devices tend to be bulky, rigid and not convenient for continuous wear in everyday life, hindering their implementation in advanced and unexplored applications beyond fitness tracking. Besides, they retail at high price points, which limits their availability to at least half of the World’s population. Hence, form factor (physical flexibility and/or stretchability), cost, and accessibility become the key drivers for further developments. To support this need for affordable and adaptive wearables and drive academic developments in “skin” platforms into practical and functional consumer devices, compatibility and integration into a high performance yet low power system is crucial to sustain the high data rates and large data management driven by IoE. Likewise, scalability becomes essential for batch fabrication and precision. Therefore, I propose to develop three distinct but necessary “skin” platforms using scalable and cost effective manufacturing techniques. My first approach is the fabrication of a CMOS-compatible “silicon skin”, crucial for any truly autonomous and conformal wearable device, where monolithic integration between a heterogeneous material-based sensory platform and system components is a challenge yet to be addressed. My second approach displays an even more affordable and accessible “paper skin”, using recyclable and off-the-shelf materials, targeting environmental

  4. SDC: Scalable description coding for adaptive streaming media

    OpenAIRE

    Quinlan, Jason J.; Zahran, Ahmed H.; Sreenan, Cormac J.

    2012-01-01

    Video compression techniques enable adaptive media streaming over heterogeneous links to end-devices. Scalable Video Coding (SVC) and Multiple Description Coding (MDC) represent well-known techniques for video compression with distinct characteristics in terms of bandwidth efficiency and resiliency to packet loss. In this paper, we present Scalable Description Coding (SDC), a technique to compromise the tradeoff between bandwidth efficiency and error resiliency without sacrificing user-percei...

  5. Improving health worker performance of abortion services: an assessment of post-training support to providers in India, Nepal and Nigeria.

    Science.gov (United States)

    Benson, Janie; Healy, Joan; Dijkerman, Sally; Andersen, Kathryn

    2017-11-21

    Health worker performance has been the focus of numerous interventions and evaluation studies in low- and middle-income countries. Few have examined changes in individual provider performance with an intervention encompassing post-training support contacts to improve their clinical practice and resolve programmatic problems. This paper reports the results of an intervention with 3471 abortion providers in India, Nepal and Nigeria. Following abortion care training, providers received in-person visits and virtual contacts by a clinical and programmatic support team for a 12-month period, designed to address their individual practice issues. The intervention also included technical assistance to and upgrades in facilities where the providers worked. Quantitative measures to assess provider performance were established, including: 1) Increase in service provision; 2) Consistent service provision; 3) Provision of high quality of care through use of World Health Organization-recommended uterine evacuation technologies, management of pain and provision of post-abortion contraception; and 4) Post-abortion contraception method mix. Descriptive univariate analysis was conducted, followed by examination of the bivariate relationships between all independent variables and the four dependent performance outcome variables by calculating unadjusted odds ratios, by country and overall. Finally, multivariate logistic regression was performed for each outcome. Providers received an average of 5.7 contacts. Sixty-two percent and 46% of providers met measures for consistent service provision and quality of care, respectively. Fewer providers achieved an increased number of services (24%). Forty-six percent provided an appropriate postabortion contraceptive mix to clients. Most providers met the quality components for use of WHO-recommended abortion methods and provision of pain management. Factors significantly associated with achievement of all measures were providers working in

  6. Who do you prefer? A study of public preferences for health care provider type in performing cutaneous surgery and cosmetic procedures in the United States.

    Science.gov (United States)

    Bangash, Haider K; Ibrahimi, Omar A; Green, Lawrence J; Alam, Murad; Eisen, Daniel B; Armstrong, April W

    2014-06-01

    The public preference for provider type in performing cutaneous surgery and cosmetic procedures is unknown in the United States. An internet-based survey was administered to the lay public. Respondents were asked to select the health care provider (dermatologist, plastic surgeon, primary care physician, general surgeon, and nurse practitioner/physician's assistant) they mostly prefer to perform different cutaneous cosmetic and surgical procedures. Three hundred fifty-four respondents undertook the survey. Dermatologists were identified as the most preferable health care provider to evaluate and biopsy worrisome lesions on the face (69.8%), perform skin cancer surgery on the back (73.4%), perform skin cancer surgery on the face (62.7%), and perform laser procedures (56.3%) by most of the respondents. For filler injections, the responders similarly identified plastic surgeons and dermatologists (47.3% vs 44.6%, respectively) as the most preferred health care provider. For botulinum toxin injections, there was a slight preference for plastic surgeons followed by dermatologists (50.6% vs 38.4%). Plastic surgeons were the preferred health care provider for procedures such as liposuction (74.4%) and face-lift surgery (96.1%) by most of the respondents. Dermatologists are recognized as the preferred health care providers over plastic surgeons, primary care physicians, general surgeons, and nurse practitioners/physician's assistants to perform a variety of cutaneous cosmetic and surgical procedures including skin cancer surgery, on the face and body, and laser procedures. The general public expressed similar preferences for dermatologists and plastic surgeons regarding filler injections.

  7. Development of an instrument for a primary airway provider's performance with an ICU multidisciplinary team in pediatric respiratory failure using simulation.

    Science.gov (United States)

    Nishisaki, Akira; Donoghue, Aaron J; Colborn, Shawn; Watson, Christine; Meyer, Andrew; Niles, Dana; Bishnoi, Ram; Hales, Roberta; Hutchins, Larissa; Helfaer, Mark A; Brown, Calvin A; Walls, Ron M; Nadkarni, Vinay M; Boulet, John R

    2012-07-01

    To develop a scoring system that can assess the multidisciplinary management of respiratory failure in a pediatric ICU. In a single tertiary pediatric ICU we conducted a simulation-based evaluation in a patient care area auxiliary to the ICU. The subjects were pediatric and emergency medicine residents, nurses, and respiratory therapists who work in the pediatric ICU. A multidisciplinary focus group with experienced providers in pediatric ICU airway management and patient safety specialists was formed. A task-based scoring instrument was developed to evaluate a primary airway provider's performance through Healthcare Failure Mode and Effect Analysis. Reliability and validity of the instrument were evaluated using multidisciplinary simulation-based airway management training sessions. Each session was evaluated by 3 independent expert raters. A global assessment of the team performance and the previous experience in training were used to evaluate the validity of the instrument. The Just-in-Time Pediatric Airway Provider Performance Scale (JIT-PAPPS) version 3, with 34 task-based items (14 technical, 20 behavioral), was developed. Eighty-five teams led by resident airway providers were evaluated by 3 raters. The intraclass correlation coefficient for raters was 0.64. The JIT-PAPPS score correlated well with the global rating scale (r = 0.71). A task-based scale to evaluate a primary airway provider's performance with a multidisciplinary pediatric ICU team on simulated pediatric respiratory failure was thus developed. Reliability and validity evaluation supports the developed scale.

  8. Approaches for scalable modeling and emulation of cyber systems : LDRD final report.

    Energy Technology Data Exchange (ETDEWEB)

    Mayo, Jackson R.; Minnich, Ronald G.; Armstrong, Robert C.; Rudish, Don W.

    2009-09-01

    The goal of this research was to combine theoretical and computational approaches to better understand the potential emergent behaviors of large-scale cyber systems, such as networks of approximately 10^6 computers. The scale and sophistication of modern computer software, hardware, and deployed networked systems have significantly exceeded the computational research community's ability to understand, model, and predict current and future behaviors. This predictive understanding, however, is critical to the development of new approaches for proactively designing new systems or enhancing existing systems with robustness to current and future cyber threats, including distributed malware such as botnets. We have developed preliminary theoretical and modeling capabilities that can ultimately answer questions such as: How would we reboot the Internet if it were taken down? Can we change network protocols to make them more secure without disrupting existing Internet connectivity and traffic flow? We have begun to address these issues by developing new capabilities for understanding and modeling Internet systems at scale. Specifically, we have addressed the need for scalable network simulation by carrying out emulations of a network with approximately 10^6 virtualized operating system instances on a high-performance computing cluster - a 'virtual Internet'. We have also explored mappings between previously studied emergent behaviors of complex systems and their potential cyber counterparts. Our results provide foundational capabilities for further research toward understanding the effects of complexity in cyber systems, to allow anticipating and thwarting hackers.

  9. A highly scalable massively parallel fast marching method for the Eikonal equation

    Science.gov (United States)

    Yang, Jianming; Stern, Frederick

    2017-03-01

    The fast marching method is a widely used numerical method for solving the Eikonal equation arising from a variety of scientific and engineering fields. It is long deemed inherently sequential and an efficient parallel algorithm applicable to large-scale practical applications is not available in the literature. In this study, we present a highly scalable massively parallel implementation of the fast marching method using a domain decomposition approach. Central to this algorithm is a novel restarted narrow band approach that coordinates the frequency of communications and the amount of computations extra to a sequential run for achieving an unprecedented parallel performance. Within each restart, the narrow band fast marching method is executed; simple synchronous local exchanges and global reductions are adopted for communicating updated data in the overlapping regions between neighboring subdomains and getting the latest front status, respectively. The independence of front characteristics is exploited through special data structures and augmented status tags to extract the masked parallelism within the fast marching method. The efficiency, flexibility, and applicability of the parallel algorithm are demonstrated through several examples. These problems are extensively tested on six grids with up to 1 billion points using different numbers of processes ranging from 1 to 65536. Remarkable parallel speedups are achieved using tens of thousands of processes. Detailed pseudo-codes for both the sequential and parallel algorithms are provided to illustrate the simplicity of the parallel implementation and its similarity to the sequential narrow band fast marching algorithm.
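
    For readers unfamiliar with the baseline, the sketch below implements the sequential fast marching loop on a 2D grid with unit speed (so the solution is a distance field); it is this heap-driven narrow-band update that the paper distributes across subdomains with restarts. Grid size, source location, and the simple first-order update are illustrative choices.

```python
# Minimal sequential fast marching sketch on a uniform 2D grid with unit speed
# (so T is the distance from the source). The paper parallelizes this narrow-band
# loop via domain decomposition and restarts; grid size and source are illustrative.
import heapq, math

def fast_marching(nx, ny, source, h=1.0):
    INF = float("inf")
    T = [[INF] * ny for _ in range(nx)]
    accepted = [[False] * ny for _ in range(nx)]
    T[source[0]][source[1]] = 0.0
    heap = [(0.0, source)]
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if accepted[i][j]:
            continue
        accepted[i][j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            x, y = i + di, j + dj
            if 0 <= x < nx and 0 <= y < ny and not accepted[x][y]:
                a = min(T[x - 1][y] if x > 0 else INF, T[x + 1][y] if x < nx - 1 else INF)
                b = min(T[x][y - 1] if y > 0 else INF, T[x][y + 1] if y < ny - 1 else INF)
                if abs(a - b) < h:                     # two-sided quadratic update
                    t_new = 0.5 * (a + b + math.sqrt(2 * h * h - (a - b) ** 2))
                else:                                  # one-sided update
                    t_new = min(a, b) + h
                if t_new < T[x][y]:
                    T[x][y] = t_new
                    heapq.heappush(heap, (t_new, (x, y)))
    return T

T = fast_marching(50, 50, (0, 0))
print(round(T[30][40], 2))   # roughly the Euclidean distance 50
```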

  10. GeoWeb Crawler: An Extensible and Scalable Web Crawling Framework for Discovering Geospatial Web Resources

    Directory of Open Access Journals (Sweden)

    Chih-Yuan Huang

    2016-08-01

    With the advance of World-Wide Web (WWW) technology, people can easily share content on the Web, including geospatial data and web services. Thus, “big geospatial data management” issues are starting to attract attention. Among the big geospatial data issues, this research focuses on discovering distributed geospatial resources. As resources are scattered on the WWW, users cannot find resources of interest efficiently. While the WWW has Web search engines addressing web resource discovery issues, we envision that the geospatial Web (i.e., GeoWeb) also requires GeoWeb search engines. To realize a GeoWeb search engine, one of the first steps is to proactively discover GeoWeb resources on the WWW. Hence, in this study, we propose the GeoWeb Crawler, an extensible Web crawling framework that can find various types of GeoWeb resources, such as Open Geospatial Consortium (OGC) web services, Keyhole Markup Language (KML) files, and Environmental Systems Research Institute, Inc. (ESRI) Shapefiles. In addition, we apply the distributed computing concept to improve the performance of the GeoWeb Crawler. The result shows that for 10 targeted resource types, the GeoWeb Crawler discovered 7351 geospatial services and 194,003 datasets. As a result, the proposed GeoWeb Crawler framework is proven to be extensible and scalable to provide a comprehensive index of the GeoWeb.
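
    The extensibility argument can be pictured as a registry of per-resource-type detectors, as in the sketch below; the detection rules shown (URL patterns and capability tags) are simplified stand-ins, not the GeoWeb Crawler's actual logic.

```python
# Sketch of an extensible resource-detector registry in the spirit of an
# extensible GeoWeb crawler: each plugin recognizes one resource type from a
# fetched URL/payload. Detection rules here are simplified illustrations.
DETECTORS = []

def detector(name):
    def register(fn):
        DETECTORS.append((name, fn))
        return fn
    return register

@detector("OGC WMS")
def is_wms(url, body):
    return "service=wms" in url.lower() or "<WMS_Capabilities" in body

@detector("KML")
def is_kml(url, body):
    return url.lower().endswith((".kml", ".kmz")) or "<kml" in body.lower()

@detector("ESRI Shapefile")
def is_shapefile(url, body):
    return ".shp" in url.lower()

def classify(url, body=""):
    """Return every resource type whose detector matches the candidate."""
    return [name for name, fn in DETECTORS if fn(url, body)]

print(classify("http://example.com/geoserver/wms?service=WMS&request=GetCapabilities"))
print(classify("http://example.com/data/boundaries.kml"))
```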

  11. Scalable Indium Phosphide Thin-Film Nanophotonics Platform for Photovoltaic and Photoelectrochemical Devices.

    Science.gov (United States)

    Lin, Qingfeng; Sarkar, Debarghya; Lin, Yuanjing; Yeung, Matthew; Blankemeier, Louis; Hazra, Jubin; Wang, Wei; Niu, Shanyuan; Ravichandran, Jayakanth; Fan, Zhiyong; Kapadia, Rehan

    2017-05-23

    Recent developments in nanophotonics have provided a clear roadmap for improving the efficiency of photonic devices through control over absorption and emission of devices. These advances could prove transformative for a wide variety of devices, such as photovoltaics, photoelectrochemical devices, photodetectors, and light-emitting diodes. However, it is often challenging to physically create the nanophotonic designs required to engineer the optical properties of devices. Here, we present a platform based on crystalline indium phosphide that enables thin-film nanophotonic structures with physical morphologies that are impossible to achieve through conventional state-of-the-art material growth techniques. Specifically, nanostructured InP thin films have been demonstrated on non-epitaxial alumina inverted nanocone (i-cone) substrates via a low-cost and scalable thin-film vapor-liquid-solid growth technique. In this process, indium films are first evaporated onto the i-cone structures in the desired morphology, followed by a high-temperature step that causes a phase transformation of the indium into indium phosphide, preserving the original morphology of the deposited indium. Through this approach, a wide variety of nanostructured film morphologies are accessible using only control over evaporation process variables. Critically, the as-grown nanotextured InP thin films demonstrate excellent optoelectronic properties, suggesting this platform is promising for future high-performance nanophotonic devices.

  12. Development of a scalable generic platform for adaptive optics real time control

    Science.gov (United States)

    Surendran, Avinash; Burse, Mahesh P.; Ramaprakash, A. N.; Parihar, Padmakar

    2015-06-01

    The main objective of the present project is to explore the viability of an adaptive optics control system based exclusively on Field Programmable Gate Arrays (FPGAs), making strong use of their parallel processing capability. In an Adaptive Optics (AO) system, the generation of the Deformable Mirror (DM) control voltages from the Wavefront Sensor (WFS) measurements is usually through the multiplication of the wavefront slopes with a predetermined reconstructor matrix. The ability to access several hundred hard multipliers and memories concurrently in an FPGA allows performance far beyond that of a modern CPU or GPU for tasks with a well-defined structure such as Adaptive Optics control. The target of the current project is to generate real-time wavefront-correction signals from the Wavefront Sensor measurements, with a system flexible enough to accommodate all current wavefront sensing techniques as well as the different methods used for wavefront compensation. The system should also accommodate different data transmission protocols (e.g., Ethernet, USB, IEEE 1394) for transmitting data to and from the FPGA device, thus providing a more flexible platform for Adaptive Optics control. Preliminary simulation results for the formulation of the platform, and the design of a fully scalable slope computer, are presented.

  13. The Scalable Reasoning System: Lightweight Visualization for Distributed Analytics

    Energy Technology Data Exchange (ETDEWEB)

    Pike, William A.; Bruce, Joseph R.; Baddeley, Robert L.; Best, Daniel M.; Franklin, Lyndsey; May, Richard A.; Rice, Douglas M.; Riensche, Roderick M.; Younkin, Katarina

    2009-03-01

    A central challenge in visual analytics is the creation of accessible, widely distributable analysis applications that bring the benefits of visual discovery to as broad a user base as possible. Moreover, to support the role of visualization in the knowledge creation process, it is advantageous to allow users to describe the reasoning strategies they employ while interacting with analytic environments. We introduce an application suite called the Scalable Reasoning System (SRS), which provides web-based and mobile interfaces for visual analysis. The service-oriented analytic framework that underlies SRS provides a platform for deploying pervasive visual analytic environments across an enterprise. SRS represents a “lightweight” approach to visual analytics whereby thin client analytic applications can be rapidly deployed in a platform-agnostic fashion. Client applications support multiple coordinated views while giving analysts the ability to record evidence, assumptions, hypotheses and other reasoning artifacts. We describe the capabilities of SRS in the context of a real-world deployment at a regional law enforcement organization.

  14. Interactive and Animated Scalable Vector Graphics and R Data Displays

    Directory of Open Access Journals (Sweden)

    Deborah Nolan

    2012-01-01

    We describe an approach to creating interactive and animated graphical displays using R's graphics engine and Scalable Vector Graphics, an XML vocabulary for describing two-dimensional graphical displays. We use the svg() graphics device in R and then post-process the resulting XML documents. The post-processing identifies the elements in the SVG that correspond to the different components of the graphical display, e.g., points, axes, labels, lines. One can then annotate these elements to add interactivity and animation effects. One can also use JavaScript to provide dynamic interactive effects to the plot, enabling rich user interactions and compelling visualizations. The resulting SVG documents can be embedded within HTML documents and can involve JavaScript code that integrates the SVG and HTML objects. The functionality is provided via the SVGAnnotation package and makes static plots generated via R graphics functions available as stand-alone, interactive and animated plots for the Web and other venues.

  15. Scalable Photogrammetric Motion Capture System "mosca": Development and Application

    Science.gov (United States)

    Knyaz, V. A.

    2015-05-01

    A wide variety of applications (from industrial to entertainment) need reliable and accurate 3D information about the motion of an object and its parts. Very often the motion is rather fast, as in cases of vehicle movement, sport biomechanics, and animation of cartoon characters. Motion capture systems based on different physical principles are used for these purposes. Vision-based systems have great potential for high accuracy and a high degree of automation due to progress in image processing and analysis. A scalable, inexpensive motion capture system has been developed as a convenient and flexible tool for solving various tasks requiring 3D motion analysis. It is based on photogrammetric techniques of 3D measurement and provides high speed image acquisition, high accuracy of 3D measurements and highly automated processing of captured data. Depending on the application, the system can be easily modified for different working areas from 100 mm to 10 m. The developed motion capture system uses from 2 to 4 technical vision cameras for acquiring video sequences of object motion. All cameras work in synchronization mode at frame rates up to 100 frames per second under the control of a personal computer, providing the possibility for accurate calculation of 3D coordinates of interest points. The system was used in a set of different application fields and demonstrated high accuracy and a high level of automation.

  16. A scalable parallel black oil simulator on distributed memory parallel computers

    Science.gov (United States)

    Wang, Kun; Liu, Hui; Chen, Zhangxin

    2015-11-01

    This paper presents our work on developing a parallel black oil simulator for distributed memory computers based on our in-house parallel platform. The parallel simulator is designed to overcome the performance issues of common simulators that are implemented for personal computers and workstations. The finite difference method is applied to discretize the black oil model. In addition, some advanced techniques are employed to strengthen the robustness and parallel scalability of the simulator, including an inexact Newton method, matrix decoupling methods, and algebraic multigrid methods. A new multi-stage preconditioner is proposed to accelerate the solution of linear systems from the Newton methods. Numerical experiments show that our simulator is scalable and efficient, and is capable of simulating extremely large-scale black oil problems with tens of millions of grid blocks using thousands of MPI processes on parallel computers.

  17. A Secure and Stable Multicast Overlay Network with Load Balancing for Scalable IPTV Services

    Directory of Open Access Journals (Sweden)

    Tsao-Ta Wei

    2012-01-01

    The emerging multimedia Internet application IPTV over P2P networks offers significant advantages in scalability. However, IPTV media content delivered in P2P networks over the public Internet still raises issues of privacy and intellectual property rights. In this paper, we use the SIP protocol to construct a secure application-layer multicast overlay network for IPTV, called SIPTVMON. SIPTVMON can secure all the IPTV media delivery paths against eavesdroppers via elliptic-curve Diffie-Hellman (ECDH) key exchange on SIP signaling and AES encryption. Its load-balancing overlay tree is also optimized for peer heterogeneity and churn of peer joining and leaving to minimize both service degradation and latency. The performance results from large-scale simulations and experiments on different optimization criteria demonstrate SIPTVMON's cost effectiveness in quality of privacy protection, stability under user churn, and good perceptual quality of objective PSNR values for scalable IPTV services over the Internet.
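
    The ECDH-then-AES pattern referred to above can be sketched with recent versions of the Python cryptography package as below; the curve, the HKDF step, and the AES-GCM mode are assumptions for illustration, and SIPTVMON's exact cipher suite and SIP integration may differ.

```python
# Sketch of an ECDH key agreement followed by AES encryption, in the spirit of
# the scheme described above, using `pip install cryptography`. Curve, KDF, and
# AES-GCM mode are illustrative assumptions, not SIPTVMON's exact choices.
import os
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Each peer generates an ephemeral EC key pair; public keys travel in signaling.
peer_a = ec.generate_private_key(ec.SECP256R1())
peer_b = ec.generate_private_key(ec.SECP256R1())

# Both sides derive the same shared secret and stretch it into an AES key.
secret = peer_a.exchange(ec.ECDH(), peer_b.public_key())
assert secret == peer_b.exchange(ec.ECDH(), peer_a.public_key())
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
           info=b"iptv-media-key").derive(secret)

# Media payloads are then encrypted along the overlay delivery path.
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b"video chunk #42", None)
print(len(ciphertext), "bytes of ciphertext")
```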

  18. A New, Scalable and Low Cost Multi-Channel Monitoring System for Polymer Electrolyte Fuel Cells

    Directory of Open Access Journals (Sweden)

    Antonio José Calderón

    2016-03-01

    In this work a new, scalable and low cost multi-channel monitoring system for Polymer Electrolyte Fuel Cells (PEFCs) has been designed, constructed and experimentally validated. This developed monitoring system performs non-intrusive voltage measurement of each individual cell of a PEFC stack and it is scalable, in the sense that it is capable of carrying out measurements in stacks from 1 to 120 cells (from watts to kilowatts). The developed system comprises two main subsystems: hardware devoted to data acquisition (DAQ) and software devoted to real-time monitoring. The DAQ subsystem is based on the low-cost open-source platform Arduino and the real-time monitoring subsystem has been developed using the high-level graphical language NI LabVIEW. Such integration can be considered a novelty in the scientific literature for PEFC monitoring systems. An original amplifying and multiplexing board has been designed to increase the Arduino input port availability. Data storage and real-time monitoring have been performed with an easy-to-use interface. Graphical and numerical visualization allows continuous tracking of cell voltage. Scalability, flexibility, ease of use, versatility and low cost are the main features of the proposed approach. The system is described and experimental results are presented. These results demonstrate its suitability to monitor the voltage in a PEFC at cell level.

  19. A New, Scalable and Low Cost Multi-Channel Monitoring System for Polymer Electrolyte Fuel Cells

    Science.gov (United States)

    Calderón, Antonio José; González, Isaías; Calderón, Manuel; Segura, Francisca; Andújar, José Manuel

    2016-01-01

    In this work a new, scalable and low cost multi-channel monitoring system for Polymer Electrolyte Fuel Cells (PEFCs) has been designed, constructed and experimentally validated. This developed monitoring system performs non-intrusive voltage measurement of each individual cell of a PEFC stack and it is scalable, in the sense that it is capable of carrying out measurements in stacks from 1 to 120 cells (from watts to kilowatts). The developed system comprises two main subsystems: hardware devoted to data acquisition (DAQ) and software devoted to real-time monitoring. The DAQ subsystem is based on the low-cost open-source platform Arduino and the real-time monitoring subsystem has been developed using the high-level graphical language NI LabVIEW. Such integration can be considered a novelty in the scientific literature for PEFC monitoring systems. An original amplifying and multiplexing board has been designed to increase the Arduino input port availability. Data storage and real-time monitoring have been performed with an easy-to-use interface. Graphical and numerical visualization allows continuous tracking of cell voltage. Scalability, flexibility, ease of use, versatility and low cost are the main features of the proposed approach. The system is described and experimental results are presented. These results demonstrate its suitability to monitor the voltage in a PEFC at cell level. PMID:27005630

  20. Using a quality improvement model to enhance providers' performance in maternal and newborn health care : a post-only intervention and comparison design

    NARCIS (Netherlands)

    Ayalew, Firew; Eyassu, Gizachew; Seyoum, Negash; van Roosmalen, Jos; Bazant, Eva; Kim, Young Mi; Tekleberhan, Alemnesh; Gibson, Hannah; Daniel, Ephrem; Stekelenburg, Jelle

    2017-01-01

    Background: The Standards Based Management and Recognition (SBM-R (R)) approach to quality improvement has been implemented in Ethiopia to strengthen routine maternal and newborn health (MNH) services. This evaluation assessed the effect of the intervention on MNH providers' performance of routine

  1. Attitudes, subjective norms, and intention to perform routine oral examination for oropharyngeal candidiasis as perceived by primary health-care providers in Nairobi Province

    NARCIS (Netherlands)

    Koyio, L.N.; Kikwilu, E.N.; Mulder, J.; Frencken, J.E.F.M.

    2013-01-01

    Objectives: To assess attitudes, subjective norms, and intentions of primary health-care (PHC) providers in performing routine oral examination for oropharyngeal candidiasis (OPC) during outpatient consultations. Methods: A 47-item Theory of Planned Behaviour-based questionnaire was developed and

  2. The effect of interprofessional education on interprofessional performance and diabetes care knowledge of health care teams at the level one of health service providing

    Directory of Open Access Journals (Sweden)

    Nikoo Yamani

    2014-01-01

    Conclusion: It seems that inter-professional education can improve the quality of health care to some extent through influencing knowledge and collaborative performance of health care teams. It also can make the health-related messages provided to the covered population more consistent in addition to enhancing self-confidence of the personnel.

  3. Scalable real space pseudopotential-density functional codes for materials applications

    Science.gov (United States)

    Chelikowsky, James R.; Lena, Charles; Schofield, Grady; Saad, Yousef; Deslippe, Jack; Yang, Chao

    2015-03-01

    Real-space pseudopotential density functional theory has proven to be an efficient method for computing the properties of matter in many different states and geometries, including liquids, wires, slabs and clusters with and without spin polarization. Fully self-consistent solutions have been routinely obtained for systems with thousands of atoms. However, there are still systems where quantum mechanical accuracy is desired, but scalability proves to be a hindrance, such as large biological molecules or complex interfaces. We will present an overview of our work on new algorithms, which offer improved scalability by implementing another layer of parallelism, and by optimizing communication and memory management. Support provided by the SciDAC program, Department of Energy, Office of Science, Advanced Scientific Computing Research and Basic Energy Sciences. Grant Numbers DE-SC0008877 (Austin) and DE-FG02-12ER4 (Berkeley).

  4. Layered self-identifiable and scalable video codec for delivery to heterogeneous receivers

    Science.gov (United States)

    Feng, Wei; Kassim, Ashraf A.; Tham, Chen-Khong

    2003-06-01

    This paper describes the development of a layered structure of a multi-resolutional scalable video codec based on the Color Set Partitioning in Hierarchical Trees (CSPIHT) scheme. The new structure is designed in such a way that it supports the network Quality of Service (QoS) by allowing packet marking in a real-time layered multicast system with heterogeneous clients. Also, it provides (spatial) resolution/frame rate scalability from one embedded bit stream. The codec is self-identifiable since it labels the encoded bit stream according to the resolution. We also introduce asymmetry to the CSPIHT encoder and decoder which makes it possible to decode lossy bit streams at heterogeneous clients.

  5. Scalable Sensor Data Processor: A Multi-Core Payload Data Processor ASIC

    Science.gov (United States)

    Berrojo, L.; Moreno, R.; Regada, R.; Garcia, E.; Trautner, R.; Rauwerda, G.; Sunesen, K.; He, Y.; Redant, S.; Thys, G.; Andersson, J.; Habinc, S.

    2015-09-01

    The Scalable Sensor Data Processor (SSDP) project, under ESA contract and with TAS-E as prime contractor, targets the development of a multi-core ASIC for payload data processing to be used, among other terrestrial and space application areas, in future scientific and exploration missions with harsh radiation environments. The SSDP is a mixed-signal heterogeneous multi-core System-on-Chip (SoC). It combines GPP and NoC-based DSP subsystems with on-chip ADCs and several standard space I/Fs to make a flexible, configurable and scalable device. The NoC comprises two state-of-the-art fixed point Xentium® DSP processors, providing the device with high data processing capabilities.

  6. AEGIS: a robust and scalable real-time public health surveillance system.

    Science.gov (United States)

    Reis, Ben Y; Kirby, Chaim; Hadden, Lucy E; Olson, Karen; McMurry, Andrew J; Daniel, James B; Mandl, Kenneth D

    2007-01-01

    In this report, we describe the Automated Epidemiological Geotemporal Integrated Surveillance system (AEGIS), developed for real-time population health monitoring in the state of Massachusetts. AEGIS provides public health personnel with automated near-real-time situational awareness of utilization patterns at participating healthcare institutions, supporting surveillance of bioterrorism and naturally occurring outbreaks. As real-time public health surveillance systems become integrated into regional and national surveillance initiatives, the challenges of scalability, robustness, and data security become increasingly prominent. A modular and fault tolerant design helps AEGIS achieve scalability and robustness, while a distributed storage model with local autonomy helps to minimize risk of unauthorized disclosure. The report includes a description of the evolution of the design over time in response to the challenges of a regional and national integration environment.

  7. GenePING: secure, scalable management of personal genomic data

    Directory of Open Access Journals (Sweden)

    Kohane Isaac S

    2006-04-01

    Background: Patient genomic data are rapidly becoming part of clinical decision making. Within a few years, full genome expression profiling and genotyping will be affordable enough to perform on every individual. The management of such sizeable, yet fine-grained, data in compliance with privacy laws and best practices presents significant security and scalability challenges. Results: We present the design and implementation of GenePING, an extension to the PING personal health record system that supports secure storage of large, genome-sized datasets, as well as efficient sharing and retrieval of individual datapoints (e.g. SNPs, rare mutations, gene expression levels). Even with full access to the raw GenePING storage, an attacker cannot discover any stored genomic datapoint on any single patient. Given a large-enough number of patient records, an attacker cannot discover which data corresponds to which patient, or even the size of a given patient's record. The computational overhead of GenePING's security features is a small constant, making the system usable, even in emergency care, on today's hardware. Conclusion: GenePING is the first personal health record management system to support the efficient and secure storage and sharing of large genomic datasets. GenePING is available online at http://ping.chip.org/genepinghtml, licensed under the LGPL.

  8. Scalable and responsive event processing in the cloud

    Science.gov (United States)

    Suresh, Visalakshmi; Ezhilchelvan, Paul; Watson, Paul

    2013-01-01

    Event processing involves continuous evaluation of queries over streams of events. Response-time optimization is traditionally done over a fixed set of nodes and/or by using metrics measured at query-operator levels. Cloud computing makes it easy to acquire and release computing nodes as required. Leveraging this flexibility, we propose a novel, queueing-theory-based approach for meeting specified response-time targets against fluctuating event arrival rates by drawing only the necessary amount of computing resources from a cloud platform. In the proposed approach, the entire processing engine of a distinct query is modelled as an atomic unit for predicting response times. Several such units hosted on a single node are modelled as a multiple class M/G/1 system. These aspects eliminate intrusive, low-level performance measurements at run-time, and also offer portability and scalability. Using model-based predictions, cloud resources are efficiently used to meet response-time targets. The efficacy of the approach is demonstrated through cloud-based experiments. PMID:23230164
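
    As a back-of-the-envelope illustration of the queueing idea, the sketch below predicts the mean response time of a single M/G/1 server with the Pollaczek-Khinchine formula and grows the node count until a target is met; the paper's multi-class model is richer, and all numbers here are invented.

```python
# Back-of-the-envelope sketch of the queueing idea: model each query engine as
# an M/G/1 server, predict mean response time with the Pollaczek-Khinchine
# formula, and grow the node count until a response-time target is met.

def mg1_response_time(lam, mean_s, second_moment_s):
    """Mean response time of an M/G/1 queue (arrival rate lam, service time S)."""
    rho = lam * mean_s
    if rho >= 1.0:
        return float("inf")                      # unstable: need more nodes
    wait = lam * second_moment_s / (2.0 * (1.0 - rho))
    return mean_s + wait

def nodes_needed(total_rate, mean_s, second_moment_s, target, max_nodes=1024):
    """Smallest node count whose evenly-split load meets the target."""
    for n in range(1, max_nodes + 1):
        if mg1_response_time(total_rate / n, mean_s, second_moment_s) <= target:
            return n
    raise RuntimeError("target not reachable within max_nodes")

# 300 events/s overall, 20 ms mean service time, exponential-like service
# (E[S^2] = 2 * E[S]^2), and a 50 ms response-time target (all illustrative).
print(nodes_needed(300.0, 0.020, 2 * 0.020 ** 2, 0.050))
```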

  9. Scalable bonding of nanofibrous polytetrafluoroethylene (PTFE) membranes on microstructures

    Science.gov (United States)

    Mortazavi, Mehdi; Fazeli, Abdolreza; Moghaddam, Saeed

    2018-01-01

    Expanded polytetrafluoroethylene (ePTFE) nanofibrous membranes exhibit high porosity (80%–90%), high gas permeability, chemical inertness, and superhydrophobicity, which makes them a suitable choice in many demanding fields including industrial filtration, medical implants, bio-/nano- sensors/actuators and microanalysis (i.e. lab-on-a-chip). However, one of the major challenges that inhibit implementation of such membranes is their inability to bond to other materials due to their intrinsic low surface energy and chemical inertness. Prior attempts to improve adhesion of ePTFE membranes to other surfaces involved surface chemical treatments which have not been successful due to degradation of the mechanical integrity and the breakthrough pressure of the membrane. Here, we report a simple and scalable method of bonding ePTFE membranes to different surfaces via the introduction of an intermediate adhesive layer. While a variety of adhesives can be used with this technique, the highest bonding performance is obtained for adhesives that have moderate contact angles with the substrate and low contact angles with the membrane. A thin layer of an adhesive can be uniformly applied onto micro-patterned substrates with feature sizes down to 5 µm using a roll-coating process. Membrane-based microchannel and micropillar devices with burst pressures of up to 200 kPa have been successfully fabricated and tested. A thin layer of the membrane remains attached to the substrate after debonding, suggesting that mechanical interlocking through nanofiber engagement is the main mechanism of adhesion.

  10. Scalable Fast Rate-Distortion Optimization for H.264/AVC

    Directory of Open Access Journals (Sweden)

    Yu Hongtao

    2006-01-01

    The latest H.264/AVC video coding standard aims at significantly improving compression performance compared to all existing video coding standards. In order to achieve this, variable block-size inter- and intra-coding, with block sizes as large as 16×16 and as small as 4×4, is used to enable very precise depiction of motion and texture details. The Lagrangian rate-distortion optimization (RDO) can be employed to select the best coding mode. However, exhaustively searching through all coding modes is computationally expensive. This paper proposes a scalable fast RDO algorithm to effectively choose the best coding mode without exhaustively searching through all the coding modes. The statistical properties of MBs are analyzed to determine the order of coding modes in the mode decision priority queue such that the most probable mode will be checked first, followed by the second most probable mode, and so forth. The process will be terminated as soon as the computed rate-distortion (RD) cost is below a threshold which is content adaptive and is also dependent on the RD cost of the previous MBs. By adjusting the threshold we can choose a good tradeoff between time saving and peak signal-to-noise ratio (PSNR). Experimental results show that the proposed fast RDO algorithm can drastically reduce the encoding time by up to 50% with negligible loss of coding efficiency.
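
    The early-termination idea can be sketched as follows: modes are tried in decreasing order of probability and the search stops as soon as the RD cost falls below a content-adaptive threshold. The mode order, toy costs, and threshold rule below are illustrative, not the statistics derived in the paper.

```python
# Sketch of priority-ordered mode decision with early termination: modes are
# checked from most to least probable and the search stops once the RD cost
# drops below an adaptive threshold. All numbers and the mode order are toys.

def choose_mode(rd_cost, modes_by_probability, threshold):
    best_mode, best_cost = None, float("inf")
    for mode in modes_by_probability:            # most probable mode first
        cost = rd_cost(mode)
        if cost < best_cost:
            best_mode, best_cost = mode, cost
        if cost < threshold:                     # early exit: "good enough"
            break
    return best_mode, best_cost

# Toy RD costs for one macroblock and a threshold derived from previous MBs.
costs = {"SKIP": 950, "16x16": 620, "8x8": 540, "4x4": 530, "INTRA": 880}
prev_mb_costs = [600, 580, 640]
threshold = 1.1 * (sum(prev_mb_costs) / len(prev_mb_costs))   # content-adaptive
print(choose_mode(lambda m: costs[m],
                  ["SKIP", "16x16", "8x8", "4x4", "INTRA"], threshold))
```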

  11. Scalable Parallel Density-based Clustering and Applications

    Science.gov (United States)

    Patwary, Mostofa Ali

    2014-04-01

    Recently, density-based clustering algorithms (DBSCAN and OPTICS) have received significant attention from the scientific community due to their unique capability of discovering arbitrarily shaped clusters and eliminating noise data. These algorithms have several applications which require high performance computing, including finding halos and subhalos (clusters) in massive cosmology data in astrophysics, analyzing satellite images, X-ray crystallography, and anomaly detection. However, parallelization of these algorithms is extremely challenging as they exhibit an inherently sequential data access order and unbalanced workloads, resulting in low parallel efficiency. To break the data access sequentiality and to achieve high parallelism, we develop new parallel algorithms, both for DBSCAN and OPTICS, designed using graph algorithmic techniques. For example, our parallel DBSCAN algorithm exploits the similarities between DBSCAN and computing connected components. Using datasets containing up to a billion floating point numbers, we show that our parallel density-based clustering algorithms significantly outperform the existing algorithms, achieving speedups up to 27.5 on 40 cores on a shared memory architecture and speedups up to 5,765 using 8,192 cores on a distributed memory architecture. In our experiments, we found that while achieving this scalability, our algorithms produce clustering results with quality comparable to the classical algorithms.
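
    The connection to connected components that the parallel DBSCAN exploits can be seen in the serial sketch below, where core points within eps of each other are merged with a disjoint-set (union-find) structure; the O(n^2) neighbour search is kept only for clarity.

```python
# Sketch of the DBSCAN/connected-components connection: core points within eps
# of each other are unioned into one cluster with a disjoint-set structure.
# Serial and O(n^2) purely for clarity; parameters are illustrative.
import math

def dbscan_union_find(points, eps, min_pts):
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]        # path halving
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    neighbors = [[j for j in range(n) if j != i
                  and math.dist(points[i], points[j]) <= eps] for i in range(n)]
    core = [i for i in range(n) if len(neighbors[i]) + 1 >= min_pts]
    core_set = set(core)
    for i in core:                               # union core points; attach borders once
        for j in neighbors[i]:
            if j in core_set or find(j) == j:
                union(j, i)
    # label = cluster root for core/attached points, -1 (noise) otherwise
    return [find(i) if (i in core_set or find(i) != i) else -1 for i in range(n)]

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (50, 50)]
print(dbscan_union_find(pts, eps=1.5, min_pts=2))
```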

  12. Spatiotemporal Stochastic Modeling of IoT Enabled Cellular Networks: Scalability and Stability Analysis

    KAUST Repository

    Gharbieh, Mohammad

    2017-05-02

    The Internet of Things (IoT) is large-scale by nature, which is manifested by the massive number of connected devices as well as their vast spatial existence. Cellular networks, which provide ubiquitous, reliable, and efficient wireless access, will play a fundamental role in delivering the first-mile access for the data tsunami to be generated by the IoT. However, cellular networks may have scalability problems in providing uplink connectivity to massive numbers of connected things. To characterize the scalability of cellular uplink in the context of IoT networks, this paper develops a traffic-aware spatiotemporal mathematical model for IoT devices supported by cellular uplink connectivity. The developed model is based on stochastic geometry and queueing theory to account for the traffic requirement per IoT device, the different transmission strategies, and the mutual interference between the IoT devices. To this end, the developed model is utilized to characterize the extent to which cellular networks can accommodate IoT traffic as well as to assess and compare three different transmission strategies that incorporate a combination of transmission persistency, backoff, and power-ramping. The analysis and the results clearly illustrate the scalability problem imposed by IoT on cellular networks and offer insights into effective scenarios for each transmission strategy.

  13. Attitudes, subjective norms, and intention to perform routine oral examination for oropharyngeal candidiasis as perceived by primary health-care providers in Nairobi Province.

    Science.gov (United States)

    Koyio, Lucina N; Kikwilu, Emil; Mulder, Jan; Frencken, Jo E

    2013-01-01

    To assess attitudes, subjective norms, and intentions of primary health-care (PHC) providers in performing routine oral examination for oropharyngeal candidiasis (OPC) during outpatient consultations. A 47-item Theory of Planned Behaviour-based questionnaire was developed and administered, in a cross-sectional survey, to 216 PHC providers (clinical officers and nurses) working in 54 clinics, dispensaries, and health centers in Nairobi Province in January 2010. The constructs - attitudes, subjective norms, and perceived behavioral control (dependent variables) - and their individual indirect (direct) items were analyzed for scores, internal validity, independent variables (district, gender, years of service, profession, and age), and contribution to intentions. Perceived behavioral control had low construct validity and was therefore removed from subsequent analyses. The questionnaire was completed by 195 participants (90 percent response rate). PHC providers' attitudes, subjective norms, and intentions to perform an oral examination during outpatient consultations were highly positive, with mean scores of 6.30 (0.82), 6.06 (1.07), and 5.6 (1.33), respectively, regardless of sociodemographic characteristics. Indirect attitude and subjective norms were strongly correlated with their individual items (r = 0.63-0.79). Attitudes and subjective norms (P < 0.0001) were both predictive of intentions. PHC providers were willing to integrate patients' oral health care into their routine medical consultations. Emphasizing the importance of detecting other oral problems and of the fact that routine oral examination for OPC is likely to give patients fulfillment will enhance PHC providers' morale in performing routine oral examinations. Winning support from policy makers, their supervisors, specialists, and colleagues is important for motivating PHC providers to perform routine oral examinations for OPC at their workplaces. © 2012 American Association of Public Health Dentistry.

  14. The design of a scalable, fixed-time computer benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Gustafson, J.; Rover, D.; Elbert, S.; Carter, M.

    1990-10-01

    By using the principle of fixed-time benchmarking, it is possible to compare a very wide range of computers, from a small personal computer to the most powerful parallel supercomputer, on a single scale. Fixed-time benchmarks promise far greater longevity than those based on a particular problem size, and are more appropriate for "grand challenge" capability comparison. We present the design of a benchmark, SLALOM{trademark}, that scales automatically to the computing power available, and corrects several deficiencies in various existing benchmarks: it is highly scalable, it solves a real problem, it includes input and output times, and it can be run on parallel machines of all kinds, using any convenient language. The benchmark provides a reasonable estimate of the size of problem solvable on scientific computers. Results are presented that span six orders of magnitude for contemporary computers of various architectures. The benchmark can also be used to demonstrate a new source of superlinear speedup in parallel computers. 15 refs., 14 figs., 3 tabs.
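
    As a minimal illustration of the fixed-time principle (not the SLALOM benchmark itself, which solves a radiosity problem), the sketch below grows a stand-in workload until a fixed time budget is exhausted and reports the largest problem size that fit. The matrix-multiply workload and the doubling schedule are assumptions for illustration.

```python
import time
import numpy as np

def workload(n):
    """Stand-in problem of size n (an n x n matrix multiply); SLALOM itself
    solves a radiosity problem, so this is only illustrative."""
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    np.dot(a, b)

def fixed_time_benchmark(budget_s=1.0, start_n=64):
    """Return the largest problem size whose run fits the fixed time budget,
    doubling n until the budget is exceeded (a sketch of the fixed-time
    idea: the answer is a problem size, not a run time)."""
    n, best = start_n, None
    while True:
        t0 = time.perf_counter()
        workload(n)
        elapsed = time.perf_counter() - t0
        if elapsed > budget_s:
            return best, n, elapsed
        best = n
        n *= 2

print(fixed_time_benchmark())
```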

  15. Hierarchical sets: analyzing pangenome structure through scalable set visualizations.

    Science.gov (United States)

    Pedersen, Thomas Lin

    2017-06-01

    The increase in available microbial genome sequences has resulted in an increase in the size of the pangenomes being analyzed. Current pangenome visualizations are not intended for the pangenome sizes possible today, and new approaches are necessary in order to convert the increase in available information into an increase in knowledge. As the pangenome data structure is essentially a collection of sets, we explore the potential for scalable set visualization as a tool for pangenome analysis. We present a new hierarchical clustering algorithm based on set arithmetic that optimizes the intersection sizes along the branches. The intersection and union sizes along the hierarchy are visualized using a composite dendrogram and icicle plot, which, in a pangenome context, shows the evolution of pangenome and core size along the evolutionary hierarchy. Outlying elements, i.e. elements whose presence patterns do not correspond with the hierarchy, can be visualized using hierarchical edge bundles. When applied to pangenome data this plot shows putative horizontal gene transfers between the genomes and can highlight relationships between genomes that are not represented by the hierarchy. We illustrate the utility of hierarchical sets by applying it to a pangenome based on 113 Escherichia and Shigella genomes and find it provides a powerful addition to pangenome analysis. The described clustering algorithm and visualizations are implemented in the hierarchicalSets R package available from CRAN ( https://cran.r-project.org/web/packages/hierarchicalSets ). thomasp85@gmail.com. Supplementary data are available at Bioinformatics online.
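
    The hierarchicalSets R package is the authoritative implementation; purely as an illustration of clustering by set arithmetic, the toy sketch below greedily merges the pair of gene-set clusters with the largest intersection and reports pangenome (union) and core (intersection) sizes at each merge. The genome names and gene families are made up.

```python
from itertools import combinations

def hierarchical_sets(genomes):
    """Toy agglomerative clustering over gene sets: repeatedly merge the two
    clusters whose member sets share the largest intersection. Illustrative
    sketch only, not the hierarchicalSets algorithm."""
    clusters = [(name, set(genes)) for name, genes in genomes.items()]
    while len(clusters) > 1:
        i, j = max(combinations(range(len(clusters)), 2),
                   key=lambda ij: len(clusters[ij[0]][1] & clusters[ij[1]][1]))
        (la, sa), (lb, sb) = clusters[i], clusters[j]
        core = sa & sb                         # intersection = core genome
        merged = ((la, lb), sa | sb)           # union = pangenome of the clade
        print(f"merge {la} + {lb}: pangenome={len(sa | sb)}, core={len(core)}")
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
    return clusters[0][0]                      # nested tuple acts as a dendrogram

tree = hierarchical_sets({
    "E.coli_A": {"g1", "g2", "g3", "g4"},
    "E.coli_B": {"g1", "g2", "g3", "g5"},
    "Shigella": {"g1", "g2", "g6"},
})
print(tree)
```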

  16. CX: A Scalable, Robust Network for Parallel Computing

    Directory of Open Access Journals (Sweden)

    Peter Cappello

    2002-01-01

    Full Text Available CX, a network-based computational exchange, is presented. The system's design integrates variations of ideas from other researchers, such as work stealing, non-blocking tasks, eager scheduling, and space-based coordination. The object-oriented API is simple, compact, and cleanly separates application logic from the logic that supports interprocess communication and fault tolerance. Computations, of course, run to completion in the presence of computational hosts that join and leave the ongoing computation. Such hosts, or producers, use task caching and prefetching to overlap computation with interprocessor communication. To break a potential task server bottleneck, a network of task servers is presented. Even though task servers are envisioned as reliable, the self-organizing, scalable network of n servers, described as a sibling-connected height-balanced fat tree, tolerates a sequence of n-1 server failures. Tasks are distributed throughout the server network via a simple "diffusion" process. CX is intended as a test bed for research on automated silent auctions, reputation services, authentication services, and bonding services. CX also provides a test bed for algorithm research into network-based parallel computation.

  17. SAChES: Scalable Adaptive Chain-Ensemble Sampling.

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ray, Jaideep [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ebeida, Mohamed Salah [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Huang, Maoyi [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hou, Zhangshuan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bao, Jie [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Ren, Huiying [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2017-08-01

    We present the development of a parallel Markov Chain Monte Carlo (MCMC) method called SAChES, Scalable Adaptive Chain-Ensemble Sampling. This capability is targeted to Bayesian calibration of computationally expensive simulation models. SAChES involves a hybrid of two methods: Differential Evolution Monte Carlo followed by Adaptive Metropolis. Both methods involve parallel chains. Differential evolution allows one to explore high-dimensional parameter spaces using loosely coupled (i.e., largely asynchronous) chains. Loose coupling allows the use of large chain ensembles, with far more chains than the number of parameters to explore. This reduces per-chain sampling burden, enables high-dimensional inversions and the use of computationally expensive forward models. The large number of chains can also ameliorate the impact of silent errors, which may affect only a few chains. The chain ensemble can also be sampled to provide an initial condition when an aberrant chain is re-spawned. Adaptive Metropolis takes the best points from the differential evolution and efficiently homes in on the posterior density. The multitude of chains in SAChES is leveraged to (1) enable efficient exploration of the parameter space; and (2) ensure robustness to silent errors which may be unavoidable in extreme-scale computational platforms of the future. This report outlines SAChES, describes four papers that are the result of the project, and discusses some additional results.
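
    The SAChES code itself is not reproduced here; the sketch below shows the kind of loosely coupled differential-evolution MCMC ensemble it builds on, applied to a toy Gaussian posterior. The chain count, step scaling, and target density are illustrative assumptions, and the adaptive-Metropolis refinement stage is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_post(x):
    """Toy target: standard Gaussian log-density (a placeholder for an
    expensive simulation-based posterior)."""
    return -0.5 * np.sum(x**2)

def de_mc(n_chains=20, dim=5, n_steps=2000, eps=1e-4):
    """Minimal differential-evolution MCMC ensemble: each chain proposes a
    jump along the difference of two other chains and applies a Metropolis
    accept/reject step. A sketch of the ensemble idea, not the SAChES code."""
    gamma = 2.38 / np.sqrt(2 * dim)                # standard DE-MC scaling
    x = rng.normal(size=(n_chains, dim)) * 5.0     # over-dispersed start
    lp = np.array([log_post(xi) for xi in x])
    for _ in range(n_steps):
        for i in range(n_chains):
            a, b = rng.choice([k for k in range(n_chains) if k != i],
                              size=2, replace=False)
            prop = x[i] + gamma * (x[a] - x[b]) + eps * rng.normal(size=dim)
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp[i]:
                x[i], lp[i] = prop, lp_prop
    return x

samples = de_mc()
print("posterior mean estimate:", samples.mean(axis=0))
```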

  18. Relationship of Provider and Practice Volume to Performance Measure Adherence for Coronary Artery Disease, Heart Failure, and Atrial Fibrillation: Results From the National Cardiovascular Data Registry.

    Science.gov (United States)

    Fleming, Lisa M; Jones, Philip; Chan, Paul S; Andrei, Adin-Christian; Maddox, Thomas M; Farmer, Steven A

    2016-01-01

    There is a reported association between high clinical volume and improved outcomes. Whether this relationship is true for outpatients with coronary artery disease (CAD), heart failure (HF), and atrial fibrillation (AF) remains unknown. Using the PINNACLE Registry (2009-2012), average monthly provider and practice volumes were calculated for CAD, HF, and AF. Adherence to 4 American Heart Association CAD, 2 HF, and 1 AF performance measures was assessed at the most recent encounter for each patient. Hierarchical logistic regression models were used to assess the relationship between provider and practice volume and performance on eligible quality measures. Data incorporated patients from 1094 providers at 71 practices (practice level analyses n=654 535; provider level analyses n=529 938). Median monthly provider volumes were 79 (interquartile range [IQR], 51-117) for CAD, 27 (16-45) for HF, and 37 (24-54) for AF. Median monthly practice volumes were 923 (IQR, 476-1455) for CAD, 311 (145-657) for HF, and 459 (185-720) for AF. Overall, 55% of patients met all CAD measures, 72% met all HF measures, and 58% met the AF measure. There was no definite relationship between practice volume and concordance for CAD, AF, or HF (P=0.56, 0.52, and 0.79, respectively). In contrast, higher provider volume was associated with increased concordance for CAD and AF performance measures. Overall performance was modest and variable. Higher provider volume was positively associated with quality, whereas practice volume was not. © 2015 American Heart Association, Inc.

  19. Microscopic droplet formation and energy transport analysis of condensation on scalable superhydrophobic nanostructured copper oxide surfaces.

    Science.gov (United States)

    Li, GuanQiu; Alhosani, Mohamed H; Yuan, ShaoJun; Liu, HaoRan; Ghaferi, Amal Al; Zhang, TieJun

    2014-12-09

    Utilization of nanotechnologies in condensation has been recognized as one opportunity to improve the efficiency of large-scale thermal power and desalination systems. High-performance and stable dropwise condensation in widely-used copper heat exchangers is appealing for energy and water industries. In this work, a scalable and low-cost nanofabrication approach was developed to fabricate superhydrophobic copper oxide (CuO) nanoneedle surfaces to promote dropwise condensation and even jumping-droplet condensation. By conducting systematic surface characterization and in situ environmental scanning electron microscope (ESEM) condensation experiments, we were able to probe the microscopic formation physics of droplets on irregular nanostructured surfaces. At the early stages of condensation process, the interfacial surface tensions at the edge of CuO nanoneedles were found to influence both the local energy barriers for microdroplet growth and the advancing contact angles when droplets undergo depinning. Local surface roughness also has a significant impact on the volume of the condensate within the nanostructures and overall heat transfer from the vapor to substrate. Both our theoretical analysis and in situ ESEM experiments have revealed that the liquid condensate within the nanostructures determines the amount of the work of adhesion and kinetic energy associated with droplet coalescence and jumping. Local and global droplet growth models were also proposed to predict how the microdroplet morphology within nanostructures affects the heat transfer performance of early-stage condensation. Our quantitative analysis of microdroplet formation and growth within irregular nanostructures provides the insight to guide the anodization-based nanofabrication for enhancing dropwise and jumping-droplet condensation performance.

  20. Scalable Coating and Properties of Transparent, Flexible, Silver Nanowire Electrodes

    KAUST Repository

    Hu, Liangbing

    2010-05-25

    We report a comprehensive study of transparent and conductive silver nanowire (Ag NW) electrodes, including a scalable fabrication process, morphologies, and optical, mechanical adhesion, and flexibility properties, and various routes to improve the performance. We utilized a synthesis specifically designed for long and thin wires for improved performance in terms of sheet resistance and optical transmittance. Sheet resistances of 20 Ω/sq at ∼80% specular transmittance and 8 Ω/sq at 80% diffusive transmittance in the visible range are achieved, which fall in the same range as the best indium tin oxide (ITO) samples on plastic substrates for flexible electronics and solar cells. The Ag NW electrodes show optical transparencies superior to ITO for near-infrared wavelengths (2-fold higher transmission). Owing to light scattering effects, the Ag NW network has the largest difference between diffusive transmittance and specular transmittance when compared with ITO and carbon nanotube electrodes, a property which could greatly enhance solar cell performance. A mechanical study shows that Ag NW electrodes on flexible substrates show excellent robustness when subjected to bending. We also study the electrical conductance of Ag nanowires and their junctions and report a facile electrochemical method for a Au coating to reduce the wire-to-wire junction resistance for better overall film conductance. Simple mechanical pressing was also found to increase the NW film conductance due to the reduction of junction resistance. The overall properties of transparent Ag NW electrodes meet the requirements of transparent electrodes for many applications and could be an immediate ITO replacement for flexible electronics and solar cells. © 2010 American Chemical Society.

  1. Generic, scalable and decentralized fault detection for robot swarms

    Science.gov (United States)

    Christensen, Anders Lyhne; Timmis, Jon

    2017-01-01

    Robot swarms are large-scale multirobot systems with decentralized control, which means that each robot acts based only on local perception and on local coordination with neighboring robots. The decentralized approach to control confers a number of potential benefits. In particular, inherent scalability and robustness are often highlighted as key distinguishing features of robot swarms compared with systems that rely on traditional approaches to multirobot coordination. It has, however, been shown that swarm robotics systems are not always fault tolerant. To realize the robustness potential of robot swarms, it is thus essential to give systems the capacity to actively detect and accommodate faults. In this paper, we present a generic fault-detection system for robot swarms. We show how robots with limited and imperfect sensing capabilities are able to observe and classify the behavior of one another. In order to achieve this, the underlying classifier is an immune system-inspired algorithm that learns to distinguish between normal behavior and abnormal behavior online. Through a series of experiments, we systematically assess the performance of our approach in a detailed simulation environment. In particular, we analyze our system’s capacity to correctly detect robots with faults, false positive rates, performance in a foraging task in which each robot exhibits a composite behavior, and performance under perturbations of the task environment. Results show that our generic fault-detection system is robust, that it is able to detect faults in a timely manner, and that it achieves a low false positive rate. The developed fault-detection system has the potential to enable long-term autonomy for robust multirobot systems, thus increasing the usefulness of robots for a diverse repertoire of upcoming applications in the area of distributed intelligent automation. PMID:28806756
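
    The paper's classifier is immune-system inspired; as a plain statistical stand-in that only illustrates the observe-and-classify loop, the sketch below keeps running statistics of behaviour feature vectors and flags robots whose features drift too far from what has been observed so far. The feature dimensions and the threshold are made up for illustration.

```python
import numpy as np

class BehaviorOutlierDetector:
    """Online detector flagging robots whose behaviour feature vectors deviate
    from what has been observed so far (a simple z-score rule). This is a
    statistical stand-in for the paper's immune-inspired classifier, shown
    only to illustrate the observe-and-classify loop."""
    def __init__(self, dim, threshold=4.0):
        self.n = 0
        self.mean = np.zeros(dim)
        self.m2 = np.zeros(dim)          # running sum of squared deviations (Welford)
        self.threshold = threshold

    def update(self, feature):
        self.n += 1
        delta = feature - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (feature - self.mean)

    def is_faulty(self, feature):
        if self.n < 10:
            return False                  # not enough evidence yet
        std = np.sqrt(self.m2 / (self.n - 1)) + 1e-9
        return bool(np.any(np.abs(feature - self.mean) / std > self.threshold))

rng = np.random.default_rng(0)
det = BehaviorOutlierDetector(dim=3)
for _ in range(500):                       # observe normal foraging behaviour
    det.update(rng.normal([1.0, 0.2, 0.5], 0.1))
print(det.is_faulty(np.array([1.0, 0.2, 0.5])))   # normal behaviour -> False
print(det.is_faulty(np.array([0.0, 0.2, 0.5])))   # e.g. stalled wheel -> True
```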

  2. Generic, scalable and decentralized fault detection for robot swarms.

    Science.gov (United States)

    Tarapore, Danesh; Christensen, Anders Lyhne; Timmis, Jon

    2017-01-01

    Robot swarms are large-scale multirobot systems with decentralized control, which means that each robot acts based only on local perception and on local coordination with neighboring robots. The decentralized approach to control confers a number of potential benefits. In particular, inherent scalability and robustness are often highlighted as key distinguishing features of robot swarms compared with systems that rely on traditional approaches to multirobot coordination. It has, however, been shown that swarm robotics systems are not always fault tolerant. To realize the robustness potential of robot swarms, it is thus essential to give systems the capacity to actively detect and accommodate faults. In this paper, we present a generic fault-detection system for robot swarms. We show how robots with limited and imperfect sensing capabilities are able to observe and classify the behavior of one another. In order to achieve this, the underlying classifier is an immune system-inspired algorithm that learns to distinguish between normal behavior and abnormal behavior online. Through a series of experiments, we systematically assess the performance of our approach in a detailed simulation environment. In particular, we analyze our system's capacity to correctly detect robots with faults, false positive rates, performance in a foraging task in which each robot exhibits a composite behavior, and performance under perturbations of the task environment. Results show that our generic fault-detection system is robust, that it is able to detect faults in a timely manner, and that it achieves a low false positive rate. The developed fault-detection system has the potential to enable long-term autonomy for robust multirobot systems, thus increasing the usefulness of robots for a diverse repertoire of upcoming applications in the area of distributed intelligent automation.

  3. A Customer’s Possibilities to Increase the Performance of a Service Provider by Adding Value and Deepening the Partnership in Facility Management Service

    Directory of Open Access Journals (Sweden)

    Sillanpää Elina

    2016-06-01

    Full Text Available Reliable and good suppliers are an important competitive advantage for a customer, and that is why the development of suppliers, improvement of performance and enhancement of customership are also in the interest of the customer. The purpose of this study is to clarify a customer’s possibilities to increase the performance of a service provider and to develop the service process in FM services, and thus help to improve partnership development. This is a qualitative study. The research complements the existing generic model of supplier development towards partnership development by the customer and clarifies the special features that facility management services bring to this model. The data has been gathered from interviews of customers and service providers in the facility management service sector. The result is a model of customers’ possibilities to develop the performance of service providers from the viewpoint of value addition and relationship development and in that way ensure added value to the customer and the development of a long-term relationship. The results can be beneficial to customers when they develop the cooperation between the customer and the service provider toward being more strategic and more partnership focused.

  4. ParaText : scalable solutions for processing and searching very large document collections : final LDRD report.

    Energy Technology Data Exchange (ETDEWEB)

    Crossno, Patricia Joyce; Dunlavy, Daniel M.; Stanton, Eric T.; Shead, Timothy M.

    2010-09-01

    This report is a summary of the accomplishments of the 'Scalable Solutions for Processing and Searching Very Large Document Collections' LDRD, which ran from FY08 through FY10. Our goal was to investigate scalable text analysis; specifically, methods for information retrieval and visualization that could scale to extremely large document collections. Towards that end, we designed, implemented, and demonstrated a scalable framework for text analysis - ParaText - as a major project deliverable. Further, we demonstrated the benefits of using visual analysis in text analysis algorithm development, improved performance of heterogeneous ensemble models in data classification problems, and the advantages of information theoretic methods in user analysis and interpretation in cross language information retrieval. The project involved 5 members of the technical staff and 3 summer interns (including one who worked two summers). It resulted in a total of 14 publications, 3 new software libraries (2 open source and 1 internal to Sandia), several new end-user software applications, and over 20 presentations. Several follow-on projects have already begun or will start in FY11, with additional projects currently in proposal.

  5. Particle Communication and Domain Neighbor Coupling: Scalable Domain Decomposed Algorithms for Monte Carlo Particle Transport

    Energy Technology Data Exchange (ETDEWEB)

    O'Brien, M. J.; Brantley, P. S.

    2015-01-20

    In order to run Monte Carlo particle transport calculations on new supercomputers with hundreds of thousands or millions of processors, care must be taken to implement scalable algorithms. This means that the algorithms must continue to perform well as the processor count increases. In this paper, we examine the scalability of: (1) globally resolving the particle locations on the correct processor, (2) deciding that particle streaming communication has finished, and (3) efficiently coupling neighbor domains together with different replication levels. We have run domain decomposed Monte Carlo particle transport on up to 2^21 = 2,097,152 MPI processes on the IBM BG/Q Sequoia supercomputer and observed scalable results that agree with our theoretical predictions. These calculations were carefully constructed to have the same amount of work on every processor, i.e. the calculation is already load balanced. We also examine load imbalanced calculations where each domain's replication level is proportional to its particle workload. In this case we show how to efficiently couple together adjacent domains to maintain within-workgroup load balance and minimize memory usage.
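
    As a toy illustration of the first ingredient, resolving which processor owns a particle, the sketch below maps a particle position to a spatial domain and then to one of that domain's replica ranks. The grid layout, replication scheme, and hashing rule are assumptions made up for illustration, not the algorithm evaluated in the paper.

```python
import numpy as np

def owner_rank(position, bounds, domains_per_axis, replicas_per_domain):
    """Map a particle position to a domain index and then to one of that
    domain's replica ranks (chosen by hashing the rounded position so the
    choice is reproducible). A toy sketch of domain-decomposed ownership
    resolution, not the algorithm in the paper."""
    lo, hi = bounds
    ijk = np.minimum(((np.asarray(position) - lo) / (hi - lo)
                      * domains_per_axis).astype(int), domains_per_axis - 1)
    domain = ijk[0] + domains_per_axis * (ijk[1] + domains_per_axis * ijk[2])
    first_rank = domain * replicas_per_domain          # replicas numbered contiguously
    replica = hash(tuple(np.round(position, 6))) % replicas_per_domain
    return int(domain), int(first_rank + replica)

# a particle at (0.7, 0.2, 0.9) in a unit box split into 4x4x4 domains,
# with 2 replica ranks per domain
print(owner_rank((0.7, 0.2, 0.9), bounds=(0.0, 1.0),
                 domains_per_axis=4, replicas_per_domain=2))
```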

  6. Fast and Scalable Feature Selection for Gene Expression Data Using Hilbert-Schmidt Independence Criterion.

    Science.gov (United States)

    Gangeh, Mehrdad J; Zarkoob, Hadi; Ghodsi, Ali

    2017-01-01

    In computational biology, selecting a small subset of informative genes from microarray data continues to be a challenge due to the presence of thousands of genes. This paper aims at quantifying the dependence between gene expression data and the response variables and at identifying a subset of the most informative genes using a fast and scalable multivariate algorithm. A novel algorithm for feature selection from gene expression data was developed. The algorithm was based on the Hilbert-Schmidt independence criterion (HSIC), and was partly motivated by singular value decomposition (SVD). The algorithm is computationally fast and scalable to large datasets. Moreover, it can be applied to problems with any type of response variables including biclass, multiclass, and continuous response variables. The performance of the proposed algorithm in terms of accuracy, stability of the selected genes, speed, and scalability was evaluated using both synthetic and real-world datasets. The simulation results demonstrated that the proposed algorithm effectively and efficiently extracted stable genes with high predictive capability, in particular for datasets with multiclass response variables. The proposed method does not require the whole microarray dataset to be stored in memory, and thus can easily be scaled to large datasets. This capability is an important attribute in big data analytics, where data can be large and massively distributed.
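
    The paper's algorithm is multivariate and SVD-motivated; the sketch below shows only the underlying idea of scoring dependence with HSIC, ranking each gene by a univariate HSIC estimate against the response on synthetic data. The kernel choices and bandwidth heuristic are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(v, sigma=None):
    """RBF kernel matrix for a vector of samples (median-distance bandwidth)."""
    v = np.atleast_2d(v.reshape(len(v), -1))
    d2 = np.sum((v[:, None, :] - v[None, :, :])**2, axis=-1)
    if sigma is None:
        sigma = np.sqrt(np.median(d2[d2 > 0]) / 2) or 1.0
    return np.exp(-d2 / (2 * sigma**2))

def hsic(K, L):
    """Biased empirical HSIC estimate from two kernel matrices."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1)**2

def rank_genes_by_hsic(X, y, top=10):
    """Score each gene (column of X) by its HSIC dependence with the response
    y and return the indices of the top-scoring genes. A univariate sketch of
    HSIC-based filtering, not the paper's multivariate algorithm."""
    L = rbf_kernel(np.asarray(y, dtype=float))
    scores = np.array([hsic(rbf_kernel(X[:, j]), L) for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:top], scores

# toy data: 100 samples, 50 genes, only gene 3 drives the class label
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))
y = (X[:, 3] > 0).astype(float)
top_idx, _ = rank_genes_by_hsic(X, y, top=5)
print("top genes:", top_idx)
```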

  7. Does ownership matter? An overview of systematic reviews of the performance of private for-profit, private not-for-profit and public healthcare providers.

    Science.gov (United States)

    Herrera, Cristian A; Rada, Gabriel; Kuhn-Barrientos, Lucy; Barrios, Ximena

    2014-01-01

    Ownership of healthcare providers has been considered as one factor that might influence their health and healthcare related performance. The aim of this article was to provide an overview of what is known about the effects on economic, administrative and health related outcomes of different types of ownership of healthcare providers--namely public, private not-for-profit (PNFP) and private for-profit (PFP)--based on the findings of systematic reviews (SR). An overview of systematic reviews was performed. Different databases were searched in order to select SRs according to an explicit comprehensive criterion. Included SRs were assessed to determine their methodological quality. Of the 5918 references reviewed, fifteen SRs were included, but six of them were rated as having major limitations, so they weren't incorporated in the analyses. According to the nine analyzed SRs, ownership does seem to have an effect on health and healthcare related outcomes. In the comparison of PFP and PNFP providers, significant differences in terms of mortality of patients and payments to facilities have been found, both being higher in PFP facilities. In terms of quality and economic indicators such as efficiency, there are no concluding results. When comparing PNFP and public providers, as well as for PFP and public providers, no clear differences were found. PFP providers seem to have worse results than their PNFP counterparts, but there are still important evidence gaps in the literature that need to be covered, including the comparison between public and both PFP and PNFP providers. More research is needed in low and middle income countries to understand the impact on and development of healthcare delivery systems.

  8. Process measures or patient reported experience measures (PREMs) for comparing performance across providers? A study of measures related to access and continuity in Swedish primary care.

    Science.gov (United States)

    Glenngård, Anna H; Anell, Anders

    2017-09-15

    Aim To study (a) the covariation between patient reported experience measures (PREMs) and registered process measures of access and continuity when ranking providers in a primary care setting, and (b) whether registered process measures or PREMs provided more or less information about potential linkages between levels of access and continuity and explaining variables. Access and continuity are important objectives in primary care. They can be measured through registered process measures or PREMs. These measures do not necessarily converge in terms of outcomes. Patient views are affected by factors not necessarily reflecting quality of services. Results from surveys are often uncertain due to low response rates, particularly in vulnerable groups. The quality of process measures, on the other hand, may be influenced by registration practices, and such measures are often easier to manipulate. With increased transparency and use of quality measures for management and governance purposes, knowledge about the pros and cons of using different measures to assess performance across providers is important. Four regression models were developed with registered process measures and PREMs of access and continuity as dependent variables. Independent variables were characteristics of providers as well as geographical location and degree of competition facing providers. Data were taken from two large Swedish county councils. Findings Although ranking of providers is sensitive to the measure used, the results suggest that providers performing well with respect to one measure also tended to perform well with respect to the other. As process measures are easier and quicker to collect they may be looked upon as the preferred option. PREMs were better than process measures when exploring factors that contributed to variation in performance across providers in our study; however, if the purpose of comparison is continuous learning and development of services, a combination of PREMs and

  9. Heterogeneous scalable framework for multiphase flows

    Energy Technology Data Exchange (ETDEWEB)

    Morris, Karla Vanessa [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2013-09-01

    Two categories of challenges confront the developer of computational spray models: those related to the computation and those related to the physics. Regarding the computation, the trend towards heterogeneous, multi- and many-core platforms will require considerable re-engineering of codes written for the current supercomputing platforms. Regarding the physics, accurate methods for transferring mass, momentum and energy from the dispersed phase onto the carrier fluid grid have so far eluded modelers. Significant challenges also lie at the intersection between these two categories. To be competitive, any physics model must be expressible in a parallel algorithm that performs well on evolving computer platforms. This work created an application based on a software architecture where the physics and software concerns are separated in a way that adds flexibility to both. The developed spray-tracking package includes an application programming interface (API) that abstracts away the platform-dependent parallelization concerns, enabling the scientific programmer to write serial code that the API resolves into parallel processes and threads of execution. The project also developed the infrastructure required to provide similar APIs to other applications. The API allows object-oriented Fortran applications to interact directly with Trilinos to support memory management of distributed objects on central processing unit (CPU) and graphics processing unit (GPU) nodes for applications using C++.

  10. Final Scientific Report: A Scalable Development Environment for Peta-Scale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Karbach, Carsten [Julich Research Center (Germany); Frings, Wolfgang [Julich Research Center (Germany)

    2013-02-22

    This document is the final scientific report of the project DE-SC000120 (A Scalable Development Environment for Peta-Scale Computing). The objective of this project is the extension of the Parallel Tools Platform (PTP) for applying it to peta-scale systems. PTP is an integrated development environment for parallel applications. It comprises code analysis, performance tuning, parallel debugging and system monitoring. The contribution of the Juelich Supercomputing Centre (JSC) aims to provide a scalable solution for system monitoring of supercomputers. This includes the development of a new communication protocol for exchanging status data between the target remote system and the client running PTP. The communication has to work under high latency. PTP needs to be implemented robustly and should hide the complexity of the supercomputer's architecture in order to provide transparent access to various remote systems via a uniform user interface. This simplifies the porting of applications to different systems, because PTP functions as an abstraction layer between the parallel application developer and the compute resources. The common requirement for all PTP components is that they have to interact with the remote supercomputer. For example, applications are built remotely, performance tools are attached to job submissions, and their output data resides on the remote system. Status data has to be collected by evaluating outputs of the remote job scheduler, and the parallel debugger needs to control an application executed on the supercomputer. The challenge is to provide this functionality for peta-scale systems in real time. The client-server architecture of the established monitoring application LLview, developed by the JSC, can be applied to PTP's system monitoring. LLview provides a well-arranged overview of the supercomputer's current status. A set of statistics, a list of running and queued jobs as well as a node display mapping running jobs to their compute

  11. Vocal activity as a low cost and scalable index of seabird colony size.

    Science.gov (United States)

    Borker, Abraham L; McKown, Matthew W; Ackerman, Joshua T; Eagles-Smith, Collin A; Tershy, Bernie R; Croll, Donald A

    2014-08-01

    Although wildlife conservation actions have increased globally in number and complexity, the lack of scalable, cost-effective monitoring methods limits adaptive management and the evaluation of conservation efficacy. Automated sensors and computer-aided analyses provide a scalable and increasingly cost-effective tool for conservation monitoring. A key assumption of automated acoustic monitoring of birds is that measures of acoustic activity at colony sites are correlated with the relative abundance of nesting birds. We tested this assumption for nesting Forster's terns (Sterna forsteri) in San Francisco Bay for 2 breeding seasons. Sensors recorded ambient sound at 7 colonies that had 15-111 nests in 2009 and 2010. Colonies were spaced at least 250 m apart and ranged from 36 to 2,571 m². We used spectrogram cross-correlation to automate the detection of tern calls from recordings. We calculated mean seasonal call rate and compared it with mean active nest count at each colony. Acoustic activity explained 71% of the variation in nest abundance between breeding sites and 88% of the change in colony size between years. These results validate a primary assumption of acoustic indices; that is, for terns, acoustic activity is correlated to relative abundance, a fundamental step toward designing rigorous and scalable acoustic monitoring programs to measure the effectiveness of conservation actions for colonial birds and other acoustically active wildlife. © 2014 Society for Conservation Biology.
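
    As an illustration of the detection step, the sketch below slides a template spectrogram across a recording's spectrogram and counts threshold crossings of the normalized correlation, using a synthetic chirp as a stand-in "call". The window sizes and detection threshold are placeholders, not the study's settings.

```python
import numpy as np
from scipy.signal import spectrogram

def detect_calls(recording, template, fs=22050, threshold=0.6):
    """Count call-like events by sliding a template spectrogram across the
    spectrogram of a recording and thresholding the normalized correlation.
    A minimal sketch of spectrogram cross-correlation detection."""
    _, _, S = spectrogram(recording, fs=fs, nperseg=256, noverlap=128)
    _, _, T = spectrogram(template, fs=fs, nperseg=256, noverlap=128)
    S = (S - S.mean()) / (S.std() + 1e-12)
    T = (T - T.mean()) / (T.std() + 1e-12)
    w = T.shape[1]
    scores = []
    for t in range(S.shape[1] - w + 1):
        win = S[:, t:t + w]
        denom = np.linalg.norm(win) * np.linalg.norm(T) + 1e-12
        scores.append(float((win * T).sum() / denom))
    scores = np.array(scores)
    # count rising edges through the threshold as detected calls
    hits = (scores[1:] >= threshold) & (scores[:-1] < threshold)
    return int(hits.sum()), scores

# toy example: a chirp "call" embedded twice in background noise
fs = 22050
t = np.linspace(0, 0.2, int(0.2 * fs), endpoint=False)
call = np.sin(2 * np.pi * (3000 + 8000 * t) * t)
rec = np.random.default_rng(0).normal(0, 0.05, fs * 3)
rec[fs:fs + call.size] += call
rec[2 * fs:2 * fs + call.size] += call
print(detect_calls(rec, call, fs)[0])
```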

  12. Scalable splitting algorithms for big-data interferometric imaging in the SKA era

    Science.gov (United States)

    Onose, Alexandru; Carrillo, Rafael E.; Repetti, Audrey; McEwen, Jason D.; Thiran, Jean-Philippe; Pesquet, Jean-Christophe; Wiaux, Yves

    2016-11-01

    In the context of next-generation radio telescopes, like the Square Kilometre Array (SKA), the efficient processing of large-scale data sets is extremely important. Convex optimization tasks under the compressive sensing framework have recently emerged and provide both enhanced image reconstruction quality and scalability to increasingly larger data sets. We focus herein mainly on scalability and propose two new convex optimization algorithmic structures able to solve the convex optimization tasks arising in radio-interferometric imaging. They rely on proximal splitting and forward-backward iterations and can be seen, by analogy with the CLEAN major-minor cycle, as running sophisticated CLEAN-like iterations in parallel in multiple data, prior, and image spaces. Both methods support any convex regularization function, in particular, the well-studied ℓ1 priors promoting image sparsity in an adequate domain. Tailored for big data, they employ parallel and distributed computations to achieve scalability, in terms of memory and computational requirements. One of them also exploits randomization, over data blocks at each iteration, offering further flexibility. We present simulation results showing the feasibility of the proposed methods as well as their advantages compared to state-of-the-art algorithmic solvers. Our MATLAB code is available online on GitHub.
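
    The authors' solvers are distributed and considerably more sophisticated; as a single-node illustration of the forward-backward building block they rely on, the sketch below runs a plain ISTA iteration on an ℓ1-regularized least-squares problem with a random sensing matrix. The problem sizes and regularization weight are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximity operator of tau * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def forward_backward(A, y, lam=0.1, n_iter=500):
    """Forward-backward (ISTA) iteration for min_x 0.5*||Ax - y||^2 + lam*||x||_1,
    a single-node sketch of the proximal splitting step the paper parallelizes
    across data blocks, not the authors' distributed solver."""
    step = 1.0 / np.linalg.norm(A, 2)**2                 # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                         # forward (gradient) step
        x = soft_threshold(x - step * grad, step * lam)  # backward (prox) step
    return x

# toy compressive-sensing problem: sparse signal, random measurements
rng = np.random.default_rng(0)
n, m, k = 200, 80, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.normal(size=m)
x_hat = forward_backward(A, y, lam=0.02)
print("support recovered:", np.nonzero(np.abs(x_hat) > 0.05)[0])
```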

  13. A scalable neuroinformatics data flow for electrophysiological signals using MapReduce

    Science.gov (United States)

    Jayapandian, Catherine; Wei, Annan; Ramesh, Priya; Zonjy, Bilal; Lhatoo, Samden D.; Loparo, Kenneth; Zhang, Guo-Qiang; Sahoo, Satya S.

    2015-01-01

    Data-driven neuroscience research is providing new insights in progression of neurological disorders and supporting the development of improved treatment approaches. However, the volume, velocity, and variety of neuroscience data generated from sophisticated recording instruments and acquisition methods have exacerbated the limited scalability of existing neuroinformatics tools. This makes it difficult for neuroscience researchers to effectively leverage the growing multi-modal neuroscience data to advance research in serious neurological disorders, such as epilepsy. We describe the development of the Cloudwave data flow that uses new data partitioning techniques to store and analyze electrophysiological signal in distributed computing infrastructure. The Cloudwave data flow uses MapReduce parallel programming algorithm to implement an integrated signal data processing pipeline that scales with large volume of data generated at high velocity. Using an epilepsy domain ontology together with an epilepsy focused extensible data representation format called Cloudwave Signal Format (CSF), the data flow addresses the challenge of data heterogeneity and is interoperable with existing neuroinformatics data representation formats, such as HDF5. The scalability of the Cloudwave data flow is evaluated using a 30-node cluster installed with the open source Hadoop software stack. The results demonstrate that the Cloudwave data flow can process increasing volume of signal data by leveraging Hadoop Data Nodes to reduce the total data processing time. The Cloudwave data flow is a template for developing highly scalable neuroscience data processing pipelines using MapReduce algorithms to support a variety of user applications. PMID:25852536
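
    Cloudwave runs on Hadoop with its own CSF partitioning; purely to illustrate the partition-map-reduce pattern it uses, the sketch below splits a multichannel signal into time blocks, maps each block to partial per-channel feature sums in parallel, and reduces them to per-channel means. The line-length feature and channel names are illustrative assumptions.

```python
from collections import defaultdict
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def map_partition(args):
    """Map step: one time-partition of a multichannel signal -> per-channel
    partial sums of line length (a common EEG feature). Illustrative only;
    Cloudwave itself runs on Hadoop over CSF-formatted data."""
    channel_names, block = args
    return [(ch, float(np.abs(np.diff(block[i])).sum()), block.shape[1])
            for i, ch in enumerate(channel_names)]

def reduce_results(mapped):
    """Reduce step: combine partial sums into a mean line length per channel."""
    totals = defaultdict(lambda: [0.0, 0])
    for part in mapped:
        for ch, s, n in part:
            totals[ch][0] += s
            totals[ch][1] += n
    return {ch: s / n for ch, (s, n) in totals.items()}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    channels = ["C3", "C4", "O1", "O2"]
    signal = rng.normal(size=(4, 100_000))               # 4 channels of samples
    blocks = np.array_split(signal, 10, axis=1)          # time partitions
    with ProcessPoolExecutor() as pool:
        mapped = list(pool.map(map_partition, [(channels, b) for b in blocks]))
    print(reduce_results(mapped))
```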

  14. Efficient temporal and interlayer parameter prediction for weighted prediction in scalable high efficiency video coding

    Science.gov (United States)

    Tsang, Sik-Ho; Chan, Yui-Lam; Siu, Wan-Chi

    2017-01-01

    Weighted prediction (WP) is an efficient video coding tool, introduced with the H.264/AVC video coding standard, for compensating the temporal illumination change in motion estimation and compensation. WP parameters, including a multiplicative weight and an additive offset for each reference frame, are required to be estimated and transmitted to the decoder in the slice header. These parameters cause extra bits in the coded video bitstream. High efficiency video coding (HEVC) provides WP parameter prediction to reduce the overhead. Therefore, WP parameter prediction is crucial to research works or applications which are related to WP. Prior work has suggested further improving WP parameter prediction by implicit prediction of image characteristics and derivation of parameters. By exploiting both temporal and interlayer redundancies, we propose three WP parameter prediction algorithms, enhanced implicit WP parameter, enhanced direct WP parameter derivation, and interlayer WP parameter, to further improve the coding efficiency of HEVC. Results show that our proposed algorithms can achieve up to 5.83% and 5.23% bitrate reduction compared to the conventional scalable HEVC in the base layer for SNR scalability and 2× spatial scalability, respectively.
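
    As a generic illustration of what the WP parameters represent (not the estimation or prediction method in the paper or in the HEVC reference software), the sketch below fits a weight and offset between a reference frame and a globally dimmed current frame by least squares and applies them to form the weighted reference. Frame sizes and the toy illumination change are assumptions.

```python
import numpy as np

def estimate_wp_params(cur, ref):
    """Estimate a multiplicative weight and additive offset such that
    cur ~ w * ref + o, via least squares over co-located pixels."""
    x = ref.astype(np.float64).ravel()
    y = cur.astype(np.float64).ravel()
    w, o = np.polyfit(x, y, 1)          # slope = weight, intercept = offset
    return w, o

def weighted_prediction(ref, w, o):
    """Apply the estimated parameters to form the weighted reference frame."""
    return np.clip(w * ref.astype(np.float64) + o, 0, 255).astype(np.uint8)

# toy frames: the "current" frame is a globally dimmed, offset copy of "ref"
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
cur = np.clip(0.8 * ref + 10 + rng.normal(0, 2, ref.shape), 0, 255).astype(np.uint8)
w, o = estimate_wp_params(cur, ref)
pred = weighted_prediction(ref, w, o)
print(f"w={w:.3f}, o={o:.2f}, mean abs error={np.abs(pred.astype(int) - cur).mean():.2f}")
```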

  15. A scalable neuroinformatics data flow for electrophysiological signals using MapReduce.

    Science.gov (United States)

    Jayapandian, Catherine; Wei, Annan; Ramesh, Priya; Zonjy, Bilal; Lhatoo, Samden D; Loparo, Kenneth; Zhang, Guo-Qiang; Sahoo, Satya S

    2015-01-01

    Data-driven neuroscience research is providing new insights in progression of neurological disorders and supporting the development of improved treatment approaches. However, the volume, velocity, and variety of neuroscience data generated from sophisticated recording instruments and acquisition methods have exacerbated the limited scalability of existing neuroinformatics tools. This makes it difficult for neuroscience researchers to effectively leverage the growing multi-modal neuroscience data to advance research in serious neurological disorders, such as epilepsy. We describe the development of the Cloudwave data flow that uses new data partitioning techniques to store and analyze electrophysiological signal in distributed computing infrastructure. The Cloudwave data flow uses MapReduce parallel programming algorithm to implement an integrated signal data processing pipeline that scales with large volume of data generated at high velocity. Using an epilepsy domain ontology together with an epilepsy focused extensible data representation format called Cloudwave Signal Format (CSF), the data flow addresses the challenge of data heterogeneity and is interoperable with existing neuroinformatics data representation formats, such as HDF5. The scalability of the Cloudwave data flow is evaluated using a 30-node cluster installed with the open source Hadoop software stack. The results demonstrate that the Cloudwave data flow can process increasing volume of signal data by leveraging Hadoop Data Nodes to reduce the total data processing time. The Cloudwave data flow is a template for developing highly scalable neuroscience data processing pipelines using MapReduce algorithms to support a variety of user applications.

  16. A Scalable Neuroinformatics Data Flow for Electrophysiological Signals using MapReduce

    Directory of Open Access Journals (Sweden)

    Catherine eJayapandian

    2015-03-01

    Full Text Available Data-driven neuroscience research is providing new insights in progression of neurological disorders and supporting the development of improved treatment approaches. However, the volume, velocity, and variety of neuroscience data from sophisticated recording instruments and acquisition methods have exacerbated the limited scalability of existing neuroinformatics tools. This makes it difficult for neuroscience researchers to effectively leverage the growing multi-modal neuroscience data to advance research in serious neurological disorders, such as epilepsy. We describe the development of the Cloudwave data flow that uses new data partitioning techniques to store and analyze electrophysiological signal in distributed computing infrastructure. The Cloudwave data flow uses MapReduce parallel programming algorithm to implement an integrated signal data processing pipeline that scales with large volume of data generated at high velocity. Using an epilepsy domain ontology together with an epilepsy focused extensible data representation format called Cloudwave Signal Format (CSF), the data flow addresses the challenge of data heterogeneity and is interoperable with existing neuroinformatics data representation formats, such as HDF5. The scalability of the Cloudwave data flow is evaluated using a 30-node cluster installed with the open source Hadoop software stack. The results demonstrate that the Cloudwave data flow can process increasing volume of signal data by leveraging Hadoop Data Nodes to reduce the total data processing time. The Cloudwave data flow is a template for developing highly scalable neuroscience data processing pipelines using MapReduce algorithms to support a variety of user applications.

  17. Systematic analysis of adaptations in aerobic capacity and submaximal energy metabolism provides a unique insight into determinants of human aerobic performance

    DEFF Research Database (Denmark)

    Vollaard, Niels B J; Constantin-Teodosiu, Dimitru; Fredriksson, Katarina

    2009-01-01

    It has not been established which physiological processes contribute to endurance training-related changes (Δ) in aerobic performance. For example, the relationship between intramuscular metabolic responses at the intensity used during training and improved human functional capacity has...... Δlactate (r² = 0.32; P humans are not related to altered maximal oxygen transport capacity. Altered muscle metabolism may provide the link between training......

  18. Exploring Performance Determinants of China’s Cable Operators and OTT Service Providers in the Era of Digital Convergence—From the Perspective of an Industry Platform

    Directory of Open Access Journals (Sweden)

    Xing Wan

    2017-12-01

    Full Text Available This paper investigates key determinants of business performance in China’s video industry in the era of digital convergence. Specifically, we analyze China’s OTT (over-the-top) service providers and cable operators based on the perspective of an industry platform, which acts as the core module of a business ecosystem and is capable of facilitating and coordinating interdependence among different agents. Panel data models are established to empirically explore what factors impact the performance of these two types of players. The findings demonstrate that both platform use and the size of an installed base are crucial determinants of the performance of OTT service providers and cable operators. An online video platform can also benefit from an increasing proportion of mobile viewers by implementing a multi-screen strategy. Further, an OTT service provider can profit from the interaction between its installed base and UGC (user-generated content), while cable operators can take advantage of positive feedback between their demand side and supply side.

  19. Scalable partitioning and exploration of chemical spaces using geometric hashing.

    Science.gov (United States)

    Dutta, Debojyoti; Guha, Rajarshi; Jurs, Peter C; Chen, Ting

    2006-01-01

    Virtual screening (VS) has become a preferred tool to augment high-throughput screening(1) and determine new leads in the drug discovery process. The core of a VS informatics pipeline includes several data mining algorithms that work on huge databases of chemical compounds containing millions of molecular structures and their associated data. Thus, scaling traditional applications such as classification, partitioning, and outlier detection for huge chemical data sets without a significant loss in accuracy is very important. In this paper, we introduce a data mining framework built on top of a recently developed fast approximate nearest-neighbor-finding algorithm(2) called locality-sensitive hashing (LSH) that can be used to mine huge chemical spaces in a scalable fashion using very modest computational resources. The core LSH algorithm hashes chemical descriptors so that points close to each other in the descriptor space are also close to each other in the hashed space. Using this data structure, one can perform approximate nearest-neighbor searches very quickly, in sublinear time. We validate the accuracy and performance of our framework on three real data sets of sizes ranging from 4337 to 249 071 molecules. Results indicate that the identification of nearest neighbors using the LSH algorithm is at least 2 orders of magnitude faster than the traditional k-nearest-neighbor method and is over 94% accurate for most query parameters. Furthermore, when viewed as a data-partitioning procedure, the LSH algorithm lends itself to easy parallelization of nearest-neighbor classification or regression. We also apply our framework to detect outlying (diverse) compounds in a given chemical space; this algorithm is extremely rapid in determining whether a compound is located in a sparse region of chemical space or not, and it is quite accurate when compared to results obtained using principal-component-analysis-based heuristics.
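
    As a minimal illustration of the LSH idea (random-hyperplane hashing for cosine similarity, not the descriptor hashing variant used in the paper), the sketch below buckets random stand-in descriptor vectors and answers approximate nearest-neighbour queries by ranking only the query's bucket. The descriptor dimensionality and bit count are illustrative assumptions.

```python
import numpy as np

class CosineLSH:
    """Random-hyperplane LSH for cosine similarity over descriptor vectors:
    nearby points in descriptor space tend to share hash buckets, so a query
    only needs to rank the candidates in its own bucket."""
    def __init__(self, dim, n_bits=16, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.normal(size=(n_bits, dim))
        self.buckets = {}

    def _hash(self, v):
        bits = (self.planes @ v) > 0
        return bits.tobytes()

    def index(self, vectors):
        for i, v in enumerate(vectors):
            self.buckets.setdefault(self._hash(v), []).append(i)

    def query(self, v, vectors, k=5):
        """Approximate k nearest neighbours: rank only the query's bucket."""
        cand = self.buckets.get(self._hash(v), [])
        sims = [(i, vectors[i] @ v / (np.linalg.norm(vectors[i]) * np.linalg.norm(v)))
                for i in cand]
        return sorted(sims, key=lambda s: -s[1])[:k]

rng = np.random.default_rng(1)
descriptors = rng.normal(size=(10_000, 64))       # stand-in molecular descriptors
lsh = CosineLSH(dim=64, n_bits=12)
lsh.index(descriptors)
print(lsh.query(descriptors[42], descriptors))
```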

  20. A Simple MPLS-based Flow Aggregation Scheme for Providing Scalable Quality of Service

    Science.gov (United States)

    2006-01-01

  1. Medicare Provider Data - Hospice Providers

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Hospice Utilization and Payment Public Use File provides information on services provided to Medicare beneficiaries by hospice providers. The Hospice PUF...

  2. Improving Rural Geriatric Care Through Education: A Scalable, Collaborative Project.

    Science.gov (United States)

    Buck, Harleah G; Kolanowski, Ann; Fick, Donna; Baronner, Lawrence

    2016-07-01

    HOW TO OBTAIN CONTACT HOURS BY READING THIS ISSUE Instructions: 1.2 contact hours will be awarded by Villanova University College of Nursing upon successful completion of this activity. A contact hour is a unit of measurement that denotes 60 minutes of an organized learning activity. This is a learner-based activity. Villanova University College of Nursing does not require submission of your answers to the quiz. A contact hour certificate will be awarded after you register, pay the registration fee, and complete the evaluation form online at http://goo.gl/gMfXaf. In order to obtain contact hours you must: 1. Read the article, "Improving Rural Geriatric Care Through Education: A Scalable, Collaborative Project," found on pages 306-313, carefully noting any tables and other illustrative materials that are included to enhance your knowledge and understanding of the content. Be sure to keep track of the amount of time (number of minutes) you spend reading the article and completing the quiz. 2. Read and answer each question on the quiz. After completing all of the questions, compare your answers to those provided within this issue. If you have incorrect answers, return to the article for further study. 3. Go to the Villanova website to register for contact hour credit. You will be asked to provide your name, contact information, and a VISA, MasterCard, or Discover card number for payment of the $20.00 fee. Once you complete the online evaluation, a certificate will be automatically generated. This activity is valid for continuing education credit until June 30, 2019. CONTACT HOURS This activity is co-provided by Villanova University College of Nursing and SLACK Incorporated. Villanova University College of Nursing is accredited as a provider of continuing nursing education by the American Nurses Credentialing Center's Commission on Accreditation. OBJECTIVES Describe the unique nursing challenges that occur in caring for older adults in rural areas. Discuss the

  3. Recurrent, robust and scalable patterns underlie human approach and avoidance.

    Directory of Open Access Journals (Sweden)

    Byoung Woo Kim

    Full Text Available BACKGROUND: Approach and avoidance behavior provide a means for assessing the rewarding or aversive value of stimuli, and can be quantified by a keypress procedure whereby subjects work to increase (approach), decrease (avoid), or do nothing about time of exposure to a rewarding/aversive stimulus. To investigate whether approach/avoidance behavior might be governed by quantitative principles that meet engineering criteria for lawfulness and that encode known features of reward/aversion function, we evaluated whether keypress responses toward pictures with potential motivational value produced any regular patterns, such as a trade-off between approach and avoidance, or recurrent lawful patterns as observed with prospect theory. METHODOLOGY/PRINCIPAL FINDINGS: Three sets of experiments employed this task with beautiful face images, a standardized set of affective photographs, and pictures of food during controlled states of hunger and satiety. An iterative modeling approach to data identified multiple law-like patterns, based on variables grounded in the individual. These patterns were consistent across stimulus types, robust to noise, describable by a simple power law, and scalable between individuals and groups. Patterns included: (i) a preference trade-off counterbalancing approach and avoidance, (ii) a value function linking preference intensity to uncertainty about preference, and (iii) a saturation function linking preference intensity to its standard deviation, thereby setting limits to both. CONCLUSIONS/SIGNIFICANCE: These law-like patterns were compatible with critical features of prospect theory, the matching law, and alliesthesia. Furthermore, they appeared consistent with both mean-variance and expected utility approaches to the assessment of risk. Ordering of responses across categories of stimuli demonstrated three properties thought to be relevant for preference-based choice, suggesting these patterns might be grouped together as a

  4. Very High Resolution Mapping of Tree Cover Using Scalable Deep Learning Architectures

    Science.gov (United States)

    Ganguly, Sangram; Basu, Saikat; Nemani, Ramakrishna; Mukhopadhyay, Supratik; Michaelis, Andrew; Votava, Petr; Saatchi, Sassan

    2016-04-01

    Several studies to date have provided an extensive knowledge base for estimating forest aboveground biomass (AGB), and recent advances in space-based modeling of the 3-D canopy structure, combined with canopy reflectance measured by passive optical sensors and radar backscatter, are providing improved satellite-derived AGB density mapping for large-scale carbon monitoring applications. A key limitation in forest AGB estimation from remote sensing, however, is the large uncertainty in forest cover estimates from the coarse-to-medium resolution satellite-derived land cover maps (the present resolution of the USGS NLCD program is limited to 30 m). As part of our NASA Carbon Monitoring System Phase II activities, we have demonstrated that uncertainties in forest cover estimates at the Landsat scale result in high uncertainties in AGB estimation, predominantly in heterogeneous forest and urban landscapes. We have successfully tested an approach using scalable deep learning architectures (Feature-enhanced Deep Belief Networks and Semantic Segmentation using Convolutional Neural Networks) and High-Performance Computing with NAIP airborne imagery data for mapping tree cover at 1 m over California and Maryland. Our first high resolution satellite training label dataset from the NAIP data can be found at http://csc.lsu.edu/~saikat/deepsat/ . In a comparison with high resolution LiDAR data available over selected regions in the two states, we found our results to be promising both in terms of accuracy as well as our ability to scale nationally. In this project, we propose to estimate very high resolution forest cover for the continental US at a spatial resolution of 1 m in support of reducing uncertainties in the AGB estimation. The proposed work will substantially contribute to filling the gaps in ongoing carbon monitoring research and help quantify the errors and uncertainties in related carbon products.

  5. Visibiome: an efficient microbiome search engine based on a scalable, distributed architecture.

    Science.gov (United States)

    Azman, Syafiq Kamarul; Anwar, Muhammad Zohaib; Henschel, Andreas

    2017-07-24

    Given the current influx of 16S rRNA profiles of microbiota samples, it is conceivable that large amounts of them eventually are available for search, comparison and contextualization with respect to novel samples. This process facilitates the identification of similar compositional features in microbiota elsewhere and therefore can help to understand driving factors for microbial community assembly. We present Visibiome, a microbiome search engine that can perform exhaustive, phylogeny based similarity search and contextualization of user-provided samples against a comprehensive dataset of 16S rRNA profiles from diverse environments, while tackling several computational challenges. In order to scale to high demands, we developed a distributed system that combines web framework technology, task queueing and scheduling, cloud computing and a dedicated database server. To further ensure speed and efficiency, we have deployed nearest neighbor search algorithms, capable of sublinear searches in high-dimensional metric spaces, in combination with an optimized Earth Mover Distance based implementation of weighted UniFrac. The search also incorporates pairwise (adaptive) rarefaction and, optionally, 16S rRNA copy number correction. The result of a query microbiome sample is the contextualization against a comprehensive database of microbiome samples from a diverse range of environments, visualized through a rich set of interactive figures and diagrams, including barchart-based compositional comparisons and ranking of the closest matches in the database. Visibiome is a convenient, scalable and efficient framework to search microbiomes against a comprehensive database of environmental samples. The search engine leverages a popular but computationally expensive, phylogeny based distance metric, while providing numerous advantages over the current state of the art tool.

  6. Natural product synthesis in the age of scalability.

    Science.gov (United States)

    Kuttruff, Christian A; Eastgate, Martin D; Baran, Phil S

    2014-04-01

    The ability to procure useful quantities of a molecule by simple, scalable routes is emerging as an important goal in natural product synthesis. Approaches to molecules that yield substantial material enable collaborative investigations (such as SAR studies or eventual commercial production) and inherently spur innovation in chemistry. As such, when evaluating a natural product synthesis, scalability is becoming an increasingly important factor. In this Highlight, we discuss recent examples of natural product synthesis from our laboratory and others, where the preparation of gram-scale quantities of a target compound or a key intermediate allowed for a deeper understanding of biological activities or enabled further investigational collaborations.

  7. A Scalable Smart Meter Data Generator Using Spark

    DEFF Research Database (Denmark)

    Iftikhar, Nadeem; Liu, Xiufeng; Danalachi, Sergiu

    2017-01-01

    Today, smart meters are being used worldwide. As a matter of fact, smart meters produce large volumes of data. Thus, it is important for smart meter data management and analytics systems to process petabytes of data. Benchmarking and testing of these systems require scalable data; however, it can be challenging to get large data sets due to privacy and/or data protection regulations. This paper presents a scalable smart meter data generator using Spark that can generate realistic data sets. The proposed data generator is based on a supervised machine learning method that can generate data of any size...
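
    The paper's generator runs on Spark; the sketch below shows the same fit-then-sample idea on a single machine, learning a per-hour consumption profile from stand-in readings and writing synthetic meter data of whatever size is requested. The column names, file name, and profile model are assumptions for illustration.

```python
import csv
import numpy as np

def fit_hourly_profile(readings):
    """Learn a simple per-hour consumption profile (mean and std) from real
    readings -- a stand-in for the paper's supervised model."""
    return {h: (np.mean(v), np.std(v) + 1e-6) for h, v in readings.items()}

def generate(profile, n_meters, n_days, out_path, seed=0):
    """Write synthetic hourly readings for n_meters meters over n_days days,
    sampling each hour from the learned profile. A single-machine sketch;
    the paper's generator applies the same idea on Spark to scale out."""
    rng = np.random.default_rng(seed)
    with open(out_path, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["meter_id", "day", "hour", "kwh"])
        for m in range(n_meters):
            scale = rng.uniform(0.5, 1.5)          # per-household size factor
            for d in range(n_days):
                for h in range(24):
                    mu, sd = profile[h]
                    kwh = max(0.0, rng.normal(mu * scale, sd))
                    w.writerow([m, d, h, round(kwh, 3)])

# toy "real" data: higher usage in the evening hours
rng = np.random.default_rng(1)
real = {h: rng.normal(0.3 + (0.8 if 17 <= h <= 22 else 0.0), 0.1, 200)
        for h in range(24)}
generate(fit_hourly_profile(real), n_meters=5, n_days=2,
         out_path="synthetic_readings.csv")
```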

  8. The EDRN knowledge environment: an open source, scalable informatics platform for biological sciences research

    Science.gov (United States)

    Crichton, Daniel; Mahabal, Ashish; Anton, Kristen; Cinquini, Luca; Colbert, Maureen; Djorgovski, S. George; Kincaid, Heather; Kelly, Sean; Liu, David

    2017-05-01

    We describe here the Early Detection Research Network (EDRN) for Cancer's knowledge environment. It is an open source platform built by NASA's Jet Propulsion Laboratory with contributions from the California Institute of Technology and the Geisel School of Medicine at Dartmouth. It uses tools like Apache OODT, Plone, and Solr, and borrows heavily from JPL's Planetary Data System's ontological infrastructure. It has accumulated data on hundreds of thousands of biospecimens and serves over 1300 registered users across the National Cancer Institute (NCI). The scalable computing infrastructure is built such that we are able to reach out to other agencies, provide homogeneous access, and provide seamless analytics support and bioinformatics tools through community engagement.

  9. Network-aware scalable video monitoring system for emergency situations with operator-managed fidelity control

    Science.gov (United States)

    Al Hadhrami, Tawfik; Nightingale, James M.; Wang, Qi; Grecos, Christos

    2014-05-01

    of video delivery transmits a high-quality video stream including all available scalable layers using the most reliable routes through the mesh network ensuring the highest possible video quality. The proposed scheme is implemented in a proven simulator, and the performance of the proposed system is numerically evaluated through extensive simulations. We further present an in-depth analysis of the proposed solutions and potential approaches towards supporting high-quality visual communications in such a demanding context.

  10. NWChem: A comprehensive and scalable open-source solution for large scale molecular simulations

    Science.gov (United States)

    Valiev, M.; Bylaska, E. J.; Govind, N.; Kowalski, K.; Straatsma, T. P.; Van Dam, H. J. J.; Wang, D.; Nieplocha, J.; Apra, E.; Windus, T. L.; de Jong, W. A.

    2010-09-01

    The latest release of NWChem delivers an open-source computational chemistry package with extensive capabilities for large scale simulations of chemical and biological systems. Utilizing a common computational framework, diverse theoretical descriptions can be used to provide the best solution for a given scientific problem. Scalable parallel implementations and modular software design enable efficient utilization of current computational architectures. This paper provides an overview of NWChem focusing primarily on the core theoretical modules provided by the code and their parallel performance.
    Program summary
    Program title: NWChem
    Catalogue identifier: AEGI_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGI_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Open Source Educational Community License
    No. of lines in distributed program, including test data, etc.: 11 709 543
    No. of bytes in distributed program, including test data, etc.: 680 696 106
    Distribution format: tar.gz
    Programming language: Fortran 77, C
    Computer: all Linux based workstations and parallel supercomputers, Windows and Apple machines
    Operating system: Linux, OS X, Windows
    Has the code been vectorised or parallelized?: Code is parallelized
    Classification: 2.1, 2.2, 3, 7.3, 7.7, 16.1, 16.2, 16.3, 16.10, 16.13
    Nature of problem: Large-scale atomistic simulations of chemical and biological systems require efficient and reliable methods for ground and excited solutions of many-electron Hamiltonian, analysis of the potential energy surface, and dynamics.
    Solution method: Ground and excited solutions of many-electron Hamiltonian are obtained utilizing density-functional theory, many-body perturbation approach, and coupled cluster expansion. These solutions or a combination thereof with classical descriptions are then used to analyze potential energy surface and perform dynamical simulations.
    Additional comments: Full

  11. Scalability of Semi-Implicit Time Integrators for Nonhydrostatic Galerkin-based Atmospheric Models on Large Scale Cluster

    Science.gov (United States)

    2011-01-01

    present performance statistics to explain the scalability behavior. Keywords: atmospheric models, time integrators, MPI, scalability, performance. ... With the solution vector q = (ρ′, uᵀ, θ′), Eq. (1) is written in condensed form as ∂q/∂t = S(q) (Eq. (2)).
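
    As a hedged illustration only (this excerpt does not state the exact scheme used), semi-implicit time integrators for nonhydrostatic atmospheric models typically split the right-hand side S(q) into a stiff linear part L, carrying the fast acoustic and gravity waves and treated implicitly, and the remaining part N, treated explicitly. A first-order version of such a step can be written in LaTeX as

        \frac{\partial q}{\partial t} = S(q) = L(q) + N(q),
        \qquad
        \frac{q^{n+1} - q^{n}}{\Delta t} = L\!\left(q^{n+1}\right) + N\!\left(q^{n}\right),

    so that each step costs one linear solve for q^{n+1} but is not bound by the explicit stability limit of the fast waves, which is what makes such schemes attractive at large scale.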

  12. A Low-Power Scalable Stream Compute Accelerator for General Matrix Multiply (GEMM)

    Directory of Open Access Journals (Sweden)

    Antony Savich

    2014-01-01

    play an important role in determining the performance of such applications. This paper proposes a novel, efficient, highly scalable hardware accelerator that is of equivalent performance to a 2 GHz quad core PC but can be used in low-power applications targeting embedded systems requiring high performance computation. Power, performance, and resource consumption are demonstrated on a fully-functional prototype. The proposed hardware accelerator is 36× more energy efficient per unit of computation compared to a state-of-the-art Xeon processor of equal vintage and is 14× more efficient as a stand-alone platform with equivalent performance. An important comparison between simulated system estimates and real system performance is carried out.
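
    For context only (a generic sketch, not this accelerator's architecture): GEMM engines of this kind typically stream blocks of the operands through small local buffers so that every fetched element is reused many times before it is discarded. The Python below shows the blocked loop structure that such hardware maps onto its buffers and multiply-accumulate array; the block size and matrices are hypothetical.

        import numpy as np

        def blocked_gemm(a, b, block=4):
            """C = A @ B computed block by block, reusing each loaded tile many times."""
            n, k = a.shape
            k2, m = b.shape
            assert k == k2
            c = np.zeros((n, m))
            for i0 in range(0, n, block):
                for j0 in range(0, m, block):
                    for p0 in range(0, k, block):  # accumulate over the shared dimension
                        c[i0:i0 + block, j0:j0 + block] += (
                            a[i0:i0 + block, p0:p0 + block] @ b[p0:p0 + block, j0:j0 + block]
                        )
            return c

        a, b = np.random.rand(8, 8), np.random.rand(8, 8)
        assert np.allclose(blocked_gemm(a, b), a @ b)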

  13. Scalable Visualization, applied to Galaxies, Oceans & Brains

    Science.gov (United States)

    Pailthorpe, Bernard

    2001-06-01

    The frontiers of Scientific Visualisation now include problems arising with data that scales in size or complexity. New metaphors may be needed to navigate, analyse and display the data emerging from biodiversity, genomic and socio-economic studies. This talk addresses the challenges in generating algorithms and software libraries which are suitable for the large scale data emerging from tera-scale simulations and instruments. With larger and more complex datasets, moving into the 100GB-1TB realm, scalable methodologies and tools are required. The collaborative efforts to address these challenges, currently underway at the San Diego Supercomputer Center and within the National Partnership for Advanced Computational Infrastructure (NPACI), will be summarised. The ultimate aim of this R&D program is to facilitate queries and analysis of multiple, large data sets derived from motivating applications in astrophysics, planetary-scale oceanographic simulations and human brain mapping. Research challenges in such science application domains provide the justification for developing such tools. Previously, planetary-scale oceanographic simulations had resolutions limited to 2 deg. latitude and longitude. With teraflop computing resources coming on line, such simulations will be conducted at 10x (and presently 100x) resolution, soon yielding multiple sets of 100 GByte numerical output. In mapping the human brain, up to four distinct imaging modalities are used, with datasets already at 10s of GBytes. The immediate research challenge is to composite these images, facilitating simultaneous analysis of structural and functional information. These applications manifest the need for high-capacity computer displays, moving beyond the usual 1 mega-pixel desktops to 10 M-pixel and more. Developments in this area will be discussed.

  14. Integer-Linear-Programing Optimization in Scalable Video Multicast with Adaptive Modulation and Coding in Wireless Networks

    Directory of Open Access Journals (Sweden)

    Dongyul Lee

    2014-01-01

    The advancement of wideband wireless networks supports real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to the wireless multicast service is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm.
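
    As a simplified sketch of how this kind of assignment problem is commonly posed as an ILP (hypothetical layer bitrates, utilities, MCS rates and user capabilities; not the paper's exact formulation), using the PuLP modelling library: each layer gets exactly one MCS, total airtime is bounded, and a user counts a layer as received only if the layer's MCS is decodable for that user and all lower layers are also received.

        from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary, value

        layers = [0, 1, 2]                       # SVC layers: base + two enhancements
        bitrate = {0: 1.0, 1: 1.5, 2: 2.5}       # Mbit/s needed per layer (hypothetical)
        utility = {0: 10, 1: 4, 2: 2}            # quality gain per decoded layer (hypothetical)
        mcs_rate = {0: 2.0, 1: 4.0, 2: 8.0}      # Mbit/s per unit airtime for each MCS
        user_cap = {"u1": 2, "u2": 1, "u3": 0}   # highest MCS index each user can decode

        prob = LpProblem("svc_multicast_assignment", LpMaximize)
        x = LpVariable.dicts("x", (layers, mcs_rate), cat=LpBinary)   # layer l sent with MCS m
        y = LpVariable.dicts("y", (user_cap, layers), cat=LpBinary)   # user u decodes layer l

        # Objective: total utility over users and the layers they can decode.
        prob += lpSum(utility[l] * y[u][l] for u in user_cap for l in layers)

        for l in layers:
            prob += lpSum(x[l][m] for m in mcs_rate) == 1             # exactly one MCS per layer

        # Airtime budget: sending layer l with MCS m takes bitrate[l] / mcs_rate[m] of the frame.
        prob += lpSum((bitrate[l] / mcs_rate[m]) * x[l][m]
                      for l in layers for m in mcs_rate) <= 1

        for u in user_cap:
            for l in layers:
                # decodable only if the chosen MCS is within the user's capability...
                prob += y[u][l] <= lpSum(x[l][m] for m in mcs_rate if m <= user_cap[u])
                if l > 0:                                             # ...and layering is cumulative
                    prob += y[u][l] <= y[u][l - 1]

        prob.solve()
        print("objective:", value(prob.objective))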

  15. Scalable process for mitigation of laser-damaged potassium dihydrogen phosphate crystal optic surfaces with removal of damaged antireflective coating.

    Science.gov (United States)

    Elhadj, S; Steele, W A; VanBlarcom, D S; Hawley, R A; Schaffers, K I; Geraghty, P

    2017-03-10

    We investigate an approach for the recycling of laser-damaged large-aperture deuterated potassium dihydrogen phosphate (DKDP) crystals used for optical switching (KDP) and for frequency conversion (DKDP) in megajoule-class high-power laser systems. The approach consists of micromachining the surface laser damage sites (mitigation), combined with multiple soaks and ultrasonication steps in a coating solvent to remove, synergistically, both the highly adherent machining debris and the laser-damage-affected antireflection coating. We identify features of the laser-damage-affected coating, such as the "solvent-persistent" coating and the "burned-in" coating, that are difficult to remove by conventional approaches without damaging the surface. We also provide a solution to the erosion problem identified in this work when colloidal coatings are processed during ultrasonication. Finally, we provide a proof of principle of the approach by testing the full process that includes laser damage mitigation of DKDP test parts, coat stripping, reapplication of a new antireflective coat, and a laser damage test demonstrating performance up to at least 12 J/cm2 at UV wavelengths, which is well above current requirements. This approach ultimately provides a potential path to a scalable recycling loop for the management of optics in large, high-power laser systems that can reduce cost and extend the lifetime of highly valuable and difficult-to-grow large DKDP crystals.

  16. Integer-linear-programing optimization in scalable video multicast with adaptive modulation and coding in wireless networks.

    Science.gov (United States)

    Lee, Dongyul; Lee, Chaewoo

    2014-01-01

    The advancement of wideband wireless networks supports real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to the wireless multicast service is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm.

  17. A Hardware-Efficient Scalable Spike Sorting Neural Signal Processor Module for Implantable High-Channel-Count Brain Machine Interfaces.

    Science.gov (United States)

    Yang, Yuning; Boling, Sam; Mason, Andrew J

    2017-08-01

    Next-generation brain machine interfaces demand a high-channel-count neural recording system to wirelessly monitor the activities of thousands of neurons. A hardware-efficient neural signal processor (NSP) is greatly desirable to ease the data bandwidth bottleneck for a fully implantable wireless neural recording system. This paper demonstrates a complete multichannel spike sorting NSP module that incorporates all of the necessary spike detector, feature extractor, and spike classifier blocks. To meet high-channel-count and implantability demands, each block was designed to be highly hardware efficient and scalable while sharing resources efficiently among multiple channels. To process multiple channels in parallel, a scalability analysis was performed, and the utilization of each block was optimized according to its input data statistics and the power, area and/or speed of each block. Based on this analysis, a prototype scalable 32-channel spike sorting NSP module was designed and tested on an FPGA using synthesized datasets over a wide range of signal-to-noise ratios. The design was mapped to 130 nm CMOS to achieve 0.75 μW power and 0.023 mm2 area consumption per channel based on post-synthesis simulation results, which permits scaling of the digital processing to 690 channels on a 4×4 mm2 electrode array.
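
    As a hedged, software-level illustration of the three blocks named above (detector, feature extractor, classifier), rather than of the paper's hardware design, a minimal spike-sorting pass over one channel could look like the following Python; the threshold, window length, synthetic signal and unit centroids are all hypothetical.

        import numpy as np

        def detect_spikes(x, thresh, win=32):
            """Threshold-crossing detector: return fixed-length windows around crossings."""
            idx = np.where((x[1:] > thresh) & (x[:-1] <= thresh))[0] + 1
            return [x[i:i + win] for i in idx if i + win <= len(x)]

        def extract_features(spike):
            """Tiny feature vector: peak amplitude and peak-to-trough offset (in samples)."""
            return np.array([spike.max(), np.argmin(spike) - np.argmax(spike)])

        def classify(feat, centroids):
            """Nearest-centroid assignment of the spike to a putative unit."""
            return int(np.argmin([np.linalg.norm(feat - c) for c in centroids]))

        rng = np.random.default_rng(0)
        signal = rng.normal(0, 1, 10_000)
        signal[2_000:2_032] += 8 * np.hanning(32)                   # inject one synthetic spike
        centroids = [np.array([5.0, 10.0]), np.array([9.0, 4.0])]   # hypothetical unit templates
        for s in detect_spikes(signal, thresh=4.0):
            print("unit", classify(extract_features(s), centroids))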

  18. Scalable Learning for Geostatistics and Speaker Recognition

    Science.gov (United States)

    2011-01-01

    Performance of WMW-statistic based ranking: a GPU-based approach vs. the linear algorithm in [71]. Compute Unified Device Architecture (CUDA) [63] is a parallel programming model that leverages the parallel compute engine in NVIDIA GPUs to solve general-purpose problems.

  19. Quicksilver: Middleware for Scalable Self-Regenerative Systems

    Science.gov (United States)

    2006-04-01

    standard best practice in the area, and hence helped us identify problems that can be justified in terms of real user needs. Our own group may write a ... semantics, generally lack efficient, scalable implementations. Systems approaches usually lack a precise formal specification, limiting the

  20. Scalable learning of probabilistic latent models for collaborative filtering

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre

    2015-01-01

    Collaborative filtering has emerged as a popular way of making user recommendations, but with the increasing sizes of the underlying databases, scalability is becoming a crucial issue. In this paper we focus on a recently proposed probabilistic collaborative filtering model that explicitly...

  1. PSOM2—partitioning-based scalable ontology matching using ...

    Indian Academy of Sciences (India)

    B Sathiya

    2017-11-16

    The growth and use of the semantic web has led to a drastic increase in the size, heterogeneity and number of ontologies that are available on the web. Correspondingly, scalable ontology matching algorithms that will eliminate the heterogeneity among large ontologies have become a necessity.

  2. Cognition-inspired Descriptors for Scalable Cover Song Retrieval

    NARCIS (Netherlands)

    van Balen, J.M.H.; Bountouridis, D.; Wiering, F.; Veltkamp, R.C.

    2014-01-01

    Inspired by representations used in music cognition studies and computational musicology, we propose three simple and interpretable descriptors for use in mid- to high-level computational analysis of musical audio and applications in content-based retrieval. We also argue that the task of scalable

  3. Scalable Directed Self-Assembly Using Ultrasound Waves

    Science.gov (United States)

    2015-09-04

    at Aberdeen Proving Grounds (APG), to discuss a possible collaboration. The idea is to integrate the ultrasound directed self-assembly technique ... difference between the ultrasound technology studied in this project, and other directed self-assembly techniques is its scalability and ... deliverable: A scientific tool to predict particle organization, pattern, and orientation, based on the operating and design parameters of the ultrasound

  4. Coilable Crystalline Fiber (CCF) Lasers and their Scalability

    Science.gov (United States)

    2014-03-01

    highly power scalable, nearly diffraction-limited output laser. ... lasers, but their composition (glass) poses significant disadvantages in pump absorption, gain, and thermal conductivity. All-crystalline fiber lasers

  5. Efficient Enhancement for Spatial Scalable Video Coding Transmission

    Directory of Open Access Journals (Sweden)

    Mayada Khairy

    2017-01-01

    Scalable Video Coding (SVC) is an international standard technique for video compression. It is an extension of H.264 Advanced Video Coding (AVC). In encoding video streams with SVC, it is suitable to employ the macroblock (MB) mode because it affords superior coding efficiency. However, the exhaustive mode decision technique that is usually used for SVC increases the computational complexity, resulting in a longer encoding time (ET). Many other algorithms have been proposed to solve this problem, at the expense of increased transmission time (TT) across the network. To minimize the ET and TT, this paper introduces four efficient algorithms based on spatial scalability. The algorithms utilize the mode-distribution correlation between the base layer (BL) and enhancement layers (ELs) and interpolation between the EL frames. The proposed algorithms fall into two categories. Those of the first category are based on interlayer residual SVC spatial scalability. They employ two methods, namely, interlayer interpolation (ILIP) and the interlayer base mode (ILBM) method, and enable ET and TT savings of up to 69.3% and 83.6%, respectively. The algorithms of the second category are based on full-search SVC spatial scalability. They utilize two methods, namely, full interpolation (FIP) and the full-base mode (FBM) method, and enable ET and TT savings of up to 55.3% and 76.6%, respectively.

  6. Scalable power selection method for wireless mesh networks

    CSIR Research Space (South Africa)

    Olwal, TO

    2009-01-01

    This paper addresses the problem of a scalable dynamic power control (SDPC) for wireless mesh networks (WMNs) based on IEEE 802.11 standards. An SDPC model that accounts for architectural complexities witnessed in multiple radios and hops...

  7. Estimates of the Sampling Distribution of Scalability Coefficient H

    Science.gov (United States)

    Van Onna, Marieke J. H.

    2004-01-01

    Coefficient "H" is used as an index of scalability in nonparametric item response theory (NIRT). It indicates the degree to which a set of items rank orders examinees. Theoretical sampling distributions, however, have only been derived asymptotically and only under restrictive conditions. Bootstrap methods offer an alternative possibility to…

  8. Mid-level providers in emergency obstetric and newborn health care: factors affecting their performance and retention within the Malawian health system

    Directory of Open Access Journals (Sweden)

    McAuliffe Eilish

    2009-02-01

    Background Malawi has a chronic shortage of human resources for health. This has a significant impact on maternal health, with mortality rates amongst the highest in the world. Mid-level cadres of health workers provide the bulk of emergency obstetric and neonatal care. In this context these cadres are defined as those who undertake roles and tasks that are more usually the province of internationally recognised cadres, such as doctors and nurses. While there have been several studies addressing retention factors for doctors and registered nurses, data and studies addressing the perceptions of these mid-level cadres on the factors that influence their performance and retention within health care systems are scarce. Methods This exploratory qualitative study took place in four rural mission hospitals in Malawi. The study population was mid-level providers of emergency obstetric and neonatal care. Focus group discussions took place with nursing and medical cadres. Semi-structured interviews with key human resources, training and administrative personnel were used to provide context and background. Data were analysed using a framework analysis. Results Participants confirmed the difficulties of their working conditions and the clear commitment they have to serving the rural Malawian population. Although insufficient financial remuneration had a negative impact on retention and performance, the main factors identified were limited opportunities for career development and further education (particularly for clinical officers) and inadequate or non-existent human resources management systems. The lack of performance-related rewards and recognition were perceived to be particularly demotivating. Conclusion Mid-level cadres are being used to stem Africa's brain drain. It is in the interests of both the government and mission organizations to protect their investment in these workers. For optimal performance and quality of care they need to be

  9. Mid-level providers in emergency obstetric and newborn health care: factors affecting their performance and retention within the Malawian health system.

    Science.gov (United States)

    Bradley, Susan; McAuliffe, Eilish

    2009-02-19

    Malawi has a chronic shortage of human resources for health. This has a significant impact on maternal health, with mortality rates amongst the highest in the world. Mid-level cadres of health workers provide the bulk of emergency obstetric and neonatal care. In this context these cadres are defined as those who undertake roles and tasks that are more usually the province of internationally recognised cadres, such as doctors and nurses. While there have been several studies addressing retention factors for doctors and registered nurses, data and studies addressing the perceptions of these mid-level cadres on the factors that influence their performance and retention within health care systems are scarce. This exploratory qualitative study took place in four rural mission hospitals in Malawi. The study population was mid-level providers of emergency obstetric and neonatal care. Focus group discussions took place with nursing and medical cadres. Semi-structured interviews with key human resources, training and administrative personnel were used to provide context and background. Data were analysed using a framework analysis. Participants confirmed the difficulties of their working conditions and the clear commitment they have to serving the rural Malawian population. Although insufficient financial remuneration had a negative impact on retention and performance, the main factors identified were limited opportunities for career development and further education (particularly for clinical officers) and inadequate or non-existent human resources management systems. The lack of performance-related rewards and recognition were perceived to be particularly demotivating. Mid-level cadres are being used to stem Africa's brain drain. It is in the interests of both the government and mission organizations to protect their investment in these workers. For optimal performance and quality of care they need to be supported and properly motivated. A structured system of continuing

  10. Scalable printed electronics: an organic decoder addressing ferroelectric non-volatile memory.

    Science.gov (United States)

    Ng, Tse Nga; Schwartz, David E; Lavery, Leah L; Whiting, Gregory L; Russo, Beverly; Krusor, Brent; Veres, Janos; Bröms, Per; Herlogsson, Lars; Alam, Naveed; Hagel, Olle; Nilsson, Jakob; Karlsson, Christer

    2012-01-01

    Scalable circuits of organic logic and memory are realized using all-additive printing processes. A 3-bit organic complementary decoder is fabricated and used to read and write non-volatile, rewritable ferroelectric memory. The decoder-memory array is patterned by inkjet and gravure printing on flexible plastics. Simulation models for the organic transistors are developed, enabling circuit designs tolerant of the variations in printed devices. We explain the key design rules in fabrication of complex printed circuits and elucidate the performance requirements of materials and devices for reliable organic digital logic.
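
    Functionally (a behavioral sketch only, not the printed organic circuit itself), a 3-bit decoder raises exactly one of eight select lines for each address, which is what allows the printed logic to address individual words of the ferroelectric memory array:

        def decode_3bit(address):
            """Return the one-hot list of 8 word-line selects for a 3-bit address."""
            assert 0 <= address < 8
            return [1 if line == address else 0 for line in range(8)]

        for addr in range(8):
            print(addr, decode_3bit(addr))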

  11. Scalable printed electronics: an organic decoder addressing ferroelectric non-volatile memory

    Science.gov (United States)

    Ng, Tse Nga; Schwartz, David E.; Lavery, Leah L.; Whiting, Gregory L.; Russo, Beverly; Krusor, Brent; Veres, Janos; Bröms, Per; Herlogsson, Lars; Alam, Naveed; Hagel, Olle; Nilsson, Jakob; Karlsson, Christer

    2012-01-01

    Scalable circuits of organic logic and memory are realized using all-additive printing processes. A 3-bit organic complementary decoder is fabricated and used to read and write non-volatile, rewritable ferroelectric memory. The decoder-memory array is patterned by inkjet and gravure printing on flexible plastics. Simulation models for the organic transistors are developed, enabling circuit designs tolerant of the variations in printed devices. We explain the key design rules in fabrication of complex printed circuits and elucidate the performance requirements of materials and devices for reliable organic digital logic. PMID:22900143

  12. Convenient and Scalable Synthesis of Fmoc-Protected Peptide Nucleic Acid Backbone

    Directory of Open Access Journals (Sweden)

    Trevor A. Feagin

    2012-01-01

    The peptide nucleic acid backbone Fmoc-AEG-OBn has been synthesized via a scalable and cost-effective route. Ethylenediamine is mono-Boc protected, then alkylated with benzyl bromoacetate. The Boc group is removed and replaced with an Fmoc group. The synthesis was performed starting with 50 g of Boc anhydride to give 31 g of product in 32% overall yield. The Fmoc-protected PNA backbone is a key intermediate in the synthesis of nucleobase-modified PNA monomers. Thus, improved access to this molecule is anticipated to facilitate future investigations into the chemical properties and applications of nucleobase-modified PNA.

  13. A scalable and practical one-pass clustering algorithm for recommender system

    Science.gov (United States)

    Khalid, Asra; Ghazanfar, Mustansar Ali; Azam, Awais; Alahmari, Saad Ali

    2015-12-01

    K-Means clustering-based recommendation algorithms have been proposed with the claim that they increase the scalability of recommender systems. One potential drawback of these algorithms is that they perform training offline and hence cannot accommodate incremental updates with the arrival of new data, making them unsuitable for dynamic environments. From this line of research, a new clustering algorithm called One-Pass is proposed, which is simple, fast, and accurate. We show empirically that the proposed algorithm outperforms K-Means in terms of recommendation and training time while maintaining a good level of accuracy.
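
    Since the abstract does not spell out the algorithm, the following is only a generic one-pass (leader-style) clustering sketch of the kind used to keep recommender models updatable as data arrive: each vector is assigned to the nearest existing centroid if it lies within a radius, otherwise it seeds a new cluster, and centroids are updated incrementally so the data are never revisited. The radius and data below are hypothetical, and this is not necessarily the authors' One-Pass algorithm.

        import numpy as np

        def one_pass_cluster(vectors, radius=1.0):
            """Single sweep: assign each vector to the nearest centroid within `radius`,
            updating that centroid incrementally, otherwise start a new cluster."""
            centroids, counts, labels = [], [], []
            for v in vectors:
                if centroids:
                    d = [np.linalg.norm(v - c) for c in centroids]
                    j = int(np.argmin(d))
                if not centroids or d[j] > radius:
                    centroids.append(v.astype(float))
                    counts.append(1)
                    labels.append(len(centroids) - 1)
                else:
                    counts[j] += 1
                    centroids[j] += (v - centroids[j]) / counts[j]   # running-mean update
                    labels.append(j)
            return labels, centroids

        rng = np.random.default_rng(1)
        ratings = rng.random((200, 5))            # hypothetical user-rating vectors
        labels, centroids = one_pass_cluster(ratings, radius=0.8)
        print(len(centroids), "clusters")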

  14. Evaluating the scalability of HEP software and multi-core hardware

    CERN Document Server

    Jarp, S; Leduc, J; Nowak, A

    2011-01-01

    As researchers have reached the practical limits of processor performance improvements by frequency scaling, it is clear that the future of computing lies in the effective utilization of parallel and multi-core architectures. Since this significant change in computing is well underway, it is vital for HEP programmers to understand the scalability of their software on modern hardware and the opportunities for potential improvements. This work aims to quantify the benefit of new mainstream architectures to the HEP community through practical benchmarking on recent hardware solutions, including the usage of parallelized HEP applications.

  15. High performance data transfer

    Science.gov (United States)

    Cottrell, R.; Fang, C.; Hanushevsky, A.; Kreuger, W.; Yang, W.

    2017-10-01

    The exponentially increasing need for high speed data transfer is driven by big data and cloud computing, together with the needs of data-intensive science, High Performance Computing (HPC), defense, the oil and gas industry, etc. We report on the Zettar ZX software. This has been developed since 2013 to meet these growing needs by providing high performance data transfer and encryption in a scalable, balanced, easy to deploy and use way, while minimizing power and space utilization. In collaboration with several commercial vendors, Proofs of Concept (PoC) consisting of clusters have been put together using off-the-shelf components to test the ZX scalability and ability to balance services using multiple cores and links. The PoCs are based on SSD flash storage that is managed by a parallel file system. Each cluster occupies 4 rack units. Using the PoCs, we have achieved between clusters almost 200 Gbps memory to memory over two 100 Gbps links, and 70 Gbps parallel file to parallel file with encryption over a 5000 mile 100 Gbps link.

  16. Scalable Molecular Dynamics for Large Biomolecular Systems

    Directory of Open Access Journals (Sweden)

    Robert K. Brunner

    2000-01-01

    We present an optimized parallelization scheme for molecular dynamics simulations of large biomolecular systems, implemented in the production-quality molecular dynamics program NAMD. With an object-based hybrid force and spatial decomposition scheme, and an aggressive measurement-based predictive load balancing framework, we have attained speeds and speedups that are much higher than any reported in literature so far. The paper first summarizes the broad methodology we are pursuing, and the basic parallelization scheme we used. It then describes the optimizations that were instrumental in increasing performance, and presents performance results on benchmark simulations.

  17. External muscle heating during warm-up does not provide added performance benefit above external heating in the recovery period alone.

    Science.gov (United States)

    Faulkner, Steve H; Ferguson, Richard A; Hodder, Simon G; Havenith, George

    2013-11-01

    Having previously shown that the use of passive external heating between warm-up completion and sprint cycling had a positive effect on muscle temperature (Tm) and maximal sprint performance, we sought to determine whether adding passive heating during the active warm-up was of further benefit. Ten trained male cyclists completed a standardised 15 min sprint-based warm-up on a cycle ergometer, followed by 30 min of passive recovery before completing a 30 s maximal sprint test. The warm-up was completed either with or without additional external passive heating. During recovery, external passive leg heating was used in both the standard warm-up (CONHOT) and heated warm-up (HOTHOT) conditions; for control, a standard tracksuit was worn (CON). Tm declined exponentially during CON; CONHOT and HOTHOT reduced this exponential decline during recovery. Peak (11.1%, 1561 ± 258 W and 1542 ± 223 W), relative (10.6%, 21.0 ± 2.2 W kg(-1) and 20.9 ± 1.8 W kg(-1)) and mean (4.1%, 734 ± 126 W and 729 ± 125 W) power were all improved with CONHOT and HOTHOT, respectively, compared to CON (1397 ± 239 W; 18.9 ± 3.0 W kg(-1) and 701 ± 109 W). There was no additional benefit of HOTHOT on Tm or sprint performance compared to CONHOT. External heating during an active warm-up does not provide additional physiological or performance benefit. As noted previously, external heating is capable of reducing the rate of decline in Tm after an active warm-up, improving subsequent sprint cycling performance.

  18. How can information systems provide support to nurses' hand hygiene performance? Using gamification and indoor location to improve hand hygiene awareness and reduce hospital infections.

    Science.gov (United States)

    Marques, Rita; Gregório, João; Pinheiro, Fernando; Póvoa, Pedro; da Silva, Miguel Mira; Lapão, Luís Velez

    2017-01-31

    Hospital-acquired infections are still amongst the major problems health systems are facing. Their occurrence can lead to higher morbidity and mortality rates, increased length of hospital stay, and higher costs for both hospital and patients. Performing hand hygiene is a simple and inexpensive prevention measure, but healthcare workers' compliance with it is often far from ideal. To raise awareness regarding hand hygiene compliance, individual behaviour change and performance optimization, we aimed to develop a gamification solution that collects data and provides real-time feedback accurately in a fun and engaging way. A Design Science Research Methodology (DSRM) was used to conduct this work. DSRM is useful to study the link between research and professional practices by designing, implementing and evaluating artifacts that address a specific need. It follows a development cycle (or iteration) composed by six activities. Two work iterations were performed applying gamification components, each using a different indoor location technology. Preliminary experiments, simulations and field studies were performed in an Intensive Care Unit (ICU) of a Portuguese tertiary hospital. Nurses working on this ICU were in a focus group during the research, participating in several sessions across the implementation process. Nurses enjoyed the concept and considered that it allows for a unique opportunity to receive feedback regarding their performance. Tests performed on the indoor location technology applied in the first iteration regarding distances estimation presented an unacceptable lack of accuracy. Using a proximity-based technique, it was possible to identify the sequence of positions, but beacons presented an unstable behaviour. In the second work iteration, a different indoor location technology was explored but it did not work properly, so there was no chance of testing the solution as a whole (gamification application included). Combining automated monitoring

  19. Scalable multi-core model checking

    NARCIS (Netherlands)

    Laarman, Alfons

    2014-01-01

    Our modern society relies increasingly on the sound performance of digital systems. Guaranteeing that these systems actually behave correctly according to their specification is not a trivial task, yet it is essential for mission-critical systems like auto-pilots, (nuclear) power-plant controllers

  20. Scalable, non-invasive glucose sensor based on boronic acid functionalized carbon nanotube transistors

    Science.gov (United States)

    Lerner, Mitchell B.; Kybert, Nicholas; Mendoza, Ryan; Villechenon, Romain; Bonilla Lopez, Manuel A.; Charlie Johnson, A. T.

    2013-05-01

    We developed a scalable, label-free all-electronic sensor for D-glucose based on a carbon nanotube transistor functionalized with pyrene-1-boronic acid. This sensor responds to glucose in the range 1 μM-100 mM, which includes typical glucose concentrations in human blood and saliva. Control experiments establish that functionalization with the boronic acid provides high sensitivity and selectivity for glucose. The devices show better sensitivity than commercial blood glucose meters and could represent a general strategy to bloodless glucose monitoring by detecting low concentrations of glucose in saliva.