WorldWideScience

Sample records for providing scalable performance

  1. Scalable Performance Measurement and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Gamblin, Todd [Univ. of North Carolina, Chapel Hill, NC (United States)

    2009-01-01

    Concurrency levels in large-scale, distributed-memory supercomputers are rising exponentially. Modern machines may contain 100,000 or more microprocessor cores, and the largest of these, IBM's Blue Gene/L, contains over 200,000 cores. Future systems are expected to support millions of concurrent tasks. In this dissertation, we focus on efficient techniques for measuring and analyzing the performance of applications running on very large parallel machines. Tuning the performance of large-scale applications can be a subtle and time-consuming task because application developers must measure and interpret data from many independent processes. While the volume of the raw data scales linearly with the number of tasks in the running system, the number of tasks is growing exponentially, and data for even small systems quickly becomes unmanageable. Transporting performance data from so many processes over a network can perturb application performance and make measurements inaccurate, and storing such data would require a prohibitive amount of space. Moreover, even if it were stored, analyzing the data would be extremely time-consuming. In this dissertation, we present novel methods for reducing performance data volume. The first draws on multi-scale wavelet techniques from signal processing to compress systemwide, time-varying load-balance data. The second uses statistical sampling to select a small subset of running processes to generate low-volume traces. A third approach combines sampling and wavelet compression to stratify performance data adaptively at run-time and to reduce further the cost of sampled tracing. We have integrated these approaches into Libra, a toolset for scalable load-balance analysis. We present Libra and show how it can be used to analyze data from large scientific applications scalably.
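
    The wavelet idea named above can be sketched compactly. Below is a minimal, illustrative C++ example (not Libra's actual code; all names are invented): one level of a Haar transform splits a per-process load trace into coarse averages and fine details, and thresholding the small detail coefficients gives the compression that makes systemwide load-balance data manageable.

        #include <cmath>
        #include <cstdio>
        #include <vector>

        // One Haar level: averages capture the coarse load profile,
        // details capture fine-grained imbalance between neighboring samples.
        void haar_level(const std::vector<double>& in,
                        std::vector<double>& avg, std::vector<double>& det) {
            for (std::size_t i = 0; i + 1 < in.size(); i += 2) {
                avg.push_back((in[i] + in[i + 1]) / std::sqrt(2.0));
                det.push_back((in[i] - in[i + 1]) / std::sqrt(2.0));
            }
        }

        int main() {
            // Hypothetical per-timestep load measurements from one process.
            std::vector<double> load = {1.0, 1.1, 0.9, 1.0, 4.0, 4.2, 1.0, 0.9};
            std::vector<double> avg, det;
            haar_level(load, avg, det);
            int kept = 0;  // compression: keep only significant details
            for (double d : det) if (std::fabs(d) > 0.1) ++kept;
            std::printf("kept %d of %zu detail coefficients\n", kept, det.size());
        }

    Repeating the transform on the averages yields the multi-scale decomposition; in practice the thresholded coefficients, rather than the raw traces, are what get transported and stored.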

  2. Embedded High Performance Scalable Computing Systems

    National Research Council Canada - National Science Library

    Ngo, David

    2003-01-01

    The Embedded High Performance Scalable Computing Systems (EHPSCS) program is a cooperative agreement between Sanders, A Lockheed Martin Company and DARPA that ran for three years, from Apr 1995 - Apr 1998...

  3. Software performance and scalability a quantitative approach

    CERN Document Server

    Liu, Henry H

    2009-01-01

    Praise from the Reviewers: "The practicality of the subject in a real-world situation distinguishes this book from others available on the market."—Professor Behrouz Far, University of Calgary. "This book could replace the computer organization texts now in use that every CS and CpE student must take. . . . It is much needed, well written, and thoughtful."—Professor Larry Bernstein, Stevens Institute of Technology. A distinctive, educational text on software performance and scalability, this is the first book to take a quantitative approach to the subject of software performance and scalability.

  4. Oracle database performance and scalability a quantitative approach

    CERN Document Server

    Liu, Henry H

    2011-01-01

    A data-driven, fact-based, quantitative text on Oracle performance and scalability With database concepts and theories clearly explained in Oracle's context, readers quickly learn how to fully leverage Oracle's performance and scalability capabilities at every stage of designing and developing an Oracle-based enterprise application. The book is based on the author's more than ten years of experience working with Oracle, and is filled with dependable, tested, and proven performance optimization techniques. Oracle Database Performance and Scalability is divided into four parts that enable readers...

  5. Scalability of DL_POLY on High Performance Computing Platform

    Directory of Open Access Journals (Sweden)

    Mabule Samuel Mabakane

    2017-12-01

    This paper presents a case study on the scalability of several versions of the molecular dynamics code DL_POLY, performed on South Africa's Centre for High Performance Computing e1350 IBM Linux cluster, Sun system, and Lengau supercomputers. Within this study, different problem sizes were designed and the same chosen systems were employed in order to test the performance of DL_POLY using weak and strong scalability. It was found that the speed-up results for the small systems were better than for the large systems on both the Ethernet and InfiniBand networks. However, simulations of large systems in DL_POLY performed well using the InfiniBand network on the Lengau cluster as compared to the e1350 and Sun supercomputers.

  6. Generic algorithms for high performance scalable geocomputing

    Science.gov (United States)

    de Jong, Kor; Schmitz, Oliver; Karssenberg, Derek

    2016-04-01

    During the last decade, the characteristics of computing hardware have changed considerably. For example, instead of a single general purpose CPU core, personal computers nowadays contain multiple cores per CPU and often general purpose accelerators, like GPUs. Additionally, compute nodes are often grouped together to form clusters or a supercomputer, providing enormous amounts of compute power. For existing earth simulation models to be able to use modern hardware platforms, their compute intensive parts must be rewritten. This can be a major undertaking and may involve many technical challenges. Compute tasks must be distributed over CPU cores, offloaded to hardware accelerators, or distributed to different compute nodes. And ideally, all of this should be done in such a way that the compute task scales well with the hardware resources. This presents two challenges: 1) how to make good use of all the compute resources and 2) how to make these compute resources available for developers of simulation models, who may not (want to) have the required technical background for distributing compute tasks. The first challenge requires the use of specialized technology (e.g. threads, OpenMP, MPI, OpenCL, CUDA). The second challenge requires the abstraction of the logic handling the distribution of compute tasks from the model-specific logic, hiding the technical details from the model developer. To assist the model developer, we are developing a C++ software library (called Fern) containing algorithms that can use all CPU cores available in a single compute node (distributing tasks over multiple compute nodes will be done at a later stage). The algorithms are grid-based (finite difference) and include local and spatial operations such as convolution filters. The algorithms handle distribution of the compute tasks to CPU cores internally. In the resulting model the low-level details of how this is done are separated from the model-specific logic representing the modeled system.
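
    As a rough illustration of the abstraction Fern aims for (a sketch only; Fern's real interface may differ), a local operation can hide the distribution of per-cell work over CPU cores behind one generic function, so the model developer supplies only the cell-wise logic:

        #include <algorithm>
        #include <cstdio>
        #include <execution>
        #include <vector>

        // Apply a cell-wise function to a raster, letting the standard library
        // distribute the work over the available CPU cores (C++17 parallel algorithms).
        template <class F>
        std::vector<double> local_operation(const std::vector<double>& raster, F f) {
            std::vector<double> result(raster.size());
            std::transform(std::execution::par, raster.begin(), raster.end(),
                           result.begin(), f);
            return result;
        }

        int main() {
            std::vector<double> elevation(1000 * 1000, 100.0);
            // The model developer writes only the per-cell rule.
            auto flooded = local_operation(elevation,
                                           [](double h) { return h < 120.0 ? 1.0 : 0.0; });
            std::printf("first cell: %.1f\n", flooded[0]);
        }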

  7. Proof of Stake Blockchain: Performance and Scalability for Groupware Communications

    DEFF Research Database (Denmark)

    Spasovski, Jason; Eklund, Peter

    2017-01-01

    A blockchain is a distributed transaction ledger, a disruptive technology that creates new possibilities for digital ecosystems. The blockchain ecosystem maintains an immutable transaction record to support many types of digital services. This paper compares the performance and scalability of a web-based groupware communication application using both non-blockchain and blockchain technologies. Scalability is measured where message load is synthesized over two typical communication topologies. The first is a 1 to n network -- a typical client-server or star topology with a central vertex (server) receiving all messages from the remaining n - 1 vertices (clients). The second is a more naturally occurring scale-free network topology, where multiple communication hubs are distributed throughout the network. System performance is tested with both blockchain and non-blockchain solutions using multiple cloud computing...

  8. High-performance scalable Information Service for the ATLAS experiment

    CERN Document Server

    Kolos, S; The ATLAS collaboration; Hauser, R

    2012-01-01

    The ATLAS experiment is operated by a highly distributed computing system which constantly produces a large amount of status information, used to monitor the experiment's operational conditions as well as to assess the quality of the physics data being taken. For example, the ATLAS High Level Trigger (HLT) algorithms are executed on the online computing farm consisting of about 1500 nodes. Each HLT algorithm produces a few thousand histograms, which have to be integrated over the whole farm and carefully analyzed in order to properly tune the event rejection. In order to handle such non-physics data, the Information Service (IS) facility has been developed in the scope of the ATLAS TDAQ project. The IS provides a high-performance, scalable solution for information exchange in a distributed environment. In the course of an ATLAS data-taking session, the IS handles about a hundred gigabytes of information which is constantly updated, with the update interval varying from a second to a few tens of seconds. IS ...

  9. High-Performance Scalable Information Service for the ATLAS Experiment

    International Nuclear Information System (INIS)

    Kolos, S; Boutsioukis, G; Hauser, R

    2012-01-01

    The ATLAS[1] experiment is operated by a highly distributed computing system which constantly produces a large amount of status information, used to monitor the experiment's operational conditions as well as to assess the quality of the physics data being taken. For example, the ATLAS High Level Trigger (HLT) algorithms are executed on the online computing farm consisting of about 1500 nodes. Each HLT algorithm produces a few thousand histograms, which have to be integrated over the whole farm and carefully analyzed in order to properly tune the event rejection. In order to handle such non-physics data, the Information Service (IS) facility has been developed in the scope of the ATLAS Trigger and Data Acquisition (TDAQ)[2] project. The IS provides a high-performance, scalable solution for information exchange in a distributed environment. In the course of an ATLAS data-taking session, the IS handles about a hundred gigabytes of information which is constantly updated, with the update interval varying from a second to a few tens of seconds. IS provides access to any information item on request as well as distributing notifications to all the information subscribers. In the latter case, IS subscribers receive information within a few milliseconds after it was updated. IS can handle arbitrary types of information, including histograms produced by the HLT applications, and provides C++, Java and Python APIs. The Information Service is a unique source of information for the majority of the online monitoring analysis and GUI applications used to control and monitor the ATLAS experiment. The Information Service provides streaming functionality allowing efficient replication of all or part of the managed information. This functionality is used to duplicate the subset of the ATLAS monitoring data to the CERN public network with a latency of a few milliseconds, allowing efficient real-time monitoring of the data taking from outside the protected ATLAS network. Each information ...
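
    The publish/subscribe pattern at the heart of such an information service can be pictured with a toy registry (purely illustrative; the real IS C++ API is different): providers update named information items, and subscribers registered for an item are called back when it changes.

        #include <cstdio>
        #include <functional>
        #include <map>
        #include <string>
        #include <vector>

        // Toy in-process information service: named items, callback subscribers.
        class InfoService {
            std::map<std::string, std::vector<std::function<void(double)>>> subs_;
        public:
            void subscribe(const std::string& item, std::function<void(double)> cb) {
                subs_[item].push_back(std::move(cb));
            }
            void publish(const std::string& item, double value) {
                for (auto& cb : subs_[item]) cb(value);  // notify every subscriber
            }
        };

        int main() {
            InfoService is;
            is.subscribe("HLT.rejection_rate",
                         [](double v) { std::printf("update: %.2f\n", v); });
            is.publish("HLT.rejection_rate", 0.97);  // subscribers see the new value
        }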

  10. Palacios and Kitten : high performance operating systems for scalable virtualized and native supercomputing.

    Energy Technology Data Exchange (ETDEWEB)

    Widener, Patrick (University of New Mexico); Jaconette, Steven (Northwestern University); Bridges, Patrick G. (University of New Mexico); Xia, Lei (Northwestern University); Dinda, Peter (Northwestern University); Cui, Zheng; Lange, John (Northwestern University); Hudson, Trammell B.; Levenhagen, Michael J.; Pedretti, Kevin Thomas Tauke; Brightwell, Ronald Brian

    2009-09-01

    Palacios and Kitten are new open source tools that enable applications, whether ported or not, to achieve scalable high performance on large machines. They provide a thin layer over the hardware to support both full-featured virtualized environments and native code bases. Kitten is an OS under development at Sandia that implements a lightweight kernel architecture to provide predictable behavior and increased flexibility on large machines, while also providing Linux binary compatibility. Palacios is a VMM that is under development at Northwestern University and the University of New Mexico. Palacios, which can be embedded into Kitten and other OSes, supports existing, unmodified applications and operating systems by using virtualization that leverages hardware technologies. We describe the design and implementation of both Kitten and Palacios. Our benchmarks show that they provide near native, scalable performance. Palacios and Kitten provide an incremental path to using supercomputer resources that is not performance-compromised.

  11. Performance-scalable volumetric data classification for online industrial inspection

    Science.gov (United States)

    Abraham, Aby J.; Sadki, Mustapha; Lea, R. M.

    2002-03-01

    Non-intrusive inspection and non-destructive testing of manufactured objects with complex internal structures typically requires the enhancement, analysis and visualization of high-resolution volumetric data. Given the increasing availability of fast 3D scanning technology (e.g. cone-beam CT), enabling on-line detection and accurate discrimination of components or sub-structures, the inherent complexity of classification algorithms inevitably leads to throughput bottlenecks. Indeed, whereas typical inspection throughput requirements range from 1 to 1000 volumes per hour, depending on density and resolution, current computational capability is one to two orders-of-magnitude less. Accordingly, speeding up classification algorithms requires both reduction of algorithm complexity and acceleration of computer performance. A shape-based classification algorithm, offering algorithm complexity reduction, by using ellipses as generic descriptors of solids-of-revolution, and supporting performance-scalability, by exploiting the inherent parallelism of volumetric data, is presented. A two-stage variant of the classical Hough transform is used for ellipse detection and correlation of the detected ellipses facilitates position-, scale- and orientation-invariant component classification. Performance-scalability is achieved cost-effectively by accelerating a PC host with one or more COTS (Commercial-Off-The-Shelf) PCI multiprocessor cards. Experimental results are reported to demonstrate the feasibility and cost-effectiveness of the data-parallel classification algorithm for on-line industrial inspection applications.

  12. High-performance, scalable optical network-on-chip architectures

    Science.gov (United States)

    Tan, Xianfang

    The rapid advance of technology enables a large number of processing cores to be integrated into a single chip, which is called a Chip Multiprocessor (CMP) or a Multiprocessor System-on-Chip (MPSoC) design. The on-chip interconnection network, which is the communication infrastructure for these processing cores, plays a central role in a many-core system. With the continuously increasing complexity of many-core systems, traditional metallic wired electronic networks-on-chip (NoC) have become a bottleneck because of the unbearable latency in data transmission and extremely high energy consumption on chip. Optical networks-on-chip (ONoC) have been proposed as a promising alternative paradigm for electronic NoC, with the benefits of optical signaling communication such as extremely high bandwidth, negligible latency, and low power consumption. This dissertation focuses on the design of high-performance and scalable ONoC architectures, and the contributions are highlighted as follows: 1. A micro-ring resonator (MRR)-based Generic Wavelength-routed Optical Router (GWOR) is proposed. A method for developing any sized GWOR is introduced. GWOR is a scalable non-blocking ONoC architecture with simple structure, low cost and high power efficiency compared to existing ONoC designs. 2. To expand the bandwidth and improve the fault tolerance of the GWOR, a redundant GWOR architecture is designed by cascading different types of GWORs into one network. 3. The redundant GWOR built with MRR-based comb switches is proposed. Comb switches can expand the bandwidth while keeping the topology of the GWOR unchanged by replacing the general MRRs with comb switches. 4. A butterfly fat tree (BFT)-based hybrid optoelectronic NoC (HONoC) architecture is developed, in which GWORs are used for global communication and electronic routers are used for local communication. The proposed HONoC uses fewer electronic routers and links than its electronic BFT-based NoC counterpart. It takes advantage of ...

  13. A Cross-Platform Infrastructure for Scalable Runtime Application Performance Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Jack Dongarra; Shirley Moore; Bart Miller, Jeffrey Hollingsworth; Tracy Rafferty

    2005-03-15

    The purpose of this project was to build an extensible cross-platform infrastructure to facilitate the development of accurate and portable performance analysis tools for current and future high performance computing (HPC) architectures. Major accomplishments include tools and techniques for multidimensional performance analysis, as well as improved support for dynamic performance monitoring of multithreaded and multiprocess applications. Previous performance tool development has been limited by the burden of having to re-write a platform-dependent low-level substrate for each architecture/operating system pair in order to obtain the necessary performance data from the system. Manual interpretation of performance data is not scalable for large-scale long-running applications. The infrastructure developed by this project provides a foundation for building portable and scalable performance analysis tools, with the end goal being to provide application developers with the information they need to analyze, understand, and tune the performance of terascale applications on HPC architectures. The backend portion of the infrastructure provides runtime instrumentation capability and access to hardware performance counters, with thread-safety for shared memory environments and a communication substrate to support instrumentation of multiprocess and distributed programs. Front-end interfaces provide tool developers with a well-defined, platform-independent set of calls for requesting performance data. End-user tools have been developed that demonstrate runtime data collection, on-line and off-line analysis of performance data, and multidimensional performance analysis. The infrastructure is based on two underlying performance instrumentation technologies. These technologies are the PAPI cross-platform library interface to hardware performance counters and the cross-platform Dyninst library interface for runtime modification of executable images. The Paradyn and KOJAK ...
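
    Of the two technologies named, PAPI is the counter-access layer; a minimal sketch of its low-level C interface (error handling omitted for brevity) reads two hardware counters around a measured region:

        #include <cstdio>
        #include <papi.h>

        int main() {
            PAPI_library_init(PAPI_VER_CURRENT);
            int es = PAPI_NULL;
            PAPI_create_eventset(&es);
            PAPI_add_event(es, PAPI_TOT_INS);  // total instructions
            PAPI_add_event(es, PAPI_TOT_CYC);  // total cycles

            long long counts[2];
            PAPI_start(es);
            volatile double x = 0.0;
            for (int i = 0; i < 1000000; ++i) x = x + i * 0.5;  // measured region
            PAPI_stop(es, counts);

            std::printf("instructions=%lld cycles=%lld\n", counts[0], counts[1]);
        }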

  14. Providing Availability, Performance, and Scalability By Using Cloud Database

    OpenAIRE

    Prof. Dr. Alaa Hussein Al-Hamami; Rafal Adeeb Al-Khashab

    2014-01-01

    With the development of the internet, new techniques and concepts have drawn the attention of all internet users, especially in the development of information technology; one such concept is the cloud. Cloud computing includes different components, of which the cloud database has become an important one. A cloud database is a distributed database that delivers computing as a service, or in the form of a virtual machine image instead of a product, via the internet; its advantage is that the database can...

  15. Scalable High Performance Message Passing over InfiniBand for Open MPI

    Energy Technology Data Exchange (ETDEWEB)

    Friedley, A; Hoefler, T; Leininger, M L; Lumsdaine, A

    2007-10-24

    InfiniBand (IB) is a popular network technology for modern high-performance computing systems. MPI implementations traditionally support IB using a reliable, connection-oriented (RC) transport. However, per-process resource usage that grows linearly with the number of processes makes this approach prohibitive for large-scale systems. IB provides an alternative in the form of a connectionless unreliable datagram transport (UD), which allows for near-constant resource usage and initialization overhead as the process count increases. This paper describes a UD-based implementation for IB in Open MPI as a scalable alternative to existing RC-based schemes. We use the software reliability capabilities of Open MPI to provide the guaranteed delivery semantics required by MPI. Results show that UD not only requires fewer resources at scale, but also allows for shorter MPI startup times. A connectionless model also improves performance for applications that tend to send small messages to many different processes.
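
    The software reliability layer can be pictured as sender-side bookkeeping: each datagram carries a sequence number, unacknowledged payloads are retained, and a timeout triggers retransmission. The sketch below shows only the general idea (it is not Open MPI's implementation; the datagram_send calls are stand-ins for the UD transport):

        #include <cstdio>
        #include <map>
        #include <string>

        // Toy sender-side reliability over an unreliable datagram transport.
        struct ReliableSender {
            int next_seq = 0;
            std::map<int, std::string> unacked;  // seq -> payload awaiting ack

            int send(const std::string& payload) {
                int seq = next_seq++;
                unacked[seq] = payload;           // hold until acknowledged
                // datagram_send(seq, payload);   // may be silently dropped
                return seq;
            }
            void on_ack(int seq) { unacked.erase(seq); }
            void on_timeout() {                   // resend everything still pending
                for (auto& [seq, payload] : unacked)
                    std::printf("retransmit seq=%d (%s)\n", seq, payload.c_str());
            }
        };

        int main() {
            ReliableSender s;
            s.send("eager fragment");
            s.on_timeout();  // no ack arrived: the message goes out again
            s.on_ack(0);     // delivered: stop retransmitting
        }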

  16. Frontier: High Performance Database Access Using Standard Web Components in a Scalable Multi-Tier Architecture

    International Nuclear Information System (INIS)

    Kosyakov, S.; Kowalkowski, J.; Litvintsev, D.; Lueking, L.; Paterno, M.; White, S.P.; Autio, Lauri; Blumenfeld, B.; Maksimovic, P.; Mathis, M.

    2004-01-01

    A high performance system has been assembled using standard web components to deliver database information to a large number of broadly distributed clients. The CDF Experiment at Fermilab is establishing processing centers around the world, imposing a high demand on their database repository. For delivering read-only data, such as calibrations, trigger information, and run conditions data, we have abstracted the interface that clients use to retrieve data objects. A middle tier is deployed that translates client requests into database-specific queries and returns the data to the client as XML datagrams. The database connection management, request translation, and data encoding are accomplished in servlets running under Tomcat. Squid Proxy caching layers are deployed near the Tomcat servers, as well as close to the clients, to significantly reduce the load on the database and provide a scalable deployment model. Details of the system's construction and use are presented, including its architecture, design, interfaces, administration, performance measurements, and deployment plan.

  17. Highly scalable and robust rule learner: performance evaluation and comparison.

    Science.gov (United States)

    Kurgan, Lukasz A; Cios, Krzysztof J; Dick, Scott

    2006-02-01

    Business intelligence and bioinformatics applications increasingly require the mining of datasets consisting of millions of data points, or crafting real-time enterprise-level decision support systems for large corporations and drug companies. In all cases, there needs to be an underlying data mining system, and this mining system must be highly scalable. To this end, we describe a new rule learner called DataSqueezer. The learner belongs to the family of inductive supervised rule extraction algorithms. DataSqueezer is a simple, greedy, rule builder that generates a set of production rules from labeled input data. In spite of its relative simplicity, DataSqueezer is a very effective learner. The rules generated by the algorithm are compact, comprehensible, and have accuracy comparable to rules generated by other state-of-the-art rule extraction algorithms. The main advantages of DataSqueezer are very high efficiency, and missing data resistance. DataSqueezer exhibits log-linear asymptotic complexity with the number of training examples, and it is faster than other state-of-the-art rule learners. The learner is also robust to large quantities of missing data, as verified by extensive experimental comparison with the other learners. DataSqueezer is thus well suited to modern data mining and business intelligence tasks, which commonly involve huge datasets with a large fraction of missing data.

  18. StagBL : A Scalable, Portable, High-Performance Discretization and Solver Layer for Geodynamic Simulation

    Science.gov (United States)

    Sanan, P.; Tackley, P. J.; Gerya, T.; Kaus, B. J. P.; May, D.

    2017-12-01

    StagBL is an open-source parallel solver and discretization library for geodynamic simulation, encapsulating and optimizing operations essential to staggered-grid finite volume Stokes flow solvers. It provides a parallel staggered-grid abstraction with a high-level interface in C and Fortran. On top of this abstraction, tools are available to define boundary conditions and interact with particle systems. Tools and examples to efficiently solve Stokes systems defined on the grid are provided in small (direct solver), medium (simple preconditioners), and large (block factorization and multigrid) model regimes. By working directly with leading application codes (StagYY, I3ELVIS, and LaMEM) and providing an API and examples to integrate with others, StagBL aims to become a community tool supplying scalable, portable, reproducible performance toward novel science in regional- and planet-scale geodynamics and planetary science. By implementing kernels used by many research groups beneath a uniform abstraction layer, the library will enable optimization for modern hardware, thus reducing community barriers to large- or extreme-scale parallel simulation on modern architectures. In particular, the library will include CPU-, Manycore-, and GPU-optimized variants of matrix-free operators and multigrid components. The common layer provides a framework upon which to introduce innovative new tools. StagBL will leverage p4est to provide distributed adaptive meshes, and incorporate a multigrid convergence analysis tool. These options, in addition to a wealth of solver options provided by an interface to PETSc, will make the most modern solution techniques available from a common interface. StagBL in turn provides a PETSc interface, DMStag, to its central staggered grid abstraction. We present public version 0.5 of StagBL, including preliminary integration with application codes and demonstrations with its own demonstration application, StagBLDemo. Central to StagBL is the notion of an ...
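
    The staggered-grid abstraction at StagBL's core places unknowns at different mesh locations: pressure at cell centers, velocity components on cell faces. A toy layout (illustrative only; StagBL's actual interface is the DMStag abstraction mentioned above) makes the bookkeeping concrete:

        #include <cstdio>
        #include <vector>

        // Toy 2D staggered grid: pressure at cell centers, x-velocity on
        // vertical faces, y-velocity on horizontal faces.
        struct StaggeredGrid {
            int nx, ny;
            std::vector<double> p, vx, vy;
            StaggeredGrid(int nx_, int ny_)
                : nx(nx_), ny(ny_),
                  p(nx_ * ny_), vx((nx_ + 1) * ny_), vy(nx_ * (ny_ + 1)) {}
            double& pressure(int i, int j) { return p[j * nx + i]; }
            double& xvel(int i, int j)     { return vx[j * (nx + 1) + i]; }  // i in [0, nx]
            double& yvel(int i, int j)     { return vy[j * nx + i]; }        // j in [0, ny]
        };

        int main() {
            StaggeredGrid g(8, 8);
            g.pressure(3, 3) = 1.0;
            // The discrete divergence at a cell couples its four face velocities.
            double div = g.xvel(4, 3) - g.xvel(3, 3) + g.yvel(3, 4) - g.yvel(3, 3);
            std::printf("div = %f\n", div);
        }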

  19. Evaluating Nonclinical Performance of the Academic Pathologist: A Comprehensive, Scalable, and Flexible System for Leadership Use.

    Science.gov (United States)

    Wiles, Austin Blackburn; Idowu, Michael O; Clevenger, Charles V; Powers, Celeste N

    2018-01-01

    Academic pathologists perform clinical duties, as well as valuable nonclinical activities. Nonclinical activities may consist of research, teaching, and administrative management among many other important tasks. While clinical duties have many clear metrics to measure productivity, like the relative value units of Medicare reimbursement, nonclinical performance is often difficult to measure. Despite the difficulty of evaluating nonclinical activities, nonclinical productivity is used to determine promotion, funding, and inform professional evaluations of performance. In order to better evaluate the important nonclinical performance of academic pathologists, we present an evaluation system for leadership use. This system uses a Microsoft Excel workbook to provide academic pathologist respondents and reviewing leadership a transparent, easy-to-complete system that is both flexible and scalable. This system provides real-time feedback to academic pathologist respondents and a clear executive summary that allows for focused guidance of the respondent. This system may be adapted to fit practices of varying size, measure performance differently based on years of experience, and can work with many different institutional values.

  20. Scalable devices

    KAUST Repository

    Krüger, Jens J.; Hadwiger, Markus

    2014-01-01

    In computer science in general and in particular the field of high performance computing and supercomputing the term scalable plays an important role. It indicates that a piece of hardware, a concept, an algorithm, or an entire system scales...

  1. WOMBAT: A Scalable and High-performance Astrophysical Magnetohydrodynamics Code

    Energy Technology Data Exchange (ETDEWEB)

    Mendygral, P. J.; Radcliffe, N.; Kandalla, K. [Cray Inc., St. Paul, MN 55101 (United States); Porter, D. [Minnesota Supercomputing Institute for Advanced Computational Research, Minneapolis, MN USA (United States); O’Neill, B. J.; Nolting, C.; Donnert, J. M. F.; Jones, T. W. [School of Physics and Astronomy, University of Minnesota, Minneapolis, MN 55455 (United States); Edmon, P., E-mail: pjm@cray.com, E-mail: nradclif@cray.com, E-mail: kkandalla@cray.com, E-mail: oneill@astro.umn.edu, E-mail: nolt0040@umn.edu, E-mail: donnert@ira.inaf.it, E-mail: twj@umn.edu, E-mail: dhp@umn.edu, E-mail: pedmon@cfa.harvard.edu [Institute for Theory and Computation, Center for Astrophysics, Harvard University, Cambridge, MA 02138 (United States)

    2017-02-01

    We present a new code for astrophysical magnetohydrodynamics specifically designed and optimized for high performance and scaling on modern and future supercomputers. We describe a novel hybrid OpenMP/MPI programming model that emerged from a collaboration between Cray, Inc. and the University of Minnesota. This design utilizes MPI-RMA optimized for thread scaling, which allows the code to run extremely efficiently at very high thread counts ideal for the latest generation of multi-core and many-core architectures. Such performance characteristics are needed in the era of “exascale” computing. We describe and demonstrate our high-performance design in detail with the intent that it may be used as a model for other, future astrophysical codes intended for applications demanding exceptional performance.
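
    The hybrid OpenMP/MPI pattern the paper builds on can be sketched in a few lines (a generic skeleton, not WOMBAT's code; WOMBAT additionally layers thread-scalable MPI-RMA on top of this): the process requests full thread support so that OpenMP threads may participate in communication, then spawns a thread team per rank.

        #include <cstdio>
        #include <mpi.h>
        #include <omp.h>

        int main(int argc, char** argv) {
            // Request MPI_THREAD_MULTIPLE so any thread may make MPI calls.
            int provided = 0;
            MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

            int rank = 0;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            #pragma omp parallel
            {
                // Each thread works on its slice of the rank-local domain.
                std::printf("rank %d thread %d of %d\n",
                            rank, omp_get_thread_num(), omp_get_num_threads());
            }
            MPI_Finalize();
        }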

  2. WOMBAT: A Scalable and High-performance Astrophysical Magnetohydrodynamics Code

    International Nuclear Information System (INIS)

    Mendygral, P. J.; Radcliffe, N.; Kandalla, K.; Porter, D.; O’Neill, B. J.; Nolting, C.; Donnert, J. M. F.; Jones, T. W.; Edmon, P.

    2017-01-01

    We present a new code for astrophysical magnetohydrodynamics specifically designed and optimized for high performance and scaling on modern and future supercomputers. We describe a novel hybrid OpenMP/MPI programming model that emerged from a collaboration between Cray, Inc. and the University of Minnesota. This design utilizes MPI-RMA optimized for thread scaling, which allows the code to run extremely efficiently at very high thread counts ideal for the latest generation of multi-core and many-core architectures. Such performance characteristics are needed in the era of “exascale” computing. We describe and demonstrate our high-performance design in detail with the intent that it may be used as a model for other, future astrophysical codes intended for applications demanding exceptional performance.

  3. Performance and Scalability of Blockchain Networks and Smart Contracts

    OpenAIRE

    Scherer, Mattias

    2017-01-01

    The blockchain technology started as the innovation that powered the cryptocurrency Bitcoin. But in recent years, leaders in finance, banking, and many more companies have given this new innovation more attention than ever before. They seek a new technology to replace their systems, which are often inefficient and costly to operate. However, one of the reasons why it is not possible to use a blockchain right away is its poor performance. Public blockchains, where anyone can participate, ...

  4. Scalability of DL_POLY on High Performance Computing Platform

    CSIR Research Space (South Africa)

    Mabakane, Mabule S

    2017-12-01

    SACJ 29(3) December... when using many processors within the compute nodes of the supercomputer. The type of the processors of the compute nodes and their memory also play an important role in the overall performance of a parallel application running on a supercomputer. DL...

  5. Pursuit of a scalable high performance multi-petabyte database

    CERN Document Server

    Hanushevsky, A

    1999-01-01

    When the BaBar experiment at the Stanford Linear Accelerator Center starts in April 1999, it will generate approximately 200 TB/year of data at a rate of 10 MB/sec for 10 years. A mere six years later, CERN, the European Laboratory for Particle Physics, will start an experiment whose data storage requirements are two orders of magnitude larger. In both experiments, all of the data will reside in Objectivity databases accessible via the Advanced Multi-threaded Server (AMS). The quantity and rate at which the data is produced requires the use of a high performance hierarchical mass storage system in place of a standard Unix file system. Furthermore, the distributed nature of the experiment, involving scientists from 80 Institutions in 10 countries, also requires an extended security infrastructure not commonly found in standard Unix file systems. The combination of challenges that must be overcome in order to effectively deal with a multi-petabyte object oriented database is substantial. Our particular approach...

  6. Scalable domain decomposition solvers for stochastic PDEs in high performance computing

    International Nuclear Information System (INIS)

    Desai, Ajit; Pettit, Chris; Poirel, Dominique; Sarkar, Abhijit

    2017-01-01

    Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. Although these algorithms exhibit excellent scalability, significant algorithmic and implementation challenges exist in extending them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having a spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.

  7. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy.

    Science.gov (United States)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-19

    One of the key challenges in three-dimensional (3D) medical imaging is to enable fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth, due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
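
    For reference, Amdahl's law in its basic form (the paper applies a variant for symmetric multicore chips) gives the speedup of a code with parallelizable fraction f on n cores:

        S(n) = \frac{1}{(1 - f) + f/n}

    The reported 12-fold improvement on 12 cores corresponds to f close to 1, i.e. the platform keeps the serial fraction of the 3D-MIP pipeline very small.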

  8. Design and thermal performances of a scalable linear Fresnel reflector solar system

    International Nuclear Information System (INIS)

    Zhu, Yanqing; Shi, Jifu; Li, Yujian; Wang, Leilei; Huang, Qizhang; Xu, Gang

    2017-01-01

    Highlights: • A scalable linear Fresnel reflector which can supply different temperatures is proposed. • Inclination design of the mechanical structure is used to reduce the end losses. • A maximum thermal efficiency of 64% is achieved in Guangzhou. - Abstract: This paper proposes a scalable linear Fresnel reflector (SLFR) solar system. The optical mirror field, which contains an array of linear flat mirrors placed close to each other, is designed to eliminate inter-row shading and blocking. A scalable mechanical mirror support, which can hold different numbers of mirrors, is designed to supply different temperatures. The mechanical structure can be inclined to reduce the end losses. Finally, the thermal efficiency of the SLFR with two stages of mirrors is tested. After adjustment, a maximum thermal efficiency of 64% was obtained, and the mean thermal efficiency was higher than that before adjustment. The results indicate that the end losses were reduced effectively by the inclination design and that excellent thermal performance can be obtained by the SLFR after adjustment.

  9. Scalable devices

    KAUST Repository

    Krüger, Jens J.

    2014-01-01

    In computer science in general and in particular the field of high performance computing and supercomputing the term scalable plays an important role. It indicates that a piece of hardware, a concept, an algorithm, or an entire system scales with the size of the problem, i.e., it can not only be used in a very specific setting but it is applicable for a wide range of problems, from small scenarios to possibly very large settings. In this spirit, there exist a number of fixed areas of research on scalability. There are works on scalable algorithms and scalable architectures, but what are scalable devices? In the context of this chapter, we are interested in a whole range of display devices, ranging from small scale hardware such as tablet computers, pads, smart-phones etc. up to large tiled display walls. What interests us is not so much the hardware setup but the visualization algorithms behind these display systems that scale from your average smart phone up to the largest gigapixel display walls.

  10. SAME4HPC: A Promising Approach in Building a Scalable and Mobile Environment for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Karthik, Rajasekar [ORNL

    2014-01-01

    In this paper, an architecture for building a Scalable And Mobile Environment for High-Performance Computing with spatial capabilities, called SAME4HPC, is described using cutting-edge technologies and standards such as Node.js, HTML5, ECMAScript 6, and PostgreSQL 9.4. Mobile devices are increasingly becoming powerful enough to run high-performance apps. At the same time, there exist a significant number of low-end and older devices that rely heavily on the server or the cloud infrastructure to do the heavy lifting. Our architecture aims to support both of these types of devices to provide high performance and a rich user experience. A cloud infrastructure consisting of OpenStack with Ubuntu, GeoServer, and high-performance JavaScript frameworks are some of the key open-source and industry-standard practices that have been adopted in this architecture.

  11. Scalable creation of gold nanostructures on high performance engineering polymeric substrate

    Science.gov (United States)

    Jia, Kun; Wang, Pan; Wei, Shiliang; Huang, Yumin; Liu, Xiaobo

    2017-12-01

    The article reveals a facile protocol for scalable production of gold nanostructures on a high performance engineering thermoplastic substrate made of polyarylene ether nitrile (PEN) for the first time. First, gold thin films with thicknesses of 2 nm, 4 nm and 6 nm were evaporated onto a spin-coated PEN substrate on a glass slide in vacuum. Next, the as-evaporated samples were thermally annealed around the glass transition temperature of the PEN substrate, creating gold nanostructures with island-like morphology. Moreover, it was found that the initial gold evaporation thickness and annealing atmosphere played an important role in determining the morphology and plasmonic properties of the formulated Au NPs. Interestingly, we discovered that isotropic Au NPs can be easily fabricated on the freestanding PEN substrate, which was prepared by a cost-effective polymer solution casting method. More specifically, monodispersed Au nanospheres with an average size of ∼60 nm were obtained after annealing a 4 nm gold film covered PEN casting substrate at 220 °C for 2 h in oxygen. Therefore, the scalable production of Au NPs with controlled morphology on PEN substrates should open the way for the development of robust flexible nanosensors and optical devices using high performance engineering polyarylene ethers.

  12. Performance and scalability analysis of teraflop-scale parallel architectures using multidimensional wavefront applications

    International Nuclear Information System (INIS)

    Hoisie, A.; Lubeck, O.; Wasserman, H.

    1998-01-01

    The authors develop a model for the parallel performance of algorithms that consist of concurrent, two-dimensional wavefronts implemented in a message passing environment. The model, based on a LogGP machine parameterization, combines the separate contributions of computation and communication wavefronts. They validate the model on three important supercomputer systems, on up to 500 processors. They use data from a deterministic particle transport application taken from the ASCI workload, although the model is general to any wavefront algorithm implemented on a 2-D processor domain. They also use the validated model to make estimates of performance and scalability of wavefront algorithms on 100-TFLOPS computer systems expected to be in existence within the next decade as part of the ASCI program and elsewhere. In this context, they analyze two problem sizes. The model shows that on the largest such problem (1 billion cells), inter-processor communication performance is not the bottleneck. Single-node efficiency is the dominant factor
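
    As background (standard LogGP definitions; the paper's exact wavefront formulation is not reproduced here), LogGP characterizes a machine by latency L, per-message overhead o, gap g, gap per byte G, and processor count P, so transmitting a k-byte message between two nodes costs approximately

        T(k) = o + (k - 1)G + L + o

    The wavefront model combines communication terms of this form with the computation time per diagonal step of the 2-D processor domain.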

  13. Scalable 2D Mesoporous Silicon Nanosheets for High-Performance Lithium-Ion Battery Anode.

    Science.gov (United States)

    Chen, Song; Chen, Zhuo; Xu, Xingyan; Cao, Chuanbao; Xia, Min; Luo, Yunjun

    2018-03-01

    Constructing unique mesoporous 2D Si nanostructures to shorten the lithium-ion diffusion pathway, facilitate interfacial charge transfer, and enlarge the electrode-electrolyte interface offers exciting opportunities in future high-performance lithium-ion batteries. However, simultaneous realization of 2D and mesoporous structures for Si material is quite difficult due to its non-van der Waals structure. Here, the coexistence of both mesoporous and 2D ultrathin nanosheets in the Si anodes and considerably high surface area (381.6 m 2 g -1 ) are successfully achieved by a scalable and cost-efficient method. After being encapsulated with the homogeneous carbon layer, the Si/C nanocomposite anodes achieve outstanding reversible capacity, high cycle stability, and excellent rate capability. In particular, the reversible capacity reaches 1072.2 mA h g -1 at 4 A g -1 even after 500 cycles. The obvious enhancements can be attributed to the synergistic effect between the unique 2D mesoporous nanostructure and carbon capsulation. Furthermore, full-cell evaluations indicate that the unique Si/C nanostructures have a great potential in the next-generation lithium-ion battery. These findings not only greatly improve the electrochemical performances of Si anode, but also shine some light on designing the unique nanomaterials for various energy devices. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Provider Customer Service Program - Performance Data

    Data.gov (United States)

    U.S. Department of Health & Human Services — CMS is continuously analyzing performance and quality of the Provider Customer Service Programs (PCSPs) of the contractors and will be identifying trends and making...

  15. Instrumentation Of The CERN Accelerator Logging Service: Ensuring Performance, Scalability, Maintenance And Diagnostics

    CERN Document Server

    Roderick, C; Dinis Teixeira, D

    2011-01-01

    The CERN accelerator Logging Service currently holds more than 90 terabytes of data online, and processes approximately 450 gigabytes per day, via hundreds of data loading processes and data extraction requests. This service is mission-critical for day-to-day operations, especially with respect to the tracking of live data from the LHC beam and equipment. In order to effectively manage any service, the service provider’s goals should include knowing how the underlying systems are being used, in terms of: “Who is doing what, from where, using which applications and methods, and how long each action takes”. Armed with such information, it is then possible to: analyse and tune system performance over time; plan for scalability ahead of time; assess the impact of maintenance operations and infrastructure upgrades; diagnose past, on-going, or re-occurring problems. The Logging Service is based on Oracle DBMS and Application Servers, and Java technology, and is comprised of several layered and multi-tiered s...

  16. Pre-coating of LSCM perovskite with metal catalyst for scalable high performance anodes

    KAUST Repository

    Boulfrad, Samir

    2013-07-01

    In this work, a highly scalable technique is proposed as an alternative to the lab-scale impregnation method. LSCM-CGO powders were pre-coated with 5 wt% of Ni from nitrates. After appropriate mixing and adequate heat treatment, the coated powders were dispersed into organic-based vehicles to form a screen-printable ink, which was deposited and fired to form SOFC anode layers. Electrochemical tests show a considerable enhancement of the pre-coated anode performance under 50 ml/min wet H2 flow, with the polarization resistance decreased from about 0.60 Ω cm2 to 0.38 Ω cm2 at 900 °C and from 6.70 Ω cm2 to 1.37 Ω cm2 at 700 °C. This is most likely due to the pre-coating process resulting in nano-scaled Ni particles with two typical sizes: from 50 to 200 nm and from 10 to 40 nm. Converging indications suggest that the latter type of particle comes from solid state solution of Ni in the LSCM phase under oxidizing conditions and exsolution as nanoparticles under reducing atmospheres. Copyright © 2013, Hydrogen Energy Publications, LLC. Published by Elsevier Ltd. All rights reserved.

  17. Provide a model to improve the performance of intrusion detection systems in the cloud

    OpenAIRE

    Foroogh Sedighi

    2016-01-01

    The high availability of tools and service providers in cloud computing, and the fact that cloud computing services are provided over the internet and deal with the public, have created important challenges for this new computing model. Cloud computing faces problems and challenges such as user privacy, data security, data ownership, availability of services, recovery after failure, performance, scalability, and programmability. So far, many different methods have been presented for the detection of intrusion in clou...

  18. The Computational Complexity, Parallel Scalability, and Performance of Atmospheric Data Assimilation Algorithms

    Science.gov (United States)

    Lyster, Peter M.; Guo, J.; Clune, T.; Larson, J. W.; Atlas, Robert (Technical Monitor)

    2001-01-01

    The computational complexity of algorithms for Four Dimensional Data Assimilation (4DDA) at NASA's Data Assimilation Office (DAO) is discussed. In 4DDA, observations are assimilated with the output of a dynamical model to generate best estimates of the states of the system. It is thus a mapping problem, whereby scattered observations are converted into regular, accurate maps of wind, temperature, moisture and other variables. The DAO is developing and using 4DDA algorithms that provide these datasets, or analyses, in support of Earth System Science research. Two large-scale algorithms are discussed. The first approach, the Goddard Earth Observing System Data Assimilation System (GEOS DAS), uses an atmospheric general circulation model (GCM) and an observation-space based analysis system, the Physical-space Statistical Analysis System (PSAS). GEOS DAS is very similar to global meteorological weather forecasting data assimilation systems, but is used at NASA for climate research. Systems of this size typically run at between 1 and 20 gigaflop/s. The second approach, the Kalman filter, uses a more consistent algorithm to determine the forecast error covariance matrix than does GEOS DAS. For atmospheric assimilation, the gridded dynamical fields typically have more than 10^6 variables; therefore the full error covariance matrix may be in excess of a teraword. For the Kalman filter this problem can easily scale to petaflop/s proportions. We discuss the computational complexity of GEOS DAS and our implementation of the Kalman filter. We also discuss and quantify some of the technical issues and limitations in developing efficient, in terms of wall clock time, and scalable parallel implementations of the algorithms.
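
    The teraword figure follows from the state dimension alone: an n-variable state carries an n-by-n forecast error covariance matrix, so

        n^2 = (10^6)^2 = 10^{12} \text{ entries} \approx 1 \text{ teraword}

    which is why storing and propagating the full covariance pushes the Kalman filter toward petaflop/s proportions.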

  19. Computational Fluid Dynamics (CFD) Computations With Zonal Navier-Stokes Flow Solver (ZNSFLOW) Common High Performance Computing Scalable Software Initiative (CHSSI) Software

    National Research Council Canada - National Science Library

    Edge, Harris

    1999-01-01

    ...), computational fluid dynamics (CFD) 6 project. Under the project, a proven zonal Navier-Stokes solver was rewritten for scalable parallel performance on both shared memory and distributed memory high performance computers...

  20. Durango: Scalable Synthetic Workload Generation for Extreme-Scale Application Performance Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Carothers, Christopher D. [Rensselaer Polytechnic Institute (RPI); Meredith, Jeremy S. [ORNL; Blanco, Marc [Rensselaer Polytechnic Institute (RPI); Vetter, Jeffrey S. [ORNL; Mubarak, Misbah [Argonne National Laboratory; LaPre, Justin [Rensselaer Polytechnic Institute (RPI); Moore, Shirley V. [ORNL

    2017-05-01

    Performance modeling of extreme-scale applications on accurate representations of potential architectures is critical for designing next generation supercomputing systems, because it is impractical to construct prototype systems at scale with new network hardware in order to explore designs and policies. However, these simulations often rely on static application traces that can be difficult to work with because of their size and lack of flexibility to extend or scale up without rerunning the original application. To address this problem, we have created a new technique for generating scalable, flexible workloads from real applications, and we have implemented a prototype, called Durango, that combines a proven analytical performance modeling language, Aspen, with the massively parallel HPC network modeling capabilities of the CODES framework. Our models are compact, parameterized and representative of real applications with computation events. They are not resource intensive to create and are portable across simulator environments. We demonstrate the utility of Durango by simulating the LULESH application in the CODES simulation environment on several topologies and show that Durango is practical to use for simulation without loss of fidelity, as quantified by simulation metrics. During our validation of Durango's generated communication model of LULESH, we found that the original LULESH miniapp code had a latent bug where the MPI_Waitall operation was used incorrectly. This finding underscores the potential need for a tool such as Durango, beyond its benefits for flexible workload generation and modeling. Additionally, we demonstrate the efficacy of Durango's direct integration approach, which links Aspen into CODES as part of the running network simulation model. Here, Aspen generates the application-level computation timing events, which in turn drive the start of a network communication phase. Results show that Durango's performance scales well when...

  1. Scalable preparation of porous micron-SnO2/C composites as high performance anode material for lithium ion battery

    Science.gov (United States)

    Wang, Ming-Shan; Lei, Ming; Wang, Zhi-Qiang; Zhao, Xing; Xu, Jun; Yang, Wei; Huang, Yun; Li, Xing

    2016-03-01

    Nano tin dioxide-carbon (SnO2/C) composites prepared by various carbon materials, such as carbon nanotubes, porous carbon, and graphene, have attracted extensive attention in wide fields. However, undesirable concerns of nanoparticles, including in higher surface area, low tap density, and self-agglomeration, greatly restricted their large-scale practical applications. In this study, novel porous micron-SnO2/C (p-SnO2/C) composites are scalable prepared by a simple hydrothermal approach using glucose as a carbon source and Pluronic F127 as a pore forming agent/soft template. The SnO2 nanoparticles were homogeneously dispersed in micron carbon spheres by assembly with F127/glucose. The continuous three-dimensional porous carbon networks have effectively provided strain relaxation for SnO2 volume expansion/shrinkage during lithium insertion/extraction. In addition, the carbon matrix could largely minimize the direct exposure of SnO2 to the electrolyte, thus ensure formation of stable solid electrolyte interface films. Moreover, the porous structure could also create efficient channels for the fast transport of lithium ions. As a consequence, the p-SnO2/C composites exhibit stable cycle performance, such as a high capacity retention of over 96% for 100 cycles at a current density of 200 mA g-1 and a long cycle life up to 800 times at a higher current density of 1000 mA g-1.

  2. Engaging service providers in improving industry performance

    International Nuclear Information System (INIS)

    Oberth, R.

    2012-01-01

    Effective task leadership is the key to achieving results in the nuclear industry and in most other industries. One of the themes of this conference is to discuss how the nuclear industry can undertake Issue-Identification and Definition as a means of 'identifying what needs attention' and then 'defining what needs to be done to make that happen'. I will explore this theme from the perspective of the 'Service Provider' - which by the definition of this conference includes everyone not within an operating utility - meaning 'those involved in everything from inspection and repair to research and plant architecture' - basically the member companies of my association, OCI. Our members take the definition of the roles and responsibilities of the 'Service Provider' community very seriously. In the context of this discussion a key utility function is the early definition of requirements and expectations of Service Providers in supplying to these requirements. Let's explore for a moment the Service Provider role and perspective. Service Providers are by nature pro-active - they seek ways to engage with utilities (and tier one vendors) to solve problems and achieve good outcomes. They come to industry conferences like this one to learn about upcoming utility programs and supply opportunities and how they can improve performance. Service Providers particularly want to hear senior utility people comment on emerging issues even those at the very early identification stage. Some Clarification of Roles is in Order - as that is the focus of this conference: 'Issue-Identification and Definition'. 'Issue-Identification' is the utility's job - it is the utility's role to identify as early as possible 'what needs attention and what their needs and expectations are'. This takes place before service provider engagement. 'Issue-Definition' is more challenging. It means 'determining and prioritizing what needs to be done to deal with the situation at hand'. This typically involves

  3. Scalable optical quantum computer

    Energy Technology Data Exchange (ETDEWEB)

    Manykin, E A; Mel'nichenko, E V [Institute for Superconductivity and Solid-State Physics, Russian Research Centre 'Kurchatov Institute', Moscow (Russian Federation)

    2014-12-31

    A way of designing a scalable optical quantum computer based on the photon echo effect is proposed. Individual rare earth ions Pr3+, regularly located in the lattice of the orthosilicate (Y2SiO5) crystal, are suggested to be used as optical qubits. Operations with qubits are performed using coherent and incoherent laser pulses. The operation protocol includes both the method of measurement-based quantum computations and the technique of optical computations. Modern hybrid photon echo protocols, which provide a sufficient quantum efficiency when reading recorded states, are considered as most promising for quantum computations and communications. (quantum computer)

  4. Scalable optical quantum computer

    International Nuclear Information System (INIS)

    Manykin, E A; Mel'nichenko, E V

    2014-01-01

    A way of designing a scalable optical quantum computer based on the photon echo effect is proposed. Individual rare earth ions Pr3+, regularly located in the lattice of the orthosilicate (Y2SiO5) crystal, are suggested to be used as optical qubits. Operations with qubits are performed using coherent and incoherent laser pulses. The operation protocol includes both the method of measurement-based quantum computations and the technique of optical computations. Modern hybrid photon echo protocols, which provide a sufficient quantum efficiency when reading recorded states, are considered as most promising for quantum computations and communications. (quantum computer)

  5. Building a Community Infrastructure for Scalable On-Line Performance Analysis Tools around Open|SpeedShop

    Energy Technology Data Exchange (ETDEWEB)

    Galarowicz, James E. [Krell Institute, Ames, IA (United States); Miller, Barton P. [Univ. of Wisconsin, Madison, WI (United States). Computer Sciences Dept.; Hollingsworth, Jeffrey K. [Univ. of Maryland, College Park, MD (United States). Computer Sciences Dept.; Roth, Philip [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Future Technologies Group, Computer Science and Math Division; Schulz, Martin [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing (CASC)

    2013-12-19

    In this project we created a community tool infrastructure for program development tools targeting Petascale-class machines and beyond. This includes tools for performance analysis, debugging, and correctness, as well as tuning and optimization frameworks. The developed infrastructure provides a comprehensive and extensible set of individual tool building components. We started with the basic elements necessary across all tools in such an infrastructure, followed by a set of generic core modules that allow comprehensive performance analysis at scale. Further, we developed a methodology and workflow that allow others to add or replace modules, to integrate parts into their own tools, or to customize existing solutions. In order to form the core modules, we built on the existing Open|SpeedShop infrastructure and decomposed it into individual modules that match the necessary tool components. At the same time, we addressed the challenges found in performance tools for petascale systems in each module. When assembled, this instantiation of the community tool infrastructure provides an enhanced version of Open|SpeedShop, which, while completely different in its architecture, provides scalable performance analysis for petascale applications through a familiar interface. This project also built upon and enhanced the capabilities and reusability of project partner components as specified in the original project proposal. The overall project team's work over the project funding cycle was focused on several areas of research, which are described in the following sections. The remainder of this report also highlights related work as well as preliminary work that supported the project. In addition to the project partners funded by the Office of Science under this grant, the project team included several collaborators who contributed to the overall design of the envisioned tool infrastructure. In particular, the project team worked closely with the other two DOE NNSA

  6. Scalable and reusable emulator for evaluating the performance of SS7 networks

    Science.gov (United States)

    Lazar, Aurel A.; Tseng, Kent H.; Lim, Koon Seng; Choe, Winston

    1994-04-01

    A scalable and reusable emulator was designed and implemented for studying the behavior of SS7 networks. The emulator design was largely based on public domain software. It was developed on top of an environment supported by PVM, the Parallel Virtual Machine, and managed by OSIMIS-the OSI Management Information Service platform. The emulator runs on top of a commercially available ATM LAN interconnecting engineering workstations. As a case study for evaluating the emulator, the behavior of the Singapore National SS7 Network under fault and unbalanced loading conditions was investigated.

  7. Scalable High-Performance Ultraminiature Graphene Micro-Supercapacitors by a Hybrid Technique Combining Direct Writing and Controllable Microdroplet Transfer.

    Science.gov (United States)

    Shen, Daozhi; Zou, Guisheng; Liu, Lei; Zhao, Wenzheng; Wu, Aiping; Duley, Walter W; Zhou, Y Norman

    2018-02-14

    Miniaturization of energy storage devices can significantly decrease the overall size of electronic systems. However, this miniaturization is limited by the reduction of electrode dimensions and the reproducible transfer of small electrolyte drops. This paper first reports a simple, scalable direct-writing method for the production of ultraminiature micro-supercapacitor (MSC) electrodes, based on femtosecond-laser-reduced graphene oxide (fsrGO) interlaced pads. These pads, separated by 2 μm spacing, are 100 μm long and 8 μm wide. A second stage involves the accurate transfer of an electrolyte microdroplet on top of each individual electrode, which avoids any interference of the electrolyte with other electronic components. Abundant in-plane mesopores in fsrGO induced by the fs laser, together with the ultrashort interelectrode spacing, enable the MSCs to exhibit a high specific capacitance (6.3 mF cm−2 and 105 F cm−3) and ∼100% retention after 1000 cycles. An all-graphene resistor-capacitor (RC) filter is also constructed by combining the MSC and a fsrGO resistor, and is confirmed to exhibit highly enhanced performance characteristics. This new hybrid technique combining fs-laser direct writing and precise microdroplet transfer enables the scalable production of ultraminiature MSCs, which is believed to be significant for the practical application of micro-supercapacitors in microelectronic systems.

  8. Content-Aware Scalability-Type Selection for Rate Adaptation of Scalable Video

    Directory of Open Access Journals (Sweden)

    Tekalp A Murat

    2007-01-01

    Scalable video coders provide different scaling options, such as temporal, spatial, and SNR scalability, where rate reduction by discarding enhancement layers of different scalability types results in different kinds and/or levels of visual distortion depending on the content and bitrate. This dependency between scalability type, video content, and bitrate is not well investigated in the literature. To this end, we first propose an objective function that quantifies the flatness, blockiness, blurriness, and temporal jerkiness artifacts caused by rate reduction through spatial-size, frame-rate, and quantization-parameter scaling. Next, the weights of this objective function are determined for different content (shot) types and different bitrates using a training procedure with subjective evaluation. Finally, a method is proposed for choosing, for each temporal segment, the scaling type that results in minimum visual distortion according to this objective function, given the content type of the temporal segment. Two subjective tests have been performed to validate the proposed procedure for content-aware selection of the best scalability type on soccer videos. Soccer videos scaled from 600 kbps to 100 kbps by the proposed content-aware selection of scalability type were found visually superior to those scaled using a single scalability option over the whole sequence.
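
    As an illustration of the kind of objective described here, the following sketch combines per-artifact measures into a single weighted distortion score and picks the scaling type that minimizes it. All function names, artifact scores, and weights are hypothetical placeholders, not the paper's trained values.

    ```python
    # Minimal sketch of a content-weighted distortion objective for choosing a
    # scalability type; the numbers below are illustrative, not trained values.

    def distortion_score(artifacts, weights):
        """Weighted sum of artifact measures for one scaling option."""
        return sum(weights[name] * artifacts[name] for name in weights)

    # Content-dependent weights, e.g. for a long-shot soccer segment (hypothetical).
    weights_long_shot = {"flatness": 0.1, "blockiness": 0.3,
                         "blurriness": 0.2, "jerkiness": 0.4}

    # Candidate scaling options with their measured artifact levels (hypothetical).
    options = {
        "spatial":  {"flatness": 0.2, "blockiness": 0.1, "blurriness": 0.7, "jerkiness": 0.1},
        "temporal": {"flatness": 0.1, "blockiness": 0.1, "blurriness": 0.1, "jerkiness": 0.8},
        "SNR":      {"flatness": 0.5, "blockiness": 0.6, "blurriness": 0.2, "jerkiness": 0.1},
    }

    best = min(options, key=lambda o: distortion_score(options[o], weights_long_shot))
    print(best)  # scaling type with minimum predicted visual distortion
    ```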

  9. Scalable photoreactor for hydrogen production

    KAUST Repository

    Takanabe, Kazuhiro; Shinagawa, Tatsuya

    2017-01-01

    Provided herein are scalable photoreactors that can include a membrane-free water-splitting electrolyzer and systems that can include a plurality of membrane-free water-splitting electrolyzers. Also provided herein are methods of using the scalable photoreactors provided herein.

  10. Scalable photoreactor for hydrogen production

    KAUST Repository

    Takanabe, Kazuhiro

    2017-04-06

    Provided herein are scalable photoreactors that can include a membrane-free water-splitting electrolyzer and systems that can include a plurality of membrane-free water-splitting electrolyzers. Also provided herein are methods of using the scalable photoreactors provided herein.

  11. Performance and scalability of the back-end sub-system in the ATLAS DAQ/EF prototype

    CERN Document Server

    Alexandrov, I N; Badescu, E; Burckhart, Doris; Caprini, M; Cohen, L; Duval, P Y; Hart, R; Jones, R; Kazarov, A; Kolos, S; Kotov, V; Laugier, D; Mapelli, Livio P; Moneta, L; Qian, Z; Radu, A A; Ribeiro, C A; Roumiantsev, V; Ryabov, Yu; Schweiger, D; Soloviev, I V

    2000-01-01

    The DAQ group of the future ATLAS experiment has developed a prototype system based on the trigger/DAQ architecture described in the ATLAS Technical Proposal to support studies of the full system functionality and architecture, as well as of available hardware and software technologies. One sub-system of this prototype is the back-end, which encompasses the software needed to configure, control and monitor the DAQ, but excludes the processing and transportation of physics data. The back-end consists of a number of components including the run control, configuration databases and message reporting system. The software has been developed using standard, external software technologies such as OO databases and CORBA. It has been ported to several C++ compilers and operating systems including Solaris, Linux, WNT and LynxOS. This paper gives an overview of the back-end software, its performance, scalability and current status. (17 refs).

  12. Performance Evaluation of Residential Demand Response Based on a Modified Fuzzy VIKOR and Scalable Computing Method

    Directory of Open Access Journals (Sweden)

    Jun Dong

    2018-04-01

    For better utilization of renewable energy resources and improved sustainability of power systems, demand response is widely applied in China, especially in recent decades. Considering the massive potential of flexible resources in the residential sector, demand response programs are able to achieve significant benefits. This paper proposes an effective performance evaluation framework for such programs aimed at residential customers. In general, the evaluation process faces multiple criteria and some uncertain factors. Therefore, we combine the multi-criteria decision making concept and fuzzy set theory to establish the model. By introducing trapezoidal fuzzy numbers into the Vlsekriterijumska Optimizacijia I Kompromisno Resenje (VIKOR) method, the evaluation model can effectively deal with the subjectivity and fuzziness of experts' opinions. Furthermore, we improve the criteria weight determination procedure of traditional models by combining the fuzzy Analytic Hierarchy Process and the Shannon entropy method, which can incorporate both objective information and subjective judgments. Finally, the proposed evaluation framework is verified by an empirical analysis of five demand response projects in Chinese residential areas. The results give a valid performance ranking of the five alternatives and indicate that more attention should be paid to the criteria associated with technology level and economic benefits. In addition, a series of sensitivity analyses are conducted to examine the validity and effectiveness of the established evaluation framework and results. The study improves the traditional multi-criteria decision making method VIKOR by introducing trapezoidal fuzzy numbers and a combination weighting technique, which provides an effective means for performance evaluation of residential demand response programs in a fuzzy environment.
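
    Of the two weighting ingredients mentioned above, the Shannon entropy part is purely objective and easy to sketch. Below is a minimal, self-contained version of entropy-based criteria weighting; the fuzzy AHP half, the trapezoidal fuzzy numbers, and the VIKOR ranking itself are omitted, and the decision matrix is made-up data, not the paper's five projects.

    ```python
    import numpy as np

    def entropy_weights(X):
        """Objective criteria weights from Shannon entropy.

        X: decision matrix, rows = alternatives, columns = criteria
           (benefit-type, non-negative scores).
        """
        m, n = X.shape
        P = X / X.sum(axis=0)                      # column-wise normalization
        k = 1.0 / np.log(m)
        with np.errstate(divide="ignore", invalid="ignore"):
            logP = np.where(P > 0, np.log(P), 0.0)  # convention: 0*log(0) = 0
        e = -k * (P * logP).sum(axis=0)            # entropy per criterion
        d = 1.0 - e                                # degree of divergence
        return d / d.sum()                         # normalized weights

    # Five hypothetical demand response projects scored on four criteria.
    X = np.array([[0.6, 0.8, 0.5, 0.7],
                  [0.9, 0.4, 0.6, 0.5],
                  [0.7, 0.7, 0.8, 0.6],
                  [0.5, 0.9, 0.4, 0.8],
                  [0.8, 0.6, 0.7, 0.4]])
    print(entropy_weights(X))  # criteria with more dispersion get larger weights
    ```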

  13. Scalable Upcycling Silicon from Waste Slicing Sludge for High-performance Lithium-ion Battery Anodes

    International Nuclear Information System (INIS)

    Bao, Qi; Huang, Yao-Hui; Lan, Chun-Kai; Chen, Bing-Hong; Duh, Jenq-Gong

    2015-01-01

    Silicon (Si) has been perceived as a promising next-generation anode material for lithium-ion batteries (LIBs) due to its superior theoretical capacity. Despite the natural abundance of this element on Earth, large-scale production of high-purity Si nanomaterials in a green and energy-efficient way is yet to become an industrial reality. Spray-drying methods have been exploited to recover Si particles from low-value sludge produced in the photovoltaic industry, providing a massive and cost-effective Si resource for fabricating anode materials. To address drawbacks such as volume expansion, low electrical and Li+ conductivity, and unstable solid electrolyte interphase (SEI) formation, the recycled silicon particles have been downsized to the nanoscale and shielded by a highly conductive and protective graphene multilayer through high-energy ball milling. Cyclic voltammetry and electrochemical impedance spectroscopy measurements have revealed that the graphene wrapping and size reduction significantly improve the electrochemical performance. The material delivers an excellent reversible capacity of 1,138 mA h g−1 and a long cycle life with 73% capacity retention over 150 cycles at a high current of 450 mA g−1. The plentiful waste conversion methodology also provides considerable opportunities for developing additional rechargeable devices as well as ceramic, powder metallurgy and silane/siloxane products.

  14. High performance flexible metal oxide/silver nanowire based transparent conductive films by a scalable lamination-assisted solution method

    Directory of Open Access Journals (Sweden)

    Hua Yu

    2017-03-01

    Flexible MoO3/silver nanowire (AgNW)/MoO3/TiO2/epoxy electrodes with performance comparable to ITO were fabricated by a scalable solution-processed method with lamination assistance for transparent and conductive applications. Silver nanoparticle (AgNP)-based electrodes were also prepared for comparison. Using a simple spin-coating and lamination-assisted planarization method, a full solution-based approach allows preparation of AgNW-based composite electrodes at temperatures as low as 140 °C. The resulting flexible AgNW-based electrodes exhibit a higher transmittance of 82% at 550 nm and a lower sheet resistance of about 12–15 Ω sq−1, compared with values of 68% and 22–25 Ω sq−1, respectively, for the AgNP-based electrodes. Scanning electron microscopy (SEM) and atomic force microscopy (AFM) reveal that the multi-stacked metal-oxide layers embedded with the AgNWs possess low surface roughness (<15 nm). The AgNW/MoO3 composite network enhances the charge transport and collection efficiency by broadening the lateral conduction range, owing to the efficient charge-transport network built from the long nanowires. In view of the manufacturing cost, the lamination-assisted solution-processed method is cost-effective and scalable, which is desirable for large-area fabrication. Given the materials cost and comparable performance, these AgNW-based transparent conductive electrodes are a potential alternative to ITO for various optoelectronic applications.

  15. Scalable coherent interface

    International Nuclear Information System (INIS)

    Alnaes, K.; Kristiansen, E.H.; Gustavson, D.B.; James, D.V.

    1990-01-01

    The Scalable Coherent Interface (IEEE P1596) is establishing an interface standard for very high performance multiprocessors, supporting a cache-coherent-memory model scalable to systems with up to 64K nodes. This Scalable Coherent Interface (SCI) will supply a peak bandwidth per node of 1 GigaByte/second. The SCI standard should facilitate assembly of processor, memory, I/O and bus bridge cards from multiple vendors into massively parallel systems with throughput far above what is possible today. The SCI standard encompasses two levels of interface, a physical level and a logical level. The physical level specifies electrical, mechanical and thermal characteristics of connectors and cards that meet the standard. The logical level describes the address space, data transfer protocols, cache coherence mechanisms, synchronization primitives and error recovery. In this paper we address logical level issues such as packet formats, packet transmission, transaction handshake, flow control, and cache coherence. 11 refs., 10 figs

  16. PERC 2 High-End Computer System Performance: Scalable Science and Engineering

    Energy Technology Data Exchange (ETDEWEB)

    Daniel Reed

    2006-10-15

    During the two years of SciDAC PERC-2, our activities centered largely on the development of new performance analysis techniques to enable efficient use of systems containing thousands or tens of thousands of processors. In addition, we continued our application engagement efforts and utilized our tools to study the performance of various SciDAC applications on a variety of HPC platforms.

  17. Enhancing Scalability of Sparse Direct Methods

    International Nuclear Information System (INIS)

    Li, Xiaoye S.; Demmel, James; Grigori, Laura; Gu, Ming; Xia, Jianlin; Jardin, Steve; Sovinec, Carl; Lee, Lie-Quan

    2007-01-01

    TOPS is providing high-performance, scalable sparse direct solvers, which have had significant impacts on SciDAC applications, including fusion simulation (CEMM) and accelerator modeling (COMPASS), as well as many other mission-critical applications in DOE and elsewhere. Our recent developments have focused on new techniques to overcome the scalability bottlenecks of direct methods, in both time and memory. These include parallelizing the symbolic analysis phase and developing linear-complexity sparse factorization methods. The new techniques will make sparse direct methods more widely usable in large 3D simulations on highly parallel petascale computers.
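
    To make the factorize-once pattern of sparse direct methods concrete, here is a minimal sketch using SciPy's interface to SuperLU, one solver in this family; the toy 1D Laplacian stands in for the much larger matrices arising in 3D simulations.

    ```python
    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import splu

    # Small sparse system (1D Laplacian) as a stand-in for large 3D matrices.
    n = 1000
    A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
    b = np.ones(n)

    # Sparse direct method: factorize once (symbolic analysis + numeric
    # factorization happen inside), then solve cheaply for many right-hand sides.
    lu = splu(A)
    x = lu.solve(b)
    print(np.allclose(A @ x, b))
    ```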

  18. High performance SDN enabled flat data center network architecture based on scalable and flow-controlled optical switching system

    NARCIS (Netherlands)

    Calabretta, N.; Miao, W.; Dorren, H.J.S.

    2015-01-01

    We demonstrate a reconfigurable virtual datacenter network by utilizing statistical multiplexing offered by scalable and flow-controlled optical switching system. Results show QoS guarantees by the priority assignment and load balancing for applications in virtual networks.

  19. Performance of low cost scalable air-cathode microbial fuel cell made from clayware separator using multiple electrodes.

    Science.gov (United States)

    Ghadge, Anil N; Ghangrekar, Makarand M

    2015-04-01

    The performance of a scalable air-cathode microbial fuel cell (MFC) of 26 L volume, made from a clayware cylinder with multiple electrodes, was evaluated. When the electrodes were connected in parallel with a 100 Ω external resistance (R ext), a power of 11.46 mW was produced, which was 4.48 and 3.73 times higher than with an individual electrode pair and with the series connection, respectively. A Coulombic efficiency of 5.10 ± 0.13% and a chemical oxygen demand (COD) removal of 78.8 ± 5.52% were observed at an R ext of 3 Ω. Performance under different organic loading rates (OLRs), varying from 0.75 to 6.0 g COD L−1 d−1, revealed a power of 17.85 mW (47.28 mA current) at an OLR of 3.0 g COD L−1 d−1. The internal resistance (R int) of 5.2 Ω observed is among the lowest values reported in the literature. Long-term operational stability (14 months) demonstrates the technical viability of the clayware MFC for practical applications and its potential benefits for wastewater treatment and electricity recovery. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Pre-coating of LSCM perovskite with metal catalyst for scalable high performance anodes

    KAUST Repository

    Boulfrad, Samir; Cassidy, Mark; Djurado, Elisabeth; Irvine, John Ts S; Jabbour, Ghassan E.

    2013-01-01

    then dispersed into organic-based vehicles to form a screen-printable ink which was deposited and fired to form SOFC anode layers. Electrochemical tests show a considerable enhancement of the pre-coated anode performance under 50 ml/min wet H2 flow

  1. Performance and scalability of isolated DC-DC converter topologies in low voltage, high current applications

    Energy Technology Data Exchange (ETDEWEB)

    Vaisanen, V.

    2012-07-01

    Fuel cells are a promising alternative for clean and efficient energy production. A fuel cell is probably the most demanding of all distributed-generation power sources. It resembles a solar cell in many ways, but sets strict limits on current ripple, common-mode voltages and load variations. The typically low output voltage from the fuel cell stack needs to be boosted to a higher voltage level for grid interfacing. Due to the high electrical efficiency of the fuel cell, there is a need for high-efficiency power converters, and in the case of low voltage, high current and galvanic isolation, the implementation of such converters is not a trivial task. This thesis presents galvanically isolated DC-DC converter topologies that have favorable characteristics for fuel cell usage and reviews the topologies from the viewpoints of electrical efficiency and cost efficiency. The focus is on evaluating the design issues for a single converter module subject to large current stresses. The dominating loss mechanism in low voltage, high current applications is conduction losses. In the case of MOSFETs, the conduction losses can be efficiently reduced by paralleling, but in the case of diodes, the effectiveness of paralleling depends strongly on the semiconductor material, diode parameters and output configuration. The transformer winding losses can be a major source of losses if the windings are not optimized for the topology and the operating conditions. Transformer prototyping can be expensive and time consuming, and thus it is preferable to utilize various calculation methods during the design process to evaluate the performance of the transformer. This thesis reviews calculation methods for solid wire, litz wire and copper foil winding losses, and in order to evaluate the applicability of the methods, the calculations are compared against measurements and FEM simulations. By selecting a proper calculation method for each winding type, the winding
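
    As one concrete example of the class of winding-loss calculation methods surveyed in the thesis, the sketch below implements Dowell's classical one-dimensional formula for the AC-to-DC resistance ratio of a layered foil winding. It is a textbook approximation offered for illustration, not the thesis's own comparison code, and the foil thickness, layer count, and frequency are arbitrary example values.

    ```python
    import numpy as np

    MU0 = 4e-7 * np.pi
    RHO_CU = 1.72e-8  # copper resistivity [ohm*m] at 20 degC

    def skin_depth(f):
        """Skin depth in copper at frequency f [Hz]."""
        return np.sqrt(RHO_CU / (np.pi * f * MU0))

    def dowell_ac_factor(h, f, m):
        """AC/DC resistance ratio of a foil winding (Dowell's 1D method).

        h: foil thickness [m], f: frequency [Hz], m: number of layers.
        """
        d = h / skin_depth(f)  # penetration ratio
        skin = (np.sinh(2 * d) + np.sin(2 * d)) / (np.cosh(2 * d) - np.cos(2 * d))
        prox = (np.sinh(d) - np.sin(d)) / (np.cosh(d) + np.cos(d))
        return d * (skin + (2.0 * (m ** 2 - 1) / 3.0) * prox)

    # Example: 0.2 mm copper foil, 4 layers, 100 kHz switching frequency.
    print(dowell_ac_factor(0.2e-3, 100e3, 4))  # R_AC / R_DC
    ```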

  2. A facile and scalable strategy for synthesis of size-tunable NiCo2O4 with nanocoral-like architecture for high-performance supercapacitors

    International Nuclear Information System (INIS)

    Tao, Yan; Ruiyi, Li; Zaijun, Li; Yinjun, Fang

    2014-01-01

    Graphical abstract: We report a facile and scalable strategy for the synthesis of size-tunable NiCo2O4 with a nanocoral-like architecture. The unique structure improves the faradaic redox reaction and mass transfer, and the NiCo2O4 offers excellent electrochemical performance for supercapacitors. - Highlights: • We report a facile and scalable strategy for the synthesis of size-tunable NiCo2O4 with a nanocoral-like architecture. • The combination of microwave heating and tert-butanol as the medium creates ultrathin nickel/cobalt double hydroxide flower clusters. • The method is simple, rapid and efficient, and can be used for large-scale production of nanomaterials. • The size of the NiCo2O4 nanocorals can be controlled by adjusting the calcination temperature. • The unique structure enhances the rates of electron transfer and mass transport, and the NiCo2O4 shows high electrochemical performance. - Abstract: There is a great need to develop high-performance electroactive materials for supercapacitors. This study reports a facile and scalable strategy for the synthesis of size-tunable NiCo2O4 with a nanocoral-like architecture. Cobalt nitrate and nickel nitrate were dissolved in a tert-butanol solution and heated to the reflux state under microwave radiation. Ammonia was then added dropwise to the mixed solution to form nickel/cobalt double hydroxides. The reaction completes within 15 min with a yield of 99.9%. The obtained double hydroxides display a flower-cluster-like ultrathin nanostructure. The double hydroxide was calcined into different NiCo2O4 products at different calcination temperatures: 400 °C, 500 °C, 600 °C and 700 °C. The resulting NiCo2O4 has a nanocoral-like architecture. Interestingly, the size of the corals can be easily controlled by adjusting the temperature. The NiCo2O4 prepared at 400 °C gives the minimum building-block size (10.2 nm) and the maximum specific surface area (108.8 m2 g−1). The unique structure will greatly

  3. 5 CFR 9701.407 - Monitoring performance and providing feedback.

    Science.gov (United States)

    2010-01-01

    ... feedback. 9701.407 Section 9701.407 Administrative Personnel DEPARTMENT OF HOMELAND SECURITY HUMAN... performance and providing feedback. In applying the requirements of the performance management system and its... organization; and (b) Provide timely periodic feedback to employees on their actual performance with respect to...

  4. Performance and scalability of finite-difference and finite-element wave-propagation modeling on Intel's Xeon Phi

    NARCIS (Netherlands)

    Zhebel, E.; Minisini, S.; Kononov, A.; Mulder, W.A.

    2013-01-01

    With the rapid developments in parallel compute architectures, algorithms for seismic modeling and imaging need to be reconsidered in terms of parallelization. The aim of this paper is to compare scalability of seismic modeling algorithms: finite differences, continuous mass-lumped finite elements

  5. Security measures effect over performance in service provider network

    African Journals Online (AJOL)

    pc

    2018-03-05

    Abstract—Network security is defined as a set of policies and actions taken by a ... These threats are linked with the following factors that are ... typically smaller than those in the service provider space. ... Service providers cannot manage to provide ... the DB performance effect ... the business needs [10].

  6. Physical Profiling Performance of Air Force Primary Care Providers

    Science.gov (United States)

    2017-08-09

    AFRL-SA-WP-TR-2017-0014. Physical Profiling Performance of Air Force Primary Care Providers. Anthony P. Tvaryanas; William P. ... Period covered: September 2016 – January 2017. ... encounter with their primary care team. An independent medical standards subject matter expert (SME) reviewed encounters in the electronic health record

  7. On Scalability and Replicability of Smart Grid Projects—A Case Study

    Directory of Open Access Journals (Sweden)

    Lukas Sigrist

    2016-03-01

    This paper studies the scalability and replicability of smart grid projects. Currently, most smart grid projects are still in the R&D or demonstration phases. The full roll-out of the tested solutions requires a suitable degree of scalability and replicability to prevent project demonstrators from remaining local experimental exercises. Scalability and replicability are the prerequisites for performing scaling-up and replication successfully; they enable, or at least reduce the barriers to, the growth and reuse of the results of project demonstrators. The paper proposes factors that influence and condition a project's scalability and replicability. These factors involve technical, economic, regulatory and stakeholder-acceptance aspects, and they describe requirements for scalability and replicability. In order to assess and evaluate the identified scalability and replicability factors, data has been collected from European and national smart grid projects by means of a survey, reflecting the projects' views and results. The evaluation of the factors quantifies the status quo of on-going projects with respect to scalability and replicability, i.e., it provides feedback on the extent to which projects take these factors into account and on whether the projects' results and solutions are actually scalable and replicable.

  8. Design and performance verification of a wideband scalable dual-polarized probe for spherical near-field antenna measurements

    DEFF Research Database (Denmark)

    Pivnenko, Sergey; Kim, Oleksiy S.; Nielsen, Jeppe Majlund

    2012-01-01

    A wideband scalable dual-polarized probe designed by the Electromagnetic Systems group at the Technical University of Denmark is presented. The design was scaled and two probes were manufactured for the frequency bands 1-3 GHz and 0.4-1.2 GHz. The results of the acceptance tests of the 0.4-1.2 GHz probe are briefly discussed. Since these probes represent so-called higher-order antennas, the applicability of the recently developed higher-order probe correction technique [3] for these probes was investigated. Extensive tests were carried out for two representative antennas under test using...

  9. Fast volume reconstruction in positron emission tomography: Implementation of four algorithms on a high-performance scalable parallel platform

    International Nuclear Information System (INIS)

    Egger, M.L.; Scheurer, A.H.; Joseph, C.

    1996-01-01

    The issue of long reconstruction times in PET has been addressed from several points of view, resulting in an affordable dedicated system capable of handling routine 3D reconstruction in a few minutes per frame: on the hardware side using fast processors and a parallel architecture, and on the software side, using efficient implementations of computationally less intensive algorithms. Execution times obtained for the PRT-1 data set on a parallel system of five hybrid nodes, each combining an Alpha processor for computation and a transputer for communication, are the following (256 sinograms of 96 views by 128 radial samples): Ramp algorithm 56 s, Favor 81 s and reprojection algorithm of Kinahan and Rogers 187 s. The implementation of fast rebinning algorithms has shown our hardware platform to become communications-limited; they execute faster on a conventional single-processor Alpha workstation: single-slice rebinning 7 s, Fourier rebinning 22 s, 2D filtered backprojection 5 s. The scalability of the system has been demonstrated, and a saturation effect at network sizes above ten nodes has become visible; new T9000-based products lifting most of the constraints on network topology and link throughput are expected to result in improved parallel efficiency and scalability properties
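
    For reference, the 2D filtered backprojection step mentioned above can be sketched in a few lines with scikit-image. This is a plain single-processor illustration of the algorithm, not the authors' parallel Alpha/transputer implementation; the Shepp-Logan phantom and the 96-view geometry merely echo the sinogram dimensions quoted in the record.

    ```python
    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon, rescale

    # Simulate a sinogram and reconstruct with 2D filtered backprojection,
    # the same class of algorithm as the "2D filtered backprojection" above.
    image = rescale(shepp_logan_phantom(), 0.5)          # 200x200 phantom
    theta = np.linspace(0.0, 180.0, 96, endpoint=False)  # 96 views
    sinogram = radon(image, theta=theta)                 # forward projection
    recon = iradon(sinogram, theta=theta)                # ramp-filtered backprojection
    print(f"RMS error: {np.sqrt(np.mean((recon - image) ** 2)):.4f}")
    ```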

  10. Performance improvement and better scalability of wet-recessed and wet-oxidized AlGaN/GaN high electron mobility transistors

    Science.gov (United States)

    Takhar, Kuldeep; Meer, Mudassar; Upadhyay, Bhanu B.; Ganguly, Swaroop; Saha, Dipankar

    2017-05-01

    We have demonstrated that a thin layer of Al2O3, grown by wet oxidation of a wet-recessed AlGaN barrier layer in an AlGaN/GaN heterostructure, can significantly improve the performance of GaN-based high electron mobility transistors (HEMTs). The wet etching leads to a damage-free recession of the gate region and compensates for the decreased gate capacitance and increased gate leakage. The performance improvement is manifested as an increase in the saturation drain current, transconductance, and unity current gain frequency (fT). This is further augmented by a large decrease in the subthreshold current. The performance improvement is primarily ascribed to an increase in the effective velocity in the two-dimensional electron gas without sacrificing gate capacitance, which makes the wet-recessed gate oxide-HEMTs much more scalable in comparison to their conventional counterparts. The improved scalability leads to an increase in the product of unity current gain frequency and gate length (fT × Lg).

  11. Scalable fabrication of perovskite solar cells

    Energy Technology Data Exchange (ETDEWEB)

    Li, Zhen; Klein, Talysa R.; Kim, Dong Hoe; Yang, Mengjin; Berry, Joseph J.; van Hest, Maikel F. A. M.; Zhu, Kai

    2018-03-27

    Perovskite materials use earth-abundant elements, have low formation energies for deposition and are compatible with roll-to-roll and other high-volume manufacturing techniques. These features make perovskite solar cells (PSCs) suitable for terawatt-scale energy production with low production costs and low capital expenditure. Demonstrations of performance comparable to that of other thin-film photovoltaics (PVs) and improvements in laboratory-scale cell stability have recently made scale up of this PV technology an intense area of research focus. Here, we review recent progress and challenges in scaling up PSCs and related efforts to enable the terawatt-scale manufacturing and deployment of this PV technology. We discuss common device and module architectures, scalable deposition methods and progress in the scalable deposition of perovskite and charge-transport layers. We also provide an overview of device and module stability, module-level characterization techniques and techno-economic analyses of perovskite PV modules.

  12. Optical Backplane Based on Ring-Resonators: Scalability and Performance Analysis for 10 Gb/s OOK-NRZ

    Directory of Open Access Journals (Sweden)

    Giuseppe Rizzelli

    2014-05-01

    The use of architectures that implement optical switching without any need for optoelectronic conversion allows us to overcome the limits imposed by today's electronic backplanes, such as power consumption and dissipation, as well as power supply and footprint requirements. We propose a ring-resonator-based optical backplane for router line-card interconnection. In particular, we investigate how the scalability of the architecture is affected by the following parameters: number of line cards, switching-element round-trip losses, frequency drifting due to thermal variations, and waveguide-crossing effects. Moreover, to quantify the signal distortions introduced by the filtering operations, the bit error rate for the different parameter conditions is shown for an on-off keying non-return-to-zero (OOK-NRZ) input signal at 10 Gb/s.

  13. Slot-die Coating of a High Performance Copolymer in a Readily Scalable Roll Process for Polymer Solar Cells

    DEFF Research Database (Denmark)

    Helgesen, Martin; Carlé, Jon Eggert; Krebs, Frederik C

    2013-01-01

    reported compact coating/printing machine, which enables the preparation of PSCs that are directly scalable with full roll-to-roll processing. The positioning of the side-chains on the thiophene units proves to be very significant in terms of the solubility of the polymers and consequently has a major impact on the device yield and process control. The most successful processing is accomplished with the polymer PDTSTTz-4, which has the side-chains situated in the 4-position on the thiophene units. Inverted PSCs based on PDTSTTz-4 demonstrate high fill factors, up to 59%, even with active layer thicknesses well above 200 nm. Power conversion efficiencies of up to 3.5% can be reached with the roll-coated PDTSTTz-4:PCBM solar cells, which, together with good process control and high device yield, designate PDTSTTz-4 as a convincing candidate for high-throughput roll-to-roll production of PSCs.

  14. Scalable Production of the Silicon-Tin Yin-Yang Hybrid Structure with Graphene Coating for High Performance Lithium-Ion Battery Anodes.

    Science.gov (United States)

    Jin, Yan; Tan, Yingling; Hu, Xiaozhen; Zhu, Bin; Zheng, Qinghui; Zhang, Zijiao; Zhu, Guoying; Yu, Qian; Jin, Zhong; Zhu, Jia

    2017-05-10

    Alloy anodes with high theoretical capacity show great potential for next-generation advanced lithium-ion batteries. Even though the huge volume change during lithium insertion and extraction leads to severe problems, such as pulverization and an unstable solid-electrolyte interphase (SEI), various nanostructures including nanoparticles, nanowires, and porous networks can address the related challenges and improve electrochemical performance. However, the complex and expensive fabrication processes hinder the widespread application of nanostructured alloy anodes, which generates an urgent demand for low-cost and scalable processes to fabricate building blocks with fine control of size, morphology, and porosity. Here, we demonstrate a scalable and low-cost process to produce a porous yin-yang hybrid composite anode with graphene coating through high-energy ball milling and selective chemical etching. With void space to buffer the expansion, the produced functional electrodes demonstrate stable cycling performance of 910 mAh g−1 over 600 cycles at a rate of 0.5C for the Si-graphene "yin" particles and 750 mAh g−1 over 300 cycles at 0.2C for the Sn-graphene "yang" particles. We thereby open up a new approach to fabricate alloy anode materials at low cost, with low energy consumption, and at large scale. This type of porous silicon or tin composite with graphene coating can also potentially play a significant role in thermoelectric and optoelectronic applications.

  15. High-performance flat data center network architecture based on scalable and flow-controlled optical switching system

    Science.gov (United States)

    Calabretta, Nicola; Miao, Wang; Dorren, Harm

    2016-03-01

    Traffic in data center networks (DCNs) is steadily growing to support various applications and virtualization technologies. Multi-tenancy enabling efficient resource utilization is considered a key requirement for next-generation DCs, resulting from the growing demands for services and applications. Virtualization mechanisms and technologies can leverage statistical multiplexing and fast switch reconfiguration to further extend DC efficiency and agility. We present a novel high-performance flat DCN employing bufferless, distributed, fast (sub-microsecond) optical switches with wavelength, space, and time switching operation. The fast optical switches can enhance the performance of DCNs by providing large-capacity switching capability and efficiently sharing the data plane resources by exploiting statistical multiplexing. Benefiting from Software-Defined Networking (SDN) control of the optical switches, virtual DCNs can be flexibly created and reconfigured by the DCN provider. Numerical and experimental investigations of the DCN based on the fast optical switches show the successful setup of virtual network slices for intra-data-center interconnections. Experimental results assessing the DCN performance in terms of latency and packet loss show less than 10^-5 packet loss and 640 ns end-to-end latency at a load of 0.4 with a 16-packet buffer. Numerical investigation of the performance when the optical switch is scaled to a 32x32 system indicates that more than 1000 ToRs, each with a Terabit/s interface, can be interconnected, providing Petabit/s capacity. The roadmap to photonic integration of large-port optical switches is also presented.

  16. Horizontal cooperations between logistics service providers:motives, structure, performance

    OpenAIRE

    Schmoltzi, Christina; Wallenburg, Carl Marcus

    2011-01-01

    This publication is freely accessible with the permission of the rights owner, due to an Alliance licence or a national licence (funded by the DFG, German Research Foundation). Purpose – This paper aims to provide a comprehensive overview of the motives, structure and performance attributes of horizontal cooperations between logistics service providers...

  17. Network performance and fault analytics for LTE wireless service providers

    CERN Document Server

    Kakadia, Deepak; Gilgur, Alexander

    2017-01-01

    This book describes how to leverage emerging technologies, big data analytics and SDN, to address challenges specific to LTE and IP network performance and fault management data, in order to more efficiently manage and operate an LTE wireless network. The proposed integrated solutions permit the LTE network service provider to operate the entire integrated network, from RAN to Core, from UE to application service, as one unified system, and correspondingly to collect and align disparate key metrics and data, using an integrated and holistic approach to network analysis. LTE wireless network performance and fault management involves the network elements in the EUTRAN, EPC and IP transport components, not only as individual components but also the nuances of the inter-working of these components. The key metrics for EUTRAN include radio access network accessibility, retainability, integrity, availability and mobility. The key metrics for EPC include MME accessibility, mobility and...

  18. Scalable cloud without dedicated storage

    Science.gov (United States)

    Batkovich, D. V.; Kompaniets, M. V.; Zarochentsev, A. K.

    2015-05-01

    We present a prototype of a scalable computing cloud. It is intended to be deployed on the basis of a cluster without separate dedicated storage. The dedicated storage is replaced by distributed software storage. In addition, all cluster nodes are used both as computing nodes and as storage nodes. This solution increases utilization of the cluster resources as well as improving fault tolerance and the performance of the distributed storage. Another advantage of this solution is high scalability with relatively low initial and maintenance costs. The solution is built on the basis of open source components such as OpenStack, CEPH, etc.

  19. Costs and Performance of English Mental Health Providers.

    Science.gov (United States)

    Moran, Valerie; Jacobs, Rowena

    2017-06-01

    Despite limited resources in mental health care, there is little research exploring variations in cost performance across mental health care providers. In England, a prospective payment system for mental health care based on patient needs has been introduced with the potential to incentivise providers to control costs. The units of payment under the new system are 21 care clusters. Patients are allocated to a cluster by clinicians, and each cluster has a maximum review period. The aim of this research is to explain variations in cluster costs between mental health providers using observable patient demographic, need, social and treatment variables. We also investigate if provider-level variables explain differences in costs. The residual variation in cluster costs is compared across providers to provide insights into which providers may gain or lose under the new financial regime. The main data source is the Mental Health Minimum Data Set (MHMDS) for England for the years 2011/12 and 2012/13. Our unit of observation is the period of time spent in a care cluster and costs associated with the cluster review period are calculated from NHS Reference Cost data. Costs are modelled using multi-level log-linear and generalised linear models. The residual variation in costs at the provider level is quantified using Empirical Bayes estimates and comparative standard errors used to rank and compare providers. There are wide variations in costs across providers. We find that variables associated with higher costs include older age, black ethnicity, admission under the Mental Health Act, and higher need as reflected in the care clusters. Provider type, size, occupancy and the proportion of formal admissions at the provider-level are also found to be significantly associated with costs. After controlling for patient- and provider-level variables, significant residual variation in costs remains at the provider level. The results suggest that some providers may have to increase
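
    The modelling strategy described here — a multi-level log-linear cost model with provider random effects, whose shrunken (Empirical Bayes) estimates are used to rank providers — might be sketched as follows with statsmodels. The data frame is simulated stand-in data and the variable names are invented; they are not the MHMDS fields or the paper's specification.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical stand-in for MHMDS-style data: cluster-period costs nested
    # within providers (the real dataset and variable names differ).
    rng = np.random.default_rng(0)
    n, n_prov = 2000, 40
    df = pd.DataFrame({
        "provider": rng.integers(0, n_prov, n),
        "age": rng.normal(45, 15, n),
        "formal_admission": rng.integers(0, 2, n),
    })
    prov_effect = rng.normal(0, 0.3, n_prov)  # unobserved provider effect
    df["log_cost"] = (6.0 + 0.01 * df["age"] + 0.4 * df["formal_admission"]
                      + prov_effect[df["provider"]] + rng.normal(0, 0.5, n))

    # Multi-level log-linear model: patient-level covariates plus a random
    # intercept per provider.
    model = smf.mixedlm("log_cost ~ age + formal_admission", df,
                        groups=df["provider"]).fit()

    # Empirical Bayes (shrunken) provider effects = residual cost variation,
    # the quantity used to compare and rank providers.
    effects = {g: re["Group"] for g, re in model.random_effects.items()}
    print(sorted(effects, key=effects.get)[:3])  # lowest residual-cost providers
    ```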

  20. Towards a Scalable and Adaptive Application Support Platform for Large-Scale Distributed E-Sciences in High-Performance Network Environments

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Chase Qishi [New Jersey Inst. of Technology, Newark, NJ (United States); Univ. of Memphis, TN (United States); Zhu, Michelle Mengxia [Southern Illinois Univ., Carbondale, IL (United States)

    2016-06-06

    The advent of large-scale collaborative scientific applications has demonstrated the potential for broad scientific communities to pool globally distributed resources to produce unprecedented data acquisition, movement, and analysis. System resources including supercomputers, data repositories, computing facilities, network infrastructures, storage systems, and display devices have been increasingly deployed at national laboratories and academic institutes. These resources are typically shared by large communities of users over the Internet or dedicated networks and hence exhibit an inherent dynamic nature in their availability, accessibility, capacity, and stability. Scientific applications using either experimental facilities or computation-based simulations with various physical, chemical, climatic, and biological models feature diverse scientific workflows, as simple as linear pipelines or as complex as directed acyclic graphs, which must be executed and supported over wide-area networks with massively distributed resources. Application users oftentimes need to manually configure their computing tasks over networks in an ad hoc manner, significantly limiting the productivity of scientists and constraining the utilization of resources. The success of these large-scale distributed applications requires a highly adaptive and massively scalable workflow platform that provides automated and optimized computing and networking services. The goal of this project is to design and develop a generic Scientific Workflow Automation and Management Platform (SWAMP), which contains a web-based user interface specially tailored for a target application, a set of user libraries, and several easy-to-use computing and networking toolkits for application scientists to conveniently assemble, execute, monitor, and control complex computing workflows in heterogeneous high-performance network environments. SWAMP will enable the automation and management of the entire process of scientific

  1. Scalable Optical-Fiber Communication Networks

    Science.gov (United States)

    Chow, Edward T.; Peterson, John C.

    1993-01-01

    Scalable arbitrary fiber extension network (SAFEnet) is conceptual fiber-optic communication network passing digital signals among variety of computers and input/output devices at rates from 200 Mb/s to more than 100 Gb/s. Intended for use with very-high-speed computers and other data-processing and communication systems in which message-passing delays must be kept short. Inherent flexibility makes it possible to match performance of network to computers by optimizing configuration of interconnections. In addition, interconnections made redundant to provide tolerance to faults.

  2. Scalable Fabrication of High-Performance Transparent Conductors Using Graphene Oxide-Stabilized Single-Walled Carbon Nanotube Inks

    Directory of Open Access Journals (Sweden)

    Linxiang He

    2018-04-01

    Recent developments in the liquid-phase processing of single-walled carbon nanotubes (SWNTs) have revealed rod-coating as a promising approach for the large-scale production of SWNT-based transparent conductors. Of great importance in the ink formulation is a stabilizer with excellent dispersion stability, environmental friendliness and tunable rheology in the liquid state, and which can also be readily removed to enhance electrical conductivity and mechanical stability. Herein we demonstrate the promise of graphene oxide (GO) as a synergistic stabilizer for SWNTs in water. SWNTs dispersed in GO are formulated into inks with homogeneous nanotube distribution and good wetting and rheological properties, compatible with industrial rod-coating practice. Microwave treatment of the rod-coated films can reduce the GO and enhance the electro-optical performance. The resultant films offer a sheet resistance of ~80 Ω/sq at 86% transparency, along with good mechanical flexibility. Doping the films with nitric acid can further decrease the sheet resistance to ~25 Ω/sq. Compared with films fabricated from typical surfactant-based SWNT inks, our films offer superior adhesion as assessed by the Scotch tape test. This study provides new insight into the selection of suitable stabilizers for functional SWNT inks with strong potential for printed electronics.

  3. Designing a Scalable Fault Tolerance Model for High Performance Computational Chemistry: A Case Study with Coupled Cluster Perturbative Triples.

    Science.gov (United States)

    van Dam, Hubertus J J; Vishnu, Abhinav; de Jong, Wibe A

    2011-01-11

    In the past couple of decades, the massive computational power provided by the most modern supercomputers has enabled the simulation of higher-order computational chemistry methods previously considered intractable. As system sizes continue to increase, the computational chemistry domain continues this trend using parallel computing with programming models such as the Message Passing Interface (MPI) and Partitioned Global Address Space (PGAS) models such as Global Arrays. The ever-increasing scale of these supercomputers comes at the cost of a reduced Mean Time Between Failures (MTBF), currently on the order of days and projected to be on the order of hours for upcoming extreme-scale systems. While traditional disk-based checkpointing methods are ubiquitous for storing intermediate solutions, they suffer from the high overhead of writing and recovering from checkpoints. In practice, checkpointing itself often brings the system down. Clearly, methods beyond checkpointing are imperative for handling the aggravating issue of decreasing MTBF. In this paper, we address this challenge by designing and implementing an efficient fault-tolerant version of the Coupled Cluster (CC) method in NWChem, using in-memory data redundancy. We present the challenges associated with our design, including an efficient data storage model, the maintenance of at least one consistent data copy, and the recovery process. Our performance evaluation without faults shows that the current design exhibits a small overhead. In the presence of a simulated fault, the proposed design incurs negligible overhead in comparison to the state-of-the-art implementation without faults.
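
    The in-memory redundancy idea — keep every intermediate data chunk on its owner node and replicate it on a partner so that a single failure never destroys the last copy — can be illustrated with a toy sketch like the one below. This is a conceptual illustration only; NWChem's actual implementation works on Global Arrays across distributed memory, and all names and the partner scheme here are invented.

    ```python
    # Toy sketch of in-memory data redundancy (not NWChem's actual code):
    # every chunk a task produces is stored locally and replicated on a
    # partner node, so one node failure never destroys the only copy.

    class Node:
        def __init__(self, rank):
            self.rank = rank
            self.primary = {}   # chunk_id -> data owned by this node
            self.replica = {}   # chunk_id -> redundant copy for a partner

    def store(nodes, chunk_id, data):
        owner = nodes[chunk_id % len(nodes)]
        partner = nodes[(chunk_id + 1) % len(nodes)]  # simple partner scheme
        owner.primary[chunk_id] = data
        partner.replica[chunk_id] = data              # one consistent extra copy

    def recover(nodes, failed_rank):
        """Rebuild a failed node's primary chunks from partners' replicas."""
        fresh = Node(failed_rank)
        for node in nodes:
            for cid, data in node.replica.items():
                if cid % len(nodes) == failed_rank:
                    fresh.primary[cid] = data
        nodes[failed_rank] = fresh  # would then re-replicate its chunks

    nodes = [Node(r) for r in range(4)]
    for cid in range(8):
        store(nodes, cid, f"amplitudes-{cid}")
    recover(nodes, failed_rank=2)        # simulate a fault mid-run
    print(sorted(nodes[2].primary))      # lost chunks restored from replicas
    ```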

  4. Adaptive format conversion for scalable video coding

    Science.gov (United States)

    Wan, Wade K.; Lim, Jae S.

    2001-12-01

    The enhancement layer in many scalable coding algorithms is composed of residual coding information. There is another type of information that can be transmitted instead of (or in addition to) residual coding. Since the encoder has access to the original sequence, it can utilize adaptive format conversion (AFC) to generate the enhancement layer and transmit the different format conversion methods as enhancement data. This paper investigates the use of adaptive format conversion information as enhancement data in scalable video coding. Experimental results are shown for a wide range of base layer qualities and enhancement bitrates to determine when AFC can improve video scalability. Since the parameters needed for AFC are small compared to residual coding, AFC can provide video scalability at low enhancement layer bitrates that are not possible with residual coding. In addition, AFC can also be used in addition to residual coding to improve video scalability at higher enhancement layer bitrates. Adaptive format conversion has not been studied in detail, but many scalable applications may benefit from it. An example of an application that AFC is well-suited for is the migration path for digital television where AFC can provide immediate video scalability as well as assist future migrations.
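
    A toy sketch of the AFC idea follows: the encoder, which has access to the original frame, tries several format up-conversion methods per block and signals only the index of the best one as enhancement data. The interpolation orders and block size below are arbitrary illustrative choices, not the paper's method set.

    ```python
    import numpy as np
    from scipy import ndimage

    # Toy sketch of adaptive format conversion: upsample the base layer with
    # several candidate methods and, per block, signal which method best
    # matches the original -- a few bits per block instead of a residual.

    def upsample(base, order):
        return ndimage.zoom(base, 2, order=order)  # 0=replicate, 1=bilinear, 3=cubic

    def afc_encode(original, base, block=16):
        candidates = [upsample(base, o) for o in (0, 1, 3)]
        h, w = original.shape
        choices = np.zeros((h // block, w // block), dtype=np.uint8)
        for i in range(0, h, block):
            for j in range(0, w, block):
                errs = [np.mean((c[i:i+block, j:j+block]
                                 - original[i:i+block, j:j+block]) ** 2)
                        for c in candidates]
                choices[i // block, j // block] = int(np.argmin(errs))
        return choices  # the AFC enhancement data

    rng = np.random.default_rng(1)
    original = rng.random((64, 64))
    base = original[::2, ::2]           # spatially scaled base layer
    print(afc_encode(original, base))   # per-block conversion-method indices
    ```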

  5. LHCb: Performance evaluation and capacity planning for a scalable and highly available virtualization infrastructure for the LHCb experiment

    CERN Multimedia

    Sborzacchi, F; Neufeld, N

    2013-01-01

    Virtual computing is often adopted to satisfy different needs: reduce costs, reduce resources, simplify maintenance and, last but not least, add flexibility. The use of virtualization in a complex system such as a farm of PCs that control the hardware of an experiment (PLCs, power supplies, gas, magnets...) puts us in a condition where not only high-performance requirements need to be carefully considered, but also a deep analysis of strategies to achieve a certain level of high availability. We conducted a performance evaluation on different and comparable storage/network/virtualization platforms. The performance is measured using a series of independent benchmarks, testing the speed and the stability of multiple VMs running heavy-load operations on the I/O of the virtualized storage and the virtualized network. The results from the benchmark tests allowed us to study and evaluate how the different VM workloads interact with the hardware/software resource layers.

  6. Towards Scalable Graph Computation on Mobile Devices.

    Science.gov (United States)

    Chen, Yiqi; Lin, Zhiyuan; Pienta, Robert; Kahng, Minsuk; Chau, Duen Horng

    2014-10-01

    Mobile devices have become increasingly central to our everyday activities, due to their portability, multi-touch capabilities, and ever-improving computational power. Such attractive features have spurred research interest in leveraging mobile devices for computation. We explore a novel approach that aims to use a single mobile device to perform scalable graph computation on large graphs that do not fit in the device's limited main memory, opening up the possibility of performing on-device analysis of large datasets, without relying on the cloud. Based on the familiar memory mapping capability provided by today's mobile operating systems, our approach to scale up computation is powerful and intentionally kept simple to maximize its applicability across the iOS and Android platforms. Our experiments demonstrate that an iPad mini can perform fast computation on large real graphs with as many as 272 million edges (Google+ social graph), at a speed that is only a few times slower than a 13″ Macbook Pro. Through creating a real world iOS app with this technique, we demonstrate the strong potential application for scalable graph computation on a single mobile device using our approach.
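
    The memory-mapping approach is easy to demonstrate: keep the edge list in a binary file and let the OS page slices in on demand. The sketch below uses NumPy's memmap for an out-of-core degree count; the file name, layout, and chunk size are made-up illustrations, not the paper's implementation.

    ```python
    import numpy as np

    # Toy sketch of the memory-mapping idea: store the edge list in a binary
    # file and let the OS page it in on demand, so a graph larger than RAM
    # can be scanned with plain array code.

    edges = np.array([[0, 1], [0, 2], [1, 2], [2, 3], [3, 0]], dtype=np.int32)
    edges.tofile("graph_edges.bin")      # int32 pairs: src, dst

    n_nodes = 4
    mm = np.memmap("graph_edges.bin", dtype=np.int32).reshape(-1, 2)

    # One pass over the (potentially huge) edge list, processed in chunks
    # that fit comfortably in memory -- the same access pattern used for
    # PageRank-style computations on a single device.
    deg = np.zeros(n_nodes, dtype=np.int64)
    chunk = 2                            # tiny here; millions of edges in practice
    for start in range(0, len(mm), chunk):
        np.add.at(deg, mm[start:start + chunk, 0], 1)
    print(deg)                           # out-degree of each node
    ```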

  7. Towards Scalable Graph Computation on Mobile Devices

    Science.gov (United States)

    Chen, Yiqi; Lin, Zhiyuan; Pienta, Robert; Kahng, Minsuk; Chau, Duen Horng

    2015-01-01

    Mobile devices have become increasingly central to our everyday activities, due to their portability, multi-touch capabilities, and ever-improving computational power. Such attractive features have spurred research interest in leveraging mobile devices for computation. We explore a novel approach that aims to use a single mobile device to perform scalable graph computation on large graphs that do not fit in the device's limited main memory, opening up the possibility of performing on-device analysis of large datasets, without relying on the cloud. Based on the familiar memory mapping capability provided by today's mobile operating systems, our approach to scale up computation is powerful and intentionally kept simple to maximize its applicability across the iOS and Android platforms. Our experiments demonstrate that an iPad mini can perform fast computation on large real graphs with as many as 272 million edges (Google+ social graph), at a speed that is only a few times slower than a 13″ Macbook Pro. Through creating a real world iOS app with this technique, we demonstrate the strong potential application for scalable graph computation on a single mobile device using our approach. PMID:25859564

  8. A scalable infrastructure for CMS data analysis based on OpenStack Cloud and Gluster file system

    International Nuclear Information System (INIS)

    Toor, S; Eerola, P; Kraemer, O; Lindén, T; Osmani, L; Tarkoma, S; White, J

    2014-01-01

    The challenge of providing a resilient and scalable computational and data management solution for massive scale research environments requires continuous exploration of new technologies and techniques. In this project the aim has been to design a scalable and resilient infrastructure for CERN HEP data analysis. The infrastructure is based on OpenStack components for structuring a private Cloud with the Gluster File System. We integrate the state-of-the-art Cloud technologies with the traditional Grid middleware infrastructure. Our test results show that the adopted approach provides a scalable and resilient solution for managing resources without compromising on performance and high availability.

  9. A scalable infrastructure for CMS data analysis based on OpenStack Cloud and Gluster file system

    Science.gov (United States)

    Toor, S.; Osmani, L.; Eerola, P.; Kraemer, O.; Lindén, T.; Tarkoma, S.; White, J.

    2014-06-01

    The challenge of providing a resilient and scalable computational and data management solution for massive scale research environments requires continuous exploration of new technologies and techniques. In this project the aim has been to design a scalable and resilient infrastructure for CERN HEP data analysis. The infrastructure is based on OpenStack components for structuring a private Cloud with the Gluster File System. We integrate the state-of-the-art Cloud technologies with the traditional Grid middleware infrastructure. Our test results show that the adopted approach provides a scalable and resilient solution for managing resources without compromising on performance and high availability.

  10. Performance Verification of Production-Scalable Energy-Efficient Solutions: Winchester/Camberley Homes Mixed-Humid Climate

    Energy Technology Data Exchange (ETDEWEB)

    Mallay, D. [Partnership for Home Innovation, Upper Marlboro, MD (United States); Wiehagen, J. [Partnership for Home Innovation, Upper Marlboro, MD (United States)

    2014-07-01

    Winchester/Camberley Homes collaborated with the Building America team Partnership for Home Innovation to develop a new set of high performance home designs that could be applicable on a production scale. The new home designs are to be constructed in the mixed humid climate zone and could eventually apply to all of the builder's home designs to meet or exceed future energy codes or performance-based programs. However, the builder recognized that the combination of new wall framing designs and materials, higher levels of insulation in the wall cavity, and more detailed air sealing to achieve lower infiltration rates changes the moisture characteristics of the wall system. In order to ensure long-term durability and repeatable successful implementation with few call-backs, the project team demonstrated through measured data that the wall system functions as a dynamic system, responding to changing interior and outdoor environmental conditions within recognized limits of the materials that make up the wall system. A similar investigation was made with respect to the complete redesign of the HVAC systems to significantly improve efficiency while maintaining indoor comfort. Recognizing the need to demonstrate the benefits of these efficiency features, the builder offered a new house model to serve as a test case to develop framing designs, evaluate material selections and installation requirements, changes to work scopes and contractor learning curves, as well as to compare theoretical performance characteristics with measured results.

  11. Performance Verification of Production-Scalable Energy-Efficient Solutions: Winchester/Camberley Homes Mixed-Humid Climate

    Energy Technology Data Exchange (ETDEWEB)

    Mallay, D.; Wiehagen, J.

    2014-07-01

    Winchester/Camberley Homes collaborated with the Building America program and its NAHB Research Center Industry Partnership to develop a new set of high performance home designs that could be applicable on a production scale. The new home designs are to be constructed in mixed humid climate zone four and could eventually apply to all of the builder's home designs to meet or exceed future energy codes or performance-based programs. However, the builder recognized that the combination of new wall framing designs and materials, higher levels of insulation in the wall cavity, and more detailed air sealing to achieve lower infiltration rates changes the moisture characteristics of the wall system. In order to ensure long-term durability and repeatable successful implementation with few call-backs, this report demonstrates through measured data that the wall system functions as a dynamic system, responding to changing interior and outdoor environmental conditions within recognized limits of the materials that make up the wall system. A similar investigation was made with respect to the complete redesign of the heating, cooling, air distribution, and ventilation systems, intended to optimize the equipment size and configuration to significantly improve efficiency while maintaining indoor comfort. Recognizing the need to demonstrate the benefits of these efficiency features, the builder offered a new house model to serve as a test case to develop framing designs, evaluate material selections and installation requirements, changes to work scopes and contractor learning curves, as well as to compare theoretical performance characteristics with measured results.

  12. A scalable healthcare information system based on a service-oriented architecture.

    Science.gov (United States)

    Yang, Tzu-Hsiang; Sun, Yeali S; Lai, Feipei

    2011-06-01

    Many existing healthcare information systems are composed of a number of heterogeneous systems and face the important issue of system scalability. This paper first describes the comprehensive healthcare information systems used in National Taiwan University Hospital (NTUH) and then presents a service-oriented architecture (SOA)-based healthcare information system (HIS) based on the HL7 service standard. The proposed architecture focuses on system scalability, in terms of both hardware and software. Moreover, we describe how scalability is implemented in rightsizing, service groups, databases, and hardware scalability. Although SOA-based systems sometimes display poor performance, a performance evaluation of our SOA-based HIS shows that the average response times for the outpatient, inpatient, and emergency HL7Central systems are 0.035, 0.04, and 0.036 s, respectively. The outpatient, inpatient, and emergency WebUI average response times are 0.79, 1.25, and 0.82 s. The scalability of the rightsizing project and our evaluation results provide evidence that SOA can deliver system scalability and sustainability in a highly demanding healthcare information system.

  13. Numeric Analysis for Relationship-Aware Scalable Streaming Scheme

    Directory of Open Access Journals (Sweden)

    Heung Ki Lee

    2014-01-01

    Full Text Available Frequent packet loss of media data is a critical problem that degrades the quality of streaming services over mobile networks. Packet loss invalidates frames containing lost packets and other related frames at the same time. This indirect loss caused by losing packets decreases the quality of streaming. A scalable streaming service can decrease the amount of dropped multimedia resulting from a single packet loss. Content providers typically divide one large media stream into several layers through a scalable streaming service and then provide each scalable layer to the user depending on the mobile network. A scalable streaming service also makes it possible to decode partial multimedia data depending on the relationship between frames and layers. Therefore, a scalable streaming service provides a way to decrease the wasted multimedia data when one packet is lost. However, the hierarchical structure between frames and layers of scalable streams determines the service quality of the scalable streaming service. Even if whole packets of layers are transmitted successfully, they cannot be decoded if the reference frames and layers are absent. Therefore, a complicated relationship between frames and layers in a scalable stream increases the volume of abandoned layers. To provide a high-quality scalable streaming service, we must choose a proper relationship between scalable layers, as well as the amount of transmitted multimedia data, depending on the network conditions. We prove that a simple scalable scheme outperforms a complicated scheme in an error-prone network. We suggest an adaptive set-top box (AdaptiveSTB) to lower the dependency between scalable layers in a scalable stream. Also, we provide a numerical model to obtain the indirect loss of multimedia data and apply it to various multimedia streams. Our AdaptiveSTB enhances the quality of a scalable streaming service by removing indirect loss.
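    To make the notion of indirect loss concrete, the following sketch counts how many received units become undecodable after a packet loss, under an assumed linear dependency rule (layer k of a frame needs layer k-1 of the same frame and layer k of the previous frame; the paper's numerical model is more general):

        # Minimal sketch of indirect loss in a layered stream. Assumption (not
        # from the paper): layer k of frame f is decodable only if layer k-1 of
        # the same frame and layer k of the previous frame were decoded.

        def decodable(frames, layers, lost):
            """lost: set of (frame, layer) pairs dropped by the network."""
            ok = {}
            for f in range(frames):
                for k in range(layers):
                    received = (f, k) not in lost
                    below = k == 0 or ok.get((f, k - 1), False)
                    ref = f == 0 or ok.get((f - 1, k), False)
                    ok[(f, k)] = received and below and ref
            return ok

        ok = decodable(frames=4, layers=3, lost={(1, 0)})
        direct = 1
        indirect = sum(1 for unit, d in ok.items() if not d) - direct
        print(f"indirect loss: {indirect} undecodable units from 1 lost packet")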

  14. A facile and scalable method to prepare carbon nanotube-grafted-graphene for high performance Li-S battery

    Science.gov (United States)

    Wang, Q. Q.; Huang, J. B.; Li, G. R.; Lin, Z.; Liu, B. H.; Li, Z. P.

    2017-01-01

    A carbon nanotube-grafted-graphene (CNT-g-Gr) is developed for enhancements of electrical conduction and polysulfide (PS) absorption to improve rate performance and cycleability of lithium-sulfur battery. The CNT-g-Gr is prepared through CNT growth on Ni-deposited graphene sheet which is fabricated via pyrolysis of glucose in a molten salt. The obtained CNT-g-Gr shows much higher specific surface area and PS adsorption capability than graphene. The in-situ formed Ni nanoparticles on graphene sheet not only serve as the catalytic sites for CNT growth, but also function as the anchor-sites for polar PS absorption. The CNT-g-Gr contributes a superb PS adsorption capability arising from graphene and CNT absorbing weakly-polar PS species, and Ni nanoparticles absorbing the species with stronger polarity. The resultant Li-S battery with the CNT-g-Gr shows excellent cycleability and rate performance. A stable discharge capacity of 900 mAh g-1 (with low capacity degradation rate) and a rate capacity of 260 mAh g-1 at 30 C discharge rate have been achieved.

  15. Providers' perceptions of the implementation of a performance ...

    African Journals Online (AJOL)

    partly due to the lack of a system for reminding providers that patients were due to complete the ... system was feasible to implement as part of standard practice in ... Although the SAATSA is completed by the patients, in cases where they have ... platform is an important goal for this initiative; in the short term we are trying to ...

  16. The Effect of Providing Breakfast in Class on Student Performance

    Science.gov (United States)

    Imberman, Scott A.; Kugler, Adriana D.

    2014-01-01

    Many schools have recently experimented with moving breakfast from the cafeteria to the classroom. We examine whether such a program increases achievement, grades, and attendance rates. We exploit quasi-random timing of program implementation that allows for a difference-in-differences identification strategy. We find that providing breakfast in…

  17. Highly nitrogen-doped carbon capsules: scalable preparation and high-performance applications in fuel cells and lithium ion batteries.

    Science.gov (United States)

    Hu, Chuangang; Xiao, Ying; Zhao, Yang; Chen, Nan; Zhang, Zhipan; Cao, Minhua; Qu, Liangti

    2013-04-07

    Highly nitrogen-doped carbon capsules (hN-CCs) have been successfully prepared using inexpensive melamine and glyoxal as precursors via solvothermal reaction and carbonization. Holding great promise for large-scale production, the hN-CCs, having a large surface area and high-level nitrogen content (N/C atomic ratio of ca. 13%), possess superior crossover resistance, selective activity, and catalytic stability towards the oxygen reduction reaction for fuel cells in alkaline medium. As a new anode material in lithium-ion batteries, hN-CCs also exhibit excellent cycle performance and high rate capacity, with a reversible capacity as high as 1046 mA h g(-1) at a current density of 50 mA g(-1) after 50 cycles. These features make the hN-CCs developed in this study promising as substitutes for the expensive noble metal catalysts in next-generation alkaline fuel cells, and as advanced electrode materials in lithium-ion batteries.

  18. Facile and Scalable Synthesis of Zn3V2O7(OH)2·2H2O Microflowers as a High-Performance Anode for Lithium-Ion Batteries.

    Science.gov (United States)

    Yan, Haowu; Luo, Yanzhu; Xu, Xu; He, Liang; Tan, Jian; Li, Zhaohuai; Hong, Xufeng; He, Pan; Mai, Liqiang

    2017-08-23

    The employment of nanomaterials and nanotechnologies has been widely acknowledged as an effective strategy to enhance the electrochemical performance of lithium-ion batteries (LIBs). However, how to produce nanomaterials effectively on a large scale remains a challenge. Here, highly crystallized Zn3V2O7(OH)2·2H2O is synthesized on a large scale through a simple liquid-phase method at room temperature, which is easily realized in industry. Through suppressing the reaction dynamics with ethylene glycol, a uniform morphology of microflowers is obtained. Owing to the multiple reaction mechanisms (insertion, conversion, and alloying) during Li insertion/extraction, the prepared electrode delivers a remarkable specific capacity of 1287 mA h g-1 at 0.2 A g-1 after 120 cycles. In addition, a high capacity of 298 mA h g-1 can be obtained at 5 A g-1 after 1400 cycles. The excellent electrochemical performance can be attributed to the high crystallinity and large specific surface area of the active materials. The smaller particles formed after cycling could facilitate lithium-ion transport and provide more reaction sites. The facile and scalable synthesis process and excellent electrochemical performance make this material a highly promising anode for commercial LIBs.

  19. Portable and Transparent Message Compression in MPI Libraries to Improve the Performance and Scalability of Parallel Applications

    Energy Technology Data Exchange (ETDEWEB)

    Albonesi, David; Burtscher, Martin

    2009-04-17

    The goal of this project has been to develop a lossless compression algorithm for message-passing libraries that can accelerate HPC systems by reducing the communication time. Because both compression and decompression have to be performed in software in real time, the algorithm has to be extremely fast while still delivering a good compression ratio. During the first half of this project, they designed a new compression algorithm called FPC for scientific double-precision data, made the source code available on the web, and published two papers describing its operation, the first in the proceedings of the Data Compression Conference and the second in the IEEE Transactions on Computers. At comparable average compression ratios, this algorithm compresses and decompresses 10 to 100 times faster than BZIP2, DFCM, FSD, GZIP, and PLMI on the three architectures tested. With prediction tables that fit into the CPU's L1 data cache, FPC delivers a guaranteed throughput of six gigabits per second on a 1.6 GHz Itanium 2 system. The C source code and documentation of FPC are posted on-line and have already been downloaded hundreds of times. To evaluate FPC, they gathered 13 real-world scientific datasets from around the globe, including satellite data, crash-simulation data, and messages from HPC systems. Based on the large number of requests they received, they also made these datasets available to the community (with permission of the original sources). While FPC represents a great step forward, it soon became clear that its throughput was too slow for the emerging 10 gigabits per second networks. Hence, no speedup can be gained by including this algorithm in an MPI library. They therefore changed the aim of the second half of the project. Instead of implementing FPC in an MPI library, they refocused their efforts to develop a parallel compression algorithm to further boost the throughput. After all, all modern high-end microprocessors contain multiple CPUs on a
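    The cited papers define FPC's table-based predictors precisely; the sketch below shows only the generic predict-and-XOR idea behind such compressors, with a trivial last-value predictor as an illustrative stand-in: a good prediction leaves a residual with many leading zero bytes, which encodes compactly.

        import struct

        def xor_residuals(values):
            """Encode doubles as XOR with the previous value (toy predictor).

            FPC itself uses fcm/dfcm hash-table predictors and encodes the
            leading-zero-byte count plus the remaining bytes; this sketch
            keeps only the predict-and-XOR core.
            """
            prev = 0
            out = []
            for v in values:
                bits = struct.unpack("<Q", struct.pack("<d", v))[0]
                residual = bits ^ prev   # close prediction => leading zeros
                prev = bits
                need = (residual.bit_length() + 7) // 8  # bytes actually needed
                out.append((8 - need, residual.to_bytes(need or 1, "little")))
            return out

        # Smooth data compresses well: neighbors differ in few mantissa bits.
        encoded = xor_residuals([1.0 + i * 1e-9 for i in range(8)])
        print([leading for leading, _ in encoded])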

  20. ATLAS Grid Data Processing: system evolution and scalability

    CERN Document Server

    Golubkov, D; The ATLAS collaboration; Klimentov, A; Minaenko, A; Nevski, P; Vaniachine, A; Walker, R

    2012-01-01

    The production system for Grid Data Processing handles petascale ATLAS data reprocessing and Monte Carlo activities. The production system empowered further data processing steps on the Grid performed by dozens of ATLAS physics groups with coordinated access to computing resources worldwide, including additional resources sponsored by regional facilities. The system provides knowledge management of configuration parameters for massive data processing tasks, reproducibility of results, scalable database access, orchestrated workflow and performance monitoring, dynamic workload sharing, automated fault tolerance and petascale data integrity control. The system evolves to accommodate a growing number of users and new requirements from our contacts in ATLAS main areas: Trigger, Physics, Data Preparation and Software & Computing. To assure scalability, the next generation production system architecture development is in progress. We report on scaling up the production system for a growing number of users provi...

  1. Scalable Synthesis of Triple-Core-Shell Nanostructures of TiO2@MnO2@C for High Performance Supercapacitors Using Structure-Guided Combustion Waves.

    Science.gov (United States)

    Shin, Dongjoon; Shin, Jungho; Yeo, Taehan; Hwang, Hayoung; Park, Seonghyun; Choi, Wonjoon

    2018-03-01

    Core-shell nanostructures of metal oxides and carbon-based materials have emerged as outstanding electrode materials for supercapacitors and batteries. However, their synthesis requires complex procedures that incur high costs and long processing times. Herein, a new route is proposed for synthesizing triple-core-shell nanoparticles of TiO2@MnO2@C using structure-guided combustion waves (SGCWs), which originate from incomplete combustion inside chemical-fuel-wrapped nanostructures, and their application in supercapacitor electrodes. SGCWs transform TiO2 to TiO2@C and TiO2@MnO2 to TiO2@MnO2@C via the incompletely combusted carbonaceous fuels under an open-air atmosphere, in seconds. The synthesized carbon layers act as templates for MnO2 shells in TiO2@C and organic shells of TiO2@MnO2@C. The TiO2@MnO2@C-based electrodes exhibit a greater specific capacitance (488 F g-1 at 5 mV s-1) and capacitance retention (97.4% after 10,000 cycles at 1.0 V s-1), while the absence of MnO2 and carbon shells reveals a severe degradation in the specific capacitance and capacitance retention. Because the core-TiO2 nanoparticles and carbon shell prevent the deformation of the inner and outer sides of the MnO2 shell, the nanostructures of the TiO2@MnO2@C are preserved despite the long-term cycling, giving the superior performance. This SGCW-driven fabrication enables the scalable synthesis of multiple-core-shell structures applicable to diverse electrochemical applications. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. PKI Scalability Issues

    OpenAIRE

    Slagell, Adam J; Bonilla, Rafael

    2004-01-01

    This report surveys different PKI technologies such as PKIX and SPKI and the issues of PKI that affect scalability. Much focus is spent on certificate revocation methodologies and status verification systems such as CRLs, Delta-CRLs, CRS, Certificate Revocation Trees, Windowed Certificate Revocation, OCSP, SCVP and DVCS.

  3. The intergroup protocols: Scalable group communication for the internet

    Energy Technology Data Exchange (ETDEWEB)

    Berket, Karlo [Univ. of California, Santa Barbara, CA (United States)

    2000-12-04

    Reliable group ordered delivery of multicast messages in a distributed system is a useful service that simplifies the programming of distributed applications. Such a service helps to maintain the consistency of replicated information and to coordinate the activities of the various processes. With the increasing popularity of the Internet, there is an increasing interest in scaling the protocols that provide this service to the environment of the Internet. The InterGroup protocol suite, described in this dissertation, provides such a service, and is intended for the environment of the Internet with scalability to large numbers of nodes and high latency links. The InterGroup protocols approach the scalability problem from various directions. They redefine the meaning of group membership, allow voluntary membership changes, add a receiver-oriented selection of delivery guarantees that permits heterogeneity of the receiver set, and provide a scalable reliability service. The InterGroup system comprises several components, executing at various sites within the system. Each component provides part of the services necessary to implement a group communication system for the wide-area. The components can be categorized as: (1) control hierarchy, (2) reliable multicast, (3) message distribution and delivery, and (4) process group membership. We have implemented a prototype of the InterGroup protocols in Java, and have tested the system performance in both local-area and wide-area networks.

  4. On the Scalability of Time-predictable Chip-Multiprocessing

    DEFF Research Database (Denmark)

    Puffitsch, Wolfgang; Schoeberl, Martin

    2012-01-01

    Real-time systems need a time-predictable execution platform to be able to determine the worst-case execution time statically. In order to be time-predictable, several advanced processor features, such as out-of-order execution and other forms of speculation, have to be avoided. However, just using simple processors is not an option for embedded systems with high demands on computing power. In order to provide high performance and predictability we argue to use multiprocessor systems with a time-predictable memory interface. In this paper we present the scalability of a Java chip-multiprocessor system that is designed to be time-predictable. Adding time-predictable caches is mandatory to achieve scalability with a shared memory multi-processor system. As Java bytecode retains information about the nature of memory accesses, it is possible to implement a memory hierarchy that takes

  5. Scalable Resolution Display Walls

    KAUST Repository

    Leigh, Jason; Johnson, Andrew; Renambot, Luc; Peterka, Tom; Jeong, Byungil; Sandin, Daniel J.; Talandis, Jonas; Jagodic, Ratko; Nam, Sungwon; Hur, Hyejung; Sun, Yiwen

    2013-01-01

    This article will describe the progress since 2000 on research and development in 2-D and 3-D scalable resolution display walls that are built from tiling individual lower resolution flat panel displays. The article will describe approaches and trends in display hardware construction, middleware architecture, and user-interaction design. The article will also highlight examples of use cases and the benefits the technology has brought to their respective disciplines. © 1963-2012 IEEE.

  6. Advanced technologies for scalable ATLAS conditions database access on the grid

    CERN Document Server

    Basset, R; Dimitrov, G; Girone, M; Hawkings, R; Nevski, P; Valassi, A; Vaniachine, A; Viegas, F; Walker, R; Wong, A

    2010-01-01

    During massive data reprocessing operations an ATLAS Conditions Database application must support concurrent access from numerous ATLAS data processing jobs running on the Grid. By simulating realistic workflows, ATLAS database scalability tests provided feedback for Conditions DB software optimization and allowed precise determination of required distributed database resources. In distributed data processing one must take into account the chaotic nature of Grid computing, characterized by peak loads which can be much higher than average access rates. To validate database performance at peak loads, we tested database scalability at very high concurrent job rates. This has been achieved through coordinated database stress tests performed in a series of ATLAS reprocessing exercises at the Tier-1 sites. The goal of the database stress tests is to detect scalability limits of the hardware deployed at the Tier-1 sites, so that server overload conditions can be safely avoided in a production environment. Our analysi...

  7. The Concept of Business Model Scalability

    DEFF Research Database (Denmark)

    Lund, Morten; Nielsen, Christian

    2018-01-01

    Purpose: The purpose of the article is to define what scalable business models are. Central to the contemporary understanding of business models is the value proposition towards the customer and the hypotheses generated about delivering value to the customer, which become a good foundation for a long-term profitable business. However, the main message of this article is that while providing a good value proposition may help the firm ‘get by’, the really successful businesses of today are those able to reach the sweet-spot of business model scalability. Design/Methodology/Approach: The article is based on a five-year longitudinal action research project of over 90 companies that participated in the International Center for Innovation project aimed at building 10 global network-based business models. Findings: This article introduces and discusses the term scalability from a company-level perspective.

  8. Scalable Transactions for Web Applications in the Cloud

    NARCIS (Netherlands)

    Zhou, W.; Pierre, G.E.O.; Chi, C.-H.

    2009-01-01

    Cloud Computing platforms provide scalability and high availability properties for web applications but they sacrifice data consistency at the same time. However, many applications cannot afford any data inconsistency. We present a scalable transaction manager for NoSQL cloud database services to

  9. Using scalable vector graphics to evolve art

    NARCIS (Netherlands)

    den Heijer, E.; Eiben, A. E.

    2016-01-01

    In this paper, we describe our investigations of the use of scalable vector graphics as a genotype representation in evolutionary art. We describe the technical aspects of using SVG in evolutionary art, and explain our custom, SVG-specific operators: initialisation, mutation and crossover. We perform

  10. Scalable fast multipole accelerated vortex methods

    KAUST Repository

    Hu, Qi; Gumerov, Nail A.; Yokota, Rio; Barba, Lorena A.; Duraiswami, Ramani

    2014-01-01

    -node communication and load balance efficiently, with only a small parallel construction overhead. This algorithm can scale to large-sized clusters, showing both strong and weak scalability. Careful error and timing trade-off analyses are also performed for the cutoff

  11. Scalable Video Coding with Interlayer Signal Decorrelation Techniques

    Directory of Open Access Journals (Sweden)

    Yang Wenxian

    2007-01-01

    Full Text Available Scalability is one of the essential requirements in the compression of visual data for present-day multimedia communications and storage. The basic building block for providing the spatial scalability in the scalable video coding (SVC standard is the well-known Laplacian pyramid (LP. An LP achieves the multiscale representation of the video as a base-layer signal at lower resolution together with several enhancement-layer signals at successive higher resolutions. In this paper, we propose to improve the coding performance of the enhancement layers through efficient interlayer decorrelation techniques. We first show that, with nonbiorthogonal upsampling and downsampling filters, the base layer and the enhancement layers are correlated. We investigate two structures to reduce this correlation. The first structure updates the base-layer signal by subtracting from it the low-frequency component of the enhancement layer signal. The second structure modifies the prediction in order that the low-frequency component in the new enhancement layer is diminished. The second structure is integrated in the JSVM 4.0 codec with suitable modifications in the prediction modes. Experimental results with some standard test sequences demonstrate coding gains up to 1 dB for I pictures and up to 0.7 dB for both I and P pictures.
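    As a reminder of the LP structure that the proposal builds on, the following sketch forms a one-level Laplacian pyramid with simple block-average and nearest-neighbor filters (illustrative stand-ins; the SVC filters differ), so that the base layer plus the enhancement-layer residual reconstruct the input exactly:

        import numpy as np

        def downsample(x):
            # 2x2 block averaging as a stand-in for the downsampling filter.
            return x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).mean(axis=(1, 3))

        def upsample(x):
            # Nearest-neighbor expansion as a stand-in for the upsampling filter.
            return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

        def laplacian_pyramid(frame):
            base = downsample(frame)              # base layer, lower resolution
            enhancement = frame - upsample(base)  # residual enhancement layer
            return base, enhancement

        frame = np.random.rand(8, 8)
        base, enh = laplacian_pyramid(frame)
        assert np.allclose(upsample(base) + enh, frame)  # perfect reconstruction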

  12. Fourier transform based scalable image quality measure.

    Science.gov (United States)

    Narwaria, Manish; Lin, Weisi; McLoughlin, Ian; Emmanuel, Sabu; Chia, Liang-Tien

    2012-08-01

    We present a new image quality assessment (IQA) algorithm based on the phase and magnitude of the two-dimensional (2D) Discrete Fourier Transform (DFT). The basic idea is to compare the phase and magnitude of the reference and distorted images to compute the quality score. However, it is well known that the Human Visual System's (HVS) sensitivity to different frequency components is not the same. We accommodate this fact via a simple yet effective strategy of nonuniform binning of the frequency components. This process also leads to a reduced-space representation of the image, thereby enabling the reduced-reference (RR) prospects of the proposed scheme. We employ linear regression to integrate the effects of the changes in phase and magnitude; in this way, the required weights are determined via proper training and hence are more convincing and effective. Lastly, using the fact that phase usually conveys more information than magnitude, we use only the phase for RR quality assessment. This provides the crucial advantage of a further reduction in the required amount of reference image information. The proposed method is therefore further scalable for RR scenarios. We report extensive experimental results using a total of 9 publicly available databases: 7 image databases (with a total of 3832 distorted images with diverse distortions) and 2 video databases (totally 228 distorted videos). These show that the proposed method is overall better than several of the existing full-reference (FR) algorithms and two RR algorithms. Additionally, there is a graceful degradation in prediction performance as the amount of reference image information is reduced, thereby confirming its scalability prospects. To enable comparisons and future study, a Matlab implementation of the proposed algorithm is available at http://www.ntu.edu.sg/home/wslin/reduced_phase.rar.
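    A bare-bones version of the comparison step might look like the following sketch, where uniform treatment of all frequencies and fixed weights are placeholders for the paper's nonuniform binning and regression-trained weights:

        import numpy as np

        def dft_features(img):
            spectrum = np.fft.fft2(img)
            return np.abs(spectrum), np.angle(spectrum)

        def quality_score(reference, distorted, w_phase=0.7, w_mag=0.3):
            """Toy FR quality score; the weights are placeholders for the
            regression-trained weights used in the actual method."""
            mag_r, ph_r = dft_features(reference)
            mag_d, ph_d = dft_features(distorted)
            phase_err = np.mean(np.abs(np.exp(1j * ph_r) - np.exp(1j * ph_d))) / 2
            mag_err = np.mean(np.abs(mag_r - mag_d)) / (np.mean(mag_r) + 1e-12)
            return 1.0 - (w_phase * phase_err + w_mag * min(mag_err, 1.0))

        ref = np.random.rand(64, 64)
        noisy = ref + 0.1 * np.random.rand(64, 64)
        print(f"score vs. itself: {quality_score(ref, ref):.3f}")   # -> 1.000
        print(f"score vs. noisy:  {quality_score(ref, noisy):.3f}")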

  13. Scalable synthesis of interconnected porous silicon/carbon composites by the Rochow reaction as high-performance anodes of lithium ion batteries.

    Science.gov (United States)

    Zhang, Zailei; Wang, Yanhong; Ren, Wenfeng; Tan, Qiangqiang; Chen, Yunfa; Li, Hong; Zhong, Ziyi; Su, Fabing

    2014-05-12

    Despite the promising application of porous Si-based anodes in future Li-ion batteries, the large-scale synthesis of these materials is still a great challenge. A scalable synthesis of porous Si materials is presented via the Rochow reaction, which is commonly used to produce organosilane monomers for synthesizing organosilane products in the chemical industry. Commercial Si microparticles reacted with gaseous CH3Cl over various Cu-based catalyst particles to substantially create macropores within the unreacted Si, accompanied by carbon deposition, generating porous Si/C composites. Taking advantage of the interconnected porous structure and the conductive carbon-coated layer after simple post-treatment, these composites as anodes exhibit high reversible capacity and long cycle life. It is expected that by integrating the organosilane synthesis process and controlling reaction conditions, the manufacture of porous Si-based anodes on an industrial scale is highly possible. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. New Complexity Scalable MPEG Encoding Techniques for Mobile Applications

    Directory of Open Access Journals (Sweden)

    Stephan Mietens

    2004-03-01

    Full Text Available Complexity scalability offers the advantage of one-time design of video applications for a large product family, including mobile devices, without the need to redesign the applications on the algorithmic level to meet the requirements of the different products. In this paper, we present complexity-scalable MPEG encoding having core modules with modifications for scalability. The interdependencies of the scalable modules and the system performance are evaluated. Experimental results show that scalability gives a smooth change in complexity and corresponding video quality. Scalability is basically achieved by varying the number of computed DCT coefficients and the number of evaluated motion vectors, but other modules are designed such that they scale with these parameters. In the experiments using the “Stefan” sequence, the elapsed execution time of the scalable encoder, reflecting the computational complexity, can be gradually reduced to roughly 50% of its original execution time. The video quality scales between 20 dB and 48 dB PSNR with unity quantizer setting, and between 21.5 dB and 38.5 dB PSNR for different sequences targeting 1500 kbps. The implemented encoder and the scalability techniques can be successfully applied in mobile systems based on MPEG video compression.
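    The central scaling knob, computing only a subset of DCT coefficients, can be sketched as follows (an 8x8 block, a zigzag order, SciPy's DCT and a coefficient budget are illustrative assumptions, not the encoder's actual modules):

        import numpy as np
        from scipy.fft import dctn

        # Zigzag scan order for an 8x8 block (low frequencies first).
        ZIGZAG = sorted(((r, c) for r in range(8) for c in range(8)),
                        key=lambda rc: (rc[0] + rc[1],
                                        rc[1] if (rc[0] + rc[1]) % 2 else rc[0]))

        def scalable_dct(block, budget):
            """Keep only the first `budget` coefficients in zigzag order.

            A real encoder would compute only these coefficients (saving work);
            zeroing the rest here just illustrates the quality/complexity knob.
            """
            coeffs = dctn(block, norm="ortho")
            kept = np.zeros_like(coeffs)
            for r, c in ZIGZAG[:budget]:
                kept[r, c] = coeffs[r, c]
            return kept

        block = np.random.rand(8, 8)
        print(np.count_nonzero(scalable_dct(block, budget=16)))  # at most 16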

  15. pcircle - A Suite of Scalable Parallel File System Tools

    Energy Technology Data Exchange (ETDEWEB)

    2015-10-01

    Most software related to file systems is written for conventional local file systems; it is serialized and cannot take advantage of a large-scale parallel file system. The "pcircle" software builds on top of the ubiquitous MPI in cluster computing environments and the "work-stealing" pattern to provide a scalable, high-performance suite of file system tools. In particular, it implements parallel data copy and parallel data checksumming, with advanced features such as asynchronous progress reporting, checkpoint and restart, as well as integrity checking.
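    pcircle itself distributes work over MPI ranks; as a rough single-node analogy (a process pool instead of MPI work-stealing, with an assumed directory path), parallel checksumming of a directory tree looks like this:

        import hashlib
        import os
        from concurrent.futures import ProcessPoolExecutor

        # Process-pool stand-in for pcircle's MPI work distribution: the tree
        # is expanded into file tasks, and checksums run in parallel workers.

        def checksum(path):
            h = hashlib.sha1()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            return path, h.hexdigest()

        def parallel_checksums(root, workers=4):
            files = [os.path.join(d, name)
                     for d, _, names in os.walk(root) for name in names]
            with ProcessPoolExecutor(max_workers=workers) as pool:
                return dict(pool.map(checksum, files))

        # Example (assumed path): parallel_checksums("/tmp")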

  16. Scalable Packet Classification with Hash Tables

    Science.gov (United States)

    Wang, Pi-Chung

    In the last decade, the technique of packet classification has been widely deployed in various network devices, including routers, firewalls and network intrusion detection systems. In this work, we improve the performance of packet classification by using multiple hash tables. The existing hash-based algorithms have superior scalability with respect to the required space; however, their search performance may not be comparable to other algorithms. To improve the search performance, we propose a tuple reordering algorithm to minimize the number of accessed hash tables with the aid of bitmaps. We also use pre-computation to ensure the accuracy of our search procedure. Performance evaluation based on both real and synthetic filter databases shows that our scheme is effective and scalable and the pre-computation cost is moderate.
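    The underlying tuple-space idea, one hash table per combination of prefix lengths, can be sketched as follows (toy filters and a linear probe over all tuples; the paper's contribution is reordering and pruning tuples with bitmaps to cut the number of probed tables):

        # Toy tuple-space packet classification: one hash table per
        # (src_len, dst_len) tuple; a lookup probes each table with the
        # packet's prefixes of those lengths.

        FILTERS = [
            ("192.168.0.0", 16, "0.0.0.0", 0, "rule-A"),
            ("10.1.0.0", 16, "10.2.0.0", 16, "rule-B"),
        ]

        def prefix(ip, length):
            bits = "".join(f"{int(o):08b}" for o in ip.split("."))
            return bits[:length]

        tables = {}
        for src, sl, dst, dl, rule in FILTERS:
            tables.setdefault((sl, dl), {})[(prefix(src, sl), prefix(dst, dl))] = rule

        def classify(src_ip, dst_ip):
            # Real schemes reorder/skip tuples; here we probe all of them.
            for (sl, dl), table in tables.items():
                rule = table.get((prefix(src_ip, sl), prefix(dst_ip, dl)))
                if rule:
                    return rule
            return "default"

        print(classify("10.1.2.3", "10.2.9.9"))    # -> rule-B
        print(classify("192.168.5.5", "8.8.8.8"))  # -> rule-A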

  17. Scalable and Media Aware Adaptive Video Streaming over Wireless Networks

    Directory of Open Access Journals (Sweden)

    Béatrice Pesquet-Popescu

    2008-07-01

    Full Text Available This paper proposes an advanced video streaming system based on scalable video coding in order to optimize resource utilization in wireless networks with retransmission mechanisms at the radio protocol level. The key component of this system is a packet scheduling algorithm which operates on the different substreams of a main scalable video stream and which is implemented in a so-called media-aware network element. The transport channel concerned is a dedicated channel subject to parameter (bitrate, loss rate) variations over the long run. Moreover, we propose a combined scalability approach in which common temporal and SNR scalability features can be used jointly with a partitioning of the image into regions of interest. Simulation results show that our approach provides substantial quality gain compared to classical packet transmission methods, and demonstrate how ROI coding combined with SNR scalability allows the visual quality to be improved further.

  18. 25 CFR 115.803 - What information will be provided in a statement of performance?

    Science.gov (United States)

    2010-04-01

    25 CFR 115.803, Indians, Bureau of Indian Affairs, Department of the Interior (revised as of 2010-04-01): What information will be provided in a statement of performance? The statement of performance will identify the source, type, and...

  19. Scalable-to-lossless transform domain distributed video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Ukhanova, Ann; Veselov, Anton

    2010-01-01

    Distributed video coding (DVC) is a novel approach providing new features such as low-complexity encoding, by mainly exploiting the source statistics at the decoder based on the availability of decoder side information. In this paper, scalable-to-lossless DVC is presented based on extending a lossy Tran... codec provides frame-by-frame encoding. Comparing the lossless coding efficiency, the proposed scalable-to-lossless TDWZ video codec can save up to 5%-13% of bits compared to JPEG-LS and H.264 Intra-frame lossless coding, and does so as a scalable-to-lossless coding.

  20. Compliance Performance: Effects of a Provider Incentive Program and Coding Compliance Plan

    National Research Council Canada - National Science Library

    Tudela, Joseph A

    2004-01-01

    The purpose of this project is to study provider- and coder-related performance, i.e., provider compliance rates, coder productivity/accuracy rates, and the average dollar difference between coder and auditor, at Brooke Army Medical Center...

  1. Scalable Frequent Subgraph Mining

    KAUST Repository

    Abdelhamid, Ehab

    2017-06-19

    A graph is a data structure that contains a set of nodes and a set of edges connecting these nodes. Nodes represent objects while edges model relationships among these objects. Graphs are used in various domains due to their ability to model complex relations among several objects. Given an input graph, the Frequent Subgraph Mining (FSM) task finds all subgraphs with frequencies exceeding a given threshold. FSM is crucial for graph analysis, and it is an essential building block in a variety of applications, such as graph clustering and indexing. FSM is computationally expensive, and its existing solutions are extremely slow. Consequently, these solutions are incapable of mining modern large graphs. This slowness is caused by the underlying approaches of these solutions which require finding and storing an excessive amount of subgraph matches. This dissertation proposes a scalable solution for FSM that avoids the limitations of previous work. This solution is composed of four components. The first component is a single-threaded technique which, for each candidate subgraph, needs to find only a minimal number of matches. The second component is a scalable parallel FSM technique that utilizes a novel two-phase approach. The first phase quickly builds an approximate search space, which is then used by the second phase to optimize and balance the workload of the FSM task. The third component focuses on accelerating frequency evaluation, which is a critical step in FSM. To do so, a machine learning model is employed to predict the type of each graph node, and accordingly, an optimized method is selected to evaluate that node. The fourth component focuses on mining dynamic graphs, such as social networks. To this end, an incremental index is maintained during the dynamic updates. Only this index is processed and updated for the majority of graph updates. Consequently, search space is significantly pruned and efficiency is improved. The empirical evaluation shows that the
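    For flavor, the candidate-and-support pattern at the base of FSM (not the dissertation's optimized algorithms) can be shown by counting the support of single-edge patterns in a small labeled graph, the usual starting point before patterns are extended:

        from collections import defaultdict

        # Labeled graph: node -> label, plus undirected (u, v) edges.
        node_labels = {0: "A", 1: "B", 2: "A", 3: "B", 4: "A"}
        edges = [(0, 1), (2, 1), (2, 3), (4, 3), (0, 4)]

        def frequent_edge_patterns(min_support):
            """Support = number of distinct edges matching a label pattern.

            Real FSM systems extend frequent edges into larger subgraphs and
            need far more careful (and minimal) match bookkeeping, which is
            exactly the bottleneck this dissertation targets.
            """
            matches = defaultdict(set)
            for u, v in edges:
                pattern = tuple(sorted((node_labels[u], node_labels[v])))
                matches[pattern].add((u, v))
            return {p: m for p, m in matches.items() if len(m) >= min_support}

        print(frequent_edge_patterns(min_support=2))  # ('A', 'B') is frequent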

  2. Integration of an intelligent systems behavior simulator and a scalable soldier-machine interface

    Science.gov (United States)

    Johnson, Tony; Manteuffel, Chris; Brewster, Benjamin; Tierney, Terry

    2007-04-01

    As the Army's Future Combat Systems (FCS) introduce emerging technologies and new force structures to the battlefield, soldiers will increasingly face new challenges in workload management. The next generation warfighter will be responsible for effectively managing robotic assets in addition to performing other missions. Studies of future battlefield operational scenarios involving the use of automation, including the specification of existing and proposed technologies, will provide significant insight into potential problem areas regarding soldier workload. The US Army Tank Automotive Research, Development, and Engineering Center (TARDEC) is currently executing an Army technology objective program to analyze and evaluate the effect of automated technologies and their associated control devices with respect to soldier workload. The Human-Robotic Interface (HRI) Intelligent Systems Behavior Simulator (ISBS) is a human performance measurement simulation system that allows modelers to develop constructive simulations of military scenarios with various deployments of interface technologies in order to evaluate operator effectiveness. One such interface is TARDEC's Scalable Soldier-Machine Interface (SMI). The scalable SMI provides a configurable machine interface application that is capable of adapting to several hardware platforms by recognizing the physical space limitations of the display device. This paper describes the integration of the ISBS and Scalable SMI applications, which will ultimately benefit both systems. The ISBS will be able to use the Scalable SMI to visualize the behaviors of virtual soldiers performing HRI tasks, such as route planning, and the scalable SMI will benefit from stimuli provided by the ISBS simulation environment. The paper describes the background of each system and details of the system integration approach.

  3. Scalable Partitioning Algorithms for FPGAs With Heterogeneous Resources

    National Research Council Canada - National Science Library

    Selvakkumaran, Navaratnasothie; Ranjan, Abhishek; Raje, Salil; Karypis, George

    2004-01-01

    As FPGA densities increase, partitioning-based FPGA placement approaches are becoming increasingly important as they can be used to provide high-quality and computationally scalable placement solutions...

  4. Scalable Nanomanufacturing—A Review

    Directory of Open Access Journals (Sweden)

    Khershed Cooper

    2017-01-01

    Full Text Available This article describes the field of scalable nanomanufacturing, its importance and need, its research activities and achievements. The National Science Foundation is taking a leading role in fostering basic research in scalable nanomanufacturing (SNM. From this effort several novel nanomanufacturing approaches have been proposed, studied and demonstrated, including scalable nanopatterning. This paper will discuss SNM research areas in materials, processes and applications, scale-up methods with project examples, and manufacturing challenges that need to be addressed to move nanotechnology discoveries closer to the marketplace.

  5. Neutron generators with size scalability, ease of fabrication and multiple ion source functionalities

    Science.gov (United States)

    Elizondo-Decanini, Juan M

    2014-11-18

    A neutron generator is provided with a flat, rectilinear geometry and surface mounted metallizations. This construction provides scalability and ease of fabrication, and permits multiple ion source functionalities.

  6. Scalable Nonlinear Compact Schemes

    Energy Technology Data Exchange (ETDEWEB)

    Ghosh, Debojyoti [Argonne National Lab. (ANL), Argonne, IL (United States); Constantinescu, Emil M. [Univ. of Chicago, IL (United States); Brown, Jed [Univ. of Colorado, Boulder, CO (United States)

    2014-04-01

    In this work, we focus on compact schemes resulting in tridiagonal systems of equations, specifically the fifth-order CRWENO scheme. We propose a scalable implementation of the nonlinear compact schemes by implementing a parallel tridiagonal solver based on the partitioning/substructuring approach. We use an iterative solver for the reduced system of equations; however, we solve this system to machine zero accuracy to ensure that no parallelization errors are introduced. It is possible to achieve machine-zero convergence with few iterations because of the diagonal dominance of the system. The number of iterations is specified a priori instead of a norm-based exit criterion, and collective communications are avoided. The overall algorithm thus involves only point-to-point communication between neighboring processors. Our implementation of the tridiagonal solver differs from and avoids the drawbacks of past efforts in the following ways: it introduces no parallelization-related approximations (multiprocessor solutions are exactly identical to uniprocessor ones), it involves minimal communication, the mathematical complexity is similar to that of the Thomas algorithm on a single processor, and it does not require any communication and computation scheduling.
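    For reference, the serial building block being parallelized here is the classic Thomas algorithm for tridiagonal systems; a minimal version is sketched below (the paper's contribution, the partitioned parallel variant, is not reproduced):

        import numpy as np

        def thomas(a, b, c, d):
            """Solve a tridiagonal system with sub-, main-, and super-diagonals
            a, b, c and right-hand side d (a[0] and c[-1] are unused)."""
            n = len(d)
            cp, dp = np.empty(n), np.empty(n)
            cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
            for i in range(1, n):                  # forward elimination
                denom = b[i] - a[i] * cp[i - 1]
                cp[i] = c[i] / denom
                dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
            x = np.empty(n)
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):         # back substitution
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x

        # Diagonally dominant test system, like those from compact schemes.
        n = 6
        a = np.full(n, 1.0); b = np.full(n, 4.0); c = np.full(n, 1.0)
        d = np.arange(1.0, n + 1)
        A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
        assert np.allclose(thomas(a, b, c, d), np.linalg.solve(A, d))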

  7. SOL: A Library for Scalable Online Learning Algorithms

    OpenAIRE

    Wu, Yue; Hoi, Steven C. H.; Liu, Chenghao; Lu, Jing; Sahoo, Doyen; Yu, Nenghai

    2016-01-01

    SOL is an open-source library for scalable online learning algorithms, and is particularly suitable for learning with high-dimensional data. The library provides a family of regular and sparse online learning algorithms for large-scale binary and multi-class classification tasks with high efficiency, scalability, portability, and extensibility. SOL was implemented in C++, and provided with a collection of easy-to-use command-line tools, python wrappers and library calls for users and develope...

  8. Design issues for numerical libraries on scalable multicore architectures

    International Nuclear Information System (INIS)

    Heroux, M A

    2008-01-01

    Future generations of scalable computers will rely on multicore nodes for a significant portion of overall system performance. At present, most applications and libraries cannot exploit multiple cores beyond running additional MPI processes per node. In this paper we discuss important multicore architecture issues, programming models, algorithm requirements and software design related to effective use of scalable multicore computers. In particular, we focus on important issues for library research and development, making recommendations for how to effectively develop libraries for future scalable computer systems

  9. Randomized Algorithms for Scalable Machine Learning

    OpenAIRE

    Kleiner, Ariel Jacob

    2012-01-01

    Many existing procedures in machine learning and statistics are computationally intractable in the setting of large-scale data. As a result, the advent of rapidly increasing dataset sizes, which should be a boon yielding improved statistical performance, instead severely blunts the usefulness of a variety of existing inferential methods. In this work, we use randomness to ameliorate this lack of scalability by reducing complex, computationally difficult inferential problems to larger sets o...

  10. Enabling Highly-Scalable Remote Memory Access Programming with MPI-3 One Sided

    Directory of Open Access Journals (Sweden)

    Robert Gerstenberger

    2014-01-01

    Full Text Available Modern interconnects offer remote direct memory access (RDMA) features. Yet, most applications rely on explicit message passing for communications, despite its unwanted overheads. The MPI-3.0 standard defines a programming interface for exploiting RDMA networks directly; however, its scalability and practicability have to be demonstrated in practice. In this work, we develop scalable bufferless protocols that implement the MPI-3.0 specification. Our protocols support scaling to millions of cores with negligible memory consumption while providing highest performance and minimal overheads. To arm programmers, we provide a spectrum of performance models for all critical functions and demonstrate the usability of our library and models with several application studies with up to half a million processes. We show that our design is comparable to, or better than, UPC and Fortran Coarrays in terms of latency, bandwidth and message rate. We also demonstrate application performance improvements with comparable programming complexity.
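    A minimal taste of the MPI-3 one-sided style, using mpi4py as an illustrative binding (this shows only the programming interface, not the scalable protocols the paper contributes):

        # Run with: mpiexec -n 2 python rma.py
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        buf = np.zeros(1, dtype="i")
        win = MPI.Win.Create(buf, comm=comm)  # expose buf for remote access

        win.Fence()                           # open an access epoch
        if rank == 0:
            value = np.array([42], dtype="i")
            win.Put(value, 1)                 # write into rank 1's window;
                                              # no matching receive is needed
        win.Fence()                           # close the epoch

        if rank == 1:
            print("rank 1 received", buf[0])  # -> 42
        win.Free()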

  11. DISP: Optimizations towards Scalable MPI Startup

    Energy Technology Data Exchange (ETDEWEB)

    Fu, Huansong [Florida State University, Tallahassee; Pophale, Swaroop S [ORNL; Gorentla Venkata, Manjunath [ORNL; Yu, Weikuan [Florida State University, Tallahassee

    2016-01-01

    Despite the popularity of MPI for high performance computing, the startup of MPI programs faces a scalability challenge as both the execution time and memory consumption increase drastically at scale. We have examined this problem using the collective modules of Cheetah and Tuned in Open MPI as representative implementations. Previous improvements for collectives have focused on algorithmic advances and hardware off-load. In this paper, we examine the startup cost of the collective module within a communicator and explore various techniques to improve its efficiency and scalability. Accordingly, we have developed a new scalable startup scheme with three internal techniques, namely Delayed Initialization, Module Sharing and Prediction-based Topology Setup (DISP). Our DISP scheme greatly benefits the collective initialization of the Cheetah module. At the same time, it helps boost the performance of non-collective initialization in the Tuned module. We evaluate the performance of our implementation on Titan supercomputer at ORNL with up to 4096 processes. The results show that our delayed initialization can speed up the startup of Tuned and Cheetah by an average of 32.0% and 29.2%, respectively, our module sharing can reduce the memory consumption of Tuned and Cheetah by up to 24.1% and 83.5%, respectively, and our prediction-based topology setup can speed up the startup of Cheetah by up to 80%.

  12. Environmentally benign and scalable synthesis of LiFePO4 nanoplates with high capacity and excellent rate cycling performance for lithium ion batteries

    International Nuclear Information System (INIS)

    Zhao, Chunsong; Wang, Lu-Ning; Chen, Jitao; Gao, Min

    2017-01-01

    Highlights: •LiFePO4 precursors were successfully prepared in pure water phase under atmosphere. •LiFePO4 nanostructures were also regenerated by recycling the filtrate. •LiFePO4/C delivers a high discharge capacity of 160 mAh g−1 at 0.2C and a high rate capacity of 107 mAh g−1 at 20C. •LiFePO4/C delivers a capacity retention rate close to 97% after 240 cycles at 20C. -- Abstract: An economical and scalable synthesis route to LiFePO4 nanoplate precursors is successfully developed in pure water phase under atmosphere, without employing the environmentally toxic surfactants or the high temperature and high pressure of traditional hydrothermal or solvothermal methods; it also involves recycling the filtrate to regenerate LiFePO4 nanoplate precursors and collecting the by-product Na2SO4. The LiFePO4 precursors present a plate-like morphology with mean thickness and length of 50–100 and 100–300 nm, respectively. After carbon coating, LiFePO4/C nanoparticles with particle size around 200 nm can be observed, which exhibit a high discharge capacity of 160 mAh g−1 at 0.2C and 107 mAh g−1 at 20C. A high capacity retention close to 91% can be reached after 500 cycles even at a high current rate of 20C, with coulombic efficiency of 99.5%. This work suggests a simple, economical and environmentally benign method for preparing LiFePO4/C cathode material for power batteries that would be feasible for large-scale industrial production.

  13. Relationship Between Green Logistics Tendency and Logistics Performance: A Comparative Case Study on Logistics Service Providers

    OpenAIRE

    Ayşenur DOĞRU; Cemile SOLAK FIŞKIN

    2016-01-01

    Increasing concerns related to the environmental side effects of logistics services and competition between logistics service providers are two factors putting pressure on logistics service providers. This study seeks to explore the relation between green logistics tendency and logistics performance from the perspective of logistics service providers. In order to reach this aim, two logistics service providers are investigated by the comparative case study method. Findings showed the effects of g...

  14. Scalable Algorithms for Adaptive Statistical Designs

    Directory of Open Access Journals (Sweden)

    Robert Oehmke

    2000-01-01

    Full Text Available We present a scalable, high-performance solution to multidimensional recurrences that arise in adaptive statistical designs. Adaptive designs are an important class of learning algorithms for a stochastic environment, and we focus on the problem of optimally assigning patients to treatments in clinical trials. While adaptive designs have significant ethical and cost advantages, they are rarely utilized because of the complexity of optimizing and analyzing them. Computational challenges include massive memory requirements, few calculations per memory access, and multiply-nested loops with dynamic indices. We analyze the effects of various parallelization options, and while standard approaches do not work well, with effort an efficient, highly scalable program can be developed. This allows us to solve problems thousands of times more complex than those solved previously, which helps make adaptive designs practical. Further, our work applies to many other problems involving neighbor recurrences, such as generalized string matching.
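    To give a feel for the nested recurrences being parallelized, here is a tiny serial backward-induction sketch for optimally allocating N patients between two Bernoulli treatments, a textbook two-armed bandit formulation far smaller than the problems the paper targets (uniform Beta(1,1) priors are an assumption for illustration):

        from functools import lru_cache

        N = 10  # patients to allocate (the paper solves far larger horizons)

        @lru_cache(maxsize=None)
        def expected_successes(s1, f1, s2, f2):
            """Max expected successes, uniform prior on each arm's rate."""
            if s1 + f1 + s2 + f2 == N:
                return 0.0
            p1 = (s1 + 1) / (s1 + f1 + 2)   # posterior mean, Beta(1,1) prior
            p2 = (s2 + 1) / (s2 + f2 + 2)
            arm1 = p1 * (1 + expected_successes(s1 + 1, f1, s2, f2)) \
                 + (1 - p1) * expected_successes(s1, f1 + 1, s2, f2)
            arm2 = p2 * (1 + expected_successes(s1, f1, s2 + 1, f2)) \
                 + (1 - p2) * expected_successes(s1, f1, s2, f2 + 1)
            return max(arm1, arm2)

        print(f"optimal expected successes for {N} patients: "
              f"{expected_successes(0, 0, 0, 0):.3f}")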

  15. Scalable Atomistic Simulation Algorithms for Materials Research

    Directory of Open Access Journals (Sweden)

    Aiichiro Nakano

    2002-01-01

    Full Text Available A suite of scalable atomistic simulation programs has been developed for materials research based on space-time multiresolution algorithms. Design and analysis of parallel algorithms are presented for molecular dynamics (MD) simulations and quantum-mechanical (QM) calculations based on density functional theory. Performance tests have been carried out on 1,088-processor Cray T3E and 1,280-processor IBM SP3 computers. The linear-scaling algorithms have enabled 6.44-billion-atom MD and 111,000-atom QM calculations on 1,024 SP3 processors with parallel efficiency well over 90%. The production-quality programs also feature wavelet-based computational-space decomposition for adaptive load balancing, space-filling-curve-based adaptive data compression with user-defined error bound for scalable I/O, and octree-based fast visibility culling for immersive and interactive visualization of massive simulation data.

  16. Scalable and template-free synthesis of nanostructured Na1.08V6O15 as high-performance cathode material for lithium-ion batteries

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, Shili, E-mail: slzheng@ipe.ac.cn [National Engineering Laboratory for Hydrometallurgical Cleaner Production Technology, Key Laboratory of Green Process and Engineering, Institute of Process Engineering, Chinese Academy of Sciences, Beijing (China); Wang, Xinran; Yan, Hong [National Engineering Laboratory for Hydrometallurgical Cleaner Production Technology, Key Laboratory of Green Process and Engineering, Institute of Process Engineering, Chinese Academy of Sciences, Beijing (China); University of Chinese Academy of Sciences, No. 19A Yuquan Road, Beijing (China); Du, Hao; Zhang, Yi [National Engineering Laboratory for Hydrometallurgical Cleaner Production Technology, Key Laboratory of Green Process and Engineering, Institute of Process Engineering, Chinese Academy of Sciences, Beijing (China)

    2016-09-15

    Highlights: • Nanostructured Na1.08V6O15 was synthesized through an additive-free sol-gel process. • The prepared Na1.08V6O15 demonstrated high capacity and sufficient cycling stability. • The reaction temperature was optimized to allow scalable Na1.08V6O15 fabrication. - Abstract: Developing high-capacity cathode materials with feasibility and scalability is still challenging for lithium-ion batteries (LIBs). In this study, a high-capacity ternary sodium vanadate compound, nanostructured NaV6O15, was synthesized template-free through a sol-gel process with high producing efficiency. The as-prepared sample was systematically post-treated at different temperatures, and the post-annealing temperature was found to determine the cycling stability and capacity of NaV6O15. The well-crystallized sample exhibited good electrochemical performance with a high specific capacity of 302 mAh g−1 when cycled at a current density of 0.03 mA g−1. Its relatively long-term cycling stability was characterized by the cell performance under a current density of 1 A g−1, delivering a reversible capacity of 118 mAh g−1 after 300 cycles with 79% capacity retention and nearly 100% coulombic efficiency: all demonstrating the significant promise of the proposed strategy for large-scale synthesis of NaV6O15 as a high-capacity, high-energy-density cathode for LIBs.

  17. Scuba: scalable kernel-based gene prioritization.

    Science.gov (United States)

    Zampieri, Guido; Tran, Dinh Van; Donini, Michele; Navarin, Nicolò; Aiolli, Fabio; Sperduti, Alessandro; Valle, Giorgio

    2018-01-25

    The uncovering of genes linked to human diseases is a pressing challenge in molecular biology and precision medicine. This task is often hindered by the large number of candidate genes and by the heterogeneity of the available information. Computational methods for the prioritization of candidate genes can help to cope with these problems. In particular, kernel-based methods are a powerful resource for the integration of heterogeneous biological knowledge; however, their practical implementation is often precluded by their limited scalability. We propose Scuba, a scalable kernel-based method for gene prioritization. It implements a novel multiple kernel learning approach, based on a semi-supervised perspective and on the optimization of the margin distribution. Scuba is optimized to cope with strongly unbalanced settings where known disease genes are few and large-scale predictions are required. Importantly, it is able to efficiently deal both with a large number of candidate genes and with an arbitrary number of data sources. As a direct consequence of scalability, Scuba also integrates a new, efficient strategy to select optimal kernel parameters for each data source. We performed cross-validation experiments and simulated a realistic usage setting, showing that Scuba outperforms a wide range of state-of-the-art methods. Scuba achieves state-of-the-art performance and has enhanced scalability compared to existing kernel-based approaches for genomic data. This method can be useful to prioritize candidate genes, particularly when their number is large or when input data is highly heterogeneous. The code is freely available at https://github.com/gzampieri/Scuba.
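
    The kernel-integration idea can be sketched in a few lines of Python: each data source contributes a kernel (similarity) matrix, a weighted sum yields one integrated kernel, and candidate genes are ranked by combined similarity to known disease genes. Scuba learns the weights by optimizing the margin distribution; the uniform weights and toy matrices below are placeholder assumptions, not Scuba's algorithm.

        import numpy as np

        def combine_kernels(kernels, weights=None):
            """Return the weighted sum of a list of (n x n) kernel matrices."""
            k = len(kernels)
            if weights is None:
                weights = np.full(k, 1.0 / k)       # uniform fallback
            weights = np.asarray(weights) / np.sum(weights)
            return sum(w * K for w, K in zip(weights, kernels))

        def rank_candidates(K, seed_idx):
            """Score every gene by its combined-kernel similarity to the
            known disease genes (seed_idx); return indices sorted best-first."""
            scores = K[:, seed_idx].mean(axis=1)
            scores[seed_idx] = -np.inf               # do not re-rank the seeds
            return np.argsort(-scores)

        # Toy usage: two sources, five genes, genes 0 and 1 known disease genes.
        rng = np.random.default_rng(0)
        A = rng.normal(size=(5, 3)); B = rng.normal(size=(5, 4))
        K1, K2 = A @ A.T, B @ B.T                    # linear kernels per source
        K = combine_kernels([K1, K2])
        print(rank_candidates(K, seed_idx=[0, 1]))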

  18. Measuring Provider Performance for Physicians Participating in the Merit-Based Incentive Payment System.

    Science.gov (United States)

    Squitieri, Lee; Chung, Kevin C

    2017-07-01

    In 2017, the Centers for Medicare and Medicaid Services began requiring all eligible providers to participate in the Quality Payment Program or face financial reimbursement penalty. The Quality Payment Program outlines two paths for provider participation: the Merit-Based Incentive Payment System and Advanced Alternative Payment Models. For the first performance period beginning in January of 2017, the Centers for Medicare and Medicaid Services estimates that approximately 83 to 90 percent of eligible providers will not qualify for participation in an Advanced Alternative Payment Model and therefore must participate in the Merit-Based Incentive Payment System program. The Merit-Based Incentive Payment System path replaces existing quality-reporting programs and adds several new measures to evaluate providers using four categories of data: (1) quality, (2) cost/resource use, (3) improvement activities, and (4) advancing care information. These categories will be combined to calculate a weighted composite score for each provider or provider group. Composite Merit-Based Incentive Payment System scores based on 2017 performance data will be used to adjust reimbursed payment in 2019. In this article, the authors provide relevant background for understanding value-based provider performance measurement. The authors also discuss Merit-Based Incentive Payment System reporting requirements and scoring methodology to provide plastic surgeons with the necessary information to critically evaluate their own practice capabilities in the context of current performance metrics under the Quality Payment Program.
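
    The weighted composite can be illustrated with a small sketch. The weights below follow the commonly cited 2017 transition-year weighting (quality 60 percent, cost 0 percent, improvement activities 15 percent, advancing care information 25 percent); they and the example category scores should be treated as illustrative assumptions rather than an authoritative schedule.

        WEIGHTS = {"quality": 0.60, "cost": 0.00,
                   "improvement_activities": 0.15, "advancing_care_info": 0.25}

        def composite_score(category_scores: dict) -> float:
            """Combine per-category scores (each on a 0-100 scale) into one
            weighted composite on the same 0-100 scale."""
            assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
            return sum(WEIGHTS[c] * category_scores[c] for c in WEIGHTS)

        example = {"quality": 85, "cost": 0,
                   "improvement_activities": 100, "advancing_care_info": 70}
        print(composite_score(example))  # 0.6*85 + 0.15*100 + 0.25*70 = 83.5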

  19. Procurement risk management practices and supply chain performance among mobile phone service providers in Kenya

    OpenAIRE

    Emily Adhiambo Okonjo; Peterson Obara Magutu; Richard Bitange Nyaoga

    2016-01-01

    The aim of this study was to establish the relationship between procurement risk management practices and supply chain performance among mobile phone service providers in Kenya. The study specifically set out to establish the extent to which mobile phone service providers have implemented procurement risk management practices and to determine the relationship between procurement risk management practices and supply chain performance. The study adopted a descriptive study design by collecting ...

  20. 16 CFR 1401.5 - Providing performance and technical data to purchasers by labeling.

    Science.gov (United States)

    2010-01-01

    ... POINT OF PURCHASE OF PERFORMANCE AND TECHNICAL DATA § 1401.5 Providing performance and technical data to... the time of original purchase and to the first purchaser for purposes other than resale. The data... to be read and understood by ordinary individuals under normal conditions of purchase. This...

  1. Provider Monitoring and Pay-for-Performance When Multiple Providers Affect Outcomes: An Application to Renal Dialysis

    Science.gov (United States)

    Hirth, Richard A; Turenne, Marc N; Wheeler, John RC; Pan, Qing; Ma, Yu; Messana, Joseph M

    2009-01-01

    Objective To characterize the influence of dialysis facilities and nephrologists on resource use and patient outcomes in the dialysis population and to illustrate how such information can be used to inform payment system design. Data Sources Medicare claims for all hemodialysis patients for whom Medicare was the primary payer in 2004, combined with the Medicare Enrollment Database and the CMS Medical Evidence Form (CMS Form 2728), which is completed at onset of renal replacement therapy. Study Design Resource use (mainly drugs and laboratory tests) per dialysis session and two clinical outcomes (achieving targets for anemia management and dose of dialysis) were modeled at the patient level with random effects for nephrologist and dialysis facility, controlling for patient characteristics. Results For each measure, both the physician and the facility had significant effects. However, facilities were more influential than physicians, as measured by the standard deviation of the random effects. Conclusions The success of tools such as P4P and provider profiling relies upon the identification of providers most able to enhance efficiency and quality. This paper demonstrates a method for determining the extent to which variation in health care costs and quality of care can be attributed to physicians and institutional providers. Because variation in quality and cost attributable to facilities is consistently larger than that attributable to physicians, if provider profiling or financial incentives are targeted to only one type of provider, the facility appears to be the appropriate locus. PMID:19555398
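
    A minimal sketch of the modeling idea, not the paper's actual specification, is to fit crossed random effects for facility and physician and compare the estimated variance components. The simulated data and variable names below are assumptions; statsmodels handles crossed effects by treating the data as a single group and declaring each effect as a variance component.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        n = 2000
        df = pd.DataFrame({
            "facility": rng.integers(0, 40, n),
            "physician": rng.integers(0, 120, n),
            "age": rng.normal(65, 10, n),
        })
        # Simulate an outcome where facilities matter more than physicians.
        fac_eff = rng.normal(0, 2.0, 40)
        phy_eff = rng.normal(0, 1.0, 120)
        df["y"] = (10 + 0.05 * df.age + fac_eff[df.facility]
                   + phy_eff[df.physician] + rng.normal(0, 3, n))

        # Crossed random effects via variance components on one dummy group.
        model = smf.mixedlm("y ~ age", df, groups=np.ones(n), re_formula="0",
                            vc_formula={"facility": "0 + C(facility)",
                                        "physician": "0 + C(physician)"})
        fit = model.fit()
        print(fit.vcomp)  # estimated variances for facility and physician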

  2. Towards Scalable Strain Gauge-Based Joint Torque Sensors

    Science.gov (United States)

    D’Imperio, Mariapaola; Cannella, Ferdinando; Caldwell, Darwin G.; Cuschieri, Alfred

    2017-01-01

    During recent decades, strain gauge-based joint torque sensors have been commonly used to provide high-fidelity torque measurements in robotics. Although measurement of joint torque/force is often required in engineering research and development, the gluing and wiring of strain gauges used as torque sensors pose difficulties during integration within the restricted space available in small joints. The problem is compounded by the need for a scalable geometric design to measure joint torque. In this communication, we describe a novel design of a strain gauge-based mono-axial torque sensor referred to as the square-cut torque sensor (SCTS), the significant features of which are a high degree of linearity, symmetry, and high scalability in terms of both size and measuring range. Most importantly, the SCTS provides easy access for gluing and wiring of the strain gauges on the sensor surface despite the limited available space. We demonstrated that the SCTS performs better in terms of symmetry (clockwise and counterclockwise rotation) and linearity. These capabilities have been shown through finite element modeling (ANSYS) and confirmed by data obtained in load-testing experiments. The high performance of the SCTS was confirmed by studies involving changes in size, material, and/or wing width and thickness. Finally, we demonstrated that the SCTS can be successfully implemented inside the hip joints of the miniaturized hydraulically actuated quadruped robot MiniHyQ. This communication is based on work presented at the 18th International Conference on Climbing and Walking Robots (CLAWAR). PMID:28820446

  3. SWAP-Assembler: scalable and efficient genome assembly towards thousands of cores.

    Science.gov (United States)

    Meng, Jintao; Wang, Bingqiang; Wei, Yanjie; Feng, Shengzhong; Balaji, Pavan

    2014-01-01

    There is a widening gap between the throughput of massive parallel sequencing machines and the ability to analyze these sequencing data. Traditional assembly methods requiring long execution times and large amounts of memory on a single workstation limit their use on such massive data. This paper presents a highly scalable assembler named SWAP-Assembler for processing massive sequencing data using thousands of cores, where SWAP is an acronym for the Small World Asynchronous Parallel model. In the paper, a mathematical description of the multi-step bi-directed graph (MSG) is provided to resolve the computational interdependence of merging edges, and a highly scalable computational framework for SWAP is developed to automatically perform the parallel computation of all operations. Graph cleaning and contig extension are also included for generating high-quality contigs. Experimental results show that SWAP-Assembler scales up to 2048 cores on the Yanhuang dataset in only 26 minutes, which is better than several other parallel assemblers, such as ABySS, Ray, and PASHA. Results also show that SWAP-Assembler can generate high-quality contigs with good N50 size and low error rate; in particular, it generated the longest N50 contig sizes for the Fish and Yanhuang datasets. In this paper, we presented a highly scalable and efficient genome assembly software, SWAP-Assembler. Compared with several other assemblers, it showed very good performance in terms of scalability and contig quality. This software is available at: https://sourceforge.net/projects/swapassembler.
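
    The contig-extension step can be illustrated with a minimal single-process sketch: build a k-mer graph from reads, then walk forward while the path is unambiguous. The parallel MSG/SWAP machinery that makes this scalable is the paper's actual contribution and is not shown here; k = 4 and the reads are toy assumptions.

        from collections import defaultdict

        def build_graph(reads, k=4):
            """Record successor/predecessor k-mers for each read position."""
            succ, pred = defaultdict(set), defaultdict(set)
            for r in reads:
                for i in range(len(r) - k):
                    a, b = r[i:i+k], r[i+1:i+k+1]
                    succ[a].add(b); pred[b].add(a)
            return succ, pred

        def extend_contig(start, succ, pred):
            """Walk forward while the path is unambiguous (one out-edge whose
            target has one in-edge), appending one base per step."""
            contig, node = start, start
            while len(succ[node]) == 1:
                nxt = next(iter(succ[node]))
                if len(pred[nxt]) != 1:
                    break
                contig += nxt[-1]
                node = nxt
            return contig

        succ, pred = build_graph(["ACGTACGGA", "GTACGGATT"])
        print(extend_contig("ACGT", succ, pred))  # ACGTACGGATT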

  4. Using provider performance incentives to increase HIV testing and counseling services in Rwanda.

    Science.gov (United States)

    de Walque, Damien; Gertler, Paul J; Bautista-Arredondo, Sergio; Kwan, Ada; Vermeersch, Christel; de Dieu Bizimana, Jean; Binagwaho, Agnès; Condo, Jeanine

    2015-03-01

    Paying for performance provides financial rewards to medical care providers for improvements in performance measured by utilization and quality of care indicators. In 2006, Rwanda began a pay for performance scheme to improve health services delivery, including HIV/AIDS services. Using a prospective quasi-experimental design, this study examines the scheme's impact on individual and couples HIV testing. We find a positive impact of pay for performance on HIV testing among married individuals (a 10.2 percentage point increase). Paying for performance also increased testing by both partners by 14.7 percentage points among discordant couples in which only one of the partners is an AIDS patient. Copyright © 2014. Published by Elsevier B.V.

  5. Scalable algorithms for contact problems

    CERN Document Server

    Dostál, Zdeněk; Sadowská, Marie; Vondrák, Vít

    2016-01-01

    This book presents a comprehensive and self-contained treatment of the authors’ newly developed scalable algorithms for the solutions of multibody contact problems of linear elasticity. The brand-new feature of these algorithms is theoretically supported numerical and parallel scalability, demonstrated on problems discretized by billions of degrees of freedom. The theory supports solving multibody frictionless contact problems, contact problems with possibly orthotropic Tresca’s friction, and transient contact problems. It covers BEM discretization, jumping coefficients, floating bodies, mortar non-penetration conditions, etc. The exposition is divided into four parts, the first of which reviews appropriate facets of linear algebra, optimization, and analysis. The most important algorithms and optimality results are presented in the third part of the volume. The presentation is complete, including continuous formulation, discretization, decomposition, optimality results, and numerical experiments.

  6. Factors affecting the performance of maternal health care providers in Armenia

    Directory of Open Access Journals (Sweden)

    Voltero Lauren

    2004-06-01

    Abstract Background Over the last five years, international development organizations began to modify and adapt the conventional Performance Improvement Model for use in low-resource settings. This model outlines the five key factors believed to influence performance outcomes: job expectations, performance feedback, environment and tools, motivation and incentives, and knowledge and skills. Each of these factors should be supplied by the organization in which the provider works, and thus, organizational support is considered an overarching element for analysis. Little research, domestically or internationally, has been conducted on the actual effects of each of the factors on performance outcomes, and most PI practitioners assume that all the factors are needed in order for performance to improve. This study presents a unique exploration of how the factors, individually as well as in combination, affect the performance of primary reproductive health providers (nurse-midwives) in two regions of Armenia. Methods Two hundred and eighty-five nurses and midwives were observed conducting real or simulated antenatal and postpartum/neonatal care services and interviewed about the presence or absence of the performance factors within their work environment. Results were analyzed to compare average performance with the existence or absence of the factors; then, multiple regression analysis was conducted with the merged datasets to obtain the best models of "predictors" of performance within each clinical service. Results Baseline results revealed that performance was sub-standard in several areas and several performance factors were deficient or nonexistent. The multivariate analysis showed that (a) training in the use of the clinic tools and (b) receiving recognition from the employer or the client/community are factors strongly associated with performance, followed by (c) receiving performance feedback in postpartum care. Other – extraneous

  7. Procurement risk management practices and supply chain performance among mobile phone service providers in Kenya

    Directory of Open Access Journals (Sweden)

    Emily Adhiambo Okonjo

    2016-02-01

    The aim of this study was to establish the relationship between procurement risk management practices and supply chain performance among mobile phone service providers in Kenya. The study specifically set out to establish the extent to which mobile phone service providers have implemented procurement risk management practices and to determine the relationship between procurement risk management practices and supply chain performance. The study adopted a descriptive study design by collecting data from the four (4) mobile telecommunication companies in Kenya using a self-administered questionnaire. Means, standard deviations, and regression analysis were used to analyze the data collected. The study established that most of the mobile phone service providers in Kenya had implemented procurement risk management practices. It was also clear that there was a very significant relationship between procurement risk management practices and supply chain performance.

  8. Effect of Prior Cardiopulmonary Resuscitation Knowledge on Compression Performance by Hospital Providers

    Directory of Open Access Journals (Sweden)

    Joshua N. Burkhardt

    2014-07-01

    Introduction: The purpose of this study was to determine the cardiopulmonary resuscitation (CPR) knowledge of hospital providers and whether knowledge affects performance of effective compressions during a simulated cardiac arrest. Methods: This cross-sectional study evaluated the CPR knowledge and performance of medical students and ED personnel with current CPR certification. We collected data regarding compression rate, hand placement, depth, and recoil via a questionnaire to determine knowledge, and then we assessed performance using 60 seconds of compressions on a simulation mannequin. Results: Data from 200 enrollments were analyzed by evaluators blinded to subject knowledge. Regarding knowledge, 94% of participants correctly identified parameters for rate, 58% for hand placement, 74% for depth, and 94% for recoil. Participants identifying an effective rate of ≥100 performed compressions at a significantly higher rate than participants identifying <100 (µ=117 vs. 94, p<0.001). Participants identifying correct hand placement performed significantly more compressions adherent to guidelines than those identifying incorrect placement (µ=86% vs. 72%, p<0.01). No significant differences were found in depth or recoil performance based on knowledge of guidelines. Conclusion: Knowledge of guidelines was variable; however, CPR knowledge significantly impacted certain aspects of performance, namely rate and hand placement, whereas depth and recoil were not affected. Depth of compressions was poor regardless of prior knowledge, and knowledge did not correlate with recoil performance. Overall performance was suboptimal, and additional training may be needed to ensure consistent, effective performance and therefore better outcomes after cardiopulmonary arrest.

  9. Composite Measures of Health Care Provider Performance: A Description of Approaches

    Science.gov (United States)

    Shwartz, Michael; Restuccia, Joseph D; Rosen, Amy K

    2015-01-01

    Context Since the Institute of Medicine’s 2001 report Crossing the Quality Chasm, there has been a rapid proliferation of quality measures used in quality-monitoring, provider-profiling, and pay-for-performance (P4P) programs. Although individual performance measures are useful for identifying specific processes and outcomes for improvement and tracking progress, they do not easily provide an accessible overview of performance. Composite measures aggregate individual performance measures into a summary score. By reducing the amount of data that must be processed, they facilitate (1) benchmarking of an organization’s performance, encouraging quality improvement initiatives to match performance against high-performing organizations, and (2) profiling and P4P programs based on an organization’s overall performance. Methods We describe different approaches to creating composite measures, discuss their advantages and disadvantages, and provide examples of their use. Findings The major issues in creating composite measures are (1) whether to aggregate measures at the patient level through all-or-none approaches or at the facility level, using one of several possible weighting schemes; (2) when combining measures on different scales, how to rescale measures (using z scores, range percentages, ranks, or 5-star categorizations); and (3) whether to use shrinkage estimators, which increase precision by smoothing rates from smaller facilities but also decrease transparency. Conclusions Because provider rankings and rewards under P4P programs may be sensitive to both context and the data, careful analysis is warranted before deciding to implement a particular method. A better understanding of both when and where to use composite measures and the incentives created by composite measures are likely to be important areas of research as the use of composite measures grows. PMID:26626986
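
    The shrinkage idea mentioned above can be sketched with a simple beta-binomial-style smoother: a facility's observed rate is pulled toward the overall mean in proportion to how little data it contributes. This is illustrative only, not any specific program's estimator; the prior strength m is an assumption.

        def shrunken_rate(successes, n, overall_rate, m=50):
            """Blend a facility's observed rate with the overall rate.
            m acts like a prior sample size: small-n facilities are smoothed most."""
            return (successes + m * overall_rate) / (n + m)

        overall = 0.80
        print(shrunken_rate(9, 10, overall))      # 10 cases: 0.90 observed -> ~0.82
        print(shrunken_rate(900, 1000, overall))  # 1000 cases: stays near 0.90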

  10. Final Project Report: DOE Award FG02-04ER25606 Overlay Transit Networking for Scalable, High Performance Data Communication across Heterogeneous Infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Beck, Micah; Moore, Terry

    2007-08-31

    As the flood of data associated with leading edge computational science continues to escalate, the challenge of supporting the distributed collaborations that are now characteristic of it becomes increasingly daunting. The chief obstacles to progress on this front lie less in the synchronous elements of collaboration, which have been reasonably well addressed by new global high performance networks, than in the asynchronous elements, where appropriate shared storage infrastructure seems to be lacking. The recent report from the Department of Energy on the emerging 'data management challenge' captures the multidimensional nature of this problem succinctly: Data inevitably needs to be buffered, for periods ranging from seconds to weeks, in order to be controlled as it moves through the distributed and collaborative research process. To meet the diverse and changing set of application needs that different research communities have, large amounts of non-archival storage are required for transitory buffering, and it needs to be widely dispersed, easily available, and configured to maximize flexibility of use. In today's grid fabric, however, massive storage is mostly concentrated in data centers, available only to those with user accounts and membership in the appropriate virtual organizations, allocated as if its usage were non-transitory, and encapsulated behind legacy interfaces that inhibit the flexibility of use and scheduling. This situation severely restricts the ability of application communities to access and schedule usable storage where and when they need to in order to make their workflow more productive. (p.69f) One possible strategy to deal with this problem lies in creating a storage infrastructure that can be universally shared because it provides only the most generic of asynchronous services. Different user communities then define higher level services as necessary to meet their needs. One model of such a service is a Storage Network

  11. Dashboard report on performance on select quality indicators to cancer care providers.

    Science.gov (United States)

    Stattin, Pär; Sandin, Fredrik; Sandbäck, Torsten; Damber, Jan-Erik; Franck Lissbrant, Ingela; Robinson, David; Bratt, Ola; Lambe, Mats

    2016-01-01

    Cancer quality registers are attracting increasing attention as important, but still underutilized sources of clinical data. To optimize the use of registers in quality assurance and improvement, data have to be rapidly collected, collated and presented as actionable, at-a-glance information to the reporting departments. This article presents a dashboard performance report on select quality indicators to cancer care providers. Ten quality indicators registered on an individual patient level in the National Prostate Cancer Register of Sweden and recommended by the National Prostate Cancer Guidelines were selected. Data reported to the National Prostate Cancer Register are uploaded within 24 h to the Information Network for Cancer Care platform. Launched in 2014, "What's Going On, Prostate Cancer" provides rapid, at-a-glance performance feedback to care providers. The indicators include time to report to the National Prostate Cancer Register, waiting times, designated clinical nurse specialist, multidisciplinary conference, adherence to guidelines for diagnostic work-up and treatment, and documentation and outcome of treatment. For each indicator, three performance levels were defined. What's Going On, a dashboard performance report on 10 selected quality indicators to cancer care providers, provides an example of how data in cancer quality registers can be transformed into condensed, at-a-glance information to be used as actionable metrics for quality assurance and improvement.

  12. Can pay-for-performance to primary care providers stimulate appropriate use of antibiotics?

    DEFF Research Database (Denmark)

    Dietrichson, Jens; Maria Ellegård, Lina; Anell, Anders

    …pay-for-performance in a difference-in-differences analysis. Although the monetary incentives were small, we find that P4P significantly increased narrow-spectrum antibiotics' share of RTI antibiotics consumption. We further find larger effects in areas where there were many private providers, where the incentive was formulated as a penalty rather than a reward, and where all providers were close to a P4P target.
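
    The difference-in-differences design referred to above compares the change in an outcome for P4P-exposed providers against the change for unexposed providers; the interaction coefficient is the estimated effect. The sketch below uses invented data and names and is not the authors' specification.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(7)
        n = 4000
        df = pd.DataFrame({"treated": rng.integers(0, 2, n),
                           "post": rng.integers(0, 2, n)})
        # Outcome: share of antibiotics that are narrow-spectrum (toy effect +0.05).
        df["narrow_share"] = (0.5 + 0.02 * df.treated + 0.01 * df.post
                              + 0.05 * df.treated * df.post
                              + rng.normal(0, 0.1, n))

        fit = smf.ols("narrow_share ~ treated * post", df).fit()
        print(fit.params["treated:post"])  # DiD estimate, should be near 0.05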

  13. Implementation of the Timepix ASIC in the Scalable Readout System

    Energy Technology Data Exchange (ETDEWEB)

    Lupberger, M., E-mail: lupberger@physik.uni-bonn.de; Desch, K.; Kaminski, J.

    2016-09-11

    We report on the development of electronics hardware, FPGA firmware and software to provide a flexible multi-chip readout of the Timepix ASIC within the framework of the Scalable Readout System (SRS). The system features FPGA-based zero-suppression and the possibility to read out up to 4×8 chips with a single Front End Concentrator (FEC). By operating several FECs in parallel, in principle an arbitrary number of chips can be read out, exploiting the scaling features of SRS. Specifically, we tested the system with a setup consisting of 160 Timepix ASICs, operated as GridPix devices in a large TPC field cage in a 1 T magnetic field at a DESY test beam facility providing an electron beam of up to 6 GeV. We discuss the design choices, the dedicated hardware components, the FPGA firmware as well as the performance of the system in the test beam.

  14. A Secure and Scalable Data Communication Scheme in Smart Grids

    Directory of Open Access Journals (Sweden)

    Chunqiang Hu

    2018-01-01

    The concept of the smart grid has gained tremendous attention among researchers and utility providers in recent years. How to establish secure communication among smart meters, utility companies, and the service providers is a challenging issue. In this paper, we present a communication architecture for smart grids and propose a scheme to guarantee the security and privacy of data communications among smart meters, utility companies, and data repositories by employing decentralized attribute based encryption. The architecture is highly scalable, which employs an access control Linear Secret Sharing Scheme (LSSS) matrix to achieve a role-based access control. The security analysis demonstrated that the scheme ensures security and privacy. The performance analysis shows that the scheme is efficient in terms of computational cost.
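
    As a minimal, illustrative special case of the linear secret sharing (LSSS) machinery used above, the sketch below implements Shamir's threshold sharing over a prime field: any t shares reconstruct the secret, while fewer reveal nothing. The paper's access-control matrix generalizes this to role-based policies; all parameters here are toys.

        import random

        P = 2**127 - 1          # a Mersenne prime, used as the field modulus

        def make_shares(secret, t, n):
            """Evaluate a random degree-(t-1) polynomial at points 1..n."""
            coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
            def f(x):
                return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
            return [(x, f(x)) for x in range(1, n + 1)]

        def reconstruct(shares):
            """Lagrange interpolation at x = 0 recovers the secret."""
            secret = 0
            for i, (xi, yi) in enumerate(shares):
                num = den = 1
                for j, (xj, _) in enumerate(shares):
                    if i != j:
                        num = num * (-xj) % P
                        den = den * (xi - xj) % P
                secret = (secret + yi * num * pow(den, P - 2, P)) % P
            return secret

        shares = make_shares(secret=123456789, t=3, n=5)
        print(reconstruct(shares[:3]))  # any 3 of the 5 shares suffice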

  15. Silicon nanophotonics for scalable quantum coherent feedback networks

    Energy Technology Data Exchange (ETDEWEB)

    Sarovar, Mohan; Brif, Constantin [Sandia National Laboratories, Livermore, CA (United States); Soh, Daniel B.S. [Sandia National Laboratories, Livermore, CA (United States); Stanford University, Edward L. Ginzton Laboratory, Stanford, CA (United States); Cox, Jonathan; DeRose, Christopher T.; Camacho, Ryan; Davids, Paul [Sandia National Laboratories, Albuquerque, NM (United States)

    2016-12-15

    The emergence of coherent quantum feedback control (CQFC) as a new paradigm for precise manipulation of dynamics of complex quantum systems has led to the development of efficient theoretical modeling and simulation tools and opened avenues for new practical implementations. This work explores the applicability of the integrated silicon photonics platform for implementing scalable CQFC networks. If proven successful, on-chip implementations of these networks would provide scalable and efficient nanophotonic components for autonomous quantum information processing devices and ultra-low-power optical processing systems at telecommunications wavelengths. We analyze the strengths of the silicon photonics platform for CQFC applications and identify the key challenges to both the theoretical formalism and experimental implementations. In particular, we determine specific extensions to the theoretical CQFC framework (which was originally developed with bulk-optics implementations in mind), required to make it fully applicable to modeling of linear and nonlinear integrated optics networks. We also report the results of a preliminary experiment that studied the performance of an in situ controllable silicon nanophotonic network of two coupled cavities and analyze the properties of this device using the CQFC formalism. (orig.)

  16. The TOTEM DAQ based on the Scalable Readout System (SRS)

    Science.gov (United States)

    Quinto, Michele; Cafagna, Francesco S.; Fiergolski, Adrian; Radicioni, Emilio

    2018-02-01

    The TOTEM (TOTal cross section, Elastic scattering and diffraction dissociation Measurement at the LHC) experiment at the LHC has been designed to measure the total proton-proton cross-section and study elastic and diffractive scattering at LHC energies. In order to cope with the increased machine luminosity and the higher statistics required by the extension of the TOTEM physics program, approved for the LHC's Run Two phase, the previous VME-based data acquisition system has been replaced with a new one based on the Scalable Readout System. The system features an aggregated data throughput of 2 GB/s towards the online storage system. This makes it possible to sustain a maximum trigger rate of ~24 kHz, to be compared with the 1 kHz rate of the previous system. The trigger rate is further improved by implementing zero-suppression and second-level hardware algorithms in the Scalable Readout System. The new system fulfils the requirements for increased efficiency, providing higher bandwidth and increasing the purity of the recorded data. Moreover, full compatibility has been guaranteed with the legacy front-end hardware, as well as with the DAQ interface of the CMS experiment and with the LHC's Timing, Trigger and Control distribution system. In this contribution we describe in detail the architecture of the full system and its performance as measured during the commissioning phase at the LHC Interaction Point.

  17. Silicon nanophotonics for scalable quantum coherent feedback networks

    International Nuclear Information System (INIS)

    Sarovar, Mohan; Brif, Constantin; Soh, Daniel B.S.; Cox, Jonathan; DeRose, Christopher T.; Camacho, Ryan; Davids, Paul

    2016-01-01

    The emergence of coherent quantum feedback control (CQFC) as a new paradigm for precise manipulation of dynamics of complex quantum systems has led to the development of efficient theoretical modeling and simulation tools and opened avenues for new practical implementations. This work explores the applicability of the integrated silicon photonics platform for implementing scalable CQFC networks. If proven successful, on-chip implementations of these networks would provide scalable and efficient nanophotonic components for autonomous quantum information processing devices and ultra-low-power optical processing systems at telecommunications wavelengths. We analyze the strengths of the silicon photonics platform for CQFC applications and identify the key challenges to both the theoretical formalism and experimental implementations. In particular, we determine specific extensions to the theoretical CQFC framework (which was originally developed with bulk-optics implementations in mind), required to make it fully applicable to modeling of linear and nonlinear integrated optics networks. We also report the results of a preliminary experiment that studied the performance of an in situ controllable silicon nanophotonic network of two coupled cavities and analyze the properties of this device using the CQFC formalism. (orig.)

  18. Does the new conceptual framework provide adequate concepts for reporting relevant information about performance?

    NARCIS (Netherlands)

    Brouwer, A.; Faramarzi, A; Hoogendoorn, M.

    2014-01-01

    The basic question we raise in this paper is whether the 2013 Discussion Paper (DP 2013) on the Conceptual Framework provides adequate principles for reporting an entity’s performance and what improvements could be made in light of both user needs and evidence from academic literature. DP 2013

  19. Performance of Healthcare Providers Regarding Iranian Women Experiencing Physical Domestic Violence in Isfahan.

    Science.gov (United States)

    Yousefnia, Nasim; Nekuei, Nafisehsadat; Farajzadegan, Ziba; Yadegarfar, Ghasem

    2018-01-01

    Domestic violence (DV) can threaten women's health. Healthcare providers (HCPs) may be the first to come into contact with a victim of DV. Their appropriate performance regarding a DV victim can decrease its complications. The aim of the present study was to investigate HCPs' performance regarding women experiencing DV in emergency and maternity wards of hospitals in Isfahan, Iran. The present descriptive, cross-sectional study was conducted among 300 HCPs working in emergency and maternity wards in hospitals in Isfahan. The participants were selected using quota random sampling from February to May 2016. A researcher-made questionnaire containing the five items of HCPs' performance regarding DV (assessment, intervention, documentation, reference, and follow-up) was used to collect data. The reliability and validity of the questionnaire were confirmed, and the collected data were analyzed using SPSS software. Cronbach's alpha was used to assess the reliability of the questionnaires. To present a general description of the data (variables, mean, and standard deviation), the table of frequencies was designed. The performance of the participants regarding DV in the assessment (mean = 64.22), intervention (mean = 68.55), and reference stages (mean = 68.32) was average. However, in the documentation (mean = 72.55) and follow-up stages (mean = 23.10), their performance was good and weak, respectively (scores out of 100). Based on the results, because of defects in providing services for women experiencing DV, a practical indigenous guideline should be provided to treat and support these women.

  20. Scalable shared-memory multiprocessing

    CERN Document Server

    Lenoski, Daniel E

    1995-01-01

    Dr. Lenoski and Dr. Weber have experience with leading-edge research and practical issues involved in implementing large-scale parallel systems. They were key contributors to the architecture and design of the DASH multiprocessor. Currently, they are involved with commercializing scalable shared-memory technology.

  1. Measuring Healthcare Providers' Performances Within Managed Competition Using Multidimensional Quality and Cost Indicators.

    Science.gov (United States)

    Portrait, France R M; van der Galiën, Onno; Van den Berg, Bernard

    2016-04-01

    The Dutch healthcare system is in transition towards managed competition. In theory, a system of managed competition involves incentives for quality and efficiency of provided care. This is mainly because health insurers contract with healthcare providers on behalf of their clients, potentially on both quality and costs. The paper develops a strategy to comprehensively analyse available multidimensional data on quality and costs to assess and report on the relative performance of healthcare providers within managed competition. We had access to individual information on 2409 clients of 19 Dutch diabetes care groups on a broad range of (outcome- and process-related) quality and cost indicators. We carried out a cost-consequences analysis and corrected for differences in case mix to reduce incentives for risk selection by healthcare providers. There is substantial heterogeneity between diabetes care groups' performances as measured using multidimensional indicators on quality and costs. Better quality diabetes care can be achieved with lower or higher costs. Routine monitoring using multidimensional data on quality and costs merged at the individual level would allow a systematic and comprehensive analysis of healthcare providers' performances within managed competition. Copyright © 2015 John Wiley & Sons, Ltd.

  2. Decentralized control of a scalable photovoltaic (PV)-battery hybrid power system

    International Nuclear Information System (INIS)

    Kim, Myungchin; Bae, Sungwoo

    2017-01-01

    Highlights: • This paper introduces the design and control of a PV-battery hybrid power system. • Reliable and scalable operation of hybrid power systems is achieved. • System and power control are performed without a centralized controller. • Reliability and scalability characteristics are studied in a quantitative manner. • The system control performance is verified using realistic solar irradiation data. - Abstract: This paper presents the design and control of a sustainable standalone photovoltaic (PV)-battery hybrid power system (HPS). The research aims to develop an approach that contributes to an increased level of reliability and scalability for an HPS. To achieve such objectives, a PV-battery HPS with a passively connected battery was studied. A quantitative hardware reliability analysis was performed to assess the effect of the energy storage configuration on the overall system reliability. Instead of requiring the feedback control information of load power through a centralized supervisory controller, the power flow in the proposed HPS is managed by a decentralized control approach that takes advantage of the system architecture. Reliable system operation of an HPS is achieved through the proposed control approach by not requiring a separate supervisory controller. Furthermore, performance degradation of energy storage can be prevented by selecting the controller gains such that the charge rate does not exceed operational requirements. The performance of the proposed system architecture with the control strategy was verified by simulation results using realistic irradiance data and a battery model that accounts for temperature effects. With an objective to support scalable operation, details on how the proposed design could be applied were also studied so that the HPS could satisfy potential system growth requirements. Such scalability was verified by simulating various cases that involve connection and disconnection of sources and loads. The

  3. Federated or cached searches: providing expected performance from multiple invasive species databases

    Science.gov (United States)

    Graham, Jim; Jarnevich, Catherine S.; Simpson, Annie; Newman, Gregory J.; Stohlgren, Thomas J.

    2011-01-01

    Invasive species are a universal global problem, but the information to identify them, manage them, and prevent invasions is stored around the globe in a variety of formats. The Global Invasive Species Information Network is a consortium of organizations working toward providing seamless access to these disparate databases via the Internet. A distributed network of databases can be created using the Internet and a standard web service protocol. There are two options to provide this integration. First, federated searches are being proposed to allow users to search “deep” web documents such as databases for invasive species. A second method is to create a cache of data from the databases for searching. We compare these two methods, and show that federated searches will not provide the performance and flexibility required by users, and that a central cache of the data is required to improve performance.

  4. Scalability Optimization of Seamless Positioning Service

    Directory of Open Access Journals (Sweden)

    Juraj Machaj

    2016-01-01

    Recently, positioning services have been getting more attention, not only within the research community but also from service providers. From the service providers' point of view, a positioning service able to work seamlessly in all environments, for example, indoor, dense urban, and rural, has huge potential to open new markets. However, such a system must not only provide accurate position estimates but also be scalable and resistant to fake positioning requests. In previous works we proposed a modular system that is able to provide seamless positioning in various environments. The system automatically selects the optimal positioning module based on the available radio signals. The system currently consists of three positioning modules—GPS, GSM-based positioning, and Wi-Fi-based positioning. In this paper we propose an algorithm that reduces the time needed for position estimation and thus allows higher scalability of the modular system, making it possible to provide positioning services to a larger number of users. Such an improvement is extremely important for real-world applications in which a large number of users require position estimates, since positioning error is affected by the response time of the positioning server.
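
    The module-selection step can be illustrated with a simple dispatch over the available radio signals. The priority order (GPS, then Wi-Fi, then GSM) and the availability thresholds below are assumptions for illustration, not the authors' exact selection algorithm.

        def select_module(signals: dict) -> str:
            """signals maps signal type -> availability/quality indicator."""
            if signals.get("gps_satellites", 0) >= 4:   # enough satellites for a fix
                return "GPS"
            if signals.get("wifi_aps", 0) >= 3:         # enough APs to fingerprint
                return "WiFi"
            if signals.get("gsm_cells", 0) >= 1:
                return "GSM"
            raise RuntimeError("no positioning source available")

        print(select_module({"gps_satellites": 6}))                  # GPS (outdoor)
        print(select_module({"gps_satellites": 0, "wifi_aps": 12}))  # WiFi (indoor)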

  5. Scalable high-performance algorithm for the simulation of exciton-dynamics. Application to the light harvesting complex II in the presence of resonant vibrational modes

    DEFF Research Database (Denmark)

    Kreisbeck, Christoph; Kramer, Tobias; Aspuru-Guzik, Alán

    2014-01-01

    …high-performance many-core platforms using the Open Computing Language (OpenCL). For the light-harvesting complex II (LHC II) found in spinach, the HEOM results deviate from predictions of approximate theories and clarify the time-scale of the transfer process. We investigate the impact of resonantly

  6. Scalable Simulation of Electromagnetic Hybrid Codes

    International Nuclear Information System (INIS)

    Perumalla, Kalyan S.; Fujimoto, Richard; Karimabadi, Dr. Homa

    2006-01-01

    New discrete-event formulations of physics simulation models are emerging that can outperform models based on traditional time-stepped techniques. Detailed simulation of the Earth's magnetosphere, for example, requires execution of sub-models that are at widely differing timescales. In contrast to time-stepped simulation which requires tightly coupled updates to entire system state at regular time intervals, the new discrete event simulation (DES) approaches help evolve the states of sub-models on relatively independent timescales. However, parallel execution of DES-based models raises challenges with respect to their scalability and performance. One of the key challenges is to improve the computation granularity to offset synchronization and communication overheads within and across processors. Our previous work was limited in scalability and runtime performance due to the parallelization challenges. Here we report on optimizations we performed on DES-based plasma simulation models to improve parallel performance. The net result is the capability to simulate hybrid particle-in-cell (PIC) models with over 2 billion ion particles using 512 processors on supercomputing platforms
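
    The contrast with time-stepped simulation can be made concrete with a minimal discrete-event skeleton: each sub-model schedules its own next update, so fast and slow components advance on independent timescales instead of in global lockstep. This sketch omits the parallel synchronization machinery that the paper actually addresses.

        import heapq
        from itertools import count

        events, tie = [], count()        # counter breaks ties in the heap safely

        def schedule(t, name, action):
            heapq.heappush(events, (t, next(tie), name, action))

        def run(t_end):
            while events and events[0][0] <= t_end:
                t, _, name, action = heapq.heappop(events)
                dt = action(t)           # each component picks its own next update
                if dt is not None:
                    schedule(t + dt, name, action)

        # A fast "ion" sub-model updating every 0.1 time units alongside a slow
        # "field" sub-model updating every 1.0, with no global lockstep:
        schedule(0.0, "ions", lambda t: 0.1)
        schedule(0.0, "field", lambda t: 1.0)
        run(t_end=2.0)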

  7. The influence of system quality characteristics on health care providers' performance: Empirical evidence from Malaysia.

    Science.gov (United States)

    Mohd Salleh, Mohd Idzwan; Zakaria, Nasriah; Abdullah, Rosni

    The Ministry of Health Malaysia initiated the total hospital information system (THIS) as the first national electronic health record system for use in selected public hospitals across the country. Since its implementation 15 years ago, there has been a critical need for a systematic evaluation to assess its effectiveness in coping with the current system, task complexity, and rapid technological changes. The study aims to assess system quality factors that predict health care providers' performance with the electronic health record system in a single public hospital in Malaysia. Non-probability sampling was employed for data collection among selected providers in a single hospital for two months. Data cleaning and bias checking were performed before final analysis in partial least squares-structural equation modeling. Convergent and discriminant validity assessments satisfied the required criteria in the reflective measurement model. The structural model output revealed that adequate infrastructure, system interoperability, security control, and system compatibility were significant predictors, with system compatibility being the most critical characteristic influencing an individual health care provider's performance. The previous DeLone and McLean information system success models should be extended to incorporate these technological factors in the medical system research domain to examine the effectiveness of modern electronic health record systems. In this study, care providers' performance improved when system usage fit patients' needs, which eventually increased their productivity. Copyright © 2016 King Saud Bin Abdulaziz University for Health Sciences. Published by Elsevier Ltd. All rights reserved.

  8. Managing poorly performing clinicians: health care providers' willingness to pay for independent help.

    Science.gov (United States)

    Watson, Verity; Sussex, Jon; Ryan, Mandy; Tetteh, Ebenezer

    2012-03-01

    To determine the willingness to pay (WTP) of senior managers in the UK National Health Service (NHS) for services to help manage performance concerns with doctors, dentists and pharmacists. A discrete choice experiment (DCE) was used to elicit senior managers' preferences for a support service to help manage clinical performance concerns. The DCE was based on: a literature review; interviews with support service providers and clinical professional bodies; and discussion groups with managers. From the DCE responses, we estimate marginal WTP for aspects of support services. 451 NHS managers completed the DCE questionnaire. NHS managers are willing to pay for: advice, 'facilitation', and behavioural, health, clinical and organisational assessments. Telephone advice with written confirmation was valued most highly. NHS managers were willing to pay £161.56 (CI: £160.81-£162.32) per year per whole time equivalent doctor, dentist or pharmacist, for support to help manage clinical performance concerns. Marginal WTP varied across respondent subgroups but was always positive. Health care managers valued help in managing the clinicians' performance, and were willing to pay for it from their organisations' limited funds. Their WTP exceeds the current cost of a UK body providing similar support. Establishing a central body to provide such services across a health care system, with the associated economies of scale including cumulative experience, is an option that policy makers should consider seriously. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  9. Myria: Scalable Analytics as a Service

    Science.gov (United States)

    Howe, B.; Halperin, D.; Whitaker, A.

    2014-12-01

    At the UW eScience Institute, we're working to empower non-experts, especially in the sciences, to write and use data-parallel algorithms. To this end, we are building Myria, a web-based platform for scalable analytics and data-parallel programming. Myria's internal model of computation is the relational algebra extended with iteration, such that every program is inherently data-parallel, just as every query in a database is inherently data-parallel. But unlike databases, iteration is a first class concept, allowing us to express machine learning tasks, graph traversal tasks, and more. Programs can be expressed in a number of languages and can be executed on a number of execution environments, but we emphasize a particular language called MyriaL that supports both imperative and declarative styles and a particular execution engine called MyriaX that uses an in-memory column-oriented representation and asynchronous iteration. We deliver Myria over the web as a service, providing an editor, performance analysis tools, and catalog browsing features in a single environment. We find that this web-based "delivery vector" is critical in reaching non-experts: they are insulated from the irrelevant technical work associated with installation, configuration, and resource management. The MyriaX backend, one of several execution runtimes we support, is a main-memory, column-oriented, RDBMS-on-the-worker system that supports cyclic data flows as a first-class citizen and has been shown to outperform competitive systems on 100-machine cluster sizes. I will describe the Myria system, give a demo, and present some new results in large-scale oceanographic microbiology.

  10. Enhanced jump performance when providing augmented feedback compared to an external or internal focus of attention.

    Science.gov (United States)

    Keller, Martin; Lauber, Benedikt; Gottschalk, Marius; Taube, Wolfgang

    2015-01-01

    Factors such as an external focus of attention (EF) and augmented feedback (AF) have been shown to improve performance. However, the efficacy of providing AF to enhance motor performance has never been compared with the effects of an EF or an internal focus of attention (IF). Therefore, the aim of the present study was to identify which of the three conditions (AF, EF or IF) leads to the highest performance in a countermovement jump (CMJ). Nineteen volunteers performed 12 series of 8 maximum CMJs. Changes in jump height between conditions and within the series were analysed. Jump heights differed between conditions (P < 0.05): higher jump heights at the end of the series in AF (+1.60%) and lower jump heights at the end of the series in EF (-1.79%) and IF (-1.68%) were observed. Muscle activity did not differ between conditions. The differences between conditions and within the series provide evidence that AF leads to higher performance and better progression within one series than EF and IF. Consequently, AF seems to outperform EF and IF when maximising jump height.

  11. Scalability of RAID Systems

    OpenAIRE

    Li, Yan

    2010-01-01

    RAID systems (Redundant Arrays of Inexpensive Disks) have dominated backend storage systems for more than two decades and have grown continuously in size and complexity. Currently they face unprecedented challenges from data intensive applications such as image processing, transaction processing and data warehousing. As the size of RAID systems increases, designers are faced with both performance and reliability challenges. These challenges include limited back-end network bandwidth…

  12. Programming Scala Scalability = Functional Programming + Objects

    CERN Document Server

    Wampler, Dean

    2009-01-01

    Learn how to be more productive with Scala, a new multi-paradigm language for the Java Virtual Machine (JVM) that integrates features of both object-oriented and functional programming. With this book, you'll discover why Scala is ideal for highly scalable, component-based applications that support concurrency and distribution. Programming Scala clearly explains the advantages of Scala as a JVM language. You'll learn how to leverage the wealth of Java class libraries to meet the practical needs of enterprise and Internet projects more easily. Packed with code examples, this book provides us

  13. Use of Provider-Level Dashboards and Pay-for-Performance in Venous Thromboembolism Prophylaxis*

    Science.gov (United States)

    Michtalik, Henry J.; Carolan, Howard T.; Haut, Elliott R.; Lau, Brandyn D.; Streiff, Michael B.; Finkelstein, Joseph; Pronovost, Peter J.; Durkin, Nowella; Brotman, Daniel J.

    2014-01-01

    Background Despite safe and cost-effective venous thromboembolism (VTE) prevention measures, VTE prophylaxis rates are often suboptimal. Healthcare reform efforts emphasize transparency through programs to report performance, and payment incentives through programs to pay-for-performance. Objective To sequentially examine an individualized physician dashboard and pay-for-performance program to improve VTE prophylaxis rates amongst hospitalists. Design Retrospective analysis of 3144 inpatient admissions. After a baseline observation period, VTE prophylaxis compliance was compared during both interventions. Setting 1060-bed tertiary care medical center. Participants 38 part- and full-time academic hospitalists. Interventions A Web-based hospitalist dashboard provided VTE prophylaxis feedback. After 6 months of feedback only, a pay-for-performance program was incorporated, with graduated payouts for compliance rates of 80-100%. Measurements Prescription of American College of Chest Physicians guideline-compliant VTE prophylaxis and subsequent pay-for-performance payments. Results Monthly VTE prophylaxis compliance rates were 86% (95% CI: 85, 88), 90% (95% CI: 88, 93), and 94% (95% CI: 93, 96) during the baseline, dashboard, and combined dashboard/pay-for-performance periods, respectively. Compliance significantly improved with the use of the dashboard (p=0.01) and addition of the pay-for-performance program (p=0.01). The highest rate of improvement occurred with the dashboard (1.58%/month; p=0.01). Annual individual physician performance payments ranged from $53 to $1244 (mean $633; SD ±350). Conclusions Direct feedback using dashboards was associated with significantly improved compliance, with further improvement after incorporating an individual physician pay-for-performance program. Real-time dashboards and physician-level incentives may assist hospitals in achieving higher safety and quality benchmarks. PMID:25545690
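
    A graduated payout of the type described (payments beginning at 80% compliance and growing to a maximum at 100%) can be sketched as follows. The linear ramp and the cap, set here to the study's reported maximum annual payment, are assumptions; the study does not publish its exact formula.

        def payout(compliance: float, floor=0.80, cap=1244.0) -> float:
            """Linear ramp from $0 at the floor to the cap at 100% compliance."""
            if compliance < floor:
                return 0.0
            return cap * (compliance - floor) / (1.0 - floor)

        for c in (0.75, 0.85, 0.94, 1.00):
            print(f"{c:.0%} compliant -> ${payout(c):,.2f}")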

  14. Should trained lay providers perform HIV testing? A systematic review to inform World Health Organization guidelines.

    Science.gov (United States)

    Kennedy, C E; Yeh, P T; Johnson, C; Baggaley, R

    2017-12-01

    New strategies for HIV testing services (HTS) are needed to achieve UN 90-90-90 targets, including diagnosis of 90% of people living with HIV. Task-sharing HTS to trained lay providers may alleviate health worker shortages and better reach target groups. We conducted a systematic review of studies evaluating HTS by lay providers using rapid diagnostic tests (RDTs). Peer-reviewed articles were included if they compared HTS using RDTs performed by trained lay providers to HTS by health professionals, or to no intervention. We also reviewed data on end-users' values and preferences around lay providers performing HTS. Searching was conducted through 10 online databases, reviewing reference lists, and contacting experts. Screening and data abstraction were conducted in duplicate using systematic methods. Of 6113 unique citations identified, 5 studies were included in the effectiveness review and 6 in the values and preferences review. One US-based randomized trial found patients' uptake of HTS doubled with lay providers (57% vs. 27%, percent difference: 30, 95% confidence interval: 27-32, p < 0.001). Studies from Cambodia, Malawi, and South Africa comparing testing quality between lay providers and laboratory staff found little discordance and high sensitivity and specificity (≥98%). Values and preferences studies generally found support for lay providers conducting HTS, particularly in non-hypothetical scenarios. Based on evidence supporting the use of trained lay providers, a WHO expert panel recommended lay providers be allowed to conduct HTS using HIV RDTs. Uptake of this recommendation could expand HIV testing to more people globally.

  15. Effects of performance measure implementation on clinical manager and provider motivation.

    Science.gov (United States)

    Damschroder, Laura J; Robinson, Claire H; Francis, Joseph; Bentley, Douglas R; Krein, Sarah L; Rosland, Ann-Marie; Hofer, Timothy P; Kerr, Eve A

    2014-12-01

    Clinical performance measurement has been a key element of efforts to transform the Veterans Health Administration (VHA). However, there are a number of signs that current performance measurement systems used within and outside the VHA may be reaching the point of maximum benefit to care and in some settings, may be resulting in negative consequences to care, including overtreatment and diminished attention to patient needs and preferences. Our research group has been involved in a long-standing partnership with the office responsible for clinical performance measurement in the VHA to understand and develop potential strategies to mitigate the unintended consequences of measurement. Our aim was to understand how the implementation of diabetes performance measures (PMs) influences management actions and day-to-day clinical practice. This is a mixed methods study design based on quantitative administrative data to select study facilities and qualitative data from semi-structured interviews. Sixty-two network-level and facility-level executives, managers, front-line providers and staff participated in the study. Qualitative content analyses were guided by a team-based consensus approach using verbatim interview transcripts. A published interpretive motivation theory framework is used to describe potential contributions of local implementation strategies to unintended consequences of PMs. Implementation strategies used by management affect providers' response to PMs, which in turn potentially undermines provision of high-quality patient-centered care. These include: 1) feedback reports to providers that are dissociated from a realistic capability to address performance gaps; 2) evaluative criteria set by managers that are at odds with patient-centered care; and 3) pressure created by managers' narrow focus on gaps in PMs that is viewed as more punitive than motivating. Next steps include working with VHA leaders to develop and test implementation approaches to help

  16. Performance of healthcare providers regarding iranian women experiencing physical domestic violence in Isfahan

    Directory of Open Access Journals (Sweden)

    Nasim Yousefnia

    2018-01-01

    Background: Domestic violence (DV) can threaten women's health. Healthcare providers (HCPs) may be the first to come into contact with a victim of DV, and their appropriate performance regarding a DV victim can decrease its complications. The aim of the present study was to investigate HCPs' performance regarding women experiencing DV in emergency and maternity wards of hospitals in Isfahan, Iran. Materials and Methods: The present descriptive, cross-sectional study was conducted among 300 HCPs working in emergency and maternity wards in hospitals in Isfahan. The participants were selected using quota random sampling from February to May 2016. A researcher-made questionnaire covering the five domains of HCPs' performance regarding DV (assessment, intervention, documentation, referral, and follow-up) was used to collect data. The reliability and validity of the questionnaire were confirmed, and the collected data were analyzed using SPSS software; Cronbach's alpha was used to assess the reliability of the questionnaire. To provide a general description of the data (variables, means, and standard deviations), frequency tables were prepared. Results: The participants' performance regarding DV was average in the assessment (mean = 64.22), intervention (mean = 68.55), and referral (mean = 68.32) stages, good in the documentation stage (mean = 72.55), and weak in the follow-up stage (mean = 23.10) (criterion out of 100). Conclusions: Given the defects in the services provided for women experiencing DV, a practical indigenous guideline should be developed to treat and support these women.

  17. Wind farms providing secondary frequency regulation: evaluating the performance of model-based receding horizon control

    Directory of Open Access Journals (Sweden)

    C. R. Shapiro

    2018-01-01

    This paper is an extended version of our paper presented at the 2016 TORQUE conference (Shapiro et al., 2016). We investigate the use of wind farms to provide secondary frequency regulation for a power grid using a model-based receding horizon control framework. In order to enable real-time implementation, the control actions are computed based on a time-varying one-dimensional wake model. This model describes wake advection and wake interactions, both of which play an important role in wind farm power production. In order to test the control strategy, it is implemented in a large-eddy simulation (LES) model of an 84-turbine wind farm using the actuator disk turbine representation. Rotor-averaged velocity measurements at each turbine are used to provide feedback for error correction. The importance of including the dynamics of wake advection in the underlying wake model is tested by comparing the performance of this dynamic-model control approach to a comparable static-model control approach that relies on a modified Jensen model. We compare the performance of both control approaches using two types of regulation signals, RegA and RegD, which are used by PJM, an independent system operator in the eastern United States. The poor performance of the static-model control relative to the dynamic-model control demonstrates that modeling the dynamics of wake advection is key to providing the proposed type of model-based coordinated control of large wind farms. We further explore the performance of the dynamic-model control via composite performance scores used by PJM to qualify plants for regulation services or markets. Our results demonstrate that the dynamic-model-controlled wind farm consistently performs well, passing the qualification threshold for all fast-acting RegD signals. For the RegA signal, which changes over slower timescales, the dynamic-model control leads to average performance that surpasses the qualification threshold, but further work is needed to enable the controlled system to achieve qualifying performance all of the time.
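
    As a rough illustration of the receding horizon idea described above (not the authors' wake model or controller), the following Python sketch tracks a regulation reference with a first-order lag standing in for wake advection; the time step, lag constant, horizon length, and power normalization are made-up values.

      import numpy as np
      from scipy.optimize import minimize

      dt, tau, H, P_max = 1.0, 30.0, 10, 1.0   # step [s], lag [s], horizon, power cap

      def simulate(p0, u):
          # toy lagged farm response to commanded power levels u
          p, out = p0, []
          for uk in u:
              p = p + dt / tau * (min(uk, P_max) - p)
              out.append(p)
          return np.array(out)

      def mpc_step(p_meas, ref):
          # re-solved at every step with the latest measurement (receding horizon)
          cost = lambda u: np.sum((simulate(p_meas, u) - ref) ** 2)
          res = minimize(cost, np.full(H, ref[0]), bounds=[(0.0, P_max)] * H)
          return res.x[0]                       # apply only the first control action

      u_now = mpc_step(0.4, np.full(H, 0.6))    # one closed-loop step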

  18. Providing hierarchical approach for measuring supply chain performance using AHP and DEMATEL methodologies

    Directory of Open Access Journals (Sweden)

    Ali Najmi

    2010-06-01

    Measuring the performance of a supply chain is normally a function of various parameters. Such a problem often involves a multiple-criteria decision-making (MCDM) setting in which different criteria need to be defined and calculated properly. During the past two decades, the analytic hierarchy process (AHP) and DEMATEL have been among the most popular MCDM approaches for prioritizing attributes. This paper uses a methodology combining AHP and DEMATEL to rank various parameters affecting the performance of the supply chain: DEMATEL is used to understand the relationships between comparison metrics, and AHP is used to integrate them into a value for the overall performance.
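
    To make the AHP integration step concrete, the sketch below derives criterion weights as the principal eigenvector of a pairwise comparison matrix, which is the standard AHP computation; the DEMATEL stage and the paper's exact coupling of the two methods are omitted, and the random index value is Saaty's tabulated value for five criteria.

      import numpy as np

      def ahp_weights(pairwise):
          # principal eigenvector of the comparison matrix, normalized to sum to 1
          vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
          w = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
          return w / w.sum()

      def consistency_ratio(pairwise, random_index=1.12):  # Saaty's RI for n = 5
          A = np.asarray(pairwise, dtype=float)
          lam = np.max(np.real(np.linalg.eigvals(A)))
          n = len(A)
          return (lam - n) / (n - 1) / random_index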

  19. Scalable Multi-Platform Distribution of Spatial 3d Contents

    Science.gov (United States)

    Klimke, J.; Hagedorn, B.; Döllner, J.

    2013-09-01

    Virtual 3D city models provide powerful user interfaces for communication of 2D and 3D geoinformation. Providing high-quality visualization of massive 3D geoinformation in a scalable, fast, and cost-efficient manner is still a challenging task. Especially for mobile and web-based system environments, software and hardware configurations of target systems differ significantly, which makes it hard to provide fast, visually appealing renderings of 3D data across a variety of platforms and devices. Current mobile or web-based solutions for 3D visualization usually require raw 3D scene data, such as triangle meshes together with textures, to be delivered from server to client, which strongly limits the size and complexity of the models they can handle. In this paper, we introduce a new approach for provisioning of massive virtual 3D city models on different platforms, namely web browsers, smartphones, and tablets, by means of an interactive map assembled from artificial oblique image tiles. The key concept is to synthesize such images of a virtual 3D city model by a 3D rendering service in a preprocessing step. This service encapsulates model handling and 3D rendering techniques for high-quality visualization of massive 3D models. By generating image tiles using this service, the 3D rendering process is shifted from the client side, which provides major advantages: (a) the complexity of the 3D city model data is decoupled from data transfer complexity; (b) the implementation of client applications is simplified significantly, as 3D rendering is encapsulated on the server side; and (c) 3D city models can be easily deployed for and used by a large number of concurrent users, leading to a high degree of scalability of the overall approach. All core 3D rendering techniques are performed on a dedicated 3D rendering server, and thin-client applications can be compactly implemented for various devices and platforms.
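
    The preprocessing step described above can be sketched as a tile pyramid that is rendered once on the server and then served as static images; render_tile() below is a hypothetical stand-in for the 3D rendering service, and the four oblique viewing directions are an assumption.

      from pathlib import Path

      DIRECTIONS = ["N", "E", "S", "W"]          # assumed oblique view directions

      def render_tile(level, x, y, direction):
          # placeholder: a real service would rasterize the 3D city model here
          return b"<png bytes>"

      def prerender(levels, out="tiles"):
          for z in range(levels):
              for d in DIRECTIONS:
                  for x in range(2 ** z):
                      folder = Path(out, d, str(z), str(x))
                      folder.mkdir(parents=True, exist_ok=True)
                      for y in range(2 ** z):
                          (folder / f"{y}.png").write_bytes(render_tile(z, x, y, d))

    Clients then fetch tiles exactly as they would from a 2D web map, so no 3D rendering capability is required on the device.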

  20. Neuromorphic adaptive plastic scalable electronics: analog learning systems.

    Science.gov (United States)

    Srinivasa, Narayan; Cruz-Albrecht, Jose

    2012-01-01

    Decades of research to build programmable intelligent machines have demonstrated limited utility in complex, real-world environments. Compared with biological systems, these machines are less efficient by a factor of one million to one billion in complex, real-world environments. The Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program is a multifaceted Defense Advanced Research Projects Agency (DARPA) project that seeks to break the programmable machine paradigm and define a new path for creating useful, intelligent machines. Since real-world systems exhibit infinite combinatorial complexity, electronic neuromorphic machine technology would be preferable in a host of applications, but useful and practical implementations still do not exist. HRL Laboratories LLC has embarked on addressing these challenges, and, in this article, we provide an overview of our project and progress made thus far.

  1. Can trained lay providers perform HIV testing services? A review of national HIV testing policies.

    Science.gov (United States)

    Flynn, David E; Johnson, Cheryl; Sands, Anita; Wong, Vincent; Figueroa, Carmen; Baggaley, Rachel

    2017-01-04

    Only an estimated 54% of people living with HIV are aware of their status. Despite progress scaling up HIV testing services (HTS), a testing gap remains. Delivery of HTS by lay providers may help close this testing gap, while also increasing uptake and acceptability of HIV testing among key populations and other priority groups. Fifty national HIV testing policies were collated from WHO country intelligence databases, contacts, and testing program websites. Data regarding the use of lay providers for HTS were extracted and collated; our search had no geographical or language restrictions. These data were then compared with reported data from the Global AIDS Response Progress Reporting (GARPR) from July 2015. Forty-two percent of countries permit lay providers to perform HIV testing and 56% permit lay providers to administer pre- and post-test counseling. Comparative analysis with GARPR found that less than half (46%) of reported country data were consistent with the corresponding national HIV testing policy. Given the low uptake of lay provider use globally and their proven role in increasing HIV testing, countries should consider revising policies to support lay provider testing using rapid diagnostic tests.

  2. GPU-based Scalable Volumetric Reconstruction for Multi-view Stereo

    Energy Technology Data Exchange (ETDEWEB)

    Kim, H; Duchaineau, M; Max, N

    2011-09-21

    We present a new scalable volumetric reconstruction algorithm for multi-view stereo using a graphics processing unit (GPU). It is an effectively parallelized GPU algorithm that simultaneously uses a large number of GPU threads, each of which performs voxel carving, in order to integrate depth maps with images from multiple views. Each depth map, triangulated from pair-wise semi-dense correspondences, represents a view-dependent surface of the scene. This algorithm also provides scalability for large-scale scene reconstruction in a high resolution voxel grid by utilizing streaming and parallel computation. The output is a photo-realistic 3D scene model in a volumetric or point-based representation. We demonstrate the effectiveness and the speed of our algorithm with a synthetic scene and real urban/outdoor scenes. Our method can also be integrated with existing multi-view stereo algorithms such as PMVS2 to fill holes or gaps in textureless regions.
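
    The per-voxel test is embarrassingly parallel (one GPU thread per voxel in the paper); the NumPy sketch below shows the carving predicate for a single view on the CPU, with the projection matrix, depth map, and tolerance as illustrative assumptions rather than the paper's exact formulation.

      import numpy as np

      def carve_mask(voxels, P, depth, eps=0.01):
          # voxels: (N, 3) world-space centers; P: 3x4 projection; depth: (H, W) map
          h = np.hstack([voxels, np.ones((len(voxels), 1))])
          x, y, z = (h @ P.T).T
          front = z > 1e-6                      # voxel lies in front of the camera
          u = np.zeros(len(voxels), dtype=int)
          v = np.zeros(len(voxels), dtype=int)
          u[front] = (x[front] / z[front]).astype(int)
          v[front] = (y[front] / z[front]).astype(int)
          H, W = depth.shape
          ok = front & (u >= 0) & (u < W) & (v >= 0) & (v < H)
          carved = np.zeros(len(voxels), dtype=bool)
          carved[ok] = z[ok] < depth[v[ok], u[ok]] - eps   # in free space: carve
          return carved

    Accumulating occupancy &= ~carve_mask(...) over all views leaves only the voxels consistent with every depth map.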

  3. Wind farms providing secondary frequency regulation: Evaluating the performance of model-based receding horizon control

    International Nuclear Information System (INIS)

    Shapiro, Carl R.; Meneveau, Charles; Gayme, Dennice F.; Meyers, Johan

    2016-01-01

    We investigate the use of wind farms to provide secondary frequency regulation for a power grid. Our approach uses model-based receding horizon control of a wind farm that is tested using a large eddy simulation (LES) framework. In order to enable real-time implementation, the control actions are computed based on a time-varying one-dimensional wake model. This model describes wake advection and interactions, both of which play an important role in wind farm power production. This controller is implemented in an LES model of an 84-turbine wind farm represented by actuator disk turbine models. Differences between the velocities at each turbine predicted by the wake model and measured in LES are used for closed-loop feedback. The controller is tested on two types of regulation signals, “RegA” and “RegD”, obtained from PJM, an independent system operator in the eastern United States. Composite performance scores, which are used by PJM to qualify plants for regulation, are used to evaluate the performance of the controlled wind farm. Our results demonstrate that the controlled wind farm consistently performs well, passing the qualification threshold for all fast-acting RegD signals. For the RegA signal, which changes over slower time scales, the controlled wind farm's average performance surpasses the threshold, but further work is needed to enable the controlled system to achieve qualifying performance all of the time.

  4. The Impact of Order Source Misattribution on Computerized Provider Order Entry (CPOE) Performance Metrics.

    Science.gov (United States)

    Gellert, George A; Catzoela, Linda; Patel, Lajja; Bruner, Kylynn; Friedman, Felix; Ramirez, Ricardo; Saucedo, Lilliana; Webster, S Luke; Gillean, John A

    2017-01-01

    One strategy to foster adoption of computerized provider order entry (CPOE) by physicians is the monthly distribution of a list identifying the number and use rate percentage of orders entered electronically versus on paper by each physician in the facility. Physicians care about CPOE use rate reports because they support the patient safety and quality improvement objectives of CPOE implementation. Certain physician groups are also motivated because they participate in contracted financial and performance arrangements that include incentive payments or financial penalties for meeting (or failing to meet) a specified CPOE use rate target. Misattribution of order sources can hinder accurate measurement of individual physician CPOE use and can thereby undermine providers' confidence in their reported performance, as well as their motivation to utilize CPOE. Misattribution of order sources also has significant patient safety, quality, and medicolegal implications. This analysis sought to evaluate the magnitude and sources of misattribution among hospitalists with high CPOE use and, if misattribution was found, to formulate strategies to prevent and reduce its recurrence, thereby ensuring the integrity and credibility of individual and facility CPOE use rate reporting. A detailed manual order source review and validation of all orders issued by one hospitalist group at a midsize community hospital was conducted for a one-month study period. We found that a small but not dismissible percentage of orders issued by hospitalists (up to 4.18 percent per month; 95 percent confidence interval, 3.84-4.56 percent) were attributed inaccurately. Sources of misattribution by department or function were as follows: nursing, 42 percent; pharmacy, 38 percent; laboratory, 15 percent; unit clerk, 3 percent; and radiology, 2 percent. Order management and protocol were the most common correct order sources that were incorrectly attributed. Order source misattribution can negatively affect
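
    The reported interval is consistent with a standard binomial proportion confidence interval; the sketch below computes the Wilson interval for a misattribution count (the example counts are hypothetical, not taken from the study).

      import math

      def wilson_ci(successes, n, z=1.96):
          # 95% Wilson score interval for a binomial proportion
          p = successes / n
          denom = 1 + z * z / n
          center = (p + z * z / (2 * n)) / denom
          half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
          return center - half, center + half

      lo, hi = wilson_ci(418, 10000)   # hypothetical: 418 misattributed of 10,000 orders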

  5. Building a scalable event-level metadata service for ATLAS

    International Nuclear Information System (INIS)

    Cranshaw, J; Malon, D; Goosens, L; Viegas, F T A; McGlone, H

    2008-01-01

    The ATLAS TAG Database is a multi-terabyte event-level metadata selection system, intended to allow discovery, selection of, and navigation to events of interest to an analysis. The TAG Database encompasses file- and relational-database-resident event-level metadata, distributed across all ATLAS Tiers. A global TAG relational database containing all ATLAS events, implemented in Oracle, will exist at Tier 0. Implementing a system that is both performant and manageable at this scale is a challenge. A 1 TB relational TAG Database has been deployed at Tier 0 using simulated tag data. The database contains one billion events, each described by two hundred event metadata attributes, and is currently undergoing extensive testing in terms of queries, population, and manageability. These 1 TB tests aim to demonstrate and optimise the performance and scalability of an Oracle TAG Database on a global scale. Partitioning and indexing strategies are crucial to well-performing queries and manageability of the database, and have implications for database population and distribution, so these are investigated. Physics query patterns are anticipated, but a crucial feature of the system must be to support a broad range of queries across all attributes. Concurrently, event tags from ATLAS Computing System Commissioning distributed simulations are accumulated in an Oracle-hosted database at CERN, providing an event-level selection service valuable for user experience and for gathering information about physics query patterns. In this paper we describe the status of the global TAG relational database scalability work and highlight areas of future direction.

  6. Scalable fast multipole accelerated vortex methods

    KAUST Repository

    Hu, Qi

    2014-05-01

    The fast multipole method (FMM) is often used to accelerate the calculation of particle interactions in particle-based methods for simulating incompressible flows. To evaluate the most time-consuming kernels, the Biot-Savart equation and the stretching term of the vorticity equation, we mathematically reformulated them so that only two Laplace scalar potentials are used instead of six, automatically ensuring divergence-free far-field computation. Based on this formulation, we developed a new FMM-based vortex method for heterogeneous architectures, which distributes the work between multicore CPUs and GPUs to best utilize the hardware resources and achieve excellent scalability. The algorithm uses new data structures that can dynamically manage inter-node communication and load balance efficiently, with only a small parallel construction overhead, and it scales to large clusters, showing both strong and weak scalability. Careful error and timing trade-off analyses are also performed for the cutoff functions induced by the vortex particle method. Our implementation can perform one time step of the velocity+stretching calculation for one billion particles on 32 nodes in 55.9 seconds, which yields 49.12 Tflop/s.
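
    For reference, the two kernels named above have the following standard forms (written in the usual textbook convention, with the inviscid stretching term shown; this notation is not copied from the paper):

      \[
      \mathbf{u}(\mathbf{x}) = \frac{1}{4\pi}\int
      \frac{\boldsymbol{\omega}(\mathbf{x}') \times (\mathbf{x}-\mathbf{x}')}
           {|\mathbf{x}-\mathbf{x}'|^{3}}\,\mathrm{d}\mathbf{x}',
      \qquad
      \frac{\mathrm{D}\boldsymbol{\omega}}{\mathrm{D}t}
      = (\boldsymbol{\omega}\cdot\nabla)\,\mathbf{u}.
      \]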

  7. Motor unit recruitment by size does not provide functional advantages for motor performance.

    Science.gov (United States)

    Dideriksen, Jakob L; Farina, Dario

    2013-12-15

    It is commonly assumed that the orderly recruitment of motor units by size provides a functional advantage for the performance of movements compared with a random recruitment order. On the other hand, the excitability of a motor neuron depends on its size and this is intrinsically linked to its innervation number. A range of innervation numbers among motor neurons corresponds to a range of sizes and thus to a range of excitabilities ordered by size. Therefore, if the excitation drive is similar among motor neurons, the recruitment by size is inevitably due to the intrinsic properties of motor neurons and may not have arisen to meet functional demands. In this view, we tested the assumption that orderly recruitment is necessarily beneficial by determining if this type of recruitment produces optimal motor output. Using evolutionary algorithms and without any a priori assumptions, the parameters of neuromuscular models were optimized with respect to several criteria for motor performance. Interestingly, the optimized model parameters matched well known neuromuscular properties, but none of the optimization criteria determined a consistent recruitment order by size unless this was imposed by an association between motor neuron size and excitability. Further, when the association between size and excitability was imposed, the resultant model of recruitment did not improve the motor performance with respect to the absence of orderly recruitment. A consistent observation was that optimal solutions for a variety of criteria of motor performance always required a broad range of innervation numbers in the population of motor neurons, skewed towards the small values. These results indicate that orderly recruitment of motor units in itself does not provide substantial functional advantages for motor control. Rather, the reason for its near-universal presence in human movements is that motor functions are optimized by a broad range of innervation numbers.
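
    As a generic illustration of the optimization machinery described above (the neuromuscular model itself is not reproduced; the fitness function is a hypothetical stand-in for a motor-performance criterion), a minimal (mu + lambda) evolution strategy over model parameters looks like this:

      import numpy as np

      rng = np.random.default_rng(0)

      def fitness(params):
          # hypothetical stand-in, e.g. negative force-tracking error of a model
          return -np.sum((params - 0.3) ** 2)

      def evolve(dim=8, mu=10, lam=40, sigma=0.1, gens=200):
          pop = rng.random((mu, dim))
          for _ in range(gens):
              parents = pop[rng.integers(0, mu, lam)]
              children = parents + sigma * rng.standard_normal((lam, dim))
              union = np.vstack([pop, children])
              scores = np.array([fitness(p) for p in union])
              pop = union[np.argsort(scores)[-mu:]]   # keep the best mu
          return pop[-1]                              # best individual found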

  8. BUSINESS PERFORMANCE OF HEALTH TOURISM SERVICE PROVIDERS IN THE REPUBLIC OF CROATIA.

    Science.gov (United States)

    Vrkljan, Sanela; Hendija, Zvjezdana

    2016-03-01

    Health tourism can be generally divided into medical, health spa, and wellness tourism. Health spa tourism services are provided in special hospitals for medical rehabilitation and health resorts, and include medically supervised, controlled use of natural healing factors and physical therapy in order to improve and preserve health. There are 13 special hospitals for medical rehabilitation and health resorts in Croatia. Most of them are financed through the state budget and to a lesser extent by sales on the market. More than half of their accommodation capacity is offered for sale on the market, while the rest is under contract with the Croatian Health Insurance Fund. Domestic overnights are several times higher than foreign overnights. The aim of this study was to analyze the business performance of special hospitals for medical rehabilitation and health resorts in Croatia in relation to the sources of financing and the structure of service users, under the assumption that those that are more market-oriented achieve better business performance. To test this assumption, empirical research was conducted. A positive correlation was found between the tested indicators of business performance of the analyzed health spa tourism providers and a higher share of overnights realized through sales on the market in total overnights, a greater share of foreign overnights in total overnights, and a higher share of revenue realized on the market in total revenue. The results show that special hospitals for medical rehabilitation and health resorts that are more market-oriented are more successful in their business performance. These findings are important for planning health and tourism policies in countries like Croatia.

  9. An exploratory study of factors influencing resuscitation skills retention and performance among health providers.

    Science.gov (United States)

    Curran, Vernon; Fleet, Lisa; Greene, Melanie

    2012-01-01

    Resuscitation and life support skills training comprises a significant proportion of continuing education programming for health professionals. The purpose of this study was to explore the perceptions and attitudes of certified resuscitation providers toward the retention of resuscitation skills, regular skills updating, and methods for enhancing retention. A mixed-methods, explanatory study design was undertaken utilizing focus groups and an online survey-questionnaire of rural and urban health care providers. Rural providers reported less experience with real codes and lower abilities across a variety of resuscitation areas. Mock codes, practice with an instructor and a team, self-practice with a mannequin, and e-learning were popular methods for skills updating. Aspects of team performance that were felt to influence resuscitation performance included: discrepancies in skill levels, lack of communication, and team leaders not up to date on their skills. Confidence in resuscitation abilities was greatest after one had recently practiced or participated in an update or an effective debriefing session. Lowest confidence was reported when team members did not work well together, there was no clear leader of the resuscitation code, or if team members did not communicate. The study findings highlight the importance of access to update methods for improving providers' confidence and abilities, and the need for emphasis on teamwork training in resuscitation. An eclectic approach combining methods may be the best strategy for addressing the needs of health professionals across various clinical departments and geographic locales.

  10. Performance evaluation of hospitals that provide care in the public health system, Brazil.

    Science.gov (United States)

    Ramos, Marcelo Cristiano de Azevedo; da Cruz, Lucila Pedroso; Kishima, Vanessa Chaer; Pollara, Wilson Modesto; de Lira, Antônio Carlos Onofre; Couttolenc, Bernard François

    2015-01-01

    OBJECTIVE To analyze if size, administrative level, legal status, type of unit and educational activity influence the hospital network performance in providing services to the Brazilian Unified Health System. METHODS This cross-sectional study evaluated data from the Hospital Information System and the Cadastro Nacional de Estabelecimentos de Saúde (National Registry of Health Facilities), 2012, in Sao Paulo, Southeastern Brazil. We calculated performance indicators, such as: the ratio of hospital employees per bed; mean amount paid for admission; bed occupancy rate; average length of stay; bed turnover index and hospital mortality rate. Data were expressed as mean and standard deviation. The groups were compared using analysis of variance (ANOVA) and Bonferroni correction. RESULTS The hospital occupancy rate in small hospitals was lower than in medium, big and special-sized hospitals. Higher hospital occupancy rate and bed turnover index were observed in hospitals that include education in their activities. The hospital mortality rate was lower in specialized hospitals compared to general ones, despite their higher proportion of highly complex admissions. We found no differences between hospitals in the direct and indirect administration for most of the indicators analyzed. CONCLUSIONS The study indicated the importance of the scale effect on efficiency, and larger hospitals had a higher performance. Hospitals that include education in their activities had a higher operating performance, albeit with associated importance of using human resources and highly complex structures. Specialized hospitals had a significantly lower rate of mortality than general hospitals, indicating the positive effect of the volume of procedures and technology used on clinical outcomes. The analysis related to the administrative level and legal status did not show any significant performance differences between the categories of public hospitals.
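
    The indicators listed are simple ratios; the sketch below states standard textbook definitions (assumed here, since the abstract does not spell out the formulas).

      def occupancy_rate(patient_days, beds, period_days):
          # fraction of available bed-days actually used
          return patient_days / (beds * period_days)

      def average_length_of_stay(patient_days, discharges):
          return patient_days / discharges

      def bed_turnover_index(discharges, beds):
          # admissions handled per bed over the period
          return discharges / beds

      def hospital_mortality_rate(deaths, discharges):
          return deaths / discharges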

  11. Performance evaluation of hospitals that provide care in the public health system, Brazil

    Directory of Open Access Journals (Sweden)

    Marcelo Cristiano de Azevedo Ramos

    2015-01-01

    OBJECTIVE To analyze if size, administrative level, legal status, type of unit and educational activity influence the hospital network performance in providing services to the Brazilian Unified Health System. METHODS This cross-sectional study evaluated data from the Hospital Information System and the Cadastro Nacional de Estabelecimentos de Saúde (National Registry of Health Facilities), 2012, in Sao Paulo, Southeastern Brazil. We calculated performance indicators, such as: the ratio of hospital employees per bed; mean amount paid for admission; bed occupancy rate; average length of stay; bed turnover index and hospital mortality rate. Data were expressed as mean and standard deviation. The groups were compared using analysis of variance (ANOVA) and Bonferroni correction. RESULTS The hospital occupancy rate in small hospitals was lower than in medium, big and special-sized hospitals. Higher hospital occupancy rate and bed turnover index were observed in hospitals that include education in their activities. The hospital mortality rate was lower in specialized hospitals compared to general ones, despite their higher proportion of highly complex admissions. We found no differences between hospitals in the direct and indirect administration for most of the indicators analyzed. CONCLUSIONS The study indicated the importance of the scale effect on efficiency, and larger hospitals had a higher performance. Hospitals that include education in their activities had a higher operating performance, albeit with associated importance of using human resources and highly complex structures. Specialized hospitals had a significantly lower rate of mortality than general hospitals, indicating the positive effect of the volume of procedures and technology used on clinical outcomes. The analysis related to the administrative level and legal status did not show any significant performance differences between the categories of public hospitals.

  12. Joint-layer encoder optimization for HEVC scalable extensions

    Science.gov (United States)

    Tsai, Chia-Ming; He, Yuwen; Dong, Jie; Ye, Yan; Xiu, Xiaoyu; He, Yong

    2014-09-01

    Scalable video coding provides an efficient solution to support video playback on heterogeneous devices with various channel conditions in heterogeneous networks. SHVC is the latest scalable video coding standard based on the HEVC standard. To improve enhancement layer coding efficiency, inter-layer prediction including texture and motion information generated from the base layer is used for enhancement layer coding. However, the overall performance of the SHVC reference encoder is not fully optimized because rate-distortion optimization (RDO) processes in the base and enhancement layers are considered independently. It is difficult to directly extend the existing joint-layer optimization methods to SHVC due to the complicated coding tree block splitting decisions and in-loop filtering process (e.g., deblocking and sample adaptive offset (SAO) filtering) in HEVC. To solve those problems, a joint-layer optimization method is proposed that adjusts the quantization parameter (QP) to optimally allocate the bit resource between layers. Furthermore, to allocate the bit resource more appropriately, the proposed method also considers the viewing probability of the base and enhancement layers according to the packet loss rate. Based on the viewing probability, a novel joint-layer RD cost function is proposed for joint-layer RDO encoding. The QP values of those coding tree units (CTUs) belonging to lower layers referenced by higher layers are decreased accordingly, and the QP values of the remaining CTUs are increased to keep total bits unchanged. Finally, the QP values with minimal joint-layer RD cost are selected to match the viewing probability. The proposed method was applied to the third temporal level (TL-3) pictures in the Random Access configuration. Simulation results demonstrate that the proposed joint-layer optimization method can improve coding performance by 1.3% for these TL-3 pictures compared to the SHVC reference encoder without joint-layer optimization.
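
    A plausible reading of the described cost function (an assumption; the paper's exact expression may differ) weights each layer's distortion by its viewing probability, which depends on the packet loss rate \(\rho\), while a single Lagrange multiplier constrains the total rate:

      \[
      J = w_{\mathrm{BL}}(\rho)\,D_{\mathrm{BL}} + w_{\mathrm{EL}}(\rho)\,D_{\mathrm{EL}}
          + \lambda\,(R_{\mathrm{BL}} + R_{\mathrm{EL}}),
      \]

    with per-CTU quantization parameters chosen to minimize \(J\) while keeping the total bit budget unchanged.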

  13. The role of kaizen in creating radical performance results in a logistics service provider

    Directory of Open Access Journals (Sweden)

    Erez Agmoni

    2016-09-01

    Background: This study investigates the role of an incremental change in organizational process in creating radical performance results in a service provider company. The role of Kaizen is well established in manufacturing but nascent in service applications. This study examines the impact of introducing Kaizen as an ODI tool: how it is applied, how it works, and whether participants believe it helps service groups form more effective working relationships that result in significant performance improvements. Methods: Exploring the evolving role of Kaizen in service contexts, this study examines a variety of facets of human communication in the context of continuous improvement and inter-organizational teamwork. The paper consists of an archival study and an action research case study. A pre-intervention study consisting of observations, interviews, and questionnaires administered to employees of a manufacturing and air-sea freight firm was conducted. A Kaizen intervention occurred subsequently, and a post-intervention study was then conducted. Results: Radical improvements in both companies, such as 30% financial growth and 81% productivity improvement, are demonstrated in this paper. Conclusions: The findings offer unique insights into the effects of Kaizen in creating radical performance improvements in a service company and its customer. Both qualitative and quantitative results on business, satisfaction, and productivity suggest that time invested in introducing Kaizen into a service organization helps the companies improve relationships and improve the bottom line dramatically.

  14. Quality Scalability Compression on Single-Loop Solution in HEVC

    Directory of Open Access Journals (Sweden)

    Mengmeng Zhang

    2014-01-01

    This paper proposes a quality-scalable extension design for the upcoming high efficiency video coding (HEVC) standard. In the proposed design, the single-loop decoder solution is extended to the scalable scenario. A novel interlayer intra/inter prediction is added to reduce the number of bits required by exploiting the correlation between coding layers. The experimental results indicate that an average Bjøntegaard delta rate decrease of 20.50% can be gained compared with simulcast encoding. The proposed technique achieves a 47.98% Bjøntegaard delta rate reduction compared with the scalable video coding extension of H.264/AVC. These significant rate savings confirm that the proposed method achieves better performance.

  15. Application Scalability and Performance on Multicore Architectures

    National Research Council Canada - National Science Library

    Simon, Tyler A; Cable, Sam B; Mahmoodi, Mahin

    2007-01-01

    The US Army Engineer Research and Development Center Major Shared Resource Center has recently upgraded its Cray XT3 system from single-core to dual-core AMD Opteron processors and has procured a quad-core upgrade.

  16. PROVIDING ENGLISH LANGUAGE INPUT: DECREASING STUDENTS’ ANXIETY IN READING COMPREHENSION PERFORMANCE

    Directory of Open Access Journals (Sweden)

    Elva Yohana

    2016-11-01

    The primary condition for success in second or foreign language learning is providing an adequate environment, as a medium for increasing students' exposure to the language so they can acquire proficiency in the second or foreign language. This study was designed to propose adequate English language input that can decrease students' anxiety in reading comprehension performance. Of the four skills, reading can be regarded as especially important because it is assumed to be the central means for learning new information. Some students, however, still encounter many problems in reading because of the anxiety they experience while reading. Providing and creating interesting, contextual reading materials and gratified teachers can help overcome this problem, which occurs mostly in Indonesian classrooms. The study revealed that younger learners of English there do not receive an adequate amount of target language input in their learning of English. Hence, it suggests the adoption of extensive reading programs as the most effective means of creating an input-rich environment in EFL learning contexts. It also suggests that book writers and publishers provide a wide range of books that are appropriate and readable for their students.

  17. Market competition, ownership, payment systems and the performance of health care providers - a panel study among Finnish occupational health services providers.

    Science.gov (United States)

    Kankaanpää, Eila; Linnosmaa, Ismo; Valtonen, Hannu

    2013-10-01

    Many health care reforms rely on competition, although health care differs in many respects from the assumptions of perfect competition. Finnish occupational health services provide an opportunity to study empirically how competition, ownership, and payment systems relate to the performance of providers. In these markets employers (purchasers) choose the provider and prices are market determined. The price regulation of public providers was abolished in 1995. We had data on providers from 1992, 1995, 1997, 2000 and 2004; the unbalanced panel consisted of 1145 providers and 4059 observations. Our results show that in more competitive markets providers in general offered a higher share of medical care compared to preventive services. The association between unit prices, revenues, and market environment varied according to provider type. For-profit providers had lower prices and revenues in markets with numerous providers. Public providers in more competitive regions were more likely to respond to the abolition of their price regulation by raising their prices. For employer-governed providers, the association between unit prices or revenues and competition was weaker. The market share of for-profit providers was negatively associated with productivity, which was the only sign of market spillovers found in our study.

  18. GPU-FS-kNN: a software tool for fast and scalable kNN computation using GPUs.

    Directory of Open Access Journals (Sweden)

    Ahmed Shamsul Arefin

    BACKGROUND: The analysis of biological networks has become a major challenge due to the recent development of high-throughput techniques that are rapidly producing very large data sets. The exploding volumes of biological data call for extreme computational power and special computing facilities (i.e., supercomputers). An inexpensive solution, such as general-purpose computation based on graphics processing units (GPGPU), can be adapted to tackle this challenge, but the limited internal memory of the device poses a new scalability problem. Efficient data and computational parallelism with partitioning is required to provide a fast and scalable solution to this problem. RESULTS: We propose an efficient parallel formulation of the k-nearest neighbour (kNN) search problem, which is a popular method for classifying objects in several fields of research, such as pattern recognition, machine learning, and bioinformatics. Being very simple and straightforward, the performance of the kNN search degrades dramatically for large data sets, since the task is computationally intensive. The proposed approach is not only fast but also scalable to large-scale instances. Based on our approach, we implemented a software tool GPU-FS-kNN (GPU-based Fast and Scalable k-Nearest Neighbour) for CUDA-enabled GPUs. The basic approach is simple and adaptable to other available GPU architectures. We observed speed-ups of 50-60 times compared with a CPU implementation on a well-known breast microarray study and its associated data sets. CONCLUSION: Our GPU-based fast and scalable k-nearest neighbour search technique (GPU-FS-kNN) provides a significant performance improvement for nearest neighbour computation in large-scale networks. Source code and the software tool are available under GNU Public License (GPL) at https://sourceforge.net/p/gpufsknn/.
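
    The core scalability device described here, partitioning the distance computation so each piece fits in device memory, can be sketched in a few lines. The chunked brute-force version below uses NumPy on the CPU (CuPy, which mirrors the NumPy API, would be one way to run the same logic on a GPU); the chunk size and the squared-distance formulation are illustrative choices, not the tool's implementation.

      import numpy as np

      def knn_chunked(data, queries, k, chunk=1024):
          # brute-force kNN; the distance matrix never exceeds chunk x len(data)
          nq = len(queries)
          idx = np.empty((nq, k), dtype=np.int64)
          dist = np.empty((nq, k))
          for s in range(0, nq, chunk):
              q = queries[s:s + chunk]
              d = ((q[:, None, :] - data[None, :, :]) ** 2).sum(-1)
              part = np.argpartition(d, k, axis=1)[:, :k]   # k smallest, unordered
              rows = np.arange(len(q))[:, None]
              order = np.argsort(d[rows, part], axis=1)     # order within the k
              idx[s:s + chunk] = part[rows, order]
              dist[s:s + chunk] = np.sqrt(d[rows, part][rows, order])
          return idx, dist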

  19. The Impact of Order Source Misattribution on Computerized Provider Order Entry (CPOE) Performance Metrics

    Science.gov (United States)

    Gellert, George A.; Catzoela, Linda; Patel, Lajja; Bruner, Kylynn; Friedman, Felix; Ramirez, Ricardo; Saucedo, Lilliana; Webster, S. Luke; Gillean, John A.

    2017-01-01

    Background One strategy to foster adoption of computerized provider order entry (CPOE) by physicians is the monthly distribution of a list identifying the number and use rate percentage of orders entered electronically versus on paper by each physician in the facility. Physicians care about CPOE use rate reports because they support the patient safety and quality improvement objectives of CPOE implementation. Certain physician groups are also motivated because they participate in contracted financial and performance arrangements that include incentive payments or financial penalties for meeting (or failing to meet) a specified CPOE use rate target. Misattribution of order sources can hinder accurate measurement of individual physician CPOE use and can thereby undermine providers’ confidence in their reported performance, as well as their motivation to utilize CPOE. Misattribution of order sources also has significant patient safety, quality, and medicolegal implications. Objective This analysis sought to evaluate the magnitude and sources of misattribution among hospitalists with high CPOE use and, if misattribution was found, to formulate strategies to prevent and reduce its recurrence, thereby ensuring the integrity and credibility of individual and facility CPOE use rate reporting. Methods A detailed manual order source review and validation of all orders issued by one hospitalist group at a midsize community hospital was conducted for a one-month study period. Results We found that a small but not dismissible percentage of orders issued by hospitalists—up to 4.18 percent (95 percent confidence interval, 3.84–4.56 percent) per month—were attributed inaccurately. Sources of misattribution by department or function were as follows: nursing, 42 percent; pharmacy, 38 percent; laboratory, 15 percent; unit clerk, 3 percent; and radiology, 2 percent. Order management and protocol were the most common correct order sources that were incorrectly attributed

  20. CloudTPS: Scalable Transactions for Web Applications in the Cloud

    NARCIS (Netherlands)

    Zhou, W.; Pierre, G.E.O.; Chi, C.-H.

    2010-01-01

    NoSQL Cloud data services provide scalability and high availability properties for web applications, but at the same time they sacrifice data consistency. However, many applications cannot afford any data inconsistency. CloudTPS is a scalable transaction manager that allows cloud database services to execute multi-item ACID transactions issued by web applications, even in the presence of server failures and network partitions.
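
    A standard building block for multi-item transactions across data partitions is two-phase commit; the skeleton below is a generic, illustrative rendering of that idea, not CloudTPS's actual protocol (which also handles replication and failures).

      class Partition:
          def __init__(self):
              self.store, self.staged = {}, {}

          def prepare(self, tid, writes):
              self.staged[tid] = writes         # vote yes; real systems log here
              return True

          def commit(self, tid):
              self.store.update(self.staged.pop(tid))

          def abort(self, tid):
              self.staged.pop(tid, None)

      def transact(parts, writes_by_part):
          # phase 1: every partition must vote yes; phase 2: commit or abort everywhere
          tid = object()
          ready = all(parts[p].prepare(tid, w) for p, w in writes_by_part.items())
          for p in writes_by_part:
              parts[p].commit(tid) if ready else parts[p].abort(tid)
          return ready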

  1. 42 CFR 493.53 - Notification requirements for laboratories issued a certificate for provider-performed microscopy...

    Science.gov (United States)

    2010-10-01

    ... certificate for provider-performed microscopy (PPM) procedures. 493.53 Section 493.53 Public Health CENTERS... CERTIFICATION LABORATORY REQUIREMENTS Registration Certificate, Certificate for Provider-performed Microscopy... certificate for provider-performed microscopy (PPM) procedures. Laboratories issued a certificate for PPM...

  2. Scalable Techniques for Formal Verification

    CERN Document Server

    Ray, Sandip

    2010-01-01

    This book presents state-of-the-art approaches to formal verification techniques to seamlessly integrate different formal verification methods within a single logical foundation. It should benefit researchers and practitioners looking to get a broad overview of the spectrum of formal verification techniques, as well as approaches to combining such techniques within a single framework. Coverage includes a range of case studies showing how such combination is fruitful in developing a scalable verification methodology for industrial designs. This book outlines both theoretical and practical issues.

  3. Developing Scalable Information Security Systems

    Directory of Open Access Journals (Sweden)

    Valery Konstantinovich Ablekov

    2013-06-01

    Existing physical security systems have a wide range of shortcomings, including high cost, a large number of vulnerabilities, and problems with modification and support. This paper addresses the problem of developing systems without these drawbacks. The paper presents the architecture of an information security system that operates over the TCP/IP network protocol, including the ability to connect different types of devices and to integrate with existing security systems. The main advantage is a significant increase in system reliability and in scalability, both vertical and horizontal, with minimal cost in both financial and time resources.

  4. Scalable storage for a DBMS using transparent distribution

    NARCIS (Netherlands)

    J.S. Karlsson; M.L. Kersten (Martin)

    1997-01-01

    Scalable Distributed Data Structures (SDDSs) provide a self-managing and self-organizing data storage of potentially unbounded size. This stands in contrast to common distribution schemas deployed in conventional distributed DBMS. SDDSs, however, have mostly been used in synthetic

  5. Cascaded column generation for scalable predictive demand side management

    NARCIS (Netherlands)

    Toersche, Hermen; Molderink, Albert; Hurink, Johann L.; Smit, Gerardus Johannes Maria

    2014-01-01

    We propose a nested Dantzig-Wolfe decomposition, combined with dynamic programming, for the distributed scheduling of a large heterogeneous fleet of residential appliances with nonlinear behavior. A cascaded column generation approach gives a scalable optimization strategy, provided that the problem

  6. Scalable electro-photonic integration concept based on polymer waveguides

    NARCIS (Netherlands)

    Bosman, E.; Steenberge, G. van; Boersma, A.; Wiegersma, S.; Harmsma, P.J.; Karppinen, M.; Korhonen, T.; Offrein, B.J.; Dangel, R.; Daly, A.; Ortsiefer, M.; Justice, J.; Corbett, B.; Dorrestein, S.; Duis, J.

    2016-01-01

    A novel method for fabricating a single mode optical interconnection platform is presented. The method comprises the miniaturized assembly of optoelectronic single dies, the scalable fabrication of polymer single mode waveguides and the coupling to glass fiber arrays providing the I/O's. The low

  7. Scalable multifunction RF system concepts for joint operations

    NARCIS (Netherlands)

    Otten, M.P.G.; Wit, J.J.M. de; Smits, F.M.A.; Rossum, W.L. van; Huizing, A.

    2010-01-01

    RF systems based on modular architectures have the potential of better re-use of technology, decreasing development time, and decreasing life cycle cost. Moreover, modular architectures provide scalability, allowing low cost upgrades and adaptability to different platforms. To achieve maximum

  8. Optimizing Nanoelectrode Arrays for Scalable Intracellular Electrophysiology.

    Science.gov (United States)

    Abbott, Jeffrey; Ye, Tianyang; Ham, Donhee; Park, Hongkun

    2018-03-20

    We clarify how the nanoelectrode attains intracellular access and translate this understanding into a circuit model for the nanobio interface, which we then use to lay out strategies for improving the interface. The intracellular interface of the nanoelectrode is currently inferior to that of the patch clamp electrode; reaching this benchmark will be an exciting challenge that involves optimization of electrode geometries, materials, chemical modifications, electroporation protocols, and recording/stimulation electronics, as we describe in the Account. Another important theme of this Account, beyond the optimization of the individual nanoelectrode-cell interface, is the scalability of the nanoscale electrodes. We discuss this theme using a recent development from our groups as an example, in which an array of ca. 1000 nanoelectrode pixels fabricated on a CMOS integrated circuit chip performs parallel intracellular recording from a few hundred cardiomyocytes, marking a new milestone in electrophysiology.

  9. Improving the performance of solar still by using nanofluids and providing vacuum

    International Nuclear Information System (INIS)

    Kabeel, A.E.; Omara, Z.M.; Essa, F.A.

    2014-01-01

    Highlights: • A modified solar still integrated with an external condenser and using nanofluids was studied. • Using cuprous oxide nanoparticles increased distillate productivity by 133.64%. • Using the aluminum oxide-water nanofluid increased distillate productivity of the modified still by 125.0%. - Abstract: Experimental modifications were made to the conventional solar still, considerably increasing distillate productivity. The effects of using different types of nanomaterials on the performance of the solar still were studied; the investigated solid nanoparticles were cuprous oxide and aluminum oxide. Performance was investigated at different weight-fraction concentrations of nanoparticles in the basin water, with and without providing vacuum. These additions and modifications greatly improve the evaporation and condensation rates, and hence the distillate yield is augmented. The study covered concentrations from 0.02% to 0.2% in steps of 0.02%. The maximum productivity was obtained with cuprous oxide nanoparticles at a concentration of 0.2% while operating the vacuum fan. Using cuprous oxide nanoparticles increased distillate productivity by 133.64% and 93.87% with and without the fan, respectively; using aluminum oxide nanoparticles enhanced the distillate by 125.0% and 88.97% with and without the fan, respectively, compared with the conventional still. The estimated cost of 1.0 L of distillate is approximately $0.035 and $0.045 with cuprous oxide (with and without the fan, respectively), $0.038 and $0.051 with aluminum oxide, and $0.048 for the conventional still.

  10. Providing Extrinsic Reward for Test Performance Undermines Long-Term Memory Acquisition

    Directory of Open Access Journals (Sweden)

    Christof Kuhbandner

    2016-02-01

    Based on numerous studies showing that testing studied material can improve long-term retention more than restudying the same material, it is often suggested that the number of tests in education should be increased to enhance knowledge acquisition. However, testing in real-life educational settings often entails a high degree of extrinsic motivation of learners due to the common practice of placing important consequences on the outcome of a test. Such an effect on the motivation of learners may undermine the beneficial effects of testing on long-term memory because it has been shown that extrinsic motivation can reduce the quality of learning. To examine this issue, participants learned foreign language vocabulary words, followed by an immediate test in which one third of the words were tested and one third restudied. To manipulate extrinsic motivation during immediate testing, participants received either monetary reward contingent on test performance or no reward. After one week, memory for all words was tested. In the immediate test, reward reduced correct recall and increased commission errors, indicating that reward reduced the number of items that can benefit from successful retrieval. The results in the delayed test revealed that reward additionally reduced the gain received from successful retrieval because memory for initially successfully retrieved words was lower in the reward condition. However, testing was still more effective than restudying under reward conditions because reward undermined long-term memory for concurrently restudied material as well. These findings indicate that providing performance-contingent reward in a test can undermine long-term knowledge acquisition.

  11. Providing Extrinsic Reward for Test Performance Undermines Long-Term Memory Acquisition.

    Science.gov (United States)

    Kuhbandner, Christof; Aslan, Alp; Emmerdinger, Kathrin; Murayama, Kou

    2016-01-01

    Based on numerous studies showing that testing studied material can improve long-term retention more than restudying the same material, it is often suggested that the number of tests in education should be increased to enhance knowledge acquisition. However, testing in real-life educational settings often entails a high degree of extrinsic motivation of learners due to the common practice of placing important consequences on the outcome of a test. Such an effect on the motivation of learners may undermine the beneficial effects of testing on long-term memory because it has been shown that extrinsic motivation can reduce the quality of learning. To examine this issue, participants learned foreign language vocabulary words, followed by an immediate test in which one-third of the words were tested and one-third restudied. To manipulate extrinsic motivation during immediate testing, participants received either monetary reward contingent on test performance or no reward. After 1 week, memory for all words was tested. In the immediate test, reward reduced correct recall and increased commission errors, indicating that reward reduced the number of items that can benefit from successful retrieval. The results in the delayed test revealed that reward additionally reduced the gain received from successful retrieval because memory for initially successfully retrieved words was lower in the reward condition. However, testing was still more effective than restudying under reward conditions because reward undermined long-term memory for concurrently restudied material as well. These findings indicate that providing performance-contingent reward in a test can undermine long-term knowledge acquisition.

  12. When to Stop CPR and When to Perform Rhythm Analysis: Potential Confusion Among ACLS Providers.

    Science.gov (United States)

    Giberson, Brandon; Uber, Amy; F Gaieski, David; Miller, Joseph B; Wira, Charles; Berg, Katherine; Giberson, Tyler; Cocchi, Michael N; S Abella, Benjamin; Donnino, Michael W

    2016-09-01

    Health care providers nationwide are routinely trained in Advanced Cardiac Life Support (ACLS), an American Heart Association program that teaches cardiac arrest management. Recent changes in the ACLS approach have de-emphasized routine pulse checks in an effort to promote uninterrupted chest compressions. We hypothesized that this new ACLS algorithm may lead to uncertainty regarding the appropriate action following detection of a pulse during a cardiac arrest. We conducted an observational study in which a Web-based survey was sent to ACLS-trained medical providers at 4 major urban tertiary care centers in the United States. The survey consisted of 5 multiple-choice, scenario-based ACLS questions, including our question of interest. Adult staff members with a valid ACLS certification were included. A total of 347 surveys were analyzed. The response rate was 28.1%. The majority (53.6%) of responders were between 18 and 32 years old, and 59.9% were female. The majority (54.2%) of responders incorrectly stated that they would continue CPR and possibly administer additional therapies when a team member detects a pulse immediately following defibrillation. Secondarily, only 51.9% of respondents correctly chose to perform a rhythm check following 2 minutes of CPR. The other 3 survey questions were correctly answered an average of 89.1% of the time. Confusion exists regarding whether or not CPR and cardiac medications should be continued in the presence of a pulse. Education may be warranted to emphasize avoiding compressions and medications when a palpable pulse is detected.

  13. A scalable variational inequality approach for flow through porous media models with pressure-dependent viscosity

    Science.gov (United States)

    Mapakshi, N. K.; Chang, J.; Nakshatrala, K. B.

    2018-04-01

    Mathematical models for flow through porous media typically enjoy so-called maximum principles, which place bounds on the pressure field. It is highly desirable to preserve these bounds on the pressure field in predictive numerical simulations; that is, one needs to satisfy discrete maximum principles (DMP). Unfortunately, many of the existing formulations for flow through porous media models do not satisfy DMP. This paper presents a robust, scalable numerical formulation based on variational inequalities (VI) to model non-linear flows through heterogeneous, anisotropic porous media without violating DMP. VI is an optimization technique that places bounds on the numerical solutions of partial differential equations. To crystallize the ideas, a modification of the Darcy equations that accounts for pressure-dependent viscosity will be discretized using the lowest-order Raviart-Thomas (RT0) and variational multi-scale (VMS) finite element formulations. It will be shown that these formulations violate DMP and that the violations increase with increasing anisotropy. It will also be shown that the proposed VI-based formulation provides a viable route to enforce DMP. Moreover, the proposed formulation is scalable and can work with any numerical discretization and weak form. A series of numerical benchmark problems are solved to demonstrate the effects of heterogeneity, anisotropy, and non-linearity on DMP violations under the two chosen formulations (RT0 and VMS), and the effect of non-linearity on solver convergence for the proposed VI-based formulation. Parallel scalability on modern computational platforms is illustrated through strong-scaling studies, which demonstrate the efficiency of the proposed formulation in a parallel setting. Algorithmic scalability as the problem size is scaled up is demonstrated through novel static-scaling studies. The performed static-scaling studies can serve as a guide for users to be able to select
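
    The abstract does not state the exact model; a common form of pressure-dependent viscosity (assumed here) is the Barus law, and the DMP can be enforced by restricting the discrete pressure to a bound-constrained set, which yields a variational inequality of the following shape:

      \[
      \mu(p) = \mu_{0}\,e^{\beta p}, \qquad
      \mathbf{v} = -\frac{\mathbf{K}(\mathbf{x})}{\mu(p)}\,\nabla p, \qquad
      \nabla \cdot \mathbf{v} = f,
      \]

    find \(p \in \mathcal{K} = \{\,q : p_{\min} \le q \le p_{\max}\,\}\) such that
    \(\langle \mathcal{R}(p),\, q - p \rangle \ge 0\) for all \(q \in \mathcal{K}\),
    where \(\mathcal{R}\) denotes the discrete residual of the chosen weak form.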

  14. Impact of packet losses in scalable 3D holoscopic video coding

    Science.gov (United States)

    Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.

    2014-05-01

    Holoscopic imaging has become a prospective glasses-free 3D technology for providing more natural 3D viewing experiences to the end user. Additionally, holoscopic systems allow new post-production degrees of freedom, such as controlling the plane of focus or the viewing angle presented to the user. However, to successfully introduce this technology into the consumer market, a display scalable coding approach is essential to achieve backward compatibility with legacy 2D and 3D displays. Moreover, to effectively transmit 3D holoscopic content over error-prone networks, e.g., wireless networks or the Internet, error resilience techniques are required to mitigate the impact of data impairments on the user's perceived quality. Therefore, it is essential to deeply understand the impact of packet losses on decoded video quality for the specific case of 3D holoscopic content, notably when a scalable approach is used. In this context, this paper studies the impact of packet losses when using a previously proposed three-layer display scalable 3D holoscopic video coding architecture, where each layer represents a different level of display scalability (i.e., L0 - 2D, L1 - stereo or multiview, and L2 - full 3D holoscopic). For this, a simple error concealment algorithm is used, which makes use of inter-layer redundancy between multiview and 3D holoscopic content and the inherent correlation of 3D holoscopic content to estimate lost data. Furthermore, a study of the influence of the 2D view generation parameters used in lower layers on the performance of the error concealment algorithm is also presented.

  15. A graph algebra for scalable visual analytics.

    Science.gov (United States)

    Shaverdian, Anna A; Zhou, Hao; Michailidis, George; Jagadish, Hosagrahar V

    2012-01-01

    Visual analytics (VA), which combines analytical techniques with advanced visualization features, is fast becoming a standard tool for extracting information from graph data. Researchers have developed many tools for this purpose, suggesting a need for formal methods to guide these tools' creation. Increased data demands on computing require redesigning VA tools to consider performance and reliability in the context of analysis of exascale datasets. Furthermore, visual analysts need a way to document their analyses for reuse and results justification. A VA graph framework encapsulated in a graph algebra helps address these needs. Its atomic operators include selection and aggregation. The framework employs a visual operator and supports dynamic attributes of data to enable scalable visual exploration of data.
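
    A minimal sketch of the two atomic operators in Python (the Graph class and operator names are illustrative, not the authors' actual API):

    ```python
    # Toy graph-algebra operators: selection (filter nodes by predicate) and
    # aggregation (merge nodes sharing a key into super-nodes).
    from collections import defaultdict

    class Graph:
        def __init__(self, nodes, edges):
            self.nodes = dict(nodes)   # node_id -> attribute dict
            self.edges = set(edges)    # (src, dst) pairs

    def select(g, predicate):
        """Keep only nodes satisfying the predicate, plus induced edges."""
        kept = {n: a for n, a in g.nodes.items() if predicate(a)}
        return Graph(kept, {(u, v) for u, v in g.edges if u in kept and v in kept})

    def aggregate(g, key):
        """Merge nodes sharing key(attrs) into super-nodes; count members."""
        groups, members = {}, defaultdict(int)
        for n, a in g.nodes.items():
            groups[n] = key(a)
            members[key(a)] += 1
        nodes = {k: {"count": c} for k, c in members.items()}
        edges = {(groups[u], groups[v]) for u, v in g.edges if groups[u] != groups[v]}
        return Graph(nodes, edges)

    g = Graph({1: {"kind": "person"}, 2: {"kind": "person"},
               3: {"kind": "page"}, 4: {"kind": "page"}},
              [(1, 3), (2, 3), (2, 4)])
    summary = aggregate(select(g, lambda a: a["kind"] in ("person", "page")),
                        key=lambda a: a["kind"])
    print(summary.nodes, summary.edges)
    # {'person': {'count': 2}, 'page': {'count': 2}} {('person', 'page')}
    ```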

  16. Parallel scalability of Hartree-Fock calculations

    Science.gov (United States)

    Chow, Edmond; Liu, Xing; Smelyanskiy, Mikhail; Hammond, Jeff R.

    2015-03-01

    Quantum chemistry is increasingly performed using large cluster computers consisting of multiple interconnected nodes. For a fixed molecular problem, the efficiency of a calculation usually decreases as more nodes are used, due to the cost of communication between the nodes. This paper empirically investigates the parallel scalability of Hartree-Fock calculations. The construction of the Fock matrix and the density matrix calculation are analyzed separately. For the former, we use a parallelization of Fock matrix construction based on a static partitioning of work followed by a work stealing phase. For the latter, we use density matrix purification from the linear scaling methods literature, but without using sparsity. When using large numbers of nodes for moderately sized problems, density matrix computations are network-bandwidth bound, making purification methods potentially faster than eigendecomposition methods.
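
    As a concrete illustration of the density matrix step, here is McWeeny purification, a standard purification scheme whose kernel is repeated dense matrix multiplication (the paper's exact variant may differ); at scale this kernel is exactly what becomes network-bandwidth bound:

    ```python
    # Illustrative McWeeny purification on dense matrices. Initial guess is
    # the canonical (Palser-Manolopoulos) linear map of F into [0, 1].
    import numpy as np

    def mcweeny_purification(F, n_occ, iters=100, tol=1e-10):
        """Return the density matrix for Fock matrix F with n_occ occupied orbitals."""
        e_min, e_max = np.linalg.eigvalsh(F)[[0, -1]]  # spectral bounds
        n = F.shape[0]
        mu = np.trace(F) / n
        lam = min(n_occ / (e_max - mu), (n - n_occ) / (mu - e_min)) / n
        D = lam * (mu * np.eye(n) - F) + (n_occ / n) * np.eye(n)
        for _ in range(iters):
            D2 = D @ D
            if np.linalg.norm(D2 - D) < tol:   # idempotent => converged
                break
            D = 3 * D2 - 2 * (D2 @ D)          # McWeeny step
        return D

    F = np.diag([-2.0, -1.0, 0.5, 1.5])        # toy Fock matrix
    print(np.round(mcweeny_purification(F, n_occ=2), 6))  # diag(1, 1, 0, 0)
    ```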

  17. iSIGHT-FD scalability test report.

    Energy Technology Data Exchange (ETDEWEB)

    Clay, Robert L.; Shneider, Max S.

    2008-07-01

    The engineering analysis community at Sandia National Laboratories uses a number of internal and commercial software codes and tools, including mesh generators, preprocessors, mesh manipulators, simulation codes, post-processors, and visualization packages. We define an analysis workflow as the execution of an ordered, logical sequence of these tools. Various forms of analysis (and in particular, methodologies that use multiple function evaluations or samples) involve executing parameterized variations of these workflows. As part of the DART project, we are evaluating various commercial workflow management systems, including iSIGHT-FD from Engineous. This report documents the results of a scalability test that was driven by DAKOTA and conducted on a parallel computer (Thunderbird). The purpose of this experiment was to examine the suitability and performance of iSIGHT-FD for large-scale, parameterized analysis workflows. As the results indicate, we found iSIGHT-FD to be suitable for this type of application.

  18. Scalable graphene aptasensors for drug quantification

    Science.gov (United States)

    Vishnubhotla, Ramya; Ping, Jinglei; Gao, Zhaoli; Lee, Abigail; Saouaf, Olivia; Vrudhula, Amey; Johnson, A. T. Charlie

    2017-11-01

    Simpler and more rapid approaches for therapeutic drug-level monitoring are highly desirable to enable use at the point-of-care. We have developed an all-electronic approach for detection of the HIV drug tenofovir based on scalable fabrication of arrays of graphene field-effect transistors (GFETs) functionalized with a commercially available DNA aptamer. The shift in the Dirac voltage of the GFETs varied systematically with the concentration of tenofovir in deionized water, with a detection limit less than 1 ng/mL. Tests against a set of negative controls confirmed the specificity of the sensor response. This approach offers the potential for further development into a rapid and convenient point-of-care tool with clinically relevant performance.

  19. Constraints on muscle performance provide a novel explanation for the scaling of posture in terrestrial animals.

    Science.gov (United States)

    Usherwood, James R

    2013-08-23

    Larger terrestrial animals tend to support their weight with more upright limbs. This makes structural sense, reducing the loading on muscles and bones, which is disproportionately challenging in larger animals. However, it does not account for why smaller animals are more crouched; instead, they could enjoy relatively more slender supporting structures or higher safety factors. Here, an alternative account for the scaling of posture is proposed, with close parallels to the scaling of jump performance. If the costs of locomotion are related to the volume of active muscle, and the active muscle volume required depends on both the work and the power demanded during the push-off phase of each step (not just the net positive work), then the disproportionate scaling of the requirements for work and push-off power is revealing. Larger animals require relatively greater active muscle volumes for dynamically similar gaits (e.g., top walking speed), which may present an ultimate constraint to the size of running animals. Further, just as for jumping, animals with shorter legs and briefer push-off periods are challenged to provide the power (not the work) required for push-off. This can be ameliorated by having relatively long push-off periods, potentially accounting for the crouched stance of small animals.
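
    In symbols (our notation, not the paper's): if locomotor cost scales with the active muscle volume needed to meet both the work and the power demanded during a push-off of duration tau, then

    ```latex
    % w_max and p_max are the work and power available per unit muscle volume.
    \[
    V_{\text{active}} \;\propto\; \max\!\left(\frac{W}{w_{\max}},\; \frac{P}{p_{\max}}\right),
    \qquad P \approx \frac{W}{\tau}.
    \]
    % Animals with brief push-offs (small tau) become power-limited; lengthening
    % tau, for instance via a crouched posture, reduces the required volume.
    ```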

  20. DJFS: Providing Highly Reliable and High‐Performance File System with Small‐Sized NVRAM

    Directory of Open Access Journals (Sweden)

    Junghoon Kim

    2017-11-01

    Full Text Available File systems and applications try to implement their own update protocols to guarantee data consistency, which is one of the most crucial aspects of computing systems. However, we found that storage devices are substantially under-utilized when preserving data consistency because they generate massive storage write traffic with many disk cache flush operations and force-unit-access (FUA) commands. In this paper, we present DJFS (Delta-Journaling File System), which provides both a high level of performance and data consistency for different applications. We made three technical contributions to achieve our goal. First, to remove all storage accesses with disk cache flush operations and FUA commands, DJFS uses a small-sized NVRAM for the file system journal. Second, to reduce the access latency and space requirements of NVRAM, DJFS journals only the compressed differences of the modified blocks. Finally, to relieve explicit checkpointing overhead, DJFS aggressively reflects checkpoint transactions to the file system area at the granularity of a specified region. Our evaluation on the TPC-C SQLite benchmark shows that, using our novel optimization schemes, DJFS outperforms Ext4 by up to 64.2 times with only 128 MB of NVRAM.
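
    In caricature, the delta-journaling idea is to journal the compressed difference between the old and new versions of a modified block rather than the whole block. The sketch below is a toy illustration of that idea, not DJFS internals:

    ```python
    # Journal the compressed XOR delta of a modified block when it is smaller
    # than the full block; fall back to a full-block record otherwise.
    import zlib

    BLOCK_SIZE = 4096

    def make_journal_record(old_block: bytes, new_block: bytes) -> tuple[str, bytes]:
        xor_delta = bytes(a ^ b for a, b in zip(old_block, new_block))
        compressed = zlib.compress(xor_delta)      # mostly zeros -> tiny
        if len(compressed) < BLOCK_SIZE:
            return ("delta", compressed)
        return ("full", new_block)

    def replay(old_block: bytes, record: tuple[str, bytes]) -> bytes:
        kind, payload = record
        if kind == "full":
            return payload
        delta = zlib.decompress(payload)
        return bytes(a ^ b for a, b in zip(old_block, delta))

    old = bytes(BLOCK_SIZE)
    new = bytearray(old); new[100:108] = b"newdata!"   # small in-place update
    rec = make_journal_record(old, bytes(new))
    print(rec[0], len(rec[1]), replay(old, rec) == bytes(new))
    # 'delta', a few tens of bytes instead of 4096, True
    ```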

  1. Scalable group level probabilistic sparse factor analysis

    DEFF Research Database (Denmark)

    Hinrich, Jesper Løve; Nielsen, Søren Føns Vind; Riis, Nicolai Andre Brogaard

    2017-01-01

    Many data-driven approaches exist to extract neural representations of functional magnetic resonance imaging (fMRI) data, but most of them lack a proper probabilistic formulation. We propose a scalable group level probabilistic sparse factor analysis (psFA) allowing spatially sparse maps, component pruning using automatic relevance determination (ARD) and subject specific heteroscedastic spatial noise modeling. For task-based and resting state fMRI, we show that the sparsity constraint gives rise to components similar to those obtained by group independent component analysis. The noise modeling shows that noise is reduced in areas typically associated with activation by the experimental design. The psFA model identifies sparse components and the probabilistic setting provides a natural way to handle parameter uncertainties. The variational Bayesian framework easily extends to more complex...

  2. Scalable on-chip quantum state tomography

    Science.gov (United States)

    Titchener, James G.; Gräfe, Markus; Heilmann, René; Solntsev, Alexander S.; Szameit, Alexander; Sukhorukov, Andrey A.

    2018-03-01

    Quantum information systems are on a path to vastly exceed the complexity of any classical device. The number of entangled qubits in quantum devices is rapidly increasing, and the information required to fully describe these systems scales exponentially with qubit number. This scaling is the key benefit of quantum systems; however, it also presents a severe challenge. Characterizing such systems typically requires an exponentially long sequence of different measurements, becoming highly resource demanding for large numbers of qubits. Here we propose and demonstrate a novel and scalable method for characterizing quantum systems based on expanding a multi-photon state to larger dimensionality. We establish that the complexity of this new measurement technique scales only linearly with the number of qubits, while providing a tomographically complete set of data without a need for reconfigurability. We experimentally demonstrate an integrated photonic chip capable of measuring two- and three-photon quantum states with a statistical reconstruction fidelity of 99.71%.

  3. The Concept of Business Model Scalability

    DEFF Research Database (Denmark)

    Nielsen, Christian; Lund, Morten

    2015-01-01

    The power of business models lies in their ability to visualize and clarify how firms may configure their value creation processes. Among the key aspects of business model thinking are a focus on what the customer values, how this value is best delivered to the customer and how strategic partners are leveraged in this value creation, delivery and realization exercise. Central to the mainstream understanding of business models is the value proposition towards the customer and the hypothesis generated is that if the firm delivers to the customer what he/she requires, then there is a good foundation for a long-term profitable business. However, the message conveyed in this article is that while providing a good value proposition may help the firm ‘get by’, the really successful businesses of today are those able to reach the sweet-spot of business model scalability. This article introduces and discusses...

  4. Developing a scalable modeling architecture for studying survivability technologies

    Science.gov (United States)

    Mohammad, Syed; Bounker, Paul; Mason, James; Brister, Jason; Shady, Dan; Tucker, David

    2006-05-01

    To facilitate interoperability of models in a scalable environment, and provide a relevant virtual environment in which Survivability technologies can be evaluated, the US Army Research Development and Engineering Command (RDECOM) Modeling Architecture for Technology Research and Experimentation (MATREX) Science and Technology Objective (STO) program has initiated the Survivability Thread which will seek to address some of the many technical and programmatic challenges associated with the effort. In coordination with different Thread customers, such as the Survivability branches of various Army labs, a collaborative group has been formed to define the requirements for the simulation environment that would in turn provide them a value-added tool for assessing models and gauge system-level performance relevant to Future Combat Systems (FCS) and the Survivability requirements of other burgeoning programs. An initial set of customer requirements has been generated in coordination with the RDECOM Survivability IPT lead, through the Survivability Technology Area at RDECOM Tank-automotive Research Development and Engineering Center (TARDEC, Warren, MI). The results of this project are aimed at a culminating experiment and demonstration scheduled for September, 2006, which will include a multitude of components from within RDECOM and provide the framework for future experiments to support Survivability research. This paper details the components with which the MATREX Survivability Thread was created and executed, and provides insight into the capabilities currently demanded by the Survivability faculty within RDECOM.

  5. Compressed multiwall carbon nanotube composite electrodes provide enhanced electroanalytical performance for determination of serotonin

    International Nuclear Information System (INIS)

    Fagan-Murphy, Aidan; Patel, Bhavik Anil

    2014-01-01

    Serotonin (5-HT) is an important neurochemical that is present in high concentrations within the intestinal tract. Carbon fibre and boron-doped diamond based electrodes have been widely used to date for monitoring 5-HT, however these electrodes are prone to fouling and are difficult to fabricate in certain sizes and geometries. Carbon nanotubes have shown potential as a suitable material for electroanalytical monitoring of 5-HT but can be difficult to manipulate into a suitable form. The fabrication of composite electrodes is an approach that can shape conductive materials into practical electrode geometries suitable for biological environments. This work investigated how compression of multiwall carbon nanotubes (MWCNTs) epoxy composite electrodes can influence their electroanalytical performance. Highly compressed composite electrodes displayed significant improvements in their electrochemical properties along with decreased internal and charge transfer resistance, reproducible behaviour and improved batch to batch variability when compared to non-compressed composite electrodes. Compression of MWCNT epoxy composite electrodes resulted in an increased current response for potassium ferricyanide, ruthenium hexaammine and dopamine, by preferentially removing the epoxy during compression and increasing the electrochemical active surface of the final electrode. For the detection of serotonin, compressed electrodes have a lower limit of detection and improved sensitivity compared to non-compressed electrodes. Fouling studies were carried out in 10 μM serotonin where the MWCNT compressed electrodes were shown to be less prone to fouling than non-compressed electrodes. This work indicates that the compression of MWCNT carbon-epoxy can result in a highly conductive material that can be moulded to various geometries, thus providing scope for electroanalytical measurements and the production of a wide range of analytical devices for a variety of systems

  6. TDCCREC: AN EFFICIENT AND SCALABLE WEB-BASED RECOMMENDATION SYSTEM

    Directory of Open Access Journals (Sweden)

    K.Latha

    2010-10-01

    Full Text Available Web users face a complex information space in which the volume of available information is huge. Recommender systems address this by recommending web pages related to the current page, providing the user with further customized reading material. To enhance the performance of recommender systems, we propose an elegant web-based recommendation system, Truth Discovery based Content and Collaborative RECommender (TDCCREC), which is capable of addressing scalability. Existing approaches such as learning automata deal with the usage and navigational patterns of users; weighted association rules, on the other hand, recommend web pages by assigning weights to each page in all transactions. Both have their own disadvantages. Websites recommended by search engines carry no guarantee of information correctness and often deliver conflicting information. To solve these problems, content-based filtering and collaborative filtering techniques are introduced for recommending web pages to the active user, along with the trustworthiness of the website and the confidence of facts, which outperforms the existing methods. Our results show how the proposed recommender system performs better in predicting the next request of web users.
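
    A minimal sketch of the blending step: a content-based score, a collaborative score, and a source-trustworthiness weight combined into one recommendation score. The weights and the trust model are illustrative stand-ins for TDCCREC's truth-discovery component, not the authors' formulas:

    ```python
    # Hybrid recommendation score = weighted sum of content similarity,
    # co-visitation evidence, and website trustworthiness.
    import math

    def cosine(u, v):
        dot = sum(u[k] * v[k] for k in u.keys() & v.keys())
        nu = math.sqrt(sum(x * x for x in u.values()))
        nv = math.sqrt(sum(x * x for x in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    def score(page, current_terms, covisits, trust, w=(0.4, 0.4, 0.2)):
        content = cosine(page["terms"], current_terms)          # content-based
        collab = covisits.get(page["url"], 0) / (1 + max(covisits.values()))
        return w[0] * content + w[1] * collab + w[2] * trust.get(page["url"], 0.5)

    pages = [{"url": "a.html", "terms": {"news": 2, "sports": 1}},
             {"url": "b.html", "terms": {"cooking": 3, "recipes": 2}}]
    covisits = {"a.html": 7, "b.html": 1}
    trust = {"a.html": 0.9, "b.html": 0.3}
    current = {"news": 1, "sports": 2}
    print(max(pages, key=lambda p: score(p, current, covisits, trust))["url"])
    # a.html
    ```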

  8. The performance implications of outsourcing customer support to service providers in emerging versus established economies

    NARCIS (Netherlands)

    Raassens, N.; Wuyts, S.; Geyskens, I.

    2014-01-01

    Recent discussions in the business press query the contribution of customer-support outsourcing to firm performance. Despite the controversy surrounding its performance implications, customer-support outsourcing is still on the rise, especially to emerging markets. Against this backdrop, we study

  9. Kinesthetic Imagery Provides Additive Benefits to Internal Visual Imagery on Slalom Task Performance.

    Science.gov (United States)

    Callow, Nichola; Jiang, Dan; Roberts, Ross; Edwards, Martin G

    2017-02-01

    Recent brain imaging research demonstrates that the use of internal visual imagery (IVI) or kinesthetic imagery (KIN) activates common and distinct brain areas. In this paper, we argue that combining the imagery modalities (IVI and KIN) will lead to a greater cognitive representation (with more brain areas activated), and that this will produce better slalom-based motor performance than using IVI alone. To examine this assertion, we randomly allocated 56 participants to one of three groups: IVI, IVI and KIN, or a math control group. Participants performed a slalom-based driving task in a driving simulator, with average lap time used as a measure of performance. Results revealed that the IVI and KIN group achieved significantly quicker lap times than the IVI and control groups. The discussion includes a theoretical advancement on why the combination of imagery modalities might facilitate performance, with links made to the cognitive neuroscience literature and applied practice.

  10. Requirements for Scalable Access Control and Security Management Architectures

    National Research Council Canada - National Science Library

    Keromytis, Angelos D; Smith, Jonathan M

    2005-01-01

    Maximizing local autonomy has led to a scalable Internet. Scalability and the capacity for distributed control have unfortunately not extended well to resource access control policies and mechanisms...

  11. GASPRNG: GPU accelerated scalable parallel random number generator library

    Science.gov (United States)

    Gao, Shuang; Peterson, Gregory D.

    2013-04-01

    Graphics processors represent a promising technology for accelerating computational science applications. Many computational science applications require fast and scalable random number generation with good statistical properties, so they use the Scalable Parallel Random Number Generators library (SPRNG). We present the GPU Accelerated SPRNG library (GASPRNG) to accelerate SPRNG in GPU-based high performance computing systems. GASPRNG includes code for a host CPU and CUDA code for execution on NVIDIA graphics processing units (GPUs) along with a programming interface to support various usage models for pseudorandom numbers and computational science applications executing on the CPU, GPU, or both. This paper describes the implementation approach used to produce high performance and also describes how to use the programming interface. The programming interface allows a user to use GASPRNG in the same way as SPRNG on traditional serial or parallel computers, as well as to develop tightly coupled programs executing primarily on the GPU. We also describe how to install GASPRNG and use it. To help illustrate linking with GASPRNG, various demonstration codes are included for the different usage models. GASPRNG on a single GPU shows up to 280x speedup over SPRNG on a single CPU core and is able to scale for larger systems in the same manner as SPRNG. Because GASPRNG generates identical streams of pseudorandom numbers as SPRNG, users can be confident about the quality of GASPRNG for scalable computational science applications.
    Catalogue identifier: AEOI_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOI_v1_0.html
    Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland
    Licensing provisions: UTK license.
    No. of lines in distributed program, including test data, etc.: 167900
    No. of bytes in distributed program, including test data, etc.: 1422058
    Distribution format: tar.gz
    Programming language: C and CUDA.
    Computer: Any PC or
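
    GASPRNG itself is C/CUDA; as a rough stand-in, this NumPy sketch shows the core requirement of scalable parallel RNG, namely one statistically independent, reproducible stream per task regardless of task count (this is NumPy's mechanism, not GASPRNG's API):

    ```python
    # Spawn independent, reproducible child streams from one root seed.
    import numpy as np

    def make_streams(root_seed: int, n_tasks: int):
        """One independent generator per task; reruns reproduce the same draws."""
        children = np.random.SeedSequence(root_seed).spawn(n_tasks)
        return [np.random.default_rng(child) for child in children]

    for task_id, rng in enumerate(make_streams(root_seed=42, n_tasks=4)):
        print(task_id, rng.random(3))   # each task draws from its own stream
    ```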

  12. Performance of the measures of processes of care for adults and service providers in rehabilitation settings.

    Science.gov (United States)

    Bamm, Elena L; Rosenbaum, Peter; Wilkins, Seanne; Stratford, Paul

    2015-01-01

    In recent years, client-centered care has been embraced as a new philosophy of care by many organizations around the world. Clinicians and researchers have identified the need for valid and reliable outcome measures that are easy to use to evaluate the success of implementation of new concepts. The current study was developed to complete adaptation and field testing of the companion patient-reported measure of processes of care for adults (MPOC-A) and the service-provider self-reflection measure of processes of care for service providers working with adult clients (MPOC-SP(A)). Design: A validation study. Setting: In-patient rehabilitation facilities. Main measures: MPOC-A and MPOC-SP(A). Three hundred and eighty-four health care providers, 61 patients, and 16 family members completed the questionnaires. Good to excellent internal consistency (0.71-0.88 for health care professionals, 0.82-0.90 for patients, and 0.87-0.94 for family members), as well as moderate to good correlations between domains (0.40-0.78 for health care professionals and 0.52-0.84 for clients), supported the internal reliability of the tools. Exploratory factor analysis of the MPOC-SP(A) responses supported the multidimensionality of the questionnaire. MPOC-A and MPOC-SP(A) are valid and reliable tools to assess patient and service-provider accounts, respectively, of the extent to which they experience, or are able to provide, client-centered service. Research should now be undertaken to explore in more detail the relationships between client experience and provider reports of their own behavior.

  13. Evaluation of 3D printed anatomically scalable transfemoral prosthetic knee.

    Science.gov (United States)

    Ramakrishnan, Tyagi; Schlafly, Millicent; Reed, Kyle B

    2017-07-01

    This case study compares a transfemoral amputee's gait while using the existing Ossur Total Knee 2000 and our novel 3D printed anatomically scalable transfemoral prosthetic knee. The anatomically scalable transfemoral prosthetic knee is 3D printed out of a carbon-fiber and nylon composite and has a gear-mesh coupling with a hard-stop weight-actuated locking mechanism aided by a cross-linked four-bar spring mechanism. This design can be scaled using anatomical dimensions of a human femur and tibia to provide a unique fit for each user. The transfemoral amputee tested is high functioning and walked on the Computer Assisted Rehabilitation Environment (CAREN) at a self-selected pace. The motion capture and force data collected showed distinct differences in the gait dynamics. The data were used to compute the Combined Gait Asymmetry Metric (CGAM); the scores revealed that gait on the Ossur Total Knee was overall more asymmetric than on the anatomically scalable transfemoral prosthetic knee. The anatomically scalable transfemoral prosthetic knee had higher peak knee flexion, which caused a large step time asymmetry. This made walking on the anatomically scalable transfemoral prosthetic knee more strenuous due to the compensatory movements needed to adapt to the different dynamics. This can be overcome by tuning the cross-linked spring mechanism to better emulate the dynamics of the subject. The subject stated that the knee would be good for daily use and has the potential to be adapted as a running knee.

  14. Towards Reliable, Scalable, and Energy Efficient Cognitive Radio Systems

    KAUST Repository

    Sboui, Lokman

    2017-11-01

    The cognitive radio (CR) concept is expected to be adopted along with many technologies to meet the requirements of the next generation of wireless and mobile systems, the 5G. Consequently, it is important to determine the performance of the CR systems with respect to these requirements. In this thesis, after briefly describing the 5G requirements, we present three main directions in which we aim to enhance the CR performance. The first direction is the reliability. We study the achievable rate of a multiple-input multiple-output (MIMO) relay-assisted CR under two scenarios; an unmanned aerial vehicle (UAV) one-way relaying (OWR) and a fixed two-way relaying (TWR). We propose special linear precoding schemes that enable the secondary user (SU) to take advantage of the primary-free channel eigenmodes. We study the SU rate sensitivity to the relay power, the relay gain, the UAV altitude, the number of antennas and the line of sight availability. The second direction is the scalability. We first study a multiple access channel (MAC) with multiple SUs scenario. We propose a particular linear precoding and SUs selection scheme maximizing their sum-rate. We show that the proposed scheme provides a significant sum-rate improvement as the number of SUs increases. Secondly, we expand our scalability study to cognitive cellular networks. We propose a low-complexity algorithm for base station activation/deactivation and dynamic spectrum management maximizing the profits of primary and secondary networks subject to green constraints. We show that our proposed algorithms achieve performance close to those obtained with the exhaustive search method. The third direction is the energy efficiency (EE). We present a novel power allocation scheme based on maximizing the EE of both single-input and single-output (SISO) and MIMO systems. We solve a non-convex problem and derive explicit expressions of the corresponding optimal power. When the instantaneous channel is not available, we
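
    As an illustration of the EE objective (generic SISO notation with circuit power and amplifier inefficiency; a textbook form, not necessarily the thesis's exact formulation):

    ```latex
    % B: bandwidth, h: channel gain, sigma^2: noise power,
    % p_c: circuit power, kappa: amplifier inefficiency.
    \[
    \mathrm{EE}(p) \;=\;
    \frac{B \log_2\!\left(1 + \frac{p\,|h|^2}{\sigma^2}\right)}{p_c + \kappa\, p},
    \qquad
    p^{\star} \;=\; \arg\max_{0 \le p \le p_{\max}} \mathrm{EE}(p).
    \]
    % The rate/power ratio is non-convex in p; fractional programming
    % (e.g. Dinkelbach's algorithm) is the standard route to the optimum.
    ```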

  15. Vis-A-Plan /visualize a plan/ management technique provides performance-time scale

    Science.gov (United States)

    Ranck, N. H.

    1967-01-01

    Vis-A-Plan is a bar-charting technique for representing and evaluating project activities on a performance-time basis. This rectilinear method presents the logic diagram of a project as a series of horizontal time bars. It may be used as a supplement to PERT or independently.

  16. Nano-islands Based Charge Trapping Memory: A Scalability Study

    KAUST Repository

    Elatab, Nazek; Saadat, Irfan; Saraswat, Krishna; Nayfeh, Ammar

    2017-01-01

    Zinc-oxide (ZnO) and zirconia (ZrO2) metal oxides have been studied extensively in the past few decades with several potential applications including memory devices. In this work, a scalability study, based on the ITRS roadmap, is conducted on memory devices with ZnO and ZrO2 nano-islands charge trapping layer. Both nano-islands are deposited using atomic layer deposition (ALD), however, the different sizes, distribution and properties of the materials result in different memory performance. The results show that at the 32-nm node charge trapping memory with 127 ZrO2 nano-islands can provide a 9.4 V memory window. However, with ZnO only 31 nano-islands can provide a window of 2.5 V. The results indicate that ZrO2 nano-islands are more promising than ZnO in scaled down devices due to their higher density, higher-k, and absence of quantum confinement effects.

  18. A Scalable proxy cache for Grid Data Access

    International Nuclear Information System (INIS)

    Cristian Cirstea, Traian; Just Keijser, Jan; Arthur Koeroo, Oscar; Starink, Ronald; Alan Templon, Jeffrey

    2012-01-01

    We describe a prototype grid proxy cache system developed at Nikhef, motivated by a desire to construct the first building block of a future https-based Content Delivery Network for grid infrastructures. Two goals drove the project: firstly to provide a “native view” of the grid for desktop-type users, and secondly to improve performance for physics-analysis type use cases, where multiple passes are made over the same set of data (residing on the grid). We further constrained the design by requiring that the system should be made of standard components wherever possible. The prototype that emerged from this exercise is a horizontally-scalable, cooperating system of web server / cache nodes, fronted by a customized webDAV server. The webDAV server is custom only in the sense that it supports http redirects (providing horizontal scaling) and that the authentication module has, as back end, a proxy delegation chain that can be used by the cache nodes to retrieve files from the grid. The prototype was deployed at Nikhef and tested at a scale of several terabytes of data and approximately one hundred fast cores of computing. Both small and large files were tested, in a number of scenarios, and with various numbers of cache nodes, in order to understand the scaling properties of the system. For properly-dimensioned cache-node hardware, the system showed speedup of several integer factors for the analysis-type use cases. These results and others are presented and discussed.
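
    The horizontal-scaling trick reduces to the front-end never serving file bytes itself: it redirects each request to a cache node chosen deterministically from the file path, so repeated requests for the same file hit the same node's cache. A toy sketch (node URLs invented for illustration):

    ```python
    # Deterministic path-to-node mapping behind an HTTP 302 redirect.
    import hashlib

    CACHE_NODES = ["http://cache0.example.org", "http://cache1.example.org",
                   "http://cache2.example.org"]

    def redirect_target(path: str) -> str:
        """Same path always lands on the same cache node (good for hit rate)."""
        h = int.from_bytes(hashlib.sha256(path.encode()).digest()[:8], "big")
        return CACHE_NODES[h % len(CACHE_NODES)] + path

    print(redirect_target("/grid/data/run1234/events.root"))
    ```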

  19. ALADDIN - enhancing applicability and scalability

    International Nuclear Information System (INIS)

    Roverso, Davide

    2001-02-01

    The ALADDIN project aims at the study and development of flexible, accurate, and reliable techniques and principles for computerised event classification and fault diagnosis for complex machinery and industrial processes. The main focus of the project is on advanced numerical techniques, such as wavelets, and empirical modelling with neural networks. This document reports on recent important advancements, which significantly widen the practical applicability of the developed principles, both in terms of flexibility of use and in terms of scalability to large problem domains. In particular, two novel techniques are described here. The first, which we call Wavelet On-Line Pre-processing (WOLP), is aimed at extracting, on-line, relevant dynamic features from the process data streams. This technique gives the system greater flexibility in detecting and processing transients across a range of different time scales. The second technique, which we call Autonomous Recursive Task Decomposition (ARTD), is aimed at tackling the problem of constructing a classifier able to discriminate among a large number of different event/fault classes, which is often the case when the application domain is a complex industrial process. ARTD also allows for incremental application development (i.e. the incremental addition of new classes to an existing classifier, without the need to retrain the entire system) and for simplified application maintenance. The description of these novel techniques is complemented by reports of quantitative experiments that show in practice the extent of these improvements. (Author)

  20. Fast and scalable inequality joins

    KAUST Repository

    Khayyat, Zuhair

    2016-09-07

    Inequality joins, which join relations on inequality conditions, are used in various applications. Optimizing joins has been the subject of intensive research, ranging from efficient join algorithms such as sort-merge join, to the use of efficient indices such as the B+-tree, R-tree and Bitmap. However, inequality joins have received little attention and queries containing such joins are notably very slow. In this paper, we introduce fast inequality join algorithms based on sorted arrays and space-efficient bit-arrays. We further introduce a simple method to estimate the selectivity of inequality joins, which is then used to optimize multiple-predicate queries and multi-way joins. Moreover, we study an incremental inequality join algorithm to handle scenarios where data keeps changing. We have implemented a centralized version of these algorithms on top of PostgreSQL, a distributed version on top of Spark SQL, and an existing data cleaning system, Nadeef. By comparing our algorithms against well-known optimization techniques for inequality joins, we show our solution is more scalable and several orders of magnitude faster. © 2016 Springer-Verlag Berlin Heidelberg
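
    A simplified single-predicate inequality join using a sorted array and binary search conveys the flavor of the approach, though not the paper's full algorithm with bit-arrays and multi-predicate optimization:

    ```python
    # All pairs (r, s) with r_vals[r] < s_vals[s], via sort + binary search
    # instead of a quadratic nested loop over value comparisons.
    import bisect

    def inequality_join(r_vals, s_vals):
        """Return all (r, s) index pairs with r_vals[r] < s_vals[s]."""
        order = sorted(range(len(r_vals)), key=lambda i: r_vals[i])
        keys = [r_vals[i] for i in order]
        out = []
        for j, b in enumerate(s_vals):
            k = bisect.bisect_left(keys, b)   # count of r with r_vals[r] < b
            out.extend((order[i], j) for i in range(k))
        return out

    print(inequality_join([10, 30, 20], [25, 15]))
    # [(0, 0), (2, 0), (0, 1)] -> rows costing less than each budget
    ```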

  1. Advanced technologies for scalable ATLAS conditions database access on the grid

    International Nuclear Information System (INIS)

    Basset, R; Canali, L; Girone, M; Hawkings, R; Valassi, A; Viegas, F; Dimitrov, G; Nevski, P; Vaniachine, A; Walker, R; Wong, A

    2010-01-01

    During massive data reprocessing operations an ATLAS Conditions Database application must support concurrent access from numerous ATLAS data processing jobs running on the Grid. By simulating realistic workflows, ATLAS database scalability tests provided feedback for Conditions DB software optimization and allowed precise determination of the required distributed database resources. In distributed data processing one must take into account the chaotic nature of Grid computing, characterized by peak loads that can be much higher than average access rates. To validate database performance at peak loads, we tested database scalability at very high concurrent job rates. This was achieved through coordinated database stress tests performed in a series of ATLAS reprocessing exercises at the Tier-1 sites. The goal of the database stress tests is to detect the scalability limits of the hardware deployed at the Tier-1 sites, so that server overload conditions can be safely avoided in a production environment. Our analysis of server performance under stress tests indicates that Conditions DB data access is limited by the disk I/O throughput. An unacceptable side effect of disk I/O saturation is a degradation of the WLCG 3D Services that update Conditions DB data at all ten ATLAS Tier-1 sites using the technology of Oracle Streams. To avoid such bottlenecks we prototyped and tested a novel approach for database peak load avoidance in Grid computing. Our approach is based upon the proven idea of pilot job submission on the Grid: instead of the actual query, an ATLAS utility library first sends a pilot query to the database server.

  2. Rising labor costs, earnings management, and financial performance of health care providers around the world.

    Science.gov (United States)

    Dong, Gang Nathan

    2015-01-01

    Amid increasing interest in how government regulation and market competition affect the cost and financial sustainability in the health care sector, it remains unclear whether health care providers behave similarly to their counterparts in other industries. The goal of this chapter is to study the degree to which health care providers manipulate accruals in periods of financial difficulties caused, in part, by the rising costs of labor. We collected the financial information of health care providers in 43 countries from 1984 to 2013 and conducted a pooled cross-sectional study with country and year fixed-effects. The empirical evidence shows that health care providers with higher wage costs are more likely to smooth their earnings in order to maintain financial sustainability. The finding of this study not only informs regulators that earnings management is pervasive in health care organizations around the world, but also contributes to the studies of financial book-tax reporting alignment, given the existing empirical evidence linking earnings management to corporate tax avoidance in this very sector.

  3. Measuring Healthcare Providers' Performances Within Managed Competition using Multidimensional Quality and Cost Indicators

    NARCIS (Netherlands)

    Portrait, F.R.M.; van den Berg, B.

    2015-01-01

    Background and objectives: The Dutch healthcare system is in transition towards managed competition. In theory, a system of managed competition involves incentives for quality and efficiency of provided care. This is mainly because health insurers contract on behalf of their clients with healthcare

  4. A Scalable Heuristic for Viral Marketing Under the Tipping Model

    Science.gov (United States)

    2013-09-01

    Flixster is a social media website that allows users to share reviews and other information about cinema [35]. It was extracted in Dec. 2010. – FourSquare... The work of Reichman was developed independently. We also note that Reichman performs no experimental evaluation of the algorithm. ... other diffusion models, such as the independent cascade model [21] and evolutionary graph theory [25], as well as probabilistic variants of the

  5. Enhancing Data Transfer Performance Utilizing a DTN between Cloud Service Providers

    Directory of Open Access Journals (Sweden)

    Wontaek Hong

    2018-04-01

    Full Text Available The rapid transfer of massive data in the cloud environment is required to prepare for unexpected situations like disaster recovery. With regard to this requirement, we propose a new approach to transferring cloud virtual machine images rapidly in the cloud environment utilizing dedicated Data Transfer Nodes (DTNs). The overall procedure is composed of local/remote copy processes and a DTN-to-DTN transfer process. These processes are coordinated and executed based on a fork system call in the proposed algorithm. In addition, we focus especially on the local copy process between a cloud controller and DTNs, and improve data transfer performance through well-tuned mount techniques in Network File System (NFS)-based connections. Several experiments have been performed considering the combination of synchronous/asynchronous modes and the network buffer size. We report the throughput results for all experimental cases and compare them. Consequently, the best throughput in write operations was obtained in the case of an NFS server in a DTN and an NFS client in a cloud controller running entirely in asynchronous mode.

  6. Mixing students and performance artists to provide innovative ways of communicating scientific research

    Science.gov (United States)

    van Manen, S. M.

    2007-12-01

    In May 2007 the Open University (U.K.) in conjunction with the MK (Milton Keynes) Gallery invited performance artists Noble and Silver to work with a group of students to design innovative methods of disseminating their research to a general audience. The students created a multitude of well-received live and multimedia performances based on their research. Students found they greatly benefited from the artists' and each others' different viewpoints and backgrounds, resulting in improved communication skills and varying interpretations of their own topic of interest. This work focuses on research aimed at identifying precursory activity at volcanoes using temperature, earthquake and ground movement data, to aid improvement of early warning systems. For this project an aspect of the research relevant to the public was chosen: the importance of appropriately timed warnings regarding the possibility of an eruption. If a warning is issued too early it may cause complacency and apathy towards the situation, whereas issuing a warning too late may endanger lives and property. An interactive DVD was produced which leads the user through the events preceding a volcanic eruption. The goal is to warn the public about the impending eruption at the most appropriate time. Data is presented in short film clips, after which questions are posed. Based on the player's answers the consequences or follow-up events of the choices are explored. We aim to improve and expand upon this concept in the near future, as well as making the DVD available to schools for educational purposes.

  7. Performance and first results of fission product release and transport provided by the VERDON facility

    Energy Technology Data Exchange (ETDEWEB)

    Gallais-During, A., E-mail: annelise.gallais-during@cea.fr [CEA, DEN, DEC, F-13108 Saint-Paul-lez-Durance (France); Bonnin, J.; Malgouyres, P.-P. [CEA, DEN, DEC, F-13108 Saint-Paul-lez-Durance (France); Morin, S. [IRSN, F-13108 Saint-Paul-lez-Durance (France); Bernard, S.; Gleizes, B.; Pontillon, Y.; Hanus, E.; Ducros, G. [CEA, DEN, DEC, F-13108 Saint-Paul-lez-Durance (France)

    2014-10-01

    Highlights: • A new facility to perform experimental LWR severe accidents sequences is evaluated. • In the furnace a fuel sample is heated up to 2600 °C under a controlled gas atmosphere. • Innovative thermal gradient tubes are used to study fission product transport. • The new VERDON facility shows an excellent consistency with results from VERCORS. • Fission product re-vapourization results confirm the correct functioning of the gradient tubes. - Abstract: One of the most important areas of research concerning a hypothetical severe accident in a light water reactor (LWR) is determining the source term, i.e. quantifying the nature, release kinetics and global released fraction of the fission products (FPs) and other radioactive materials. In line with the former VERCORS programme to improve source term estimates, the new VERDON laboratory has recently been implemented at the CEA Cadarache Centre in the LECA-STAR facility. The present paper deals with the evaluation of the experimental equipment of this new VERDON laboratory (furnace, release and transport loops) and demonstrates its capability to perform experimental sequences representative of LWR severe accidents and to supply the databases necessary for source term assessments and FP behaviour modelling.

  8. NPTool: Towards Scalability and Reliability of Business Process Management

    Science.gov (United States)

    Braghetto, Kelly Rosa; Ferreira, João Eduardo; Pu, Calton

    Currently, one important challenge in business process management is to provide scalability and reliability of business process executions at the same time. This difficulty becomes more accentuated when the execution control involves countless complex business processes. This work presents NavigationPlanTool (NPTool), a tool to control the execution of business processes. NPTool is supported by Navigation Plan Definition Language (NPDL), a language for business process specification that uses process algebra as its formal foundation. NPTool implements the NPDL language as a SQL extension. The main contribution of this paper is a description of NPTool showing how process algebra features combined with a relational database model can be used to provide scalable and reliable control of the execution of business processes. The next steps for NPTool include the reuse of control-flow patterns and support for data flow management.
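
    A toy rendering of the process-algebra view of control flow, with sequence and choice operators over activities (NPDL is actually implemented as a SQL extension; this Python version is only illustrative):

    ```python
    # Sequence (;) and choice (+) operators composed over named activities.
    class Activity:
        def __init__(self, name):
            self.name = name
        def run(self, log):
            log.append(self.name)

    class Sequence:
        def __init__(self, *steps):
            self.steps = steps
        def run(self, log):
            for step in self.steps:
                step.run(log)

    class Choice:
        def __init__(self, *branches):
            self.branches = branches
        def run(self, log, pick=0):
            self.branches[pick].run(log)   # a real engine decides at runtime

    process = Sequence(Activity("receive_order"),
                       Choice(Activity("approve"), Activity("reject")),
                       Activity("archive"))
    log = []
    process.run(log)
    print(log)   # ['receive_order', 'approve', 'archive']
    ```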

  9. The Effect of an Electronic Checklist on Critical Care Provider Workload, Errors, and Performance.

    Science.gov (United States)

    Thongprayoon, Charat; Harrison, Andrew M; O'Horo, John C; Berrios, Ronaldo A Sevilla; Pickering, Brian W; Herasevich, Vitaly

    2016-03-01

    The strategy used to improve effective checklist use in the intensive care unit (ICU) setting is essential for checklist success. This study aimed to test the hypothesis that an electronic checklist could reduce ICU provider workload, errors, and time to checklist completion, as compared to a paper checklist. This was a simulation-based study conducted at an academic tertiary hospital. All participants completed checklists for 6 ICU patients: 3 using an electronic checklist and 3 using an identical paper checklist. In both scenarios, participants had full access to the existing electronic medical record system. The outcomes measured were workload (defined using the National Aeronautics and Space Administration task load index [NASA-TLX]), the number of checklist errors, and time to checklist completion. Two independent clinician reviewers, blinded to participant results, served as the reference standard for checklist error calculation. Twenty-one ICU providers participated in this study. This resulted in the generation of 63 simulated electronic checklists and 63 simulated paper checklists. The median NASA-TLX score was 39 for the electronic checklist and 50 for the paper checklist (P = .005). The median number of checklist errors for the electronic checklist was 5, while the median number of checklist errors for the paper checklist was 8 (P = .003). The time to checklist completion was not significantly different between the 2 checklist formats (P = .76). The electronic checklist significantly reduced provider workload and errors without any measurable difference in the amount of time required for checklist completion. This demonstrates that electronic checklists are feasible and desirable in the ICU setting. © The Author(s) 2014.

  10. Composite Indexes Economic and Social Performance: Do they Provide Valuable Information?

    Directory of Open Access Journals (Sweden)

    Nasierowski Wojciech

    2016-01-01

    Full Text Available This paper examines the information content of selected composite indexes, namely the Global Competitiveness Report Index, the Human Development Index, the Knowledge Economy Index, the Innovation Union Scoreboard, and the like. These indexes are examined from the viewpoint of country rankings. It is argued that these indexes provide highly similar information, which calls into question the usefulness of such a variety of approaches. This paper also explores the drawbacks of composite indexes and questions whether they can adequately serve as policy-setting mechanisms.
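
    The paper's central claim, that the indexes rank countries very similarly, is exactly what a Spearman rank correlation makes precise. A self-contained sketch with invented scores for five hypothetical countries (the index names match the abstract; the numbers are illustrative, and ties are not handled):

    ```python
    # Spearman rank correlation between two country rankings, from scratch.
    def rankdata(xs):
        order = sorted(range(len(xs)), key=lambda i: xs[i], reverse=True)
        ranks = [0] * len(xs)
        for r, i in enumerate(order, start=1):
            ranks[i] = r
        return ranks

    def spearman(x, y):
        rx, ry = rankdata(x), rankdata(y)
        n = len(x)
        d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
        return 1 - 6 * d2 / (n * (n * n - 1))

    gci = [5.1, 4.2, 4.8, 3.9, 4.5]       # e.g. competitiveness scores
    hdi = [0.92, 0.80, 0.88, 0.74, 0.85]  # e.g. human development scores
    print(round(spearman(gci, hdi), 3))   # 1.0 -> identical country rankings
    ```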

  11. Hyper-Realistic, Team-Centered Fleet Surgical Team Training Provides Sustained Improvements in Performance.

    Science.gov (United States)

    Hoang, Tuan N; Kang, Jeff; Siriratsivawong, Kris; LaPorta, Anthony; Heck, Amber; Ferraro, Jessica; Robinson, Douglas; Walsh, Jonathan

    2016-01-01

    The high-stress, fast-paced environment of combat casualty care relies on effective teamwork and communication, which translate into quality patient care. A training course was developed for U.S. Navy Fleet Surgical Teams to address these aspects of patient care by emphasizing efficiency and appropriate patient care. An effective training course provides the knowledge and skills needed to pass the course evaluation and to sustain those skills over time. The course included classroom didactic hours and hands-on simulation sessions. A pretest was administered before the course, a posttest upon completion, and a sustainment test 5 months following course completion. The evaluation process measured changes in patient time to disposition and critical errors made during patient care. Setting: Naval Base San Diego, with resuscitation and surgical simulations carried out within the shipboard medical spaces. Participants: United States Navy medical personnel, including physicians of various specialties, corpsmen, nurses, and nurse anesthetists deploying aboard ships. Time to disposition improved significantly, by 11 ± 3 minutes, from pretest to posttest, and critical errors improved by 4 ± 1 errors per encounter. From posttest to sustainment test, time to disposition increased by 3 ± 1 minutes, and critical errors decreased by 1 ± 1. This course showed value in improving the teamwork and communication skills of participants, both immediately upon completion of the course and after 5 months had passed. Therefore, with ongoing sustainment activities within 6 months, this course can substantially improve trauma care provided by shipboard deployed Navy medical personnel to wounded service members. Published by Elsevier Inc.

  12. The IPE Database: providing information on plant design, core damage frequency and containment performance

    International Nuclear Information System (INIS)

    Lehner, J.R.; Lin, C.C.; Pratt, W.T.; Su, T.; Danziger, L.

    1996-01-01

    A database, called the IPE Database has been developed that stores data obtained from the Individual Plant Examinations (IPEs) which licensees of nuclear power plants have conducted in response to the Nuclear Regulatory Commission's (NRC) Generic Letter GL88-20. The IPE Database is a collection of linked files which store information about plant design, core damage frequency (CDF), and containment performance in a uniform, structured way. The information contained in the various files is based on data contained in the IPE submittals. The information extracted from the submittals and entered into the IPE Database can be manipulated so that queries regarding individual or groups of plants can be answered using the IPE Database

  13. Can pay-for-performance to primary care providers stimulate appropriate use of antibiotics?

    DEFF Research Database (Denmark)

    Dietrichson, Jens; Maria Ellegård, Lina; Anell, Anders

    2018-01-01

    ... which act on fewer bacteria types, when possible. Inappropriate use of antibiotics is nonetheless widespread, not least for respiratory tract infections (RTI), a common reason for antibiotics prescriptions. We examine if pay-for-performance (P4P) presents a way to influence primary care physicians' choice of antibiotics. During 2006-2013, eight Swedish health care authorities adopted P4P to make physicians select narrow-spectrum antibiotics more often in the treatment of children with RTI. Exploiting register data on all purchases of RTI antibiotics in a difference-in-differences analysis, we find that P4P significantly increased the share of narrow-spectrum antibiotics. There are no signs that physicians gamed the system by issuing more prescriptions overall.
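
    A schematic of the difference-in-differences estimation described above, with simulated data and illustrative variable names (outcome: share of narrow-spectrum antibiotics; treatment: P4P adoption):

    ```python
    # Difference-in-differences via OLS with an interaction term; the
    # coefficient on treated:post estimates the treatment effect.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 400
    df = pd.DataFrame({
        "treated": rng.integers(0, 2, n),   # region adopted P4P
        "post": rng.integers(0, 2, n),      # observation after adoption
    })
    # Simulate a +5 percentage-point treatment effect on top of noise.
    df["narrow_share"] = (60 + 3 * df.treated + 2 * df.post
                          + 5 * df.treated * df.post + rng.normal(0, 4, n))

    model = smf.ols("narrow_share ~ treated * post", data=df).fit()
    print(model.params["treated:post"])     # estimated P4P effect (~5)
    ```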

  15. Nutritional Supplement of Hatchery Eggshell Membrane Improves Poultry Performance and Provides Resistance against Endotoxin Stress.

    Science.gov (United States)

    Makkar, S K; Rath, N C; Packialakshmi, B; Zhou, Z Y; Huff, G R; Donoghue, A M

    2016-01-01

    Eggshells are a significant part of hatchery waste, consisting of the calcium carbonate crust, membranes, and proteins and peptides of embryonic origin, along with other entrapped contaminants including microbes. We hypothesized that using this product as a nutritional additive in poultry diets may confer better immunity to chickens, in the paradigm of mammalian milk enhancing immunity. Therefore, we investigated the effect of hatchery eggshell membranes (HESM) as a short-term feed supplement on the growth performance and immunity of chickens under bacterial lipopolysaccharide (LPS) challenge. Three studies were conducted to find the effect of HESM supplementation on post-hatch chickens. In the first study, the chickens were fed either a control diet or diets containing 0.5% whey protein or HESM as supplement and evaluated at 5 weeks of age using growth, hematology, clinical chemistry, plasma immunoglobulins, and corticosterone as variables. The second and third studies compared the effects of LPS on control and HESM-fed birds at 5 weeks of age, at 4 and 24 h after treatment, where the HESM was also sterilized with ethanol to deplete bacterial factors. HESM supplementation caused weight gain in 2 experiments and decreased blood corticosterone concentrations. While LPS caused a significant loss in body weight at 24 h following its administration, the HESM-supplemented birds showed significantly less body weight loss than the control-fed birds. The WBC count, heterophil/lymphocyte ratio, and levels of IgG were low in chickens fed diets with HESM supplement compared with the control diet group. LPS challenge increased the expression of the pro-inflammatory cytokine gene IL-6, but its effect was curtailed in HESM-fed birds, which also favored the up-regulation of anti-inflammatory genes compared with chickens fed the control diet. Post-hatch supplementation of HESM appears to improve performance, modulate immunity, and increase resistance of chickens to endotoxin stress.

  16. A Scalable Communication Architecture for Advanced Metering Infrastructure

    OpenAIRE

    Ngo Hoang , Giang; Liquori , Luigi; Nguyen Chan , Hung

    2013-01-01

    Advanced Metering Infrastructure (AMI), seen as a foundation for overall grid modernization, is an integration of many technologies that provides an intelligent connection between consumers and system operators [ami 2008]. One of the biggest challenges that AMI faces is to scalably collect and manage a huge amount of data from a large number of customers. In our paper, we address this challenge by introducing a mixed peer-to-peer (P2P) and client-server communication architecture for AMI in whic...

  17. Standards - the common element in providing the safety, quality and performance of the medical practice

    Science.gov (United States)

    Greabu, Mihai

    2009-01-01

    Knowing and applying standards is an opportunity of the years 2007-2008 in any field where a successful activity is intended, and it assures a certain path towards competence and quality. The most recent German studies highlighted, to the surprise of the specialists, that standardization holds the second place, after material means, among the elements considered decisive for the success of a business. The existence of standards, and the concern for their implementation, provides a high technical and quality level of the products and services offered to clients and increases the competence of the personnel, who will be able to cope with all the challenges. This need comes from the process of Romania’s accession to the European Union. There are many reasons why standards represent a fundamental part of our daily life. Practically, we are surrounded by standards. Everything „works” well and efficiently if the standards used as a basis for manufacturing „things” have been correctly developed and applied. Standards open communication and commercial channels, promote the understanding of technical products and the compatibility of products and services, facilitate mass production and, most importantly, they are the necessary basis for achieving objectives in the fields of health and safety and a better quality of life. The transition towards the global market needs an instrument for removing the barriers to the application of the latest discoveries in the field of medical instruments, materials and manual work. Every medical device, piece of equipment and material used in dental and general medicine is standardized, which leads to better knowledge of these devices and provides controllable treatments with predictable and repeatable results. This presentation intends to survey some general aspects of standardization as well as to review the standards in

  18. The Relationship between Environmental Turbulence, Management Support, Organizational Collaboration, Information Technology Solution Realization, and Process Performance, in Healthcare Provider Organizations

    Science.gov (United States)

    Muglia, Victor O.

    2010-01-01

    The Problem: The purpose of this study was to investigate relationships between environmental turbulence, management support, organizational collaboration, information technology solution realization, and process performance in healthcare provider organizations. Method: A descriptive/correlational study of Hospital medical services process…

  19. Air Force Health Care Providers Incidence of Performing Testicular Exams and Instruction of Testicular Self-Exam

    National Research Council Canada - National Science Library

    Adams, Nicola

    1999-01-01

    ...) within an Air Force healthcare setting. The study is also designed to determine any significant differences among providers, Family Practice Physicians, Nurse Practitioners, and Physicians Assistants, regarding the incidence of performing...

  20. Feedback providing improvement strategies and reflection on feedback use: Effects on students' writing motivation, process, and performance

    NARCIS (Netherlands)

    Duijnhouwer, H.; Prins, F.J.; Stokking, K.M.

    2012-01-01

    This study investigated the effects of feedback providing improvement strategies and a reflection assignment on students’ writing motivation, process, and performance. Students in the experimental feedback condition (n = 41) received feedback including improvement strategies, whereas students in the

  1. Can pay-for-performance to primary care providers stimulate appropriate use of antibiotics?

    Science.gov (United States)

    Ellegård, Lina Maria; Dietrichson, Jens; Anell, Anders

    2018-01-01

    Antibiotic resistance is a major threat to public health worldwide. As the healthcare sector's use of antibiotics is an important contributor to the development of resistance, it is crucial that physicians only prescribe antibiotics when needed and that they choose narrow-spectrum antibiotics, which act on fewer bacteria types, when possible. Inappropriate use of antibiotics is nonetheless widespread, not least for respiratory tract infections (RTI), a common reason for antibiotic prescriptions. We examine if pay-for-performance (P4P) presents a way to influence primary care physicians' choice of antibiotics. During 2006-2013, 8 Swedish healthcare authorities adopted P4P to make physicians select narrow-spectrum antibiotics more often in the treatment of children with RTI. Exploiting register data on all purchases of RTI antibiotics in a difference-in-differences analysis, we find that P4P significantly increased the share of narrow-spectrum antibiotics. There are no signs that physicians gamed the system by issuing more prescriptions overall. © 2017 The Authors Health Economics Published by John Wiley & Sons Ltd.
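
    As an illustration of the design described above, a difference-in-differences estimate can be sketched in a few lines; the toy panel below (variables narrow_share, p4p, post) is invented and stands in for the Swedish register data:

```python
# Hedged sketch of a difference-in-differences (DiD) estimate, assuming a
# panel of regions observed before/after P4P adoption. Data are invented.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "narrow_share": [0.55, 0.57, 0.54, 0.56, 0.58, 0.66, 0.53, 0.55],
    "p4p":  [1, 1, 0, 0, 1, 1, 0, 0],   # region (ever) adopted P4P
    "post": [0, 0, 0, 0, 1, 1, 1, 1],   # observation after adoption date
})

# The coefficient on the interaction p4p:post is the DiD estimate of the
# effect of P4P on the narrow-spectrum share.
model = smf.ols("narrow_share ~ p4p * post", data=df).fit()
print(model.params["p4p:post"])
```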

  2. Creatine co-ingestion with carbohydrate or cinnamon extract provides no added benefit to anaerobic performance.

    Science.gov (United States)

    Islam, Hashim; Yorgason, Nick J; Hazell, Tom J

    2016-09-01

    The insulin response following carbohydrate ingestion enhances creatine transport into muscle. Cinnamon extract is promoted as having insulin-like effects, so this study examined whether creatine co-ingestion with carbohydrates or cinnamon extract improved anaerobic capacity, muscular strength, and muscular endurance. Active young males (n = 25; 23.7 ± 2.5 y) were stratified into 3 groups: (1) creatine only (CRE); (2) creatine + 70 g carbohydrate (CHO); or (3) creatine + 500 mg cinnamon extract (CIN), based on anaerobic capacity (peak power·kg(-1)) and muscular strength at baseline. Three weeks of supplementation consisted of a 5 d loading phase (20 g/d) and a 16 d maintenance phase (5 g/d). Pre- and post-supplementation measures included a 30-s Wingate test and a 30-s maximal running test (on a self-propelled treadmill) for anaerobic capacity. Muscular strength was measured as the one-repetition maximum (1-RM) for chest, back, quadriceps, hamstrings, and leg press. Muscular endurance was measured as the number of repetitions performed at 60% 1-RM until fatigue. All three groups significantly improved Wingate relative peak power (CRE: 15.4%, P = .004; CHO: 14.6%, P = .004; CIN: 15.7%, P = .003), and muscular strength for chest (CRE: 6.6% P creatine ingestion led to similar changes in anaerobic power, strength, and endurance.

  3. Resource-aware complexity scalability for mobile MPEG encoding

    NARCIS (Netherlands)

    Mietens, S.O.; With, de P.H.N.; Hentschel, C.; Panchanatan, S.; Vasudev, B.

    2004-01-01

    Complexity scalability attempts to scale the required resources of an algorithm with the chosen quality settings, in order to broaden the application range. In this paper, we present complexity-scalable MPEG encoding in which the core processing modules are modified for scalability. Scalability is

  4. A practical multilayered conducting polymer actuator with scalable work output

    International Nuclear Information System (INIS)

    Ikushima, Kimiya; John, Stephen; Yokoyama, Kazuo; Nagamitsu, Sachio

    2009-01-01

    Household assistance robots are expected to become more prominent in the future and will require inherently safe design. Conducting polymer-based artificial muscle actuators are one potential option for achieving this safety, as they are flexible, lightweight and can be driven using low input voltages, unlike electromagnetic motors; however, practical implementation also requires a scalable structure and stability in air. In this paper we propose and practically implement a multilayer conducting polymer actuator which could achieve these targets using polypyrrole film and ionic liquid-soaked separators. The practical work density of a nine-layer multilayer actuator was 1.4 kJ m⁻³ at 0.5 Hz, when the volumes of the electrolyte and counter electrodes were included, which approaches the performance of mammalian muscle. To achieve air stability, we analyzed the effect of air-stable ionic liquid gels on actuator displacement using finite element simulation and found that the majority of strain could be retained when the elastic modulus of the gel was kept below 3 kPa. As a result of this work, we have shown that multilayered conducting polymer actuators are a feasible idea for household robotics, as they provide a substantial practical work density in a compact structure and can be easily scaled as required.

  5. Scalable global grid catalogue for Run3 and beyond

    Science.gov (United States)

    Martinez Pedreira, M.; Grigoras, C.; ALICE Collaboration

    2017-10-01

    The AliEn (ALICE Environment) file catalogue is a global unique namespace providing mapping between a UNIX-like logical name structure and the corresponding physical files distributed over 80 storage elements worldwide. Powerful search tools and hierarchical metadata information are integral parts of the system and are used by the Grid jobs as well as local users to store and access all files on the Grid storage elements. The catalogue has been in production since 2005 and over the past 11 years has grown to more than 2 billion logical file names. The backend is a set of distributed relational databases, ensuring smooth growth and fast access. Due to the anticipated fast future growth, we are looking for ways to enhance the performance and scalability by simplifying the catalogue schema while keeping the functionality intact. We investigated different backend solutions, such as distributed key value stores, as replacement for the relational database. This contribution covers the architectural changes in the system, together with the technology evaluation, benchmark results and conclusions.
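
    As a toy illustration of the catalogue's core abstraction, the sketch below models the logical-to-physical mapping as a key-value store, in the spirit of the key-value backends evaluated above; the logical names and storage-element URLs are invented:

```python
# Minimal key-value sketch of a file catalogue mapping a UNIX-like logical
# namespace to physical replicas. A production backend would shard and
# replicate this mapping; all names below are illustrative.
catalogue: dict[str, list[str]] = {}

def register(lfn: str, pfn: str) -> None:
    """Record a physical replica for a logical file name."""
    catalogue.setdefault(lfn, []).append(pfn)

def lookup(lfn: str) -> list[str]:
    """Return all physical replicas of a logical file name."""
    return catalogue.get(lfn, [])

register("/alice/data/2016/run265309/file1.root",
         "root://se01.example.org//eos/alice/f1")
register("/alice/data/2016/run265309/file1.root",
         "root://se02.example.org//dpm/alice/f1")
print(lookup("/alice/data/2016/run265309/file1.root"))
```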

  6. The Scalable Coherent Interface and related standards projects

    International Nuclear Information System (INIS)

    Gustavson, D.B.

    1991-09-01

    The Scalable Coherent Interface (SCI) project (IEEE P1596) found a way to avoid the limits that are inherent in bus technology. SCI provides bus-like services by transmitting packets on a collection of point-to-point unidirectional links. The SCI protocols support cache coherence in a distributed-shared-memory multiprocessor model, message passing, I/O, and local-area-network-like communication over fiber optic or wire links. VLSI circuits that operate parallel links at 1000 MByte/s and serial links at 1000 Mbit/s will be available early in 1992. Several ongoing SCI-related projects are applying the SCI technology to new areas or extending it to more difficult problems. P1596.1 defines the architecture of a bridge between SCI and VME; P1596.2 compatibly extends the cache coherence mechanism for efficient operation with kiloprocessor systems; P1596.3 defines new low-voltage (about 0.25 V) differential signals suitable for low power interfaces for CMOS or GaAs VLSI implementations of SCI; P1596.4 defines a high performance memory chip interface using these signals; P1596.5 defines data transfer formats for efficient interprocessor communication in heterogeneous multiprocessor systems. This paper reports the current status of SCI, related standards, and new projects. 16 refs

  7. Scalable and cost-effective NGS genotyping in the cloud.

    Science.gov (United States)

    Souilmi, Yassine; Lancaster, Alex K; Jung, Jae-Yoon; Rizzo, Ettore; Hawkins, Jared B; Powles, Ryan; Amzazi, Saaïd; Ghazal, Hassan; Tonellato, Peter J; Wall, Dennis P

    2015-10-15

    While next-generation sequencing (NGS) costs have plummeted in recent years, the cost and complexity of computation remain substantial barriers to the use of NGS in routine clinical care. The clinical potential of NGS will not be realized until robust and routine whole genome sequencing data can be accurately rendered to medically actionable reports within a time window of hours and at economies of scale in the tens of dollars. We take a step towards addressing this challenge by using COSMOS, a cloud-enabled workflow management system, to develop GenomeKey, an NGS whole genome analysis workflow. COSMOS implements complex workflows making optimal use of high-performance compute clusters. Here we show that the Amazon Web Service (AWS) implementation of GenomeKey via COSMOS provides a fast, scalable, and cost-effective analysis of both public benchmarking and large-scale heterogeneous clinical NGS datasets. Our systematic benchmarking reveals important new insights and considerations for achieving clinical turn-around of whole genome analysis, including strategic batching of individual genomes and efficient cluster resource configuration.

  8. THE IMPACT OF THE LEVERAGE PROVIDED BY THE FUTURES ON THE PERFORMANCE OF TECHNICAL INDICATORS: EVIDENCE FROM TURKEY

    Directory of Open Access Journals (Sweden)

    Hakan ER

    2012-07-01

    Full Text Available Although in developed countries the futures markets have been in existence since the mid-nineteenth century, they are relatively new in developing countries. In the late twentieth and early twenty-first century, many new futures exchanges were established in developing countries, and in the majority of these newly established exchanges substantial growth in futures trading has been observed within a small period of time. This fast growth in futures trading volume was mainly due to the tremendous leverage that futures provide to speculators. Thanks to the futures margining system, by committing only a small fraction of the money needed to maintain a position on the underlying security in the spot market, a speculator can attain a much higher return potential by buying or selling a futures contract. This paper studies this effect by employing daily return data on 19 selected stocks listed continuously in IMKB-30 (Istanbul Stock Exchange 30), Turkey from January 2005 to December 2010. Popular technical indicators are used to generate buy and sell signals in both the spot market (Istanbul Stock Exchange) and the futures market (VOB, a fast-growing derivatives exchange located in İzmir, Turkey). The profit/loss resulting from the trading strategies is then calculated and compared. The results of the study show that, although the amount invested in both markets is the same, the profit generated from the strategies applied on futures is significantly higher than that on the spot market. A CAPM (Capital Asset Pricing Model) based hedge ratio is used to apply the trading strategies generated from spot market data on futures. The results show that this strategy generates superior returns in the futures market.
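
    To make the signal-generation step concrete, one popular technical indicator, a moving-average crossover, can be sketched as follows; the synthetic prices and parameter choices are illustrative, not the paper's exact indicator set:

```python
# Moving-average crossover on synthetic prices: go long when the fast
# average crosses above the slow one, flat otherwise.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
prices = pd.Series(100 + rng.standard_normal(250).cumsum())

fast = prices.rolling(5).mean()
slow = prices.rolling(20).mean()

position = (fast > slow).astype(int)   # 1 = long, 0 = flat
daily_ret = prices.pct_change()

# Strategy return; a leveraged futures position would scale this return on
# the committed margin while the signal itself is unchanged.
strat_ret = (position.shift(1) * daily_ret).fillna(0)
print("cumulative return:", (1 + strat_ret).prod() - 1)
```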

  9. Scalable quantum memory in the ultrastrong coupling regime.

    Science.gov (United States)

    Kyaw, T H; Felicetti, S; Romero, G; Solano, E; Kwek, L-C

    2015-03-02

    Circuit quantum electrodynamics, consisting of superconducting artificial atoms coupled to on-chip resonators, represents a prime candidate for implementing a scalable quantum computing architecture because of its good tunability and controllability. Furthermore, recent advances have pushed the technology towards the ultrastrong coupling regime of light-matter interaction, where the qubit-resonator coupling strength reaches a considerable fraction of the resonator frequency. Here, we propose a qubit-resonator system operating in that regime as a quantum memory device, and study the storage and retrieval of quantum information in and from the Z2 parity-protected quantum memory within experimentally feasible schemes. We are also convinced that our proposal might pave the way to realizing a scalable quantum random-access memory due to its fast storage and readout performance.

  10. Scalable, full-colour and controllable chromotropic plasmonic printing

    Science.gov (United States)

    Xue, Jiancai; Zhou, Zhang-Kai; Wei, Zhiqiang; Su, Rongbin; Lai, Juan; Li, Juntao; Li, Chao; Zhang, Tengwei; Wang, Xue-Hua

    2015-01-01

    Plasmonic colour printing has drawn wide attention as a promising candidate for the next-generation colour-printing technology. However, an efficient approach to realize full colour and scalable fabrication is still lacking, which prevents plasmonic colour printing from practical applications. Here we present a scalable and full-colour plasmonic printing approach by combining conjugate twin-phase modulation with a plasmonic broadband absorber. More importantly, our approach also demonstrates controllable chromotropic capability, that is, the ability of reversible colour transformations. This chromotropic capability affords enormous potentials in building functionalized prints for anticounterfeiting, special label, and high-density data encryption storage. With such excellent performances in functional colour applications, this colour-printing approach could pave the way for plasmonic colour printing in real-world commercial utilization. PMID:26567803

  11. Scalability of Sustainable Business Models in Hybrid Organizations

    Directory of Open Access Journals (Sweden)

    Adam Jabłoński

    2016-02-01

    Full Text Available The dynamics of change in modern business create new mechanisms for company management to determine their pursuit and achievement of high performance. This performance, maintained over a long period of time, becomes a source of business continuity for companies. A business model that is able to generate results in every possible market situation, and that has the features of permanent adaptability, makes it possible to adopt such assumptions. The feature that describes the adaptability of a business model is its scalability. Being a factor that ensures a system performs more work, and works more efficiently, as the number of its components increases, scalability can be applied to the concept of business models as the company’s ability to maintain similar or higher performance through it. Ensuring the company’s performance in the long term helps to build a so-called sustainable business model, one that often balances the objectives of stakeholders and shareholders, and that is created by the implemented principles of value-based management and corporate social responsibility. This perception of business paves the way for building hybrid organizations that integrate business activities with pro-social ones. The combination of an approach typical of hybrid organizations with designing and implementing sustainable business models pursuant to the scalability criterion seems interesting from the cognitive point of view. Today, hybrid organizations are great spaces for building effective and efficient mechanisms for dialogue between business and society. This requires the appropriate business model. The purpose of the paper is to present the conceptualization and operationalization of the scalability of sustainable business models that determine the performance of a hybrid organization in the network environment. The paper presents the original concept of applying scalability in sustainable business models with detailed

  12. Scalable Notch Antenna System for Multiport Applications

    Directory of Open Access Journals (Sweden)

    Abdurrahim Toktas

    2016-01-01

    Full Text Available A novel and compact scalable antenna system is designed for multiport applications. The basic design is built on a square patch with an electrical size of 0.82λ0×0.82λ0 (at 2.4 GHz) on a dielectric substrate. The design consists of four symmetrical and orthogonal triangular notches with circular feeding slots at the corners of the common patch. The 4-port antenna can be simply rearranged into 8-port and 12-port systems. The operating band of the system can be tuned by scaling (S) the size of the system while fixing the thickness of the substrate. The antenna system with S: 1/1, with a size of 103.5×103.5 mm2, operates in the frequency band of 2.3–3.0 GHz. By scaling the antenna with S: 1/2.3, a system of 45×45 mm2 is achieved, and thus the operating band is tuned to 4.7–6.1 GHz with the same scattering characteristic. A parametric study is also conducted to investigate the effects of changing the notch dimensions. The performance of the antenna is verified in terms of the antenna characteristics as well as diversity and multiplexing parameters. The antenna system can be tuned by scaling so that it is applicable to multiport WLAN, WIMAX, and LTE devices with port upgradability.

  13. Space Situational Awareness Data Processing Scalability Utilizing Google Cloud Services

    Science.gov (United States)

    Greenly, D.; Duncan, M.; Wysack, J.; Flores, F.

    Space Situational Awareness (SSA) is a fundamental and critical component of current space operations. The term SSA encompasses the awareness, understanding and predictability of all objects in space. As the population of orbital space objects and debris increases, the number of collision avoidance maneuvers grows and prompts the need for accurate and timely process measures. The SSA mission continually evolves towards near real-time assessment and analysis, demanding higher processing capabilities. By conventional methods, meeting these demands requires the integration of new hardware to keep pace with the growing complexity of maneuver planning algorithms. SpaceNav has implemented a highly scalable architecture that will track satellites and debris by utilizing powerful virtual machines on the Google Cloud Platform. SpaceNav algorithms for processing conjunction data messages (CDMs) outpace conventional means. A robust processing environment for tracking data, collision avoidance maneuvers and various other aspects of SSA can be created and deleted on demand. The migration of SpaceNav tools and algorithms into the Google Cloud Platform, and the trials and tribulations involved, will be discussed. Information will be shared on how and why certain cloud products were used, as well as the integration techniques that were implemented. Key items to be presented are: 1. Scientific algorithms and SpaceNav tools integrated into a scalable architecture: a) Maneuver Planning; b) Parallel Processing; c) Monte Carlo Simulations; d) Optimization Algorithms; e) SW Application Development/Integration into the Google Cloud Platform. 2. Compute Engine Processing: a) Application Engine Automated Processing; b) Performance Testing and Performance Scalability; c) Cloud MySQL Databases and Database Scalability; d) Cloud Data Storage; e) Redundancy and Availability.

  14. Continuity-Aware Scheduling Algorithm for Scalable Video Streaming

    Directory of Open Access Journals (Sweden)

    Atinat Palawan

    2016-05-01

    Full Text Available Consumer demand for retrieving and delivering visual content through consumer electronic devices has increased rapidly in recent years. The quality of video in packet networks is susceptible to certain traffic characteristics: average bandwidth availability, loss, delay, and delay variation (jitter). This paper presents a scheduling algorithm that modifies the stream of scalable video to combat jitter. The algorithm provides unequal look-ahead by safeguarding the base layer of the scalable video, without the need for overhead. The results of the experiments show that our scheduling algorithm reduces the number of frames with a violated deadline and significantly improves the continuity of the video stream without compromising the average Y Peak Signal-to-Noise Ratio (PSNR).
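
    The toy scheduler below conveys the unequal-protection idea, as a stand-in for the paper's algorithm: packets are sent in deadline order, base-layer units break ties ahead of enhancement layers, and units that have already missed their deadline are dropped:

```python
# Deadline-first scheduling with base-layer priority on ties; layer 0 is the
# base layer, higher numbers are enhancement layers. Illustrative only.
import heapq

def schedule(units, now):
    """units: list of (deadline_ms, layer). Returns the send order."""
    heap = [(deadline, layer, i) for i, (deadline, layer) in enumerate(units)]
    heapq.heapify(heap)
    order = []
    while heap:
        deadline, layer, i = heapq.heappop(heap)
        if deadline >= now:              # drop units past their deadline
            order.append(units[i])
    return order

frames = [(40, 1), (40, 0), (80, 2), (20, 0), (10, 1)]
print(schedule(frames, now=15))   # [(20, 0), (40, 0), (40, 1), (80, 2)]
```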

  15. Robust and scalable optical one-way quantum computation

    International Nuclear Information System (INIS)

    Wang Hefeng; Yang Chuiping; Nori, Franco

    2010-01-01

    We propose an efficient approach for deterministically generating scalable cluster states with photons. This approach involves unitary transformations performed on atoms coupled to optical cavities. Its operation cost scales linearly with the number of qubits in the cluster state, and photon qubits are encoded such that single-qubit operations can be easily implemented by using linear optics. Robust optical one-way quantum computation can be performed since cluster states can be stored in atoms and then transferred to photons that can be easily operated and measured. Therefore, this proposal could help in performing robust large-scale optical one-way quantum computation.

  16. Scalable inference for stochastic block models

    KAUST Repository

    Peng, Chengbin

    2017-12-08

    Community detection in graphs is widely used in social and biological networks, and the stochastic block model is a powerful probabilistic tool for describing graphs with community structures. However, in the era of "big data," traditional inference algorithms for such a model are increasingly limited due to their high time complexity and poor scalability. In this paper, we propose a multi-stage maximum likelihood approach to recover the latent parameters of the stochastic block model, in time linear with respect to the number of edges. We also propose a parallel algorithm based on message passing. Our algorithm can overlap communication and computation, providing speedup without compromising accuracy as the number of processors grows. For example, to process a real-world graph with about 1.3 million nodes and 10 million edges, our algorithm requires about 6 seconds on 64 cores of a contemporary commodity Linux cluster. Experiments demonstrate that the algorithm can produce high quality results on both benchmark and real-world graphs. An example of finding more meaningful communities is illustrated consequently in comparison with a popular modularity maximization algorithm.
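
    As a toy stand-in for such inference (a simple local label-update pass, not the authors' multi-stage maximum-likelihood or message-passing algorithm), the sketch below samples a two-block stochastic block model and tries to recover the blocks:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sbm(sizes, B):
    """Sample an undirected SBM adjacency matrix with block matrix B."""
    z = np.repeat(np.arange(len(sizes)), sizes)   # true block labels
    P = B[z][:, z]                                # pairwise edge probabilities
    A = rng.random(P.shape) < P
    A = np.triu(A, 1)
    return (A | A.T).astype(int), z

def local_update(A, k, iters=20):
    """Move each node to the block holding most of its neighbors.
    Toy heuristic: may collapse to one block on unlucky initializations."""
    z = rng.integers(k, size=A.shape[0])
    for _ in range(iters):
        for i in range(A.shape[0]):
            z[i] = np.bincount(z[A[i] == 1], minlength=k).argmax()
    return z

B = np.array([[0.30, 0.02],
              [0.02, 0.30]])
A, z_true = sample_sbm([100, 100], B)
z_hat = local_update(A, 2)
print("agreement:", max(np.mean(z_hat == z_true), np.mean(z_hat == 1 - z_true)))
```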

  17. Good, better, best? A comprehensive comparison of healthcare providers' performance: An application to physiotherapy practices in primary care.

    Science.gov (United States)

    Steenhuis, Sander; Groeneweg, Niels; Koolman, Xander; Portrait, France

    2017-12-01

    Most payment methods in healthcare stimulate volume-driven care, rather than value-driven care. Value-based payment methods such as Pay-For-Performance have the potential to reduce costs and improve quality of care. Ideally, outcome indicators are used in the assessment of providers' performance. The aim of this paper is to describe the feasibility of assessing and comparing the performances of providers using a comprehensive set of quality and cost data. We had access to unique and extensive datasets containing individual data on PROMs, PREMs and costs of physiotherapy practices in Dutch primary care. We merged these datasets at the patient-level and compared the performances of these practices using case-mix corrected linear regression models. Several significant differences in performance were detected between practices. These results can be used by both physiotherapists, to improve treatment given, and insurers to support their purchasing decisions. The study demonstrates that it is feasible to compare the performance of providers using PROMs and PREMs. However, it would take an extra effort to increase usefulness and it remains unclear under which conditions this effort is cost-effective. Healthcare providers need to be aware of the added value of registering outcomes to improve their quality. Insurers need to facilitate this by designing value-based contracts with the right incentives. Only then can payment methods contribute to value-based healthcare and increase value for patients. Copyright © 2017 Elsevier B.V. All rights reserved.
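
    A hedged sketch of the kind of case-mix-corrected linear model described above; the variables (prom as outcome, age and baseline as case-mix covariates, practice as the provider) and the data are invented:

```python
# Provider comparison via OLS with practice fixed effects and case-mix
# adjustment; coefficients on C(practice) are performance differences
# relative to the reference practice "A".
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 300
df = pd.DataFrame({
    "practice": rng.choice(["A", "B", "C"], size=n),
    "age": rng.integers(18, 85, size=n),
    "baseline": rng.normal(50, 10, size=n),
})
true_effect = df["practice"].map({"A": 0.0, "B": 2.0, "C": -1.0})
df["prom"] = (0.5 * df["baseline"] - 0.05 * df["age"]
              + true_effect + rng.normal(0, 3, n))

fit = smf.ols("prom ~ C(practice) + age + baseline", data=df).fit()
print(fit.params.filter(like="practice"))
```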

  18. Impact of Providing Compassion on Job Performance and Mental Health: The Moderating Effect of Interpersonal Relationship Quality.

    Science.gov (United States)

    Chu, Li-Chuan

    2017-07-01

    To examine the relationships of providing compassion at work with job performance and mental health, as well as to identify the role of interpersonal relationship quality in moderating these relationships. This study adopted a two-stage survey completed by 235 registered nurses employed by hospitals in Taiwan. All hypotheses were tested using hierarchical regression analyses. The results show that providing compassion is an effective predictor of job performance and mental health, whereas interpersonal relationship quality can moderate the relationships of providing compassion with job performance and mental health. When nurses are frequently willing to listen, understand, and help their suffering colleagues, the enhancement engendered by providing compassion can improve the provider's job performance and mental health. Creating high-quality relationships in the workplace can strengthen the positive benefits of providing compassion. Motivating employees to spontaneously exhibit compassion is crucial to an organization. Hospitals can establish value systems, belief systems, and cultural systems that support a compassionate response to suffering. In addition, nurses can internalize altruistic belief systems into their own personal value systems through a long process of socialization in the workplace. © 2017 Sigma Theta Tau International.

  19. Scalable and near-optimal design space exploration for embedded systems

    CERN Document Server

    Kritikakou, Angeliki; Goutis, Costas

    2014-01-01

    This book describes scalable and near-optimal, processor-level design space exploration (DSE) methodologies.  The authors present design methodologies for data storage and processing in real-time, cost-sensitive data-dominated embedded systems.  Readers will be enabled to reduce time-to-market, while satisfying system requirements for performance, area, and energy consumption, thereby minimizing the overall cost of the final design.   • Describes design space exploration (DSE) methodologies for data storage and processing in embedded systems, which achieve near-optimal solutions with scalable exploration time; • Presents a set of principles and the processes which support the development of the proposed scalable and near-optimal methodologies; • Enables readers to apply scalable and near-optimal methodologies to the intra-signal in-place optimization step for both regular and irregular memory accesses.

  20. SuperLU_DIST: A scalable distributed-memory sparse direct solver for unsymmetric linear systems

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xiaoye S.; Demmel, James W.

    2002-03-27

    In this paper, we present the main algorithmic features in the software package SuperLU_DIST, a distributed-memory sparse direct solver for large sets of linear equations. We give in detail our parallelization strategies, with focus on scalability issues, and demonstrate the parallel performance and scalability on current machines. The solver is based on sparse Gaussian elimination, with an innovative static pivoting strategy proposed earlier by the authors. The main advantage of static pivoting over classical partial pivoting is that it permits a priori determination of data structures and communication pattern for sparse Gaussian elimination, which makes it more scalable on distributed memory machines. Based on this a priori knowledge, we designed highly parallel and scalable algorithms for both LU decomposition and triangular solve and we show that they are suitable for large-scale distributed memory machines.
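
    For readers who want to experiment, SciPy wraps the sequential SuperLU library (a sibling of the distributed-memory SuperLU_DIST), so the basic factor-then-solve workflow can be tried on a toy matrix:

```python
# Sparse LU factorization and solve via SciPy's SuperLU binding; the 3x3
# matrix is an arbitrary toy example.
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

A = csc_matrix(np.array([[4.0, 1.0, 0.0],
                         [1.0, 3.0, 1.0],
                         [0.0, 1.0, 2.0]]))
b = np.array([1.0, 2.0, 3.0])

lu = splu(A)        # LU factorization with pivoting
x = lu.solve(b)
print(np.allclose(A @ x, b))   # True
```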

  1. Scalable fractionation of iron oxide nanoparticles using a CO2 gas-expanded liquid system

    International Nuclear Information System (INIS)

    Vengsarkar, Pranav S.; Xu, Rui; Roberts, Christopher B.

    2015-01-01

    Iron oxide nanoparticles exhibit highly size-dependent physicochemical properties that are important in applications such as catalysis and environmental remediation. In order for these size-dependent properties to be effectively harnessed for industrial applications, scalable and cost-effective techniques for size-controlled synthesis or size separation must be developed. The synthesis of monodisperse iron oxide nanoparticles can be a prohibitively expensive process on a large scale. An alternative involves the use of inexpensive synthesis procedures followed by a size-selective processing technique. While there are many techniques available to fractionate nanoparticles, many of them are unable to efficiently fractionate iron oxide nanoparticles in a scalable and inexpensive manner. A scalable apparatus capable of fractionating large quantities of iron oxide nanoparticles into distinct fractions of different sizes and size distributions has been developed. The polydisperse iron oxide nanoparticles (2–20 nm) coated with oleic acid used in this study were synthesized using a simple and inexpensive version of the popular coprecipitation technique. This apparatus uses hexane as a CO2 gas-expanded liquid to controllably precipitate nanoparticles inside a 1 L high-pressure reactor. This paper demonstrates the operation of this new apparatus and for the first time shows successful fractionation results on a system of metal oxide nanoparticles, with initial nanoparticle concentrations at the gram scale. The analysis of the obtained fractions was performed using transmission electron microscopy and dynamic light scattering. The use of this simple apparatus provides a pathway to separate large quantities of iron oxide nanoparticles based upon their size for use in various industrial applications.

  2. Temporal Scalability through Adaptive M-Band Filter Banks for Robust H.264/MPEG-4 AVC Video Coding

    Directory of Open Access Journals (Sweden)

    Pau G

    2006-01-01

    Full Text Available This paper presents different structures that use adaptive M-band hierarchical filter banks for temporal scalability. Open-loop and closed-loop configurations are introduced and illustrated using existing video codecs. In particular, it is shown that the H.264/MPEG-4 AVC codec allows us to introduce scalability through frame shuffling operations, thus keeping backward compatibility with the standard. The large set of shuffling patterns introduced here can be exploited to adapt the encoding process to the video content features, as well as to the user equipment and transmission channel characteristics. Furthermore, simulation results show that this scalability is obtained with no degradation in terms of subjective and objective quality in error-free environments, while in error-prone channels the scalable versions provide increased robustness.
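
    To make the temporal-layering idea concrete, the sketch below shows a dyadic assignment of frames to temporal layers; this is an illustrative convention for hierarchical temporal decompositions in general, not the paper's specific shuffling patterns:

```python
# Dyadic frame-to-temporal-layer assignment for a GOP of 8 frames: dropping
# the highest layers halves the frame rate each time, and the base layer
# (layer 0) always remains decodable.
def temporal_layer(n: int, gop: int = 8) -> int:
    """Temporal layer of frame n (0 = base layer)."""
    if n % gop == 0:
        return 0
    level, layer = gop, 0
    while n % level != 0:
        level //= 2
        layer += 1
    return layer

print([temporal_layer(n) for n in range(9)])
# [0, 3, 2, 3, 1, 3, 2, 3, 0] -> keeping layers <= 1 keeps frames 0, 4, 8
```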

  3. JPEG2000-Compatible Scalable Scheme for Wavelet-Based Video Coding

    Directory of Open Access Journals (Sweden)

    Thomas André

    2007-03-01

    Full Text Available We present a simple yet efficient scalable scheme for wavelet-based video coders, able to provide on-demand spatial, temporal, and SNR scalability, and fully compatible with the still-image coding standard JPEG2000. Whereas hybrid video coders must undergo significant changes in order to support scalability, our coder only requires a specific wavelet filter for temporal analysis, as well as an adapted bit allocation procedure based on models of rate-distortion curves. Our study shows that scalably encoded sequences have the same or almost the same quality as nonscalably encoded ones, without a significant increase in complexity. A full compatibility with Motion JPEG2000, which tends to be a serious candidate for the compression of high-definition video sequences, is ensured.

  4. JPEG2000-Compatible Scalable Scheme for Wavelet-Based Video Coding

    Directory of Open Access Journals (Sweden)

    André Thomas

    2007-01-01

    Full Text Available We present a simple yet efficient scalable scheme for wavelet-based video coders, able to provide on-demand spatial, temporal, and SNR scalability, and fully compatible with the still-image coding standard JPEG2000. Whereas hybrid video coders must undergo significant changes in order to support scalability, our coder only requires a specific wavelet filter for temporal analysis, as well as an adapted bit allocation procedure based on models of rate-distortion curves. Our study shows that scalably encoded sequences have the same or almost the same quality as nonscalably encoded ones, without a significant increase in complexity. A full compatibility with Motion JPEG2000, which tends to be a serious candidate for the compression of high-definition video sequences, is ensured.

  5. Superlinearly scalable noise robustness of redundant coupled dynamical systems.

    Science.gov (United States)

    Kohar, Vivek; Kia, Behnam; Lindner, John F; Ditto, William L

    2016-03-01

    We illustrate through theory and numerical simulations that redundant coupled dynamical systems can be extremely robust against local noise in comparison to uncoupled dynamical systems evolving in the same noisy environment. Previous studies have shown that the noise robustness of redundant coupled dynamical systems is linearly scalable and deviations due to noise can be minimized by increasing the number of coupled units. Here, we demonstrate that the noise robustness can actually be scaled superlinearly if some conditions are met and very high noise robustness can be realized with very few coupled units. We discuss these conditions and show that this superlinear scalability depends on the nonlinearity of the individual dynamical units. The phenomenon is demonstrated in discrete as well as continuous dynamical systems. This superlinear scalability not only provides us an opportunity to exploit the nonlinearity of physical systems without being bogged down by noise but may also help us in understanding the functional role of coupled redundancy found in many biological systems. Moreover, engineers can exploit superlinear noise suppression by starting a coupled system near (not necessarily at) the appropriate initial condition.
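
    The baseline (linear) effect that the paper builds on is easy to check numerically: averaging N redundant noisy copies of a nonlinear map suppresses the deviation of the coupled state roughly as 1/sqrt(N). The sketch below demonstrates only this baseline; the superlinear regime discussed above depends additionally on conditions tied to the map's nonlinearity:

```python
# One noisy update per unit, followed by full coupling (the ensemble mean);
# the standard deviation of the coupled state shrinks ~ 1/sqrt(N).
import numpy as np

rng = np.random.default_rng(1)
r, x0, sigma = 3.7, 0.3, 1e-3

def logistic(x):
    return r * x * (1.0 - x)

clean = logistic(x0)
for n in [1, 4, 16, 64]:
    devs = [np.mean(logistic(x0) + sigma * rng.standard_normal(n)) - clean
            for _ in range(5000)]
    print(f"N={n:3d}  std of coupled deviation = {np.std(devs):.2e}")
```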

  6. Event metadata records as a testbed for scalable data mining

    International Nuclear Information System (INIS)

    Gemmeren, P van; Malon, D

    2010-01-01

    At a data rate of 200 hertz, event metadata records ('TAGs,' in ATLAS parlance) provide fertile grounds for development and evaluation of tools for scalable data mining. It is easy, of course, to apply HEP-specific selection or classification rules to event records and to label such an exercise 'data mining,' but our interest is different. Advanced statistical methods and tools such as classification, association rule mining, and cluster analysis are common outside the high energy physics community. These tools can prove useful, not for discovery physics, but for learning about our data, our detector, and our software. A fixed and relatively simple schema makes TAG export to other storage technologies such as HDF5 straightforward. This simplifies the task of exploiting very-large-scale parallel platforms such as Argonne National Laboratory's BlueGene/P, currently the largest supercomputer in the world for open science, in the development of scalable tools for data mining. Using a domain-neutral scientific data format may also enable us to take advantage of existing data mining components from other communities. There is, further, a substantial literature on the topic of one-pass algorithms and stream mining techniques, and such tools may be inserted naturally at various points in the event data processing and distribution chain. This paper describes early experience with event metadata records from ATLAS simulation and commissioning as a testbed for scalable data mining tool development and evaluation.
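
    Because the schema is fixed and simple, exporting TAG-like records to HDF5 is a few lines with h5py; the field names below are invented stand-ins for real TAG attributes:

```python
# Write fixed-schema event metadata records to HDF5 and run a simple
# selection over them.
import numpy as np
import h5py

tag_dtype = np.dtype([("run", np.uint32), ("event", np.uint64),
                      ("n_muons", np.uint16), ("missing_et", np.float32)])

tags = np.array([(195847, 1001, 2, 54.2),
                 (195847, 1002, 0, 12.7)], dtype=tag_dtype)

with h5py.File("tags.h5", "w") as f:
    f.create_dataset("tags", data=tags, maxshape=(None,), chunks=True)

with h5py.File("tags.h5", "r") as f:
    records = f["tags"][...]
    print(records[records["n_muons"] >= 2])   # simple selection over records
```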

  7. Scalable Coverage Maintenance for Dense Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Jun Lu

    2007-06-01

    Full Text Available Owing to numerous potential applications, wireless sensor networks have been attracting significant research effort recently. The critical challenge that wireless sensor networks often face is to sustain long-term operation on limited battery energy. Coverage maintenance schemes can effectively prolong network lifetime by selecting and employing a subset of sensors in the network to provide sufficient sensing coverage over a target region. We envision future wireless sensor networks composed of a vast number of miniaturized sensors in exceedingly high density. Therefore, the key issue of coverage maintenance for future sensor networks is the scalability to sensor deployment density. In this paper, we propose a novel coverage maintenance scheme, scalable coverage maintenance (SCOM, which is scalable to sensor deployment density in terms of communication overhead (i.e., number of transmitted and received beacons and computational complexity (i.e., time and space complexity. In addition, SCOM achieves high energy efficiency and load balancing over different sensors. We have validated our claims through both analysis and simulations.

  8. Rate control scheme for consistent video quality in scalable video codec.

    Science.gov (United States)

    Seo, Chan-Won; Han, Jong-Ki; Nguyen, Truong Q

    2011-08-01

    Multimedia data delivered to mobile devices over wireless channels or the Internet are complicated by bandwidth fluctuation and the variety of mobile devices. Scalable video coding has been developed as an extension of H.264/AVC to solve this problem. Since scalable video codec provides various scalabilities to adapt the bitstream for the channel conditions and terminal types, scalable codec is one of the useful codecs for wired or wireless multimedia communication systems, such as IPTV and streaming services. In such scalable multimedia communication systems, video quality fluctuation degrades the visual perception significantly. It is important to efficiently use the target bits in order to maintain a consistent video quality or achieve a small distortion variation throughout the whole video sequence. The scheme proposed in this paper provides a useful function to control video quality in applications supporting scalability, whereas conventional schemes have been proposed to control video quality in the H.264 and MPEG-4 systems. The proposed algorithm decides the quantization parameter of the enhancement layer to maintain a consistent video quality throughout the entire sequence. The video quality of the enhancement layer is controlled based on a closed-form formula which utilizes the residual data and quantization error of the base layer. The simulation results show that the proposed algorithm controls the frame quality of the enhancement layer in a simple operation, where the parameter decision algorithm is applied to each frame.

  9. Declarative and Scalable Selection for Map Visualizations

    DEFF Research Database (Denmark)

    Kefaloukos, Pimin Konstantin Balic

    and is itself a source and cause of prolific data creation. This calls for scalable map processing techniques that can handle the data volume and which play well with the predominant data models on the Web. (4) Maps are now consumed around the clock by a global audience. While historical maps were single-user... ...-defined constraints as well as custom objectives. The purpose of the language is to derive a target multi-scale database from a source database according to holistic specifications. (b) The Glossy SQL compiler allows Glossy SQL to be scalably executed in a spatial analytics system, such as a spatial relational... ..., there are indications that the method is scalable for databases that contain millions of records, especially if the target language of the compiler is substituted by a cluster-ready variant of SQL. While several realistic use cases for maps have been implemented in CVL, additional non-geographic data visualization uses...

  10. Scalable Density-Based Subspace Clustering

    DEFF Research Database (Denmark)

    Müller, Emmanuel; Assent, Ira; Günnemann, Stephan

    2011-01-01

    For knowledge discovery in high dimensional databases, subspace clustering detects clusters in arbitrary subspace projections. Scalability is a crucial issue, as the number of possible projections is exponential in the number of dimensions. We propose a scalable density-based subspace clustering...... method that steers mining to few selected subspace clusters. Our novel steering technique reduces subspace processing by identifying and clustering promising subspaces and their combinations directly. Thereby, it narrows down the search space while maintaining accuracy. Thorough experiments on real...... and synthetic databases show that steering is efficient and scalable, with high quality results. For future work, our steering paradigm for density-based subspace clustering opens research potential for speeding up other subspace clustering approaches as well....

  11. Microscopic Characterization of Scalable Coherent Rydberg Superatoms

    Directory of Open Access Journals (Sweden)

    Johannes Zeiher

    2015-08-01

    Full Text Available Strong interactions can amplify quantum effects such that they become important on macroscopic scales. Controlling these coherently on a single-particle level is essential for the tailored preparation of strongly correlated quantum systems and opens up new prospects for quantum technologies. Rydberg atoms offer such strong interactions, which lead to extreme nonlinearities in laser-coupled atomic ensembles. As a result, multiple excitation of a micrometer-sized cloud can be blocked while the light-matter coupling becomes collectively enhanced. The resulting two-level system, often called a “superatom,” is a valuable resource for quantum information, providing a collective qubit. Here, we report on the preparation of superatoms scalable over 2 orders of magnitude, utilizing the large interaction strength provided by Rydberg atoms combined with precise control of an ensemble of ultracold atoms in an optical lattice. The latter is achieved with sub-shot-noise precision by local manipulation of a two-dimensional Mott insulator. We microscopically confirm the superatom picture by in situ detection of the Rydberg excitations and observe the characteristic square-root scaling of the optical coupling with the number of atoms. Enabled by the full control over the atomic sample, including the motional degrees of freedom, we infer the overlap of the produced many-body state with a W state from the observed Rabi oscillations and deduce the presence of entanglement. Finally, we investigate the breakdown of the superatom picture when two Rydberg excitations are present in the system, which leads to dephasing and a loss of coherence.
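
    The square-root scaling of the optical coupling observed above is the standard collective enhancement for N atoms sharing a single excitation; in textbook form (not quoted from the paper):

```latex
% W state of N atoms sharing one Rydberg excitation, and the collectively
% enhanced Rabi frequency of the fully blockaded ensemble:
\begin{align}
  |W\rangle &= \frac{1}{\sqrt{N}} \sum_{i=1}^{N}
               |g_1 \cdots r_i \cdots g_N\rangle, \\
  \Omega_N  &= \sqrt{N}\,\Omega_1 .
\end{align}
```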

  12. Feedback Providing Improvement Strategies and Reflection on Feedback Use: Effects on Students' Writing Motivation, Process, and Performance

    Science.gov (United States)

    Duijnhouwer, Hendrien; Prins, Frans J.; Stokking, Karel M.

    2012-01-01

    This study investigated the effects of feedback providing improvement strategies and a reflection assignment on students' writing motivation, process, and performance. Students in the experimental feedback condition (n = 41) received feedback including improvement strategies, whereas students in the control feedback condition (n = 41) received…

  13. Relationships among providing maternal, child, and adolescent health services; implementing various financial strategy responses; and performance of local health departments.

    Science.gov (United States)

    Issel, L Michele; Olorunsaiye, Comfort; Snebold, Laura; Handler, Arden

    2015-04-01

    We explored the relationships between local health department (LHD) structure, capacity, and macro-context variables and performance of essential public health services (EPHS). In 2012, we assessed a stratified, random sample of 195 LHDs that provided data via an online survey regarding performance of EPHS, the services provided or contracted out, the financial strategies used in response to budgetary pressures, and the extent of collaborations. We performed weighted analyses that included analysis of variance, pairwise correlations by jurisdiction population size, and linear regressions. On average, LHDs provided approximately 13 (36%) of 35 possible services either directly or by contract. Rather than cut services or externally consolidating, LHDs took steps to generate more revenue and maximize capacity. Higher LHD performance of EPHS was significantly associated with delivering more services, initiating more financial strategies, and engaging in collaboration, after adjusting for the effects of the Affordable Care Act and jurisdiction size. During changing economic and health care environments, we found that strong structural capacity enhanced local health department EPHS performance for maternal, child, and adolescent health.

  14. From Digital Disruption to Business Model Scalability

    DEFF Research Database (Denmark)

    Nielsen, Christian; Lund, Morten; Thomsen, Peter Poulsen

    2017-01-01

    This article discusses the terms disruption, digital disruption, business models and business model scalability. It illustrates how managers should be using these terms for the benefit of their business by developing business models capable of achieving exponentially increasing returns to scale as a response to digital disruption. A series of case studies illustrate that, besides the frequent messages in the business literature relating to the importance of creating agile businesses, both in growing and declining economies, hard-to-copy value propositions, or value propositions that take... will seldom lead to business model scalability capable of competing with digital disruption(s).

  15. Scalable electro-photonic integration concept based on polymer waveguides

    Science.gov (United States)

    Bosman, E.; Van Steenberge, G.; Boersma, A.; Wiegersma, S.; Harmsma, P.; Karppinen, M.; Korhonen, T.; Offrein, B. J.; Dangel, R.; Daly, A.; Ortsiefer, M.; Justice, J.; Corbett, B.; Dorrestein, S.; Duis, J.

    2016-03-01

    A novel method for fabricating a single mode optical interconnection platform is presented. The method comprises the miniaturized assembly of optoelectronic single dies, the scalable fabrication of polymer single mode waveguides and the coupling to glass fiber arrays providing the I/Os. The low-cost approach for the polymer waveguide fabrication is based on the nano-imprinting of a spin-coated waveguide core layer. The assembly of VCSELs and photodiodes is performed before waveguide layers are applied. By embedding these components in deep reactive ion etched pockets in the silicon substrate, the planarity of the substrate for subsequent layer processing is guaranteed and the thermal path of chip-to-substrate is minimized. Optical coupling of the embedded devices to the nano-imprinted waveguides is performed by laser ablating 45 degree trenches which act as optical mirrors for 90 degree deviation of the light from VCSEL to waveguide. Laser ablation is also implemented for removing parts of the polymer stack in order to mount a custom fabricated connector containing glass fiber arrays. A demonstration device was built to show the proof of principle of the novel fabrication, packaging and optical coupling principles described above, combined with a set of sub-demonstrators showing the functionality of the different techniques separately. The paper represents a significant part of the electro-photonic integration accomplishments in the European 7th Framework project "Firefly" and discusses not only the development of the different assembly processes described above, but also the efforts on the complete integration of all process approaches into the single device demonstrator.

  16. ADEpedia: a scalable and standardized knowledge base of Adverse Drug Events using semantic web technology.

    Science.gov (United States)

    Jiang, Guoqian; Solbrig, Harold R; Chute, Christopher G

    2011-01-01

    A source of semantically coded Adverse Drug Event (ADE) data can be useful for identifying common phenotypes related to ADEs. We proposed a comprehensive framework for building a standardized ADE knowledge base (called ADEpedia) by combining an ontology-based approach with semantic web technology. The framework comprises four primary modules: 1) an XML2RDF transformation module; 2) a data normalization module based on the NCBO Open Biomedical Annotator; 3) an RDF-store-based persistence module; and 4) a front-end module based on a Semantic Wiki for review and curation. A prototype is successfully implemented to demonstrate the capability of the system to integrate multiple drug data and ontology resources and open web services for ADE data standardization. A preliminary evaluation is performed to demonstrate the usefulness of the system, including the performance of the NCBO annotator. In conclusion, semantic web technology provides a highly scalable framework for ADE data source integration and a standard query service.
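
    A hedged sketch of the RDF-triple representation behind such a knowledge base, using rdflib; the namespace and the drug/event identifiers below are invented for illustration:

```python
# Represent one adverse-drug-event report as RDF triples and serialize it.
from rdflib import Graph, Literal, Namespace, RDF

ADE = Namespace("http://example.org/adepedia/")

g = Graph()
report = ADE["report/1"]
g.add((report, RDF.type, ADE.AdverseDrugEvent))
g.add((report, ADE.drug, ADE["drug/rxnorm/617314"]))     # normalized drug code
g.add((report, ADE.reaction, ADE["meddra/10019211"]))    # normalized event code
g.add((report, ADE.verbatimReaction, Literal("headache")))

print(g.serialize(format="turtle"))
```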

  17. Scalable Integrated Multi-Mission Support System Simulator Release 3.0

    Science.gov (United States)

    Kim, John; Velamuri, Sarma; Casey, Taylor; Bemann, Travis

    2012-01-01

    The Scalable Integrated Multi-mission Support System (SIMSS) is a tool that performs a variety of test activities related to spacecraft simulations and ground segment checks. SIMSS is a distributed, component-based, plug-and-play client-server system useful for performing real-time monitoring and communications testing. SIMSS runs on one or more workstations and is designed to be user-configurable or to use predefined configurations for routine operations. SIMSS consists of more than 100 modules that can be configured to create, receive, process, and/or transmit data. The SIMSS/GMSEC innovation is intended to provide missions with a low-cost solution for implementing their ground systems, as well as significantly reducing a mission's integration time and risk.

  18. A Scalable and Reliable Message Transport Service for the ATLAS Trigger and Data Acquisition System

    CERN Document Server

    Kazarov, A; The ATLAS collaboration; Kolos, S; Lehmann Miotto, G; Soloviev, I

    2014-01-01

    The ATLAS Trigger and Data Acquisition (TDAQ) system is a large distributed computing system composed of several thousand interconnected computers and tens of thousands of applications. During a run, TDAQ applications produce large volumes of control and information messages at variable rates, addressed to TDAQ operators or to other applications. Reliable, fast and accurate delivery of the messages is important for the functioning of the whole TDAQ system. The Message Transport Service (MTS) provides facilities for the reliable transport, filtering and routing of the messages, based on a publish-subscribe-notify communication pattern with content-based message filtering. During the ongoing LHC shutdown, the MTS was re-implemented, taking into account important requirements like reliability, scalability and performance, handling of the slow-subscriber case and also simplicity of the design and the implementation. MTS uses CORBA middleware, a common layer for the TDAQ infrastructure, and provides sending/subscribing APIs i...
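
    A minimal in-process sketch of the publish-subscribe-notify pattern with content-based filtering named above; the broker and message attributes are invented stand-ins for the CORBA-based service:

```python
# Subscribers register a predicate over message attributes; only matching
# messages are delivered.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Message:
    severity: str
    application: str
    text: str

@dataclass
class Broker:
    subs: list[tuple[Callable[["Message"], bool],
                     Callable[["Message"], None]]] = field(default_factory=list)

    def subscribe(self, predicate, callback):
        self.subs.append((predicate, callback))

    def publish(self, msg: Message):
        for predicate, callback in self.subs:
            if predicate(msg):
                callback(msg)

broker = Broker()
broker.subscribe(lambda m: m.severity == "ERROR",
                 lambda m: print("operator console:", m.text))
broker.publish(Message("ERROR", "HLTSV", "lost connection to node"))
broker.publish(Message("INFO", "HLTSV", "heartbeat"))   # filtered out
```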

  19. Scalable Parallel Distributed Coprocessor System for Graph Searching Problems with Massive Data

    Directory of Open Access Journals (Sweden)

    Wanrong Huang

    2017-01-01

    Full Text Available Internet applications, such as network searching, electronic commerce, and modern medical applications, produce and process massive data. Considerable data parallelism exists in the computation processes of data-intensive applications. A traversal algorithm, breadth-first search (BFS), is fundamental in many graph processing applications and metrics when a graph grows in scale. A variety of scientific programming methods have been proposed for accelerating and parallelizing BFS because of the poor temporal and spatial locality caused by inherently irregular memory access patterns. However, new parallel hardware can provide better improvement for scientific methods. To address small-world graph problems, we propose a scalable and novel field-programmable gate array-based heterogeneous multicore system for scientific programming. The core is multithreaded for streaming processing, and the InfiniBand communication network is adopted for scalability. We design a binary search algorithm for address mapping to unify all processor addresses. Within the limits permitted by the Graph500 test bench, after testing a 1D parallel hybrid BFS algorithm, our 8-core, 8-thread-per-core system achieved superior performance and efficiency compared with prior work at the same degree of parallelism. Our system is efficient not as a special acceleration unit but as a processor platform that deals with graph searching applications.
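
    For reference, the level-synchronous BFS pattern that such hardware parallelizes looks as follows in serial form; in a parallel implementation each frontier is split across cores. The graph is a toy example:

```python
def bfs_levels(adj: dict[int, list[int]], source: int) -> dict[int, int]:
    """Return the BFS level of every vertex reachable from source."""
    level = {source: 0}
    frontier = [source]
    while frontier:
        next_frontier = []
        for u in frontier:               # parallel BFS splits this loop
            for v in adj.get(u, []):
                if v not in level:
                    level[v] = level[u] + 1
                    next_frontier.append(v)
        frontier = next_frontier
    return level

graph = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}
print(bfs_levels(graph, 0))   # {0: 0, 1: 1, 2: 1, 3: 2, 4: 2, 5: 3}
```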

  20. Secure and scalable deduplication of horizontally partitioned health data for privacy-preserving distributed statistical computation.

    Science.gov (United States)

    Yigzaw, Kassaye Yitbarek; Michalas, Antonis; Bellika, Johan Gustav

    2017-01-03

    Techniques have been developed to compute statistics on distributed datasets without revealing private information except the statistical results. However, duplicate records in a distributed dataset may lead to incorrect statistical results. Therefore, to increase the accuracy of the statistical analysis of a distributed dataset, secure deduplication is an important preprocessing step. We designed a secure protocol for the deduplication of horizontally partitioned datasets with deterministic record linkage algorithms. We provided a formal security analysis of the protocol in the presence of semi-honest adversaries. The protocol was implemented and deployed across three microbiology laboratories located in Norway, and we ran experiments on the datasets in which the number of records for each laboratory varied. Experiments were also performed on simulated microbiology datasets and data custodians connected through a local area network. The security analysis demonstrated that the protocol protects the privacy of individuals and data custodians under a semi-honest adversarial model. More precisely, the protocol remains secure with the collusion of up to N - 2 corrupt data custodians. The total runtime for the protocol scales linearly with the addition of data custodians and records. One million simulated records distributed across 20 data custodians were deduplicated within 45 s. The experimental results showed that the protocol is more efficient and scalable than previous protocols for the same problem. The proposed deduplication protocol is efficient and scalable for practical uses while protecting the privacy of patients and data custodians.
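
    A minimal sketch of the deterministic record-linkage step that such deduplication builds on, assuming custodians share a secret key and compare keyed digests of normalized quasi-identifiers. This illustrates the general idea only; the paper's actual protocol adds a formal security analysis and tolerates up to N - 2 colluding custodians, which this toy version does not.

```python
# Deterministic record linkage via keyed hashing: each custodian hashes
# normalized quasi-identifiers under a shared secret key, so duplicates can be
# matched across sites by digest equality without exchanging plaintext records.
import hmac, hashlib

SHARED_KEY = b"agreed-out-of-band"  # hypothetical pre-shared key

def linkage_digest(first, last, dob):
    normalized = f"{first.strip().lower()}|{last.strip().lower()}|{dob}".encode()
    return hmac.new(SHARED_KEY, normalized, hashlib.sha256).hexdigest()

lab_a = {linkage_digest("Kari", "Nordmann", "1980-02-01")}
lab_b = {linkage_digest("KARI ", "nordmann", "1980-02-01"),
         linkage_digest("Ola", "Hansen", "1975-07-12")}
print(lab_a & lab_b)  # the duplicate record is detected by digest equality
```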

  1. Novel flat datacenter network architecture based on scalable and flow-controlled optical switch system.

    Science.gov (United States)

    Miao, Wang; Luo, Jun; Di Lucente, Stefano; Dorren, Harm; Calabretta, Nicola

    2014-02-10

    We propose and demonstrate an optical flat datacenter network based on a scalable optical switch system with optical flow control. The modular structure with distributed control results in a port-count-independent optical switch reconfiguration time. An RF tone in-band labeling technique allowing parallel processing of the label bits ensures low-latency operation regardless of the switch port count. Hardware flow control is conducted at the optical level by re-using the label wavelength without occupying extra bandwidth, space, or network resources, which further improves the latency performance within a simple structure. Dynamic switching including multicasting operation is validated for a 4 x 4 system. Error-free operation of 40 Gb/s data packets has been achieved with only 1 dB penalty. The system can handle an input load up to 0.5, providing a packet loss lower than 10^-5 and an average latency less than 500 ns when a buffer size of 16 packets is employed. Investigation of scalability also indicates that the proposed system could potentially scale up to large port counts with limited power penalty.

  2. GSKY: A scalable distributed geospatial data server on the cloud

    Science.gov (United States)

    Rozas Larraondo, Pablo; Pringle, Sean; Antony, Joseph; Evans, Ben

    2017-04-01

    Earth systems, environmental and geophysical datasets are extremely valuable sources of information about the state and evolution of the Earth. The ability to combine information coming from different geospatial collections is in increasing demand by the scientific community, and requires managing and manipulating data with different formats and performing operations such as map reprojections, resampling and other transformations. Due to the large data volume inherent in these collections, storing multiple copies of them is infeasible, so such data manipulation must be performed on-the-fly using efficient, high performance techniques. Ideally this should be performed using a trusted data service and common system libraries to ensure wide use and reproducibility. Recent developments in distributed computing based on dynamic access to significant cloud infrastructure open the door for such new ways of processing geospatial data on demand. The National Computational Infrastructure (NCI), hosted at the Australian National University (ANU), has over 10 Petabytes of nationally significant research data collections. Some of these collections, which comprise a variety of observed and modelled geospatial data, are now made available via a highly distributed geospatial data server, called GSKY (pronounced [jee-skee]). GSKY supports on-demand processing of large geospatial data products such as satellite earth observation data as well as numerical weather products, allowing interactive exploration and analysis of the data. It dynamically and efficiently distributes the required computations among cloud nodes, providing a scalable analysis framework that can adapt to serve large numbers of concurrent users. Typical geospatial workflows handling different file formats and data types, or blending data in different coordinate projections and spatio-temporal resolutions, are handled transparently by GSKY. This is achieved by decoupling the data ingestion and indexing process as
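
    The kind of on-the-fly coordinate transformation such a server performs can be sketched with the generic pyproj library; this illustrates the operation only and is not GSKY's own interface, and the sample coordinates are arbitrary.

```python
# On-demand map reprojection of a single point using pyproj: transform WGS84
# latitude/longitude into Web Mercator without storing a reprojected copy.
from pyproj import Transformer

transformer = Transformer.from_crs("EPSG:4326", "EPSG:3857", always_xy=True)
x, y = transformer.transform(149.12, -35.28)  # approx. Canberra (lon, lat)
print(f"Web Mercator: x={x:.1f} m, y={y:.1f} m")
```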

  3. Experiment Dashboard - a generic, scalable solution for monitoring of the LHC computing activities, distributed sites and services

    International Nuclear Information System (INIS)

    Andreeva, J; Cinquilli, M; Dieguez, D; Dzhunov, I; Karavakis, E; Karhula, P; Kenyon, M; Kokoszkiewicz, L; Nowotka, M; Ro, G; Saiz, P; Tuckett, D; Sargsyan, L; Schovancova, J

    2012-01-01

    The Experiment Dashboard system provides common solutions for monitoring job processing, data transfers and site/service usability. Over the last seven years, it has proved to play a crucial role in the monitoring of the LHC computing activities, distributed sites and services. It has been one of the key elements during the commissioning of the distributed computing systems of the LHC experiments. The first years of data taking represented a serious test for the Experiment Dashboard in terms of functionality, scalability and performance. Given that the usage of the Experiment Dashboard applications has been steadily increasing over time, it can be asserted that all of these objectives were fully accomplished.

  4. Case of two electrostatics problems: Can providing a diagram adversely impact introductory physics students’ problem solving performance?

    Directory of Open Access Journals (Sweden)

    Alexandru Maries

    2018-03-01

    Full Text Available Drawing appropriate diagrams is a useful problem solving heuristic that can transform a problem into a representation that is easier to exploit for solving it. One major focus while helping introductory physics students learn effective problem solving is to help them understand that drawing diagrams can facilitate problem solution. We conducted an investigation in which two different interventions were implemented during recitation quizzes in a large enrollment algebra-based introductory physics course. Students were either (i) asked to solve problems in which the diagrams were drawn for them or (ii) explicitly told to draw a diagram. A comparison group was not given any instruction regarding diagrams. We developed rubrics to score the problem solving performance of students in different intervention groups and investigated ten problems. We found that students who were provided diagrams never performed better and actually performed worse than the other students on three problems, one involving standing sound waves in a tube (discussed elsewhere) and two problems in electricity which we focus on here. These two problems were the only problems in electricity that involved considerations of initial and final conditions, which may partly account for why students provided with diagrams performed significantly worse than students who were not provided with diagrams. In order to explore potential reasons for this finding, we conducted interviews with students and found that some students provided with diagrams may have spent less time on the conceptual analysis and planning stage of the problem solving process. In particular, those provided with the diagram were more likely to jump into the implementation stage of problem solving early without fully analyzing and understanding the problem, which can increase the likelihood of mistakes in solutions.

  5. Case of two electrostatics problems: Can providing a diagram adversely impact introductory physics students' problem solving performance?

    Science.gov (United States)

    Maries, Alexandru; Singh, Chandralekha

    2018-06-01

    Drawing appropriate diagrams is a useful problem solving heuristic that can transform a problem into a representation that is easier to exploit for solving it. One major focus while helping introductory physics students learn effective problem solving is to help them understand that drawing diagrams can facilitate problem solution. We conducted an investigation in which two different interventions were implemented during recitation quizzes in a large enrollment algebra-based introductory physics course. Students were either (i) asked to solve problems in which the diagrams were drawn for them or (ii) explicitly told to draw a diagram. A comparison group was not given any instruction regarding diagrams. We developed rubrics to score the problem solving performance of students in different intervention groups and investigated ten problems. We found that students who were provided diagrams never performed better and actually performed worse than the other students on three problems, one involving standing sound waves in a tube (discussed elsewhere) and two problems in electricity which we focus on here. These two problems were the only problems in electricity that involved considerations of initial and final conditions, which may partly account for why students provided with diagrams performed significantly worse than students who were not provided with diagrams. In order to explore potential reasons for this finding, we conducted interviews with students and found that some students provided with diagrams may have spent less time on the conceptual analysis and planning stage of the problem solving process. In particular, those provided with the diagram were more likely to jump into the implementation stage of problem solving early without fully analyzing and understanding the problem, which can increase the likelihood of mistakes in solutions.

  6. Scalable Domain Decomposed Monte Carlo Particle Transport

    Energy Technology Data Exchange (ETDEWEB)

    O'Brien, Matthew Joseph [Univ. of California, Davis, CA (United States)

    2013-12-05

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.

  7. Scalable Open Source Smart Grid Simulator (SGSim)

    DEFF Research Database (Denmark)

    Ebeid, Emad Samuel Malki; Jacobsen, Rune Hylsberg; Stefanni, Francesco

    2017-01-01

    This paper presents an open source smart grid simulator (SGSim). The simulator is based on the open source SystemC Network Simulation Library (SCNSL) and aims to model scalable smart grid applications. SGSim has been tested under different smart grid scenarios that contain hundreds of thousands of households...

  8. Cooperative Scalable Moving Continuous Query Processing

    DEFF Research Database (Denmark)

    Li, Xiaohui; Karras, Panagiotis; Jensen, Christian S.

    2012-01-01

    of the global view and handle the majority of the workload. Meanwhile, moving clients, having basic memory and computation resources, handle small portions of the workload. This model is further enhanced by dynamic region allocation and grid size adjustment mechanisms that reduce the communication...... and computation cost for both servers and clients. An experimental study demonstrates that our approaches offer better scalability than competitors...

  9. Scalable optical switches for computing applications

    NARCIS (Netherlands)

    White, I.H.; Aw, E.T.; Williams, K.A.; Wang, Haibo; Wonfor, A.; Penty, R.V.

    2009-01-01

    A scalable photonic interconnection network architecture is proposed whereby a Clos network is populated with broadcast-and-select stages. This enables the efficient exploitation of an emerging class of photonic integrated switch fabric. A low distortion space switch technology based on recently

  10. Responsive, Flexible and Scalable Broader Impacts (Invited)

    Science.gov (United States)

    Decharon, A.; Companion, C.; Steinman, M.

    2010-12-01

    In many educator professional development workshops, scientists present content in a slideshow-type format and field questions afterwards. Drawbacks of this approach include the inability to begin the lecture with content that is responsive to audience needs, the lack of flexible access to specific material within the linear presentation, and the difficulty of scaling “Q&A” sessions to broader audiences. Often this type of traditional interaction provides little direct benefit to the scientists. The Centers for Ocean Sciences Education Excellence - Ocean Systems (COSEE-OS) applies the technique of concept mapping with demonstrated effectiveness in helping scientists and educators “get on the same page” (deCharon et al., 2009). A key aspect is scientist professional development geared towards improving face-to-face and online communication with non-scientists. COSEE-OS promotes scientist-educator collaboration, tests the application of scientist-educator maps in new contexts through webinars, and is piloting the expansion of maps as long-lived resources for the broader community. Collaboration - COSEE-OS has developed and tested a workshop model bringing scientists and educators together in a peer-oriented process, often clarifying common misconceptions. Scientist-educator teams develop online concept maps that are hyperlinked to “assets” (i.e., images, videos, news) and are responsive to the needs of non-scientist audiences. In workshop evaluations, 91% of educators said that the process of concept mapping helped them think through science topics and 89% said that concept mapping helped build a bridge of communication with scientists (n=53). Application - After developing a concept map, with COSEE-OS staff assistance, scientists are invited to give webinar presentations that include live “Q&A” sessions. The webinars extend the reach of scientist-created concept maps to new contexts, both geographically and topically (e.g., oil spill), with a relatively small

  11. Interactive segmentation: a scalable superpixel-based method

    Science.gov (United States)

    Mathieu, Bérengère; Crouzil, Alain; Puel, Jean-Baptiste

    2017-11-01

    This paper addresses the problem of interactive multiclass segmentation of images. We propose a fast and efficient new interactive segmentation method called superpixel α fusion (SαF). From a few strokes drawn by a user over an image, this method extracts relevant semantic objects. To achieve fast computation and accurate segmentation, SαF uses superpixel oversegmentation and support vector machine classification. We compare SαF with competing algorithms by evaluating its performance on reference benchmarks. We also suggest four new datasets to evaluate the scalability of interactive segmentation methods, using images ranging from a few thousand to several million pixels. We conclude with two applications of SαF.
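
    The superpixel-plus-SVM pipeline underlying this family of methods can be sketched as follows, using SLIC oversegmentation and an RBF-kernel SVM. The mean-color features and the two hypothetical seed superpixels standing in for user strokes are simplifying assumptions, not the published SαF design.

```python
# Oversegment the image into superpixels, train an SVM on superpixels covered
# by (simulated) user strokes, then classify every remaining superpixel.
import numpy as np
from skimage import data, segmentation
from sklearn.svm import SVC

image = data.astronaut()
segments = segmentation.slic(image, n_segments=400, compactness=10)

def mean_color(label):
    """Mean RGB color of one superpixel; a deliberately simplistic feature."""
    return image[segments == label].mean(axis=0)

labels = np.unique(segments)
features = np.array([mean_color(l) for l in labels])

# Pretend the user stroked two regions: one background, one object superpixel.
seeded = {labels[0]: 0, labels[50]: 1}  # hypothetical strokes
clf = SVC(kernel="rbf", gamma="scale")
clf.fit([mean_color(l) for l in seeded], list(seeded.values()))
predicted = clf.predict(features)  # a class for every superpixel
print(predicted[:20])
```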

  12. Scalable and Hybrid Radio Resource Management for Future Wireless Networks

    DEFF Research Database (Denmark)

    Mino, E.; Luo, Jijun; Tragos, E.

    2007-01-01

    The concept of a ubiquitous and scalable system is applied in the IST WINNER II [1] project to deliver optimum performance for different deployment scenarios, from local area to wide area wireless networks. The integration of cellular and local area network types in a single radio system represents... a great advantage for the final user and for the operator, compared with the current situation of disconnected systems, usually with different subscriptions, radio interfaces and terminals. To be a ubiquitous wireless system, the IST project WINNER II has defined three system modes. This contribution...

  13. Using Python to Construct a Scalable Parallel Nonlinear Wave Solver

    KAUST Repository

    Mandli, Kyle

    2011-01-01

    Computational scientists seek to provide efficient, easy-to-use tools and frameworks that enable application scientists within a specific discipline to build and/or apply numerical models with up-to-date computing technologies that can be executed on all available computing systems. Although many tools could be useful for groups beyond a specific application, it is often difficult and time consuming to combine existing software, or to adapt it for a more general purpose. Python enables a high-level approach where a general framework can be supplemented with tools written for different fields and in different languages. This is particularly important when a large number of tools are necessary, as is the case for high performance scientific codes. This motivated our development of PetClaw, a scalable distributed-memory solver for time-dependent nonlinear wave propagation, as a case study of how Python can be used as a high-level framework leveraging a multitude of codes, efficient both in the reuse of code and in programmer productivity. We present scaling results for computations on up to four racks of Shaheen, an IBM BlueGene/P supercomputer at King Abdullah University of Science and Technology. One particularly important issue that PetClaw has faced is the overhead associated with dynamic loading, which leads to catastrophic scaling. We use the walla library to solve this issue; it supplants high-cost filesystem calls with MPI operations at a low enough level that developers may avoid any changes to their codes.
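
    The read-once-and-broadcast idea attributed to walla above can be illustrated with mpi4py: one rank touches the shared filesystem and the bytes reach all other ranks through a collective. This is a hedged illustration of the principle, not walla's actual implementation, and the module filename is hypothetical.

```python
# Replace per-rank filesystem reads with a single read plus an MPI broadcast,
# avoiding the metadata storm of thousands of ranks opening the same file.
from mpi4py import MPI

comm = MPI.COMM_WORLD

def read_once_bcast(path):
    data = None
    if comm.Get_rank() == 0:
        with open(path, "rb") as f:
            data = f.read()
    return comm.bcast(data, root=0)  # every rank receives the same bytes

module_bytes = read_once_bcast("solver_kernels.py")  # hypothetical module file
```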

  14. Understanding the motivations of health-care providers in performing female genital mutilation: an integrative review of the literature.

    Science.gov (United States)

    Doucet, Marie-Hélène; Pallitto, Christina; Groleau, Danielle

    2017-03-23

    Female genital mutilation (FGM) is a traditional harmful practice that can cause severe physical and psychological damages to girls and women. Increasingly, trained health-care providers carry out the practice at the request of families. It is important to understand the motivations of providers in order to reduce the medicalization of FGM. This integrative review identifies, appraises and summarizes qualitative and quantitative literature exploring the factors that are associated with the medicalization of FGM and/or re-infibulation. Literature searches were conducted in PubMed, CINAHL and grey literature databases. Hand searches of identified studies were also examined. The "CASP Qualitative Research Checklist" and the "STROBE Statement" were used to assess the methodological quality of the qualitative and quantitative studies respectively. A total of 354 articles were reviewed for inclusion. Fourteen (14) studies, conducted in countries where FGM is largely practiced as well as in countries hosting migrants from these regions, were included. The main findings about the motivations of health-care providers to practice FGM were: (1) the belief that performing FGM would be less harmful for girls or women than the procedure being performed by a traditional practitioner (the so-called "harm reduction" perspective); (2) the belief that the practice was justified for cultural reasons; (3) the financial gains of performing the procedure; (4) responding to requests of the community or feeling pressured by the community to perform FGM. The main reasons given by health-care providers for not performing FGM were that they (1) are concerned about the risks that FGM can cause for girls' and women's health; (2) are preoccupied by the legal sanctions that might result from performing FGM; and (3) consider FGM to be a "bad practice". The findings of this review can inform public health program planners, policy makers and researchers to adapt or create strategies to end

  15. Scalable Nonlinear AUC Maximization Methods

    OpenAIRE

    Khalid, Majdi; Ray, Indrakshi; Chitsaz, Hamidreza

    2017-01-01

    The area under the ROC curve (AUC) is a measure of interest in various machine learning and data mining applications. It has been widely used to evaluate classification performance on heavily imbalanced data. The kernelized AUC maximization machines have established a superior generalization ability compared to linear AUC machines because of their capability in modeling the complex nonlinear structure underlying most real world-data. However, the high training complexity renders the kernelize...
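
    As background, the AUC measure discussed above can be computed directly from its pairwise definition: the probability that a randomly chosen positive is scored above a randomly chosen negative, with ties counting one half. This reference form is quadratic in the number of samples, which is exactly why scalable maximization methods matter.

```python
# Reference AUC from the pairwise definition; O(n_pos * n_neg), so suitable
# only as a correctness check against scalable implementations.
import numpy as np

def auc_pairwise(y_true, scores):
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

y = np.array([1, 1, 0, 0, 1])
s = np.array([0.9, 0.4, 0.35, 0.8, 0.7])
print(auc_pairwise(y, s))  # 0.666..., matches sklearn.metrics.roc_auc_score(y, s)
```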

  16. Scalable video on demand adaptive Internet-based distribution

    CERN Document Server

    Zink, Michael

    2013-01-01

    In recent years, the proliferation of available video content and the popularity of the Internet have encouraged service providers to develop new ways of distributing content to clients. Increasing video scaling ratios and advanced digital signal processing techniques have led to Internet Video-on-Demand applications, but these currently lack efficiency and quality. Scalable Video on Demand: Adaptive Internet-based Distribution examines how current video compression and streaming can be used to deliver high-quality applications over the Internet. In addition to analysing the problems

  17. Scalable web services for the PSIPRED Protein Analysis Workbench.

    Science.gov (United States)

    Buchan, Daniel W A; Minneci, Federico; Nugent, Tim C O; Bryson, Kevin; Jones, David T

    2013-07-01

    Here, we present the new UCL Bioinformatics Group's PSIPRED Protein Analysis Workbench. The Workbench unites all of our previously available analysis methods into a single web-based framework. The new web portal provides a greatly streamlined user interface with a number of new features to allow users to better explore their results. We offer a number of additional services to enable computationally scalable execution of our prediction methods; these include SOAP and XML-RPC web server access and new HADOOP packages. All software and services are available via the UCL Bioinformatics Group website at http://bioinf.cs.ucl.ac.uk/.
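
    Programmatic access via XML-RPC, one of the service interfaces the abstract mentions, needs nothing beyond Python's standard library; the endpoint URL and method name below are hypothetical placeholders, not the documented PSIPRED interface.

```python
# Sketch of an XML-RPC client call; constructing the proxy does not contact
# the network, so this runs as-is with the placeholder URL.
import xmlrpc.client

proxy = xmlrpc.client.ServerProxy("https://example.org/psipred-rpc")  # placeholder URL

# A hypothetical submission call; uncomment against a real endpoint:
# job_id = proxy.submit_prediction("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
# print("submitted job:", job_id)
```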

  18. Focal plane array with modular pixel array components for scalability

    Science.gov (United States)

    Kay, Randolph R; Campbell, David V; Shinde, Subhash L; Rienstra, Jeffrey L; Serkland, Darwin K; Holmes, Michael L

    2014-12-09

    A modular, scalable focal plane array is provided as an array of integrated circuit dice, wherein each die includes a given amount of modular pixel array circuitry. The array of dice effectively multiplies the amount of modular pixel array circuitry to produce a larger pixel array without increasing die size. Desired pixel pitch across the enlarged pixel array is preserved by forming die stacks with each pixel array circuitry die stacked on a separate die that contains the corresponding signal processing circuitry. Techniques for die stack interconnections and die stack placement are implemented to ensure that the desired pixel pitch is preserved across the enlarged pixel array.

  19. On the assessment of performance and emissions characteristics of a SI engine provided with a laser ignition system

    Science.gov (United States)

    Birtas, A.; Boicea, N.; Draghici, F.; Chiriac, R.; Croitoru, G.; Dinca, M.; Dascalu, T.; Pavel, N.

    2017-10-01

    Performance and exhaust emissions of spark ignition engines are strongly dependent on the development of the combustion process. Controlling this process in order to improve performance and reduce emissions by ensuring rapid and robust combustion depends on how the ignition stage is achieved. An ignition system that seems able to provide such an enhanced combustion process is one based on plasma generation using a Q-switched solid-state laser that delivers pulses with high peak power (of MW-order level). The laser-spark devices used in the present investigations were realized using compact diffusion-bonded Nd:YAG/Cr4+:YAG ceramic media. The laser igniter was designed, integrated and built to resemble a classical spark plug and therefore could be mounted directly on the cylinder head of a passenger car engine. This study reports the results obtained using such an ignition system fitted to a K7M 710 engine currently produced by Renault-Dacia, where the standard calibrations were shifted towards the lean-mixture combustion zone. Results regarding the performance, the exhaust emissions and the combustion characteristics under optimized spark timing conditions, which demonstrate the potential of such an innovative ignition system, are presented.

  20. What information is provided in transcripts and Medical Student Performance Records from Canadian Medical Schools? A retrospective cohort study.

    Science.gov (United States)

    Robins, Jason A; McInnes, Matthew D F; Esmail, Kaisra

    2014-01-01

    Resident selection committees must rely on information provided by medical schools in order to evaluate candidates. However, this information varies between institutions, limiting its value in comparing individuals and fairly assessing their quality. This study investigates what is included in candidates' documentation, the heterogeneity therein, and the extent of its objective data. Samples of recent transcripts and Medical Student Performance Records were anonymised prior to evaluation. Data were then extracted by two independent reviewers blinded to the submitting university, assessing for the presence of pre-selected criteria; disagreement was resolved through consensus. The data were subsequently analysed in multiple subgroups. Inter-rater agreement equalled 92%. Inclusion of important criteria varied by school, ranging from 22.2% inclusion to 70.4%; the mean equalled 47.4%. The frequency of specific criteria was highly variable as well. Only 17.7% of schools provided any basis for comparison of academic performance; the majority detailed only pass or fail status, without any further qualification. Considerable heterogeneity exists in the information provided in official medical school documentation, which also contains remarkably little objective data. Standardization may be necessary in order to facilitate fair comparison of graduates from different institutions. Implementation of objective data may allow more effective intra- and inter-scholastic comparison.

  1. What information is provided in transcripts and Medical Student Performance Records from Canadian Medical Schools? A retrospective cohort study

    Directory of Open Access Journals (Sweden)

    Jason A. Robins

    2014-09-01

    Full Text Available Background: Resident selection committees must rely on information provided by medical schools in order to evaluate candidates. However, this information varies between institutions, limiting its value in comparing individuals and fairly assessing their quality. This study investigates what is included in candidates’ documentation, the heterogeneity therein, and the extent of its objective data. Methods: Samples of recent transcripts and Medical Student Performance Records were anonymised prior to evaluation. Data were then extracted by two independent reviewers blinded to the submitting university, assessing for the presence of pre-selected criteria; disagreement was resolved through consensus. The data were subsequently analysed in multiple subgroups. Results: Inter-rater agreement equalled 92%. Inclusion of important criteria varied by school, ranging from 22.2% inclusion to 70.4%; the mean equalled 47.4%. The frequency of specific criteria was highly variable as well. Only 17.7% of schools provided any basis for comparison of academic performance; the majority detailed only pass or fail status, without any further qualification. Conclusions: Considerable heterogeneity exists in the information provided in official medical school documentation, which also contains remarkably little objective data. Standardization may be necessary in order to facilitate fair comparison of graduates from different institutions. Implementation of objective data may allow more effective intra- and inter-scholastic comparison.

  2. A scalable neural chip with synaptic electronics using CMOS integrated memristors

    International Nuclear Information System (INIS)

    Cruz-Albrecht, Jose M; Derosier, Timothy; Srinivasa, Narayan

    2013-01-01

    The design and simulation of a scalable neural chip with synaptic electronics using nanoscale memristors fully integrated with complementary metal–oxide–semiconductor (CMOS) is presented. The circuit consists of integrate-and-fire neurons and synapses with spike-timing dependent plasticity (STDP). The synaptic conductance values can be stored in memristors with eight levels, and the topology of connections between neurons is reconfigurable. The circuit has been designed using a 90 nm CMOS process with via connections to on-chip post-processed memristor arrays. The design has about 16 million CMOS transistors and 73 728 integrated memristors. We provide circuit level simulations of the entire chip performing neuronal and synaptic computations that result in biologically realistic functional behavior. (paper)
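
    A compact numerical sketch of the two computations the chip implements: a leaky integrate-and-fire neuron and a pair-based STDP update quantized to eight conductance levels, mirroring the 8-level memristor synapses. The time constants and learning rates are illustrative assumptions, not values from the paper.

```python
# Leaky integrate-and-fire dynamics plus pair-based STDP with 8-level weights.
import numpy as np

def lif_step(v, i_syn, dt=1e-3, tau=20e-3, v_thresh=1.0, v_reset=0.0):
    """One Euler step of a leaky integrate-and-fire neuron; returns (v, spiked)."""
    v = v + dt * (-v / tau + i_syn)
    if v >= v_thresh:
        return v_reset, True
    return v, False

def stdp_update(w, dt_spike, a_plus=0.05, a_minus=0.05, tau_stdp=20e-3, levels=8):
    """Pair-based STDP: potentiate if pre precedes post (dt_spike > 0), else
    depress; the result is snapped to `levels` discrete states in [0, 1]."""
    if dt_spike > 0:
        w += a_plus * np.exp(-dt_spike / tau_stdp)
    else:
        w -= a_minus * np.exp(dt_spike / tau_stdp)
    w = min(max(w, 0.0), 1.0)
    return round(w * (levels - 1)) / (levels - 1)

v, spikes = 0.0, 0
for _ in range(100):                    # 100 ms of constant drive
    v, fired = lif_step(v, i_syn=60.0)
    spikes += fired
print("spikes in 100 ms:", spikes)
print(stdp_update(0.5, +5e-3))          # causal pair -> potentiation on the 1/7 grid
```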

  3. Scalable Multicasting over Next-Generation Internet Design, Analysis and Applications

    CERN Document Server

    Tian, Xiaohua

    2013-01-01

    Next-generation Internet providers face high expectations, as contemporary users worldwide expect high-quality multimedia functionality in a landscape of ever-expanding network applications. This volume explores the critical research issue of turning today’s greatly enhanced hardware capacity to good use in designing a scalable multicast  protocol for supporting large-scale multimedia services. Linking new hardware to improved performance in the Internet’s next incarnation is a research hot-spot in the computer communications field.   The methodical presentation deals with the key questions in turn: from the mechanics of multicast protocols to current state-of-the-art designs, and from methods of theoretical analysis of these protocols to applying them in the ns2 network simulator, known for being hard to extend. The authors’ years of research in the field inform this thorough treatment, which covers details such as applying AOM (application-oriented multicast) protocol to IPTV provision and resolving...

  4. PetClaw: A scalable parallel nonlinear wave propagation solver for Python

    KAUST Repository

    Alghamdi, Amal; Ahmadia, Aron; Ketcheson, David I.; Knepley, Matthew; Mandli, Kyle; Dalcin, Lisandro

    2011-01-01

    We present PetClaw, a scalable distributed-memory solver for time-dependent nonlinear wave propagation. PetClaw unifies two well-known scientific computing packages, Clawpack and PETSc, using Python interfaces into both. We rely on Clawpack to provide the infrastructure and kernels for time-dependent nonlinear wave propagation. Similarly, we rely on PETSc to manage distributed data arrays and the communication between them. We describe both the implementation and performance of PetClaw as well as our challenges and accomplishments in scaling a Python-based code to tens of thousands of cores on the BlueGene/P architecture. The capabilities of PetClaw are demonstrated through application to a novel problem involving elastic waves in a heterogeneous medium. Very finely resolved simulations are used to demonstrate the suppression of shock formation in this system.

  5. Scalable Motion Estimation Processor Core for Multimedia System-on-Chip Applications

    Science.gov (United States)

    Lai, Yeong-Kang; Hsieh, Tian-En; Chen, Lien-Fei

    2007-04-01

    In this paper, we describe a high-throughput and scalable motion estimation processor architecture for multimedia system-on-chip applications. The number of processing elements (PEs) is scalable according to the variable algorithm parameters and the performance required for different applications. Using the PE rings efficiently and an intelligent memory-interleaving organization, the efficiency of the architecture can be increased. Moreover, using efficient on-chip memories and a data management technique can effectively decrease the power consumption and memory bandwidth. Techniques for reducing the number of interconnections and external memory accesses are also presented. Our results demonstrate that the proposed scalable PE-ringed architecture is a flexible and high-performance processor core in multimedia system-on-chip applications.
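
    The core computation a motion-estimation PE array accelerates is full-search block matching under a sum-of-absolute-differences (SAD) cost; a plain NumPy reference version makes the data access pattern that memory interleaving targets explicit. The block size and search radius below are arbitrary.

```python
# Full-search block matching: scan a +/- radius window in the reference frame
# and return the displacement minimizing the SAD against the current block.
import numpy as np

def full_search(cur_block, ref_frame, top, left, radius=4):
    """Return ((dy, dx), sad) minimizing SAD within +/- radius pixels."""
    n = cur_block.shape[0]
    best, best_sad = (0, 0), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + n > ref_frame.shape[0] or x + n > ref_frame.shape[1]:
                continue  # candidate block falls outside the frame
            sad = np.abs(cur_block.astype(int) - ref_frame[y:y+n, x:x+n].astype(int)).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best, best_sad

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
cur = np.roll(ref, (2, -1), axis=(0, 1))            # synthetic motion
print(full_search(cur[16:24, 16:24], ref, 16, 16))  # recovers ((-2, 1), 0)
```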

  6. Laplacian embedded regression for scalable manifold regularization.

    Science.gov (United States)

    Chen, Lin; Tsang, Ivor W; Xu, Dong

    2012-06-01

    Semi-supervised learning (SSL), as a powerful tool to learn from a limited number of labeled data and a large number of unlabeled data, has been attracting increasing attention in the machine learning community. In particular, the manifold regularization framework has laid solid theoretical foundations for a large family of SSL algorithms, such as Laplacian support vector machine (LapSVM) and Laplacian regularized least squares (LapRLS). However, most of these algorithms are limited to small scale problems due to the high computational cost of the matrix inversion operation involved in the optimization problem. In this paper, we propose a novel framework called Laplacian embedded regression by introducing an intermediate decision variable into the manifold regularization framework. By using the ε-insensitive loss, we obtain the Laplacian embedded support vector regression (LapESVR) algorithm, which inherits the sparse solution from SVR. Also, we derive Laplacian embedded RLS (LapERLS) corresponding to RLS under the proposed framework. Both LapESVR and LapERLS possess a simpler form of a transformed kernel, which is the summation of the original kernel and a graph kernel that captures the manifold structure. The benefits of the transformed kernel are two-fold: (1) we can deal with the original kernel matrix and the graph Laplacian matrix in the graph kernel separately and (2) if the graph Laplacian matrix is sparse, we only need to perform the inverse operation for a sparse matrix, which is much more efficient when compared with that for a dense one. Inspired by kernel principal component analysis, we further propose to project the introduced decision variable into a subspace spanned by a few eigenvectors of the graph Laplacian matrix in order to better reflect the data manifold, as well as accelerate the calculation of the graph kernel, allowing our methods to efficiently and effectively cope with large scale SSL problems. Extensive experiments on both toy and real
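
    A hedged sketch of the transformed-kernel idea: augment a standard RBF kernel with a graph kernel derived from the Laplacian of a k-nearest-neighbor graph, so their sum reflects the data manifold. The specific graph kernel (a regularized Laplacian inverse) and the weights eps and gamma are assumptions for illustration; the paper's construction differs in detail.

```python
# Transformed kernel = original kernel + graph kernel built from the Laplacian
# of a k-NN graph over the (labeled plus unlabeled) data.
import numpy as np
from sklearn.neighbors import kneighbors_graph
from sklearn.metrics.pairwise import rbf_kernel
from scipy.sparse.csgraph import laplacian

X = np.random.default_rng(1).normal(size=(100, 2))
K = rbf_kernel(X, gamma=0.5)                        # original kernel

W = kneighbors_graph(X, n_neighbors=5, mode="connectivity")
W = 0.5 * (W + W.T)                                 # symmetrize the k-NN graph
L = laplacian(W).toarray()

eps, gamma = 1e-3, 0.1                              # illustrative weights
K_graph = np.linalg.inv(L + eps * np.eye(len(X)))   # smooth-on-graph kernel
K_transformed = K + gamma * K_graph                 # sum used in place of K
print(K_transformed.shape)                          # (100, 100)
```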

  7. Scalable manufacturing processes with soft materials

    OpenAIRE

    White, Edward; Case, Jennifer; Kramer, Rebecca

    2014-01-01

    The emerging field of soft robotics will benefit greatly from new scalable manufacturing techniques for responsive materials. Currently, most soft robotic examples are fabricated one at a time, using techniques borrowed from lithography and 3D printing to fabricate molds. This limits both the maximum and minimum size of robots that can be fabricated, and hinders batch production, which is critical to gaining wider acceptance for soft robotic systems. We have identified electrical structures, ...

  8. Architecture Knowledge for Evaluating Scalable Databases

    Science.gov (United States)

    2015-01-16

    Nurgaliev... The report develops architecture knowledge for evaluating scalable databases, cataloguing feature criteria such as supported query languages (Scala, Erlang, Javascript), cursor-based queries (Supported / Not Supported), JOIN queries (Supported / Not Supported), and complex data types (lists, maps, sets). ...is therefore needed, using technology such as machine learning to extract content from product documentation. The terminology used in the database

  9. Bitcoin-NG: A Scalable Blockchain Protocol

    OpenAIRE

    Eyal, Ittay; Gencer, Adem Efe; Sirer, Emin Gun; van Renesse, Robbert

    2015-01-01

    Cryptocurrencies, based on and led by Bitcoin, have shown promise as infrastructure for pseudonymous online payments, cheap remittance, trustless digital asset exchange, and smart contracts. However, Bitcoin-derived blockchain protocols have inherent scalability limits that trade off between throughput and latency and withhold the realization of this potential. This paper presents Bitcoin-NG, a new blockchain protocol designed to scale. Based on Bitcoin's blockchain protocol, Bitcoin-NG is By...

  10. Scalable fast multipole methods for vortex element methods

    KAUST Repository

    Hu, Qi

    2012-11-01

    We use a particle-based method to simulate incompressible flows, where the Fast Multipole Method (FMM) is used to accelerate the calculation of particle interactions. The most time-consuming kernels, the Biot-Savart equation and the stretching term of the vorticity equation, are mathematically reformulated so that only two Laplace scalar potentials are used instead of six, while automatically ensuring divergence-free far-field computation. Based on this formulation, and on our previous work on a scalar heterogeneous FMM algorithm, we develop a new FMM-based vortex method capable of simulating general flows, including turbulence, on heterogeneous architectures, which distributes the work between multi-core CPUs and GPUs to best utilize the hardware resources and achieve excellent scalability. The algorithm also uses new data structures which can dynamically manage inter-node communication and load balance efficiently, with only a small parallel construction overhead. The algorithm can scale to large-sized clusters showing both strong and weak scalability. A careful error and timing trade-off analysis is also performed for the cutoff functions induced by the vortex particle method. Our implementation can perform one time step of the velocity+stretching calculation for one billion particles on 32 nodes in 55.9 seconds, which yields 49.12 Tflop/s. © 2012 IEEE.
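
    The Biot-Savart kernel named above, evaluated directly for N vortex particles, is the O(N^2) computation the FMM accelerates; a direct NumPy version is shown below. The smoothing parameter delta stands in for the paper's cutoff function and is an illustrative assumption.

```python
# Direct O(N^2) Biot-Savart evaluation: u(x) = (1/4*pi) * sum_j alpha_j x (x - x_j) / |x - x_j|^3,
# with a small smoothing delta to regularize self-interactions.
import numpy as np

def biot_savart(x_eval, x_src, alpha, delta=1e-2):
    """Velocity induced at x_eval by vortex particles at x_src with strengths alpha."""
    r = x_eval[:, None, :] - x_src[None, :, :]           # pairwise separations
    dist2 = (r ** 2).sum(-1) + delta ** 2                # smoothed |r|^2
    kernel = np.cross(alpha[None, :, :], r) / dist2[..., None] ** 1.5
    return kernel.sum(axis=1) / (4 * np.pi)

rng = np.random.default_rng(2)
x = rng.normal(size=(1000, 3))
alpha = rng.normal(size=(1000, 3)) * 1e-3
u = biot_savart(x, x, alpha)   # velocity at every particle
print(u.shape)                 # (1000, 3)
```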

  11. A scalable distributed RRT for motion planning

    KAUST Repository

    Jacobs, Sam Ade

    2013-05-01

    Rapidly-exploring Random Tree (RRT), like other sampling-based motion planning methods, has been very successful in solving motion planning problems. Even so, sampling-based planners cannot solve all problems of interest efficiently, so attention is increasingly turning to parallelizing them. However, one challenge in parallelizing RRT is the global computation and communication overhead of nearest neighbor search, a key operation in RRTs. This is a critical issue as it limits the scalability of previous algorithms. We present two parallel algorithms to address this problem. The first algorithm extends existing work by introducing a parameter that adjusts how much local computation is done before a global update. The second algorithm radially subdivides the configuration space into regions, constructs a portion of the tree in each region in parallel, and connects the subtrees, removing cycles if they exist. By subdividing the space, we increase computation locality, enabling a scalable result. We show that our approaches are scalable. We present results demonstrating almost linear scaling to hundreds of processors on a Linux cluster and a Cray XE6 machine. © 2013 IEEE.
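
    A minimal serial RRT in a 2-D unit square makes concrete the nearest-neighbor step whose global cost the two parallel algorithms above attack; obstacle checking is omitted and all step-size parameters are arbitrary.

```python
# Serial RRT: sample a point, find the nearest tree node (the bottleneck that
# parallel variants distribute), and extend the tree one step toward it.
import random, math

def rrt(start, goal, iters=2000, step=0.05, goal_tol=0.05):
    nodes, parent = [start], {0: None}
    for _ in range(iters):
        sample = (random.random(), random.random())
        # Nearest-neighbor search over all tree nodes.
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), sample)
        if d <= step:
            new = sample
        else:
            new = (nx + step * (sample[0] - nx) / d, ny + step * (sample[1] - ny) / d)
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:
            return nodes, parent  # a path is recovered by following parents
    return nodes, parent

nodes, parent = rrt((0.1, 0.1), (0.9, 0.9))
print(len(nodes), "tree nodes")
```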

  12. A scalable distributed RRT for motion planning

    KAUST Repository

    Jacobs, Sam Ade; Stradford, Nicholas; Rodriguez, Cesar; Thomas, Shawna; Amato, Nancy M.

    2013-01-01

    Rapidly-exploring Random Tree (RRT), like other sampling-based motion planning methods, has been very successful in solving motion planning problems. Even so, sampling-based planners cannot solve all problems of interest efficiently, so attention is increasingly turning to parallelizing them. However, one challenge in parallelizing RRT is the global computation and communication overhead of nearest neighbor search, a key operation in RRTs. This is a critical issue as it limits the scalability of previous algorithms. We present two parallel algorithms to address this problem. The first algorithm extends existing work by introducing a parameter that adjusts how much local computation is done before a global update. The second algorithm radially subdivides the configuration space into regions, constructs a portion of the tree in each region in parallel, and connects the subtrees, removing cycles if they exist. By subdividing the space, we increase computation locality, enabling a scalable result. We show that our approaches are scalable. We present results demonstrating almost linear scaling to hundreds of processors on a Linux cluster and a Cray XE6 machine. © 2013 IEEE.

  13. Scalable robotic biofabrication of tissue spheroids

    International Nuclear Information System (INIS)

    Mehesz, A Nagy; Hajdu, Z; Visconti, R P; Markwald, R R; Mironov, V; Brown, J; Beaver, W; Da Silva, J V L

    2011-01-01

    Development of methods for scalable biofabrication of uniformly sized tissue spheroids is essential for tissue spheroid-based bioprinting of large size tissue and organ constructs. The most recent scalable technique for tissue spheroid fabrication employs a micromolded recessed template prepared in a non-adhesive hydrogel, wherein the cells loaded into the template self-assemble into tissue spheroids due to gravitational force. In this study, we present an improved version of this technique. A new mold was designed to enable generation of 61 microrecessions in each well of a 96-well plate. The microrecessions were seeded with cells using an EpMotion 5070 automated pipetting machine. After 48 h of incubation, tissue spheroids formed at the bottom of each microrecession. To assess the quality of constructs generated using this technology, 600 tissue spheroids made by this method were compared with 600 spheroids generated by the conventional hanging drop method. These analyses showed that tissue spheroids fabricated by the micromolded method are more uniform in diameter. Thus, use of micromolded recessions in a non-adhesive hydrogel, combined with automated cell seeding, is a reliable method for scalable robotic fabrication of uniform-sized tissue spheroids.

  14. Scalable robotic biofabrication of tissue spheroids

    Energy Technology Data Exchange (ETDEWEB)

    Mehesz, A Nagy; Hajdu, Z; Visconti, R P; Markwald, R R; Mironov, V [Advanced Tissue Biofabrication Center, Department of Regenerative Medicine and Cell Biology, Medical University of South Carolina, Charleston, SC (United States); Brown, J [Department of Mechanical Engineering, Clemson University, Clemson, SC (United States); Beaver, W [York Technical College, Rock Hill, SC (United States); Da Silva, J V L, E-mail: mironovv@musc.edu [Renato Archer Information Technology Center-CTI, Campinas (Brazil)

    2011-06-15

    Development of methods for scalable biofabrication of uniformly sized tissue spheroids is essential for tissue spheroid-based bioprinting of large size tissue and organ constructs. The most recent scalable technique for tissue spheroid fabrication employs a micromolded recessed template prepared in a non-adhesive hydrogel, wherein the cells loaded into the template self-assemble into tissue spheroids due to gravitational force. In this study, we present an improved version of this technique. A new mold was designed to enable generation of 61 microrecessions in each well of a 96-well plate. The microrecessions were seeded with cells using an EpMotion 5070 automated pipetting machine. After 48 h of incubation, tissue spheroids formed at the bottom of each microrecession. To assess the quality of constructs generated using this technology, 600 tissue spheroids made by this method were compared with 600 spheroids generated by the conventional hanging drop method. These analyses showed that tissue spheroids fabricated by the micromolded method are more uniform in diameter. Thus, use of micromolded recessions in a non-adhesive hydrogel, combined with automated cell seeding, is a reliable method for scalable robotic fabrication of uniform-sized tissue spheroids.

  15. Maternal and newborn healthcare providers in rural Tanzania: in-depth interviews exploring influences on motivation, performance and job satisfaction.

    Science.gov (United States)

    Prytherch, H; Kakoko, D C V; Leshabari, M T; Sauerborn, R; Marx, M

    2012-01-01

    Major improvements in maternal and neonatal health (MNH) remain elusive in Tanzania. The causes are closely related to the health system and overall human resource policy. Just 35% of the required workforce is actually in place and 43% of available staff consists of lower-level cadres such as auxiliaries. Staff motivation is also a challenge. In rural areas the problems of recruiting and retaining health staff are most pronounced. Yet, it is here that the majority of the population continues to reside. A detailed understanding of the influences on the motivation, performance and job satisfaction of providers at rural, primary level facilities was sought to inform a research project in its early stages. The providers approached were those found to be delivering MNH care on the ground, and thus include auxiliary staff. Much of the previous work on motivation has focused on defined professional groups such as physicians and nurses. While attention has recently broadened to also include mid-level providers, the views of auxiliary health workers have seldom been explored. In-depth interviews were the methodology of choice. An interview guideline was prepared with the involvement of Tanzanian psychologists, sociologists and health professionals to ensure the instrument was rooted in the socio-cultural setting of its application. Interviews were conducted with 25 MNH providers, 8 facility and district managers, and 2 policy-makers. Key sources of encouragement for all the types of respondents included community appreciation, perceived government and development partner support for MNH, and on-the-job learning. Discouragements were overwhelmingly financial in nature, but also included facility understaffing and the resulting workload, malfunction of the promotion system as well as health and safety, and security issues. Low-level cadres were found to be particularly discouraged. Difficulties and weaknesses in the management of rural facilities were revealed. Basic steps

  16. VPLS: an effective technology for building scalable transparent LAN services

    Science.gov (United States)

    Dong, Ximing; Yu, Shaohua

    2005-02-01

    Virtual Private LAN Service (VPLS) is generating considerable interest with enterprises and service providers as it offers multipoint transparent LAN service (TLS) over MPLS networks. This paper describes an effective technology - VPLS, which links virtual switch instances (VSIs) through MPLS to form an emulated Ethernet switch and build scalable transparent LAN services. It first focuses on the architecture of VPLS, with Ethernet bridging techniques at the edge and MPLS at the core, then elucidates the data forwarding mechanism within a VPLS domain, including learning and aging MAC addresses on a per-LSP basis, flooding of unknown frames, and replication of unknown, multicast, and broadcast frames. The loop-avoidance mechanism, known as split horizon forwarding, is also analyzed. Another important aspect of the VPLS service, its basic operation, including autodiscovery and signaling, is discussed as well. From the perspective of efficiency and scalability, the paper compares two important signaling mechanisms, BGP and LDP, which are used to set up a PW between the PEs and bind the PWs to a particular VSI. With the extension of VPLS and the growth of the full mesh of PWs between PE devices (n*(n-1)/2 PWs in all, an O(n^2) scaling problem), a VPLS instance could have a large number of remote PE associations, resulting in an inefficient use of network bandwidth and system resources, as the ingress PE has to replicate each frame and append MPLS labels for each remote PE. So the latter part of this paper focuses on the scalability issue: Hierarchical VPLS. Within the H-VPLS architecture, this paper addresses two ways to cope with a possibly large number of MAC addresses, which make VPLS operate more efficiently.
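
    The mesh-growth arithmetic behind the scalability concern is easy to check; the sketch below compares the flat full mesh with a simplified two-tier hierarchy in which each spoke PE holds a single pseudowire to a hub, a hedged model rather than the exact H-VPLS design.

```python
# Pseudowire counts: flat VPLS needs a full mesh of n*(n-1)/2 PWs, while a
# simplified two-tier hierarchy needs a full mesh among hubs plus one spoke
# PW per non-hub PE.
def flat_pws(n):
    return n * (n - 1) // 2

def hvpls_pws(n, hubs):
    return hubs * (hubs - 1) // 2 + (n - hubs)

for n in (10, 100, 500):
    print(n, flat_pws(n), hvpls_pws(n, hubs=10))
# e.g. 100 PEs: 4950 PWs flat vs 135 in the simplified hierarchy
```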

  17. Radiologic image communication and archive service: a secure, scalable, shared approach

    Science.gov (United States)

    Fellingham, Linda L.; Kohli, Jagdish C.

    1995-11-01

    The Radiologic Image Communication and Archive (RICA) service is designed to provide a shared archive for medical images to the widest possible audience of customers. Images are acquired from a number of different modalities, each available from many different vendors. Images are acquired digitally from those modalities which support direct digital output, and by digitizing films for projection x-ray exams. The RICA Central Archive receives standard DICOM 3.0 messages and data streams from the medical imaging devices at customer institutions over the public telecommunication network. RICA represents a completely scalable resource. The user pays only for what he is using today, with the full assurance that as the volume of image data he wishes to send to the archive increases, the capacity will be there to accept it. Providing this seamless scalability imposes several requirements on the RICA architecture: (1) RICA must support the full array of transport services. (2) The Archive Interface must scale cost-effectively to support local networks that range from the very small (one x-ray digitizer in a medical clinic) to the very large and complex (a large hospital with several CTs, MRs, nuclear medicine devices, ultrasound machines, CRs, and x-ray digitizers). (3) The Archive Server must scale cost-effectively to support rapidly increasing demands for service, providing storage for and access to millions of patients and hundreds of millions of images. The architecture must support the incorporation of improved technology as it becomes available to maintain performance and remain cost-effective as demand rises.

  18. The Node Monitoring Component of a Scalable Systems Software Environment

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Samuel James [Iowa State Univ., Ames, IA (United States)

    2006-01-01

    This research describes Fountain, a suite of programs used to monitor the resources of a cluster. A cluster is a collection of individual computers that are connected via a high speed communication network. They are traditionally used by users who desire more resources, such as processing power and memory, than any single computer can provide. A common drawback to effectively utilizing such a large-scale system is the management infrastructure, which often does not scale well as the system grows. Large-scale parallel systems provide new research challenges in the area of systems software, the programs or tools that manage the system from boot-up to running a parallel job. The approach presented in this thesis utilizes a collection of separate components that communicate with each other to achieve a common goal. While systems software comprises a broad array of components, this thesis focuses on the design choices for a node monitoring component. We will describe Fountain, an implementation of the Scalable Systems Software (SSS) node monitor specification. It is targeted at aggregate node monitoring for clusters, focusing on both scalability and fault tolerance as its design goals. It leverages widely used technologies such as XML and HTTP to present an interface to other components in the SSS environment.
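
    In the spirit of a node monitor that exposes per-node state over HTTP, a minimal standard-library sketch is shown below; each node serves a small status document that an aggregator can poll. Fountain itself follows the SSS specification and uses XML, whereas this toy version emits JSON on an arbitrary port, both assumptions.

```python
# Minimal per-node monitor: serve the current load averages as JSON over HTTP.
# Uses os.getloadavg()/os.uname(), so it is Unix-only.
import json, os
from http.server import BaseHTTPRequestHandler, HTTPServer

class NodeStatus(BaseHTTPRequestHandler):
    def do_GET(self):
        load1, load5, load15 = os.getloadavg()
        body = json.dumps({"host": os.uname().nodename,
                           "load1": load1, "load5": load5,
                           "load15": load15}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8125), NodeStatus).serve_forever()  # port 8125 is arbitrary
```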

  19. Exploiting Redundancy and Application Scalability for Cost-Effective, Time-Constrained Execution of HPC Applications on Amazon EC2

    International Nuclear Information System (INIS)

    Marathe, Aniruddha P.; Harris, Rachel A.; Lowenthal, David K.; Supinski, Bronis R. de; Rountree, Barry L.; Schulz, Martin

    2015-01-01

    The use of clouds to execute high-performance computing (HPC) applications has greatly increased recently. Clouds provide several potential advantages over traditional supercomputers and in-house clusters. The most popular cloud is currently Amazon EC2, which provides fixed-cost and variable-cost, auction-based options. The auction market trades lower cost for potential interruptions that necessitate checkpointing; if the market price exceeds the bid price, a node is taken away from the user without warning. We explore techniques to maximize performance per dollar given a time constraint within which an application must complete. Specifically, we design and implement multiple techniques to reduce expected cost by exploiting redundancy in the EC2 auction market. We then design an adaptive algorithm that selects a scheduling algorithm and determines the bid price. We show that our adaptive algorithm executes programs up to seven times cheaper than using the on-demand market and up to 44 percent cheaper than the best non-redundant, auction-market algorithm. We extend our adaptive algorithm to incorporate application scalability characteristics for further cost savings. In conclusion, we show that the adaptive algorithm informed with scalability characteristics of applications achieves up to 56 percent cost savings compared to the expected cost for the base adaptive algorithm run at a fixed, user-defined scale.
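
    The trade-off the bidding algorithms optimize can be seen in a toy expected-cost model: auction nodes are cheaper per hour, but each interruption forces rework from the last checkpoint. All prices and probabilities below are invented for illustration.

```python
# Toy expected-cost model: billed time is the nominal runtime plus the expected
# rework hours caused by interruptions.
def expected_cost(price_per_hour, hours, p_interrupt_per_hour=0.0, rework_hours=0.5):
    expected_interruptions = p_interrupt_per_hour * hours
    return price_per_hour * (hours + expected_interruptions * rework_hours)

on_demand = expected_cost(1.00, hours=10)
spot      = expected_cost(0.30, hours=10, p_interrupt_per_hour=0.2, rework_hours=0.5)
print(f"on-demand ${on_demand:.2f} vs spot ${spot:.2f}")  # $10.00 vs $3.30
```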

  20. Scalable and balanced dynamic hybrid data assimilation

    Science.gov (United States)

    Kauranne, Tuomo; Amour, Idrissa; Gunia, Martin; Kallio, Kari; Lepistö, Ahti; Koponen, Sampsa

    2017-04-01

    Scalability of complex weather forecasting suites is dependent on the technical tools available for implementing highly parallel computational kernels, but to an equally large extent also on the dependence patterns between various components of the suite, such as observation processing, data assimilation and the forecast model. Scalability is a particular challenge for 4D variational assimilation methods that necessarily couple the forecast model into the assimilation process and subject this combination to an inherently serial quasi-Newton minimization process. Ensemble based assimilation methods are naturally more parallel, but large models force ensemble sizes to be small and that results in poor assimilation accuracy, somewhat akin to shooting with a shotgun in a million-dimensional space. The Variational Ensemble Kalman Filter (VEnKF) is an ensemble method that can attain the accuracy of 4D variational data assimilation with a small ensemble size. It achieves this by processing a Gaussian approximation of the current error covariance distribution, instead of a set of ensemble members, analogously to the Extended Kalman Filter EKF. Ensemble members are re-sampled every time a new set of observations is processed from a new approximation of that Gaussian distribution which makes VEnKF a dynamic assimilation method. After this a smoothing step is applied that turns VEnKF into a dynamic Variational Ensemble Kalman Smoother VEnKS. In this smoothing step, the same process is iterated with frequent re-sampling of the ensemble but now using past iterations as surrogate observations until the end result is a smooth and balanced model trajectory. In principle, VEnKF could suffer from similar scalability issues as 4D-Var. However, this can be avoided by isolating the forecast model completely from the minimization process by implementing the latter as a wrapper code whose only link to the model is calling for many parallel and totally independent model runs, all of them
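
    For background, the analysis step of a plain stochastic ensemble Kalman filter is sketched below; VEnKF instead propagates a Gaussian approximation of the error covariance and re-samples members from it, so treat this as the baseline update the method refines, not the method of the abstract itself.

```python
# One stochastic EnKF analysis step with perturbed observations.
import numpy as np

def enkf_analysis(X, y, H, r_var, rng):
    """X: (n_state, n_ens) forecast ensemble; y: observations; H: obs operator."""
    n_obs, n_ens = len(y), X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)                  # state anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)               # observed anomalies
    P_yy = HA @ HA.T / (n_ens - 1) + r_var * np.eye(n_obs)
    K = (A @ HA.T / (n_ens - 1)) @ np.linalg.inv(P_yy)     # Kalman gain
    Y = y[:, None] + rng.normal(0, np.sqrt(r_var), (n_obs, n_ens))  # perturbed obs
    return X + K @ (Y - HX)

rng = np.random.default_rng(3)
X = rng.normal(size=(40, 20))                               # 40-dim state, 20 members
H = np.zeros((5, 40)); H[np.arange(5), np.arange(5)] = 1.0  # observe first 5 states
y = rng.normal(size=5)
print(enkf_analysis(X, y, H, r_var=0.1, rng=rng).shape)     # (40, 20)
```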

  1. Impact of audit and feedback and pay-for-performance interventions on pediatric hospitalist discharge communication with primary care providers.

    Science.gov (United States)

    Tejedor-Sojo, Javier; Creek, Tracy; Leong, Traci

    2015-01-01

    The study team sought to improve hospitalist communication with primary care providers (PCPs) at discharge through interventions consisting of (a) audit and feedback and (b) inclusion of a discharge communication measure in the incentive compensation for pediatric hospitalists. The setting was a 16-physician pediatric hospitalist group within a tertiary pediatric hospital. Discharge summaries were selected randomly for documentation of communication with PCPs. At baseline, 57% of charts had documented communication with PCPs, increasing to 84% during the audit and feedback period. Following the addition of a financial incentive, documentation of communication with PCPs increased to 93% and was sustained during the combined intervention period. The number of physicians meeting the study's performance goal increased from 1 to 14 by the end of the study period. A financial incentive coupled with an audit and feedback tool was effective at modifying physician behavior, achieving focused, measurable quality improvement gains. © 2014 by the American College of Medical Quality.

  2. Solution-Processing of Organic Solar Cells: From In Situ Investigation to Scalable Manufacturing

    KAUST Repository

    Abdelsamie, Maged

    2016-12-05

    implementation of organic solar cells with high efficiency and manufacturability. In this dissertation, we investigate the mechanism of BHJ layer formation during solution processing with common lab-based processes, such as spin-coating, with the aim of understanding the roles of materials, formulations and processing conditions, and subsequently using this insight to enable the scalable manufacturing of high-efficiency organic solar cells by such methods as wire-bar coating and blade-coating. To do so, we have developed state-of-the-art in situ diagnostic techniques that provide insight into the thin-film formation process. As a first step, we developed a modified spin-coater that allows us to perform in situ UV-visible absorption measurements during spin coating and provides key insight into the formation and evolution of polymer aggregates in solution and during the transformation to the solid state. Using this method, we investigated the formation of organic BHJs made of a blend of poly(3-hexylthiophene) (P3HT) and fullerene, reference materials in the organic solar cell field. We show that process kinetics directly influence the microstructure and morphology of the bulk heterojunction, highlighting the value of in situ measurements. We investigated the influence of the crystallization dynamics of a wide range of small-molecule donors, and of their solidification pathways, on the processing routes needed for attaining high-performance solar cells. The study revealed the reason behind the need for empirically adopted processing strategies such as solvent additives or, alternatively, thermal or solvent vapor annealing for achieving optimal performance. The study provided a new perspective on materials design, linking the need for solvent additives or annealing to the ease of crystallization of small-molecule donors and the presence or absence of transient phases before crystallization. From there, we extended our investigation to small-molecule (p

  3. Omega-3 polyunsaturated fatty acids provided during embryonic development improve the growth performance and welfare of Muscovy ducks (Cairina moschata).

    Science.gov (United States)

    Baéza, E; Chartrin, P; Bordeau, T; Lessire, M; Thoby, J M; Gigaud, V; Blanchet, M; Alinier, A; Leterrier, C

    2017-09-01

    The welfare of ducks can be affected by unwanted behaviors such as excessive reactivity and feather pecking. Providing long-chain n-3 polyunsaturated fatty acids (LC n-3 PUFA) during gestation and early life has been shown to improve the brain development and function of human and rodent offspring. The aim of this study was to test whether the pecking behavior of Muscovy ducks during rearing could be reduced by providing LC n-3 PUFA during embryonic and/or post-hatching development of ducklings. Enrichment of eggs, and consequently embryos, with LC n-3 PUFA was achieved by feeding female ducks (n-3F) a diet containing docosahexaenoic (DHA) and linolenic acids (microalgae and linseed oil). A control group of female ducks (CF) was fed a diet containing linoleic acid (soybean oil). Offspring from both groups were fed starter and grower diets enriched with DHA and linolenic acid or only linoleic acid, resulting in four treatment groups with 48 ducklings in each. Several behavioral tests were performed between 1 and 3 weeks of age to analyze the adaptation ability of ducklings. The growth performance, time budget, social interactions, feather growth, and pecking behavior of ducklings were recorded regularly during the rearing period. No significant interaction between maternal and duckling feeding was found. Ducklings from n-3F ducks had a higher body weight at day 0, 28, and 56, a lower feed conversion ratio during the growth period, and lower reactivity to stress than ducklings from CF ducks. Ducklings from n-3F ducks also exhibited a significantly reduced feather pecking frequency at 49 and 56 days of age and for the whole rearing period. Moreover, consumption of diets enriched with n-3 PUFA during the starter and grower post-hatching periods significantly improved the tibia mineralization of ducklings and the fatty acid composition of thigh muscles at 84 days of age by increasing the n-3 FA content. © 2017 Poultry Science Association Inc.

  4. IEEE 802.15.4 Frame Aggregation Enhancement to Provide High Performance in Life-Critical Patient Monitoring Systems.

    Science.gov (United States)

    Akbar, Muhammad Sajjad; Yu, Hongnian; Cang, Shuang

    2017-01-28

    In wireless body area sensor networks (WBASNs), Quality of Service (QoS) provision for patient monitoring systems, in terms of time-critical deadlines, high throughput and energy efficiency, is a challenging task. The periodic data from these systems generate a large number of small packets in a short time period, which requires an efficient channel access mechanism. The IEEE 802.15.4 standard is recommended for low-power devices and widely used in many wireless sensor network applications. It provides a hybrid channel access mechanism at the Media Access Control (MAC) layer, which plays a key role in overall successful transmission in WBASNs. Many WBASN MAC protocols use this hybrid channel access mechanism in a variety of sensor applications. However, these protocols are less efficient for patient monitoring systems, where life-critical data require limited delay, high throughput and energy-efficient communication simultaneously. To address these issues, this paper proposes a frame aggregation scheme using the aggregated MAC protocol data unit (A-MPDU), which works with the IEEE 802.15.4 MAC layer. To implement the scheme accurately, we develop a traffic pattern analysis mechanism to understand the requirements of the sensor nodes in patient monitoring systems, then model the channel access to find the performance gap on the basis of the obtained requirements, and finally propose a design based on the needs of patient monitoring systems. The mechanism is initially verified using numerical modelling, and simulations are then conducted using NS2.29, Castalia 3.2 and OMNeT++. The proposed scheme provides optimal performance with respect to the required QoS.
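
    The core aggregation idea - packing many small periodic sensor packets into one larger MAC frame, bounded by a byte budget and a latency deadline - can be sketched as follows; the budget, deadline and send callback are illustrative assumptions, not values from the paper.

        import time
        from collections import deque

        MAX_AGG_BYTES = 2048  # assumed aggregate frame budget
        MAX_DELAY_S = 0.02    # assumed latency bound for life-critical data

        class Aggregator:
            def __init__(self, send):
                self.send = send  # callback that transmits one aggregate frame
                self.buf, self.first_ts = deque(), None

            def push(self, packet: bytes):
                if self.first_ts is None:
                    self.first_ts = time.monotonic()
                self.buf.append(packet)
                if sum(map(len, self.buf)) >= MAX_AGG_BYTES:
                    self.flush()

            def poll(self):
                # called periodically; enforces the delay bound
                if self.first_ts and time.monotonic() - self.first_ts >= MAX_DELAY_S:
                    self.flush()

            def flush(self):
                if self.buf:
                    # length-prefix each subframe so the receiver can de-aggregate
                    frame = b"".join(len(p).to_bytes(2, "big") + p for p in self.buf)
                    self.send(frame)
                    self.buf.clear()
                    self.first_ts = None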

  5. Business Performance of Health Spa Tourism Providers in Relation to the Structure of Employees in the Republic of Croatia.

    Science.gov (United States)

    Vrkljan, Sanela; Grazio, Simeon

    2017-12-01

    Health spa tourism services are provided in special hospitals for medical rehabilitation and in health resorts, and include the controlled use of natural healing factors and physical therapy under medical supervision in order to improve and preserve health. Health tourism is a service industry, and therefore a labor-intensive one in which human resources are among the key factors of business success. The aim of this study was to analyze the business performance of special hospitals for medical rehabilitation and health resorts in Croatia in relation to the structure of their employees, specifically the number of physicians and total medical personnel, as well as the share of physicians and medical personnel in the total number of employees. The assumption was that those who employ more physicians and medical employees are more successful. The assumption was tested empirically, first by correlation analysis and then by regression analysis. The total number of employees in the health resorts and special hospitals studied amounted to 2,863, of which specialist physicians accounted for almost 7% and total medical staff for almost 53%. From the results of our research, it can be concluded that special hospitals for medical rehabilitation and health resorts that employ more physicians and medical personnel achieve better financial performance. Based on these results, it is possible to provide guidance for further growth and development in the direction of basing the primary offer on the medical-health offer rather than the wellness offer, which is a strong trend worldwide. These findings are important for planning health and tourism policies in Croatia and similar countries.

  6. Effect of a Neonatal Resuscitation Course on Healthcare Providers' Performances Assessed by Video Recording in a Low-Resource Setting.

    Science.gov (United States)

    Trevisanuto, Daniele; Bertuola, Federica; Lanzoni, Paolo; Cavallin, Francesco; Matediana, Eduardo; Manzungu, Olivier Wingi; Gomez, Ermelinda; Da Dalt, Liviana; Putoto, Giovanni

    2015-01-01

    We assessed the effect of an adapted neonatal resuscitation program (NRP) course on healthcare providers' performance in a low-resource setting through the use of video recording. A video recorder, mounted to the radiant warmers in the delivery rooms at Beira Central Hospital, Mozambique, was used to record all resuscitations. One hundred resuscitations (50 before and 50 after participation in an adapted NRP course) were collected and assessed based on a previously published score. All 100 neonates received initial steps; of these, 77 and 32 needed bag-mask ventilation (BMV) and chest compressions (CC), respectively. There was a significant improvement in resuscitation scores at all levels of resuscitation from before to after the course: for "initial steps", the score increased from 33% (IQR 28-39) to 44% (IQR 39-56). The performance of healthcare providers improved after participation in an adapted NRP course. Video recording was well accepted by the staff, useful for objective assessment of performance during resuscitation, and can be used as an educational tool in a low-resource setting.

  7. The engineering of a scalable multi-site communications system utilizing quantum key distribution (QKD)

    Science.gov (United States)

    Tysowski, Piotr K.; Ling, Xinhua; Lütkenhaus, Norbert; Mosca, Michele

    2018-04-01

    Quantum key distribution (QKD) is a means of generating keys between a pair of computing hosts that is theoretically secure against cryptanalysis, even by a quantum computer. Although there is much active research into improving the QKD technology itself, there is still significant work to be done to apply engineering methodology and determine how it can be practically built to scale within an enterprise IT environment. Significant challenges exist in building a practical key management service (KMS) for use in a metropolitan network. QKD is generally a point-to-point technique only and is subject to steep performance constraints. The integration of QKD into enterprise-level computing has been researched, to enable quantum-safe communication. A novel method for constructing a KMS is presented that allows arbitrary computing hosts on one site to establish multiple secure communication sessions with the hosts of another site. A key exchange protocol is proposed where symmetric private keys are granted to hosts while satisfying the scalability needs of an enterprise population of users. The KMS operates within a layered architectural style that is able to interoperate with various underlying QKD implementations. Variable levels of security for the host population are enforced through a policy engine. A network layer provides key generation across a network of nodes connected by quantum links. Scheduling and routing functionality allows quantum key material to be relayed across trusted nodes. Optimizations are performed to match the real-time host demand for key material with the capacity afforded by the infrastructure. The result is a flexible and scalable architecture that is suitable for enterprise use and independent of any specific QKD technology.
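
    A minimal sketch of the layered idea - a key pool replenished from a point-to-point QKD link, with the KMS granting symmetric session keys to pairs of hosts under a policy - is given below. The class names and the produce_key stub are hypothetical; the secrets module merely stands in for quantum-generated key material.

        import secrets
        from collections import deque

        class QkdLinkStub:
            # stand-in for a point-to-point QKD device pair (hypothetical API)
            def produce_key(self, nbytes=32):
                return secrets.token_bytes(nbytes)

        class KeyManagementService:
            def __init__(self, link, min_pool=16):
                self.link, self.pool, self.min_pool = link, deque(), min_pool

            def replenish(self):
                # match real-time key demand with the capacity of the QKD link
                while len(self.pool) < self.min_pool:
                    self.pool.append(self.link.produce_key())

            def grant_session_key(self, host_a, host_b, policy="default"):
                # a policy engine would enforce per-host security levels here
                self.replenish()
                return (host_a, host_b, self.pool.popleft())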

  8. The development of a scalable parallel 3-D CFD algorithm for turbomachinery. M.S. Thesis Final Report

    Science.gov (United States)

    Luke, Edward Allen

    1993-01-01

    Two algorithms capable of computing a transonic 3-D inviscid flow field about rotating machines are considered for parallel implementation. During the study of these algorithms, a significant new method of measuring the performance of parallel algorithms is developed. The theory that supports this new method creates an empirical definition of scalable parallel algorithms that is used to produce quantifiable evidence that a scalable parallel application was developed. The implementation of the parallel application and an automated domain decomposition tool are also discussed.

  9. Parallel scalability and efficiency of vortex particle method for aeroelasticity analysis of bluff bodies

    Science.gov (United States)

    Tolba, Khaled Ibrahim; Morgenthal, Guido

    2018-01-01

    This paper presents an analysis of the scalability and efficiency of a simulation framework based on the vortex particle method. The code is applied to the numerical aerodynamic analysis of line-like structures. The numerical code runs on multicore CPU and GPU architectures using the OpenCL framework. The focus of this paper is the analysis of the parallel efficiency and scalability of the method applied to an engineering test case, specifically the aeroelastic response of a long-span bridge girder at the construction stage. The target is to assess the optimal configuration and the required computer architecture, such that it becomes feasible to utilise the method efficiently within the computational resources available to a regular engineering office. The simulations and the scalability analysis are performed on a regular gaming-type computer.
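
    For reference, the speedup and parallel-efficiency figures that such a scalability study reports are computed as below; the timing numbers are illustrative, not the paper's.

        def speedup(t_serial, t_parallel):
            return t_serial / t_parallel

        def parallel_efficiency(t_serial, t_parallel, n_workers):
            # an efficiency of 1.0 means perfect linear scaling
            return speedup(t_serial, t_parallel) / n_workers

        # illustrative wall-clock times (seconds) for 1, 4 and 16 workers
        timings = {1: 1000.0, 4: 270.0, 16: 80.0}
        for n, t in timings.items():
            print(n, speedup(timings[1], t), parallel_efficiency(timings[1], t, n))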

  10. Ultracold molecules: vehicles to scalable quantum information processing

    International Nuclear Information System (INIS)

    Brickman Soderberg, Kathy-Anne; Gemelke, Nathan; Chin, Cheng

    2009-01-01

    In this paper, we describe a novel scheme to implement scalable quantum information processing using Li-Cs molecular states to entangle 6Li and 133Cs ultracold atoms held in independent optical lattices. The 6Li atoms will act as quantum bits to store information and 133Cs atoms will serve as messenger bits that aid in quantum gate operations and mediate entanglement between distant qubit atoms. Each atomic species is held in a separate optical lattice and the atoms can be overlapped by translating the lattices with respect to each other. When the messenger and qubit atoms are overlapped, targeted single-spin operations and entangling operations can be performed by coupling the atomic states to a molecular state with radio-frequency pulses. By controlling the frequency and duration of the radio-frequency pulses, entanglement can be either created or swapped between a qubit messenger pair. We estimate operation fidelities for entangling two distant qubits and discuss scalability of this scheme and constraints on the optical lattice lasers. Finally we demonstrate experimental control of the optical potentials sufficient to translate atoms in the lattice.

  11. Funnel plot control limits to identify poorly performing healthcare providers when there is uncertainty in the value of the benchmark.

    Science.gov (United States)

    Manktelow, Bradley N; Seaton, Sarah E; Evans, T Alun

    2016-12-01

    There is an increasing use of statistical methods, such as funnel plots, to identify poorly performing healthcare providers. Funnel plots comprise the construction of control limits around a benchmark, and providers with outcomes falling outside the limits are investigated as potential outliers. The benchmark is usually estimated from observed data, but uncertainty in this estimate is usually ignored when constructing control limits. In this paper, the use of funnel plots in the presence of uncertainty in the value of the benchmark is reviewed for outcomes from a Binomial distribution. Two methods to derive the control limits are shown: (i) prediction intervals; (ii) tolerance intervals. Tolerance intervals formally include the uncertainty in the value of the benchmark while prediction intervals do not. The probability properties of 95% control limits derived using each method were investigated through hypothesised scenarios. Neither prediction intervals nor tolerance intervals produce funnel plot control limits that satisfy the nominal probability characteristics when there is uncertainty in the value of the benchmark. This is not necessarily to say that funnel plots have no role to play in healthcare, but without the development of intervals satisfying the nominal probability characteristics they must be interpreted with care. © The Author(s) 2014.
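
    For orientation, the conventional construction - 95% control limits around a benchmark p0 that is treated as known, precisely the assumption the paper questions - can be sketched as follows; the 8% benchmark is an illustrative value.

        import numpy as np
        from scipy import stats

        def funnel_limits(p0, n, level=0.95):
            # normal-approximation funnel-plot limits for a Binomial outcome,
            # ignoring uncertainty in the benchmark p0 (the paper's caveat)
            z = stats.norm.ppf(0.5 + level / 2)
            se = np.sqrt(p0 * (1 - p0) / n)
            return np.clip(p0 - z * se, 0, 1), np.clip(p0 + z * se, 0, 1)

        ns = np.arange(20, 2001, 10)       # provider volumes
        lo, hi = funnel_limits(0.08, ns)   # e.g. an 8% benchmark event rate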

  12. ENDEAVOUR: A Scalable SDN Architecture for Real-World IXPs

    KAUST Repository

    Antichi, Gianni

    2017-10-25

    Innovation in interdomain routing has remained stagnant for over a decade. Recently, IXPs have emerged as economically advantageous interconnection points for reducing path latencies and exchanging ever-increasing traffic volumes among, possibly, hundreds of networks. Given their far-reaching implications for interdomain routing, IXPs are the ideal place to foster network innovation and extend the benefits of SDN to the interdomain level. In this paper, we present, evaluate, and demonstrate ENDEAVOUR, an SDN platform for IXPs. ENDEAVOUR can be deployed on a multi-hop IXP fabric, supports a large number of use cases, and is highly scalable while avoiding broadcast storms. Our evaluation with real data from one of the largest IXPs demonstrates the benefits and scalability of our solution: ENDEAVOUR requires around 70% fewer rules than alternative SDN solutions thanks to our rule partitioning mechanism. In addition, by providing an open-source solution, we invite everyone in the community to experiment with (and improve) our implementation, as well as adapt it to new use cases.

  13. On eliminating synchronous communication in molecular simulations to improve scalability

    Science.gov (United States)

    Straatsma, T. P.; Chavarría-Miranda, Daniel G.

    2013-12-01

    Molecular dynamics simulation, as a complementary tool to experimentation, has become an important methodology for the understanding and design of molecular systems, as it provides access to properties that are difficult, impossible or prohibitively expensive to obtain experimentally. Many of the available software packages have been parallelized to take advantage of modern massively concurrent processing resources. The challenge in achieving parallel efficiency is commonly attributed to the fact that molecular dynamics algorithms are communication-intensive. This paper illustrates how an appropriately chosen data distribution and asynchronous one-sided communication approach can be used to effectively deal with the data movement within the Global Arrays/ARMCI programming model framework. A new put_notify capability is presented here, allowing the implementation of the molecular dynamics algorithm without any explicit global or local synchronization or global data reduction operations. In addition, this push-data model is shown to be very effective at hiding data communication behind computation. Rather than data movement or explicit global reductions, the implicit synchronization of the algorithm becomes the primary challenge for scalability. Without any explicit synchronous operations, the scalability of molecular simulations is shown to depend only on the ability to evenly balance computational load.

  14. Scalable architecture for a room temperature solid-state quantum information processor.

    Science.gov (United States)

    Yao, N Y; Jiang, L; Gorshkov, A V; Maurer, P C; Giedke, G; Cirac, J I; Lukin, M D

    2012-04-24

    The realization of a scalable quantum information processor has emerged over the past decade as one of the central challenges at the interface of fundamental science and engineering. Here we propose and analyse an architecture for a scalable, solid-state quantum information processor capable of operating at room temperature. Our approach is based on recent experimental advances involving nitrogen-vacancy colour centres in diamond. In particular, we demonstrate that the multiple challenges associated with operation at ambient temperature, individual addressing at the nanoscale, strong qubit coupling, robustness against disorder and low decoherence rates can be simultaneously achieved under realistic, experimentally relevant conditions. The architecture uses a novel approach to quantum information transfer and includes a hierarchy of control at successive length scales. Moreover, it alleviates the stringent constraints currently limiting the realization of scalable quantum processors and will provide fundamental insights into the physics of non-equilibrium many-body quantum systems.

  15. Scalability Modeling for Optimal Provisioning of Data Centers in Telenor: A better balance between under- and over-provisioning

    OpenAIRE

    Rygg, Knut Helge

    2012-01-01

    The scalability of an information system describes the relationship between system capacity and system size. This report studies the scalability of Microsoft Lync Server 2010 in order to provide guidelines for provisioning hardware resources. Optimal provisioning is required to reduce both deployment and operational costs, while keeping an acceptable service quality. All Lync servers in the test setup are virtualized using VMware ESXi 5.0 and the system runs on a Cisco Unified Computing System...

  16. Scalable Multifunction RF Systems: Combined vs. Separate Transmit and Receive Arrays

    NARCIS (Netherlands)

    Huizing, A.G.

    2008-01-01

    A scalable multifunction RF (SMRF) system allows the RF functionality (radar, electronic warfare and communications) to be easily extended and the RF performance to be scaled to the requirements of different missions and platforms. This paper presents the results of a trade-off study with respect to

  17. Wideband vs. Multiband Trade-offs for a Scalable Multifunction RF system

    NARCIS (Netherlands)

    Huizing, A.G.

    2005-01-01

    This paper presents a concept for a scalable multifunction RF (SMRF) system that allows the RF functionality (radar, electronic warfare and communications) to be easily extended and the RF performance to be scaled to the requirements of different missions and platforms. A trade-off analysis is

  18. Tip-Based Nanofabrication for Scalable Manufacturing

    Directory of Open Access Journals (Sweden)

    Huan Hu

    2017-03-01

    Full Text Available Tip-based nanofabrication (TBN) is a family of emerging nanofabrication techniques that use a nanometer-scale tip to fabricate nanostructures. In this review, we first introduce the history of TBN and its technology development. We then briefly review various TBN techniques that use different physical or chemical mechanisms to fabricate features and discuss some of the state-of-the-art techniques. Subsequently, we focus on those TBN methods that have demonstrated potential to scale up the manufacturing throughput. Finally, we discuss several research directions that are essential for making TBN a scalable nano-manufacturing technology.

  19. Tip-Based Nanofabrication for Scalable Manufacturing

    International Nuclear Information System (INIS)

    Hu, Huan; Somnath, Suhas

    2017-01-01

    Tip-based nanofabrication (TBN) is a family of emerging nanofabrication techniques that use a nanometer-scale tip to fabricate nanostructures. In this review, we first introduce the history of TBN and its technology development. We then briefly review various TBN techniques that use different physical or chemical mechanisms to fabricate features and discuss some of the state-of-the-art techniques. Subsequently, we focus on those TBN methods that have demonstrated potential to scale up the manufacturing throughput. Finally, we discuss several research directions that are essential for making TBN a scalable nano-manufacturing technology.

  20. Towards a Scalable, Biomimetic, Antibacterial Coating

    Science.gov (United States)

    Dickson, Mary Nora

    Corneal afflictions are the second leading cause of blindness worldwide. When a corneal transplant is unavailable or contraindicated, an artificial cornea device is the only chance to save sight. Bacterial or fungal biofilm build-up on artificial cornea devices can lead to serious complications, including the need for systemic antibiotic treatment and even explantation. As a result, much emphasis has been placed on anti-adhesion chemical coatings and antibiotic-leaching coatings. These methods are not long-lasting, and microorganisms can eventually circumvent them. Thus, I have developed a surface-topographical antimicrobial coating. Various surface structures, including rough surfaces, superhydrophobic surfaces, and the natural surfaces of insects' wings and sharks' skin, are promising anti-biofilm candidates; however, none meets the criteria necessary for implementation on the surface of an artificial cornea device. In this thesis I: 1) developed scalable fabrication protocols for a library of biomimetic nanostructured polymer surfaces; 2) assessed the potential of poly(methyl methacrylate) (PMMA) nanopillars to kill or prevent biofilm formation by E. coli bacteria and species of Pseudomonas and Staphylococcus bacteria, and improved upon a proposed mechanism for the rupture of Gram-negative bacterial cell walls; 3) developed a scalable, commercially viable method for producing antibacterial nanopillars on a curved PMMA artificial cornea device; and 4) developed scalable fabrication protocols for producing antibacterial nanopatterned surfaces on thermoplastic polyurethane materials, commonly used in catheter tubing. This project constitutes a first step towards fabrication of the first entirely PMMA artificial cornea device. The major finding of this work is that by precisely controlling the topography of a polymer surface at the nano-scale, we can kill adherent bacteria and prevent biofilm formation by certain pathogenic bacteria.

  1. Scalable Tensor Factorizations with Missing Data

    DEFF Research Database (Denmark)

    Acar, Evrim; Dunlavy, Daniel M.; Kolda, Tamara G.

    2010-01-01

    of missing data, many important data sets will be discarded or improperly analyzed. Therefore, we need a robust and scalable approach for factorizing multi-way arrays (i.e., tensors) in the presence of missing data. We focus on one of the most well-known tensor factorizations, CANDECOMP/PARAFAC (CP...... is shown to successfully factor tensors with noise and up to 70% missing data. Moreover, our approach is significantly faster than the leading alternative and scales to larger problems. To show the real-world usefulness of CP-WOPT, we illustrate its applicability on a novel EEG (electroencephalogram...
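
    The weighted objective at the heart of this approach - fit a low-rank CP model only to the observed entries, indicated by a 0/1 mask - can be sketched in numpy as below. Plain gradient descent stands in for the first-order optimization CP-WOPT actually uses, and the rank, step size and iteration count are illustrative.

        import numpy as np

        def cp_wopt_gd(X, W, rank, steps=2000, lr=0.05, seed=0):
            # Minimize 0.5 * || W * (X - [[A, B, C]]) ||^2 for a 3-way array X,
            # where W is a 0/1 mask marking the observed entries.
            rng = np.random.default_rng(seed)
            I, J, K = X.shape
            A, B, C = (rng.standard_normal((n, rank)) * 0.1 for n in (I, J, K))
            X = np.where(W > 0, X, 0.0)  # values at unobserved entries are ignored
            for _ in range(steps):
                R = W * (np.einsum('ir,jr,kr->ijk', A, B, C) - X)  # masked residual
                gA = np.einsum('ijk,jr,kr->ir', R, B, C)
                gB = np.einsum('ijk,ir,kr->jr', R, A, C)
                gC = np.einsum('ijk,ir,jr->kr', R, A, B)
                A, B, C = A - lr * gA, B - lr * gB, C - lr * gC
            return A, B, C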

  2. Scalable and Anonymous Group Communication with MTor

    Directory of Open Access Journals (Sweden)

    Lin Dong

    2016-04-01

    Full Text Available This paper presents MTor, a low-latency anonymous group communication system. We construct MTor as an extension to Tor, allowing the construction of multi-source multicast trees on top of the existing Tor infrastructure. MTor does not depend on an external service to broker the group communication, and avoids central points of failure and trust. MTor’s substantial bandwidth savings and graceful scalability enable new classes of anonymous applications that are currently too bandwidth-intensive to be viable through traditional unicast Tor communication, e.g., group file transfer, collaborative editing, streaming video, and real-time audio conferencing.

  3. Grassmann Averages for Scalable Robust PCA

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Black, Michael J.

    2014-01-01

    As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can...... to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements...

  4. Olympic weightlifting and plyometric training with children provides similar or greater performance improvements than traditional resistance training.

    Science.gov (United States)

    Chaouachi, Anis; Hammami, Raouf; Kaabi, Sofiene; Chamari, Karim; Drinkwater, Eric J; Behm, David G

    2014-06-01

    A number of organizations recommend that advanced resistance training (RT) techniques can be implemented with children. The objective of this study was to evaluate the effectiveness of Olympic-style weightlifting (OWL), plyometrics, and traditional RT programs with children. Sixty-three children (10-12 years) were randomly allocated to a 12-week control, OWL, plyometric, or traditional RT program. Pre- and post-training tests included body mass index (BMI), sum of skinfolds, countermovement jump (CMJ), horizontal jump, balance, 5- and 20-m sprint times, and isokinetic force and power at 60 and 300°·s^-1. Magnitude-based inferences were used to analyze the likelihood of an effect having a standardized (Cohen's) effect size exceeding 0.20. All interventions were generally superior to the control group. Olympic weightlifting was >80% likely to provide substantially better improvements than plyometric training for CMJ, horizontal jump, and 5- and 20-m sprint times, and >75% likely to substantially exceed traditional RT for balance and isokinetic power at 300°·s^-1. Plyometric training was >78% likely to elicit substantially better training adaptations than traditional RT for balance, isokinetic force at 60 and 300°·s^-1, isokinetic power at 300°·s^-1, and 5- and 20-m sprints. Traditional RT only exceeded plyometric training for BMI and isokinetic power at 60°·s^-1. Hence, OWL and plyometrics can provide similar or greater performance adaptations for children. It is recommended that any of the 3 training modalities can be implemented under professional supervision with proper training progressions to enhance training adaptations in children.
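
    The standardized effect size behind these magnitude-based inferences can be computed as below; the jump heights are made-up illustrative numbers, and a full magnitude-based analysis would additionally estimate the chance that the true effect exceeds the 0.20 threshold.

        import numpy as np

        def cohens_d(pre, post):
            # standardized (Cohen's) effect size of a pre/post change
            diff = np.mean(post) - np.mean(pre)
            pooled_sd = np.sqrt((np.var(pre, ddof=1) + np.var(post, ddof=1)) / 2)
            return diff / pooled_sd

        # illustrative countermovement jump heights (cm) before/after training
        pre = np.array([22.1, 24.3, 21.8, 25.0, 23.2])
        post = np.array([24.0, 26.1, 23.5, 27.2, 24.9])
        print(round(cohens_d(pre, post), 2))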

  5. BAMSI: a multi-cloud service for scalable distributed filtering of massive genome data.

    Science.gov (United States)

    Ausmees, Kristiina; John, Aji; Toor, Salman Z; Hellander, Andreas; Nettelblad, Carl

    2018-06-26

    The advent of next-generation sequencing (NGS) has made whole-genome sequencing of cohorts of individuals a reality. Primary datasets of raw or aligned reads of this sort can get very large. For scientific questions where curated called variants are not sufficient, the sheer size of the datasets makes analysis prohibitively expensive. In order to make re-analysis of such data feasible without access to a large-scale computing facility, we have developed a highly scalable, storage-agnostic framework, an associated API and an easy-to-use web user interface to execute custom filters on large genomic datasets. We present BAMSI, a Software-as-a-Service (SaaS) solution for filtering the 1000 Genomes phase 3 set of aligned reads, with the possibility of extension and customization to other sets of files. Unique to our solution is the capability of simultaneously utilizing many different mirrors of the data to increase the speed of the analysis. In particular, if the data are available in private or public clouds - an increasingly common scenario for both academic and commercial cloud providers - our framework allows for seamless deployment of filtering workers close to the data. We show results indicating that such a setup improves the horizontal scalability of the system, and present a possible use case of the framework by performing an analysis of structural variation in the 1000 Genomes data set. BAMSI constitutes a framework for efficient filtering of large genomic data sets that is flexible in the use of compute as well as storage resources. The data resulting from the filter is assumed to be greatly reduced in size, and can easily be downloaded or routed into e.g. a Hadoop cluster for subsequent interactive analysis using Hive, Spark or similar tools. In this respect, our framework also suggests a general model for making very large datasets of high scientific value more accessible by offering the possibility for organizations to share the cost of

  6. JobCenter: an open source, cross-platform, and distributed job queue management system optimized for scalability and versatility

    Directory of Open Access Journals (Sweden)

    Jaschob, Daniel

    2012-07-01

    Full Text Available Abstract Background Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. Results JobCenter is a client–server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or “in the cloud”) and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. Conclusions JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/.

  7. JobCenter: an open source, cross-platform, and distributed job queue management system optimized for scalability and versatility.

    Science.gov (United States)

    Jaschob, Daniel; Riffle, Michael

    2012-07-30

    Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. JobCenter is a client-server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or "in the cloud") and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/.
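
    The client-driven pull model described in both records can be sketched as a worker loop like the one below; the endpoint paths and JSON fields are hypothetical illustrations, not JobCenter's actual API.

        import subprocess
        import time

        import requests  # assumed available; all endpoints below are hypothetical

        SERVER = "http://jobcenter.example.org/api"

        def worker_loop(worker_id, poll_s=5.0):
            # All communication is initiated by the worker, so it can run behind
            # firewalls or in the cloud, and load balancing emerges naturally.
            while True:
                r = requests.get(f"{SERVER}/next-job", params={"worker": worker_id})
                if r.status_code == 204:  # nothing queued for this worker type
                    time.sleep(poll_s)
                    continue
                job = r.json()
                proc = subprocess.run(job["command"], shell=True, capture_output=True)
                requests.post(f"{SERVER}/result/{job['id']}",
                              json={"rc": proc.returncode,
                                    "stdout": proc.stdout.decode(errors="replace")})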

  8. Scalable fractionation of iron oxide nanoparticles using a CO₂ gas-expanded liquid system

    Energy Technology Data Exchange (ETDEWEB)

    Vengsarkar, Pranav S.; Xu, Rui; Roberts, Christopher B., E-mail: croberts@eng.auburn.edu [Auburn University, Department of Chemical Engineering (United States)

    2015-10-15

    Iron oxide nanoparticles exhibit highly size-dependent physicochemical properties that are important in applications such as catalysis and environmental remediation. In order for these size-dependent properties to be effectively harnessed for industrial applications, scalable and cost-effective techniques for size-controlled synthesis or size separation must be developed. The synthesis of monodisperse iron oxide nanoparticles can be a prohibitively expensive process on a large scale. An alternative involves the use of inexpensive synthesis procedures followed by a size-selective processing technique. While there are many techniques available to fractionate nanoparticles, many of them are unable to fractionate iron oxide nanoparticles in a scalable and inexpensive manner. A scalable apparatus capable of fractionating large quantities of iron oxide nanoparticles into distinct fractions of different sizes and size distributions has been developed. The polydisperse, oleic acid-coated iron oxide nanoparticles (2–20 nm) used in this study were synthesized using a simple and inexpensive version of the popular coprecipitation technique. The apparatus uses hexane as a CO₂ gas-expanded liquid to controllably precipitate nanoparticles inside a 1 L high-pressure reactor. This paper demonstrates the operation of this new apparatus and for the first time shows successful fractionation results on a system of metal oxide nanoparticles, with initial nanoparticle concentrations at the gram scale. The obtained fractions were analyzed using transmission electron microscopy and dynamic light scattering. The use of this simple apparatus provides a pathway to separate large quantities of iron oxide nanoparticles based upon their size for use in various industrial applications.

  9. Upgrade of the TOTEM DAQ using the Scalable Readout System (SRS)

    International Nuclear Information System (INIS)

    Quinto, M; Cafagna, F; Fiergolski, A; Radicioni, E

    2013-01-01

    The main goals of the TOTEM Experiment at the LHC are the measurement of the elastic and total p-p cross sections and the study of diffractive dissociation processes. At the LHC, collisions are produced at a rate of 40 MHz, imposing strong requirements on the Data Acquisition System (DAQ) in terms of trigger rate and data throughput. The TOTEM DAQ adopts a modular approach that, in standalone mode, is based on the VME bus system. The VME-based Front End Driver (FED) modules host mezzanines that receive data through optical fibres directly from the detectors. After data checks and formatting are applied in the mezzanine, data are retransmitted to the VME interface and to another mezzanine card plugged into the FED module. The maximum bandwidth of the VME bus limits the first-level trigger (L1A) rate to 1 kHz. In order to remove the VME bottleneck and improve the scalability and overall capabilities of the DAQ, a new system was designed and constructed based on the Scalable Readout System (SRS), developed in the framework of the RD51 Collaboration. The project aims to increase the efficiency of the current readout system by providing higher bandwidth and increased data filtering, implementing a second-level trigger event selection based on hardware pattern-recognition algorithms. This goal is to be achieved while preserving maximum backward compatibility with the LHC Timing, Trigger and Control (TTC) system as well as with the CMS DAQ. The results obtained and the perspectives of the project are reported. In particular, we describe the system architecture and the new Opto-FEC adapter card developed to connect the SRS with the FED mezzanine modules. A first test bench was built and validated during the last TOTEM data-taking period (February 2013). Readout of a set of 3 TOTEM Roman Pot silicon detectors was carried out to verify performance in the real LHC environment. In addition, the test allowed a check of data consistency and quality.

  10. Highly scalable Ab initio genomic motif identification

    KAUST Repository

    Marchand, Benoit; Bajic, Vladimir B.; Kaushik, Dinesh

    2011-01-01

    We present results of scaling an ab initio motif family identification system, Dragon Motif Finder (DMF), to 65,536 processor cores of IBM Blue Gene/P. DMF seeks groups of mutually similar polynucleotide patterns within a set of genomic sequences and builds various motif families from them. Such information is of relevance to many problems in life sciences. Prior attempts to scale such ab initio motif-finding algorithms achieved limited success. We solve the scalability issues using a combination of mixed-mode MPI-OpenMP parallel programming, master-slave work assignment, multi-level workload distribution, multi-level MPI collectives, and serial optimizations. While the scalability of our algorithm was excellent (94% parallel efficiency on 65,536 cores relative to 256 cores on a modest-size problem), the final speedup with respect to the original serial code exceeded 250,000 when serial optimizations were included. This enabled us to carry out many large-scale ab initio motif-finding simulations in a few hours, while the original serial code would have needed decades of execution time. Copyright 2011 ACM.

  11. Algorithmic psychometrics and the scalable subject.

    Science.gov (United States)

    Stark, Luke

    2018-04-01

    Recent public controversies, ranging from the 2014 Facebook 'emotional contagion' study to psychographic data profiling by Cambridge Analytica in the 2016 American presidential election, Brexit referendum and elsewhere, signal watershed moments in which the intersecting trajectories of psychology and computer science have become matters of public concern. The entangled history of these two fields grounds the application of applied psychological techniques to digital technologies, and an investment in applying calculability to human subjectivity. Today, a quantifiable psychological subject position has been translated, via 'big data' sets and algorithmic analysis, into a model subject amenable to classification through digital media platforms. I term this position the 'scalable subject', arguing it has been shaped and made legible by algorithmic psychometrics - a broad set of affordances in digital platforms shaped by psychology and the behavioral sciences. In describing the contours of this 'scalable subject', this paper highlights the urgent need for renewed attention from STS scholars on the psy sciences, and on a computational politics attentive to psychology, emotional expression, and sociality via digital media.

  12. Computational scalability of large size image dissemination

    Science.gov (United States)

    Kooper, Rob; Bajcsy, Peter

    2011-01-01

    We have investigated the computational scalability of the image pyramid building needed for dissemination of very large image data. Sources of large images include high-resolution microscopes and telescopes, remote sensing and airborne imaging, and high-resolution scanners. The term 'large' is understood from a user perspective: an image is large when it exceeds the display size or the memory/disk available to hold the image data. The application drivers for our work are digitization projects such as the Lincoln Papers project (each image scan is about 100-150 MB, or about 5000x8000 pixels, with the total number of scans around 200,000) and the UIUC library scanning project for historical maps from the 17th and 18th century (a smaller number of much larger images). The goal of our work is to understand the computational scalability of web-based dissemination using image pyramids for these large image scans, as well as the preservation aspects of the data. We report our computational benchmarks for (a) building image pyramids to be disseminated using the Microsoft Seadragon library, (b) a computation execution approach using hyper-threading to generate image pyramids and to utilize the underlying hardware, and (c) an image pyramid preservation approach using various hard drive configurations of Redundant Array of Independent Disks (RAID) drives for input/output operations. The benchmarks are obtained with a map (334.61 MB, JPEG format, 17591x15014 pixels). The discussion combines the speed and preservation objectives.
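
    One way to build the pyramid levels such a benchmark measures is repeated 2x2 block averaging, sketched below for an H x W x C numpy array; tiling and encoding the levels for a viewer such as Seadragon are omitted.

        import numpy as np

        def build_pyramid(img, min_side=256):
            # levels[0] is full resolution; each further level halves both sides
            levels = [img.astype(np.float32)]
            while min(levels[-1].shape[:2]) >= 2 * min_side:
                a = levels[-1]
                h, w = (a.shape[0] // 2) * 2, (a.shape[1] // 2) * 2  # even crop
                half = a[:h, :w].reshape(h // 2, 2, w // 2, 2, -1).mean(axis=(1, 3))
                levels.append(half)
            return levels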

  13. Big data integration: scalability and sustainability

    KAUST Repository

    Zhang, Zhang

    2016-01-26

    Integration of various types of omics data is critically indispensable for addressing most important and complex biological questions. In the era of big data, however, data integration becomes increasingly tedious, time-consuming and expensive, posing a significant obstacle to fully exploiting the wealth of big biological data. Here we propose a scalable and sustainable architecture that integrates big omics data through community-contributed modules. Community modules are contributed and maintained by different committed groups; each module corresponds to a specific data type, deals with data collection, processing and visualization, and delivers data on demand via web services. Based on this community-based architecture, we build Information Commons for Rice (IC4R; http://ic4r.org), a rice knowledgebase that integrates a variety of rice omics data from multiple community modules, including genome-wide expression profiles derived entirely from RNA-Seq data, resequencing-based genomic variations obtained from re-sequencing data of thousands of rice varieties, plant homologous genes covering multiple diverse plant species, post-translational modifications, rice-related literature, and community annotations. Taken together, such an architecture achieves integration of different types of data from multiple community-contributed modules and accordingly features scalable, sustainable and collaborative integration of big data, as well as low costs for database update and maintenance, making it helpful for building IC4R into a comprehensive knowledgebase covering all aspects of rice data and beneficial for both basic and translational research.

  14. Ventricular kinetic energy may provide a novel noninvasive way to assess ventricular performance in patients with repaired tetralogy of Fallot.

    Science.gov (United States)

    Jeong, Daniel; Anagnostopoulos, Petros V; Roldan-Alzate, Alejandro; Srinivasan, Shardha; Schiebler, Mark L; Wieben, Oliver; François, Christopher J

    2015-05-01

    Ventricular kinetic energy measurements may provide a novel imaging biomarker of declining ventricular efficiency in patients with repaired tetralogy of Fallot. Our purpose was to assess differences in ventricular kinetic energy, measured with 4-dimensional flow magnetic resonance imaging, between patients with repaired tetralogy of Fallot and healthy volunteers. Cardiac magnetic resonance, including 4-dimensional flow magnetic resonance imaging, was performed at rest in 10 subjects with repaired tetralogy of Fallot and 9 healthy volunteers using clinical 1.5T and 3T magnetic resonance imaging scanners. Right and left ventricular kinetic energy (KE_RV and KE_LV), main pulmonary artery flow (Q_MPA), and aortic flow (Q_AO) were quantified using the 4-dimensional flow magnetic resonance imaging data. Right and left ventricular size and function were measured using standard cardiac magnetic resonance techniques. Differences between groups in peak systolic KE_RV and KE_LV, as well as in the Q_MPA/KE_RV and Q_AO/KE_LV ratios, were assessed. Kinetic energy indices were compared with conventional cardiac magnetic resonance parameters. Peak systolic KE_RV and KE_LV were higher in patients with repaired tetralogy of Fallot (6.06 ± 2.27 mJ and 3.55 ± 2.12 mJ, respectively) than in healthy volunteers (5.47 ± 2.52 mJ and 2.48 ± 0.75 mJ, respectively), but the differences were not statistically significant (P = .65 and P = .47, respectively). The Q_MPA/KE_RV and Q_AO/KE_LV ratios were significantly lower in patients with repaired tetralogy of Fallot (7.53 ± 5.37 mL/[cycle mJ] and 9.65 ± 6.61 mL/[cycle mJ], respectively) than in healthy volunteers (19.33 ± 18.52 mL/[cycle mJ] and 35.98 ± 7.66 mL/[cycle mJ], respectively), consistent with reduced ventricular efficiency in repaired tetralogy of Fallot. Quantification of ventricular kinetic energy in patients with repaired tetralogy of Fallot is a new observation. Future studies are needed to determine whether changes in ventricular kinetic energy can provide earlier evidence of ventricular dysfunction and guide future medical and
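
    For orientation, the ventricular kinetic energy behind these measurements is conventionally computed as a voxel-wise sum over the segmented blood pool (a standard 4D flow formulation; the density constant below is a textbook value, not taken from this record):

        \mathrm{KE}(t) = \tfrac{1}{2}\,\rho_{\mathrm{blood}} \sum_{i \in \mathrm{ventricle}} \lVert \mathbf{v}_i(t) \rVert^{2} \, \Delta V

    with blood density rho_blood ≈ 1060 kg/m^3, v_i(t) the 4D flow velocity in voxel i, and Delta V the voxel volume; peak systolic KE is the maximum of KE(t) over systole.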

  15. A Highly Scalable Data Service (HSDS) using Cloud-based Storage Technologies for Earth Science Data

    Science.gov (United States)

    Michaelis, A.; Readey, J.; Votava, P.; Henderson, J.; Willmore, F.

    2017-12-01

    Cloud-based infrastructure may offer several key benefits - scalability, built-in redundancy, security mechanisms, and reduced total cost of ownership - compared with a traditional data center approach. However, most of the tools and legacy software systems developed for online data repositories within the federal government were not developed with cloud-based infrastructure in mind and do not fully take advantage of commonly available cloud-based technologies. Moreover, services based on object storage are well established and provided by all the leading cloud service providers (Amazon Web Services, Microsoft Azure, Google Cloud, etc.), and can often provide unmatched "scale-out" capability and data availability to a large and growing consumer base at a price point unachievable with in-house solutions. We describe a system that utilizes object storage rather than traditional file-system-based storage to vend earth science data. The system described is not only cost effective but also shows a performance advantage for running many different analytics tasks in the cloud. To enable compatibility with existing tools and applications, we outline client libraries that are API-compatible with existing libraries for HDF5 and NetCDF4. Performance of the system is demonstrated using cloud services running on Amazon Web Services.

  16. Using Audience Response Technology to provide formative feedback on pharmacology performance for non-medical prescribing students - a preliminary evaluation

    Science.gov (United States)

    2012-01-01

    Background The use of anonymous audience response technology (ART) to actively engage students in classroom learning has been evaluated positively across multiple settings. To date, however, there has been no empirical evaluation of the use of individualised ART handsets and formative feedback of ART scores. The present study investigates student perceptions of such a system and the relationship between formative feedback results and exam performance. Methods Four successive cohorts of Non-Medical Prescribing students (n=107) had access to the individualised ART system, and three of these groups (n=72) completed a questionnaire about their perceptions of using ART. Semi-structured interviews were carried out with a purposive sample of seven students who achieved a range of scores on the formative feedback. Using data from all four cohorts of students, the relationship between mean ART scores and summative pharmacology exam score was examined using a non-parametric correlation. Results Questionnaire and interview data suggested that the use of ART enhanced the classroom environment, motivated students and promoted learning. Questionnaire data demonstrated that students found the formative feedback helpful for identifying their learning needs (95.6%), guiding their independent study (86.8%), and as a revision tool (88.3%). Interviewees particularly valued the objectivity of the individualised feedback, which helped them to self-manage their learning. Interviewees’ initial anxiety about revealing their level of pharmacology knowledge to the lecturer and to themselves reduced over time as students focused on the learning benefits associated with the feedback. A significant positive correlation was found between students’ formative feedback scores and their summative pharmacology exam scores (Spearman’s rho = 0.71, N=107, p < 0.001). Overall, students rated the helpfulness of the individualised handsets and personalised formative feedback highly. The significant correlation between ART
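
    The reported non-parametric correlation can be reproduced with scipy as below; the score lists are illustrative stand-ins for the study's data (which gave rho = 0.71 for N = 107).

        from scipy import stats

        # formative ART scores vs summative exam scores (illustrative values)
        art_scores = [55, 62, 70, 48, 81, 66, 74]
        exam_scores = [58, 64, 75, 50, 85, 60, 78]

        rho, p = stats.spearmanr(art_scores, exam_scores)
        print(f"Spearman's rho = {rho:.2f}, p = {p:.3f}")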

  17. Simplifying Scalable Graph Processing with a Domain-Specific Language

    KAUST Repository

    Hong, Sungpack; Salihoglu, Semih; Widom, Jennifer; Olukotun, Kunle

    2014-01-01

    Large-scale graph processing, with its massive data sets, requires distributed processing. However, conventional frameworks for distributed graph processing, such as Pregel, use non-traditional programming models that are well-suited for parallelism and scalability but inconvenient for implementing non-trivial graph algorithms. In this paper, we use Green-Marl, a Domain-Specific Language for graph analysis, to intuitively describe graph algorithms and extend its compiler to generate equivalent Pregel implementations. Using the semantic information captured by Green-Marl, the compiler applies a set of transformation rules that convert imperative graph algorithms into Pregel's programming model. Our experiments show that the Pregel programs generated by the Green-Marl compiler perform similarly to manually coded Pregel implementations of the same algorithms. The compiler is even able to generate a Pregel implementation of a complicated graph algorithm for which a manual Pregel implementation is very challenging.

  18. Simplifying Scalable Graph Processing with a Domain-Specific Language

    KAUST Repository

    Hong, Sungpack

    2014-01-01

    Large-scale graph processing, with its massive data sets, requires distributed processing. However, conventional frameworks for distributed graph processing, such as Pregel, use non-traditional programming models that are well-suited for parallelism and scalability but inconvenient for implementing non-trivial graph algorithms. In this paper, we use Green-Marl, a Domain-Specific Language for graph analysis, to intuitively describe graph algorithms and extend its compiler to generate equivalent Pregel implementations. Using the semantic information captured by Green-Marl, the compiler applies a set of transformation rules that convert imperative graph algorithms into Pregel's programming model. Our experiments show that the Pregel programs generated by the Green-Marl compiler perform similarly to manually coded Pregel implementations of the same algorithms. The compiler is even able to generate a Pregel implementation of a complicated graph algorithm for which a manual Pregel implementation is very challenging.
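
    A toy illustration of the vertex-centric, superstep-based target model (generic Pregel-style code, not Green-Marl compiler output) is the minimum-label propagation below, which computes connected components of an undirected graph given as symmetric adjacency lists.

        def pregel_min_label(adjacency, max_supersteps=50):
            # Each vertex repeatedly adopts the smallest label seen among its
            # neighbours; vertices that did not change "vote to halt".
            label = {v: v for v in adjacency}
            active = set(adjacency)
            for _ in range(max_supersteps):
                inbox = {v: [] for v in adjacency}
                for v in active:  # compute phase: changed vertices send messages
                    for u in adjacency[v]:
                        inbox[u].append(label[v])
                active = set()
                for v, msgs in inbox.items():
                    best = min(msgs, default=label[v])
                    if best < label[v]:
                        label[v] = best
                        active.add(v)
                if not active:  # every vertex has halted
                    break
            return label

        # usage: pregel_min_label({0: [1], 1: [0, 2], 2: [1], 3: [4], 4: [3]})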

  19. A Scalable Framework and Prototype for CAS e-Science

    Directory of Open Access Journals (Sweden)

    Yuanchun Zhou

    2007-07-01

    Full Text Available Based on the Small-World model of CAS e-Science and the power law of the Internet, this paper presents a scalable CAS e-Science Grid framework based on virtual regions, called the Virtual Region Grid Framework (VRGF). VRGF takes virtual regions and layers as its logical management units. In VRGF, organization within a virtual region is pure P2P, while organization between virtual regions is centralized. VRGF is therefore a decentralized framework with some P2P properties. Furthermore, VRGF is able to achieve satisfactory performance in organizing and locating resources at small cost, and is well adapted to the complicated and dynamic features of scientific collaborations. We have implemented a demonstration VRGF-based Grid prototype, SDG.

  20. Scalable Domain Decomposition Preconditioners for Heterogeneous Elliptic Problems

    Directory of Open Access Journals (Sweden)

    Pierre Jolivet

    2014-01-01

    Full Text Available Domain decomposition methods are, alongside multigrid methods, one of the dominant paradigms in contemporary large-scale partial differential equation simulation. In this paper, a lightweight implementation of a theoretically and numerically scalable preconditioner is presented in the context of overlapping methods. The performance of this work is assessed by numerical simulations executed on thousands of cores, solving various highly heterogeneous elliptic problems in both 2D and 3D with billions of degrees of freedom. Such problems arise in computational science and engineering, in solid and fluid mechanics. While focusing on overlapping domain decomposition methods might seem too restrictive, it is shown how this work can be applied to a variety of other methods, such as non-overlapping methods and abstract deflation-based preconditioners. It is also shown how multilevel preconditioners can be used to avoid communication during an iterative process such as a Krylov method.
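
    A minimal sketch of a one-level overlapping additive Schwarz preconditioner on a 1D Poisson matrix is given below; the scalable preconditioners in the paper additionally employ coarse spaces and run in parallel, both omitted here.

        import numpy as np

        def poisson_1d(n):
            # standard tridiagonal finite-difference Laplacian
            return 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

        def additive_schwarz(A, blocks):
            # M^{-1} r = sum_i R_i^T A_i^{-1} R_i r over overlapping index blocks
            factors = [(idx, np.linalg.inv(A[np.ix_(idx, idx)])) for idx in blocks]
            def apply(r):
                z = np.zeros_like(r)
                for idx, Ainv in factors:
                    z[idx] += Ainv @ r[idx]
                return z
            return apply

        n, size, overlap = 64, 16, 4
        blocks = [np.arange(max(0, s - overlap), min(n, s + size + overlap))
                  for s in range(0, n, size)]
        M = additive_schwarz(poisson_1d(n), blocks)
        # use as the preconditioner z = M(r) inside preconditioned CG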

  1. Scalable Lunar Surface Networks and Adaptive Orbit Access

    Science.gov (United States)

    Wang, Xudong

    2015-01-01

    Teranovi Technologies, Inc., has developed innovative network architecture, protocols, and algorithms for both lunar surface and orbit access networks. A key component of the overall architecture is a medium access control (MAC) protocol that includes a novel mechanism of overlaying time division multiple access (TDMA) and carrier sense multiple access with collision avoidance (CSMA/CA), ensuring scalable throughput and quality of service. The new MAC protocol is compatible with legacy Institute of Electrical and Electronics Engineers (IEEE) 802.11 networks. Advanced features include efficient power management, adaptive channel width adjustment, and error control capability. A hybrid routing protocol combines the advantages of ad hoc on-demand distance vector (AODV) routing and disruption/delay-tolerant network (DTN) routing. Performance is significantly better than that of AODV or DTN alone and will be particularly effective for wireless networks with intermittent links, such as lunar and planetary surface networks and orbit access networks.

  2. Efficient Delivery of Scalable Video Using a Streaming Class Model

    Directory of Open Access Journals (Sweden)

    Jason J. Quinlan

    2018-03-01

    Full Text Available When we couple the rise in video streaming with the growing number of portable devices (smart phones, tablets, laptops, we see an ever-increasing demand for high-definition video online while on the move. Wireless networks are inherently characterised by restricted shared bandwidth and relatively high error loss rates, thus presenting a challenge for the efficient delivery of high quality video. Additionally, mobile devices can support/demand a range of video resolutions and qualities. This demand for mobile streaming highlights the need for adaptive video streaming schemes that can adjust to available bandwidth and heterogeneity, and can provide graceful changes in video quality, all while respecting viewing satisfaction. In this context, the use of well-known scalable/layered media streaming techniques, commonly known as scalable video coding (SVC, is an attractive solution. SVC encodes a number of video quality levels within a single media stream. This has been shown to be an especially effective and efficient solution, but it fares badly in the presence of datagram losses. While multiple description coding (MDC can reduce the effects of packet loss on scalable video delivery, the increased delivery cost is counterproductive for constrained networks. This situation is accentuated in cases where only the lower quality level is required. In this paper, we assess these issues and propose a new approach called Streaming Classes (SC through which we can define a key set of quality levels, each of which can be delivered in a self-contained manner. This facilitates efficient delivery, yielding reduced transmission byte-cost for devices requiring lower quality, relative to MDC and Adaptive Layer Distribution (ALD (42% and 76% respective reductions for layer 2), while also maintaining high levels of consistent quality. We also illustrate how a selective packetisation technique can further reduce the effects of packet loss on viewable quality by
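
    As a minimal illustration of the selection step such a scheme enables, the sketch below picks, for a given measured bandwidth, the best self-contained streaming class; the class names and bit-rates are invented, and the paper's actual SC construction and packetisation logic are considerably richer.

```python
def pick_streaming_class(levels, bandwidth_kbps):
    """Return the highest-rate self-contained quality level that fits the
    measured bandwidth; levels maps class name -> required bit-rate (kbps)."""
    feasible = [(rate, name) for name, rate in levels.items()
                if rate <= bandwidth_kbps]
    return max(feasible)[1] if feasible else None

# Hypothetical classes: each is independently decodable, unlike stacked SVC layers.
classes = {"SC-base": 400, "SC-mid": 1200, "SC-high": 3500}
print(pick_streaming_class(classes, bandwidth_kbps=1500))   # -> SC-mid
```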

  3. Final Report: Center for Programming Models for Scalable Parallel Computing

    Energy Technology Data Exchange (ETDEWEB)

    Mellor-Crummey, John [William Marsh Rice University

    2011-09-13

    As part of the Center for Programming Models for Scalable Parallel Computing, Rice University collaborated with project partners in the design, development and deployment of language, compiler, and runtime support for parallel programming models to support application development for the “leadership-class” computer systems at DOE national laboratories. Work over the course of this project has focused on the design, implementation, and evaluation of a second-generation version of Coarray Fortran. Research and development efforts of the project have focused on the CAF 2.0 language, compiler, runtime system, and supporting infrastructure. This has involved working with the teams that provide infrastructure for CAF that we rely on, implementing new language and runtime features, producing an open source compiler that enabled us to evaluate our ideas, and evaluating our design and implementation through the use of benchmarks. The report details the research, development, findings, and conclusions from this work.

  4. Overview of the Scalable Coherent Interface, IEEE STD 1596 (SCI)

    International Nuclear Information System (INIS)

    Gustavson, D.B.; James, D.V.; Wiggers, H.A.

    1992-10-01

    The Scalable Coherent Interface standard defines a new generation of interconnection that spans the full range from supercomputer memory 'bus' to campus-wide network. SCI provides bus-like services and a shared-memory software model while using an underlying packet protocol on many independent communication links. Initially these links are 1 GByte/s (wires) and 1 GBit/s (fiber), but the protocol scales well to future faster or lower-cost technologies. The interconnect may use switches, meshes, and rings. The SCI distributed-shared-memory model is simple and versatile, enabling for the first time a smooth integration of highly parallel multiprocessors, workstations, personal computers, I/O, networking and data acquisition.

  5. Scalable Task Assignment for Heterogeneous Multi-Robot Teams

    Directory of Open Access Journals (Sweden)

    Paula García

    2013-02-01

    Full Text Available This work deals with the development of a dynamic task assignment strategy for heterogeneous multi-robot teams in typical real world scenarios. The strategy must be efficiently scalable to support problems of increasing complexity with minimum designer intervention. To this end, we have selected a very simple auction-based strategy, which has been implemented and analysed in a multi-robot cleaning problem that requires strong coordination and dynamic complex subtask organization. We will show that the selection of a simple auction strategy provides a computational cost that grows linearly with the number of robots that make up the team, and allows highly complex assignment problems to be solved under dynamic conditions by means of a hierarchical sub-auction policy. To coordinate and control the team, a layered behaviour-based architecture has been applied that allows the auction-based strategy to be reused at different coordination levels.
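
    A hedged sketch of the kind of sequential single-item auction such strategies build on: each task is auctioned in turn, every robot bids its marginal cost, and the cheapest bidder wins, so evaluating one task is linear in team size. The cost model and names are ours, and the hierarchical sub-auction layer is omitted.

```python
def auction_assign(robots, tasks, cost):
    """Greedy sequential auction: each task goes to the cheapest current bidder."""
    assignment, load = {}, {r: 0.0 for r in robots}
    for t in tasks:
        # One bid per robot -> cost linear in the number of robots per task.
        bid, winner = min((cost(r, t) + load[r], r) for r in robots)
        assignment[t] = winner
        load[winner] += cost(winner, t)        # the winner's future bids rise
    return assignment

# Toy usage: robots and tasks are 1D positions, cost is travel distance.
robots = {"r1": 0.0, "r2": 10.0}
tasks = {"clean_a": 2.0, "clean_b": 9.0, "clean_c": 4.0}
print(auction_assign(robots, tasks, lambda r, t: abs(robots[r] - tasks[t])))
```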

  6. Using Audience Response Technology to provide formative feedback on pharmacology performance for non-medical prescribing students - a preliminary evaluation

    Directory of Open Access Journals (Sweden)

    Mostyn Alison

    2012-11-01

    Full Text Available Abstract Background The use of anonymous audience response technology (ART) to actively engage students in classroom learning has been evaluated positively across multiple settings. To date, however, there has been no empirical evaluation of the use of individualised ART handsets and formative feedback of ART scores. The present study investigates student perceptions of such a system and the relationship between formative feedback results and exam performance. Methods Four successive cohorts of Non-Medical Prescribing students (n=107) had access to the individualised ART system and three of these groups (n=72) completed a questionnaire about their perceptions of using ART. Semi-structured interviews were carried out with a purposive sample of seven students who achieved a range of scores on the formative feedback. Using data from all four cohorts of students, the relationship between mean ART scores and summative pharmacology exam score was examined using a non-parametric correlation. Results Questionnaire and interview data suggested that the use of ART enhanced the classroom environment, motivated students and promoted learning. Questionnaire data demonstrated that students found the formative feedback helpful for identifying their learning needs (95.6%), guiding their independent study (86.8%), and as a revision tool (88.3%). Interviewees particularly valued the objectivity of the individualised feedback which helped them to self-manage their learning. Interviewees’ initial anxiety about revealing their level of pharmacology knowledge to the lecturer and to themselves reduced over time as students focused on the learning benefits associated with the feedback. A significant positive correlation was found between students’ formative feedback scores and their summative pharmacology exam scores (Spearman’s rho = 0.71, N=107, p Conclusions Despite initial anxiety about the use of individualised ART units, students rated the helpfulness of the
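
    The reported association is an ordinary rank correlation, reproducible in form as below; the scores are hypothetical stand-ins, since the study's N=107 dataset is not available here.

```python
from scipy.stats import spearmanr

# Invented formative ART scores and summative exam scores, for illustration only.
formative = [62, 75, 58, 90, 70, 81, 66]
summative = [60, 78, 55, 88, 74, 79, 70]

rho, p_value = spearmanr(formative, summative)
print(f"Spearman's rho = {rho:.2f}, p = {p_value:.4f}")
```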

  7. Scalable graphene production: perspectives and challenges of plasma applications

    Science.gov (United States)

    Levchenko, Igor; Ostrikov, Kostya (Ken); Zheng, Jie; Li, Xingguo; Keidar, Michael; B. K. Teo, Kenneth

    2016-05-01

    Graphene, a newly discovered and extensively investigated material, has many unique and extraordinary properties which promise major technological advances in fields ranging from electronics to mechanical engineering and food production. Unfortunately, complex techniques and high production costs hinder commonplace applications. Scaling of existing graphene production techniques to the industrial level without compromising its properties is a current challenge. This article focuses on the scalability, equipment, and technology prospects and challenges of plasma-based techniques, which offer many unique possibilities for the synthesis of graphene and graphene-containing products. The plasma-based processes are amenable to scaling and could also be useful to enhance the controllability of the conventional chemical vapour deposition method and some other techniques, and to ensure a good quality of the produced graphene. We examine the unique features of the plasma-enhanced graphene production approaches, including the techniques based on inductively-coupled and arc discharges, in the context of their potential scaling to mass production following the generic scaling approaches applicable to the existing processes and systems. This work analyses a large amount of the recent literature on graphene production by various techniques and summarizes the results in a tabular form to provide a simple and convenient comparison of several available techniques. Our analysis reveals a significant potential of scalability for plasma-based technologies, based on the scaling-related process characteristics. Among other processes, the greatest yield, 1 g·h⁻¹·m⁻², was reached with the arc discharge technology, whereas the other plasma-based techniques show process yields comparable to the neutral-gas-based methods. Selected plasma-based techniques show lower energy consumption than thermal CVD processes, and the ability to produce graphene flakes of various

  8. Scalable graphene production: perspectives and challenges of plasma applications.

    Science.gov (United States)

    Levchenko, Igor; Ostrikov, Kostya Ken; Zheng, Jie; Li, Xingguo; Keidar, Michael; B K Teo, Kenneth

    2016-05-19

    Graphene, a newly discovered and extensively investigated material, has many unique and extraordinary properties which promise major technological advances in fields ranging from electronics to mechanical engineering and food production. Unfortunately, complex techniques and high production costs hinder commonplace applications. Scaling of existing graphene production techniques to the industrial level without compromising its properties is a current challenge. This article focuses on the scalability, equipment, and technology prospects and challenges of plasma-based techniques, which offer many unique possibilities for the synthesis of graphene and graphene-containing products. The plasma-based processes are amenable to scaling and could also be useful to enhance the controllability of the conventional chemical vapour deposition method and some other techniques, and to ensure a good quality of the produced graphene. We examine the unique features of the plasma-enhanced graphene production approaches, including the techniques based on inductively-coupled and arc discharges, in the context of their potential scaling to mass production following the generic scaling approaches applicable to the existing processes and systems. This work analyses a large amount of the recent literature on graphene production by various techniques and summarizes the results in a tabular form to provide a simple and convenient comparison of several available techniques. Our analysis reveals a significant potential of scalability for plasma-based technologies, based on the scaling-related process characteristics. Among other processes, the greatest yield, 1 g·h⁻¹·m⁻², was reached with the arc discharge technology, whereas the other plasma-based techniques show process yields comparable to the neutral-gas-based methods. Selected plasma-based techniques show lower energy consumption than thermal CVD processes, and the ability to produce graphene flakes of

  9. Scalable conditional induction variables (CIV) analysis

    DEFF Research Database (Denmark)

    Oancea, Cosmin Eugen; Rauchwerger, Lawrence

    2015-01-01

    Subscripts using induction variables that cannot be expressed as a formula in terms of the enclosing-loop indices appear in the low-level implementation of common programming abstractions such as filter or stack operations, and pose significant challenges to automatic parallelization. Because the complexity of such induction variables is often due to their conditional evaluation across the iteration space of loops, we name them Conditional Induction Variables (CIV). This paper presents a flow-sensitive technique that summarizes both such CIV-based and affine subscripts to program level. We have implemented this technique in a parallelizing compiler and evaluated its impact on five Fortran benchmarks. We have found that there are many important loops using CIV subscripts and that our analysis can lead to their scalable parallelization. This in turn has led to the parallelization of the benchmark programs they appear in.

  10. Scalable Faceted Ranking in Tagging Systems

    Science.gov (United States)

    Orlicki, José I.; Alvarez-Hamelin, J. Ignacio; Fierens, Pablo I.

    Nowadays, web collaborative tagging systems, which allow users to upload, comment on and recommend content, are growing. Such systems can be represented as graphs where nodes correspond to users and tagged links to recommendations. In this paper we analyze the problem of computing a ranking of users with respect to a facet described as a set of tags. A straightforward solution is to compute a PageRank-like algorithm on a facet-related graph, but it is not feasible for online computation. We propose an alternative: (i) a ranking for each tag is computed offline on the basis of tag-related subgraphs; (ii) a faceted order is generated online by merging rankings corresponding to all the tags in the facet. Based on the graph analysis of YouTube and Flickr, we show that step (i) is scalable. We also present efficient algorithms for step (ii), which are evaluated by comparing their results with two gold standards.
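
    The online step (ii) can be pictured as a simple merge of precomputed per-tag scores; the sum-based policy below is one of several reasonable choices and stands in for the specific merging algorithms the paper evaluates.

```python
def merge_faceted_ranking(per_tag_scores, facet):
    """Merge offline per-tag rankings for the tags of a facet.
    per_tag_scores: dict tag -> dict user -> offline (PageRank-like) score."""
    combined = {}
    for tag in facet:
        for user, score in per_tag_scores.get(tag, {}).items():
            combined[user] = combined.get(user, 0.0) + score
    return sorted(combined, key=combined.get, reverse=True)

# Hypothetical offline scores for two tags; the facet contains both.
scores = {"cats": {"ann": 0.4, "bob": 0.1}, "funny": {"bob": 0.5, "eve": 0.2}}
print(merge_faceted_ranking(scores, facet={"cats", "funny"}))  # ['bob', 'ann', 'eve']
```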

  11. A versatile scalable PET processing system

    International Nuclear Information System (INIS)

    Dong, H.; Weisenberger, A.; McKisson, J.; Wenze, Xi; Cuevas, C.; Wilson, J.; Zukerman, L.

    2011-01-01

    Positron Emission Tomography (PET) historically has major clinical and preclinical applications in cancerous oncology, neurology, and cardiovascular diseases. Recently, in a new direction, an application specific PET system is being developed at Thomas Jefferson National Accelerator Facility (Jefferson Lab) in collaboration with Duke University, University of Maryland at Baltimore (UMAB), and West Virginia University (WVU) targeted for plant eco-physiology research. The new plant imaging PET system is versatile and scalable such that it could adapt to several plant imaging needs - imaging many important plant organs including leaves, roots, and stems. The mechanical arrangement of the detectors is designed to accommodate the unpredictable and random distribution in space of the plant organs without requiring the plant be disturbed. Prototyping such a system requires a new data acquisition system (DAQ) and data processing system which are adaptable to the requirements of these unique and versatile detectors.

  12. The scalable coherent interface, IEEE P1596

    International Nuclear Information System (INIS)

    Gustavson, D.B.

    1990-01-01

    IEEE P1596, the scalable coherent interface (formerly known as SuperBus) is based on experience gained while developing Fastbus (ANSI/IEEE 960--1986, IEC 935), Futurebus (IEEE P896.x) and other modern 32-bit buses. SCI goals include a minimum bandwidth of 1 GByte/sec per processor in multiprocessor systems with thousands of processors; efficient support of a coherent distributed-cache image of distributed shared memory; support for repeaters which interface to existing or future buses; and support for inexpensive small rings as well as for general switched interconnections like Banyan, Omega, or crossbar networks. This paper presents a summary of current directions, reports the status of the work in progress, and suggests some applications in data acquisition and physics.

  13. BASSET: Scalable Gateway Finder in Large Graphs

    Energy Technology Data Exchange (ETDEWEB)

    Tong, H; Papadimitriou, S; Faloutsos, C; Yu, P S; Eliassi-Rad, T

    2010-11-03

    Given a social network, who is the best person to introduce you to, say, Chris Ferguson, the poker champion? Or, given a network of people and skills, who is the best person to help you learn about, say, wavelets? The goal is to find a small group of 'gateways': persons who are close enough to us, as well as close enough to the target (person, or skill) or, in other words, are crucial in connecting us to the target. The main contributions are the following: (a) we show how to formulate this problem precisely; (b) we show that it is sub-modular and thus it can be solved near-optimally; (c) we give fast, scalable algorithms to find such gateways. Experiments on real data sets validate the effectiveness and efficiency of the proposed methods, achieving up to 6,000,000x speedup.
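
    Sub-modularity is what licenses a greedy algorithm with the classical (1 - 1/e) approximation guarantee; here is a generic greedy sketch, with a toy coverage objective standing in for BASSET's actual gateway score.

```python
def greedy_select(candidates, objective, k):
    """Greedy maximization for a monotone submodular set function: repeatedly
    add the element with the largest marginal gain, near-optimal by theory."""
    chosen = []
    for _ in range(k):
        gains = [(objective(chosen + [c]) - objective(chosen), c)
                 for c in candidates if c not in chosen]
        gain, best = max(gains)
        if gain <= 0:
            break
        chosen.append(best)
    return chosen

# Toy objective: how many target skills the chosen people are adjacent to.
graph = {"a": {"x", "y"}, "b": {"y"}, "c": {"z"}}
targets = {"x", "y", "z"}
def coverage(chosen):
    reached = set().union(*(graph[c] for c in chosen)) if chosen else set()
    return len(targets & reached)

print(greedy_select(list(graph), coverage, k=2))   # ['a', 'c']
```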

  14. Scalable quantum search using trapped ions

    International Nuclear Information System (INIS)

    Ivanov, S. S.; Ivanov, P. A.; Linington, I. E.; Vitanov, N. V.

    2010-01-01

    We propose a scalable implementation of Grover's quantum search algorithm in a trapped-ion quantum information processor. The system is initialized in an entangled Dicke state by using adiabatic techniques. The inversion-about-average and oracle operators take the form of single off-resonant laser pulses. This is made possible by utilizing the physical symmetries of the trapped-ion linear crystal. The physical realization of the algorithm represents a dramatic simplification: each logical iteration (oracle and inversion about average) requires only two physical interaction steps, in contrast to the large number of concatenated gates required by previous approaches. This not only facilitates the implementation but also increases the overall fidelity of the algorithm.
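
    Independently of the trapped-ion realization, the logical algorithm is compact enough to simulate on a dense statevector; each loop iteration below performs one oracle call plus one inversion about the average, mirroring the two physical interaction steps per logical iteration described above.

```python
import numpy as np

def grover(n_qubits, marked):
    """Textbook Grover search on a dense statevector (a simulation of the
    logical algorithm, not of the ion-trap pulse sequence)."""
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))            # uniform superposition
    iterations = int(np.pi / 4 * np.sqrt(N))      # ~optimal iteration count
    for _ in range(iterations):
        state[marked] *= -1                       # oracle: flip the marked amplitude
        state = 2 * state.mean() - state          # inversion about the average
    return int(np.argmax(np.abs(state))), iterations

print(grover(4, marked=11))   # (11, 3): item 11 found after 3 iterations
```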

  15. Building scalable apps with Redis and Node.js

    CERN Document Server

    Johanan, Joshua

    2014-01-01

    If the phrase scalability sounds alien to you, then this is an ideal book for you. You will not need much Node.js experience as each framework is demonstrated in a way that requires no previous knowledge of the framework. You will be building scalable Node.js applications in no time! Knowledge of JavaScript is required.

  16. PyClaw: Accessible, Extensible, Scalable Tools for Wave Propagation Problems

    KAUST Repository

    Ketcheson, David I.

    2012-08-15

    Development of scientific software involves tradeoffs between ease of use, generality, and performance. We describe the design of a general hyperbolic PDE solver that can be operated with the convenience of MATLAB yet achieves efficiency near that of hand-coded Fortran and scales to the largest supercomputers. This is achieved by using Python for most of the code while employing automatically wrapped Fortran kernels for computationally intensive routines, and using Python bindings to interface with a parallel computing library and other numerical packages. The software described here is PyClaw, a Python-based structured grid solver for general systems of hyperbolic PDEs [K. T. Mandli et al., PyClaw Software, Version 1.0, http://numerics.kaust.edu.sa/pyclaw/ (2011)]. PyClaw provides a powerful and intuitive interface to the algorithms of the existing Fortran codes Clawpack and SharpClaw, simplifying code development and use while providing massive parallelism and scalable solvers via the PETSc library. The package is further augmented by use of PyWENO for generation of efficient high-order weighted essentially nonoscillatory reconstruction code. The simplicity, capability, and performance of this approach are demonstrated through application to example problems in shallow water flow, compressible flow, and elasticity.

  17. A wireless, compact, and scalable bioimpedance measurement system for energy-efficient multichannel body sensor solutions

    International Nuclear Information System (INIS)

    Ramos, J; Ausín, J L; Lorido, A M; Redondo, F; Duque-Carrillo, J F

    2013-01-01

    In this paper, we present the design, realization and evaluation of a multichannel measurement system based on a cost-effective high-performance integrated circuit for electrical bioimpedance (EBI) measurements in the frequency range from 1 kHz to 1 MHz, and a low-cost commercially available radio frequency transceiver device, which provides reliable wireless communication. The resulting on-chip spectrometer provides extensive EBI measurement capabilities and constitutes the basic node to build EBI wireless sensor networks (EBI-WSNs). The proposed EBI-WSN behaves as a high-performance wireless multichannel EBI spectrometer where the number of nodes, i.e., the number of channels, is completely scalable to satisfy specific requirements of body sensor networks. One of its main advantages is its versatility, since each EBI node is independently configurable and all nodes can work simultaneously. A prototype of the EBI node leads to a very small printed circuit board of approximately 8 cm², including the chip antenna, which can operate several years on one 3-V coin cell battery. A specifically tailored graphical user interface (GUI) for EBI-WSN has been also designed and implemented in order to configure the operation of EBI nodes and the network topology. EBI analysis parameters, e.g., single-frequency or spectroscopy, time interval, analysis by EBI events, frequency and amplitude ranges of the excitation current, etc., are defined by the GUI.

  18. PyClaw: Accessible, Extensible, Scalable Tools for Wave Propagation Problems

    KAUST Repository

    Ketcheson, David I.; Mandli, Kyle; Ahmadia, Aron; Alghamdi, Amal; de Luna, Manuel Quezada; Parsani, Matteo; Knepley, Matthew G.; Emmett, Matthew

    2012-01-01

    Development of scientific software involves tradeoffs between ease of use, generality, and performance. We describe the design of a general hyperbolic PDE solver that can be operated with the convenience of MATLAB yet achieves efficiency near that of hand-coded Fortran and scales to the largest supercomputers. This is achieved by using Python for most of the code while employing automatically wrapped Fortran kernels for computationally intensive routines, and using Python bindings to interface with a parallel computing library and other numerical packages. The software described here is PyClaw, a Python-based structured grid solver for general systems of hyperbolic PDEs [K. T. Mandli et al., PyClaw Software, Version 1.0, http://numerics.kaust.edu.sa/pyclaw/ (2011)]. PyClaw provides a powerful and intuitive interface to the algorithms of the existing Fortran codes Clawpack and SharpClaw, simplifying code development and use while providing massive parallelism and scalable solvers via the PETSc library. The package is further augmented by use of PyWENO for generation of efficient high-order weighted essentially nonoscillatory reconstruction code. The simplicity, capability, and performance of this approach are demonstrated through application to example problems in shallow water flow, compressible flow, and elasticity.

  19. Improving diabetes medication adherence: successful, scalable interventions

    Directory of Open Access Journals (Sweden)

    Zullig LL

    2015-01-01

    Full Text Available Leah L Zullig,1,2 Walid F Gellad,3,4 Jivan Moaddeb,2,5 Matthew J Crowley,1,2 William Shrank,6 Bradi B Granger,7 Christopher B Granger,8 Troy Trygstad,9 Larry Z Liu,10 Hayden B Bosworth1,2,7,11 1Center for Health Services Research in Primary Care, Durham Veterans Affairs Medical Center, Durham, NC, USA; 2Department of Medicine, Duke University, Durham, NC, USA; 3Center for Health Equity Research and Promotion, Pittsburgh Veterans Affairs Medical Center, Pittsburgh, PA, USA; 4Division of General Internal Medicine, University of Pittsburgh, Pittsburgh, PA, USA; 5Institute for Genome Sciences and Policy, Duke University, Durham, NC, USA; 6CVS Caremark Corporation; 7School of Nursing, Duke University, Durham, NC, USA; 8Department of Medicine, Division of Cardiology, Duke University School of Medicine, Durham, NC, USA; 9North Carolina Community Care Networks, Raleigh, NC, USA; 10Pfizer, Inc., and Weill Medical College of Cornell University, New York, NY, USA; 11Department of Psychiatry and Behavioral Sciences, Duke University School of Medicine, Durham, NC, USA Abstract: Effective medications are a cornerstone of prevention and disease treatment, yet only about half of patients take their medications as prescribed, resulting in a common and costly public health challenge for the US healthcare system. Since poor medication adherence is a complex problem with many contributing causes, there is no one universal solution. This paper describes interventions that were not only effective in improving medication adherence among patients with diabetes, but were also potentially scalable (ie, easy to implement to a large population. We identify key characteristics that make these interventions effective and scalable. This information is intended to inform healthcare systems seeking proven, low resource, cost-effective solutions to improve medication adherence. Keywords: medication adherence, diabetes mellitus, chronic disease, dissemination research

  20. The ATS Web Page Provides "Tool Boxes" for: Access Opportunities, Performance, Interfaces, Volume, Environments, "Wish List" Entry and Educational Outreach

    Science.gov (United States)

    1999-01-01

    This viewgraph presentation gives an overview of the Access to Space website, including information on the 'tool boxes' available on the website for access opportunities, performance, interfaces, volume, environments, 'wish list' entry, and educational outreach.

  1. An Experimental Investigation of Improving Human Problem-Solving Performance by Guiding Attention and Adaptively Providing Details on Information Displays

    National Research Council Canada - National Science Library

    Narayanan, N. H

    2007-01-01

    .... Results showed that various display strategies for augmenting information presented based on knowledge about both the viewer's gaze patterns and the problem solving procedure he or she is employing could indeed improve problem-solving performance.

  2. fastBMA: scalable network inference and transitive reduction.

    Science.gov (United States)

    Hung, Ling-Hong; Shi, Kaiyuan; Wu, Migao; Young, William Chad; Raftery, Adrian E; Yeung, Ka Yee

    2017-10-01

    Inferring genetic networks from genome-wide expression data is extremely demanding computationally. We have developed fastBMA, a distributed, parallel, and scalable implementation of Bayesian model averaging (BMA) for this purpose. fastBMA also includes a computationally efficient module for eliminating redundant indirect edges in the network by mapping the transitive reduction to an easily solved shortest-path problem. We evaluated the performance of fastBMA on synthetic data and experimental genome-wide time series yeast and human datasets. When using a single CPU core, fastBMA is up to 100 times faster than the next fastest method, LASSO, with increased accuracy. It is a memory-efficient, parallel, and distributed application that scales to human genome-wide expression data. A 10 000-gene regulation network can be obtained in a matter of hours using a 32-core cloud cluster (2 nodes of 16 cores). fastBMA is a significant improvement over its predecessor ScanBMA. It is more accurate and orders of magnitude faster than other fast network inference methods such as the one based on LASSO. The improved scalability allows it to calculate networks from genome scale data in a reasonable time frame. The transitive reduction method can improve accuracy in denser networks. fastBMA is available as code (M.I.T. license) from GitHub (https://github.com/lhhunghimself/fastBMA), as part of the updated networkBMA Bioconductor package (https://www.bioconductor.org/packages/release/bioc/html/networkBMA.html) and as ready-to-deploy Docker images (https://hub.docker.com/r/biodepot/fastbma/). © The Authors 2017. Published by Oxford University Press.
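
    The transitive-reduction step can be pictured as a shortest-path test per edge, sketched below with NetworkX; this mirrors only the shape of the idea, since fastBMA's actual formulation operates on the edge confidences of the inferred network rather than this toy cost.

```python
import networkx as nx

def prune_indirect_edges(G, weight="cost"):
    """Drop an edge whenever some indirect path is at least as cheap."""
    redundant = []
    for u, v, data in list(G.edges(data=True)):
        G.remove_edge(u, v)                    # is there a detour u -> ... -> v?
        try:
            if nx.shortest_path_length(G, u, v, weight=weight) <= data[weight]:
                redundant.append((u, v))
        except nx.NetworkXNoPath:
            pass
        G.add_edge(u, v, **data)               # restore before testing the next edge
    G.remove_edges_from(redundant)
    return G

G = nx.DiGraph()
G.add_weighted_edges_from([("a", "b", 1), ("b", "c", 1), ("a", "c", 3)], weight="cost")
print(prune_indirect_edges(G).edges())         # ('a', 'c') pruned: the detour costs 2
```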

  3. 16 CFR 1406.4 - Requirements to provide performance and technical notice to prospective purchasers and purchasers.

    Science.gov (United States)

    2010-01-01

    ... recommends the use of this 2 label format in order to provide more consumer awareness of the operation and... any identification of the manufacturer, brand, model, and similar designations. At the manufacturer's...

  4. A scalable method for parallelizing sampling-based motion planning algorithms

    KAUST Repository

    Jacobs, Sam Ade; Manavi, Kasra; Burgos, Juan; Denny, Jory; Thomas, Shawna; Amato, Nancy M.

    2012-01-01

    This paper describes a scalable method for parallelizing sampling-based motion planning algorithms. It subdivides configuration space (C-space) into (possibly overlapping) regions and independently, in parallel, uses standard (sequential) sampling-based planners to construct roadmaps in each region. Next, in parallel, regional roadmaps in adjacent regions are connected to form a global roadmap. By subdividing the space and restricting the locality of connection attempts, we reduce the work and inter-processor communication associated with nearest neighbor calculation, a critical bottleneck for scalability in existing parallel motion planning methods. We show that our method is general enough to handle a variety of planning schemes, including the widely used Probabilistic Roadmap (PRM) and Rapidly-exploring Random Trees (RRT) algorithms. We compare our approach to two other existing parallel algorithms and demonstrate that our approach achieves better and more scalable performance. Our approach achieves almost linear scalability on a 2400 core LINUX cluster and on a 153,216 core Cray XE6 petascale machine. © 2012 IEEE.
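
    A heavily simplified sketch of the subdivide-then-connect idea follows, with a 1D interval standing in for a high-dimensional C-space and a process pool standing in for the MPI-style parallelism used at scale; all names are illustrative.

```python
import random
from concurrent.futures import ProcessPoolExecutor

def build_local_roadmap(region):
    """Stand-in for one sequential PRM/RRT run restricted to a C-space region."""
    lo, hi = region
    return [random.uniform(lo, hi) for _ in range(20)]   # sampled configurations

def parallel_roadmap(bounds, n_regions, overlap=0.1):
    lo, hi = bounds
    width = (hi - lo) / n_regions
    regions = [(lo + i * width - overlap, lo + (i + 1) * width + overlap)
               for i in range(n_regions)]                # overlapping slabs
    with ProcessPoolExecutor() as pool:
        local_maps = list(pool.map(build_local_roadmap, regions))
    # Connection phase (omitted): only roadmaps in *adjacent* regions attempt to
    # link, which is what bounds nearest-neighbour work and communication.
    return regions, local_maps

if __name__ == "__main__":
    regions, maps = parallel_roadmap((0.0, 1.0), n_regions=4)
    print(len(maps), "regional roadmaps built in parallel")
```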

  5. A scalable method for parallelizing sampling-based motion planning algorithms

    KAUST Repository

    Jacobs, Sam Ade

    2012-05-01

    This paper describes a scalable method for parallelizing sampling-based motion planning algorithms. It subdivides configuration space (C-space) into (possibly overlapping) regions and independently, in parallel, uses standard (sequential) sampling-based planners to construct roadmaps in each region. Next, in parallel, regional roadmaps in adjacent regions are connected to form a global roadmap. By subdividing the space and restricting the locality of connection attempts, we reduce the work and inter-processor communication associated with nearest neighbor calculation, a critical bottleneck for scalability in existing parallel motion planning methods. We show that our method is general enough to handle a variety of planning schemes, including the widely used Probabilistic Roadmap (PRM) and Rapidly-exploring Random Trees (RRT) algorithms. We compare our approach to two other existing parallel algorithms and demonstrate that our approach achieves better and more scalable performance. Our approach achieves almost linear scalability on a 2400 core LINUX cluster and on a 153,216 core Cray XE6 petascale machine. © 2012 IEEE.

  6. MicROS-drt: supporting real-time and scalable data distribution in distributed robotic systems.

    Science.gov (United States)

    Ding, Bo; Wang, Huaimin; Fan, Zedong; Zhang, Pengfei; Liu, Hui

    A primary requirement in distributed robotic software systems is the dissemination of data to all interested collaborative entities in a timely and scalable manner. However, providing such a service in a highly dynamic and resource-limited robotic environment is a challenging task, and existing robot software infrastructure has limitations in this aspect. This paper presents a novel robot software infrastructure, micROS-drt, which supports real-time and scalable data distribution. The solution is based on a loosely coupled data publish-subscribe model with the ability to support various time-related constraints. To realize this model, a mature data distribution standard, the data distribution service for real-time systems (DDS), is adopted as the foundation of the transport layer of this software infrastructure. By carefully adapting and encapsulating the capabilities of the underlying DDS middleware, micROS-drt can meet the requirement of real-time and scalable data distribution in distributed robotic systems. Evaluation results on scalability, latency jitter and transport priority, as well as an experiment on real robots, validate the effectiveness of this work.

  7. Formative evaluation of a telemedicine model for delivering clinical neurophysiology services part I: utility, technical performance and service provider perspective.

    LENUS (Irish Health Repository)

    Breen, Patricia

    2010-01-01

    Formative evaluation is conducted in the early stages of system implementation to assess how it works in practice and to identify opportunities for improving technical and process performance. A formative evaluation of a teleneurophysiology service was conducted to examine its technical and sociological dimensions.

  8. 78 FR 69438 - AGOA: Trade and Investment Performance Overview; AGOA: Economic Effects of Providing Duty-Free...

    Science.gov (United States)

    2013-11-19

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 332-542, Investigation No. 332-544, Investigation No. 332-545, Investigation No. 332-546] AGOA: Trade and Investment Performance Overview; AGOA... To Promote Regional Integration and Increase Exports to the United States; EU-South Africa FTA...

  9. VA residential substance use disorder treatment program providers' perceptions of facilitators and barriers to performance on pre-admission processes.

    Science.gov (United States)

    Ellerbe, Laura S; Manfredi, Luisa; Gupta, Shalini; Phelps, Tyler E; Bowe, Thomas R; Rubinsky, Anna D; Burden, Jennifer L; Harris, Alex H S

    2017-04-04

    In the U.S. Department of Veterans Affairs (VA), residential treatment programs are an important part of the continuum of care for patients with a substance use disorder (SUD). However, a limited number of program-specific measures to identify quality gaps in SUD residential programs exist. This study aimed to: (1) Develop metrics for two pre-admission processes: Wait Time and Engagement While Waiting, and (2) Interview program management and staff about program structures and processes that may contribute to performance on these metrics. The first aim sought to supplement the VA's existing facility-level performance metrics with SUD program-level metrics in order to identify high-value targets for quality improvement. The second aim recognized that not all key processes are reflected in the administrative data, and even when they are, new insight may be gained from viewing these data in the context of day-to-day clinical practice. VA administrative data from fiscal year 2012 were used to calculate pre-admission metrics for 97 programs (63 SUD Residential Rehabilitation Treatment Programs (SUD RRTPs); 34 Mental Health Residential Rehabilitation Treatment Programs (MH RRTPs) with a SUD track). Interviews were then conducted with management and front-line staff to learn what factors may have contributed to high or low performance, relative to the national average for their program type. We hypothesized that speaking directly to residential program staff may reveal innovative practices, areas for improvement, and factors that may explain system-wide variability in performance. Average wait time for admission was 16 days (SUD RRTPs: 17 days; MH RRTPs with a SUD track: 11 days), with 60% of Veterans waiting longer than 7 days. For these Veterans, engagement while waiting occurred in an average of 54% of the waiting weeks (range 3-100% across programs). Fifty-nine interviews representing 44 programs revealed factors perceived to potentially impact performance in

  10. Interpolating and Estimating Horizontal Diffuse Solar Irradiation to Provide UK-Wide Coverage: Selection of the Best Performing Models

    Directory of Open Access Journals (Sweden)

    Diane Palmer

    2017-02-01

    Full Text Available Plane-of-array (PoA) irradiation data is a requirement to simulate the energetic performance of photovoltaic devices (PVs). Normally, solar data is only available as global horizontal irradiation, for a limited number of locations, and typically in hourly time resolution. One approach to handling this restricted data is to enhance it initially by interpolation to the location of interest; next, it must be translated to PoA data by separately considering the diffuse and the beam components. There are many methods of interpolation. This research selects ordinary kriging as the best performing technique by studying mathematical properties, experimentation and leave-one-out cross validation. Likewise, a number of different translation models has been developed, most of them parameterised for specific measurement setups and locations. The work presented identifies the optimum approach for the UK on a national scale. The global horizontal irradiation is split into its constituent parts. Diverse separation models were tried. The results of each separation algorithm were checked against measured data distributed across the UK. It became apparent that while there is little difference between procedures (14 Wh/m² mean bias error (MBE), 12 Wh/m² root mean square error (RMSE)), the Ridley, Boland, Lauret equation (a universal split algorithm) consistently performed well. The combined interpolation/separation RMSE is 86 Wh/m².
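
    The two error measures quoted above are simple to compute; in the sketch below the irradiation values are invented, standing in for interpolated-versus-measured horizontal diffuse data.

```python
import numpy as np

def mbe(predicted, observed):
    """Mean bias error: positive values indicate systematic over-prediction."""
    return float(np.mean(np.asarray(predicted) - np.asarray(observed)))

def rmse(predicted, observed):
    """Root mean square error of predictions against measurements."""
    return float(np.sqrt(np.mean((np.asarray(predicted) - np.asarray(observed)) ** 2)))

observed  = [120.0, 340.0, 510.0, 280.0, 90.0]   # hypothetical Wh/m^2 values
predicted = [135.0, 330.0, 540.0, 300.0, 80.0]
print(mbe(predicted, observed), rmse(predicted, observed))
```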

  11. Reynolds number scalability of bristled wings performing clap and fling

    Science.gov (United States)

    Jacob, Skyler; Kasoju, Vishwa; Santhanakrishnan, Arvind

    2017-11-01

    Tiny flying insects such as thrips show a distinctive physical adaptation in the use of bristled wings. Thrips use wing-wing interaction kinematics for flapping, in which a pair of wings clap together at the end of upstroke and fling apart at the beginning of downstroke. Previous studies have shown that the use of bristled wings can reduce the forces needed for clap and fling at Reynolds number (Re) on the order of 10. This study examines if the fluid dynamic advantages of using bristled wings also extend to higher Re on the order of 100. A robotic clap and fling platform was used for this study, in which a pair of physical wing models were programmed to execute clap and fling kinematics. Force measurements were conducted on solid (non-bristled) and bristled wing pairs. The results show lift and drag forces were both lower for bristled wings when compared to solid wings for Re ranging from 1 to 10, effectively increasing the peak lift to peak drag ratio of bristled wings. However, the peak lift to peak drag ratio was lower for bristled wings at Re = 120 as compared to solid wings, suggesting that bristled wings may be uniquely advantageous for Re on the order of 1-10. Flow structures visualized using particle image velocimetry (PIV) and their impact on force production will be presented.

  12. Band-to-Band Tunneling Transistors: Scalability and Circuit Performance

    Science.gov (United States)

    2013-05-01

    The largest company in the world is now a technology company (Apple Inc.) whose products are all enabled by transistors [2]. (The remainder of this abstract is garbled in the source; the surviving fragments concern low-power multi-core mobile processors such as the NVIDIA Tegra 3 and citations to an NVIDIA white paper and to Vandenberghe et al. on field-induced quantum confinement in tunneling transistors.)

  13. Linux containers networking: performance and scalability of kernel modules

    NARCIS (Netherlands)

    Claassen, J.; Koning, R.; Grosso, P.; Oktug Badonnel, S.; Ulema, M.; Cavdar, C.; Zambenedetti Granville, L.; dos Santos, C.R.P.

    2016-01-01

    Linux container virtualisation is gaining momentum as lightweight technology to support cloud and distributed computing. Applications relying on container architectures might at times rely on inter-container communication, and container networking solutions are emerging to address this need.

  14. Effect of provider experience on clinician-performed ultrasonography for hydronephrosis in patients with suspected renal colic.

    Science.gov (United States)

    Herbst, Meghan K; Rosenberg, Graeme; Daniels, Brock; Gross, Cary P; Singh, Dinesh; Molinaro, Annette M; Luty, Seth; Moore, Christopher L

    2014-09-01

    Hydronephrosis is readily visible on ultrasonography and is a strong predictor of ureteral stones, but ultrasonography is a user-dependent technology and the test characteristics of clinician-performed ultrasonography for hydronephrosis are incompletely characterized, as is the effect of ultrasound fellowship training on predictive accuracy. We seek to determine the test characteristics of ultrasonography for detecting hydronephrosis when performed by clinicians with a wide range of experience under conditions of direct patient care. This was a prospective study of patients presenting to an academic medical center emergency department with suspected renal colic. Before computed tomography (CT) results, an emergency clinician performed bedside ultrasonography, recording the presence and degree of hydronephrosis. CT data were abstracted from the dictated radiology report by an investigator blinded to the bedside ultrasonographic results. Test characteristics of bedside ultrasonography for hydronephrosis were calculated with the CT scan as the reference standard, with test characteristics compared by clinician experience stratified into 4 levels: attending physicians with emergency ultrasound fellowship training, attending physicians without emergency ultrasound fellowship training, ultrasound experienced non-attending physician clinicians (at least 2 weeks of ultrasound training), and ultrasound inexperienced non-attending physician clinicians (physician assistants, nurse practitioners, off-service rotators, and first-year emergency medicine residents with fewer than 2 weeks of ultrasound training). There were 670 interpretable bedside ultrasonographic tests performed by 144 unique clinicians, 80.9% of which were performed by clinicians directly involved in the care of the patient. On CT, 47.5% of all subjects had hydronephrosis and 47.0% had a ureteral stone. Among all clinicians, ultrasonography had a sensitivity of 72.6% (95% confidence interval [CI] 65.4% to 78

  15. Scalability Dilemma and Statistic Multiplexed Computing — A Theory and Experiment

    Directory of Open Access Journals (Sweden)

    Justin Yuan Shi

    2017-08-01

    Full Text Available For the last three decades, end-to-end computing paradigms, such as MPI (Message Passing Interface), RPC (Remote Procedure Call) and RMI (Remote Method Invocation), have been the de facto paradigms for distributed and parallel programming. Despite their successes, applications built using these paradigms suffer because the probability of a crash grows in proportion to application size. Checkpoint/restore and backup/recovery are the only means to save otherwise lost critical information. The scalability dilemma is the practical challenge that the probability of data loss increases as the application scales in size. The theoretical significance of this practical challenge is that it undermines the fundamental structure of the scientific discovery process and of mission-critical services in production today. In 1997, the direct use of the end-to-end reference model in distributed programming was recognized as a fallacy and the scalability dilemma was predicted. However, this warning was overrun by the passage of time. Today, the rapidly growing volume of digitized data demands solutions to these increasingly critical scalability challenges. Computing architecture scalability, although loosely defined, is now front and center in large-scale computing efforts. Constrained only by the economic law of diminishing returns, this paper proposes a narrow definition of a Scalable Computing Service (SCS). Three scalability tests are also proposed in order to distinguish service architecture flaws from poor application programming. Scalable data-intensive services require additional treatment; the data storage is therefore assumed reliable in this paper. A single-sided Statistic Multiplexed Computing (SMC) paradigm is proposed, and a UVR (Unidirectional Virtual Ring) SMC architecture is examined under the SCS tests. SMC was designed to circumvent the well-known impossibility of end-to-end paradigms. It relies on the proven statistic multiplexing principle to deliver reliable service

  16. Scalable and uniform 1D nanoparticles by synchronous polymerization, crystallization and self-assembly

    Science.gov (United States)

    Boott, Charlotte E.; Gwyther, Jessica; Harniman, Robert L.; Hayward, Dominic W.; Manners, Ian

    2017-08-01

    The preparation of well-defined nanoparticles based on soft matter, using solution-processing techniques on a commercially viable scale, is a major challenge of widespread importance. Self-assembly of block copolymers in solvents that selectively solvate one of the segments provides a promising route to core-corona nanoparticles (micelles) with a wide range of potential uses. Nevertheless, significant limitations to this approach also exist. For example, the solution processing of block copolymers generally follows a separate synthesis step and is normally performed at high dilution. Moreover, non-spherical micelles—which are promising for many applications—are generally difficult to access, samples are polydisperse and precise dimensional control is not possible. Here we demonstrate the formation of platelet and cylindrical micelles at concentrations up to 25% solids via a one-pot approach—starting from monomers—that combines polymerization-induced and crystallization-driven self-assembly. We also show that performing the procedure in the presence of small seed micelles allows the scalable formation of low dispersity samples of cylindrical micelles of controlled length up to three micrometres.
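
    The "exponential" in iterative exponential growth is simply chain-length doubling per coupling cycle, as the back-of-the-envelope sketch below shows; end-group masses are ignored, which is why real molar masses such as the 4,023 g/mol product differ slightly from idealized powers of two.

```python
def cycles_needed(target_units, start_units=1):
    """Each Flow-IEG cycle couples two identical chains, doubling the length:
    units after k cycles = start_units * 2**k (end groups ignored)."""
    k, units = 0, start_units
    while units < target_units:
        units *= 2
        k += 1
    return k, units

print(cycles_needed(32))   # (5, 32): five coupling cycles yield a 32-mer
```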

  17. Performance and scaling of locally-structured grid methods for partial differential equations

    Energy Technology Data Exchange (ETDEWEB)

    Colella, Phillip; Bell, John; Keen, Noel; Ligocki, Terry; Lijewski, Michael; Van Straalen, Brian

    2007-07-19

    In this paper, we discuss some of the issues in obtaining high performance for block-structured adaptive mesh refinement software for partial differential equations. We show examples in which AMR scales to thousands of processors. We also discuss a number of metrics for performance and scalability that can provide a basis for understanding the advantages and disadvantages of this approach.
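
    Two of the standard metrics such studies report can be written in a few lines; the timings below are hypothetical.

```python
def strong_scaling_efficiency(t_serial, t_parallel, p):
    """Speedup divided by the ideal speedup p (fixed total problem size)."""
    return (t_serial / t_parallel) / p

def weak_scaling_efficiency(t_baseline, t_parallel):
    """Problem size grows with p, so the ideal runtime stays flat."""
    return t_baseline / t_parallel

# Hypothetical AMR timings in seconds.
print(strong_scaling_efficiency(1000.0, 1.2, 1024))   # ~0.81
print(weak_scaling_efficiency(10.0, 12.5))            # 0.8
```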

  18. Resource Allocation for OFDMA-Based Cognitive Radio Networks with Application to H.264 Scalable Video Transmission

    Directory of Open Access Journals (Sweden)

    Coon, Justin P.

    2011-01-01

    Full Text Available Resource allocation schemes for orthogonal frequency division multiple access (OFDMA)-based cognitive radio (CR) networks that impose minimum and maximum rate constraints are considered. To demonstrate the practical application of such systems, we consider the transmission of scalable video sequences. An integer programming (IP) formulation of the problem is presented, which provides the optimal solution when solved using common discrete programming methods. Due to the computational complexity involved in such an approach and its unsuitability for dynamic cognitive radio environments, we propose to use the method of lift-and-project to obtain a stronger formulation for the resource allocation problem such that the integrality gap between the integer program and its linear relaxation is reduced. A simple branching operation is then performed that eliminates any noninteger values at the output of the linear program solvers. Simulation results demonstrate that this simple technique results in solutions very close to the optimum.
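
    A toy version of the relax-then-branch pipeline is sketched below, with SciPy's linprog solving the relaxation and a depth-first branch on the most fractional variable standing in for the paper's simple branching operation; the rates, powers, and budget are invented, and the lift-and-project cut generation is omitted.

```python
import numpy as np
from scipy.optimize import linprog

# Toy subcarrier assignment: maximize total rate under a power budget, x_i in {0,1}.
rates = np.array([4.0, 3.0, 2.5, 1.5])
power = np.array([2.0, 1.0, 1.5, 0.5])
budget = 3.0

def solve_relaxation(fixed):
    """LP relaxation with some variables pinned to 0 or 1 by earlier branches."""
    bounds = [fixed.get(i, (0, 1)) for i in range(len(rates))]
    return linprog(-rates, A_ub=[power], b_ub=[budget], bounds=bounds)

def branch_and_round(fixed=None):
    fixed = fixed or {}
    res = solve_relaxation(fixed)
    if not res.success:
        return None
    fractional = [i for i, v in enumerate(res.x) if 1e-6 < v < 1 - 1e-6]
    if not fractional:
        return res.x.round(), -res.fun            # integral: total rate achieved
    i = max(fractional, key=lambda j: min(res.x[j], 1 - res.x[j]))
    best = None
    for val in (1, 0):                            # branch x_i = 1, then x_i = 0
        sub = branch_and_round({**fixed, i: (val, val)})
        if sub is not None and (best is None or sub[1] > best[1]):
            best = sub
    return best

print(branch_and_round())   # an integral assignment and its total rate
```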

  19. Embedded DCT and wavelet methods for fine granular scalable video: analysis and comparison

    Science.gov (United States)

    van der Schaar-Mitrea, Mihaela; Chen, Yingwei; Radha, Hayder

    2000-04-01

    Video transmission over bandwidth-varying networks is becoming increasingly important due to emerging applications such as streaming of video over the Internet. The fundamental obstacle in designing such systems resides in the varying characteristics of the Internet (i.e. bandwidth variations and packet-loss patterns). In MPEG-4, a new SNR scalability scheme called Fine-Granular-Scalability (FGS), which can adapt in real time (i.e. at transmission time) to Internet bandwidth variations, is currently under standardization. The FGS framework consists of a non-scalable motion-predicted base-layer and an intra-coded fine-granular scalable enhancement layer. For example, the base layer can be coded using a DCT-based MPEG-4 compliant, highly efficient video compression scheme. Subsequently, the difference between the original and decoded base-layer is computed, and the resulting FGS-residual signal is intra-frame coded with an embedded scalable coder. In order to achieve high coding efficiency when compressing the FGS enhancement layer, it is crucial to analyze the nature and characteristics of residual signals common to the SNR scalability framework (including FGS). In this paper, we present a thorough analysis of SNR residual signals by evaluating their statistical properties, compaction efficiency and frequency characteristics. The signal analysis revealed that the energy compaction of the DCT and wavelet transforms is limited and that the frequency characteristics of SNR residual signals decay rather slowly. Moreover, the blockiness artifacts of the low bit-rate coded base-layer result in artificial high frequencies in the residual signal. Subsequently, a variety of wavelet and embedded DCT coding techniques applicable to the FGS framework are evaluated and their results are interpreted based on the identified signal properties. As expected from the theoretical signal analysis, the rate-distortion performances of the embedded wavelet and DCT-based coders are very

  20. Scalability of Semi-Implicit Time Integrators for Nonhydrostatic Galerkin-based Atmospheric Models on Large Scale Cluster

    Science.gov (United States)

    2011-01-01

    present performance statistics to explain the scalability behavior. Keywords: atmospheric models, time integrators, MPI, scalability, performance. [...] across inter-element boundaries. Basis functions are constructed as tensor products of Lagrange polynomials, ψ_i(x) = h_α(ξ) ⊗ h_β(η) ⊗ h_γ(ζ), where h_α
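
    The surviving tensor-product fragment is easy to make concrete; in this sketch, equispaced nodes stand in for the Gauss-Lobatto points that spectral-element codes typically use.

```python
import numpy as np

def lagrange_cardinal(nodes, alpha, x):
    """1D Lagrange cardinal polynomial: 1 at nodes[alpha], 0 at every other node."""
    value = 1.0
    for m, xm in enumerate(nodes):
        if m != alpha:
            value *= (x - xm) / (nodes[alpha] - xm)
    return value

def tensor_basis(nodes, alpha, beta, gamma, xi, eta, zeta):
    """psi_i(x) = h_alpha(xi) * h_beta(eta) * h_gamma(zeta) on the reference cube."""
    return (lagrange_cardinal(nodes, alpha, xi)
            * lagrange_cardinal(nodes, beta, eta)
            * lagrange_cardinal(nodes, gamma, zeta))

nodes = np.array([-1.0, 0.0, 1.0])      # order-2 element; GLL points in real codes
print(tensor_basis(nodes, 1, 1, 1, 0.0, 0.0, 0.0))   # 1.0 at its own node
```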

  1. CORAL Server and CORAL Server Proxy: Scalable Access to Relational Databases from CORAL Applications

    CERN Document Server

    Valassi, A; Kalkhof, A; Salnikov, A; Wache, M

    2011-01-01

    The CORAL software is widely used at CERN for accessing the data stored by the LHC experiments using relational database technologies. CORAL provides a C++ abstraction layer that supports data persistency for several backends and deployment models, including local access to SQLite files, direct client access to Oracle and MySQL servers, and read-only access to Oracle through the FroNTier web server and cache. Two new components have recently been added to CORAL to implement a model involving a middle tier "CORAL server" deployed close to the database and a tree of "CORAL server proxy" instances, with data caching and multiplexing functionalities, deployed close to the client. The new components are meant to provide advantages for read-only and read-write data access, in both offline and online use cases, in the areas of scalability and performance (multiplexing for several incoming connections, optional data caching) and security (authentication via proxy certificates). A first implementation of the two new c...

  2. Historical building monitoring using an energy-efficient scalable wireless sensor network architecture.

    Science.gov (United States)

    Capella, Juan V; Perles, Angel; Bonastre, Alberto; Serrano, Juan J

    2011-01-01

    We present a set of novel low power wireless sensor nodes designed for monitoring wooden masterpieces and historical buildings, in order to perform an early detection of pests. Although our previous star-based system configuration has been in operation for more than 13 years, it does not scale well for sensorization of large buildings or when deploying hundreds of nodes. In this paper we demonstrate the feasibility of a cluster-based dynamic-tree hierarchical Wireless Sensor Network (WSN) architecture where realistic assumptions of radio frequency data transmission are applied to cluster construction, and a mix of heterogeneous nodes are used to minimize economic cost of the whole system and maximize power saving of the leaf nodes. Simulation results show that the specialization of a fraction of the nodes by providing better antennas and some energy harvesting techniques can dramatically extend the life of the entire WSN and reduce the cost of the whole system. A demonstration of the proposed architecture with a new routing protocol and applied to termite pest detection has been implemented on a set of new nodes and should last for about 10 years, but it provides better scalability, reliability and deployment properties.

  4. Scalable synthesis of sequence-defined, unimolecular macromolecules by Flow-IEG

    Science.gov (United States)

    Leibfarth, Frank A.; Johnson, Jeremiah A.; Jamison, Timothy F.

    2015-01-01

    We report a semiautomated synthesis of sequence and architecturally defined, unimolecular macromolecules through a marriage of multistep flow synthesis and iterative exponential growth (Flow-IEG). The Flow-IEG system performs three reactions and an in-line purification in a total residence time of under 10 min, effectively doubling the molecular weight of an oligomeric species in an uninterrupted reaction sequence. Further iterations using the Flow-IEG system enable an exponential increase in molecular weight. Incorporating a variety of monomer structures and branching units provides control over polymer sequence and architecture. The synthesis of a uniform macromolecule with a molecular weight of 4,023 g/mol is demonstrated. The user-friendly nature, scalability, and modularity of Flow-IEG provide a general strategy for the automated synthesis of sequence-defined, unimolecular macromolecules. Flow-IEG is thus an enabling tool for theory validation, structure–property studies, and advanced applications in biotechnology and materials science. PMID:26269573
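    The exponential-growth arithmetic is easy to check: each coupling cycle joins two identical oligomers, so chain length and (to first order) molar mass double per cycle. A minimal sketch, assuming a hypothetical 250 g/mol repeat unit and ignoring end-group masses:

```python
# Back-of-the-envelope illustration of iterative exponential growth (IEG):
# each coupling cycle joins two identical oligomers, roughly doubling the
# molar mass. The monomer mass is a made-up placeholder, not a value from
# the paper.
monomer_mass = 250.0          # g/mol, hypothetical repeat-unit mass
units, mass = 1, monomer_mass
for cycle in range(1, 5):
    units *= 2                # oligomer + oligomer -> twice the length
    mass *= 2                 # end-group corrections ignored in this sketch
    print(f"after cycle {cycle}: {units} units, ~{mass:.0f} g/mol")
# Four cycles already give a 16-mer at ~4000 g/mol, the scale of the
# uniform 4,023 g/mol macromolecule reported in the abstract.
```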

  5. Hardware Implementation of Lossless Adaptive and Scalable Hyperspectral Data Compression for Space

    Science.gov (United States)

    Aranki, Nazeeh; Keymeulen, Didier; Bakhshi, Alireza; Klimesh, Matthew

    2009-01-01

    On-board lossless hyperspectral data compression reduces data volume in order to meet NASA and DoD limited downlink capabilities. The technique also improves signature extraction, object recognition and feature classification capabilities by providing exact reconstructed data on constrained downlink resources. At JPL a novel, adaptive and predictive technique for lossless compression of hyperspectral data was recently developed. This technique uses an adaptive filtering method and achieves a combination of low complexity and compression effectiveness that far exceeds state-of-the-art techniques currently in use. The JPL-developed 'Fast Lossless' algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. It is of low computational complexity and thus well-suited for implementation in hardware. A modified form of the algorithm that is better suited for data from pushbroom instruments is generally appropriate for flight implementation. A scalable field programmable gate array (FPGA) hardware implementation was developed. The FPGA implementation achieves a throughput performance of 58 Msamples/sec, which can be increased to over 100 Msamples/sec in a parallel implementation that uses twice the hardware resources. This paper describes the hardware implementation of the 'Modified Fast Lossless' compression algorithm on an FPGA. The FPGA implementation targets the current state-of-the-art FPGAs (Xilinx Virtex IV and V families) and compresses one sample every clock cycle to provide a fast and practical real-time solution for space applications.
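    The predict-and-encode pattern behind such compressors can be sketched briefly: predict each sample from the co-located sample in the previous band, then entropy-code the mapped residual. The fragment below uses a fixed previous-band predictor and Rice coding as illustrative stand-ins; the actual Fast Lossless algorithm uses an adaptive filter and is not reproduced here.

```python
# Generic predict-and-encode sketch for lossless compression of banded data:
# predict each sample from the co-located sample in the previous band, map the
# signed residual to a non-negative integer, and Rice-code it. Illustrative
# only: JPL's Fast Lossless algorithm uses an adaptive filter, not this fixed
# previous-band predictor.

def zigzag(d: int) -> int:
    """Map a signed residual to a non-negative integer."""
    return 2 * d if d >= 0 else -2 * d - 1

def rice_encode(value: int, k: int) -> str:
    """Unary-coded quotient, then k remainder bits."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

def compress_bands(bands, k=3):
    bits, prev = [], None
    for band in bands:                       # each band: a list of samples
        for i, sample in enumerate(band):
            pred = prev[i] if prev is not None else 0
            bits.append(rice_encode(zigzag(sample - pred), k))
        prev = band
    return "".join(bits)

bands = [[100, 102, 101], [101, 103, 101], [99, 104, 100]]
print(len(compress_bands(bands)), "bits vs", 3 * 3 * 16, "raw bits")
```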

  6. Parallel peak pruning for scalable SMP contour tree computation

    Energy Technology Data Exchange (ETDEWEB)

    Carr, Hamish A. [Univ. of Leeds (United Kingdom); Weber, Gunther H. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Davis, CA (United States); Sewell, Christopher M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ahrens, James P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-09

    As data sets grow to exascale, automated data analysis and visualisation are increasingly important, to intermediate human understanding and to reduce demands on disk storage via in situ analysis. Trends in the architecture of high performance computing systems necessitate analysis algorithms that make effective use of combinations of massively multicore and distributed systems. One of the principal analytic tools is the contour tree, which analyses relationships between contours to identify features of more than local importance. Unfortunately, the predominant algorithms for computing the contour tree are explicitly serial, and founded on serial metaphors, which has limited the scalability of this form of analysis. While there is some work on distributed contour tree computation, and separately on hybrid GPU-CPU computation, there is no efficient algorithm with strong formal guarantees on performance allied with fast practical performance. In this paper, we report the first shared-memory SMP algorithm for fully parallel contour tree computation, with formal guarantees of O(lg n lg t) parallel steps and O(n lg n) work, and implementations with up to 10x parallel speed up in OpenMP and up to 50x speed up in NVIDIA Thrust.
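    The sweep-and-merge step that contour tree algorithms build on can be illustrated with a minimal sequential sketch: sweep vertices from high to low value and record where superlevel-set components merge, using union-find. This shows only the serial baseline idea, not the paper's SMP peak-pruning algorithm:

```python
# Minimal sequential merge-point sketch on a scalar field using union-find.
# A contour tree combines a join tree and a split tree; this fragment shows
# only the high-to-low sweep whose serial ordering parallel algorithms such
# as peak pruning reorganize.

def merge_points(values, neighbors):
    parent = {}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]    # path halving
            x = parent[x]
        return x
    merges = []
    for v in sorted(range(len(values)), key=lambda u: -values[u]):
        comps = {find(w) for w in neighbors[v] if w in parent}
        parent[v] = v
        if len(comps) > 1:
            merges.append(v)                 # v is a saddle joining >1 components
        for c in comps:
            parent[c] = v                    # union everything into v's component
    return merges

vals = [0, 3, 1, 4, 1, 5, 0]                 # peaks at indices 1, 3, 5
nbrs = {i: [j for j in (i - 1, i + 1) if 0 <= j < len(vals)] for i in range(len(vals))}
print(merge_points(vals, nbrs))              # [2, 4]: where superlevel sets merge
```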

  7. Patient’s expectation on communication performances community of Dental Health Services providers located in urban and rural area

    Directory of Open Access Journals (Sweden)

    Taufan Bramantoro

    2013-03-01

    Background: The quality of a dentist's communication skills is considered one of the important aspects in assessing the quality of dental health services. During initial interviews conducted at the Ketabang, Dupak, and Kepadangan community dental health services in Surabaya and Sidoarjo, Indonesia, it appeared that eighty percent of initial respondents were not satisfied with the communication aspect. Community Dental Health Services (CDHS) need to assess communication performance based on community characteristics in an effort to promote the quality and effectiveness of dental health services. Purpose: The objective of this study was to analyze the priorities of patients' expectation values regarding dentists' communication performance in CDHSs located in urban and rural areas. Methods: The study was conducted in the Ketabang Surabaya, Dupak Surabaya and Kepadangan Sidoarjo CDHSs. The participants were 400 patients above 18 years old. Participants' expectation values were assessed using a questionnaire on the communication performance of dental health services. Results: For patients in the urban CDHSs, two priority aspects had high values, namely the clarity of instructions and the dentist's ability to listen actively to the patient, while for patients in the rural CDHS the clarity of instructions and the dentist-patient relationship were the aspects with high values. Conclusion: Patients in a CDHS located in a rural area expect more dentist-patient interpersonal relationship performance than patients in a CDHS located in an urban area. This finding provides valuable information for CDHSs in developing communication strategies based on community characteristics.

  8. Building Scalable Knowledge Graphs for Earth Science

    Science.gov (United States)

    Ramachandran, R.; Maskey, M.; Gatlin, P. N.; Zhang, J.; Duan, X.; Bugbee, K.; Christopher, S. A.; Miller, J. J.

    2017-12-01

    Estimates indicate that the world's information will grow by 800% in the next five years. In any given field, a single researcher or a team of researchers cannot keep up with this rate of knowledge expansion without the help of cognitive systems. Cognitive computing, defined as the use of information technology to augment human cognition, can help tackle large systemic problems. Knowledge graphs, one of the foundational components of cognitive systems, link key entities in a specific domain with other entities via relationships. Researchers could mine these graphs to make probabilistic recommendations and to infer new knowledge. At this point, however, there is a dearth of tools to generate scalable knowledge graphs from the existing corpus of scientific literature for Earth science research. Our project is currently developing an end-to-end automated methodology for incrementally constructing knowledge graphs for Earth science. Semantic Entity Recognition (SER) is one of the key steps in this methodology. SER for Earth science uses external resources (including metadata catalogs and controlled vocabulary) as references to guide entity extraction and recognition (i.e., labeling) from unstructured text, in order to build a large training set to seed the subsequent auto-learning component in our algorithm. Results from several SER experiments will be presented as well as lessons learned.
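    The vocabulary-guided labeling step can be illustrated with a toy sketch: match terms from a controlled vocabulary in raw text to produce labeled spans that could seed a training set. The three-entry vocabulary below is invented for illustration and is not the project's actual catalog:

```python
# Sketch of vocabulary-guided semantic entity recognition (SER): known terms
# from a controlled vocabulary are matched in raw text to produce labeled
# examples that could seed a learning component. The tiny vocabulary is
# illustrative, not the project's actual metadata catalogs.
import re

vocabulary = {
    "MODIS": "INSTRUMENT",
    "Terra": "PLATFORM",
    "aerosol optical depth": "VARIABLE",
}

def label_entities(text):
    labels = []
    for term, etype in vocabulary.items():
        for m in re.finditer(re.escape(term), text, flags=re.IGNORECASE):
            labels.append((m.start(), m.end(), etype, m.group(0)))
    return sorted(labels)

sentence = "We derive aerosol optical depth from MODIS observations aboard Terra."
for start, end, etype, surface in label_entities(sentence):
    print(f"{surface!r} -> {etype} [{start}:{end}]")
```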

  9. CODA: A scalable, distributed data acquisition system

    International Nuclear Information System (INIS)

    Watson, W.A. III; Chen, J.; Heyes, G.; Jastrzembski, E.; Quarrie, D.

    1994-01-01

    A new data acquisition system has been designed for physics experiments scheduled to run at CEBAF starting in the summer of 1994. This system runs on Unix workstations connected via ethernet, FDDI, or other network hardware to multiple intelligent front end crates -- VME, CAMAC or FASTBUS. CAMAC crates may either contain intelligent processors, or may be interfaced to VME. The system is modular and scalable, from a single front end crate and one workstation linked by ethernet, to as many as 32 clusters of front end crates ultimately connected via a high speed network to a set of analysis workstations. The system includes an extensible, device independent slow controls package with drivers for CAMAC, VME, and high voltage crates, as well as a link to CEBAF accelerator controls. All distributed processes are managed by standard remote procedure calls propagating change-of-state requests, or reading and writing program variables. Custom components may be easily integrated. The system is portable to any front end processor running the VxWorks real-time kernel, and to most workstations supplying a few standard facilities such as rsh and X-windows, and Motif and socket libraries. Sample implementations exist for 2 Unix workstation families connected via ethernet or FDDI to VME (with interfaces to FASTBUS or CAMAC), and via ethernet to FASTBUS or CAMAC.

  10. Ancestors protocol for scalable key management

    Directory of Open Access Journals (Sweden)

    Dieter Gollmann

    2010-06-01

    Group key management is an important functional building block for any secure multicast architecture and has therefore been extensively studied in the literature. The main proposed protocol is Adaptive Clustering for Scalable Group Key Management (ASGK). According to the ASGK protocol, the multicast group is divided into clusters, where each cluster consists of areas of members. Each cluster uses its own Traffic Encryption Key (TEK). These clusters are updated periodically depending on the dynamism of the members during the secure session. A modified protocol has been proposed based on ASGK with some modifications to balance the number of affected members and the encryption/decryption overhead for any number of areas when a member joins or leaves the group. This modified protocol is called the Ancestors protocol. According to the Ancestors protocol, every area receives the dynamism of the members from its parents. The main objective of the modified protocol is to reduce the number of affected members when members leave or join, so that the 1-affects-n overhead is reduced. A comparative study has been done between the ASGK protocol and the modified protocol. According to the comparative results, the modified protocol always outperforms the ASGK protocol.

  11. Scalable Combinatorial Tools for Health Disparities Research

    Directory of Open Access Journals (Sweden)

    Michael A. Langston

    2014-10-01

    Despite staggering investments made in unraveling the human genome, current estimates suggest that as much as 90% of the variance in cancer and chronic diseases can be attributed to factors outside an individual’s genetic endowment, particularly to environmental exposures experienced across his or her life course. New analytical approaches are clearly required as investigators turn to complicated systems theory and ecological, place-based and life-history perspectives in order to understand more clearly the relationships between social determinants, environmental exposures and health disparities. While traditional data analysis techniques remain foundational to health disparities research, they are easily overwhelmed by the ever-increasing size and heterogeneity of available data needed to illuminate latent gene × environment interactions. This has prompted the adaptation and application of scalable combinatorial methods, many from genome science research, to the study of population health. Most of these powerful tools are algorithmically sophisticated, highly automated and mathematically abstract. Their utility motivates the main theme of this paper, which is to describe real applications of innovative transdisciplinary models and analyses in an effort to help move the research community closer toward identifying the causal mechanisms and associated environmental contexts underlying health disparities. The public health exposome is used as a contemporary focus for addressing the complex nature of this subject.

  12. Percolator: Scalable Pattern Discovery in Dynamic Graphs

    Energy Technology Data Exchange (ETDEWEB)

    Choudhury, Sutanay; Purohit, Sumit; Lin, Peng; Wu, Yinghui; Holder, Lawrence B.; Agarwal, Khushbu

    2018-02-06

    We demonstrate Percolator, a distributed system for graph pattern discovery in dynamic graphs. In contrast to conventional mining systems, Percolator advocates efficient pattern mining schemes that (1) support pattern detection with keywords; (2) integrate incremental and parallel pattern mining; and (3) support analytical queries such as trend analysis. The core idea of Percolator is to dynamically decide and verify a small fraction of patterns and their instances that must be inspected in response to buffered updates in dynamic graphs, with a total mining cost independent of graph size. We demonstrate a) the feasibility of incremental pattern mining by walking through each component of Percolator, b) the efficiency and scalability of Percolator over the sheer size of real-world dynamic graphs, and c) how the user-friendly GUI of Percolator interacts with users to support keyword-based queries that detect, browse and inspect trending patterns. We also demonstrate two use cases of Percolator, in social media trend analysis and academic collaboration analysis, respectively.
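    The incremental flavor of such systems can be conveyed with a much simpler stand-in: maintaining a triangle count under edge insertions, where each update inspects only the neighborhoods of the new edge's endpoints rather than the whole graph. A sketch of that idea only, not Percolator's actual pattern-mining engine:

```python
# Illustration of the incremental idea: when an edge arrives, only patterns
# touching its endpoints need re-examination, so the per-update cost is
# independent of total graph size. Triangle counting stands in for the far
# richer pattern mining that Percolator performs.
from collections import defaultdict

adj = defaultdict(set)
triangles = 0

def add_edge(u, v):
    global triangles
    new = len(adj[u] & adj[v])   # common neighbors close new triangles
    triangles += new
    adj[u].add(v); adj[v].add(u)
    return new

for edge in [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d"), ("b", "d")]:
    created = add_edge(*edge)
    print(f"+{edge}: {created} new, {triangles} total")
```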

  13. Scalable conditional induction variables (CIV) analysis

    KAUST Repository

    Oancea, Cosmin E.

    2015-02-01

    Subscripts using induction variables that cannot be expressed as a formula in terms of the enclosing-loop indices appear in the low-level implementation of common programming abstractions such as Alter, or stack operations, and pose significant challenges to automatic parallelization. Because the complexity of such induction variables is often due to their conditional evaluation across the iteration space of loops, we name them Conditional Induction Variables (CIV). This paper presents a flow-sensitive technique that summarizes both such CIV-based and affine subscripts to program level, using the same representation. Our technique requires no modifications of our dependence tests, which are agnostic to the original shape of the subscripts, and is more powerful than previously reported dependence tests that rely on the pairwise disambiguation of read-write references. We have implemented the CIV analysis in our parallelizing compiler and evaluated its impact on five Fortran benchmarks. We have found that there are many important loops using CIV subscripts and that our analysis can lead to their scalable parallelization. This in turn has led to the parallelization of the benchmark programs they appear in.

  14. A Programmable, Scalable-Throughput Interleaver

    Directory of Open Access Journals (Sweden)

    E. J. C. Rijshouwer

    2010-01-01

    The interleaver stages of digital communication standards show a surprisingly large variation in throughput, state sizes, and permutation functions. Furthermore, data rates for 4G standards such as LTE-Advanced will exceed typical baseband clock frequencies of handheld devices. Multistream operation for Software Defined Radio and iterative decoding algorithms will call for ever higher interleave data rates. Our interleave machine is built around 8 single-port SRAM banks and can be programmed to generate up to 8 addresses every clock cycle. The scalable architecture combines SIMD and VLIW concepts with an efficient resolution of bank conflicts. A wide range of cellular, connectivity, and broadcast interleavers have been mapped on this machine, with throughputs up to more than 0.5 Gsymbol/second. Although it was designed for channel interleaving, the application domain of the interleaver extends also to Turbo interleaving. The presented configuration of the architecture is designed as a part of a programmable outer receiver on a prototype board. It offers (near) universal programmability to enable the implementation of new interleavers. The interleaver measures 2.09 mm² in 65 nm CMOS (including memories) and proves functional on silicon.

  16. Decentralized Control for Scalable Quadcopter Formations

    Directory of Open Access Journals (Sweden)

    Qasim Ali

    2016-01-01

    An innovative framework has been developed for the teamwork of two quadcopter formations, each having its specified formation geometry, assigned task, and matching control scheme. Position control for quadcopters in one of the formations has been implemented through a Linear Quadratic Regulator Proportional Integral (LQR PI) control scheme based on an explicit model-following scheme. Quadcopters in the other formation are controlled through an LQR PI servomechanism control scheme. These two control schemes are compared in terms of their performance and control effort. Both formations are commanded by respective ground stations through virtual leaders. Quadcopters in the formations are able to track desired trajectories as well as hover at desired points for a selected time duration. In case of communication loss between the ground station and any of the quadcopters, the neighboring quadcopter provides the command data, received from the ground station, to the affected unit. The proposed control schemes have been validated through extensive simulations using MATLAB®/Simulink®, which provided favorable results.
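    For intuition, the plain LQR step underlying both schemes can be computed in a few lines for a double-integrator axis model; the paper's LQR PI variants add integral and model-following states on top of this. The weights and dynamics below are illustrative placeholders:

```python
# Hedged sketch of an LQR position-control gain for one translational axis,
# modeled as a double integrator (position, velocity). This reproduces only
# the textbook LQR step; the paper's LQR PI schemes with explicit model
# following and servomechanism action add integral states on top of this.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],      # x_dot = v
              [0.0, 0.0]])     # v_dot = u (normalized thrust input)
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])       # weights on position and velocity error
R = np.array([[1.0]])          # weight on control effort

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P  # optimal state-feedback gain, u = -K x
print("LQR gain K =", K)
```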

  17. Electrochemical and micro-gravimetric corrosion studies on spent fuel provide relevant source term data for a repository performance assessment

    International Nuclear Information System (INIS)

    Wegen, Detlef H.; Bottomley, Paul D. W.; Glatz, Jean-Paul

    2004-01-01

    Various electrochemical methods (corrosion potential monitoring, AC impedance analysis and electrochemical noise monitoring) were used in the investigation of UO₂ samples: natural and doped with two different levels of ²³⁸Pu (0.1 and 10 wt%) simulating the increasing α-intensities seen with time in the repository. The results were compared and were able to show the intense, but also the very local nature of the radiolysis and to demonstrate that corrosion rates were proportional to α-radiolysis and hence the ²³⁸Pu content; the corrosion rates were in accordance with earlier work at ITU. By contrast it was seen that the redox potentials only gave information as to the bulk solution that did not reflect the true conditions at the electrode interface that were driving the corrosion processes of UO₂ dissolution in groundwaters. The study shows how electrochemical techniques can provide vital information on the corrosion mechanism at the UO₂/solution interface.

  18. A Bit Stream Scalable Speech/Audio Coder Combining Enhanced Regular Pulse Excitation and Parametric Coding

    Science.gov (United States)

    Riera-Palou, Felip; den Brinker, Albertus C.

    2007-12-01

    This paper introduces a new audio and speech broadband coding technique based on the combination of a pulse excitation coder and a standardized parametric coder, namely, MPEG-4 high-quality parametric coder. After presenting a series of enhancements to regular pulse excitation (RPE) to make it suitable for the modeling of broadband signals, it is shown how pulse and parametric codings complement each other and how they can be merged to yield a layered bit stream scalable coder able to operate at different points in the quality bit rate plane. The performance of the proposed coder is evaluated in a listening test. The major result is that the extra functionality of the bit stream scalability does not come at the price of a reduced performance since the coder is competitive with standardized coders (MP3, AAC, SSC).

  20. Scalable and massively parallel Monte Carlo photon transport simulations for heterogeneous computing platforms.

    Science.gov (United States)

    Yu, Leiming; Nina-Paravecino, Fanny; Kaeli, David; Fang, Qianqian

    2018-01-01

    We present a highly scalable Monte Carlo (MC) three-dimensional photon transport simulation platform designed for heterogeneous computing systems. Through the development of a massively parallel MC algorithm using the Open Computing Language framework, this research extends our existing graphics processing unit (GPU)-accelerated MC technique to a highly scalable vendor-independent heterogeneous computing environment, achieving significantly improved performance and software portability. A number of parallel computing techniques are investigated to achieve portable performance over a wide range of computing hardware. Furthermore, multiple thread-level and device-level load-balancing strategies are developed to obtain efficient simulations using multiple central processing units and GPUs.

  1. A Secure Web Application Providing Public Access to High-Performance Data Intensive Scientific Resources - ScalaBLAST Web Application

    International Nuclear Information System (INIS)

    Curtis, Darren S.; Peterson, Elena S.; Oehmen, Chris S.

    2008-01-01

    This work presents the ScalaBLAST Web Application (SWA), a web based application implemented using the PHP script language, MySQL DBMS, and Apache web server under a GNU/Linux platform. SWA is an application built as part of the Data Intensive Computing for Complex Biological Systems (DICCBS) project at the Pacific Northwest National Laboratory (PNNL). SWA delivers accelerated throughput of bioinformatics analysis via high-performance computing through a convenient, easy-to-use web interface. This approach greatly enhances emerging fields of study in biology such as ontology-based homology, and multiple whole genome comparisons which, in the absence of a tool like SWA, require a heroic effort to overcome the computational bottleneck associated with genome analysis. The current version of SWA includes a user account management system, a web based user interface, and a backend process that generates the files necessary for the Internet scientific community to submit a ScalaBLAST parallel processing job on a dedicated cluster.

  2. An evaluation of the performance of an integrated solar combined cycle plant provided with air-linear parabolic collectors

    International Nuclear Information System (INIS)

    Amelio, Mario; Ferraro, Vittorio; Marinelli, Valerio; Summaria, Antonio

    2014-01-01

    An evaluation of the performance of an innovative solar system integrated in a combined cycle plant is presented, in which the heat transfer fluid flowing in the linear parabolic collectors is the same oxidant air that is introduced into the combustion chamber of the plant. This peculiarity allows a great simplification of the plant. Fossil fuel savings of 22% result at design conditions, and of 15.5% on an annual basis, when the plant works at nominal volumetric flow rate during daylight hours. The net average annual efficiency is 60.9%, against the value of 51.4% for a reference combined cycle plant without solar integration. Moreover, an economic evaluation of the plant is carried out, which shows that the extra cost of the solar part is recovered in about 5 years. - Highlights: • A model to calculate an innovative ISCCS (Integrated Solar Combined Cycle System) solar plant is presented. • The plant uses air as heat transfer fluid as well as oxidant in the combustor. • The plant presents a very high thermodynamic efficiency. • The plant is very simple in comparison with existing ISCCS.

  3. Trifocal intraocular lenses: a comparison of the visual performance and quality of vision provided by two different lens designs

    Directory of Open Access Journals (Sweden)

    Gundersen KG

    2017-06-01

    Purpose: To compare two different diffractive trifocal intraocular lens (IOL) designs, evaluating longer-term refractive outcomes, visual acuity (VA) at various distances, low contrast VA and quality of vision. Patients and methods: Patients with binocularly implanted trifocal IOLs of two different designs (FineVision [FV] and Panoptix [PX]) were evaluated 6 months to 2 years after surgery. Best distance-corrected and uncorrected VA were tested at distance (4 m), intermediate (80 and 60 cm) and near (40 cm). A binocular defocus curve was collected with the subject’s best distance correction in place. The preferred reading distance was determined along with the VA at that distance. Low contrast VA at distance was also measured. Quality of vision was measured with the National Eye Institute Visual Function Questionnaire near subset and the Quality of Vision questionnaire. Results: Thirty subjects in each group were successfully recruited. The binocular defocus curves differed only at vergences of −1.0 D (FV better, P=0.02), −1.5 and −2.00 D (PX better, P<0.01 for both). Best distance-corrected and uncorrected binocular vision were significantly better for the PX lens at 60 cm (P<0.01), with no significant differences at other distances. The preferred reading distance was between 42 and 43 cm for both lenses, with the VA at the preferred reading distance slightly better with the PX lens (P=0.04). There were no statistically significant differences by lens for low contrast VA (P=0.1) or for quality of vision measures (P>0.3). Conclusion: Both trifocal lenses provided excellent distance, intermediate and near vision, but several measures indicated that the PX lens provided better intermediate vision at 60 cm. This may be important to users of tablets and other handheld devices. Quality of vision appeared similar between the two lens designs.

  4. SVOPME: A Scalable Virtual Organization Privileges Management Environment

    International Nuclear Information System (INIS)

    Garzoglio, Gabriele; Sfiligoi, Igor; Levshina, Tanya; Wang, Nanbor; Ananthan, Balamurali

    2010-01-01

    Grids enable uniform access to resources by implementing standard interfaces to resource gateways. In the Open Science Grid (OSG), privileges are granted on the basis of the user's membership to a Virtual Organization (VO). However, Grid sites are solely responsible for determining and controlling access privileges to resources using users' identity and personal attributes, which are available through Grid credentials. While this guarantees full control on access rights to the sites, it makes VO privileges heterogeneous throughout the Grid and hardly fits with the Grid paradigm of uniform access to resources. To address these challenges, we are developing the Scalable Virtual Organization Privileges Management Environment (SVOPME), which provides tools for VOs to define and publish desired privileges and assists sites in providing the appropriate access policies. Moreover, SVOPME provides tools for Grid sites to analyze site access policies for various resources, verify compliance with preferred VO policies, and generate directives for site administrators on how the local access policies can be amended to achieve such compliance without taking control of local configurations away from site administrators. This paper discusses what access policies are of interest to the OSG community and how SVOPME implements privilege management for OSG.

  5. Innovative PCM-desiccant packet to provide dry microclimate and improve performance of cooling vest in hot environment

    International Nuclear Information System (INIS)

    Itani, Mariam; Ghaddar, Nesreen; Ghali, Kamel

    2017-01-01

    Highlights: • A PCM and desiccant packet is proposed for use in a personal cooling vest to keep dry air next to the skin. • A PCM-Desiccant model for a clothed heated wet cylinder is developed and validated experimentally. • The microclimate air temperature was 0.6 °C higher in the PCM-Desiccant case compared to the PCM-only case. • Microclimate humidity content decreased due to the desiccant from 21.23 to 19.74 g/kg dry air. • The PCM melted fraction increased due to the desiccant from 0.24 to 0.5. - Abstract: A novel combination of a phase change material (PCM) and a solid desiccant layer is proposed with the aim of maintaining dry, cool microclimate air adjacent to warm, wet skin and hence improving PCM performance in cooling vests used in hot humid environments. A fabric-PCM-Desiccant model is developed to predict the temperature and moisture content of the microclimate air layer in the presence of a PCM-Desiccant packet. The developed model is validated through experiments conducted on a wet clothed heated cylinder for the two cases of using (i) a PCM-only packet and (ii) a PCM-Desiccant packet. Microclimate air temperature and humidity content as well as PCM and desiccant temperatures were measured experimentally and compared with the values predicted by the fabric-PCM-Desiccant model. Good agreement was attained, with a maximum relative error of 7% in the measured temperatures. A decrease is observed in the humidity content of the microclimate air in the presence of the solid desiccant, from 21.23 g/kg dry air to 19.74 g/kg dry air, and an increase in the melted fraction of the PCM at the end of the experiment, from 0.24 to 0.5.

  6. A Testbed for Highly-Scalable Mission Critical Information Systems

    National Research Council Canada - National Science Library

    Birman, Kenneth P

    2005-01-01

    ... systems in a networked environment. Headed by Professor Ken Birman, the project is exploring a novel fusion of classical protocols for reliable multicast communication with a new style of peer-to-peer protocol called scalable "gossip...

  7. Provider performance in treating poor patients--factors influencing prescribing practices in Lao PDR: a cross-sectional study.

    Science.gov (United States)

    Syhakhang, Lamphone; Soukaloun, Douangdao; Tomson, Göran; Petzold, Max; Rehnberg, Clas; Wahlström, Rolf

    2011-01-06

    Out-of-pocket payments make up about 80% of medical care spending at hospitals in Laos, thereby putting poor households at risk of catastrophic health expenditure. Social security schemes in the form of community-based health insurance and health equity funds have been introduced in some parts of the country. Drug and Therapeutics Committees (DTCs) have been established to ensure rational use of drugs and improve quality of care. The objective was to assess the appropriateness of, and expenditure for, treatment for poor patients by health care providers at hospitals in three selected provinces of Laos, and to explore associated factors. Cross-sectional study using four tracer conditions. Structured interviews with 828 in-patients at twelve provincial and district hospitals on the subject of insurance protection, income and expenditures for treatment, including informal payment. Evaluation of each patient's medical record for appropriateness of drug use using a checklist of treatment guidelines (maximum score=10). No significant difference in appropriateness of care for patients at different income levels, but higher expenditures for patients with the highest income level. The score for appropriate drug use in insured patients was significantly higher than in uninsured patients (5.9 vs. 4.9), and the length of stay in days significantly shorter (2.7 vs. 3.7). Insured patients paid significantly less than uninsured patients, both for medicines (USD 14.8 vs. 43.9) and diagnostic tests (USD 5.9 vs. 9.2). By contrast, the score for appropriateness of drug use in patients making informal payments was significantly lower than in patients not making informal payments (3.5 vs. 5.1), and the length of stay significantly longer (6.8 vs. 3.2), while expenditures were significantly higher both for medicines (USD 124.5 vs. 28.8) and diagnostic tests (USD 14.1 vs. 7.7). The lower expenditure for insured patients can help reduce the number of households experiencing catastrophic health expenditure.

  9. Scalability Issues for Remote Sensing Infrastructure: A Case Study

    Directory of Open Access Journals (Sweden)

    Yang Liu

    2017-04-01

    For the past decade, a team of University of Calgary researchers has operated a large “sensor Web” to collect, analyze, and share scientific data from remote measurement instruments across northern Canada. This sensor Web receives real-time data streams from over a thousand Internet-connected sensors, with a particular emphasis on environmental data (e.g., space weather, auroral phenomena, atmospheric imaging). Through research collaborations, we had the opportunity to evaluate the performance and scalability of this remote sensing infrastructure. This article reports the lessons learned from our study, which considered both the data collection and data dissemination aspects of the system. On the data collection front, we used benchmarking techniques to identify and fix a performance bottleneck in the system’s memory management for TCP data streams, while also improving system efficiency on multi-core architectures. On the data dissemination front, we used passive and active network traffic measurements to identify and reduce excessive network traffic from Web robots and the JavaScript techniques used for data sharing. While our results are from one specific sensor Web system, the lessons learned may apply to other scientific Web sites with remote sensing infrastructure.

  10. Detailed Modeling and Evaluation of a Scalable Multilevel Checkpointing System

    Energy Technology Data Exchange (ETDEWEB)

    Mohror, Kathryn [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Moody, Adam [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bronevetsky, Greg [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); de Supinski, Bronis R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-09-01

    High-performance computing (HPC) systems are growing more powerful by utilizing more components. As the system mean time before failure correspondingly drops, applications must checkpoint frequently to make progress. But, at scale, the cost of checkpointing becomes prohibitive. A solution to this problem is multilevel checkpointing, which employs multiple types of checkpoints in a single run: lightweight checkpoints can handle the most common failure modes, while more expensive checkpoints can handle severe failures. We designed a multilevel checkpointing library, the Scalable Checkpoint/Restart (SCR) library, that writes lightweight checkpoints to node-local storage in addition to the parallel file system. We present probabilistic Markov models of SCR's performance. We show that on future large-scale systems, SCR can lead to a gain in machine efficiency of up to 35 percent, and reduce the load on the parallel file system by a factor of two. In addition, we predict that checkpoint scavenging, or only writing checkpoints to the parallel file system on application termination, can reduce the load on the parallel file system by 20× on today's systems and still maintain high application efficiency.
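    For a rough sense of the underlying trade-off, the classical single-level Young/Daly approximation relates the optimal checkpoint interval to checkpoint cost and failure rate. SCR's contribution is precisely to replace this with multilevel Markov models, so the sketch below is background intuition with made-up numbers, not the paper's model:

```python
# Classical single-level approximation (Young/Daly) for the optimal
# checkpoint interval: tau ~ sqrt(2 * delta * MTBF), where delta is the
# checkpoint cost. SCR's multilevel analysis generalizes this across several
# checkpoint types; the costs and MTBF here are invented for illustration.
from math import sqrt

def young_daly_interval(checkpoint_cost_s: float, mtbf_s: float) -> float:
    return sqrt(2.0 * checkpoint_cost_s * mtbf_s)

for cost, mtbf in [(30.0, 86_400.0),    # cheap node-local checkpoint
                   (600.0, 86_400.0)]:  # expensive parallel-file-system checkpoint
    tau = young_daly_interval(cost, mtbf)
    print(f"cost {cost:>5.0f}s, MTBF {mtbf:.0f}s -> checkpoint every ~{tau:,.0f}s")
```

Cheaper checkpoints justify shorter intervals, which is exactly why writing most checkpoints to node-local storage pays off.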

  11. Elastic pointer directory organization for scalable shared memory multiprocessors

    Institute of Scientific and Technical Information of China (English)

    Yuhang Liu; Mingfa Zhu; Limin Xiao

    2014-01-01

    In the field of supercomputing, one key issue for scalable shared-memory multiprocessors is the design of the directory which denotes the sharing state for a cache block. A good directory design intends to achieve three key attributes: reasonable memory overhead, sharer position precision and implementation complexity. However, researchers often face the problem that gaining one attribute may result in losing another. The paper proposes an elastic pointer directory (EPD) structure based on the analysis of shared-memory applications, taking advantage of the fact that the number of sharers for each directory entry is typically small. Analysis results show that for 4 096 nodes, the ratio of memory overhead to the full-map directory is 2.7%. Theoretical analysis and cycle-accurate execution-driven simulations on 16- and 64-node cache-coherent non-uniform memory access (CC-NUMA) multiprocessors show that the corresponding pointer overflow probability is reduced significantly. The performance is observed to be better than that of a limited pointers directory and almost identical to the full-map directory, except for the slight implementation complexity. Using the directory cache to explore directory access locality is also studied. The experimental result shows that this is a promising approach to be used in the state-of-the-art high performance computing domain.
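    The pointer-overflow problem that EPD addresses can be shown with a toy fixed-width directory entry: once the sharer count exceeds the pointer slots, precision is lost. The sketch is illustrative only; EPD's elastic encoding is more involved:

```python
# Toy limited-pointer directory entry: a fixed number of sharer pointers,
# with a fallback once they are exhausted. Field widths are illustrative;
# EPD instead grows pointer capacity for the few widely shared blocks.
class DirectoryEntry:
    def __init__(self, max_pointers: int = 4):
        self.max_pointers = max_pointers
        self.sharers = set()
        self.overflowed = False        # fall back to broadcast/full-map semantics

    def add_sharer(self, node: int) -> None:
        if self.overflowed:
            return                     # precision already lost for this block
        if len(self.sharers) < self.max_pointers or node in self.sharers:
            self.sharers.add(node)
        else:
            self.overflowed = True     # pointer overflow: sharer set is dropped
            self.sharers.clear()

entry = DirectoryEntry(max_pointers=4)
for node in range(6):
    entry.add_sharer(node)
print("overflowed:", entry.overflowed)  # True: a 5th sharer exceeded the slots
```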

  12. Modular Universal Scalable Ion-trap Quantum Computer

    Science.gov (United States)

    2016-06-02

    SECURITY CLASSIFICATION OF: The main goal of the original MUSIQC proposal was to construct and demonstrate a modular and universally- expandable ion...Distribution Unlimited UU UU UU UU 02-06-2016 1-Aug-2010 31-Jan-2016 Final Report: Modular Universal Scalable Ion-trap Quantum Computer The views...P.O. Box 12211 Research Triangle Park, NC 27709-2211 Ion trap quantum computation, scalable modular architectures REPORT DOCUMENTATION PAGE 11

  13. Architectures and Applications for Scalable Quantum Information Systems

    Science.gov (United States)

    2007-01-01

    Gershenfeld and I. Chuang. Quantum computing with molecules. Scientific American, June 1998. [16] A. Globus, D. Bailey, J. Han, R. Jaffe, C. Levit , R...AFRL-IF-RS-TR-2007-12 Final Technical Report January 2007 ARCHITECTURES AND APPLICATIONS FOR SCALABLE QUANTUM INFORMATION SYSTEMS...NUMBER 5b. GRANT NUMBER FA8750-01-2-0521 4. TITLE AND SUBTITLE ARCHITECTURES AND APPLICATIONS FOR SCALABLE QUANTUM INFORMATION SYSTEMS 5c

  14. On the scalability of LISP and advanced overlaid services

    OpenAIRE

    Coras, Florin

    2015-01-01

    In just four decades the Internet has gone from a lab experiment to a worldwide, business-critical infrastructure that caters to the communication needs of almost half of the Earth's population. With these figures on its side, arguing against the Internet's scalability would seem rather unwise. However, the Internet's organic growth is far from finished and, as billions of new devices are expected to join in the not so distant future, scalability, or lack thereof, is commonly believed ...

  15. Scalable, full-colour and controllable chromotropic plasmonic printing

    OpenAIRE

    Xue, Jiancai; Zhou, Zhang-Kai; Wei, Zhiqiang; Su, Rongbin; Lai, Juan; Li, Juntao; Li, Chao; Zhang, Tengwei; Wang, Xue-Hua

    2015-01-01

    Plasmonic colour printing has drawn wide attention as a promising candidate for the next-generation colour-printing technology. However, an efficient approach to realize full colour and scalable fabrication is still lacking, which prevents plasmonic colour printing from practical applications. Here we present a scalable and full-colour plasmonic printing approach by combining conjugate twin-phase modulation with a plasmonic broadband absorber. More importantly, our approach also demonstrates ...

  16. Scalable fabrication of strongly textured organic semiconductor micropatterns by capillary force lithography.

    Science.gov (United States)

    Jo, Pil Sung; Vailionis, Arturas; Park, Young Min; Salleo, Alberto

    2012-06-26

    Strongly textured organic semiconductor micropatterns made of the small molecule dioctylbenzothienobenzothiophene (C₈-BTBT) are fabricated by using a method based on capillary force lithography (CFL). This technique provides the C₈-BTBT solution with nucleation sites for directional growth, and can be used as a scalable way to produce high quality crystalline arrays in desired regions of a substrate for OFET applications.

  17. Scalable Database Design of End-Game Model with Decoupled Countermeasure and Threat Information

    Science.gov (United States)

    2017-11-01

    the Army Modular Active Protection System (MAPS) program to provide end-to-end APS modeling and simulation capabilities. The SSES simulation features...research project of scalable database design was initiated in support of SSES modularization efforts with respect to 4 major software components... Abbreviations: Iron Curtain; KE, kinetic energy; MAPS, Modular Active Protective System; OLE DB, object linking and embedding database; RDB, relational database; RPG.

  18. Scalable Security and Accounting Services for Content-Based Publish/Subscribe Systems

    OpenAIRE

    Himanshu Khurana; Radostina K. Koleva

    2006-01-01

    Content-based publish/subscribe systems offer an interaction scheme that is appropriate for a variety of large-scale dynamic applications. However, widespread use of these systems is hindered by a lack of suitable security services. In this paper, we present scalable solutions for confidentiality, integrity, and authentication for these systems. We also provide verifiable usage-based accounting services, which are required for e-commerce and e-business applications that use publish/subscribe ...

  19. Scalable Arbitrated Quantum Signature of Classical Messages with Multi-Signers

    International Nuclear Information System (INIS)

    Yang Yuguang; Wang Yuan; Teng Yiwei; Chai Haiping; Wen Qiaoyan

    2010-01-01

    Unconditionally secure signatures are an important part of quantum cryptography. Usually, a signature scheme only provides an environment for a single signer. Nevertheless, in real applications, many signers may collaboratively send a message to the verifier and convince the verifier that the message was actually transmitted by them. In this paper, we give a scalable arbitrated signature protocol for classical messages with multi-signers. Its security is analyzed, and the protocol is shown to be secure even with a compromised arbitrator.

  20. Scalable Photogrammetric Motion Capture System "mosca": Development and Application

    Science.gov (United States)

    Knyaz, V. A.

    2015-05-01

    A wide variety of applications (from industrial to entertainment) needs reliable and accurate 3D information about the motion of an object and its parts. Very often the motion is rather fast, as in the cases of vehicle movement, sports biomechanics, and the animation of cartoon characters. Motion capture systems based on different physical principles are used for these purposes. Vision-based systems have great potential for high accuracy and a high degree of automation, owing to progress in image processing and analysis. A scalable, inexpensive motion capture system has been developed as a convenient and flexible tool for solving various tasks requiring 3D motion analysis. It is based on photogrammetric techniques of 3D measurement and provides high speed image acquisition, high accuracy of 3D measurements and highly automated processing of captured data. Depending on the application, the system can be easily modified for different working areas from 100 mm to 10 m. The developed motion capture system uses two to four machine vision cameras to acquire video sequences of object motion. All cameras work synchronously at frame rates up to 100 frames per second under the control of a personal computer, enabling accurate calculation of the 3D coordinates of points of interest. The system has been used in a range of application fields and demonstrated high accuracy and a high level of automation.
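    The core photogrammetric computation, recovering a 3D point from two calibrated views, can be sketched with the standard direct linear transform (DLT). The camera matrices below are synthetic placeholders, not MOSCA's calibration:

```python
# Minimal two-camera triangulation via the direct linear transform (DLT):
# given projection matrices P1, P2 and matched image points, solve for the
# 3D point with an SVD. The camera geometry here is synthetic.
import numpy as np

def triangulate(P1, P2, x1, x2):
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                      # dehomogenize

# Two synthetic cameras: identity view and one translated 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 5.0, 1.0])
x1 = P1 @ X_true; x1 = x1[:2] / x1[2]        # project into each view
x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))           # ~[0.2, -0.1, 5.0]
```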

  1. Interactive and Animated Scalable Vector Graphics and R Data Displays

    Directory of Open Access Journals (Sweden)

    Deborah Nolan

    2012-01-01

    We describe an approach to creating interactive and animated graphical displays using R's graphics engine and Scalable Vector Graphics (SVG), an XML vocabulary for describing two-dimensional graphical displays. We use the svg() graphics device in R and then post-process the resulting XML documents. The post-processing identifies the elements in the SVG that correspond to the different components of the graphical display, e.g., points, axes, labels, lines. One can then annotate these elements to add interactivity and animation effects. One can also use JavaScript to provide dynamic interactive effects to the plot, enabling rich user interactions and compelling visualizations. The resulting SVG documents can be embedded within HTML documents and can involve JavaScript code that integrates the SVG and HTML objects. The functionality is provided via the SVGAnnotation package and makes static plots generated via R graphics functions available as stand-alone, interactive and animated plots for the Web and other venues.
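    The same post-processing idea can be sketched outside R, here in Python with the standard library's ElementTree: find point elements in an SVG document and attach a hover tooltip. This mirrors, but is not, what the SVGAnnotation package does:

```python
# Analogous SVG post-processing in Python: parse an SVG, locate the circle
# elements that encode plotted points, and attach simple interactivity
# (a hover tooltip via an SVG <title> child). Illustrative only; the paper's
# SVGAnnotation package does this in R with richer annotations.
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"
ET.register_namespace("", SVG_NS)

doc = ET.ElementTree(ET.fromstring(
    f'<svg xmlns="{SVG_NS}">'
    '<circle cx="10" cy="20" r="2"/><circle cx="30" cy="25" r="2"/></svg>'))

for i, circle in enumerate(doc.getroot().iter(f"{{{SVG_NS}}}circle")):
    title = ET.SubElement(circle, f"{{{SVG_NS}}}title")
    title.text = f"observation {i}"          # shown as a tooltip in browsers

print(ET.tostring(doc.getroot(), encoding="unicode"))
```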

  2. Affordable and Scalable Manufacturing of Wearable Multi-Functional Sensory “Skin” for Internet of Everything Applications

    KAUST Repository

    Nassar, Joanna M.

    2017-10-01

    Demand for wearable electronics is expected to at least triple by 2020, embracing all sorts of Internet of Everything (IoE) applications, such as activity tracking, environmental mapping, and advanced healthcare monitoring, with the purpose of enhancing the quality of life. This entails the wide availability of free-form multifunctional sensory systems (i.e., “skin” platforms) that can conform to a variety of uneven surfaces, providing the intimate contact and adhesion with the skin necessary for localized and enhanced sensing capabilities. However, current wearable devices appear to be bulky, rigid and not convenient for continuous wear in everyday life, hindering their implementation into advanced and unexplored applications beyond fitness tracking. Besides, they retail at high prices, which limits their availability to at least half of the world’s population. Hence, form factor (physical flexibility and/or stretchability), cost, and accessibility become the key drivers for further developments. To support this need for affordable and adaptive wearables and drive academic developments in “skin” platforms into practical and functional consumer devices, compatibility and integration into a high performance yet low power system is crucial to sustain the high data rates and large data management driven by IoE. Likewise, scalability becomes essential for batch fabrication and precision. Therefore, I propose to develop three distinct but necessary “skin” platforms using scalable and cost effective manufacturing techniques. My first approach is the fabrication of a CMOS-compatible “silicon skin”, crucial for any truly autonomous and conformal wearable device, where monolithic integration between a heterogeneous material-based sensory platform and system components is a challenge yet to be addressed. My second approach displays an even more affordable and accessible “paper skin”, using recyclable and off-the-shelf materials, targeting environmental

  3. Development of a scalable generic platform for adaptive optics real time control

    Science.gov (United States)

    Surendran, Avinash; Burse, Mahesh P.; Ramaprakash, A. N.; Parihar, Padmakar

    2015-06-01

    The main objective of the present project is to explore the viability of an adaptive optics control system based exclusively on Field Programmable Gate Arrays (FPGAs), making strong use of their parallel processing capability. In an Adaptive Optics (AO) system, the generation of the Deformable Mirror (DM) control voltages from the Wavefront Sensor (WFS) measurements is usually through the multiplication of the wavefront slopes with a predetermined reconstructor matrix. The ability to access several hundred hard multipliers and memories concurrently in an FPGA allows performance far beyond that of a modern CPU or GPU for tasks with a well-defined structure such as Adaptive Optics control. The target of the current project is to generate the signals for real-time wavefront correction from the signals coming from a Wavefront Sensor, with a system flexible enough to accommodate all the current wavefront sensing techniques and also the different methods used for wavefront compensation. The system should also accommodate different data transmission protocols (such as Ethernet, USB, IEEE 1394, etc.) for transmitting data to and from the FPGA device, thus providing a more flexible platform for Adaptive Optics control. Preliminary simulation results for the formulation of the platform, and the design of a fully scalable slope computer, are presented.
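    The real-time step that the FPGA parallelizes is, at its core, a matrix-vector product: DM control voltages are obtained by multiplying the measured slope vector by the precomputed reconstructor matrix. A minimal sketch with placeholder dimensions:

```python
# The core real-time AO step: DM voltages = reconstructor matrix x slope
# vector. The dimensions (16 slope measurements, 5 actuators) and random
# values are arbitrary placeholders, not a real AO calibration.
import numpy as np

rng = np.random.default_rng(0)
n_slopes, n_actuators = 16, 5
R = rng.standard_normal((n_actuators, n_slopes))  # precomputed reconstructor
slopes = rng.standard_normal(n_slopes)            # one WFS frame of slopes

voltages = R @ slopes   # the multiply-accumulate work the FPGA parallelizes
print(voltages)
```

The appeal of the FPGA is that every row of this product can be evaluated concurrently on dedicated hard multipliers, frame after frame, at a deterministic latency.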

  4. A highly scalable massively parallel fast marching method for the Eikonal equation

    Science.gov (United States)

    Yang, Jianming; Stern, Frederick

    2017-03-01

    The fast marching method is a widely used numerical method for solving the Eikonal equation arising in a variety of scientific and engineering fields. It has long been deemed inherently sequential, and an efficient parallel algorithm applicable to large-scale practical applications has not been available in the literature. In this study, we present a highly scalable massively parallel implementation of the fast marching method using a domain decomposition approach. Central to this algorithm is a novel restarted narrow band approach that coordinates the frequency of communications and the amount of computations extra to a sequential run for achieving an unprecedented parallel performance. Within each restart, the narrow band fast marching method is executed; simple synchronous local exchanges and global reductions are adopted for communicating updated data in the overlapping regions between neighboring subdomains and getting the latest front status, respectively. The independence of front characteristics is exploited through special data structures and augmented status tags to extract the masked parallelism within the fast marching method. The efficiency, flexibility, and applicability of the parallel algorithm are demonstrated through several examples. These problems are extensively tested on six grids with up to 1 billion points using different numbers of processes ranging from 1 to 65536. Remarkable parallel speedups are achieved using tens of thousands of processes. Detailed pseudo-codes for both the sequential and parallel algorithms are provided to illustrate the simplicity of the parallel implementation and its similarity to the sequential narrow band fast marching algorithm.
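    As background, the sequential baseline being parallelized can be written compactly: a heap-ordered sweep solving |∇T| = 1 on a 2D grid with the standard upwind update. This is the textbook serial method, not the paper's restarted narrow band algorithm:

```python
# Compact sequential fast marching solver for |grad T| = 1 (unit speed) on a
# 2D grid, using a binary heap. The paper parallelizes this inherently
# ordered sweep via restarted narrow bands; this sketch is the serial baseline.
import heapq
import numpy as np

def fast_marching(shape, sources):
    T = np.full(shape, np.inf)
    heap = []
    for s in sources:
        T[s] = 0.0
        heapq.heappush(heap, (0.0, s))
    accepted = np.zeros(shape, dtype=bool)
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if accepted[i, j]:
            continue                          # stale heap entry
        accepted[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < shape[0] and 0 <= nj < shape[1] and not accepted[ni, nj]:
                a = min(T[max(ni - 1, 0), nj], T[min(ni + 1, shape[0] - 1), nj])
                b = min(T[ni, max(nj - 1, 0)], T[ni, min(nj + 1, shape[1] - 1)])
                if abs(a - b) < 1.0:          # two-sided quadratic upwind update
                    t_new = (a + b + np.sqrt(2.0 - (a - b) ** 2)) / 2.0
                else:                         # one-sided update
                    t_new = min(a, b) + 1.0
                if t_new < T[ni, nj]:
                    T[ni, nj] = t_new
                    heapq.heappush(heap, (t_new, (ni, nj)))
    return T

T = fast_marching((64, 64), [(32, 32)])
print(T[32, 40], T[0, 0])   # ~8 along the axis; ~diagonal distance to the corner
```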

  5. Approaches for scalable modeling and emulation of cyber systems : LDRD final report.

    Energy Technology Data Exchange (ETDEWEB)

    Mayo, Jackson R.; Minnich, Ronald G.; Armstrong, Robert C.; Rudish, Don W.

    2009-09-01

    The goal of this research was to combine theoretical and computational approaches to better understand the potential emergent behaviors of large-scale cyber systems, such as networks of ~10^6 computers. The scale and sophistication of modern computer software, hardware, and deployed networked systems have significantly exceeded the computational research community's ability to understand, model, and predict current and future behaviors. This predictive understanding, however, is critical to the development of new approaches for proactively designing new systems or enhancing existing systems with robustness to current and future cyber threats, including distributed malware such as botnets. We have developed preliminary theoretical and modeling capabilities that can ultimately answer questions such as: How would we reboot the Internet if it were taken down? Can we change network protocols to make them more secure without disrupting existing Internet connectivity and traffic flow? We have begun to address these issues by developing new capabilities for understanding and modeling Internet systems at scale. Specifically, we have addressed the need for scalable network simulation by carrying out emulations of a network with ~10^6 virtualized operating system instances on a high-performance computing cluster - a 'virtual Internet'. We have also explored mappings between previously studied emergent behaviors of complex systems and their potential cyber counterparts. Our results provide foundational capabilities for further research toward understanding the effects of complexity in cyber systems, to allow anticipating and thwarting hackers.

  6. A Secure and Stable Multicast Overlay Network with Load Balancing for Scalable IPTV Services

    Directory of Open Access Journals (Sweden)

    Tsao-Ta Wei

    2012-01-01

    IPTV over P2P networks, an emerging multimedia Internet application, offers significant advantages in scalability. However, IPTV media content delivered over the public Internet via P2P networks still raises issues of privacy and intellectual property rights. In this paper, we use the SIP protocol to construct a secure application-layer multicast overlay network for IPTV, called SIPTVMON. SIPTVMON secures all IPTV media delivery paths against eavesdroppers via elliptic-curve Diffie-Hellman (ECDH) key exchange on SIP signaling and AES encryption. Its load-balancing overlay tree is also optimized against peer heterogeneity and the churn of peers joining and leaving, to minimize both service degradation and latency. Performance results from large-scale simulations and experiments on different optimization criteria demonstrate SIPTVMON's cost-effectiveness in privacy protection, stability under user churn, and perceptual quality (objective PSNR) for scalable IPTV services over the Internet.
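
    As an illustration of the key-management style described, here is a hedged Python sketch using the third-party `cryptography` package: an ephemeral ECDH exchange of the kind carried over SIP signaling, with the shared secret stretched into an AES key. The AES-GCM mode, the HKDF step, and the `iptv-media` label are illustrative assumptions, not details taken from the paper.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Each peer generates an ephemeral elliptic-curve key pair; the public
# halves would travel inside SIP signaling in a SIPTVMON-like design.
parent_key = ec.generate_private_key(ec.SECP256R1())
child_key = ec.generate_private_key(ec.SECP256R1())

# Both sides derive the same shared secret from ECDH ...
secret = parent_key.exchange(ec.ECDH(), child_key.public_key())
assert secret == child_key.exchange(ec.ECDH(), parent_key.public_key())

# ... and stretch it into an AES session key for the media path.
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"iptv-media").derive(secret)

# Encrypt one media packet with AES-GCM for this overlay hop.
aes = AESGCM(session_key)
nonce = os.urandom(12)
packet = aes.encrypt(nonce, b"RTP payload bytes", None)
print(len(packet))
```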

  7. Towards Bandwidth Scalable Transceiver Technology for Optical Metro-Access Networks

    DEFF Research Database (Denmark)

    Spolitis, Sandis; Bobrovs, Vjaceslavs; Wagner, Christoph

    2015-01-01

    Massive fiber-to-the-home network deployment is creating a challenge for telecommunications network operators: an exponential increase in power consumption at the central offices and a never-ending quest for equipment upgrades operating at higher bandwidth. In this paper, we report on a flexible signal slicing technique that allows transmission of high-bandwidth signals via low-bandwidth electrical and optoelectrical equipment. The technique is highly scalable in terms of bandwidth, which is determined by the number of slices used. The performance of a scalable sliceable transceiver for a 1 Gbit/s non-return-to-zero (NRZ) signal sliced into two slices is presented, and digital signal processing (DSP) power consumption and latency values for the proposed sliceable transceiver technique are discussed. In this research, error-free transmission after forward error correction (FEC) with 7% overhead has been achieved.

  8. A customizable, scalable scheduling and reporting system.

    Science.gov (United States)

    Wood, Jody L; Whitman, Beverly J; Mackley, Lisa A; Armstrong, Robert; Shotto, Robert T

    2014-06-01

    Scheduling is essential for running a facility smoothly and for summarizing activities in use reports. The Penn State Hershey Clinical Simulation Center has developed a scheduling interface that uses off-the-shelf components, with customizations that adapt to each institution's data collection and reporting needs. The system is designed using programs within the Microsoft Office 2010 suite. Outlook provides the scheduling component, while the reporting is performed using Access or Excel. An account with a calendar is created for the main schedule, with separate resource accounts created for each room within the center. The Outlook appointment form's two default tabs are used, in addition to a customized third tab. The data are then copied from the calendar into either a database table or a spreadsheet, where the reports are generated. Incorporating this system into an institution-wide structure allows integration of personnel lists and potentially enables all users to check the schedule from their desktop. Outlook also has a Web-based application for viewing the basic schedule from outside the institution, although customized data cannot be accessed. The scheduling and reporting functions have been used for a year at the Penn State Hershey Clinical Simulation Center. The schedule has increased workflow efficiency, improved the quality of recorded information, and provided more accurate reporting. The Penn State Hershey Clinical Simulation Center's scheduling and reporting system can be adapted easily to most simulation centers and can expand and change to meet future growth with little or no expense to the center.
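
    As a rough illustration of the calendar-to-report step described above, the sketch below reads the default Outlook calendar through its COM object model and flattens appointments to a CSV file that Access or Excel can aggregate. It assumes Windows, an installed Outlook profile, and the pywin32 package; the date range and output file name are hypothetical.

```python
import csv
import win32com.client  # pywin32; requires Windows with Outlook installed

# Connect to the local Outlook profile and open the default calendar
# (folder id 9 = olFolderCalendar in the Outlook object model).
outlook = win32com.client.Dispatch("Outlook.Application")
calendar = outlook.GetNamespace("MAPI").GetDefaultFolder(9)

items = calendar.Items
items.Sort("[Start]")
items.IncludeRecurrences = True
# Limit to one reporting period so recurring items do not expand forever.
items = items.Restrict("[Start] >= '01/01/2024' AND [Start] <= '12/31/2024'")

# Copy the fields a use report needs into a flat file, mirroring the
# calendar-to-table step described in the abstract.
with open("schedule_report.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Subject", "Start", "End", "Location"])
    for item in items:
        writer.writerow([item.Subject, item.Start, item.End, item.Location])
```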

  9. Using a quality improvement model to enhance providers' performance in maternal and newborn health care: a post-only intervention and comparison design.

    Science.gov (United States)

    Ayalew, Firew; Eyassu, Gizachew; Seyoum, Negash; van Roosmalen, Jos; Bazant, Eva; Kim, Young Mi; Tekleberhan, Alemnesh; Gibson, Hannah; Daniel, Ephrem; Stekelenburg, Jelle

    2017-04-12

    The Standards Based Management and Recognition (SBM-R®) approach to quality improvement has been implemented in Ethiopia to strengthen routine maternal and newborn health (MNH) services. This evaluation assessed the effect of the intervention on MNH providers' performance of routine antenatal care (ANC), uncomplicated labor and delivery and immediate postnatal care (PNC) services. A post-only evaluation design was conducted at three hospitals and eight health centers implementing SBM-R and the same number of comparison health facilities. Structured checklists were used to observe MNH providers' performance on ANC (236 provider-client interactions), uncomplicated labor and delivery (226 provider-client interactions), and immediate PNC services in the six hours after delivery (232 provider-client interactions); observations were divided equally between intervention and comparison groups. Main outcomes were provider performance scores, calculated as the percentage of essential tasks in each service area completed by providers. Multilevel analysis was used to calculate adjusted mean percentage performance scores and standard errors to compare intervention and comparison groups. There was no statistically significant difference between intervention and comparison facilities in overall mean performance scores for ANC services (63.4% at intervention facilities versus 61.0% at comparison facilities, p = 0.650) or in any specific ANC skill area. MNH providers' overall mean performance score for uncomplicated labor and delivery care was 11.9 percentage points higher in the intervention than in the comparison group (77.5% versus 65.6%; p = 0.002). Overall mean performance scores for immediate PNC were 22.2 percentage points higher at intervention than at comparison facilities (72.8% versus 50.6%; p = 0.001); and there was a significant difference of 22 percentage points between intervention and comparison facilities for each PNC skill area: care for the newborn

  10. Scalable bonding of nanofibrous polytetrafluoroethylene (PTFE) membranes on microstructures

    Science.gov (United States)

    Mortazavi, Mehdi; Fazeli, Abdolreza; Moghaddam, Saeed

    2018-01-01

    Expanded polytetrafluoroethylene (ePTFE) nanofibrous membranes exhibit high porosity (80%-90%), high gas permeability, chemical inertness, and superhydrophobicity, which makes them a suitable choice in many demanding fields including industrial filtration, medical implants, bio-/nano- sensors/actuators and microanalysis (i.e. lab-on-a-chip). However, one of the major challenges that inhibit implementation of such membranes is their inability to bond to other materials due to their intrinsic low surface energy and chemical inertness. Prior attempts to improve adhesion of ePTFE membranes to other surfaces involved surface chemical treatments which have not been successful due to degradation of the mechanical integrity and the breakthrough pressure of the membrane. Here, we report a simple and scalable method of bonding ePTFE membranes to different surfaces via the introduction of an intermediate adhesive layer. While a variety of adhesives can be used with this technique, the highest bonding performance is obtained for adhesives that have moderate contact angles with the substrate and low contact angles with the membrane. A thin layer of an adhesive can be uniformly applied onto micro-patterned substrates with feature sizes down to 5 µm using a roll-coating process. Membrane-based microchannel and micropillar devices with burst pressures of up to 200 kPa have been successfully fabricated and tested. A thin layer of the membrane remains attached to the substrate after debonding, suggesting that mechanical interlocking through nanofiber engagement is the main mechanism of adhesion.

  11. Highly scalable parallel processing of extracellular recordings of Multielectrode Arrays.

    Science.gov (United States)

    Gehring, Tiago V; Vasilaki, Eleni; Giugliano, Michele

    2015-01-01

    Technological advances in Multielectrode Arrays (MEAs), used for multisite, parallel electrophysiological recordings, have led to an ever-increasing amount of raw data being generated. Arrays with hundreds to a few thousand electrodes are slowly seeing widespread use, and the expectation is that more sophisticated arrays will become available in the near future. In order to process the large data volumes resulting from MEA recordings, there is a pressing need for new software tools able to process many data channels in parallel. Here we present a new tool for processing MEA data recordings that makes use of new programming paradigms and recent technology developments to unleash the power of modern highly parallel hardware, such as multi-core CPUs with vector instruction sets or GPGPUs. Our tool builds on and complements existing MEA data analysis packages. It shows high scalability and can be used to speed up performance-critical pre-processing steps such as data filtering and spike detection, helping to make the analysis of larger data sets tractable.
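
    A minimal sketch of the per-channel parallelism described, assuming a 20 kHz sampling rate and synthetic data; it uses plain CPU multiprocessing rather than the GPGPU/vector-instruction path the tool itself targets, and the filter band and threshold rule are common conventions, not the tool's settings.

```python
import numpy as np
from multiprocessing import Pool
from scipy.signal import butter, sosfiltfilt

FS = 20_000  # sampling rate in Hz (assumed)

def detect_spikes(channel):
    """Band-pass one electrode channel and threshold-detect spikes."""
    sos = butter(4, [300, 3000], btype="bandpass", fs=FS, output="sos")
    filtered = sosfiltfilt(sos, channel)
    # Common heuristic: threshold at ~5x the robust noise estimate.
    threshold = 5 * np.median(np.abs(filtered)) / 0.6745
    crossings = np.flatnonzero((filtered[:-1] > -threshold) &
                               (filtered[1:] <= -threshold))
    return crossings / FS  # spike times in seconds

if __name__ == "__main__":
    # Synthetic stand-in for a MEA recording: 64 channels of noise.
    data = np.random.randn(64, 10 * FS)
    with Pool() as pool:                  # one worker per CPU core
        spikes = pool.map(detect_spikes, data)
    print([len(s) for s in spikes[:4]])
```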

  12. Scalable Fault-Tolerant Location Management Scheme for Mobile IP

    Directory of Open Access Journals (Sweden)

    JinHo Ahn

    2001-11-01

    As the number of mobile nodes registering with a network rapidly increases in Mobile IP, multiple mobility agents (home or foreign) can be allocated to a network in order to improve performance and availability. Previous fault-tolerant schemes (denoted PRT schemes) mask failures of the mobility agents using passive replication techniques. However, they incur high failure-free latency during the registration process if the number of mobility agents in the same network increases, and they force each mobility agent to manage the bindings of all the mobile nodes registering with its network. In this paper, we present a new fault-tolerant scheme (denoted CML scheme) using checkpointing and message logging techniques. The CML scheme achieves low failure-free latency even if the number of mobility agents in a network increases, and improves scalability to a large number of mobile nodes registering with each network compared with the PRT schemes. Additionally, the CML scheme allows each failed mobility agent to recover the bindings of the mobile nodes registering with it when it is repaired, even if all the other mobility agents in the same network fail concurrently.
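
    The following toy Python sketch illustrates the checkpointing-plus-message-logging idea applied to a binding table: registrations are appended to a stable log, the table is checkpointed periodically (truncating the log), and recovery loads the last checkpoint and replays the log. File layout, naming, and the checkpoint interval are illustrative assumptions, not the CML scheme's actual protocol.

```python
import json
import os

class MobilityAgent:
    """Toy checkpointing plus message logging for Mobile IP bindings."""

    def __init__(self, name, checkpoint_every=100):
        self.name = name
        self.checkpoint_every = checkpoint_every
        self.bindings = {}   # mobile node -> care-of address
        self.pending = 0     # registrations since the last checkpoint

    def register(self, mobile_node, care_of_address):
        self.bindings[mobile_node] = care_of_address
        with open(f"{self.name}.log", "a") as f:   # message logging
            f.write(json.dumps([mobile_node, care_of_address]) + "\n")
        self.pending += 1
        if self.pending >= self.checkpoint_every:
            self.checkpoint()

    def checkpoint(self):
        with open(f"{self.name}.ckpt", "w") as f:  # checkpoint the table
            json.dump(self.bindings, f)
        open(f"{self.name}.log", "w").close()      # truncate the log
        self.pending = 0

    def recover(self):
        """Rebuild bindings after a crash: checkpoint first, then replay."""
        self.bindings = {}
        if os.path.exists(f"{self.name}.ckpt"):
            with open(f"{self.name}.ckpt") as f:
                self.bindings = json.load(f)
        if os.path.exists(f"{self.name}.log"):
            with open(f"{self.name}.log") as f:
                for line in f:                     # replay logged updates
                    mn, coa = json.loads(line)
                    self.bindings[mn] = coa
```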

  13. Scalable quality assurance for large SNOMED CT hierarchies using subject-based subtaxonomies.

    Science.gov (United States)

    Ochs, Christopher; Geller, James; Perl, Yehoshua; Chen, Yan; Xu, Junchuan; Min, Hua; Case, James T; Wei, Zhi

    2015-05-01

    Standards terminologies may be large and complex, making their quality assurance challenging. Some terminology quality assurance (TQA) methodologies are based on abstraction networks (AbNs), compact terminology summaries. We have tested AbNs and the performance of related TQA methodologies on small terminology hierarchies. However, some standards terminologies, for example, SNOMED, are composed of very large hierarchies. Scaling AbN TQA techniques to such hierarchies poses a significant challenge. We present a scalable subject-based approach for AbN TQA. An innovative technique is presented for scaling TQA by creating a new kind of subject-based AbN called a subtaxonomy for large hierarchies. New hypotheses about concentrations of erroneous concepts within the AbN are introduced to guide scalable TQA. We test the TQA methodology for a subject-based subtaxonomy for the Bleeding subhierarchy in SNOMED's large Clinical finding hierarchy. To test the error concentration hypotheses, three domain experts reviewed a sample of 300 concepts. A consensus-based evaluation identified 87 erroneous concepts. The subtaxonomy-based TQA methodology was shown to uncover statistically significantly more erroneous concepts when compared to a control sample. The scalability of TQA methodologies is a challenge for large standards systems like SNOMED. We demonstrated innovative subject-based TQA techniques by identifying groups of concepts with a higher likelihood of having errors within the subtaxonomy. Scalability is achieved by reviewing a large hierarchy by subject. An innovative methodology for scaling the derivation of AbNs and a TQA methodology was shown to perform successfully for the largest hierarchy of SNOMED.

  14. Spatiotemporal Stochastic Modeling of IoT Enabled Cellular Networks: Scalability and Stability Analysis

    KAUST Repository

    Gharbieh, Mohammad; Elsawy, Hesham; Bader, Ahmed; Alouini, Mohamed-Slim

    2017-01-01

    The Internet of Things (IoT) is large-scale by nature, which is manifested by the massive number of connected devices as well as their vast spatial existence. Cellular networks, which provide ubiquitous, reliable, and efficient wireless access, will play a fundamental role in delivering first-mile access for the data tsunami to be generated by the IoT. However, cellular networks may have scalability problems in providing uplink connectivity to massive numbers of connected things. To characterize the scalability of the cellular uplink in the context of IoT networks, this paper develops a traffic-aware spatiotemporal mathematical model for IoT devices supported by cellular uplink connectivity. The developed model is based on stochastic geometry and queueing theory to account for the traffic requirement per IoT device, the different transmission strategies, and the mutual interference between the IoT devices. To this end, the developed model is utilized to characterize the extent to which cellular networks can accommodate IoT traffic as well as to assess and compare three different transmission strategies that incorporate a combination of transmission persistency, backoff, and power-ramping. The analysis and the results clearly illustrate the scalability problem imposed by the IoT on cellular networks and offer insights into effective scenarios for each transmission strategy.
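
    For intuition, a toy Monte Carlo sketch of the scalability question: IoT devices form a Poisson field, each transmits with an ALOHA-like persistence probability, and a tagged uplink succeeds when its SINR clears a threshold. All parameter values here are assumptions for illustration; the paper itself derives its results analytically from stochastic geometry and queueing theory.

```python
import numpy as np

rng = np.random.default_rng(0)

def uplink_success_prob(density, p_tx, sinr_db=0.0, alpha=4.0,
                        radius=500.0, trials=2000):
    """One cell: Poisson field of interferers, path-loss exponent alpha,
    tagged device at 100 m; success means SINR above the threshold."""
    threshold = 10 ** (sinr_db / 10)
    noise = 1e-9
    wins = 0
    for _ in range(trials):
        n = rng.poisson(density * np.pi * radius ** 2)
        r = radius * np.sqrt(rng.random(n))   # uniform-in-disk distances
        active = rng.random(n) < p_tx         # ALOHA-like persistency
        interference = np.sum(r[active] ** (-alpha))
        signal = 100.0 ** (-alpha)
        if signal / (interference + noise) > threshold:
            wins += 1
    return wins / trials

# Scalability pressure: success probability degrades with device density.
for density in (1e-5, 1e-4, 1e-3):   # devices per square meter
    print(density, uplink_success_prob(density, p_tx=0.05))
```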

  15. Development of a scalable suspension culture for cardiac differentiation from human pluripotent stem cells

    Directory of Open Access Journals (Sweden)

    Vincent C. Chen

    2015-09-01

    To meet the need for large quantities of hPSC-derived cardiomyocytes (CMs) for pre-clinical and clinical studies, a robust and scalable differentiation system for CM production is essential. With a human pluripotent stem cell (hPSC) aggregate suspension culture system we established previously, we developed a matrix-free, scalable, and GMP-compliant process for directing hPSC differentiation to CMs in suspension culture by modulating Wnt pathways with small molecules. By optimizing critical process parameters, including cell aggregate size, small molecule concentrations, induction timing, and agitation rate, we were able to consistently differentiate hPSCs to >90% CM purity with an average yield of 1.5 to 2 × 10^9 CM/L at scales up to 1 L spinner flasks. CMs generated from the suspension culture displayed typical genetic, morphological, and electrophysiological cardiac cell characteristics. This suspension culture system allows a seamless transition from hPSC expansion to CM differentiation in a continuous suspension culture. It not only provides a cost- and labor-effective scalable process for large-scale CM production, but also provides a bioreactor prototype for automation of cell manufacturing, which will accelerate the advance of hPSC research towards therapeutic applications.

  17. Applications of the scalable coherent interface to data acquisition at LHC

    CERN Document Server

    Bogaerts, A; Divià, R; Müller, H; Parkman, C; Ponting, P J; Skaali, B; Midttun, G; Wormald, D; Wikne, J; Falciano, S; Cesaroni, F; Vinogradov, V I; Kristiansen, E H; Solberg, B; Guglielmi, A M; Worm, F H; Bovier, J; Davis, C; CERN. Geneva. Detector Research and Development Committee

    1991-01-01

    We propose to use the Scalable Coherent Interface (SCI) as a very high speed interconnect between LHC detector data buffers and farms of commercial trigger processors. Both the global second- and third-level triggers can be based on SCI as a reconfigurable and scalable system. SCI is a proposed IEEE standard which uses fast point-to-point links to provide computer-bus-like services. It can connect a maximum of 65,536 nodes (memories or processors), providing data transfer rates of up to 1 Gbyte/s. Scalable data acquisition systems can be built using either simple SCI rings or complex switches. The interconnections may be flat cables, coaxial cables, or optical fibres. SCI protocols have been entirely implemented in VLSI, resulting in a significant simplification of data acquisition software. Novel SCI features allow efficient implementation of both data- and processor-driven readout architectures. In particular, a very efficient implementation of the third level trigger can be achieved by combining SCI's shared ...

  18. SAChES: Scalable Adaptive Chain-Ensemble Sampling.

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ray, Jaideep [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ebeida, Mohamed Salah [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Huang, Maoyi [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hou, Zhangshuan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bao, Jie [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Ren, Huiying [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2017-08-01

    We present the development of a parallel Markov Chain Monte Carlo (MCMC) method called SAChES, Scalable Adaptive Chain-Ensemble Sampling. This capability is targeted at Bayesian calibration of computationally expensive simulation models. SAChES is a hybrid of two methods: Differential Evolution Monte Carlo followed by Adaptive Metropolis. Both methods involve parallel chains. Differential evolution allows one to explore high-dimensional parameter spaces using loosely coupled (i.e., largely asynchronous) chains. Loose coupling allows the use of large chain ensembles, with far more chains than the number of parameters to explore. This reduces the per-chain sampling burden and enables high-dimensional inversions and the use of computationally expensive forward models. The large number of chains can also ameliorate the impact of silent errors, which may affect only a few chains. The chain ensemble can also be sampled to provide an initial condition when an aberrant chain is re-spawned. Adaptive Metropolis takes the best points from the differential evolution and efficiently homes in on the posterior density. The multitude of chains in SAChES is leveraged to (1) enable efficient exploration of the parameter space and (2) ensure robustness to silent errors, which may be unavoidable in the extreme-scale computational platforms of the future. This report outlines SAChES, describes four papers that resulted from the project, and discusses some additional results.
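
    A minimal sketch of the Differential Evolution MCMC stage (after ter Braak's DE-MC), run here on a stand-in Gaussian posterior; SAChES runs such chains loosely coupled in parallel and follows them with Adaptive Metropolis, neither of which is shown in this toy serial version.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_post(x):
    return -0.5 * np.sum(x ** 2)      # stand-in posterior: standard normal

def demc(n_chains=16, dim=4, steps=2000, gamma=None):
    """Differential Evolution MCMC: each chain proposes a jump along the
    difference of two other randomly chosen chains."""
    gamma = gamma or 2.38 / np.sqrt(2 * dim)   # standard DE-MC step size
    x = rng.normal(size=(n_chains, dim))       # chain ensemble
    logp = np.array([log_post(xi) for xi in x])
    samples = []
    for _ in range(steps):
        for i in range(n_chains):
            a, b = rng.choice([j for j in range(n_chains) if j != i],
                              size=2, replace=False)
            prop = x[i] + gamma * (x[a] - x[b]) + 1e-4 * rng.normal(size=dim)
            lp = log_post(prop)
            if np.log(rng.random()) < lp - logp[i]:   # Metropolis accept
                x[i], logp[i] = prop, lp
        samples.append(x.copy())
    return np.concatenate(samples)

draws = demc()
print(draws.mean(axis=0), draws.std(axis=0))   # ~0 and ~1 if mixing well
```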

  19. CX: A Scalable, Robust Network for Parallel Computing

    Directory of Open Access Journals (Sweden)

    Peter Cappello

    2002-01-01

    CX, a network-based computational exchange, is presented. The system's design integrates variations of ideas from other researchers, such as work stealing, non-blocking tasks, eager scheduling, and space-based coordination. The object-oriented API is simple, compact, and cleanly separates application logic from the logic that supports interprocess communication and fault tolerance. Computations run to completion even as computational hosts join and leave the ongoing computation. Such hosts, or producers, use task caching and prefetching to overlap computation with interprocessor communication. To break a potential task-server bottleneck, a network of task servers is presented. Even though task servers are envisioned as reliable, the self-organizing, scalable network of n servers, described as a sibling-connected height-balanced fat tree, tolerates a sequence of n-1 server failures. Tasks are distributed throughout the server network via a simple "diffusion" process, as sketched below. CX is intended as a test bed for research on automated silent auctions, reputation services, authentication services, and bonding services. CX also provides a test bed for algorithm research into network-based parallel computation.
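
    A toy sketch of the "diffusion" idea: each server repeatedly shifts part of its load imbalance toward its neighbors, flattening hot spots over time. The ring topology, rate constant, and round count are illustrative assumptions, not CX's fat-tree protocol.

```python
import numpy as np

def diffuse_tasks(loads, neighbors, rounds=50, alpha=0.5):
    """Each round, server i moves a fraction of the load difference
    toward each neighbor; total load is conserved."""
    loads = np.asarray(loads, dtype=float)
    for _ in range(rounds):
        flows = np.zeros_like(loads)
        for i, nbrs in neighbors.items():
            for j in nbrs:
                flows[i] += alpha / len(nbrs) * (loads[j] - loads[i])
        loads += flows
    return loads

# Four task servers on a ring, one of them overloaded.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(diffuse_tasks([100, 0, 0, 0], ring))   # converges toward [25 25 25 25]
```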

  20. Scalable Coating and Properties of Transparent, Flexible, Silver Nanowire Electrodes

    KAUST Repository

    Hu, Liangbing; Kim, Han Sun; Lee, Jung-Yong; Peumans, Peter; Cui, Yi

    2010-05-25

    We report a comprehensive study of transparent and conductive silver nanowire (Ag NW) electrodes, including a scalable fabrication process, morphologies, and optical, mechanical adhesion, and flexibility properties, and various routes to improve the performance. We utilized a synthesis specifically designed for long and thin wires for improved performance in terms of sheet resistance and optical transmittance. Sheet resistances of 20 Ω/sq at ∼80% specular transmittance and 8 Ω/sq at 80% diffusive transmittance in the visible range are achieved, which fall in the same range as the best indium tin oxide (ITO) samples on plastic substrates for flexible electronics and solar cells. The Ag NW electrodes show optical transparencies superior to ITO at near-infrared wavelengths (2-fold higher transmission). Owing to light scattering effects, the Ag NW network has the largest difference between diffusive transmittance and specular transmittance when compared with ITO and carbon nanotube electrodes, a property which could greatly enhance solar cell performance. A mechanical study shows that Ag NW electrodes on flexible substrates show excellent robustness when subjected to bending. We also study the electrical conductance of Ag nanowires and their junctions and report a facile electrochemical method for a Au coating to reduce the wire-to-wire junction resistance for better overall film conductance. Simple mechanical pressing was also found to increase the NW film conductance due to the reduction of junction resistance. The overall properties of transparent Ag NW electrodes meet the requirements of transparent electrodes for many applications and could be an immediate ITO replacement for flexible electronics and solar cells.

  2. Heterogeneous scalable framework for multiphase flows

    Energy Technology Data Exchange (ETDEWEB)

    Morris, Karla Vanessa [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2013-09-01

    Two categories of challenges confront the developer of computational spray models: those related to the computation and those related to the physics. Regarding the computation, the trend towards heterogeneous, multi- and many-core platforms will require considerable re-engineering of codes written for current supercomputing platforms. Regarding the physics, accurate methods for transferring mass, momentum, and energy from the dispersed phase onto the carrier fluid grid have so far eluded modelers. Significant challenges also lie at the intersection of these two categories. To be competitive, any physics model must be expressible as a parallel algorithm that performs well on evolving computer platforms. This work created an application based on a software architecture in which the physics and software concerns are separated in a way that adds flexibility to both. The developed spray-tracking package includes an application programming interface (API) that abstracts away the platform-dependent parallelization concerns, enabling the scientific programmer to write serial code that the API resolves into parallel processes and threads of execution. The project also developed the infrastructure required to provide similar APIs to other applications. The API allows object-oriented Fortran applications to interact directly with Trilinos, supporting memory management of distributed objects on central processing unit (CPU) and graphics processing unit (GPU) nodes for applications using C++.

  3. Attitudes, subjective norms, and intention to perform routine oral examination for oropharyngeal candidiasis as perceived by primary health-care providers in Nairobi Province

    NARCIS (Netherlands)

    Koyio, L.N.; Kikwilu, E.N.; Mulder, J.; Frencken, J.E.F.M.

    2013-01-01

    Objectives: To assess attitudes, subjective norms, and intentions of primary health-care (PHC) providers in performing routine oral examination for oropharyngeal candidiasis (OPC) during outpatient consultations. Methods: A 47-item Theory of Planned Behaviour-based questionnaire was developed and

  4. The effect of interprofessional education on interprofessional performance and diabetes care knowledge of health care teams at the level one of health service providing

    Directory of Open Access Journals (Sweden)

    Nikoo Yamani

    2014-01-01

    Conclusion: It seems that interprofessional education can improve the quality of health care to some extent by influencing the knowledge and collaborative performance of health care teams. It can also make the health-related messages provided to the covered population more consistent, in addition to enhancing the self-confidence of the personnel.

  5. Using a quality improvement model to enhance providers' performance in maternal and newborn health care : a post-only intervention and comparison design

    NARCIS (Netherlands)

    Ayalew, Firew; Eyassu, Gizachew; Seyoum, Negash; van Roosmalen, Jos; Bazant, Eva; Kim, Young Mi; Tekleberhan, Alemnesh; Gibson, Hannah; Daniel, Ephrem; Stekelenburg, Jelle

    2017-01-01

    Background: The Standards Based Management and Recognition (SBM-R®) approach to quality improvement has been implemented in Ethiopia to strengthen routine maternal and newborn health (MNH) services. This evaluation assessed the effect of the intervention on MNH providers' performance of routine

  6. Volumetric Medical Image Coding: An Object-based, Lossy-to-lossless and Fully Scalable Approach

    Science.gov (United States)

    Danyali, Habibiollah; Mertins, Alfred

    2011-01-01

    In this article, an object-based, highly scalable, lossy-to-lossless 3D wavelet coding approach for volumetric medical image data (e.g., magnetic resonance (MR) and computed tomography (CT)) is proposed. The new method, called 3DOBHS-SPIHT, is based on the well-known set partitioning in the hierarchical trees (SPIHT) algorithm and supports both quality and resolution scalability. The 3D input data is grouped into groups of slices (GOS) and each GOS is encoded and decoded as a separate unit. The symmetric tree definition of the original 3DSPIHT is improved by introducing a new asymmetric tree structure. While preserving the compression efficiency, the new tree structure allows for a small size of each GOS, which not only reduces memory consumption during the encoding and decoding processes, but also facilitates more efficient random access to certain segments of slices. To achieve more compression efficiency, the algorithm only encodes the main object of interest in each 3D data set, which can have any arbitrary shape, and ignores the unnecessary background. The experimental results on some MR data sets show the good performance of the 3DOBHS-SPIHT algorithm for multi-resolution lossy-to-lossless coding. The compression efficiency, full scalability, and object-based features of the proposed approach, besides its lossy-to-lossless coding support, make it a very attractive candidate for volumetric medical image information archiving and transmission applications. PMID:22606653
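
    A brief sketch of the GOS-plus-3D-wavelet front end, using the PyWavelets package on a synthetic volume; the GOS size, wavelet, and decomposition level are assumptions, and the SPIHT bit-plane coding stage that would follow is not shown.

```python
import numpy as np
import pywt  # PyWavelets

# A synthetic volume standing in for an MR scan: 64 slices of 128x128.
volume = np.random.rand(64, 128, 128).astype(np.float32)

# Split the volume into groups of slices (GOS) of 16 slices each, so
# every group can be encoded and decoded independently.
gos_size = 16
groups = [volume[k:k + gos_size] for k in range(0, volume.shape[0], gos_size)]

# A 3D dyadic wavelet decomposition of each GOS is the input that a
# SPIHT-style coder would then turn into an embedded bitstream.
for gos in groups:
    coeffs = pywt.wavedecn(gos, wavelet="bior4.4", level=2)
    approx = coeffs[0]                     # coarsest approximation subband
    print(gos.shape, "->", approx.shape)
```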

  7. ParaText : scalable solutions for processing and searching very large document collections : final LDRD report.

    Energy Technology Data Exchange (ETDEWEB)

    Crossno, Patricia Joyce; Dunlavy, Daniel M.; Stanton, Eric T.; Shead, Timothy M.

    2010-09-01

    This report is a summary of the accomplishments of the 'Scalable Solutions for Processing and Searching Very Large Document Collections' LDRD, which ran from FY08 through FY10. Our goal was to investigate scalable text analysis; specifically, methods for information retrieval and visualization that could scale to extremely large document collections. Towards that end, we designed, implemented, and demonstrated a scalable framework for text analysis - ParaText - as a major project deliverable. Further, we demonstrated the benefits of using visual analysis in text analysis algorithm development, improved performance of heterogeneous ensemble models in data classification problems, and the advantages of information theoretic methods in user analysis and interpretation in cross language information retrieval. The project involved 5 members of the technical staff and 3 summer interns (including one who worked two summers). It resulted in a total of 14 publications, 3 new software libraries (2 open source and 1 internal to Sandia), several new end-user software applications, and over 20 presentations. Several follow-on projects have already begun or will start in FY11, with additional projects currently in proposal.

  8. Domain decomposition method of stochastic PDEs: a two-level scalable preconditioner

    International Nuclear Information System (INIS)

    Subber, Waad; Sarkar, Abhijit

    2012-01-01

    For uncertainty quantification in many practical engineering problems, the stochastic finite element method (SFEM) may be computationally challenging. In SFEM, the size of the algebraic linear system grows rapidly with the spatial mesh resolution and the order of the stochastic dimension. In this paper, we describe a non-overlapping domain decomposition method, namely the iterative substructuring method, to tackle the large-scale linear system arising in the SFEM. The SFEM is based on domain decomposition in the geometric space and a polynomial chaos expansion in the probabilistic space. In particular, a two-level scalable preconditioner is proposed for the iterative solver of the interface problem for the stochastic systems. The preconditioner is equipped with a coarse problem which globally connects the subdomains both in the geometric and probabilistic spaces via their corner nodes. This coarse problem propagates the information quickly across the subdomains, leading to a scalable preconditioner. For numerical illustrations, a two-dimensional stochastic elliptic partial differential equation (SPDE) with spatially varying non-Gaussian random coefficients is considered. The numerical scalability of the preconditioner is investigated with respect to the mesh size, subdomain size, fixed problem size per subdomain and order of polynomial chaos expansion. The numerical experiments are performed on a Linux cluster using MPI and PETSc parallel libraries.
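
    For concreteness, the polynomial chaos expansion referred to above can be written in its standard truncated form, with deterministic coefficient fields and stochastically orthogonal basis polynomials (standard notation, not reproduced from the paper):

```latex
% Truncated polynomial chaos expansion of the random solution field:
% u_j(x) are deterministic coefficient fields and \Psi_j are polynomials
% orthogonal with respect to the law of the random vector \xi(\theta).
u(x,\theta) \;\approx\; \sum_{j=0}^{P} u_j(x)\,\Psi_j\bigl(\xi(\theta)\bigr),
\qquad
\mathbb{E}\bigl[\Psi_i\,\Psi_j\bigr] \;=\; \delta_{ij}\,\mathbb{E}\bigl[\Psi_i^{2}\bigr].
```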

  9. A Numerical Study of Scalable Cardiac Electro-Mechanical Solvers on HPC Architectures

    Directory of Open Access Journals (Sweden)

    Piero Colli Franzone

    2018-04-01

    We introduce and study some scalable domain decomposition preconditioners for cardiac electro-mechanical 3D simulations on parallel HPC (High Performance Computing) architectures. The electro-mechanical model of the cardiac tissue is composed of four coupled sub-models: (1) the static finite elasticity equations for the transversely isotropic deformation of the cardiac tissue; (2) the active tension model describing the dynamics of the intracellular calcium, cross-bridge binding and myofilament tension; (3) the anisotropic Bidomain model describing the evolution of the intra- and extra-cellular potentials in the deforming cardiac tissue; and (4) the ionic membrane model describing the dynamics of ionic currents, gating variables, ionic concentrations and stretch-activated channels. This strongly coupled electro-mechanical model is discretized in time with a splitting semi-implicit technique and in space with isoparametric finite elements. The resulting scalable parallel solver is based on Multilevel Additive Schwarz preconditioners for the solution of the Bidomain system and on BDDC-preconditioned Newton-Krylov solvers for the non-linear finite elasticity system. The results of several 3D parallel simulations show the scalability of both linear and non-linear solvers and their application to the study of both physiological excitation-contraction cardiac dynamics and re-entrant waves in the presence of different mechano-electrical feedbacks.

  10. Particle Communication and Domain Neighbor Coupling: Scalable Domain Decomposed Algorithms for Monte Carlo Particle Transport

    Energy Technology Data Exchange (ETDEWEB)

    O' Brien, M. J.; Brantley, P. S.

    2015-01-20

    In order to run Monte Carlo particle transport calculations on new supercomputers with hundreds of thousands or millions of processors, care must be taken to implement scalable algorithms. This means that the algorithms must continue to perform well as the processor count increases. In this paper, we examine the scalability of: (1) globally resolving the particle locations on the correct processor, (2) deciding that particle streaming communication has finished, and (3) efficiently coupling neighbor domains together with different replication levels. We have run domain-decomposed Monte Carlo particle transport on up to 2^21 = 2,097,152 MPI processes on the IBM BG/Q Sequoia supercomputer and observed scalable results that agree with our theoretical predictions. These calculations were carefully constructed to have the same amount of work on every processor, i.e. the calculation is already load balanced. We also examine load-imbalanced calculations where each domain's replication level is proportional to its particle workload. In this case we show how to efficiently couple together adjacent domains to maintain within-workgroup load balance and minimize memory usage.
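
    A minimal mpi4py sketch of item (2), deciding that particle streaming has finished via a global reduction: each rank sums its resident and in-flight particles, and all ranks exit when the global total reaches zero. The workload model is a synthetic stand-in, not the paper's transport code.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rng = np.random.default_rng(comm.rank)

# Each rank starts with some particles; each step, a random number of
# them finish on this domain (the rest would stream to neighbors).
local = int(rng.integers(50, 100))
in_flight = 0  # a real code would track particles sent minus received

while True:
    finished_here = min(local, int(rng.integers(0, 25)))
    local -= finished_here
    # Global reduction: streaming is done only when no rank holds
    # particles and nothing is in flight between ranks.
    remaining = comm.allreduce(local + in_flight, op=MPI.SUM)
    if remaining == 0:
        break

if comm.rank == 0:
    print("all particles resolved; streaming step complete")
```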

  11. Heat-treated stainless steel felt as scalable anode material for bioelectrochemical systems.

    Science.gov (United States)

    Guo, Kun; Soeriyadi, Alexander H; Feng, Huajun; Prévoteau, Antonin; Patil, Sunil A; Gooding, J Justin; Rabaey, Korneel

    2015-11-01

    This work reports a simple and scalable method to convert stainless steel (SS) felt into an effective anode for bioelectrochemical systems (BESs) by means of heat treatment. X-ray photoelectron spectroscopy and cyclic voltammetry elucidated that the heat treatment generated an iron oxide rich layer on the SS felt surface. The iron oxide layer dramatically enhanced electroactive biofilm formation on the SS felt surface in BESs. Consequently, the sustained current densities achieved on the treated electrodes (1 cm²) were around 1.5±0.13 mA/cm², seven times higher than on the untreated electrodes (0.22±0.04 mA/cm²). To test the scalability of this material, the heat-treated SS felt was scaled up to 150 cm² and a similar current density (1.5 mA/cm²) was achieved on the larger electrode. The low cost, straightforwardness of the treatment, high conductivity and high bioelectrocatalytic performance make heat-treated SS felt a scalable anodic material for BESs.

  12. Toward optimized light utilization in nanowire arrays using scalable nanosphere lithography and selected area growth.

    Science.gov (United States)

    Madaria, Anuj R; Yao, Maoqing; Chi, Chunyung; Huang, Ningfeng; Lin, Chenxi; Li, Ruijuan; Povinelli, Michelle L; Dapkus, P Daniel; Zhou, Chongwu

    2012-06-13

    Vertically aligned, catalyst-free semiconducting nanowires hold great potential for photovoltaic applications, in which achieving scalable synthesis and optimized optical absorption simultaneously is critical. Here, we report combining nanosphere lithography (NSL) and selected area metal-organic chemical vapor deposition (SA-MOCVD) for the first time for scalable synthesis of vertically aligned gallium arsenide nanowire arrays, and surprisingly, we show that such nanowire arrays with patterning defects due to NSL can be as good as highly ordered nanowire arrays in terms of optical absorption and reflection. Wafer-scale patterning for nanowire synthesis was done using a polystyrene nanosphere template as a mask. Nanowires grown from substrates patterned by NSL show similar structural features to those patterned using electron beam lithography (EBL). Reflection of photons from the NSL-patterned nanowire array was used as a measure of the effect of defects present in the structure. Experimentally, we show that GaAs nanowires as short as 130 nm show reflection of <10% over the visible range of the solar spectrum. Our results indicate that a highly ordered nanowire structure is not necessary: despite the "defects" present in NSL-patterned nanowire arrays, their optical performance is similar to "defect-free" structures patterned by more costly, time-consuming EBL methods. Our scalable approach for synthesis of vertical semiconducting nanowires can have application in high-throughput and low-cost optoelectronic devices, including solar cells.

  13. A repeatable and scalable fabrication method for sharp, hollow silicon microneedles

    Science.gov (United States)

    Kim, H.; Theogarajan, L. S.; Pennathur, S.

    2018-03-01

    Scalability and manufacturability are impeding the mass commercialization of microneedles in the medical field. Specifically, microneedle geometries need to be sharp, beveled, and completely controllable, which is difficult to achieve with microelectromechanical fabrication techniques. In this work, we performed a parametric study using silicon etch chemistries to optimize the fabrication of scalable and manufacturable beveled hollow silicon microneedles. We theoretically verified our parametric results with diffusion-reaction equations and created a design guideline for a varied set of microneedles (80-160 µm needle base width, 100-1000 µm pitch, 40-50 µm inner bore diameter, and 150-350 µm height) to show the repeatability, scalability, and manufacturability of our process. As a result, hollow silicon microneedles of any dimensions can be fabricated with less than 2% non-uniformity across a wafer and 5% deviation between different processes. The key to achieving such high uniformity and consistency is a non-agitated HF-HNO3 bath, silicon nitride masks, and surrounding silicon filler materials with well-defined dimensions. Our proposed method is not labor-intensive, is well described by theory, and is straightforward for wafer-scale mass production, opening doors to a plethora of potential medical and biosensing applications.

  14. System and Method for Providing a Climate Data Persistence Service

    Science.gov (United States)

    Schnase, John L. (Inventor); Ripley, III, William David (Inventor); Duffy, Daniel Q. (Inventor); Thompson, John H. (Inventor); Strong, Savannah L. (Inventor); McInerney, Mark (Inventor); Sinno, Scott (Inventor); Tamkin, Glenn S. (Inventor); Nadeau, Denis (Inventor)

    2018-01-01

    A system, method and computer-readable storage devices for providing a climate data persistence service. A system configured to provide the service can include a climate data server that performs data and metadata storage and management functions for climate data objects, a compute-storage platform that provides the resources needed to support a climate data server, provisioning software that allows climate data server instances to be deployed as virtual climate data servers in a cloud computing environment, and a service interface, wherein persistence service capabilities are invoked by software applications running on a client device. The climate data objects can be in various formats, such as International Organization for Standardization (ISO) Open Archival Information System (OAIS) Reference Model Submission Information Packages, Archival Information Packages, and Dissemination Information Packages. The climate data server can enable scalable, federated storage, management, discovery, and access, and can be tailored for particular use cases.

  15. Iterative Integration of Visual Insights during Scalable Patent Search and Analysis.

    Science.gov (United States)

    Koch, S; Bosch, H; Giereth, M; Ertl, T

    2011-05-01

    Patents are of growing importance in current economic markets. Analyzing patent information has, therefore, become a common task for many interest groups. As a prerequisite for patent analysis, extensive search for relevant patent information is essential. Unfortunately, the complexity of patent material inhibits a straightforward retrieval of all relevant patent documents and leads to iterative, time-consuming approaches in practice. The sheer amount of patent data to be analyzed already poses scalability challenges; further scalability issues arise from the diversity of users and the large variety of analysis tasks. With "PatViz", a system for the interactive analysis of patent information has been developed that addresses scalability at various levels. PatViz provides a visual environment allowing interactive reintegration of insights into subsequent search iterations, thereby bridging the gap between search and analytic processes. Because of its extensibility, we expect that the approach we have taken can be employed in different problem domains that require high-quality search results regarding their completeness.

  16. Vocal activity as a low cost and scalable index of seabird colony size.

    Science.gov (United States)

    Borker, Abraham L; McKown, Matthew W; Ackerman, Joshua T; Eagles-Smith, Collin A; Tershy, Bernie R; Croll, Donald A

    2014-08-01

    Although wildlife conservation actions have increased globally in number and complexity, the lack of scalable, cost-effective monitoring methods limits adaptive management and the evaluation of conservation efficacy. Automated sensors and computer-aided analyses provide a scalable and increasingly cost-effective tool for conservation monitoring. A key assumption of automated acoustic monitoring of birds is that measures of acoustic activity at colony sites are correlated with the relative abundance of nesting birds. We tested this assumption for nesting Forster's terns (Sterna forsteri) in San Francisco Bay for 2 breeding seasons. Sensors recorded ambient sound at 7 colonies that had 15-111 nests in 2009 and 2010. Colonies were spaced at least 250 m apart and ranged from 36 to 2,571 m². We used spectrogram cross-correlation to automate the detection of tern calls from recordings. We calculated mean seasonal call rate and compared it with mean active nest count at each colony. Acoustic activity explained 71% of the variation in nest abundance between breeding sites and 88% of the change in colony size between years. These results validate a primary assumption of acoustic indices; that is, for terns, acoustic activity is correlated with relative abundance, a fundamental step toward designing rigorous and scalable acoustic monitoring programs to measure the effectiveness of conservation actions for colonial birds and other acoustically active wildlife.
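
    A hedged sketch of the detection step, spectrogram cross-correlation against a call template, using SciPy; the sampling rate, window size, and 0.6 score threshold are assumptions for illustration, not the study's settings.

```python
import numpy as np
from scipy import signal

FS = 22_050  # recorder sampling rate (assumed)

def call_rate(recording, template, threshold=0.6):
    """Count template matches via spectrogram cross-correlation."""
    _, _, S_rec = signal.spectrogram(recording, fs=FS, nperseg=512)
    _, _, S_tpl = signal.spectrogram(template, fs=FS, nperseg=512)
    S_rec, S_tpl = np.log1p(S_rec), np.log1p(S_tpl)
    # Slide the template across time, scoring normalized similarity.
    width = S_tpl.shape[1]
    scores = []
    for t in range(S_rec.shape[1] - width + 1):
        window = S_rec[:, t:t + width]
        den = np.linalg.norm(window) * np.linalg.norm(S_tpl)
        scores.append(np.sum(window * S_tpl) / den if den else 0.0)
    # Local maxima above threshold count as detected calls.
    peaks, _ = signal.find_peaks(np.array(scores), height=threshold,
                                 distance=width)
    return len(peaks)

# Toy usage: a synthetic call embedded three times in background noise.
t = np.linspace(0, 0.25, int(0.25 * FS), endpoint=False)
call = np.sin(2 * np.pi * 3000 * t) * np.hanning(t.size)
rec = 0.05 * np.random.randn(5 * FS)
for start in (FS, 2 * FS, 3 * FS):
    rec[start:start + call.size] += call
print(call_rate(rec, call))  # should report the three embedded calls
```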

  17. A Customer’s Possibilities to Increase the Performance of a Service Provider by Adding Value and Deepening the Partnership in Facility Management Service

    Directory of Open Access Journals (Sweden)

    Sillanpää Elina

    2016-06-01

    Reliable, high-quality suppliers are an important competitive advantage for a customer, which is why supplier development, performance improvement, and the deepening of the customer relationship are also in the customer's interest. The purpose of this study is to clarify a customer's possibilities to increase the performance of a service provider and to develop the service process in FM services, and thus to help improve partnership development. This is a qualitative study. The research complements the existing generic model of customer-driven supplier development towards partnership development and clarifies the special features that facility management services bring to this model. The data were gathered from interviews of customers and service providers in the facility management service sector. The result is a model of customers' possibilities to develop the performance of service providers from the viewpoint of value addition and relationship development, and in that way to ensure added value to the customer and the development of a long-term relationship. The results can be beneficial to customers when they develop the cooperation between the customer and the service provider toward being more strategic and more partnership-focused.

  18. Recurrent, Robust and Scalable Patterns Underlie Human Approach and Avoidance

    Science.gov (United States)

    Kennedy, David N.; Lehár, Joseph; Lee, Myung Joo; Blood, Anne J.; Lee, Sang; Perlis, Roy H.; Smoller, Jordan W.; Morris, Robert; Fava, Maurizio

    2010-01-01

    Background: Approach and avoidance behavior provide a means for assessing the rewarding or aversive value of stimuli, and can be quantified by a keypress procedure whereby subjects work to increase (approach), decrease (avoid), or do nothing about time of exposure to a rewarding/aversive stimulus. To investigate whether approach/avoidance behavior might be governed by quantitative principles that meet engineering criteria for lawfulness and that encode known features of reward/aversion function, we evaluated whether keypress responses toward pictures with potential motivational value produced any regular patterns, such as a trade-off between approach and avoidance, or recurrent lawful patterns as observed with prospect theory. Methodology/Principal Findings: Three sets of experiments employed this task with beautiful face images, a standardized set of affective photographs, and pictures of food during controlled states of hunger and satiety. An iterative modeling approach to data identified multiple law-like patterns, based on variables grounded in the individual. These patterns were consistent across stimulus types, robust to noise, describable by a simple power law, and scalable between individuals and groups. Patterns included: (i) a preference trade-off counterbalancing approach and avoidance, (ii) a value function linking preference intensity to uncertainty about preference, and (iii) a saturation function linking preference intensity to its standard deviation, thereby setting limits to both. Conclusions/Significance: These law-like patterns were compatible with critical features of prospect theory, the matching law, and alliesthesia. Furthermore, they appeared consistent with both mean-variance and expected utility approaches to the assessment of risk. Ordering of responses across categories of stimuli demonstrated three properties thought to be relevant for preference-based choice, suggesting these patterns might be grouped together as a relative preference theory.

  20. Scalable DeNoise-and-Forward in Bidirectional Relay Networks

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Krigslund, Rasmus; Popovski, Petar

    2010-01-01

    In this paper, a scalable relaying scheme is proposed based on an existing concept called DeNoise-and-Forward (DNF). We call it Scalable DNF (S-DNF), and it targets the scenario with multiple communication flows through a single common relay. The idea of the scheme is to combine packets at the relay in order to save transmissions. To ensure decodability at the end-nodes, a priori information about the content of the combined packets must be available. This is gathered during the initial transmissions to the relay. The trade-off between decodability and the number of necessary transmissions is analysed...
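
    As a simple illustration of why combining packets at the relay saves transmissions, the sketch below uses XOR combining, with each end-node's own packet serving as the a priori information needed for decoding. Actual DNF operates on noisy received symbols through a denoising map, which this toy example does not model.

```python
import numpy as np

def relay_combine(pkt_a, pkt_b):
    """Combine two packets at the relay: broadcast one XOR-combined
    packet instead of forwarding two separate ones."""
    return np.bitwise_xor(pkt_a, pkt_b)

def end_node_decode(combined, own_packet):
    """Each end-node knows the packet it sent (a priori information),
    so XOR-ing it out recovers the partner's packet."""
    return np.bitwise_xor(combined, own_packet)

rng = np.random.default_rng(7)
pkt_a = rng.integers(0, 256, size=32, dtype=np.uint8)  # node A's packet
pkt_b = rng.integers(0, 256, size=32, dtype=np.uint8)  # node B's packet

broadcast = relay_combine(pkt_a, pkt_b)   # one transmission instead of two
assert np.array_equal(end_node_decode(broadcast, pkt_a), pkt_b)
assert np.array_equal(end_node_decode(broadcast, pkt_b), pkt_a)
print("both end-nodes recovered their partner's packet")
```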