WorldWideScience

Sample records for producing robust scalable

  1. Superlinearly scalable noise robustness of redundant coupled dynamical systems.

    Science.gov (United States)

    Kohar, Vivek; Kia, Behnam; Lindner, John F; Ditto, William L

    2016-03-01

    We illustrate through theory and numerical simulations that redundant coupled dynamical systems can be extremely robust against local noise in comparison to uncoupled dynamical systems evolving in the same noisy environment. Previous studies have shown that the noise robustness of redundant coupled dynamical systems is linearly scalable and deviations due to noise can be minimized by increasing the number of coupled units. Here, we demonstrate that the noise robustness can actually be scaled superlinearly if some conditions are met and very high noise robustness can be realized with very few coupled units. We discuss these conditions and show that this superlinear scalability depends on the nonlinearity of the individual dynamical units. The phenomenon is demonstrated in discrete as well as continuous dynamical systems. This superlinear scalability not only provides us an opportunity to exploit the nonlinearity of physical systems without being bogged down by noise but may also help us in understanding the functional role of coupled redundancy found in many biological systems. Moreover, engineers can exploit superlinear noise suppression by starting a coupled system near (not necessarily at) the appropriate initial condition.
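
    A minimal numpy sketch (ours, not the authors' code) of the basic averaging effect described above: N redundant logistic maps, each perturbed by independent local noise and fully coupled by replacing every unit with the ensemble mean after each update. The map parameter, noise level, and coupling are illustrative; the superlinear regime additionally depends on the nonlinearity and initial-condition choices the paper discusses.

    import numpy as np

    def rms_deviation(n_units, r=3.2, noise=5e-3, steps=4000, seed=1):
        """RMS deviation of the coupled noisy ensemble from the noise-free map."""
        rng = np.random.default_rng(seed)
        x_noisy, x_clean, dev = 0.3, 0.3, []
        for t in range(steps):
            # N redundant copies of the current state, each with its own noise
            updates = r * x_noisy * (1 - x_noisy) + rng.normal(0, noise, n_units)
            x_noisy = np.clip(updates, 0, 1).mean()   # full coupling = averaging
            x_clean = r * x_clean * (1 - x_clean)
            if t > 500:                               # skip the transient
                dev.append((x_noisy - x_clean) ** 2)
        return np.sqrt(np.mean(dev))

    for n in (1, 4, 16, 64):
        print(f"N={n:3d}  rms deviation ~ {rms_deviation(n):.2e}")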

  2. Grassmann Averages for Scalable Robust PCA

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Black, Michael J.

    2014-01-01

    As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase—“big data” implies “big outliers”. While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can … to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements …
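
    A sketch of the trimmed-average idea named in the abstract, assuming the fixed-point formulation of Grassmann averages from the authors' paper: sign-align each observation with the current direction, then take an element-wise trimmed mean. Initialization and convergence handling are simplified here.

    import numpy as np
    from scipy import stats

    def trimmed_grassmann_average(X, trim=0.2, iters=50, seed=0):
        """X: (n_samples, n_dims); returns an estimate of the leading robust component."""
        rng = np.random.default_rng(seed)
        q = rng.normal(size=X.shape[1])
        q /= np.linalg.norm(q)
        for _ in range(iters):
            signs = np.sign(X @ q)
            signs[signs == 0] = 1.0
            # element-wise trimmed mean over the sign-aligned observations
            m = stats.trim_mean(signs[:, None] * X, trim, axis=0)
            norm = np.linalg.norm(m)
            if norm == 0:
                break
            q_new = m / norm
            if np.allclose(q_new, q):
                break
            q = q_new
        return q

    X = np.random.default_rng(2).normal(size=(500, 10))
    X[:10] += 50.0                       # gross pixel-like outliers
    X -= np.median(X, axis=0)            # crude robust centering for the sketch
    print(trimmed_grassmann_average(X))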

  3. Robust and scalable optical one-way quantum computation

    International Nuclear Information System (INIS)

    Wang Hefeng; Yang Chuiping; Nori, Franco

    2010-01-01

    We propose an efficient approach for deterministically generating scalable cluster states with photons. This approach involves unitary transformations performed on atoms coupled to optical cavities. Its operation cost scales linearly with the number of qubits in the cluster state, and photon qubits are encoded such that single-qubit operations can be easily implemented by using linear optics. Robust optical one-way quantum computation can be performed since cluster states can be stored in atoms and then transferred to photons that can be easily operated and measured. Therefore, this proposal could help in performing robust large-scale optical one-way quantum computation.

  4. Temporal Scalability through Adaptive M-Band Filter Banks for Robust H.264/MPEG-4 AVC Video Coding

    Directory of Open Access Journals (Sweden)

    Pau G

    2006-01-01

    This paper presents different structures that use adaptive M-band hierarchical filter banks for temporal scalability. Open-loop and closed-loop configurations are introduced and illustrated using existing video codecs. In particular, it is shown that the H.264/MPEG-4 AVC codec allows us to introduce scalability by frame shuffling operations, thus keeping backward compatibility with the standard. The large set of shuffling patterns introduced here can be exploited to adapt the encoding process to the video content features, as well as to the user equipment and transmission channel characteristics. Furthermore, simulation results show that this scalability is obtained with no degradation in terms of subjective and objective quality in error-free environments, while in error-prone channels the scalable versions provide increased robustness.
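
    The adaptive M-band banks themselves are beyond a short sketch, but a plain two-band Haar temporal decomposition shows the underlying idea of temporal scalability: the low-pass band is a half-frame-rate base layer, the high-pass band an enhancement layer, and keeping both gives perfect reconstruction.

    import numpy as np

    def haar_temporal_analysis(frames):
        """frames: (T, H, W) with even T; returns (low, high) temporal bands."""
        even, odd = frames[0::2], frames[1::2]
        low = (even + odd) / np.sqrt(2)      # half-rate base layer
        high = (even - odd) / np.sqrt(2)     # detail / enhancement layer
        return low, high

    def haar_temporal_synthesis(low, high):
        frames = np.empty((2 * low.shape[0],) + low.shape[1:])
        frames[0::2] = (low + high) / np.sqrt(2)
        frames[1::2] = (low - high) / np.sqrt(2)
        return frames

    video = np.random.default_rng(0).random((8, 4, 4))
    low, high = haar_temporal_analysis(video)
    assert np.allclose(haar_temporal_synthesis(low, high), video)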

  5. A robust and scalable neuromorphic communication system by combining synaptic time multiplexing and MIMO-OFDM.

    Science.gov (United States)

    Srinivasa, Narayan; Zhang, Deying; Grigorian, Beayna

    2014-03-01

    This paper describes a novel architecture for enabling robust and efficient neuromorphic communication. The architecture combines two concepts: 1) synaptic time multiplexing (STM) that trades space for speed of processing to create an intragroup communication approach that is firing rate independent and offers more flexibility in connectivity than cross-bar architectures and 2) a wired multiple input multiple output (MIMO) communication with orthogonal frequency division multiplexing (OFDM) techniques to enable a robust and efficient intergroup communication for neuromorphic systems. The MIMO-OFDM concept for the proposed architecture was analyzed by simulating large-scale spiking neural network architecture. Analysis shows that the neuromorphic system with MIMO-OFDM exhibits robust and efficient communication while operating in real time with a high bit rate. Through combining STM with MIMO-OFDM techniques, the resulting system offers a flexible and scalable connectivity as well as a power and area efficient solution for the implementation of very large-scale spiking neural architectures in hardware.
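
    A toy single-antenna OFDM link (not the paper's MIMO configuration): a word of spike bits is BPSK-mapped onto subcarriers, sent with an IFFT plus cyclic prefix, and recovered with an FFT. All parameters are illustrative.

    import numpy as np

    n_sub, cp = 64, 16
    rng = np.random.default_rng(0)
    bits = rng.integers(0, 2, n_sub)                  # one "spike word"
    symbols = 2.0 * bits - 1.0                        # BPSK mapping

    tx = np.fft.ifft(symbols)                         # one OFDM symbol
    tx = np.concatenate([tx[-cp:], tx])               # prepend cyclic prefix

    rx = tx + (rng.normal(0, .02, tx.shape) +         # mild channel noise
               1j * rng.normal(0, .02, tx.shape))
    est = np.fft.fft(rx[cp:])                         # strip CP, demodulate
    print("bit errors:", int(np.sum((est.real > 0) != (bits == 1))))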

  6. Image retrieval by information fusion based on scalable vocabulary tree and robust Hausdorff distance

    Science.gov (United States)

    Che, Chang; Yu, Xiaoyang; Sun, Xiaoming; Yu, Boyang

    2017-12-01

    In recent years, Scalable Vocabulary Tree (SVT) has been shown to be effective in image retrieval. However, for general images where the foreground is the object to be recognized while the background is cluttered, the performance of the current SVT framework is restricted. In this paper, a new image retrieval framework that incorporates a robust distance metric and information fusion is proposed, which improves the retrieval performance relative to the baseline SVT approach. First, the visual words that represent the background are diminished by using a robust Hausdorff distance between different images. Second, image matching results based on three image signature representations are fused, which enhances the retrieval precision. We conducted intensive experiments on small-scale to large-scale image datasets: Corel-9, Corel-48, and PKU-198, where the proposed Hausdorff metric and information fusion outperform the state-of-the-art methods by about 13, 15, and 15%, respectively.
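
    One common robustification of the Hausdorff distance, sketched here on point sets, replaces the max of the point-to-set distances with a ranked (partial) value, so that a few cluttered-background points cannot dominate; the paper's exact metric may differ.

    import numpy as np

    def partial_hausdorff(A, B, frac=0.8):
        """Directed partial Hausdorff distance from point set A to B."""
        d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2).min(axis=1)
        k = max(int(frac * len(d)) - 1, 0)
        return np.sort(d)[k]                 # k-th ranked distance, not the max

    def robust_hausdorff(A, B, frac=0.8):
        return max(partial_hausdorff(A, B, frac), partial_hausdorff(B, A, frac))

    rng = np.random.default_rng(0)
    A = rng.normal(size=(100, 2))
    B = A + rng.normal(scale=0.01, size=(100, 2))
    B[:5] += 100.0                           # clutter / outlier points
    print(robust_hausdorff(A, B))            # stays small despite the clutter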

  7. Recurrent, robust and scalable patterns underlie human approach and avoidance.

    Directory of Open Access Journals (Sweden)

    Byoung Woo Kim

    2010-05-01

    Approach and avoidance behavior provide a means for assessing the rewarding or aversive value of stimuli, and can be quantified by a keypress procedure whereby subjects work to increase (approach), decrease (avoid), or do nothing about time of exposure to a rewarding/aversive stimulus. To investigate whether approach/avoidance behavior might be governed by quantitative principles that meet engineering criteria for lawfulness and that encode known features of reward/aversion function, we evaluated whether keypress responses toward pictures with potential motivational value produced any regular patterns, such as a trade-off between approach and avoidance, or recurrent lawful patterns as observed with prospect theory. Three sets of experiments employed this task with beautiful face images, a standardized set of affective photographs, and pictures of food during controlled states of hunger and satiety. An iterative modeling approach to data identified multiple law-like patterns, based on variables grounded in the individual. These patterns were consistent across stimulus types, robust to noise, describable by a simple power law, and scalable between individuals and groups. Patterns included: (i) a preference trade-off counterbalancing approach and avoidance, (ii) a value function linking preference intensity to uncertainty about preference, and (iii) a saturation function linking preference intensity to its standard deviation, thereby setting limits to both. These law-like patterns were compatible with critical features of prospect theory, the matching law, and alliesthesia. Furthermore, they appeared consistent with both mean-variance and expected utility approaches to the assessment of risk. Ordering of responses across categories of stimuli demonstrated three properties thought to be relevant for preference-based choice, suggesting these patterns might be grouped together as a relative preference theory. Since variables in these patterns have been …

  8. CX: A Scalable, Robust Network for Parallel Computing

    Directory of Open Access Journals (Sweden)

    Peter Cappello

    2002-01-01

    CX, a network-based computational exchange, is presented. The system's design integrates variations of ideas from other researchers, such as work stealing, non-blocking tasks, eager scheduling, and space-based coordination. The object-oriented API is simple, compact, and cleanly separates application logic from the logic that supports interprocess communication and fault tolerance. Computations, of course, run to completion even as computational hosts join and leave the ongoing computation. Such hosts, or producers, use task caching and prefetching to overlap computation with interprocessor communication. To break a potential task server bottleneck, a network of task servers is presented. Even though task servers are envisioned as reliable, the self-organizing, scalable network of n servers, described as a sibling-connected height-balanced fat tree, tolerates a sequence of n-1 server failures. Tasks are distributed throughout the server network via a simple "diffusion" process. CX is intended as a test bed for research on automated silent auctions, reputation services, authentication services, and bonding services. CX also provides a test bed for algorithm research into network-based parallel computation.

  9. Recurrent, Robust and Scalable Patterns Underlie Human Approach and Avoidance

    Science.gov (United States)

    Kennedy, David N.; Lehár, Joseph; Lee, Myung Joo; Blood, Anne J.; Lee, Sang; Perlis, Roy H.; Smoller, Jordan W.; Morris, Robert; Fava, Maurizio

    2010-01-01

    Background Approach and avoidance behavior provide a means for assessing the rewarding or aversive value of stimuli, and can be quantified by a keypress procedure whereby subjects work to increase (approach), decrease (avoid), or do nothing about time of exposure to a rewarding/aversive stimulus. To investigate whether approach/avoidance behavior might be governed by quantitative principles that meet engineering criteria for lawfulness and that encode known features of reward/aversion function, we evaluated whether keypress responses toward pictures with potential motivational value produced any regular patterns, such as a trade-off between approach and avoidance, or recurrent lawful patterns as observed with prospect theory. Methodology/Principal Findings Three sets of experiments employed this task with beautiful face images, a standardized set of affective photographs, and pictures of food during controlled states of hunger and satiety. An iterative modeling approach to data identified multiple law-like patterns, based on variables grounded in the individual. These patterns were consistent across stimulus types, robust to noise, describable by a simple power law, and scalable between individuals and groups. Patterns included: (i) a preference trade-off counterbalancing approach and avoidance, (ii) a value function linking preference intensity to uncertainty about preference, and (iii) a saturation function linking preference intensity to its standard deviation, thereby setting limits to both. Conclusions/Significance These law-like patterns were compatible with critical features of prospect theory, the matching law, and alliesthesia. Furthermore, they appeared consistent with both mean-variance and expected utility approaches to the assessment of risk. Ordering of responses across categories of stimuli demonstrated three properties thought to be relevant for preference-based choice, suggesting these patterns might be grouped together as a relative preference theory. …

  10. Scalable Robust Principal Component Analysis Using Grassmann Averages

    DEFF Research Database (Denmark)

    Hauberg, Søren; Feragen, Aasa; Enficiaud, Raffi

    2016-01-01

    In large datasets, manual data verification is impossible, and we must expect the number of outliers to increase with data size. While principal component analysis (PCA) can reduce data size, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortunately, …

  11. Highly scalable and robust rule learner: performance evaluation and comparison.

    Science.gov (United States)

    Kurgan, Lukasz A; Cios, Krzysztof J; Dick, Scott

    2006-02-01

    Business intelligence and bioinformatics applications increasingly require the mining of datasets consisting of millions of data points, or crafting real-time enterprise-level decision support systems for large corporations and drug companies. In all cases, there needs to be an underlying data mining system, and this mining system must be highly scalable. To this end, we describe a new rule learner called DataSqueezer. The learner belongs to the family of inductive supervised rule extraction algorithms. DataSqueezer is a simple, greedy, rule builder that generates a set of production rules from labeled input data. In spite of its relative simplicity, DataSqueezer is a very effective learner. The rules generated by the algorithm are compact, comprehensible, and have accuracy comparable to rules generated by other state-of-the-art rule extraction algorithms. The main advantages of DataSqueezer are very high efficiency, and missing data resistance. DataSqueezer exhibits log-linear asymptotic complexity with the number of training examples, and it is faster than other state-of-the-art rule learners. The learner is also robust to large quantities of missing data, as verified by extensive experimental comparison with the other learners. DataSqueezer is thus well suited to modern data mining and business intelligence tasks, which commonly involve huge datasets with a large fraction of missing data.
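
    DataSqueezer itself is not reproduced here; the following is a generic greedy sequential-covering rule builder in the same family, with missing values (None) simply skipped when conditions are matched.

    def learn_rules(rows, target, max_rules=10):
        """rows: list of dicts attribute -> value (None = missing);
        target: class-label attribute. Returns (attr, value, class) rules."""
        rules, remaining = [], list(rows)
        while remaining and len(rules) < max_rules:
            best = None
            for attr in {a for r in remaining for a in r if a != target}:
                for val in {r.get(attr) for r in remaining
                            if r.get(attr) is not None}:
                    covered = [r for r in remaining if r.get(attr) == val]
                    if not covered:
                        continue
                    # prefer pure rules, then larger coverage
                    labels = [r[target] for r in covered]
                    cls = max(set(labels), key=labels.count)
                    score = (labels.count(cls) / len(labels), len(covered))
                    if best is None or score > best[0]:
                        best = (score, attr, val, cls)
            _, attr, val, cls = best
            rules.append((attr, val, cls))
            remaining = [r for r in remaining if r.get(attr) != val]
        return rules

    data = [
        {"outlook": "sunny", "windy": False, "play": "no"},
        {"outlook": "sunny", "windy": True, "play": "no"},
        {"outlook": "overcast", "windy": None, "play": "yes"},
        {"outlook": "rainy", "windy": False, "play": "yes"},
        {"outlook": "rainy", "windy": True, "play": "no"},
    ]
    print(learn_rules(data, "play"))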

  12. Scalable Production of Mechanically Robust Antireflection Film for Omnidirectional Enhanced Flexible Thin Film Solar Cells.

    Science.gov (United States)

    Wang, Min; Ma, Pengsha; Yin, Min; Lu, Linfeng; Lin, Yinyue; Chen, Xiaoyuan; Jia, Wei; Cao, Xinmin; Chang, Paichun; Li, Dongdong

    2017-09-01

    Antireflection (AR) at the interface between air and the incident window material is paramount to boosting the performance of photovoltaic devices. 3D nanostructures have attracted tremendous interest for reducing reflection, but such structures are vulnerable to harsh outdoor environments, so AR films with improved mechanical properties are desirable for industrial applications. Herein, a scalable production of flexible AR films with microsized structures by a roll-to-roll imprinting process is proposed, which possesses hydrophobic properties and much improved robustness. The AR films can potentially be used for a wide range of photovoltaic devices, whether based on rigid or flexible substrates. As a demonstration, the AR films are integrated with commercial Si-based triple-junction thin film solar cells. The AR film works as an effective tool to control the light travel path and utilize the incident light more efficiently by exciting hybrid optical modes, which results in a broadband and omnidirectionally enhanced performance.

  13. Reduce, reuse, recycle for robust cluster-state generation

    International Nuclear Information System (INIS)

    Horsman, Clare; Brown, Katherine L.; Kendon, Vivien M.; Munro, William J.

    2011-01-01

    Efficient generation of cluster states is crucial for engineering large-scale measurement-based quantum computers. Hybrid matter-optical systems offer a robust, scalable path to this goal. Such systems have an ancilla which acts as a bus connecting the qubits. We show that by generating the cluster in smaller sections of interlocking bricks, reusing one ancilla per brick, the cluster can be produced with maximal efficiency, requiring fewer than half the operations compared with no bus reuse. By reducing the time required to prepare sections of the cluster, bus reuse more than doubles the size of the computational workspace that can be used before decoherence effects dominate. A row of buses in parallel provides fully scalable cluster-state generation requiring only 20 controlled-phase gates per bus use.

  14. Swarm: robust and fast clustering method for amplicon-based studies

    Science.gov (United States)

    Rognes, Torbjørn; Quince, Christopher; de Vargas, Colomban; Dunthorn, Micah

    2014-01-01

    Popular de novo amplicon clustering methods suffer from two fundamental flaws: arbitrary global clustering thresholds, and input-order dependency induced by centroid selection. Swarm was developed to address these issues by first clustering nearly identical amplicons iteratively using a local threshold, and then by using clusters’ internal structure and amplicon abundances to refine its results. This fast, scalable, and input-order independent approach reduces the influence of clustering parameters and produces robust operational taxonomic units. PMID:25276506

  15. Swarm: robust and fast clustering method for amplicon-based studies

    Directory of Open Access Journals (Sweden)

    Frédéric Mahé

    2014-09-01

    Popular de novo amplicon clustering methods suffer from two fundamental flaws: arbitrary global clustering thresholds, and input-order dependency induced by centroid selection. Swarm was developed to address these issues by first clustering nearly identical amplicons iteratively using a local threshold, and then by using clusters’ internal structure and amplicon abundances to refine its results. This fast, scalable, and input-order independent approach reduces the influence of clustering parameters and produces robust operational taxonomic units.
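
    A sketch of the local-threshold idea (d = 1): grow each cluster by iteratively absorbing amplicons one edit away from any current member, rather than comparing everything to a fixed centroid. The real tool's abundance-based refinement stage is omitted here.

    def within_one_edit(a, b):
        """True if the Levenshtein distance between a and b is <= 1."""
        if a == b:
            return True
        if abs(len(a) - len(b)) > 1:
            return False
        i = 0
        while i < min(len(a), len(b)) and a[i] == b[i]:
            i += 1
        # allow a single substitution, insertion, or deletion at position i
        return a[i + 1:] == b[i + 1:] or a[i + 1:] == b[i:] or a[i:] == b[i + 1:]

    def swarm_like(amplicons):
        unassigned = set(amplicons)
        clusters = []
        while unassigned:
            seed = unassigned.pop()
            cluster, frontier = {seed}, {seed}
            while frontier:                  # breadth-first growth, d = 1 links
                frontier = {u for u in unassigned
                            if any(within_one_edit(u, m) for m in frontier)}
                unassigned -= frontier
                cluster |= frontier
            clusters.append(cluster)
        return clusters

    print(swarm_like(["ACGT", "ACGA", "ACGAA", "TTTT", "TTTA"]))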

  16. Mg(OH)₂ nanoparticles produced at room temperature by an innovative, facile, and scalable synthesis route

    Energy Technology Data Exchange (ETDEWEB)

    Taglieri, Giuliana, E-mail: giuliana.taglieri@univaq.it; Felice, Benito; Daniele, Valeria; Ferrante, Fabiola [University of L’Aquila, Department of Industrial and Information Engineering and Economics (Italy)

    2015-10-15

    Nanoparticles form the fundamental building blocks for many exciting applications in various scientific disciplines. However, the large-scale synthesis of nanoparticles remains challenging. An original, eco-friendly, single-step, and scalable method to produce magnesium hydroxide nanoparticles in aqueous suspension is presented here. The method, based on an ion-exchange process, is extremely simple and rapid (a few minutes). It employs cheap or renewable reactants, operates at room temperature, and does not require intermediate steps (washings/purifications) to eliminate undesired compounds. Moreover, it is possible to regenerate the exchange material and reuse it for new syntheses in a cyclic procedure, which makes nanoparticle production readily scalable. Several synthesis parameters are varied, and the structural and morphological features of the produced nanoparticles, from a few seconds after the start of the synthesis up to its end, are investigated by means of several techniques: X-ray diffraction (profile fitting and Rietveld refinement), transmission electron microscopy, infrared spectroscopy, thermal analyses, and surface area measurements. In all cases, pure and stable suspensions are produced, characterized by crystalline and mesoporous Mg(OH)₂ nanoparticles with lamellar morphology. In particular, the nanolamellae appear to consist of superimposed, crystallographically oriented, hexagonally plated crystalline nanosized precursors (2–3 nm in dimension).

  17. A robust and scalable TCR-based reporter cell assay to measure HIV-1 Nef-mediated T cell immune evasion.

    Science.gov (United States)

    Anmole, Gursev; Kuang, Xiaomei T; Toyoda, Mako; Martin, Eric; Shahid, Aniqa; Le, Anh Q; Markle, Tristan; Baraki, Bemuluyigza; Jones, R Brad; Ostrowski, Mario A; Ueno, Takamasa; Brumme, Zabrina L; Brockman, Mark A

    2015-11-01

    HIV-1 evades cytotoxic T cell responses through Nef-mediated downregulation of HLA class I molecules from the infected cell surface. Methods to quantify the impact of Nef on T cell recognition typically employ patient-derived T cell clones; however, these assays are limited by the cost and effort required to isolate and maintain primary cell lines. The variable activity of different T cell clones and the limited number of cells generated by re-stimulation can also hinder assay reproducibility and scalability. Here, we describe a heterologous T cell receptor reporter assay and use it to study immune evasion by Nef. Induction of NFAT-driven luciferase following co-culture with peptide-pulsed or virus-infected target cells serves as a rapid, quantitative and antigen-specific measure of T cell recognition of its cognate peptide/HLA complex. We demonstrate that Nef-mediated downregulation of HLA on target cells correlates inversely with T cell receptor-dependent luminescent signal generated by effector cells. This method provides a robust, flexible and scalable platform that is suitable for studies to measure Nef function in the context of different viral peptide/HLA antigens, to assess the function of patient-derived Nef alleles, or to screen small molecule libraries to identify novel Nef inhibitors.

  18. Environmentally Benign Production of Stretchable and Robust Superhydrophobic Silicone Monoliths.

    Science.gov (United States)

    Davis, Alexander; Surdo, Salvatore; Caputo, Gianvito; Bayer, Ilker S; Athanassiou, Athanassia

    2018-01-24

    Superhydrophobic materials hold an enormous potential in sectors as important as aerospace, food industries, or biomedicine. Despite this great promise, the lack of environmentally friendly production methods and limited robustness remain the two most pertinent barriers to the scalability, large-area production, and widespread use of superhydrophobic materials. In this work, highly robust superhydrophobic silicone monoliths are produced through a scalable and environmentally friendly emulsion technique. It is first found that stable and surfactantless water-in-polydimethylsiloxane (PDMS) emulsions can be formed through mechanical mixing. Increasing the internal phase fraction of the precursor emulsion is found to increase porosity and microtexture of the final monoliths, rendering them superhydrophobic. Silica nanoparticles can also be dispersed in the aqueous internal phase to create micro/nanotextured monoliths, giving further improvements in superhydrophobicity. Due to the elastomeric nature of PDMS, superhydrophobicity can be maintained even while the material is mechanically strained or compressed. In addition, because of their self-similarity, the monoliths show outstanding robustness to knife-scratch, tape-peel, and finger-wipe tests, as well as rigorous sandpaper abrasion. Superhydrophobicity was also unchanged when exposed to adverse environmental conditions including corrosive solutions, UV light, extreme temperatures, and high-energy droplet impact. Finally, important properties for eventual adoption in real-world applications including self-cleaning, stain-repellence, and blood-repellence are demonstrated.

  19. Development of Robust Metal-Supported SOFCs and Stack Components in EU METSAPP Consortium

    DEFF Research Database (Denmark)

    Sudireddy, Bhaskar Reddy; Nielsen, Jimmi; Persson, Åsa Helen

    2017-01-01

    …-SOFCs to enhance their robustness. In addition, the manufacturing of metal-supported cells with different geometries and the scalability of the manufacturing process were demonstrated, and more than 200 cells with an area of ∼150 cm2 were produced. The electrochemical performance of different cell generations was evaluated … in a 90% reduction in Cr evaporation, a three times thinner Cr2O3 scale, and increased lifetime. The possibility of assembling these cells into two radically different stack designs was demonstrated.

  20. New Region-Scalable Discriminant and Fitting Energy Functional for Driving Geometric Active Contours in Medical Image Segmentation

    Directory of Open Access Journals (Sweden)

    Xuchu Wang

    2014-01-01

    This paper presents a method that uses a region-scalable discriminant and fitting energy functional for handling the intensity inhomogeneity and weak boundary problems in medical image segmentation. The region-scalable discriminant and fitting energy functional is defined to capture the image intensity characteristics in local and global regions for driving the evolution of the active contour. The discriminant term in the model aims at separating background and foreground in scalable regions, while the fitting term tends to fit the intensity in these regions. This model is then transformed into a variational level set formulation with a level set regularization term for accurate computation. The new model utilizes intensity information in the local and global regions as much as possible, so it not only handles intensity inhomogeneity better, but is also more robust to noise and allows more flexible initialization in comparison to the original global-region and region-scalable based models. Experimental results for synthetic and real medical image segmentation show the advantages of the proposed method in terms of accuracy and robustness.
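
    As a hedged illustration, the region-scalable fitting (RSF) energy that such models build on has the following form; the paper's added discriminant term is not reproduced here. With image I, Gaussian kernel K_sigma, level set function phi, Heaviside function H, memberships M_1(phi) = H(phi) and M_2(phi) = 1 - H(phi), and local fitting functions f_i:

        E(\phi, f_1, f_2) = \sum_{i=1}^{2} \lambda_i \int \left[ \int K_\sigma(x - y)\, \lvert I(y) - f_i(x) \rvert^2\, M_i(\phi(y))\, \mathrm{d}y \right] \mathrm{d}x
                          + \nu \int \lvert \nabla H(\phi(x)) \rvert\, \mathrm{d}x
                          + \mu \int \tfrac{1}{2} \big( \lvert \nabla \phi(x) \rvert - 1 \big)^2\, \mathrm{d}x

    The nu term penalizes contour length and the mu term is the level set regularization mentioned in the abstract.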

  1. Human pluripotent stem cell-derived products: advances towards robust, scalable and cost-effective manufacturing strategies.

    Science.gov (United States)

    Jenkins, Michael J; Farid, Suzanne S

    2015-01-01

    The ability to develop cost-effective, scalable and robust bioprocesses for human pluripotent stem cells (hPSCs) will be key to their commercial success as cell therapies and tools for use in drug screening and disease modelling studies. This review outlines key process economic drivers for hPSCs and progress made on improving the economic and operational feasibility of hPSC bioprocesses. Factors influencing key cost metrics, namely capital investment and cost of goods, for hPSCs are discussed. Step efficiencies particularly for differentiation, media requirements and technology choice are amongst the key process economic drivers identified for hPSCs. Progress made to address these cost drivers in hPSC bioprocessing strategies is discussed. These include improving expansion and differentiation yields in planar and bioreactor technologies, the development of xeno-free media and microcarrier coatings, identification of optimal bioprocess operating conditions to control cell fate and the development of directed differentiation protocols that reduce reliance on expensive morphogens such as growth factors and small molecules. These approaches offer methods to further optimise hPSC bioprocessing in terms of its commercial feasibility.

  2. All-organic superhydrophobic coatings with mechanochemical robustness and liquid impalement resistance

    Science.gov (United States)

    Peng, Chaoyi; Chen, Zhuyang; Tiwari, Manish K.

    2018-03-01

    Superhydrophobicity is a remarkable evolutionary adaption manifested by several natural surfaces. Artificial superhydrophobic coatings with good mechanical robustness, substrate adhesion and chemical robustness have been achieved separately. However, a simultaneous demonstration of these features along with resistance to liquid impalement via high-speed drop/jet impact is challenging. Here, we describe all-organic, flexible superhydrophobic nanocomposite coatings that demonstrate strong mechanical robustness under cyclic tape peels and Taber abrasion, sustain exposure to highly corrosive media, namely aqua regia and sodium hydroxide solutions, and can be applied to surfaces through scalable techniques such as spraying and brushing. In addition, the mechanical flexibility of our coatings enables impalement resistance to high-speed drops and turbulent jets at least up to 35 m/s and a Weber number of 43,000. With multifaceted robustness and scalability, these coatings should find potential usage in harsh chemical engineering as well as infrastructure, transport vehicles and communication equipment.

  3. Scalable architecture for a room temperature solid-state quantum information processor.

    Science.gov (United States)

    Yao, N Y; Jiang, L; Gorshkov, A V; Maurer, P C; Giedke, G; Cirac, J I; Lukin, M D

    2012-04-24

    The realization of a scalable quantum information processor has emerged over the past decade as one of the central challenges at the interface of fundamental science and engineering. Here we propose and analyse an architecture for a scalable, solid-state quantum information processor capable of operating at room temperature. Our approach is based on recent experimental advances involving nitrogen-vacancy colour centres in diamond. In particular, we demonstrate that the multiple challenges associated with operation at ambient temperature, individual addressing at the nanoscale, strong qubit coupling, robustness against disorder and low decoherence rates can be simultaneously achieved under realistic, experimentally relevant conditions. The architecture uses a novel approach to quantum information transfer and includes a hierarchy of control at successive length scales. Moreover, it alleviates the stringent constraints currently limiting the realization of scalable quantum processors and will provide fundamental insights into the physics of non-equilibrium many-body quantum systems.

  4. Scalable domain decomposition solvers for stochastic PDEs in high performance computing

    International Nuclear Information System (INIS)

    Desai, Ajit; Pettit, Chris; Poirel, Dominique; Sarkar, Abhijit

    2017-01-01

    Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems, or linearized systems for non-linear problems, with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. Although these algorithms exhibit excellent scalability, significant algorithmic and implementational challenges exist to extend them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both the spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having a spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.
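
    A cartoon of the domain-decomposition idea on a deterministic 1D problem (no stochastic dimension, no parallelism, and not the authors' implementation): two overlapping subdomains of a finite-difference diffusion problem are solved exactly in alternation, exchanging boundary values through the overlap.

    import numpy as np

    n = 101                                    # grid points incl. boundaries
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    a = 1.0 + 0.5 * np.sin(2 * np.pi * x)      # spatially varying coefficient
    f = np.ones(n)                             # right-hand side of -(a u')' = f

    def solve_subdomain(u, lo, hi):
        """Exact solve on interior points lo..hi, using current u as BCs."""
        idx = np.arange(lo, hi + 1)
        m = len(idx)
        A = np.zeros((m, m))
        b = f[idx] * h * h
        for k, i in enumerate(idx):
            am = 0.5 * (a[i] + a[i - 1])       # coefficient at the left face
            ap = 0.5 * (a[i] + a[i + 1])       # coefficient at the right face
            A[k, k] = am + ap
            if k > 0: A[k, k - 1] = -am
            else:     b[k] += am * u[i - 1]    # Dirichlet data from neighbor
            if k < m - 1: A[k, k + 1] = -ap
            else:         b[k] += ap * u[i + 1]
        u[idx] = np.linalg.solve(A, b)

    u = np.zeros(n)                            # u(0) = u(1) = 0
    for sweep in range(50):
        solve_subdomain(u, 1, 60)              # left subdomain
        solve_subdomain(u, 40, n - 2)          # right subdomain (overlap 40..60)
    print("u at midpoint:", u[n // 2])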

  5. Scalable Tensor Factorizations with Missing Data

    DEFF Research Database (Denmark)

    Acar, Evrim; Dunlavy, Daniel M.; Kolda, Tamara G.

    2010-01-01

    … of missing data, many important data sets will be discarded or improperly analyzed. Therefore, we need a robust and scalable approach for factorizing multi-way arrays (i.e., tensors) in the presence of missing data. We focus on one of the most well-known tensor factorizations, CANDECOMP/PARAFAC (CP) … is shown to successfully factor tensors with noise and up to 70% missing data. Moreover, our approach is significantly faster than the leading alternative and scales to larger problems. To show the real-world usefulness of CP-WOPT, we illustrate its applicability on a novel EEG (electroencephalogram) …
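
    A small sketch in the spirit of weighted CP optimization: minimize 0.5 * ||W * (X - [[A, B, C]])||^2 over the factor matrices with a gradient-based optimizer, where the binary mask W zeroes out missing entries. This follows the published objective but is not the authors' implementation.

    import numpy as np
    from scipy.optimize import minimize

    def cp_wopt_like(X, W, rank, seed=0):
        I, J, K = X.shape
        x0 = np.random.default_rng(seed).normal(scale=0.1, size=(I + J + K) * rank)

        def unpack(v):
            A = v[: I * rank].reshape(I, rank)
            B = v[I * rank: (I + J) * rank].reshape(J, rank)
            C = v[(I + J) * rank:].reshape(K, rank)
            return A, B, C

        def fg(v):                             # objective value and gradient
            A, B, C = unpack(v)
            E = W * (np.einsum('ir,jr,kr->ijk', A, B, C) - X)
            g = np.concatenate([
                np.einsum('ijk,jr,kr->ir', E, B, C).ravel(),
                np.einsum('ijk,ir,kr->jr', E, A, C).ravel(),
                np.einsum('ijk,ir,jr->kr', E, A, B).ravel()])
            return 0.5 * np.sum(E * E), g

        res = minimize(fg, x0, jac=True, method='L-BFGS-B')
        return unpack(res.x)

    rng = np.random.default_rng(1)
    A, B, C = (rng.normal(size=(d, 2)) for d in (6, 5, 4))
    X = np.einsum('ir,jr,kr->ijk', A, B, C)
    W = (rng.random(X.shape) > 0.5).astype(float)    # ~50% observed entries
    Ah, Bh, Ch = cp_wopt_like(X, W, rank=2)
    Xh = np.einsum('ir,jr,kr->ijk', Ah, Bh, Ch)
    print("residual on observed entries:", np.linalg.norm(W * (X - Xh)))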

  6. Scalable devices

    KAUST Repository

    Krüger, Jens J.

    2014-01-01

    In computer science in general and in particular the field of high performance computing and supercomputing the term scalable plays an important role. It indicates that a piece of hardware, a concept, an algorithm, or an entire system scales with the size of the problem, i.e., it can not only be used in a very specific setting but it's applicable for a wide range of problems. From small scenarios to possibly very large settings. In this spirit, there exist a number of fixed areas of research on scalability. There are works on scalable algorithms, scalable architectures but what are scalable devices? In the context of this chapter, we are interested in a whole range of display devices, ranging from small scale hardware such as tablet computers, pads, smart-phones etc. up to large tiled display walls. What interests us mostly is not so much the hardware setup but mostly the visualization algorithms behind these display systems that scale from your average smart phone up to the largest gigapixel display walls.

  7. Content-Aware Scalability-Type Selection for Rate Adaptation of Scalable Video

    Directory of Open Access Journals (Sweden)

    Tekalp A Murat

    2007-01-01

    Scalable video coders provide different scaling options, such as temporal, spatial, and SNR scalabilities, where rate reduction by discarding enhancement layers of a different scalability type results in different kinds and/or levels of visual distortion depending on the content and bitrate. This dependency between scalability type, video content, and bitrate is not well investigated in the literature. To this effect, we first propose an objective function that quantifies flatness, blockiness, blurriness, and temporal jerkiness artifacts caused by rate reduction by spatial size, frame rate, and quantization parameter scaling. Next, the weights of this objective function are determined for different content (shot) types and different bitrates using a training procedure with subjective evaluation. Finally, a method is proposed for choosing the best scaling type for each temporal segment that results in minimum visual distortion according to this objective function, given the content type of temporal segments. Two subjective tests have been performed to validate the proposed procedure for content-aware selection of the best scalability type on soccer videos. Soccer videos scaled from 600 kbps to 100 kbps by the proposed content-aware selection of scalability type have been found visually superior to those that are scaled using a single scalability option over the whole sequence.
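
    A hypothetical selector in the spirit of the abstract: score each scalability type for a segment by a weighted sum of artifact measures and keep the minimum. The weights and artifact numbers below are placeholders, not the trained values from the paper.

    def pick_scalability(segment_options, weights):
        """segment_options: {scaling_type: artifact dict}; returns best type."""
        def distortion(art):
            return sum(weights[k] * art[k] for k in weights)
        return min(segment_options, key=lambda t: distortion(segment_options[t]))

    weights = {"flatness": 1.0, "blockiness": 1.2, "blurriness": 0.8, "jerkiness": 1.5}
    options = {
        "spatial":  {"flatness": .1, "blockiness": .2, "blurriness": .6, "jerkiness": .1},
        "temporal": {"flatness": .1, "blockiness": .1, "blurriness": .1, "jerkiness": .7},
        "SNR":      {"flatness": .4, "blockiness": .5, "blurriness": .2, "jerkiness": .1},
    }
    print(pick_scalability(options, weights))   # type with lowest weighted distortion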

  8. Robust sigma delta converters : and their application in low-power highly-digitized flexible receivers

    NARCIS (Netherlands)

    Veldhoven, van R.H.M.; Roermund, van A.H.M.

    2011-01-01

    Sigma Delta converters are a very popular choice for the A/D converter in multi-standard, mobile and cellular receivers. Key A/D converter specifications are high dynamic range, robustness, scalability, low power and low EMI. Robust Sigma Delta Converters presents a requirement derivation of a Sigma …
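
    For readers unfamiliar with the converter family discussed, a minimal first-order sigma-delta modulator (our toy, unrelated to the book's designs): an integrator plus 1-bit quantizer loop whose bitstream average tracks the input.

    import numpy as np

    def sigma_delta(x):
        v, y_prev, bits = 0.0, 0.0, []
        for s in x:
            v += s - y_prev                  # integrate input minus fed-back bit
            y_prev = 1.0 if v >= 0 else -1.0 # 1-bit quantizer
            bits.append(y_prev)
        return np.array(bits)

    t = np.arange(4096)
    x = 0.5 * np.sin(2 * np.pi * t / 512)            # slow input (high OSR)
    bits = sigma_delta(x)
    # decimation: a moving average recovers the input from the 1-bit stream
    rec = np.convolve(bits, np.ones(32) / 32, mode='same')
    print("rms error:", np.sqrt(np.mean((rec - x) ** 2)))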

  9. Error-Resilient Unequal Error Protection of Fine Granularity Scalable Video Bitstreams

    Science.gov (United States)

    Cai, Hua; Zeng, Bing; Shen, Guobin; Xiong, Zixiang; Li, Shipeng

    2006-12-01

    This paper deals with the optimal packet loss protection issue for streaming the fine granularity scalable (FGS) video bitstreams over IP networks. Unlike many other existing protection schemes, we develop an error-resilient unequal error protection (ER-UEP) method that adds redundant information optimally for loss protection and, at the same time, cancels completely the dependency among bitstream after loss recovery. In our ER-UEP method, the FGS enhancement-layer bitstream is first packetized into a group of independent and scalable data packets. Parity packets, which are also scalable, are then generated. Unequal protection is finally achieved by properly shaping the data packets and the parity packets. We present an algorithm that can optimally allocate the rate budget between data packets and parity packets, together with several simplified versions that have lower complexity. Compared with conventional UEP schemes that suffer from bit contamination (caused by the bit dependency within a bitstream), our method guarantees successful decoding of all received bits, thus leading to strong error-resilience (at any fixed channel bandwidth) and high robustness (under varying and/or unclean channel conditions).

  10. Scalable and cost-effective NGS genotyping in the cloud.

    Science.gov (United States)

    Souilmi, Yassine; Lancaster, Alex K; Jung, Jae-Yoon; Rizzo, Ettore; Hawkins, Jared B; Powles, Ryan; Amzazi, Saaïd; Ghazal, Hassan; Tonellato, Peter J; Wall, Dennis P

    2015-10-15

    While next-generation sequencing (NGS) costs have plummeted in recent years, the cost and complexity of computation remain substantial barriers to the use of NGS in routine clinical care. The clinical potential of NGS will not be realized until robust and routine whole genome sequencing data can be accurately rendered to medically actionable reports within a time window of hours and at scales of economy in the tens of dollars. We take a step towards addressing this challenge by using COSMOS, a cloud-enabled workflow management system, to develop GenomeKey, an NGS whole genome analysis workflow. COSMOS implements complex workflows making optimal use of high-performance compute clusters. Here we show that the Amazon Web Services (AWS) implementation of GenomeKey via COSMOS provides a fast, scalable, and cost-effective analysis of both public benchmarking and large-scale heterogeneous clinical NGS datasets. Our systematic benchmarking reveals important new insights and considerations for achieving clinical turn-around of whole genome analysis, including optimization and workflow management strategies such as strategic batching of individual genomes and efficient cluster resource configuration.

  11. A robust and rapid method of producing soluble, stable, and functional G-protein coupled receptors.

    Directory of Open Access Journals (Sweden)

    Karolina Corin

    Membrane proteins, particularly G-protein coupled receptors (GPCRs), are notoriously difficult to express. Using commercial E. coli cell-free systems with the detergent Brij-35, we could rapidly produce milligram quantities of 13 unique GPCRs. Immunoaffinity purification yielded receptors at >90% purity. Secondary structure analysis using circular dichroism indicated that the purified receptors were properly folded. Microscale thermophoresis, a novel label-free and surface-free detection technique that uses thermal gradients, showed that these receptors bound their ligands. The secondary structure and ligand-binding results from cell-free produced proteins were comparable to those expressed and purified from HEK293 cells. Our study demonstrates that cell-free protein production using commercially available kits and optimal detergents is a robust technology that can be used to produce sufficient GPCRs for biochemical, structural, and functional analyses. This robust and simple method may further stimulate others to study the structure and function of membrane proteins.

  12. Scalable Content Authentication in H.264/SVC Videos Using Perceptual Hashing based on Dempster-Shafer theory

    Directory of Open Access Journals (Sweden)

    Ye Dengpan

    2012-09-01

    The content authenticity of multimedia delivery is an important issue with the rapid development and wide use of multimedia technology. Many authentication solutions have been proposed to date, such as cryptography- and watermarking-based methods. However, in the latest heterogeneous networks video streams are transmitted in scalable codings such as H.264/SVC, and there is still no good authentication solution for them. In this paper, we first summarize related work and then propose a scalable content authentication scheme using a ratio-of-different-energy (RDE) based perceptual hash in the Q/S dimension, which uses Dempster-Shafer theory and is combined with the latest scalable video coding (H.264/SVC) construction. The idea of “sign once and verify in a scalable way” can thus be realized. Compared with previous methods, the proposed perceptual-hashing scheme outperforms earlier work in uncertainty (robustness) and efficiency for H.264/SVC video streams. Finally, experimental results verify the performance of our scheme.
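
    A generic energy-ratio perceptual hash sketch, only loosely inspired by the abstract (the paper's RDE construction in the Q/S dimension and its Dempster-Shafer combination are not reproduced): hash bits compare low-frequency block energies, so mild re-encoding flips few bits while tampering flips many.

    import numpy as np

    def perceptual_hash(frame, block=8):
        h, w = (d - d % block for d in frame.shape)
        bits, prev_energy = [], None
        for i in range(0, h, block):
            for j in range(0, w, block):
                spec = np.fft.rfft2(frame[i:i + block, j:j + block])
                energy = np.abs(spec[:2, :2]).sum()     # low-frequency energy
                if prev_energy is not None:
                    bits.append(1 if energy > prev_energy else 0)
                prev_energy = energy
        return np.array(bits)

    rng = np.random.default_rng(0)
    frame = rng.random((32, 32))
    noisy = frame + rng.normal(scale=0.01, size=frame.shape)  # mild re-encoding
    print("bit agreement:",
          (perceptual_hash(frame) == perceptual_hash(noisy)).mean())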

  13. Fabrication of Scalable Indoor Light Energy Harvester and Study for Agricultural IoT Applications

    International Nuclear Information System (INIS)

    Watanabe, M; Nakamura, A; Kunii, A; Kusano, K; Futagawa, M

    2015-01-01

    A scalable indoor light energy harvester was fabricated by microelectromechanical system (MEMS) and printing hybrid technology and evaluated for agricultural IoT applications under different environmental input power density conditions, such as outdoor farming under the sun, greenhouse farming under scattered lighting, and a plant factory under LEDs. We fabricated and evaluated a dye-sensitized-type solar cell (DSC) as a low cost and “scalable” optical harvester device. We developed a transparent conductive oxide (TCO)-less process with a honeycomb metal mesh substrate fabricated by MEMS technology. In terms of the electrical and optical properties, we achieved scalable harvester output power by cell area sizing. Second, we evaluated the dependence of the input power scalable characteristics on the input light intensity, spectrum distribution, and light inlet direction angle, because harvested environmental input power is unstable. The TiO2 fabrication relied on nanoimprint technology, which was designed for optical optimization and fabrication, and we confirmed that the harvesters are robust to a variety of environments. Finally, we studied optical energy harvesting applications for agricultural IoT systems. These scalable indoor light harvesters could be used in many applications and situations in smart agriculture.

  14. Fabrication of Scalable Indoor Light Energy Harvester and Study for Agricultural IoT Applications

    Science.gov (United States)

    Watanabe, M.; Nakamura, A.; Kunii, A.; Kusano, K.; Futagawa, M.

    2015-12-01

    A scalable indoor light energy harvester was fabricated by microelectromechanical system (MEMS) and printing hybrid technology and evaluated for agricultural IoT applications under different environmental input power density conditions, such as outdoor farming under the sun, greenhouse farming under scattered lighting, and a plant factory under LEDs. We fabricated and evaluated a dye-sensitized-type solar cell (DSC) as a low cost and “scalable” optical harvester device. We developed a transparent conductive oxide (TCO)-less process with a honeycomb metal mesh substrate fabricated by MEMS technology. In terms of the electrical and optical properties, we achieved scalable harvester output power by cell area sizing. Second, we evaluated the dependence of the input power scalable characteristics on the input light intensity, spectrum distribution, and light inlet direction angle, because harvested environmental input power is unstable. The TiO2 fabrication relied on nanoimprint technology, which was designed for optical optimization and fabrication, and we confirmed that the harvesters are robust to a variety of environments. Finally, we studied optical energy harvesting applications for agricultural IoT systems. These scalable indoor light harvesters could be used in many applications and situations in smart agriculture.

  15. Traffic and Quality Characterization of the H.264/AVC Scalable Video Coding Extension

    Directory of Open Access Journals (Sweden)

    Geert Van der Auwera

    2008-01-01

    The recent scalable video coding (SVC) extension to the H.264/AVC video coding standard has unprecedented compression efficiency while supporting a wide range of scalability modes, including temporal, spatial, and quality (SNR) scalability, as well as combined spatiotemporal SNR scalability. The traffic characteristics, especially the bit rate variabilities, of the individual layer streams critically affect their network transport. We study the SVC traffic statistics, including the bit rate distortion and bit rate variability distortion, with long CIF resolution video sequences and compare them with the corresponding MPEG-4 Part 2 traffic statistics. We consider (i) temporal scalability with three temporal layers, (ii) spatial scalability with a QCIF base layer and a CIF enhancement layer, as well as (iii) quality scalability modes FGS and MGS. We find that the significant improvement in RD efficiency of SVC is accompanied by substantially higher traffic variabilities as compared to the equivalent MPEG-4 Part 2 streams. We find that separately analyzing the traffic of temporal-scalability only encodings gives reasonable estimates of the traffic statistics of the temporal layers embedded in combined spatiotemporal encodings and in the base layer of combined FGS-temporal encodings. Overall, we find that SVC achieves significantly higher compression ratios than MPEG-4 Part 2, but produces unprecedented levels of traffic variability, thus presenting new challenges for the network transport of scalable video.

  16. A Scalable Smart Meter Data Generator Using Spark

    DEFF Research Database (Denmark)

    Iftikhar, Nadeem; Liu, Xiufeng; Danalachi, Sergiu

    2017-01-01

    Today, smart meters are being used worldwide and produce large volumes of data. Thus, it is important for smart meter data management and analytics systems to be able to process petabytes of data. Benchmarking and testing of these systems requires scalable data; however, it can …
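
    A hypothetical PySpark fragment showing how such a generator can scale out: the cross join of a meter range with an hour range yields one row per reading, generated in parallel. The column names, output path, and uniform random consumption model are our placeholders, not the paper's generator.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("meter-data-gen").getOrCreate()

    n_meters, n_hours = 1_000, 24 * 365
    readings = (
        spark.range(n_meters).withColumnRenamed("id", "meter_id")
        .crossJoin(spark.range(n_hours).withColumnRenamed("id", "hour"))
        .withColumn("kwh", F.round(F.rand(seed=42) * 3.0, 3))  # toy consumption
    )
    readings.write.mode("overwrite").parquet("/tmp/meter_readings")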

  17. Tradeoff between robustness and elaboration in carotenoid networks produces cycles of avian color diversification.

    Science.gov (United States)

    Badyaev, Alexander V; Morrison, Erin S; Belloni, Virginia; Sanderson, Michael J

    2015-08-20

    Resolution of the link between micro- and macroevolution calls for comparing both processes on the same deterministic landscape, such as genomic, metabolic or fitness networks. We apply this perspective to the evolution of carotenoid pigmentation that produces spectacular diversity in avian colors and show that basic structural properties of the underlying carotenoid metabolic network are reflected in global patterns of elaboration and diversification in color displays. Birds color themselves by consuming and metabolizing several dietary carotenoids from the environment. Such fundamental dependency on the most upstream external compounds should intrinsically constrain sustained evolutionary elongation of multi-step metabolic pathways needed for color elaboration unless the metabolic network gains robustness - the ability to synthesize the same carotenoid from an additional dietary starting point. We found that gains and losses of metabolic robustness were associated with evolutionary cycles of elaboration and stasis in expressed carotenoids in birds. Lack of metabolic robustness constrained lineages' metabolic explorations to the immediate biochemical vicinity of their ecologically distinct dietary carotenoids, whereas gains of robustness repeatedly resulted in sustained elongation of metabolic pathways on evolutionary time scales and corresponding color elaboration. The structural link between length and robustness in metabolic pathways may explain periodic convergence of phylogenetically distant and ecologically distinct species in expressed carotenoid pigmentation; account for stasis in carotenoid colors in some ecological lineages; and show how the connectivity of the underlying metabolic network provides a mechanistic link between microevolutionary elaboration and macroevolutionary diversification.

  18. Towards scalable quantum communication and computation: Novel approaches and realizations

    Science.gov (United States)

    Jiang, Liang

    Quantum information science involves exploration of fundamental laws of quantum mechanics for information processing tasks. This thesis presents several new approaches towards scalable quantum information processing. First, we consider a hybrid approach to scalable quantum computation, based on an optically connected network of few-qubit quantum registers. Specifically, we develop a novel scheme for scalable quantum computation that is robust against various imperfections. To justify that nitrogen-vacancy (NV) color centers in diamond can be a promising realization of the few-qubit quantum register, we show how to isolate a few proximal nuclear spins from the rest of the environment and use them for the quantum register. We also demonstrate experimentally that the nuclear spin coherence is only weakly perturbed under optical illumination, which allows us to implement quantum logical operations that use the nuclear spins to assist the repetitive-readout of the electronic spin. Using this technique, we demonstrate more than two-fold improvement in signal-to-noise ratio. Apart from direct application to enhance the sensitivity of the NV-based nano-magnetometer, this experiment represents an important step towards the realization of robust quantum information processors using electronic and nuclear spin qubits. We then study realizations of quantum repeaters for long distance quantum communication. Specifically, we develop an efficient scheme for quantum repeaters based on atomic ensembles. We use dynamic programming to optimize various quantum repeater protocols. In addition, we propose a new protocol of quantum repeater with encoding, which efficiently uses local resources (about 100 qubits) to identify and correct errors, to achieve fast one-way quantum communication over long distances. Finally, we explore quantum systems with topological order. Such systems can exhibit remarkable phenomena such as quasiparticles with anyonic statistics and have been proposed as …

  19. Space Situational Awareness Data Processing Scalability Utilizing Google Cloud Services

    Science.gov (United States)

    Greenly, D.; Duncan, M.; Wysack, J.; Flores, F.

    Space Situational Awareness (SSA) is a fundamental and critical component of current space operations. The term SSA encompasses the awareness, understanding and predictability of all objects in space. As the population of orbital space objects and debris increases, the number of collision avoidance maneuvers grows and prompts the need for accurate and timely process measures. The SSA mission continually evolves to near real-time assessment and analysis, demanding higher processing capabilities. By conventional methods, meeting these demands requires the integration of new hardware to keep pace with the growing complexity of maneuver planning algorithms. SpaceNav has implemented a highly scalable architecture that will track satellites and debris by utilizing powerful virtual machines on the Google Cloud Platform. SpaceNav algorithms for processing CDMs outpace conventional means. A robust processing environment for tracking data, collision avoidance maneuvers and various other aspects of SSA can be created and deleted on demand. Migrating SpaceNav tools and algorithms into the Google Cloud Platform will be discussed, along with the trials and tribulations involved. Information will be shared on how and why certain cloud products were used, as well as the integration techniques that were implemented. Key items to be presented are:
    1. Scientific algorithms and SpaceNav tools integrated into a scalable architecture: a) Maneuver Planning; b) Parallel Processing; c) Monte Carlo Simulations; d) Optimization Algorithms; e) SW Application Development/Integration into the Google Cloud Platform.
    2. Compute Engine Processing: a) Application Engine Automated Processing; b) Performance Testing and Performance Scalability; c) Cloud MySQL Databases and Database Scalability; d) Cloud Data Storage; e) Redundancy and Availability.

  20. Prodigious Effects of Concentration Intensification on Nanoparticle Synthesis: A High-Quality, Scalable Approach

    KAUST Repository

    Williamson, Curtis B.

    2015-12-23

    Realizing the promise of nanoparticle-based technologies demands more efficient, robust synthesis methods (i.e., process intensification) that consistently produce large quantities of high-quality nanoparticles (NPs). We explored NP synthesis via the heat-up method in a regime of previously unexplored high concentrations near the solubility limit of the precursors. We discovered that in this highly concentrated and viscous regime the NP synthesis parameters are less sensitive to experimental variability and thereby provide a robust, scalable, and size-focusing NP synthesis. Specifically, we synthesize high-quality metal sulfide NPs (<7% relative standard deviation for Cu2-xS and CdS), and demonstrate a 10-1000-fold increase in Cu2-xS NP production (>200 g) relative to the current field of large-scale (0.1-5 g yields) and laboratory-scale (<0.1 g) efforts. Compared to conventional synthesis methods (hot injection with dilute precursor concentration) characterized by rapid growth and low yield, our highly concentrated NP system supplies remarkably controlled growth rates and a 10-fold increase in NP volumetric production capacity (86 g/L). The controlled growth, high yield, and robust nature of highly concentrated solutions can facilitate large-scale nanomanufacturing of NPs by relaxing the synthesis requirements to achieve monodisperse products. Mechanistically, our investigation of the thermal and rheological properties and growth rates reveals that this high concentration regime has reduced mass diffusion (a 5-fold increase in solution viscosity), is stable to thermal perturbations (64% increase in heat capacity), and is resistant to Ostwald ripening.

  1. Scalable Nanomanufacturing—A Review

    Directory of Open Access Journals (Sweden)

    Khershed Cooper

    2017-01-01

    This article describes the field of scalable nanomanufacturing, its importance and need, its research activities and achievements. The National Science Foundation is taking a leading role in fostering basic research in scalable nanomanufacturing (SNM). From this effort several novel nanomanufacturing approaches have been proposed, studied and demonstrated, including scalable nanopatterning. This paper will discuss SNM research areas in materials, processes and applications, scale-up methods with project examples, and manufacturing challenges that need to be addressed to move nanotechnology discoveries closer to the marketplace.

  2. Towards a Scalable, Biomimetic, Antibacterial Coating

    Science.gov (United States)

    Dickson, Mary Nora

    Corneal afflictions are the second leading cause of blindness worldwide. When a corneal transplant is unavailable or contraindicated, an artificial cornea device is the only chance to save sight. Bacterial or fungal biofilm build up on artificial cornea devices can lead to serious complications including the need for systemic antibiotic treatment and even explantation. As a result, much emphasis has been placed on anti-adhesion chemical coatings and antibiotic leeching coatings. These methods are not long-lasting, and microorganisms can eventually circumvent these measures. Thus, I have developed a surface topographical antimicrobial coating. Various surface structures including rough surfaces, superhydrophobic surfaces, and the natural surfaces of insects' wings and sharks' skin are promising anti-biofilm candidates, however none meet the criteria necessary for implementation on the surface of an artificial cornea device. In this thesis I: 1) developed scalable fabrication protocols for a library of biomimetic nanostructure polymer surfaces, 2) assessed the potential for these poly(methyl methacrylate) nanopillars to kill or prevent formation of biofilm by E. coli bacteria and species of Pseudomonas and Staphylococcus bacteria, and improved upon a proposed mechanism for the rupture of Gram-negative bacterial cell walls, 3) developed a scalable, commercially viable method for producing antibacterial nanopillars on a curved, PMMA artificial cornea device, and 4) developed scalable fabrication protocols for implantation of antibacterial nanopatterned surfaces on the surfaces of thermoplastic polyurethane materials, commonly used in catheter tubings. This project constitutes a first step towards fabrication of the first entirely PMMA artificial cornea device. The major finding of this work is that by precisely controlling the topography of a polymer surface at the nano-scale, we can kill adherent bacteria and prevent biofilm formation of certain pathogenic bacteria.

  3. Robust Fully Distributed Minibatch Gradient Descent with Privacy Preservation

    Directory of Open Access Journals (Sweden)

    Gábor Danner

    2018-01-01

    Full Text Available Privacy and security are among the highest priorities in data mining approaches over data collected from mobile devices. Fully distributed machine learning is a promising direction in this context. However, it is a hard problem to design protocols that are efficient yet provide sufficient levels of privacy and security. In fully distributed environments, secure multiparty computation (MPC) is often applied to solve these problems. However, in our dynamic and unreliable application domain, known MPC algorithms are not scalable or not robust enough. We propose a lightweight protocol to quickly and securely compute the sum query over a subset of participants assuming a semihonest adversary. During the computation the participants learn no individual values. We apply this protocol to efficiently calculate the sum of gradients as part of a fully distributed minibatch stochastic gradient descent algorithm. The protocol achieves scalability and robustness by exploiting the fact that in this application domain a “quick and dirty” sum computation is acceptable. We utilize the Paillier homomorphic cryptosystem as part of our solution combined with extreme lossy gradient compression to make the cost of the cryptographic algorithms affordable. We demonstrate both theoretically and experimentally, based on churn statistics from a real smartphone trace, that the protocol is indeed practically viable.
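
    The additively homomorphic summation at the heart of this protocol can be sketched in a few lines. The snippet below uses the open-source python-paillier (phe) package; the participant values, key size, and single-decryptor setup are illustrative simplifications, not the authors' full protocol (which adds lossy gradient compression, partial sums, and churn handling).

    ```python
    # Minimal sketch of a Paillier-based secure sum (illustrative only).
    from phe import paillier

    public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

    # Hypothetical per-participant (compressed) gradient contributions.
    local_gradients = [0.12, -0.07, 0.31, 0.05]

    # Each participant encrypts locally; only ciphertexts leave the device.
    ciphertexts = [public_key.encrypt(g) for g in local_gradients]

    # Any untrusted node can aggregate the ciphertexts without learning any
    # individual value: Paillier encryption is additively homomorphic.
    encrypted_sum = ciphertexts[0]
    for c in ciphertexts[1:]:
        encrypted_sum += c

    # Only the private-key holder recovers the aggregate gradient sum.
    print(private_key.decrypt(encrypted_sum))  # 0.41, up to float precision
    ```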

  4. Numeric Analysis for Relationship-Aware Scalable Streaming Scheme

    Directory of Open Access Journals (Sweden)

    Heung Ki Lee

    2014-01-01

    Full Text Available Frequent packet loss of media data is a critical problem that degrades the quality of streaming services over mobile networks. Packet loss invalidates frames containing lost packets and other related frames at the same time. Indirect loss caused by losing packets decreases the quality of streaming. A scalable streaming service can decrease the amount of dropped multimedia resulting from a single packet loss. Content providers typically divide one large media stream into several layers through a scalable streaming service and then provide each scalable layer to the user depending on the mobile network. Also, a scalable streaming service makes it possible to decode partial multimedia data depending on the relationship between frames and layers. Therefore, a scalable streaming service provides a way to decrease the wasted multimedia data when one packet is lost. However, the hierarchical structure between frames and layers of scalable streams determines the service quality of the scalable streaming service. Even if whole packets of layers are transmitted successfully, they cannot be decoded as a result of the absence of reference frames and layers. Therefore, the complicated relationship between frames and layers in a scalable stream increases the volume of abandoned layers. To provide a high-quality scalable streaming service, we must choose a proper relationship between scalable layers as well as the amount of transmitted multimedia data depending on the network situation. We prove that a simple scalable scheme outperforms a complicated scheme in an error-prone network. We suggest an adaptive set-top box (AdaptiveSTB) to lower the dependency between scalable layers in a scalable stream. Also, we provide a numerical model to obtain the indirect loss of multimedia data and apply it to various multimedia streams. Our AdaptiveSTB enhances the quality of a scalable streaming service by removing indirect loss.
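
    How indirect loss arises from the layer hierarchy can be illustrated with a toy dependency check; the dependency graph and lost set below are hypothetical, not the paper's numerical model.

    ```python
    # Toy illustration of indirect loss: a layer is decodable only if it
    # and every layer it references survived. Dependencies are made up.
    deps = {
        "base": [],
        "enh1": ["base"],          # first enhancement layer needs the base
        "enh2": ["base", "enh1"],  # second needs both layers below it
    }

    def decodable(layer, lost):
        return layer not in lost and all(decodable(d, lost) for d in deps[layer])

    lost = {"enh1"}  # a single lost layer...
    print([l for l in deps if not decodable(l, lost)])
    # ['enh1', 'enh2']: enh2 arrived intact but is wasted, i.e. indirect loss
    ```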

  5. Scalable coherent interface

    International Nuclear Information System (INIS)

    Alnaes, K.; Kristiansen, E.H.; Gustavson, D.B.; James, D.V.

    1990-01-01

    The Scalable Coherent Interface (IEEE P1596) is establishing an interface standard for very high performance multiprocessors, supporting a cache-coherent-memory model scalable to systems with up to 64K nodes. This Scalable Coherent Interface (SCI) will supply a peak bandwidth per node of 1 GigaByte/second. The SCI standard should facilitate assembly of processor, memory, I/O and bus bridge cards from multiple vendors into massively parallel systems with throughput far above what is possible today. The SCI standard encompasses two levels of interface, a physical level and a logical level. The physical level specifies electrical, mechanical and thermal characteristics of connectors and cards that meet the standard. The logical level describes the address space, data transfer protocols, cache coherence mechanisms, synchronization primitives and error recovery. In this paper we address logical level issues such as packet formats, packet transmission, transaction handshake, flow control, and cache coherence. 11 refs., 10 figs

  6. Scalable, ultra-resistant structural colors based on network metamaterials

    KAUST Repository

    Galinski, Henning

    2017-05-05

    Structural colors have drawn wide attention for their potential as a future printing technology for various applications, ranging from biomimetic tissues to adaptive camouflage materials. However, an efficient approach to realize robust colors with a scalable fabrication technique is still lacking, hampering the realization of practical applications with this platform. Here, we develop a new approach based on large-scale network metamaterials that combine dealloyed subwavelength structures at the nanoscale with lossless, ultra-thin dielectric coatings. By using theory and experiments, we show how subwavelength dielectric coatings control a mechanism of resonant light coupling with epsilon-near-zero regions generated in the metallic network, driving the formation of saturated structural colors that cover a wide portion of the spectrum. Ellipsometry measurements confirm that these colors remain efficient even at observation angles of 70°. The network-like architecture of these nanomaterials allows for high mechanical resistance, which is quantified in a series of nano-scratch tests. With such remarkable properties, these metastructures represent a robust design technology for real-world, large-scale commercial applications.

  7. Scalable photoreactor for hydrogen production

    KAUST Repository

    Takanabe, Kazuhiro; Shinagawa, Tatsuya

    2017-01-01

    Provided herein are scalable photoreactors that can include a membrane-free water-splitting electrolyzer and systems that can include a plurality of membrane-free water-splitting electrolyzers. Also provided herein are methods of using the scalable photoreactors provided herein.

  8. Scalable photoreactor for hydrogen production

    KAUST Repository

    Takanabe, Kazuhiro

    2017-04-06

    Provided herein are scalable photoreactors that can include a membrane-free water-splitting electrolyzer and systems that can include a plurality of membrane-free water-splitting electrolyzers. Also provided herein are methods of using the scalable photoreactors provided herein.

  9. Resource-aware complexity scalability for mobile MPEG encoding

    NARCIS (Netherlands)

    Mietens, S.O.; With, de P.H.N.; Hentschel, C.; Panchanatan, S.; Vasudev, B.

    2004-01-01

    Complexity scalability attempts to scale the required resources of an algorithm with the chosen quality settings, in order to broaden the application range. In this paper, we present complexity-scalable MPEG encoding in which the core processing modules are modified for scalability. Scalability is

  10. Mapping robust parallel multigrid algorithms to scalable memory architectures

    Science.gov (United States)

    Overman, Andrea; Vanrosendale, John

    1993-01-01

    The convergence rate of standard multigrid algorithms degenerates on problems with stretched grids or anisotropic operators. The usual cure for this is the use of line or plane relaxation. However, multigrid algorithms based on line and plane relaxation have limited and awkward parallelism and are quite difficult to map effectively to highly parallel architectures. Newer multigrid algorithms that overcome anisotropy through the use of multiple coarse grids rather than relaxation are better suited to massively parallel architectures because they require only simple point-relaxation smoothers. In this paper, we look at the parallel implementation of a V-cycle multiple semicoarsened grid (MSG) algorithm on distributed-memory architectures such as the Intel iPSC/860 and Paragon computers. The MSG algorithms provide two levels of parallelism: parallelism within the relaxation or interpolation on each grid and across the grids on each multigrid level. Both levels of parallelism must be exploited to map these algorithms effectively to parallel architectures. This paper describes a mapping of an MSG algorithm to distributed-memory architectures that demonstrates how both levels of parallelism can be exploited. The result is a robust and effective multigrid algorithm for distributed-memory machines.
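
    For context on the building blocks being parallelized (smoothing, restriction, coarse-grid correction), the sketch below is a textbook single-coarse-grid V-cycle for the 1-D Poisson problem; it is not the MSG algorithm itself, and all parameters are illustrative.

    ```python
    import numpy as np

    def smooth(u, f, h, iters=3, w=2/3):
        for _ in range(iters):  # weighted Jacobi relaxation for -u'' = f
            u = u.copy()
            u[1:-1] = (1 - w)*u[1:-1] + w*0.5*(u[:-2] + u[2:] + h*h*f[1:-1])
        return u

    def residual(u, f, h):
        r = np.zeros_like(u)
        r[1:-1] = f[1:-1] - (2*u[1:-1] - u[:-2] - u[2:])/(h*h)
        return r

    def restrict(r):  # full weighting onto the next coarser grid
        rc = np.zeros((len(r) + 1)//2)
        rc[1:-1] = 0.25*r[1:-2:2] + 0.5*r[2:-1:2] + 0.25*r[3::2]
        return rc

    def prolong(ec, n):  # linear interpolation back to the fine grid
        e = np.zeros(n)
        e[::2] = ec
        e[1::2] = 0.5*(ec[:-1] + ec[1:])
        return e

    def vcycle(u, f, h):
        u = smooth(u, f, h)
        if len(u) <= 3:
            return smooth(u, f, h, iters=20)  # coarsest grid: just relax
        ec = vcycle(np.zeros((len(u) + 1)//2), restrict(residual(u, f, h)), 2*h)
        return smooth(u + prolong(ec, len(u)), f, h)

    n, h = 65, 1/64
    x = np.linspace(0, 1, n)
    u, f = np.zeros(n), np.pi**2*np.sin(np.pi*x)
    for _ in range(10):
        u = vcycle(u, f, h)
    print(abs(u - np.sin(np.pi*x)).max())  # error at the discretization level
    ```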

  11. Development of a scalable suspension culture for cardiac differentiation from human pluripotent stem cells

    Directory of Open Access Journals (Sweden)

    Vincent C. Chen

    2015-09-01

    Full Text Available To meet the need for a large quantity of hPSC-derived cardiomyocytes (CM) for pre-clinical and clinical studies, a robust and scalable differentiation system for CM production is essential. With a human pluripotent stem cell (hPSC) aggregate suspension culture system we established previously, we developed a matrix-free, scalable, and GMP-compliant process for directing hPSC differentiation to CM in suspension culture by modulating Wnt pathways with small molecules. By optimizing critical process parameters including cell aggregate size, small molecule concentrations, induction timing, and agitation rate, we were able to consistently differentiate hPSCs to >90% CM purity with an average yield of 1.5 to 2 × 10⁹ CM/L at scales up to 1 L spinner flasks. CM generated from the suspension culture displayed typical genetic, morphological, and electrophysiological cardiac cell characteristics. This suspension culture system allows seamless transition from hPSC expansion to CM differentiation in a continuous suspension culture. It not only provides a cost- and labor-effective scalable process for large scale CM production, but also provides a bioreactor prototype for automation of cell manufacturing, which will accelerate the advance of hPSC research towards therapeutic applications.

  12. Adaptive format conversion for scalable video coding

    Science.gov (United States)

    Wan, Wade K.; Lim, Jae S.

    2001-12-01

    The enhancement layer in many scalable coding algorithms is composed of residual coding information. There is another type of information that can be transmitted instead of (or in addition to) residual coding. Since the encoder has access to the original sequence, it can utilize adaptive format conversion (AFC) to generate the enhancement layer and transmit the different format conversion methods as enhancement data. This paper investigates the use of adaptive format conversion information as enhancement data in scalable video coding. Experimental results are shown for a wide range of base layer qualities and enhancement bitrates to determine when AFC can improve video scalability. Since the parameters needed for AFC are small compared to residual coding, AFC can provide video scalability at low enhancement layer bitrates that are not possible with residual coding. In addition, AFC can also be used in addition to residual coding to improve video scalability at higher enhancement layer bitrates. Adaptive format conversion has not been studied in detail, but many scalable applications may benefit from it. An example of an application that AFC is well-suited for is the migration path for digital television where AFC can provide immediate video scalability as well as assist future migrations.
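
    A hedged sketch of the idea: because the encoder can compare candidate format conversions against the original, it only needs to signal a per-block method index as enhancement data. The two vertical up-conversion methods and the block size below are illustrative, not the paper's actual conversion tools.

    ```python
    import numpy as np

    def up_repeat(low):   # method 0: line repetition
        return np.repeat(low, 2, axis=0).astype(float)

    def up_average(low):  # method 1: interpolate the inserted lines
        up = np.repeat(low, 2, axis=0).astype(float)
        up[1:-1:2] = 0.5*(up[0:-2:2] + up[2::2])
        return up

    def afc_choices(original, low, block=8):
        candidates = [up_repeat(low), up_average(low)]
        choices = []
        for c in range(0, original.shape[1], block):
            errs = [((cand[:, c:c+block] - original[:, c:c+block])**2).sum()
                    for cand in candidates]
            choices.append(int(np.argmin(errs)))  # a few bits per block
        return choices

    rng = np.random.default_rng(1)
    original = rng.random((16, 32))              # stand-in for a frame
    print(afc_choices(original, original[::2]))  # one index per 8-column block
    ```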

  13. Scalable Density-Based Subspace Clustering

    DEFF Research Database (Denmark)

    Müller, Emmanuel; Assent, Ira; Günnemann, Stephan

    2011-01-01

    For knowledge discovery in high dimensional databases, subspace clustering detects clusters in arbitrary subspace projections. Scalability is a crucial issue, as the number of possible projections is exponential in the number of dimensions. We propose a scalable density-based subspace clustering...... method that steers mining to few selected subspace clusters. Our novel steering technique reduces subspace processing by identifying and clustering promising subspaces and their combinations directly. Thereby, it narrows down the search space while maintaining accuracy. Thorough experiments on real...... and synthetic databases show that steering is efficient and scalable, with high quality results. For future work, our steering paradigm for density-based subspace clustering opens research potential for speeding up other subspace clustering approaches as well....

  14. Scalable devices

    KAUST Repository

    Krü ger, Jens J.; Hadwiger, Markus

    2014-01-01

    In computer science in general and in particular the field of high performance computing and supercomputing the term scalable plays an important role. It indicates that a piece of hardware, a concept, an algorithm, or an entire system scales

  15. Scalable manufacturing of biomimetic moldable hydrogels for industrial applications

    Science.gov (United States)

    Yu, Anthony C.; Chen, Haoxuan; Chan, Doreen; Agmon, Gillie; Stapleton, Lyndsay M.; Sevit, Alex M.; Tibbitt, Mark W.; Acosta, Jesse D.; Zhang, Tony; Franzia, Paul W.; Langer, Robert; Appel, Eric A.

    2016-12-01

    Hydrogels are a class of soft material that is exploited in many, often completely disparate, industrial applications, on account of their unique and tunable properties. Advances in soft material design are yielding next-generation moldable hydrogels that address engineering criteria in several industrial settings such as complex viscosity modifiers, hydraulic or injection fluids, and sprayable carriers. Industrial implementation of these viscoelastic materials requires extreme volumes of material, upwards of several hundred million gallons per year. Here, we demonstrate a paradigm for the scalable fabrication of self-assembled moldable hydrogels using rationally engineered, biomimetic polymer-nanoparticle interactions. Cellulose derivatives are linked together by selective adsorption to silica nanoparticles via dynamic and multivalent interactions. We show that the self-assembly process for gel formation is easily scaled in a linear fashion from 0.5 mL to over 15 L without alteration of the mechanical properties of the resultant materials. The facile and scalable preparation of these materials leveraging self-assembly of inexpensive, renewable, and environmentally benign starting materials, coupled with the tunability of their properties, make them amenable to a range of industrial applications. In particular, we demonstrate their utility as injectable materials for pipeline maintenance and product recovery in industrial food manufacturing as well as their use as sprayable carriers for robust application of fire retardants in preventing wildland fires.

  16. Silicon quantum processor with robust long-distance qubit couplings

    Energy Technology Data Exchange (ETDEWEB)

    Tosi, Guilherme; Mohiyaddin, Fahd A.; Schmitt, Vivien; Tenberg, Stefanie; Rahman, Rajib; Klimeck, Gerhard; Morello, Andrea

    2017-09-06

    Practical quantum computers require a large network of highly coherent qubits, interconnected in a design robust against errors. Donor spins in silicon provide state-of-the-art coherence and quantum gate fidelities, in a platform adapted from industrial semiconductor processing. Here we present a scalable design for a silicon quantum processor that does not require precise donor placement and leaves ample space for the routing of interconnects and readout devices. We introduce the flip-flop qubit, a combination of the electron-nuclear spin states of a phosphorus donor that can be controlled by microwave electric fields. Two-qubit gates exploit a second-order electric dipole-dipole interaction, allowing selective coupling beyond the nearest-neighbor, at separations of hundreds of nanometers, while microwave resonators can extend the entanglement to macroscopic distances. We predict gate fidelities within fault-tolerance thresholds using realistic noise models. This design provides a realizable blueprint for scalable spin-based quantum computers in silicon.

  17. Evidence of "hidden hearing loss" following noise exposures that produce robust TTS and ABR wave-I amplitude reductions.

    Science.gov (United States)

    Lobarinas, Edward; Spankovich, Christopher; Le Prell, Colleen G

    2017-06-01

    In animals, noise exposures that produce robust temporary threshold shifts (TTS) can produce immediate damage to afferent synapses and long-term degeneration of low spontaneous rate auditory nerve fibers. This synaptopathic damage has been shown to correlate with reduced auditory brainstem response (ABR) wave-I amplitudes at suprathreshold levels. The perceptual consequences of this "synaptopathy" remain unknown but have been suggested to include compromised hearing performance in competing background noise. Here, we used a modified startle inhibition paradigm to evaluate whether noise exposures that produce robust TTS and ABR wave-I reduction but not permanent threshold shift (PTS) reduced hearing-in-noise performance. Animals exposed to 109 dB SPL octave band noise showed TTS >30 dB 24-h post noise and modest but persistent ABR wave-I reduction 2 weeks post noise despite full recovery of ABR thresholds. Hearing-in-noise performance was negatively affected by the noise exposure. However, the effect was observed only at the poorest signal to noise ratio and was frequency specific. Although TTS >30 dB 24-h post noise was a predictor of functional deficits, there was no relationship between the degree of ABR wave-I reduction and degree of functional impairment. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Development of a modular and scalable sensor system for the gathering of position and orientation of moved objects

    International Nuclear Information System (INIS)

    Klingbeil, L.

    2006-02-01

    A modular and scalable sensor system for the estimation of position and orientation of moving objects has been developed and characterized. A sensor unit, which is mounted to the moving object, consists of acceleration -, angular rate - and magnetic field sensors for every spatial axis. Customized Kalman filter algorithms provide a robust and low latency reconstruction of the sensor's orientation. Additionally an ultrasound transducer network is used to measure the distance of a sensor unit with respect to several reference points in the room. This allows reconstruction of the absolute position using trilateration methods. The system is scalable with respect to the number of sensor units and the covered tracking volume. It is suitable for various applications for example the analysis of body movements or head tracking in augmented or virtual reality environments. (orig.)
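
    The position part of such a system reduces to trilateration from the measured distances to the known reference points. A minimal linear least-squares version is sketched below; the anchor layout and target are made up, and the Kalman-based orientation fusion described above is omitted.

    ```python
    import numpy as np

    def trilaterate(anchors, dists):
        """Solve |x - a_i| = d_i in the least-squares sense by
        subtracting the first sphere equation from the others."""
        A = 2*(anchors[1:] - anchors[0])
        b = (dists[0]**2 - dists[1:]**2
             + (anchors[1:]**2).sum(axis=1) - (anchors[0]**2).sum())
        return np.linalg.lstsq(A, b, rcond=None)[0]

    anchors = np.array([[0, 0, 0], [3, 0, 0], [0, 4, 0], [0, 0, 5]], float)
    target = np.array([1.0, 2.0, 1.5])
    ranges = np.linalg.norm(anchors - target, axis=1)  # ideal measurements
    print(trilaterate(anchors, ranges))  # recovers ~[1.0, 2.0, 1.5]
    ```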

  19. Differentiation of Human Pluripotent Stem Cells into Functional Endothelial Cells in Scalable Suspension Culture

    Directory of Open Access Journals (Sweden)

    Ruth Olmer

    2018-05-01

    Full Text Available Summary: Endothelial cells (ECs) are involved in a variety of cellular responses. As multifunctional components of vascular structures, endothelial (progenitor) cells have been utilized in cellular therapies and are required as an important cellular component of engineered tissue constructs and in vitro disease models. Although primary ECs from different sources are readily isolated and expanded, cell quantity and quality in terms of functionality and karyotype stability are limited. ECs derived from human induced pluripotent stem cells (hiPSCs) represent an alternative and potentially superior cell source, but traditional culture approaches and 2D differentiation protocols hardly allow for production of large cell numbers. Aiming at the production of ECs, we have developed a robust approach for efficient endothelial differentiation of hiPSCs in scalable suspension culture. The established protocol results in relevant numbers of ECs for regenerative approaches and industrial applications that show in vitro proliferation capacity and a high degree of chromosomal stability. In this article, U. Martin and colleagues show the generation of hiPSC endothelial cells in scalable cultures in up to 100 mL culture volume. The generated ECs show in vitro proliferation capacity and a high degree of chromosomal stability after in vitro expansion. The established protocol allows generation of hiPSC-derived ECs in relevant numbers for regenerative approaches. Keywords: hiPSC differentiation, endothelial cells, scalable culture

  20. Development of a robust, versatile, and scalable inoculum train for the production of a DNA vaccine.

    Science.gov (United States)

    Okonkowski, J; Kizer-Bentley, L; Listner, K; Robinson, D; Chartrain, M

    2005-01-01

    For many microbial fermentation processes, the inoculum train can have a substantial impact on process performance in terms of productivity, profitability, and process control. In general, it is understood that a well-characterized and flexible inoculum train is essential for future scale-up and implementation of the process in a pilot plant or manufacturing setting. A fermentation process utilizing E. coli DH5 for the production of plasmid DNA carrying the HIV gag gene for use as a vaccine is currently under development in our laboratory. As part of the development effort, we evaluated inoculum train schemes that incorporate one, two, or three stages. In addition, we investigated the effect of inoculum viable-cell concentrations, either thawed or actively growing, over a wide range (from 2.5 × 10⁴ to 1.0 × 10⁸ viable cells/mL or approximately 0.001% to 4% of final working volume). The various inoculum trains were evaluated in terms of final plasmid yield, process time, reproducibility, robustness, and feasibility at large scale. The results of these studies show that final plasmid yield remained in the desired range, despite the number of stages or inoculation viable-cell concentrations comprising the inoculum train. On the basis of these observations and because it established a large database, the first part of these investigations supports an exceptional flexibility in the design of scalable inoculum trains for this DNA vaccine process. This work also highlighted that a slightly higher level of process reproducibility, as measured by the time for the culture to reach mid-exponential growth, was observed when using actively growing versus frozen cells. It also demonstrated the existence of a viable-cell concentration threshold for the one-stage process, since we observed that inoculation of the production stage with very low amounts of viable cells from a frozen source could lead to increased process sensitivity to external factors such as variation in the

  1. Scalable algorithms for contact problems

    CERN Document Server

    Dostál, Zdeněk; Sadowská, Marie; Vondrák, Vít

    2016-01-01

    This book presents a comprehensive and self-contained treatment of the authors’ newly developed scalable algorithms for the solutions of multibody contact problems of linear elasticity. The brand new feature of these algorithms is theoretically supported numerical scalability and parallel scalability demonstrated on problems discretized by billions of degrees of freedom. The theory supports solving multibody frictionless contact problems, contact problems with possibly orthotropic Tresca’s friction, and transient contact problems. It covers BEM discretization, jumping coefficients, floating bodies, mortar non-penetration conditions, etc. The exposition is divided into four parts, the first of which reviews appropriate facets of linear algebra, optimization, and analysis. The most important algorithms and optimality results are presented in the third part of the volume. The presentation is complete, including continuous formulation, discretization, decomposition, optimality results, and numerical experimen...

  2. Bioinspired superhydrophobic surfaces, fabricated through simple and scalable roll-to-roll processing.

    Science.gov (United States)

    Park, Sung-Hoon; Lee, Sangeui; Moreira, David; Bandaru, Prabhakar R; Han, InTaek; Yun, Dong-Jin

    2015-10-22

    A simple, scalable, non-lithographic, technique for fabricating durable superhydrophobic (SH) surfaces, based on the fingering instabilities associated with non-Newtonian flow and shear tearing, has been developed. The high viscosity of the nanotube/elastomer paste has been exploited for the fabrication. The fabricated SH surfaces had the appearance of bristled shark skin and were robust with respect to mechanical forces. While flow instability is regarded as adverse to roll-coating processes for fabricating uniform films, we especially use the effect to create the SH surface. Along with their durability and self-cleaning capabilities, we have demonstrated drag reduction effects of the fabricated films through dynamic flow measurements.

  3. Targeted amplicon sequencing (TAS): a scalable next-gen approach to multilocus, multitaxa phylogenetics.

    Science.gov (United States)

    Bybee, Seth M; Bracken-Grissom, Heather; Haynes, Benjamin D; Hermansen, Russell A; Byers, Robert L; Clement, Mark J; Udall, Joshua A; Wilcox, Edward R; Crandall, Keith A

    2011-01-01

    Next-gen sequencing technologies have revolutionized data collection in genetic studies and advanced genome biology to novel frontiers. However, to date, next-gen technologies have been used principally for whole genome sequencing and transcriptome sequencing. Yet many questions in population genetics and systematics rely on sequencing specific genes of known function or diversity levels. Here, we describe a targeted amplicon sequencing (TAS) approach capitalizing on next-gen capacity to sequence large numbers of targeted gene regions from a large number of samples. Our TAS approach is easily scalable, simple in execution, neither time- nor labor-intensive, relatively inexpensive, and can be applied to a broad diversity of organisms and/or genes. Our TAS approach includes a bioinformatic application, BarcodeCrucher, to take raw next-gen sequence reads and perform quality control checks and convert the data into FASTA format organized by gene and sample, ready for phylogenetic analyses. We demonstrate our approach by sequencing targeted genes of known phylogenetic utility to estimate a phylogeny for the Pancrustacea. We generated data from 44 taxa using 68 different 10-bp multiplexing identifiers. The overall quality of data produced was robust and was informative for phylogeny estimation. The potential for this method to produce copious amounts of data from a single 454 plate (e.g., 325 taxa for 24 loci) significantly reduces sequencing expenses incurred from traditional Sanger sequencing. We further discuss the advantages and disadvantages of this method, while offering suggestions to enhance the approach.
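
    The demultiplexing step that a tool like BarcodeCrucher performs can be sketched as follows; this is an illustrative stand-in rather than the actual application, and the barcodes and file layout are hypothetical.

    ```python
    # Bin raw reads by their 10-bp multiplexing identifier and write one
    # FASTA file per sample (quality-control checks omitted for brevity).
    barcodes = {"ACGTACGTAC": "sample_01", "TGCATGCATG": "sample_02"}

    def demultiplex(reads):
        """reads: iterable of (read_id, sequence) tuples."""
        bins = {sample: [] for sample in barcodes.values()}
        for read_id, seq in reads:
            tag = seq[:10]              # identifier at the 5' end
            if tag in barcodes:         # reads with unknown tags are dropped
                bins[barcodes[tag]].append((read_id, seq[10:]))
        return bins

    def write_fasta(bins):
        for sample, entries in bins.items():
            with open(f"{sample}.fasta", "w") as fh:
                for read_id, seq in entries:
                    fh.write(f">{read_id}\n{seq}\n")

    write_fasta(demultiplex([("r1", "ACGTACGTACTTGACCA"),
                             ("r2", "TGCATGCATGGGATTC")]))
    ```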

  4. Scalable creation of gold nanostructures on high performance engineering polymeric substrate

    Science.gov (United States)

    Jia, Kun; Wang, Pan; Wei, Shiliang; Huang, Yumin; Liu, Xiaobo

    2017-12-01

    The article reveals a facile protocol for scalable production of gold nanostructures on a high performance engineering thermoplastic substrate made of polyarylene ether nitrile (PEN) for the first time. Firstly, gold thin films with different thicknesses of 2 nm, 4 nm and 6 nm were evaporated on a spin-coated PEN substrate on a glass slide in vacuum. Next, the as-evaporated samples were thermally annealed around the glass transition temperature of the PEN substrate, on which gold nanostructures with island-like morphology were created. Moreover, it was found that the initial gold evaporation thickness and annealing atmosphere played an important role in determining the morphology and plasmonic properties of the formulated Au NPs. Interestingly, we discovered that isotropic Au NPs can be easily fabricated on the freestanding PEN substrate, which was prepared by a cost-effective polymer solution casting method. More specifically, monodispersed Au nanospheres with an average size of ∼60 nm were obtained after annealing a 4 nm gold film-covered PEN casting substrate at 220 °C for 2 h in oxygen. Therefore, the scalable production of Au NPs with controlled morphology on PEN substrate would open the way for development of robust flexible nanosensors and optical devices using high performance engineering polyarylene ethers.

  5. New Complexity Scalable MPEG Encoding Techniques for Mobile Applications

    Directory of Open Access Journals (Sweden)

    Stephan Mietens

    2004-03-01

    Full Text Available Complexity scalability offers the advantage of one-time design of video applications for a large product family, including mobile devices, without the need to redesign the applications on the algorithmic level to meet the requirements of the different products. In this paper, we present complexity scalable MPEG encoding having core modules with modifications for scalability. The interdependencies of the scalable modules and the system performance are evaluated. Experimental results show scalability giving a smooth change in complexity and corresponding video quality. Scalability is basically achieved by varying the number of computed DCT coefficients and the number of evaluated motion vectors, but other modules are designed such that they scale with the previous parameters. In the experiments using the “Stefan” sequence, the elapsed execution time of the scalable encoder, reflecting the computational complexity, can be gradually reduced to roughly 50% of its original execution time. The video quality scales between 20 dB and 48 dB PSNR with unity quantizer setting, and between 21.5 dB and 38.5 dB PSNR for different sequences targeting 1500 kbps. The implemented encoder and the scalability techniques can be successfully applied in mobile systems based on MPEG video compression.
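
    One of the knobs mentioned above, varying the number of computed DCT coefficients, can be sketched as follows. This is a simplification: the true zigzag scan and the encoder's integer arithmetic are omitted, and all names are illustrative.

    ```python
    import numpy as np
    from scipy.fftpack import dct

    def truncated_block_dct(block, k):
        """8x8 block DCT keeping only the k lowest-frequency coefficients
        (ordered by i+j, a zigzag-like scan); smaller k stands in for the
        reduced computation of a complexity-scaled encoder."""
        coeffs = dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')
        order = sorted(((i, j) for i in range(8) for j in range(8)),
                       key=lambda p: p[0] + p[1])
        out = np.zeros_like(coeffs)
        for i, j in order[:k]:
            out[i, j] = coeffs[i, j]
        return out

    block = np.arange(64, dtype=float).reshape(8, 8)
    print(np.count_nonzero(truncated_block_dct(block, 10)))  # at most 10
    ```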

  6. Robust Optimization of the Self- scheduling and Market Involvement for an Electricity Producer

    KAUST Repository

    Lima, Ricardo

    2015-01-01

    This work addresses the optimization under uncertainty of the self-scheduling, forward contracting, and pool involvement of an electricity producer operating a mixed power generation station, which combines thermal, hydro and wind sources, and uses a two-stage adaptive robust optimization approach. In this problem the wind power production and the electricity pool price are considered to be uncertain, and are described by uncertainty convex sets. Two variants of a constraint generation algorithm are proposed, namely a primal and dual version, and they are used to solve two case studies based on two different producers. Their market strategies are investigated for three different scenarios, corresponding to as many instances of electricity price forecasts. The effect of the producers’ approach, whether conservative or more risk prone, is also investigated by solving each instance for multiple values of the so-called budget parameter. It was possible to conclude that this parameter markedly influences the producers’ strategy, in terms of scheduling, profit, forward contracting, and pool involvement. Regarding the computational results, these show that for some instances, the two variants of the algorithms have a similar performance, while for a particular subset of them one variant has a clear superiority.

  7. Robust Optimization of the Self- scheduling and Market Involvement for an Electricity Producer

    KAUST Repository

    Lima, Ricardo

    2015-01-07

    This work addresses the optimization under uncertainty of the self-scheduling, forward contracting, and pool involvement of an electricity producer operating a mixed power generation station, which combines thermal, hydro and wind sources, and uses a two-stage adaptive robust optimization approach. In this problem the wind power production and the electricity pool price are considered to be uncertain, and are described by uncertainty convex sets. Two variants of a constraint generation algorithm are proposed, namely a primal and dual version, and they are used to solve two case studies based on two different producers. Their market strategies are investigated for three different scenarios, corresponding to as many instances of electricity price forecasts. The effect of the producers’ approach, whether conservative or more risk prone, is also investigated by solving each instance for multiple values of the so-called budget parameter. It was possible to conclude that this parameter markedly influences the producers’ strategy, in terms of scheduling, profit, forward contracting, and pool involvement. Regarding the computational results, these show that for some instances, the two variants of the algorithms have a similar performance, while for a particular subset of them one variant has a clear superiority.
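
    The abstracts do not spell out the uncertainty description, so as a hedged illustration only: a standard budget-constrained polyhedral uncertainty set of the Bertsimas-Sim type for, e.g., the pool price takes the form

    ```latex
    \mathcal{U}_{\lambda} = \Big\{ \lambda \in \mathbb{R}^{T} \;:\;
      \lambda_t = \bar{\lambda}_t + \hat{\lambda}_t z_t,\quad
      |z_t| \le 1 \;\; \forall t,\quad
      \sum_{t=1}^{T} |z_t| \le \Gamma \Big\}
    ```

    where \bar{\lambda}_t is the forecast price, \hat{\lambda}_t its maximal deviation, and the budget parameter \Gamma controls conservatism: \Gamma = 0 recovers the nominal problem, while \Gamma = T lets every period take its worst-case value.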

  8. The development of a scalable parallel 3-D CFD algorithm for turbomachinery. M.S. Thesis Final Report

    Science.gov (United States)

    Luke, Edward Allen

    1993-01-01

    Two algorithms capable of computing a transonic 3-D inviscid flow field about rotating machines are considered for parallel implementation. During the study of these algorithms, a significant new method of measuring the performance of parallel algorithms is developed. The theory that supports this new method creates an empirical definition of scalable parallel algorithms that is used to produce quantifiable evidence that a scalable parallel application was developed. The implementation of the parallel application and an automated domain decomposition tool are also discussed.

  9. Learning Optimized Local Difference Binaries for Scalable Augmented Reality on Mobile Devices.

    Science.gov (United States)

    Xin Yang; Kwang-Ting Cheng

    2014-06-01

    The efficiency, robustness and distinctiveness of a feature descriptor are critical to the user experience and scalability of a mobile augmented reality (AR) system. However, existing descriptors are either too computationally expensive to achieve real-time performance on a mobile device such as a smartphone or tablet, or not sufficiently robust and distinctive to identify correct matches from a large database. As a result, current mobile AR systems still only have limited capabilities, which greatly restrict their deployment in practice. In this paper, we propose a highly efficient, robust and distinctive binary descriptor, called Learning-based Local Difference Binary (LLDB). LLDB directly computes a binary string for an image patch using simple intensity and gradient difference tests on pairwise grid cells within the patch. To select an optimized set of grid cell pairs, we densely sample grid cells from an image patch and then leverage a modified AdaBoost algorithm to automatically extract a small set of critical ones with the goal of maximizing the Hamming distance between mismatches while minimizing it between matches. Experimental results demonstrate that LLDB is extremely fast to compute and to match against a large database due to its high robustness and distinctiveness. Compared to the state-of-the-art binary descriptors, primarily designed for speed, LLDB has similar efficiency for descriptor construction, while achieving a greater accuracy and faster matching speed when matching over a large database with 2.3M descriptors on mobile devices.
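
    A simplified version of the grid-cell tests described above is sketched below; the AdaBoost-based pair selection is omitted, and the grid size and example pairs are illustrative.

    ```python
    import numpy as np

    def cell_stats(patch, grid=4):
        """Mean intensity and mean x/y gradients for each grid cell."""
        gy, gx = np.gradient(patch.astype(float))
        h, w = patch.shape[0]//grid, patch.shape[1]//grid
        stats = []
        for i in range(grid):
            for j in range(grid):
                win = (slice(i*h, (i + 1)*h), slice(j*w, (j + 1)*w))
                stats.append([patch[win].mean(), gx[win].mean(), gy[win].mean()])
        return np.array(stats)

    def descriptor(patch, pairs):
        s = cell_stats(patch)
        # 3 binary tests (intensity, x-gradient, y-gradient) per cell pair.
        return np.concatenate([(s[a] > s[b]).astype(np.uint8) for a, b in pairs])

    def hamming(d1, d2):  # matching cost between two binary descriptors
        return int((d1 != d2).sum())

    rng = np.random.default_rng(0)
    patch = rng.random((32, 32))
    pairs = [(0, 5), (3, 12), (7, 9)]  # in LLDB these are learned offline
    print(descriptor(patch, pairs))
    ```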

  10. Validation of the manufacturing process used to produce long-acting recombinant factor IX Fc fusion protein.

    Science.gov (United States)

    McCue, J; Osborne, D; Dumont, J; Peters, R; Mei, B; Pierce, G F; Kobayashi, K; Euwart, D

    2014-07-01

    Recombinant factor IX Fc (rFIXFc) fusion protein is the first of a new class of bioengineered long-acting factors approved for the treatment and prevention of bleeding episodes in haemophilia B. The aim of this work was to describe the manufacturing process for rFIXFc, to assess product quality and to evaluate the capacity of the process to remove impurities and viruses. This manufacturing process utilized a transferable and scalable platform approach established for therapeutic antibody manufacturing and adapted for production of the rFIXFc molecule. rFIXFc was produced using a process free of human- and animal-derived raw materials and a host cell line derived from human embryonic kidney (HEK) 293H cells. The process employed multi-step purification and viral clearance processing, including use of a protein A affinity capture chromatography step, which binds to the Fc portion of the rFIXFc molecule with high affinity and specificity, and a 15 nm pore size virus removal nanofilter. Process validation studies were performed to evaluate identity, purity, activity and safety. The manufacturing process produced rFIXFc with consistent product quality and high purity. Impurity clearance validation studies demonstrated robust and reproducible removal of process-related impurities and adventitious viruses. The rFIXFc manufacturing process produces a highly pure product, free of non-human glycan structures. Validation studies demonstrate that this product is produced with consistent quality and purity. In addition, the scalability and transferability of this process are key attributes to ensure consistent and continuous supply of rFIXFc. © 2014 The Authors. Haemophilia Published by John Wiley & Sons Ltd.

  11. Scalable graphene production: perspectives and challenges of plasma applications

    Science.gov (United States)

    Levchenko, Igor; Ostrikov, Kostya (Ken); Zheng, Jie; Li, Xingguo; Keidar, Michael; B. K. Teo, Kenneth

    2016-05-01

    Graphene, a newly discovered and extensively investigated material, has many unique and extraordinary properties which promise major technological advances in fields ranging from electronics to mechanical engineering and food production. Unfortunately, complex techniques and high production costs hinder commonplace applications. Scaling of existing graphene production techniques to the industrial level without compromising its properties is a current challenge. This article focuses on the perspectives and challenges of scalability, equipment, and technological perspectives of the plasma-based techniques which offer many unique possibilities for the synthesis of graphene and graphene-containing products. The plasma-based processes are amenable for scaling and could also be useful to enhance the controllability of the conventional chemical vapour deposition method and some other techniques, and to ensure a good quality of the produced graphene. We examine the unique features of the plasma-enhanced graphene production approaches, including the techniques based on inductively-coupled and arc discharges, in the context of their potential scaling to mass production following the generic scaling approaches applicable to the existing processes and systems. This work analyses a large amount of the recent literature on graphene production by various techniques and summarizes the results in a tabular form to provide a simple and convenient comparison of several available techniques. Our analysis reveals a significant potential of scalability for plasma-based technologies, based on the scaling-related process characteristics. Among other processes, a greater yield of 1 g × h⁻¹ m⁻² was reached for the arc discharge technology, whereas the other plasma-based techniques show process yields comparable to the neutral-gas based methods. Selected plasma-based techniques show lower energy consumption than in thermal CVD processes, and the ability to produce graphene flakes of various

  12. Scalable graphene production: perspectives and challenges of plasma applications.

    Science.gov (United States)

    Levchenko, Igor; Ostrikov, Kostya Ken; Zheng, Jie; Li, Xingguo; Keidar, Michael; B K Teo, Kenneth

    2016-05-19

    Graphene, a newly discovered and extensively investigated material, has many unique and extraordinary properties which promise major technological advances in fields ranging from electronics to mechanical engineering and food production. Unfortunately, complex techniques and high production costs hinder commonplace applications. Scaling of existing graphene production techniques to the industrial level without compromising its properties is a current challenge. This article focuses on the perspectives and challenges of scalability, equipment, and technological perspectives of the plasma-based techniques which offer many unique possibilities for the synthesis of graphene and graphene-containing products. The plasma-based processes are amenable for scaling and could also be useful to enhance the controllability of the conventional chemical vapour deposition method and some other techniques, and to ensure a good quality of the produced graphene. We examine the unique features of the plasma-enhanced graphene production approaches, including the techniques based on inductively-coupled and arc discharges, in the context of their potential scaling to mass production following the generic scaling approaches applicable to the existing processes and systems. This work analyses a large amount of the recent literature on graphene production by various techniques and summarizes the results in a tabular form to provide a simple and convenient comparison of several available techniques. Our analysis reveals a significant potential of scalability for plasma-based technologies, based on the scaling-related process characteristics. Among other processes, a greater yield of 1 g × h⁻¹ m⁻² was reached for the arc discharge technology, whereas the other plasma-based techniques show process yields comparable to the neutral-gas based methods. Selected plasma-based techniques show lower energy consumption than in thermal CVD processes, and the ability to produce graphene flakes of

  13. Non-damaging and scalable carbon nanotube synthesis on carbon fibres

    OpenAIRE

    De Luca, H; Anthony, DB; Qian, H; Greenhalgh, E; Bismarck, A; Shaffer, M

    2016-01-01

    The growth of carbon nanotubes (CNTs) on carbon fibres (CFs) to produce a hierarchical fibre with two differing reinforcement length scales, in this instance nanometre and micrometre respectively, is considered a route to improve current state-of-the-art fibre reinforced composites [1]. The scalable production of carbon nanotube-grafted-carbon fibres (CNT-g-CFs) has been limited due to high temperatures, the use of flammable gases and the requirement of inert conditions for CNT synthesis, whi...

  14. A scalable variational inequality approach for flow through porous media models with pressure-dependent viscosity

    Science.gov (United States)

    Mapakshi, N. K.; Chang, J.; Nakshatrala, K. B.

    2018-04-01

    Mathematical models for flow through porous media typically enjoy the so-called maximum principles, which place bounds on the pressure field. It is highly desirable to preserve these bounds on the pressure field in predictive numerical simulations, that is, one needs to satisfy discrete maximum principles (DMP). Unfortunately, many of the existing formulations for flow through porous media models do not satisfy DMP. This paper presents a robust, scalable numerical formulation based on variational inequalities (VI), to model non-linear flows through heterogeneous, anisotropic porous media without violating DMP. VI is an optimization technique that places bounds on the numerical solutions of partial differential equations. To crystallize the ideas, a modification of the Darcy equations that accounts for pressure-dependent viscosity will be discretized using the lowest-order Raviart-Thomas (RT0) and Variational Multi-scale (VMS) finite element formulations. It will be shown that these formulations violate DMP, and, in fact, these violations increase with an increase in anisotropy. It will be shown that the proposed VI-based formulation provides a viable route to enforce DMP. Moreover, it will be shown that the proposed formulation is scalable, and can work with any numerical discretization and weak form. A series of numerical benchmark problems are solved to demonstrate the effects of heterogeneity, anisotropy and non-linearity on DMP violations under the two chosen formulations (RT0 and VMS), and that of non-linearity on solver convergence for the proposed VI-based formulation. Parallel scalability on modern computational platforms will be illustrated through strong-scaling studies, which will prove the efficiency of the proposed formulation in a parallel setting. Algorithmic scalability as the problem size is scaled up will be demonstrated through novel static-scaling studies. The performed static-scaling studies can serve as a guide for users to be able to select
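
    To make the VI idea concrete: for a symmetric discrete system, enforcing the maximum-principle bounds amounts to minimizing the discrete energy subject to box constraints. Below is a 1-D toy version with an illustrative stiffness matrix, source, and bounds; it is not the paper's RT0 or VMS discretization.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    n = 20
    h = 1.0/n
    A = (2*np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1))/h
    b = h*np.ones(n - 1)  # unit source term

    # Bound-constrained energy minimization: the discrete solution is
    # forced to respect the (here artificial) bounds 0 <= p <= 0.1.
    energy = lambda p: 0.5*p @ A @ p - b @ p
    res = minimize(energy, np.zeros(n - 1), jac=lambda p: A @ p - b,
                   method="L-BFGS-B", bounds=[(0.0, 0.1)]*(n - 1))
    print(res.x.min(), res.x.max())  # max is capped at 0.1 by the bound
    ```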

  15. A Scalable Architecture of a Structured LDPC Decoder

    Science.gov (United States)

    Lee, Jason Kwok-San; Lee, Benjamin; Thorpe, Jeremy; Andrews, Kenneth; Dolinar, Sam; Hamkins, Jon

    2004-01-01

    We present a scalable decoding architecture for a certain class of structured LDPC codes. The codes are designed using a small (n,r) protograph that is replicated Z times to produce a decoding graph for a (Z x n, Z x r) code. Using this architecture, we have implemented a decoder for a (4096,2048) LDPC code on a Xilinx Virtex-II 2000 FPGA, and achieved decoding speeds of 31 Mbps with 10 fixed iterations. The implemented message-passing algorithm uses an optimized 3-bit non-uniform quantizer that operates with 0.2dB implementation loss relative to a floating point decoder.
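
    The protograph replication can be written down directly: each nonzero entry of the small base matrix becomes a Z x Z circulant permutation block, and each zero a Z x Z zero block. The base matrix and shift values below are illustrative, not the implemented (4096,2048) code.

    ```python
    import numpy as np

    def lift(base, shifts, Z):
        """Expand an (r x n) protograph into a (Z*r x Z*n) parity-check
        matrix using circulant permutations with the given shifts."""
        r, n = base.shape
        H = np.zeros((Z*r, Z*n), dtype=int)
        I = np.eye(Z, dtype=int)
        for i in range(r):
            for j in range(n):
                if base[i, j]:
                    H[i*Z:(i + 1)*Z, j*Z:(j + 1)*Z] = np.roll(I, shifts[i, j], axis=1)
        return H

    base = np.array([[1, 1, 0], [0, 1, 1]])    # toy (2, 3) protograph
    shifts = np.array([[0, 2, 0], [0, 1, 3]])  # circulant shift per entry
    print(lift(base, shifts, Z=4).shape)       # (8, 12)
    ```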

  16. From Field Notes to Data Portal - A Scalable Data QA/QC Framework for Tower Networks: Progress and Preliminary Results

    Science.gov (United States)

    Sturtevant, C.; Hackley, S.; Lee, R.; Holling, G.; Bonarrigo, S.

    2017-12-01

    Quality assurance and control (QA/QC) is one of the most important yet challenging aspects of producing research-quality data. Data quality issues are multi-faceted, including sensor malfunctions, unmet theoretical assumptions, and measurement interference from humans or the natural environment. Tower networks such as Ameriflux, ICOS, and NEON continue to grow in size and sophistication, yet tools for robust, efficient, scalable QA/QC have lagged. Quality control remains a largely manual process heavily relying on visual inspection of data. In addition, notes of measurement interference are often recorded on paper without an explicit pathway to data flagging. As such, an increase in network size requires a near-proportional increase in personnel devoted to QA/QC, quickly stressing the human resources available. We present a scalable QA/QC framework in development for NEON that combines the efficiency and standardization of automated checks with the power and flexibility of human review. This framework includes fast-response monitoring of sensor health, a mobile application for electronically recording maintenance activities, traditional point-based automated quality flagging, and continuous monitoring of quality outcomes and longer-term holistic evaluations. This framework maintains the traceability of quality information along the entirety of the data generation pipeline, and explicitly links field reports of measurement interference to quality flagging. Preliminary results show that data quality can be effectively monitored and managed for a multitude of sites with a small group of QA/QC staff. Several components of this framework are open-source, including a R-Shiny application for efficiently monitoring, synthesizing, and investigating data quality issues.
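
    A minimal example of the point-based automated checks that sit beneath the human-review layers described above (all thresholds are hypothetical):

    ```python
    import numpy as np

    def qc_flags(values, lo, hi, max_step):
        """Flag points outside a plausible range or with implausible jumps;
        a real framework adds sensor-health and holistic checks on top."""
        v = np.asarray(values, dtype=float)
        bad_range = (v < lo) | (v > hi)
        bad_step = np.zeros_like(bad_range)
        bad_step[1:] = np.abs(np.diff(v)) > max_step
        return bad_range | bad_step  # True = point fails automated QC

    temps = [21.3, 21.4, 35.9, 21.5, -40.0, 21.6]  # hypothetical sensor trace
    print(qc_flags(temps, lo=-30.0, hi=50.0, max_step=5.0))
    # [False False  True  True  True  True]
    ```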

  17. Photonic Architecture for Scalable Quantum Information Processing in Diamond

    Directory of Open Access Journals (Sweden)

    Kae Nemoto

    2014-08-01

    Full Text Available Physics and information are intimately connected, and the ultimate information processing devices will be those that harness the principles of quantum mechanics. Many physical systems have been identified as candidates for quantum information processing, but none of them are immune from errors. The challenge remains to find a path from the experiments of today to a reliable and scalable quantum computer. Here, we develop an architecture based on a simple module comprising an optical cavity containing a single negatively charged nitrogen vacancy center in diamond. Modules are connected by photons propagating in a fiber-optical network and collectively used to generate a topological cluster state, a robust substrate for quantum information processing. In principle, all processes in the architecture can be deterministic, but current limitations lead to processes that are probabilistic but heralded. We find that the architecture enables large-scale quantum information processing with existing technology.

  18. The reliability paradox: Why robust cognitive tasks do not produce reliable individual differences.

    Science.gov (United States)

    Hedge, Craig; Powell, Georgina; Sumner, Petroc

    2017-07-19

    Individual differences in cognitive paradigms are increasingly employed to relate cognition to brain structure, chemistry, and function. However, such efforts are often unfruitful, even with the most well established tasks. Here we offer an explanation for failures in the application of robust cognitive paradigms to the study of individual differences. Experimental effects become well established - and thus those tasks become popular - when between-subject variability is low. However, low between-subject variability causes low reliability for individual differences, destroying replicable correlations with other factors and potentially undermining published conclusions drawn from correlational relationships. Though these statistical issues have a long history in psychology, they are widely overlooked in cognitive psychology and neuroscience today. In three studies, we assessed test-retest reliability of seven classic tasks: Eriksen Flanker, Stroop, stop-signal, go/no-go, Posner cueing, Navon, and Spatial-Numerical Association of Response Code (SNARC). Reliabilities ranged from 0 to .82, being surprisingly low for most tasks given their common use. As we predicted, this emerged from low variance between individuals rather than high measurement variance. In other words, the very reason such tasks produce robust and easily replicable experimental effects - low between-participant variability - makes their use as correlational tools problematic. We demonstrate that taking such reliability estimates into account has the potential to qualitatively change theoretical conclusions. The implications of our findings are that well-established approaches in experimental psychology and neuropsychology may not directly translate to the study of individual differences in brain structure, chemistry, and function, and alternative metrics may be required.
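
    The statistical core of the argument is that test-retest reliability is the ratio of between-subject variance to total variance, so a large and easily replicated group effect can coexist with near-zero reliability. A toy simulation (all numbers illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    true_effect = 50 + rng.normal(0, 5, n)  # robust effect, small spread
    noise_sd = 20                           # trial-level measurement noise

    test = true_effect + rng.normal(0, noise_sd, n)
    retest = true_effect + rng.normal(0, noise_sd, n)

    # Group-level effect is "robust": mean is far from zero in SE units.
    print(test.mean()/(test.std(ddof=1)/np.sqrt(n)))  # large t-like value

    # Yet test-retest reliability is poor: the expected correlation is
    # 5**2/(5**2 + 20**2) = 0.059 (the printed estimate is noisy).
    print(np.corrcoef(test, retest)[0, 1])
    ```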

  19. Application of a scalable plant transient gene expression platform for malaria vaccine development

    Directory of Open Access Journals (Sweden)

    Holger Spiegel

    2015-12-01

    Full Text Available Despite decades of intensive research efforts, there is currently no vaccine that provides sustained sterile immunity against malaria. In this context, a large number of targets from the different stages of the Plasmodium falciparum life cycle have been evaluated as vaccine candidates. None of these candidates has fulfilled expectations, and as long as we lack a single target that induces strain-transcending protective immune responses, combining key antigens from different life cycle stages seems to be the most promising route towards the development of efficacious malaria vaccines. After the identification of potential targets using approaches such as omics-based technology and reverse immunology, the rapid expression, purification and characterization of these proteins, as well as the generation and analysis of fusion constructs combining different promising antigens or antigen domains before committing to expensive and time-consuming clinical development, represents one of the bottlenecks in the vaccine development pipeline. The production of recombinant proteins by transient gene expression in plants is a robust and versatile alternative to cell-based microbial and eukaryotic production platforms. The transfection of plant tissues and/or whole plants using Agrobacterium tumefaciens offers a low technical entry barrier, low costs and a high degree of flexibility embedded within a rapid and scalable workflow. Recombinant proteins can easily be targeted to different subcellular compartments according to their physicochemical requirements, including post-translational modifications, to ensure optimal yields of high quality product, and to support simple and economical downstream processing. Here we demonstrate the use of a plant transient expression platform based on transfection with A. tumefaciens as an essential component of a malaria vaccine development workflow involving screens for expression, solubility and stability using fluorescent fusion

  20. Rapid and scalable plant-based production of a cholera toxin B subunit variant to aid in mass vaccination against cholera outbreaks.

    Directory of Open Access Journals (Sweden)

    Krystal Teasley Hamorsky

    Full Text Available INTRODUCTION: Cholera toxin B subunit (CTB) is a component of an internationally licensed oral cholera vaccine. The protein induces neutralizing antibodies against the holotoxin, the virulence factor responsible for severe diarrhea. A field clinical trial has suggested that the addition of CTB to killed whole-cell bacteria provides superior short-term protection to whole-cell-only vaccines; however, challenges in CTB biomanufacturing (i.e., cost and scale) hamper its implementation in mass vaccination in developing countries. To provide a potential solution to this issue, we developed a rapid, robust, and scalable CTB production system in plants. METHODOLOGY/PRINCIPAL FINDINGS: In a preliminary study of expressing original CTB in transgenic Nicotiana benthamiana, the protein was N-glycosylated with plant-specific glycans. Thus, an aglycosylated CTB variant (pCTB) was created and overexpressed via a plant virus vector. Upon additional transgene engineering for retention in the endoplasmic reticulum and optimization of a secretory signal, the yield of pCTB was dramatically improved, reaching >1 g per kg of fresh leaf material. The protein was efficiently purified by simple two-step chromatography. The GM1-ganglioside binding capacity and conformational stability of pCTB were virtually identical to the bacteria-derived original B subunit, as demonstrated in competitive enzyme-linked immunosorbent assay, surface plasmon resonance, and fluorescence-based thermal shift assay. Mammalian cell surface-binding was corroborated by immunofluorescence and flow cytometry. pCTB exhibited strong oral immunogenicity in mice, inducing significant levels of CTB-specific intestinal antibodies that persisted over 6 months. Moreover, these antibodies effectively neutralized the cholera holotoxin in vitro. CONCLUSIONS/SIGNIFICANCE: Taken together, these results demonstrated that pCTB has robust producibility in Nicotiana plants and retains most, if not all, of major

  1. Towards a Scalable Fully-Implicit Fully-coupled Resistive MHD Formulation with Stabilized FE Methods

    Energy Technology Data Exchange (ETDEWEB)

    Shadid, J N; Pawlowski, R P; Banks, J W; Chacon, L; Lin, P T; Tuminaro, R S

    2009-06-03

    This paper presents an initial study that is intended to explore the development of a scalable fully-implicit stabilized unstructured finite element (FE) capability for low-Mach-number resistive MHD. The discussion considers the development of the stabilized FE formulation and the underlying fully-coupled preconditioned Newton-Krylov nonlinear iterative solver. To enable robust, scalable and efficient solution of the large-scale sparse linear systems generated by the Newton linearization, fully-coupled algebraic multilevel preconditioners are employed. Verification results demonstrate the expected order-of-accuracy for the stabilized FE discretization of a 2D vector potential form for the steady and transient solution of the resistive MHD system. In addition, this study puts forth a set of challenging prototype problems that include the solution of an MHD Faraday conduction pump, a hydromagnetic Rayleigh-Bénard linear stability calculation, and a magnetic island coalescence problem. Initial results that explore the scaling of the solution methods are presented on up to 4096 processors for problems with up to 64M unknowns on a Cray XT3/4. Additionally, a large-scale proof-of-capability calculation for 1 billion unknowns for the MHD Faraday pump problem on 24,000 cores is presented.

  2. The Concept of Business Model Scalability

    DEFF Research Database (Denmark)

    Lund, Morten; Nielsen, Christian

    2018-01-01

    Purpose: The purpose of the article is to define what scalable business models are. Central to the contemporary understanding of business models is the value proposition towards the customer and the hypotheses generated about delivering value to the customer, which become a good foundation for a long-term profitable business. However, the main message of this article is that while providing a good value proposition may help the firm ‘get by’, the really successful businesses of today are those able to reach the sweet-spot of business model scalability. Design/Methodology/Approach: The article is based on a five-year longitudinal action research project of over 90 companies that participated in the International Center for Innovation project aimed at building 10 global network-based business models. Findings: This article introduces and discusses the term scalability from a company-level perspective......

  3. Oracle database performance and scalability a quantitative approach

    CERN Document Server

    Liu, Henry H

    2011-01-01

    A data-driven, fact-based, quantitative text on Oracle performance and scalability With database concepts and theories clearly explained in Oracle's context, readers quickly learn how to fully leverage Oracle's performance and scalability capabilities at every stage of designing and developing an Oracle-based enterprise application. The book is based on the author's more than ten years of experience working with Oracle, and is filled with dependable, tested, and proven performance optimization techniques. Oracle Database Performance and Scalability is divided into four parts that enable reader

  4. PKI Scalability Issues

    OpenAIRE

    Slagell, Adam J; Bonilla, Rafael

    2004-01-01

    This report surveys different PKI technologies such as PKIX and SPKI and the issues of PKI that affect scalability. Much focus is spent on certificate revocation methodologies and status verification systems such as CRLs, Delta-CRLs, CRS, Certificate Revocation Trees, Windowed Certificate Revocation, OCSP, SCVP and DVCS.

  5. On Scalability and Replicability of Smart Grid Projects—A Case Study

    Directory of Open Access Journals (Sweden)

    Lukas Sigrist

    2016-03-01

    Full Text Available This paper studies the scalability and replicability of smart grid projects. Currently, most smart grid projects are still in the R&D or demonstration phases. The full roll-out of the tested solutions requires a suitable degree of scalability and replicability to prevent project demonstrators from remaining local experimental exercises. Scalability and replicability are the preliminary requisites to perform scaling-up and replication successfully; therefore, scalability and replicability allow for, or at least reduce barriers to, the growth and reuse of the results of project demonstrators. The paper proposes factors that influence and condition a project’s scalability and replicability. These factors involve technical, economic, regulatory and stakeholder acceptance related aspects, and they describe requirements for scalability and replicability. In order to assess and evaluate the identified scalability and replicability factors, data has been collected from European and national smart grid projects by means of a survey, reflecting the projects’ views and results. The evaluation of the factors allows quantifying the status quo of on-going projects with respect to scalability and replicability, i.e., it provides feedback on the extent to which projects take these factors into account and on whether the projects’ results and solutions are actually scalable and replicable.

  6. Scalable-to-lossless transform domain distributed video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Ukhanova, Ann; Veselov, Anton

    2010-01-01

    Distributed video coding (DVC) is a novel approach providing new features such as low-complexity encoding, mainly by exploiting the source statistics at the decoder based on the availability of decoder side information. In this paper, scalable-to-lossless DVC is presented based on extending a lossy Transform Domain Wyner-Ziv (TDWZ) codec...... The codec provides frame-by-frame encoding. Comparing the lossless coding efficiency, the proposed scalable-to-lossless TDWZ video codec can save up to 5%-13% of the bits compared to JPEG-LS and H.264 Intra-frame lossless coding, while doing so as a scalable-to-lossless coding scheme.

  7. An extended systematic mapping study about the scalability of i* Models

    Directory of Open Access Journals (Sweden)

    Paulo Lima

    2016-12-01

    Full Text Available i* models have been used for requirements specification in many domains, such as healthcare, telecommunication, and air traffic control. Managing the scalability and the complexity of such models is an important challenge in Requirements Engineering (RE). Scalability is also one of the most intractable issues in the design of visual notations in general: a well-known problem with visual representations is that they do not scale well. This issue has led us to investigate scalability in i* models and its variants by means of a systematic mapping study. This paper is an extended version of a previous paper on the scalability of i*, now including papers indicated by specialists. Moreover, we also discuss the challenges and open issues regarding the scalability of i* models and its variants. A total of 126 papers were analyzed in order to understand how the RE community perceives scalability and which proposals have considered this topic. We found that scalability issues are indeed perceived as relevant and that further work is still required, even though many potential solutions have already been proposed. This study can be a starting point for researchers aiming to further advance the treatment of scalability in i* models.

  8. Scalable Transactions for Web Applications in the Cloud

    NARCIS (Netherlands)

    Zhou, W.; Pierre, G.E.O.; Chi, C.-H.

    2009-01-01

    Cloud Computing platforms provide scalability and high availability properties for web applications but they sacrifice data consistency at the same time. However, many applications cannot afford any data inconsistency. We present a scalable transaction manager for NoSQL cloud database services to

  9. Requirements for Scalable Access Control and Security Management Architectures

    National Research Council Canada - National Science Library

    Keromytis, Angelos D; Smith, Jonathan M

    2005-01-01

    Maximizing local autonomy has led to a scalable Internet. Scalability and the capacity for distributed control have unfortunately not extended well to resource access control policies and mechanisms...

  10. A scalable geometric multigrid solver for nonsymmetric elliptic systems with application to variable-density flows

    Science.gov (United States)

    Esmaily, M.; Jofre, L.; Mani, A.; Iaccarino, G.

    2018-03-01

    A geometric multigrid algorithm is introduced for solving nonsymmetric linear systems resulting from the discretization of the variable density Navier-Stokes equations on nonuniform structured rectilinear grids and high-Reynolds number flows. The restriction operation is defined such that the resulting system on the coarser grids is symmetric, thereby allowing for the use of efficient smoother algorithms. To achieve an optimal rate of convergence, the sequence of interpolation and restriction operations are determined through a dynamic procedure. A parallel partitioning strategy is introduced to minimize communication while maintaining the load balance between all processors. To test the proposed algorithm, we consider two cases: 1) homogeneous isotropic turbulence discretized on uniform grids and 2) turbulent duct flow discretized on stretched grids. Testing the algorithm on systems with up to a billion unknowns shows that the cost varies linearly with the number of unknowns. This O(N) behavior confirms the robustness of the proposed multigrid method regarding ill-conditioning of large systems characteristic of multiscale high-Reynolds number turbulent flows. The robustness of our method to density variations is established by considering cases where density varies sharply in space by a factor of up to 10⁴, showing its applicability to two-phase flow problems. Strong and weak scalability studies are carried out, employing up to 30,000 processors, to examine the parallel performance of our implementation. Excellent scalability of our solver is shown for a granularity as low as 10⁴ to 10⁵ unknowns per processor. At its tested peak throughput, it solves approximately 4 billion unknowns per second employing over 16,000 processors with a parallel efficiency higher than 50%.
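
    The multigrid idea behind this record (smooth, restrict the residual to a coarser grid, correct, prolong back) can be sketched with a minimal recursive V-cycle for a 1D Poisson problem. This is a generic illustration of the method class, not the paper's nonsymmetric variable-density solver or its dynamic interpolation/restriction selection.

```python
import numpy as np

def jacobi(u, f, h, iters=3, omega=2/3):
    # Weighted-Jacobi smoother for -u'' = f with zero Dirichlet BCs
    # on a uniform grid of spacing h.
    for _ in range(iters):
        u[1:-1] += omega * 0.5 * (u[:-2] + u[2:] - 2*u[1:-1] + h*h*f[1:-1])
    return u

def vcycle(u, f, h):
    n = u.size - 1
    u = jacobi(u, f, h)                                    # pre-smooth
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2*u[1:-1] - u[:-2] - u[2:]) / (h*h)
    rc = r[::2].copy()                                     # restrict (injection)
    if n <= 4:
        ec = jacobi(np.zeros_like(rc), rc, 2*h, iters=50)  # coarsest "solve"
    else:
        ec = vcycle(np.zeros_like(rc), rc, 2*h)            # recurse
    # prolong the coarse correction back (linear interpolation)
    u += np.interp(np.arange(n+1), np.arange(0, n+1, 2), ec)
    return jacobi(u, f, h)                                 # post-smooth

n, h = 64, 1.0/64
u, f = np.zeros(n+1), np.ones(n+1)
for _ in range(10):                                        # a few V-cycles
    u = vcycle(u, f, h)
```

    Each V-cycle reduces the error by a grid-independent factor, which is the source of the O(N) cost scaling the record reports.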

  11. Scalable cloud without dedicated storage

    Science.gov (United States)

    Batkovich, D. V.; Kompaniets, M. V.; Zarochentsev, A. K.

    2015-05-01

    We present a prototype of a scalable computing cloud. It is intended to be deployed on the basis of a cluster without the separate dedicated storage. The dedicated storage is replaced by the distributed software storage. In addition, all cluster nodes are used both as computing nodes and as storage nodes. This solution increases utilization of the cluster resources as well as improves fault tolerance and performance of the distributed storage. Another advantage of this solution is high scalability with a relatively low initial and maintenance cost. The solution is built on the basis of the open source components like OpenStack, CEPH, etc.

  12. Energy Efficiency and Scalability of Metallic Nanoparticle Production Using Arc/Spark Discharge

    Directory of Open Access Journals (Sweden)

    Martin Slotte

    2017-10-01

    Full Text Available The increased global demand for metallic nanoparticles for an ever-growing number of applications has given rise to a need for larger-scale and more efficient nanoparticle (NP) production processes. In this paper one such process is evaluated from the viewpoints of scalability and energy efficiency. Multiple setups of different scale of an arc/spark process were evaluated for energy efficiency and scalability using exergy analysis, heat loss evaluation and life cycle impact assessment, based on data collected from EU FP7 project partners. The energy efficiency of the process is quite low, with, e.g., a specific electricity consumption (SEC) of 180 kWh/kg for producing ~80 nm copper NPs, while the thermodynamic minimum energy need is 0.03 kWh/kg. This is due to the thermal energy use characteristics of the system. During scale-up of the process the SEC remained similar to that of smaller setups. Loss of NP mass in the tubing of larger setups gives a lower material yield. The variation in material yield has a significant impact on the life cycle impact of the produced NPs in both the Human Health and Ecosystem Quality categories, while the impact is smaller in the Global Warming and Resource Depletion categories.
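
    The quoted figures make the efficiency gap explicit; dividing the thermodynamic minimum by the measured SEC for ~80 nm copper NPs gives

```latex
\eta \;=\; \frac{E_{\min}}{E_{\mathrm{SEC}}}
     \;=\; \frac{0.03\ \mathrm{kWh\,kg^{-1}}}{180\ \mathrm{kWh\,kg^{-1}}}
     \;\approx\; 1.7\times10^{-4} \;\approx\; 0.02\%
```

    so only about 0.02% of the input electricity is thermodynamically necessary, consistent with the record's observation that thermal energy use dominates.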

  13. Scalable fabrication of strongly textured organic semiconductor micropatterns by capillary force lithography.

    Science.gov (United States)

    Jo, Pil Sung; Vailionis, Arturas; Park, Young Min; Salleo, Alberto

    2012-06-26

    Strongly textured organic semiconductor micropatterns made of the small molecule dioctylbenzothienobenzothiophene (C(8)-BTBT) are fabricated by using a method based on capillary force lithography (CFL). This technique provides the C(8)-BTBT solution with nucleation sites for directional growth, and can be used as a scalable way to produce high quality crystalline arrays in desired regions of a substrate for OFET applications. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Scalable Production of Si Nanoparticles Directly from Low Grade Sources for Lithium-Ion Battery Anode.

    Science.gov (United States)

    Zhu, Bin; Jin, Yan; Tan, Yingling; Zong, Linqi; Hu, Yue; Chen, Lei; Chen, Yanbin; Zhang, Qiao; Zhu, Jia

    2015-09-09

    Silicon, one of the most promising candidates as a lithium-ion battery anode, has attracted much attention due to its high theoretical capacity, abundant existence, and mature infrastructure. Recently, Si nanostructure-based lithium-ion battery anodes, with sophisticated structure designs and process development, have made significant progress. However, low-cost and scalable processes to produce these Si nanostructures have remained a challenge, which limits their widespread application. Herein, we demonstrate that Si nanoparticles with controlled size can be massively produced directly from low grade Si sources through a scalable high energy mechanical milling process. In addition, we systematically studied Si nanoparticles produced from two major low grade Si sources, metallurgical silicon (∼99 wt % Si, $1/kg) and ferrosilicon (∼83 wt % Si, $0.6/kg). It is found that nanoparticles produced from ferrosilicon sources contain FeSi2, which can serve as a buffer layer to alleviate the mechanical fractures of volume expansion, whereas nanoparticles from metallurgical Si sources have higher capacity and better kinetic properties because of higher purity and better electronic transport properties. Ferrosilicon nanoparticles and metallurgical Si nanoparticles demonstrate over 100 stable deep cycles after carbon coating, with reversible capacities of 1360 mAh g(-1) and 1205 mAh g(-1), respectively. Therefore, our approach provides a new strategy for cost-effective, energy-efficient, large scale synthesis of functional Si electrode materials.

  15. Enhancing Scalability of Sparse Direct Methods

    International Nuclear Information System (INIS)

    Li, Xiaoye S.; Demmel, James; Grigori, Laura; Gu, Ming; Xia, Jianlin; Jardin, Steve; Sovinec, Carl; Lee, Lie-Quan

    2007-01-01

    TOPS is providing high-performance, scalable sparse direct solvers, which have had significant impacts on the SciDAC applications, including fusion simulation (CEMM), accelerator modeling (COMPASS), as well as many other mission-critical applications in DOE and elsewhere. Our recent developments have been focusing on new techniques to overcome the scalability bottleneck of direct methods, in both time and memory. These include parallelizing the symbolic analysis phase and developing linear-complexity sparse factorization methods. The new techniques will make sparse direct methods more widely usable in large 3D simulations on highly-parallel petascale computers

  16. Modular Universal Scalable Ion-trap Quantum Computer

    Science.gov (United States)

    2016-06-02

    The main goal of the original MUSIQC proposal was to construct and demonstrate a modular and universally-expandable ion-trap quantum computer. Final Report: Modular Universal Scalable Ion-trap Quantum Computer, 1-Aug-2010 to 31-Jan-2016; distribution unlimited. Keywords: ion trap quantum computation, scalable modular architectures.

  17. Scalable and Media Aware Adaptive Video Streaming over Wireless Networks

    Directory of Open Access Journals (Sweden)

    Béatrice Pesquet-Popescu

    2008-07-01

    Full Text Available This paper proposes an advanced video streaming system based on scalable video coding in order to optimize resource utilization in wireless networks with retransmission mechanisms at radio protocol level. The key component of this system is a packet scheduling algorithm which operates on the different substreams of a main scalable video stream and which is implemented in a so-called media aware network element. The transport channel considered is a dedicated channel subject to parameter (bitrate, loss rate) variations over the long run. Moreover, we propose a combined scalability approach in which common temporal and SNR scalability features can be used jointly with a partitioning of the image into regions of interest. Simulation results show that our approach provides substantial quality gain compared to classical packet transmission methods and they demonstrate how ROI coding combined with SNR scalability further improves the visual quality.
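
    The scheduling component can be sketched generically: given a per-interval byte budget for the dedicated channel, a media-aware scheduler transmits base-layer packets before enhancement substreams. The priority rule and budget model below are illustrative assumptions, not the authors' algorithm.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    frame: int   # display order (earlier frames are more urgent)
    layer: int   # 0 = base layer, higher = SNR/temporal enhancement
    size: int    # bytes

def schedule(packets, budget_bytes):
    # Greedy media-aware scheduler: lower layers (and earlier frames)
    # are more important, so they are transmitted first until the
    # per-interval byte budget is exhausted.
    sent, used = [], 0
    for p in sorted(packets, key=lambda p: (p.layer, p.frame)):
        if used + p.size <= budget_bytes:
            sent.append(p)
            used += p.size
    return sent   # the dropped packets are simply the remainder

pkts = [Packet(0, 0, 800), Packet(0, 1, 600), Packet(1, 0, 800), Packet(1, 2, 900)]
print([(p.frame, p.layer) for p in schedule(pkts, 2000)])
```

    With a 2000-byte budget, the two base-layer packets fit and are sent, while the enhancement packets are dropped first as the available bitrate shrinks.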

  18. The validity and scalability of the Theory of Mind Scale with toddlers and preschoolers.

    Science.gov (United States)

    Hiller, Rachel M; Weber, Nathan; Young, Robyn L

    2014-12-01

    Despite the importance of theory of mind (ToM) for typical development, there remain 2 key issues affecting our ability to draw robust conclusions. One is the continued focus on false belief as the sole measure of ToM. The second is the lack of empirically validated measures of ToM as a broad construct. Our key aim was to examine the validity and reliability of the 5-item ToM scale (Peterson, Wellman, & Liu, 2005). In particular, we extended on previous research of this scale by assessing its scalability and validity for use with children from 2 years of age. Sixty-eight typically developing children (aged 24 to 61 months) were assessed on the scale's 5 tasks, along with a sixth Sally-Anne false-belief task. Our data replicated the scalability of the 5 tasks for a Rasch, but not a Guttman, scale. Guttman analysis showed that a 4-item scale may be more suitable for this age range. Further, the tasks showed good internal consistency and validity for use with children as young as 2 years of age. Overall, the measure provides a valid and reliable tool for the assessment of ToM, and in particular, the longitudinal assessment of this ability as a construct. (c) 2014 APA, all rights reserved.

  19. Design issues for numerical libraries on scalable multicore architectures

    International Nuclear Information System (INIS)

    Heroux, M A

    2008-01-01

    Future generations of scalable computers will rely on multicore nodes for a significant portion of overall system performance. At present, most applications and libraries cannot exploit multiple cores beyond running additional MPI processes per node. In this paper we discuss important multicore architecture issues, programming models, algorithm requirements and software design related to effective use of scalable multicore computers. In particular, we focus on important issues for library research and development, making recommendations for how to effectively develop libraries for future scalable computer systems

  20. Mechanical characterization of scalable cellulose nano-fiber based composites made using liquid composite molding process

    Science.gov (United States)

    Bamdad Barari; Thomas K. Ellingham; Issam I. Ghamhia; Krishna M. Pillai; Rani El-Hajjar; Lih-Sheng Turng; Ronald Sabo

    2016-01-01

    Plant-derived cellulose nano-fibers (CNF) are a material with remarkable mechanical properties compared to other natural fibers. However, the large-scale production of nano-composites using CNF has yet to be investigated. In this study, scalable CNF nano-composites were made from isotropically porous CNF preforms using a freeze drying process. An improvised...

  1. Robustness and device independence of verifiable blind quantum computing

    International Nuclear Information System (INIS)

    Gheorghiu, Alexandru; Kashefi, Elham; Wallden, Petros

    2015-01-01

    Recent advances in theoretical and experimental quantum computing bring us closer to scalable quantum computing devices. This makes the need for protocols that verify the correct functionality of quantum operations timely and has led to the field of quantum verification. In this paper we address key challenges to make quantum verification protocols applicable to experimental implementations. We prove the robustness of the single server verifiable universal blind quantum computing protocol of Fitzsimons and Kashefi (2012 arXiv:1203.5217) in the most general scenario. This includes the case where the purification of the deviated input state is in the hands of an adversarial server. The proved robustness property allows the composition of this protocol with a device-independent state tomography protocol that we give, which is based on the rigidity of CHSH games as proposed by Reichardt et al (2013 Nature 496 456–60). The resulting composite protocol has lower round complexity for the verification of entangled quantum servers with a classical verifier and, as we show, can be made fault tolerant. (paper)

  2. Scuba: scalable kernel-based gene prioritization.

    Science.gov (United States)

    Zampieri, Guido; Tran, Dinh Van; Donini, Michele; Navarin, Nicolò; Aiolli, Fabio; Sperduti, Alessandro; Valle, Giorgio

    2018-01-25

    The uncovering of genes linked to human diseases is a pressing challenge in molecular biology and precision medicine. This task is often hindered by the large number of candidate genes and by the heterogeneity of the available information. Computational methods for the prioritization of candidate genes can help to cope with these problems. In particular, kernel-based methods are a powerful resource for the integration of heterogeneous biological knowledge; however, their practical implementation is often precluded by their limited scalability. We propose Scuba, a scalable kernel-based method for gene prioritization. It implements a novel multiple kernel learning approach, based on a semi-supervised perspective and on the optimization of the margin distribution. Scuba is optimized to cope with strongly unbalanced settings where known disease genes are few and large scale predictions are required. Importantly, it is able to efficiently deal both with a large amount of candidate genes and with an arbitrary number of data sources. As a direct consequence of scalability, Scuba integrates also a new efficient strategy to select optimal kernel parameters for each data source. We performed cross-validation experiments and simulated a realistic usage setting, showing that Scuba outperforms a wide range of state-of-the-art methods. Scuba achieves state-of-the-art performance and has enhanced scalability compared to existing kernel-based approaches for genomic data. This method can be useful to prioritize candidate genes, particularly when their number is large or when input data is highly heterogeneous. The code is freely available at https://github.com/gzampieri/Scuba.
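
    The prioritization step common to kernel-based methods of this kind can be sketched as follows: combine per-source similarity kernels with nonnegative weights and rank candidates by their aggregate similarity to known disease genes. This is a generic illustration with fixed weights, not Scuba's semi-supervised margin-distribution optimization.

```python
import numpy as np

def prioritize(kernels, weights, seed_idx):
    # Rank candidate genes by weighted-kernel similarity to seed genes.
    # kernels : list of (n, n) positive semi-definite similarity matrices
    # weights : one nonnegative weight per kernel
    # seed_idx: indices of known disease genes
    K = sum(w * k for w, k in zip(weights, kernels))
    scores = K[:, seed_idx].mean(axis=1)   # mean similarity to the seeds
    return np.argsort(-scores)             # best candidates first

rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(50, 8)), rng.normal(size=(50, 5))
kernels = [X1 @ X1.T, X2 @ X2.T]           # two linear kernels, two data sources
ranking = prioritize(kernels, [0.6, 0.4], seed_idx=[0, 1, 2])
```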

  3. Building scalable apps with Redis and Node.js

    CERN Document Server

    Johanan, Joshua

    2014-01-01

    If the phrase scalability sounds alien to you, then this is an ideal book for you. You will not need much Node.js experience as each framework is demonstrated in a way that requires no previous knowledge of the framework. You will be building scalable Node.js applications in no time! Knowledge of JavaScript is required.

  4. Scalable shared-memory multiprocessing

    CERN Document Server

    Lenoski, Daniel E

    1995-01-01

    Dr. Lenoski and Dr. Weber have experience with leading-edge research and practical issues involved in implementing large-scale parallel systems. They were key contributors to the architecture and design of the DASH multiprocessor. Currently, they are involved with commercializing scalable shared-memory technology.

  5. A scalable healthcare information system based on a service-oriented architecture.

    Science.gov (United States)

    Yang, Tzu-Hsiang; Sun, Yeali S; Lai, Feipei

    2011-06-01

    Many existing healthcare information systems are composed of a number of heterogeneous systems and face the important issue of system scalability. This paper first describes the comprehensive healthcare information systems used in National Taiwan University Hospital (NTUH) and then presents a service-oriented architecture (SOA)-based healthcare information system (HIS) based on the service standard HL7. The proposed architecture focuses on system scalability, in terms of both hardware and software. Moreover, we describe how scalability is implemented in rightsizing, service groups, databases, and hardware scalability. Although SOA-based systems sometimes display poor performance, a performance evaluation of our SOA-based HIS shows that the average response times for the outpatient, inpatient, and emergency HL7 Central systems are 0.035, 0.04, and 0.036 s, respectively. The outpatient, inpatient, and emergency WebUI average response times are 0.79, 1.25, and 0.82 s. The rightsizing project and our evaluation results provide evidence that SOA can deliver system scalability and sustainability in a highly demanding healthcare information system.

  6. Multidisciplinary Design Optimization for High Reliability and Robustness

    National Research Council Canada - National Science Library

    Grandhi, Ramana

    2005-01-01

    .... Over the last 3 years Wright State University has been applying analysis tools to predict the behavior of critical disciplines to produce highly robust torpedo designs using robust multi-disciplinary...

  7. TAO-robust backpropagation learning algorithm.

    Science.gov (United States)

    Pernía-Espinoza, Alpha V; Ordieres-Meré, Joaquín B; Martínez-de-Pisón, Francisco J; González-Marcos, Ana

    2005-03-01

    In several fields, such as industrial modelling, multilayer feedforward neural networks are often used as universal function approximators. These supervised neural networks are commonly trained by a traditional backpropagation learning scheme, which minimises the mean squared error (mse) of the training data. However, in the presence of corrupted data (outliers) this training scheme may produce wrong models. We combine the benefits of the non-linear regression model tau-estimates [introduced by Tabatabai, M. A. and Argyros, I. K., Robust estimation and testing for general nonlinear regression models, Applied Mathematics and Computation 58 (1993) 85-101] with the backpropagation algorithm to produce the TAO-robust learning algorithm, in order to deal with the problems of modelling with outliers. The cost function of this approach has a bounded influence function given by the weighted average of two psi functions, one corresponding to a very robust estimate and the other to a highly efficient estimate. The advantages of the proposed algorithm are studied with an example.
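
    The bounded-influence idea can be sketched with a robust loss whose gradient saturates for large residuals, so outliers cannot dominate training. The Huber-style ψ below is a simple stand-in for the paper's weighted pair of ψ functions, and the linear model stands in for a network layer.

```python
import numpy as np

def psi(r, c=1.345):
    # Bounded influence function (Huber-style): identity for small
    # residuals, clipped for large ones, so gross outliers cannot
    # dominate the gradient.
    return np.clip(r, -c, c)

def robust_linear_fit(X, y, lr=0.1, epochs=2000):
    # Gradient descent on a robust loss: the gradient uses
    # -X.T @ psi(residual) instead of the least-squares -X.T @ residual.
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        w += lr * X.T @ psi(y - X @ w) / len(y)
    return w

rng = np.random.default_rng(1)
X = np.c_[np.ones(100), rng.uniform(-1, 1, 100)]
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=100)
y[:5] += 20.0                      # five gross outliers
print(robust_linear_fit(X, y))     # stays close to [1, 2] despite them
```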

  8. JPEG2000-Compatible Scalable Scheme for Wavelet-Based Video Coding

    Directory of Open Access Journals (Sweden)

    Thomas André

    2007-03-01

    Full Text Available We present a simple yet efficient scalable scheme for wavelet-based video coders, able to provide on-demand spatial, temporal, and SNR scalability, and fully compatible with the still-image coding standard JPEG2000. Whereas hybrid video coders must undergo significant changes in order to support scalability, our coder only requires a specific wavelet filter for temporal analysis, as well as an adapted bit allocation procedure based on models of rate-distortion curves. Our study shows that scalably encoded sequences have the same or almost the same quality as nonscalably encoded ones, without a significant increase in complexity. A full compatibility with Motion JPEG2000, which tends to be a serious candidate for the compression of high-definition video sequences, is ensured.
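
    The temporal wavelet analysis that provides temporal scalability can be illustrated with the simplest case, a Haar lifting step over pairs of frames: decoding the low band alone yields half the frame rate, and adding the high band restores the full rate with perfect reconstruction. Motion compensation and the specific filter used by the coder are omitted in this sketch.

```python
import numpy as np

def haar_temporal_analysis(frames):
    # One temporal decomposition level over pairs of frames.
    # 'low' alone decodes at half the frame rate (temporal scalability).
    a, b = frames[0::2], frames[1::2]
    high = b - a                  # prediction step
    low = a + high / 2            # update step -> (a + b) / 2
    return low, high

def haar_temporal_synthesis(low, high):
    a = low - high / 2
    b = a + high
    frames = np.empty((2 * len(low),) + low.shape[1:], dtype=low.dtype)
    frames[0::2], frames[1::2] = a, b
    return frames

video = np.random.default_rng(2).random((8, 16, 16))   # 8 toy frames
low, high = haar_temporal_analysis(video)
assert np.allclose(haar_temporal_synthesis(low, high), video)
```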

  10. Robust Forecasting of Non-Stationary Time Series

    NARCIS (Netherlands)

    Croux, C.; Fried, R.; Gijbels, I.; Mahieu, K.

    2010-01-01

    This paper proposes a robust forecasting method for non-stationary time series. The time series is modelled using non-parametric heteroscedastic regression, and fitted by a localized MM-estimator, combining high robustness and large efficiency. The proposed method is shown to produce reliable
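
    The flavor of robust local fitting for forecasting can be sketched with a windowed repeated-median trend estimate, a simple stand-in for the localized MM-estimator described in this record: medians of pairwise slopes make the extrapolated trend insensitive to isolated outliers.

```python
import numpy as np

def repeated_median_forecast(y, window=11, horizon=1):
    # Robust local linear trend over the last `window` points.
    # Slope: median over i of the median over j of pairwise slopes
    # (Siegel's repeated median), insensitive to isolated outliers.
    w = y[-window:]
    t = np.arange(window)
    slopes = [np.median([(w[j] - w[i]) / (j - i)
                         for j in range(window) if j != i])
              for i in range(window)]
    slope = np.median(slopes)
    level = np.median(w - slope * t)           # robust intercept
    return level + slope * (window - 1 + horizon)

y = np.arange(50) * 0.5 + np.random.default_rng(3).normal(0, 0.2, 50)
y[45] += 10.0                                  # one spike near the end
print(repeated_median_forecast(y))             # still tracks the 0.5 trend
```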

  11. A Scalable and Modular Dome Illumination System for Scientific Microphotography on a Budget.

    Directory of Open Access Journals (Sweden)

    Ricardo Kawada

    Full Text Available A scalable and modular LED illumination dome for microscopic scientific photography is described and illustrated, and methods for constructing such a dome are detailed. Dome illumination for insect specimens has become standard practice across the field of insect systematics, but many dome designs remain expensive and inflexible with respect to new LED technology. Further, a one-size-fits-all dome cannot accommodate the large breadth of insect size encountered in nature, forcing the photographer to adapt, in some cases, to a less than ideal dome design. The dome described here is scalable, as it is based on an isodecahedron, and the template for the dome is available as a downloadable file from the internet that can be printed on any printer, on the photographer's choice of media. As a result, a photographer can afford, using this design, to produce a series of domes of various sizes and materials, and LED ring lights of various sizes and color temperatures, depending on the need.

  12. Declarative and Scalable Selection for Map Visualizations

    DEFF Research Database (Denmark)

    Kefaloukos, Pimin Konstantin Balic

    and is itself a source and cause of prolific data creation. This calls for scalable map processing techniques that can handle the data volume and which play well with the predominant data models on the Web. (4) Maps are now consumed around the clock by a global audience. While historical maps were single-user...... user-defined constraints as well as custom objectives. The purpose of the language is to derive a target multi-scale database from a source database according to holistic specifications. (b) The Glossy SQL compiler allows Glossy SQL to be scalably executed in a spatial analytics system, such as a spatial relational......, there are indications that the method is scalable for databases that contain millions of records, especially if the target language of the compiler is substituted by a cluster-ready variant of SQL. While several realistic use cases for maps have been implemented in CVL, additional non-geographic data visualization uses...

  13. Scalable robotic biofabrication of tissue spheroids

    International Nuclear Information System (INIS)

    Mehesz, A Nagy; Hajdu, Z; Visconti, R P; Markwald, R R; Mironov, V; Brown, J; Beaver, W; Da Silva, J V L

    2011-01-01

    Development of methods for scalable biofabrication of uniformly sized tissue spheroids is essential for tissue spheroid-based bioprinting of large size tissue and organ constructs. The most recent scalable technique for tissue spheroid fabrication employs a micromolded recessed template prepared in a non-adhesive hydrogel, wherein the cells loaded into the template self-assemble into tissue spheroids due to gravitational force. In this study, we present an improved version of this technique. A new mold was designed to enable generation of 61 microrecessions in each well of a 96-well plate. The microrecessions were seeded with cells using an EpMotion 5070 automated pipetting machine. After 48 h of incubation, tissue spheroids formed at the bottom of each microrecession. To assess the quality of constructs generated using this technology, 600 tissue spheroids made by this method were compared with 600 spheroids generated by the conventional hanging drop method. These analyses showed that tissue spheroids fabricated by the micromolded method are more uniform in diameter. Thus, use of micromolded recessions in a non-adhesive hydrogel, combined with automated cell seeding, is a reliable method for scalable robotic fabrication of uniform-sized tissue spheroids.

  15. Architectures and Applications for Scalable Quantum Information Systems

    Science.gov (United States)

    2007-01-01

    Final Technical Report AFRL-IF-RS-TR-2007-12, January 2007, under grant FA8750-01-2-0521. Title: Architectures and Applications for Scalable Quantum Information Systems.

  16. Extending JPEG-LS for low-complexity scalable video coding

    DEFF Research Database (Denmark)

    Ukhanova, Anna; Sergeev, Anton; Forchhammer, Søren

    2011-01-01

    JPEG-LS, the well-known international standard for lossless and near-lossless image compression, was originally designed for non-scalable applications. In this paper we propose a scalable modification of JPEG-LS and compare it with the leading image and video coding standards JPEG2000 and H.264/SVC...

  17. Techno-Economic Analysis of Scalable Coal-Based Fuel Cells

    Energy Technology Data Exchange (ETDEWEB)

    Chuang, Steven S. C. [Univ. of Akron, OH (United States)

    2014-08-31

    Researchers at The University of Akron (UA) have demonstrated the technical feasibility of a laboratory coal fuel cell that can economically convert high sulfur coal into electricity with near zero negative environmental impact. Scaling up this coal fuel cell technology to the megawatt scale for the nation’s electric power supply requires two key elements: (i) developing the manufacturing technology for the components of the coal-based fuel cell, and (ii) long term testing of a kW scale fuel cell pilot plant. This project was expected to develop a scalable coal fuel cell manufacturing process through testing, demonstrating the feasibility of building a large-scale coal fuel cell power plant. We have developed a reproducible tape casting technique for the mass production of the planar fuel cells. Low-cost interconnect and cathode current collector materials were identified and current collection was improved. In addition, this study has demonstrated that electrochemical oxidation of carbon can take place on the Ni anode surface and that the CO and CO2 produced can further react with carbon to initiate secondary reactions. One important secondary reaction is the reaction of carbon with CO2 to produce CO. We found that CO and carbon can be electrochemically oxidized simultaneously inside the porous structure of the anode and on the anode surface, producing electricity. Since CH4 produced from coal during high temperature injection of coal into the anode chamber can cause severe deactivation of the Ni anode, we have studied how CH4 can interact with CO2 to produce CO in the anode chamber. The CO produced was found to inhibit coking, allowing the rate of anode deactivation to be decreased. An injection system was developed to inject the solid carbon and coal fuels without bringing air into the anode chamber. Five planar fuel cells were connected in a series configuration and tested. Extensive studies on the planar fuel...

  18. Scalable 2D Hierarchical Porous Carbon Nanosheets for Flexible Supercapacitors with Ultrahigh Energy Density.

    Science.gov (United States)

    Yao, Lei; Wu, Qin; Zhang, Peixin; Zhang, Junmin; Wang, Dongrui; Li, Yongliang; Ren, Xiangzhong; Mi, Hongwei; Deng, Libo; Zheng, Zijian

    2018-03-01

    2D carbon nanomaterials, such as graphene and its derivatives, have gained tremendous research interest in energy storage because of their high capacitance and chemical stability. However, scalable synthesis of ultrathin carbon nanosheets with well-defined pore architectures remains a great challenge. Herein, the first synthesis of 2D hierarchical porous carbon nanosheets (2D-HPCs) with rich nitrogen dopants is reported, which is prepared with high scalability through a rapid polymerization of a nitrogen-containing thermoset and a subsequent one-step pyrolysis and activation into 2D porous nanosheets. 2D-HPCs, which are typically 1.5 nm thick and 1-3 µm wide, show a high surface area (2406 m² g⁻¹) with hierarchical micro-, meso-, and macropores. This 2D and hierarchical porous structure leads to robust flexibility and good energy-storage capability, being 139 Wh kg⁻¹ for a symmetric supercapacitor. Flexible supercapacitor devices fabricated from these 2D-HPCs also present an ultrahigh volumetric energy density of 8.4 mWh cm⁻³ at a power density of 24.9 mW cm⁻³, which is retained at 80% even when the power density is increased 20-fold. The devices show very high electrochemical life (96% retention after 10000 charge/discharge cycles) and excellent mechanical flexibility. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
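
    For orientation, gravimetric energy figures of this kind follow from the standard cell relation E = ½CV²; the cell capacitance and voltage window below are illustrative values, not the paper's measurements:

```latex
E \;=\; \tfrac{1}{2}\,C_{\mathrm{cell}}V^{2}
  \;=\; \tfrac{1}{2}\times 100\ \mathrm{F\,g^{-1}}\times(1.0\ \mathrm{V})^{2}
  \;=\; 50\ \mathrm{J\,g^{-1}} \;\approx\; 13.9\ \mathrm{Wh\,kg^{-1}}
```

    Larger voltage windows (e.g., with non-aqueous electrolytes) raise the energy quadratically, which is how cell-level figures in the 100+ Wh kg⁻¹ range become reachable.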

  19. Scalable, full-colour and controllable chromotropic plasmonic printing

    OpenAIRE

    Xue, Jiancai; Zhou, Zhang-Kai; Wei, Zhiqiang; Su, Rongbin; Lai, Juan; Li, Juntao; Li, Chao; Zhang, Tengwei; Wang, Xue-Hua

    2015-01-01

    Plasmonic colour printing has drawn wide attention as a promising candidate for the next-generation colour-printing technology. However, an efficient approach to realize full colour and scalable fabrication is still lacking, which prevents plasmonic colour printing from reaching practical applications. Here we present a scalable and full-colour plasmonic printing approach by combining conjugate twin-phase modulation with a plasmonic broadband absorber. More importantly, our approach also demonstrates ...

  20. Temporal scalability comparison of the H.264/SVC and distributed video codec

    DEFF Research Database (Denmark)

    Huang, Xin; Ukhanova, Ann; Belyaev, Evgeny

    2009-01-01

    The problem of scalable multimedia video streaming is a current topic of interest. There exist many methods for scalable video coding. This paper focuses on the scalable extension of H.264/AVC (H.264/SVC) and distributed video coding (DVC). The paper presents an efficiency comparison of SV...

  1. Scalable and near-optimal design space exploration for embedded systems

    CERN Document Server

    Kritikakou, Angeliki; Goutis, Costas

    2014-01-01

    This book describes scalable and near-optimal, processor-level design space exploration (DSE) methodologies.  The authors present design methodologies for data storage and processing in real-time, cost-sensitive data-dominated embedded systems.  Readers will be enabled to reduce time-to-market, while satisfying system requirements for performance, area, and energy consumption, thereby minimizing the overall cost of the final design.   • Describes design space exploration (DSE) methodologies for data storage and processing in embedded systems, which achieve near-optimal solutions with scalable exploration time; • Presents a set of principles and the processes which support the development of the proposed scalable and near-optimal methodologies; • Enables readers to apply scalable and near-optimal methodologies to the intra-signal in-place optimization step for both regular and irregular memory accesses.

  2. Software performance and scalability a quantitative approach

    CERN Document Server

    Liu, Henry H

    2009-01-01

    Praise from the Reviewers: "The practicality of the subject in a real-world situation distinguishes this book from others available on the market." —Professor Behrouz Far, University of Calgary. "This book could replace the computer organization texts now in use that every CS and CpE student must take. . . . It is much needed, well written, and thoughtful." —Professor Larry Bernstein, Stevens Institute of Technology. A distinctive, educational text on software performance and scalability. This is the first book to take a quantitative approach to the subject of software performance and scalability

  3. High-performance scalable Information Service for the ATLAS experiment

    CERN Document Server

    Kolos, S; The ATLAS collaboration; Hauser, R

    2012-01-01

    The ATLAS experiment is operated by a highly distributed computing system which constantly produces a large amount of status information that is used to monitor the experiment's operational conditions as well as to assess the quality of the physics data being taken. For example, the ATLAS High Level Trigger (HLT) algorithms are executed on the online computing farm consisting of about 1500 nodes. Each HLT algorithm produces a few thousand histograms, which have to be integrated over the whole farm and carefully analyzed in order to properly tune the event rejection. In order to handle such non-physics data, the Information Service (IS) facility has been developed within the scope of the ATLAS TDAQ project. The IS provides a high-performance scalable solution for information exchange in a distributed environment. In the course of an ATLAS data-taking session, the IS handles about a hundred gigabytes of information which is constantly updated, with the update interval varying from a second to a few tens of seconds. IS ...

  4. Robustness of airline route networks

    Science.gov (United States)

    Lordan, Oriol; Sallan, Jose M.; Escorihuela, Nuria; Gonzalez-Prieto, David

    2016-03-01

    Airlines shape their route network by defining their routes through supply and demand considerations, paying little attention to network performance indicators, such as network robustness. However, the collapse of an airline network can produce high financial costs for the airline and all its geographical area of influence. The aim of this study is to analyze the topology and robustness of the route networks of airlines following the Low Cost Carrier (LCC) and Full Service Carrier (FSC) business models. Results show that FSC hubs are more central than LCC bases in their route networks. As a result, LCC route networks are more robust than FSC networks.

  5. Space-Filling Supercapacitor Carpets: Highly scalable fractal architecture for energy storage

    Science.gov (United States)

    Tiliakos, Athanasios; Trefilov, Alexandra M. I.; Tanasǎ, Eugenia; Balan, Adriana; Stamatin, Ioan

    2018-04-01

    Revamping ground-breaking ideas from fractal geometry, we propose an alternative micro-supercapacitor configuration realized by laser-induced graphene (LIG) foams produced via laser pyrolysis of inexpensive commercial polymers. The Space-Filling Supercapacitor Carpet (SFSC) architecture introduces the concept of nested electrodes based on the pre-fractal Peano space-filling curve, arranged in a symmetrical equilateral setup that incorporates multiple parallel capacitor cells sharing common electrodes for maximum efficiency and optimal length-to-area distribution. We elucidate the theoretical foundations of the SFSC architecture, and we introduce innovations (high-resolution vector-mode printing) in the LIG method that allow for the realization of flexible and scalable devices based on low iterations of the Peano algorithm. SFSCs exhibit distributed capacitance properties, leading to capacitance, energy, and power ratings proportional to the number of nested electrodes (up to 4.3 mF, 0.4 μWh, and 0.2 mW for the largest tested model of low iteration using aqueous electrolytes), with competitively high energy and power densities. This can pave the road for full scalability in energy storage, reaching beyond the scale of micro-supercapacitors for incorporation into larger and more demanding applications.
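
    The layout principle, a low-iteration Peano space-filling curve, can be reproduced with a short digit-based generator using the complement k(d) = 2 - d from Peano's construction; this illustrates the electrode geometry only, not the authors' fabrication files.

```python
def peano_point(n, k):
    # Map index n in [0, 9**k) to (x, y) on a 3**k x 3**k grid along the
    # Peano space-filling curve. Odd base-3 digits of n build x (flipped
    # when the running sum of even digits is odd) and vice versa.
    digits = []
    for _ in range(2 * k):
        digits.append(n % 3)
        n //= 3
    digits.reverse()                      # t1 .. t2k, most significant first
    x = y = sx = sy = 0                   # sx, sy: running digit sums
    for i, d in enumerate(digits):
        if i % 2 == 0:                    # odd positions build x
            x = 3 * x + (2 - d if sy % 2 else d)
            sx += d
        else:                             # even positions build y
            y = 3 * y + (2 - d if sx % 2 else d)
            sy += d
    return x, y

path = [peano_point(n, 2) for n in range(9 ** 2)]   # 9 x 9 layout, iteration 2
# Consecutive points are grid neighbours, so a single continuous trace
# fills the square -- the property the nested-electrode carpet exploits.
assert all(abs(a - c) + abs(b - d) == 1
           for (a, b), (c, d) in zip(path, path[1:]))
```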

  6. GPU-FS-kNN: a software tool for fast and scalable kNN computation using GPUs.

    Directory of Open Access Journals (Sweden)

    Ahmed Shamsul Arefin

    Full Text Available BACKGROUND: The analysis of biological networks has become a major challenge due to the recent development of high-throughput techniques that are rapidly producing very large data sets. The exploding volumes of biological data call for extreme computational power and special computing facilities (i.e., supercomputers). An inexpensive solution, such as General Purpose computation based on Graphics Processing Units (GPGPU), can be adapted to tackle this challenge, but the limitation of the device internal memory can pose a new problem of scalability. Efficient data and computational parallelism with partitioning is required to provide a fast and scalable solution to this problem. RESULTS: We propose an efficient parallel formulation of the k-Nearest Neighbour (kNN) search problem, which is a popular method for classifying objects in several fields of research, such as pattern recognition, machine learning and bioinformatics. Being very simple and straightforward, the performance of the kNN search degrades dramatically for large data sets, since the task is computationally intensive. The proposed approach is not only fast but also scalable to large-scale instances. Based on our approach, we implemented a software tool, GPU-FS-kNN (GPU-based Fast and Scalable k-Nearest Neighbour), for CUDA-enabled GPUs. The basic approach is simple and adaptable to other available GPU architectures. We observed speed-ups of 50-60 times compared with a CPU implementation on a well-known breast microarray study and its associated data sets. CONCLUSION: Our GPU-based Fast and Scalable k-Nearest Neighbour search technique (GPU-FS-kNN) provides a significant performance improvement for nearest neighbour computation in large-scale networks. Source code and the software tool are available under GNU Public License (GPL) at https://sourceforge.net/p/gpufsknn/.
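
    The partitioning idea that lets brute-force kNN fit in limited device memory can be shown in a CPU/NumPy stand-in for the CUDA kernels: distances are computed one query chunk at a time, so only a (chunk x n) block is ever materialized.

```python
import numpy as np

def knn_chunked(data, queries, k, chunk=1024):
    # Brute-force kNN, processing queries in chunks so that only a
    # (chunk, n) distance block -- never the full (m, n) matrix -- must
    # fit in memory, mirroring the paper's GPU partitioning strategy.
    d2_data = (data ** 2).sum(axis=1)
    out = np.empty((len(queries), k), dtype=np.int64)
    for s in range(0, len(queries), chunk):
        q = queries[s:s + chunk]
        # squared Euclidean distances via ||q||^2 + ||x||^2 - 2 q.x
        d2 = (q ** 2).sum(axis=1)[:, None] + d2_data[None, :] - 2.0 * q @ data.T
        part = np.argpartition(d2, k, axis=1)[:, :k]        # k smallest, unordered
        order = np.take_along_axis(d2, part, axis=1).argsort(axis=1)
        out[s:s + chunk] = np.take_along_axis(part, order, axis=1)
    return out

rng = np.random.default_rng(4)
pts = rng.normal(size=(5000, 16))
nbrs = knn_chunked(pts, pts, k=5)   # each row: indices of the 5 nearest points
```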

  7. Quality Scalability Compression on Single-Loop Solution in HEVC

    Directory of Open Access Journals (Sweden)

    Mengmeng Zhang

    2014-01-01

    Full Text Available This paper proposes a quality scalable extension design for the upcoming high efficiency video coding (HEVC) standard. In the proposed design, the single-loop decoder solution is extended into the proposed scalable scenario. A novel interlayer intra/interprediction is added to reduce the number of bits needed for representation by exploiting the correlation between coding layers. The experimental results indicate that an average Bjøntegaard delta rate decrease of 20.50% can be gained compared with the simulcast encoding. The proposed technique achieved 47.98% Bjøntegaard delta rate reduction compared with the scalable video coding extension of the H.264/AVC. Consequently, significant rate savings confirm that the proposed method achieves better performance.

  8. SOL: A Library for Scalable Online Learning Algorithms

    OpenAIRE

    Wu, Yue; Hoi, Steven C. H.; Liu, Chenghao; Lu, Jing; Sahoo, Doyen; Yu, Nenghai

    2016-01-01

    SOL is an open-source library for scalable online learning algorithms, and is particularly suitable for learning with high-dimensional data. The library provides a family of regular and sparse online learning algorithms for large-scale binary and multi-class classification tasks with high efficiency, scalability, portability, and extensibility. SOL was implemented in C++, and provided with a collection of easy-to-use command-line tools, python wrappers and library calls for users and develope...

  9. Toward a scalable flexible-order model for 3D nonlinear water waves

    DEFF Research Database (Denmark)

    Engsig-Karup, Allan Peter; Ducrozet, Guillaume; Bingham, Harry B.

    For marine and coastal applications, current work is directed toward the development of a scalable numerical 3D model for fully nonlinear potential water waves over arbitrary depths. The model is high-order accurate, robust and efficient for large-scale problems, and support will be included...... for flexibility in the description of structures by the use of curvilinear boundary-fitted meshes. The mathematical equations for potential waves in the physical domain are transformed through σ-mapping(s) to a time-invariant boundary-fitted domain which then becomes a basis for an efficient solution...... strategy on a time-invariant mesh. The 3D numerical model is based on a finite difference method as in the original works [LiFleming1997, BinghamZhang2007]. Full details and other aspects of an improved 3D solution can be found in [EBL08]. The new and improved approach for three...

  10. A scalable distributed RRT for motion planning

    KAUST Repository

    Jacobs, Sam Ade

    2013-05-01

    Rapidly-exploring Random Tree (RRT), like other sampling-based motion planning methods, has been very successful in solving motion planning problems. Even so, sampling-based planners cannot solve all problems of interest efficiently, so attention is increasingly turning to parallelizing them. However, one challenge in parallelizing RRT is the global computation and communication overhead of nearest neighbor search, a key operation in RRTs. This is a critical issue as it limits the scalability of previous algorithms. We present two parallel algorithms to address this problem. The first algorithm extends existing work by introducing a parameter that adjusts how much local computation is done before a global update. The second algorithm radially subdivides the configuration space into regions, constructs a portion of the tree in each region in parallel, and connects the subtrees, removing cycles if they exist. By subdividing the space, we increase computation locality enabling a scalable result. We show that our approaches are scalable. We present results demonstrating almost linear scaling to hundreds of processors on a Linux cluster and a Cray XE6 machine. © 2013 IEEE.
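
    For reference, the sequential baseline being parallelized looks as follows; the nearest-neighbor query singled out by the abstract as the scalability bottleneck is the min(...) line. The obstacle map and parameters are illustrative, not from the paper.

```python
import math, random

def rrt(start, goal, is_free, n_iters=2000, step=0.05, goal_tol=0.05):
    # Minimal sequential RRT in the unit square. nodes[i] is a point and
    # parent[i] the index of its tree parent; the nearest-neighbor search
    # below is the operation the paper's parallel algorithms distribute.
    nodes, parent = [start], {0: None}
    for _ in range(n_iters):
        q = (random.random(), random.random())            # random sample
        i = min(range(len(nodes)), key=lambda j: math.dist(nodes[j], q))
        x, y = nodes[i]
        d = math.dist((x, y), q) or 1e-9
        new = (x + step * (q[0] - x) / d, y + step * (q[1] - y) / d)
        if is_free(new):                                   # collision check
            parent[len(nodes)] = i
            nodes.append(new)
            if math.dist(new, goal) < goal_tol:
                break                                      # goal reached
    return nodes, parent

# Toy obstacle: a vertical wall with a gap near the top.
nodes, parent = rrt((0.1, 0.1), (0.9, 0.9),
                    is_free=lambda p: not (0.45 < p[0] < 0.55 and p[1] < 0.8))
```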

  12. Focal plane array with modular pixel array components for scalability

    Science.gov (United States)

    Kay, Randolph R; Campbell, David V; Shinde, Subhash L; Rienstra, Jeffrey L; Serkland, Darwin K; Holmes, Michael L

    2014-12-09

    A modular, scalable focal plane array is provided as an array of integrated circuit dice, wherein each die includes a given amount of modular pixel array circuitry. The array of dice effectively multiplies the amount of modular pixel array circuitry to produce a larger pixel array without increasing die size. Desired pixel pitch across the enlarged pixel array is preserved by forming die stacks with each pixel array circuitry die stacked on a separate die that contains the corresponding signal processing circuitry. Techniques for die stack interconnections and die stack placement are implemented to ensure that the desired pixel pitch is preserved across the enlarged pixel array.

  13. Triboelectric Charging at the Nanostructured Solid/Liquid Interface for Area-Scalable Wave Energy Conversion and Its Use in Corrosion Protection.

    Science.gov (United States)

    Zhao, Xue Jiao; Zhu, Guang; Fan, You Jun; Li, Hua Yang; Wang, Zhong Lin

    2015-07-28

    We report a flexible and area-scalable energy-harvesting technique for converting kinetic wave energy. Triboelectrification as a result of direct interaction between a dynamic wave and a large-area nanostructured solid surface produces an induced current among an array of electrodes. An integration method ensures that the induced current between any pair of electrodes can be constructively added up, which enables significant enhancement in output power and realizes area-scalable integration of electrode arrays. Internal and external factors that affect the electric output are comprehensively discussed. The produced electricity not only drives small electronics but also achieves effective impressed-current cathodic protection. This type of thin-film-based device is a potentially practical solution for on-site sustained power supply at coastal or off-shore sites wherever a dynamic wave is available. Potential applications include corrosion protection, pollution degradation, water desalination, and wireless sensing for marine surveillance.

  14. Scalable implementation of ancilla-free optimal 1→M phase-covariant quantum cloning by combining quantum Zeno dynamics and adiabatic passage

    International Nuclear Information System (INIS)

    Shao, Xiao-Qiang; Zheng, Tai-Yu; Zhang, Shou

    2011-01-01

    A scalable way to implement ancilla-free optimal 1→M phase-covariant quantum cloning (PCC) is proposed by combining quantum Zeno dynamics and adiabatic passage. An optimal 1→M PCC can be achieved directly from the existing optimal 1→(M-1) PCC without populating excited states during the whole process. The cases of optimal 1→3 (4) PCCs are discussed in detail to show that the scheme is robust against the effect of decoherence. Moreover, the time for carrying out each cloning transformation is regular, which may reduce the complexity of achieving the optimal PCC in experiment. -- Highlights: → We implement the ancilla-free optimal 1→M phase-covariant quantum cloning machine. → This scheme is robust against cavity decay and the spontaneous emission of the atom. → The time for carrying out each cloning transformation is regular.

  15. Scalable implementation of ancilla-free optimal 1→M phase-covariant quantum cloning by combining quantum Zeno dynamics and adiabatic passage

    Energy Technology Data Exchange (ETDEWEB)

    Shao, Xiao-Qiang, E-mail: xqshao83@yahoo.cn [School of Physics, Northeast Normal University, Changchun 130024 (China); Zheng, Tai-Yu, E-mail: zhengty@nenu.edu.cn [School of Physics, Northeast Normal University, Changchun 130024 (China); Zhang, Shou [Department of Physics, College of Science, Yanbian University, Yanji, Jilin 133002 (China)

    2011-09-19

    A scalable way for implementation of ancilla-free optimal 1→M phase-covariant quantum cloning (PCC) is proposed by combining quantum Zeno dynamics and adiabatic passage. An optimal 1→M PCC can be achieved directly from the existing optimal 1→(M-1) PCC without populating excited states during the whole process. The cases of optimal 1→3 (4) PCCs are discussed in detail to show that the scheme is robust against the effect of decoherence. Moreover, the time for carrying out each cloning transformation is regular, which may reduce the complexity of achieving the optimal PCC in experiment. -- Highlights: → We implement the ancilla-free optimal 1→M phase-covariant quantum cloning machine. → This scheme is robust against the cavity decay and the spontaneous emission of the atom. → The time for carrying out each cloning transformation is regular.

  16. CloudTPS: Scalable Transactions for Web Applications in the Cloud

    NARCIS (Netherlands)

    Zhou, W.; Pierre, G.E.O.; Chi, C.-H.

    2010-01-01

    NoSQL Cloud data services provide scalability and high availability properties for web applications but at the same time they sacrifice data consistency. However, many applications cannot afford any data inconsistency. CloudTPS is a scalable transaction manager to allow cloud database services to

  17. Synthesis and characterization of robust magnetic carriers for bioprocess applications

    Energy Technology Data Exchange (ETDEWEB)

    Kopp, Willian, E-mail: willkopp@gmail.com [Federal University of São Carlos-UFSCar, Graduate Program in Chemical Engineering, Rodovia Washington Luiz, km 235, São Carlos, São Paulo 13565-905 (Brazil); Silva, Felipe A., E-mail: eq.felipe.silva@gmail.com [Federal University of São Carlos-UFSCar, Graduate Program in Chemical Engineering, Rodovia Washington Luiz, km 235, São Carlos, São Paulo 13565-905 (Brazil); Lima, Lionete N., E-mail: lionetenunes@yahoo.com.br [Federal University of São Carlos-UFSCar, Graduate Program in Chemical Engineering, Rodovia Washington Luiz, km 235, São Carlos, São Paulo 13565-905 (Brazil); Masunaga, Sueli H., E-mail: sueli.masunaga@gmail.com [Department of Physics, Montana State University-MSU, 173840, Bozeman, MT 59717-3840 (United States); Tardioli, Paulo W., E-mail: pwtardioli@ufscar.br [Department of Chemical Engineering, Federal University of São Carlos-UFSCar, Rodovia Washington Luiz, km 235, São Carlos, São Paulo 13565-905 (Brazil); Giordano, Roberto C., E-mail: roberto@ufscar.br [Department of Chemical Engineering, Federal University of São Carlos-UFSCar, Rodovia Washington Luiz, km 235, São Carlos, São Paulo 13565-905 (Brazil); Araújo-Moreira, Fernando M., E-mail: faraujo@df.ufscar.br [Department of Physics, Federal University of São Carlos-UFSCar, Rodovia Washington Luiz, km 235, São Carlos, São Paulo 13565-905 (Brazil); and others

    2015-03-15

    Highlights: • Silica magnetic microparticles were synthesized for applications in bioprocesses. • The process to produce magnetic microparticles is inexpensive and easily scalable. • Microparticles with very high saturation magnetization were obtained. • The structure of the silica magnetic microparticles could be controlled. - Abstract: Magnetic carriers are an effective option to withdraw selected target molecules from complex mixtures or to immobilize enzymes. This paper describes the synthesis of robust silica magnetic microparticles (SMMps), particularly designed for applications in bioprocesses. SMMps were synthesized in a micro-emulsion, using sodium silicate as the silica source and superparamagnetic iron oxide nanoparticles as the magnetic core. Thermally resistant particles, with high and accessible surface area, narrow particle size distribution, high saturation magnetization, and with superparamagnetic properties were obtained. Several reaction conditions were tested, yielding materials with saturation magnetization between 45 and 63 emu g⁻¹, particle size between 2 and 200 μm and average diameter between 11.2 and 15.9 μm, surface area between 49 and 103 m² g⁻¹ and pore diameter between 2 and 60 nm. The performance of SMMps in a bioprocess was evaluated by the immobilization of Pseudomonas fluorescens lipase onto octyl-modified SMMps; the resulting biocatalyst was used to produce butyl butyrate with good results.

  18. From Digital Disruption to Business Model Scalability

    DEFF Research Database (Denmark)

    Nielsen, Christian; Lund, Morten; Thomsen, Peter Poulsen

    2017-01-01

    This article discusses the terms disruption, digital disruption, business models and business model scalability. It illustrates how managers should be using these terms for the benefit of their business by developing business models capable of achieving exponentially increasing returns to scale as a response to digital disruption. A series of case studies illustrate that, besides frequent messages in the business literature relating to the importance of creating agile businesses, both in growing and declining economies, hard-to-copy value propositions or value propositions that take … will seldom lead to business model scalability capable of competing with digital disruption(s).

  19. Robust production of virus-like particles and monoclonal antibodies with geminiviral replicon vectors in lettuce

    Science.gov (United States)

    Lai, Huafang; He, Junyun; Engle, Michael; Diamond, Michael S.; Chen, Qiang

    2011-01-01

    Pharmaceutical protein production in plants has been greatly promoted by the development of viral-based vectors and transient expression systems. Tobacco and related Nicotiana species are currently the most common host plants for generation of plant-made pharmaceutical proteins (PMPs). Downstream processing of target PMPs from these plants, however, is hindered by potential technical and regulatory difficulties due to the presence of high levels of phenolics and toxic alkaloids. Here, we explored the use of lettuce, which grows quickly yet produces low levels of secondary metabolites, and viral vector-based transient expression systems to develop a robust PMP production platform. Our results showed that a geminiviral replicon system based on the bean yellow dwarf virus permits high-level expression in lettuce of virus-like particles (VLP) derived from the Norwalk virus capsid protein and therapeutic monoclonal antibodies (mAbs) against Ebola and West Nile viruses. These vaccine and therapeutic candidates can be readily purified from lettuce leaves with scalable processing methods while fully retaining functional activity. Furthermore, this study also demonstrated the feasibility of using commercially produced lettuce for high-level PMP production. This allows our production system to have access to unlimited quantities of inexpensive plant material for large-scale production. These results establish a new production platform for biological pharmaceutical agents that is effective, safe, low-cost, and amenable to large-scale manufacturing. PMID:21883868

  20. Scalable Packet Classification with Hash Tables

    Science.gov (United States)

    Wang, Pi-Chung

    In the last decade, the technique of packet classification has been widely deployed in various network devices, including routers, firewalls and network intrusion detection systems. In this work, we improve the performance of packet classification by using multiple hash tables. The existing hash-based algorithms have superior scalability with respect to the required space; however, their search performance may not be comparable to other algorithms. To improve the search performance, we propose a tuple reordering algorithm to minimize the number of accessed hash tables with the aid of bitmaps. We also use pre-computation to ensure the accuracy of our search procedure. Performance evaluation based on both real and synthetic filter databases shows that our scheme is effective and scalable and the pre-computation cost is moderate.
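
    The flavor of such multi-table lookup can be sketched in a few lines of Python (the filters, the prefix-length tuples, and the reordering heuristic below are illustrative assumptions, not the paper's exact algorithm):

        # Tuple-space classification: one hash table per (src_len, dst_len) tuple.
        # Filters: (src prefix bits, dst prefix bits, priority); lower = higher priority.
        filters = [
            ("1010", "11",   1),
            ("10",   "1100", 2),
            ("0110", "11",   3),
        ]

        tables = {}  # (src_len, dst_len) -> hash table of filters
        for src, dst, prio in filters:
            tables.setdefault((len(src), len(dst)), {})[(src, dst)] = prio

        # Reordering idea from the abstract: probe tables likely to hold the
        # best match first, so fewer tables are accessed on average.
        probe_order = sorted(tables, key=lambda t: min(tables[t].values()))

        def classify(src_bits, dst_bits):
            best = None
            for (sl, dl) in probe_order:
                hit = tables[(sl, dl)].get((src_bits[:sl], dst_bits[:dl]))
                if hit is not None and (best is None or hit < best):
                    best = hit
            return best

        print(classify("10101111", "11001010"))  # -> 1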

  1. A Scalable Parallel PWTD-Accelerated SIE Solver for Analyzing Transient Scattering from Electrically Large Objects

    KAUST Repository

    Liu, Yang

    2015-12-17

    A scalable parallel plane-wave time-domain (PWTD) algorithm for efficient and accurate analysis of transient scattering from electrically large objects is presented. The algorithm produces scalable communication patterns on very large numbers of processors by leveraging two mechanisms: (i) a hierarchical parallelization strategy to evenly distribute the computation and memory loads at all levels of the PWTD tree among processors, and (ii) a novel asynchronous communication scheme to reduce the cost and memory requirement of the communications between the processors. The efficiency and accuracy of the algorithm are demonstrated through its applications to the analysis of transient scattering from a perfect electrically conducting (PEC) sphere with a diameter of 70 wavelengths and a PEC square plate with a dimension of 160 wavelengths. Furthermore, the proposed algorithm is used to analyze transient fields scattered from realistic airplane and helicopter models under high frequency excitation.

  2. A scalable method for parallelizing sampling-based motion planning algorithms

    KAUST Repository

    Jacobs, Sam Ade; Manavi, Kasra; Burgos, Juan; Denny, Jory; Thomas, Shawna; Amato, Nancy M.

    2012-01-01

    This paper describes a scalable method for parallelizing sampling-based motion planning algorithms. It subdivides configuration space (C-space) into (possibly overlapping) regions and independently, in parallel, uses standard (sequential) sampling-based planners to construct roadmaps in each region. Next, in parallel, regional roadmaps in adjacent regions are connected to form a global roadmap. By subdividing the space and restricting the locality of connection attempts, we reduce the work and inter-processor communication associated with nearest neighbor calculation, a critical bottleneck for scalability in existing parallel motion planning methods. We show that our method is general enough to handle a variety of planning schemes, including the widely used Probabilistic Roadmap (PRM) and Rapidly-exploring Random Trees (RRT) algorithms. We compare our approach to two other existing parallel algorithms and demonstrate that our approach achieves better and more scalable performance. Our approach achieves almost linear scalability on a 2400 core LINUX cluster and on a 153,216 core Cray XE6 petascale machine. © 2012 IEEE.

  3. A scalable method for parallelizing sampling-based motion planning algorithms

    KAUST Repository

    Jacobs, Sam Ade

    2012-05-01

    This paper describes a scalable method for parallelizing sampling-based motion planning algorithms. It subdivides configuration space (C-space) into (possibly overlapping) regions and independently, in parallel, uses standard (sequential) sampling-based planners to construct roadmaps in each region. Next, in parallel, regional roadmaps in adjacent regions are connected to form a global roadmap. By subdividing the space and restricting the locality of connection attempts, we reduce the work and inter-processor communication associated with nearest neighbor calculation, a critical bottleneck for scalability in existing parallel motion planning methods. We show that our method is general enough to handle a variety of planning schemes, including the widely used Probabilistic Roadmap (PRM) and Rapidly-exploring Random Trees (RRT) algorithms. We compare our approach to two other existing parallel algorithms and demonstrate that our approach achieves better and more scalable performance. Our approach achieves almost linear scalability on a 2400 core LINUX cluster and on a 153,216 core Cray XE6 petascale machine. © 2012 IEEE.

  4. Efficient Enhancement for Spatial Scalable Video Coding Transmission

    Directory of Open Access Journals (Sweden)

    Mayada Khairy

    2017-01-01

    Full Text Available Scalable Video Coding (SVC) is an international standard technique for video compression. It is an extension of H.264 Advanced Video Coding (AVC). In the encoding of video streams by SVC, it is suitable to employ the macroblock (MB) mode because it affords superior coding efficiency. However, the exhaustive mode decision technique that is usually used for SVC increases the computational complexity, resulting in a longer encoding time (ET). Many other algorithms have been proposed to solve this problem, at the cost of increased transmission time (TT) across the network. To minimize the ET and TT, this paper introduces four efficient algorithms based on spatial scalability. The algorithms utilize the mode-distribution correlation between the base layer (BL) and enhancement layers (ELs) and interpolation between the EL frames. The proposed algorithms are of two categories. Those of the first category are based on interlayer residual SVC spatial scalability. They employ two methods, namely, interlayer interpolation (ILIP) and the interlayer base mode (ILBM) method, and enable ET and TT savings of up to 69.3% and 83.6%, respectively. The algorithms of the second category are based on full-search SVC spatial scalability. They utilize two methods, namely, full interpolation (FIP) and the full-base mode (FBM) method, and enable ET and TT savings of up to 55.3% and 76.6%, respectively.

  5. Embedded High Performance Scalable Computing Systems

    National Research Council Canada - National Science Library

    Ngo, David

    2003-01-01

    The Embedded High Performance Scalable Computing Systems (EHPSCS) program is a cooperative agreement between Sanders, A Lockheed Martin Company and DARPA that ran for three years, from Apr 1995 - Apr 1998...

  6. Investigation on Reliability and Scalability of an FBG-Based Hierarchical AOFSN

    Directory of Open Access Journals (Sweden)

    Li-Mei Peng

    2010-03-01

    Full Text Available The reliability and scalability of large-scale optical fiber sensor networks (AOFSN) are considered in this paper. The AOFSN network consists of three-level hierarchical sensor network architectures. The first two levels consist of active interrogation and remote nodes (RNs), and the third level, called the sensor subnet (SSN), consists of passive Fiber Bragg Gratings (FBGs) and a few switches. The switch architectures in the RN and various SSNs to improve the reliability and scalability of AOFSN are studied. Two SSNs with a regular topology are proposed to support simple routing and scalability in AOFSN: square-based sensor cells (SSC) and pentagon-based sensor cells (PSC). The reliability and scalability are evaluated in terms of the available sensing coverage in the case of one or multiple link failures.

  7. Scalable Coverage Maintenance for Dense Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Jun Lu

    2007-06-01

    Full Text Available Owing to numerous potential applications, wireless sensor networks have been attracting significant research effort recently. The critical challenge that wireless sensor networks often face is to sustain long-term operation on limited battery energy. Coverage maintenance schemes can effectively prolong network lifetime by selecting and employing a subset of sensors in the network to provide sufficient sensing coverage over a target region. We envision future wireless sensor networks composed of a vast number of miniaturized sensors in exceedingly high density. Therefore, the key issue of coverage maintenance for future sensor networks is the scalability to sensor deployment density. In this paper, we propose a novel coverage maintenance scheme, scalable coverage maintenance (SCOM), which is scalable to sensor deployment density in terms of communication overhead (i.e., number of transmitted and received beacons) and computational complexity (i.e., time and space complexity). In addition, SCOM achieves high energy efficiency and load balancing over different sensors. We have validated our claims through both analysis and simulations.
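
    A minimal sketch of the underlying redundancy test, assuming identical sensing radii and a Monte-Carlo coverage check (SCOM's actual eligibility rule works from local beacons and is more refined):

        import math, random

        R = 1.0  # sensing radius, identical for all sensors (assumption)

        def covered(point, actives):
            return any(math.dist(point, s) <= R for s in actives)

        def redundant(sensor, actives, samples=200):
            # Is every sampled point of my sensing disk already inside an
            # active neighbor's disk? If so, this sensor may sleep.
            for _ in range(samples):
                ang, rad = random.uniform(0, 2 * math.pi), R * math.sqrt(random.random())
                p = (sensor[0] + rad * math.cos(ang), sensor[1] + rad * math.sin(ang))
                if not covered(p, actives):
                    return False
            return True

        sensors = [(random.uniform(0, 3), random.uniform(0, 3)) for _ in range(60)]
        active = []
        for s in sensors:  # greedy pass; a real scheme decides via local beacons
            if not redundant(s, active):
                active.append(s)
        print(len(sensors), "deployed ->", len(active), "kept active")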

  8. DISP: Optimizations towards Scalable MPI Startup

    Energy Technology Data Exchange (ETDEWEB)

    Fu, Huansong [Florida State University, Tallahassee; Pophale, Swaroop S [ORNL; Gorentla Venkata, Manjunath [ORNL; Yu, Weikuan [Florida State University, Tallahassee

    2016-01-01

    Despite the popularity of MPI for high performance computing, the startup of MPI programs faces a scalability challenge as both the execution time and memory consumption increase drastically at scale. We have examined this problem using the collective modules of Cheetah and Tuned in Open MPI as representative implementations. Previous improvements for collectives have focused on algorithmic advances and hardware off-load. In this paper, we examine the startup cost of the collective module within a communicator and explore various techniques to improve its efficiency and scalability. Accordingly, we have developed a new scalable startup scheme with three internal techniques, namely Delayed Initialization, Module Sharing and Prediction-based Topology Setup (DISP). Our DISP scheme greatly benefits the collective initialization of the Cheetah module. At the same time, it helps boost the performance of non-collective initialization in the Tuned module. We evaluate the performance of our implementation on Titan supercomputer at ORNL with up to 4096 processes. The results show that our delayed initialization can speed up the startup of Tuned and Cheetah by an average of 32.0% and 29.2%, respectively, our module sharing can reduce the memory consumption of Tuned and Cheetah by up to 24.1% and 83.5%, respectively, and our prediction-based topology setup can speed up the startup of Cheetah by up to 80%.

  9. SAChES: Scalable Adaptive Chain-Ensemble Sampling.

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ray, Jaideep [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ebeida, Mohamed Salah [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Huang, Maoyi [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hou, Zhangshuan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bao, Jie [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Ren, Huiying [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2017-08-01

    We present the development of a parallel Markov Chain Monte Carlo (MCMC) method called SAChES, Scalable Adaptive Chain-Ensemble Sampling. This capability is targeted to Bayesian calibration of computationally expensive simulation models. SAChES involves a hybrid of two methods: Differential Evolution Monte Carlo followed by Adaptive Metropolis. Both methods involve parallel chains. Differential evolution allows one to explore high-dimensional parameter spaces using loosely coupled (i.e., largely asynchronous) chains. Loose coupling allows the use of large chain ensembles, with far more chains than the number of parameters to explore. This reduces the per-chain sampling burden and enables high-dimensional inversions and the use of computationally expensive forward models. The large number of chains can also ameliorate the impact of silent errors, which may affect only a few chains. The chain ensemble can also be sampled to provide an initial condition when an aberrant chain is re-spawned. Adaptive Metropolis takes the best points from the differential evolution and efficiently homes in on the posterior density. The multitude of chains in SAChES is leveraged to (1) enable efficient exploration of the parameter space; and (2) ensure robustness to silent errors, which may be unavoidable in the extreme-scale computational platforms of the future. This report outlines SAChES, describes four papers that are the result of the project, and discusses some additional results.
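
    The Differential Evolution Monte Carlo phase can be sketched as follows (the Gaussian stand-in posterior and tuning constants are assumptions; the real targets are expensive simulation models):

        import numpy as np

        rng = np.random.default_rng(0)

        def log_post(x):
            # Stand-in posterior: standard Gaussian.
            return -0.5 * np.sum(x**2)

        def de_mc_step(chains, gamma=None, eps=1e-4):
            # Each chain proposes a jump built from the difference of two
            # other randomly chosen chains (loosely coupled updates).
            n, d = chains.shape
            gamma = gamma or 2.38 / np.sqrt(2 * d)  # classic DE-MC scaling
            new = chains.copy()
            for i in range(n):
                a, b = rng.choice([j for j in range(n) if j != i], 2, replace=False)
                prop = chains[i] + gamma * (chains[a] - chains[b]) + eps * rng.standard_normal(d)
                if np.log(rng.random()) < log_post(prop) - log_post(chains[i]):
                    new[i] = prop  # Metropolis accept/reject
            return new

        chains = rng.standard_normal((20, 5))  # many more chains than parameters
        for _ in range(1000):
            chains = de_mc_step(chains)
        print("posterior mean estimate:", chains.mean(axis=0).round(2))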

  10. Blind Cooperative Routing for Scalable and Energy-Efficient Internet of Things

    KAUST Repository

    Bader, Ahmed; Alouini, Mohamed-Slim

    2016-01-01

    Multihop networking is promoted in this paper for energy-efficient and highly-scalable Internet of Things (IoT). Recognizing concerns related to the scalability of classical multihop routing and medium access techniques, the use of blind cooperation

  11. Rate control scheme for consistent video quality in scalable video codec.

    Science.gov (United States)

    Seo, Chan-Won; Han, Jong-Ki; Nguyen, Truong Q

    2011-08-01

    Multimedia data delivered to mobile devices over wireless channels or the Internet are complicated by bandwidth fluctuation and the variety of mobile devices. Scalable video coding has been developed as an extension of H.264/AVC to solve this problem. Since a scalable video codec can adapt the bitstream to channel conditions and terminal types, it is well suited to wired and wireless multimedia communication systems, such as IPTV and streaming services. In such scalable multimedia communication systems, video quality fluctuation degrades the visual perception significantly. It is important to use the target bits efficiently in order to maintain a consistent video quality, or a small distortion variation, throughout the whole video sequence. The scheme proposed in this paper controls video quality in applications supporting scalability, whereas conventional schemes were designed to control video quality in the non-scalable H.264 and MPEG-4 systems. The proposed algorithm decides the quantization parameter of the enhancement layer to maintain a consistent video quality throughout the entire sequence. The video quality of the enhancement layer is controlled based on a closed-form formula which utilizes the residual data and quantization error of the base layer. The simulation results show that the proposed algorithm controls the frame quality of the enhancement layer in a simple operation, where the parameter decision algorithm is applied to each frame.

  12. Scalable, incremental learning with MapReduce parallelization for cell detection in high-resolution 3D microscopy data

    KAUST Repository

    Sung, Chul

    2013-08-01

    Accurate estimation of neuronal count and distribution is central to the understanding of the organization and layout of cortical maps in the brain, and changes in the cell population induced by brain disorders. High-throughput 3D microscopy techniques such as Knife-Edge Scanning Microscopy (KESM) are enabling whole-brain survey of neuronal distributions. Data from such techniques pose serious challenges to quantitative analysis due to the massive, growing, and sparsely labeled nature of the data. In this paper, we present a scalable, incremental learning algorithm for cell body detection that can address these issues. Our algorithm is computationally efficient (linear mapping, non-iterative) and does not require retraining (unlike gradient-based approaches) or retention of old raw data (unlike instance-based learning). We tested our algorithm on our rat brain Nissl data set, showing superior performance compared to an artificial neural network-based benchmark, and also demonstrated robust performance in a scenario where the data set is rapidly growing in size. Our algorithm is also highly parallelizable due to its incremental nature, and we demonstrated this empirically using a MapReduce-based implementation of the algorithm. We expect our scalable, incremental learning approach to be widely applicable to medical imaging domains where there is a constant flux of new data. © 2013 IEEE.
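
    In the spirit of the non-iterative, linear-mapping learner described here, the following sketch keeps only sufficient statistics so old raw data can be discarded (the feature dimension and ridge term are assumptions; the paper's detector is more elaborate):

        import numpy as np

        class IncrementalLinear:
            def __init__(self, dim, ridge=1e-3):
                self.XtX = ridge * np.eye(dim)  # regularized running X^T X
                self.Xty = np.zeros(dim)        # running X^T y

            def partial_fit(self, X, y):
                # One pass over the new batch; no retraining, no retention
                # of old raw data. The sums are also MapReduce-friendly.
                self.XtX += X.T @ X
                self.Xty += X.T @ y
                return self

            def predict(self, X):
                w = np.linalg.solve(self.XtX, self.Xty)  # closed-form, non-iterative
                return X @ w

        rng = np.random.default_rng(1)
        model = IncrementalLinear(dim=8)
        for _ in range(5):  # batches arriving as the data set grows
            X = rng.standard_normal((100, 8))
            y = X @ np.arange(8) + 0.1 * rng.standard_normal(100)
            model.partial_fit(X, y)
        print(model.predict(np.eye(8)).round(1))  # roughly recovers [0, 1, ..., 7]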

  13. Scalable inference for stochastic block models

    KAUST Repository

    Peng, Chengbin

    2017-12-08

    Community detection in graphs is widely used in social and biological networks, and the stochastic block model is a powerful probabilistic tool for describing graphs with community structures. However, in the era of "big data," traditional inference algorithms for such a model are increasingly limited due to their high time complexity and poor scalability. In this paper, we propose a multi-stage maximum likelihood approach to recover the latent parameters of the stochastic block model, in time linear with respect to the number of edges. We also propose a parallel algorithm based on message passing. Our algorithm can overlap communication and computation, providing speedup without compromising accuracy as the number of processors grows. For example, to process a real-world graph with about 1.3 million nodes and 10 million edges, our algorithm requires about 6 seconds on 64 cores of a contemporary commodity Linux cluster. Experiments demonstrate that the algorithm can produce high quality results on both benchmark and real-world graphs. An example of finding more meaningful communities than a popular modularity maximization algorithm is also illustrated.
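
    A toy sketch of maximum-likelihood label updates for a two-block stochastic block model (the paper's multi-stage, message-passing algorithm is considerably more involved; graph size and densities below are assumptions):

        import numpy as np

        rng = np.random.default_rng(2)
        n, p_in, p_out = 60, 0.3, 0.05
        truth = np.repeat([0, 1], n // 2)
        P = np.where(truth[:, None] == truth[None, :], p_in, p_out)
        A = np.triu(rng.random((n, n)) < P, 1)
        A = A | A.T  # sample an undirected graph from the block model

        labels = rng.integers(0, 2, n)  # random initial guess; may need restarts
        for _ in range(20):
            for i in range(n):
                # Assign node i to the block it connects to most densely,
                # the maximum-likelihood move when p_in > p_out.
                counts = [A[i][labels == b].sum() for b in (0, 1)]
                sizes = [max((labels == b).sum(), 1) for b in (0, 1)]
                labels[i] = int(counts[1] / sizes[1] > counts[0] / sizes[0])
        agreement = max((labels == truth).mean(), (labels != truth).mean())
        print("label agreement with ground truth:", round(agreement, 2))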

  14. Scalable Production of Graphene-Based Wearable E-Textiles.

    Science.gov (United States)

    Karim, Nazmul; Afroj, Shaila; Tan, Sirui; He, Pei; Fernando, Anura; Carr, Chris; Novoselov, Kostya S

    2017-12-26

    Graphene-based wearable e-textiles are considered to be promising due to their advantages over traditional metal-based technology. However, the manufacturing process is complex and currently not suitable for industrial scale application. Here we report a simple, scalable, and cost-effective method of producing graphene-based wearable e-textiles through the chemical reduction of graphene oxide (GO) to make stable reduced graphene oxide (rGO) dispersion which can then be applied to the textile fabric using a simple pad-dry technique. This application method allows the potential manufacture of conductive graphene e-textiles at commercial production rates of ∼150 m/min. The graphene e-textile materials produced are durable and washable with acceptable softness/hand feel. The rGO coating enhanced the tensile strength of cotton fabric and also the flexibility due to the increase in strain% at maximum load. We demonstrate the potential application of these graphene e-textiles for wearable electronics with activity monitoring sensor. This could potentially lead to a multifunctional single graphene e-textile garment that can act both as sensors and flexible heating elements powered by the energy stored in graphene textile supercapacitors.

  15. The NIDS Cluster: Scalable, Stateful Network Intrusion Detection on Commodity Hardware

    Energy Technology Data Exchange (ETDEWEB)

    Tierney, Brian L; Vallentin, Matthias; Sommer, Robin; Lee, Jason; Leres, Craig; Paxson, Vern; Tierney, Brian

    2007-09-19

    In this work we present a NIDS cluster as a scalable solution for realizing high-performance, stateful network intrusion detection on commodity hardware. The design addresses three challenges: (i) distributing traffic evenly across an extensible set of analysis nodes in a fashion that minimizes the communication required for coordination, (ii) adapting the NIDS's operation to support coordinating its low-level analysis rather than just aggregating alerts; and (iii) validating that the cluster produces sound results. Prototypes of our NIDS cluster now operate at the Lawrence Berkeley National Laboratory and the University of California at Berkeley. In both environments the clusters greatly enhance the power of the network security monitoring.
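
    The traffic-distribution ingredient of such a cluster can be sketched as symmetric flow hashing, so both directions of a connection reach the same analysis node and per-flow state never has to be shared (hash choice and node count are illustrative):

        import hashlib

        NUM_NODES = 10

        def flow_key(src_ip, src_port, dst_ip, dst_port, proto):
            # Sort the endpoints so A->B and B->A map to the same key.
            a, b = sorted([(src_ip, src_port), (dst_ip, dst_port)])
            return f"{a}|{b}|{proto}"

        def node_for(packet):
            digest = hashlib.sha1(flow_key(*packet).encode()).digest()
            return int.from_bytes(digest[:4], "big") % NUM_NODES

        pkt_fwd = ("10.0.0.1", 5555, "192.168.1.9", 80, "tcp")
        pkt_rev = ("192.168.1.9", 80, "10.0.0.1", 5555, "tcp")
        assert node_for(pkt_fwd) == node_for(pkt_rev)  # both directions co-located
        print("flow assigned to node", node_for(pkt_fwd))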

  16. Scalable Integrated Region-Based Image Retrieval Using IRM and Statistical Clustering.

    Science.gov (United States)

    Wang, James Z.; Du, Yanping

    Statistical clustering is critical in designing scalable image retrieval systems. This paper presents a scalable algorithm for indexing and retrieving images based on region segmentation. The method uses statistical clustering on region features and IRM (Integrated Region Matching), a measure developed to evaluate overall similarity between images…

  17. Scalable fast multipole accelerated vortex methods

    KAUST Repository

    Hu, Qi; Gumerov, Nail A.; Yokota, Rio; Barba, Lorena A.; Duraiswami, Ramani

    2014-01-01

    -node communication and load balance efficiently, with only a small parallel construction overhead. This algorithm can scale to large-sized clusters showing both strong and weak scalability. Careful error and timing trade-off analysis are also performed for the cutoff

  18. Scalability Dilemma and Statistic Multiplexed Computing — A Theory and Experiment

    Directory of Open Access Journals (Sweden)

    Justin Yuan Shi

    2017-08-01

    Full Text Available For the last three decades, end-to-end computing paradigms, such as MPI (Message Passing Interface), RPC (Remote Procedure Call) and RMI (Remote Method Invocation), have been the de facto paradigms for distributed and parallel programming. Despite their successes, applications built using these paradigms suffer because the probability of a crash grows in proportion to the application's size. Checkpoint/restore and backup/recovery are the only means to save otherwise lost critical information. The scalability dilemma is the practical challenge that the probability of data loss increases as the application scales in size. The theoretical significance of this practical challenge is that it undermines the fundamental structure of the scientific discovery process and of mission-critical services in production today. In 1997, the direct use of the end-to-end reference model in distributed programming was recognized as a fallacy. The scalability dilemma was predicted. However, this voice was overrun by the passage of time. Today, the rapidly growing volume of digitized data demands solutions to the increasingly critical scalability challenges. Computing architecture scalability, although loosely defined, is now front and center of large-scale computing efforts. Constrained only by the economic law of diminishing returns, this paper proposes a narrow definition of a Scalable Computing Service (SCS). Three scalability tests are also proposed in order to distinguish service architecture flaws from poor application programming. Scalable data-intensive service requires additional treatments; thus, the data storage is assumed reliable in this paper. A single-sided Statistic Multiplexed Computing (SMC) paradigm is proposed. A UVR (Unidirectional Virtual Ring) SMC architecture is examined under the SCS tests. SMC was designed to circumvent the well-known impossibility of end-to-end paradigms. It relies on the proven statistic multiplexing principle to deliver reliable service

  19. Scalable Resolution Display Walls

    KAUST Repository

    Leigh, Jason; Johnson, Andrew; Renambot, Luc; Peterka, Tom; Jeong, Byungil; Sandin, Daniel J.; Talandis, Jonas; Jagodic, Ratko; Nam, Sungwon; Hur, Hyejung; Sun, Yiwen

    2013-01-01

    This article will describe the progress since 2000 on research and development in 2-D and 3-D scalable resolution display walls that are built from tiling individual lower resolution flat panel displays. The article will describe approaches and trends in display hardware construction, middleware architecture, and user-interaction design. The article will also highlight examples of use cases and the benefits the technology has brought to their respective disciplines. © 1963-2012 IEEE.

  20. Evaluation of 3D printed anatomically scalable transfemoral prosthetic knee.

    Science.gov (United States)

    Ramakrishnan, Tyagi; Schlafly, Millicent; Reed, Kyle B

    2017-07-01

    This case study compares a transfemoral amputee's gait while using the existing Ossur Total Knee 2000 and our novel 3D printed anatomically scalable transfemoral prosthetic knee. The anatomically scalable transfemoral prosthetic knee is 3D printed out of a carbon-fiber and nylon composite that has a gear-mesh coupling with a hard-stop weight-actuated locking mechanism aided by a cross-linked four-bar spring mechanism. This design can be scaled using anatomical dimensions of a human femur and tibia to have a unique fit for each user. The transfemoral amputee who was tested is high functioning and walked on the Computer Assisted Rehabilitation Environment (CAREN) at a self-selected pace. The motion capture and force data that were collected showed distinct differences in the gait dynamics. The data were used to compute the Combined Gait Asymmetry Metric (CGAM), whose scores revealed that gait on the Ossur Total Knee was overall more asymmetric than on the anatomically scalable transfemoral prosthetic knee. The anatomically scalable transfemoral prosthetic knee had higher peak knee flexion that caused a large step-time asymmetry. This made walking on the anatomically scalable transfemoral prosthetic knee more strenuous, owing to the compensatory movements needed to adapt to the different dynamics. This can be overcome by tuning the cross-linked spring mechanism to better emulate the dynamics of the subject. The subject stated that the knee would be good for daily use and has the potential to be adapted as a running knee.

  1. Scalable optical switches for computing applications

    NARCIS (Netherlands)

    White, I.H.; Aw, E.T.; Williams, K.A.; Wang, Haibo; Wonfor, A.; Penty, R.V.

    2009-01-01

    A scalable photonic interconnection network architecture is proposed whereby a Clos network is populated with broadcast-and-select stages. This enables the efficient exploitation of an emerging class of photonic integrated switch fabric. A low distortion space switch technology based on recently

  2. On the scalability of LISP and advanced overlaid services

    OpenAIRE

    Coras, Florin

    2015-01-01

    In just four decades the Internet has gone from a lab experiment to a worldwide, business critical infrastructure that caters to the communication needs of almost a half of the Earth's population. With these figures on its side, arguing against the Internet's scalability would seem rather unwise. However, the Internet's organic growth is far from finished and, as billions of new devices are expected to be joined in the not so distant future, scalability, or lack thereof, is commonly believed ...

  3. Scalable Algorithms for Adaptive Statistical Designs

    Directory of Open Access Journals (Sweden)

    Robert Oehmke

    2000-01-01

    Full Text Available We present a scalable, high-performance solution to multidimensional recurrences that arise in adaptive statistical designs. Adaptive designs are an important class of learning algorithms for a stochastic environment, and we focus on the problem of optimally assigning patients to treatments in clinical trials. While adaptive designs have significant ethical and cost advantages, they are rarely utilized because of the complexity of optimizing and analyzing them. Computational challenges include massive memory requirements, few calculations per memory access, and multiply-nested loops with dynamic indices. We analyze the effects of various parallelization options, and while standard approaches do not work well, with effort an efficient, highly scalable program can be developed. This allows us to solve problems thousands of times more complex than those solved previously, which helps make adaptive designs practical. Further, our work applies to many other problems involving neighbor recurrences, such as generalized string matching.

  4. Scalable fabrication of perovskite solar cells

    Energy Technology Data Exchange (ETDEWEB)

    Li, Zhen; Klein, Talysa R.; Kim, Dong Hoe; Yang, Mengjin; Berry, Joseph J.; van Hest, Maikel F. A. M.; Zhu, Kai

    2018-03-27

    Perovskite materials use earth-abundant elements, have low formation energies for deposition and are compatible with roll-to-roll and other high-volume manufacturing techniques. These features make perovskite solar cells (PSCs) suitable for terawatt-scale energy production with low production costs and low capital expenditure. Demonstrations of performance comparable to that of other thin-film photovoltaics (PVs) and improvements in laboratory-scale cell stability have recently made scale up of this PV technology an intense area of research focus. Here, we review recent progress and challenges in scaling up PSCs and related efforts to enable the terawatt-scale manufacturing and deployment of this PV technology. We discuss common device and module architectures, scalable deposition methods and progress in the scalable deposition of perovskite and charge-transport layers. We also provide an overview of device and module stability, module-level characterization techniques and techno-economic analyses of perovskite PV modules.

  5. Robustness of Distance-to-Default

    DEFF Research Database (Denmark)

    Jessen, Cathrine; Lando, David

    2013-01-01

    Distance-to-default is a remarkably robust measure for ranking firms according to their risk of default. The ranking seems to work despite the fact that the Merton model from which the measure is derived produces default probabilities that are far too small when applied to real data. We use simulations to investigate the robustness of the distance-to-default measure to different model specifications. Overall we find distance-to-default to be robust to a number of deviations from the simple Merton model that involve different asset value dynamics and different default triggering mechanisms. A notable exception is a model with stochastic volatility of assets. In this case both the ranking of firms and the estimated default probabilities using distance-to-default perform significantly worse. We therefore propose a volatility adjustment of the distance-to-default measure that significantly…

  6. Scalable Predictive Analysis in Critically Ill Patients Using a Visual Open Data Analysis Platform.

    Directory of Open Access Journals (Sweden)

    Sven Van Poucke

    Full Text Available With the accumulation of large amounts of health related data, predictive analytics could stimulate the transformation of reactive medicine towards Predictive, Preventive and Personalized (PPPM) Medicine, ultimately affecting both cost and quality of care. However, high-dimensionality and high-complexity of the data involved, prevents data-driven methods from easy translation into clinically relevant models. Additionally, the application of cutting edge predictive methods and data manipulation require substantial programming skills, limiting its direct exploitation by medical domain experts. This leaves a gap between potential and actual data usage. In this study, the authors address this problem by focusing on open, visual environments, suited to be applied by the medical community. Moreover, we review code free applications of big data technologies. As a showcase, a framework was developed for the meaningful use of data from critical care patients by integrating the MIMIC-II database in a data mining environment (RapidMiner) supporting scalable predictive analytics using visual tools (RapidMiner's Radoop extension). Guided by the CRoss-Industry Standard Process for Data Mining (CRISP-DM), the ETL process (Extract, Transform, Load) was initiated by retrieving data from the MIMIC-II tables of interest. As use case, correlation of platelet count and ICU survival was quantitatively assessed. Using visual tools for ETL on Hadoop and predictive modeling in RapidMiner, we developed robust processes for automatic building, parameter optimization and evaluation of various predictive models, under different feature selection schemes. Because these processes can be easily adopted in other projects, this environment is attractive for scalable predictive analytics in health research.

  7. Scalable Predictive Analysis in Critically Ill Patients Using a Visual Open Data Analysis Platform.

    Science.gov (United States)

    Van Poucke, Sven; Zhang, Zhongheng; Schmitz, Martin; Vukicevic, Milan; Laenen, Margot Vander; Celi, Leo Anthony; De Deyne, Cathy

    2016-01-01

    With the accumulation of large amounts of health related data, predictive analytics could stimulate the transformation of reactive medicine towards Predictive, Preventive and Personalized (PPPM) Medicine, ultimately affecting both cost and quality of care. However, high-dimensionality and high-complexity of the data involved, prevents data-driven methods from easy translation into clinically relevant models. Additionally, the application of cutting edge predictive methods and data manipulation require substantial programming skills, limiting its direct exploitation by medical domain experts. This leaves a gap between potential and actual data usage. In this study, the authors address this problem by focusing on open, visual environments, suited to be applied by the medical community. Moreover, we review code free applications of big data technologies. As a showcase, a framework was developed for the meaningful use of data from critical care patients by integrating the MIMIC-II database in a data mining environment (RapidMiner) supporting scalable predictive analytics using visual tools (RapidMiner's Radoop extension). Guided by the CRoss-Industry Standard Process for Data Mining (CRISP-DM), the ETL process (Extract, Transform, Load) was initiated by retrieving data from the MIMIC-II tables of interest. As use case, correlation of platelet count and ICU survival was quantitatively assessed. Using visual tools for ETL on Hadoop and predictive modeling in RapidMiner, we developed robust processes for automatic building, parameter optimization and evaluation of various predictive models, under different feature selection schemes. Because these processes can be easily adopted in other projects, this environment is attractive for scalable predictive analytics in health research.

  8. Scalable Parallel Distributed Coprocessor System for Graph Searching Problems with Massive Data

    Directory of Open Access Journals (Sweden)

    Wanrong Huang

    2017-01-01

    Full Text Available Internet applications, such as network searching, electronic commerce, and modern medical applications, produce and process massive data. Considerable data parallelism exists in the computation processes of data-intensive applications. A traversal algorithm, breadth-first search (BFS), is fundamental in many graph processing applications and metrics as a graph grows in scale. A variety of scientific programming methods have been proposed for accelerating and parallelizing BFS because of the poor temporal and spatial locality caused by inherent irregular memory access patterns. However, new parallel hardware can provide better improvement for scientific methods. To address small-world graph problems, we propose a scalable and novel field-programmable gate array-based heterogeneous multicore system for scientific programming. The core is multithreaded for stream processing, and the InfiniBand communication network is adopted for scalability. We design a binary-search-based address mapping to unify all processor addresses. Within the limits permitted by the Graph500 test bench, after testing a 1D parallel hybrid BFS algorithm, our 8-core, 8-thread-per-core system achieved superior performance and efficiency compared with prior work under the same degree of parallelism. Our system is efficient not as a special acceleration unit but as a processor platform for graph searching applications.
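
    A minimal level-synchronous BFS of the kind benchmarked by Graph500 (pure-Python, single-threaded sketch; the per-vertex frontier expansion is what hardware cores and threads exploit in parallel):

        from collections import defaultdict

        def bfs_parents(edges, root):
            adj = defaultdict(list)
            for u, v in edges:
                adj[u].append(v)
                adj[v].append(u)
            parent = {root: root}
            frontier = [root]
            while frontier:
                next_frontier = []
                for u in frontier:           # each u can be expanded in parallel
                    for v in adj[u]:
                        if v not in parent:  # needs an atomic claim when parallel
                            parent[v] = u
                            next_frontier.append(v)
                frontier = next_frontier
            return parent

        edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
        print(bfs_parents(edges, root=0))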

  9. Model-Based Evaluation Of System Scalability: Bandwidth Analysis For Smartphone-Based Biosensing Applications

    DEFF Research Database (Denmark)

    Patou, François; Madsen, Jan; Dimaki, Maria

    2016-01-01

    Scalability is a design principle often valued for the engineering of complex systems. Scalability is the ability of a system to change the current value of one of its specification parameters. Although targeted frameworks are available for the evaluation of scalability for specific digital systems, … re-engineering of 5 independent system modules, from the replacement of a wireless Bluetooth interface to the revision of the ADC sample-and-hold operation, could help increase system bandwidth.

  10. A Better Banana. IAEA Research Helps Produce Higher-Yielding Robust Variety

    International Nuclear Information System (INIS)

    Durczok, Alessia

    2011-01-01

    Nearly 7 billion people live on our planet today and populations continue to grow. Some of us enjoy better nutrition, longer lives and more robust health than our grandparents did a century ago. At the same time, the UN forecasts an increase of mal- or undernourished people, especially in the developing world. More and better food is essential to combat and overcome malnutrition and hunger. Larger and more stable food supplies will need to be developed for food-deficient areas

  11. Scalability of Sustainable Business Models in Hybrid Organizations

    Directory of Open Access Journals (Sweden)

    Adam Jabłoński

    2016-02-01

    Full Text Available The dynamics of change in modern business create new mechanisms for company management to determine their pursuit and the achievement of their high performance. This performance maintained over a long period of time becomes a source of ensuring business continuity by companies. An ontological being enabling the adoption of such assumptions is a business model that has the ability to generate results in every possible market situation and, moreover, has the features of permanent adaptability. A feature that describes the adaptability of the business model is its scalability. As a property that allows a system to handle more work, and to handle it more efficiently, as the number of its components grows, scalability can be applied to the concept of business models as the company's ability to maintain similar or higher performance through them. Ensuring the company's performance in the long term helps to build the so-called sustainable business model that often balances the objectives of stakeholders and shareholders, and that is created by the implemented principles of value-based management and corporate social responsibility. This perception of business paves the way for building hybrid organizations that integrate business activities with pro-social ones. The combination of an approach typical of hybrid organizations in designing and implementing sustainable business models pursuant to the scalability criterion seems interesting from the cognitive point of view. Today, hybrid organizations are great spaces for building effective and efficient mechanisms for dialogue between business and society. This requires the appropriate business model. The purpose of the paper is to present the conceptualization and operationalization of scalability of sustainable business models that determine the performance of a hybrid organization in the network environment. The paper presents the original concept of applying scalability in sustainable business models with detailed

  12. Spirally Structured Conductive Composites for Highly Stretchable, Robust Conductors and Sensors.

    Science.gov (United States)

    Wu, Xiaodong; Han, Yangyang; Zhang, Xinxing; Lu, Canhui

    2017-07-12

    Flexible and stretchable electronics are highly desirable for next generation devices. However, stretchability and conductivity are fundamentally difficult to combine for conventional conductive composites, which restricts their widespread applications especially as stretchable electronics. Here, we innovatively develop a new class of highly stretchable and robust conductive composites via a simple and scalable structural approach. Briefly, carbon nanotubes are spray-coated onto a self-adhesive rubber film, followed by rolling up the film completely to create a spirally layered structure within the composites. This unique spirally layered structure breaks the typical trade-off between stretchability and conductivity of traditional conductive composites and, more importantly, restrains the generation and propagation of mechanical microcracks in the conductive layer under strain. Benefiting from such structure-induced advantages, the spirally layered composites exhibit high stretchability and flexibility, good conductive stability, and excellent robustness, enabling the composites to serve as highly stretchable conductors (up to 300% strain), versatile sensors for monitoring both subtle and large human activities, and functional threads for wearable electronics. This novel and efficient methodology provides a new design philosophy for manufacturing not only stretchable conductors and sensors but also other stretchable electronics, such as transistors, generators, artificial muscles, etc.

  13. Development of a modular and scalable sensor system for the gathering of position and orientation of moved objects; Entwicklung eines modularen und skalierbaren Sensorsystems zur Erfassung von Position und Orientierung bewegter Objekte

    Energy Technology Data Exchange (ETDEWEB)

    Klingbeil, L.

    2006-02-15

    A modular and scalable sensor system for the estimation of position and orientation of moving objects has been developed and characterized. A sensor unit, which is mounted on the moving object, consists of acceleration, angular-rate and magnetic-field sensors for every spatial axis. Customized Kalman filter algorithms provide a robust, low-latency reconstruction of the sensor's orientation. Additionally, an ultrasound transducer network is used to measure the distance of a sensor unit with respect to several reference points in the room. This allows reconstruction of the absolute position using trilateration methods. The system is scalable with respect to the number of sensor units and the covered tracking volume. It is suitable for various applications, for example the analysis of body movements or head tracking in augmented or virtual reality environments. (orig.)
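
    The trilateration step can be sketched as a linear least-squares solve (the reference-point layout below is an assumption; real ultrasound ranges would carry measurement noise):

        import numpy as np

        def trilaterate(refs, dists):
            # Subtracting the first sphere equation |x - r_i|^2 = d_i^2 from the
            # rest removes the quadratic term and leaves a linear system A x = b.
            r0, d0 = refs[0], dists[0]
            A = 2 * (refs[1:] - r0)
            b = d0**2 - dists[1:]**2 + np.sum(refs[1:]**2, axis=1) - np.sum(r0**2)
            x, *_ = np.linalg.lstsq(A, b, rcond=None)
            return x

        refs = np.array([[0., 0., 0.], [4., 0., 0.], [0., 4., 0.], [0., 0., 3.]])
        true_pos = np.array([1.0, 2.0, 1.5])
        dists = np.linalg.norm(refs - true_pos, axis=1)  # ideal range measurements
        print(trilaterate(refs, dists).round(3))          # -> [1. 2. 1.5]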

  14. Fast & scalable pattern transfer via block copolymer nanolithography

    DEFF Research Database (Denmark)

    Li, Tao; Wang, Zhongli; Schulte, Lars

    2015-01-01

    A fully scalable and efficient pattern transfer process based on block copolymer (BCP) self-assembly directly on various substrates is demonstrated. PS-rich and PDMS-rich poly(styrene-b-dimethylsiloxane) (PS-b-PDMS) copolymers are used to give monolayer sphere morphology after spin-casting … on long-range lateral order, including fabrication of substrates for catalysis, solar cells, sensors, ultrafiltration membranes and templating of semiconductors or metals.

  15. Scalable Motion Estimation Processor Core for Multimedia System-on-Chip Applications

    Science.gov (United States)

    Lai, Yeong-Kang; Hsieh, Tian-En; Chen, Lien-Fei

    2007-04-01

    In this paper, we describe a high-throughput and scalable motion estimation processor architecture for multimedia system-on-chip applications. The number of processing elements (PEs) is scalable according to the variable algorithm parameters and the performance required for different applications. Using the PE rings efficiently and an intelligent memory-interleaving organization, the efficiency of the architecture can be increased. Moreover, using efficient on-chip memories and a data management technique can effectively decrease the power consumption and memory bandwidth. Techniques for reducing the number of interconnections and external memory accesses are also presented. Our results demonstrate that the proposed scalable PE-ringed architecture is a flexible and high-performance processor core in multimedia system-on-chip applications.

  16. Vortex Filaments in Grids for Scalable, Fine Smoke Simulation.

    Science.gov (United States)

    Meng, Zhang; Weixin, Si; Yinling, Qian; Hanqiu, Sun; Jing, Qin; Heng, Pheng-Ann

    2015-01-01

    Vortex modeling can produce attractive visual effects of dynamic fluids, which are widely applicable for dynamic media, computer games, special effects, and virtual reality systems. However, it is challenging to effectively simulate intensive and fine detailed fluids such as smoke with fast increasing vortex filaments and smoke particles. The authors propose a novel vortex filaments in grids scheme in which the uniform grids dynamically bridge the vortex filaments and smoke particles for scalable, fine smoke simulation with macroscopic vortex structures. Using the vortex model, their approach supports the trade-off between simulation speed and scale of details. After computing the whole velocity, external control can be easily exerted on the embedded grid to guide the vortex-based smoke motion. The experimental results demonstrate the efficiency of using the proposed scheme for a visually plausible smoke simulation with macroscopic vortex structures.

  17. Robust Forecasting of Non-Stationary Time Series

    OpenAIRE

    Croux, C.; Fried, R.; Gijbels, I.; Mahieu, K.

    2010-01-01

    This paper proposes a robust forecasting method for non-stationary time series. The time series is modelled using non-parametric heteroscedastic regression, and fitted by a localized MM-estimator, combining high robustness and large efficiency. The proposed method is shown to produce reliable forecasts in the presence of outliers, non-linearity, and heteroscedasticity. In the absence of outliers, the forecasts are only slightly less precise than those based on a localized Least Squares estima...
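
    A rough sketch in this spirit, using iteratively reweighted least squares with a Huber weight in place of the paper's localized MM-estimator (window length, tuning constant, and the simulated series are assumptions):

        import numpy as np

        def huber_w(r, c=1.345):
            # Downweight large standardized residuals so outliers barely move the fit.
            a = np.abs(r)
            return np.where(a <= c, 1.0, c / a)

        def robust_level(window, iters=10):
            mu = np.median(window)  # robust starting point
            for _ in range(iters):
                s = np.median(np.abs(window - mu)) * 1.4826 or 1.0  # MAD scale
                w = huber_w((window - mu) / s)
                mu = np.sum(w * window) / np.sum(w)
            return mu  # serves as the one-step-ahead level forecast

        rng = np.random.default_rng(4)
        series = np.cumsum(rng.normal(0, 0.1, 100))
        series[::17] += 5.0  # inject outliers
        print("forecast:", round(robust_level(series[-20:]), 2))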

  18. Accounting Fundamentals and the Variation of Stock Price: Factoring in the Investment Scalability

    Directory of Open Access Journals (Sweden)

    Sumiyana Sumiyana

    2010-05-01

    Full Text Available This study develops a new return model with respect to accounting fundamentals. The new return model is based on Chen and Zhang (2007). This study takes into account the investment scalability information. Specifically, this study splits the scale of a firm's operations into short-run and long-run investment scalabilities. We document that five accounting fundamentals explain the variation of annual stock return. The factors, comprising book value, earnings yield, short-run and long-run investment scalabilities, and growth opportunities, associate positively with stock price. The remaining factor, which is the pure interest rate, is negatively related to annual stock return. This study finds that inducing short-run and long-run investment scalabilities into the model could improve the degree of association. In other words, they have value relevance. Finally, this study suggests that basic trading strategies will improve if investors revert to the accounting fundamentals. Keywords: accounting fundamentals; book value; earnings yield; growth opportunities; short-run and long-run investment scalabilities; trading strategy; value relevance

  19. Toward robust scalable algebraic multigrid solvers

    International Nuclear Information System (INIS)

    Waisman, Haim; Schroder, Jacob; Olson, Luke; Hiriyur, Badri; Gaidamour, Jeremie; Siefert, Christopher; Hu, Jonathan Joseph; Tuminaro, Raymond Stephen

    2010-01-01

    This talk highlights some multigrid challenges that arise from several application areas, including structural dynamics, fluid flow, and electromagnetics. A general framework is presented to help introduce and understand algebraic multigrid methods based on energy minimization concepts. Connections between algebraic multigrid prolongators and finite element basis functions are made and explored. It is shown how the general algebraic multigrid framework allows one to adapt multigrid ideas to a number of different situations. Examples are given corresponding to linear elasticity and, specifically, to the solution of linear systems associated with extended finite elements for fracture problems.

  20. Robust and Scalable DTLS Session Establishment

    OpenAIRE

    Tiloca, Marco; Gehrmann, Christian; Seitz, Ludwig

    2016-01-01

    The Datagram Transport Layer Security (DTLS) protocol is highly vulnerable to a form of denial-of-service attack (DoS), aimed at establishing a high number of invalid, half-open, secure sessions. Moreover, even when the efficient pre-shared key provisioning mode is considered, the key storage on the server side scales poorly with the number of clients. SICS Swedish ICT has designed a security architecture that efficiently addresses both issues without breaking the current standard.

  1. Scalable and balanced dynamic hybrid data assimilation

    Science.gov (United States)

    Kauranne, Tuomo; Amour, Idrissa; Gunia, Martin; Kallio, Kari; Lepistö, Ahti; Koponen, Sampsa

    2017-04-01

    Scalability of complex weather forecasting suites is dependent on the technical tools available for implementing highly parallel computational kernels, but to an equally large extent also on the dependence patterns between various components of the suite, such as observation processing, data assimilation and the forecast model. Scalability is a particular challenge for 4D variational assimilation methods that necessarily couple the forecast model into the assimilation process and subject this combination to an inherently serial quasi-Newton minimization process. Ensemble-based assimilation methods are naturally more parallel, but large models force ensemble sizes to be small, which results in poor assimilation accuracy, somewhat akin to shooting with a shotgun in a million-dimensional space. The Variational Ensemble Kalman Filter (VEnKF) is an ensemble method that can attain the accuracy of 4D variational data assimilation with a small ensemble size. It achieves this by processing a Gaussian approximation of the current error covariance distribution, instead of a set of ensemble members, analogously to the Extended Kalman Filter (EKF). Ensemble members are re-sampled from a new approximation of that Gaussian distribution every time a new set of observations is processed, which makes VEnKF a dynamic assimilation method. After this, a smoothing step is applied that turns VEnKF into a dynamic Variational Ensemble Kalman Smoother (VEnKS). In this smoothing step, the same process is iterated with frequent re-sampling of the ensemble, but now using past iterations as surrogate observations, until the end result is a smooth and balanced model trajectory. In principle, VEnKF could suffer from similar scalability issues as 4D-Var. However, this can be avoided by isolating the forecast model completely from the minimization process by implementing the latter as a wrapper code whose only link to the model is calling for many parallel and totally independent model runs, all of them

  2. Manufacturing process used to produce long-acting recombinant factor VIII Fc fusion protein.

    Science.gov (United States)

    McCue, Justin; Kshirsagar, Rashmi; Selvitelli, Keith; Lu, Qi; Zhang, Mingxuan; Mei, Baisong; Peters, Robert; Pierce, Glenn F; Dumont, Jennifer; Raso, Stephen; Reichert, Heidi

    2015-07-01

    Recombinant factor VIII Fc fusion protein (rFVIIIFc) is a long-acting coagulation factor approved for the treatment of hemophilia A. Here, the rFVIIIFc manufacturing process and results of studies evaluating product quality and the capacity of the process to remove potential impurities and viruses are described. This manufacturing process utilized readily transferable and scalable unit operations and employed multi-step purification and viral clearance processing, including a novel affinity chromatography adsorbent and a 15 nm pore size virus removal nanofilter. A cell line derived from human embryonic kidney (HEK) 293H cells was used to produce rFVIIIFc. Validation studies evaluated identity, purity, activity, and safety. Process-related impurity clearance and viral clearance spiking studies demonstrate robust and reproducible removal of impurities and viruses, with total viral clearance >8-15 log10 for four model viruses (xenotropic murine leukemia virus, mice minute virus, reovirus type 3, and suid herpes virus 1). Terminal galactose-α-1,3-galactose and N-glycolylneuraminic acid, two non-human glycans, were undetectable in rFVIIIFc. Biochemical and in vitro biological analyses confirmed the purity, activity, and consistency of rFVIIIFc. In conclusion, this manufacturing process produces a highly pure product free of viruses, impurities, and non-human glycan structures, with scale capabilities to ensure a consistent and adequate supply of rFVIIIFc. Copyright © 2015 Biogen. Published by Elsevier Ltd.. All rights reserved.

  3. On the scalability of uncoordinated multiple access for the Internet of Things

    KAUST Repository

    Chisci, Giovanni

    2017-11-16

    The Internet of things (IoT) will entail a massive number of wireless connections with sporadic traffic patterns. To support the IoT traffic, several technologies are evolving to support low power wide area (LPWA) wireless communications. However, LPWA networks rely on variations of uncoordinated spectrum access, either for data transmissions or scheduling requests, thus imposing a scalability problem on the IoT. This paper presents a novel spatiotemporal model to study the scalability of the ALOHA medium access. In particular, the developed mathematical model relies on stochastic geometry and queueing theory to account for the spatial and temporal attributes of the IoT. To this end, the scalability of ALOHA is characterized by the percentile of IoT devices that can be served while keeping their queues stable. The results highlight the scalability problem of ALOHA and quantify the extent to which ALOHA can scale in terms of the number of devices, traffic requirements, and transmission rates.
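    The stability-percentile metric described above can be approximated with a toy discrete-time simulation. The sketch below assumes slotted ALOHA with Bernoulli arrivals and a fixed transmission probability; it is a simplification, not the paper's stochastic-geometry model, and all parameter values are invented for illustration.

```python
# Toy slotted-ALOHA simulation: fraction of devices with stable queues.
# Bernoulli arrivals and a fixed transmit probability; a simplification
# of the paper's spatiotemporal model, with illustrative parameters.
import numpy as np

def stable_fraction(n_devices=200, p_arrival=0.002, p_tx=0.05,
                    slots=200_000, seed=1):
    rng = np.random.default_rng(seed)
    queues = np.zeros(n_devices, dtype=int)
    for _ in range(slots):
        queues += rng.random(n_devices) < p_arrival           # arrivals
        attempts = (queues > 0) & (rng.random(n_devices) < p_tx)
        if attempts.sum() == 1:                               # collision-free slot
            queues[np.argmax(attempts)] -= 1                  # one departure
    # Call a queue "stable" if it stayed short relative to the horizon.
    return np.mean(queues < 50)

print(stable_fraction())  # percentile of devices served with stable queues
```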

  4. Continuity-Aware Scheduling Algorithm for Scalable Video Streaming

    Directory of Open Access Journals (Sweden)

    Atinat Palawan

    2016-05-01

    Full Text Available The consumer demand for retrieving and delivering visual content through consumer electronic devices has increased rapidly in recent years. The quality of video in packet networks is susceptible to certain traffic characteristics: average bandwidth availability, loss, delay, and delay variation (jitter). This paper presents a scheduling algorithm that modifies the stream of scalable video to combat jitter. The algorithm provides unequal look-ahead by safeguarding the base layer (without the need for overhead) of the scalable video. The results of the experiments show that our scheduling algorithm reduces the number of frames with a violated deadline and significantly improves the continuity of the video stream without compromising the average Y Peak Signal-to-Noise Ratio (PSNR).
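    A minimal sketch of the unequal look-ahead idea: give base-layer frames earlier effective deadlines than enhancement-layer frames, so jitter is less likely to violate base-layer playback deadlines. The queue discipline and look-ahead values below are invented for illustration and are not the authors' algorithm.

```python
# Sketch: deadline-driven sender that gives the scalable base layer
# unequal look-ahead, so base-layer frames go out ahead of enhancement
# frames competing for the same slot. Illustrative, not the paper's code.
import heapq

LOOKAHEAD = {"base": 0.200, "enh": 0.050}   # seconds of look-ahead per layer

def schedule(frames):
    """frames: list of (playback_deadline, layer, frame_id).
    Returns frames in transmission order."""
    heap = [(deadline - LOOKAHEAD[layer], layer, fid)
            for deadline, layer, fid in frames]
    heapq.heapify(heap)                      # earliest effective deadline first
    order = []
    while heap:
        _, layer, fid = heapq.heappop(heap)
        order.append((layer, fid))
    return order

frames = [(1.0, "enh", 1), (1.0, "base", 0), (1.2, "enh", 3), (1.2, "base", 2)]
print(schedule(frames))  # base frames precede same-deadline enhancement frames
```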

  5. Robust Optimization Model for Production Planning Problem under Uncertainty

    Directory of Open Access Journals (Sweden)

    Pembe GÜÇLÜ

    2017-01-01

    Full Text Available Business conditions change very quickly, and accounting for the uncertainty engendered by such changes has become almost a rule in planning. Robust optimization techniques, which are methods of handling uncertainty, produce results that are less sensitive to changing conditions. Production planning, in its most basic definition, is deciding which products to produce, when, and in what quantities. The modeling and solution of production planning problems depend on the structure of the production processes, parameters, and variables. In this paper, we aim to formulate and apply a scenario-based robust optimization model for a capacitated two-stage multi-product production planning problem under parameter and demand uncertainty. To this end, the production planning problem of a textile company operating in İzmir has been modeled and solved, and the results of the deterministic scenarios and the robust method have been compared. The robust method provides a production plan with a higher cost, but one that remains close to feasible and optimal across most future scenarios.
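    One common way to write such a scenario-based robust model is the Mulvey-style formulation below; this generic form is an assumption for illustration, not necessarily the exact model used in the paper.

```latex
\min_{x \ge 0} \; \sum_{s \in S} p_s\, c_s^{\top} x
  \;+\; \lambda \sum_{s \in S} p_s \Bigl| c_s^{\top} x - \sum_{s' \in S} p_{s'}\, c_{s'}^{\top} x \Bigr|
  \;+\; \omega \sum_{s \in S} p_s\, \delta_s
\quad \text{s.t.} \quad A x \le b, \qquad D_s x + \delta_s \ge d_s \;\; \forall s \in S,
```

    where p_s is the probability of scenario s, the lambda term rewards solution robustness (low cost variability across scenarios), and the omega term rewards model robustness (small infeasibility delta_s under the scenario demands d_s).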

  6. Scalable air cathode microbial fuel cells using glass fiber separators, plastic mesh supporters, and graphite fiber brush anodes

    KAUST Repository

    Zhang, Xiaoyuan

    2011-01-01

    The combined use of brush anodes, glass fiber (GF1) separators, and plastic mesh supporters was investigated here for the first time to create a scalable microbial fuel cell architecture. Separators prevented short circuiting of closely-spaced electrodes, and cathode supporters were used to avoid water gaps between the separator and cathode that can reduce power production. The maximum power density with a separator and supporter and a single cathode was 75±1 W/m3. Removing the separator decreased power by 8%. Adding a second cathode increased power to 154±1 W/m3. Current was further increased by connecting two MFCs in parallel. These results show that brush anodes, combined with a glass fiber separator and a plastic mesh supporter, produce a useful MFC architecture that is inherently scalable due to good insulation between the electrodes and a compact architecture. © 2010 Elsevier Ltd.

  7. Scalable Partitioning Algorithms for FPGAs With Heterogeneous Resources

    National Research Council Canada - National Science Library

    Selvakkumaran, Navaratnasothie; Ranjan, Abhishek; Raje, Salil; Karypis, George

    2004-01-01

    As FPGA densities increase, partitioning-based FPGA placement approaches are becoming increasingly important as they can be used to provide high-quality and computationally scalable placement solutions...

  8. Scalable Video Coding with Interlayer Signal Decorrelation Techniques

    Directory of Open Access Journals (Sweden)

    Yang Wenxian

    2007-01-01

    Full Text Available Scalability is one of the essential requirements in the compression of visual data for present-day multimedia communications and storage. The basic building block for providing the spatial scalability in the scalable video coding (SVC standard is the well-known Laplacian pyramid (LP. An LP achieves the multiscale representation of the video as a base-layer signal at lower resolution together with several enhancement-layer signals at successive higher resolutions. In this paper, we propose to improve the coding performance of the enhancement layers through efficient interlayer decorrelation techniques. We first show that, with nonbiorthogonal upsampling and downsampling filters, the base layer and the enhancement layers are correlated. We investigate two structures to reduce this correlation. The first structure updates the base-layer signal by subtracting from it the low-frequency component of the enhancement layer signal. The second structure modifies the prediction in order that the low-frequency component in the new enhancement layer is diminished. The second structure is integrated in the JSVM 4.0 codec with suitable modifications in the prediction modes. Experimental results with some standard test sequences demonstrate coding gains up to 1 dB for I pictures and up to 0.7 dB for both I and P pictures.

  9. Scalable Atomistic Simulation Algorithms for Materials Research

    Directory of Open Access Journals (Sweden)

    Aiichiro Nakano

    2002-01-01

    Full Text Available A suite of scalable atomistic simulation programs has been developed for materials research based on space-time multiresolution algorithms. Design and analysis of parallel algorithms are presented for molecular dynamics (MD) simulations and quantum-mechanical (QM) calculations based on density functional theory. Performance tests have been carried out on 1,088-processor Cray T3E and 1,280-processor IBM SP3 computers. The linear-scaling algorithms have enabled 6.44-billion-atom MD and 111,000-atom QM calculations on 1,024 SP3 processors with parallel efficiency well over 90%. These production-quality programs also feature wavelet-based computational-space decomposition for adaptive load balancing, space-filling-curve-based adaptive data compression with a user-defined error bound for scalable I/O, and octree-based fast visibility culling for immersive and interactive visualization of massive simulation data.

  10. Ergatis: a web interface and scalable software system for bioinformatics workflows

    Science.gov (United States)

    Orvis, Joshua; Crabtree, Jonathan; Galens, Kevin; Gussman, Aaron; Inman, Jason M.; Lee, Eduardo; Nampally, Sreenath; Riley, David; Sundaram, Jaideep P.; Felix, Victor; Whitty, Brett; Mahurkar, Anup; Wortman, Jennifer; White, Owen; Angiuoli, Samuel V.

    2010-01-01

    Motivation: The growth of sequence data has been accompanied by an increasing need to analyze data on distributed computer clusters. The use of these systems for routine analysis requires scalable and robust software for data management of large datasets. Software is also needed to simplify data management and make large-scale bioinformatics analysis accessible and reproducible to a wide class of target users. Results: We have developed a workflow management system named Ergatis that enables users to build, execute and monitor pipelines for computational analysis of genomics data. Ergatis contains preconfigured components and template pipelines for a number of common bioinformatics tasks such as prokaryotic genome annotation and genome comparisons. Outputs from many of these components can be loaded into a Chado relational database. Ergatis was designed to be accessible to a broad class of users and provides a user friendly, web-based interface. Ergatis supports high-throughput batch processing on distributed compute clusters and has been used for data management in a number of genome annotation and comparative genomics projects. Availability: Ergatis is an open-source project and is freely available at http://ergatis.sourceforge.net Contact: jorvis@users.sourceforge.net PMID:20413634

  11. Scalable DeNoise-and-Forward in Bidirectional Relay Networks

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Krigslund, Rasmus; Popovski, Petar

    2010-01-01

    In this paper a scalable relaying scheme is proposed based on an existing concept called DeNoise-and-Forward (DNF). We call it Scalable DNF (S-DNF), and it targets the scenario with multiple communication flows through a single common relay. The idea of the scheme is to combine packets at the relay in order to save transmissions. To ensure decodability at the end-nodes, a priori information about the content of the combined packets must be available. This is gathered during the initial transmissions to the relay. The trade-off between decodability and the number of necessary transmissions is analysed...
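    The packet-combining idea can be illustrated with the classic bitwise-XOR example of network coding at a two-way relay; this generic example shows why a priori knowledge of packet content enables decoding at the end-nodes, but it is not the exact S-DNF combining rule.

```python
# Two-way relay with XOR packet combining (generic network-coding sketch,
# not the exact S-DNF rule). Each end node holds its own packet a priori,
# so one broadcast of (a XOR b) replaces two separate relay transmissions.
def xor_bytes(p, q):
    return bytes(x ^ y for x, y in zip(p, q))

pkt_a = b"hello from A"
pkt_b = b"hi from B   "            # padded to equal length for the toy example

relay_broadcast = xor_bytes(pkt_a, pkt_b)

# Node A decodes B's packet using its own packet as side information:
assert xor_bytes(relay_broadcast, pkt_a) == pkt_b
# Node B symmetrically recovers A's packet:
assert xor_bytes(relay_broadcast, pkt_b) == pkt_a
```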

  12. Robustness of holonomic quantum gates

    International Nuclear Information System (INIS)

    Solinas, P.; Zanardi, P.; Zanghi, N.

    2005-01-01

    Full text: If the driving field fluctuates during the quantum evolution, this produces errors in the applied operator. Holonomic (and geometric) quantum gates are believed to be robust against certain kinds of noise. Because of their geometrical dependence, holonomic operators can be robust against this kind of noise; in fact, if the fluctuations are fast enough they cancel out, leaving the final operator unchanged. I present numerical studies of holonomic quantum gates subject to this parametric noise; the fidelity between the noisy and ideal evolutions is calculated for different noise correlation times. The holonomic quantum gates appear robust not only for fast fluctuating fields but also for slow fluctuating fields. These results can be explained by the geometrical nature of the holonomic operator: for fast fluctuating fields the fluctuations are canceled out, while slow fluctuating fields do not perturb the loop in the parameter space. (author)

  13. A scalable approach to modeling groundwater flow on massively parallel computers

    International Nuclear Information System (INIS)

    Ashby, S.F.; Falgout, R.D.; Tompson, A.F.B.

    1995-12-01

    We describe a fully scalable approach to the simulation of groundwater flow on a hierarchy of computing platforms, ranging from workstations to massively parallel computers. Specifically, we advocate the use of scalable conceptual models in which the subsurface model is defined independently of the computational grid on which the simulation takes place. We also describe a scalable multigrid algorithm for computing the groundwater flow velocities. We are thus able to leverage both the engineer's time spent developing the conceptual model and the computing resources used in the numerical simulation. We have successfully employed this approach at the LLNL site, where we have run simulations ranging in size from just a few thousand spatial zones (on workstations) to more than eight million spatial zones (on the CRAY T3D), all using the same conceptual model.
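    The multigrid algorithm mentioned rests on the textbook two-grid correction, sketched below in generic notation (smoother S, restriction R, prolongation P, coarse operator A_c); applied recursively to the coarse problem, it yields the O(N) V-cycle that underlies the scalability claim. This is the standard scheme, not the authors' specific solver.

```latex
x \leftarrow S(x, b)          % pre-smoothing (e.g., a few Jacobi sweeps)
r_c = R\,(b - A x)            % restrict the fine-grid residual
e_c = A_c^{-1} r_c            % (approximately) solve the coarse problem
x \leftarrow x + P\, e_c      % prolongate and apply the coarse correction
x \leftarrow S(x, b)          % post-smoothing
```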

  14. Constraint Solver Techniques for Implementing Precise and Scalable Static Program Analysis

    DEFF Research Database (Denmark)

    Zhang, Ye

    solver using unification we could make a program analysis easier to design and implement, much more scalable, and still as precise as expected. We present an inclusion constraint language with explicit equality constructs for specifying program analysis problems, and a parameterized framework... developers to build reliable software systems more quickly and with fewer bugs or security defects. While designing and implementing a program analysis remains hard work, making it both scalable and precise is even more challenging. In this dissertation, we show that with a general inclusion constraint... data flow analyses for the C language, we demonstrate that a large number of equivalences can be detected by off-line analyses, which can then be used by a constraint solver to significantly improve the scalability of an analysis without sacrificing any precision.

  15. Algorithmic psychometrics and the scalable subject.

    Science.gov (United States)

    Stark, Luke

    2018-04-01

    Recent public controversies, ranging from the 2014 Facebook 'emotional contagion' study to psychographic data profiling by Cambridge Analytica in the 2016 American presidential election, Brexit referendum and elsewhere, signal watershed moments in which the intersecting trajectories of psychology and computer science have become matters of public concern. The entangled history of these two fields grounds the application of applied psychological techniques to digital technologies, and an investment in applying calculability to human subjectivity. Today, a quantifiable psychological subject position has been translated, via 'big data' sets and algorithmic analysis, into a model subject amenable to classification through digital media platforms. I term this position the 'scalable subject', arguing it has been shaped and made legible by algorithmic psychometrics - a broad set of affordances in digital platforms shaped by psychology and the behavioral sciences. In describing the contours of this 'scalable subject', this paper highlights the urgent need for renewed attention from STS scholars on the psy sciences, and on a computational politics attentive to psychology, emotional expression, and sociality via digital media.

  16. Scalable Simulation of Electromagnetic Hybrid Codes

    International Nuclear Information System (INIS)

    Perumalla, Kalyan S.; Fujimoto, Richard; Karimabadi, Dr. Homa

    2006-01-01

    New discrete-event formulations of physics simulation models are emerging that can outperform models based on traditional time-stepped techniques. Detailed simulation of the Earth's magnetosphere, for example, requires execution of sub-models that are at widely differing timescales. In contrast to time-stepped simulation which requires tightly coupled updates to entire system state at regular time intervals, the new discrete event simulation (DES) approaches help evolve the states of sub-models on relatively independent timescales. However, parallel execution of DES-based models raises challenges with respect to their scalability and performance. One of the key challenges is to improve the computation granularity to offset synchronization and communication overheads within and across processors. Our previous work was limited in scalability and runtime performance due to the parallelization challenges. Here we report on optimizations we performed on DES-based plasma simulation models to improve parallel performance. The net result is the capability to simulate hybrid particle-in-cell (PIC) models with over 2 billion ion particles using 512 processors on supercomputing platforms

  17. Scalable, full-colour and controllable chromotropic plasmonic printing

    Science.gov (United States)

    Xue, Jiancai; Zhou, Zhang-Kai; Wei, Zhiqiang; Su, Rongbin; Lai, Juan; Li, Juntao; Li, Chao; Zhang, Tengwei; Wang, Xue-Hua

    2015-01-01

    Plasmonic colour printing has drawn wide attention as a promising candidate for the next-generation colour-printing technology. However, an efficient approach to realize full colour and scalable fabrication is still lacking, which prevents plasmonic colour printing from practical applications. Here we present a scalable and full-colour plasmonic printing approach by combining conjugate twin-phase modulation with a plasmonic broadband absorber. More importantly, our approach also demonstrates controllable chromotropic capability, that is, the ability to undergo reversible colour transformations. This chromotropic capability affords enormous potential for building functionalized prints for anticounterfeiting, special labels, and high-density data encryption storage. With such excellent performance in functional colour applications, this colour-printing approach could pave the way for plasmonic colour printing in real-world commercial utilization. PMID:26567803

  18. Scalable Domain Decomposed Monte Carlo Particle Transport

    Energy Technology Data Exchange (ETDEWEB)

    O' Brien, Matthew Joseph [Univ. of California, Davis, CA (United States)

    2013-12-05

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.

  19. Development of Robust Metal-Supported SOFCs and Stack Components in EU METSAPP Consortium

    DEFF Research Database (Denmark)

    Sudireddy, Bhaskar Reddy; Nielsen, Jimmi; Persson, Åsa Helen

    2017-01-01

    The METSAPP project has been executed with the overall aim of developing advanced metal-supported cells and stacks based on a robust, reliable and up-scalable technology. During the project, oxidation-resistant nanostructured anodes based on modified SrTiO3 were developed and integrated into MS... and the best combination of performance and stability was observed with doped SrTiO3-based anode designs. Furthermore, numerical models to understand the corrosion behavior of the MS-SOFCs were developed and validated. Finally, the cost-effective concept of coated metal interconnects was developed, which resulted... in a 90% reduction in Cr evaporation, a three times lower Cr2O3 scale thickness, and increased lifetime. The possibility of assembling these cells into two radically different stack designs was demonstrated.

  20. Robust quantum gates between trapped ions using shaped pulses

    Energy Technology Data Exchange (ETDEWEB)

    Zou, Ping, E-mail: zouping@m.scnu.edu.cn; Zhang, Zhi-Ming, E-mail: zmzhang@scnu.edu.cn

    2015-12-18

    We improve two existing entangling gate schemes between trapped-ion qubits immersed in a large linear crystal. Building on existing two-qubit gate schemes that apply segmented forces to individually addressed qubits, we present a systematic method to optimize the shapes of the forces to suppress the dominant source of infidelity. The spin-dependent forces in the scheme can come from periodic photon kicks or from continuous optical pulses. The entangling gates are fast, robust, and have high fidelity. They can be used to implement scalable quantum computation and quantum simulation. - Highlights: • We present a systematic method to optimize the shape of the pulses to decouple qubits from intermediary motional modes. • Our optimized scheme can be applied to both the ultrafast gate and the fast gate. • Our optimized scheme can suppress the dominant source of infidelity to arbitrary order. • When the number of trapped ions increases, the number of needed segments increases slowly.

  1. Mindfulness and compassion: an examination of mechanism and scalability.

    Directory of Open Access Journals (Sweden)

    Daniel Lim

    Full Text Available Emerging evidence suggests that meditation engenders prosocial behaviors meant to benefit others. However, the robustness, underlying mechanisms, and potential scalability of such effects remain open to question. The current experiment employed an ecologically valid situation that exposed participants to a person in visible pain. Following three-week, mobile-app-based training courses in mindfulness meditation or cognitive skills (i.e., an active control condition), participants arrived at a lab individually to complete purported measures of cognitive ability. Upon entering a public waiting area outside the lab that contained three chairs, participants seated themselves in the last remaining unoccupied chair; confederates occupied the other two. As the participant sat and waited, a third confederate using crutches and a large walking boot entered the waiting area while displaying discomfort. Compassionate responding was assessed by whether participants gave up their seat to allow the uncomfortable confederate to sit, thereby relieving her pain. Participants' empathic accuracy was also assessed. As predicted, participants assigned to the mindfulness meditation condition gave up their seats more frequently than did those assigned to the active control group. In addition, empathic accuracy was not increased by mindfulness practice, suggesting that mindfulness-enhanced compassionate behavior does not stem from associated increases in the ability to decode the emotional experiences of others.

  2. Using scalable vector graphics to evolve art

    NARCIS (Netherlands)

    den Heijer, E.; Eiben, A. E.

    2016-01-01

    In this paper, we describe our investigations of the use of scalable vector graphics (SVG) as a genotype representation in evolutionary art. We describe the technical aspects of using SVG in evolutionary art, and explain our custom, SVG-specific operators for initialisation, mutation and crossover. We perform

  3. Robust power spectral estimation for EEG data.

    Science.gov (United States)

    Melman, Tamar; Victor, Jonathan D

    2016-08-01

    Typical electroencephalogram (EEG) recordings often contain substantial artifact. These artifacts, often large and intermittent, can interfere with quantification of the EEG via its power spectrum. To reduce the impact of artifact, EEG records are typically cleaned by a preprocessing stage that removes individual segments or components of the recording. However, such preprocessing can introduce bias, discard available signal, and be labor-intensive. With this motivation, we present a method that uses robust statistics to reduce dependence on preprocessing by minimizing the effect of large intermittent outliers on the spectral estimates. Using the multitaper method (Thomson, 1982) as a starting point, we replaced the final step of the standard power spectrum calculation with a quantile-based estimator, and the Jackknife approach to confidence intervals with a Bayesian approach. The method is implemented in provided MATLAB modules, which extend the widely used Chronux toolbox. Using both simulated and human data, we show that in the presence of large intermittent outliers, the robust method produces improved estimates of the power spectrum, and that the Bayesian confidence intervals yield close-to-veridical coverage factors. The robust method, as compared to the standard method, is less affected by artifact: inclusion of outliers produces fewer changes in the shape of the power spectrum as well as in the coverage factor. In the presence of large intermittent outliers, the robust method can reduce dependence on data preprocessing as compared to standard methods of spectral estimation. Copyright © 2016 Elsevier B.V. All rights reserved.
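    The key substitution described, replacing the across-taper mean with a quantile-based estimate, is easy to sketch with SciPy's DPSS tapers. The code below is a simplified stand-in for the provided MATLAB/Chronux modules: it uses the median as the simplest quantile and divides by ln 2, a standard bias correction when per-taper estimates are approximately exponentially distributed at each frequency.

```python
# Robust multitaper PSD: replace the mean across tapers with a median.
# Simplified stand-in for the paper's MATLAB/Chronux extension; the ln(2)
# factor corrects the median's bias for roughly exponentially distributed
# per-taper spectral estimates.
import numpy as np
from scipy.signal.windows import dpss

def robust_multitaper_psd(x, fs=1.0, NW=4, K=7):
    N = len(x)
    tapers = dpss(N, NW, Kmax=K)                    # K orthogonal DPSS tapers
    spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2 / (fs * N)
    return np.median(spectra, axis=0) / np.log(2)   # quantile-based combine

rng = np.random.default_rng(0)
t = np.arange(4096)
x = np.sin(2 * np.pi * 0.1 * t) + rng.normal(0, 1, t.size)
x[1000:1020] += 50                                  # large intermittent artifact
psd = robust_multitaper_psd(x)
```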

  4. Robust and Adaptive Control With Aerospace Applications

    CERN Document Server

    Lavretsky, Eugene

    2013-01-01

    Robust and Adaptive Control shows the reader how to produce consistent and accurate controllers that operate in the presence of uncertainties and unforeseen events. Driven by aerospace applications, the focus of the book is primarily on continuous dynamical systems. The text is a three-part treatment, beginning with robust and optimal linear control methods and moving on to a self-contained presentation of the design and analysis of model reference adaptive control (MRAC) for nonlinear uncertain dynamical systems. Recent extensions and modifications to MRAC design are included, as are guidelines for combining robust optimal and MRAC controllers. Features of the text include: case studies that demonstrate the benefits of robust and adaptive control for piloted, autonomous and experimental aerial platforms; detailed background material for each chapter to motivate theoretical developments; realistic examples and simulation data illustrating key features ...

  5. Robustness Metrics: Consolidating the multiple approaches to quantify Robustness

    DEFF Research Database (Denmark)

    Göhler, Simon Moritz; Eifler, Tobias; Howard, Thomas J.

    2016-01-01

    robustness metrics; 3) Functional expectancy and dispersion robustness metrics; and 4) Probability of conformance robustness metrics. The goal was to give a comprehensive overview of robustness metrics and guidance to scholars and practitioners to understand the different types of robustness metrics...

  6. Scalable Open Source Smart Grid Simulator (SGSim)

    DEFF Research Database (Denmark)

    Ebeid, Emad Samuel Malki; Jacobsen, Rune Hylsberg; Stefanni, Francesco

    2017-01-01

    This paper presents an open source smart grid simulator (SGSim). The simulator is based on the open source SystemC Network Simulation Library (SCNSL) and aims to model scalable smart grid applications. SGSim has been tested under different smart grid scenarios that contain hundreds of thousands of households...

  7. Scientific visualization uncertainty, multifield, biomedical, and scalable visualization

    CERN Document Server

    Chen, Min; Johnson, Christopher; Kaufman, Arie; Hagen, Hans

    2014-01-01

    Based on the seminar that took place in Dagstuhl, Germany in June 2011, this contributed volume studies the four important topics within the scientific visualization field: uncertainty visualization, multifield visualization, biomedical visualization and scalable visualization. • Uncertainty visualization deals with uncertain data from simulations or sampled data, uncertainty due to the mathematical processes operating on the data, and uncertainty in the visual representation, • Multifield visualization addresses the need to depict multiple data at individual locations and the combination of multiple datasets, • Biomedical is a vast field with select subtopics addressed from scanning methodologies to structural applications to biological applications, • Scalability in scientific visualization is critical as data grows and computational devices range from hand-held mobile devices to exascale computational platforms. Scientific Visualization will be useful to practitioners of scientific visualization, ...

  8. Scalable quantum memory in the ultrastrong coupling regime.

    Science.gov (United States)

    Kyaw, T H; Felicetti, S; Romero, G; Solano, E; Kwek, L-C

    2015-03-02

    Circuit quantum electrodynamics, consisting of superconducting artificial atoms coupled to on-chip resonators, represents a prime candidate for implementing a scalable quantum computing architecture because of its good tunability and controllability. Furthermore, recent advances have pushed the technology towards the ultrastrong coupling regime of light-matter interaction, where the qubit-resonator coupling strength reaches a considerable fraction of the resonator frequency. Here, we propose a qubit-resonator system operating in that regime as a quantum memory device, and study the storage and retrieval of quantum information in and from the Z2 parity-protected quantum memory within experimentally feasible schemes. We are also convinced that our proposal might pave the way to realizing a scalable quantum random-access memory due to its fast storage and readout performances.

  9. Decentralized control of a scalable photovoltaic (PV)-battery hybrid power system

    International Nuclear Information System (INIS)

    Kim, Myungchin; Bae, Sungwoo

    2017-01-01

    Highlights: • This paper introduces the design and control of a PV-battery hybrid power system. • Reliable and scalable operation of hybrid power systems is achieved. • System and power control are performed without a centralized controller. • Reliability and scalability characteristics are studied in a quantitative manner. • The system control performance is verified using realistic solar irradiation data. - Abstract: This paper presents the design and control of a sustainable standalone photovoltaic (PV)-battery hybrid power system (HPS). The research aims to develop an approach that contributes to an increased level of reliability and scalability for an HPS. To achieve these objectives, a PV-battery HPS with a passively connected battery was studied. A quantitative hardware reliability analysis was performed to assess the effect of the energy storage configuration on the overall system reliability. Instead of requiring feedback control information on load power through a centralized supervisory controller, the power flow in the proposed HPS is managed by a decentralized control approach that takes advantage of the system architecture. Reliable system operation of the HPS is achieved by the proposed control approach without requiring a separate supervisory controller. Furthermore, performance degradation of the energy storage can be prevented by selecting the controller gains such that the charge rate does not exceed operational requirements. The performance of the proposed system architecture with the control strategy was verified by simulation results using realistic irradiance data and a battery model in which its temperature effect was considered. With the objective of supporting scalable operation, details on how the proposed design could be applied were also studied so that the HPS could satisfy potential system growth requirements. Such scalability was verified by simulating various cases that involve connection and disconnection of sources and loads. The

  10. Scalable optical quantum computer

    International Nuclear Information System (INIS)

    Manykin, E A; Mel'nichenko, E V

    2014-01-01

    A way of designing a scalable optical quantum computer based on the photon echo effect is proposed. Individual rare-earth ions Pr3+, regularly located in the lattice of the orthosilicate (Y2SiO5) crystal, are suggested to be used as optical qubits. Operations with qubits are performed using coherent and incoherent laser pulses. The operation protocol includes both the method of measurement-based quantum computations and the technique of optical computations. Modern hybrid photon echo protocols, which provide a sufficient quantum efficiency when reading recorded states, are considered the most promising for quantum computations and communications. (quantum computer)

  11. SuperLU_DIST: A scalable distributed-memory sparse direct solver for unsymmetric linear systems

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xiaoye S.; Demmel, James W.

    2002-03-27

    In this paper, we present the main algorithmic features in the software package SuperLU_DIST, a distributed-memory sparse direct solver for large sets of linear equations. We give in detail our parallelization strategies, with a focus on scalability issues, and demonstrate the parallel performance and scalability on current machines. The solver is based on sparse Gaussian elimination, with an innovative static pivoting strategy proposed earlier by the authors. The main advantage of static pivoting over classical partial pivoting is that it permits a priori determination of data structures and communication patterns for sparse Gaussian elimination, which makes it more scalable on distributed memory machines. Based on this a priori knowledge, we designed highly parallel and scalable algorithms for both LU decomposition and triangular solve, and we show that they are suitable for large-scale distributed memory machines.
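    SuperLU_DIST itself is a C/MPI library, but its factor-once, solve-many use pattern can be illustrated through SciPy, whose splu wraps the sequential SuperLU. Setting diag_pivot_thresh=0.0 below prefers diagonal pivots, loosely in the spirit of the static pivoting described above; this is an illustrative stand-in, not SuperLU_DIST.

```python
# Factor-once / solve-many pattern with SuperLU via SciPy (sequential
# SuperLU, not SuperLU_DIST; shown only to illustrate the use pattern).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

n = 1000
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
lu = splu(A, diag_pivot_thresh=0.0)   # prefer diagonal pivots (cf. static pivoting)

b1, b2 = np.ones(n), np.arange(n, dtype=float)
x1, x2 = lu.solve(b1), lu.solve(b2)   # reuse one factorization for many RHS
print(np.linalg.norm(A @ x1 - b1))
```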

  12. Efficient and scalable purification of cardiomyocytes from human embryonic and induced pluripotent stem cells by VCAM1 surface expression.

    Directory of Open Access Journals (Sweden)

    Hideki Uosaki

    Full Text Available RATIONALE: Human embryonic and induced pluripotent stem cells (hESCs/hiPSCs) are promising cell sources for cardiac regenerative medicine. To realize hESC/hiPSC-based cardiac cell therapy, efficient induction, purification, and transplantation methods for cardiomyocytes are required. Though marker gene transduction or fluorescence-based purification methods have been reported, fast, efficient and scalable purification methods with no genetic modification are essential for clinical purposes but have not yet been established. In this study, we attempted to identify cell surface markers for cardiomyocytes derived from hESCs/hiPSCs. METHOD AND RESULT: We adopted a previously reported differentiation protocol for hESCs based on high-density monolayer culture to hiPSCs with some modification. Cardiac troponin-T (TNNT2)-positive cardiomyocytes appeared robustly with 30-70% efficiency. Using this differentiation method, we screened 242 antibodies for human cell surface molecules to isolate cardiomyocytes derived from hiPSCs and found that an anti-VCAM1 (vascular cell adhesion molecule 1) antibody specifically marked cardiomyocytes. TNNT2-positive cells were detected at day 7-8 after induction and 80% of them became VCAM1-positive by day 11. Approximately 95-98% of VCAM1-positive cells at day 11 were positive for TNNT2. VCAM1 was exclusive with CD144 (endothelium), CD140b (pericytes) and TRA-1-60 (undifferentiated hESCs/hiPSCs). 95% of MACS-purified cells were positive for TNNT2. MACS purification yielded 5-10×10^5 VCAM1-positive cells from a single well of a six-well culture plate. Purified VCAM1-positive cells displayed molecular and functional features of cardiomyocytes. VCAM1 also specifically marked cardiomyocytes derived from other hESC or hiPSC lines. CONCLUSION: We succeeded in efficiently inducing cardiomyocytes from hESCs/hiPSCs and identifying VCAM1 as a potent cell surface marker for robust, efficient and scalable purification of cardiomyocytes from h

  13. Advanced technologies for scalable ATLAS conditions database access on the grid

    CERN Document Server

    Basset, R; Dimitrov, G; Girone, M; Hawkings, R; Nevski, P; Valassi, A; Vaniachine, A; Viegas, F; Walker, R; Wong, A

    2010-01-01

    During massive data reprocessing operations an ATLAS Conditions Database application must support concurrent access from numerous ATLAS data processing jobs running on the Grid. By simulating realistic workflows, ATLAS database scalability tests provided feedback for Conditions DB software optimization and allowed precise determination of the required distributed database resources. In distributed data processing one must take into account the chaotic nature of Grid computing, characterized by peak loads that can be much higher than average access rates. To validate database performance at peak loads, we tested database scalability at very high concurrent job rates. This has been achieved through coordinated database stress tests performed in a series of ATLAS reprocessing exercises at the Tier-1 sites. The goal of the database stress tests is to detect the scalability limits of the hardware deployed at the Tier-1 sites, so that server overload conditions can be safely avoided in a production environment. Our analysi...

  14. Scalable force directed graph layout algorithms using fast multipole methods

    KAUST Repository

    Yunis, Enas Abdulrahman

    2012-06-01

    We present an extension to ExaFMM, a Fast Multipole Method library, as a generalized approach for fast and scalable execution of the Force-Directed Graph Layout algorithm. The Force-Directed Graph Layout algorithm is a physics-based approach to graph layout that treats the vertices V as repelling charged particles with the edges E connecting them acting as springs. Traditionally, the amount of work required in applying the Force-Directed Graph Layout algorithm is O(|V|^2 + |E|) using direct calculations and O(|V| log |V| + |E|) using truncation, filtering, and/or multi-level techniques. Correct application of the Fast Multipole Method allows us to maintain a lower complexity of O(|V| + |E|) while regaining most of the precision lost in other techniques. Solving layout problems for truly large graphs with millions of vertices still requires a scalable algorithm and implementation. We have been able to leverage the scalability and architectural adaptability of the ExaFMM library to create a Force-Directed Graph Layout implementation that runs efficiently on distributed multicore and multi-GPU architectures. © 2012 IEEE.
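    For reference, the charged-particle/spring model is compact. The direct-sum version below makes the O(|V|^2) repulsion term explicit, which is exactly the part the FMM accelerates; the constants, step size, and iteration count are invented for illustration, and this is not the ExaFMM-based implementation.

```python
# Direct-sum force-directed layout: Coulomb-like repulsion between all
# vertex pairs (the O(|V|^2) loop an FMM accelerates) plus linear Hooke
# springs along edges (O(|E|)). Constants are illustrative only.
import numpy as np

def layout(n, edges, iters=200, k=0.1, step=0.02, seed=0):
    pos = np.random.default_rng(seed).random((n, 2))
    E = np.array(edges)
    for _ in range(iters):
        d = pos[:, None, :] - pos[None, :, :]             # pairwise displacements
        dist2 = (d ** 2).sum(-1) + 1e-9
        force = (k * k * d / dist2[..., None]).sum(axis=1)  # O(|V|^2) repulsion
        spring = pos[E[:, 1]] - pos[E[:, 0]]              # O(|E|) attraction
        np.add.at(force, E[:, 0], spring)
        np.add.at(force, E[:, 1], -spring)
        pos += step * force
    return pos

print(layout(4, [(0, 1), (1, 2), (2, 3), (3, 0)])[:2])
```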

  15. Scalability of voltage-controlled filamentary and nanometallic resistance memory devices.

    Science.gov (United States)

    Lu, Yang; Lee, Jong Ho; Chen, I-Wei

    2017-08-31

    Much effort has been devoted to device and materials engineering to realize nanoscale resistance random access memory (RRAM) for practical applications, but a rational physical basis to be relied on to design scalable devices spanning many length scales is still lacking. In particular, there is no clear criterion for switching control in those RRAM devices in which resistance changes are limited to localized nanoscale filaments that experience concentrated heat, electric current and field. Here, we demonstrate voltage-controlled resistance switching, always at a constant characteristic critical voltage, for macro and nanodevices in both filamentary RRAM and nanometallic RRAM, and the latter switches uniformly and does not require a forming process. As a result, area-scalability can be achieved under a device-area-proportional current compliance for the low resistance state of the filamentary RRAM, and for both the low and high resistance states of the nanometallic RRAM. This finding will help design area-scalable RRAM at the nanoscale. It also establishes an analogy between RRAM and synapses, in which signal transmission is also voltage-controlled.

  16. Scalability of DL_POLY on High Performance Computing Platform

    Directory of Open Access Journals (Sweden)

    Mabule Samuel Mabakane

    2017-12-01

    Full Text Available This paper presents a case study on the scalability of several versions of the molecular dynamics code DL_POLY performed on South Africa's Centre for High Performance Computing e1350 IBM Linux cluster, Sun system and Lengau supercomputers. Within this study, different problem sizes were designed and the same chosen systems were employed in order to test the performance of DL_POLY using weak and strong scalability. It was found that the speed-up results for the small systems were better than those for large systems on both Ethernet and Infiniband networks. However, simulations of large systems in DL_POLY performed well using the Infiniband network on the Lengau cluster as compared to the e1350 and Sun supercomputers.
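    For readers unfamiliar with the two test types, the standard definitions are as follows (generic definitions, not specific to this study):

```latex
S(p) = \frac{T_1}{T_p}, \qquad
E_{\mathrm{strong}}(p) = \frac{S(p)}{p} \quad (\text{fixed total problem size}), \qquad
E_{\mathrm{weak}}(p) = \frac{T_1}{T_p} \quad (\text{problem size} \propto p),
```

    where T_p is the runtime on p processors; strong scaling asks how fast a fixed problem gets with more processors, while weak scaling asks whether runtime stays constant as problem size and processor count grow together.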

  17. Scalable fast multipole accelerated vortex methods

    KAUST Repository

    Hu, Qi

    2014-05-01

    The fast multipole method (FMM) is often used to accelerate the calculation of particle interactions in particle-based methods to simulate incompressible flows. To evaluate the most time-consuming kernels, the Biot-Savart equation and the stretching term of the vorticity equation, we mathematically reformulated them so that only two Laplace scalar potentials are used instead of six, automatically ensuring divergence-free far-field computation. Based on this formulation, we developed a new FMM-based vortex method on heterogeneous architectures, which distributes the work between multicore CPUs and GPUs to best utilize the hardware resources and achieve excellent scalability. The algorithm uses new data structures which can dynamically manage inter-node communication and load balance efficiently, with only a small parallel construction overhead. This algorithm can scale to large-sized clusters showing both strong and weak scalability. Careful error and timing trade-off analyses are also performed for the cutoff functions induced by the vortex particle method. Our implementation can perform one time step of the velocity+stretching calculation for one billion particles on 32 nodes in 55.9 seconds, which yields 49.12 Tflop/s.
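    For context, the two kernels named above have the standard vortex-method form (generic notation, not the authors' two-potential reformulation):

```latex
\mathbf{u}(\mathbf{x}) = \frac{1}{4\pi} \int
  \frac{\boldsymbol{\omega}(\mathbf{y}) \times (\mathbf{x} - \mathbf{y})}
       {\lvert \mathbf{x} - \mathbf{y} \rvert^{3}} \, d\mathbf{y},
\qquad
\frac{D\boldsymbol{\omega}}{Dt} = (\boldsymbol{\omega} \cdot \nabla)\,\mathbf{u},
```

    i.e., the Biot-Savart velocity induced by the vorticity field and the vortex-stretching term; both are all-pairs particle interactions, which is what the FMM accelerates.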

  18. Scalability Optimization of Seamless Positioning Service

    Directory of Open Access Journals (Sweden)

    Juraj Machaj

    2016-01-01

    Full Text Available Recently, positioning services have been getting more attention, not only within the research community but also from service providers. From the service providers' point of view, a positioning service that can work seamlessly in all environments, for example indoor, dense urban, and rural, has huge potential to open new markets. However, such a system must not only provide accurate position estimates but also be scalable and resistant to fake positioning requests. In previous work we proposed a modular system that provides seamless positioning in various environments. The system automatically selects the optimal positioning module based on available radio signals and currently consists of three positioning modules: GPS, GSM-based positioning, and Wi-Fi-based positioning. In this paper we propose an algorithm that reduces the time needed for position estimation, improving the scalability of the modular system and thus allowing positioning services to be provided to a larger number of users. Such an improvement is extremely important for real-world applications where a large number of users require position estimates, since positioning error is affected by the response time of the positioning server.
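    The automatic module selection described can be sketched as a simple priority cascade over available radio signals; the thresholds and names below are invented for illustration and are not the authors' algorithm.

```python
# Priority cascade for seamless positioning: pick the best module that
# currently has usable signals. Thresholds and names are illustrative.
def select_module(gps_satellites, wifi_aps, gsm_cells):
    if gps_satellites >= 4:        # outdoor: a GPS fix needs >= 4 satellites
        return "GPS"
    if wifi_aps >= 3:              # indoor/dense urban: Wi-Fi fingerprinting
        return "WiFi"
    if gsm_cells >= 1:             # fallback: coarse GSM cell positioning
        return "GSM"
    return None                    # no usable signal source

print(select_module(gps_satellites=0, wifi_aps=5, gsm_cells=3))  # -> WiFi
```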

  19. PM2006: a highly scalable urban planning management information system--Case study: Suzhou Urban Planning Bureau

    Science.gov (United States)

    Jing, Changfeng; Liang, Song; Ruan, Yong; Huang, Jie

    2008-10-01

    During the urbanization process, when facing the complex requirements of city development, ever-growing urban data, the rapid development of the planning business, and increasing planning complexity, a scalable, extensible urban planning management information system is urgently needed. PM2006 is such a system. In response to the status and problems of urban planning, the scalability and extensibility of PM2006 are introduced, including business-oriented workflow extensibility, the scalability of its DLL-based architecture, flexibility across GIS and database platforms, and scalability of data updating and maintenance. It is verified that the PM2006 system has good extensibility and scalability, can meet the requirements of all levels of administrative divisions, and can adapt to ever-growing changes in the urban planning business. At the end of this paper, the application of PM2006 in the Urban Planning Bureau of Suzhou city is described.

  20. Architecture Knowledge for Evaluating Scalable Databases

    Science.gov (United States)

    2015-01-16

    Architecture Knowledge for Evaluating Scalable Databases (report documentation page; author: Nurgaliev...). Surviving fragments of a database feature-comparison matrix: client language bindings (Scala, Erlang, Javascript); cursor-based queries (Supported / Not Supported); JOIN queries (Supported / Not Supported); complex data types (lists, maps, sets). ...is therefore needed, using technology such as machine learning to extract content from product documentation. The terminology used in the database

  1. Cooperative Scalable Moving Continuous Query Processing

    DEFF Research Database (Denmark)

    Li, Xiaohui; Karras, Panagiotis; Jensen, Christian S.

    2012-01-01

    of the global view and handle the majority of the workload. Meanwhile, moving clients, having basic memory and computation resources, handle small portions of the workload. This model is further enhanced by dynamic region allocation and grid size adjustment mechanisms that reduce the communication and computation cost for both servers and clients. An experimental study demonstrates that our approaches offer better scalability than competitors...

  2. Final Report: Center for Programming Models for Scalable Parallel Computing

    Energy Technology Data Exchange (ETDEWEB)

    Mellor-Crummey, John [William Marsh Rice University

    2011-09-13

    As part of the Center for Programming Models for Scalable Parallel Computing, Rice University collaborated with project partners in the design, development and deployment of language, compiler, and runtime support for parallel programming models to support application development for the “leadership-class” computer systems at DOE national laboratories. Work over the course of this project has focused on the design, implementation, and evaluation of a second-generation version of Coarray Fortran. Research and development efforts of the project have focused on the CAF 2.0 language, compiler, runtime system, and supporting infrastructure. This has involved working with the teams that provide infrastructure for CAF that we rely on, implementing new language and runtime features, producing an open source compiler that enabled us to evaluate our ideas, and evaluating our design and implementation through the use of benchmarks. The report details the research, development, findings, and conclusions from this work.

  3. On the Scalability of Time-predictable Chip-Multiprocessing

    DEFF Research Database (Denmark)

    Puffitsch, Wolfgang; Schoeberl, Martin

    2012-01-01

    Real-time systems need a time-predictable execution platform so that the worst-case execution time can be determined statically. In order to be time-predictable, several advanced processor features, such as out-of-order execution and other forms of speculation, have to be avoided. However, just using simple processors is not an option for embedded systems with high demands on computing power. In order to provide high performance and predictability, we argue for multiprocessor systems with a time-predictable memory interface. In this paper we present the scalability of a Java chip-multiprocessor system that is designed to be time-predictable. Adding time-predictable caches is mandatory to achieve scalability with a shared-memory multiprocessor system. As Java bytecode retains information about the nature of memory accesses, it is possible to implement a memory hierarchy that takes

  4. ATLAS Grid Data Processing: system evolution and scalability

    CERN Document Server

    Golubkov, D; The ATLAS collaboration; Klimentov, A; Minaenko, A; Nevski, P; Vaniachine, A; Walker, R

    2012-01-01

    The production system for Grid Data Processing handles petascale ATLAS data reprocessing and Monte Carlo activities. The production system empowered further data processing steps on the Grid performed by dozens of ATLAS physics groups with coordinated access to computing resources worldwide, including additional resources sponsored by regional facilities. The system provides knowledge management of configuration parameters for massive data processing tasks, reproducibility of results, scalable database access, orchestrated workflow and performance monitoring, dynamic workload sharing, automated fault tolerance and petascale data integrity control. The system evolves to accommodate a growing number of users and new requirements from our contacts in ATLAS main areas: Trigger, Physics, Data Preparation and Software & Computing. To assure scalability, the next generation production system architecture development is in progress. We report on scaling up the production system for a growing number of users provi...

  5. Iterative Integration of Visual Insights during Scalable Patent Search and Analysis.

    Science.gov (United States)

    Koch, S; Bosch, H; Giereth, M; Ertl, T

    2011-05-01

    Patents are of growing importance in current economic markets. Analyzing patent information has, therefore, become a common task for many interest groups. As a prerequisite for patent analysis, extensive search for relevant patent information is essential. Unfortunately, the complexity of patent material inhibits a straightforward retrieval of all relevant patent documents and leads to iterative, time-consuming approaches in practice. The sheer amount of patent data to be analyzed already poses scalability challenges, and further scalability issues arise concerning the diversity of users and the large variety of analysis tasks. With "PatViz", a system for the interactive analysis of patent information has been developed that addresses scalability at various levels. PatViz provides a visual environment that allows insights to be reintegrated interactively into subsequent search iterations, thereby bridging the gap between search and analytic processes. Because of its extensibility, we expect that the approach we have taken can be employed in different problem domains that require high-quality search results regarding their completeness.

  6. Scalable Video Streaming Relay for Smart Mobile Devices in Wireless Networks.

    Science.gov (United States)

    Kwon, Dongwoo; Je, Huigwang; Kim, Hyeonwoo; Ju, Hongtaek; An, Donghyeok

    2016-01-01

    Recently, smart mobile devices and wireless communication technologies such as WiFi, third generation (3G), and long-term evolution (LTE) have been rapidly deployed. Many smart mobile device users can access the Internet wirelessly, which has increased mobile traffic. In 2014, more than half of the mobile traffic around the world was devoted to satisfying the increased demand for video streaming. In this paper, we propose a scalable video streaming relay scheme. Because many collisions degrade the scalability of video streaming, we first separate networks to prevent excessive contention between devices. In addition, the member device controls the video download rate in order to adapt to video playback: if the data are sufficiently buffered, the member device stops the download; if not, it requests additional video data. We implemented apps to evaluate the proposed scheme and conducted experiments with smart mobile devices. The results showed that our scheme improves the scalability of video streaming in a wireless local area network (WLAN).
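    The member-device rate control described (stop when sufficiently buffered, request more otherwise) is a classic watermark scheme with hysteresis; a minimal sketch follows, with invented thresholds since the record does not give the paper's exact levels.

```python
# Buffer-watermark download control on the member device (sketch with
# invented thresholds; the paper's exact rates/levels are not given here).
HIGH_WATERMARK = 30.0   # seconds of buffered video: stop downloading
LOW_WATERMARK = 10.0    # seconds of buffered video: request more data

def control_download(buffered_seconds, downloading):
    if buffered_seconds >= HIGH_WATERMARK:
        return False                 # enough data buffered: pause download
    if buffered_seconds <= LOW_WATERMARK:
        return True                  # running low: request additional data
    return downloading               # in between: keep current state (hysteresis)

print(control_download(35.0, downloading=True))   # -> False
print(control_download(5.0, downloading=False))   # -> True
```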

  8. Asynchronous Checkpoint Migration with MRNet in the Scalable Checkpoint / Restart Library

    Energy Technology Data Exchange (ETDEWEB)

    Mohror, K; Moody, A; de Supinski, B R

    2012-03-20

    Applications running on today's supercomputers tolerate failures by periodically saving their state in checkpoint files on stable storage, such as a parallel file system. Although this approach is simple, the overhead of writing the checkpoints can be prohibitive, especially for large-scale jobs. In this paper, we present initial results of an enhancement to our Scalable Checkpoint/Restart Library (SCR). We employ MRNet, a tree-based overlay network library, to transfer checkpoints from the compute nodes to the parallel file system asynchronously. This enhancement increases application efficiency by removing the need for an application to block while checkpoints are transferred to the parallel file system. We show that the integration of SCR with MRNet can reduce the time spent in I/O operations by as much as 15x. However, our experiments exposed new scalability issues with our initial implementation. We discuss the sources of the scalability problems and our plans to address them.
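    The asynchronous transfer idea, where computation proceeds while a background path drains node-local checkpoints to the parallel file system, can be sketched generically. The code below uses a Python thread and queue as a stand-in; it is not the SCR or MRNet API, and the paths are illustrative.

```python
# Generic sketch of asynchronous checkpoint draining: the application
# writes checkpoints to fast node-local storage and returns immediately,
# while a background thread copies them to the parallel file system.
# A stand-in illustration, not the SCR/MRNet implementation.
import queue, shutil, threading

drain_queue = queue.Queue()

def drainer(pfs_dir):
    while True:
        local_path = drain_queue.get()
        if local_path is None:            # shutdown sentinel
            break
        shutil.copy(local_path, pfs_dir)  # slow transfer, off the critical path
        drain_queue.task_done()

def checkpoint(state_bytes, local_path):
    with open(local_path, "wb") as f:     # fast node-local write
        f.write(state_bytes)
    drain_queue.put(local_path)           # hand off; compute resumes at once

# "/pfs/checkpoints" is an illustrative parallel-file-system path.
t = threading.Thread(target=drainer, args=("/pfs/checkpoints",), daemon=True)
t.start()
```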

  9. Applications of the scalable coherent interface to data acquisition at LHC

    CERN Document Server

    Bogaerts, A; Divià, R; Müller, H; Parkman, C; Ponting, P J; Skaali, B; Midttun, G; Wormald, D; Wikne, J; Falciano, S; Cesaroni, F; Vinogradov, V I; Kristiansen, E H; Solberg, B; Guglielmi, A M; Worm, F H; Bovier, J; Davis, C; CERN. Geneva. Detector Research and Development Committee

    1991-01-01

    We propose to use the Scalable Coherent Interface (SCI) as a very high speed interconnect between LHC detector data buffers and farms of commercial trigger processors. Both the global second and third level trigger can be based on SCI as a reconfigurable and scalable system. SCI is a proposed IEEE standard which uses fast point-to-point links to provide computer-bus like services. It can connect a maximum of 65 536 nodes (memories or processors), providing data transfer rates of up to 1 Gbyte/s. Scalable data acquisition systems can be built using either simple SCI rings or complex switches. The interconnections may be flat cables, coaxial cables, or optical fibres. SCI protocols have been entirely implemented in VLSI, resulting in a significant simplification of data acquisition software. Novel SCI features allow efficient implementation of both data and processor driven readout architectures. In particular, a very efficient implementation of the third level trigger can be achieved by combining SCI's shared ...

  10. Affordable and Scalable Manufacturing of Wearable Multi-Functional Sensory “Skin” for Internet of Everything Applications

    KAUST Repository

    Nassar, Joanna M.

    2017-10-01

    mapping through 3D stacked arrays, or advanced personalized healthcare through the developed “paper watch” prototype. My last approach targets a harsh environment waterproof “marine skin” tagging system, using marine animals as allies to study the marine ecosystem. The “skin” platforms offer real-time and simultaneous monitoring while preserving high performance and robust behaviors under various bending conditions, maintaining system compatibility using cost-effective and scalable approaches for a tangible realization of a truly flexible wearable device.

  11. Study on scalable Coulombic degradation for estimating the lifetime of organic light-emitting devices

    International Nuclear Information System (INIS)

    Zhang Wenwen; Hou Xun; Wu Zhaoxin; Liang Shixiong; Jiao Bo; Zhang Xinwen; Wang Dawei; Chen Zhijian; Gong Qihuang

    2011-01-01

    The luminance decays of organic light-emitting diodes (OLEDs) are investigated with initial luminance of 1000 to 20 000 cd m⁻² through a scalable Coulombic degradation and a stretched exponential decay. We found that the lifetime estimated by scalable Coulombic degradation deviates from the experimental results when the OLEDs work with high initial luminance. By measuring the temperature of the device during degradation, we found that higher device temperatures lead to instabilities of the organic materials in the devices, which is expected to account for the difference between the experimental results and the estimation using scalable Coulombic degradation.
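
    For reference, the two models named above are commonly written as follows in the OLED degradation literature (standard forms with the usual symbols, assumed rather than quoted from this paper): the stretched exponential decay and the scalable Coulombic (acceleration) law

        \frac{L(t)}{L_0} = \exp\!\left[-\left(\frac{t}{\tau}\right)^{\beta}\right],
        \qquad
        L_0^{\,n}\, t_{1/2} \approx \mathrm{const},

    where L_0 is the initial luminance, \tau and \beta the characteristic time and stretching exponent, t_{1/2} the time to half luminance, and n an acceleration exponent; the reported deviation corresponds to this scaling breaking down at high L_0, where device self-heating matters.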

  12. Robust enhancement of high harmonic generation via attosecond control of ionization.

    Science.gov (United States)

    Bruner, Barry D; Krüger, Michael; Pedatzur, Oren; Orenstein, Gal; Azoury, Doron; Dudovich, Nirit

    2018-04-02

    High-harmonic generation (HHG) is a powerful tool to generate coherent attosecond light pulses in the extreme ultraviolet. However, the low conversion efficiency of HHG at the single-atom level poses a significant practical limitation for many applications. Enhancing the efficiency of the process defines one of the primary challenges in the application of HHG as an advanced XUV source. In this work, we demonstrate a new mechanism which, in contrast to current methods, enhances the HHG conversion efficiency purely at the single-particle level. We show that using a bichromatic driving field, sub-optical-cycle control and enhancement of the tunnelling ionization rate can be achieved, leading to enhancements in HHG efficiency by up to two orders of magnitude. Our method advances the prospects of HHG spectroscopy, where isolating the single-particle response is an essential component, and offers a simple route toward scalable, robust XUV sources.

  13. Robust control investigations for equipment loaded panels

    DEFF Research Database (Denmark)

    Aglietti, G.S.; Langley, R.S.; Rogers, E.

    1998-01-01

    This paper develops a modelling technique for equipment loaded panels which directly produces (adequate) models of the underlying dynamics on which to base robust controller design/evaluations. This technique is based on the use of Lagrange's equations of motion and the resulting models...

  14. Scalable power selection method for wireless mesh networks

    CSIR Research Space (South Africa)

    Olwal, TO

    2009-01-01

    This paper addresses the problem of a scalable dynamic power control (SDPC) for wireless mesh networks (WMNs) based on IEEE 802.11 standards. An SDPC model that accounts for architectural complexities witnessed in multiple radios and hops...

  15. Scalable storage for a DBMS using transparent distribution

    NARCIS (Netherlands)

    J.S. Karlsson; M.L. Kersten (Martin)

    1997-01-01

    Scalable Distributed Data Structures (SDDSs) provide a self-managing and self-organizing data storage of potentially unbounded size. This stands in contrast to common distribution schemas deployed in conventional distributed DBMS. SDDSs, however, have mostly been used in synthetic

  16. Scalable multifunction RF system concepts for joint operations

    NARCIS (Netherlands)

    Otten, M.P.G.; Wit, J.J.M. de; Smits, F.M.A.; Rossum, W.L. van; Huizing, A.

    2010-01-01

    RF systems based on modular architectures have the potential of better re-use of technology, decreasing development time, and decreasing life cycle cost. Moreover, modular architectures provide scalability, allowing low cost upgrades and adaptability to different platforms. To achieve maximum

  17. Scalable optical quantum computer

    Energy Technology Data Exchange (ETDEWEB)

    Manykin, E A; Mel'nichenko, E V [Institute for Superconductivity and Solid-State Physics, Russian Research Centre 'Kurchatov Institute', Moscow (Russian Federation)]

    2014-12-31

    A way of designing a scalable optical quantum computer based on the photon echo effect is proposed. Individual rare earth ions Pr³⁺, regularly located in the lattice of the orthosilicate (Y₂SiO₅) crystal, are suggested to be used as optical qubits. Operations with qubits are performed using coherent and incoherent laser pulses. The operation protocol includes both the method of measurement-based quantum computations and the technique of optical computations. Modern hybrid photon echo protocols, which provide a sufficient quantum efficiency when reading recorded states, are considered as most promising for quantum computations and communications. (quantum computer)

  18. The use of cloth fabric diffusion layers for scalable microbial fuel cells

    KAUST Repository

    Luo, Yong

    2013-04-01

    A scalable and pre-manufactured cloth material (Goretex® fabric) was used as a diffusion layer (DL) material as a replacement for a liquid-applied polytetrafluoroethylene (PTFE) DL. Cathodes with the Goretex fabric heat-bonded to the air-side of a carbon cloth cathode (CC-Goretex) produced a maximum power density of 1330 ± 30 mW/m², similar to that using a PTFE DL (1390 ± 70 mW/m², CC-PTFE). This method was also successfully used to produce cathodes made of inexpensive carbon mesh, which resulted in only slightly less power (1180 ± 10 mW/m²) (CM-Goretex). Coulombic efficiencies were a function of current density, with the highest value for CC-PTFE cathodes (63%), similar to CC-Goretex cathodes (61%), and slightly larger than that obtained for the CM-Goretex cathodes (54%). These results show that a commercially available fabric can easily be used as the DL in an MFC, achieving performance similar to that obtained with a more labor-intensive process based on liquid-applied DLs using PTFE. © 2013 Elsevier B.V.

  19. Scalable Techniques for Formal Verification

    CERN Document Server

    Ray, Sandip

    2010-01-01

    This book presents state-of-the-art approaches to formal verification techniques to seamlessly integrate different formal verification methods within a single logical foundation. It should benefit researchers and practitioners looking to get a broad overview of the spectrum of formal verification techniques, as well as approaches to combining such techniques within a single framework. Coverage includes a range of case studies showing how such combination is fruitful in developing a scalable verification methodology for industrial designs. This book outlines both theoretical and practical issue

  20. Scalable Optical-Fiber Communication Networks

    Science.gov (United States)

    Chow, Edward T.; Peterson, John C.

    1993-01-01

    Scalable arbitrary fiber extension network (SAFEnet) is conceptual fiber-optic communication network passing digital signals among variety of computers and input/output devices at rates from 200 Mb/s to more than 100 Gb/s. Intended for use with very-high-speed computers and other data-processing and communication systems in which message-passing delays must be kept short. Inherent flexibility makes it possible to match performance of network to computers by optimizing configuration of interconnections. In addition, interconnections made redundant to provide tolerance to faults.

  1. Bitcoin-NG: A Scalable Blockchain Protocol

    OpenAIRE

    Eyal, Ittay; Gencer, Adem Efe; Sirer, Emin Gun; van Renesse, Robbert

    2015-01-01

    Cryptocurrencies, based on and led by Bitcoin, have shown promise as infrastructure for pseudonymous online payments, cheap remittance, trustless digital asset exchange, and smart contracts. However, Bitcoin-derived blockchain protocols have inherent scalability limits that trade off throughput against latency and withhold the realization of this potential. This paper presents Bitcoin-NG, a new blockchain protocol designed to scale. Based on Bitcoin's blockchain protocol, Bitcoin-NG is By...

  2. SWAP-Assembler: scalable and efficient genome assembly towards thousands of cores.

    Science.gov (United States)

    Meng, Jintao; Wang, Bingqiang; Wei, Yanjie; Feng, Shengzhong; Balaji, Pavan

    2014-01-01

    There is a widening gap between the throughput of massively parallel sequencing machines and the ability to analyze these sequencing data. Traditional assembly methods, which require long execution times and large amounts of memory on a single workstation, limit their use on these massive data. This paper presents a highly scalable assembler named SWAP-Assembler for processing massive sequencing data using thousands of cores, where SWAP is an acronym for Small World Asynchronous Parallel model. In the paper, a mathematical description of the multi-step bi-directed graph (MSG) is provided to resolve the computational interdependence on merging edges, and a highly scalable computational framework for SWAP is developed to automatically perform the parallel computation of all operations. Graph cleaning and contig extension are also included for generating contigs with high quality. Experimental results show that SWAP-Assembler scales up to 2048 cores on the Yanhuang dataset using only 26 minutes, which is better than several other parallel assemblers, such as ABySS, Ray, and PASHA. Results also show that SWAP-Assembler can generate high quality contigs with good N50 size and low error rate; in particular, it generated the longest N50 contig sizes for the Fish and Yanhuang datasets. In this paper, we presented a highly scalable and efficient genome assembly software, SWAP-Assembler. Compared with several other assemblers, it showed very good performance in terms of scalability and contig quality. This software is available at: https://sourceforge.net/projects/swapassembler.

  3. GoFFish: Graph-Oriented Framework for Foresight and Insight Using Scalable Heuristics

    Science.gov (United States)

    2015-09-01

    A. Biem, E. Bouillet, H. Feng, A. Ranganathan, A. Riabov, O. Verscheure, H. Koutsopoulos, and C. Moran, "IBM InfoSphere Streams for scalable, real-time, intelligent…"; Journal of Systems and Software, Elsevier, 2013, vol. 86, no. 1, pp. 2-11.

  4. Deterministic and robust generation of single photons from a single quantum dot with 99.5% indistinguishability using adiabatic rapid passage.

    Science.gov (United States)

    Wei, Yu-Jia; He, Yu-Ming; Chen, Ming-Cheng; Hu, Yi-Nan; He, Yu; Wu, Dian; Schneider, Christian; Kamp, Martin; Höfling, Sven; Lu, Chao-Yang; Pan, Jian-Wei

    2014-11-12

    Single photons are attractive candidates for quantum bits (qubits) in quantum computation and are the best messengers in quantum networks. Future scalable, fault-tolerant photonic quantum technologies demand both stringently high levels of photon indistinguishability and generation efficiency. Here, we demonstrate deterministic and robust generation of pulsed resonance fluorescence single photons from a single semiconductor quantum dot using adiabatic rapid passage, a method robust against fluctuations of the driving pulse area and dipole moments of solid-state emitters. The emitted photons are background-free, have a vanishing two-photon emission probability of 0.3% and a raw (corrected) two-photon Hong-Ou-Mandel interference visibility of 97.9% (99.5%), reaching a precision that places single photons at the threshold for fault-tolerant surface-code quantum computing. This single-photon source can be readily scaled up to multiphoton entanglement and used for quantum metrology, boson sampling, and linear optical quantum computing.

  5. Highly scalable Ab initio genomic motif identification

    KAUST Repository

    Marchand, Benoit; Bajic, Vladimir B.; Kaushik, Dinesh

    2011-01-01

    We present results of scaling an ab initio motif family identification system, Dragon Motif Finder (DMF), to 65,536 processor cores of IBM Blue Gene/P. DMF seeks groups of mutually similar polynucleotide patterns within a set of genomic sequences and builds various motif families from them. Such information is of relevance to many problems in life sciences. Prior attempts to scale such ab initio motif-finding algorithms achieved limited success. We solve the scalability issues using a combination of mixed-mode MPI-OpenMP parallel programming, master-slave work assignment, multi-level workload distribution, multi-level MPI collectives, and serial optimizations. While the scalability of our algorithm was excellent (94% parallel efficiency on 65,536 cores relative to 256 cores on a modest-size problem), the final speedup with respect to the original serial code exceeded 250,000 when serial optimizations are included. This enabled us to carry out many large-scale ab initio motif-finding simulations in a few hours while the original serial code would have needed decades of execution time. Copyright 2011 ACM.
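
    The master-slave work assignment mentioned above follows a standard on-demand pattern; a minimal mpi4py sketch of that pattern (not the DMF code, and omitting the OpenMP level and the multi-level workload distribution) might look like this:

        # On-demand master-worker task assignment with mpi4py (illustrative;
        # run with e.g. "mpiexec -n 4 python sketch.py"; the squared-integer
        # "work" is a stand-in for a motif-search batch).
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()
        TASK, STOP, READY = 1, 2, 3

        if rank == 0:                        # master: hand out work on demand
            tasks = list(range(100))
            status = MPI.Status()
            stopped = 0
            while stopped < size - 1:
                comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG, status=status)
                worker = status.Get_source()
                if tasks:
                    comm.send(tasks.pop(), dest=worker, tag=TASK)
                else:
                    comm.send(None, dest=worker, tag=STOP)
                    stopped += 1
        else:                                # worker: request, compute, repeat
            comm.send(None, dest=0, tag=READY)
            status = MPI.Status()
            while True:
                task = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
                if status.Get_tag() == STOP:
                    break
                comm.send(task * task, dest=0, tag=READY)   # result doubles as "ready"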

  6. Screen printing as a scalable and low-cost approach for rigid and flexible thin-film transistors using separated carbon nanotubes.

    Science.gov (United States)

    Cao, Xuan; Chen, Haitian; Gu, Xiaofei; Liu, Bilu; Wang, Wenli; Cao, Yu; Wu, Fanqi; Zhou, Chongwu

    2014-12-23

    Semiconducting single-wall carbon nanotubes are very promising materials in printed electronics due to their excellent mechanical and electrical properties, outstanding printability, and great potential for flexible electronics. Nonetheless, developing scalable and low-cost approaches for manufacturing fully printed high-performance single-wall carbon nanotube thin-film transistors remains a major challenge. Here we report that screen printing, which is a simple, scalable, and cost-effective technique, can be used to produce both rigid and flexible thin-film transistors using separated single-wall carbon nanotubes. Our fully printed top-gated nanotube thin-film transistors on rigid and flexible substrates exhibit decent performance, with mobility up to 7.67 cm² V⁻¹ s⁻¹, on/off ratio of 10⁴-10⁵, minimal hysteresis, and low operation voltage. Mechanical flexibility of the printed thin-film transistors (bent with radius of curvature down to 3 mm) and driving capability for organic light-emitting diodes have been demonstrated. Given the high performance of the fully screen-printed single-wall carbon nanotube thin-film transistors, we believe screen printing stands as a low-cost, scalable, and reliable approach to manufacture high-performance nanotube thin-film transistors for application in display electronics. Moreover, this technique may be used to fabricate thin-film transistors based on other materials for large-area flexible macroelectronics and low-cost display electronics.

  7. The intergroup protocols: Scalable group communication for the internet

    Energy Technology Data Exchange (ETDEWEB)

    Berket, Karlo [Univ. of California, Santa Barbara, CA (United States)]

    2000-12-04

    Reliable group ordered delivery of multicast messages in a distributed system is a useful service that simplifies the programming of distributed applications. Such a service helps to maintain the consistency of replicated information and to coordinate the activities of the various processes. With the increasing popularity of the Internet, there is an increasing interest in scaling the protocols that provide this service to the environment of the Internet. The InterGroup protocol suite, described in this dissertation, provides such a service, and is intended for the environment of the Internet with scalability to large numbers of nodes and high latency links. The InterGroup protocols approach the scalability problem from various directions. They redefine the meaning of group membership, allow voluntary membership changes, add a receiver-oriented selection of delivery guarantees that permits heterogeneity of the receiver set, and provide a scalable reliability service. The InterGroup system comprises several components, executing at various sites within the system. Each component provides part of the services necessary to implement a group communication system for the wide-area. The components can be categorized as: (1) control hierarchy, (2) reliable multicast, (3) message distribution and delivery, and (4) process group membership. We have implemented a prototype of the InterGroup protocols in Java, and have tested the system performance in both local-area and wide-area networks.

  8. A scalable synthesis of highly stable and water dispersible Ag 44(SR)30 nanoclusters

    KAUST Repository

    AbdulHalim, Lina G.; Ashraf, Sumaira; Katsiev, Khabiboulakh; Kirmani, Ahmad R.; Kothalawala, Nuwan; Anjum, Dalaver H.; Abbas, Sikandar Zameer; Amassian, Aram; Stellacci, Francesco; Dass, Amala; Hussain, Irshad; Bakr, Osman

    2013-01-01

    We report the synthesis of atomically monodisperse thiol-protected silver nanoclusters [Ag44(SR)30]m (SR = 5-mercapto-2-nitrobenzoic acid), in which the product nanocluster is highly stable in contrast to previous preparation methods. The method is one-pot, scalable, and produces nanoclusters that are stable in aqueous solution for at least 9 months at room temperature under ambient conditions, with very little degradation to their unique UV-Vis optical absorption spectrum. The composition, size, and monodispersity were determined by electrospray ionization mass spectrometry and analytical ultracentrifugation. The produced nanoclusters are likely to be in a superatom charge-state of m = 4-, due to the fact that their optical absorption spectrum shares most of the unique features of the intense and broadly absorbing nanoparticles identified as [Ag44(SR)30]4- by Harkness et al. (Nanoscale, 2012, 4, 4269). A protocol to transfer the nanoclusters to organic solvents is also described. Using the disperse nanoclusters in organic media, we fabricated solid-state films of [Ag44(SR)30]m that retained all the distinct features of the optical absorption spectrum of the nanoclusters in solution. The films were studied by X-ray diffraction and photoelectron spectroscopy in order to investigate their crystallinity, atomic composition and valence band structure. The stability, scalability, and the film fabrication method demonstrated in this work pave the way towards the crystallization of [Ag44(SR)30]m and its full structural determination by single crystal X-ray diffraction. Moreover, due to their unique and attractive optical properties with multiple optical transitions, we anticipate these clusters to find practical applications in light-harvesting, such as photovoltaics and photocatalysis, which have been hindered so far by the instability of previous generations of the cluster. © 2013 The Royal Society of Chemistry.

  9. Adolescent sexuality education: An appraisal of some scalable ...

    African Journals Online (AJOL)

    Adolescent sexuality education: An appraisal of some scalable interventions for the Nigerian context. VC Pam. Abstract. Most issues around sexual intercourse are highly sensitive topics in Nigeria. Despite the disturbingly high adolescent HIV prevalence and teenage pregnancy rate in Nigeria, sexuality education is ...

  10. Impact of multiplexed reading scheme on nanocrossbar memristor memory's scalability

    International Nuclear Information System (INIS)

    Zhu Xuan; Tang Yu-Hua; Wu Jun-Jie; Yi Xun; Wu Chun-Qing

    2014-01-01

    The nanocrossbar is a potential memory architecture for integrating memristors to achieve large-scale, high-density memory. However, under the currently widely adopted parallel reading scheme, the scalability of nanocrossbar memory is limited, since the overhead of the reading circuits grows in proportion to the size of the nanocrossbar component. In this paper, a multiplexed reading scheme is adopted as the foundation of the discussion. Through HSPICE simulation, we reanalyze the scalability of the nanocrossbar memristor memory by investigating the impact of various circuit parameters on the output voltage swing as the memory scales to larger sizes. We find that multiplexed reading maintains a sufficient noise margin in large nanocrossbar memristor memories. In order to improve the scalability of the memory, memristors with nonlinear I-V characteristics and high LRS (low resistive state) resistance should be adopted. (interdisciplinary physics and related areas of science and technology)

  11. Integration of an intelligent systems behavior simulator and a scalable soldier-machine interface

    Science.gov (United States)

    Johnson, Tony; Manteuffel, Chris; Brewster, Benjamin; Tierney, Terry

    2007-04-01

    As the Army's Future Combat Systems (FCS) introduce emerging technologies and new force structures to the battlefield, soldiers will increasingly face new challenges in workload management. The next generation warfighter will be responsible for effectively managing robotic assets in addition to performing other missions. Studies of future battlefield operational scenarios involving the use of automation, including the specification of existing and proposed technologies, will provide significant insight into potential problem areas regarding soldier workload. The US Army Tank Automotive Research, Development, and Engineering Center (TARDEC) is currently executing an Army technology objective program to analyze and evaluate the effect of automated technologies and their associated control devices with respect to soldier workload. The Human-Robotic Interface (HRI) Intelligent Systems Behavior Simulator (ISBS) is a human performance measurement simulation system that allows modelers to develop constructive simulations of military scenarios with various deployments of interface technologies in order to evaluate operator effectiveness. One such interface is TARDEC's Scalable Soldier-Machine Interface (SMI). The scalable SMI provides a configurable machine interface application that is capable of adapting to several hardware platforms by recognizing the physical space limitations of the display device. This paper describes the integration of the ISBS and Scalable SMI applications, which will ultimately benefit both systems. The ISBS will be able to use the Scalable SMI to visualize the behaviors of virtual soldiers performing HRI tasks, such as route planning, and the scalable SMI will benefit from stimuli provided by the ISBS simulation environment. The paper describes the background of each system and details of the system integration approach.

  12. Exfoliation of non-oxidized graphene flakes for scalable conductive film.

    Science.gov (United States)

    Park, Kwang Hyun; Kim, Bo Hyun; Song, Sung Ho; Kwon, Jiyoung; Kong, Byung Seon; Kang, Kisuk; Jeon, Seokwoo

    2012-06-13

    The increasing demand for graphene has required a new route for its mass production without causing extreme damage. Here we demonstrate a simple and cost-effective intercalation-based exfoliation method for preparing high quality graphene flakes, which form a stable dispersion in organic solvents without any functionalization or surfactant. Successful intercalation of alkali metal between graphite interlayers through liquid-state diffusion from a ternary KCl-NaCl-ZnCl₂ eutectic system is confirmed by X-ray diffraction and X-ray photoelectron spectroscopy. Chemical composition and morphology analyses prove that the graphene flakes preserve their intrinsic properties without any degradation. The graphene flakes remain dispersed in a mixture of pyridine and salts for more than 6 months. We apply these results to produce transparent conducting (∼930 Ω/□ at ∼75% transmission) graphene films using the modified Langmuir-Blodgett method. The overall results suggest that our method can be a scalable (>1 g/batch) and economical route for the synthesis of nonoxidized graphene flakes.

  13. Scalable manufacturing processes with soft materials

    OpenAIRE

    White, Edward; Case, Jennifer; Kramer, Rebecca

    2014-01-01

    The emerging field of soft robotics will benefit greatly from new scalable manufacturing techniques for responsive materials. Currently, most soft robotic examples are fabricated one at a time, using techniques borrowed from lithography and 3D printing to fabricate molds. This limits both the maximum and minimum size of robots that can be fabricated, and hinders batch production, which is critical to gain wider acceptance for soft robotic systems. We have identified electrical structures, ...

  14. Randomized Algorithms for Scalable Machine Learning

    OpenAIRE

    Kleiner, Ariel Jacob

    2012-01-01

    Many existing procedures in machine learning and statistics are computationally intractable in the setting of large-scale data. As a result, the advent of rapidly increasing dataset sizes, which should be a boon yielding improved statistical performance, instead severely blunts the usefulness of a variety of existing inferential methods. In this work, we use randomness to ameliorate this lack of scalability by reducing complex, computationally difficult inferential problems to larger sets o...

  15. Development of the GEM-TPC X-ray Polarimeter with the Scalable Readout System

    Directory of Open Access Journals (Sweden)

    Kitaguchi Takao

    2018-01-01

    We have developed a gaseous Time Projection Chamber (TPC) containing a single-layered foil of a gas electron multiplier (GEM) to open up a new window on cosmic X-ray polarimetry in the 2-10 keV band. The micro-pattern TPC polarimeter in combination with the Scalable Readout System produced by the RD51 collaboration has been built as an engineering model to optimize detector parameters and improve polarimeter sensitivity. The polarimeter was characterized with unpolarized X-rays from an X-ray generator in a laboratory and polarized X-rays on the BL32B2 beamline at the SPring-8 synchrotron radiation facility. Preliminary results show that the polarimeter has a modulation factor comparable to that of a prototype of the flight model.

  16. Robust design optimization using the price of robustness, robust least squares and regularization methods

    Science.gov (United States)

    Bukhari, Hassan J.

    2017-12-01

    In this paper, a framework for robust optimization of mechanical design problems and process systems with parametric uncertainty is presented, using three different approaches. Robust optimization problems are formulated so that the optimal solution is robust, meaning it is minimally sensitive to perturbations in the parameters. The first method uses the price of robustness approach, which assumes the uncertain parameters to be symmetric and bounded; the robustness of the design can be controlled by limiting the number of parameters that can perturb. The second method uses robust least squares to determine the optimal parameters when the data itself, rather than the parameters, is subject to perturbations. The last method manages uncertainty by restricting the perturbations on the parameters to improve sensitivity, similar to Tikhonov regularization. The methods are implemented on two sets of problems, one linear and the other nonlinear, and compared with a prior method based on multiple Monte Carlo simulation runs; the comparison shows that the approach presented in this paper results in better performance.
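
    For orientation, the latter two approaches rest on standard formulations; in the usual notation (not the paper's), robust least squares guards against bounded perturbations of the data matrix, and for an unstructured spectral-norm bound it is known to reduce to a norm-penalized problem, while the Tikhonov-style method solves a regularized least-squares problem:

        \min_x \; \max_{\|\Delta A\|_2 \le \rho} \|(A+\Delta A)x - b\|_2
        \;=\; \min_x \; \|Ax - b\|_2 + \rho\,\|x\|_2,
        \qquad\qquad
        \min_x \; \|Ax - b\|_2^2 + \lambda\,\|x\|_2^2 .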

  17. MicROS-drt: supporting real-time and scalable data distribution in distributed robotic systems.

    Science.gov (United States)

    Ding, Bo; Wang, Huaimin; Fan, Zedong; Zhang, Pengfei; Liu, Hui

    A primary requirement in distributed robotic software systems is the dissemination of data to all interested collaborative entities in a timely and scalable manner. However, providing such a service in a highly dynamic and resource-limited robotic environment is a challenging task, and existing robot software infrastructure has limitations in this aspect. This paper presents a novel robot software infrastructure, micROS-drt, which supports real-time and scalable data distribution. The solution is based on a loosely coupled data publish-subscribe model with the ability to support various time-related constraints. To realize this model, a mature data distribution standard, the data distribution service for real-time systems (DDS), is adopted as the foundation of the transport layer of this software infrastructure. By elaborately adapting and encapsulating the capability of the underlying DDS middleware, micROS-drt can meet the requirement of real-time and scalable data distribution in distributed robotic systems. Evaluation results in terms of scalability, latency jitter and transport priority, as well as the experiment on real robots, validate the effectiveness of this work.

  18. A Testbed for Highly-Scalable Mission Critical Information Systems

    National Research Council Canada - National Science Library

    Birman, Kenneth P

    2005-01-01

    ... systems in a networked environment. Headed by Professor Ken Birman, the project is exploring a novel fusion of classical protocols for reliable multicast communication with a new style of peer-to-peer protocol called scalable "gossip...

  19. Robust and Low-Cost Flame-Treated Wood for High-Performance Solar Steam Generation.

    Science.gov (United States)

    Xue, Guobin; Liu, Kang; Chen, Qian; Yang, Peihua; Li, Jia; Ding, Tianpeng; Duan, Jiangjiang; Qi, Bei; Zhou, Jun

    2017-05-03

    Solar-enabled steam generation has attracted increasing interest in recent years because of its potential applications in power generation, desalination, and wastewater treatment, among others. Recent studies have reported many strategies for promoting the efficiency of steam generation by employing absorbers based on carbon materials or plasmonic metal nanoparticles with well-defined pores. In this work, we report that natural wood can be utilized as an ideal solar absorber after a simple flame treatment. With ultrahigh solar absorbance (∼99%), low thermal conductivity (0.33 W m⁻¹ K⁻¹), and good hydrophilicity, the flame-treated wood can localize the solar heating at the evaporation surface and enable a solar-thermal efficiency of ∼72% under a solar intensity of 1 kW m⁻², and it thus represents a renewable, scalable, low-cost, and robust material for solar steam applications.

  20. A scalable block-preconditioning strategy for divergence-conforming B-spline discretizations of the Stokes problem

    KAUST Repository

    Cortes, Adriano Mauricio

    2016-10-01

    The recently introduced divergence-conforming B-spline discretizations allow the construction of smooth discrete velocity-pressure pairs for viscous incompressible flows that are at the same time inf-sup stable and pointwise divergence-free. When applied to the discretized Stokes problem, these spaces generate a symmetric and indefinite saddle-point linear system. The iterative method of choice to solve such a system is the Generalized Minimum Residual Method. This method lacks robustness, and one remedy is to use preconditioners. For linear systems of saddle-point type, a large family of preconditioners can be obtained by using a block factorization of the system. In this paper, we show how the nesting of "black-box" solvers and preconditioners can be put together in a block triangular strategy to build a scalable block preconditioner for the Stokes system discretized by divergence-conforming B-splines. Besides the known cavity flow problem, we used as benchmarks flows defined on complex geometries: an eccentric annulus and a hollow torus with an eccentric annular cross-section.

  1. Design and thermal performances of a scalable linear Fresnel reflector solar system

    International Nuclear Information System (INIS)

    Zhu, Yanqing; Shi, Jifu; Li, Yujian; Wang, Leilei; Huang, Qizhang; Xu, Gang

    2017-01-01

    Highlights: • A scalable linear Fresnel reflector which can supply different temperatures is proposed. • Inclination design of the mechanical structure is used to reduce the end losses. • The maximum thermal efficiency of 64% is achieved in Guangzhou. - Abstract: This paper proposes a scalable linear Fresnel reflector (SLFR) solar system. The optical mirror field, which contains an array of flat linear mirrors placed close to one another, is designed to eliminate inter-row shading and blocking. A scalable mechanical mirror support which can hold different numbers of mirrors is designed to supply different temperatures. The mechanical structure can be inclined to reduce the end losses. Finally, the thermal efficiency of the SLFR with two stage mirrors was tested. After adjustment, a maximum thermal efficiency of 64% was obtained, and the mean thermal efficiency was higher than that before adjustment. The results indicate that the end losses were reduced effectively by the inclination design and that excellent thermal performance can be obtained by the SLFR after adjustment.

  2. A Massively Scalable Architecture for Instant Messaging & Presence

    NARCIS (Netherlands)

    Schippers, Jorrit; Remke, Anne Katharina Ingrid; Punt, Henk; Wegdam, M.; Haverkort, Boudewijn R.H.M.; Thomas, N.; Bradley, J.; Knottenbelt, W.; Dingle, N.; Harder, U.

    2010-01-01

    This paper analyzes the scalability of Instant Messaging & Presence (IM&P) architectures. We take a queueing-based modelling and analysis approach to find the bottlenecks of the current IM&P architecture at the Dutch social network Hyves, as well as of alternative architectures. We use the

  3. A Unifying Mathematical Framework for Genetic Robustness, Environmental Robustness, Network Robustness and their Trade-offs on Phenotype Robustness in Biological Networks. Part III: Synthetic Gene Networks in Synthetic Biology

    Science.gov (United States)

    Chen, Bor-Sen; Lin, Ying-Po

    2013-01-01

    Robust stabilization and environmental disturbance attenuation are ubiquitous systematic properties that are observed in biological systems at many different levels. The underlying principles for robust stabilization and environmental disturbance attenuation are universal to both complex biological systems and sophisticated engineering systems. In many biological networks, network robustness should be large enough to confer: intrinsic robustness for tolerating intrinsic parameter fluctuations; genetic robustness for buffering genetic variations; and environmental robustness for resisting environmental disturbances. Network robustness is needed so that the phenotype stability of a biological network can be maintained, guaranteeing phenotype robustness. Synthetic biology is foreseen to have important applications in biotechnology and medicine; it is expected to contribute significantly to a better understanding of the functioning of complex biological systems. This paper presents a unifying mathematical framework for investigating the principles of both robust stabilization and environmental disturbance attenuation for synthetic gene networks in synthetic biology. Further, from the unifying mathematical framework, we found that the phenotype robustness criterion for synthetic gene networks is the following: if intrinsic robustness + genetic robustness + environmental robustness ≤ network robustness, then the phenotype robustness can be maintained in spite of intrinsic parameter fluctuations, genetic variations, and environmental disturbances. Therefore, the trade-offs between intrinsic robustness, genetic robustness, environmental robustness, and network robustness in synthetic biology can also be investigated through corresponding phenotype robustness criteria from the systematic point of view. Finally, a robust synthetic design that involves network evolution algorithms with desired behavior under intrinsic parameter fluctuations, genetic variations, and environmental
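
    Written compactly in the paper's own terms, the criterion reads:

        R_{\mathrm{intrinsic}} + R_{\mathrm{genetic}} + R_{\mathrm{environmental}} \;\le\; R_{\mathrm{network}}
        \;\Longrightarrow\; \text{phenotype robustness is maintained,}

    so the trade-offs can be read off directly: any increase in one robustness demand must be absorbed by the margin the network robustness provides.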

  4. Scalable privacy-preserving big data aggregation mechanism

    Directory of Open Access Journals (Sweden)

    Dapeng Wu

    2016-08-01

    As the massive sensor data generated by large-scale Wireless Sensor Networks (WSNs) have recently become an indispensable part of ‘Big Data’, the collection, storage, transmission and analysis of the big sensor data attract considerable attention from researchers. Targeting the privacy requirements of large-scale WSNs and focusing on the energy-efficient collection of big sensor data, a Scalable Privacy-preserving Big Data Aggregation (Sca-PBDA) method is proposed in this paper. Firstly, according to the pre-established gradient topology structure, sensor nodes in the network are divided into clusters. Secondly, sensor data is modified by each node according to the privacy-preserving configuration message received from the sink. Subsequently, intra- and inter-cluster data aggregation is employed during the big sensor data reporting phase to reduce energy consumption. Lastly, aggregated results are recovered by the sink to complete the privacy-preserving big data aggregation. Simulation results validate the efficacy and scalability of Sca-PBDA and show that the big sensor data generated by large-scale WSNs is efficiently aggregated to reduce network resource consumption and the sensor data privacy is effectively protected to meet the ever-growing application requirements.
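
    A generic additive-masking sketch conveys the flavor of sink-configured data modification and sink-side recovery; this is an illustrative Python scheme, not the Sca-PBDA algorithm itself (node IDs, seeds, and mask ranges are assumptions):

        import random

        def mask(node_id, round_seed):
            # Pseudo-random mask reproducible by the sink from the shared seed.
            return random.Random(round_seed * 100003 + node_id).uniform(-100, 100)

        def node_report(reading, node_id, round_seed):
            return reading + mask(node_id, round_seed)   # only masked data leaves the node

        def sink_recover(aggregate, node_ids, round_seed):
            return aggregate - sum(mask(nid, round_seed) for nid in node_ids)

        readings = {nid: 20.0 + nid for nid in range(10)}          # fake sensor values
        masked = [node_report(v, nid, 42) for nid, v in readings.items()]
        aggregate = sum(masked)                                    # in-network aggregation
        print(sink_recover(aggregate, list(readings), 42), sum(readings.values()))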

  5. Towards Scalable Graph Computation on Mobile Devices.

    Science.gov (United States)

    Chen, Yiqi; Lin, Zhiyuan; Pienta, Robert; Kahng, Minsuk; Chau, Duen Horng

    2014-10-01

    Mobile devices have become increasingly central to our everyday activities, due to their portability, multi-touch capabilities, and ever-improving computational power. Such attractive features have spurred research interest in leveraging mobile devices for computation. We explore a novel approach that aims to use a single mobile device to perform scalable graph computation on large graphs that do not fit in the device's limited main memory, opening up the possibility of performing on-device analysis of large datasets, without relying on the cloud. Based on the familiar memory mapping capability provided by today's mobile operating systems, our approach to scale up computation is powerful and intentionally kept simple to maximize its applicability across the iOS and Android platforms. Our experiments demonstrate that an iPad mini can perform fast computation on large real graphs with as many as 272 million edges (Google+ social graph), at a speed that is only a few times slower than a 13″ Macbook Pro. Through creating a real world iOS app with this technique, we demonstrate the strong potential application for scalable graph computation on a single mobile device using our approach.
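
    The memory-mapping idea is easy to demonstrate: below is an illustrative Python sketch (not the authors' implementation) that streams a binary edge list through mmap so the operating system pages data in on demand; the file layout, consecutive little-endian uint32 (src, dst) pairs, is an assumption.

        # Memory-mapped edge-list scan (illustrative only).
        import mmap, os, struct, tempfile
        from collections import Counter

        def out_degrees(path):
            degrees = Counter()
            with open(path, "rb") as f:
                mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
                edge = struct.Struct("<II")
                for off in range(0, len(mm), edge.size):
                    src, dst = edge.unpack_from(mm, off)   # OS pages data in lazily
                    degrees[src] += 1
                mm.close()
            return degrees

        # Tiny demo graph file, then the degree scan.
        edges = [(0, 1), (0, 2), (1, 2), (2, 0)]
        with tempfile.NamedTemporaryFile(delete=False) as tmp:
            for s, d in edges:
                tmp.write(struct.pack("<II", s, d))
        print(out_degrees(tmp.name))     # node 0 has out-degree 2; nodes 1 and 2 have 1
        os.remove(tmp.name)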

  6. Towards Scalable Graph Computation on Mobile Devices

    Science.gov (United States)

    Chen, Yiqi; Lin, Zhiyuan; Pienta, Robert; Kahng, Minsuk; Chau, Duen Horng

    2015-01-01

    Mobile devices have become increasingly central to our everyday activities, due to their portability, multi-touch capabilities, and ever-improving computational power. Such attractive features have spurred research interest in leveraging mobile devices for computation. We explore a novel approach that aims to use a single mobile device to perform scalable graph computation on large graphs that do not fit in the device's limited main memory, opening up the possibility of performing on-device analysis of large datasets, without relying on the cloud. Based on the familiar memory mapping capability provided by today's mobile operating systems, our approach to scale up computation is powerful and intentionally kept simple to maximize its applicability across the iOS and Android platforms. Our experiments demonstrate that an iPad mini can perform fast computation on large real graphs with as many as 272 million edges (Google+ social graph), at a speed that is only a few times slower than a 13″ Macbook Pro. Through creating a real world iOS app with this technique, we demonstrate the strong potential application for scalable graph computation on a single mobile device using our approach. PMID:25859564

  7. Investing in Global Markets: Big Data and Applications of Robust Regression

    Directory of Open Access Journals (Sweden)

    John Guerard

    2016-02-01

    In this analysis of the risk and return of stocks in global markets, we apply several robust regression techniques in producing stock selection models and several optimization techniques in portfolio construction in global stock universes. We find that (1) the robust regression applications are appropriate for modeling stock returns in global markets; and (2) mean-variance techniques continue to produce portfolios capable of generating excess returns above transaction costs and statistically significant asset selection. We estimate expected return models in global equity markets using a given stock selection model and generate statistically significant active returns from various portfolio construction techniques.
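
    As an illustration of the kind of robust regression applied here, the sketch below fits a Huber-weighted robust linear model with statsmodels on synthetic factor data; the factors, data, and estimator settings are made up, not the paper's.

        # Robust (Huber) regression vs OLS on synthetic factor data.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 500
        X = rng.normal(size=(n, 3))              # e.g. value, momentum, yield factors
        beta = np.array([0.5, 0.2, 0.3])
        y = X @ beta + rng.normal(scale=0.5, size=n)
        y[:10] += 8.0                            # inject outliers (bad data points)

        X1 = sm.add_constant(X)
        ols = sm.OLS(y, X1).fit()
        rlm = sm.RLM(y, X1, M=sm.robust.norms.HuberT()).fit()
        print("OLS:", np.round(ols.params, 3))   # pulled toward the outliers
        print("RLM:", np.round(rlm.params, 3))   # closer to the true betas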

  8. NPTool: Towards Scalability and Reliability of Business Process Management

    Science.gov (United States)

    Braghetto, Kelly Rosa; Ferreira, João Eduardo; Pu, Calton

    Currently, one important challenge in business process management is to provide scalability and reliability of business process executions at the same time. This difficulty becomes more accentuated when the execution control encompasses countless complex business processes. This work presents NavigationPlanTool (NPTool), a tool to control the execution of business processes. NPTool is supported by the Navigation Plan Definition Language (NPDL), a language for business process specification that uses process algebra as its formal foundation. NPTool implements the NPDL language as a SQL extension. The main contribution of this paper is a description of NPTool showing how process algebra features combined with a relational database model can be used to provide scalable and reliable control of the execution of business processes. The next steps for NPTool include reuse of control-flow patterns and support for data flow management.

  9. Developing Scalable Information Security Systems

    Directory of Open Access Journals (Sweden)

    Valery Konstantinovich Ablekov

    2013-06-01

    Existing physical security systems have a wide range of shortcomings, including high cost, a large number of vulnerabilities, and problems with modification and system support. This paper addresses the problem of developing systems without these drawbacks. The paper presents the architecture of an information security system which operates over the TCP/IP network protocol, including the ability to connect different types of devices and to integrate with existing security systems. The main advantage is a significant increase in system reliability and in scalability, both vertical and horizontal, with minimal cost in both financial and time resources.

  10. Scalable Production of the Silicon-Tin Yin-Yang Hybrid Structure with Graphene Coating for High Performance Lithium-Ion Battery Anodes.

    Science.gov (United States)

    Jin, Yan; Tan, Yingling; Hu, Xiaozhen; Zhu, Bin; Zheng, Qinghui; Zhang, Zijiao; Zhu, Guoying; Yu, Qian; Jin, Zhong; Zhu, Jia

    2017-05-10

    Alloy anodes with high theoretical capacity show great potential for next-generation advanced lithium-ion batteries. Even though the huge volume change during lithium insertion and extraction leads to severe problems, such as pulverization and an unstable solid-electrolyte interphase (SEI), various nanostructures including nanoparticles, nanowires, and porous networks can address the related challenges and improve electrochemical performance. However, the complex and expensive fabrication process hinders the widespread application of nanostructured alloy anodes, which generates an urgent demand for low-cost and scalable processes to fabricate building blocks with fine control of size, morphology, and porosity. Here, we demonstrate a scalable and low-cost process to produce a porous yin-yang hybrid composite anode with graphene coating through high-energy ball-milling and selective chemical etching. With void space to buffer the expansion, the produced functional electrodes demonstrate stable cycling performance of 910 mAh g⁻¹ over 600 cycles at a rate of 0.5C for Si-graphene "yin" particles and 750 mAh g⁻¹ over 300 cycles at 0.2C for Sn-graphene "yang" particles. Therefore, we open up a new approach to fabricate alloy anode materials at low cost, with low energy consumption, and at large scale. This type of porous silicon or tin composite with graphene coating can also potentially play a significant role in thermoelectric and optoelectronic applications.

  11. Accounting Fundamentals and the Variation of Stock Price: Factoring in the Investment Scalability

    OpenAIRE

    Sumiyana, Sumiyana; Baridwan, Zaki; Sugiri, Slamet; Hartono, Jogiyanto

    2010-01-01

    This study develops a new return model with respect to accounting fundamentals. The new return model is based on Chen and Zhang (2007). This study takes into account the investment scalability information. Specifically, this study splits the scale of the firm's operations into short-run and long-run investment scalabilities. We document that five accounting fundamentals explain the variation of annual stock return. The factors, comprised book value, earnings yield, short-run and long-run investment s...

  12. A scalable fully implicit framework for reservoir simulation on parallel computers

    KAUST Repository

    Yang, Haijian

    2017-11-10

    The modeling of multiphase fluid flow in porous media is of interest in the field of reservoir simulation. The promising numerical methods in the literature are mostly based on the explicit or semi-implicit approach, both of which have certain stability restrictions on the time step size. In this work, we introduce and study a scalable fully implicit solver for the simulation of two-phase flow in a porous medium with capillarity, gravity and compressibility, which is free from the limitations of the conventional methods. In the fully implicit framework, a mixed finite element method is applied to discretize the model equations for the spatial terms, and the implicit Backward Euler scheme with adaptive time stepping is used for the temporal integration. The resultant nonlinear system arising at each time step is solved in a monolithic way by using a Newton–Krylov type method. The corresponding linear system from the Newton iteration is large, sparse, nonsymmetric and ill-conditioned, consequently posing a significant challenge to the fully implicit solver. To address this issue, the family of additive Schwarz preconditioners is taken into account to accelerate the convergence of the linear system, thereby improving the robustness of the outer Newton method. Several test cases in one, two and three dimensions are used to validate the correctness of the scheme and examine the performance of the newly developed algorithm on parallel computers.
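
    The monolithic Newton-Krylov pattern at the heart of such a solver can be illustrated with SciPy on a toy nonlinear system; the residual below is a stand-in, not the two-phase flow equations, and the Schwarz preconditioning layer is omitted.

        # Newton-Krylov on a toy nonlinear system (illustrative only).
        import numpy as np
        from scipy.optimize import newton_krylov

        def residual(u):
            # 1D nonlinear diffusion-type residual on a uniform grid.
            F = np.empty_like(u)
            F[0] = u[0] - 1.0                    # left Dirichlet condition
            F[-1] = u[-1]                        # right Dirichlet condition
            F[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2] - 0.1 * u[1:-1] ** 3
            return F

        u0 = np.linspace(1.0, 0.0, 50)           # initial guess
        u = newton_krylov(residual, u0, method="lgmres")
        print("max |F(u)| =", np.abs(residual(u)).max())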

  13. A scalable fully implicit framework for reservoir simulation on parallel computers

    KAUST Repository

    Yang, Haijian; Sun, Shuyu; Li, Yiteng; Yang, Chao

    2017-01-01

    The modeling of multiphase fluid flow in porous media is of interest in the field of reservoir simulation. The promising numerical methods in the literature are mostly based on the explicit or semi-implicit approach, both of which have certain stability restrictions on the time step size. In this work, we introduce and study a scalable fully implicit solver for the simulation of two-phase flow in a porous medium with capillarity, gravity and compressibility, which is free from the limitations of the conventional methods. In the fully implicit framework, a mixed finite element method is applied to discretize the model equations for the spatial terms, and the implicit Backward Euler scheme with adaptive time stepping is used for the temporal integration. The resultant nonlinear system arising at each time step is solved in a monolithic way by using a Newton–Krylov type method. The corresponding linear system from the Newton iteration is large, sparse, nonsymmetric and ill-conditioned, consequently posing a significant challenge to the fully implicit solver. To address this issue, the family of additive Schwarz preconditioners is taken into account to accelerate the convergence of the linear system, thereby improving the robustness of the outer Newton method. Several test cases in one, two and three dimensions are used to validate the correctness of the scheme and examine the performance of the newly developed algorithm on parallel computers.

  14. Cascaded column generation for scalable predictive demand side management

    NARCIS (Netherlands)

    Toersche, Hermen; Molderink, Albert; Hurink, Johann L.; Smit, Gerardus Johannes Maria

    2014-01-01

    We propose a nested Dantzig-Wolfe decomposition, combined with dynamic programming, for the distributed scheduling of a large heterogeneous fleet of residential appliances with nonlinear behavior. A cascaded column generation approach gives a scalable optimization strategy, provided that the problem

  15. Impact of packet losses in scalable 3D holoscopic video coding

    Science.gov (United States)

    Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.

    2014-05-01

    Holoscopic imaging became a prospective glassless 3D technology to provide more natural 3D viewing experiences to the end user. Additionally, holoscopic systems also allow new post-production degrees of freedom, such as controlling the plane of focus or the viewing angle presented to the user. However, to successfully introduce this technology into the consumer market, a display scalable coding approach is essential to achieve backward compatibility with legacy 2D and 3D displays. Moreover, to effectively transmit 3D holoscopic content over error-prone networks, e.g., wireless networks or the Internet, error resilience techniques are required to mitigate the impact of data impairments in the user quality perception. Therefore, it is essential to deeply understand the impact of packet losses in terms of decoding video quality for the specific case of 3D holoscopic content, notably when a scalable approach is used. In this context, this paper studies the impact of packet losses when using a three-layer display scalable 3D holoscopic video coding architecture previously proposed, where each layer represents a different level of display scalability (i.e., L0 - 2D, L1 - stereo or multiview, and L2 - full 3D holoscopic). For this, a simple error concealment algorithm is used, which makes use of inter-layer redundancy between multiview and 3D holoscopic content and the inherent correlation of the 3D holoscopic content to estimate lost data. Furthermore, a study of the influence of 2D views generation parameters used in lower layers on the performance of the used error concealment algorithm is also presented.

  16. A repeatable and scalable fabrication method for sharp, hollow silicon microneedles

    Science.gov (United States)

    Kim, H.; Theogarajan, L. S.; Pennathur, S.

    2018-03-01

    Scalability and manufacturability are impeding the mass commercialization of microneedles in the medical field. Specifically, microneedle geometries need to be sharp, beveled, and completely controllable, which is difficult to achieve with microelectromechanical fabrication techniques. In this work, we performed a parametric study using silicon etch chemistries to optimize the fabrication of scalable and manufacturable beveled hollow silicon microneedles. We theoretically verified our parametric results with diffusion-reaction equations and created a design guideline for a varied set of microneedles (80-160 µm needle base width, 100-1000 µm pitch, 40-50 µm inner bore diameter, and 150-350 µm height) to show the repeatability, scalability, and manufacturability of our process. As a result, hollow silicon microneedles with any dimensions can be fabricated with less than 2% non-uniformity across a wafer and 5% deviation between different processes. The key to achieving such high uniformity and consistency is a non-agitated HF-HNO₃ bath, silicon nitride masks, and surrounding silicon filler materials with well-defined dimensions. Our proposed method is not labor-intensive, is well described by theory, and is straightforward for wafer-scale mass production, opening doors to a plethora of potential medical and biosensing applications.

  17. Perceptual Robust Design

    DEFF Research Database (Denmark)

    Pedersen, Søren Nygaard

    The research presented in this PhD thesis has focused on a perceptual approach to robust design. The results of the research and the original contribution to knowledge is a preliminary framework for understanding, positioning, and applying perceptual robust design. Product quality is a topic...... been presented. Therefore, this study set out to contribute to the understanding and application of perceptual robust design. To achieve this, a state-of-the-art and current practice review was performed. From the review two main research problems were identified. Firstly, a lack of tools...... for perceptual robustness was found to overlap with the optimum for functional robustness and at most approximately 2.2% out of the 14.74% could be ascribed solely to the perceptual robustness optimisation. In conclusion, the thesis has offered a new perspective on robust design by merging robust design...

  18. Zinc oxide nanoleaves: A scalable disperser-assisted sonochemical approach for synthesis and an antibacterial application.

    Science.gov (United States)

    Gupta, Anadi; Srivastava, Rohit

    2018-03-01

    The current study reports a new and highly scalable method for the synthesis of novel-structure zinc oxide nanoleaves (ZnO-NLs) using a disperser-assisted sonochemical approach. The synthesis was carried out in batches from 50 mL to 1 L to ensure the scalability of the method, which produced almost identical results across batches. The use of high-speed (9000 rpm) mechanical dispersion during bath sonication (200 W, 33 kHz) yields 4.4 g of ZnO-NL powder in a 1 L batch reaction within 2 h (>96% yield). The ZnO-NLs show excellent thermal stability even at high temperature (900 °C) and a high surface area. The high antibacterial activity of ZnO-NLs against the disease-causing Gram-positive bacterium Staphylococcus aureus shows a reduction in CFU and morphological changes such as an eightfold reduction in cell size, cell burst, and cellular leakage at 200 µg/mL concentration. This study provides an efficient, cost-effective and environmentally friendly approach for the synthesis of ZnO-NLs at industrial scale, as well as a new technique to increase the efficiency of the existing sonochemical method. We envisage that this method can be applied in various fields where ZnO is significantly consumed, such as rubber manufacturing, the ceramic industry and medicine. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. EvAg: A Scalable Peer-to-Peer Evolutionary Algorithm

    NARCIS (Netherlands)

    Laredo, J.L.J.; Eiben, A.E.; van Steen, M.R.; Merelo, J.J.

    2010-01-01

    This paper studies the scalability of an Evolutionary Algorithm (EA) whose population is structured by means of a gossiping protocol and where the evolutionary operators act exclusively within the local neighborhoods. This makes the algorithm inherently suited for parallel execution in a

  20. GPU-based Scalable Volumetric Reconstruction for Multi-view Stereo

    Energy Technology Data Exchange (ETDEWEB)

    Kim, H; Duchaineau, M; Max, N

    2011-09-21

    We present a new scalable volumetric reconstruction algorithm for multi-view stereo using a graphics processing unit (GPU). It is an effectively parallelized GPU algorithm that simultaneously uses a large number of GPU threads, each of which performs voxel carving, in order to integrate depth maps with images from multiple views. Each depth map, triangulated from pair-wise semi-dense correspondences, represents a view-dependent surface of the scene. This algorithm also provides scalability for large-scale scene reconstruction in a high resolution voxel grid by utilizing streaming and parallel computation. The output is a photo-realistic 3D scene model in a volumetric or point-based representation. We demonstrate the effectiveness and the speed of our algorithm with a synthetic scene and real urban/outdoor scenes. Our method can also be integrated with existing multi-view stereo algorithms such as PMVS2 to fill holes or gaps in textureless regions.
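
    The per-voxel test that each GPU thread performs can be sketched serially, as below. This is a minimal NumPy illustration of depth-map-based voxel carving, not the paper's implementation: the 3x4 projection matrices, the voxel grid, and the depth tolerance eps are illustrative assumptions.

        import numpy as np

        def carve(voxels, depth_maps, projections, eps=0.01):
            """Keep a voxel only if no view observes free space in front of it."""
            keep = np.ones(len(voxels), dtype=bool)
            hom = np.hstack([voxels, np.ones((len(voxels), 1))])  # homogeneous coords
            for depth, P in zip(depth_maps, projections):
                cam = hom @ P.T                 # project voxel centers into this view
                z = cam[:, 2]
                valid = z > 1e-9                # in front of the camera
                u = np.full(len(z), -1)
                v = np.full(len(z), -1)
                u[valid] = (cam[valid, 0] / z[valid]).astype(int)
                v[valid] = (cam[valid, 1] / z[valid]).astype(int)
                h, w = depth.shape
                inside = valid & (u >= 0) & (u < w) & (v >= 0) & (v < h)
                # A voxel strictly in front of the observed surface is free space:
                # this view "sees through" it, so it is carved away.
                surf = depth[v[inside], u[inside]]
                hit = np.where(inside)[0][z[inside] < surf - eps]
                keep[hit] = False
            return voxels[keep]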

  1. Parallel scalability and efficiency of vortex particle method for aeroelasticity analysis of bluff bodies

    Science.gov (United States)

    Tolba, Khaled Ibrahim; Morgenthal, Guido

    2018-01-01

    This paper presents an analysis of the scalability and efficiency of a simulation framework based on the vortex particle method. The code is applied to the numerical aerodynamic analysis of line-like structures. The numerical code runs on multicore CPU and GPU architectures using the OpenCL framework. The focus of this paper is the analysis of the parallel efficiency and scalability of the method applied to an engineering test case, specifically the aeroelastic response of a long-span bridge girder at the construction stage. The target is to assess the optimal configuration and the required computer architecture, such that it becomes feasible to utilise the method efficiently within the computational resources available to a regular engineering office. The simulations and the scalability analysis are performed on a regular gaming-type computer.

  2. A scalable infrastructure for CMS data analysis based on OpenStack Cloud and Gluster file system

    Science.gov (United States)

    Toor, S.; Osmani, L.; Eerola, P.; Kraemer, O.; Lindén, T.; Tarkoma, S.; White, J.

    2014-06-01

    The challenge of providing a resilient and scalable computational and data management solution for massive scale research environments requires continuous exploration of new technologies and techniques. In this project the aim has been to design a scalable and resilient infrastructure for CERN HEP data analysis. The infrastructure is based on OpenStack components for structuring a private Cloud with the Gluster File System. We integrate the state-of-the-art Cloud technologies with the traditional Grid middleware infrastructure. Our test results show that the adopted approach provides a scalable and resilient solution for managing resources without compromising on performance and high availability.

  3. A scalable infrastructure for CMS data analysis based on OpenStack Cloud and Gluster file system

    International Nuclear Information System (INIS)

    Toor, S; Eerola, P; Kraemer, O; Lindén, T; Osmani, L; Tarkoma, S; White, J

    2014-01-01

    The challenge of providing a resilient and scalable computational and data management solution for massive scale research environments requires continuous exploration of new technologies and techniques. In this project the aim has been to design a scalable and resilient infrastructure for CERN HEP data analysis. The infrastructure is based on OpenStack components for structuring a private Cloud with the Gluster File System. We integrate the state-of-the-art Cloud technologies with the traditional Grid middleware infrastructure. Our test results show that the adopted approach provides a scalable and resilient solution for managing resources without compromising on performance and high availability.

  4. Scalable Performance Measurement and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Gamblin, Todd [Univ. of North Carolina, Chapel Hill, NC (United States)

    2009-01-01

    Concurrency levels in large-scale, distributed-memory supercomputers are rising exponentially. Modern machines may contain 100,000 or more microprocessor cores, and the largest of these, IBM's Blue Gene/L, contains over 200,000 cores. Future systems are expected to support millions of concurrent tasks. In this dissertation, we focus on efficient techniques for measuring and analyzing the performance of applications running on very large parallel machines. Tuning the performance of large-scale applications can be a subtle and time-consuming task because application developers must measure and interpret data from many independent processes. While the volume of the raw data scales linearly with the number of tasks in the running system, the number of tasks is growing exponentially, and data for even small systems quickly becomes unmanageable. Transporting performance data from so many processes over a network can perturb application performance and make measurements inaccurate, and storing such data would require a prohibitive amount of space. Moreover, even if it were stored, analyzing the data would be extremely time-consuming. In this dissertation, we present novel methods for reducing performance data volume. The first draws on multi-scale wavelet techniques from signal processing to compress systemwide, time-varying load-balance data. The second uses statistical sampling to select a small subset of running processes to generate low-volume traces. A third approach combines sampling and wavelet compression to stratify performance data adaptively at run-time and to reduce further the cost of sampled tracing. We have integrated these approaches into Libra, a toolset for scalable load-balance analysis. We present Libra and show how it can be used to analyze data from large scientific applications scalably.
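
    The statistical-sampling idea above is easy to demonstrate: trace only a random subset of processes and bound the error on the estimated mean load. The toy sketch below uses the classic normal-approximation sample-size bound rather than Libra's actual scheme, and the per-process load distribution is synthetic.

        import numpy as np

        rng = np.random.default_rng(0)
        loads = rng.gamma(shape=4.0, scale=25.0, size=100_000)   # per-process "load"

        z, rel_err = 1.96, 0.05           # 95% confidence, 5% relative-error target
        pilot = rng.choice(loads, 500)    # small pilot sample to estimate variance
        n = int(np.ceil((z * pilot.std() / (rel_err * pilot.mean())) ** 2))

        sample = rng.choice(loads, size=n, replace=False)
        print(f"traced {n} of {loads.size} processes")
        print(f"estimated mean load {sample.mean():.1f}, true {loads.mean():.1f}")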

  5. Topochemical approach to efficiently produce main-chain poly(bile acid)s with high molecular weights.

    Science.gov (United States)

    Li, Weina; Li, Xuesong; Zhu, Wei; Li, Changxu; Xu, Dan; Ju, Yong; Li, Guangtao

    2011-07-21

    Based on a topochemical approach, a strategy for efficiently producing main-chain poly(bile acid)s in the solid state was developed. This strategy allows for facile and scalable synthesis of main-chain poly(bile acid)s not only with high molecular weights, but also with quantitative conversions and yields.

  6. Heat-treated stainless steel felt as scalable anode material for bioelectrochemical systems.

    Science.gov (United States)

    Guo, Kun; Soeriyadi, Alexander H; Feng, Huajun; Prévoteau, Antonin; Patil, Sunil A; Gooding, J Justin; Rabaey, Korneel

    2015-11-01

    This work reports a simple and scalable method to convert stainless steel (SS) felt into an effective anode for bioelectrochemical systems (BESs) by means of heat treatment. X-ray photoelectron spectroscopy and cyclic voltammetry elucidated that the heat treatment generated an iron oxide rich layer on the SS felt surface. The iron oxide layer dramatically enhanced the electroactive biofilm formation on the SS felt surface in BESs. Consequently, the sustained current densities achieved on the treated electrodes (1 cm²) were around 1.5±0.13 mA/cm², which was seven times higher than on the untreated electrodes (0.22±0.04 mA/cm²). To test the scalability of this material, the heat-treated SS felt was scaled up to 150 cm² and a similar current density (1.5 mA/cm²) was achieved on the larger electrode. The low cost, straightforwardness of the treatment, high conductivity and high bioelectrocatalytic performance make heat-treated SS felt a scalable anodic material for BESs. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Scalable Frequent Subgraph Mining

    KAUST Repository

    Abdelhamid, Ehab

    2017-06-19

    A graph is a data structure that contains a set of nodes and a set of edges connecting these nodes. Nodes represent objects while edges model relationships among these objects. Graphs are used in various domains due to their ability to model complex relations among several objects. Given an input graph, the Frequent Subgraph Mining (FSM) task finds all subgraphs with frequencies exceeding a given threshold. FSM is crucial for graph analysis, and it is an essential building block in a variety of applications, such as graph clustering and indexing. FSM is computationally expensive, and its existing solutions are extremely slow. Consequently, these solutions are incapable of mining modern large graphs. This slowness is caused by the underlying approaches of these solutions which require finding and storing an excessive amount of subgraph matches. This dissertation proposes a scalable solution for FSM that avoids the limitations of previous work. This solution is composed of four components. The first component is a single-threaded technique which, for each candidate subgraph, needs to find only a minimal number of matches. The second component is a scalable parallel FSM technique that utilizes a novel two-phase approach. The first phase quickly builds an approximate search space, which is then used by the second phase to optimize and balance the workload of the FSM task. The third component focuses on accelerating frequency evaluation, which is a critical step in FSM. To do so, a machine learning model is employed to predict the type of each graph node, and accordingly, an optimized method is selected to evaluate that node. The fourth component focuses on mining dynamic graphs, such as social networks. To this end, an incremental index is maintained during the dynamic updates. Only this index is processed and updated for the majority of graph updates. Consequently, search space is significantly pruned and efficiency is improved. The empirical evaluation shows that the
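
    The frequency threshold at the heart of FSM can be illustrated with the degenerate case of single-edge patterns: count each labeled edge pattern and keep those whose support reaches the threshold. Real miners grow such frequent seeds into larger subgraphs; the toy graph and threshold below are invented for the example.

        from collections import Counter

        # Toy labeled graph: each edge is a pair of node labels.
        edges = [("A", "B"), ("A", "B"), ("B", "C"), ("A", "C"), ("A", "B")]
        threshold = 2

        support = Counter(tuple(sorted(e)) for e in edges)
        frequent = {pat: n for pat, n in support.items() if n >= threshold}
        print(frequent)   # {('A', 'B'): 3}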

  8. Design for scalability in 3D computer graphics architectures

    DEFF Research Database (Denmark)

    Holten-Lund, Hans Erik

    2002-01-01

    This thesis describes useful methods and techniques for designing scalable hybrid parallel rendering architectures for 3D computer graphics. Various techniques for utilizing parallelism in a pipelines system are analyzed. During the Ph.D study a prototype 3D graphics architecture named Hybris has...

  9. Scalable electro-photonic integration concept based on polymer waveguides

    NARCIS (Netherlands)

    Bosman, E.; Steenberge, G. van; Boersma, A.; Wiegersma, S.; Harmsma, P.J.; Karppinen, M.; Korhonen, T.; Offrein, B.J.; Dangel, R.; Daly, A.; Ortsiefer, M.; Justice, J.; Corbett, B.; Dorrestein, S.; Duis, J.

    2016-01-01

    A novel method for fabricating a single mode optical interconnection platform is presented. The method comprises the miniaturized assembly of optoelectronic single dies, the scalable fabrication of polymer single mode waveguides and the coupling to glass fiber arrays providing the I/O's. The low

  10. Towards Bandwidth Scalable Transceiver Technology for Optical Metro-Access Networks

    DEFF Research Database (Denmark)

    Spolitis, Sandis; Bobrovs, Vjaceslavs; Wagner, Christoph

    2015-01-01

    Massive fiber-to-the-home network deployment is creating a challenge for telecommunications network operators: exponential increase of the power consumption at the central offices and a never ending quest for equipment upgrades operating at higher bandwidth. In this paper, we report on flexible...... signal slicing technique, which allows transmission of high-bandwidth signals via low bandwidth electrical and optoelectrical equipment. The presented signal slicing technique is highly scalable in terms of bandwidth, which is determined by the number of slices used. In this paper, performance of a scalable sliceable transceiver for a 1 Gbit/s non-return to zero (NRZ) signal sliced into two slices is presented. Digital signal processing (DSP) power consumption and latency values for the proposed sliceable transceiver technique are also discussed. In this research, post-FEC error-free transmission with 7% overhead has......

  11. Proof of Stake Blockchain: Performance and Scalability for Groupware Communications

    DEFF Research Database (Denmark)

    Spasovski, Jason; Eklund, Peter

    2017-01-01

    A blockchain is a distributed transaction ledger, a disruptive technology that creates new possibilities for digital ecosystems. The blockchain ecosystem maintains an immutable transaction record to support many types of digital services. This paper compares the performance and scalability of a web......-based groupware communication application using both non-blockchain and blockchain technologies. Scalability is measured where message load is synthesized over two typical communication topologies. The first is 1 to n network -- a typical client-server or star-topology with a central vertex (server) receiving all...... messages from the remaining n - 1 vertices (clients). The second is a more naturally occurring scale-free network topology, where multiple communication hubs are distributed throughout the network. System performance is tested with both blockchain and non-blockchain solutions using multiple cloud computing...
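
    The two message-load topologies compared above are easy to reproduce for experimentation: a star (the 1-to-n client-server case) and a scale-free network with several hubs. A sketch using networkx generators as stand-ins for the paper's synthesized workloads; the node count is arbitrary.

        import networkx as nx

        n = 100
        star = nx.star_graph(n - 1)                    # node 0 is the central server
        scale_free = nx.barabasi_albert_graph(n, 2)    # preferential attachment

        for name, g in [("star", star), ("scale-free", scale_free)]:
            top = sorted(dict(g.degree()).values(), reverse=True)[:3]
            print(f"{name}: top-3 vertex degrees {top}")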

  12. SCALABLE TIME SERIES CHANGE DETECTION FOR BIOMASS MONITORING USING GAUSSIAN PROCESS

    Data.gov (United States)

    National Aeronautics and Space Administration — Scalable Time Series Change Detection for Biomass Monitoring Using Gaussian Process. Varun Chandola and Ranga Raju Vatsavai. Abstract: Biomass monitoring,...

  13. Big data integration: scalability and sustainability

    KAUST Repository

    Zhang, Zhang

    2016-01-26

    Integration of various types of omics data is critically indispensable for addressing the most important and complex biological questions. In the era of big data, however, data integration becomes increasingly tedious, time-consuming and expensive, posing a significant obstacle to fully exploiting the wealth of big biological data. Here we propose a scalable and sustainable architecture that integrates big omics data through community-contributed modules. Community modules are contributed and maintained by different committed groups; each module corresponds to a specific data type, deals with data collection, processing and visualization, and delivers data on demand via web services. Based on this community-based architecture, we build Information Commons for Rice (IC4R; http://ic4r.org), a rice knowledgebase that integrates a variety of rice omics data from multiple community modules, including genome-wide expression profiles derived entirely from RNA-Seq data, genomic variations obtained from re-sequencing data of thousands of rice varieties, plant homologous genes covering multiple diverse plant species, post-translational modifications, rice-related literature, and community annotations. Taken together, such an architecture achieves integration of different types of data from multiple community-contributed modules and accordingly features scalable, sustainable and collaborative integration of big data as well as low costs for database update and maintenance, making it helpful for building IC4R into a comprehensive knowledgebase covering all aspects of rice data and beneficial for both basic and translational research.

  14. Robust and scalable hierarchical matrix-based fast direct solver and preconditioner for the numerical solution of elliptic partial differential equations

    KAUST Repository

    Chavez Chavez, Gustavo Ivan

    2017-01-01

    Numerical experiments corroborate the robustness, accuracy, and complexity claims and provide a baseline of the performance and memory footprint by comparisons with competing approaches such as the multigrid solver hypre, and the STRUMPACK implementation of the multifrontal factorization with hierarchically semi-separable matrices. The companion implementation can utilize many thousands of cores of Shaheen, KAUST's Haswell-based Cray XC-40 supercomputer, and compares favorably with other implementations of hierarchical solvers in terms of time-to-solution and memory consumption.

  15. ENDEAVOUR: A Scalable SDN Architecture for Real-World IXPs

    KAUST Repository

    Antichi, Gianni

    2017-10-25

    Innovation in interdomain routing has remained stagnant for over a decade. Recently, IXPs have emerged as economically-advantageous interconnection points for reducing path latencies and exchanging ever increasing traffic volumes among, possibly, hundreds of networks. Given their far-reaching implications on interdomain routing, IXPs are the ideal place to foster network innovation and extend the benefits of SDN to the interdomain level. In this paper, we present, evaluate, and demonstrate ENDEAVOUR, an SDN platform for IXPs. ENDEAVOUR can be deployed on a multi-hop IXP fabric, supports a large number of use cases, and is highly-scalable while avoiding broadcast storms. Our evaluation with real data from one of the largest IXPs, demonstrates the benefits and scalability of our solution: ENDEAVOUR requires around 70% fewer rules than alternative SDN solutions thanks to our rule partitioning mechanism. In addition, by providing an open source solution, we invite everyone from the community to experiment (and improve) our implementation as well as adapt it to new use cases.

  16. Programming Scala Scalability = Functional Programming + Objects

    CERN Document Server

    Wampler, Dean

    2009-01-01

    Learn how to be more productive with Scala, a new multi-paradigm language for the Java Virtual Machine (JVM) that integrates features of both object-oriented and functional programming. With this book, you'll discover why Scala is ideal for highly scalable, component-based applications that support concurrency and distribution. Programming Scala clearly explains the advantages of Scala as a JVM language. You'll learn how to leverage the wealth of Java class libraries to meet the practical needs of enterprise and Internet projects more easily. Packed with code examples, this book provides us

  17. Domain decomposition method of stochastic PDEs: a two-level scalable preconditioner

    International Nuclear Information System (INIS)

    Subber, Waad; Sarkar, Abhijit

    2012-01-01

    For uncertainty quantification in many practical engineering problems, the stochastic finite element method (SFEM) may be computationally challenging. In SFEM, the size of the algebraic linear system grows rapidly with the spatial mesh resolution and the order of the stochastic dimension. In this paper, we describe a non-overlapping domain decomposition method, namely the iterative substructuring method, to tackle the large-scale linear system arising in the SFEM. The SFEM is based on domain decomposition in the geometric space and a polynomial chaos expansion in the probabilistic space. In particular, a two-level scalable preconditioner is proposed for the iterative solver of the interface problem for the stochastic systems. The preconditioner is equipped with a coarse problem which globally connects the subdomains both in the geometric and probabilistic spaces via their corner nodes. This coarse problem propagates the information quickly across the subdomains, leading to a scalable preconditioner. For numerical illustration, a two-dimensional stochastic elliptic partial differential equation (SPDE) with spatially varying non-Gaussian random coefficients is considered. The numerical scalability of the preconditioner is investigated with respect to the mesh size, subdomain size, fixed problem size per subdomain and order of polynomial chaos expansion. The numerical experiments are performed on a Linux cluster using MPI and PETSc parallel libraries.
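
    The effect of a coarse level can be demonstrated on a deterministic toy problem. The sketch below builds a two-level preconditioner for a 1D Poisson matrix: exact subdomain solves (block Jacobi) plus a one-unknown-per-subdomain coarse correction that couples the subdomains globally, in the spirit of the corner-based coarse problem described above. This is a simple additive two-level scheme, not the paper's stochastic iterative substructuring, and the problem sizes are arbitrary.

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        n, nsub = 256, 8                        # unknowns, subdomains
        size = n // nsub
        A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

        # Level 1: exact local solves on each subdomain block (block Jacobi).
        blocks = [np.linalg.inv(A[i*size:(i+1)*size, i*size:(i+1)*size].toarray())
                  for i in range(nsub)]
        # Level 2: one coarse unknown per subdomain, coupling subdomains globally.
        R = sp.kron(sp.eye(nsub), np.ones((1, size)) / size).tocsc()
        Ac = (R @ A @ R.T).toarray()

        def apply_M(r):
            z = np.zeros_like(r)
            for i, Bi in enumerate(blocks):
                z[i*size:(i+1)*size] = Bi @ r[i*size:(i+1)*size]
            return z + R.T @ np.linalg.solve(Ac, R @ r)   # coarse correction

        M = spla.LinearOperator((n, n), matvec=apply_M)
        x, info = spla.cg(A, np.ones(n), M=M)
        print("cg info:", info, "residual:", np.linalg.norm(A @ x - np.ones(n)))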

  18. Aero-MINE (Motionless INtegrated Energy) for Distributed Scalable Wind Power.

    Energy Technology Data Exchange (ETDEWEB)

    Houchens, Brent C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Blaylock, Myra L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-06-01

    The proposed Aero-MINE technology will extract energy from wind without any exterior moving parts. Aero-MINEs can be integrated into buildings or function stand-alone, and are scalable. This gives them advantages similar to solar panels, but with the added benefit of operation in cloudy or dark conditions. Furthermore, compared to solar panels, Aero-MINEs can be manufactured at lower cost and with less environmental impact. Power generation is isolated internally by the pneumatic transmission of air and the outlet air-jet nozzles amplify the effectiveness. Multiple units can be connected to one centrally located electric generator. Aero-MINEs are ideal for the built-environment, with numerous possible configurations ranging from architectural integration to modular bolt-on products. Traditional wind turbines suffer from many fundamental challenges. The fast-moving blades produce significant aero-acoustic noise, visual disturbances, light-induced flickering and impose wildlife mortality risks. The conversion of massive mechanical torque to electricity is a challenge for gears, generators and power conversion electronics. In addition, the installation, operation and maintenance of wind turbines is required at significant height. Furthermore, wind farms are often in remote locations far from dense regions of electricity customers. These technical and logistical challenges add significantly to the cost of the electricity produced by utility-scale wind farms. In contrast, distributed wind energy eliminates many of the logistical challenges. However, solutions such as micro-turbines produce relatively small amounts of energy due to the reduction in swept area and still suffer from the motion-related disadvantages of utility-scale turbines. Aero-MINEs combine the best features of distributed generation, while eliminating the disadvantages.

  19. Sustainable Resilient, Robust & Resplendent Enterprises

    DEFF Research Database (Denmark)

    Edgeman, Rick

    to their impact. Resplendent enterprises are introduced, with resplendence referring not to some sort of public or private façade but to organizations marked by dual brilliance and nobility of strategy, governance and comportment that yield superior and sustainable triple bottom line performance....... Herein resilience, robustness, and resplendence (R3) are integrated with sustainable enterprise excellence (Edgeman and Eskildsen, 2013) or SEE and social-ecological innovation (Eskildsen and Edgeman, 2012) to aid progress of a firm toward producing continuously relevant performance that proceeds from...

  20. Embedded DCT and wavelet methods for fine granular scalable video: analysis and comparison

    Science.gov (United States)

    van der Schaar-Mitrea, Mihaela; Chen, Yingwei; Radha, Hayder

    2000-04-01

    Video transmission over bandwidth-varying networks is becoming increasingly important due to emerging applications such as streaming of video over the Internet. The fundamental obstacle in designing such systems resides in the varying characteristics of the Internet (i.e. bandwidth variations and packet-loss patterns). In MPEG-4, a new SNR scalability scheme, called Fine-Granular-Scalability (FGS), is currently under standardization, which is able to adapt in real-time (i.e. at transmission time) to Internet bandwidth variations. The FGS framework consists of a non-scalable motion-predicted base-layer and an intra-coded fine-granular scalable enhancement layer. For example, the base layer can be coded using a DCT-based MPEG-4 compliant, highly efficient video compression scheme. Subsequently, the difference between the original and decoded base-layer is computed, and the resulting FGS-residual signal is intra-frame coded with an embedded scalable coder. In order to achieve high coding efficiency when compressing the FGS enhancement layer, it is crucial to analyze the nature and characteristics of residual signals common to the SNR scalability framework (including FGS). In this paper, we present a thorough analysis of SNR residual signals by evaluating its statistical properties, compaction efficiency and frequency characteristics. The signal analysis revealed that the energy compaction of the DCT and wavelet transforms is limited and the frequency characteristic of SNR residual signals decay rather slowly. Moreover, the blockiness artifacts of the low bit-rate coded base-layer result in artificial high frequencies in the residual signal. Subsequently, a variety of wavelet and embedded DCT coding techniques applicable to the FGS framework are evaluated and their results are interpreted based on the identified signal properties. As expected from the theoretical signal analysis, the rate-distortion performances of the embedded wavelet and DCT-based coders are very
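
    The limited-compaction observation can be checked with a toy experiment: compare the fraction of energy captured by the k largest DCT coefficients for a smooth, image-like block versus a noise-like SNR residual. The signals below are synthetic and scipy's orthonormal DCT-II stands in for the codecs' transforms.

        import numpy as np
        from scipy.fft import dct

        rng = np.random.default_rng(3)
        n, k = 64, 8
        smooth = np.cumsum(rng.standard_normal(n))   # smooth, image-like block
        residual = rng.standard_normal(n)            # noise-like SNR residual

        for name, x in [("smooth block", smooth), ("SNR residual", residual)]:
            c = dct(x, norm="ortho")
            frac = np.sort(c ** 2)[::-1][:k].sum() / (c ** 2).sum()
            print(f"{name}: top-{k} DCT coefficients hold {frac:.0%} of the energy")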

  1. Scalability and efficiency of genetic algorithms for geometrical applications

    NARCIS (Netherlands)

    Dijk, van S.F.; Thierens, D.; Berg, de M.; Schoenauer, M.

    2000-01-01

    We study the scalability and efficiency of a GA that we developed earlier to solve the practical cartographic problem of labeling a map with point features. We argue that the special characteristics of our GA make it fit in well with theoretical models predicting the optimal population size

  2. Estimates of the Sampling Distribution of Scalability Coefficient H

    Science.gov (United States)

    Van Onna, Marieke J. H.

    2004-01-01

    Coefficient "H" is used as an index of scalability in nonparametric item response theory (NIRT). It indicates the degree to which a set of items rank orders examinees. Theoretical sampling distributions, however, have only been derived asymptotically and only under restrictive conditions. Bootstrap methods offer an alternative possibility to…

  3. LoRa Scalability: A Simulation Model Based on Interference Measurements.

    Science.gov (United States)

    Haxhibeqiri, Jetmir; Van den Abeele, Floris; Moerman, Ingrid; Hoebeke, Jeroen

    2017-05-23

    LoRa is a long-range, low power, low bit rate and single-hop wireless communication technology. It is intended to be used in Internet of Things (IoT) applications involving battery-powered devices with low throughput requirements. A LoRaWAN network consists of multiple end nodes that communicate with one or more gateways. These gateways act like a transparent bridge towards a common network server. The number of end devices and their throughput requirements will have an impact on the performance of the LoRaWAN network. This study investigates the scalability in terms of the number of end devices per gateway of single-gateway LoRaWAN deployments. First, we determine the intra-technology interference behavior with two physical end nodes, by checking the impact of an interfering node on a transmitting node. Measurements show that even under concurrent transmission, one of the packets can be received under certain conditions. Based on these measurements, we create a simulation model for assessing the scalability of a single gateway LoRaWAN network. We show that when the number of nodes increases up to 1000 per gateway, the losses will be up to 32%. In such a case, pure Aloha will have around 90% losses. However, when the duty cycle of the application layer becomes lower than the allowed radio duty cycle of 1%, losses will be even lower. We also show network scalability simulation results for some IoT use cases based on real data.
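
    The pure-Aloha comparison can be sanity-checked with the textbook Poisson approximation, under which a packet is lost if any other node starts transmitting within the two-packet vulnerability window. Under this back-of-the-envelope model (not the paper's interference-calibrated simulator), a 1% per-node duty cycle at 1000 nodes collides almost always, and losses near the quoted 90% correspond to a duty cycle around 0.1%.

        import math

        def pure_aloha_loss(n_nodes, duty_cycle):
            g = (n_nodes - 1) * duty_cycle       # competing offered load
            return 1.0 - math.exp(-2.0 * g)      # two-packet vulnerability window

        for duty in (0.01, 0.005, 0.001):
            loss = pure_aloha_loss(1000, duty)
            print(f"1000 nodes, duty cycle {duty:.1%}: loss ≈ {loss:.1%}")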

  4. Event metadata records as a testbed for scalable data mining

    International Nuclear Information System (INIS)

    Gemmeren, P van; Malon, D

    2010-01-01

    At a data rate of 200 hertz, event metadata records ('TAGs,' in ATLAS parlance) provide fertile grounds for development and evaluation of tools for scalable data mining. It is easy, of course, to apply HEP-specific selection or classification rules to event records and to label such an exercise 'data mining,' but our interest is different. Advanced statistical methods and tools such as classification, association rule mining, and cluster analysis are common outside the high energy physics community. These tools can prove useful, not for discovery physics, but for learning about our data, our detector, and our software. A fixed and relatively simple schema makes TAG export to other storage technologies such as HDF5 straightforward. This simplifies the task of exploiting very-large-scale parallel platforms such as Argonne National Laboratory's BlueGene/P, currently the largest supercomputer in the world for open science, in the development of scalable tools for data mining. Using a domain-neutral scientific data format may also enable us to take advantage of existing data mining components from other communities. There is, further, a substantial literature on the topic of one-pass algorithms and stream mining techniques, and such tools may be inserted naturally at various points in the event data processing and distribution chain. This paper describes early experience with event metadata records from ATLAS simulation and commissioning as a testbed for scalable data mining tool development and evaluation.
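
    Because the TAG schema is fixed and flat, the HDF5 export mentioned above amounts to writing a structured array. A sketch with h5py; the run/event/summary fields below form a made-up mini-schema, not the actual ATLAS TAG layout.

        import h5py
        import numpy as np

        tag_dtype = np.dtype([("run", np.uint32), ("event", np.uint64),
                              ("n_jets", np.uint16), ("met", np.float32)])

        tags = np.zeros(1000, dtype=tag_dtype)
        tags["run"] = 105200
        tags["event"] = np.arange(1000)
        tags["met"] = np.random.default_rng(1).exponential(25.0, 1000)

        with h5py.File("tags.h5", "w") as f:
            f.create_dataset("tags", data=tags, compression="gzip")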

  5. Robust multivariate analysis

    CERN Document Server

    J Olive, David

    2017-01-01

    This text presents methods that are robust to the assumption of a multivariate normal distribution or methods that are robust to certain types of outliers. Instead of using exact theory based on the multivariate normal distribution, the simpler and more applicable large sample theory is given.  The text develops among the first practical robust regression and robust multivariate location and dispersion estimators backed by theory.   The robust techniques  are illustrated for methods such as principal component analysis, canonical correlation analysis, and factor analysis.  A simple way to bootstrap confidence regions is also provided. Much of the research on robust multivariate analysis in this book is being published for the first time. The text is suitable for a first course in Multivariate Statistical Analysis or a first course in Robust Statistics. This graduate text is also useful for people who are familiar with the traditional multivariate topics, but want to know more about handling data sets with...

  6. Scalable Light Module for Low-Cost, High-Efficiency Light-Emitting Diode Luminaires

    Energy Technology Data Exchange (ETDEWEB)

    Tarsa, Eric [Cree, Inc., Goleta, CA (United States)

    2015-08-31

    During this two-year program Cree developed a scalable, modular optical architecture for low-cost, high-efficacy light emitting diode (LED) luminaires. Stated simply, the goal of this architecture was to efficiently and cost-effectively convey light from LEDs (point sources) to broad luminaire surfaces (area sources). By simultaneously developing warm-white LED components and low-cost, scalable optical elements, a high system optical efficiency resulted. To meet program goals, Cree evaluated novel approaches to improve LED component efficacy at high color quality while not sacrificing LED optical efficiency relative to conventional packages. Meanwhile, efficiently coupling light from LEDs into modular optical elements, followed by optimally distributing and extracting this light, were challenges that were addressed via novel optical design coupled with frequent experimental evaluations. Minimizing luminaire bill of materials and assembly costs were two guiding principles for all design work, in the effort to achieve luminaires with significantly lower normalized cost ($/klm) than existing LED fixtures. Chief project accomplishments included the achievement of >150 lm/W warm-white LEDs having primary optics compatible with low-cost modular optical elements. In addition, a prototype Light Module optical efficiency of over 90% was measured, demonstrating the potential of this scalable architecture for ultra-high-efficacy LED luminaires. Since the project ended, Cree has continued to evaluate optical element fabrication and assembly methods in an effort to rapidly transfer this scalable, cost-effective technology to Cree production development groups. The Light Module concept is likely to make a strong contribution to the development of new cost-effective, high-efficacy luminaires, thereby accelerating widespread adoption of energy-saving SSL in the U.S.

  7. Architecture and Implementation of a Scalable Sensor Data Storage and Analysis System Using Cloud Computing and Big Data Technologies

    Directory of Open Access Journals (Sweden)

    Galip Aydin

    2015-01-01

    Sensors are becoming ubiquitous. From almost any type of industrial applications to intelligent vehicles, smart city applications, and healthcare applications, we see a steady growth of the usage of various types of sensors. The rate of increase in the amount of data produced by these sensors is much more dramatic since sensors usually continuously produce data. It becomes crucial for these data to be stored for future reference and to be analyzed for finding valuable information, such as fault diagnosis information. In this paper we describe a scalable and distributed architecture for sensor data collection, storage, and analysis. The system uses several open source technologies and runs on a cluster of virtual servers. We use GPS sensors as data source and run machine-learning algorithms for data analysis.

  8. Scalable force directed graph layout algorithms using fast multipole methods

    KAUST Repository

    Yunis, Enas Abdulrahman; Yokota, Rio; Ahmadia, Aron

    2012-01-01

    We present an extension to ExaFMM, a Fast Multipole Method library, as a generalized approach for fast and scalable execution of the Force-Directed Graph Layout algorithm. The Force-Directed Graph Layout algorithm is a physics-based approach

  9. ESVD: An Integrated Energy Scalable Framework for Low-Power Video Decoding Systems

    Directory of Open Access Journals (Sweden)

    Wen Ji

    2010-01-01

    Video applications on mobile wireless devices are a challenging task due to the limited capacity of batteries. The highly complex functionality of video decoding imposes high resource requirements. Thus, power-efficient control has become a more critical design concern as devices integrate complex video processing techniques. Previous works on power-efficient control in video decoding systems often aim at low-complexity design, do not explicitly consider the scalable impact of subfunctions in the decoding process, and seldom consider the relationship with the features of the compressed video data. This paper is dedicated to developing an energy-scalable video decoding (ESVD) strategy for energy-limited mobile terminals. First, ESVD can dynamically adapt to variable energy resources thanks to a device-aware technique. Second, ESVD combines decoder control with the decoded data by classifying the data into different partition profiles according to its characteristics. Third, it introduces utility-theoretic analysis into the resource allocation process, so as to maximize resource utilization. Finally, it adapts the energy resource to different energy budgets and generates scalable video decoding output on energy-limited systems. Experimental results demonstrate the efficiency of the proposed approach.

  10. Robustness of Structures

    DEFF Research Database (Denmark)

    Faber, Michael Havbro; Vrouwenvelder, A.C.W.M.; Sørensen, John Dalsgaard

    2011-01-01

    In 2005, the Joint Committee on Structural Safety (JCSS) together with Working Commission (WC) 1 of the International Association of Bridge and Structural Engineering (IABSE) organized a workshop on robustness of structures. Two important decisions resulted from this workshop, namely the development of a joint European project on structural robustness under the COST (European Cooperation in Science and Technology) programme and the decision to develop a more elaborate document on structural robustness in collaboration between experts from the JCSS and the IABSE. Accordingly, a project titled ‘COST TU0601: Robustness of Structures’ was initiated in February 2007, aiming to provide a platform for exchanging and promoting research in the area of structural robustness and to provide a basic framework, together with methods, strategies and guidelines enhancing robustness of structures......

  11. Ultracold molecules: vehicles to scalable quantum information processing

    International Nuclear Information System (INIS)

    Brickman Soderberg, Kathy-Anne; Gemelke, Nathan; Chin Cheng

    2009-01-01

    In this paper, we describe a novel scheme to implement scalable quantum information processing using Li-Cs molecular states to entangle ⁶Li and ¹³³Cs ultracold atoms held in independent optical lattices. The ⁶Li atoms will act as quantum bits to store information and ¹³³Cs atoms will serve as messenger bits that aid in quantum gate operations and mediate entanglement between distant qubit atoms. Each atomic species is held in a separate optical lattice and the atoms can be overlapped by translating the lattices with respect to each other. When the messenger and qubit atoms are overlapped, targeted single-spin operations and entangling operations can be performed by coupling the atomic states to a molecular state with radio-frequency pulses. By controlling the frequency and duration of the radio-frequency pulses, entanglement can be either created or swapped between a qubit messenger pair. We estimate operation fidelities for entangling two distant qubits and discuss scalability of this scheme and constraints on the optical lattice lasers. Finally we demonstrate experimental control of the optical potentials sufficient to translate atoms in the lattice.

  12. Silicon nanophotonics for scalable quantum coherent feedback networks

    International Nuclear Information System (INIS)

    Sarovar, Mohan; Brif, Constantin; Soh, Daniel B.S.; Cox, Jonathan; DeRose, Christopher T.; Camacho, Ryan; Davids, Paul

    2016-01-01

    The emergence of coherent quantum feedback control (CQFC) as a new paradigm for precise manipulation of dynamics of complex quantum systems has led to the development of efficient theoretical modeling and simulation tools and opened avenues for new practical implementations. This work explores the applicability of the integrated silicon photonics platform for implementing scalable CQFC networks. If proven successful, on-chip implementations of these networks would provide scalable and efficient nanophotonic components for autonomous quantum information processing devices and ultra-low-power optical processing systems at telecommunications wavelengths. We analyze the strengths of the silicon photonics platform for CQFC applications and identify the key challenges to both the theoretical formalism and experimental implementations. In particular, we determine specific extensions to the theoretical CQFC framework (which was originally developed with bulk-optics implementations in mind), required to make it fully applicable to modeling of linear and nonlinear integrated optics networks. We also report the results of a preliminary experiment that studied the performance of an in situ controllable silicon nanophotonic network of two coupled cavities and analyze the properties of this device using the CQFC formalism. (orig.)

  13. Silicon nanophotonics for scalable quantum coherent feedback networks

    Energy Technology Data Exchange (ETDEWEB)

    Sarovar, Mohan; Brif, Constantin [Sandia National Laboratories, Livermore, CA (United States); Soh, Daniel B.S. [Sandia National Laboratories, Livermore, CA (United States); Stanford University, Edward L. Ginzton Laboratory, Stanford, CA (United States); Cox, Jonathan; DeRose, Christopher T.; Camacho, Ryan; Davids, Paul [Sandia National Laboratories, Albuquerque, NM (United States)

    2016-12-15

    The emergence of coherent quantum feedback control (CQFC) as a new paradigm for precise manipulation of dynamics of complex quantum systems has led to the development of efficient theoretical modeling and simulation tools and opened avenues for new practical implementations. This work explores the applicability of the integrated silicon photonics platform for implementing scalable CQFC networks. If proven successful, on-chip implementations of these networks would provide scalable and efficient nanophotonic components for autonomous quantum information processing devices and ultra-low-power optical processing systems at telecommunications wavelengths. We analyze the strengths of the silicon photonics platform for CQFC applications and identify the key challenges to both the theoretical formalism and experimental implementations. In particular, we determine specific extensions to the theoretical CQFC framework (which was originally developed with bulk-optics implementations in mind), required to make it fully applicable to modeling of linear and nonlinear integrated optics networks. We also report the results of a preliminary experiment that studied the performance of an in situ controllable silicon nanophotonic network of two coupled cavities and analyze the properties of this device using the CQFC formalism. (orig.)

  14. A Robust Photogrammetric Processing Method of Low-Altitude UAV Images

    Directory of Open Access Journals (Sweden)

    Mingyao Ai

    2015-02-01

    Low-altitude Unmanned Aerial Vehicle (UAV) images, which exhibit distortion, illumination variance, and large rotation angles, pose multiple challenges for image orientation and image processing. In this paper, a robust and convenient photogrammetric approach is proposed for processing low-altitude UAV images, involving a strip management method to automatically build a standardized regional aerial triangulation (AT) network, a parallel inner orientation algorithm, a ground control point (GCP) prediction method, and an improved Scale Invariant Feature Transform (SIFT) method to produce a large number of evenly distributed, reliable tie points for bundle adjustment (BA). A multi-view matching approach is improved to produce Digital Surface Models (DSM) and Digital Orthophoto Maps (DOM) for 3D visualization. Experimental results show that the proposed approach is robust and feasible for photogrammetric processing of low-altitude UAV images and 3D visualization of products.
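
    The tie-point step reduces to keypoint detection, descriptor matching, and filtering out unreliable ties. Below is a baseline OpenCV version with plain SIFT and Lowe's ratio test, without the paper's improvements; the file names are placeholders.

        import cv2

        img1 = cv2.imread("uav_frame_001.jpg", cv2.IMREAD_GRAYSCALE)
        img2 = cv2.imread("uav_frame_002.jpg", cv2.IMREAD_GRAYSCALE)

        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)

        matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)

        # Lowe's ratio test: keep a match only if it clearly beats the runner-up,
        # suppressing ambiguous ties before they reach bundle adjustment.
        good = [p[0] for p in matches
                if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
        print(f"{len(good)} candidate tie points")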

  15. Scalable Coating and Properties of Transparent, Flexible, Silver Nanowire Electrodes

    KAUST Repository

    Hu, Liangbing

    2010-05-25

    We report a comprehensive study of transparent and conductive silver nanowire (Ag NW) electrodes, including a scalable fabrication process, morphologies, and optical, mechanical adhesion, and flexibility properties, and various routes to improve the performance. We utilized a synthesis specifically designed for long and thin wires for improved performance in terms of sheet resistance and optical transmittance. Sheet resistances of 20 Ω/sq with ∼80% specular transmittance, and 8 Ω/sq with 80% diffusive transmittance in the visible range are achieved, which fall in the same range as the best indium tin oxide (ITO) samples on plastic substrates for flexible electronics and solar cells. The Ag NW electrodes show optical transparencies superior to ITO at near-infrared wavelengths (2-fold higher transmission). Owing to light scattering effects, the Ag NW network has the largest difference between diffusive transmittance and specular transmittance when compared with ITO and carbon nanotube electrodes, a property which could greatly enhance solar cell performance. A mechanical study shows that Ag NW electrodes on flexible substrates show excellent robustness when subjected to bending. We also study the electrical conductance of Ag nanowires and their junctions and report a facile electrochemical method for applying a Au coating to reduce the wire-to-wire junction resistance for better overall film conductance. Simple mechanical pressing was also found to increase the NW film conductance due to the reduction of junction resistance. The overall properties of transparent Ag NW electrodes meet the requirements of transparent electrodes for many applications and could be an immediate ITO replacement for flexible electronics and solar cells. © 2010 American Chemical Society.
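
    The reported sheet-resistance/transmittance pairs can be compared through the conventional figure of merit for transparent conductors, T = (1 + Z0/(2·Rs) · σop/σdc)^-2 with Z0 = 377 Ω. This relation is the standard thin-film one, not taken from the paper; the sketch below just inverts it for the quoted numbers.

        def fom(rs_ohm_per_sq, transmittance):
            z0 = 377.0   # free-space impedance, ohms
            return (z0 / (2.0 * rs_ohm_per_sq)) / (transmittance ** -0.5 - 1.0)

        print(f"20 ohm/sq at T = 0.80: sigma_dc/sigma_op ≈ {fom(20, 0.80):.0f}")
        print(f" 8 ohm/sq at T = 0.80: sigma_dc/sigma_op ≈ {fom(8, 0.80):.0f}")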

  16. Scalable Coating and Properties of Transparent, Flexible, Silver Nanowire Electrodes

    KAUST Repository

    Hu, Liangbing; Kim, Han Sun; Lee, Jung-Yong; Peumans, Peter; Cui, Yi

    2010-01-01

    We report a comprehensive study of transparent and conductive silver nanowire (Ag NW) electrodes, including a scalable fabrication process, morphologies, and optical, mechanical adhesion, and flexibility properties, and various routes to improve the performance. We utilized a synthesis specifically designed for long and thin wires for improved performance in terms of sheet resistance and optical transmittance. Sheet resistances of 20 Ω/sq with ∼80% specular transmittance, and 8 Ω/sq with 80% diffusive transmittance in the visible range are achieved, which fall in the same range as the best indium tin oxide (ITO) samples on plastic substrates for flexible electronics and solar cells. The Ag NW electrodes show optical transparencies superior to ITO at near-infrared wavelengths (2-fold higher transmission). Owing to light scattering effects, the Ag NW network has the largest difference between diffusive transmittance and specular transmittance when compared with ITO and carbon nanotube electrodes, a property which could greatly enhance solar cell performance. A mechanical study shows that Ag NW electrodes on flexible substrates show excellent robustness when subjected to bending. We also study the electrical conductance of Ag nanowires and their junctions and report a facile electrochemical method for applying a Au coating to reduce the wire-to-wire junction resistance for better overall film conductance. Simple mechanical pressing was also found to increase the NW film conductance due to the reduction of junction resistance. The overall properties of transparent Ag NW electrodes meet the requirements of transparent electrodes for many applications and could be an immediate ITO replacement for flexible electronics and solar cells. © 2010 American Chemical Society.

  17. Performance Evaluation and Robustness Testing of Advanced Oscilloscope Triggering Schemes

    Directory of Open Access Journals (Sweden)

    Shakeb A. KHAN

    2010-01-01

    In this paper, the performance and robustness of two advanced oscilloscope triggering schemes are evaluated. The problem of time-period measurement of complex waveforms can be solved using algorithms that utilize an associative-memory-network-based weighted Hamming distance (Whd) and autocorrelation-based techniques. The robustness of both advanced techniques is then evaluated by simulated addition of random noise of different levels to complex test signal waveforms, and the minimum value of Whd (Whdmin) and the peak value of the coefficient of correlation (COCmax) are computed over 10000 cycles of the selected test waveforms. The distance between the mean of the second-lowest values of Whd and Whdmin, and the distance between the second-highest value of the coefficient of correlation (COC) and COCmax, are used as parameters to analyze the robustness of the considered techniques. From the results, it is found that both techniques are capable of producing trigger pulses efficiently, but the correlation-based technique is found to be better from a robustness point of view.
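
    The autocorrelation-based scheme boils down to locating the first strong off-zero peak of the ACF, whose lag is the waveform period. The sketch below recovers the period of a noisy square wave this way; the waveform, noise level, and peak-picking rule are synthetic stand-ins for the paper's test setup.

        import numpy as np

        fs, f0 = 100_000, 440.0                   # sample rate and true frequency, Hz
        t = np.arange(8192) / fs
        rng = np.random.default_rng(7)
        x = np.sign(np.sin(2 * np.pi * f0 * t)) + 0.3 * rng.standard_normal(t.size)

        x = x - x.mean()
        acf = np.correlate(x, x, mode="full")[x.size - 1:]    # one-sided ACF
        first_neg = int(np.argmax(acf < 0))                   # end of the zero-lag lobe
        period = first_neg + int(np.argmax(acf[first_neg:]))  # lag of the next peak
        print(f"estimated {fs / period:.1f} Hz, true {f0} Hz")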

  18. Using H∞ to design robust POD controllers for wind power plants

    DEFF Research Database (Denmark)

    Mehmedalic, Jasmin; Knüppel, Thyge; Østergaard, Jacob

    2012-01-01

    Large wind power plants (WPPs) can help to improve small-signal stability by increasing the damping of electromechanical modes of oscillation. This can be done by adding a power system oscillation damping (POD) controller to the wind power plants, similar to power system stabilizer (PSS) controllers on conventional generation. Here, two different design methods are evaluated for their suitability in producing a robust power system oscillation damping controller for wind power plants with full-load converter wind turbine generators (WTGs). Controllers are designed using classic PSS design and H∞ methods, and the designed controllers are evaluated on both performance and robustness. It is found that the choice of control signal has a large influence on the robustness of the controllers, and the best performance and robustness are found when the converter active power command is used as control signal...

  19. Tip-Based Nanofabrication for Scalable Manufacturing

    Directory of Open Access Journals (Sweden)

    Huan Hu

    2017-03-01

    Tip-based nanofabrication (TBN) is a family of emerging nanofabrication techniques that use a nanometer-scale tip to fabricate nanostructures. In this review, we first introduce the history of TBN and its technology development. We then briefly review various TBN techniques that use different physical or chemical mechanisms to fabricate features and discuss some of the state-of-the-art techniques. Subsequently, we focus on those TBN methods that have demonstrated potential to scale up the manufacturing throughput. Finally, we discuss several research directions that are essential for making TBN a scalable nano-manufacturing technology.

  20. Tip-Based Nanofabrication for Scalable Manufacturing

    International Nuclear Information System (INIS)

    Hu, Huan; Somnath, Suhas

    2017-01-01

    Tip-based nanofabrication (TBN) is a family of emerging nanofabrication techniques that use a nanometer scale tip to fabricate nanostructures. Here in this review, we first introduce the history of the TBN and the technology development. We then briefly review various TBN techniques that use different physical or chemical mechanisms to fabricate features and discuss some of the state-of-the-art techniques. Subsequently, we focus on those TBN methods that have demonstrated potential to scale up the manufacturing throughput. Finally, we discuss several research directions that are essential for making TBN a scalable nano-manufacturing technology.

  1. A Simple and Robust Method for Culturing Human-Induced Pluripotent Stem Cells in an Undifferentiated State Using Botulinum Hemagglutinin.

    Science.gov (United States)

    Kim, Mee-Hae; Matsubara, Yoshifumi; Fujinaga, Yukako; Kino-Oka, Masahiro

    2018-02-01

    Clinical and industrial application of human-induced pluripotent stem cells (hiPSCs) is hindered by the lack of robust culture strategies capable of sustaining a culture in an undifferentiated state. Here, a simple and robust hiPSC-culture-propagation strategy incorporating botulinum hemagglutinin (HA)-mediated selective removal of cells deviating from an undifferentiated state is developed. After HA treatment, cell-cell adhesion is disrupted, and deviated cells detach from the central region of the colony to subsequently form tight monolayer colonies following prolonged incubation. The authors find that the temporal and dose-dependent activity of HA regulates deviated-cell removal and recoverability after disruption of cell-cell adhesion in hiPSC colonies. The effects of HA are confirmed under all culture conditions examined, regardless of hiPSC line and feeder-dependent or -free culture conditions. After routine application of our HA-treatment paradigm for serial passages, hiPSCs maintain expression of pluripotency markers and readily form embryoid bodies expressing markers for all three germ layers. This method enables highly efficient culturing of hiPSCs and use of entire undifferentiated portions without having to pick deviated cells manually. This simple and readily reproducible culture strategy is a potentially useful tool for improving the robust and scalable maintenance of undifferentiated hiPSC cultures. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Semantic Models for Scalable Search in the Internet of Things

    Directory of Open Access Journals (Sweden)

    Dennis Pfisterer

    2013-03-01

    The Internet of Things is anticipated to connect billions of embedded devices equipped with sensors to perceive their surroundings. Thereby, the state of the real world will be available online and in real-time and can be combined with other data and services in the Internet to realize novel applications such as Smart Cities, Smart Grids, or Smart Healthcare. This requires an open representation of sensor data and scalable search over data from diverse sources including sensors. In this paper we show how the Semantic Web technologies RDF (an open semantic data format) and SPARQL (a query language for RDF-encoded data) can be used to address those challenges. In particular, we describe how prediction models can be employed for scalable sensor search, how these prediction models can be encoded as RDF, and how the models can be queried by means of SPARQL.
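
    A minimal version of the RDF/SPARQL pattern described above, using rdflib: sensor state is encoded as triples and retrieved with a SPARQL query. The namespace and property names are invented for the example, not taken from the paper.

        from rdflib import Graph, Literal, Namespace

        EX = Namespace("http://example.org/iot/")
        g = Graph()
        g.add((EX.sensor42, EX.observes, EX.temperature))
        g.add((EX.sensor42, EX.hasValue, Literal(21.5)))
        g.add((EX.sensor42, EX.locatedIn, EX.room101))

        query = """
        PREFIX ex: <http://example.org/iot/>
        SELECT ?sensor ?value WHERE {
            ?sensor ex:observes ex:temperature ;
                    ex:hasValue ?value ;
                    ex:locatedIn ex:room101 .
        }"""
        for sensor, value in g.query(query):
            print(sensor, value)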

  3. AutoBD: Automated Bi-Level Description for Scalable Fine-Grained Visual Categorization.

    Science.gov (United States)

    Yao, Hantao; Zhang, Shiliang; Yan, Chenggang; Zhang, Yongdong; Li, Jintao; Tian, Qi

    Compared with traditional image classification, fine-grained visual categorization is a more challenging task, because it targets to classify objects belonging to the same species, e.g., classify hundreds of birds or cars. In the past several years, researchers have made many achievements on this topic. However, most of them are heavily dependent on artificial annotations, e.g., bounding boxes, part annotations, and so on. The requirement of artificial annotations largely hinders scalability and application. Motivated to release such dependence, this paper proposes a robust and discriminative visual description named Automated Bi-level Description (AutoBD). "Bi-level" denotes two complementary part-level and object-level visual descriptions, respectively. AutoBD is "automated," because it only requires the image-level labels of training images and does not need any annotations for testing images. Compared with the part annotations labeled by humans, image-level labels can be easily acquired, which thus makes AutoBD suitable for large-scale visual categorization. Specifically, the part-level description is extracted by identifying the local region that saliently represents the visual distinctiveness. The object-level description is extracted from object bounding boxes generated with a co-localization algorithm. Although using only the image-level labels, AutoBD outperforms recent studies on two public benchmarks, i.e., classification accuracy reaches 81.6% on CUB-200-2011 and 88.9% on Car-196, respectively. On the large-scale Birdsnap data set, AutoBD achieves an accuracy of 68%, which is currently the best performance to the best of our knowledge.

  4. LoRa Scalability: A Simulation Model Based on Interference Measurements

    Directory of Open Access Journals (Sweden)

    Jetmir Haxhibeqiri

    2017-05-01

    LoRa is a long-range, low power, low bit rate and single-hop wireless communication technology. It is intended to be used in Internet of Things (IoT) applications involving battery-powered devices with low throughput requirements. A LoRaWAN network consists of multiple end nodes that communicate with one or more gateways. These gateways act like a transparent bridge towards a common network server. The number of end devices and their throughput requirements will have an impact on the performance of the LoRaWAN network. This study investigates the scalability in terms of the number of end devices per gateway of single-gateway LoRaWAN deployments. First, we determine the intra-technology interference behavior with two physical end nodes, by checking the impact of an interfering node on a transmitting node. Measurements show that even under concurrent transmission, one of the packets can be received under certain conditions. Based on these measurements, we create a simulation model for assessing the scalability of a single gateway LoRaWAN network. We show that when the number of nodes increases up to 1000 per gateway, the losses will be up to 32%. In such a case, pure Aloha will have around 90% losses. However, when the duty cycle of the application layer becomes lower than the allowed radio duty cycle of 1%, losses will be even lower. We also show network scalability simulation results for some IoT use cases based on real data.

  5. Scalable quality assurance for large SNOMED CT hierarchies using subject-based subtaxonomies.

    Science.gov (United States)

    Ochs, Christopher; Geller, James; Perl, Yehoshua; Chen, Yan; Xu, Junchuan; Min, Hua; Case, James T; Wei, Zhi

    2015-05-01

    Standards terminologies may be large and complex, making their quality assurance challenging. Some terminology quality assurance (TQA) methodologies are based on abstraction networks (AbNs), compact terminology summaries. We have tested AbNs and the performance of related TQA methodologies on small terminology hierarchies. However, some standards terminologies, for example, SNOMED, are composed of very large hierarchies. Scaling AbN TQA techniques to such hierarchies poses a significant challenge. We present a scalable subject-based approach for AbN TQA. An innovative technique is presented for scaling TQA by creating a new kind of subject-based AbN called a subtaxonomy for large hierarchies. New hypotheses about concentrations of erroneous concepts within the AbN are introduced to guide scalable TQA. We test the TQA methodology for a subject-based subtaxonomy for the Bleeding subhierarchy in SNOMED's large Clinical finding hierarchy. To test the error concentration hypotheses, three domain experts reviewed a sample of 300 concepts. A consensus-based evaluation identified 87 erroneous concepts. The subtaxonomy-based TQA methodology was shown to uncover statistically significantly more erroneous concepts when compared to a control sample. The scalability of TQA methodologies is a challenge for large standards systems like SNOMED. We demonstrated innovative subject-based TQA techniques by identifying groups of concepts with a higher likelihood of having errors within the subtaxonomy. Scalability is achieved by reviewing a large hierarchy by subject. An innovative methodology for scaling the derivation of AbNs and a TQA methodology was shown to perform successfully for the largest hierarchy of SNOMED.

  6. SRC: FenixOS - A Research Operating System Focused on High Scalability and Reliability

    DEFF Research Database (Denmark)

    Passas, Stavros; Karlsson, Sven

    2011-01-01

    Computer systems keep increasing in size. Systems scale in the number of processing units, memories and peripheral devices. This creates many and diverse architectural trade-offs that the existing operating systems are not able to address. We are designing and implementing FenixOS, a new operating system that aims to improve the state of the art in scalability and reliability. We achieve scalability through limiting data sharing when possible, and through extensive use of lock-free data structures. Reliability is addressed with a careful re-design of the programming interface and structure of the operating system.

  7. A Scalable Data Access Layer to Manage Structured Heterogeneous Biomedical Data.

    Directory of Open Access Journals (Sweden)

    Giovanni Delussu

    Full Text Available This work presents a scalable data access layer, called PyEHR, designed to support the implementation of data management systems for secondary use of structured heterogeneous biomedical and clinical data. PyEHR adopts the openEHR's formalisms to guarantee the decoupling of data descriptions from implementation details and exploits structure indexing to accelerate searches. Data persistence is guaranteed by a driver layer with a common driver interface. Interfaces for two NoSQL Database Management Systems are already implemented: MongoDB and Elasticsearch. We evaluated the scalability of PyEHR experimentally through two types of tests, called "Constant Load" and "Constant Number of Records", with queries of increasing complexity on synthetic datasets of ten million records each, containing very complex openEHR archetype structures, distributed on up to ten computing nodes.

  8. A Scalable Data Access Layer to Manage Structured Heterogeneous Biomedical Data

    Science.gov (United States)

    Lianas, Luca; Frexia, Francesca; Zanetti, Gianluigi

    2016-01-01

    This work presents a scalable data access layer, called PyEHR, designed to support the implementation of data management systems for secondary use of structured heterogeneous biomedical and clinical data. PyEHR adopts the openEHR’s formalisms to guarantee the decoupling of data descriptions from implementation details and exploits structure indexing to accelerate searches. Data persistence is guaranteed by a driver layer with a common driver interface. Interfaces for two NoSQL Database Management Systems are already implemented: MongoDB and Elasticsearch. We evaluated the scalability of PyEHR experimentally through two types of tests, called “Constant Load” and “Constant Number of Records”, with queries of increasing complexity on synthetic datasets of ten million records each, containing very complex openEHR archetype structures, distributed on up to ten computing nodes. PMID:27936191

  9. A Scalable Data Access Layer to Manage Structured Heterogeneous Biomedical Data.

    Science.gov (United States)

    Delussu, Giovanni; Lianas, Luca; Frexia, Francesca; Zanetti, Gianluigi

    2016-01-01

    This work presents a scalable data access layer, called PyEHR, designed to support the implementation of data management systems for secondary use of structured heterogeneous biomedical and clinical data. PyEHR adopts the openEHR's formalisms to guarantee the decoupling of data descriptions from implementation details and exploits structure indexing to accelerate searches. Data persistence is guaranteed by a driver layer with a common driver interface. Interfaces for two NoSQL Database Management Systems are already implemented: MongoDB and Elasticsearch. We evaluated the scalability of PyEHR experimentally through two types of tests, called "Constant Load" and "Constant Number of Records", with queries of increasing complexity on synthetic datasets of ten million records each, containing very complex openEHR archetype structures, distributed on up to ten computing nodes.

  10. Defining robustness protocols: a method to include and evaluate robustness in clinical plans

    International Nuclear Information System (INIS)

    McGowan, S E; Albertini, F; Lomax, A J; Thomas, S J

    2015-01-01

    We aim to define a site-specific robustness protocol to be used during the clinical plan evaluation process. Plan robustness of 16 skull-base IMPT plans to systematic range and random set-up errors has been retrospectively and systematically analysed. This was determined by calculating the error-bar dose distribution (ebDD) for all the plans and by defining metrics used to define protocols aiding the plan assessment. Additionally, an example of how to clinically use the defined robustness database is given, whereby a plan with sub-optimal brainstem robustness was identified. The advantage of using different beam arrangements to improve plan robustness was analysed. Using the ebDD it was found that range errors had a smaller effect on the dose distribution than the corresponding set-up errors in a single fraction, and that organs at risk were most robust to range errors, whereas the target was more robust to set-up errors. A database was created to aid planners in terms of plan robustness aims in these volumes. This resulted in the definition of site-specific robustness protocols. The use of robustness constraints allowed for the identification of a specific patient who may have benefited from a more individualized treatment. A new beam arrangement was shown to be preferable when balancing conformality and robustness for this case. The ebDD and error-bar volume histogram proved effective in analysing plan robustness. The process of retrospective analysis could be used to establish site-specific robustness planning protocols in proton therapy. These protocols allow the planner to identify plans that, although delivering a dosimetrically adequate dose distribution, have sub-optimal robustness to these uncertainties. For such cases the use of different beam start conditions may improve the plan robustness to set-up and range uncertainties. (paper)

  11. Defining robustness protocols: a method to include and evaluate robustness in clinical plans

    Science.gov (United States)

    McGowan, S. E.; Albertini, F.; Thomas, S. J.; Lomax, A. J.

    2015-04-01

    We aim to define a site-specific robustness protocol to be used during the clinical plan evaluation process. Plan robustness of 16 skull-base IMPT plans to systematic range and random set-up errors has been retrospectively and systematically analysed. This was determined by calculating the error-bar dose distribution (ebDD) for all the plans and by defining metrics used to define protocols aiding the plan assessment. Additionally, an example of how to clinically use the defined robustness database is given, whereby a plan with sub-optimal brainstem robustness was identified. The advantage of using different beam arrangements to improve plan robustness was analysed. Using the ebDD it was found that range errors had a smaller effect on the dose distribution than the corresponding set-up errors in a single fraction, and that organs at risk were most robust to range errors, whereas the target was more robust to set-up errors. A database was created to aid planners in terms of plan robustness aims in these volumes. This resulted in the definition of site-specific robustness protocols. The use of robustness constraints allowed for the identification of a specific patient who may have benefited from a more individualized treatment. A new beam arrangement was shown to be preferable when balancing conformality and robustness for this case. The ebDD and error-bar volume histogram proved effective in analysing plan robustness. The process of retrospective analysis could be used to establish site-specific robustness planning protocols in proton therapy. These protocols allow the planner to identify plans that, although delivering a dosimetrically adequate dose distribution, have sub-optimal robustness to these uncertainties. For such cases the use of different beam start conditions may improve the plan robustness to set-up and range uncertainties.

  12. Containment Domains: A Scalable, Efficient and Flexible Resilience Scheme for Exascale Systems

    Directory of Open Access Journals (Sweden)

    Jinsuk Chung

    2013-01-01

    Full Text Available This paper describes and evaluates a scalable and efficient resilience scheme based on the concept of containment domains. Containment domains are a programming construct that enables applications to express resilience needs and to interact with the system to tune and specialize error detection, state preservation and restoration, and recovery schemes. Containment domains have weak transactional semantics and are nested to take advantage of the machine and application hierarchies and to enable hierarchical state preservation, restoration and recovery. We evaluate the scalability and efficiency of containment domains using generalized trace-driven simulation and analytical modeling, and show that containment domains are superior to both checkpoint-restart and redundant-execution approaches.
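
    The construct lends itself to a compact illustration. The following is a schematic Python analogy, not the authors' runtime: a domain preserves state on entry and restores it to re-execute its body when an error is detected, escalating to the enclosing domain only after exhausting its retries.

        class ContainmentDomain:
            """Toy nested containment domain with preserve/restore/recover."""
            def __init__(self, state, retries=3):
                self.state, self.retries = state, retries

            def run(self, body):
                preserved = dict(self.state)          # hierarchical state preservation
                for _ in range(self.retries):
                    try:
                        return body(self.state)       # body may open nested domains
                    except RuntimeError:
                        self.state.clear()
                        self.state.update(preserved)  # restoration before re-execution
                raise RuntimeError("uncontained error: escalate to parent domain")

        # usage: a domain recovers locally before involving its parent
        outer = ContainmentDomain({"step": 0})
        outer.run(lambda s: s.update(step=1))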

  13. Scalable Metadata Management for a Large Multi-Source Seismic Data Repository

    Energy Technology Data Exchange (ETDEWEB)

    Gaylord, J. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Dodge, D. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Magana-Zook, S. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Barno, J. G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Knapp, D. R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Thomas, J. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sullivan, D. S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ruppert, S. D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Mellors, R. J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-05-26

    In this work, we implemented the key metadata management components of a scalable seismic data ingestion framework to address limitations in our existing system, and to position it for anticipated growth in volume and complexity.

  14. CA-LOD: Collision Avoidance Level of Detail for Scalable, Controllable Crowds

    Science.gov (United States)

    Paris, Sébastien; Gerdelan, Anton; O'Sullivan, Carol

    The new wave of computer-driven entertainment technology throws audiences and game players into massive virtual worlds where entire cities are rendered in real time. Computer-animated characters run through inner-city streets teeming with pedestrians, all fully rendered with 3D graphics, animations and particle effects, and linked to 3D sound effects to produce more realistic and immersive computer-hosted entertainment experiences than ever before. Computing all of this detail at once is enormously computationally expensive, and game designers, as a rule, have sacrificed behavioural realism in favour of better graphics. In this paper we propose a new Collision Avoidance Level of Detail (CA-LOD) algorithm that allows games to support huge crowds in real time with the appearance of more intelligent behaviour. We propose two collision avoidance models used for two different CA-LODs: fuzzy steering, focused on performance, and geometric steering, to obtain the best realism. Mixing these approaches makes it possible to obtain thousands of autonomous characters in real time, resulting in a scalable but still controllable crowd.

  15. A Numerical Study of Scalable Cardiac Electro-Mechanical Solvers on HPC Architectures

    Directory of Open Access Journals (Sweden)

    Piero Colli Franzone

    2018-04-01

    Full Text Available We introduce and study some scalable domain decomposition preconditioners for cardiac electro-mechanical 3D simulations on parallel HPC (High Performance Computing) architectures. The electro-mechanical model of the cardiac tissue is composed of four coupled sub-models: (1) the static finite elasticity equations for the transversely isotropic deformation of the cardiac tissue; (2) the active tension model describing the dynamics of the intracellular calcium, cross-bridge binding and myofilament tension; (3) the anisotropic Bidomain model describing the evolution of the intra- and extra-cellular potentials in the deforming cardiac tissue; and (4) the ionic membrane model describing the dynamics of ionic currents, gating variables, ionic concentrations and stretch-activated channels. This strongly coupled electro-mechanical model is discretized in time with a splitting semi-implicit technique and in space with isoparametric finite elements. The resulting scalable parallel solver is based on Multilevel Additive Schwarz preconditioners for the solution of the Bidomain system and on BDDC preconditioned Newton-Krylov solvers for the non-linear finite elasticity system. The results of several 3D parallel simulations show the scalability of both linear and non-linear solvers and their application to the study of both physiological excitation-contraction cardiac dynamics and re-entrant waves in the presence of different mechano-electrical feedbacks.

  16. Robust, Highly Scalable Solar Array System, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Solar array systems currently under development are focused on near-term missions with designs optimized for the 30-50 kW power range. However, NASA has a vital...

  17. Using self-similarity compensation for improving inter-layer prediction in scalable 3D holoscopic video coding

    Science.gov (United States)

    Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.

    2013-09-01

    Holoscopic imaging, also known as integral imaging, has recently been attracting the attention of the research community as a promising glassless 3D technology, due to its ability to create a more realistic depth illusion than the current stereoscopic or multiview solutions. However, in order to gradually introduce this technology into the consumer market and to efficiently deliver 3D holoscopic content to end-users, backward compatibility with legacy displays is essential. Consequently, to enable 3D holoscopic content to be delivered and presented on legacy displays, a display-scalable 3D holoscopic coding approach is required. Hence, this paper presents a display-scalable architecture for 3D holoscopic video coding with a three-layer approach, where each layer represents a different level of display scalability: Layer 0 - a single 2D view; Layer 1 - 3D stereo or multiview; and Layer 2 - the full 3D holoscopic content. In this context, a prediction method is proposed, which combines inter-layer prediction, aiming to exploit the existing redundancy between the multiview and the 3D holoscopic layers, with self-similarity compensated prediction (previously proposed by the authors for non-scalable 3D holoscopic video coding), aiming to exploit the spatial redundancy inherent to the 3D holoscopic enhancement layer. Experimental results show that the proposed combined prediction can significantly improve the rate-distortion performance of scalable 3D holoscopic video coding with respect to the authors' previously proposed solutions, where only inter-layer or only self-similarity prediction is used.

  18. Upgrade of the TOTEM DAQ using the Scalable Readout System (SRS)

    International Nuclear Information System (INIS)

    Quinto, M; Cafagna, F; Fiergolski, A; Radicioni, E

    2013-01-01

    The main goals of the TOTEM Experiment at the LHC are the measurements of the elastic and total p-p cross sections and the studies of the diffractive dissociation processes. At LHC, collisions are produced at a rate of 40 MHz, imposing strong requirements on the Data Acquisition Systems (DAQ) in terms of trigger rate and data throughput. The TOTEM DAQ adopts a modular approach that, in standalone mode, is based on the VME bus system. The VME-based Front End Driver (FED) modules host mezzanines that receive data through optical fibres directly from the detectors. After data checks and formatting are applied in the mezzanine, data are retransmitted to the VME interface and to another mezzanine card plugged into the FED module. The maximum bandwidth of the VME bus limits the first-level trigger (L1A) rate to 1 kHz. In order to remove the VME bottleneck and improve the scalability and overall capabilities of the DAQ, a new system was designed and built based on the Scalable Readout System (SRS), developed in the framework of the RD51 Collaboration. The project aims to increase the efficiency of the current readout system by providing higher bandwidth and increased data filtering, implementing a second-level trigger event selection based on hardware pattern-recognition algorithms. This goal is to be achieved while preserving maximum backward compatibility with the LHC Timing, Trigger and Control (TTC) system as well as with the CMS DAQ. The results obtained and the perspectives of the project are reported. In particular, we describe the system architecture and the new Opto-FEC adapter card developed to connect the SRS with the FED mezzanine modules. A first test bench was built and validated during the last TOTEM data-taking period (February 2013). Readout of a set of 3 TOTEM Roman Pot silicon detectors was carried out to verify performance in the real LHC environment. In addition, the test allowed a check of data consistency and quality.

  19. Methods for robustness programming

    NARCIS (Netherlands)

    Olieman, N.J.

    2008-01-01

    Robustness of an object is defined as the probability that an object will have properties as required. Robustness Programming (RP) is a mathematical approach for robustness estimation and robustness optimisation. An example in the context of designing a food product is finding the best composition

  20. Scalable fast multipole methods for vortex element methods

    KAUST Repository

    Hu, Qi

    2012-11-01

    We use a particle-based method to simulate incompressible flows, where the Fast Multipole Method (FMM) is used to accelerate the calculation of particle interactions. The most time-consuming kernels, the Biot-Savart equation and the stretching term of the vorticity equation, are mathematically reformulated so that only two Laplace scalar potentials are used instead of six, while automatically ensuring divergence-free far-field computation. Based on this formulation, and on our previous work on a scalar heterogeneous FMM algorithm, we develop a new FMM-based vortex method capable of simulating general flows including turbulence on heterogeneous architectures, which distributes the work between multi-core CPUs and GPUs to best utilize the hardware resources and achieve excellent scalability. The algorithm also uses new data structures which can dynamically manage inter-node communication and load balance efficiently, but with only a small parallel construction overhead. This algorithm can scale to large-sized clusters showing both strong and weak scalability. A careful error and timing trade-off analysis is also performed for the cutoff functions induced by the vortex particle method. Our implementation can perform one time step of the velocity+stretching calculation for one billion particles on 32 nodes in 55.9 seconds, which yields 49.12 Tflop/s. © 2012 IEEE.
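
    For reference, the direct all-pairs evaluation that the FMM approximates can be written down compactly. The sketch below uses a regularized Biot-Savart kernel with an illustrative smoothing radius; it stands in for the O(N^2) baseline, not the hierarchical algorithm itself.

        import numpy as np

        def biot_savart_direct(x, alpha, eps=1e-3):
            """Direct all-pairs Biot-Savart velocities for vortex particles:
            u_i = sum_j alpha_j x (x_i - x_j) / (4*pi*(|x_i - x_j|^2 + eps^2)^1.5).
            The FMM replaces this O(N^2) sum with a hierarchical approximation."""
            r = x[:, None, :] - x[None, :, :]               # (N, N, 3) separations
            d = (np.sum(r ** 2, axis=-1) + eps ** 2) ** 1.5  # regularized distances
            cross = np.cross(alpha[None, :, :], r)           # alpha_j x r_ij
            return (cross / d[:, :, None]).sum(axis=1) / (4 * np.pi)

        x = np.random.rand(100, 3)       # particle positions
        alpha = np.random.rand(100, 3)   # vortex strengths
        u = biot_savart_direct(x, alpha)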

  1. Research on Robust Control Strategies for VSC-HVDC

    Science.gov (United States)

    Zhu, Kaicheng; Bao, Hai

    2018-01-01

    In the control system of VSC-HVDC, the phase-locked loop (PLL) provides phase signals to the voltage vector control and trigger pulses to generate the required reference phase. The PLL is a typical second-order system. When the system is in an unstable state, it will oscillate, shift the trigger angle, produce harmonics, and couple the active and reactive power. Thus, considering the external disturbances introduced by the PLL in the VSC-HVDC control system, the parameter perturbations of the controller, and the model uncertainties, an H∞ robust controller solving a mixed-sensitivity optimization problem is designed using the hinf function provided by the robust control toolbox. It is then compared with a proportional-integral controller in MATLAB simulation experiments. The comparison shows that, with the H∞ robust controller, the active and reactive power of the converter station track changes of the reference values more accurately and quickly, and overshoot is reduced. When step changes of active and reactive power occur, their mutual influence is reduced and better independent regulation is achieved.
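
    As a hedged sketch of a mixed-sensitivity H∞ design of this general kind (using the open-source python-control package with slycot, rather than the MATLAB toolbox named in the abstract; the plant and weights below are illustrative assumptions, not from the paper):

        import control

        # Illustrative second-order nominal plant standing in for the
        # PLL-influenced control channel (numbers are assumptions).
        G = control.tf([1.0], [1.0, 0.8, 1.0])

        # W1 shapes the sensitivity S (disturbance rejection at low frequency),
        # W2 limits control effort, W3 shapes the complementary sensitivity T
        # (robustness to model uncertainty at high frequency).
        W1 = control.tf([0.5, 1.0], [10.0, 0.01])
        W2 = control.tf([0.1], [1.0])
        W3 = control.tf([10.0, 1.0], [1.0, 100.0])

        # Synthesize K minimizing || [W1*S; W2*K*S; W3*T] ||_inf
        K, CL, info = control.mixsyn(G, W1, W2, W3)
        print("achieved H-infinity norm (gamma):", info[0])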

  2. Temporal Scalability of Dynamic Volume Data using Mesh Compensated Wavelet Lifting.

    Science.gov (United States)

    Schnurrer, Wolfgang; Pallast, Niklas; Richter, Thomas; Kaup, Andre

    2017-10-12

    Due to their high resolution, dynamic medical 2D+t and 3D+t volumes from computed tomography (CT) and magnetic resonance tomography (MR) reach a size which makes them very unwieldy for teleradiologic applications. A lossless scalable representation offers the advantage of a down-scaled version which can be used for orientation or previewing, while the remaining information for reconstructing the full resolution is transmitted on demand. The wavelet transform offers the desired scalability. A very high quality of the lowpass sub-band is crucial in order to use it as a down-scaled representation. We propose an approach based on compensated wavelet lifting for obtaining a scalable representation of dynamic CT and MR volumes with very high quality. Mesh compensation is well suited to model the displacement in dynamic volumes, which is mainly given by expansion and contraction of tissue over time. To achieve this, we propose an optimized estimation of the mesh compensation parameters, tailored to dynamic volumes. Within the lifting structure, the inversion of the motion compensation is crucial in the update step. We propose to take this inversion directly into account during the estimation step, and can thereby improve the quality of the lowpass sub-band by 0.63 dB and 0.43 dB on average for our tested dynamic CT and MR volumes, at the cost of a rate increase of 2.4% and 1.2% on average.
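
    The predict/update structure with compensation can be summarized in a few lines. In this sketch the mesh compensation operator is stubbed out as the identity (the paper estimates deformation meshes instead), which keeps the perfect-reconstruction property easy to verify:

        import numpy as np

        def warp(frame):
            """Placeholder for mesh compensation; the actual method warps the
            frame with an estimated deformation mesh. Identity here."""
            return frame

        def lifting_step(f1, f2):
            # Haar-style compensated lifting between temporal neighbors
            highpass = f2 - warp(f1)             # predict step
            lowpass = f1 + 0.5 * warp(highpass)  # update step (uses the inverted compensation)
            return lowpass, highpass

        def inverse_lifting(lowpass, highpass):
            f1 = lowpass - 0.5 * warp(highpass)
            f2 = highpass + warp(f1)
            return f1, f2

        f1, f2 = np.random.rand(64, 64), np.random.rand(64, 64)
        L, H = lifting_step(f1, f2)
        assert all(np.allclose(a, b) for a, b in zip((f1, f2), inverse_lifting(L, H)))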

  3. Sustainability and scalability of a volunteer-based primary care intervention (Health TAPESTRY): a mixed-methods analysis.

    Science.gov (United States)

    Kastner, Monika; Sayal, Radha; Oliver, Doug; Straus, Sharon E; Dolovich, Lisa

    2017-08-01

    Chronic diseases are a significant public health concern, particularly in older adults. To address the delivery of health care services to optimally meet the needs of older adults with multiple chronic diseases, Health TAPESTRY (Teams Advancing Patient Experience: Strengthening Quality) uses a novel approach that involves patient home visits by trained volunteers to collect and transmit relevant health information using e-health technology to inform appropriate care from an inter-professional healthcare team. Health TAPESTRY was implemented, pilot tested, and evaluated in a randomized controlled trial (analysis underway). Knowledge translation (KT) interventions such as Health TAPESTRY should involve an investigation of their sustainability and scalability determinants to inform further implementation. However, this is seldom considered in research, or considered early enough, so the objectives of this study were to assess the sustainability and scalability potential of Health TAPESTRY from the perspective of the team who developed and pilot-tested it. Our objectives were addressed using a sequential mixed-methods approach involving the administration of a validated sustainability survey developed by the National Health Service (NHS) to all members of the Health TAPESTRY team who were actively involved in the development, implementation and pilot evaluation of the intervention (Phase 1: n = 38). Mean sustainability scores were calculated to identify the best potential for improvement across sustainability factors. Phase 2 was a qualitative study of interviews with purposively selected Health TAPESTRY team members to gain a more in-depth understanding of the factors that influence the sustainability and scalability of Health TAPESTRY. Two independent reviewers coded transcribed interviews and completed a multi-step thematic analysis. Outcomes were participant perceptions of the determinants influencing the sustainability and scalability of Health TAPESTRY. Twenty

  4. Computational scalability of large size image dissemination

    Science.gov (United States)

    Kooper, Rob; Bajcsy, Peter

    2011-01-01

    We have investigated the computational scalability of image pyramid building needed for dissemination of very large image data. The sources of large images include high-resolution microscopes and telescopes, remote sensing and airborne imaging, and high-resolution scanners. The term 'large' is understood from a user perspective: either larger than a display size or larger than the memory/disk available to hold the image data. The application drivers for our work are digitization projects such as the Lincoln Papers project (each image scan is about 100-150 MB, or about 5000x8000 pixels, with the total number around 200,000) and the UIUC library scanning project for historical maps from the 17th and 18th centuries (a smaller number, but larger images). The goal of our work is to understand the computational scalability of web-based dissemination using image pyramids for these large image scans, as well as the preservation aspects of the data. We report our computational benchmarks for (a) building image pyramids to be disseminated using the Microsoft Seadragon library, (b) a computation execution approach using hyper-threading to generate image pyramids and to utilize the underlying hardware, and (c) an image pyramid preservation approach using various hard drive configurations of Redundant Array of Independent Disks (RAID) drives for input/output operations. The benchmarks were obtained with a map (334.61 MB, JPEG format, 17591x15014 pixels). The discussion combines the speed and preservation objectives.
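
    The pyramid construction being benchmarked follows the usual halve-until-it-fits pattern. A minimal Pillow-based sketch is given below; the tile size, file naming, and output layout are illustrative assumptions, not the Seadragon format itself.

        from PIL import Image

        def build_pyramid(path, tile=256):
            """Write tiles for successively halved resolutions until the
            whole image fits in a single tile."""
            img = Image.open(path)
            level = 0
            while max(img.size) > tile:
                for y in range(0, img.height, tile):
                    for x in range(0, img.width, tile):
                        img.crop((x, y, x + tile, y + tile)).save(
                            f"level{level}_x{x}_y{y}.png")
                img = img.resize((max(1, img.width // 2), max(1, img.height // 2)))
                level += 1
            img.save(f"level{level}_base.png")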

  5. Robustness of Structural Systems

    DEFF Research Database (Denmark)

    Canisius, T.D.G.; Sørensen, John Dalsgaard; Baker, J.W.

    2007-01-01

    The importance of robustness as a property of structural systems has been recognised following several structural failures, such as that at Ronan Point in 1968, where the consequences were deemed unacceptable relative to the initiating damage. A variety of research efforts in the past decades have attempted to quantify aspects of robustness such as redundancy and identify design principles that can improve robustness. This paper outlines the progress of recent work by the Joint Committee on Structural Safety (JCSS) to develop comprehensive guidance on assessing and providing robustness in structural systems. Guidance is provided regarding the assessment of robustness in a framework that considers potential hazards to the system, vulnerability of system components, and failure consequences. Several proposed methods for quantifying robustness are reviewed, and guidelines for robust design ...

  6. Blind Cooperative Routing for Scalable and Energy-Efficient Internet of Things

    KAUST Repository

    Bader, Ahmed

    2016-02-26

    Multihop networking is promoted in this paper for energy-efficient and highly-scalable Internet of Things (IoT). Recognizing concerns related to the scalability of classical multihop routing and medium access techniques, the use of blind cooperation in conjunction with multihop communications is advocated. Blind cooperation, however, is shown to be inefficient unless power control is applied. Efficiency here is measured in terms of the transport rate normalized to energy consumption. To that end, an uncoordinated power control mechanism is proposed whereby each device in a blind cooperative cluster randomly adjusts its transmit power level. An upper bound is derived for the mean transmit power that must be observed at each device. Finally, the uncoordinated power control mechanism is demonstrated to consistently outperform the simple point-to-point routing case. © 2015 IEEE.

  7. Robustness in laying hens

    NARCIS (Netherlands)

    Star, L.

    2008-01-01

    The aim of the project ‘The genetics of robustness in laying hens’ was to investigate the nature and regulation of robustness in laying hens under sub-optimal conditions, and the possibility of increasing robustness by animal breeding without loss of production. At the start of the project, a robust

  8. VPLS: an effective technology for building scalable transparent LAN services

    Science.gov (United States)

    Dong, Ximing; Yu, Shaohua

    2005-02-01

    Virtual Private LAN Service (VPLS) is generating considerable interest with enterprises and service providers, as it offers multipoint transparent LAN service (TLS) over MPLS networks. This paper describes an effective technology, VPLS, which links virtual switch instances (VSIs) through MPLS to form an emulated Ethernet switch and build scalable transparent LAN services. It first focuses on the architecture of VPLS, with Ethernet bridging at the edge and MPLS at the core; it then elucidates the data forwarding mechanism within a VPLS domain, including learning and aging MAC addresses on a per-LSP basis, flooding of unknown frames, and replication of unknown, multicast, and broadcast frames. The loop-avoidance mechanism, known as split-horizon forwarding, is also analyzed. Another important aspect of the VPLS service, its basic operation, including autodiscovery and signaling, is discussed as well. From the perspective of efficiency and scalability, the paper compares two important signaling mechanisms, BGP and LDP, which are used to set up a PW between the PEs and bind the PWs to a particular VSI. As a VPLS grows, the full mesh of PWs between PE devices (n*(n-1)/2 PWs in all, an O(n^2) problem) means a VPLS instance can have a large number of remote PE associations, resulting in inefficient use of network bandwidth and system resources, as the ingress PE has to replicate each frame and append MPLS labels for each remote PE. The latter part of this paper therefore focuses on the scalability issue: Hierarchical VPLS. Within the architecture of HVPLS, this paper addresses two ways to cope with a possibly large number of MAC addresses, which make VPLS operate more efficiently.
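
    The quadratic pseudowire growth and the hierarchical remedy are easy to quantify; a small sketch with an illustrative hub/spoke split:

        def full_mesh_pws(n):
            # every PE pair in a flat VPLS needs its own pseudowire
            return n * (n - 1) // 2

        def hvpls_pws(hubs, spokes_per_hub):
            # full mesh among hub PEs only, plus one spoke PW per access PE
            return full_mesh_pws(hubs) + hubs * spokes_per_hub

        print(full_mesh_pws(100))   # 4950 PWs for a flat 100-PE VPLS
        print(hvpls_pws(10, 9))     # 135 PWs for the same 100 PEs, hierarchical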

  9. Approaches for scalable modeling and emulation of cyber systems : LDRD final report.

    Energy Technology Data Exchange (ETDEWEB)

    Mayo, Jackson R.; Minnich, Ronald G.; Armstrong, Robert C.; Rudish, Don W.

    2009-09-01

    The goal of this research was to combine theoretical and computational approaches to better understand the potential emergent behaviors of large-scale cyber systems, such as networks of ~10^6 computers. The scale and sophistication of modern computer software, hardware, and deployed networked systems have significantly exceeded the computational research community's ability to understand, model, and predict current and future behaviors. This predictive understanding, however, is critical to the development of new approaches for proactively designing new systems or enhancing existing systems with robustness to current and future cyber threats, including distributed malware such as botnets. We have developed preliminary theoretical and modeling capabilities that can ultimately answer questions such as: How would we reboot the Internet if it were taken down? Can we change network protocols to make them more secure without disrupting existing Internet connectivity and traffic flow? We have begun to address these issues by developing new capabilities for understanding and modeling Internet systems at scale. Specifically, we have addressed the need for scalable network simulation by carrying out emulations of a network with ~10^6 virtualized operating system instances on a high-performance computing cluster - a 'virtual Internet'. We have also explored mappings between previously studied emergent behaviors of complex systems and their potential cyber counterparts. Our results provide foundational capabilities for further research toward understanding the effects of complexity in cyber systems, to allow anticipating and thwarting hackers.

  10. Microscopic Characterization of Scalable Coherent Rydberg Superatoms

    Directory of Open Access Journals (Sweden)

    Johannes Zeiher

    2015-08-01

    Full Text Available Strong interactions can amplify quantum effects such that they become important on macroscopic scales. Controlling these coherently on a single-particle level is essential for the tailored preparation of strongly correlated quantum systems and opens up new prospects for quantum technologies. Rydberg atoms offer such strong interactions, which lead to extreme nonlinearities in laser-coupled atomic ensembles. As a result, multiple excitation of a micrometer-sized cloud can be blocked while the light-matter coupling becomes collectively enhanced. The resulting two-level system, often called a "superatom," is a valuable resource for quantum information, providing a collective qubit. Here, we report on the preparation of superatoms scalable over 2 orders of magnitude, utilizing the large interaction strength provided by Rydberg atoms combined with precise control of an ensemble of ultracold atoms in an optical lattice. The latter is achieved with sub-shot-noise precision by local manipulation of a two-dimensional Mott insulator. We microscopically confirm the superatom picture by in situ detection of the Rydberg excitations and observe the characteristic square-root scaling of the optical coupling with the number of atoms. Enabled by the full control over the atomic sample, including the motional degrees of freedom, we infer the overlap of the produced many-body state with a W state from the observed Rabi oscillations and deduce the presence of entanglement. Finally, we investigate the breakdown of the superatom picture when two Rydberg excitations are present in the system, which leads to dephasing and a loss of coherence.
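
    The square-root scaling observed here is the textbook collective enhancement of the light-matter coupling (a standard relation, stated for context rather than taken from the paper's fitted data):

        \Omega_N = \sqrt{N}\,\Omega_0,

    where \Omega_0 is the single-atom Rabi frequency and N is the number of atoms sharing one blockade volume.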

  11. Applicability of biotechnologically produced insect silks.

    Science.gov (United States)

    Herold, Heike M; Scheibel, Thomas

    2017-09-26

    Silks are structural proteins produced by arthropods. Besides the well-known cocoon silk, which is produced by larvae of the silk moth Bombyx mori to undergo metamorphosis inside their silken shelter (and which has also been used by humans for textile production for millennia), numerous other, less well-known silk-producing animals exist. The ability to produce silk evolved multiple independent times during evolution, and the fact that silk was subject to convergent evolution gave rise to an abundant natural diversity of silk proteins. Silks are used in air, under water, or, like honey bee silk, in the hydrophobic, waxen environment of the bee hive. The good mechanical properties of insect silk fibres, together with their non-toxic, biocompatible, and biodegradable nature, render these materials appealing for both technical and biomedical applications. Although nature provides a great diversity of material properties, the variation in quality inherent in materials from natural sources, together with low availability (except for silkworm silk), has impeded the development of silk applications. To overcome these two drawbacks, recombinant silks have gained more and more interest in recent years, as the biotechnological production of silk proteins allows for scalable production at constant quality. This review summarises recent developments in recombinant silk production, as well as technical procedures for processing recombinant silk proteins into fibres, films, and hydrogels.

  12. Gene Delivery into Plant Cells for Recombinant Protein Production

    Directory of Open Access Journals (Sweden)

    Qiang Chen

    2015-01-01

    Full Text Available Recombinant proteins are primarily produced from cultures of mammalian, insect, and bacteria cells. In recent years, the development of deconstructed virus-based vectors has allowed plants to become a viable platform for recombinant protein production, with advantages in versatility, speed, cost, scalability, and safety over the current production paradigms. In this paper, we review the recent progress in the methodology of agroinfiltration, a solution to overcome the challenge of transgene delivery into plant cells for large-scale manufacturing of recombinant proteins. General gene delivery methodologies in plants are first summarized, followed by extensive discussion on the application and scalability of each agroinfiltration method. New development of a spray-based agroinfiltration and its application on field-grown plants is highlighted. The discussion of agroinfiltration vectors focuses on their applications for producing complex and heteromultimeric proteins and is updated with the development of bridge vectors. Progress on agroinfiltration in Nicotiana and non-Nicotiana plant hosts is subsequently showcased in context of their applications for producing high-value human biologics and low-cost and high-volume industrial enzymes. These new advancements in agroinfiltration greatly enhance the robustness and scalability of transgene delivery in plants, facilitating the adoption of plant transient expression systems for manufacturing recombinant proteins with a broad range of applications.

  13. Graceful Degradation in 3GPP MBMS Mobile TV Services Using H.264/AVC Temporal Scalability

    Directory of Open Access Journals (Sweden)

    Thomas Wiegand

    2009-01-01

    Full Text Available These days, there is increasing interest in Mobile TV broadcast services among customers as well as service providers. One general problem of Mobile TV broadcast services is to maximize the coverage of users receiving an acceptable service quality, which is mainly influenced by the user's position and mobility within the cell. In this paper, graceful degradation is considered as an approach for improved service availability and coverage. We present a layered transmission approach for 3GPP's Release 6 Multimedia Broadcast/Multicast Service (MBMS) based on temporal scalability using the H.264/AVC Baseline Profile. A differentiation in robustness between temporal quality layers is achieved by an unequal error protection approach based on application-layer Forward Error Correction (FEC), unequal transmit power for the layers, or a combination of both. We discuss the corresponding MBMS service and network settings and define measures for evaluating the number of users reached with a certain mobile terminal play-out quality while considering the network cell capacity usage. Using simulated 3GPP Rel. 6 network conditions, we show that if the service and network settings are chosen carefully, a noticeable extension of the coverage of the MBMS service can be achieved.

  14. Graphene oxide-based efficient and scalable solar desalination under one sun with a confined 2D water path.

    Science.gov (United States)

    Li, Xiuqiang; Xu, Weichao; Tang, Mingyao; Zhou, Lin; Zhu, Bin; Zhu, Shining; Zhu, Jia

    2016-12-06

    Because it is able to produce desalinated water directly using solar energy with a minimal carbon footprint, solar steam generation and desalination is considered one of the most important technologies for addressing the increasingly pressing global water scarcity. Despite tremendous progress in the past few years, efficient solar steam generation and desalination has only been achieved for rather limited water quantities, with the assistance of concentrators and thermal insulation, which is not feasible for large-scale applications. The fundamental paradox is that the conventional design of direct absorber-bulk water contact ensures efficient energy transfer and water supply but also has intrinsic thermal loss through the bulk water. Here, enabled by a confined 2D water path, we report an efficient (80% under one-sun illumination) and effective (four orders of magnitude salinity decrement) solar desalination device. More strikingly, because of the minimized heat loss, the high efficiency of solar desalination is independent of the water quantity and can be maintained without thermal insulation of the container. A foldable graphene oxide film, fabricated by a scalable process, serves as an efficient solar absorber (>94%), vapor channel, and thermal insulator. With a unique structure design fabricated by scalable processes, and with high and stable efficiency achieved under normal solar illumination independent of water quantity and without any supporting systems, our device represents a concrete step for solar desalination to emerge as a complementary, portable, and personalized clean-water solution.

  15. Scalable and massively parallel Monte Carlo photon transport simulations for heterogeneous computing platforms.

    Science.gov (United States)

    Yu, Leiming; Nina-Paravecino, Fanny; Kaeli, David; Fang, Qianqian

    2018-01-01

    We present a highly scalable Monte Carlo (MC) three-dimensional photon transport simulation platform designed for heterogeneous computing systems. Through the development of a massively parallel MC algorithm using the Open Computing Language framework, this research extends our existing graphics processing unit (GPU)-accelerated MC technique to a highly scalable vendor-independent heterogeneous computing environment, achieving significantly improved performance and software portability. A number of parallel computing techniques are investigated to achieve portable performance over a wide range of computing hardware. Furthermore, multiple thread-level and device-level load-balancing strategies are developed to obtain efficient simulations using multiple central processing units and GPUs.

  16. Advanced technologies for scalable ATLAS conditions database access on the grid

    International Nuclear Information System (INIS)

    Basset, R; Canali, L; Girone, M; Hawkings, R; Valassi, A; Viegas, F; Dimitrov, G; Nevski, P; Vaniachine, A; Walker, R; Wong, A

    2010-01-01

    During massive data reprocessing operations an ATLAS Conditions Database application must support concurrent access from numerous ATLAS data-processing jobs running on the Grid. By simulating realistic workflows, ATLAS database scalability tests provided feedback for Conditions DB software optimization and allowed precise determination of the required distributed database resources. In distributed data processing one must take into account the chaotic nature of Grid computing, characterized by peak loads that can be much higher than average access rates. To validate database performance at peak loads, we tested database scalability at very high concurrent job rates. This was achieved through coordinated database stress tests performed in a series of ATLAS reprocessing exercises at the Tier-1 sites. The goal of the database stress tests is to detect the scalability limits of the hardware deployed at the Tier-1 sites, so that server overload conditions can be safely avoided in a production environment. Our analysis of server performance under stress tests indicates that Conditions DB data access is limited by the disk I/O throughput. An unacceptable side effect of disk I/O saturation is a degradation of the WLCG 3D Services that update Conditions DB data at all ten ATLAS Tier-1 sites using the technology of Oracle Streams. To avoid such bottlenecks we prototyped and tested a novel approach for database peak-load avoidance in Grid computing. Our approach is based upon the proven idea of pilot job submission on the Grid: instead of the actual query, an ATLAS utility library first sends a pilot query to the database server.

  17. Scalable graphene aptasensors for drug quantification

    Science.gov (United States)

    Vishnubhotla, Ramya; Ping, Jinglei; Gao, Zhaoli; Lee, Abigail; Saouaf, Olivia; Vrudhula, Amey; Johnson, A. T. Charlie

    2017-11-01

    Simpler and more rapid approaches for therapeutic drug-level monitoring are highly desirable to enable use at the point-of-care. We have developed an all-electronic approach for detection of the HIV drug tenofovir based on scalable fabrication of arrays of graphene field-effect transistors (GFETs) functionalized with a commercially available DNA aptamer. The shift in the Dirac voltage of the GFETs varied systematically with the concentration of tenofovir in deionized water, with a detection limit less than 1 ng/mL. Tests against a set of negative controls confirmed the specificity of the sensor response. This approach offers the potential for further development into a rapid and convenient point-of-care tool with clinically relevant performance.

  18. Scalable quantum search using trapped ions

    International Nuclear Information System (INIS)

    Ivanov, S. S.; Ivanov, P. A.; Linington, I. E.; Vitanov, N. V.

    2010-01-01

    We propose a scalable implementation of Grover's quantum search algorithm in a trapped-ion quantum information processor. The system is initialized in an entangled Dicke state by using adiabatic techniques. The inversion-about-average and oracle operators take the form of single off-resonant laser pulses. This is made possible by utilizing the physical symmetries of the trapped-ion linear crystal. The physical realization of the algorithm represents a dramatic simplification: each logical iteration (oracle and inversion about average) requires only two physical interaction steps, in contrast to the large number of concatenated gates required by previous approaches. This not only facilitates the implementation but also increases the overall fidelity of the algorithm.
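
    For context, the standard Grover iteration count for finding one marked item among N is

        k \approx \frac{\pi}{4}\sqrt{N},

    so reducing each oracle-plus-inversion iteration to two physical interaction steps, as proposed here, compounds across the whole O(\sqrt{N}) schedule.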

  19. Robust Optical Flow Estimation

    Directory of Open Access Journals (Sweden)

    Javier Sánchez Pérez

    2013-10-01

    Full Text Available In this work, we describe an implementation of the variational method proposed by Brox et al. in 2004, which yields accurate optical flows with low running times. It has several benefits with respect to the method of Horn and Schunck: it is more robust to the presence of outliers, produces piecewise-smooth flow fields and can cope with constant brightness changes. This method relies on the brightness and gradient constancy assumptions, using the information of the image intensities and the image gradients to find correspondences. It also generalizes the use of continuous L1 functionals, which help mitigate the effect of outliers and create a Total Variation (TV) regularization. Additionally, it introduces a simple temporal regularization scheme that enforces a continuous temporal coherence of the flow fields.
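
    For reference, the energy functional minimized by the Brox et al. model (standard formulation from the literature, with \Psi(s^2) = \sqrt{s^2 + \varepsilon^2} the robust penalizer) is

        E(\mathbf{w}) = \int_{\Omega} \Psi\big(|I(\mathbf{x}+\mathbf{w}) - I(\mathbf{x})|^2 + \gamma\,|\nabla I(\mathbf{x}+\mathbf{w}) - \nabla I(\mathbf{x})|^2\big)\,d\mathbf{x}
                      + \alpha \int_{\Omega} \Psi\big(|\nabla u|^2 + |\nabla v|^2\big)\,d\mathbf{x},

    where \mathbf{w} = (u, v) is the flow field, \gamma weights gradient constancy against brightness constancy, and \alpha controls the TV-like regularization.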

  20. Spatiotemporal Stochastic Modeling of IoT Enabled Cellular Networks: Scalability and Stability Analysis

    KAUST Repository

    Gharbieh, Mohammad; Elsawy, Hesham; Bader, Ahmed; Alouini, Mohamed-Slim

    2017-01-01

    The Internet of Things (IoT) is large-scale by nature, which is manifested by the massive number of connected devices as well as their vast spatial existence. Cellular networks, which provide ubiquitous, reliable, and efficient wireless access, will play a fundamental role in delivering the first-mile access for the data tsunami to be generated by the IoT. However, cellular networks may have scalability problems in providing uplink connectivity to massive numbers of connected things. To characterize the scalability of the cellular uplink in the context of IoT networks, this paper develops a traffic-aware spatiotemporal mathematical model for IoT devices supported by cellular uplink connectivity. The developed model is based on stochastic geometry and queueing theory to account for the traffic requirement per IoT device, the different transmission strategies, and the mutual interference between the IoT devices. To this end, the developed model is utilized to characterize the extent to which cellular networks can accommodate IoT traffic as well as to assess and compare three different transmission strategies that incorporate a combination of transmission persistency, backoff, and power-ramping. The analysis and the results clearly illustrate the scalability problem that the IoT imposes on cellular networks and offer insights into effective scenarios for each transmission strategy.

  1. Spatiotemporal Stochastic Modeling of IoT Enabled Cellular Networks: Scalability and Stability Analysis

    KAUST Repository

    Gharbieh, Mohammad

    2017-05-02

    The Internet of Things (IoT) is large-scale by nature, which is manifested by the massive number of connected devices as well as their vast spatial existence. Cellular networks, which provide ubiquitous, reliable, and efficient wireless access, will play a fundamental role in delivering the first-mile access for the data tsunami to be generated by the IoT. However, cellular networks may have scalability problems in providing uplink connectivity to massive numbers of connected things. To characterize the scalability of the cellular uplink in the context of IoT networks, this paper develops a traffic-aware spatiotemporal mathematical model for IoT devices supported by cellular uplink connectivity. The developed model is based on stochastic geometry and queueing theory to account for the traffic requirement per IoT device, the different transmission strategies, and the mutual interference between the IoT devices. To this end, the developed model is utilized to characterize the extent to which cellular networks can accommodate IoT traffic as well as to assess and compare three different transmission strategies that incorporate a combination of transmission persistency, backoff, and power-ramping. The analysis and the results clearly illustrate the scalability problem that the IoT imposes on cellular networks and offer insights into effective scenarios for each transmission strategy.

  2. Robust Growth Determinants

    OpenAIRE

    Doppelhofer, Gernot; Weeks, Melvyn

    2011-01-01

    This paper investigates the robustness of determinants of economic growth in the presence of model uncertainty, parameter heterogeneity and outliers. The robust model averaging approach introduced in the paper uses a flexible and parsimonious mixture modeling that allows for fat-tailed errors compared to the normal benchmark case. Applying robust model averaging to growth determinants, the paper finds that eight out of eighteen variables found to be significantly related to economic growth ...

  3. Programming time-multiplexed reconfigurable hardware using a scalable neuromorphic compiler.

    Science.gov (United States)

    Minkovich, Kirill; Srinivasa, Narayan; Cruz-Albrecht, Jose M; Cho, Youngkwan; Nogin, Aleksey

    2012-06-01

    Scalability and connectivity are two key challenges in designing neuromorphic hardware that can match biological levels. In this paper, we describe a neuromorphic system architecture design that addresses an approach to meet these challenges using traditional complementary metal-oxide-semiconductor (CMOS) hardware. A key requirement in realizing such neural architectures in hardware is the ability to automatically configure the hardware to emulate any neural architecture or model. The focus of this paper is to describe the details of such a programmable front-end. This programmable front-end is composed of a neuromorphic compiler and a digital memory, and is designed based on the concept of synaptic time-multiplexing (STM). The neuromorphic compiler automatically translates any given neural architecture to hardware switch states, and these states are stored in digital memory to enable the desired neural architectures. STM enables our proposed architecture to address scalability and connectivity using traditional CMOS hardware. We describe the details of the proposed design and the programmable front-end, and provide examples to illustrate its capabilities. We also provide perspectives for future extensions and potential applications.

  4. Robust automated mass spectra interpretation and chemical formula calculation using mixed integer linear programming.

    Science.gov (United States)

    Baran, Richard; Northen, Trent R

    2013-10-15

    Untargeted metabolite profiling using liquid chromatography and mass spectrometry coupled via electrospray ionization is a powerful tool for the discovery of novel natural products, metabolic capabilities, and biomarkers. However, the elucidation of the identities of uncharacterized metabolites from spectral features remains challenging. A critical step in the metabolite identification workflow is the assignment of redundant spectral features (adducts, fragments, multimers) and calculation of the underlying chemical formula. Inspection of the data by experts using computational tools solving partial problems (e.g., chemical formula calculation for individual ions) can be performed to disambiguate alternative solutions and provide reliable results. However, manual curation is tedious and not readily scalable or standardized. Here we describe an automated procedure for the robust automated mass spectra interpretation and chemical formula calculation using mixed integer linear programming optimization (RAMSI). Chemical rules among related ions are expressed as linear constraints and both the spectra interpretation and chemical formula calculation are performed in a single optimization step. This approach is unbiased in that it does not require predefined sets of neutral losses and positive and negative polarity spectra can be combined in a single optimization. The procedure was evaluated with 30 experimental mass spectra and was found to effectively identify the protonated or deprotonated molecule ([M + H](+) or [M - H](-)) while being robust to the presence of background ions. RAMSI provides a much-needed standardized tool for interpreting ions for subsequent identification in untargeted metabolomics workflows.
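
    A toy version of the core idea, expressed with the PuLP modeling library (the element set, bounds, and tolerance are illustrative assumptions; RAMSI additionally couples constraints across related adduct, fragment, and multimer ions):

        import pulp

        # Fit integer element counts so the formula mass matches an observed
        # neutral monoisotopic mass within a tolerance, as linear constraints.
        mass = {"C": 12.0, "H": 1.007825, "O": 15.994915}
        observed, tol = 180.063388, 0.005   # hexose-like test mass

        prob = pulp.LpProblem("formula_fit", pulp.LpMinimize)
        n = {e: pulp.LpVariable(e, lowBound=0, upBound=60, cat="Integer")
             for e in mass}
        err = pulp.LpVariable("err", lowBound=0, upBound=tol)

        total = pulp.lpSum(mass[e] * n[e] for e in mass)
        prob += err                          # objective: minimize mass error
        prob += total - observed <= err
        prob += observed - total <= err

        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        print({e: int(n[e].value()) for e in mass}, "error:", err.value())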

  5. GASPRNG: GPU accelerated scalable parallel random number generator library

    Science.gov (United States)

    Gao, Shuang; Peterson, Gregory D.

    2013-04-01

    Graphics processors represent a promising technology for accelerating computational science applications. Many computational science applications require fast and scalable random number generation with good statistical properties, so they use the Scalable Parallel Random Number Generators library (SPRNG). We present the GPU Accelerated SPRNG library (GASPRNG) to accelerate SPRNG in GPU-based high performance computing systems. GASPRNG includes code for a host CPU and CUDA code for execution on NVIDIA graphics processing units (GPUs), along with a programming interface to support various usage models for pseudorandom numbers and computational science applications executing on the CPU, GPU, or both. This paper describes the implementation approach used to produce high performance and also describes how to use the programming interface. The programming interface allows a user to use GASPRNG the same way as SPRNG on traditional serial or parallel computers, as well as to develop tightly coupled programs executing primarily on the GPU. We also describe how to install GASPRNG and use it. To help illustrate linking with GASPRNG, demonstration codes are included for the different usage models. GASPRNG on a single GPU shows up to 280x speedup over SPRNG on a single CPU core and is able to scale for larger systems in the same manner as SPRNG. Because GASPRNG generates identical streams of pseudorandom numbers as SPRNG, users can be confident about the quality of GASPRNG for scalable computational science applications.

    Catalogue identifier: AEOI_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOI_v1_0.html
    Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland
    Licensing provisions: UTK license.
    No. of lines in distributed program, including test data, etc.: 167900
    No. of bytes in distributed program, including test data, etc.: 1422058
    Distribution format: tar.gz
    Programming language: C and CUDA.
    Computer: Any PC or
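
    The sketch below is not GASPRNG's API; it only illustrates the usage model the abstract describes (reproducible, statistically independent streams per worker), using NumPy's SeedSequence spawning as a stand-in.

    ```python
    # Generic sketch of per-worker independent streams, analogous in spirit
    # to how SPRNG-style libraries hand each CPU rank or GPU block its own
    # stream. The stream count and master seed are arbitrary.
    import numpy as np

    base = np.random.SeedSequence(20130401)          # one master seed
    streams = [np.random.default_rng(s) for s in base.spawn(4)]

    # Each worker owns one statistically independent stream; rerunning with
    # the same master seed reproduces all of them exactly.
    for rank, rng in enumerate(streams):
        print(rank, rng.random(3))
    ```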

  6. Online Hashing for Scalable Remote Sensing Image Retrieval

    Directory of Open Access Journals (Sweden)

    Peng Li

    2018-05-01

    Full Text Available Recently, hashing-based large-scale remote sensing (RS) image retrieval has attracted much attention. Many new hashing algorithms have been developed and successfully applied to fast RS image retrieval tasks. However, there exists an important problem rarely addressed in the research literature on RS image hashing. In many real-world applications, RS images are produced in a streaming manner, which means the data distribution keeps changing over time. Most existing RS image hashing methods are batch-based models whose hash functions are learned once for all and kept fixed afterwards. Therefore, the pre-trained hash functions might not fit the ever-growing stream of new RS images. Moreover, batch-based models have to load all the training images into memory for model learning, which consumes substantial computing and memory resources. To address these deficiencies, we propose a new online hashing method that learns and adapts its hash functions with respect to newly incoming RS images via a novel online partial random learning scheme. Our hash model is updated in a sequential mode so that the representative power of the learned binary codes for RS images improves accordingly. Moreover, benefiting from the online learning strategy, our proposed hashing approach is well suited to scalable real-world remote sensing image retrieval. Extensive experiments on two large-scale RS image databases under the online setting demonstrate the efficiency and effectiveness of the proposed method.
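
    A minimal sketch of the online-hashing pattern described above: a linear hash projection is updated sequentially as image batches stream in. The smoothed similarity-preserving loss and plain SGD update are generic stand-ins, not the paper's online partial random learning scheme.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    d, bits, lr = 32, 16, 0.05
    W = rng.normal(scale=0.1, size=(d, bits))   # hash projection, adapted online

    def codes(X, W):
        return np.sign(X @ W)                   # binary codes in {-1, +1}

    def online_update(W, X, S, lr):
        """One streaming step. X: batch features (n, d); S: pairwise
        similarity labels in {-1, +1} (n, n). tanh is a smooth surrogate
        for sign so the loss is differentiable."""
        H = np.tanh(X @ W)                      # relaxed codes (n, bits)
        # Loss: || H H^T / bits - S ||_F^2 ; gradient w.r.t. W by the chain
        # rule, with constant factors folded into the learning rate.
        R = H @ H.T / bits - S
        G = X.T @ (((R + R.T) @ H) * (1 - H**2)) / bits
        return W - lr * G

    for step in range(100):                     # simulated image stream
        X = rng.normal(size=(8, d))
        S = np.sign(X @ X.T)                    # proxy similarity labels
        W = online_update(W, X, S, lr)

    print(codes(rng.normal(size=(2, d)), W))
    ```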

  7. A Scalable Communication Architecture for Advanced Metering Infrastructure

    OpenAIRE

    Ngo Hoang , Giang; Liquori , Luigi; Nguyen Chan , Hung

    2013-01-01

    Advanced Metering Infrastructure (AMI), seen as a foundation for overall grid modernization, is an integration of many technologies that provides an intelligent connection between consumers and system operators [ami 2008]. One of the biggest challenges that AMI faces is to scalably collect and manage a huge amount of data from a large number of customers. In our paper, we address this challenge by introducing a mixed peer-to-peer (P2P) and client-server communication architecture for AMI in whic...

  8. Neutron generators with size scalability, ease of fabrication and multiple ion source functionalities

    Science.gov (United States)

    Elizondo-Decanini, Juan M

    2014-11-18

    A neutron generator is provided with a flat, rectilinear geometry and surface mounted metallizations. This construction provides scalability and ease of fabrication, and permits multiple ion source functionalities.

  9. Scalable and Anonymous Group Communication with MTor

    Directory of Open Access Journals (Sweden)

    Lin Dong

    2016-04-01

    Full Text Available This paper presents MTor, a low-latency anonymous group communication system. We construct MTor as an extension to Tor, allowing the construction of multi-source multicast trees on top of the existing Tor infrastructure. MTor does not depend on an external service to broker the group communication, and avoids central points of failure and trust. MTor’s substantial bandwidth savings and graceful scalability enable new classes of anonymous applications that are currently too bandwidth-intensive to be viable through traditional unicast Tor communication, e.g., group file transfer, collaborative editing, streaming video, and real-time audio conferencing.

  10. High Resolution Robust GPS-free Localization for Wireless Sensor Networks and its Applications

    KAUST Repository

    Mirza, Mohammed

    2011-12-12

    In this thesis we investigate the problem of robustness and scalability in estimating the position of randomly deployed motes/nodes of a Wireless Sensor Network (WSN) without the help of Global Positioning System (GPS) devices. We propose several applications of range-independent localization algorithms that allow the sensors to actively determine their location with high resolution without increasing the complexity of the hardware or requiring any additional device setup. In our first application we present localized and centralized cooperative spectrum sensing using RF sensor networks; this scheme collaboratively senses the spectrum and localizes the whole network efficiently. In the second application we focus on how efficiently we can localize the nodes, to detect underwater threats, without the use of beacons. In the third application we focus on 3-dimensional localization for LTE systems. Our performance evaluation shows that these schemes lead to a significant improvement in localization accuracy compared to state-of-the-art range-independent localization schemes, without requiring GPS support.

  11. Scalable Multifunction RF Systems: Combined vs. Separate Transmit and Receive Arrays

    NARCIS (Netherlands)

    Huizing, A.G.

    2008-01-01

    A scalable multifunction RF (SMRF) system allows the RF functionality (radar, electronic warfare and communications) to be easily extended and the RF performance to be scaled to the requirements of different missions and platforms. This paper presents the results of a trade-off study with respect to

  12. Wideband vs. Multiband Trade-offs for a Scalable Multifunction RF system

    NARCIS (Netherlands)

    Huizing, A.G.

    2005-01-01

    This paper presents a concept for a scalable multifunction RF (SMRF) system that allows the RF functionality (radar, electronic warfare and communications) to be easily extended and the RF performance to be scaled to the requirements of different missions and platforms. A trade-off analysis is

  13. Optimized bit extraction using distortion modeling in the scalable extension of H.264/AVC.

    Science.gov (United States)

    Maani, Ehsan; Katsaggelos, Aggelos K

    2009-09-01

    The newly adopted scalable extension of H.264/AVC video coding standard (SVC) demonstrates significant improvements in coding efficiency in addition to an increased degree of supported scalability relative to the scalable profiles of prior video coding standards. Due to the complicated hierarchical prediction structure of the SVC and the concept of key pictures, content-aware rate adaptation of SVC bit streams to intermediate bit rates is a nontrivial task. The concept of quality layers has been introduced in the design of the SVC to allow for fast content-aware prioritized rate adaptation. However, existing quality layer assignment methods are suboptimal and do not consider all network abstraction layer (NAL) units from different layers for the optimization. In this paper, we first propose a technique to accurately and efficiently estimate the quality degradation resulting from discarding an arbitrary number of NAL units from multiple layers of a bitstream by properly taking drift into account. Then, we utilize this distortion estimation technique to assign quality layers to NAL units for a more efficient extraction. Experimental results show that a significant gain can be achieved by the proposed scheme.
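
    For intuition, the sketch below assigns toy NAL units to quality layers by estimated distortion reduction per bit and then extracts a substream for a bit budget; the per-unit values are invented, and the paper's drift-aware distortion estimation is not reproduced.

    ```python
    from dataclasses import dataclass

    @dataclass
    class NalUnit:
        name: str
        bits: int
        distortion_gain: float   # estimated MSE reduction if this unit is kept

    def assign_quality_layers(units, n_layers=4):
        """Rank units by utility (distortion gain per bit) and bucket the
        ranking into quality layers: layer 0 = most important."""
        ranked = sorted(units, key=lambda u: u.distortion_gain / u.bits,
                        reverse=True)
        per_layer = max(1, len(ranked) // n_layers)
        return {u.name: min(i // per_layer, n_layers - 1)
                for i, u in enumerate(ranked)}

    def extract(units, layers, budget_bits):
        """Greedily keep units in layer order while they fit the budget."""
        kept, spent = [], 0
        for u in sorted(units, key=lambda u: layers[u.name]):
            if spent + u.bits <= budget_bits:
                kept.append(u.name)
                spent += u.bits
        return kept, spent

    units = [NalUnit("base", 4000, 50.0), NalUnit("enh1", 3000, 20.0),
             NalUnit("enh2", 2500, 8.0), NalUnit("enh3", 2000, 2.0)]
    layers = assign_quality_layers(units)
    print(layers)
    print(extract(units, layers, budget_bits=8000))
    ```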

  14. An integrated semiconductor device enabling non-optical genome sequencing.

    Science.gov (United States)

    Rothberg, Jonathan M; Hinz, Wolfgang; Rearick, Todd M; Schultz, Jonathan; Mileski, William; Davey, Mel; Leamon, John H; Johnson, Kim; Milgrew, Mark J; Edwards, Matthew; Hoon, Jeremy; Simons, Jan F; Marran, David; Myers, Jason W; Davidson, John F; Branting, Annika; Nobile, John R; Puc, Bernard P; Light, David; Clark, Travis A; Huber, Martin; Branciforte, Jeffrey T; Stoner, Isaac B; Cawley, Simon E; Lyons, Michael; Fu, Yutao; Homer, Nils; Sedova, Marina; Miao, Xin; Reed, Brian; Sabina, Jeffrey; Feierstein, Erika; Schorn, Michelle; Alanjary, Mohammad; Dimalanta, Eileen; Dressman, Devin; Kasinskas, Rachel; Sokolsky, Tanya; Fidanza, Jacqueline A; Namsaraev, Eugeni; McKernan, Kevin J; Williams, Alan; Roth, G Thomas; Bustillo, James

    2011-07-20

    The seminal importance of DNA sequencing to the life sciences, biotechnology and medicine has driven the search for more scalable and lower-cost solutions. Here we describe a DNA sequencing technology in which scalable, low-cost semiconductor manufacturing techniques are used to make an integrated circuit able to directly perform non-optical DNA sequencing of genomes. Sequence data are obtained by directly sensing the ions produced by template-directed DNA polymerase synthesis using all-natural nucleotides on this massively parallel semiconductor-sensing device or ion chip. The ion chip contains ion-sensitive, field-effect transistor-based sensors in perfect register with 1.2 million wells, which provide confinement and allow parallel, simultaneous detection of independent sequencing reactions. Use of the most widely used technology for constructing integrated circuits, the complementary metal-oxide semiconductor (CMOS) process, allows for low-cost, large-scale production and scaling of the device to higher densities and larger array sizes. We show the performance of the system by sequencing three bacterial genomes, its robustness and scalability by producing ion chips with up to 10 times as many sensors and sequencing a human genome.

  15. Scalable Generation of Universal Platelets from Human Induced Pluripotent Stem Cells

    Directory of Open Access Journals (Sweden)

    Qiang Feng

    2014-11-01

    Full Text Available Human induced pluripotent stem cells (iPSCs) provide a potentially replenishable source for the production of transfusable platelets. Here, we describe a method to generate megakaryocytes (MKs) and functional platelets from iPSCs in a scalable manner under serum/feeder-free conditions. The method also permits the cryopreservation of MK progenitors, enabling a rapid “surge” capacity when large numbers of platelets are needed. Ultrastructural/morphological analyses show no major differences between iPSC platelets and human blood platelets. iPSC platelets form aggregates, lamellipodia, and filopodia after activation and circulate in macrophage-depleted animals and incorporate into developing mouse thrombi in a manner identical to human platelets. By knocking out the β2-microglobulin gene, we have generated platelets that are negative for the major histocompatibility antigens. The scalable generation of HLA-ABC-negative platelets from a renewable cell source represents an important step toward generating universal platelets for transfusion as well as a potential strategy for the management of platelet refractoriness.

  16. Scalable electrophysiology in intact small animals with nanoscale suspended electrode arrays

    Science.gov (United States)

    Gonzales, Daniel L.; Badhiwala, Krishna N.; Vercosa, Daniel G.; Avants, Benjamin W.; Liu, Zheng; Zhong, Weiwei; Robinson, Jacob T.

    2017-07-01

    Electrical measurements from large populations of animals would help reveal fundamental properties of the nervous system and neurological diseases. Small invertebrates are ideal for these large-scale studies; however, patch-clamp electrophysiology in microscopic animals typically requires invasive dissections and is low-throughput. To overcome these limitations, we present nano-SPEARs: suspended electrodes integrated into a scalable microfluidic device. Using this technology, we have made the first extracellular recordings of body-wall muscle electrophysiology inside an intact roundworm, Caenorhabditis elegans. We can also use nano-SPEARs to record from multiple animals in parallel and even from other species, such as Hydra littoralis. Furthermore, we use nano-SPEARs to establish the first electrophysiological phenotypes for C. elegans models for amyotrophic lateral sclerosis and Parkinson's disease, and show a partial rescue of the Parkinson's phenotype through drug treatment. These results demonstrate that nano-SPEARs provide the core technology for microchips that enable scalable, in vivo studies of neurobiology and neurological diseases.

  17. Median Robust Extended Local Binary Pattern for Texture Classification.

    Science.gov (United States)

    Liu, Li; Lao, Songyang; Fieguth, Paul W; Guo, Yulan; Wang, Xiaogang; Pietikäinen, Matti

    2016-03-01

    Local binary patterns (LBP) are considered among the most computationally efficient high-performance texture features. However, the LBP method is very sensitive to image noise and is unable to capture macrostructure information. To address these disadvantages, in this paper we introduce a novel descriptor for texture classification, the median robust extended LBP (MRELBP). Different from the traditional LBP and many LBP variants, MRELBP compares regional image medians rather than raw image intensities. A multiscale LBP-type descriptor is computed by efficiently comparing image medians over a novel sampling scheme, which can capture both microstructure and macrostructure texture information. A comprehensive evaluation on benchmark data sets reveals MRELBP's high performance: it is robust to gray-scale variations, rotation changes, and noise, at a low computational cost. MRELBP produces the best classification scores of 99.82%, 99.38%, and 99.77% on three popular Outex test suites. More importantly, MRELBP is shown to be highly robust to image noise, including Gaussian noise, Gaussian blur, salt-and-pepper noise, and random pixel corruption.
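
    A simplified illustration of the median-comparison idea (not the exact MRELBP sampling scheme or radii): each pixel's code compares medians of surrounding patches against the median of a central patch, and the histogram of codes serves as the descriptor.

    ```python
    import numpy as np

    def median_patch(img, r, c, h):
        return np.median(img[r - h:r + h + 1, c - h:c + h + 1])

    def median_lbp_code(img, r, c, radius=2, patch=1):
        """8-neighbor code: bit i is 1 when the median of the i-th
        surrounding patch exceeds the median of the central patch."""
        center = median_patch(img, r, c, patch)
        code = 0
        for i, ang in enumerate(np.linspace(0, 2 * np.pi, 8, endpoint=False)):
            rr = int(round(r + radius * np.sin(ang)))
            cc = int(round(c + radius * np.cos(ang)))
            if median_patch(img, rr, cc, patch) > center:
                code |= 1 << i
        return code

    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(32, 32)).astype(float)
    # Histogram of codes over the interior acts as the texture descriptor.
    codes = [median_lbp_code(img, r, c)
             for r in range(4, 28) for c in range(4, 28)]
    hist = np.bincount(codes, minlength=256) / len(codes)
    print(hist[:8])
    ```

    Because medians of small patches replace raw pixel values, isolated corrupted pixels rarely flip a bit, which is the source of the noise robustness the abstract reports.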

  18. The comparison between several robust ridge regression estimators in the presence of multicollinearity and multiple outliers

    Science.gov (United States)

    Zahari, Siti Meriam; Ramli, Norazan Mohamed; Moktar, Balkiah; Zainol, Mohammad Said

    2014-09-01

    In the presence of multicollinearity and multiple outliers, statistical inference of a linear regression model using ordinary least squares (OLS) estimators is severely affected and produces misleading results. To overcome this, many approaches have been investigated. These include robust methods, which are reported to be less sensitive to the presence of outliers. In addition, the ridge regression technique has been employed to tackle the multicollinearity problem. To mitigate both problems simultaneously, a combination of ridge regression and robust methods is discussed in this study. The superiority of this approach was examined under the simultaneous presence of multicollinearity and multiple outliers in multiple linear regression. This study looked at the performance of several well-known robust estimators (M, MM, RIDGE) and robust ridge regression estimators, namely the Weighted Ridge M-estimator (WRM), the Weighted Ridge MM (WRMM), and the Ridge MM (RMM), in such a situation. Results of the study showed that in the presence of simultaneous multicollinearity and multiple outliers (in both the x- and y-directions), RMM and RIDGE are more or less similar in terms of superiority over the other estimators, regardless of the number of observations, the level of collinearity, and the percentage of outliers used. However, when outliers occurred in only a single direction (the y-direction), the WRMM estimator is the most superior among the robust ridge regression estimators, producing the least variance. In conclusion, robust ridge regression is the best alternative to robust and conventional least squares estimators when dealing with the simultaneous presence of multicollinearity and outliers.
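
    As a rough illustration of the robust-plus-ridge combination, here is a generic ridge-penalized Huber M-estimator fitted by iteratively reweighted least squares; this is a textbook construction, not the specific WRM/WRMM/RMM estimators compared in the study.

    ```python
    import numpy as np

    def huber_weights(res, c=1.345):
        a = np.abs(res)
        return np.where(a <= c, 1.0, c / np.maximum(a, 1e-12))  # downweight outliers

    def robust_ridge(X, y, lam=1.0, iters=50):
        n, p = X.shape
        beta = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)  # ridge start
        for _ in range(iters):
            r = y - X @ beta
            s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12  # robust scale
            w = huber_weights(r / s)
            WX = X * w[:, None]
            # Weighted, ridge-penalized normal equations.
            beta = np.linalg.solve(X.T @ WX + lam * np.eye(p), WX.T @ y)
        return beta

    rng = np.random.default_rng(1)
    n, p = 100, 3
    X = rng.normal(size=(n, p))
    X[:, 2] = X[:, 1] + 0.01 * rng.normal(size=n)   # multicollinearity
    y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(size=n)
    y[:5] += 25                                      # y-direction outliers
    print(robust_ridge(X, y, lam=5.0))
    ```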

  19. Particle Communication and Domain Neighbor Coupling: Scalable Domain Decomposed Algorithms for Monte Carlo Particle Transport

    Energy Technology Data Exchange (ETDEWEB)

    O' Brien, M. J.; Brantley, P. S.

    2015-01-20

    In order to run Monte Carlo particle transport calculations on new supercomputers with hundreds of thousands or millions of processors, care must be taken to implement scalable algorithms. This means that the algorithms must continue to perform well as the processor count increases. In this paper, we examine the scalability of: (1) globally resolving the particle locations on the correct processor, (2) deciding that particle streaming communication has finished, and (3) efficiently coupling neighbor domains together with different replication levels. We have run domain decomposed Monte Carlo particle transport on up to 2^21 = 2,097,152 MPI processes on the IBM BG/Q Sequoia supercomputer and observed scalable results that agree with our theoretical predictions. These calculations were carefully constructed to have the same amount of work on every processor, i.e., the calculation is already load balanced. We also examine load imbalanced calculations where each domain’s replication level is proportional to its particle workload. In this case we show how to efficiently couple together adjacent domains to maintain within-workgroup load balance and minimize memory usage.
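
    One ingredient, deciding that particle streaming has finished, can be sketched with a global created-versus-completed particle count. The loop below is a toy stand-in with hypothetical workloads, not the paper's algorithm; it is runnable under MPI via mpi4py (e.g., `mpirun -n 4 python this.py`).

    ```python
    from mpi4py import MPI
    import random

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    random.seed(rank)
    local_created = 1000          # particles sourced on this domain
    local_done = 0
    buffered = local_created      # particles waiting to be tracked here

    while True:
        # Track a batch; some particles finish, others would stream onward.
        batch, buffered = buffered, 0
        for _ in range(batch):
            if random.random() < 0.7:
                local_done += 1   # absorbed/killed on this domain
            else:
                buffered += 1     # stands in for sending to a neighbor domain
        # Global termination check: every created particle is accounted for.
        total_created = comm.allreduce(local_created, op=MPI.SUM)
        total_done = comm.allreduce(local_done, op=MPI.SUM)
        if total_done == total_created:
            break

    if rank == 0:
        print("all particle histories complete:", total_done)
    ```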

  20. Robust Online Multi-Task Learning with Correlative and Personalized Structures

    KAUST Repository

    Yang, Peng

    2017-06-29

    Multi-Task Learning (MTL) can enhance a classifier's generalization performance by learning multiple related tasks simultaneously. Conventional MTL works under the offline setting and suffers from expensive training cost and poor scalability. To address such issues, online learning techniques have been applied to solve MTL problems. However, most existing algorithms of online MTL constrain task relatedness into a presumed structure via a single weight matrix, which is a strict restriction that does not always hold in practice. In this paper, we propose a robust online MTL framework that overcomes this restriction by decomposing the weight matrix into two components: the first one captures the low-rank common structure among tasks via a nuclear norm; the second one identifies the personalized patterns of outlier tasks via a group lasso. Theoretical analysis shows the proposed algorithm can achieve a sub-linear regret with respect to the best linear model in hindsight. However, the nuclear norm that simply adds all nonzero singular values together may not be a good low-rank approximation. To improve the results, we use a log-determinant function as a non-convex rank approximation. Experimental results on a number of real-world applications also verify the efficacy of our approaches.
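
    A batch sketch of the decomposition idea using standard proximal operators (singular-value soft-thresholding for the nuclear norm, column-wise shrinkage for the group lasso); the paper's online updates, regret analysis, and log-determinant surrogate are not reproduced, and all data below are synthetic.

    ```python
    import numpy as np

    def prox_nuclear(M, tau):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

    def prox_group_lasso(M, tau):
        # Each column = one task's personalized weights; shrink whole columns.
        norms = np.linalg.norm(M, axis=0, keepdims=True)
        return M * np.maximum(1 - tau / np.maximum(norms, 1e-12), 0)

    rng = np.random.default_rng(0)
    d, T, n = 10, 6, 200
    shared = np.outer(rng.normal(size=d), rng.normal(size=T))   # rank-1 base
    W_true = shared.copy()
    W_true[:, -1] += 3 * rng.normal(size=d)                     # one outlier task
    X = rng.normal(size=(n, d))
    Y = X @ W_true + 0.1 * rng.normal(size=(n, T))

    U = np.zeros((d, T)); V = np.zeros((d, T)); lr = 0.1
    for _ in range(300):
        G = X.T @ (X @ (U + V) - Y) / n        # shared least-squares gradient
        U = prox_nuclear(U - lr * G, tau=lr * 5.0)     # low-rank common part
        V = prox_group_lasso(V - lr * G, tau=lr * 5.0)  # sparse outlier part

    print("columns of V kept (outlier tasks):",
          np.where(np.linalg.norm(V, axis=0) > 1e-3)[0])
    ```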

  2. Robustness of structures

    DEFF Research Database (Denmark)

    Vrouwenvelder, T.; Sørensen, John Dalsgaard

    2009-01-01

    After the collapse of the World Trade Centre towers in 2001 and a number of collapses of structural systems at the beginning of the century, robustness of structural systems has gained renewed interest. Despite many significant theoretical, methodical and technological advances, structural... of robustness for structural design, such requirements are not substantiated in more detail, nor has the engineering profession been able to agree on an interpretation of robustness which facilitates its quantification. A European COST action TU 601 on ‘Robustness of structures' was started in 2007... by a group of members of the CSS. This paper describes the ongoing work in this action, with emphasis on the development of a theoretical and risk-based quantification and optimization procedure on the one side and a practical pre-normative guideline on the other.

  3. Scalable production in human cells and biochemical characterization of full-length normal and mutant huntingtin.

    Directory of Open Access Journals (Sweden)

    Bin Huang

    Full Text Available Huntingtin (Htt) is a 350 kD intracellular protein, ubiquitously expressed and mainly localized in the cytoplasm. Huntington's disease (HD) is caused by a CAG triplet amplification in exon 1 of the corresponding gene, resulting in a polyglutamine (polyQ) expansion at the N-terminus of Htt. Production of full-length Htt has been difficult in the past, and so far a scalable system or process has not been established for recombinant production of Htt in human cells. The ability to produce Htt in milligram quantities would be a prerequisite for many biochemical and biophysical studies aiming at a better understanding of Htt function under physiological conditions and in case of mutation and disease. For scalable production of full-length normal (17Q) and mutant (46Q and 128Q) Htt we have established two different systems, the first based on doxycycline-inducible Htt expression in stable cell lines, the second on "gutless" adenovirus mediated gene transfer. Purified material has then been used for biochemical characterization of full-length Htt. Posttranslational modifications (PTMs) were determined and several new phosphorylation sites were identified. Nearly all PTMs in full-length Htt localized to areas outside of predicted alpha-solenoid protein regions. In all detected N-terminal peptides methionine as the first amino acid was missing and the second, alanine, was found to be acetylated. Differences in secondary structure between normal and mutant Htt, a helix-rich protein, were not observed in our study. Purified Htt tends to form dimers and higher-order oligomers, thus resembling the situation observed with N-terminal fragments, although the mechanism of oligomer formation may be different.

  4. A scalable system for production of functional pancreatic progenitors from human embryonic stem cells.

    Directory of Open Access Journals (Sweden)

    Thomas C Schulz

    Full Text Available Development of a human embryonic stem cell (hESC)-based therapy for type 1 diabetes will require the translation of proof-of-principle concepts into a scalable, controlled, and regulated cell manufacturing process. We have previously demonstrated that hESC can be directed to differentiate into pancreatic progenitors that mature into functional glucose-responsive, insulin-secreting cells in vivo. In this study we describe hESC expansion and banking methods and a suspension-based differentiation system, which together underpin an integrated scalable manufacturing process for producing pancreatic progenitors. This system has been optimized for the CyT49 cell line. Accordingly, qualified large-scale single-cell master and working cGMP cell banks of CyT49 have been generated to provide a virtually unlimited starting resource for manufacturing. Upon thaw from these banks, we expanded CyT49 for two weeks in an adherent culture format that achieves 50-100 fold expansion per week. Undifferentiated CyT49 were then aggregated into clusters in dynamic rotational suspension culture, followed by differentiation en masse for two weeks with a four-stage protocol. Numerous scaled differentiation runs generated reproducible and defined population compositions highly enriched for pancreatic cell lineages, as shown by examining mRNA expression at each stage of differentiation and flow cytometry of the final population. Islet-like tissue containing glucose-responsive, insulin-secreting cells was generated upon implantation into mice. By four to five months post-engraftment, mature neo-pancreatic tissue was sufficient to protect against streptozotocin (STZ)-induced hyperglycemia. In summary, we have developed a tractable manufacturing process for the generation of functional pancreatic progenitors from hESC on a scale amenable to clinical entry.

  5. Scalable video on demand adaptive Internet-based distribution

    CERN Document Server

    Zink, Michael

    2013-01-01

    In recent years, the proliferation of available video content and the popularity of the Internet have encouraged service providers to develop new ways of distributing content to clients. Increasing video scaling ratios and advanced digital signal processing techniques have led to Internet Video-on-Demand applications, but these currently lack efficiency and quality. Scalable Video on Demand: Adaptive Internet-based Distribution examines how current video compression and streaming can be used to deliver high-quality applications over the Internet. In addition to analysing the problems

  6. Parallelism and Scalability in an Image Processing Application

    DEFF Research Database (Denmark)

    Rasmussen, Morten Sleth; Stuart, Matthias Bo; Karlsson, Sven

    2008-01-01

    The recent trends in processor architecture show that parallel processing is moving into new areas of computing in the form of many-core desktop processors and multi-processor system-on-chip. This means that parallel processing is required in application areas that traditionally have not used parallel programs. This paper investigates parallelism and scalability of an embedded image processing application. The major challenges faced when parallelizing the application were to extract enough parallelism from the application and to reduce load imbalance. The application has limited immediately...

  7. Parallelism and Scalability in an Image Processing Application

    DEFF Research Database (Denmark)

    Rasmussen, Morten Sleth; Stuart, Matthias Bo; Karlsson, Sven

    2009-01-01

    The recent trends in processor architecture show that parallel processing is moving into new areas of computing in the form of many-core desktop processors and multi-processor system-on-chips. This means that parallel processing is required in application areas that traditionally have not used parallel programs. This paper investigates parallelism and scalability of an embedded image processing application. The major challenges faced when parallelizing the application were to extract enough parallelism from the application and to reduce load imbalance. The application has limited immediately...

  8. Scalable microcarrier-based manufacturing of mesenchymal stem/stromal cells.

    Science.gov (United States)

    de Soure, António M; Fernandes-Platzgummer, Ana; da Silva, Cláudia L; Cabral, Joaquim M S

    2016-10-20

    Due to their unique features, mesenchymal stem/stromal cells (MSC) have been exploited in clinical settings as therapeutic candidates for the treatment of a variety of diseases. However, success in obtaining clinically-relevant MSC numbers for cell-based therapies depends on efficient isolation and ex vivo expansion protocols able to comply with good manufacturing practices (GMP). In this context, the 2-dimensional static culture systems typically used for the expansion of these cells present several limitations that may lead to reduced cell numbers and compromise cell functions. Furthermore, many studies in the literature report the expansion of MSC using fetal bovine serum (FBS)-supplemented medium, which has been critically rated by regulatory agencies. Alternative platforms for the scalable manufacturing of MSC have been developed, namely using microcarriers in bioreactors, with a considerable number of studies now reporting the production of MSC using xenogeneic-/serum-free medium formulations. In this review we provide a comprehensive overview of the scalable manufacturing of human mesenchymal stem/stromal cells, depicting the various steps involved in the process from cell isolation to ex vivo expansion, using different cell tissue sources and culture medium formulations and exploiting bioprocess engineering tools, namely microcarrier technology and bioreactors.

  9. A highly scalable peptide-based assay system for proteomics.

    Directory of Open Access Journals (Sweden)

    Igor A Kozlov

    Full Text Available We report a scalable and cost-effective technology for generating and screening high-complexity customizable peptide sets. The peptides are made as peptide-cDNA fusions by in vitro transcription/translation from pools of DNA templates generated by microarray-based synthesis. This approach enables large custom sets of peptides to be designed in silico, manufactured cost-effectively in parallel, and assayed efficiently in a multiplexed fashion. The utility of our peptide-cDNA fusion pools was demonstrated in two activity-based assays designed to discover protease and kinase substrates. In the protease assay, cleaved peptide substrates were separated from uncleaved and identified by digital sequencing of their cognate cDNAs. We screened the 3,011 amino acid HCV proteome for susceptibility to cleavage by the HCV NS3/4A protease and identified all 3 known trans cleavage sites with high specificity. In the kinase assay, peptide substrates phosphorylated by tyrosine kinases were captured and identified by sequencing of their cDNAs. We screened a pool of 3,243 peptides against Abl kinase and showed that phosphorylation events detected were specific and consistent with the known substrate preferences of Abl kinase. Our approach is scalable and adaptable to other protein-based assays.

  10. Scalable Nernst thermoelectric power using a coiled galfenol wire

    Science.gov (United States)

    Yang, Zihao; Codecido, Emilio A.; Marquez, Jason; Zheng, Yuanhua; Heremans, Joseph P.; Myers, Roberto C.

    2017-09-01

    The Nernst thermopower is usually considered far too weak in most metals for waste heat recovery. However, its transverse orientation gives it an advantage over the Seebeck effect on non-flat surfaces. Here, we experimentally demonstrate the scalable generation of a Nernst voltage in an air-cooled metal wire coiled around a hot cylinder. In this geometry, a radial temperature gradient generates an azimuthal electric field in the coil. A Galfenol (Fe0.85Ga0.15) wire is wrapped around a cartridge heater, and the voltage drop across the wire is measured as a function of axial magnetic field. As expected, the Nernst voltage scales linearly with the length of the wire. Based on heat conduction and fluid dynamic equations, the finite-element method is used to calculate the temperature gradient across the Galfenol wire and determine the Nernst coefficient. A giant Nernst coefficient of -2.6 μV/(K T) at room temperature is estimated, in agreement with measurements on bulk Galfenol. We expect that the giant Nernst effect in Galfenol arises from its magnetostriction, presumably through enhanced magnon-phonon coupling. Our results demonstrate the feasibility of a transverse thermoelectric generator capable of scalable output power from non-flat heat sources.

  12. Continuous flow photocyclization of stilbenes – scalable synthesis of functionalized phenanthrenes and helicenes

    Directory of Open Access Journals (Sweden)

    Quentin Lefebvre

    2013-09-01

    Full Text Available A continuous flow oxidative photocyclization of stilbene derivatives has been developed which allows the scalable synthesis of backbone functionalized phenanthrenes and helicenes of various sizes in good yields.

  13. A Scalable Method toward Superhydrophilic and Underwater Superoleophobic PVDF Membranes for Effective Oil/Water Emulsion Separation.

    Science.gov (United States)

    Yuan, Tao; Meng, Jianqiang; Hao, Tingyu; Wang, Zihong; Zhang, Yufeng

    2015-07-15

    A superhydrophilic and underwater superoleophobic PVDF membrane (PVDFAH) has been prepared by coating a hydrogel onto the membrane surface, and its superior performance for oil/water emulsion separation has been demonstrated. The coated hydrogel was constructed by interfacial polymerization based on the thiol-epoxy reaction of pentaerythritol tetrakis(3-mercaptopropionate) (PETMP) with diethylene glycol diglycidyl ether (PEGDGE) and simultaneously tethered onto an alkaline-treated commercial PVDF membrane surface via the thiol-ene reaction. The PVDFAH membranes can be fabricated in a few minutes under mild conditions and show superhydrophilic and underwater superoleophobic properties toward a series of organic solvents. Energy dispersive X-ray (EDX) analysis shows that the hydrogel coating extends throughout the pore lumen. The membrane shows superior oil/water emulsion separation performance, including high water permeation, quantitative oil rejection, and robust antifouling performance for a series of oil/water emulsions, including one prepared from crude oil. In addition, a 24 h Soxhlet-extraction experiment with an ethanol/water solution (50:50, v/v) was conducted to test the stability of the tethered hydrogel. The membrane maintained a water contact angle below 5°, indicating the stability of the covalent tethering. This technique shows great promise for the scalable fabrication of membrane materials for practical oil emulsion purification.

  14. Addressing Challenges and Scalability in the Synthesis of Thin Uniform Metal Shells on Large Metal Nanoparticle Cores: Case Study of Ag-Pt Core-Shell Nanocubes.

    Science.gov (United States)

    Aslam, Umar; Linic, Suljo

    2017-12-13

    Bimetallic nanoparticles in which a metal is coated with an ultrathin (∼1 nm) layer of a second metal are often desired for their unique chemical and physical properties. Current synthesis methods for producing such core-shell nanostructures often require incremental addition of a shell metal precursor which is rapidly reduced onto metal cores. A major shortcoming of this approach is that it necessitates precise concentrations of chemical reagents, making it difficult to perform at large scales. To address this issue, we considered an approach whereby the reduction of the shell metal precursor is controlled through in situ chemical modification of the precursor. We used this approach to develop a highly scalable synthesis for coating atomic layers of Pt onto Ag nanocubes. We show that Ag-Pt core-shell nanostructures are synthesized in high yields and that these structures effectively combine the optical properties of the plasmonic Ag nanocube core with the surface properties of the thin Pt shell. Additionally, we demonstrate the scalability of the synthesis by performing a tenfold scale-up.

  15. Scalable Multi-group Key Management for Advanced Metering Infrastructure

    OpenAIRE

    Benmalek , Mourad; Challal , Yacine; Bouabdallah , Abdelmadjid

    2015-01-01

    Advanced Metering Infrastructure (AMI) is composed of systems and networks to incorporate changes for modernizing the electricity grid, reduce peak loads, and meet energy efficiency targets. AMI is a privileged target for security attacks, with potentially great damage against infrastructures and privacy. For this reason, key management has been identified as one of the most challenging topics in AMI development. In this paper, we propose a new Scalable multi-group key ...

  16. Robustness of Structures

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2008-01-01

    This paper describes the background of the robustness requirements implemented in the Danish Code of Practice for Safety of Structures and in the Danish National Annex to Eurocode 0, see (DS-INF 146, 2003), (DS 409, 2006), (EN 1990 DK NA, 2007) and (Sørensen and Christensen, 2006). More frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure, combined with increased requirements for efficiency in design and execution and an increased risk of human errors, has made requirements on the robustness of new structures essential. According to Danish design rules, robustness shall be documented for all structures in the high consequence class. The design procedure to document sufficient robustness consists of: 1) review of loads and possible failure modes/scenarios and determination of acceptable collapse extent; 2) review...

  17. Dynamics robustness of cascading systems.

    Directory of Open Access Journals (Sweden)

    Jonathan T Young

    2017-03-01

    Full Text Available A most important property of biochemical systems is robustness. Static robustness, e.g., homeostasis, is the insensitivity of a state against perturbations, whereas dynamics robustness, e.g., homeorhesis, is the insensitivity of a dynamic process. In contrast to the extensively studied static robustness, dynamics robustness, i.e., how a system creates an invariant temporal profile against perturbations, is little explored, despite transient dynamics being crucial for cellular fates and reported to be robust experimentally. For example, the duration of a stimulus elicits different phenotypic responses, and signaling networks process and encode temporal information. Hence, robustness in time courses will be necessary for functional biochemical networks. Based on dynamical systems theory, we uncovered a general mechanism to achieve dynamics robustness. Using a three-stage linear signaling cascade as an example, we found that the temporal profiles and response duration post-stimulus are robust to perturbations of certain parameters. Analyzing the linearized model, we elucidated the criteria for when signaling cascades will display dynamics robustness. We found that changes in the upstream modules are masked in the cascade, and that the response duration is mainly controlled by the rate-limiting module and the organization of the cascade's kinetics. Specifically, we found two necessary conditions for dynamics robustness in signaling cascades: 1) a constraint on the rate-limiting process: the phosphatase activity in the perturbed module is not the slowest; and 2) constraints on the initial conditions: the kinase activity needs to be fast enough such that each module is saturated even with fast phosphatase activity and upstream changes are attenuated. We discuss the relevance of such robustness to several biological examples and the validity of the above conditions therein. Given the applicability of dynamics robustness to a variety of systems, it
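
    The flavor of the result can be reproduced with a toy three-stage cascade and made-up rate constants: when the last module is rate-limiting, perturbing an upstream degradation rate changes the amplitude of the final response much more than its duration.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def cascade(t, x, k, g, stim_end=1.0):
        u = 1.0 if t < stim_end else 0.0          # transient stimulus
        x1, x2, x3 = x
        return [k[0]*u - g[0]*x1,
                k[1]*x1 - g[1]*x2,
                k[2]*x2 - g[2]*x3]

    def duration(g_upstream):
        k = [1.0, 1.0, 1.0]
        g = [g_upstream, 5.0, 0.5]                # module 3 is rate-limiting
        sol = solve_ivp(cascade, (0, 30), [0, 0, 0], args=(k, g),
                        dense_output=True, max_step=0.01)
        t = np.linspace(0, 30, 3000)
        x3 = sol.sol(t)[2]
        above = t[x3 > 0.5 * x3.max()]            # half-maximum response window
        return above[-1] - above[0]

    for g1 in (2.0, 5.0, 10.0):                   # perturb upstream degradation
        print(f"upstream g1={g1:4.1f} -> x3 response duration {duration(g1):.2f}")
    ```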

  18. Towards Scalable Strain Gauge-Based Joint Torque Sensors

    Science.gov (United States)

    D’Imperio, Mariapaola; Cannella, Ferdinando; Caldwell, Darwin G.; Cuschieri, Alfred

    2017-01-01

    During recent decades, strain gauge-based joint torque sensors have been commonly used to provide high-fidelity torque measurements in robotics. Although measurement of joint torque/force is often required in engineering research and development, the gluing and wiring of strain gauges used as torque sensors pose difficulties during integration within the restricted space available in small joints. The problem is compounded by the need for a scalable geometric design to measure joint torque. In this communication, we describe a novel design of a strain gauge-based mono-axial torque sensor referred to as the square-cut torque sensor (SCTS), the significant features of which are a high degree of linearity, symmetry, and high scalability in terms of both size and measuring range. Most importantly, the SCTS provides easy access for gluing and wiring of the strain gauges on the sensor surface despite the limited available space. We demonstrated that the SCTS performs better in terms of symmetry (clockwise and counterclockwise rotation) and linearity. These capabilities have been shown through finite element modeling (ANSYS) confirmed by data obtained in load testing experiments. The high performance of the SCTS was confirmed by studies involving changes in size, material, and/or wing width and thickness. Finally, we demonstrated that the SCTS can be successfully implemented inside the hip joints of the miniaturized hydraulically actuated quadruped robot MiniHyQ. This communication is based on work presented at the 18th International Conference on Climbing and Walking Robots (CLAWAR). PMID:28820446

  19. Scalable parallel prefix solvers for discrete ordinates transport

    International Nuclear Information System (INIS)

    Pautz, S.; Pandya, T.; Adams, M.

    2009-01-01

    The well-known 'sweep' algorithm for inverting the streaming-plus-collision term in first-order deterministic radiation transport calculations has some desirable numerical properties. However, it suffers from parallel scaling issues caused by a lack of concurrency. The maximum degree of concurrency, and thus the maximum parallelism, grows more slowly than the problem size for sweeps-based solvers. We investigate a new class of parallel algorithms that involves recasting the streaming-plus-collision problem in prefix form and solving via cyclic reduction. This method, although computationally more expensive at low levels of parallelism than the sweep algorithm, offers better theoretical scalability properties. Previous work has demonstrated this approach for one-dimensional calculations; we show how to extend it to multidimensional calculations. Notably, for multiple dimensions it appears that this approach is limited to long-characteristics discretizations; other discretizations cannot be cast in prefix form. We implement two variants of the algorithm within the radlib/SCEPTRE transport code library at Sandia National Laboratories and show results on two different massively parallel systems. Both the 'forward' and 'symmetric' solvers behave similarly, scaling well to larger degrees of parallelism than sweeps-based solvers. We do observe some issues at the highest levels of parallelism (relative to the system size) and discuss possible causes. We conclude that this approach shows good potential for future parallel systems, but the parallel scalability will depend heavily on the architecture of the communication networks of these systems. (authors)
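
    The recasting can be illustrated in one dimension: each cell's streaming-plus-collision update is an affine map of the incoming intensity, and affine maps compose associatively, which is precisely the structure a prefix scan or cyclic reduction exploits. The serial scan below (with invented cross section and source values) is only a reference implementation of that structure.

    ```python
    # Along a 1-D characteristic, the discretized update is the linear
    # recurrence I[i+1] = a[i]*I[i] + b[i] (attenuation plus source).
    import numpy as np

    def combine(f, g):
        """Compose affine maps: apply f, then g. f=(a,b) means x -> a*x+b."""
        (a, b), (c, d) = f, g
        return (a * c, b * c + d)

    def prefix_solve(a, b, I0):
        """Inclusive scan of affine maps, then apply each prefix to I0.
        (Serial here; tree scans/cyclic reduction parallelize this step.)"""
        acc = (1.0, 0.0)                       # identity map
        out = np.empty(len(a))
        for i in range(len(a)):
            acc = combine(acc, (a[i], b[i]))
            out[i] = acc[0] * I0 + acc[1]
        return out

    n = 8
    sigma, dx, q = 0.5, 1.0, 0.2
    a = np.full(n, np.exp(-sigma * dx))        # per-cell attenuation
    b = np.full(n, q * (1 - np.exp(-sigma * dx)) / sigma)  # per-cell source
    scan = prefix_solve(a, b, I0=1.0)

    # Check against the sequential "sweep" recurrence.
    I, sweep = 1.0, []
    for i in range(n):
        I = a[i] * I + b[i]
        sweep.append(I)
    print(np.allclose(scan, sweep))            # True
    ```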

  20. Vocal activity as a low cost and scalable index of seabird colony size.

    Science.gov (United States)

    Borker, Abraham L; McKown, Matthew W; Ackerman, Joshua T; Eagles-Smith, Collin A; Tershy, Bernie R; Croll, Donald A

    2014-08-01

    Although wildlife conservation actions have increased globally in number and complexity, the lack of scalable, cost-effective monitoring methods limits adaptive management and the evaluation of conservation efficacy. Automated sensors and computer-aided analyses provide a scalable and increasingly cost-effective tool for conservation monitoring. A key assumption of automated acoustic monitoring of birds is that measures of acoustic activity at colony sites are correlated with the relative abundance of nesting birds. We tested this assumption for nesting Forster's terns (Sterna forsteri) in San Francisco Bay for 2 breeding seasons. Sensors recorded ambient sound at 7 colonies that had 15-111 nests in 2009 and 2010. Colonies were spaced at least 250 m apart and ranged from 36 to 2,571 m². We used spectrogram cross-correlation to automate the detection of tern calls from recordings. We calculated mean seasonal call rate and compared it with mean active nest count at each colony. Acoustic activity explained 71% of the variation in nest abundance between breeding sites and 88% of the change in colony size between years. These results validate a primary assumption of acoustic indices; that is, for terns, acoustic activity is correlated to relative abundance, a fundamental step toward designing rigorous and scalable acoustic monitoring programs to measure the effectiveness of conservation actions for colonial birds and other acoustically active wildlife.
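
    A sketch of the detection step, spectrogram cross-correlation, on synthetic audio: a call template's spectrogram is slid along the recording's spectrogram and peaks in the correlation score are flagged. Frequencies, thresholds, and window sizes are arbitrary choices, not those of the study.

    ```python
    import numpy as np
    from scipy.signal import spectrogram, correlate2d

    fs = 8000
    t = np.arange(0, 5.0, 1 / fs)
    audio = 0.1 * np.random.default_rng(0).normal(size=t.size)   # background
    call = np.sin(2 * np.pi * 2000 * np.arange(0, 0.1, 1 / fs))  # 2 kHz "call"
    for start in (1.0, 3.2):                                     # two calls
        i = int(start * fs)
        audio[i:i + call.size] += call

    f, tt, S = spectrogram(audio, fs=fs, nperseg=256, noverlap=128)
    fc, tc, T = spectrogram(call, fs=fs, nperseg=256, noverlap=128)

    # Z-score both spectrograms, then slide the template ('valid' keeps the
    # template fully inside the recording's time axis).
    Sn = (S - S.mean()) / S.std()
    Tn = (T - T.mean()) / T.std()
    score = correlate2d(Sn, Tn, mode="valid")[0]

    thresh = score.mean() + 3 * score.std()
    hits = tt[:score.size][score > thresh]     # approximate detection times
    print("detections near (s):", np.round(hits, 2))
    ```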

  1. Design for Scalability: A Case Study of the River City Curriculum

    Science.gov (United States)

    Clarke, Jody; Dede, Chris

    2009-01-01

    One-size-fits-all educational innovations do not work because they ignore contextual factors that determine an intervention's efficacy in a particular local situation. This paper presents a framework on how to design educational innovations for scalability through enhancing their adaptability for effective usage in a wide variety of settings. The…

  2. Robust and distributed hypothesis testing

    CERN Document Server

    Gül, Gökhan

    2017-01-01

    This book generalizes and extends the available theory in robust and decentralized hypothesis testing. In particular, it presents a robust test for modeling errors which is independent of the assumptions that a sufficiently large number of samples is available and that the distance is the KL-divergence. Here, the distance can be chosen from a much more general model, which includes the KL-divergence as a very special case. This is then extended by various means. A minimax robust test that is robust against both outliers and modeling errors is presented. Minimax robustness properties of the given tests are also explicitly proven for fixed sample size and sequential probability ratio tests. The theory of robust detection is extended to robust estimation, and the theory of robust distributed detection is extended to classes of distributions which are not necessarily stochastically bounded. It is shown that the quantization functions for the decision rules can also be chosen as non-monotone. Finally, the boo...

  3. A Scalable and Reliable Message Transport Service for the ATLAS Trigger and Data Acquisition System

    CERN Document Server

    Kazarov, A; The ATLAS collaboration; Kolos, S; Lehmann Miotto, G; Soloviev, I

    2014-01-01

    The ATLAS Trigger and Data Acquisition (TDAQ) is a large distributed computing system composed of several thousand interconnected computers and tens of thousands of applications. During a run, TDAQ applications produce a large number of control and information messages at variable rates, addressed to TDAQ operators or to other applications. Reliable, fast and accurate delivery of these messages is important for the functioning of the whole TDAQ system. The Message Transport Service (MTS) provides facilities for the reliable transport, filtering and routing of messages, based on a publish-subscribe-notify communication pattern with content-based message filtering. During the ongoing LHC shutdown, the MTS was re-implemented, taking into account important requirements such as reliability, scalability and performance, handling of the slow-subscriber case, and simplicity of design and implementation. MTS uses CORBA middleware, a common layer for the TDAQ infrastructure, and provides sending/subscribing APIs i...

  4. Robustness Property of Robust-BD Wald-Type Test for Varying-Dimensional General Linear Models

    Directory of Open Access Journals (Sweden)

    Xiao Guo

    2018-03-01

    Full Text Available An important issue for robust inference is to examine the stability of the asymptotic level and power of the test statistic in the presence of contaminated data. Most existing results are derived in finite-dimensional settings with some particular choices of loss functions. This paper re-examines this issue by allowing for a diverging number of parameters combined with a broader array of robust error measures, called “robust-BD”, for the class of “general linear models”. Under regularity conditions, we derive the influence function of the robust-BD parameter estimator and demonstrate that the robust-BD Wald-type test enjoys robustness of validity and efficiency asymptotically. Specifically, the asymptotic level of the test is stable under a small amount of contamination of the null hypothesis, whereas the asymptotic power is large enough under a contaminated distribution in a neighborhood of the contiguous alternatives, thus lending support to the utility of the proposed robust-BD Wald-type test.

  5. Interactive segmentation: a scalable superpixel-based method

    Science.gov (United States)

    Mathieu, Bérengère; Crouzil, Alain; Puel, Jean-Baptiste

    2017-11-01

    This paper addresses the problem of interactive multiclass segmentation of images. We propose a fast and efficient new interactive segmentation method called superpixel α fusion (SαF). From a few strokes drawn by a user over an image, this method extracts relevant semantic objects. To achieve fast computation and accurate segmentation, SαF uses superpixel oversegmentation and support vector machine classification. We compare SαF with competing algorithms by evaluating its performance on reference benchmarks. We also introduce four new datasets to evaluate the scalability of interactive segmentation methods, with images ranging from a few thousand to several million pixels. We conclude with two applications of SαF.
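
    The two main ingredients, superpixel oversegmentation followed by SVM classification of superpixels from user strokes, can be sketched as follows, with a synthetic image, hypothetical stroke positions, and mean-RGB features standing in for the method's richer cues.

    ```python
    import numpy as np
    from skimage.segmentation import slic
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    img = np.zeros((64, 64, 3))
    img[:, :32] = [0.9, 0.2, 0.2]; img[:, 32:] = [0.2, 0.2, 0.9]
    img = np.clip(img + 0.05 * rng.normal(size=img.shape), 0, 1)

    labels = slic(img, n_segments=50, compactness=10, start_label=0)
    n_sp = labels.max() + 1
    feats = np.array([img[labels == s].mean(axis=0) for s in range(n_sp)])

    # "User strokes": class 0 on the left edge, class 1 on the right edge.
    stroke_class = {labels[32, 2]: 0, labels[32, 61]: 1}
    X = feats[list(stroke_class)]
    y = list(stroke_class.values())

    clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
    seg = clf.predict(feats)[labels]            # per-pixel class map
    print(np.unique(seg[:, :32]), np.unique(seg[:, 32:]))
    ```

    Classifying a few dozen superpixels instead of millions of pixels is what makes this style of method fast enough for interactive use on large images.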

  6. geoKepler Workflow Module for Computationally Scalable and Reproducible Geoprocessing and Modeling

    Science.gov (United States)

    Cowart, C.; Block, J.; Crawl, D.; Graham, J.; Gupta, A.; Nguyen, M.; de Callafon, R.; Smarr, L.; Altintas, I.

    2015-12-01

    The NSF-funded WIFIRE project has developed an open-source, online geospatial workflow platform for unifying geoprocessing tools and models for fire and other geospatially dependent modeling applications. It is a product of WIFIRE's objective to build an end-to-end cyberinfrastructure for real-time and data-driven simulation, prediction and visualization of wildfire behavior. geoKepler includes a set of reusable GIS components, or actors, for the Kepler Scientific Workflow System (https://kepler-project.org). Actors exist for reading and writing GIS data in formats such as Shapefile, GeoJSON, and KML, and for using OGC web services such as WFS. The actors also allow for calling geoprocessing tools in other packages such as GDAL and GRASS. Kepler integrates functions from multiple platforms and file formats into one framework, thus enabling optimal GIS interoperability, model coupling, and scalability. Products of the GIS actors can be fed directly to models such as FARSITE and WRF. Kepler's ability to schedule and scale processes using Hadoop and Spark also makes geoprocessing extensible and computationally scalable. The reusable workflows in geoKepler can be made to run automatically when alerted by real-time environmental conditions. Here, we show breakthroughs in the speed of creating complex data for hazard assessments with this platform. We also demonstrate geoKepler workflows that use data assimilation to ingest real-time weather data into wildfire simulations, and data mining techniques to gain insight into environmental conditions affecting fire behavior. Existing machine learning tools and libraries such as R and MLlib are being leveraged for this purpose in Kepler, as well as Kepler's Distributed Data Parallel (DDP) capability to provide a framework for scalable processing. geoKepler workflows can be executed via an iPython notebook as part of a Jupyter hub at UC San Diego for sharing and reporting of the scientific analysis and results from

  7. Scalability of a Low-Cost Multi-Teraflop Linux Cluster for High-End Classical Atomistic and Quantum Mechanical Simulations

    Science.gov (United States)

    Kikuchi, Hideaki; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya; Shimojo, Fuyuki; Saini, Subhash

    2003-01-01

    Scalability of a low-cost, Intel Xeon-based, multi-Teraflop Linux cluster is tested for two high-end scientific applications: Classical atomistic simulation based on the molecular dynamics method and quantum mechanical calculation based on the density functional theory. These scalable parallel applications use space-time multiresolution algorithms and feature computational-space decomposition, wavelet-based adaptive load balancing, and spacefilling-curve-based data compression for scalable I/O. Comparative performance tests are performed on a 1,024-processor Linux cluster and a conventional higher-end parallel supercomputer, 1,184-processor IBM SP4. The results show that the performance of the Linux cluster is comparable to that of the SP4. We also study various effects, such as the sharing of memory and L2 cache among processors, on the performance.

  8. Dynamic excitatory and inhibitory gain modulation can produce flexible, robust and optimal decision-making.

    Directory of Open Access Journals (Sweden)

    Ritwik K Niyogi

    Full Text Available Behavioural and neurophysiological studies in primates have increasingly shown the involvement of urgency signals during the temporal integration of sensory evidence in perceptual decision-making. Neuronal correlates of such signals have been found in the parietal cortex, and separate studies have demonstrated attention-induced gain modulation of both excitatory and inhibitory neurons. Although previous computational models of decision-making have incorporated gain modulation, their abstract forms do not permit an understanding of the contribution of inhibitory gain modulation. Thus, the effects of co-modulating both excitatory and inhibitory neuronal gains on decision-making dynamics and behavioural performance remain unclear. In this work, we incorporate time-dependent co-modulation of the gains of both excitatory and inhibitory neurons into our previous biologically based decision circuit model. We base our computational study in the context of two classic motion-discrimination tasks performed in animals. Our model shows that by simultaneously increasing the gains of both excitatory and inhibitory neurons, a variety of the observed dynamic neuronal firing activities can be replicated. In particular, the model can exhibit winner-take-all decision-making behaviour with higher firing rates and within a significantly more robust model parameter range. It also exhibits short-tailed reaction time distributions even when operating near a dynamical bifurcation point. The model further shows that neuronal gain modulation can compensate for weaker recurrent excitation in a decision neural circuit, and support decision formation and storage. Higher neuronal gain is also suggested in the more cognitively demanding reaction time than in the fixed delay version of the task. Using the exact temporal delays from the animal experiments, fast recruitment of gain co-modulation is shown to maximize reward rate, with a timescale that is surprisingly near the
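
    A toy firing-rate version of the idea, two mutually inhibiting populations whose excitatory and inhibitory inputs are multiplied by a ramping gain, is sketched below; all parameters are illustrative and unrelated to the paper's fitted model.

    ```python
    import numpy as np

    def f(x, gain):                     # gain-modulated input-output curve
        return np.maximum(gain * x, 0.0)

    def simulate(coherence=0.1, T=2.0, dt=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        r = np.zeros(2)                 # firing rates of the two populations
        w_exc, w_inh, tau = 0.6, 1.0, 0.1
        for step in range(int(T / dt)):
            t = step * dt
            gain = 1.0 + 1.5 * t / T    # urgency-like gain ramp on E and I
            stim = np.array([1.0 + coherence, 1.0 - coherence])
            inp = stim + w_exc * r - w_inh * r[::-1] + 0.02 * rng.normal(size=2)
            r += dt / tau * (-r + f(inp, gain))
            if abs(r[0] - r[1]) > 15:   # decision threshold on rate difference
                return t, int(r[0] < r[1])
        return T, int(r[0] < r[1])

    print(simulate())                   # (reaction time, chosen alternative)
    ```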

  9. On eliminating synchronous communication in molecular simulations to improve scalability

    Science.gov (United States)

    Straatsma, T. P.; Chavarría-Miranda, Daniel G.

    2013-12-01

    Molecular dynamics simulation, as a complementary tool to experimentation, has become an important methodology for the understanding and design of molecular systems as it provides access to properties that are difficult, impossible or prohibitively expensive to obtain experimentally. Many of the available software packages have been parallelized to take advantage of modern massively concurrent processing resources. The challenge in achieving parallel efficiency is commonly attributed to the fact that molecular dynamics algorithms are communication intensive. This paper illustrates how an appropriately chosen data distribution and asynchronous one-sided communication approach can be used to effectively deal with the data movement within the Global Arrays/ARMCI programming model framework. A new put_notify capability is presented here, allowing the implementation of the molecular dynamics algorithm without any explicit global or local synchronization or global data reduction operations. In addition, this push-data model is shown to very effectively allow hiding data communication behind computation. Rather than data movement or explicit global reductions, the implicit synchronization of the algorithm becomes the primary challenge for scalability. Without any explicit synchronous operations, the scalability of molecular simulations is shown to depend only on the ability to evenly balance computational load.

  10. Robustness Beamforming Algorithms

    Directory of Open Access Journals (Sweden)

    Sajad Dehghani

    2014-04-01

    Full Text Available Adaptive beamforming methods are known to degrade in the presence of steering vector and covariance matrix uncertainty. In this paper, a new approach to robust adaptive minimum variance distortionless response (MVDR) beamforming is presented that is robust against uncertainties in both the steering vector and the covariance matrix. The method minimizes an optimization problem with a quadratic objective function and a quadratic constraint. Although this problem is nonconvex, it is converted to a convex optimization problem in this paper. It is solved by the interior-point method, and the optimum weight vector for robust beamforming is obtained.
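
    For reference, the standard (non-robust) MVDR weight vector that such methods build on is w = R⁻¹a / (aᴴR⁻¹a). The sketch below computes it in NumPy with simple diagonal loading, a common ad hoc robustification, rather than the paper's convex-optimization formulation; array geometry and loading level are illustrative:

        import numpy as np

        def mvdr_weights(R, a, loading=1e-2):
            """MVDR beamformer: minimize w^H R w subject to w^H a = 1.

            Diagonal loading is a simple hedge against covariance estimation
            errors; the paper instead solves a quadratically constrained
            problem with an interior-point method.
            """
            N = R.shape[0]
            R_loaded = R + loading * np.trace(R).real / N * np.eye(N)
            Ri_a = np.linalg.solve(R_loaded, a)
            return Ri_a / (a.conj().T @ Ri_a)

        # Toy example: 8-element array, signal from broadside.
        N = 8
        a = np.exp(1j * np.pi * np.arange(N) * np.sin(0.0))  # steering vector
        R = np.eye(N) + 0.1 * np.random.randn(N, N)
        R = R @ R.conj().T                                   # Hermitian PSD covariance
        w = mvdr_weights(R, a)
        print(abs(w.conj().T @ a))                           # distortionless: ~1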

  11. Economical and scalable synthesis of 6-amino-2-cyanobenzothiazole

    Directory of Open Access Journals (Sweden)

    Jacob R. Hauser

    2016-09-01

    Full Text Available 2-Cyanobenzothiazoles (CBTs are useful building blocks for: 1 luciferin derivatives for bioluminescent imaging; and 2 handles for bioorthogonal ligations. A particularly versatile CBT is 6-amino-2-cyanobenzothiazole (ACBT, which has an amine handle for straight-forward derivatisation. Here we present an economical and scalable synthesis of ACBT based on a cyanation catalysed by 1,4-diazabicyclo[2.2.2]octane (DABCO, and discuss its advantages for scale-up over previously reported routes.

  12. Architectural Techniques to Enable Reliable and Scalable Memory Systems

    OpenAIRE

    Nair, Prashant J.

    2017-01-01

    High capacity and scalable memory systems play a vital role in enabling our desktops, smartphones, and pervasive technologies like Internet of Things (IoT). Unfortunately, memory systems are becoming increasingly prone to faults. This is because we rely on technology scaling to improve memory density, and at small feature sizes, memory cells tend to break easily. Today, memory reliability is seen as the key impediment towards using high-density devices, adopting new technologies, and even bui...

  13. Robust mechanobiological behavior emerges in heterogeneous myosin systems

    Science.gov (United States)

    Egan, Paul F.; Moore, Jeffrey R.; Ehrlicher, Allen J.; Weitz, David A.; Schunn, Christian; Cagan, Jonathan; LeDuc, Philip

    2017-09-01

    Biological complexity presents challenges for understanding natural phenomena and engineering new technologies, particularly in systems with molecular heterogeneity. Such complexity is present in myosin motor protein systems, and computational modeling is essential for determining how collective myosin interactions produce emergent system behavior. We develop a computational approach for altering myosin isoform parameters and their collective organization, and support predictions with in vitro experiments of motility assays with α-actinins as molecular force sensors. The computational approach models variations in single myosin molecular structure, system organization, and force stimuli to predict system behavior for filament velocity, energy consumption, and robustness. Robustness is the range of forces over which a filament is expected to have continuous velocity, and it depends on the energy used by the myosin system. Myosin systems are shown to have highly nonlinear behavior across force conditions that may be exploited at a systems level by combining slow and fast myosin isoforms heterogeneously. Results suggest some heterogeneous systems have lower energy use near stall conditions and greater energy consumption when unloaded, therefore promoting robustness. These heterogeneous system capabilities are unique in comparison with homogeneous systems and potentially advantageous for high-performance bionanotechnologies. Findings open doors at the intersections of mechanics and biology, particularly for understanding and treating myosin-related diseases and developing approaches for motor molecule-based technologies.

  14. Development, Verification and Validation of Parallel, Scalable Volume of Fluid CFD Program for Propulsion Applications

    Science.gov (United States)

    West, Jeff; Yang, H. Q.

    2014-01-01

    There are many instances involving liquid/gas interfaces and their dynamics in the design of liquid-engine-powered rockets such as the Space Launch System (SLS). Some examples of these applications are: propellant tank draining and slosh, subcritical-condition injector analysis for gas generators, preburners and thrust chambers, water deluge mitigation for launch-induced environments, and even solid rocket motor liquid slag dynamics. Commercially available CFD programs simulating gas/liquid interfaces using the Volume of Fluid approach are currently limited in their parallel scalability. In 2010, for instance, an internal NASA/MSFC review of three commercial tools revealed that parallel scalability was seriously compromised at 8 CPUs and no additional speedup was possible after 32 CPUs. Other non-interface CFD applications at the time were demonstrating useful parallel scalability up to 4,096 processors or more. Based on this review, NASA/MSFC initiated an effort to implement a Volume of Fluid capability within the unstructured-mesh, pressure-based CFD program Loci-STREAM. After verification was achieved by comparing results to the commercial CFD program CFD-Ace+, and validation by direct comparison with data, Loci-STREAM-VoF is now the production CFD tool for propellant slosh force and slosh damping rate simulations at NASA/MSFC. In these applications, good parallel scalability has been demonstrated for problem sizes of tens of millions of cells and thousands of CPU cores. Ongoing efforts are focused on the application of Loci-STREAM-VoF to predict the transient flow patterns of water on the SLS Mobile Launch Platform, in order to support the phasing of water for launch environment mitigation so that detrimental effects on the vehicle are not realized.

  15. Efficient Delivery of Scalable Video Using a Streaming Class Model

    Directory of Open Access Journals (Sweden)

    Jason J. Quinlan

    2018-03-01

    Full Text Available When we couple the rise in video streaming with the growing number of portable devices (smart phones, tablets, laptops, we see an ever-increasing demand for high-definition video online while on the move. Wireless networks are inherently characterised by restricted shared bandwidth and relatively high error loss rates, thus presenting a challenge for the efficient delivery of high quality video. Additionally, mobile devices can support/demand a range of video resolutions and qualities. This demand for mobile streaming highlights the need for adaptive video streaming schemes that can adjust to available bandwidth and heterogeneity, and can provide graceful changes in video quality, all while respecting viewing satisfaction. In this context, the use of well-known scalable/layered media streaming techniques, commonly known as scalable video coding (SVC, is an attractive solution. SVC encodes a number of video quality levels within a single media stream. This has been shown to be an especially effective and efficient solution, but it fares badly in the presence of datagram losses. While multiple description coding (MDC can reduce the effects of packet loss on scalable video delivery, the increased delivery cost is counterproductive for constrained networks. This situation is accentuated in cases where only the lower quality level is required. In this paper, we assess these issues and propose a new approach called Streaming Classes (SC through which we can define a key set of quality levels, each of which can be delivered in a self-contained manner. This facilitates efficient delivery, yielding reduced transmission byte-cost for devices requiring lower quality, relative to MDC and Adaptive Layer Distribution (ALD (42% and 76% respective reduction for layer 2, while also maintaining high levels of consistent quality. We also illustrate how a selective packetisation technique can further reduce the effects of packet loss on viewable quality by

  16. Parallel scalability of Hartree-Fock calculations

    Science.gov (United States)

    Chow, Edmond; Liu, Xing; Smelyanskiy, Mikhail; Hammond, Jeff R.

    2015-03-01

    Quantum chemistry is increasingly performed using large cluster computers consisting of multiple interconnected nodes. For a fixed molecular problem, the efficiency of a calculation usually decreases as more nodes are used, due to the cost of communication between the nodes. This paper empirically investigates the parallel scalability of Hartree-Fock calculations. The construction of the Fock matrix and the density matrix calculation are analyzed separately. For the former, we use a parallelization of Fock matrix construction based on a static partitioning of work followed by a work stealing phase. For the latter, we use density matrix purification from the linear scaling methods literature, but without using sparsity. When using large numbers of nodes for moderately sized problems, density matrix computations are network-bandwidth bound, making purification methods potentially faster than eigendecomposition methods.
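
    Density matrix purification, mentioned above as an alternative to eigendecomposition, iterates a matrix polynomial that drives the eigenvalues of an approximate density matrix to 0 or 1. A minimal NumPy sketch of McWeeny purification, one common variant (not necessarily the exact scheme used in the paper), is:

        import numpy as np

        def mcweeny_purify(D, tol=1e-10, max_iter=100):
            """Iterate D <- 3D^2 - 2D^3 until D is (numerically) idempotent.

            D should have eigenvalues in [0, 1] for the iteration to converge;
            the dominant cost is dense matrix multiplication, which is what
            makes the method network-bandwidth bound at scale.
            """
            for _ in range(max_iter):
                D2 = D @ D
                if np.linalg.norm(D2 - D) < tol:   # idempotency check
                    break
                D = 3.0 * D2 - 2.0 * (D2 @ D)
            return D

        # Toy example: a symmetric matrix with its spectrum mapped into [0, 1].
        A = np.random.randn(6, 6); A = (A + A.T) / 2
        lo, hi = np.linalg.eigvalsh(A)[[0, -1]]
        D0 = (hi * np.eye(6) - A) / (hi - lo)
        D = mcweeny_purify(D0)
        print(np.allclose(D @ D, D))               # True: D is now a projector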

  17. ParaText : scalable solutions for processing and searching very large document collections : final LDRD report.

    Energy Technology Data Exchange (ETDEWEB)

    Crossno, Patricia Joyce; Dunlavy, Daniel M.; Stanton, Eric T.; Shead, Timothy M.

    2010-09-01

    This report is a summary of the accomplishments of the 'Scalable Solutions for Processing and Searching Very Large Document Collections' LDRD, which ran from FY08 through FY10. Our goal was to investigate scalable text analysis; specifically, methods for information retrieval and visualization that could scale to extremely large document collections. Towards that end, we designed, implemented, and demonstrated a scalable framework for text analysis - ParaText - as a major project deliverable. Further, we demonstrated the benefits of using visual analysis in text analysis algorithm development, improved performance of heterogeneous ensemble models in data classification problems, and the advantages of information theoretic methods in user analysis and interpretation in cross language information retrieval. The project involved 5 members of the technical staff and 3 summer interns (including one who worked two summers). It resulted in a total of 14 publications, 3 new software libraries (2 open source and 1 internal to Sandia), several new end-user software applications, and over 20 presentations. Several follow-on projects have already begun or will start in FY11, with additional projects currently in proposal.

  18. An algorithm for robust non-linear analysis of radioimmunoassays and other bioassays

    International Nuclear Information System (INIS)

    Normolle, D.P.

    1993-01-01

    The four-parameter logistic function is an appropriate model for many types of bioassays that have continuous response variables, such as radioimmunoassays. By modelling the variance of replicates in an assay, one can modify the usual parameter estimation techniques (for example, Gauss-Newton or Marquardt-Levenberg) to produce parameter estimates for the standard curve that are robust against outlying observations. This article describes the computation of robust (M-) estimates for the parameters of the four-parameter logistic function. It describes techniques for modelling the variance structure of the replicates, modifications to the usual iterative algorithms for parameter estimation in non-linear models, and a formula for inverse confidence intervals. To demonstrate the algorithm, the article presents examples where the robustly estimated four-parameter logistic model is compared with the logit-log and four-parameter logistic models with least-squares estimates. (author)
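
    A minimal sketch of a robust four-parameter logistic fit, using SciPy's least_squares with a Huber-type loss in place of the article's tailored M-estimation algorithm; the data and starting values are illustrative:

        import numpy as np
        from scipy.optimize import least_squares

        def logistic4(x, a, b, c, d):
            """Four-parameter logistic: a = upper asymptote, b = slope factor,
            c = mid-point concentration (EC50), d = lower asymptote."""
            return d + (a - d) / (1.0 + (x / c) ** b)

        def residuals(theta, x, y):
            return logistic4(x, *theta) - y

        # Synthetic standard curve with one gross outlier.
        x = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
        y = logistic4(x, 2.0, 1.2, 5.0, 0.05) + 0.01 * np.random.randn(x.size)
        y[3] += 0.5                                # outlying replicate

        theta0 = [y.max(), 1.0, np.median(x), y.min()]
        fit = least_squares(residuals, theta0, args=(x, y),
                            loss="huber", f_scale=0.05)
        print(fit.x)   # close to (2.0, 1.2, 5.0, 0.05) despite the outlier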

  19. Towards scalable binderless electrodes: carbon coated silicon nanofiber paper via Mg reduction of electrospun SiO2 nanofibers.

    Science.gov (United States)

    Favors, Zachary; Bay, Hamed Hosseini; Mutlu, Zafer; Ahmed, Kazi; Ionescu, Robert; Ye, Rachel; Ozkan, Mihrimah; Ozkan, Cengiz S

    2015-02-06

    The need for more energy dense and scalable Li-ion battery electrodes has become increasingly pressing with the ushering in of more powerful portable electronics and electric vehicles (EVs) requiring substantially longer range capabilities. Herein, we report on the first synthesis of nano-silicon paper electrodes synthesized via magnesiothermic reduction of electrospun SiO2 nanofiber paper produced by an in situ acid catalyzed polymerization of tetraethyl orthosilicate (TEOS) in-flight. Free-standing carbon-coated Si nanofiber binderless electrodes produce a capacity of 802 mAh g^-1 after 659 cycles with a Coulombic efficiency of 99.9%, which outperforms conventionally used slurry-prepared graphite anodes by over two times on an active material basis. Silicon nanofiber paper anodes offer a completely binder-free and Cu current collector-free approach to electrode fabrication with a silicon weight percent in excess of 80%. The absence of conductive powder additives, metallic current collectors, and polymer binders in addition to the high weight percent silicon all contribute to significantly increasing capacity at the cell level.

  20. A Scalable Framework to Detect Personal Health Mentions on Twitter.

    Science.gov (United States)

    Yin, Zhijun; Fabbri, Daniel; Rosenbloom, S Trent; Malin, Bradley

    2015-06-05

    Biomedical research has traditionally been conducted via surveys and the analysis of medical records. However, these resources are limited in their content, such that non-traditional domains (eg, online forums and social media) have an opportunity to supplement the view of an individual's health. The objective of this study was to develop a scalable framework to detect personal health status mentions on Twitter and assess the extent to which such information is disclosed. We collected more than 250 million tweets via the Twitter streaming API over a 2-month period in 2014. The corpus was filtered down to approximately 250,000 tweets, stratified across 34 high-impact health issues, based on guidance from the Medical Expenditure Panel Survey. We created a labeled corpus of several thousand tweets via a survey, administered over Amazon Mechanical Turk, that documents when terms correspond to mentions of personal health issues or an alternative (eg, a metaphor). We engineered a scalable classifier for personal health mentions via feature selection and assessed its potential over the health issues. We further investigated the utility of the tweets by determining the extent to which Twitter users disclose personal health status. Our investigation yielded several notable findings. First, we find that tweets from a small subset of the health issues can train a scalable classifier to detect health mentions. Specifically, training on 2000 tweets from four health issues (cancer, depression, hypertension, and leukemia) yielded a classifier with precision of 0.77 on all 34 health issues. Second, Twitter users disclosed personal health status for all health issues. Notably, personal health status was disclosed over 50% of the time for 11 out of 34 (33%) investigated health issues. Third, the disclosure rate was dependent on the health issue in a statistically significant manner (P<.001). For instance, more than 80% of the tweets about migraines (83/100) and allergies (85
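
    The paper's engineered feature set is not given in this record, but a baseline for this kind of personal-health-mention classifier can be sketched with scikit-learn; the tweets and labels below are placeholders, not the study's corpus:

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # Placeholder labeled data: 1 = personal health mention, 0 = other usage.
        tweets = ["my migraine is back again", "this traffic is a headache",
                  "diagnosed with hypertension today", "heartbreak hotel on repeat"]
        labels = [1, 0, 1, 0]

        clf = make_pipeline(
            TfidfVectorizer(ngram_range=(1, 2)),   # word + bigram features
            LogisticRegression(),
        )
        clf.fit(tweets, labels)
        print(clf.predict(["I think I have a migraine"]))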

  1. Scalable Nonlinear Compact Schemes

    Energy Technology Data Exchange (ETDEWEB)

    Ghosh, Debojyoti [Argonne National Lab. (ANL), Argonne, IL (United States); Constantinescu, Emil M. [Univ. of Chicago, IL (United States); Brown, Jed [Univ. of Colorado, Boulder, CO (United States)

    2014-04-01

    In this work, we focus on compact schemes resulting in tridiagonal systems of equations, specifically the fifth-order CRWENO scheme. We propose a scalable implementation of the nonlinear compact schemes by implementing a parallel tridiagonal solver based on the partitioning/substructuring approach. We use an iterative solver for the reduced system of equations; however, we solve this system to machine zero accuracy to ensure that no parallelization errors are introduced. It is possible to achieve machine-zero convergence with few iterations because of the diagonal dominance of the system. The number of iterations is specified a priori instead of a norm-based exit criterion, and collective communications are avoided. The overall algorithm thus involves only point-to-point communication between neighboring processors. Our implementation of the tridiagonal solver differs from and avoids the drawbacks of past efforts in the following ways: it introduces no parallelization-related approximations (multiprocessor solutions are exactly identical to uniprocessor ones), it involves minimal communication, the mathematical complexity is similar to that of the Thomas algorithm on a single processor, and it does not require any communication and computation scheduling.
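
    For context, the serial Thomas algorithm whose per-processor complexity the parallel solver matches is the standard O(N) forward-elimination/back-substitution for tridiagonal systems. A reference NumPy version is sketched below (this is the textbook serial algorithm, not the paper's partitioned parallel solver):

        import numpy as np

        def thomas(a, b, c, d):
            """Solve a tridiagonal system with sub-, main-, super-diagonals
            a, b, c and right-hand side d. a[0] and c[-1] are unused."""
            n = len(d)
            cp, dp = np.empty(n), np.empty(n)
            cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
            for i in range(1, n):                  # forward elimination
                m = b[i] - a[i] * cp[i - 1]
                cp[i] = c[i] / m if i < n - 1 else 0.0
                dp[i] = (d[i] - a[i] * dp[i - 1]) / m
            x = np.empty(n)
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):         # back substitution
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x

        # Toy check against a dense solve (diagonally dominant system).
        n = 6
        a = np.full(n, -1.0); b = np.full(n, 4.0); c = np.full(n, -1.0)
        d = np.random.randn(n)
        A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
        print(np.allclose(thomas(a, b, c, d), np.linalg.solve(A, d)))  # True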

  2. Performance-scalable volumetric data classification for online industrial inspection

    Science.gov (United States)

    Abraham, Aby J.; Sadki, Mustapha; Lea, R. M.

    2002-03-01

    Non-intrusive inspection and non-destructive testing of manufactured objects with complex internal structures typically requires the enhancement, analysis and visualization of high-resolution volumetric data. Given the increasing availability of fast 3D scanning technology (e.g. cone-beam CT), enabling on-line detection and accurate discrimination of components or sub-structures, the inherent complexity of classification algorithms inevitably leads to throughput bottlenecks. Indeed, whereas typical inspection throughput requirements range from 1 to 1000 volumes per hour, depending on density and resolution, current computational capability is one to two orders-of-magnitude less. Accordingly, speeding up classification algorithms requires both reduction of algorithm complexity and acceleration of computer performance. A shape-based classification algorithm, offering algorithm complexity reduction, by using ellipses as generic descriptors of solids-of-revolution, and supporting performance-scalability, by exploiting the inherent parallelism of volumetric data, is presented. A two-stage variant of the classical Hough transform is used for ellipse detection and correlation of the detected ellipses facilitates position-, scale- and orientation-invariant component classification. Performance-scalability is achieved cost-effectively by accelerating a PC host with one or more COTS (Commercial-Off-The-Shelf) PCI multiprocessor cards. Experimental results are reported to demonstrate the feasibility and cost-effectiveness of the data-parallel classification algorithm for on-line industrial inspection applications.

  3. Robustness Analyses of Timber Structures

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Sørensen, John Dalsgaard; Hald, Frederik

    2013-01-01

    The robustness of structural systems has obtained a renewed interest arising from a much more frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure. In order to minimise the likelihood of such disproportionate structural failures, many modern building codes consider the need for the robustness of structures and provide strategies and methods to obtain robustness. Therefore, a structural engineer may take necessary steps to design robust structures that are insensitive to accidental circumstances. The present paper summarises issues with respect to robustness of timber structures and discusses the consequences of such robustness issues related to the future development of timber structures.

  4. Fourier transform based scalable image quality measure.

    Science.gov (United States)

    Narwaria, Manish; Lin, Weisi; McLoughlin, Ian; Emmanuel, Sabu; Chia, Liang-Tien

    2012-08-01

    We present a new image quality assessment (IQA) algorithm based on the phase and magnitude of the 2D (two-dimensional) Discrete Fourier Transform (DFT). The basic idea is to compare the phase and magnitude of the reference and distorted images to compute the quality score. However, it is well known that the Human Visual System's (HVS) sensitivity to different frequency components is not the same. We accommodate this fact via a simple yet effective strategy of non-uniform binning of the frequency components. This process also leads to a reduced-space representation of the image, thereby enabling the reduced-reference (RR) prospects of the proposed scheme. We employ linear regression to integrate the effects of the changes in phase and magnitude. In this way, the required weights are determined via proper training and hence are more convincing and effective. Lastly, using the fact that phase usually conveys more information than magnitude, we use only the phase for RR quality assessment. This provides the crucial advantage of further reduction in the required amount of reference image information. The proposed method is therefore further scalable for RR scenarios. We report extensive experimental results using a total of 9 publicly available databases: 7 image databases (with a total of 3832 distorted images with diverse distortions) and 2 video databases (with a total of 228 distorted videos). These show that the proposed method is overall better than several of the existing full-reference (FR) algorithms and two RR algorithms. Additionally, there is a graceful degradation in prediction performance as the amount of reference image information is reduced, thereby confirming its scalability prospects. To enable comparisons and future study, a Matlab implementation of the proposed algorithm is available at http://www.ntu.edu.sg/home/wslin/reduced_phase.rar.
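
    A bare-bones sketch of the core comparison, without the non-uniform frequency binning, HVS weighting, or trained regression described above, might look like this in NumPy; the combination weights are illustrative assumptions:

        import numpy as np

        def dft_similarity(ref, dist, w_phase=0.7, w_mag=0.3):
            """Compare phase and magnitude of the 2D DFT of two grayscale
            images. Returns a score in (0, 1]; 1 means identical. The fixed
            weights and lack of perceptual binning make this only a toy
            stand-in for the published metric."""
            F_ref, F_dist = np.fft.fft2(ref), np.fft.fft2(dist)
            dphi = np.angle(F_ref * np.conj(F_dist))   # wrapped phase difference
            phase_err = np.mean(np.abs(dphi)) / np.pi
            mag_err = (np.mean(np.abs(np.abs(F_ref) - np.abs(F_dist)))
                       / (np.mean(np.abs(F_ref)) + 1e-12))
            return 1.0 / (1.0 + w_phase * phase_err + w_mag * mag_err)

        ref = np.random.rand(64, 64)
        noisy = ref + 0.1 * np.random.randn(64, 64)
        print(dft_similarity(ref, ref), dft_similarity(ref, noisy))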

  5. The Political Economy of E-Learning Educational Development Strategies, Standardisation and Scalability

    Science.gov (United States)

    Kenney, Jacqueline; Hermens, Antoine; Clarke, Thomas

    2004-01-01

    The development of e-learning by government through policy, funding allocations, research-based collaborative projects and alliances has increased recently in both developed and under-developed nations. The paper notes that government, industry and corporate users are increasingly focusing on standardisation issues and the scalability of…

  6. Enabling Highly-Scalable Remote Memory Access Programming with MPI-3 One Sided

    Directory of Open Access Journals (Sweden)

    Robert Gerstenberger

    2014-01-01

    Full Text Available Modern interconnects offer remote direct memory access (RDMA) features. Yet, most applications rely on explicit message passing for communications, despite its unwanted overheads. The MPI-3.0 standard defines a programming interface for exploiting RDMA networks directly; however, its scalability and practicability have to be demonstrated in practice. In this work, we develop scalable bufferless protocols that implement the MPI-3.0 specification. Our protocols support scaling to millions of cores with negligible memory consumption while providing highest performance and minimal overheads. To arm programmers, we provide a spectrum of performance models for all critical functions and demonstrate the usability of our library and models with several application studies with up to half a million processes. We show that our design is comparable to, or better than, UPC and Fortran Coarrays in terms of latency, bandwidth and message rate. We also demonstrate application performance improvements with comparable programming complexity.
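
    The paper's protocols are implemented at the library level; as a minimal illustration of the MPI-3 one-sided style it targets, the mpi4py sketch below puts data directly into a remote window with no matching receive call. It assumes the unified memory model and is only the basic pattern, not the paper's protocol design:

        # Run with e.g.: mpiexec -n 2 python rma_put.py
        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        # Each rank exposes a small buffer through an RMA window.
        buf = np.zeros(4, dtype="d")
        win = MPI.Win.Create(buf, comm=comm)

        if rank == 0:
            data = np.arange(4, dtype="d")
            win.Lock(1)                        # passive-target access epoch
            win.Put([data, MPI.DOUBLE], 1)     # no receive needed on rank 1
            win.Unlock(1)

        comm.Barrier()                         # order the read after the put
        if rank == 1:
            print(buf)                         # [0. 1. 2. 3.]
        win.Free()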

  7. Scalable streaming tools for analyzing N-body simulations: Finding halos and investigating excursion sets in one pass

    Science.gov (United States)

    Ivkin, N.; Liu, Z.; Yang, L. F.; Kumar, S. S.; Lemson, G.; Neyrinck, M.; Szalay, A. S.; Braverman, V.; Budavari, T.

    2018-04-01

    Cosmological N-body simulations play a vital role in studying models for the evolution of the Universe. To compare to observations and make scientific inferences, statistical analysis of large simulation datasets, e.g., finding halos or obtaining multi-point correlation functions, is crucial. However, traditional in-memory methods for these tasks do not scale to the forbiddingly large datasets of modern simulations. Our prior paper (Liu et al., 2015) proposes memory-efficient streaming algorithms that can find the largest halos in a simulation with up to 10^9 particles on a small server or desktop. However, this approach fails when directly scaling to larger datasets. This paper presents a robust streaming tool that leverages state-of-the-art techniques in GPU boosting, sampling, and parallel I/O to significantly improve performance and scalability. Our rigorous analysis of the sketch parameters improves the previous results from finding the centers of the 10^3 largest halos (Liu et al., 2015) to ~10^4-10^5, and reveals the trade-offs between memory, running time and number of halos. Our experiments show that our tool can scale to datasets with up to ~10^12 particles while using less than an hour of running time on a single Nvidia GTX 1080 GPU.
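
    The streaming primitive involved can be illustrated with a tiny count-min-style sketch for approximate heavy-cell counting in one pass. This is only illustrative of the sketching idea; the tool's actual sketches, GPU boosting, and sampling layers are far more elaborate, and the hash construction below is a simplistic assumption:

        import numpy as np

        class CountMin:
            """Count-min sketch: approximate counts in fixed memory; counts
            are never underestimated, so truly dense cells (halo candidates)
            are not missed."""
            def __init__(self, depth=4, width=2**16, seed=0):
                rng = np.random.default_rng(seed)
                self.salts = rng.integers(1, 2**31 - 1, size=depth)
                self.table = np.zeros((depth, width), dtype=np.int64)
                self.width = width

            def _cols(self, key):
                return [hash((int(s), key)) % self.width for s in self.salts]

            def add(self, key, count=1):
                for r, col in enumerate(self._cols(key)):
                    self.table[r, col] += count

            def query(self, key):
                return min(self.table[r, col]
                           for r, col in enumerate(self._cols(key)))

        # Stream particle cell-IDs in one pass; query any cell afterwards.
        cm = CountMin()
        for cell_id in [7, 7, 7, 42, 7, 13, 7]:
            cm.add(cell_id)
        print(cm.query(7), cm.query(42))   # 5 1 (never underestimated)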

  8. Real-Time Continuous Response Spectra Exceedance Calculation Displayed in a Web-Browser Enables Rapid and Robust Damage Evaluation by First Responders

    Science.gov (United States)

    Franke, M.; Skolnik, D. A.; Harvey, D.; Lindquist, K.

    2014-12-01

    A novel and robust approach is presented that provides near real-time earthquake alarms for critical structures at distributed locations and large facilities using real-time estimation of response spectra obtained from near free-field motions. Influential studies dating back to the 1980s identified spectral response acceleration as a key ground motion characteristic that correlates well with observed damage in structures. Thus, monitoring and reporting on exceedance of spectra-based thresholds are useful tools for assessing the potential for damage to facilities or multi-structure campuses based on input ground motions only. With as little as one strong-motion station per site, this scalable approach can provide rapid alarms on the damage status of remote towns, critical infrastructure (e.g., hospitals, schools) and points of interest (e.g., bridges) for a very large number of locations, enabling better rapid decision making during critical and difficult immediate post-earthquake response actions. Details on the novel approach are presented along with an example implementation for a large energy company. Real-time calculation of PSA exceedance and alarm dissemination are enabled with Bighorn, an extension module based on the Antelope software package that combines real-time spectral monitoring and alarm capabilities with a robust built-in web display server. Antelope is an environmental data collection software package from Boulder Real Time Technologies (BRTT) typically used for very large seismic networks and real-time seismic data analyses. The primary processing engine produces continuous time-dependent response spectra for incoming acceleration streams. It utilizes expanded floating-point data representations within object ring-buffer packets and waveform files in a relational database. This leads to a very fast method for computing response spectra for a large number of channels. A Python script evaluates these response spectra for exceedance of one or more
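
    The core computation, a pseudo-spectral-acceleration (PSA) response spectrum from a ground-acceleration stream followed by a threshold check, can be sketched as below. The integration scheme and alarm threshold are illustrative assumptions, not those of the Bighorn module:

        import numpy as np

        def psa_spectrum(accel, dt, periods, damping=0.05):
            """Peak pseudo-spectral acceleration of damped SDOF oscillators
            driven by ground acceleration `accel`, via explicit
            finite-difference time stepping."""
            psa = []
            for T in periods:
                wn = 2.0 * np.pi / T
                u_prev = u = peak = 0.0
                for ag in accel:
                    # u'' + 2*zeta*wn*u' + wn^2*u = -ag
                    vel = (u - u_prev) / dt
                    acc = -ag - 2.0 * damping * wn * vel - wn**2 * u
                    u_next = 2.0 * u - u_prev + acc * dt**2
                    u_prev, u = u, u_next
                    peak = max(peak, abs(u))
                psa.append(wn**2 * peak)       # PSA = wn^2 * peak displacement
            return np.array(psa)

        # Toy check: 20 s of synthetic ground motion, alarm above 0.2 g.
        dt = 0.01
        accel = 0.05 * 9.81 * np.sin(2 * np.pi * 2.0 * np.arange(0, 20, dt))
        psa = psa_spectrum(accel, dt, periods=np.array([0.3, 0.5, 1.0, 2.0]))
        print((psa > 0.2 * 9.81).any())        # True would trigger an alarm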

  9. A graph algebra for scalable visual analytics.

    Science.gov (United States)

    Shaverdian, Anna A; Zhou, Hao; Michailidis, George; Jagadish, Hosagrahar V

    2012-01-01

    Visual analytics (VA), which combines analytical techniques with advanced visualization features, is fast becoming a standard tool for extracting information from graph data. Researchers have developed many tools for this purpose, suggesting a need for formal methods to guide these tools' creation. Increased data demands on computing require redesigning VA tools to consider performance and reliability in the context of analysis of exascale datasets. Furthermore, visual analysts need a way to document their analyses for reuse and results justification. A VA graph framework encapsulated in a graph algebra helps address these needs. Its atomic operators include selection and aggregation. The framework employs a visual operator and supports dynamic attributes of data to enable scalable visual exploration of data.
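
    The two atomic operators named above can be illustrated with networkx; the attribute names and the aggregation rule are assumptions made for the sake of the example, not the paper's algebra:

        import networkx as nx

        G = nx.Graph()
        G.add_nodes_from([(1, {"dept": "A"}), (2, {"dept": "A"}),
                          (3, {"dept": "B"}), (4, {"dept": "B"})])
        G.add_edges_from([(1, 2), (2, 3), (3, 4)])

        # Selection: keep only nodes satisfying a predicate.
        selected = G.subgraph(
            [n for n, d in G.nodes(data=True) if d["dept"] == "A"])

        # Aggregation: collapse nodes sharing an attribute into super-nodes,
        # counting the cross-group edges as weights.
        agg = nx.Graph()
        for u, v in G.edges():
            du, dv = G.nodes[u]["dept"], G.nodes[v]["dept"]
            if du != dv:
                w = agg.get_edge_data(du, dv, {"weight": 0})["weight"]
                agg.add_edge(du, dv, weight=w + 1)

        print(selected.nodes(), agg.edges(data=True))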

  10. A Scalable Heuristic for Viral Marketing Under the Tipping Model

    Science.gov (United States)

    2013-09-01

    Flixster is a social media website that allows users to share reviews and other information about cinema [35]. It was extracted in Dec. 2010. – FourSquare… work of Reichman were developed independently. We also note that Reichman performs no experimental evaluation of the algorithm. A Scalable Heuristic… other diffusion models, such as the independent cascade model [21] and evolutionary graph theory [25], as well as probabilistic variants of the

  11. Scalable fabrication of high-power graphene micro-supercapacitors for flexible and on-chip energy storage

    Science.gov (United States)

    El-Kady, Maher F.; Kaner, Richard B.

    2013-02-01

    The rapid development of miniaturized electronic devices has increased the demand for compact on-chip energy storage. Microscale supercapacitors have great potential to complement or replace batteries and electrolytic capacitors in a variety of applications. However, conventional micro-fabrication techniques have proven to be cumbersome in building cost-effective micro-devices, thus limiting their widespread application. Here we demonstrate a scalable fabrication of graphene micro-supercapacitors over large areas by direct laser writing on graphite oxide films using a standard LightScribe DVD burner. More than 100 micro-supercapacitors can be produced on a single disc in 30 min or less. The devices are built on flexible substrates for flexible electronics and on-chip uses that can be integrated with MEMS or CMOS in a single chip. Remarkably, miniaturizing the devices to the microscale results in enhanced charge-storage capacity and rate capability. These micro-supercapacitors demonstrate a power density of ~200 W cm^-3, which is among the highest values achieved for any supercapacitor.

  12. Scalable error correction in distributed ion trap computers

    International Nuclear Information System (INIS)

    Oi, Daniel K. L.; Devitt, Simon J.; Hollenberg, Lloyd C. L.

    2006-01-01

    A major challenge for quantum computation in ion trap systems is scalable integration of error correction and fault tolerance. We analyze a distributed architecture with rapid high-fidelity local control within nodes and entangled links between nodes alleviating long-distance transport. We demonstrate fault-tolerant operator measurements which are used for error correction and nonlocal gates. This scheme is readily applied to linear ion traps which cannot be scaled up beyond a few ions per individual trap but which have access to a probabilistic entanglement mechanism. A proof-of-concept system is presented which is within the reach of current experiment

  13. Scalable Brain Network Construction on White Matter Fibers.

    Science.gov (United States)

    Chung, Moo K; Adluru, Nagesh; Dalton, Kim M; Alexander, Andrew L; Davidson, Richard J

    2011-02-12

    DTI offers a unique opportunity to characterize the structural connectivity of the human brain non-invasively by tracing white matter fiber tracts. Whole brain tractography studies routinely generate up to half a million tracts per brain, which serve as edges in an extremely large 3D graph with up to half a million edges. Currently there is no agreed-upon method for constructing brain structural network graphs out of such large numbers of white matter tracts. In this paper, we present a scalable iterative framework called the ε-neighbor method for building a network graph and apply it to testing abnormal connectivity in autism.
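
    As described, an ε-neighbor construction grows the graph tract by tract, snapping each tract endpoint to an existing node within distance ε or creating a new node otherwise. A schematic version is sketched below; details such as the node search order are assumptions, not the paper's exact procedure:

        import numpy as np

        def epsilon_neighbor_graph(tracts, eps):
            """tracts: list of (start_xyz, end_xyz) pairs for fiber tracts.
            Returns node coordinates and an edge list over node indices."""
            nodes, edges = [], []

            def snap(p):
                # Reuse an existing node within eps, else add a new one.
                for i, q in enumerate(nodes):
                    if np.linalg.norm(np.asarray(p) - q) <= eps:
                        return i
                nodes.append(np.asarray(p, dtype=float))
                return len(nodes) - 1

            for start, end in tracts:
                edges.append((snap(start), snap(end)))
            return nodes, edges

        tracts = [((0, 0, 0), (10, 0, 0)),
                  ((0.4, 0, 0), (10.2, 0.1, 0)),  # endpoints merge with tract 1
                  ((0, 0, 0), (0, 9, 0))]
        nodes, edges = epsilon_neighbor_graph(tracts, eps=0.5)
        print(len(nodes), edges)   # 3 nodes; first two tracts share endpoints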

  14. Scalable and Resilient Middleware to Handle Information Exchange during Environment Crisis

    Science.gov (United States)

    Tao, R.; Poslad, S.; Moßgraber, J.; Middleton, S.; Hammitzsch, M.

    2012-04-01

    The EU FP7 TRIDEC project focuses on enabling real-time, intelligent, information management of collaborative, complex, critical decision processes for earth management. A key challenge is to promote a communication infrastructure to facilitate interoperable environment information services during environment events and crises such as tsunamis and drilling, during which increasing volumes and dimensionality of disparate information sources, including sensor-based and human-based ones, can result and need to be managed. Such a system needs to support: scalable, distributed messaging; asynchronous messaging; open messaging to handle changing clients, such as new or retired automated systems and human information sources coming online or going offline; flexible data filtering; and heterogeneous access networks (e.g., GSM, WLAN and LAN). In addition, the system needs to be resilient to ICT system failures, degradation and overloads during environment events. There are several system middleware choices for TRIDEC based upon a Service-Oriented Architecture (SOA), Event-Driven Architecture (EDA), Cloud Computing, and an Enterprise Service Bus (ESB). In an SOA, everything is a service (e.g. data access, processing and exchange); clients can request on demand or subscribe to services registered by providers; more often, interaction is synchronous. In an EDA system, events that represent significant changes in state can be processed simply, as streams, or more complexly. Cloud computing is a virtualized, interoperable and elastic resource allocation model. An ESB, a fundamental component for enterprise messaging, supports synchronous and asynchronous message exchange models and has inbuilt resilience against ICT failure. Our middleware proposal is an ESB-based hybrid architecture model: an SOA extension supports more synchronous workflows; EDA assists the ESB to handle more complex event processing; Cloud computing can be used to increase and

  15. Scalable Fabrication Framework of Implantable Ultrathin and Flexible Probes with Biodegradable Sacrificial Layers.

    Science.gov (United States)

    Jiao, Xiangbing; Wang, Yuan; Qing, Quan

    2017-12-13

    For long-term biocompatibility and performance, implanted probes need to further reduce their size and mechanical stiffness to match that of the surrounding cells, which, however, makes accurate and minimally invasive insertion operations difficult due to lack of rigidity and brings additional complications in assembling and surgery. Here, we report a scalable fabrication framework of implantable probes utilizing biodegradable sacrificial layers to address this challenge. Briefly, the integrated biodegradable sacrificial layer can dissolve in physiological fluids shortly after implantation, which allows the in situ formation of functional ultrathin film structures off of the initial small and rigid supporting backbone. We show that the dissolution of this layer does not affect the viability and excitability of neuron cells in vitro. We have demonstrated two types of probes that can be used out of the box, including (1) a compact probe that spontaneously forms three-dimensional bend-up devices only after implantation and (2) an ultraflexible probe as thin as 2 μm attached to a small silicon shaft that can be accurately delivered into the tissue and then get fully released in situ without altering its shape and position because the support is fully retracted. We have obtained a >93% yield of the bend-up structure, and its geometry and stiffness can be systematically tuned. The robustness of the ultraflexible probe has been tested in tissue-mimicking agarose gels with <1% fluctuation in the test resistance. Our work provides a general strategy to prepare ultrasmall and flexible implantable probes that allow high insertion accuracy and minimal surgical damages with the best biocompatibility.

  16. Scalability Modeling for Optimal Provisioning of Data Centers in Telenor: A better balance between under- and over-provisioning

    OpenAIRE

    Rygg, Knut Helge

    2012-01-01

    The scalability of an information system describes the relationship between system capacity and system size. This report studies the scalability of Microsoft Lync Server 2010 in order to provide guidelines for provisioning hardware resources. Optimal provisioning is required to reduce both deployment and operational costs, while keeping an acceptable service quality. All Lync servers in the test setup are virtualized using VMware ESXi 5.0 and the system runs on a Cisco Unified Computing System...

  17. International Conference on Robust Statistics

    CERN Document Server

    Filzmoser, Peter; Gather, Ursula; Rousseeuw, Peter

    2003-01-01

    Aspects of Robust Statistics are important in many areas. Based on the International Conference on Robust Statistics 2001 (ICORS 2001) in Vorau, Austria, this volume discusses future directions of the discipline, bringing together leading scientists, experienced researchers and practitioners, as well as younger researchers. The papers cover a multitude of different aspects of Robust Statistics. For instance, the fundamental problem of data summary (weights of evidence) is considered and its robustness properties are studied. Further theoretical subjects include e.g.: robust methods for skewness, time series, longitudinal data, multivariate methods, and tests. Some papers deal with computational aspects and algorithms. Finally, the aspects of application and programming tools complete the volume.

  19. Skin electronics from scalable fabrication of an intrinsically stretchable transistor array

    Science.gov (United States)

    Wang, Sihong; Xu, Jie; Wang, Weichen; Wang, Ging-Ji Nathan; Rastak, Reza; Molina-Lopez, Francisco; Chung, Jong Won; Niu, Simiao; Feig, Vivian R.; Lopez, Jeffery; Lei, Ting; Kwon, Soon-Ki; Kim, Yeongin; Foudeh, Amir M.; Ehrlich, Anatol; Gasperini, Andrea; Yun, Youngjun; Murmann, Boris; Tok, Jeffery B.-H.; Bao, Zhenan

    2018-03-01

    Skin-like electronics that can adhere seamlessly to human skin or within the body are highly desirable for applications such as health monitoring, medical treatment, medical implants and biological studies, and for technologies that include human-machine interfaces, soft robotics and augmented reality. Rendering such electronics soft and stretchable—like human skin—would make them more comfortable to wear, and, through increased contact area, would greatly enhance the fidelity of signals acquired from the skin. Structural engineering of rigid inorganic and organic devices has enabled circuit-level stretchability, but this requires sophisticated fabrication techniques and usually suffers from reduced densities of devices within an array. We reasoned that the desired parameters, such as higher mechanical deformability and robustness, improved skin compatibility and higher device density, could be provided by using intrinsically stretchable polymer materials instead. However, the production of intrinsically stretchable materials and devices is still largely in its infancy: such materials have been reported, but functional, intrinsically stretchable electronics have yet to be demonstrated owing to the lack of a scalable fabrication technology. Here we describe a fabrication process that enables high yield and uniformity from a variety of intrinsically stretchable electronic polymers. We demonstrate an intrinsically stretchable polymer transistor array with an unprecedented device density of 347 transistors per square centimetre. The transistors have an average charge-carrier mobility comparable to that of amorphous silicon, varying only slightly (within one order of magnitude) when subjected to 100 per cent strain for 1,000 cycles, without current-voltage hysteresis. Our transistor arrays thus constitute intrinsically stretchable skin electronics, and include an active matrix for sensory arrays, as well as analogue and digital circuit elements. Our process offers a

  20. Scalable conditional induction variables (CIV) analysis

    DEFF Research Database (Denmark)

    Oancea, Cosmin Eugen; Rauchwerger, Lawrence

    2015-01-01

    Subscripts using induction variables that cannot be expressed as a formula in terms of the enclosing-loop indices appear in the low-level implementation of common programming abstractions such as filter or stack operations, and pose significant challenges to automatic parallelization. Because the complexity of such induction variables is often due to their conditional evaluation across the iteration space of loops, we name them Conditional Induction Variables (CIV). This paper presents a flow-sensitive technique that summarizes both such CIV-based and affine subscripts to program level, using the same representation. The analysis was implemented in a parallelizing compiler and evaluated on five Fortran benchmarks. We have found that there are many important loops using CIV subscripts and that our analysis can lead to their scalable parallelization. This in turn has led to the parallelization of the benchmark programs they appear in.

  1. Robust Scientists

    DEFF Research Database (Denmark)

    Gorm Hansen, Birgitte

    …knowledge", Danish research policy seems to have helped develop politically and economically "robust scientists". Scientific robustness is acquired by way of three strategies: 1) tasting and discriminating between resources so as to avoid funding that erodes academic profiles and pushes scientists away from their core interests, 2) developing a self-supply of industry interests by becoming entrepreneurs and thus creating their own compliant industry partner, and 3) balancing resources within a larger collective of researchers, thus countering changes in the influx of funding caused by shifts in political…

  2. The Node Monitoring Component of a Scalable Systems Software Environment

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Samuel James [Iowa State Univ., Ames, IA (United States)

    2006-01-01

    This research describes Fountain, a suite of programs used to monitor the resources of a cluster. A cluster is a collection of individual computers that are connected via a high speed communication network. They are traditionally used by users who desire more resources, such as processing power and memory, than any single computer can provide. A common drawback to effectively utilizing such a large-scale system is the management infrastructure, which often does not scale well as the system grows. Large-scale parallel systems provide new research challenges in the area of systems software, the programs or tools that manage the system from boot-up to running a parallel job. The approach presented in this thesis utilizes a collection of separate components that communicate with each other to achieve a common goal. While systems software comprises a broad array of components, this thesis focuses on the design choices for a node monitoring component. We will describe Fountain, an implementation of the Scalable Systems Software (SSS) node monitor specification. It is targeted at aggregate node monitoring for clusters, focusing on both scalability and fault tolerance as its design goals. It leverages widely used technologies such as XML and HTTP to present an interface to other components in the SSS environment.

  3. Think 500, not 50! A scalable approach to student success in STEM.

    Science.gov (United States)

    LaCourse, William R; Sutphin, Kathy Lee; Ott, Laura E; Maton, Kenneth I; McDermott, Patrice; Bieberich, Charles; Farabaugh, Philip; Rous, Philip

    2017-01-01

    UMBC, a diverse public research university, "builds" upon its reputation in producing highly capable undergraduate scholars to create a comprehensive new model, STEM BUILD at UMBC. This program is designed to help more students develop the skills, experience and motivation to excel in science, technology, engineering, and mathematics (STEM). This article provides an in-depth description of STEM BUILD at UMBC and provides the context of this initiative within UMBC's vision and mission. The STEM BUILD model targets promising STEM students who enter as freshmen or transfer students and do not qualify for significant university or other scholarship support. Of primary importance to this initiative are capacity, scalability, and institutional sustainability, as we distill the advantages and opportunities of UMBC's successful scholars programs and expand their application to more students. The general approach is to infuse the mentoring and training process into the fabric of the undergraduate experience while fostering community, scientific identity, and resilience. At the heart of STEM BUILD at UMBC is the development of BUILD Group Research (BGR), a sequence of experiences designed to overcome the challenges that undergraduates without programmatic support often encounter (e.g., limited internship opportunities, mentorships, and research positions for which top STEM students are favored). BUILD Training Program (BTP) Trainees serve as pioneers in this initiative, which is potentially a national model for universities as they address the call to retain and graduate more students in STEM disciplines - especially those from underrepresented groups. As such, BTP is a research study using random assignment trial methodology that focuses on the scalability and eventual incorporation of successful measures into the traditional format of the academy. Critical measures to transform institutional culture include establishing an extensive STEM Living and Learning Community to

  4. Electron beam processing of fresh produce - A critical review

    Science.gov (United States)

    Pillai, Suresh D.; Shayanfar, Shima

    2018-02-01

    To meet the increasing global demand for fresh produce, robust processing methods that ensure both the safety and quality of fresh produce are needed. Since fresh produce cannot withstand thermal processing conditions, most of the common safety interventions used in other foods are ineffective. Electron beam (eBeam) processing is a non-thermal technology that can be used to extend the shelf life and ensure the microbiological safety of fresh produce. There have been studies documenting the application of eBeam to ensure both safety and quality in fresh produce; however, there are still unexplored areas that need further research. This is a critical review of the current literature on the application of eBeam technology for fresh produce.

  5. Robust Trust in Expert Testimony

    Directory of Open Access Journals (Sweden)

    Christian Dahlman

    2015-05-01

    Full Text Available The standard of proof in criminal trials should require that the evidence presented by the prosecution is robust. This requirement of robustness says that it must be unlikely that additional information would change the probability that the defendant is guilty. Robustness is difficult for a judge to estimate, as it requires the judge to assess the possible effect of information that he or she does not have. This article is concerned with expert witnesses and proposes a method for reviewing the robustness of expert testimony. According to the proposed method, the robustness of expert testimony is estimated with regard to competence, motivation, external strength, internal strength and relevance. The danger of trusting non-robust expert testimony is illustrated with an analysis of the Thomas Quick case, a Swedish legal scandal where a patient at a mental institution was wrongfully convicted of eight murders.

  6. Fast and Scalable Computation of the Forward and Inverse Discrete Periodic Radon Transform.

    Science.gov (United States)

    Carranza, Cesar; Llamocca, Daniel; Pattichis, Marios

    2016-01-01

    The discrete periodic Radon transform (DPRT) has been used extensively in applications that involve image reconstruction from projections. Beyond classic applications, the DPRT can also be used to compute fast convolutions while avoiding the floating-point arithmetic associated with the use of the fast Fourier transform. Unfortunately, the use of the DPRT has been limited by the need to compute a large number of additions and the need for a large number of memory accesses. This paper introduces a fast and scalable approach for computing the forward and inverse DPRT that is based on the use of: a parallel array of fixed-point adder trees; circular shift registers to remove the need for accessing external memory components when selecting the input data for the adder trees; an image block-based approach to DPRT computation that can fit the proposed architecture to available resources; and fast transpositions that are computed in one or a few clock cycles and do not depend on the size of the input image. As a result, for an N × N image (N prime), the proposed approach can compute up to N^2 additions per clock cycle. Compared with previous approaches, the scalable approach provides the fastest known implementations for different amounts of computational resources. For example, for a 251 × 251 image, with approximately 25% fewer flip-flops than required for a systolic implementation, the scalable DPRT is computed 36 times faster. For the fastest case, we introduce optimized architectures that can compute the DPRT and its inverse in just 2N + ⌈log2 N⌉ + 1 and 2N + 3⌈log2 N⌉ + B + 2 clock cycles, respectively, where B is the number of bits used to represent each input pixel. On the other hand, the scalable DPRT approach requires more 1-bit additions than the systolic implementation, providing a tradeoff between speed and additional 1-bit additions. All of the proposed DPRT architectures were implemented in VHSIC Hardware Description Language
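
    For a prime-sized image, the forward DPRT itself is compact to state: projection m sums the image along the discrete lines y = (d + m·x) mod N, plus one extra projection of row sums. A direct (non-hardware) NumPy sketch under one common convention for the transform:

        import numpy as np

        def dprt(f):
            """Discrete periodic Radon transform of an N x N image, N prime.
            Returns an (N + 1) x N array of projections."""
            N = f.shape[0]
            R = np.zeros((N + 1, N))
            x = np.arange(N)
            for m in range(N):                 # N 'sloped' projections
                for d in range(N):
                    R[m, d] = f[x, (d + m * x) % N].sum()
            R[N, :] = f.sum(axis=1)            # the extra projection: row sums
            return R

        f = np.random.rand(7, 7)               # 7 is prime
        R = dprt(f)
        # Mass conservation: every projection sums to the total image mass.
        print(np.allclose(R.sum(axis=1), f.sum()))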

  7. Modular design of metabolic network for robust production of n-butanol from galactose-glucose mixtures.

    Science.gov (United States)

    Lim, Hyun Gyu; Lim, Jae Hyung; Jung, Gyoo Yeol

    2015-01-01

    Refactoring microorganisms for efficient production of advanced biofuels such as n-butanol from a mixture of sugars in cheap feedstock is a prerequisite for achieving economic feasibility in biorefinery. However, production of biofuel from inedible and cheap feedstock is highly challenging due to the slower utilization of biomass-derived sugars, arising from complex assimilation pathways, difficulties in amplification of biosynthetic pathways for heterologous metabolites, and redox imbalance caused by consuming intracellular reducing power to produce highly reduced biofuel. Even with these problems, the microorganisms should show robust production of biofuel to be industrially feasible. Thus, refactoring microorganisms for efficient conversion is highly desirable in biofuel production. In this study, we engineered robust Escherichia coli to accomplish high production of n-butanol from galactose-glucose mixtures via the design of modular pathways, an efficient and systematic way to reconstruct the entire metabolic pathway with many target genes. Three modular pathways designed using predictable genetic elements were assembled for efficient galactose utilization, n-butanol production, and redox re-balancing to robustly produce n-butanol from a sugar mixture of galactose and glucose. Specifically, the engineered strain showed dramatically increased n-butanol production (3.3-fold increased to 6.2 g/L after 48-h fermentation) compared to the parental strain (1.9 g/L) in galactose-supplemented medium. Moreover, fermentation with mixtures of galactose and glucose at various ratios from 2:1 to 1:2 confirmed that our engineered strain was able to robustly produce n-butanol regardless of sugar composition, with simultaneous utilization of galactose and glucose. Collectively, modular pathway engineering of the metabolic network can be an effective approach in strain development for optimal biofuel production with cost-effective fermentable sugars. To the best of our

  8. Robustness Analysis of Timber Truss Structure

    DEFF Research Database (Denmark)

    Rajčić, Vlatka; Čizmar, Dean; Kirkegaard, Poul Henning

    2010-01-01

    The present paper discusses robustness of structures in general and the robustness requirements given in the codes. Robustness of timber structures is also an issue, as this is closely related to Working Group 3 (Robustness of Systems) of the COST E55 project. Finally, an example of a robustness evaluation of a wide-span timber truss structure is presented. This structure was built a few years ago near Zagreb and has a span of 45 m. Reliability analysis of the main members and the system is conducted, and based on this a robustness analysis is performed.

  9. Epitaxial Growth of Two-Dimensional Layered Transition-Metal Dichalcogenides: Growth Mechanism, Controllability, and Scalability

    KAUST Repository

    Li, Henan; Li, Ying; Aljarb, Areej; Shi, Yumeng; Li, Lain-Jong

    2017-01-01

    to generate high-quality TMDC layers with scalable size, controllable thickness, and excellent electronic properties suitable for both technological applications and fundamental sciences. The capability to precisely engineer 2D materials by chemical approaches

  10. Computational Cognition and Robust Decision Making

    Science.gov (United States)

    2013-03-06

    [Extraction residue from briefing slides; recoverable fragments:] much more powerful neuromorphic chips than current state of the art (L. Chua); …Cognition Program, DARPA (Gill Pratt); Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) Program; IARPA (Brad Minnery); 2012 – four projects at SNU and KAIST co-funded with AOARD; DARPA SyNAPSE Program: design, fabrication, and demonstration of neuromorphic…

  11. Compatibility of detached divertor operation with robust edge pedestal performance

    Energy Technology Data Exchange (ETDEWEB)

    Leonard, A.W., E-mail: leonard@fusion.gat.com [General Atomics, PO Box 85608, San Diego, CA 92186-5608 (United States); Makowski, M.A.; McLean, A.G. [Lawrence Livermore National Laboratory, Livermore, CA (United States); Osborne, T.H.; Snyder, P.B. [General Atomics, PO Box 85608, San Diego, CA 92186-5608 (United States)

    2015-08-15

    The compatibility of detached radiative divertor operation with a robust H-mode pedestal is examined in DIII-D. A density scan produced low-temperature plasmas at the divertor target, T_e ⩽ 2 eV, with high radiation leading to a factor of ⩾4 drop in peak divertor heat flux. The cold radiative plasma was confined to the divertor and did not extend across the separatrix in the X-point region. A robust H-mode pedestal was maintained with a small degradation in pedestal pressure at the highest densities. The response of the pedestal pressure to increasing density is reproduced by the EPED pedestal model. However, agreement of the EPED model with experiment at high density requires an assumption of reduced diamagnetic stabilization of edge peeling–ballooning modes.

  12. Towards Scalable Entangled Photon Sources with Self-Assembled InAs /GaAs Quantum Dots

    Science.gov (United States)

    Wang, Jianping; Gong, Ming; Guo, G.-C.; He, Lixin

    2015-08-01

    The biexciton cascade process in self-assembled quantum dots (QDs) provides an ideal system for realizing deterministic entangled photon-pair sources, which are essential to quantum information science. The entangled photon pairs have recently been generated in experiments after eliminating the fine-structure splitting (FSS) of excitons using a number of different methods. Thus far, however, QD-based sources of entangled photons have not been scalable because the wavelengths of QDs differ from dot to dot. Here, we propose a wavelength-tunable entangled photon emitter mounted on a three-dimensional stressor, in which the FSS and exciton energy can be tuned independently, thereby enabling photon entanglement between dissimilar QDs. We confirm these results via atomistic pseudopotential calculations. This provides a first step towards future realization of scalable entangled photon generators for quantum information applications.

  13. Palacios and Kitten : high performance operating systems for scalable virtualized and native supercomputing.

    Energy Technology Data Exchange (ETDEWEB)

    Widener, Patrick (University of New Mexico); Jaconette, Steven (Northwestern University); Bridges, Patrick G. (University of New Mexico); Xia, Lei (Northwestern University); Dinda, Peter (Northwestern University); Cui, Zheng.; Lange, John (Northwestern University); Hudson, Trammell B.; Levenhagen, Michael J.; Pedretti, Kevin Thomas Tauke; Brightwell, Ronald Brian

    2009-09-01

    Palacios and Kitten are new open source tools that enable applications, whether ported or not, to achieve scalable high performance on large machines. They provide a thin layer over the hardware to support both full-featured virtualized environments and native code bases. Kitten is an OS under development at Sandia that implements a lightweight kernel architecture to provide predictable behavior and increased flexibility on large machines, while also providing Linux binary compatibility. Palacios is a VMM that is under development at Northwestern University and the University of New Mexico. Palacios, which can be embedded into Kitten and other OSes, supports existing, unmodified applications and operating systems by using virtualization that leverages hardware technologies. We describe the design and implementation of both Kitten and Palacios. Our benchmarks show that they provide near native, scalable performance. Palacios and Kitten provide an incremental path to using supercomputer resources that is not performance-compromised.

  14. Qualitative Robustness in Estimation

    Directory of Open Access Journals (Sweden)

    Mohammed Nasser

    2012-07-01

    Full Text Available Qualitative robustness, influence function, and breakdown point are three main concepts used to judge an estimator from the viewpoint of robust estimation. It is important as well as interesting to study the relations among them. This article presents the concept of qualitative robustness as put forward by its first proponents, along with its later development. It illustrates the intricacies of qualitative robustness and its relation with consistency, and also tries to remove commonly believed misunderstandings about the relation between the influence function and qualitative robustness, citing some examples from the literature and providing a new counter-example. At the end it presents a useful finite-sample and a simulated version of the qualitative robustness index (QRI). In order to assess the performance of the proposed measures, we compare fifteen estimators of the correlation coefficient using simulated as well as real data sets.

  15. Robust prediction of anti-cancer drug sensitivity and sensitivity-specific biomarker.

    Directory of Open Access Journals (Sweden)

    Heewon Park

    Full Text Available The personal genomics era has attracted a large amount of attention for anti-cancer therapy by patient-specific analysis. Patient-specific analysis enables discovery of individual genomic characteristics for each patient, and thus we can effectively predict individual genetic risk of disease and perform personalized anti-cancer therapy. Although the existing methods for patient-specific analysis have successfully uncovered crucial biomarkers, their performance takes a sudden turn for the worse in the presence of outliers, since the methods are based on non-robust estimators. In practice, clinical and genomic alteration datasets usually contain outliers from various sources (e.g., experimental error, coding error, etc.), and the outliers may significantly affect the results of patient-specific analysis. We propose a robust methodology for patient-specific analysis in line with the NetworkProfiler. In the proposed method, outliers in high-dimensional gene expression levels and drug response datasets are simultaneously controlled by a robust Mahalanobis distance in robust principal component space. Thus, we can effectively predict anti-cancer drug sensitivity and identify sensitivity-specific biomarkers for individual patients. We observe through Monte Carlo simulations that the proposed robust method produces outstanding performance for predicting the response variable in the presence of outliers. We also apply the proposed methodology to the Sanger dataset in order to uncover cancer biomarkers and predict anti-cancer drug sensitivity, and show the effectiveness of our method.
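    The robust-distance step described above can be sketched with standard tooling. The snippet below is a generic illustration using scikit-learn's Minimum Covariance Determinant estimator, not the authors' NetworkProfiler-based pipeline; the data, threshold, and variable names are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))   # stand-in for expression/response features
X[:5] += 8                      # inject a few gross outliers

# Robust location/scatter via the Minimum Covariance Determinant, then
# squared Mahalanobis distances measured under that robust fit.
mcd = MinCovDet(random_state=0).fit(X)
d2 = mcd.mahalanobis(X)

# Flag observations beyond a chi-square cutoff so that downstream model
# fitting (e.g., drug-sensitivity prediction) can down-weight them.
outliers = d2 > chi2.ppf(0.975, df=X.shape[1])
print(np.flatnonzero(outliers))  # the five injected outliers are flagged
```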

  16. Robustness in econometrics

    CERN Document Server

    Sriboonchitta, Songsak; Huynh, Van-Nam

    2017-01-01

    This book presents recent research on robustness in econometrics. Robust data processing techniques – i.e., techniques that yield results minimally affected by outliers – and their applications to real-life economic and financial situations are the main focus of this book. The book also discusses applications of more traditional statistical techniques to econometric problems. Econometrics is a branch of economics that uses mathematical (especially statistical) methods to analyze economic systems, to forecast economic and financial dynamics, and to develop strategies for achieving desirable economic performance. In day-to-day data, we often encounter outliers that do not reflect long-term economic trends, e.g., unexpected and abrupt fluctuations. As such, it is important to develop robust data processing techniques that can accommodate these fluctuations.

  17. Robust Programming by Example

    OpenAIRE

    Bishop , Matt; Elliott , Chip

    2011-01-01

    Part 2: WISE 7; International audience; Robust programming lies at the heart of the type of coding called “secure programming”. Yet it is rarely taught in academia. More commonly, the focus is on how to avoid creating well-known vulnerabilities. While important, that misses the point: a well-structured, robust program should anticipate where problems might arise and compensate for them. This paper discusses one view of robust programming and gives an example of how it may be taught.

  18. Robust Reliability or reliable robustness? - Integrated consideration of robustness and reliability aspects

    DEFF Research Database (Denmark)

    Kemmler, S.; Eifler, Tobias; Bertsche, B.

    2015-01-01

    products are and vice versa. For a comprehensive understanding and to use existing synergies between both domains, this paper discusses the basic principles of Reliability- and Robust Design theory. The development of a comprehensive model will enable an integrated consideration of both domains...

  19. Design of an H.264/SVC resilient watermarking scheme

    Science.gov (United States)

    Van Caenegem, Robrecht; Dooms, Ann; Barbarien, Joeri; Schelkens, Peter

    2010-01-01

    The rapid dissemination of media technologies has led to an increase in unauthorized copying and distribution of digital media. Digital watermarking, i.e. embedding information in the multimedia signal in a robust and imperceptible manner, can tackle this problem. Recently, there has been a huge growth in the number of different terminals and connections that can be used to consume multimedia. To tackle the resulting distribution challenges, scalable coding is often employed. Scalable coding allows the adaptation of a single bit-stream to varying terminal and transmission characteristics. As a result of this evolution, watermarking techniques that are robust against scalable compression become essential in order to control illegal copying. In this paper, a watermarking technique resilient against scalable video compression using the state-of-the-art H.264/SVC codec is therefore proposed and evaluated.

  20. Treatment of wastewater contaminated with detergents and mineral oils using effective and scalable technology.

    Science.gov (United States)

    Abdelmoez, Wael; Barakat, Nasser A M; Moaz, Asmaa

    2013-01-01

    In this work, an effective, cheap, and scalable methodology is introduced to treat oily wastewater. The water produced from car-wash processes was utilized as a model because it contains various pollutants - oil, lubricants, detergents, solid particles, etc. The results showed that the turbidity and chemical oxygen demand (COD) values decrease dramatically using the proposed treatment process, which consists of coagulation, flocculation, sand filtration, and oxidation followed by sand as well as activated carbon filtration. Moreover, the operating conditions were optimized. Without adjustment of the pH value of the car-wash wastewater, it was found that 200 ppm of ferric chloride, as a coagulant, and 1 ppm of potassium permanganate, as an oxidant, are the optimum doses. The COD and turbidity values of the final treated wastewater were reduced by almost 88 and 100%, respectively. A prototype with 15 L capacity was designed and fabricated to investigate the scaling up and continuity of the proposed treatment strategy. The results were very promising and indicated that the introduced methodology can be applied industrially.

  1. Quantifying the robustness of process manufacturing concept – A medical product case study

    DEFF Research Database (Denmark)

    Boorla, Srinivasa Murthy; Troldtoft, M.E.; Eifler, Tobias

    2017-01-01

    Product robustness refers to the consistency of performance of all of the units produced. It is often the case that process manufactured products are not designed concurrently, so by the end of the product design phase the Process Manufacturing Concept (PMC) has yet to be decided. Allocating process-capable tolerances to the product during the design phase is therefore not possible. The robustness of the concept (how capable it is to achieve the product specification) only becomes clear at this late stage, and thus after testing and iteration. In this article, a method for calculating the unit-to-unit robustness of an early-stage PMC is proposed. The method uses variability and adjustability information from the manufacturing concept in combination with sensitivity information from the product's design to predict its functional performance variation. A technology maturation factor…

  2. The scalable coherent interface, IEEE P1596

    International Nuclear Information System (INIS)

    Gustavson, D.B.

    1990-01-01

    IEEE P1596, the scalable coherent interface (formerly known as SuperBus) is based on experience gained while developing Fastbus (ANSI/IEEE 960--1986, IEC 935), Futurebus (IEEE P896.x) and other modern 32-bit buses. SCI goals include a minimum bandwidth of 1 GByte/sec per processor in multiprocessor systems with thousands of processors; efficient support of a coherent distributed-cache image of distributed shared memory; support for repeaters which interface to existing or future buses; and support for inexpensive small rings as well as for general switched interconnections like Banyan, Omega, or crossbar networks. This paper presents a summary of current directions, reports the status of the work in progress, and suggests some applications in data acquisition and physics

  3. A lightweight scalable agarose-gel-synthesized thermoelectric composite

    Science.gov (United States)

    Kim, Jin Ho; Fernandes, Gustavo E.; Lee, Do-Joong; Hirst, Elizabeth S.; Osgood, Richard M., III; Xu, Jimmy

    2018-03-01

    Electronic devices are now advancing beyond classical, rigid systems and moving into lightweight flexible regimes, enabling new applications such as body-wearables and ‘e-textiles’. To support this new electronic platform, composite materials that are highly conductive yet scalable, flexible, and wearable are needed. Materials with high electrical conductivity often have poor thermoelectric properties because their thermal transport is enhanced by the same factors as their electronic conductivity. We demonstrate, in proof-of-principle experiments, that a novel binary composite can disrupt thermal (phononic) transport while maintaining high electrical conductivity, thus yielding promising thermoelectric properties. Highly conductive Multi-Wall Carbon Nanotube (MWCNT) composites are combined with a low-band-gap semiconductor, PbS. The work functions of the two materials are closely matched, minimizing the electrical contact resistance within the composite. Disparities in the speed of sound in MWCNTs and PbS help to inhibit phonon propagation, and boundary-layer scattering at interfaces between these two materials leads to a large Seebeck coefficient (> 150 μV/K) (Mott N F and Davis E A 1971 Electronic Processes in Non-crystalline Materials (Oxford: Clarendon), p 47) and a power factor as high as 10 μW/(K² m). The overall fabrication process is not only scalable but also conformal and compatible with large-area flexible hosts including metal sheets, films, coatings, and possibly arrays of fibers, textiles, and fabrics. We explain the behavior of this novel thermoelectric material platform in terms of differing length scales for electrical conductivity and phononic heat transfer, and explore new material configurations for potentially lightweight and flexible thermoelectric devices that could be networked in a textile.

  4. fastBMA: scalable network inference and transitive reduction.

    Science.gov (United States)

    Hung, Ling-Hong; Shi, Kaiyuan; Wu, Migao; Young, William Chad; Raftery, Adrian E; Yeung, Ka Yee

    2017-10-01

    Inferring genetic networks from genome-wide expression data is extremely demanding computationally. We have developed fastBMA, a distributed, parallel, and scalable implementation of Bayesian model averaging (BMA) for this purpose. fastBMA also includes a computationally efficient module for eliminating redundant indirect edges in the network by mapping the transitive reduction to an easily solved shortest-path problem. We evaluated the performance of fastBMA on synthetic data and experimental genome-wide time series yeast and human datasets. When using a single CPU core, fastBMA is up to 100 times faster than the next fastest method, LASSO, with increased accuracy. It is a memory-efficient, parallel, and distributed application that scales to human genome-wide expression data. A 10 000-gene regulation network can be obtained in a matter of hours using a 32-core cloud cluster (2 nodes of 16 cores). fastBMA is a significant improvement over its predecessor ScanBMA. It is more accurate and orders of magnitude faster than other fast network inference methods such as the one based on LASSO. The improved scalability allows it to calculate networks from genome-scale data in a reasonable time frame. The transitive reduction method can improve accuracy in denser networks. fastBMA is available as code (M.I.T. license) from GitHub (https://github.com/lhhunghimself/fastBMA), as part of the updated networkBMA Bioconductor package (https://www.bioconductor.org/packages/release/bioc/html/networkBMA.html) and as ready-to-deploy Docker images (https://hub.docker.com/r/biodepot/fastbma/). © The Authors 2017. Published by Oxford University Press.
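    The transitive-reduction idea is easy to demonstrate on a small unweighted graph. The sketch below uses networkx rather than fastBMA's own implementation (which operates on weighted edges via shortest paths), so it only illustrates the concept of removing redundant indirect edges:

```python
import networkx as nx

# A small regulatory DAG in which the direct edge A -> C is redundant:
# it is already implied by the path A -> B -> C.
G = nx.DiGraph([("A", "B"), ("B", "C"), ("A", "C")])

# The transitive reduction keeps the minimal edge set that preserves
# reachability, so the redundant indirect edge is dropped.
TR = nx.transitive_reduction(G)
print(sorted(TR.edges()))  # [('A', 'B'), ('B', 'C')]
```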

  5. Conscientiousness at the workplace: Applying mixture IRT to investigate scalability and predictive validity

    NARCIS (Netherlands)

    Egberink, I.J.L.; Meijer, R.R.; Veldkamp, Bernard P.

    2010-01-01

    Mixture item response theory (IRT) models have been used to assess multidimensionality of the construct being measured and to detect different response styles for different groups. In this study a mixture version of the graded response model was applied to investigate scalability and predictive validity.

  6. Conscientiousness in the workplace : Applying mixture IRT to investigate scalability and predictive validity

    NARCIS (Netherlands)

    Egberink, I.J.L.; Meijer, R.R.; Veldkamp, B.P.

    Mixture item response theory (IRT) models have been used to assess multidimensionality of the construct being measured and to detect different response styles for different groups. In this study a mixture version of the graded response model was applied to investigate scalability and predictive validity.

  7. High-Performance Scalable Information Service for the ATLAS Experiment

    International Nuclear Information System (INIS)

    Kolos, S; Boutsioukis, G; Hauser, R

    2012-01-01

    The ATLAS[1] experiment is operated by a highly distributed computing system which constantly produces a large amount of status information used to monitor the experiment's operational conditions as well as to assess the quality of the physics data being taken. For example, the ATLAS High Level Trigger (HLT) algorithms are executed on the online computing farm consisting of about 1500 nodes. Each HLT algorithm produces a few thousand histograms, which have to be integrated over the whole farm and carefully analyzed in order to properly tune the event rejection. In order to handle such non-physics data, the Information Service (IS) facility has been developed within the scope of the ATLAS Trigger and Data Acquisition (TDAQ)[2] project. The IS provides a high-performance scalable solution for information exchange in a distributed environment. In the course of an ATLAS data taking session the IS handles about a hundred gigabytes of information which is constantly updated, with the update interval varying from a second to a few tens of seconds. The IS provides access to any information item on request as well as distributing notifications to all information subscribers. In the latter case IS subscribers receive information within a few milliseconds after it is updated. The IS can handle arbitrary types of information, including histograms produced by the HLT applications, and provides C++, Java and Python APIs. The Information Service is a unique source of information for the majority of the online monitoring analysis and GUI applications used to control and monitor the ATLAS experiment. The Information Service provides streaming functionality allowing efficient replication of all or part of the managed information. This functionality is used to duplicate the subset of the ATLAS monitoring data to the CERN public network with a latency of a few milliseconds, allowing efficient real-time monitoring of the data taking from outside the protected ATLAS network. Each information

  8. A scalable lock-free hash table with open addressing

    DEFF Research Database (Denmark)

    Nielsen, Jesper Puge; Karlsson, Sven

    2016-01-01

    Concurrent data structures synchronized with locks do not scale well with the number of threads. As more scalable alternatives, concurrent data structures and algorithms based on widely available, however advanced, atomic operations have been proposed. These data structures allow for correct and concurrent operations without any locks. In this paper, we present a new fully lock-free open addressed hash table with a simpler design than prior published work. We split hash table insertions into two atomic phases: first inserting a value ignoring other concurrent operations, then in the second phase…

  9. pcircle - A Suite of Scalable Parallel File System Tools

    Energy Technology Data Exchange (ETDEWEB)

    2015-10-01

    Most software related to file systems is written for conventional local file systems; it is serialized and cannot take advantage of a large-scale parallel file system. The pcircle software builds on top of the ubiquitous MPI in cluster computing environments and the work-stealing pattern to provide a scalable, high-performance suite of file system tools. In particular, it implements parallel data copy and parallel data checksumming, with advanced features such as asynchronous progress reporting, checkpoint and restart, as well as integrity checking.

  10. A scalable parallel algorithm for multiple objective linear programs

    Science.gov (United States)

    Wiecek, Malgorzata M.; Zhang, Hong

    1994-01-01

    This paper presents an ADBASE-based parallel algorithm for solving multiple objective linear programs (MOLPs). Job balance, speedup and scalability are of primary interest in evaluating the efficiency of the new algorithm. Implementation results on Intel iPSC/2 and Paragon multiprocessors show that the algorithm significantly speeds up the process of solving MOLPs, which is understood as generating all or some efficient extreme points and unbounded efficient edges. The algorithm gives especially good results for large and very large problems. Motivation and justification for solving such large MOLPs are also included.

  11. Scalable Electrophysiology in Intact Small Animals with Nanoscale Suspended Electrode Arrays

    OpenAIRE

    Gonzales, Daniel L.; Badhiwala, Krishna N.; Vercosa, Daniel G.; Avants, Ben W.; Liu, Zheng; Zhong, Weiwei; Robinson, Jacob T.

    2017-01-01

    Electrical measurements from large populations of animals would help reveal fundamental properties of the nervous system and neurological diseases. Small invertebrates are ideal for these large-scale studies; however, patch-clamp electrophysiology in microscopic animals is typically low-throughput and requires invasive dissections. To overcome these limitations, we present nano-SPEARs: suspended electrodes integrated into a scalable microfluidic device. Using this technology, we have made the fi...

  12. Scalable Video Streaming Adaptive to Time-Varying IEEE 802.11 MAC Parameters

    Science.gov (United States)

    Lee, Kyung-Jun; Suh, Doug-Young; Park, Gwang-Hoon; Huh, Jae-Doo

    This letter proposes a QoS control method for video streaming service over wireless networks. Based on statistical analysis, the time-varying MAC parameters most strongly related to the channel condition are selected to predict the available bitrate. Adaptive bitrate control of scalably-encoded video guarantees continuity of the streaming service even if the channel condition changes abruptly.
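    The adaptation logic described in the letter amounts to sending the largest set of scalable layers that fits the predicted bitrate. A minimal sketch, with hypothetical layer bitrates and a safety margin standing in for the MAC-parameter-based predictor:

```python
def select_layers(predicted_kbps, layer_kbps, margin=0.9):
    """Choose how many scalable layers to transmit for a bandwidth estimate.

    layer_kbps lists per-layer bitrates, base layer first. The numbers
    are hypothetical; in the letter, predicted_kbps would come from the
    time-varying 802.11 MAC statistics.
    """
    budget = predicted_kbps * margin    # headroom for prediction error
    total, layers = 0.0, 0
    for rate in layer_kbps:
        if total + rate > budget:
            break
        total += rate
        layers += 1
    return max(layers, 1)               # always send at least the base layer

# Example: a 400 kb/s base layer plus two enhancement layers.
print(select_layers(1500.0, [400.0, 600.0, 800.0]))  # -> 2 layers fit
```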

  13. A versatile scalable PET processing system

    International Nuclear Information System (INIS)

    Dong, H.; Weisenberger, A.; McKisson, J.; Wenze, Xi; Cuevas, C.; Wilson, J.; Zukerman, L.

    2011-01-01

    Positron Emission Tomography (PET) historically has major clinical and preclinical applications in cancerous oncology, neurology, and cardiovascular diseases. Recently, in a new direction, an application specific PET system is being developed at Thomas Jefferson National Accelerator Facility (Jefferson Lab) in collaboration with Duke University, University of Maryland at Baltimore (UMAB), and West Virginia University (WVU) targeted for plant eco-physiology research. The new plant imaging PET system is versatile and scalable such that it could adapt to several plant imaging needs - imaging many important plant organs including leaves, roots, and stems. The mechanical arrangement of the detectors is designed to accommodate the unpredictable and random distribution in space of the plant organs without requiring the plant be disturbed. Prototyping such a system requires a new data acquisition system (DAQ) and data processing system which are adaptable to the requirements of these unique and versatile detectors.

  14. iSIGHT-FD scalability test report.

    Energy Technology Data Exchange (ETDEWEB)

    Clay, Robert L.; Shneider, Max S.

    2008-07-01

    The engineering analysis community at Sandia National Laboratories uses a number of internal and commercial software codes and tools, including mesh generators, preprocessors, mesh manipulators, simulation codes, post-processors, and visualization packages. We define an analysis workflow as the execution of an ordered, logical sequence of these tools. Various forms of analysis (and in particular, methodologies that use multiple function evaluations or samples) involve executing parameterized variations of these workflows. As part of the DART project, we are evaluating various commercial workflow management systems, including iSIGHT-FD from Engineous. This report documents the results of a scalability test that was driven by DAKOTA and conducted on a parallel computer (Thunderbird). The purpose of this experiment was to examine the suitability and performance of iSIGHT-FD for large-scale, parameterized analysis workflows. As the results indicate, we found iSIGHT-FD to be suitable for this type of application.

  15. The Concept of Business Model Scalability

    DEFF Research Database (Denmark)

    Nielsen, Christian; Lund, Morten

    2015-01-01

    The power of business models lies in their ability to visualize and clarify how firms may configure their value creation processes. Among the key aspects of business model thinking are a focus on what the customer values, how this value is best delivered to the customer, and how strategic partners are leveraged in this value creation, delivery and realization exercise. Central to the mainstream understanding of business models is the value proposition towards the customer, and the hypothesis generated is that if the firm delivers to the customer what he/she requires, then there is a good foundation for a long-term profitable business. However, the message conveyed in this article is that while providing a good value proposition may help the firm ‘get by’, the really successful businesses of today are those able to reach the sweet-spot of business model scalability. This article introduces and discusses…

  16. Scalable Arbitrated Quantum Signature of Classical Messages with Multi-Signers

    International Nuclear Information System (INIS)

    Yang Yuguang; Wang Yuan; Teng Yiwei; Chai Haiping; Wen Qiaoyan

    2010-01-01

    Unconditionally secure signature is an important part of quantum cryptography. Usually, a signature scheme only provides an environment for a single signer. Nevertheless, in real applications, many signers may collaboratively send a message to the verifier and convince the verifier that the message is actually transmitted by them. In this paper, we give a scalable arbitrated signature protocol of classical messages with multi-signers. Its security is analyzed and proved to be secure even with a compromised arbitrator. (general)

  17. A scalable and continuous-upgradable optical wireless and wired convergent access network.

    Science.gov (United States)

    Sung, J Y; Cheng, K T; Chow, C W; Yeh, C H; Pan, C-L

    2014-06-02

    In this work, a scalable and continuously upgradable convergent optical access network is proposed. By using a multi-wavelength coherent comb source and a programmable waveshaper at the central office (CO), optical millimeter-wave (mm-wave) signals of different frequencies (from baseband to > 100 GHz) can be generated. Hence, it provides a scalable and continuously upgradable solution for end-users who need 60 GHz wireless services now and > 100 GHz wireless services in the future. During an upgrade, users only need to upgrade their optical networking unit (ONU): a programmable waveshaper is used to select the suitable optical tones with a wavelength separation equal to the desired mm-wave frequency, while the CO remains intact. The centralized characteristics of the proposed system make it easy to add any new service or end-user, and the centralized control of the wavelength makes the system more stable. A wired data rate of 17.45 Gb/s and a W-band wireless data rate of up to 3.36 Gb/s were demonstrated after transmission over 40 km of single-mode fiber (SMF).

  18. Scalability of Several Asynchronous Many-Task Models for In Situ Statistical Analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Pebay, Philippe Pierre [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Bennett, Janine Camille [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Kolla, Hemanth [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Borghesi, Giulio [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2017-05-01

    This report is a sequel to [PB16], in which we provided a first progress report on research and development towards a scalable, asynchronous many-task, in situ statistical analysis engine using the Legion runtime system. This earlier work included a prototype implementation of a proposed solution, using a proxy mini-application as a surrogate for a full-scale scientific simulation code. The first scalability studies were conducted with the above on modestly-sized experimental clusters. In contrast, in the current work we have integrated our in situ analysis engines with a full-size scientific application (S3D, using the Legion-SPMD model), and have conducted numerical tests on the largest computational platform currently available for DOE science applications. We also provide details regarding the design and development of a light-weight asynchronous collectives library. We describe how this library is utilized within our SPMD-Legion S3D workflow, and compare the data aggregation technique deployed herein to the approach taken within our previous work.

  19. Scalable Stream Processing with Quality of Service for Smart City Crowdsensing Applications

    Directory of Open Access Journals (Sweden)

    Paolo Bellavista

    2013-12-01

    Full Text Available Crowdsensing is emerging as a powerful paradigm capable of leveraging the collective, though imprecise, monitoring capabilities of common people carrying smartphones or other personal devices, which can effectively become real-time mobile sensors, collecting information about the physical places they live in. This unprecedented amount of information, considered collectively, offers new valuable opportunities to understand more thoroughly the environment in which we live and, more importantly, gives the chance to use this deeper knowledge to act and improve, in a virtuous loop, the environment itself. However, managing this process is a hard technical challenge, spanning several socio-technical issues: here, we focus on the related quality, reliability, and scalability trade-offs by proposing an architecture for crowdsensing platforms that dynamically self-configure and self-adapt depending on application-specific quality requirements. In the context of this general architecture, the paper will specifically focus on the Quasit distributed stream processing middleware, and show how Quasit can be used to process and analyze crowdsensing-generated data flows with differentiated quality requirements in a highly scalable and reliable way.

  20. Robust

    DEFF Research Database (Denmark)

    2017-01-01

    ‘Robust – Reflections on Resilient Architecture’ is a scientific publication following the conference of the same name in November of 2017. Researchers and PhD Fellows associated with the Master's programme Cultural Heritage, Transformation and Restoration (Transformation) at The Royal Danish

  1. Volumetric Medical Image Coding: An Object-based, Lossy-to-lossless and Fully Scalable Approach

    Science.gov (United States)

    Danyali, Habibiollah; Mertins, Alfred

    2011-01-01

    In this article, an object-based, highly scalable, lossy-to-lossless 3D wavelet coding approach for volumetric medical image data (e.g., magnetic resonance (MR) and computed tomography (CT)) is proposed. The new method, called 3DOBHS-SPIHT, is based on the well-known set partitioning in hierarchical trees (SPIHT) algorithm and supports both quality and resolution scalability. The 3D input data is grouped into groups of slices (GOS), and each GOS is encoded and decoded as a separate unit. The symmetric tree definition of the original 3DSPIHT is improved by introducing a new asymmetric tree structure. While preserving the compression efficiency, the new tree structure allows for a small size of each GOS, which not only reduces memory consumption during the encoding and decoding processes, but also facilitates more efficient random access to certain segments of slices. To achieve more compression efficiency, the algorithm only encodes the main object of interest in each 3D data set, which can have any arbitrary shape, and ignores the unnecessary background. The experimental results on some MR data sets show the good performance of the 3DOBHS-SPIHT algorithm for multi-resolution lossy-to-lossless coding. The compression efficiency, full scalability, and object-based features of the proposed approach, besides its lossy-to-lossless coding support, make it a very attractive candidate for volumetric medical image archiving and transmission applications. PMID:22606653

  2. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy.

    Science.gov (United States)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-19

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time that is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth, due to the massive amount of data that needs to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for highly scalable performance of the HPC 3D-MIP platform when a larger number of cores is available.
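    The scalability analysis mentioned above follows Amdahl's law, S(n) = 1 / ((1 - p) + p/n) for a parallel fraction p on n cores. The quick calculation below (illustrative numbers, not the paper's measurements) shows why a near-12-fold gain on 12 cores implies an almost fully parallelized pipeline:

```python
def amdahl_speedup(p, n):
    """Amdahl's law: speedup on n cores with parallel fraction p."""
    return 1.0 / ((1.0 - p) + p / n)

# Even p = 0.99 caps 12 cores at about 10.8x, so a measured ~12x
# implies the serial fraction of the pipeline is vanishingly small.
for p in (0.90, 0.99, 0.999):
    print(f"p = {p}: {amdahl_speedup(p, 12):.1f}x on 12 cores")
```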

  3. Hydra: a scalable proteomic search engine which utilizes the Hadoop distributed computing framework.

    Science.gov (United States)

    Lewis, Steven; Csordas, Attila; Killcoyne, Sarah; Hermjakob, Henning; Hoopmann, Michael R; Moritz, Robert L; Deutsch, Eric W; Boyle, John

    2012-12-05

    For shotgun mass spectrometry based proteomics the most computationally expensive step is in matching the spectra against an increasingly large database of sequences and their post-translational modifications with known masses. Each mass spectrometer can generate data at an astonishingly high rate, and the scope of what is searched for is continually increasing. Therefore solutions for improving our ability to perform these searches are needed. We present a sequence database search engine that is specifically designed to run efficiently on the Hadoop MapReduce distributed computing framework. The search engine implements the K-score algorithm, generating comparable output for the same input files as the original implementation. The scalability of the system is shown, and the architecture required for the development of such distributed processing is discussed. The software is scalable in its ability to handle a large peptide database, numerous modifications and large numbers of spectra. Performance scales with the number of processors in the cluster, allowing throughput to expand with the available resources.
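    The decomposition that makes such a search engine fit MapReduce can be shown with a toy pass: a mapper scores candidate peptides for each spectrum, and a reducer keeps the best match per spectrum. This is a sketch with a made-up mass-tolerance filter and scoring function, not Hydra's K-score implementation, and the in-process driver below merely imitates the shuffle that Hadoop performs for real:

```python
from collections import defaultdict

def mapper(spectrum, peptide_db, score_fn):
    # Emit a scored candidate for every peptide whose mass falls
    # within a (hypothetical) 1.0 Da precursor-mass tolerance.
    for peptide in peptide_db:
        if abs(peptide["mass"] - spectrum["precursor_mass"]) < 1.0:
            yield spectrum["id"], (peptide["seq"], score_fn(spectrum, peptide))

def reducer(spectrum_id, candidates):
    # Keep only the best-scoring peptide-spectrum match.
    return spectrum_id, max(candidates, key=lambda c: c[1])

def run(spectra, peptide_db, score_fn):
    grouped = defaultdict(list)          # stands in for Hadoop's shuffle
    for s in spectra:
        for key, value in mapper(s, peptide_db, score_fn):
            grouped[key].append(value)
    return dict(reducer(k, v) for k, v in grouped.items())

db = [{"seq": "PEPTIDE", "mass": 799.4}, {"seq": "PROTEIN", "mass": 799.9}]
spectra = [{"id": "s1", "precursor_mass": 799.5}]
print(run(spectra, db, lambda s, p: -abs(p["mass"] - s["precursor_mass"])))
```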

  4. PanDA Beyond ATLAS : A Scalable Workload Management System For Data Intensive Science

    CERN Document Server

    Borodin, M; The ATLAS collaboration; Jha, S; Golubkov, D; Klimentov, A; Maeno, T; Nilsson, P; Oleynik, D; Panitkin, S; Petrosyan, A; Schovancova, J; Vaniachine, A; Wenaus, T

    2014-01-01

    The LHC experiments are today at the leading edge of large-scale distributed data-intensive computational science. The LHC's ATLAS experiment processes data volumes which are particularly extreme, over 140 PB to date, distributed worldwide at over 120 sites. An important element in the success of the exciting physics results from ATLAS is the highly scalable integrated workflow and dataflow management afforded by the PanDA workload management system, used for all the distributed computing needs of the experiment. The PanDA design is not experiment specific and PanDA is now being extended to support other data-intensive scientific applications. PanDA was cited as an example of "a high performance, fault tolerant software for fast, scalable access to data repositories of many kinds" during the "Big Data Research and Development Initiative" announcement, a 200 million USD U.S. government investment in tools to handle huge volumes of digital data needed to spur science and engineering discoveries. In this talk...

  5. Scalable real space pseudopotential density functional codes for materials in the exascale regime

    Science.gov (United States)

    Lena, Charles; Chelikowsky, James; Schofield, Grady; Biller, Ariel; Kronik, Leeor; Saad, Yousef; Deslippe, Jack

    Real-space pseudopotential density functional theory has proven to be an efficient method for computing the properties of matter in many different states and geometries, including liquids, wires, slabs, and clusters, with and without spin polarization. Fully self-consistent solutions using this approach have been routinely obtained for systems with thousands of atoms. Yet, there are many systems of notably larger size where quantum mechanical accuracy is desired, but scalability proves to be a hindrance. Such systems include large biological molecules, complex nanostructures, and mismatched interfaces. We will present an overview of our new massively parallel algorithms, which offer improved scalability in preparation for exascale supercomputing. We will illustrate these algorithms by considering the electronic structure of a Si nanocrystal exceeding 10⁴ atoms. Support provided by the SciDAC program, Department of Energy, Office of Science, Advanced Scientific Computing Research and Basic Energy Sciences. Grant Numbers DE-SC0008877 (Austin) and DE-FG02-12ER4 (Berkeley).

  6. Robustness in Railway Operations (RobustRailS)

    DEFF Research Database (Denmark)

    Jensen, Jens Parbo; Nielsen, Otto Anker

    This study considers the problem of enhancing railway timetable robustness without adding slack time and hence increasing travel times. The approach integrates a transit assignment model to assess how passengers adapt their behaviour whenever operations are changed. First, the approach considers...

  7. Transforming a Simple Commercial Glue into Highly Robust Superhydrophobic Surfaces via Aerosol-Assisted Chemical Vapor Deposition.

    Science.gov (United States)

    Zhuang, Aoyun; Liao, Ruijin; Lu, Yao; Dixon, Sebastian C; Jiamprasertboon, Arreerat; Chen, Faze; Sathasivam, Sanjayan; Parkin, Ivan P; Carmalt, Claire J

    2017-12-06

    Robust superhydrophobic surfaces were synthesized as composites of the widely commercially available adhesives epoxy resin (EP) and polydimethylsiloxane (PDMS). The EP layer provided a strongly adhered micro/nanoscale structure on the substrates, while the PDMS was used as a post-treatment to lower the surface energy. In this study, EP films were deposited at a range of temperatures, deposition times, and substrates via aerosol-assisted chemical vapor deposition (AACVD). A novel dynamic deposition-temperature approach was developed to create multiple-layered periodic micro/nanostructures that significantly improved the surface mechanical durability. Water droplet contact angles (CA) of 160° were observed together with low droplet sliding angles (SA), and UV testing (365 nm, 3.7 mW/cm², 120 h) was carried out to demonstrate the environmental stability of the films. Self-cleaning behavior was demonstrated by clearing the surfaces of various contaminating powders and aqueous dyes. This facile and flexible method for fabricating highly durable superhydrophobic polymer films points to a promising future for AACVD in their scalable and low-cost production.

  8. The TOTEM DAQ based on the Scalable Readout System (SRS)

    Science.gov (United States)

    Quinto, Michele; Cafagna, Francesco S.; Fiergolski, Adrian; Radicioni, Emilio

    2018-02-01

    The TOTEM (TOTal cross section, Elastic scattering and diffraction dissociation Measurement at the LHC) experiment at the LHC has been designed to measure the total proton-proton cross-section and study elastic and diffractive scattering at LHC energies. In order to cope with the increased machine luminosity and the higher statistics required by the extension of the TOTEM physics program, approved for the LHC's Run Two phase, the previous VME-based data acquisition system has been replaced with a new one based on the Scalable Readout System. The system features an aggregated data throughput of 2 GB/s towards the online storage system. This makes it possible to sustain a maximum trigger rate of ~24 kHz, to be compared with the 1 kHz rate of the previous system. The trigger rate is further improved by implementing zero-suppression and second-level hardware algorithms in the Scalable Readout System. The new system fulfils the requirements for increased efficiency, providing higher bandwidth and increasing the purity of the recorded data. Moreover, full compatibility has been guaranteed with the legacy front-end hardware, as well as with the DAQ interface of the CMS experiment and with the LHC's Timing, Trigger and Control distribution system. In this contribution we describe in detail the architecture of the full system and its performance as measured during the commissioning phase at the LHC Interaction Point.

  9. Robust statistical methods with R

    CERN Document Server

    Jureckova, Jana

    2005-01-01

    Robust statistical methods were developed to supplement the classical procedures when the data violate classical assumptions. They are ideally suited to applied research across a broad spectrum of study, yet most books on the subject are narrowly focused, overly theoretical, or simply outdated. Robust Statistical Methods with R provides a systematic treatment of robust procedures with an emphasis on practical application.The authors work from underlying mathematical tools to implementation, paying special attention to the computational aspects. They cover the whole range of robust methods, including differentiable statistical functions, distance of measures, influence functions, and asymptotic distributions, in a rigorous yet approachable manner. Highlighting hands-on problem solving, many examples and computational algorithms using the R software supplement the discussion. The book examines the characteristics of robustness, estimators of real parameter, large sample properties, and goodness-of-fit tests. It...

  10. Improving diabetes medication adherence: successful, scalable interventions

    Directory of Open Access Journals (Sweden)

    Zullig LL

    2015-01-01

    Full Text Available Leah L Zullig,1,2 Walid F Gellad,3,4 Jivan Moaddeb,2,5 Matthew J Crowley,1,2 William Shrank,6 Bradi B Granger,7 Christopher B Granger,8 Troy Trygstad,9 Larry Z Liu,10 Hayden B Bosworth1,2,7,11 1Center for Health Services Research in Primary Care, Durham Veterans Affairs Medical Center, Durham, NC, USA; 2Department of Medicine, Duke University, Durham, NC, USA; 3Center for Health Equity Research and Promotion, Pittsburgh Veterans Affairs Medical Center, Pittsburgh, PA, USA; 4Division of General Internal Medicine, University of Pittsburgh, Pittsburgh, PA, USA; 5Institute for Genome Sciences and Policy, Duke University, Durham, NC, USA; 6CVS Caremark Corporation; 7School of Nursing, Duke University, Durham, NC, USA; 8Department of Medicine, Division of Cardiology, Duke University School of Medicine, Durham, NC, USA; 9North Carolina Community Care Networks, Raleigh, NC, USA; 10Pfizer, Inc., and Weill Medical College of Cornell University, New York, NY, USA; 11Department of Psychiatry and Behavioral Sciences, Duke University School of Medicine, Durham, NC, USA Abstract: Effective medications are a cornerstone of prevention and disease treatment, yet only about half of patients take their medications as prescribed, resulting in a common and costly public health challenge for the US healthcare system. Since poor medication adherence is a complex problem with many contributing causes, there is no one universal solution. This paper describes interventions that were not only effective in improving medication adherence among patients with diabetes, but were also potentially scalable (ie, easy to implement to a large population. We identify key characteristics that make these interventions effective and scalable. This information is intended to inform healthcare systems seeking proven, low resource, cost-effective solutions to improve medication adherence. Keywords: medication adherence, diabetes mellitus, chronic disease, dissemination research

  11. Robust Manufacturing Control

    CERN Document Server

    2013-01-01

    This contributed volume collects research papers, presented at the CIRP Sponsored Conference Robust Manufacturing Control: Innovative and Interdisciplinary Approaches for Global Networks (RoMaC 2012, Jacobs University, Bremen, Germany, June 18th-20th 2012). These research papers present the latest developments and new ideas focusing on robust manufacturing control for global networks. Today, Global Production Networks (i.e. the nexus of interconnected material and information flows through which products and services are manufactured, assembled and distributed) are confronted with and expected to adapt to: sudden and unpredictable large-scale changes of important parameters which are occurring more and more frequently, event propagation in networks with high degree of interconnectivity which leads to unforeseen fluctuations, and non-equilibrium states which increasingly characterize daily business. These multi-scale changes deeply influence logistic target achievement and call for robust planning and control ...

  12. Toward optimized light utilization in nanowire arrays using scalable nanosphere lithography and selected area growth.

    Science.gov (United States)

    Madaria, Anuj R; Yao, Maoqing; Chi, Chunyung; Huang, Ningfeng; Lin, Chenxi; Li, Ruijuan; Povinelli, Michelle L; Dapkus, P Daniel; Zhou, Chongwu

    2012-06-13

    Vertically aligned, catalyst-free semiconducting nanowires hold great potential for photovoltaic applications, in which achieving scalable synthesis and optimized optical absorption simultaneously is critical. Here, we report combining nanosphere lithography (NSL) and selected area metal-organic chemical vapor deposition (SA-MOCVD) for the first time for scalable synthesis of vertically aligned gallium arsenide nanowire arrays, and surprisingly, we show that such nanowire arrays with patterning defects due to NSL can be as good as highly ordered nanowire arrays in terms of optical absorption and reflection. Wafer-scale patterning for nanowire synthesis was done using a polystyrene nanosphere template as a mask. Nanowires grown from substrates patterned by NSL show similar structural features to those patterned using electron beam lithography (EBL). Reflection of photons from the NSL-patterned nanowire array was used as a measure of the effect of defects present in the structure. Experimentally, we show that GaAs nanowires as short as 130 nm show reflection of <10% over the visible range of the solar spectrum. Our results indicate that a highly ordered nanowire structure is not necessary: despite the "defects" present in NSL-patterned nanowire arrays, their optical performance is similar to "defect-free" structures patterned by more costly, time-consuming EBL methods. Our scalable approach for synthesis of vertical semiconducting nanowires can have application in high-throughput and low-cost optoelectronic devices, including solar cells.

  13. Experimental demonstration of an improved EPON architecture using OFDMA for bandwidth scalable LAN emulation

    DEFF Research Database (Denmark)

    Deng, Lei; Zhao, Ying; Yu, Xianbin

    2011-01-01

    We propose and demonstrate an improved Ethernet passive optical network (EPON) architecture supporting bandwidth-scalable physical layer local area network (LAN) emulation. Due to the use of orthogonal frequency division multiple access (OFDMA) technology for the LAN traffic transmission, there is...

  14. Robustness - theoretical framework

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Rizzuto, Enrico; Faber, Michael H.

    2010-01-01

    More frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure, combined with increased requirements to efficiency in design and execution followed by increased risk of human errors, has made the need for robustness requirements for new structures evident. The purpose of this fact sheet is to describe a theoretical and risk-based framework to form the basis for quantification of robustness and for pre-normative guidelines.

  15. Facile and Scalable Fabrication of Highly Efficient Lead Iodide Perovskite Thin-Film Solar Cells in Air Using Gas Pump Method.

    Science.gov (United States)

    Ding, Bin; Gao, Lili; Liang, Lusheng; Chu, Qianqian; Song, Xiaoxuan; Li, Yan; Yang, Guanjun; Fan, Bin; Wang, Mingkui; Li, Chengxin; Li, Changjiu

    2016-08-10

    Control of the perovskite film formation process to produce high-quality organic-inorganic metal halide perovskite thin films with uniform morphology, high surface coverage, and minimal pinholes is of great importance for highly efficient solar cells. Herein, we report on the fabrication of large-area light-absorbing perovskite films with a new facile and scalable gas pump method. By decreasing the total pressure in the evaporation environment, the gas pump method can enhance the solvent evaporation rate eightfold and thereby produce an extremely dense, uniform, and full-coverage perovskite thin film. The resulting planar perovskite solar cells achieve an impressive power conversion efficiency of up to 19.00%, with an average efficiency of 17.38 ± 0.70% for 32 devices with an area of 5 × 2 mm, and 13.91% for devices with a large area up to 1.13 cm². The perovskite films can be easily fabricated in air at a relative humidity of 45-55%, which gives the method promising prospects for the industrial application of large-area perovskite solar panels.

  16. Coalescent: an open-source and scalable framework for exact calculations in coalescent theory

    Science.gov (United States)

    2012-01-01

    Background Currently, there is no open-source, cross-platform and scalable framework for coalescent analysis in population genetics. There is no scalable GUI-based user application either. Such a framework and application would not only drive the creation of more complex and realistic models but also make them truly accessible. Results As a first attempt, we built a framework and user application for the domain of exact calculations in coalescent analysis. The framework provides an API with the concepts of model, data, statistic, phylogeny, gene tree and recursion. Infinite-alleles and infinite-sites models are considered. It defines pluggable computations such as counting and listing all the ancestral configurations and genealogies and computing the exact probability of data. It can visualize a gene tree, trace and visualize the internals of the recursion algorithm for further improvement, and attach dynamically a number of output processors. The user application defines jobs in a plug-in-like manner so that they can be activated, deactivated, installed or uninstalled on demand. Multiple jobs can be run and their inputs edited. Job inputs are persisted across restarts and running jobs can be cancelled where applicable. Conclusions Coalescent theory plays an increasingly important role in analysing molecular population genetic data. Models involved are mathematically difficult and computationally challenging. An open-source, scalable framework that lets users immediately take advantage of the progress made by others will enable exploration of yet more difficult and realistic models. As models become more complex and mathematically less tractable, the need for an integrated computational approach is obvious. Object-oriented designs, though they have upfront costs, are practical now and can provide such an integrated approach. PMID:23033878

  17. Coalescent: an open-source and scalable framework for exact calculations in coalescent theory

    Directory of Open Access Journals (Sweden)

    Tewari Susanta

    2012-10-01

    Background Currently, there is no open-source, cross-platform and scalable framework for coalescent analysis in population genetics, nor is there a scalable GUI-based user application. Such a framework and application would not only drive the creation of more complex and realistic models but also make them truly accessible. Results As a first attempt, we built a framework and user application for the domain of exact calculations in coalescent analysis. The framework provides an API with the concepts of model, data, statistic, phylogeny, gene tree and recursion. Infinite-alleles and infinite-sites models are considered. It defines pluggable computations such as counting and listing all the ancestral configurations and genealogies and computing the exact probability of data. It can visualize a gene tree, trace and visualize the internals of the recursion algorithm for further improvement, and dynamically attach a number of output processors. The user application defines jobs in a plug-in-like manner so that they can be activated, deactivated, installed or uninstalled on demand. Multiple jobs can be run and their inputs edited. Job inputs are persisted across restarts, and running jobs can be cancelled where applicable. Conclusions Coalescent theory plays an increasingly important role in analysing molecular population genetic data. The models involved are mathematically difficult and computationally challenging. An open-source, scalable framework that lets users immediately take advantage of the progress made by others will enable exploration of yet more difficult and realistic models. As models become more complex and mathematically less tractable, the need for an integrated computational approach is obvious. Object-oriented designs, though they have upfront costs, are now practical and can provide such an integrated approach.
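
    For the infinite-alleles model handled by this framework, the exact probability of an allelic configuration has a closed form, the Ewens sampling formula. The sketch below computes it directly; it is an independent illustration, not the Coalescent framework's API.

```python
from math import factorial, prod

def ewens_probability(a, theta):
    """Ewens sampling formula for the infinite-alleles coalescent.

    a[j-1] is the number of allele types seen exactly j times in the
    sample, so n = sum(j * a[j-1]). Returns the exact probability of
    this allelic configuration for scaled mutation rate theta.
    """
    n = sum(j * a_j for j, a_j in enumerate(a, start=1))
    rising = prod(theta + k for k in range(n))  # rising factorial theta^(n)
    config = prod(theta**a_j / (j**a_j * factorial(a_j))
                  for j, a_j in enumerate(a, start=1))
    return factorial(n) / rising * config

# e.g. a sample of n = 4 genes: two singleton alleles and one doubleton
print(ewens_probability([2, 1, 0, 0], theta=1.0))  # 0.25
```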

  18. Highly Scalable Asynchronous Computing Method for Partial Differential Equations: A Path Towards Exascale

    Science.gov (United States)

    Konduri, Aditya

    Many natural and engineering systems are governed by nonlinear partial differential equations (PDEs) which give rise to multiscale phenomena, e.g. turbulent flows. Numerical simulations of these problems are computationally very expensive and demand extreme levels of parallelism. Under realistic conditions, simulations are carried out on massively parallel computers with hundreds of thousands of processing elements (PEs). It has been observed that communication between PEs, as well as their synchronization, takes up a significant portion of the total simulation time at these extreme scales and results in poor scalability of codes. This issue is likely to pose a bottleneck in the scalability of codes on future Exascale systems. In this work, we propose an asynchronous computing algorithm based on widely used finite difference methods to solve PDEs, in which synchronization between PEs due to communication is relaxed at a mathematical level. We show that while stability is preserved when schemes are used asynchronously, accuracy is greatly degraded. Since message arrivals at PEs are random processes, so is the behavior of the error. We propose a new statistical framework in which we show that average errors always drop to first order, regardless of the order of the original scheme. We then propose new asynchrony-tolerant schemes that maintain accuracy when synchronization is relaxed. The quality of the solution is shown to depend not only on the physical phenomena and numerical schemes, but also on the characteristics of the computing machine. A novel algorithm using remote memory access communications has been developed to demonstrate excellent scalability of the method for large-scale computing. Finally, we present a path to extending this method to solve complex multiscale problems on Exascale machines.
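
    The core idea of relaxed synchronization can be illustrated with a toy 1D heat-equation update in which the values arriving from "remote" neighbours may be stale by a random delay. This is a minimal sketch of the concept only, not the paper's asynchrony-tolerant schemes; the delay model and grid size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def async_heat_step(u_hist, alpha=0.25, max_delay=2):
    """One explicit step of u_t = u_xx on a periodic grid.

    u_hist is a list of the last max_delay+1 solution arrays, newest
    first. Interior points use up-to-date neighbours; the two edge
    points read their across-the-boundary neighbour from a randomly
    delayed time level, mimicking an unsynchronized message from a
    remote processing element.
    """
    u = u_hist[0]
    new = u.copy()
    new[1:-1] = u[1:-1] + alpha * (u[2:] - 2.0 * u[1:-1] + u[:-2])

    d_left = rng.integers(0, max_delay + 1)   # random delays of the
    d_right = rng.integers(0, max_delay + 1)  # incoming ghost values
    new[0] = u[0] + alpha * (u[1] - 2.0 * u[0] + u_hist[d_left][-1])
    new[-1] = u[-1] + alpha * (u_hist[d_right][0] - 2.0 * u[-1] + u[-2])
    return new

# march a few steps, keeping a short history for the delayed reads
u_hist = [np.sin(np.linspace(0, 2 * np.pi, 64, endpoint=False))] * 3
for _ in range(100):
    u_hist = [async_heat_step(u_hist)] + u_hist[:2]
```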

  19. Scalable group level probabilistic sparse factor analysis

    DEFF Research Database (Denmark)

    Hinrich, Jesper Løve; Nielsen, Søren Føns Vind; Riis, Nicolai Andre Brogaard

    2017-01-01

    Many data-driven approaches exist to extract neural representations from functional magnetic resonance imaging (fMRI) data, but most of them lack a proper probabilistic formulation. We propose a scalable group-level probabilistic sparse factor analysis (psFA) allowing spatially sparse maps, component pruning using automatic relevance determination (ARD), and subject-specific heteroscedastic spatial noise modeling. For task-based and resting-state fMRI, we show that the sparsity constraint gives rise to components similar to those obtained by group independent component analysis. The noise modeling shows that noise is reduced in areas typically associated with activation by the experimental design. The psFA model identifies sparse components, and the probabilistic setting provides a natural way to handle parameter uncertainties. The variational Bayesian framework easily extends to more complex...
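
    The generative side of such a model can be sketched in a few lines; the snippet below mimics ARD pruning with per-component precisions, spatially sparse loadings, and heteroscedastic voxel noise. All names and dimensions are hypothetical, and this is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_sparse_fa(n_voxels=500, n_timepoints=120, n_components=10):
    """Generate data from a sparse factor analysis model, X = W @ S + E.

    ARD is mimicked by per-component precisions alpha: components with
    large alpha get near-zero loadings and are effectively pruned.
    Spatial sparsity is mimicked by zeroing most loading entries, and
    the noise is heteroscedastic (its variance differs per voxel).
    """
    alpha = rng.gamma(2.0, 2.0, size=n_components)         # ARD precisions
    W = rng.normal(0.0, 1.0 / np.sqrt(alpha), size=(n_voxels, n_components))
    W *= rng.random(W.shape) < 0.1                         # spatially sparse maps
    S = rng.normal(size=(n_components, n_timepoints))      # temporal activations
    noise_std = rng.uniform(0.1, 1.0, size=(n_voxels, 1))  # heteroscedastic noise
    X = W @ S + noise_std * rng.normal(size=(n_voxels, n_timepoints))
    return X, W, S

X, W, S = sample_sparse_fa()
print(X.shape)  # (500, 120)
```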

  20. Robustness Assessment of Spatial Timber Structures

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning

    2012-01-01

    Robustness of structural systems has obtained a renewed interest due to a much more frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure. In order to minimise the likelihood of such disproportionate structural failures, many modern building codes consider the need for robustness of structures and provide strategies and methods to obtain robustness. Therefore a structural engineer may take the necessary steps to design robust structures that are insensitive to accidental circumstances. The present paper summarises issues with respect to the robustness of spatial timber structures and discusses the consequences of such robustness issues for the future development of timber structures.