WorldWideScience

Sample records for next-generation massively parallel

  1. Designing Next Generation Massively Multithreaded Architectures for Irregular Applications

    Energy Technology Data Exchange (ETDEWEB)

    Tumeo, Antonino; Secchi, Simone; Villa, Oreste

    2012-08-31

    Irregular applications, such as data mining or graph-based computations, show unpredictable memory/network access patterns and control structures. Massively multi-threaded architectures with large node count, like the Cray XMT, have been shown to address their requirements better than commodity clusters. In this paper we present the approaches that we are currently pursuing to design future generations of these architectures. First, we introduce the Cray XMT and compare it to other multithreaded architectures. We then propose an evolution of the architecture, integrating multiple cores per node and next generation network interconnect. We advocate the use of hardware support for remote memory reference aggregation to optimize network utilization. For this evaluation we developed a highly parallel, custom simulation infrastructure for multi-threaded systems. Our simulator executes unmodified XMT binaries with very large datasets, capturing effects due to contention and hot-spotting, while predicting execution times with greater than 90% accuracy. We also discuss the FPGA prototyping approach that we are employing to study efficient support for irregular applications in next generation manycore processors.

  2. Massively Parallel QCD

    International Nuclear Information System (INIS)

    Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampap, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G

    2007-01-01

    The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results

  3. Massively parallel E-beam inspection: enabling next-generation patterned defect inspection for wafer and mask manufacturing

    Science.gov (United States)

    Malloy, Matt; Thiel, Brad; Bunday, Benjamin D.; Wurm, Stefan; Mukhtar, Maseeh; Quoi, Kathy; Kemen, Thomas; Zeidler, Dirk; Eberle, Anna Lena; Garbowski, Tomasz; Dellemann, Gregor; Peters, Jan Hendrik

    2015-03-01

    SEMATECH aims to identify and enable disruptive technologies to meet the ever-increasing demands of semiconductor high volume manufacturing (HVM). As such, a program was initiated in 2012 focused on high-speed e-beam defect inspection as a complement, and eventual successor, to bright field optical patterned defect inspection [1]. The primary goal is to enable a new technology to overcome the key gaps that are limiting modern day inspection in the fab; primarily, throughput and sensitivity to detect ultra-small critical defects. The program specifically targets revolutionary solutions based on massively parallel e-beam technologies, as opposed to incremental improvements to existing e-beam and optical inspection platforms. Wafer inspection is the primary target, but attention is also being paid to next generation mask inspection. During the first phase of the multi-year program multiple technologies were reviewed, a down-selection was made to the top candidates, and evaluations began on proof of concept systems. A champion technology has been selected and as of late 2014 the program has begun to move into the core technology maturation phase in order to enable eventual commercialization of an HVM system. Performance data from early proof of concept systems will be shown along with roadmaps to achieving HVM performance. SEMATECH's vision for moving from early-stage development to commercialization will be shown, including plans for development with industry leading technology providers.

  4. MCBooster: a tool for MC generation for massively parallel platforms

    CERN Multimedia

    Alves Junior, Antonio Augusto

    2016-01-01

    MCBooster is a header-only, C++11-compliant library for the generation of large samples of phase-space Monte Carlo events on massively parallel platforms. It was released on GitHub in the spring of 2016. The library core algorithms implement the Raubold-Lynch method; they are able to generate the full kinematics of decays with up to nine particles in the final state. The library supports the generation of sequential decays as well as the parallel evaluation of arbitrary functions over the generated events. The output of MCBooster completely accords with popular and well-tested software packages such as GENBOD (W515 from CERNLIB) and TGenPhaseSpace from the ROOT framework. MCBooster is developed on top of the Thrust library and runs on Linux systems. It deploys transparently on NVidia CUDA-enabled GPUs as well as multicore CPUs. This contribution summarizes the main features of MCBooster. A basic description of the user interface and some examples of applications are provided, along with measurements of perfor...

  5. MADmap: A Massively Parallel Maximum-Likelihood Cosmic Microwave Background Map-Maker

    Energy Technology Data Exchange (ETDEWEB)

    Cantalupo, Christopher; Borrill, Julian; Jaffe, Andrew; Kisner, Theodore; Stompor, Radoslaw

    2009-06-09

    MADmap is a software application used to produce maximum-likelihood images of the sky from time-ordered data which include correlated noise, such as those gathered by Cosmic Microwave Background (CMB) experiments. It works efficiently on platforms ranging from small workstations to the most massively parallel supercomputers. Map-making is a critical step in the analysis of all CMB data sets, and the maximum-likelihood approach is the most accurate and widely applicable algorithm; however, it is a computationally challenging task. This challenge will only increase with the next generation of ground-based, balloon-borne and satellite CMB polarization experiments. The faintness of the B-mode signal that these experiments seek to measure requires them to gather enormous data sets. MADmap is already being run on up to O(10^11) time samples, O(10^8) pixels and O(10^4) cores, with ongoing work to scale to the next generation of data sets and supercomputers. We describe MADmap's algorithm based around a preconditioned conjugate gradient solver, fast Fourier transforms and sparse matrix operations. We highlight MADmap's ability to address problems typically encountered in the analysis of realistic CMB data sets and describe its application to simulations of the Planck and EBEX experiments. The massively parallel and distributed implementation is detailed and scaling complexities are given for the resources required. MADmap is capable of analysing the largest data sets now being collected on computing resources currently available, and we argue that, given Moore's Law, MADmap will be capable of reducing the most massive projected data sets.
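
    The solver at the heart of MADmap is a preconditioned conjugate gradient (PCG) iteration. As a minimal illustration of that numerical core (not MADmap's actual implementation), the sketch below solves a small symmetric positive-definite system with a Jacobi preconditioner; the matrix, tolerances and problem size are illustrative assumptions.

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-8, max_iter=1000):
    """Preconditioned conjugate gradient for A x = b.

    A          : symmetric positive-definite matrix (ndarray)
    b          : right-hand side vector
    M_inv_diag : inverse of a diagonal (Jacobi) preconditioner
    """
    x = np.zeros_like(b)
    r = b - A @ x                     # initial residual
    z = M_inv_diag * r                # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Toy example: solve a small SPD system and verify the result.
rng = np.random.default_rng(0)
Q = rng.standard_normal((50, 50))
A = Q @ Q.T + 50 * np.eye(50)         # well-conditioned SPD matrix
b = rng.standard_normal(50)
x = pcg(A, b, M_inv_diag=1.0 / np.diag(A))
print(np.allclose(A @ x, b, atol=1e-6))
```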

  6. Frontiers of massively parallel scientific computation

    International Nuclear Information System (INIS)

    Fischer, J.R.

    1987-07-01

    Practical applications using massively parallel computer hardware first appeared during the 1980s. Their development was motivated by the need for computing power orders of magnitude beyond that available today for tasks such as numerical simulation of complex physical and biological processes, generation of interactive visual displays, satellite image analysis, and knowledge based systems. Representative of the first generation of this new class of computers is the Massively Parallel Processor (MPP). A team of scientists was provided the opportunity to test and implement their algorithms on the MPP. The first results are presented. The research spans a broad variety of applications including Earth sciences, physics, signal and image processing, computer science, and graphics. The performance of the MPP was very good. Results obtained using the Connection Machine and the Distributed Array Processor (DAP) are presented

  7. Next Generation Parallelization Systems for Processing and Control of PDS Image Node Assets

    Science.gov (United States)

    Verma, R.

    2017-06-01

    We present next-generation parallelization tools to help Planetary Data System (PDS) Imaging Node (IMG) better monitor, process, and control changes to nearly 650 million file assets and over a dozen machines on which they are referenced or stored.

  8. Neural Parallel Engine: A toolbox for massively parallel neural signal processing.

    Science.gov (United States)

    Tam, Wing-Kin; Yang, Zhi

    2018-05-01

    Large-scale neural recordings provide detailed information on neuronal activities and can help elicit the underlying neural mechanisms of the brain. However, the computational burden is also formidable when we try to process the huge data stream generated by such recordings. In this study, we report the development of Neural Parallel Engine (NPE), a toolbox for massively parallel neural signal processing on graphical processing units (GPUs). It offers a selection of the most commonly used routines in neural signal processing such as spike detection and spike sorting, including advanced algorithms such as exponential-component-power-component (EC-PC) spike detection and binary pursuit spike sorting. We also propose a new method for detecting peaks in parallel through a parallel compact operation. Our toolbox is able to offer a 5× to 110× speedup compared with its CPU counterparts depending on the algorithms. A user-friendly MATLAB interface is provided to allow easy integration of the toolbox into existing workflows. Previous efforts on GPU neural signal processing only focus on a few rudimentary algorithms, are not well-optimized and often do not provide a user-friendly programming interface to fit into existing workflows. There is a strong need for a comprehensive toolbox for massively parallel neural signal processing. A new toolbox for massively parallel neural signal processing has been created. It can offer significant speedup in processing signals from large-scale recordings up to thousands of channels. Copyright © 2018 Elsevier B.V. All rights reserved.
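
    Peak (spike) detection is naturally data-parallel: each sample can test itself against a threshold and its neighbours independently, and a compact step then gathers the surviving indices. The NumPy sketch below shows that pattern on a synthetic trace; it is a serial, simplified stand-in, not the toolbox's EC-PC detector or its GPU kernels.

```python
import numpy as np

def detect_peaks(signal, threshold):
    """Data-parallel peak detection: a sample is a peak if it exceeds the
    threshold and both of its neighbours (a simple local-maximum rule).
    The boolean mask plus np.flatnonzero mimics a parallel compact step
    that gathers the indices of all detected peaks."""
    s = np.asarray(signal, dtype=float)
    interior = s[1:-1]
    is_peak = (interior > threshold) & (interior > s[:-2]) & (interior > s[2:])
    return np.flatnonzero(is_peak) + 1   # shift back to original indexing

# Toy example: noisy trace with two injected spikes.
rng = np.random.default_rng(1)
trace = rng.normal(0.0, 0.1, 1000)
trace[200] += 3.0
trace[700] += 4.0
print(detect_peaks(trace, threshold=1.0))   # -> [200 700]
```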

  9. Massively parallel multicanonical simulations

    Science.gov (United States)

    Gross, Jonathan; Zierenberg, Johannes; Weigel, Martin; Janke, Wolfhard

    2018-03-01

    Generalized-ensemble Monte Carlo simulations such as the multicanonical method and similar techniques are among the most efficient approaches for simulations of systems undergoing discontinuous phase transitions or with rugged free-energy landscapes. As Markov chain methods, they are inherently serial computationally. It was demonstrated recently, however, that a combination of independent simulations that communicate weight updates at variable intervals allows for the efficient utilization of parallel computational resources for multicanonical simulations. Implementing this approach for the many-thread architecture provided by current generations of graphics processing units (GPUs), we show how it can be efficiently employed with of the order of 10^4 parallel walkers and beyond, thus constituting a versatile tool for Monte Carlo simulations in the era of massively parallel computing. We provide the fully documented source code for the approach applied to the paradigmatic example of the two-dimensional Ising model as starting point and reference for practitioners in the field.
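
    The core idea, independent multicanonical walkers that sample with a shared weight function and periodically pool their histograms to update it, can be sketched compactly. The following serial emulation on a small two-dimensional Ising lattice is illustrative only; the lattice size, update schedule and simple flattening recursion are assumptions and do not reproduce the authors' GPU implementation.

```python
import numpy as np

L = 8                                        # linear lattice size
N = L * L
ENERGIES = np.arange(-2 * N, 2 * N + 1, 4)   # possible Ising energies
E_INDEX = {e: i for i, e in enumerate(ENERGIES)}

def sweep(spins, energy, log_w, rng):
    """One Metropolis sweep with multicanonical acceptance min(1, W_new/W_old)."""
    for _ in range(N):
        i, j = rng.integers(L, size=2)
        nb = spins[(i + 1) % L, j] + spins[(i - 1) % L, j] \
           + spins[i, (j + 1) % L] + spins[i, (j - 1) % L]
        dE = 2 * spins[i, j] * nb
        new_e = energy + dE
        dlw = log_w[E_INDEX[new_e]] - log_w[E_INDEX[energy]]
        if rng.random() < np.exp(min(dlw, 0.0)):
            spins[i, j] *= -1
            energy = new_e
    return energy

def parallel_multicanonical(n_walkers=8, n_iterations=10, sweeps_per_iter=20):
    rng = np.random.default_rng(0)
    log_w = np.zeros(len(ENERGIES))          # flat initial weights
    walkers = [np.ones((L, L), dtype=int) for _ in range(n_walkers)]
    energies = [-2 * N] * n_walkers          # ground-state energy of all-up lattice
    for _ in range(n_iterations):
        hist = np.zeros(len(ENERGIES))
        # The loop over walkers is trivially parallel; each walker only needs
        # the shared log-weights, which are updated between rounds.
        for k in range(n_walkers):
            for _ in range(sweeps_per_iter):
                energies[k] = sweep(walkers[k], energies[k], log_w, rng)
                hist[E_INDEX[energies[k]]] += 1
        visited = hist > 0
        log_w[visited] -= np.log(hist[visited])   # flatten the pooled histogram
        log_w -= log_w.max()                      # normalise for stability
    return log_w

log_w = parallel_multicanonical()
print("log-weight span:", log_w.min(), "to", log_w.max())
```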

  10. The language parallel Pascal and other aspects of the massively parallel processor

    Science.gov (United States)

    Reeves, A. P.; Bruner, J. D.

    1982-01-01

    A high level language for the Massively Parallel Processor (MPP) was designed. This language, called Parallel Pascal, is described in detail. A description of the language design, a description of the intermediate language, Parallel P-Code, and details for the MPP implementation are included. Formal descriptions of Parallel Pascal and Parallel P-Code are given. A compiler was developed which converts programs in Parallel Pascal into the intermediate Parallel P-Code language. The code generator to complete the compiler for the MPP is being developed independently. A Parallel Pascal to Pascal translator was also developed. The architecture design for a VLSI version of the MPP was completed with a description of fault tolerant interconnection networks. The memory arrangement aspects of the MPP are discussed and a survey of other high level languages is given.

  11. Massive parallel sequencing in sarcoma pathobiology: state of the art and perspectives.

    Science.gov (United States)

    Brenca, Monica; Maestro, Roberta

    2015-01-01

    Sarcomas are an aggressive and highly heterogeneous group of mesenchymal malignancies with different morphologies and clinical behavior. Current therapeutic strategies remain unsatisfactory. Cytogenetic and molecular characterization of these tumors is resulting in the breakdown of the classical histopathological categories into molecular subgroups that better define sarcoma pathobiology and pave the way to more precise diagnostic criteria and novel therapeutic opportunities. The purpose of this short review is to summarize the state-of-the-art on the exploitation of massive parallel sequencing technologies, also known as next generation sequencing, in the elucidation of sarcoma pathobiology and to discuss how these applications may impact on diagnosis, prognosis and therapy of these tumors.

  12. Practical tools to implement massive parallel pyrosequencing of PCR products in next generation molecular diagnostics.

    Directory of Open Access Journals (Sweden)

    Kim De Leeneer

    Despite improvements in terms of sequence quality and price per basepair, Sanger sequencing remains restricted to screening of individual disease genes. The development of massively parallel sequencing (MPS) technologies heralded an era in which molecular diagnostics for multigenic disorders becomes reality. Here, we outline different PCR amplification based strategies for the screening of a multitude of genes in a patient cohort. We performed a thorough evaluation in terms of set-up, coverage and sequencing variants on the data of 10 GS-FLX experiments (over 200 patients). Crucially, we determined the actual coverage that is required for reliable diagnostic results using MPS, and provide a tool to calculate the number of patients that can be screened in a single run. Finally, we provide an overview of factors contributing to false negative or false positive mutation calls and suggest ways to maximize sensitivity and specificity, both important in a routine setting. By describing practical strategies for screening of multigenic disorders in a multitude of samples, and by providing answers to questions about minimum required coverage, the number of patients that can be screened in a single run and the factors that may affect sensitivity and specificity, we hope to facilitate the implementation of MPS technology in molecular diagnostics.
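
    The bookkeeping behind such a tool reduces to a simple throughput calculation: how many patients fit in one run given the platform's usable read yield, the number of amplicons per patient and the minimum per-amplicon coverage. The sketch below illustrates that calculation; all numerical values are hypothetical placeholders, not the thresholds determined in the paper.

```python
def patients_per_run(reads_per_run, amplicons_per_patient,
                     min_coverage, efficiency=0.8):
    """Rough estimate of how many patients fit in one sequencing run.

    reads_per_run         : total reads the platform delivers per run
    amplicons_per_patient : number of PCR amplicons sequenced per patient
    min_coverage          : minimum reads required per amplicon for
                            reliable variant calling
    efficiency            : fraction of reads surviving filtering and
                            pooling imbalance (illustrative assumption)
    """
    usable = reads_per_run * efficiency
    reads_needed = amplicons_per_patient * min_coverage
    return int(usable // reads_needed)

# Hypothetical GS-FLX-like numbers: 1,000,000 reads per run, 40 amplicons
# per patient, 40x minimum coverage per amplicon.
print(patients_per_run(1_000_000, 40, 40))   # -> 500
```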

  13. Massively Parallel Computing: A Sandia Perspective

    Energy Technology Data Exchange (ETDEWEB)

    Dosanjh, Sudip S.; Greenberg, David S.; Hendrickson, Bruce; Heroux, Michael A.; Plimpton, Steve J.; Tomkins, James L.; Womble, David E.

    1999-05-06

    The computing power available to scientists and engineers has increased dramatically in the past decade, due in part to progress in making massively parallel computing practical and available. The expectation for these machines has been great. The reality is that progress has been slower than expected. Nevertheless, massively parallel computing is beginning to realize its potential for enabling significant break-throughs in science and engineering. This paper provides a perspective on the state of the field, colored by the authors' experiences using large scale parallel machines at Sandia National Laboratories. We address trends in hardware, system software and algorithms, and we also offer our view of the forces shaping the parallel computing industry.

  14. Massively parallel de novo protein design for targeted therapeutics

    KAUST Repository

    Chevalier, Aaron

    2017-09-26

    De novo protein design holds promise for creating small stable proteins with shapes customized to bind therapeutic targets. We describe a massively parallel approach for designing, manufacturing and screening mini-protein binders, integrating large-scale computational design, oligonucleotide synthesis, yeast display screening and next-generation sequencing. We designed and tested 22,660 mini-proteins of 37-43 residues that target influenza haemagglutinin and botulinum neurotoxin B, along with 6,286 control sequences to probe contributions to folding and binding, and identified 2,618 high-affinity binders. Comparison of the binding and non-binding design sets, which are two orders of magnitude larger than any previously investigated, enabled the evaluation and improvement of the computational model. Biophysical characterization of a subset of the binder designs showed that they are extremely stable and, unlike antibodies, do not lose activity after exposure to high temperatures. The designs elicit little or no immune response and provide potent prophylactic and therapeutic protection against influenza, even after extensive repeated dosing.

  15. Massively parallel de novo protein design for targeted therapeutics

    KAUST Repository

    Chevalier, Aaron; Silva, Daniel-Adriano; Rocklin, Gabriel J.; Hicks, Derrick R.; Vergara, Renan; Murapa, Patience; Bernard, Steffen M.; Zhang, Lu; Lam, Kwok-Ho; Yao, Guorui; Bahl, Christopher D.; Miyashita, Shin-Ichiro; Goreshnik, Inna; Fuller, James T.; Koday, Merika T.; Jenkins, Cody M.; Colvin, Tom; Carter, Lauren; Bohn, Alan; Bryan, Cassie M.; Fernández-Velasco, D. Alejandro; Stewart, Lance; Dong, Min; Huang, Xuhui; Jin, Rongsheng; Wilson, Ian A.; Fuller, Deborah H.; Baker, David

    2017-01-01

    De novo protein design holds promise for creating small stable proteins with shapes customized to bind therapeutic targets. We describe a massively parallel approach for designing, manufacturing and screening mini-protein binders, integrating large-scale computational design, oligonucleotide synthesis, yeast display screening and next-generation sequencing. We designed and tested 22,660 mini-proteins of 37-43 residues that target influenza haemagglutinin and botulinum neurotoxin B, along with 6,286 control sequences to probe contributions to folding and binding, and identified 2,618 high-affinity binders. Comparison of the binding and non-binding design sets, which are two orders of magnitude larger than any previously investigated, enabled the evaluation and improvement of the computational model. Biophysical characterization of a subset of the binder designs showed that they are extremely stable and, unlike antibodies, do not lose activity after exposure to high temperatures. The designs elicit little or no immune response and provide potent prophylactic and therapeutic protection against influenza, even after extensive repeated dosing.

  16. Massively parallel de novo protein design for targeted therapeutics

    Science.gov (United States)

    Chevalier, Aaron; Silva, Daniel-Adriano; Rocklin, Gabriel J.; Hicks, Derrick R.; Vergara, Renan; Murapa, Patience; Bernard, Steffen M.; Zhang, Lu; Lam, Kwok-Ho; Yao, Guorui; Bahl, Christopher D.; Miyashita, Shin-Ichiro; Goreshnik, Inna; Fuller, James T.; Koday, Merika T.; Jenkins, Cody M.; Colvin, Tom; Carter, Lauren; Bohn, Alan; Bryan, Cassie M.; Fernández-Velasco, D. Alejandro; Stewart, Lance; Dong, Min; Huang, Xuhui; Jin, Rongsheng; Wilson, Ian A.; Fuller, Deborah H.; Baker, David

    2018-01-01

    De novo protein design holds promise for creating small stable proteins with shapes customized to bind therapeutic targets. We describe a massively parallel approach for designing, manufacturing and screening mini-protein binders, integrating large-scale computational design, oligonucleotide synthesis, yeast display screening and next-generation sequencing. We designed and tested 22,660 mini-proteins of 37–43 residues that target influenza haemagglutinin and botulinum neurotoxin B, along with 6,286 control sequences to probe contributions to folding and binding, and identified 2,618 high-affinity binders. Comparison of the binding and non-binding design sets, which are two orders of magnitude larger than any previously investigated, enabled the evaluation and improvement of the computational model. Biophysical characterization of a subset of the binder designs showed that they are extremely stable and, unlike antibodies, do not lose activity after exposure to high temperatures. The designs elicit little or no immune response and provide potent prophylactic and therapeutic protection against influenza, even after extensive repeated dosing. PMID:28953867

  17. Massively Parallel Algorithms for Solution of Schrodinger Equation

    Science.gov (United States)

    Fijany, Amir; Barhen, Jacob; Toomerian, Nikzad

    1994-01-01

    In this paper, massively parallel algorithms for the solution of the Schrodinger equation are developed. Our results clearly indicate that the Crank-Nicolson method, in addition to its excellent numerical properties, is also highly suitable for massively parallel computation.

  18. Massively parallel mathematical sieves

    Energy Technology Data Exchange (ETDEWEB)

    Montry, G.R.

    1989-01-01

    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
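
    The block-decomposed sieve is easy to sketch: small base primes up to the square root of n are computed once, the integer range is split across workers, and each worker strikes multiples of the base primes inside its own block. The Python sketch below uses a process pool as a stand-in for the hypercube's processing elements; the worker count and block layout are illustrative, not the original scattered-decomposition implementation.

```python
import math
from multiprocessing import Pool

def base_primes(limit):
    """Serial sieve for the small primes up to sqrt(n), shared by every worker."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, math.isqrt(limit) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [p for p in range(2, limit + 1) if sieve[p]]

def sieve_block(args):
    """Sieve one contiguous block [lo, hi) using the shared base primes."""
    lo, hi, primes = args
    block = bytearray([1]) * (hi - lo)
    for p in primes:
        start = max(p * p, (lo + p - 1) // p * p)   # first multiple of p in the block
        block[start - lo :: p] = bytearray(len(block[start - lo :: p]))
    return [lo + i for i, flag in enumerate(block) if flag and lo + i >= 2]

def parallel_sieve(n, n_workers=4):
    primes = base_primes(math.isqrt(n))
    step = (n + n_workers) // n_workers
    tasks = [(lo, min(lo + step, n + 1), primes) for lo in range(0, n + 1, step)]
    with Pool(n_workers) as pool:
        return sorted(p for block in pool.map(sieve_block, tasks) for p in block)

if __name__ == "__main__":
    print(parallel_sieve(100))   # primes up to 100
```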

  19. The apeNEXT project

    International Nuclear Information System (INIS)

    Belletti, F.; Bodin, F.; Boucaud, Ph.; Cabibbo, N.; Lonardo, A.; De Luca, S.; Lukyanov, M.; Micheli, J.; Morin, L.; Pene, O.; Pleiter, D.; Rapuano, F.; Rossetti, D.; Schifano, S.F.; Simma, H.; Tripiccione, R.; Vicini, P.

    2006-01-01

    Numerical simulations in theoretical high-energy physics (Lattice QCD) require huge computing resources. Several generations of massively parallel computers optimised for these applications have been developed within the APE (array processor experiment) project. Large prototype systems of the latest generation, apeNEXT, are currently being assembled and tested. This contribution explains how the apeNEXT architecture is optimised for Lattice QCD, provides an overview of the hardware and software of apeNEXT, and describes its new features, like the SPMD programming model and the C compiler

  20. Adapting algorithms to massively parallel hardware

    CERN Document Server

    Sioulas, Panagiotis

    2016-01-01

    In the recent years, the trend in computing has shifted from delivering processors with faster clock speeds to increasing the number of cores per processor. This marks a paradigm shift towards parallel programming in which applications are programmed to exploit the power provided by multi-cores. Usually there is gain in terms of the time-to-solution and the memory footprint. Specifically, this trend has sparked an interest towards massively parallel systems that can provide a large number of processors, and possibly computing nodes, as in the GPUs and MPPAs (Massively Parallel Processor Arrays). In this project, the focus was on two distinct computing problems: k-d tree searches and track seeding cellular automata. The goal was to adapt the algorithms to parallel systems and evaluate their performance in different cases.

  1. A Massively Parallel Face Recognition System

    Directory of Open Access Journals (Sweden)

    Lahdenoja Olli

    2007-01-01

    We present methods for processing LBPs (local binary patterns) with massively parallel hardware, especially with the CNN-UM (cellular nonlinear network-universal machine). In particular, we present a framework for implementing a massively parallel face recognition system, including a dedicated highly accurate algorithm suitable for various types of platforms (e.g., CNN-UM and digital FPGA). We study in detail a dedicated mixed-mode implementation of the algorithm and estimate its implementation cost in view of its performance and accuracy restrictions.

  2. A Massively Parallel Face Recognition System

    Directory of Open Access Journals (Sweden)

    Ari Paasio

    2006-12-01

    We present methods for processing LBPs (local binary patterns) with massively parallel hardware, especially with the CNN-UM (cellular nonlinear network-universal machine). In particular, we present a framework for implementing a massively parallel face recognition system, including a dedicated highly accurate algorithm suitable for various types of platforms (e.g., CNN-UM and digital FPGA). We study in detail a dedicated mixed-mode implementation of the algorithm and estimate its implementation cost in view of its performance and accuracy restrictions.

  3. Programming massively parallel processors a hands-on approach

    CERN Document Server

    Kirk, David B

    2010-01-01

    Programming Massively Parallel Processors discusses basic concepts about parallel programming and GPU architecture. "Massively parallel" refers to the use of a large number of processors to perform a set of computations in a coordinated parallel way. The book details various techniques for constructing parallel programs. It also discusses the development process, performance level, floating-point format, parallel patterns, and dynamic parallelism. The book serves as a teaching guide where parallel programming is the main topic of the course. It builds on the basics of C programming for CUDA, a parallel programming environment that is supported on NVIDIA GPUs. Composed of 12 chapters, the book begins with basic information about the GPU as a parallel computer source. It also explains the main concepts of CUDA, data parallelism, and the importance of memory access efficiency using CUDA. The target audience of the book is graduate and undergraduate students from all science and engineering disciplines who ...

  4. Special Issue: Next Generation DNA Sequencing

    Directory of Open Access Journals (Sweden)

    Paul Richardson

    2010-10-01

    Next Generation Sequencing (NGS) refers to technologies that do not rely on traditional dideoxy-nucleotide (Sanger) sequencing, where labeled DNA fragments are physically resolved by electrophoresis. These new technologies rely on different strategies, but essentially all of them make use of real-time data collection of a base-level incorporation event across a massive number of reactions (on the order of millions, versus 96 for capillary electrophoresis, for instance). The major commercial NGS platforms available to researchers are the 454 Genome Sequencer (Roche), the Illumina (formerly Solexa) Genome Analyzer, the SOLiD system (Applied Biosystems/Life Technologies) and the Heliscope (Helicos Corporation). The techniques and different strategies utilized by these platforms are reviewed in a number of the papers in this special issue. These technologies are enabling new applications that take advantage of the massive data produced by this next generation of sequencing instruments. [...]

  5. Analysis of multigrid methods on massively parallel computers: Architectural implications

    Science.gov (United States)

    Matheson, Lesley R.; Tarjan, Robert E.

    1993-01-01

    We study the potential performance of multigrid algorithms running on massively parallel computers with the intent of discovering whether presently envisioned machines will provide an efficient platform for such algorithms. We consider the domain parallel version of the standard V cycle algorithm on model problems, discretized using finite difference techniques in two and three dimensions on block structured grids of size 10^6 and 10^9, respectively. Our models of parallel computation were developed to reflect the computing characteristics of the current generation of massively parallel multicomputers. These models are based on an interconnection network of 256 to 16,384 message passing, 'workstation size' processors executing in an SPMD mode. The first model accomplishes interprocessor communications through a multistage permutation network. The communication cost is a logarithmic function which is similar to the costs in a variety of different topologies. The second model allows single stage communication costs only. Both models were designed with information provided by machine developers and utilize implementation derived parameters. With the medium grain parallelism of the current generation and the high fixed cost of an interprocessor communication, our analysis suggests an efficient implementation requires the machine to support the efficient transmission of long messages (up to 1000 words), or the high initiation cost of a communication must be significantly reduced through an alternative optimization technique. Furthermore, with variable length message capability, our analysis suggests the low diameter multistage networks provide little or no advantage over a simple single stage communications network.

  6. RAMA: A file system for massively parallel computers

    Science.gov (United States)

    Miller, Ethan L.; Katz, Randy H.

    1993-01-01

    This paper describes a file system design for massively parallel computers which makes very efficient use of a few disks per processor. This overcomes the traditional I/O bottleneck of massively parallel machines by storing the data on disks within the high-speed interconnection network. In addition, the file system, called RAMA, requires little inter-node synchronization, removing another common bottleneck in parallel processor file systems. Support for a large tertiary storage system can easily be integrated into the file system; in fact, RAMA runs most efficiently when tertiary storage is used.

  7. Simultaneous digital quantification and fluorescence-based size characterization of massively parallel sequencing libraries.

    Science.gov (United States)

    Laurie, Matthew T; Bertout, Jessica A; Taylor, Sean D; Burton, Joshua N; Shendure, Jay A; Bielas, Jason H

    2013-08-01

    Due to the high cost of failed runs and suboptimal data yields, quantification and determination of fragment size range are crucial steps in the library preparation process for massively parallel sequencing (or next-generation sequencing). Current library quality control methods commonly involve quantification using real-time quantitative PCR and size determination using gel or capillary electrophoresis. These methods are laborious and subject to a number of significant limitations that can make library calibration unreliable. Herein, we propose and test an alternative method for quality control of sequencing libraries using droplet digital PCR (ddPCR). By exploiting a correlation we have discovered between droplet fluorescence and amplicon size, we achieve the joint quantification and size determination of target DNA with a single ddPCR assay. We demonstrate the accuracy and precision of applying this method to the preparation of sequencing libraries.

  8. Massively parallel quantum computer simulator

    NARCIS (Netherlands)

    De Raedt, K.; Michielsen, K.; De Raedt, H.; Trieu, B.; Arnold, G.; Richter, M.; Lippert, Th.; Watanabe, H.; Ito, N.

    2007-01-01

    We describe portable software to simulate universal quantum computers on massively parallel computers. We illustrate the use of the simulation software by running various quantum algorithms on different computer architectures, such as an IBM BlueGene/L, an IBM Regatta p690+, a Hitachi SR11000/J1, a Cray

  9. Massively parallel Fokker-Planck code ALLAp

    International Nuclear Information System (INIS)

    Batishcheva, A.A.; Krasheninnikov, S.I.; Craddock, G.G.; Djordjevic, V.

    1996-01-01

    The Fokker-Planck code ALLA, recently developed for workstations, simulates the temporal evolution of 1V, 2V and 1D2V collisional edge plasmas. In this work we present the results of code parallelization on the CRI T3D massively parallel platform (the ALLAp version). Simultaneously we benchmark the 1D2V parallel version against an analytic self-similar solution of the collisional kinetic equation. This test is not trivial, as it demands a very strong spatial temperature and density variation within the simulation domain. (orig.)

  10. The 2nd Symposium on the Frontiers of Massively Parallel Computations

    Science.gov (United States)

    Mills, Ronnie (Editor)

    1988-01-01

    Programming languages, computer graphics, neural networks, massively parallel computers, SIMD architecture, algorithms, digital terrain models, sort computation, simulation of charged particle transport on the massively parallel processor and image processing are among the topics discussed.

  11. Next generation initiation techniques

    Science.gov (United States)

    Warner, Tom; Derber, John; Zupanski, Milija; Cohn, Steve; Verlinde, Hans

    1993-01-01

    Four-dimensional data assimilation strategies can generally be classified as either current or next generation, depending upon whether they are used operationally or not. Current-generation data-assimilation techniques are those that are presently used routinely in operational-forecasting or research applications. They can be classified into the following categories: intermittent assimilation, Newtonian relaxation, and physical initialization. It should be noted that these techniques are the subject of continued research, and their improvement will parallel the development of next generation techniques described by the other speakers. Next generation assimilation techniques are those that are under development but are not yet used operationally. Most of these procedures are derived from control theory or variational methods and primarily represent continuous assimilation approaches, in which the data and model dynamics are 'fitted' to each other in an optimal way. Another 'next generation' category is the initialization of convective-scale models. Intermittent assimilation systems use an objective analysis to combine all observations within a time window that is centered on the analysis time. Continuous first-generation assimilation systems are usually based on the Newtonian-relaxation or 'nudging' techniques. Physical initialization procedures generally involve the use of standard or nonstandard data to force some physical process in the model during an assimilation period. Under the topic of next-generation assimilation techniques, variational approaches are currently being actively developed. Variational approaches seek to minimize a cost or penalty function which measures a model's fit to observations, background fields and other imposed constraints. Alternatively, the Kalman filter technique, which is also under investigation as a data assimilation procedure for numerical weather prediction, can yield acceptable initial conditions for mesoscale models. The

  12. Impact analysis on a massively parallel computer

    International Nuclear Information System (INIS)

    Zacharia, T.; Aramayo, G.A.

    1994-01-01

    Advanced mathematical techniques and computer simulation play a major role in evaluating and enhancing the design of beverage cans, industrial, and transportation containers for improved performance. Numerical models are used to evaluate the impact requirements of containers used by the Department of Energy (DOE) for transporting radioactive materials. Many of these models are highly compute-intensive. An analysis may require several hours of computational time on current supercomputers despite the simplicity of the models being studied. As computer simulations and materials databases grow in complexity, massively parallel computers have become important tools. Massively parallel computational research at the Oak Ridge National Laboratory (ORNL) and its application to the impact analysis of shipping containers is briefly described in this paper

  13. Massively parallel evolutionary computation on GPGPUs

    CERN Document Server

    Tsutsui, Shigeyoshi

    2013-01-01

    Evolutionary algorithms (EAs) are metaheuristics that learn from natural collective behavior and are applied to solve optimization problems in domains such as scheduling, engineering, bioinformatics, and finance. Such applications demand acceptable solutions with high-speed execution using finite computational resources. Therefore, there have been many attempts to develop platforms for running parallel EAs using multicore machines, massively parallel cluster machines, or grid computing environments. Recent advances in general-purpose computing on graphics processing units (GPGPU) have opened u

  14. Massively parallel sequencing of forensic STRs

    DEFF Research Database (Denmark)

    Parson, Walther; Ballard, David; Budowle, Bruce

    2016-01-01

    The DNA Commission of the International Society for Forensic Genetics (ISFG) is reviewing factors that need to be considered ahead of the adoption by the forensic community of short tandem repeat (STR) genotyping by massively parallel sequencing (MPS) technologies. MPS produces sequence data that...

  15. A discrete ordinate response matrix method for massively parallel computers

    International Nuclear Information System (INIS)

    Hanebutte, U.R.; Lewis, E.E.

    1991-01-01

    A discrete ordinate response matrix method is formulated for the solution of neutron transport problems on massively parallel computers. The response matrix formulation eliminates iteration on the scattering source. The nodal matrices which result from the diamond-differenced equations are utilized in a factored form which minimizes memory requirements and significantly reduces the required number of arithmetic operations. The algorithm utilizes massive parallelism by assigning each spatial node to a processor. The algorithm is accelerated effectively by a synthetic method in which the low-order diffusion equations are also solved by massively parallel red/black iterations. The method has been implemented on a 16k Connection Machine-2, and S8 and S16 solutions have been obtained for fixed-source benchmark problems in x-y geometry

  16. Template based parallel checkpointing in a massively parallel computer system

    Science.gov (United States)

    Archer, Charles Jens [Rochester, MN; Inglett, Todd Alan [Rochester, MN

    2009-01-13

    A method and apparatus for a template based parallel checkpoint save for a massively parallel supercomputer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
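
    The mechanism, comparing each node's checkpoint blocks against a previously produced template and keeping only the blocks that differ, can be illustrated in a few lines. The sketch below is a serial toy version with an assumed block size and MD5 checksums; it is not the patented implementation and omits the broadcast and the rsync-style rolling comparison.

```python
import hashlib
import zlib

BLOCK_SIZE = 4096   # illustrative block size

def block_checksums(data, block_size=BLOCK_SIZE):
    """Checksum every fixed-size block of a checkpoint image."""
    return [hashlib.md5(data[i:i + block_size]).digest()
            for i in range(0, len(data), block_size)]

def delta_checkpoint(node_data, template_data):
    """Keep only the blocks of a node's checkpoint that differ from the
    broadcast template; matching blocks are replaced by a reference."""
    template_sums = block_checksums(template_data)
    delta = []
    for idx, start in enumerate(range(0, len(node_data), BLOCK_SIZE)):
        block = node_data[start:start + BLOCK_SIZE]
        digest = hashlib.md5(block).digest()
        if idx < len(template_sums) and digest == template_sums[idx]:
            delta.append(("ref", idx))                    # identical to template
        else:
            delta.append(("data", zlib.compress(block)))  # store compressed block
    return delta

# Toy example: a node whose state differs from the template in one region.
template = bytes(64 * 1024)                       # 64 KiB of zeros
node = bytearray(template)
node[10_000:10_050] = b"\xff" * 50                # local modification
delta = delta_checkpoint(bytes(node), template)
changed = sum(1 for kind, _ in delta if kind == "data")
print(f"{changed} of {len(delta)} blocks need to be stored")
```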

  17. Increasing phylogenetic resolution at low taxonomic levels using massively parallel sequencing of chloroplast genomes

    Science.gov (United States)

    Matthew Parks; Richard Cronn; Aaron Liston

    2009-01-01

    We reconstruct the infrageneric phylogeny of Pinus from 37 nearly-complete chloroplast genomes (average 109 kilobases each of an approximately 120 kilobase genome) generated using multiplexed massively parallel sequencing. We found that 30/33 ingroup nodes resolved with > 95-percent bootstrap support; this is a substantial improvement relative...

  18. A Programming Model for Massive Data Parallelism with Data Dependencies

    International Nuclear Information System (INIS)

    Cui, Xiaohui; Mueller, Frank; Potok, Thomas E.; Zhang, Yongpeng

    2009-01-01

    Accelerating processors can often be more cost and energy effective for a wide range of data-parallel computing problems than general-purpose processors. For graphics processor units (GPUs), this is particularly the case when program development is aided by environments such as NVIDIA's Compute Unified Device Architecture (CUDA), which dramatically reduces the gap between domain-specific architectures and general purpose programming. Nonetheless, general-purpose GPU (GPGPU) programming remains subject to several restrictions. Most significantly, the separation of host (CPU) and accelerator (GPU) address spaces requires explicit management of GPU memory resources, especially for massive data parallelism that well exceeds the memory capacity of GPUs. One solution to this problem is to transfer data between the GPU and host memories frequently. In this work, we investigate another approach. We run massively data-parallel applications on GPU clusters. We further propose a programming model for massive data parallelism with data dependencies for this scenario. Experience from micro benchmarks and real-world applications shows that our model provides not only ease of programming but also significant performance gains

  19. A safe and easy method for building consensus HIV sequences from 454 massively parallel sequencing data.

    Science.gov (United States)

    Fernández-Caballero Rico, Jose Ángel; Chueca Porcuna, Natalia; Álvarez Estévez, Marta; Mosquera Gutiérrez, María Del Mar; Marcos Maeso, María Ángeles; García, Federico

    2018-02-01

    To show how to generate a consensus sequence from massively parallel sequencing data obtained in routine HIV antiretroviral resistance studies, which may be suitable for molecular epidemiology studies. Paired Sanger (Trugene-Siemens) and next-generation sequencing (NGS) (454 GSJunior-Roche) HIV RT and protease sequences from 62 patients were studied. NGS consensus sequences were generated using Mesquite, using 10%, 15%, and 20% thresholds. Molecular evolutionary genetics analysis (MEGA) was used for phylogenetic studies. At a 10% threshold, NGS-Sanger sequences from 17/62 patients were phylogenetically related, with a median bootstrap value of 88% (IQR 83.5-95.5). Association increased to 36/62 sequences, with a median bootstrap of 94% (IQR 85.5-98), using a 15% threshold. Maximum association was at the 20% threshold, with 61/62 sequences associated and a median bootstrap value of 99% (IQR 98-100). A safe method is presented to generate consensus sequences from HIV-NGS data at a 20% threshold, which will prove useful for molecular epidemiological studies. Copyright © 2016 Elsevier España, S.L.U. and Sociedad Española de Enfermedades Infecciosas y Microbiología Clínica. All rights reserved.
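
    The thresholding idea, keeping every base whose within-sample frequency reaches the chosen cutoff and encoding mixtures as ambiguity codes, can be sketched directly. The snippet below works on a toy set of pre-aligned, equal-length reads; the alignment step, the reduced IUPAC table and the example reads are illustrative assumptions, and this is not the Mesquite workflow used in the paper.

```python
from collections import Counter

# Minimal IUPAC codes for the mixtures used in the example (illustrative subset).
IUPAC = {frozenset("A"): "A", frozenset("C"): "C",
         frozenset("G"): "G", frozenset("T"): "T",
         frozenset("AG"): "R", frozenset("CT"): "Y",
         frozenset("AC"): "M", frozenset("GT"): "K",
         frozenset("AT"): "W", frozenset("CG"): "S"}

def consensus(aligned_reads, threshold=0.20):
    """Column-wise consensus over equal-length aligned reads.

    A base is kept at a position if its frequency is at least `threshold`;
    when several bases pass, the matching IUPAC ambiguity code is used.
    """
    seq = []
    for column in zip(*aligned_reads):
        counts = Counter(b for b in column if b != "-")
        depth = sum(counts.values())
        if depth == 0:
            seq.append("N")           # gap-only column
            continue
        kept = frozenset(b for b, c in counts.items() if c / depth >= threshold)
        seq.append(IUPAC.get(kept, "N"))
    return "".join(seq)

reads = ["ACGTA",
         "ACGTA",
         "ACATA",
         "ACGTA",
         "ACGTA"]
print(consensus(reads, threshold=0.20))   # -> "ACRTA"
```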

  20. Developing a Massively Parallel Forward Projection Radiography Model for Large-Scale Industrial Applications

    Energy Technology Data Exchange (ETDEWEB)

    Bauerle, Matthew [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-08-01

    This project utilizes Graphics Processing Units (GPUs) to compute radiograph simulations for arbitrary objects. The generation of radiographs, also known as the forward projection imaging model, is computationally intensive and not widely utilized. The goal of this research is to develop a massively parallel algorithm that can compute forward projections for objects with a trillion voxels (3D pixels). To achieve this end, the data are divided into blocks that can each fit into GPU memory. The forward projected image is also divided into segments to allow for future parallelization and to avoid needless computations.
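
    The blocking step itself is simple bookkeeping: choose slabs of the voxel volume whose memory footprint stays under the GPU budget. A minimal sketch is shown below; the 8 GiB budget, 4-byte voxels and slab-along-one-axis layout are assumptions for illustration, not the project's actual decomposition.

```python
def plan_blocks(shape, bytes_per_voxel=4, gpu_memory_bytes=8 * 2**30,
                safety_fraction=0.8):
    """Split a voxel volume into contiguous slabs along the slowest axis so
    that each slab fits within a fraction of GPU memory.

    Returns a list of (z_start, z_stop) slab boundaries.
    """
    nz, ny, nx = shape
    budget = gpu_memory_bytes * safety_fraction
    bytes_per_slice = ny * nx * bytes_per_voxel
    slices_per_block = max(1, int(budget // bytes_per_slice))
    return [(z, min(z + slices_per_block, nz))
            for z in range(0, nz, slices_per_block)]

# Example: a 10,000^3 voxel object (10^12 voxels) against an assumed 8 GiB GPU.
blocks = plan_blocks((10_000, 10_000, 10_000))
print(len(blocks), "blocks, first:", blocks[0])
```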

  1. Massively parallel red-black algorithms for x-y-z response matrix equations

    International Nuclear Information System (INIS)

    Hanebutte, U.R.; Laurin-Kovitz, K.; Lewis, E.E.

    1992-01-01

    Recently, both discrete ordinates and spherical harmonic (Sn and Pn) methods have been cast in the form of response matrices. In x-y geometry, massively parallel algorithms have been developed to solve the resulting response matrix equations on the Connection Machine family of parallel computers, the CM-2, CM-200, and CM-5. These algorithms utilize two-cycle iteration on a red-black checkerboard. In this work we examine the use of massively parallel red-black algorithms to solve response matrix equations in three dimensions. The longer term objective is to utilize massively parallel algorithms to solve Sn and/or Pn response matrix problems. In this exploratory examination, however, we consider the simple 6 x 6 response matrices that are derivable from fine-mesh diffusion approximations in three dimensions

  2. Increasing phylogenetic resolution at low taxonomic levels using massively parallel sequencing of chloroplast genomes

    Directory of Open Access Journals (Sweden)

    Cronn Richard

    2009-12-01

    Background: Molecular evolutionary studies share the common goal of elucidating historical relationships, and the common challenge of adequately sampling taxa and characters. Particularly at low taxonomic levels, recent divergence, rapid radiations, and conservative genome evolution yield limited sequence variation, and dense taxon sampling is often desirable. Recent advances in massively parallel sequencing make it possible to rapidly obtain large amounts of sequence data, and multiplexing makes extensive sampling of megabase sequences feasible. Is it possible to efficiently apply massively parallel sequencing to increase phylogenetic resolution at low taxonomic levels? Results: We reconstruct the infrageneric phylogeny of Pinus from 37 nearly-complete chloroplast genomes (average 109 kilobases each of an approximately 120 kilobase genome) generated using multiplexed massively parallel sequencing. 30/33 ingroup nodes resolved with ≥ 95% bootstrap support; this is a substantial improvement relative to prior studies, and shows massively parallel sequencing-based strategies can produce sufficient high quality sequence to reach support levels originally proposed for the phylogenetic bootstrap. Resampling simulations show that at least the entire plastome is necessary to fully resolve Pinus, particularly in rapidly radiating clades. Meta-analysis of 99 published infrageneric phylogenies shows that whole plastome analysis should provide similar gains across a range of plant genera. A disproportionate amount of phylogenetic information resides in two loci (ycf1, ycf2), highlighting their unusual evolutionary properties. Conclusion: Plastome sequencing is now an efficient option for increasing phylogenetic resolution at lower taxonomic levels in plant phylogenetic and population genetic analyses. With continuing improvements in sequencing capacity, the strategies herein should revolutionize efforts requiring dense taxon and character sampling.

  3. A massively parallel discrete ordinates response matrix method for neutron transport

    International Nuclear Information System (INIS)

    Hanebutte, U.R.; Lewis, E.E.

    1992-01-01

    In this paper a discrete ordinates response matrix method is formulated with anisotropic scattering for the solution of neutron transport problems on massively parallel computers. The response matrix formulation eliminates iteration on the scattering source. The nodal matrices that result from the diamond-differenced equations are utilized in a factored form that minimizes memory requirements and significantly reduces the number of arithmetic operations required per node. The red-black solution algorithm utilizes massive parallelism by assigning each spatial node to one or more processors. The algorithm is accelerated by a synthetic method in which the low-order diffusion equations are also solved by massively parallel red-black iterations. The method is implemented on a 16K Connection Machine-2, and S8 and S16 solutions are obtained for fixed-source benchmark problems in x-y geometry

  4. Increasing the reach of forensic genetics with massively parallel sequencing.

    Science.gov (United States)

    Budowle, Bruce; Schmedes, Sarah E; Wendt, Frank R

    2017-09-01

    The field of forensic genetics has made great strides in the analysis of biological evidence related to criminal and civil matters. Moreover, the discipline has set a standard of performance and quality in the forensic sciences. The advent of massively parallel sequencing will allow the field to expand its capabilities substantially. This review describes the salient features of massively parallel sequencing and how it can impact forensic genetics. The features of this technology offer an increased number and types of genetic markers that can be analyzed, higher throughput of samples, and the capability of targeting different organisms, all by one unifying methodology. While there are many applications, three are described where massively parallel sequencing will have immediate impact: molecular autopsy, microbial forensics and differentiation of monozygotic twins. The intent of this review is to expose the forensic science community to the potential enhancements that have arrived or are soon to arrive and to demonstrate the continued expansion of the field of forensic genetics and its service in the investigation of legal matters.

  5. Neural nets for massively parallel optimization

    Science.gov (United States)

    Dixon, Laurence C. W.; Mills, David

    1992-07-01

    To apply massively parallel processing systems to the solution of large scale optimization problems it is desirable to be able to evaluate any function f(z), z ∈ R^n, in a parallel manner. The theorem of Cybenko, Hecht Nielsen, Hornik, Stinchcombe and White, and Funahashi shows that this can be achieved by a neural network with one hidden layer. In this paper we address the problem of the number of nodes required in the layer to achieve a given accuracy in the function and gradient values at all points within a given n dimensional interval. The type of activation function needed to obtain nonsingular Hessian matrices is described and a strategy for obtaining accurate minimal networks presented.
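
    A one-hidden-layer network with a smooth activation gives both function and gradient values from a single, easily parallelized forward pass. The sketch below uses random (untrained) weights purely to show the value/gradient machinery and a finite-difference consistency check; the layer size and the tanh activation are illustrative assumptions, not the minimal networks constructed in the paper.

```python
import numpy as np

class OneHiddenLayerNet:
    """y = w2 . tanh(W1 @ z + b1) + b2 : a one-hidden-layer approximator.

    tanh is smooth, so the surrogate has a well-defined gradient everywhere,
    which matters when it feeds an optimizer."""
    def __init__(self, n_inputs, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.standard_normal((n_hidden, n_inputs))
        self.b1 = rng.standard_normal(n_hidden)
        self.w2 = rng.standard_normal(n_hidden)
        self.b2 = 0.0

    def value(self, z):
        return self.w2 @ np.tanh(self.W1 @ z + self.b1) + self.b2

    def gradient(self, z):
        h = np.tanh(self.W1 @ z + self.b1)
        # d tanh(u)/du = 1 - tanh(u)^2, chained through W1
        return (self.w2 * (1.0 - h ** 2)) @ self.W1

net = OneHiddenLayerNet(n_inputs=3, n_hidden=32)
z = np.array([0.1, -0.2, 0.5])
g = net.gradient(z)
# Finite-difference check that the analytic gradient is consistent.
eps = 1e-6
fd = np.array([(net.value(z + eps * e) - net.value(z - eps * e)) / (2 * eps)
               for e in np.eye(3)])
print(np.allclose(g, fd, atol=1e-5))   # -> True
```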

  6. THE TRAINING OF NEXT GENERATION DATA SCIENTISTS IN BIOMEDICINE.

    Science.gov (United States)

    Garmire, Lana X; Gliske, Stephen; Nguyen, Quynh C; Chen, Jonathan H; Nemati, Shamim; VAN Horn, John D; Moore, Jason H; Shreffler, Carol; Dunn, Michelle

    2017-01-01

    With the boom in new technologies, biomedical science has transformed into a digitalized, data-intensive science. Massive amounts of data need to be analyzed and interpreted, demanding a complete pipeline to train the next generation of data scientists. To meet this need, the transinstitutional Big Data to Knowledge (BD2K) Initiative has been implemented since 2014, complementing other NIH institutional efforts. In this report, we give an overview of the BD2K K01 mentored scientist career awards, which have demonstrated early success. We address the specific training needed in representative data science areas, in order to prepare the next generation of data scientists in biomedicine.

  7. First massively parallel algorithm to be implemented in Apollo-II code

    International Nuclear Information System (INIS)

    Stankovski, Z.

    1994-01-01

    The collision probability (CP) method in neutron transport, as applied to arbitrary 2D XY geometries, like the TDT module in APOLLO-II, is very time consuming. Consequently RZ or 3D extensions became prohibitive. Fortunately, this method is very suitable for parallelization. Massively parallel computer architectures, especially MIMD machines, bring a new breath to this method. In this paper we present a CM5 implementation of the CP method. Parallelization is applied to the energy groups, using the CMMD message passing library. In our case we use 32 processors for the standard 99-group APOLLIB-II library. The real advantage of this algorithm will appear in the calculation of the future fine multigroup library (about 8000 groups) of the SAPHYR project with a massively parallel computer (to the order of hundreds of processors). (author). 3 tabs., 4 figs., 4 refs

  8. First massively parallel algorithm to be implemented in APOLLO-II code

    International Nuclear Information System (INIS)

    Stankovski, Z.

    1994-01-01

    The collision probability method in neutron transport, as applied to arbitrary 2-dimensional geometries, like the two dimensional transport module in APOLLO-II, is very time consuming. Consequently, 3-dimensional extension became prohibitive. Fortunately, this method is very suitable for parallelization. Massively parallel computer architectures, especially MIMD machines, bring a new breath to this method. In this paper we present a CM5 implementation of the collision probability method. Parallelization is applied to the energy groups, using the CMMD message passing library. In our case we used 32 processors for the standard 99-group APOLLIB-II library. The real advantage of this algorithm will appear in the calculation of the future multigroup library (about 8000 groups) of the SAPHYR project with a massively parallel computer (to the order of hundreds of processors). (author). 4 refs., 4 figs., 3 tabs

  9. Representing and computing regular languages on massively parallel networks

    Energy Technology Data Exchange (ETDEWEB)

    Miller, M.I.; O' Sullivan, J.A. (Electronic Systems and Research Lab., of Electrical Engineering, Washington Univ., St. Louis, MO (US)); Boysam, B. (Dept. of Electrical, Computer and Systems Engineering, Rensselaer Polytechnic Inst., Troy, NY (US)); Smith, K.R. (Dept. of Electrical Engineering, Southern Illinois Univ., Edwardsville, IL (US))

    1991-01-01

    This paper proposes a general method for incorporating rule-based constraints corresponding to regular languages into stochastic inference problems, thereby allowing for a unified representation of stochastic and syntactic pattern constraints. The authors' approach first establishes the formal connection of rules to Chomsky grammars, and generalizes the original work of Shannon on the encoding of rule-based channel sequences to Markov chains of maximum entropy. This maximum entropy probabilistic view leads to Gibbs representations with potentials whose number of minima grows at precisely the exponential rate at which the language of deterministically constrained sequences grows. These representations are coupled to stochastic diffusion algorithms, which sample the language-constrained sequences by visiting the energy minima according to the underlying Gibbs probability law. The coupling to stochastic search methods yields the all-important practical result that fully parallel stochastic cellular automata may be derived to generate samples from the rule-based constraint sets. The production rules and neighborhood state structure of the language of sequences directly determine the necessary connection structures of the required parallel computing surface. Representations of this type have been mapped to the DAP-510 massively parallel processor consisting of 1024 mesh-connected bit-serial processing elements for performing automated segmentation of electron-micrograph images.

  10. PARALLEL SPATIOTEMPORAL SPECTRAL CLUSTERING WITH MASSIVE TRAJECTORY DATA

    Directory of Open Access Journals (Sweden)

    Y. Z. Gu

    2017-09-01

    Full Text Available Massive trajectory data contains a wealth of useful information and knowledge. Spectral clustering, which has been shown to be effective in finding clusters, has become an important clustering approach in trajectory data mining. However, traditional spectral clustering lacks a temporal extension and is limited in its applicability to large-scale problems due to its high computational complexity. This paper presents a parallel spatiotemporal spectral clustering method based on multiple acceleration techniques to make the algorithm more effective and efficient; its performance is demonstrated by experiments carried out on a massive taxi trajectory dataset from Wuhan, China.
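
    As a toy illustration of the clustering step only (not the paper's parallel spatiotemporal algorithm), scikit-learn's SpectralClustering can be applied once each trajectory is reduced to a feature vector; the feature choice below (start point, end point, mean timestamp) and the synthetic data are assumptions made purely for illustration.

        # Toy sketch: spectral clustering of synthetic trajectories.
        # Assumption: sklearn's SpectralClustering stands in for the paper's
        # parallel spatiotemporal variant; each trajectory is summarized by a
        # hand-picked feature vector (start point, end point, mean timestamp).
        import numpy as np
        from sklearn.cluster import SpectralClustering

        rng = np.random.default_rng(0)

        def make_trajectory(cx, cy, t0, n_pts=20):
            """A short random walk around (cx, cy) starting at time t0."""
            steps = rng.normal(scale=0.05, size=(n_pts, 2))
            xy = np.array([cx, cy]) + np.cumsum(steps, axis=0)
            return xy, t0 + np.arange(n_pts)

        # Two groups of trajectories differing in location and start time.
        trajectories = [make_trajectory(0.0, 0.0, t0=0) for _ in range(30)] + \
                       [make_trajectory(2.0, 2.0, t0=10) for _ in range(30)]

        features = np.array([np.concatenate([xy[0], xy[-1], [t.mean()]])
                             for xy, t in trajectories])

        labels = SpectralClustering(n_clusters=2, affinity="rbf", gamma=0.01,
                                    random_state=0).fit_predict(features)
        print(labels)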

  11. Development of massively parallel quantum chemistry program SMASH

    International Nuclear Information System (INIS)

    Ishimura, Kazuya

    2015-01-01

    A massively parallel program for quantum chemistry calculations SMASH was released under the Apache License 2.0 in September 2014. The SMASH program is written in the Fortran90/95 language with MPI and OpenMP standards for parallelization. Frequently used routines, such as one- and two-electron integral calculations, are modularized to make program developments simple. The speed-up of the B3LYP energy calculation for (C₁₅₀H₃₀)₂ with the cc-pVDZ basis set (4500 basis functions) was 50,499 on 98,304 cores of the K computer

  12. Massively Parallel Sort-Merge Joins in Main Memory Multi-Core Database Systems

    OpenAIRE

    Albutiu, Martina-Cezara; Kemper, Alfons; Neumann, Thomas

    2012-01-01

    Two emerging hardware trends will dominate the database system technology in the near future: increasing main memory capacities of several TB per server and massively parallel multi-core processing. Many algorithmic and control techniques in current database technology were devised for disk-based systems where I/O dominated the performance. In this work we take a new look at the well-known sort-merge join which, so far, has not been in the focus of research in scalable massively parallel mult...
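
    The record is truncated in the source, but the operation it studies, a sort-merge equi-join, is easy to state in code. The serial sketch below shows only the classic sort-then-merge idea; the paper's massively parallel, NUMA-aware MPSM variant is not reproduced here.

        # Minimal serial sort-merge equi-join on two small relations.
        def sort_merge_join(r, s, r_key=0, s_key=0):
            r = sorted(r, key=lambda t: t[r_key])
            s = sorted(s, key=lambda t: t[s_key])
            out, i, j = [], 0, 0
            while i < len(r) and j < len(s):
                if r[i][r_key] < s[j][s_key]:
                    i += 1
                elif r[i][r_key] > s[j][s_key]:
                    j += 1
                else:
                    # emit all pairs of tuples sharing this key value
                    key, i0, j0 = r[i][r_key], i, j
                    while i < len(r) and r[i][r_key] == key:
                        i += 1
                    while j < len(s) and s[j][s_key] == key:
                        j += 1
                    out.extend((a, b) for a in r[i0:i] for b in s[j0:j])
            return out

        orders = [(1, "alice"), (2, "bob"), (2, "carol"), (4, "dave")]
        items  = [(2, "book"), (2, "pen"), (3, "mug"), (4, "lamp")]
        print(sort_merge_join(orders, items))   # join on the first column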

  13. Proxy-equation paradigm: A strategy for massively parallel asynchronous computations

    Science.gov (United States)

    Mittal, Ankita; Girimaji, Sharath

    2017-09-01

    Massively parallel simulations of transport equation systems call for a paradigm change in algorithm development to achieve efficient scalability. Traditional approaches require time synchronization of processing elements (PEs), which severely restricts scalability. Relaxing the synchronization requirement introduces error and slows down convergence. In this paper, we propose and develop a novel "proxy equation" concept for a general transport equation that (i) tolerates asynchrony with minimal added error, (ii) preserves convergence order and thus (iii) is expected to scale efficiently on massively parallel machines. The central idea is to modify a priori the transport equation at the PE boundaries to offset asynchrony errors. Proof-of-concept computations are performed using a one-dimensional advection (convection) diffusion equation. The results demonstrate the promise and advantages of the present strategy.
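
    The proof-of-concept problem named above, a one-dimensional advection-diffusion equation, can be made concrete. The sketch below is only the standard synchronous explicit discretization (upwind advection, central diffusion); the proxy-equation correction applied at PE boundaries in the paper is not reproduced.

        # Synchronous baseline for u_t + c u_x = nu u_xx on a periodic domain
        # (upwind advection, central diffusion). The paper's proxy-equation
        # modification of boundary values is not shown.
        import numpy as np

        c, nu = 1.0, 0.01
        nx, L = 200, 1.0
        dx = L / nx
        dt = 0.4 * min(dx / c, dx * dx / (2 * nu))   # explicit stability limit

        x = np.linspace(0.0, L, nx, endpoint=False)
        u = np.exp(-200 * (x - 0.3) ** 2)            # Gaussian pulse

        for _ in range(500):
            adv  = -c * (u - np.roll(u, 1)) / dx                          # upwind
            diff = nu * (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2  # central
            u = u + dt * (adv + diff)

        print("pulse peak after 500 steps:", u.max())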

  14. Examination of concept of next generation computer. Progress report 1999

    Energy Technology Data Exchange (ETDEWEB)

    Higuchi, Kenji; Hasegawa, Yukihiro; Hirayama, Toshio

    2000-12-01

    The Center for Promotion of Computational Science and Engineering has conducted R and D works on the technology of parallel processing and started the examination of the next generation computer in 1999. This report describes behavior analyses of quantum calculation codes, the considerations drawn from those analyses, and the results of examining methods to reduce cache misses. Furthermore, it describes a performance simulator that is being developed to quantitatively examine the concept of the next generation computer. (author)

  15. Animated computer graphics models of space and earth sciences data generated via the massively parallel processor

    Science.gov (United States)

    Treinish, Lloyd A.; Gough, Michael L.; Wildenhain, W. David

    1987-01-01

    A capability was developed for rapidly producing visual representations of large, complex, multi-dimensional space and earth sciences data sets by implementing computer graphics modeling techniques on the Massively Parallel Processor (MPP), employing techniques recently developed for typically non-scientific applications. Such capabilities can provide a new and valuable tool for the understanding of complex scientific data, and a new application of parallel computing via the MPP. A prototype system with such capabilities was developed and integrated into the National Space Science Data Center's (NSSDC) Pilot Climate Data System (PCDS) data-independent environment for computer graphics data display to provide easy access to users. While developing these capabilities, several problems had to be solved independently of the actual use of the MPP, all of which are outlined.

  16. The Fortran-P Translator: Towards Automatic Translation of Fortran 77 Programs for Massively Parallel Processors

    Directory of Open Access Journals (Sweden)

    Matthew O'Keefe

    1995-01-01

    Full Text Available Massively parallel processors (MPPs) hold the promise of extremely high performance that, if realized, could be used to study problems of unprecedented size and complexity. One of the primary stumbling blocks to this promise has been the lack of tools to translate application codes to MPP form. In this article we show how application codes written in a subset of Fortran 77, called Fortran-P, can be translated to achieve good performance on several massively parallel machines. This subset can express codes that are self-similar, where the algorithm applied to the global data domain is also applied to each subdomain. We have found many codes that match the Fortran-P programming style and have converted them using our tools. We believe a self-similar coding style will accomplish what a vectorizable style has accomplished for vector machines by allowing the construction of robust, user-friendly, automatic translation systems that increase programmer productivity and generate fast, efficient code for MPPs.

  17. Development of massively parallel quantum chemistry program SMASH

    Energy Technology Data Exchange (ETDEWEB)

    Ishimura, Kazuya [Department of Theoretical and Computational Molecular Science, Institute for Molecular Science 38 Nishigo-Naka, Myodaiji, Okazaki, Aichi 444-8585 (Japan)

    2015-12-31

    A massively parallel program for quantum chemistry calculations SMASH was released under the Apache License 2.0 in September 2014. The SMASH program is written in the Fortran90/95 language with MPI and OpenMP standards for parallelization. Frequently used routines, such as one- and two-electron integral calculations, are modularized to make program developments simple. The speed-up of the B3LYP energy calculation for (C₁₅₀H₃₀)₂ with the cc-pVDZ basis set (4500 basis functions) was 50,499 on 98,304 cores of the K computer.

  18. Massively parallel sparse matrix function calculations with NTPoly

    Science.gov (United States)

    Dawson, William; Nakajima, Takahito

    2018-04-01

    We present NTPoly, a massively parallel library for computing the functions of sparse, symmetric matrices. The theory of matrix functions is a well-developed framework with a wide range of applications including differential equations, graph theory, and electronic structure calculations. One particularly important application area is diagonalization-free methods in quantum chemistry. When the input and output of the matrix function are sparse, methods based on polynomial expansions can be used to compute matrix functions in linear time. We present a library based on these methods that can compute a variety of matrix functions. Distributed memory parallelization is based on a communication-avoiding sparse matrix multiplication algorithm. OpenMP task parallelization is used to implement hybrid parallelization. We describe NTPoly's interface and show how it can be integrated with programs written in many different programming languages. We demonstrate the merits of NTPoly by performing large scale calculations on the K computer.
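
    The polynomial-expansion idea mentioned in the abstract can be shown with a toy example: approximating a matrix function of a sparse symmetric matrix using only sparse matrix-matrix products. The truncated Taylor series for the matrix exponential below is a simplistic stand-in chosen for brevity; NTPoly's actual expansions and its distributed-memory implementation are not reproduced.

        # Toy polynomial expansion of a sparse matrix function: exp(A) via a
        # truncated Taylor series, using only sparse matrix-matrix products,
        # checked against SciPy's expm.
        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        def expm_taylor(A, terms=20):
            """exp(A) ~= sum_k A^k / k!  (truncated after `terms` terms)."""
            result = sp.identity(A.shape[0], format="csr")
            term = sp.identity(A.shape[0], format="csr")
            for k in range(1, terms):
                term = (term @ A) / k
                result = result + term
            return result

        n = 100
        A = sp.diags([0.1, -0.2, 0.1], [-1, 0, 1], shape=(n, n), format="csr")

        approx = expm_taylor(A)
        exact = spla.expm(A.tocsc())
        print("max abs error:", np.abs((approx - exact).toarray()).max())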

  19. Massively parallel Fokker-Planck calculations

    International Nuclear Information System (INIS)

    Mirin, A.A.

    1990-01-01

    This paper reports that the Fokker-Planck package FPPAC, which solves the complete nonlinear multispecies Fokker-Planck collision operator for a plasma in two-dimensional velocity space, has been rewritten for the Connection Machine 2. This has involved allocation of variables either to the front end or the CM2, minimization of data flow, and replacement of Cray-optimized algorithms with ones suitable for a massively parallel architecture. Calculations have been carried out on various Connection Machines throughout the country. Results and timings on these machines have been compared to each other and to those on the static memory Cray-2. For large problem size, the Connection Machine 2 is found to be cost-efficient

  20. Solving the Stokes problem on a massively parallel computer

    DEFF Research Database (Denmark)

    Axelsson, Owe; Barker, Vincent A.; Neytcheva, Maya

    2001-01-01

    boundary value problem for each velocity component, are solved by the conjugate gradient method with a preconditioning based on the algebraic multi‐level iteration (AMLI) technique. The velocity is found from the computed pressure. The method is optimal in the sense that the computational work...... is proportional to the number of unknowns. Further, it is designed to exploit a massively parallel computer with distributed memory architecture. Numerical experiments on a Cray T3E computer illustrate the parallel performance of the method....
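
    The record is truncated in the source, but the solver it names, the conjugate gradient method for the elliptic subproblems, is easy to illustrate. The sketch below solves a small 2D discrete Laplacian system with SciPy's unpreconditioned CG; the AMLI preconditioning and the distributed-memory parallelization discussed in the paper are not reproduced.

        # Minimal stand-in for the elliptic solves in the record: unpreconditioned
        # conjugate gradients on a 2D discrete Laplacian (Dirichlet) system.
        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import cg

        n = 50                                        # 50 x 50 interior grid
        I = sp.identity(n, format="csr")
        T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
        A = sp.kron(I, T) + sp.kron(T, I)             # 2D Laplacian stencil

        b = np.ones(n * n)
        p, info = cg(A, b)                            # info == 0 means converged
        print("converged:", info == 0,
              "residual norm:", np.linalg.norm(A @ p - b))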

  1. Performance of Air Pollution Models on Massively Parallel Computers

    DEFF Research Database (Denmark)

    Brown, John; Hansen, Per Christian; Wasniewski, Jerzy

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on the computers. Using a realistic large-scale model, we gain detailed insight about the performance of the three computers when used to solve large-scale scientific problems...

  2. An integrated SNP mining and utilization (ISMU) pipeline for next generation sequencing data.

    Science.gov (United States)

    Azam, Sarwar; Rathore, Abhishek; Shah, Trushar M; Telluri, Mohan; Amindala, BhanuPrakash; Ruperao, Pradeep; Katta, Mohan A V S K; Varshney, Rajeev K

    2014-01-01

    Open source single nucleotide polymorphism (SNP) discovery pipelines for next generation sequencing data commonly require working knowledge of command line interfaces, massive computational resources and expertise, which is a daunting barrier for biologists. Further, the SNP information generated may not be readily usable for downstream processes such as genotyping. Hence, a comprehensive pipeline has been developed by integrating several open source next generation sequencing (NGS) tools along with a graphical user interface called Integrated SNP Mining and Utilization (ISMU) for SNP discovery and their utilization by developing genotyping assays. The pipeline features functionalities such as pre-processing of raw data, integration of open source alignment tools (Bowtie2, BWA, Maq, NovoAlign and SOAP2), SNP prediction (SAMtools/SOAPsnp/CNS2snp and CbCC) methods and interfaces for developing genotyping assays. The pipeline outputs a list of high quality SNPs between all pairwise combinations of genotypes analyzed, in addition to the reference genome/sequence. Visualization tools (Tablet and Flapjack) integrated into the pipeline enable inspection of the alignment and errors, if any. The pipeline also provides a confidence score or polymorphism information content value with flanking sequences for identified SNPs in the standard format required for developing marker genotyping (KASP and Golden Gate) assays. The pipeline enables users to process a range of NGS datasets such as whole genome re-sequencing, restriction site associated DNA sequencing and transcriptome sequencing data at a fast speed. The pipeline is very useful for the plant genetics and breeding community with no computational expertise to discover SNPs and use them in genomics, genetics and breeding studies. The pipeline has been parallelized to process huge next generation sequencing datasets. It has been developed in the Java language and is available at http://hpc.icrisat.cgiar.org/ISMU as a standalone

  3. Statistical method to compare massive parallel sequencing pipelines.

    Science.gov (United States)

    Elsensohn, M H; Leblay, N; Dimassi, S; Campan-Fournier, A; Labalme, A; Roucher-Boulez, F; Sanlaville, D; Lesca, G; Bardel, C; Roy, P

    2017-03-01

    Today, sequencing is frequently carried out by Massive Parallel Sequencing (MPS), which drastically cuts sequencing time and expense. Nevertheless, Sanger sequencing remains the main validation method to confirm the presence of variants. The analysis of MPS data involves the development of several bioinformatic tools, academic or commercial. We present here a statistical method to compare MPS pipelines and test it in a comparison between an academic (BWA-GATK) and a commercial pipeline (TMAP-NextGENe®), with and without reference to a gold standard (here, Sanger sequencing), on a panel of 41 genes in 43 epileptic patients. This method used the number of variants to fit log-linear models for pairwise agreements between pipelines. To assess the heterogeneity of the margins and the odds ratios of agreement, four log-linear models were used: a full model, a homogeneous-margin model, a model with a single odds ratio for all patients, and a model with a single intercept. Then a log-linear mixed model was fitted considering the biological variability as a random effect. Among the 390,339 base-pairs sequenced, TMAP-NextGENe® and BWA-GATK found, on average, 2253.49 and 1857.14 variants (single nucleotide variants and indels), respectively. Against the gold standard, the pipelines had similar sensitivities (63.47% vs. 63.42%) and close but significantly different specificities (99.57% vs. 99.65%; p < 0.001). Same-trend results were obtained when only single nucleotide variants were considered (99.98% specificity and 76.81% sensitivity for both pipelines). The method thus allows pipeline comparison and selection. It is generalizable to all types of MPS data and all pipelines.
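
    The headline metrics in the abstract are straightforward to reproduce in form (though not in value). The snippet below shows, with invented counts, how sensitivity, specificity, and an odds ratio of agreement between two pipelines are computed; it does not implement the paper's log-linear or mixed models.

        # Worked arithmetic behind the comparison metrics, on invented counts.
        # 2x2 table of one pipeline vs. the gold standard: TP, FP, FN, TN.
        tp, fp, fn, tn = 1430, 8, 823, 387_000
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        print(f"sensitivity = {sensitivity:.2%}, specificity = {specificity:.4%}")

        # Agreement between the two pipelines at the same positions:
        # n11 both call a variant, n00 neither does, n10/n01 disagreements.
        n11, n10, n01, n00 = 1350, 95, 80, 387_500
        odds_ratio_agreement = (n11 * n00) / (n10 * n01)
        print(f"odds ratio of agreement = {odds_ratio_agreement:.1f}")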

  4. Designing Next Generation Telecom Regulation

    DEFF Research Database (Denmark)

    Henten, Anders; Samarajiva, Rohan

    – ICT convergence regulation and multisector utility regulation. Whatever structure of next generation telecom regulation is adopted, all countries will need to pay much greater attention to the need for increased coordination of policy directions and regulatory activities both across the industries......Continuously expanding applications of information and communication technologies (ICT) are transforming local, national, regional and international economies into network economies, the foundation for information societies. They are being built upon expanded and upgraded national telecom networks...... to creating an environment to foster a massive expansion in the coverage and capabilities of the information infrastructure networks, with national telecom regulators as the key implementers of the policies of reform. The first phase of reform has focused on industry specific telecom policy and regulation...

  5. Massively parallel computation of conservation laws

    Energy Technology Data Exchange (ETDEWEB)

    Garbey, M [Univ. Claude Bernard, Villeurbanne (France); Levine, D [Argonne National Lab., IL (United States)

    1990-01-01

    The authors present a new method for computing solutions of conservation laws based on the use of cellular automata with the method of characteristics. The method exploits the high degree of parallelism available with cellular automata and retains important features of the method of characteristics. It yields high numerical accuracy and extends naturally to adaptive meshes and domain decomposition methods for perturbed conservation laws. They describe the method and its implementation for a Dirichlet problem with a single conservation law for the one-dimensional case. Numerical results for the one-dimensional law with the classical Burgers nonlinearity or the Buckley-Leverett equation show good numerical accuracy outside the neighborhood of the shocks. The error in the area of the shocks is of the order of the mesh size. The algorithm is well suited for execution on both massively parallel computers and vector machines. They present timing results for an Alliant FX/8, Connection Machine Model 2, and CRAY X-MP.
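
    To make the one-dimensional test problem concrete, the sketch below solves the Burgers equation u_t + (u^2/2)_x = 0 with a conventional finite-volume upwind scheme; it is a reference discretization only, not the cellular-automata/method-of-characteristics scheme of the paper.

        # Conventional upwind scheme for the 1D inviscid Burgers equation,
        # valid here because the data stay positive (flux comes from the left).
        import numpy as np

        nx, L = 400, 2.0
        dx = L / nx
        x = np.linspace(0.0, L, nx, endpoint=False)
        u = 1.0 + 0.5 * np.sin(np.pi * x)            # positive initial data

        t, t_end = 0.0, 1.0
        while t < t_end:
            dt = 0.5 * dx / np.max(np.abs(u))        # CFL condition
            flux = 0.5 * u ** 2
            u = u - dt / dx * (flux - np.roll(flux, 1))   # periodic upwind update
            t += dt

        print("solution range at t = 1.0:", u.min(), u.max())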

  6. Massively Parallel Computing at Sandia and Its Application to National Defense

    National Research Council Canada - National Science Library

    Dosanjh, Sudip

    1991-01-01

    Two years ago, researchers at Sandia National Laboratories showed that a massively parallel computer with 1024 processors could solve scientific problems more than 1000 times faster than a single processor...

  7. Massively-parallel best subset selection for ordinary least-squares regression

    DEFF Research Database (Denmark)

    Gieseke, Fabian; Polsterer, Kai Lars; Mahabal, Ashish

    2017-01-01

    Selecting an optimal subset of k out of d features for linear regression models given n training instances is often considered intractable for feature spaces with hundreds or thousands of dimensions. We propose an efficient massively-parallel implementation for selecting such optimal feature...

  8. Massive Asynchronous Parallelization of Sparse Matrix Factorizations

    Energy Technology Data Exchange (ETDEWEB)

    Chow, Edmond [Georgia Inst. of Technology, Atlanta, GA (United States)

    2018-01-08

    Solving sparse problems is at the core of many DOE computational science applications. We focus on the challenge of developing sparse algorithms that can fully exploit the parallelism in extreme-scale computing systems, in particular systems with massive numbers of cores per node. Our approach is to express a sparse matrix factorization as a large number of bilinear constraint equations, and then to solve these equations via an asynchronous iterative method. The unknowns in these equations are the matrix entries of the desired factorization.

  9. Computational fluid dynamics on a massively parallel computer

    Science.gov (United States)

    Jespersen, Dennis C.; Levit, Creon

    1989-01-01

    A finite difference code was implemented for the compressible Navier-Stokes equations on the Connection Machine, a massively parallel computer. The code is based on the ARC2D/ARC3D program and uses the implicit factored algorithm of Beam and Warming. The code uses odd-even elimination to solve linear systems. Timings and computation rates are given for the code, and a comparison is made with a Cray XMP.

  10. Implementation of PHENIX trigger algorithms on massively parallel computers

    International Nuclear Information System (INIS)

    Petridis, A.N.; Wohn, F.K.

    1995-01-01

    The event selection requirements of contemporary high energy and nuclear physics experiments are met by the introduction of on-line trigger algorithms which identify potentially interesting events and reduce the data acquisition rate to levels that are manageable by the electronics. Such algorithms, being parallel in nature, can be simulated off-line using massively parallel computers. The PHENIX experiment intends to investigate the possible existence of a new phase of matter called the quark gluon plasma, theorized to have existed in the very early stages of the evolution of the universe, by studying collisions of heavy nuclei at ultra-relativistic energies. Such interactions can also reveal important information regarding the structure of the nucleus and mandate a thorough investigation of the simpler proton-nucleus collisions at the same energies. The complexity of PHENIX events and the need to analyze and also simulate them at rates similar to those of data collection impose enormous computational demands. This work is a first effort to implement PHENIX trigger algorithms on parallel computers and to study the feasibility of using such machines to run the complex programs necessary for the simulation of the PHENIX detector response. Fine and coarse grain approaches have been studied and evaluated. Depending on the application, the performance of a massively parallel computer can be much better or much worse than that of a serial workstation. A comparison between single instruction and multiple instruction computers is also made and possible applications of the single instruction machines to high energy and nuclear physics experiments are outlined. copyright 1995 American Institute of Physics

  11. A Massively Parallel Code for Polarization Calculations

    Science.gov (United States)

    Akiyama, Shizuka; Höflich, Peter

    2001-03-01

    We present an implementation of our Monte-Carlo radiation transport method for rapidly expanding, NLTE atmospheres for massively parallel computers which utilizes both the distributed and shared memory models. This allows us to take full advantage of the fast communication and low latency inherent to nodes with multiple CPUs, and to stretch the limits of scalability with the number of nodes compared to a version which is based on the shared memory model. Test calculations on a local 20-node Beowulf cluster with dual CPUs showed an improved scalability by about 40%.

  12. A massively-parallel electronic-structure calculations based on real-space density functional theory

    International Nuclear Information System (INIS)

    Iwata, Jun-Ichi; Takahashi, Daisuke; Oshiyama, Atsushi; Boku, Taisuke; Shiraishi, Kenji; Okada, Susumu; Yabana, Kazuhiro

    2010-01-01

    Based on the real-space finite-difference method, we have developed a first-principles density functional program that efficiently performs large-scale calculations on massively-parallel computers. In addition to efficient parallel implementation, we also implemented several computational improvements, substantially reducing the computational costs of O(N³) operations such as the Gram-Schmidt procedure and subspace diagonalization. Using the program on a massively-parallel computer cluster with a theoretical peak performance of several TFLOPS, we perform electronic-structure calculations for a system consisting of over 10,000 Si atoms, and obtain a self-consistent electronic-structure in a few hundred hours. We analyze in detail the costs of the program in terms of computation and of inter-node communications to clarify the efficiency, the applicability, and the possibility for further improvements.
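
    One of the O(N³) steps named in the abstract, the Gram-Schmidt procedure, is simple to write down in its serial form; the paper's reduced-cost variants and parallel data layout are not shown.

        # Plain (serial) modified Gram-Schmidt over a block of trial vectors
        # stored as columns, the orthogonalization step mentioned in the abstract.
        import numpy as np

        def modified_gram_schmidt(V):
            """Return a matrix whose columns are an orthonormalized copy of V's."""
            Q = V.astype(float).copy()
            for j in range(Q.shape[1]):
                Q[:, j] /= np.linalg.norm(Q[:, j])
                for k in range(j + 1, Q.shape[1]):
                    Q[:, k] -= np.dot(Q[:, j], Q[:, k]) * Q[:, j]
            return Q

        rng = np.random.default_rng(1)
        V = rng.standard_normal((500, 32))   # 32 trial orbitals on 500 grid points
        Q = modified_gram_schmidt(V)
        print("deviation from orthonormality:",
              np.max(np.abs(Q.T @ Q - np.eye(32))))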

  13. Massive hybrid parallelism for fully implicit multiphysics

    International Nuclear Information System (INIS)

    Gaston, D. R.; Permann, C. J.; Andrs, D.; Peterson, J. W.

    2013-01-01

    As hardware advances continue to modify the supercomputing landscape, traditional scientific software development practices will become more outdated, ineffective, and inefficient. The process of rewriting/retooling existing software for new architectures is a Sisyphean task, and results in substantial hours of development time, effort, and money. Software libraries which provide an abstraction of the resources provided by such architectures are therefore essential if the computational engineering and science communities are to continue to flourish in this modern computing environment. The Multiphysics Object Oriented Simulation Environment (MOOSE) framework enables complex multiphysics analysis tools to be built rapidly by scientists, engineers, and domain specialists, while also allowing them to both take advantage of current HPC architectures, and efficiently prepare for future supercomputer designs. MOOSE employs a hybrid shared-memory and distributed-memory parallel model and provides a complete and consistent interface for creating multiphysics analysis tools. In this paper, a brief discussion of the mathematical algorithms underlying the framework and the internal object-oriented hybrid parallel design are given. Representative massively parallel results from several applications areas are presented, and a brief discussion of future areas of research for the framework are provided. (authors)

  14. Massive hybrid parallelism for fully implicit multiphysics

    Energy Technology Data Exchange (ETDEWEB)

    Gaston, D. R.; Permann, C. J.; Andrs, D.; Peterson, J. W. [Idaho National Laboratory, 2525 N. Fremont Ave., Idaho Falls, ID 83415 (United States)

    2013-07-01

    As hardware advances continue to modify the supercomputing landscape, traditional scientific software development practices will become more outdated, ineffective, and inefficient. The process of rewriting/retooling existing software for new architectures is a Sisyphean task, and results in substantial hours of development time, effort, and money. Software libraries which provide an abstraction of the resources provided by such architectures are therefore essential if the computational engineering and science communities are to continue to flourish in this modern computing environment. The Multiphysics Object Oriented Simulation Environment (MOOSE) framework enables complex multiphysics analysis tools to be built rapidly by scientists, engineers, and domain specialists, while also allowing them to both take advantage of current HPC architectures, and efficiently prepare for future supercomputer designs. MOOSE employs a hybrid shared-memory and distributed-memory parallel model and provides a complete and consistent interface for creating multiphysics analysis tools. In this paper, a brief discussion of the mathematical algorithms underlying the framework and the internal object-oriented hybrid parallel design are given. Representative massively parallel results from several applications areas are presented, and a brief discussion of future areas of research for the framework are provided. (authors)

  15. MASSIVE HYBRID PARALLELISM FOR FULLY IMPLICIT MULTIPHYSICS

    Energy Technology Data Exchange (ETDEWEB)

    Cody J. Permann; David Andrs; John W. Peterson; Derek R. Gaston

    2013-05-01

    As hardware advances continue to modify the supercomputing landscape, traditional scientific software development practices will become more outdated, ineffective, and inefficient. The process of rewriting/retooling existing software for new architectures is a Sisyphean task, and results in substantial hours of development time, effort, and money. Software libraries which provide an abstraction of the resources provided by such architectures are therefore essential if the computational engineering and science communities are to continue to flourish in this modern computing environment. The Multiphysics Object Oriented Simulation Environment (MOOSE) framework enables complex multiphysics analysis tools to be built rapidly by scientists, engineers, and domain specialists, while also allowing them to both take advantage of current HPC architectures, and efficiently prepare for future supercomputer designs. MOOSE employs a hybrid shared-memory and distributed-memory parallel model and provides a complete and consistent interface for creating multiphysics analysis tools. In this paper, a brief discussion of the mathematical algorithms underlying the framework and the internal object-oriented hybrid parallel design are given. Representative massively parallel results from several applications areas are presented, and a brief discussion of future areas of research for the framework are provided.

  16. Massively parallel Monte Carlo for many-particle simulations on GPUs

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Joshua A.; Jankowski, Eric [Department of Chemical Engineering, University of Michigan, Ann Arbor, MI 48109 (United States); Grubb, Thomas L. [Department of Materials Science and Engineering, University of Michigan, Ann Arbor, MI 48109 (United States); Engel, Michael [Department of Chemical Engineering, University of Michigan, Ann Arbor, MI 48109 (United States); Glotzer, Sharon C., E-mail: sglotzer@umich.edu [Department of Chemical Engineering, University of Michigan, Ann Arbor, MI 48109 (United States); Department of Materials Science and Engineering, University of Michigan, Ann Arbor, MI 48109 (United States)

    2013-12-01

    Current trends in parallel processors call for the design of efficient massively parallel algorithms for scientific computing. Parallel algorithms for Monte Carlo simulations of thermodynamic ensembles of particles have received little attention because of the inherent serial nature of the statistical sampling. In this paper, we present a massively parallel method that obeys detailed balance and implement it for a system of hard disks on the GPU. We reproduce results of serial high-precision Monte Carlo runs to verify the method. This is a good test case because the hard disk equation of state over the range where the liquid transforms into the solid is particularly sensitive to small deviations away from the balance conditions. On a Tesla K20, our GPU implementation executes over one billion trial moves per second, which is 148 times faster than on a single Intel Xeon E5540 CPU core, enables 27 times better performance per dollar, and cuts energy usage by a factor of 13. With this improved performance we are able to calculate the equation of state for systems of up to one million hard disks. These large system sizes are required in order to probe the nature of the melting transition, which has been debated for the last forty years. In this paper we present the details of our computational method, and discuss the thermodynamics of hard disks separately in a companion paper.
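
    The elementary operation being parallelized, a hard-disk trial move that is accepted only if it creates no overlap, can be sketched serially in a few lines. The checkerboard domain decomposition that lets the GPU method perform many such moves at once while preserving detailed balance is not shown here.

        # Serial sketch of hard-disk Monte Carlo trial moves in a periodic box.
        import numpy as np

        rng = np.random.default_rng(2)
        L, sigma = 10.0, 1.0                      # box size, disk diameter

        # Loose grid start so the initial configuration has no overlaps.
        grid = np.arange(0.8, L, 1.6 * sigma)
        pos = np.array([(px, py) for px in grid for py in grid])
        n = len(pos)

        def overlaps(i, trial, pos):
            d = pos - trial
            d -= L * np.round(d / L)              # minimum-image convention
            d2 = np.sum(d * d, axis=1)
            d2[i] = np.inf                        # ignore the moved disk itself
            return np.any(d2 < sigma ** 2)

        accepted, n_trials = 0, 2000
        for _ in range(n_trials):
            i = rng.integers(n)
            trial = (pos[i] + rng.uniform(-0.1, 0.1, size=2)) % L
            if not overlaps(i, trial, pos):
                pos[i] = trial
                accepted += 1

        print("acceptance ratio:", accepted / n_trials)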

  17. High Throughput Line-of-Sight MIMO Systems for Next Generation Backhaul Applications

    Science.gov (United States)

    Song, Xiaohang; Cvetkovski, Darko; Hälsig, Tim; Rave, Wolfgang; Fettweis, Gerhard; Grass, Eckhard; Lankl, Berthold

    2017-09-01

    The evolution to ultra-dense next generation networks requires a massive increase in throughput and deployment flexibility. Therefore, novel wireless backhaul solutions that can support these demands are needed. In this work we present an approach for a millimeter wave line-of-sight MIMO backhaul design, targeting transmission rates in the order of 100 Gbit/s. We provide theoretical foundations for the concept showcasing its potential, which are confirmed through channel measurements. Furthermore, we provide insights into the system design with respect to antenna array setup, baseband processing, synchronization, and channel equalization. Implementation in a 60 GHz demonstrator setup proves the feasibility of the system concept for high throughput backhauling in next generation networks.

  18. Next generation sequencing reveals the hidden diversity of zooplankton assemblages.

    Directory of Open Access Journals (Sweden)

    Penelope K Lindeque

    Full Text Available BACKGROUND: Zooplankton play an important role in our oceans, in biogeochemical cycling and providing a food source for commercially important fish larvae. However, difficulties in correctly identifying zooplankton hinder our understanding of their roles in marine ecosystem functioning, and can prevent detection of long term changes in their community structure. The advent of massively parallel next generation sequencing technology allows DNA sequence data to be recovered directly from whole community samples. Here we assess the ability of such sequencing to quantify richness and diversity of a mixed zooplankton assemblage from a productive time series site in the Western English Channel. METHODOLOGY/PRINCIPAL FINDINGS: Plankton net hauls (200 µm) were taken at the Western Channel Observatory station L4 in September 2010 and January 2011. These samples were analysed by microscopy and metagenetic analysis of the 18S nuclear small subunit ribosomal RNA gene using the 454 pyrosequencing platform. Following quality control a total of 419,041 sequences were obtained for all samples. The sequences clustered into 205 operational taxonomic units using a 97% similarity cut-off. Allocation of taxonomy by comparison with the National Centre for Biotechnology Information database identified 135 OTUs to species level, 11 to genus level and 1 to order; <2.5% of sequences were classified as unknowns. By comparison a skilled microscopic analyst was able to routinely enumerate only 58 taxonomic groups. CONCLUSIONS: Metagenetics reveals a previously hidden taxonomic richness, especially for Copepoda and hard-to-identify meroplankton such as Bivalvia, Gastropoda and Polychaeta. It also reveals rare species and parasites. We conclude that Next Generation Sequencing of 18S amplicons is a powerful tool for elucidating the true diversity and species richness of zooplankton communities. While this approach allows for broad diversity assessments of plankton it may

  19. ASSET: Analysis of Sequences of Synchronous Events in Massively Parallel Spike Trains

    Science.gov (United States)

    Canova, Carlos; Denker, Michael; Gerstein, George; Helias, Moritz

    2016-01-01

    With the ability to observe the activity from large numbers of neurons simultaneously using modern recording technologies, the chance to identify sub-networks involved in coordinated processing increases. Sequences of synchronous spike events (SSEs) constitute one type of such coordinated spiking that propagates activity in a temporally precise manner. The synfire chain was proposed as one potential model for such network processing. Previous work introduced a method for visualization of SSEs in massively parallel spike trains, based on an intersection matrix that contains in each entry the degree of overlap of active neurons in two corresponding time bins. Repeated SSEs are reflected in the matrix as diagonal structures of high overlap values. The method as such, however, leaves the task of identifying these diagonal structures to visual inspection rather than to a quantitative analysis. Here we present ASSET (Analysis of Sequences of Synchronous EvenTs), an improved, fully automated method which determines diagonal structures in the intersection matrix by a robust mathematical procedure. The method consists of a sequence of steps that i) assess which entries in the matrix potentially belong to a diagonal structure, ii) cluster these entries into individual diagonal structures and iii) determine the neurons composing the associated SSEs. We employ parallel point processes generated by stochastic simulations as test data to demonstrate the performance of the method under a wide range of realistic scenarios, including different types of non-stationarity of the spiking activity and different correlation structures. Finally, the ability of the method to discover SSEs is demonstrated on complex data from large network simulations with embedded synfire chains. Thus, ASSET represents an effective and efficient tool to analyze massively parallel spike data for temporal sequences of synchronous activity. PMID:27420734
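
    The intersection matrix at the core of the method is easy to construct once the parallel spike trains are binned: entry (i, j) counts the neurons active in both time bins i and j, and repeated sequences show up as diagonal bands. The toy data below are invented, and the statistical steps ASSET adds on top of the matrix are not reproduced.

        # Build an intersection matrix from binned parallel spike trains and
        # embed the same 10-neuron sequence twice so a diagonal band appears.
        import numpy as np

        rng = np.random.default_rng(3)
        n_neurons, n_bins = 50, 200
        binned = rng.random((n_bins, n_neurons)) < 0.05   # sparse background

        for start in (40, 120):                           # two repetitions
            for k in range(10):
                binned[start + k, k] = True               # neuron k fires in bin start+k

        # Entry (i, j): number of neurons active in both bin i and bin j.
        intersection = binned.astype(int) @ binned.astype(int).T
        print("overlaps along the repeated sequence:",
              [int(intersection[40 + k, 120 + k]) for k in range(10)])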

  20. A massively parallel corpus: the Bible in 100 languages.

    Science.gov (United States)

    Christodouloupoulos, Christos; Steedman, Mark

    We describe the creation of a massively parallel corpus based on 100 translations of the Bible. We discuss some of the difficulties in acquiring and processing the raw material as well as the potential of the Bible as a corpus for natural language processing. Finally we present a statistical analysis of the corpora collected and a detailed comparison between the English translation and other English corpora.

  1. DGDFT: A massively parallel method for large scale density functional theory calculations.

    Science.gov (United States)

    Hu, Wei; Lin, Lin; Yang, Chao

    2015-09-28

    We describe a massively parallel implementation of the recently developed discontinuous Galerkin density functional theory (DGDFT) method, for efficient large-scale Kohn-Sham DFT based electronic structure calculations. The DGDFT method uses adaptive local basis (ALB) functions generated on-the-fly during the self-consistent field iteration to represent the solution to the Kohn-Sham equations. The use of the ALB set provides a systematic way to improve the accuracy of the approximation. By using the pole expansion and selected inversion technique to compute electron density, energy, and atomic forces, we can make the computational complexity of DGDFT scale at most quadratically with respect to the number of electrons for both insulating and metallic systems. We show that for the two-dimensional (2D) phosphorene systems studied here, using 37 basis functions per atom allows us to reach an accuracy level of 1.3 × 10⁻⁴ Hartree/atom in terms of the error of energy and 6.2 × 10⁻⁴ Hartree/bohr in terms of the error of atomic force, respectively. DGDFT can achieve 80% parallel efficiency on 128,000 high performance computing cores when it is used to study the electronic structure of 2D phosphorene systems with 3500–14,000 atoms. This high parallel efficiency results from a two-level parallelization scheme that we will describe in detail.

  2. DGDFT: A massively parallel method for large scale density functional theory calculations

    International Nuclear Information System (INIS)

    Hu, Wei; Yang, Chao; Lin, Lin

    2015-01-01

    We describe a massively parallel implementation of the recently developed discontinuous Galerkin density functional theory (DGDFT) method, for efficient large-scale Kohn-Sham DFT based electronic structure calculations. The DGDFT method uses adaptive local basis (ALB) functions generated on-the-fly during the self-consistent field iteration to represent the solution to the Kohn-Sham equations. The use of the ALB set provides a systematic way to improve the accuracy of the approximation. By using the pole expansion and selected inversion technique to compute electron density, energy, and atomic forces, we can make the computational complexity of DGDFT scale at most quadratically with respect to the number of electrons for both insulating and metallic systems. We show that for the two-dimensional (2D) phosphorene systems studied here, using 37 basis functions per atom allows us to reach an accuracy level of 1.3 × 10⁻⁴ Hartree/atom in terms of the error of energy and 6.2 × 10⁻⁴ Hartree/bohr in terms of the error of atomic force, respectively. DGDFT can achieve 80% parallel efficiency on 128,000 high performance computing cores when it is used to study the electronic structure of 2D phosphorene systems with 3500–14,000 atoms. This high parallel efficiency results from a two-level parallelization scheme that we will describe in detail

  3. DGDFT: A massively parallel method for large scale density functional theory calculations

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Wei, E-mail: whu@lbl.gov; Yang, Chao, E-mail: cyang@lbl.gov [Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Lin, Lin, E-mail: linlin@math.berkeley.edu [Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Department of Mathematics, University of California, Berkeley, California 94720 (United States)

    2015-09-28

    We describe a massively parallel implementation of the recently developed discontinuous Galerkin density functional theory (DGDFT) method, for efficient large-scale Kohn-Sham DFT based electronic structure calculations. The DGDFT method uses adaptive local basis (ALB) functions generated on-the-fly during the self-consistent field iteration to represent the solution to the Kohn-Sham equations. The use of the ALB set provides a systematic way to improve the accuracy of the approximation. By using the pole expansion and selected inversion technique to compute electron density, energy, and atomic forces, we can make the computational complexity of DGDFT scale at most quadratically with respect to the number of electrons for both insulating and metallic systems. We show that for the two-dimensional (2D) phosphorene systems studied here, using 37 basis functions per atom allows us to reach an accuracy level of 1.3 × 10⁻⁴ Hartree/atom in terms of the error of energy and 6.2 × 10⁻⁴ Hartree/bohr in terms of the error of atomic force, respectively. DGDFT can achieve 80% parallel efficiency on 128,000 high performance computing cores when it is used to study the electronic structure of 2D phosphorene systems with 3500–14,000 atoms. This high parallel efficiency results from a two-level parallelization scheme that we will describe in detail.

  4. Next-generation phylogenomics

    Directory of Open Access Journals (Sweden)

    Chan Cheong Xin

    2013-01-01

    Full Text Available Abstract Thanks to advances in next-generation technologies, genome sequences are now being generated at breadth (e.g. across environments) and depth (thousands of closely related strains, individuals or samples) unimaginable only a few years ago. Phylogenomics – the study of evolutionary relationships based on comparative analysis of genome-scale data – has so far been developed as industrial-scale molecular phylogenetics, proceeding in the two classical steps: multiple alignment of homologous sequences, followed by inference of a tree (or multiple trees). However, the algorithms typically employed for these steps scale poorly with the number of sequences, such that for an increasing number of problems, high-quality phylogenomic analysis is (or soon will be) computationally infeasible. Moreover, next-generation data are often incomplete and error-prone, and analysis may be further complicated by genome rearrangement, gene fusion and deletion, lateral genetic transfer, and transcript variation. Here we argue that next-generation data require next-generation phylogenomics, including so-called alignment-free approaches. Reviewers Reviewed by Mr Alexander Panchin (nominated by Dr Mikhail Gelfand), Dr Eugene Koonin and Prof Peter Gogarten. For the full reviews, please go to the Reviewers' comments section.

  5. Scientific programming on massively parallel processor CP-PACS

    International Nuclear Information System (INIS)

    Boku, Taisuke

    1998-01-01

    The massively parallel processor CP-PACS targets a variety of problems in computational physics, and its architecture has been designed to handle a wide range of numerical processing. In this report, an outline of CP-PACS and a programming example based on the Kernel CG benchmark from NAS Parallel Benchmarks, version 1, are given, and two key features of the CP-PACS architecture, the pseudo-vector processing mechanism and the parallel-processing tuning of scientific and technical computations using the three-dimensional hyper-crossbar network, are described. CP-PACS uses processing units (PUs) based on a RISC processor augmented with a pseudo-vector processor. Pseudo-vector processing is realized as loop processing with scalar instructions. The features of the network connecting the PUs are explained. The algorithm of the NPB version 1 Kernel CG is shown. The most time-consuming part of the main loop is the matrix-vector product (matvec), and the parallel processing of the matvec is explained. The computation time on the CPU is determined. As performance evaluation, the execution time, the short-vector processing of the pseudo-vector processor based on a sliding window, and a comparison with other parallel computers are reported. (K.I.)
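
    The hot spot singled out above, the matrix-vector product of the CG kernel, is commonly parallelized by giving each processing unit a block of rows. The sketch below imitates that row-block partition serially with array slices; the CP-PACS pseudo-vector instructions and the hyper-crossbar communication are of course not represented.

        # Row-block partitioned sparse matrix-vector product (serial imitation
        # of distributing the CG-kernel matvec over processing units).
        import numpy as np
        import scipy.sparse as sp

        rng = np.random.default_rng(4)
        n, n_pu = 1000, 8
        A = sp.random(n, n, density=0.01, format="csr", random_state=4)
        x = rng.standard_normal(n)

        # PU p owns rows [bounds[p], bounds[p+1]).
        bounds = np.linspace(0, n, n_pu + 1, dtype=int)
        y_parts = [A[bounds[p]:bounds[p + 1], :] @ x for p in range(n_pu)]
        y = np.concatenate(y_parts)

        print("matches the unpartitioned product:", np.allclose(y, A @ x))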

  6. Engineering-Based Thermal CFD Simulations on Massive Parallel Systems

    KAUST Repository

    Frisch, Jérôme

    2015-05-22

    The development of parallel Computational Fluid Dynamics (CFD) codes is a challenging task that entails efficient parallelization concepts and strategies in order to achieve good scalability values when running those codes on modern supercomputers with several thousands to millions of cores. In this paper, we present a hierarchical data structure for massive parallel computations that supports the coupling of a Navier–Stokes-based fluid flow code with the Boussinesq approximation in order to address complex thermal scenarios for energy-related assessments. The data structure is specifically designed with interactive data exploration and visualization during runtime of the simulation code in mind, addressing a major shortcoming of traditional high-performance computing (HPC) simulation codes. We further show and discuss speed-up values obtained on one of Germany's top-ranked supercomputers with up to 140,000 processes and present simulation results for different engineering-based thermal problems.

  7. ARTS - adaptive runtime system for massively parallel systems. Final report; ARTS - optimale Ausfuehrungsunterstuetzung fuer komplexe Anwendungen auf massiv parallelen Systemen. Teilprojekt: Parallele Stroemungsmechanik. Abschlussbericht

    Energy Technology Data Exchange (ETDEWEB)

    Gentzsch, W.; Ferstl, F.; Paap, H.G.; Riedel, E.

    1998-03-20

    In the ARTS project, system software has been developed to support smog and fluid dynamics applications on massively parallel systems. The aim is to implement and test specific software structures within an adaptive run-time system that separate the parallel core algorithms of the applications from the platform-independent runtime aspects. Only slight modifications to existing Fortran and C code are necessary to integrate the application code into the new object-oriented parallel integrated ARTS framework. The OO design offers easy control, re-use and adaptation of the system services, resulting in a dramatic decrease in development time of the application and in ease of maintainability of the application software in the future. (orig.) [German original, translated] In the ARTS project, base software to support applications from the fields of smog analysis and fluid mechanics on massively parallel systems is developed and optimized. The focus is on testing suitable structures for placing system-level functionality in a runtime environment, thereby separating the parallel core algorithms of the application programs from the platform-independent runtime aspects. The application code comprises conventionally structured Fortran code, which must remain usable with minimal changes, as well as object-based C code that can exploit the full functionality of the ARTS platform. An object-oriented design allows simple control, reuse and adaptation of the base services provided by the system. This results in significantly reduced development and runtime effort for the application. ARTS creates an integrating platform that combines modern technologies from the field of object-oriented runtime systems with practical requirements from the field of scientific high-performance computing. (orig.)

  8. Characterization of the Zoarces viviparus liver transcriptome using massively parallel pyrosequencing

    Directory of Open Access Journals (Sweden)

    Asker Noomi

    2009-07-01

    Full Text Available Abstract Background The teleost Zoarces viviparus (eelpout) lives along the coasts of Northern Europe and has long been an established model organism for marine ecology and environmental monitoring. The scarce information about this species' genome has however restrained the use of efficient molecular-level assays, such as gene expression microarrays. Results In the present study we present the first comprehensive characterization of the Zoarces viviparus liver transcriptome. From 400,000 reads generated by massively parallel pyrosequencing, more than 50,000 putative transcript fragments were assembled, annotated and functionally classified. The data was estimated to cover roughly 40% of the total transcriptome, and homologues for about half of the genes of Gasterosteus aculeatus (stickleback) were identified. The sequence data was consequently used to design an oligonucleotide microarray for large-scale gene expression analysis. Conclusion Our results show that one run using a Genome Sequencer FLX from 454 Life Science/Roche generates enough genomic information for adequate de novo assembly of a large number of genes in a higher vertebrate. The generated sequence data, including the validated microarray probes, are publicly available to promote genome-wide research in Zoarces viviparus.

  9. Statistical analysis of next generation sequencing data

    CERN Document Server

    Nettleton, Dan

    2014-01-01

    Next Generation Sequencing (NGS) is the latest high throughput technology to revolutionize genomic research. NGS generates massive genomic datasets that play a key role in the big data phenomenon that surrounds us today. To extract signals from high-dimensional NGS data and make valid statistical inferences and predictions, novel data analytic and statistical techniques are needed. This book contains 20 chapters written by prominent statisticians working with NGS data. The topics range from basic preprocessing and analysis with NGS data to more complex genomic applications such as copy number variation and isoform expression detection. Research statisticians who want to learn about this growing and exciting area will find this book useful. In addition, many chapters from this book could be included in graduate-level classes in statistical bioinformatics for training future biostatisticians who will be expected to deal with genomic data in basic biomedical research, genomic clinical trials and personalized med...

  10. Climate models on massively parallel computers

    International Nuclear Information System (INIS)

    Vitart, F.; Rouvillois, P.

    1993-01-01

    First results obtained on massively parallel computers (Multiple Instruction Multiple Data and Single Instruction Multiple Data) make it possible to consider building coupled models with high resolutions. This would make possible the simulation of thermohaline circulation and other interaction phenomena between atmosphere and ocean. Increasing computer power, and the resulting improvement in resolution, will lead us to revise our approximations. The hydrostatic approximation (in ocean circulation) will no longer be valid when the grid mesh is smaller than a few kilometers: we shall have to find other models. The expertise in numerical analysis acquired at the Center of Limeil-Valenton (CEL-V) will be reused to devise global models taking into account the atmosphere, ocean, sea ice and biosphere, allowing climate simulation down to a regional scale

  11. Micro-mechanical Simulations of Soils using Massively Parallel Supercomputers

    Directory of Open Access Journals (Sweden)

    David W. Washington

    2004-06-01

    Full Text Available In this research a computer program, Trubal version 1.51, based on the Discrete Element Method, was converted to run on a Connection Machine (CM-5), a massively parallel supercomputer with 512 nodes, to expedite the computational times of simulating geotechnical boundary value problems. The dynamic memory algorithm in the Trubal program did not perform efficiently on the CM-2 machine with its Single Instruction Multiple Data (SIMD) architecture. This was due to the communication overhead involving global array reductions, global array broadcasts and random data movement. Therefore, the dynamic memory algorithm in the Trubal program was converted to a static memory arrangement and the program was successfully converted to run on CM-5 machines. The converted program was called "TRUBAL for Parallel Machines (TPM)." Simulating two physical triaxial experiments and comparing the simulation results with Trubal simulations validated the TPM program. With a 512-node CM-5 machine, TPM produced a nine-fold speedup, demonstrating the inherent parallelism within algorithms based on the Discrete Element Method.

  12. A Fast, High Quality, and Reproducible Parallel Lagged-Fibonacci Pseudorandom Number Generator

    Science.gov (United States)

    Mascagni, Michael; Cuccaro, Steven A.; Pryor, Daniel V.; Robinson, M. L.

    1995-07-01

    We study the suitability of the additive lagged-Fibonacci pseudo-random number generator for parallel computation. This generator has a relatively short period with respect to the size of its seed. However, the short period is more than made up for by the huge number of full-period cycles it contains. These different full-period cycles are called equivalence classes. We show how to enumerate the equivalence classes and how to compute seeds to select a given equivalence class. In addition, we present some theoretical measures of quality for this generator when used in parallel. Next, we conjecture on the size of these measures of quality for this generator. Extensive empirical evidence supports this conjecture. In addition, a probabilistic interpretation of these measures leads to another conjecture similarly supported by empirical evidence. Finally we give an explicit parallelization suitable for a fully reproducible asynchronous MIMD implementation.
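
    The generator under study has a one-line recurrence, x_n = (x_{n-s} + x_{n-r}) mod 2^m, which the sketch below implements with the classic small lags (r, s) = (17, 5) and m = 32. The naive seeding used here is an assumption for illustration only; the point of the paper is precisely that careful seed computation selects distinct full-period equivalence classes for different processes, which this sketch does not attempt.

        # Minimal additive lagged-Fibonacci generator with lags (17, 5), mod 2^32.
        class LaggedFibonacci:
            def __init__(self, seed, r=17, s=5, m=32):
                self.r, self.s, self.mod = r, s, 1 << m
                # Fill the lag table with a simple linear congruential generator.
                state, table = seed & 0xFFFFFFFF, []
                for _ in range(r):
                    state = (1664525 * state + 1013904223) & 0xFFFFFFFF
                    table.append(state)
                table[0] |= 1          # ensure at least one odd entry (non-degenerate)
                self.table, self.i = table, 0

            def next(self):
                r, s = self.r, self.s
                # table[i] holds x_{n-r}; x_{n-s} sits (r - s) slots ahead of it.
                new = (self.table[self.i] + self.table[(self.i + r - s) % r]) % self.mod
                self.table[self.i] = new
                self.i = (self.i + 1) % r
                return new

        gen = LaggedFibonacci(seed=12345)
        print([gen.next() for _ in range(5)])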

  13. Visualizing Network Traffic to Understand the Performance of Massively Parallel Simulations

    KAUST Repository

    Landge, A. G.

    2012-12-01

    The performance of massively parallel applications is often heavily impacted by the cost of communication among compute nodes. However, determining how to best use the network is a formidable task, made challenging by the ever increasing size and complexity of modern supercomputers. This paper applies visualization techniques to aid parallel application developers in understanding the network activity by enabling a detailed exploration of the flow of packets through the hardware interconnect. In order to visualize this large and complex data, we employ two linked views of the hardware network. The first is a 2D view that represents the network structure as one of several simplified planar projections. This view is designed to allow a user to easily identify trends and patterns in the network traffic. The second is a 3D view that augments the 2D view by preserving the physical network topology and providing a context that is familiar to the application developers. Using the massively parallel multi-physics code pF3D as a case study, we demonstrate that our tool provides valuable insight that we use to explain and optimize pF3D's performance on an IBM Blue Gene/P system. © 1995-2012 IEEE.

  14. Reduced complexity and latency for a massive MIMO system using a parallel detection algorithm

    Directory of Open Access Journals (Sweden)

    Shoichi Higuchi

    2017-09-01

    Full Text Available In recent years, massive MIMO systems have been widely researched to realize high-speed data transmission. Since massive MIMO systems use a large number of antennas, these systems require huge complexity to detect the signal. In this paper, we propose a novel detection method for massive MIMO using parallel detection with maximum likelihood detection with QR decomposition and M-algorithm (QRM-MLD) to reduce the complexity and latency. The proposed scheme obtains an R matrix after permutation of an H matrix and QR decomposition. Elements of the R matrix are then eliminated using a Gauss–Jordan elimination method. By using the modified R matrix, the proposed method can detect the transmitted signal using parallel detection. From the simulation results, the proposed scheme can achieve reduced complexity and latency with little degradation of the bit error rate (BER) performance compared with the conventional method.
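
    The QR-decomposition step that QRM-MLD builds on can be illustrated with a plain successive (decision-feedback) detector for a small QPSK MIMO link: decompose H = QR, rotate the received vector, and detect symbols from the last layer upward. The M-algorithm tree search, the Gauss–Jordan reduction of R, and the paper's parallel detection are not reproduced; the toy sizes and noise level below are assumptions.

        # Plain QR-based successive detection for a 4x4 QPSK MIMO system.
        import numpy as np

        rng = np.random.default_rng(5)
        nt = 4                                       # transmit = receive antennas
        qpsk = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)

        x = qpsk[rng.integers(4, size=nt)]           # transmitted symbols
        H = (rng.standard_normal((nt, nt)) + 1j * rng.standard_normal((nt, nt))) / np.sqrt(2)
        y = H @ x + 0.01 * (rng.standard_normal(nt) + 1j * rng.standard_normal(nt))

        Q, R = np.linalg.qr(H)
        z = Q.conj().T @ y                           # rotate the received vector

        x_hat = np.zeros(nt, dtype=complex)
        for k in reversed(range(nt)):                # detect bottom layer first
            resid = z[k] - R[k, k + 1:] @ x_hat[k + 1:]
            est = resid / R[k, k]
            x_hat[k] = qpsk[np.argmin(np.abs(qpsk - est))]   # slice to nearest symbol

        print("detected correctly:", bool(np.allclose(x_hat, x)))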

  15. Next-Generation Sequencing in Clinical Molecular Diagnostics of Cancer: Advantages and Challenges

    Directory of Open Access Journals (Sweden)

    Rajyalakshmi Luthra

    2015-10-01

    Full Text Available The application of next-generation sequencing (NGS) to characterize cancer genomes has resulted in the discovery of numerous genetic markers. Consequently, the number of markers that warrant routine screening in molecular diagnostic laboratories, often from limited tumor material, has increased. This increased demand has been difficult to manage by traditional low- and/or medium-throughput sequencing platforms. Massively parallel sequencing capabilities of NGS provide a much-needed alternative for mutation screening in multiple genes with a single low investment of DNA. However, implementation of NGS technologies, most of which are for research use only (RUO), in a diagnostic laboratory needs extensive validation in order to establish Clinical Laboratory Improvement Amendments (CLIA)- and College of American Pathologists (CAP)-compliant performance characteristics. Here, we have reviewed approaches for validation of NGS technology for routine screening of tumors. We discuss the criteria for selecting gene markers to include in the NGS panel and the deciding factors for selecting target capture approaches and sequencing platforms. We also discuss challenges in result reporting, storage and retrieval of the voluminous sequencing data and the future potential of clinical NGS.

  16. FY1995 next generation highly parallel database / datamining server using 100 PCs and ATM switch; 1995 nendo tasudai no pasokon wo ATM ketsugoshita jisedai choheiretsu database mining server no kaihatsu

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    The objective of the research is, first, to build a highly parallel processing system using 100 personal computers and an ATM switch. The former is a commodity for computing, while the latter can be regarded as a commodity for future communication systems. The second objective is to implement a parallel relational database management system and a parallel data mining system over the 100-PC cluster system. The third is to run decision-support queries typical of data warehouses, to run association rule mining, and to prove the effectiveness of the proposed architecture as a next generation parallel database/datamining server. The performance/cost ratio of PCs is significantly improved compared with workstations and proprietary systems because of their mass production. The cost of ATM switches is also decreasing considerably, since ATM is being widely accepted as a communication infrastructure. By combining 100 PCs as computing commodities and an ATM switch as a communication commodity, we built a large-scale parallel processing system inexpensively. Each node employs a Pentium Pro CPU, and the communication bandwidth between PCs is more than 120 Mbit/s. A new parallel relational DBMS was designed and implemented. TPC-D, a standard benchmark for decision support applications (100 GBytes), was executed. Our system attained much higher performance than current commercial systems, which are also much more expensive than ours. In addition, we developed a novel parallel data mining algorithm to extract association rules. We implemented it in our system and succeeded in attaining high performance. Thus it is verified that an ATM-connected PC cluster is very promising as a next generation platform for a large-scale database/datamining server. (NEDO)
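
    The data mining workload mentioned in this record is association rule mining. As a rough single-machine illustration of that workload — not the parallel algorithm developed on the 100-PC cluster — the sketch below counts frequent item pairs over a made-up transaction set and prints the rules that clear invented support and confidence thresholds.

```python
from itertools import combinations
from collections import Counter

# Toy transaction database (each row is one "market basket")
transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"beer", "bread"},
    {"beer", "butter"},
    {"bread", "butter", "milk"},
]
min_support, min_confidence = 0.4, 0.7

n = len(transactions)
item_counts = Counter(i for t in transactions for i in t)
pair_counts = Counter(p for t in transactions for p in combinations(sorted(t), 2))

# Keep pairs whose support clears the threshold, then derive rules A -> B
for (a, b), c in pair_counts.items():
    if c / n >= min_support:
        for lhs, rhs in ((a, b), (b, a)):
            confidence = c / item_counts[lhs]
            if confidence >= min_confidence:
                print(f"{lhs} -> {rhs}  support={c / n:.2f}  confidence={confidence:.2f}")
```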

  17. Design and performance characterization of electronic structure calculations on massively parallel supercomputers

    DEFF Research Database (Denmark)

    Romero, N. A.; Glinsvad, Christian; Larsen, Ask Hjorth

    2013-01-01

    Density functional theory (DFT) is the most widely employed electronic structure method because of its favorable scaling with system size and accuracy for a broad range of molecular and condensed-phase systems. The advent of massively parallel supercomputers has enhanced the scientific community...

  18. Image processing with massively parallel computer Quadrics Q1

    International Nuclear Information System (INIS)

    Della Rocca, A.B.; La Porta, L.; Ferriani, S.

    1995-05-01

    To evaluate the image processing capabilities of the massively parallel computer Quadrics Q1, a convolution algorithm was implemented and is described in this report. First, the mathematical definition of discrete convolution is recalled, together with the main Q1 hardware and software features. Then the different coded forms of the algorithm are described, and the Q1 performance is compared with that obtained on different computers. Finally, the conclusions report the main results and suggestions.
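
    Since the report starts from the definition of discrete convolution, a direct (unoptimized) 2D version of that definition is sketched below for reference; the Quadrics Q1 implementation itself is not reproduced here, and the array sizes and kernel are arbitrary choices for the example.

```python
import numpy as np

def convolve2d(image, kernel):
    """Direct 'valid' discrete 2D convolution: the kernel is flipped,
    then slid over the image so no padding policy has to be chosen."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    flipped = kernel[::-1, ::-1]                 # convolution flips the kernel
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * flipped)
    return out

# Usage: smooth a small random "image" with a 3x3 averaging kernel
rng = np.random.default_rng(1)
image = rng.random((8, 8))
kernel = np.full((3, 3), 1.0 / 9.0)
print(convolve2d(image, kernel).shape)           # (6, 6)
```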

  19. Massively parallel computing and the search for jets and black holes at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    Halyo, V., E-mail: vhalyo@gmail.com; LeGresley, P.; Lujan, P.

    2014-04-21

    Massively parallel computing could be the next leap necessary to reach an era of new discoveries at the LHC after the Higgs discovery. Scientific computing is a critical component of the LHC experiment, including operation, trigger, LHC computing GRID, simulation, and analysis. One way to improve the physics reach of the LHC is to take advantage of the flexibility of the trigger system by integrating coprocessors based on Graphics Processing Units (GPUs) or the Many Integrated Core (MIC) architecture into its server farm. This cutting-edge technology provides not only the means to accelerate existing algorithms, but also the opportunity to develop new algorithms that select events in the trigger that previously would have evaded detection. In this paper we describe new algorithms that would allow us to select in the trigger new topological signatures that include non-prompt jet and black hole-like objects in the silicon tracker.

  20. Next-to-next-to-leading order N-jettiness soft function for one massive colored particle production at hadron colliders

    Energy Technology Data Exchange (ETDEWEB)

    Li, Hai Tao [ARC Centre of Excellence for Particle Physics at the Terascale,School of Physics and Astronomy, Monash University, VIC-3800 (Australia); Wang, Jian [PRISMA Cluster of Excellence Mainz Institute for Theoretical Physics, Johannes Gutenberg University, D-55099 Mainz (Germany); Physik Department T31, Technische Universität München,James-Franck-Straße 1, D-85748 Garching (Germany)

    2017-02-01

    The N-jettiness subtraction has proven to be an efficient method to perform differential QCD next-to-next-to-leading order (NNLO) calculations in the last few years. One important ingredient of this method is the NNLO soft function. We calculate this soft function for one massive colored particle production at hadron colliders. We select the color octet and color triplet cases to present the final results. We also discuss its application in NLO and NNLO differential calculations.

  1. Search of massive star formation with COMICS

    Science.gov (United States)

    Okamoto, Yoshiko K.

    2004-04-01

    Mid-infrared observations are useful for studies of massive star formation. COMICS in particular offers powerful tools: imaging surveys of the circumstellar structures of forming massive stars, such as massive disks and cavity structures; mass estimates from spectroscopy of fine-structure lines; and high-dispersion spectroscopy to survey gas motion around formed stars. COMICS will open the next generation of infrared studies of massive star formation.

  2. Implementation of a Monte Carlo algorithm for neutron transport on a massively parallel SIMD machine

    International Nuclear Information System (INIS)

    Baker, R.S.

    1992-01-01

    We present some results from the recent adaptation of a vectorized Monte Carlo algorithm to a massively parallel architecture. The performance of the algorithm on a single-processor Cray Y-MP and a Thinking Machines Corporation CM-2 and CM-200 is compared for several test problems. The results show that significant speedups are obtainable for vectorized Monte Carlo algorithms on massively parallel machines, even when the algorithms are applied to realistic problems which require extensive variance reduction. However, the architecture of the Connection Machine does place some limitations on the regime in which the Monte Carlo algorithm may be expected to perform well.
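
    For orientation only, the sketch below is a purely illustrative analog Monte Carlo estimate of neutron transmission through a 1D slab; it is nothing like the vectorized, variance-reduced algorithm benchmarked on the Cray Y-MP and CM-2/CM-200 in this record, and the cross sections and slab thickness are invented for the example.

```python
import numpy as np

def slab_transmission(sigma_t, sigma_s, thickness, n_particles, rng):
    """Analog Monte Carlo: follow particles through a 1D slab, sampling
    exponential flight distances and isotropic scattering, and count leakage."""
    transmitted = 0
    for _ in range(n_particles):
        x, mu = 0.0, 1.0                       # position and direction cosine
        while True:
            x += mu * rng.exponential(1.0 / sigma_t)   # sample flight distance
            if x >= thickness:
                transmitted += 1               # leaked out the far side
                break
            if x < 0.0:
                break                          # leaked back out the near side
            if rng.random() < sigma_s / sigma_t:
                mu = rng.uniform(-1.0, 1.0)    # isotropic scatter
            else:
                break                          # absorbed
    return transmitted / n_particles

rng = np.random.default_rng(2)
print(slab_transmission(sigma_t=1.0, sigma_s=0.5, thickness=2.0,
                        n_particles=100_000, rng=rng))
```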

  3. Implementation of a Monte Carlo algorithm for neutron transport on a massively parallel SIMD machine

    International Nuclear Information System (INIS)

    Baker, R.S.

    1993-01-01

    We present some results from the recent adaptation of a vectorized Monte Carlo algorithm to a massively parallel architecture. The performance of the algorithm on a single-processor Cray Y-MP and a Thinking Machines Corporation CM-2 and CM-200 is compared for several test problems. The results show that significant speedups are obtainable for vectorized Monte Carlo algorithms on massively parallel machines, even when the algorithms are applied to realistic problems which require extensive variance reduction. However, the architecture of the Connection Machine does place some limitations on the regime in which the Monte Carlo algorithm may be expected to perform well. (orig.)

  4. Implementation of QR up- and downdating on a massively parallel computer

    DEFF Research Database (Denmark)

    Bendtsen, Claus; Hansen, Per Christian; Madsen, Kaj

    1995-01-01

    We describe an implementation of QR up- and downdating on a massively parallel computer (the Connection Machine CM-200) and show that the algorithm maps well onto the computer. In particular, we show how the use of corrected semi-normal equations for downdating can be efficiently implemented. We also illustrate the use of our algorithms in a new LP algorithm.
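
    The corrected semi-normal-equation downdating is specific to the paper, but the complementary updating step — re-triangularizing R after a new observation row is appended — follows a standard Givens-rotation recipe, sketched below for a dense real-valued R. This is a generic serial illustration, not the CM-200 implementation; the matrix sizes are arbitrary.

```python
import numpy as np

def qr_update_add_row(R, new_row):
    """Given the R factor of A (A = Q R), return the R factor of A with
    `new_row` appended, using Givens rotations to re-triangularize."""
    n = R.shape[1]
    work = np.vstack([R, new_row.astype(float)])   # (n+1) x n, almost triangular
    for j in range(n):
        a, b = work[j, j], work[n, j]
        r = np.hypot(a, b)
        if r == 0.0:
            continue
        c, s = a / r, b / r                        # Givens rotation zeroing work[n, j]
        row_j, row_n = work[j, :].copy(), work[n, :].copy()
        work[j, :] = c * row_j + s * row_n
        work[n, :] = -s * row_j + c * row_n
    return work[:n, :]                             # updated upper-triangular R

# Usage: check against a fresh QR factorization of the augmented matrix
rng = np.random.default_rng(3)
A = rng.standard_normal((6, 4))
row = rng.standard_normal(4)
R0 = np.linalg.qr(A, mode="r")
R1 = qr_update_add_row(R0, row)
R_ref = np.linalg.qr(np.vstack([A, row]), mode="r")
print(np.allclose(np.abs(R1), np.abs(R_ref)))      # True up to row signs
```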

  5. Integrated massively parallel sequencing of 15 autosomal STRs and Amelogenin using a simplified library preparation approach.

    Science.gov (United States)

    Xue, Jian; Wu, Riga; Pan, Yajiao; Wang, Shunxia; Qu, Baowang; Qin, Ying; Shi, Yuequn; Zhang, Chuchu; Li, Ran; Zhang, Liyan; Zhou, Cheng; Sun, Hongyu

    2018-04-02

    Massively parallel sequencing (MPS) technologies, also termed next-generation sequencing (NGS), are becoming increasingly popular in the study of short tandem repeats (STRs). However, current library preparation methods are usually based on ligation or two-round PCR that require more steps, making them time-consuming (about 2 days), laborious and expensive. In this study, a 16-plex STR typing system was designed with a fusion primer strategy based on the Ion Torrent S5 XL platform, which could effectively resolve the above challenges for forensic DNA database-type samples (bloodstains, saliva stains, etc.). The efficiency of this system was tested in 253 Han Chinese participants. The libraries were prepared without DNA isolation and adapter ligation, and the whole process only required approximately 5 h. The proportion of thoroughly genotyped samples, in which all 16 loci were successfully genotyped, was 86% (220/256). Of the samples, 99.7% showed 100% concordance between NGS-based STR typing and capillary electrophoresis (CE)-based STR typing. The inconsistency might have been caused by off-ladder alleles and mutations in primer binding sites. Overall, this panel enabled the large-scale genotyping of the DNA samples with controlled quality and quantity because it is a simple, operation-friendly process flow that saves labor, time and costs. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Beam dynamics simulations using a parallel version of PARMILA

    International Nuclear Information System (INIS)

    Ryne, R.D.

    1996-01-01

    The computer code PARMILA has been the primary tool for the design of proton and ion linacs in the United States for nearly three decades. Previously it was sufficient to perform simulations with of order 10000 particles, but recently the need to perform high resolution halo studies for next-generation, high intensity linacs has made it necessary to perform simulations with of order 100 million particles. With the advent of massively parallel computers such simulations are now within reach. Parallel computers already make it possible, for example, to perform beam dynamics calculations with tens of millions of particles, requiring over 10 GByte of core memory, in just a few hours. Also, parallel computers are becoming easier to use thanks to the availability of mature, Fortran-like languages such as Connection Machine Fortran and High Performance Fortran. We will describe our experience developing a parallel version of PARMILA and the performance of the new code

  7. Beam dynamics simulations using a parallel version of PARMILA

    International Nuclear Information System (INIS)

    Ryne, Robert

    1996-01-01

    The computer code PARMILA has been the primary tool for the design of proton and ion linacs in the United States for nearly three decades. Previously it was sufficient to perform simulations with of order 10000 particles, but recently the need to perform high resolution halo studies for next-generation, high intensity linacs has made it necessary to perform simulations with of order 100 million particles. With the advent of massively parallel computers such simulations are now within reach. Parallel computers already make it possible, for example, to perform beam dynamics calculations with tens of millions of particles, requiring over 10 GByte of core memory, in just a few hours. Also, parallel computers are becoming easier to use thanks to the availability of mature, Fortran-like languages such as Connection Machine Fortran and High Performance Fortran. We will describe our experience developing a parallel version of PARMILA and the performance of the new code. (author)

  9. Massively Parallel Single-Molecule Manipulation Using Centrifugal Force

    Science.gov (United States)

    Wong, Wesley; Halvorsen, Ken

    2011-03-01

    Precise manipulation of single molecules has led to remarkable insights in physics, chemistry, biology, and medicine. However, two issues that have impeded the widespread adoption of these techniques are equipment cost and the laborious nature of making measurements one molecule at a time. To meet these challenges, we have developed an approach that enables massively parallel single-molecule force measurements using centrifugal force. This approach is realized in the centrifuge force microscope, an instrument in which objects in an orbiting sample are subjected to a calibration-free, macroscopically uniform force field while their micro-to-nanoscopic motions are observed. We demonstrate high-throughput single-molecule force spectroscopy with this technique by performing thousands of rupture experiments in parallel, characterizing force-dependent unbinding kinetics of an antibody-antigen pair in minutes rather than days. Currently, we are taking steps to integrate high-resolution detection, fluorescence, temperature control and a greater dynamic range in force. With significant benefits in efficiency, cost, simplicity, and versatility, single-molecule centrifugation has the potential to expand single-molecule experimentation to a wider range of researchers and experimental systems.

  10. A massively parallel strategy for STR marker development, capture, and genotyping.

    Science.gov (United States)

    Kistler, Logan; Johnson, Stephen M; Irwin, Mitchell T; Louis, Edward E; Ratan, Aakrosh; Perry, George H

    2017-09-06

    Short tandem repeat (STR) variants are highly polymorphic markers that facilitate powerful population genetic analyses. STRs are especially valuable in conservation and ecological genetic research, yielding detailed information on population structure and short-term demographic fluctuations. Massively parallel sequencing has not previously been leveraged for scalable, efficient STR recovery. Here, we present a pipeline for developing STR markers directly from high-throughput shotgun sequencing data without a reference genome, and an approach for highly parallel target STR recovery. We employed our approach to capture a panel of 5000 STRs from a test group of diademed sifakas (Propithecus diadema, n = 3), endangered Malagasy rainforest lemurs, and we report extremely efficient recovery of targeted loci: 97.3-99.6% of STRs characterized with ≥10x non-redundant sequence coverage. We then tested our STR capture strategy on P. diadema fecal DNA, and report robust initial results and suggestions for future implementations. In addition to STR targets, this approach also generates large, genome-wide single nucleotide polymorphism (SNP) panels from flanking regions. Our method provides a cost-effective and scalable solution for rapid recovery of large STR and SNP datasets in any species without needing a reference genome, and can be used even with suboptimal DNA more easily acquired in conservation and ecological studies. Published by Oxford University Press on behalf of Nucleic Acids Research 2017.

  11. Comparison of Pre-Analytical FFPE Sample Preparation Methods and Their Impact on Massively Parallel Sequencing in Routine Diagnostics

    Science.gov (United States)

    Heydt, Carina; Fassunke, Jana; Künstlinger, Helen; Ihle, Michaela Angelika; König, Katharina; Heukamp, Lukas Carl; Schildhaus, Hans-Ulrich; Odenthal, Margarete; Büttner, Reinhard; Merkelbach-Bruse, Sabine

    2014-01-01

    Over the last few years, massively parallel sequencing has rapidly evolved and has now transitioned into molecular pathology routine laboratories. It is an attractive platform for analysing multiple genes at the same time with very little input material. Therefore, the need for high quality DNA obtained from automated DNA extraction systems has increased, especially for those laboratories dealing with formalin-fixed paraffin-embedded (FFPE) material and high sample throughput. This study evaluated five automated FFPE DNA extraction systems as well as five DNA quantification systems using the three most common techniques, UV spectrophotometry, fluorescent dye-based quantification and quantitative PCR, on 26 FFPE tissue samples. Additionally, the effects on downstream applications were analysed to find the most suitable pre-analytical methods for massively parallel sequencing in routine diagnostics. The results revealed that the Maxwell 16 from Promega (Mannheim, Germany) seems to be the superior system for DNA extraction from FFPE material. The extracts had a 1.3–24.6-fold higher DNA concentration in comparison to the other extraction systems, a higher quality and were most suitable for downstream applications. The comparison of the five quantification methods showed intermethod variations but all methods could be used to estimate the right amount for PCR amplification and for massively parallel sequencing. Interestingly, the best results in massively parallel sequencing were obtained with a DNA input of 15 ng determined by the NanoDrop 2000c spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA). No difference could be detected in mutation analysis based on the results of the quantification methods. These findings emphasise that it is particularly important to choose the most reliable and constant DNA extraction system, especially when using small biopsies and low elution volumes, and that all common DNA quantification techniques can be used for

  12. Comparison of pre-analytical FFPE sample preparation methods and their impact on massively parallel sequencing in routine diagnostics.

    Directory of Open Access Journals (Sweden)

    Carina Heydt

    Full Text Available Over the last few years, massively parallel sequencing has rapidly evolved and has now transitioned into molecular pathology routine laboratories. It is an attractive platform for analysing multiple genes at the same time with very little input material. Therefore, the need for high quality DNA obtained from automated DNA extraction systems has increased, especially for those laboratories dealing with formalin-fixed paraffin-embedded (FFPE) material and high sample throughput. This study evaluated five automated FFPE DNA extraction systems as well as five DNA quantification systems using the three most common techniques, UV spectrophotometry, fluorescent dye-based quantification and quantitative PCR, on 26 FFPE tissue samples. Additionally, the effects on downstream applications were analysed to find the most suitable pre-analytical methods for massively parallel sequencing in routine diagnostics. The results revealed that the Maxwell 16 from Promega (Mannheim, Germany) seems to be the superior system for DNA extraction from FFPE material. The extracts had a 1.3-24.6-fold higher DNA concentration in comparison to the other extraction systems, a higher quality and were most suitable for downstream applications. The comparison of the five quantification methods showed intermethod variations but all methods could be used to estimate the right amount for PCR amplification and for massively parallel sequencing. Interestingly, the best results in massively parallel sequencing were obtained with a DNA input of 15 ng determined by the NanoDrop 2000c spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA). No difference could be detected in mutation analysis based on the results of the quantification methods. These findings emphasise that it is particularly important to choose the most reliable and constant DNA extraction system, especially when using small biopsies and low elution volumes, and that all common DNA quantification techniques can

  13. Next generation light water reactors

    International Nuclear Information System (INIS)

    Omoto, Akira

    1992-01-01

    In the countries where new orders for nuclear reactors have ceased, the development of new types of light water reactors has been discussed, aiming at the revival of nuclear power. In Japan as well, since light water reactors are expected to remain the main power reactors for a long period, next-generation light water reactor technology has been discussed. The development of nuclear power requires an extremely long lead time. The next-generation light water reactors now under consideration will continue to be operated until the middle of the next century; they must therefore sufficiently anticipate the needs of that age. Improvements in the roles of personnel and facilities, simple design, flexibility toward fuel cycle trends, and so on are required of the next-generation light water reactors. The trend of the development of next-generation light water reactors is discussed. The construction of an ABWR was started in September 1991 as the No. 6 plant at the Kashiwazaki-Kariwa Power Station. (K.I.)

  14. ng: What next-generation languages can teach us about HENP frameworks in the manycore era

    International Nuclear Information System (INIS)

    Binet, Sébastien

    2011-01-01

    Current High Energy and Nuclear Physics (HENP) frameworks were written before multicore systems became widely deployed. A 'single-thread' execution model naturally emerged from that environment; however, this no longer fits the processing model at the dawn of the manycore era. Although previous work focused on minimizing the changes to be applied to the LHC frameworks (because of the data taking phase) while still trying to reap the benefits of the parallel-enhanced CPU architectures, this paper explores what new languages could bring to the design of the next-generation frameworks. Parallel programming is still in an intensive phase of R&D and no silver bullet exists despite the 30+ years of literature on the subject. Yet, several parallel programming styles have emerged: actors, message passing, communicating sequential processes, task-based programming, data flow programming, ... to name a few. We present work on prototyping a next-generation framework in new and expressive languages (Python and Go) to investigate how code clarity and robustness are affected and what the downsides are of using languages younger than FORTRAN/C/C++.
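
    Of the programming styles listed, task-based parallelism is the easiest to show in a few lines. The snippet below runs independent "event processing" tasks on a process pool, purely as an illustration of that style and not as a representation of the prototype framework described in the record; the event count and the toy per-event computation are made up.

```python
from concurrent.futures import ProcessPoolExecutor

def process_event(event_id: int) -> int:
    """Stand-in for per-event reconstruction work (here: a toy computation)."""
    return sum(i * i for i in range(event_id % 1000))

if __name__ == "__main__":
    events = range(10_000)
    # Task-based style: submit independent units of work and collect the results
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(process_event, events, chunksize=256))
    print(len(results), results[:3])
```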

  15. Computational chaos in massively parallel neural networks

    Science.gov (United States)

    Barhen, Jacob; Gulati, Sandeep

    1989-01-01

    A fundamental issue which directly impacts the scalability of current theoretical neural network models to massively parallel embodiments, in both software as well as hardware, is the inherent and unavoidable concurrent asynchronicity of emerging fine-grained computational ensembles and the possible emergence of chaotic manifestations. Previous analyses attributed dynamical instability to the topology of the interconnection matrix, to parasitic components or to propagation delays. However, researchers have observed the existence of emergent computational chaos in a concurrently asynchronous framework, independent of the network topology. The researchers present a methodology enabling the effective asynchronous operation of large-scale neural networks. Necessary and sufficient conditions guaranteeing concurrent asynchronous convergence are established in terms of contracting operators. Lyapunov exponents are computed formally to characterize the underlying nonlinear dynamics. Simulation results are presented to illustrate network convergence to the correct results, even in the presence of large delays.

  16. Massively Parallel and Scalable Implicit Time Integration Algorithms for Structural Dynamics

    Science.gov (United States)

    Farhat, Charbel

    1997-01-01

    Explicit codes are often used to simulate the nonlinear dynamics of large-scale structural systems, even for low frequency response, because the storage and CPU requirements entailed by the repeated factorizations traditionally found in implicit codes rapidly overwhelm the available computing resources. With the advent of parallel processing, this trend is accelerating because of the following additional facts: (a) explicit schemes are easier to parallelize than implicit ones, and (b) explicit schemes induce short range interprocessor communications that are relatively inexpensive, while the factorization methods used in most implicit schemes induce long range interprocessor communications that often ruin the sought-after speed-up. However, the time step restriction imposed by the Courant stability condition on all explicit schemes cannot yet be offset by the speed of the currently available parallel hardware. Therefore, it is essential to develop efficient alternatives to direct methods that are also amenable to massively parallel processing because implicit codes using unconditionally stable time-integration algorithms are computationally more efficient when simulating the low-frequency dynamics of aerospace structures.

  17. A massively parallel method of characteristic neutral particle transport code for GPUs

    International Nuclear Information System (INIS)

    Boyd, W. R.; Smith, K.; Forget, B.

    2013-01-01

    Over the past 20 years, parallel computing has enabled computers to grow ever larger and more powerful while scientific applications have advanced in sophistication and resolution. This trend is being challenged, however, as the power consumption for conventional parallel computing architectures has risen to unsustainable levels and memory limitations have come to dominate compute performance. Heterogeneous computing platforms, such as Graphics Processing Units (GPUs), are an increasingly popular paradigm for solving these issues. This paper explores the applicability of GPUs for deterministic neutron transport. A 2D method of characteristics (MOC) code - OpenMOC - has been developed with solvers for both shared memory multi-core platforms as well as GPUs. The multi-threading and memory locality methodologies for the GPU solver are presented. Performance results for the 2D C5G7 benchmark demonstrate 25-35 x speedup for MOC on the GPU. The lessons learned from this case study will provide the basis for further exploration of MOC on GPUs as well as design decisions for hardware vendors exploring technologies for the next generation of machines for scientific computing. (authors)

  18. Routing performance analysis and optimization within a massively parallel computer

    Science.gov (United States)

    Archer, Charles Jens; Peters, Amanda; Pinnow, Kurt Walter; Swartz, Brent Allen

    2013-04-16

    An apparatus, program product and method optimize the operation of a massively parallel computer system by, in part, receiving actual performance data concerning an application executed by the plurality of interconnected nodes, and analyzing the actual performance data to identify an actual performance pattern. A desired performance pattern may be determined for the application, and an algorithm may be selected from among a plurality of algorithms stored within a memory, the algorithm being configured to achieve the desired performance pattern based on the actual performance data.

  19. Accelerating Monte Carlo Molecular Simulations Using Novel Extrapolation Schemes Combined with Fast Database Generation on Massively Parallel Machines

    KAUST Repository

    Amir, Sahar Z.

    2013-05-01

    We introduce an efficient thermodynamically consistent technique to extrapolate and interpolate normalized Canonical NVT ensemble averages like pressure and energy for Lennard-Jones (L-J) fluids. Preliminary results show promising applicability in oil and gas modeling, where accurate determination of thermodynamic properties in reservoirs is challenging. The thermodynamic interpolation and thermodynamic extrapolation schemes predict ensemble averages at different thermodynamic conditions from expensively simulated data points. The methods reweight and reconstruct previously generated database values of Markov chains at neighboring temperature and density conditions. To investigate the efficiency of these methods, two databases corresponding to different combinations of normalized density and temperature are generated. One contains 175 Markov chains with 10,000,000 MC cycles each and the other contains 3000 Markov chains with 61,000,000 MC cycles each. For such massive database creation, two algorithms to parallelize the computations have been investigated. The accuracy of the thermodynamic extrapolation scheme is investigated with respect to classical interpolation and extrapolation. Finally, thermodynamic interpolation benefiting from four neighboring Markov chain points is implemented and compared with previous schemes. The thermodynamic interpolation scheme using knowledge from the four neighboring points proves to be more accurate than the thermodynamic extrapolation from the closest point only, while both thermodynamic extrapolation and thermodynamic interpolation are more accurate than the classical interpolation and extrapolation. The investigated extrapolation scheme has great potential in oil and gas reservoir modeling. That is, such a scheme has the potential to speed up the MCMC thermodynamic computation to be comparable with conventional Equation of State approaches in efficiency. In particular, this makes it applicable to large-scale optimization of L
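
    The reweighting idea behind such extrapolation — reuse samples generated at one temperature to estimate averages at a neighboring one — can be shown with single-histogram reweighting of energies from a toy Markov chain. This is a generic illustration under a quadratic potential, not the authors' thermodynamic extrapolation/interpolation method for L-J fluids; the chain length and temperatures are arbitrary.

```python
import numpy as np

def metropolis_samples(beta, n_steps, rng, step=1.0):
    """Metropolis chain for a particle in the potential E(x) = x^2 / 2."""
    x, energies = 0.0, np.empty(n_steps)
    for i in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        if rng.random() < np.exp(-beta * (x_new**2 - x**2) / 2.0):
            x = x_new
        energies[i] = x**2 / 2.0
    return energies

def reweight_mean_energy(energies, beta_from, beta_to):
    """Single-histogram reweighting of <E> from beta_from to beta_to."""
    w = np.exp(-(beta_to - beta_from) * energies)
    return np.sum(w * energies) / np.sum(w)

rng = np.random.default_rng(4)
E = metropolis_samples(beta=1.0, n_steps=200_000, rng=rng)
# Analytic reference for this quadratic potential: <E> = 1 / (2 * beta)
print(reweight_mean_energy(E, 1.0, 1.2), 1.0 / (2 * 1.2))
```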

  20. Intelligent trigger by massively parallel processors for high energy physics experiments

    International Nuclear Information System (INIS)

    Rohrbach, F.; Vesztergombi, G.

    1992-01-01

    The CERN-MPPC collaboration concentrates its effort on the development of machines based on massive parallelism with thousands of integrated processing elements, arranged in a string. Seven applications are under detailed studies within the collaboration: three for LHC, one for SSC, two for fixed target high energy physics at CERN and one for HDTV. Preliminary results are presented. They show that the objectives should be reached with the use of the ASP architecture. (author)

  1. Molecular diagnosis of glycogen storage disease and disorders with overlapping clinical symptoms by massive parallel sequencing.

    Science.gov (United States)

    Vega, Ana I; Medrano, Celia; Navarrete, Rosa; Desviat, Lourdes R; Merinero, Begoña; Rodríguez-Pombo, Pilar; Vitoria, Isidro; Ugarte, Magdalena; Pérez-Cerdá, Celia; Pérez, Belen

    2016-10-01

    Glycogen storage disease (GSD) is an umbrella term for a group of genetic disorders that involve the abnormal metabolism of glycogen; to date, 23 types of GSD have been identified. The nonspecific clinical presentation of GSD and the lack of specific biomarkers mean that Sanger sequencing is now widely relied on for making a diagnosis. However, this gene-by-gene sequencing technique is both laborious and costly, which is a consequence of the number of genes to be sequenced and the large size of some genes. This work reports the use of massively parallel sequencing to diagnose patients at our laboratory in Spain using either a customized gene panel (targeted exome sequencing) or the Illumina Clinical-Exome TruSight One Gene Panel (clinical exome sequencing (CES)). Sequence variants were matched against biochemical and clinical hallmarks. Pathogenic mutations were detected in 23 patients. Twenty-two mutations were recognized (mostly loss-of-function mutations), including 11 that were novel in GSD-associated genes. In addition, CES detected five patients with mutations in ALDOB, LIPA, NKX2-5, CPT2, or ANO5. Although these genes are not involved in GSD, they are associated with overlapping phenotypic characteristics such as hepatic, muscular, and cardiac dysfunction. These results show that next-generation sequencing, in combination with the detection of biochemical and clinical hallmarks, provides an accurate, high-throughput means of making genetic diagnoses of GSD and related diseases. Genet Med 18(10), 1037-1043.

  2. Massively parallel fabrication of repetitive nanostructures: nanolithography for nanoarrays

    International Nuclear Information System (INIS)

    Luttge, Regina

    2009-01-01

    This topical review provides an overview of nanolithographic techniques for nanoarrays. Using patterning techniques such as lithography, we normally aim for a higher-order architecture similar to functional systems in nature. Inspired by the wealth of complexity in nature, these architectures are translated into technical devices, for example, found in integrated circuitry or other systems in which structural elements work as discrete building blocks in microdevices. Ordered artificial nanostructures (arrays of pillars, holes and wires) have shown particular properties and bring about the opportunity to modify and tune the device operation. Moreover, these nanostructures deliver new applications, for example, the nanoscale control of spin direction within a nanomagnet. Subsequently, we can look for applications where this unique property of the smallest manufactured element is repetitively used such as, for example with respect to spin, in nanopatterned magnetic media for data storage. These nanostructures are generally called nanoarrays. Most of these applications require nanopatterns produced in a massively parallel fashion, which can be directly realized by laser interference (areas up to 4 cm² are easily achieved with a Lloyd's mirror set-up). In this topical review we will further highlight the application of laser interference as a tool for nanofabrication, its limitations and ultimate advantages towards a variety of devices including nanostructuring for photonic crystal devices, high resolution patterned media and surface modifications of medical implants. The unique properties of nanostructured surfaces have also found applications in biomedical nanoarrays used either for diagnostic or functional assays including catalytic reactions on chip. Bio-inspired templated nanoarrays will be presented in perspective to other massively parallel nanolithography techniques currently discussed in the scientific literature. (topical review)

  3. Massively parallel DNA sequencing facilitates diagnosis of patients with Usher syndrome type 1.

    Directory of Open Access Journals (Sweden)

    Hidekane Yoshimura

    Full Text Available Usher syndrome is an autosomal recessive disorder manifesting hearing loss, retinitis pigmentosa and vestibular dysfunction, and having three clinical subtypes. Usher syndrome type 1 is the most severe subtype due to its profound hearing loss, lack of vestibular responses, and retinitis pigmentosa that appears in prepuberty. Six of the corresponding genes have been identified, making early diagnosis through DNA testing possible, with many immediate and several long-term advantages for patients and their families. However, the conventional genetic techniques, such as direct sequence analysis, are both time-consuming and expensive. Targeted exon sequencing of selected genes using the massively parallel DNA sequencing technology will potentially enable us to systematically tackle previously intractable monogenic disorders and improve molecular diagnosis. Using this technique combined with direct sequence analysis, we screened 17 unrelated Usher syndrome type 1 patients and detected probable pathogenic variants in 16 of them (94.1%) who carried at least one mutation. Seven patients had the MYO7A mutation (41.2%), which is the most common type in Japanese. Most of the mutations were detected only by the massively parallel DNA sequencing. We report here four patients, who had probable pathogenic mutations in two different Usher syndrome type 1 genes, and one case of MYO7A/PCDH15 digenic inheritance. This is the first report of Usher syndrome mutation analysis using massively parallel DNA sequencing and the frequency of Usher syndrome type 1 genes in Japanese. Mutation screening using this technique has the power to quickly identify mutations of many causative genes while maintaining cost-benefit performance. In addition, the simultaneous mutation analysis of large numbers of genes is useful for detecting mutations in different genes that are possibly disease modifiers or of digenic inheritance.

  4. Massively parallel DNA sequencing facilitates diagnosis of patients with Usher syndrome type 1.

    Science.gov (United States)

    Yoshimura, Hidekane; Iwasaki, Satoshi; Nishio, Shin-Ya; Kumakawa, Kozo; Tono, Tetsuya; Kobayashi, Yumiko; Sato, Hiroaki; Nagai, Kyoko; Ishikawa, Kotaro; Ikezono, Tetsuo; Naito, Yasushi; Fukushima, Kunihiro; Oshikawa, Chie; Kimitsuki, Takashi; Nakanishi, Hiroshi; Usami, Shin-Ichi

    2014-01-01

    Usher syndrome is an autosomal recessive disorder manifesting hearing loss, retinitis pigmentosa and vestibular dysfunction, and having three clinical subtypes. Usher syndrome type 1 is the most severe subtype due to its profound hearing loss, lack of vestibular responses, and retinitis pigmentosa that appears in prepuberty. Six of the corresponding genes have been identified, making early diagnosis through DNA testing possible, with many immediate and several long-term advantages for patients and their families. However, the conventional genetic techniques, such as direct sequence analysis, are both time-consuming and expensive. Targeted exon sequencing of selected genes using the massively parallel DNA sequencing technology will potentially enable us to systematically tackle previously intractable monogenic disorders and improve molecular diagnosis. Using this technique combined with direct sequence analysis, we screened 17 unrelated Usher syndrome type 1 patients and detected probable pathogenic variants in 16 of them (94.1%) who carried at least one mutation. Seven patients had the MYO7A mutation (41.2%), which is the most common type in Japanese. Most of the mutations were detected only by the massively parallel DNA sequencing. We report here four patients, who had probable pathogenic mutations in two different Usher syndrome type 1 genes, and one case of MYO7A/PCDH15 digenic inheritance. This is the first report of Usher syndrome mutation analysis using massively parallel DNA sequencing and the frequency of Usher syndrome type 1 genes in Japanese. Mutation screening using this technique has the power to quickly identify mutations of many causative genes while maintaining cost-benefit performance. In addition, the simultaneous mutation analysis of large numbers of genes is useful for detecting mutations in different genes that are possibly disease modifiers or of digenic inheritance.

  5. Optimization of multi-phase compressible lattice Boltzmann codes on massively parallel multi-core systems

    NARCIS (Netherlands)

    Biferale, L.; Mantovani, F.; Pivanti, M.; Pozzati, F.; Sbragaglia, M.; Schifano, S.F.; Toschi, F.; Tripiccione, R.

    2011-01-01

    We develop a Lattice Boltzmann code for computational fluid-dynamics and optimize it for massively parallel systems based on multi-core processors. Our code describes 2D multi-phase compressible flows. We analyze the performance bottlenecks that we find as we gradually expose a larger fraction of

  6. Efficient linear precoding for massive MIMO systems using truncated polynomial expansion

    KAUST Repository

    Müller, Axel; Kammoun, Abla; Björnson, Emil; Debbah, Mérouane

    2014-01-01

    Massive multiple-input multiple-output (MIMO) techniques have been proposed as a solution to satisfy many requirements of next generation cellular systems. One downside of massive MIMO is the increased complexity of computing the precoding
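
    Although the record is truncated, the technique named in the title — truncated polynomial expansion (TPE) precoding — reduces complexity by replacing the matrix inverse in regularized zero-forcing with a short matrix polynomial. The sketch below shows that idea with an unweighted truncated Neumann series; it is a simplification of the schemes studied in the paper (the coefficient optimization is omitted) and the dimensions, regularization and expansion order are arbitrary assumptions.

```python
import numpy as np

def tpe_inverse(A, order):
    """Approximate A^{-1} (A Hermitian positive definite) with a truncated
    Neumann series: A^{-1} ~ (1/c) * sum_k (I - A/c)^k, which converges when
    the eigenvalues of (I - A/c) lie inside the unit circle."""
    eig = np.linalg.eigvalsh(A)
    c = (eig[0] + eig[-1]) / 2.0          # scaling chosen for fast convergence
    M = np.eye(A.shape[0]) - A / c
    term = np.eye(A.shape[0])
    acc = np.eye(A.shape[0])
    for _ in range(order - 1):
        term = term @ M
        acc = acc + term
    return acc / c

# Usage: regularized zero-forcing precoder with the inverse replaced by the TPE
rng = np.random.default_rng(5)
K, M_ant, alpha = 8, 32, 0.1              # users, antennas, regularization
H = (rng.standard_normal((K, M_ant)) + 1j * rng.standard_normal((K, M_ant))) / np.sqrt(2)
A = H @ H.conj().T + alpha * np.eye(K)
W_exact = H.conj().T @ np.linalg.inv(A)
W_tpe = H.conj().T @ tpe_inverse(A, order=8)
print(np.linalg.norm(W_exact - W_tpe) / np.linalg.norm(W_exact))  # small residual
```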

  7. Enhanced memory architecture for massively parallel vision chip

    Science.gov (United States)

    Chen, Zhe; Yang, Jie; Liu, Liyuan; Wu, Nanjian

    2015-04-01

    Local memory architecture plays an important role in a high-performance massively parallel vision chip. In this paper, we propose an enhanced memory architecture with compact circuit area designed in a full-custom flow. The memory consists of separate master-stage static latches and shared slave-stage dynamic latches. We use split transmission transistors on the input data path to enhance tolerance for charge sharing and to achieve random read/write capabilities. The memory is designed in a 0.18 μm CMOS process. The memory achieves an area overhead of 16.6 μm²/bit. Simulation results show that the maximum operating frequency reaches 410 MHz and the corresponding peak dynamic power consumption for a 64-bit memory unit is 190 μW under 1.8 V supply voltage.

  8. PUMA: An Operating System for Massively Parallel Systems

    Directory of Open Access Journals (Sweden)

    Stephen R. Wheat

    1994-01-01

    Full Text Available This article presents an overview of PUMA (Performance-oriented, User-managed Messaging Architecture), a message-passing kernel for massively parallel systems. Message passing in PUMA is based on portals – an opening in the address space of an application process. Once an application process has established a portal, other processes can write values into the portal using a simple send operation. Because messages are written directly into the address space of the receiving process, there is no need to buffer messages in the PUMA kernel and later copy them into the application's address space. PUMA consists of two components: the quintessential kernel (Q-Kernel) and the process control thread (PCT). While the PCT makes management decisions, the Q-Kernel controls access and implements the policies specified by the PCT.

  9. Generation Next

    Science.gov (United States)

    Hawkins, B. Denise

    2010-01-01

    There is a shortage of accounting professors with Ph.D.s who can prepare the next generation. To help reverse the faculty deficit, the American Institute of Certified Public Accountants (CPAs) has created the new Accounting Doctoral Scholars program by pooling more than $17 million and soliciting commitments from more than 70 of the nation's…

  10. Massively parallel data processing for quantitative total flow imaging with optical coherence microscopy and tomography

    Science.gov (United States)

    Sylwestrzak, Marcin; Szlag, Daniel; Marchand, Paul J.; Kumar, Ashwin S.; Lasser, Theo

    2017-08-01

    We present an application of massively parallel processing of quantitative flow measurement data acquired using spectral optical coherence microscopy (SOCM). The need for massive signal processing of these particular datasets has been a major hurdle for many applications based on SOCM. In view of this difficulty, we implemented and adapted quantitative total flow estimation algorithms on graphics processing units (GPU) and achieved a 150-fold reduction in processing time when compared to a former CPU implementation. As SOCM constitutes the microscopy counterpart to spectral optical coherence tomography (SOCT), the developed processing procedure can be applied to both imaging modalities. We present the developed DLL library integrated in MATLAB (with an example) and have included the source code for adaptations and future improvements.
    Program summary:
    Catalogue identifier: AFBT_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AFBT_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU GPLv3
    No. of lines in distributed program, including test data, etc.: 913552
    No. of bytes in distributed program, including test data, etc.: 270876249
    Distribution format: tar.gz
    Programming language: CUDA/C, MATLAB
    Computer: Intel x64 CPU, GPU supporting CUDA technology
    Operating system: 64-bit Windows 7 Professional
    Has the code been vectorized or parallelized?: Yes, CPU code has been vectorized in MATLAB, CUDA code has been parallelized
    RAM: Dependent on user's parameters, typically between several gigabytes and several tens of gigabytes
    Classification: 6.5, 18
    Nature of problem: Speed up of data processing in optical coherence microscopy
    Solution method: Utilization of GPU for massively parallel data processing
    Additional comments: Compiled DLL library with source code and documentation, example of utilization (MATLAB script with raw data)
    Running time: 1.8 s for one B-scan (150× faster in comparison to the CPU

  11. Next Generation Microshutter Arrays Project

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to develop the next generation MicroShutter Array (MSA) as a multi-object field selector for missions anticipated in the next two decades. For many...

  12. Project control - the next generation

    International Nuclear Information System (INIS)

    Iorii, V.F.; McKinnon, B.L.

    1993-01-01

    The Yucca Mountain Site Characterization Project (YMP) is the U.S. Department of Energy's (DOE) second largest Major System Acquisition Project. We have developed an integrated planning and control system (called PACS) that we believe represents the 'Next Generation' in project control. PACS integrates technical scope, cost, and schedule information for over 50 participating organizations and produces performance measurement reports for science and engineering managers at all levels. Our 'Next Generation' project control tool, PACS, has been found to be in compliance with the new DOE Project Control System Guidelines. Additionally, the nuclear utility oversight group of the Edison Electric Institute has suggested PACS be used as a model for other civilian radioactive waste management projects. A 'Next Generation' project control tool will be necessary to do science in the 21st century.

  13. A highly scalable massively parallel fast marching method for the Eikonal equation

    Science.gov (United States)

    Yang, Jianming; Stern, Frederick

    2017-03-01

    The fast marching method is a widely used numerical method for solving the Eikonal equation arising from a variety of scientific and engineering fields. It has long been deemed inherently sequential, and an efficient parallel algorithm applicable to large-scale practical applications has not been available in the literature. In this study, we present a highly scalable massively parallel implementation of the fast marching method using a domain decomposition approach. Central to this algorithm is a novel restarted narrow band approach that coordinates the frequency of communications and the amount of computations extra to a sequential run for achieving unprecedented parallel performance. Within each restart, the narrow band fast marching method is executed; simple synchronous local exchanges and global reductions are adopted for communicating updated data in the overlapping regions between neighboring subdomains and getting the latest front status, respectively. The independence of front characteristics is exploited through special data structures and augmented status tags to extract the masked parallelism within the fast marching method. The efficiency, flexibility, and applicability of the parallel algorithm are demonstrated through several examples. These problems are extensively tested on six grids with up to 1 billion points using different numbers of processes ranging from 1 to 65536. Remarkable parallel speedups are achieved using tens of thousands of processes. Detailed pseudo-codes for both the sequential and parallel algorithms are provided to illustrate the simplicity of the parallel implementation and its similarity to the sequential narrow band fast marching algorithm.
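
    For readers unfamiliar with the baseline, a compact sequential narrow-band fast marching solver for |∇T| = 1/F on a uniform 2D grid is sketched below. It follows the standard heap-based formulation that the paper parallelizes; it is not the authors' domain-decomposed code, and the grid size and source location in the usage example are arbitrary.

```python
import heapq
import numpy as np

def fast_marching(speed, sources):
    """Sequential narrow-band fast marching for |grad T| = 1/speed on a unit-spaced grid."""
    ny, nx = speed.shape
    T = np.full((ny, nx), np.inf)
    frozen = np.zeros((ny, nx), dtype=bool)
    heap = [(0.0, s) for s in sources]
    for _, (i, j) in heap:
        T[i, j] = 0.0
    heapq.heapify(heap)
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if frozen[i, j]:
            continue
        frozen[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < ny and 0 <= b < nx and not frozen[a, b]:
                # Upwind neighbors in x and y, then solve the local quadratic
                tx = min(T[a, b - 1] if b > 0 else np.inf, T[a, b + 1] if b < nx - 1 else np.inf)
                ty = min(T[a - 1, b] if a > 0 else np.inf, T[a + 1, b] if a < ny - 1 else np.inf)
                lo, hi = sorted((tx, ty))
                f = 1.0 / speed[a, b]
                if hi - lo >= f:                       # one-sided update
                    t_new = lo + f
                else:                                  # two-sided quadratic update
                    t_new = 0.5 * (lo + hi + np.sqrt(2 * f * f - (hi - lo) ** 2))
                if t_new < T[a, b]:
                    T[a, b] = t_new
                    heapq.heappush(heap, (t_new, (a, b)))
    return T

# Usage: unit speed, single source at the corner -> T approximates Euclidean distance
T = fast_marching(np.ones((50, 50)), sources=[(0, 0)])
print(T[49, 49], np.hypot(49, 49))   # close, up to the scheme's discretization error
```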

  14. Tinker-HP: a massively parallel molecular dynamics package for multiscale simulations of large complex systems with advanced point dipole polarizable force fields.

    Science.gov (United States)

    Lagardère, Louis; Jolly, Luc-Henri; Lipparini, Filippo; Aviat, Félix; Stamm, Benjamin; Jing, Zhifeng F; Harger, Matthew; Torabifard, Hedieh; Cisneros, G Andrés; Schnieders, Michael J; Gresh, Nohad; Maday, Yvon; Ren, Pengyu Y; Ponder, Jay W; Piquemal, Jean-Philip

    2018-01-28

    We present Tinker-HP, a massively MPI-parallel package dedicated to classical molecular dynamics (MD) and to multiscale simulations, using advanced polarizable force fields (PFF) encompassing distributed multipole electrostatics. Tinker-HP is an evolution of the popular Tinker package code that conserves its simplicity of use and its reference double precision implementation for CPUs. Grounded in interdisciplinary efforts with applied mathematics, Tinker-HP allows for long polarizable MD simulations on large systems up to millions of atoms. We detail in the paper the newly developed extension of massively parallel 3D spatial decomposition to point dipole polarizable models as well as their coupling to efficient Krylov iterative and non-iterative polarization solvers. The design of the code allows the use of various computer systems ranging from laboratory workstations to modern petascale supercomputers with thousands of cores. Tinker-HP therefore provides the first high-performance, scalable CPU computing environment for the development of next generation point dipole PFFs and for production simulations. Strategies linking Tinker-HP to Quantum Mechanics (QM) in the framework of multiscale polarizable self-consistent QM/MD simulations are also provided. The capabilities, performance and scalability of the software are demonstrated via benchmark calculations using the polarizable AMOEBA force field on systems ranging from large water boxes of increasing size and ionic liquids to (very) large biosystems encompassing several proteins as well as the complete satellite tobacco mosaic virus and ribosome structures. For small systems, Tinker-HP appears to be competitive with the Tinker-OpenMM GPU implementation of Tinker. As the system size grows, Tinker-HP remains operational thanks to its access to distributed memory and takes advantage of its new algorithms, enabling stable long-timescale polarizable simulations. Overall, a several thousand-fold acceleration over

  15. New Dimensions of Research on Actinomycetes: Quest for Next Generation Antibiotics

    Directory of Open Access Journals (Sweden)

    Polpass Arul Jose

    2016-08-01

    Full Text Available Starting with the discovery of streptomycin, the promise of natural products research on actinomycetes has captivated researchers and offered an array of life-saving antibiotics. However, most actinomycetes have received little attention from researchers beyond isolation and activity screening. Noticeable gaps in genomic information and associated biosynthetic potential of actinomycetes are mainly the reasons for this situation, which has led to a decline in the discovery rate of novel antibiotics. Recent insights gained from genome mining have revealed a massive existence of previously unrecognized biosynthetic potential in actinomycetes. Successive developments in next-generation sequencing, genome editing, analytical separation and high-resolution spectroscopic methods have reinvigorated interest in such actinomycetes and opened new avenues for the discovery of natural and natural-inspired antibiotics. This article describes, for researchers worldwide, the new dimensions that have driven the ongoing resurgence of research on actinomycetes, together with the historical background since the field's commencement in 1940. Coupled with increasing advancement in molecular and analytical tools and techniques, the discovery of next-generation antibiotics could be possible by revisiting the untapped potential of actinomycetes from different natural sources.

  16. Massively parallel whole genome amplification for single-cell sequencing using droplet microfluidics.

    Science.gov (United States)

    Hosokawa, Masahito; Nishikawa, Yohei; Kogawa, Masato; Takeyama, Haruko

    2017-07-12

    Massively parallel single-cell genome sequencing is required to further understand genetic diversities in complex biological systems. Whole genome amplification (WGA) is the first step for single-cell sequencing, but its throughput and accuracy are insufficient in conventional reaction platforms. Here, we introduce single droplet multiple displacement amplification (sd-MDA), a method that enables massively parallel amplification of single cell genomes while maintaining sequence accuracy and specificity. Tens of thousands of single cells are compartmentalized in millions of picoliter droplets and then subjected to lysis and WGA by passive droplet fusion in microfluidic channels. Because single cells are isolated in compartments, their genomes are amplified to saturation without contamination. This enables the high-throughput acquisition of contamination-free and cell specific sequence reads from single cells (21,000 single-cells/h), resulting in enhancement of the sequence data quality compared to conventional methods. This method allowed WGA of both single bacterial cells and human cancer cells. The obtained sequencing coverage rivals those of conventional techniques with superior sequence quality. In addition, we also demonstrate de novo assembly of uncultured soil bacteria and obtain draft genomes from single cell sequencing. This sd-MDA is promising for flexible and scalable use in single-cell sequencing.

  17. QuickNGS elevates Next-Generation Sequencing data analysis to a new level of automation.

    Science.gov (United States)

    Wagle, Prerana; Nikolić, Miloš; Frommolt, Peter

    2015-07-01

    Next-Generation Sequencing (NGS) has emerged as a widely used tool in molecular biology. While time and cost for the sequencing itself are decreasing, the analysis of the massive amounts of data remains challenging. Since multiple algorithmic approaches for the basic data analysis have been developed, there is now an increasing need to efficiently use these tools to obtain results in reasonable time. We have developed QuickNGS, a new workflow system for laboratories with the need to analyze data from multiple NGS projects at a time. QuickNGS takes advantage of parallel computing resources, a comprehensive back-end database, and a careful selection of previously published algorithmic approaches to build fully automated data analysis workflows. We demonstrate the efficiency of our new software by a comprehensive analysis of 10 RNA-Seq samples which we can finish in only a few minutes of hands-on time. The approach we have taken is suitable to process even much larger numbers of samples and multiple projects at a time. Our approach considerably reduces the barriers that still limit the usability of the powerful NGS technology and finally decreases the time to be spent before proceeding to further downstream analysis and interpretation of the data.

  18. Block iterative restoration of astronomical images with the massively parallel processor

    International Nuclear Information System (INIS)

    Heap, S.R.; Lindler, D.J.

    1987-01-01

    A method is described for algebraic image restoration capable of treating astronomical images. For a typical 500 x 500 image, direct algebraic restoration would require the solution of a 250,000 x 250,000 linear system. The block iterative approach is used to reduce the problem to solving 4900 121 x 121 linear systems. The algorithm was implemented on the Goddard Massively Parallel Processor, which can solve a 121 x 121 system in approximately 0.06 seconds. Examples are shown of the results for various astronomical images
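
    The core idea of the block iterative approach can be sketched with a small block-Jacobi solver: the large system is split into diagonal blocks that are solved independently (and hence in parallel) on each sweep. The sizes below are illustrative and the code is not the original MPP implementation.

```python
# Block-Jacobi sketch: A x = b is split into small diagonal blocks that are
# solved independently each sweep. Sizes are illustrative, not those of the
# 500 x 500 image restoration problem in the record.
import numpy as np

rng = np.random.default_rng(0)
n, nb = 120, 10                                  # total unknowns, block size
A = rng.normal(size=(n, n)) * 0.01
A += np.diag(np.abs(A).sum(axis=1) + 1.0)        # make A diagonally dominant
b = rng.normal(size=n)

x = np.zeros(n)
for sweep in range(50):
    x_new = np.empty_like(x)
    for i in range(0, n, nb):                    # each block could run on its own processor
        sl = slice(i, i + nb)
        rhs = b[sl] - A[sl, :] @ x + A[sl, sl] @ x[sl]
        x_new[sl] = np.linalg.solve(A[sl, sl], rhs)
    x = x_new

print("residual:", np.linalg.norm(A @ x - b))
```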

  19. ngs.plot: Quick mining and visualization of next-generation sequencing data by integrating genomic databases.

    Science.gov (United States)

    Shen, Li; Shao, Ningyi; Liu, Xiaochuan; Nestler, Eric

    2014-04-15

    Understanding the relationship between the millions of functional DNA elements and their protein regulators, and how they work in conjunction to manifest diverse phenotypes, is key to advancing our understanding of the mammalian genome. Next-generation sequencing technology is now used widely to probe these protein-DNA interactions and to profile gene expression at a genome-wide scale. As the cost of DNA sequencing continues to fall, the interpretation of the ever increasing amount of data generated represents a considerable challenge. We have developed ngs.plot - a standalone program to visualize enrichment patterns of DNA-interacting proteins at functionally important regions based on next-generation sequencing data. We demonstrate that ngs.plot is not only efficient but also scalable. We use a few examples to demonstrate that ngs.plot is easy to use and yet very powerful to generate figures that are publication ready. We conclude that ngs.plot is a useful tool to help fill the gap between massive datasets and genomic information in this era of big sequencing data.

  20. Scalable Parallel Distributed Coprocessor System for Graph Searching Problems with Massive Data

    Directory of Open Access Journals (Sweden)

    Wanrong Huang

    2017-01-01

    Full Text Available Internet applications such as network searching, electronic commerce, and modern medical applications produce and process massive data, and considerable data parallelism exists in their computation processes. Breadth-first search (BFS), a fundamental traversal algorithm, underlies many graph processing applications and benchmarks as graphs grow in scale. A variety of scientific programming methods have been proposed for accelerating and parallelizing BFS because of the poor temporal and spatial locality caused by its inherently irregular memory access patterns, and new parallel hardware can provide further improvement for these methods. To address small-world graph problems, we propose a scalable and novel field-programmable gate array-based heterogeneous multicore system for scientific programming. Each core is multithreaded for streaming processing, and the InfiniBand communication network is adopted for scalability. We design a binary-search-based address-mapping scheme to unify all processor addresses. Within the limits permitted by the Graph500 benchmark, and after testing a 1D parallel hybrid BFS algorithm, our 8-core, 8-thread-per-core system achieved superior performance and efficiency compared with prior work at the same degree of parallelism. Our system is efficient not as a special-purpose acceleration unit but as a processor platform for graph searching applications.
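
    The frontier-expansion pattern that the 1D parallel hybrid BFS builds on can be sketched serially as follows; the graph and the level-synchronous loop are illustrative only, not the FPGA implementation described in the record.

```python
# Level-synchronous BFS sketch: the frontier at depth d is expanded into the
# frontier at depth d+1. Parallel 1D-partitioned variants split the adjacency
# lists and the frontier across processors; this version is serial for clarity.
from collections import defaultdict

def bfs_levels(edges, source):
    """Return a dict mapping each reachable vertex to its BFS depth."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    depth = {source: 0}
    frontier = [source]
    while frontier:
        next_frontier = []
        for u in frontier:                 # expansions of frontier vertices are
            for v in adj[u]:               # independent, hence easy to parallelize
                if v not in depth:
                    depth[v] = depth[u] + 1
                    next_frontier.append(v)
        frontier = next_frontier
    return depth

print(bfs_levels([(0, 1), (1, 2), (0, 3), (3, 4), (4, 2)], source=0))
```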

  1. Massively Parallel, Molecular Analysis Platform Developed Using a CMOS Integrated Circuit With Biological Nanopores

    Science.gov (United States)

    Roever, Stefan

    2012-01-01

    A massively parallel, low-cost molecular analysis platform will dramatically change the nature of protein, molecular and genomics research, DNA sequencing, and ultimately, molecular diagnostics. An integrated circuit (IC) with 264 sensors was fabricated using standard CMOS semiconductor processing technology. Each of these sensors is individually controlled with precision analog circuitry and is capable of single-molecule measurements. Under electronic and software control, the IC was used to demonstrate the feasibility of creating and detecting lipid bilayers and biological nanopores using wild-type α-hemolysin. The ability to dynamically create bilayers over each of the sensors will greatly accelerate pore development and pore mutation analysis. In addition, the noise performance of the IC was measured to be 30 fA (rms). With this noise performance, single-base detection of DNA was demonstrated using α-hemolysin. The data show that a single-molecule, electrical detection platform using biological nanopores can be operationalized and can ultimately scale to millions of sensors. Such a massively parallel platform will revolutionize molecular analysis and will completely change the field of molecular diagnostics in the future.

  2. Efficient linear precoding for massive MIMO systems using truncated polynomial expansion

    KAUST Repository

    Müller, Axel

    2014-06-01

    Massive multiple-input multiple-output (MIMO) techniques have been proposed as a solution to satisfy many requirements of next generation cellular systems. One downside of massive MIMO is the increased complexity of computing the precoding, especially since the relatively 'antenna-efficient' regularized zero-forcing (RZF) is preferred to simple maximum ratio transmission. We develop in this paper a new class of precoders for single-cell massive MIMO systems. It is based on truncated polynomial expansion (TPE) and mimics the advantages of RZF, while offering reduced and scalable computational complexity that can be implemented in a convenient parallel fashion. Using random matrix theory we provide a closed-form expression of the signal-to-interference-and-noise ratio under TPE precoding and compare it to previous works on RZF. Furthermore, the sum rate maximizing polynomial coefficients in TPE precoding are calculated. By simulation, we find that to maintain a fixed per-user rate loss compared to RZF, the polynomial degree does not need to scale with the system size, but it should be increased with the quality of the channel knowledge and the signal-to-noise ratio. © 2014 IEEE.
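
    The essence of TPE precoding is replacing the matrix inverse in the RZF precoder with a low-degree matrix polynomial. The sketch below uses plain truncated Neumann-series coefficients rather than the sum-rate-maximizing coefficients derived in the paper, and the system dimensions are arbitrary.

```python
# TPE precoding sketch: the inverse in the RZF precoder W = (H^H H + xi*I)^-1 H^H
# is approximated by a truncated Neumann series, i.e. a low-degree polynomial in
# H^H H that needs only (easily parallelized) matrix products. Coefficients here
# are the plain Neumann ones, not the optimized coefficients from the paper.
import numpy as np

rng = np.random.default_rng(1)
M, K, xi = 64, 8, 0.1                      # antennas, users, regularization
H = (rng.normal(size=(K, M)) + 1j * rng.normal(size=(K, M))) / np.sqrt(2)

A = H.conj().T @ H + xi * np.eye(M)
W_rzf = np.linalg.solve(A, H.conj().T)     # exact RZF precoder

nu = 1.1 * np.linalg.eigvalsh(A).max()     # ensures the series converges
for J in (2, 4, 8, 16):
    approx_inv = np.zeros((M, M), dtype=complex)
    term = np.eye(M, dtype=complex)
    for _ in range(J):                     # sum_{k<J} (I - A/nu)^k / nu
        approx_inv += term / nu
        term = term @ (np.eye(M) - A / nu)
    W_tpe = approx_inv @ H.conj().T
    err = np.linalg.norm(W_tpe - W_rzf) / np.linalg.norm(W_rzf)
    print(f"degree {J:2d}: relative error vs RZF = {err:.3e}")
```

    As the polynomial degree grows, the TPE precoder converges to RZF, which is the trade-off between complexity and per-user rate loss discussed in the abstract.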

  3. Massively Parallel Sort-Merge Joins in Main Memory Multi-Core Database Systems

    OpenAIRE

    Martina-Cezara Albutiu, Alfons Kemper, Thomas Neumann

    2012-01-01

    Two emerging hardware trends will dominate the database system technology in the near future: increasing main memory capacities of several TB per server and massively parallel multi-core processing. Many algorithmic and control techniques in current database technology were devised for disk-based systems where I/O dominated the performance. In this work we take a new look at the well-known sort-merge join which, so far, has not been in the focus of research ...
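
    For reference, a minimal serial sort-merge join looks as follows; the massively parallel variant referenced above additionally partitions the sort and merge phases across cores and NUMA regions, which this sketch does not attempt.

```python
# Minimal sort-merge join sketch: both inputs are sorted on the join key and
# then merged in a single pass. Serial version for clarity only.
def sort_merge_join(left, right):
    """Join two lists of (key, payload) tuples on equal keys."""
    left = sorted(left)
    right = sorted(right)
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        lk, rk = left[i][0], right[j][0]
        if lk < rk:
            i += 1
        elif lk > rk:
            j += 1
        else:
            # emit all right-side rows sharing this key for the current left row
            j_end = j
            while j_end < len(right) and right[j_end][0] == lk:
                out.append((lk, left[i][1], right[j_end][1]))
                j_end += 1
            i += 1
    return out

print(sort_merge_join([(1, "a"), (2, "b"), (2, "c")], [(2, "x"), (3, "y")]))
```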

  4. A Computational Fluid Dynamics Algorithm on a Massively Parallel Computer

    Science.gov (United States)

    Jespersen, Dennis C.; Levit, Creon

    1989-01-01

    The discipline of computational fluid dynamics is demanding ever-increasing computational power to deal with complex fluid flow problems. We investigate the performance of a finite-difference computational fluid dynamics algorithm on a massively parallel computer, the Connection Machine. Of special interest is an implicit time-stepping algorithm; to obtain maximum performance from the Connection Machine, it is necessary to use a nonstandard algorithm to solve the linear systems that arise in the implicit algorithm. We find that the Connection Machine can achieve very high computation rates on both explicit and implicit algorithms. The performance of the Connection Machine puts it in the same class as today's most powerful conventional supercomputers.

  5. Next-Generation Tools For Next-Generation Surveys

    Science.gov (United States)

    Murray, S. G.

    2017-04-01

    The next generation of large-scale galaxy surveys, across the electromagnetic spectrum, loom on the horizon as explosively game-changing datasets, in terms of our understanding of cosmology and structure formation. We are on the brink of a torrent of data that is set to both confirm and constrain current theories to an unprecedented level, and potentially overturn many of our conceptions. One of the great challenges of this forthcoming deluge is to extract maximal scientific content from the vast array of raw data. This challenge requires not only well-understood and robust physical models, but a commensurate network of software implementations with which to efficiently apply them. The halo model, a semi-analytic treatment of cosmological spatial statistics down to nonlinear scales, provides an excellent mathematical framework for exploring the nature of dark matter. This thesis presents a next-generation toolkit based on the halo model formalism, intended to fulfil the requirements of next-generation surveys. Our toolkit comprises three tools: (i) hmf, a comprehensive and flexible calculator for halo mass functions (HMFs) within extended Press-Schechter theory, (ii) the MRP distribution for extremely efficient analytic characterisation of HMFs, and (iii) halomod, an extension of hmf which provides support for the full range of halo model components. In addition to the development and technical presentation of these tools, we apply each to the task of physical modelling. With hmf, we determine the precision of our knowledge of the HMF, due to uncertainty in our knowledge of the cosmological parameters, over the past decade of cosmic microwave background (CMB) experiments. We place rule-of-thumb uncertainties on the predicted HMF for the Planck cosmology, and find that current limits on the precision are driven by modeling uncertainties rather than those from cosmological parameters. With the MRP, we create and test a method for robustly fitting the HMF to observed

  6. Massive Parallelism of Monte-Carlo Simulation on Low-End Hardware using Graphic Processing Units

    International Nuclear Information System (INIS)

    Mburu, Joe Mwangi; Hah, Chang Joo Hah

    2014-01-01

    Within the past decade, research has been done on utilizing massive GPU parallelization in core simulation, with impressive results; unfortunately, there has not been much commercial application in the nuclear field, especially in reactor core simulation. The purpose of this paper is to give an introductory concept on the topic and to illustrate the potential of exploiting the massively parallel nature of GPU computing on a simple Monte Carlo simulation with very minimal hardware specifications. For a comparative analysis, a simple two-dimensional Monte Carlo simulation is implemented for both the CPU and the GPU in order to evaluate the performance gain of each computing device. The heterogeneous platform used in this analysis is a slow notebook with only a 1 GHz processor. The end results are quite surprising, with speedups of almost a factor of 10. In this work, we have utilized heterogeneous computing in a GPU-based approach to a potentially highly arithmetic-intensive calculation. By running a complex Monte Carlo simulation on the GPU platform, we sped up the computational process by almost a factor of 10 for one million neutrons. This shows how easy, cheap and efficient it is to use GPUs to accelerate scientific computing, and the results should encourage further exploration of this avenue, especially in nuclear reactor physics simulation, where deterministic and stochastic calculations are quite amenable to parallelization

  7. Massive Parallelism of Monte-Carlo Simulation on Low-End Hardware using Graphic Processing Units

    Energy Technology Data Exchange (ETDEWEB)

    Mburu, Joe Mwangi; Hah, Chang Joo Hah [KEPCO International Nuclear Graduate School, Ulsan (Korea, Republic of)

    2014-05-15

    Within the past decade, research has been done on utilizing massive GPU parallelization in core simulation, with impressive results; unfortunately, there has not been much commercial application in the nuclear field, especially in reactor core simulation. The purpose of this paper is to give an introductory concept on the topic and to illustrate the potential of exploiting the massively parallel nature of GPU computing on a simple Monte Carlo simulation with very minimal hardware specifications. For a comparative analysis, a simple two-dimensional Monte Carlo simulation is implemented for both the CPU and the GPU in order to evaluate the performance gain of each computing device. The heterogeneous platform used in this analysis is a slow notebook with only a 1 GHz processor. The end results are quite surprising, with speedups of almost a factor of 10. In this work, we have utilized heterogeneous computing in a GPU-based approach to a potentially highly arithmetic-intensive calculation. By running a complex Monte Carlo simulation on the GPU platform, we sped up the computational process by almost a factor of 10 for one million neutrons. This shows how easy, cheap and efficient it is to use GPUs to accelerate scientific computing, and the results should encourage further exploration of this avenue, especially in nuclear reactor physics simulation, where deterministic and stochastic calculations are quite amenable to parallelization.
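
    The two records above describe the same study. As a minimal illustration of why Monte Carlo workloads map so naturally onto GPUs, the sketch below evaluates every sample with one vectorized pass; the pi estimate is a stand-in for the neutron histories and is not the authors' code.

```python
# Data-parallel Monte Carlo sketch: every sample is independent, so the whole
# batch can be evaluated in one vectorized (GPU-friendly) pass.
import numpy as np

def estimate_pi(n_samples: int, seed: int = 0) -> float:
    rng = np.random.default_rng(seed)
    x = rng.random(n_samples)          # one array operation per coordinate;
    y = rng.random(n_samples)          # on a GPU each sample maps to a thread
    inside = (x * x + y * y) <= 1.0
    return 4.0 * inside.mean()

print(estimate_pi(1_000_000))
```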

  8. Massively Parallel Dimension Independent Adaptive Metropolis

    KAUST Repository

    Chen, Yuxin

    2015-05-14

    This work considers black-box Bayesian inference over high-dimensional parameter spaces. The well-known and widely respected adaptive Metropolis (AM) algorithm is extended herein to asymptotically scale uniformly with respect to the underlying parameter dimension, by respecting the variance, for Gaussian targets. The resulting algorithm, referred to as the dimension-independent adaptive Metropolis (DIAM) algorithm, also shows improved performance with respect to adaptive Metropolis on non-Gaussian targets. This algorithm is further improved, and the possibility of probing high-dimensional targets is enabled, via GPU-accelerated numerical libraries and periodically synchronized concurrent chains (justified a posteriori). Asymptotically in dimension, this massively parallel dimension-independent adaptive Metropolis (MPDIAM) GPU implementation exhibits a factor of four improvement versus the CPU-based Intel MKL version alone, which is itself already a factor of three improvement versus the serial version. The scaling to multiple CPUs and GPUs exhibits a form of strong scaling in terms of the time necessary to reach a certain convergence criterion, through a combination of longer time per sample batch (weak scaling) and fewer samples needed for convergence. This is illustrated by efficiently sampling from several Gaussian and non-Gaussian targets of dimension d ≥ 1000.
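
    A compact, serial sketch of the baseline adaptive Metropolis algorithm (Haario-style covariance adaptation) on a toy Gaussian target is given below; it is not the dimension-independent or GPU-parallel variant introduced in the record, and all tuning constants are illustrative.

```python
# Baseline adaptive Metropolis sketch: the proposal covariance is periodically
# re-estimated from the chain's own samples. Serial toy example only.
import numpy as np

rng = np.random.default_rng(2)
d, n_steps = 5, 20_000
target_cov = np.diag(np.linspace(0.5, 2.0, d))
target_prec = np.linalg.inv(target_cov)

def log_target(x):
    return -0.5 * x @ target_prec @ x

x = np.zeros(d)
logp = log_target(x)
samples = []
prop_cov = 0.1 * np.eye(d)
scale = 2.38**2 / d                               # standard AM scaling factor

for step in range(n_steps):
    prop = x + rng.multivariate_normal(np.zeros(d), prop_cov)
    logp_prop = log_target(prop)
    if np.log(rng.random()) < logp_prop - logp:   # Metropolis accept/reject
        x, logp = prop, logp_prop
    samples.append(x)
    if step >= 1000 and step % 100 == 0:          # periodic covariance adaptation
        emp_cov = np.cov(np.asarray(samples).T)
        prop_cov = scale * emp_cov + 1e-6 * np.eye(d)

samples = np.asarray(samples[5000:])
print("estimated marginal variances:", samples.var(axis=0).round(2))
```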

  9. Dynalight Next Generation

    DEFF Research Database (Denmark)

    Jørgensen, Bo Nørregaard; Ottosen, Carl-Otto; Dam-Hansen, Carsten

    2016-01-01

    The project aims to develop the next generation of energy cost-efficient artificial lighting control that enables greenhouse growers to adapt their use of artificial lighting dynamically to fluctuations in the price of electricity. This is a necessity as fluctuations in the price of electricity c...

  10. Cellular Automata-Based Parallel Random Number Generators Using FPGAs

    Directory of Open Access Journals (Sweden)

    David H. K. Hoe

    2012-01-01

    Full Text Available Cellular computing represents a new paradigm for implementing high-speed massively parallel machines. Cellular automata (CA, which consist of an array of locally connected processing elements, are a basic form of a cellular-based architecture. The use of field programmable gate arrays (FPGAs for implementing CA accelerators has shown promising results. This paper investigates the design of CA-based pseudo-random number generators (PRNGs using an FPGA platform. To improve the quality of the random numbers that are generated, the basic CA structure is enhanced in two ways. First, the addition of a superrule to each CA cell is considered. The resulting self-programmable CA (SPCA uses the superrule to determine when to make a dynamic rule change in each CA cell. The superrule takes its inputs from neighboring cells and can be considered itself a second CA working in parallel with the main CA. When implemented on an FPGA, the use of lookup tables in each logic cell removes any restrictions on how the super-rules should be defined. Second, a hybrid configuration is formed by combining a CA with a linear feedback shift register (LFSR. This is advantageous for FPGA designs due to the compactness of the LFSR implementations. A standard software package for statistically evaluating the quality of random number sequences known as Diehard is used to validate the results. Both the SPCA and the hybrid CA/LFSR were found to pass all the Diehard tests.
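
    The basic CA update that both enhancements build on can be sketched with an elementary rule-30 generator; the width, seed and output tap below are arbitrary choices, and the FPGA designs in the record use richer self-programmable and CA/LFSR hybrid structures.

```python
# Minimal cellular-automaton bit generator using elementary rule 30 with
# periodic boundaries; only the basic CA update is illustrated here.
def rule30_bits(width=64, n_bits=32, seed=1):
    state = [(seed >> i) & 1 for i in range(width)]   # seed the cell array
    out = []
    for _ in range(n_bits):
        out.append(state[width // 2])                 # tap the centre cell
        state = [
            state[(i - 1) % width] ^ (state[i] | state[(i + 1) % width])
            for i in range(width)
        ]                                             # rule 30: left XOR (centre OR right)
    return out

bits = rule30_bits()
print("".join(map(str, bits)))
```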

  11. NASA's Next Generation Space Geodesy Program

    Science.gov (United States)

    Merkowitz, S. M.; Desai, S. D.; Gross, R. S.; Hillard, L. M.; Lemoine, F. G.; Long, J. L.; Ma, C.; McGarry, J. F.; Murphy, D.; Noll, C. E.

    2012-01-01

    Requirements for the ITRF have increased dramatically since the 1980s. The most stringent requirement comes from critical sea level monitoring programs: a global accuracy of 1.0 mm, and 0.1mm/yr stability, a factor of 10 to 20 beyond current capability. Other requirements for the ITRF coming from ice mass change, ground motion, and mass transport studies are similar. Current and future satellite missions will have ever-increasing measurement capability and will lead to increasingly sophisticated models of these and other changes in the Earth system. Ground space geodesy networks with enhanced measurement capability will be essential to meeting the ITRF requirements and properly interpreting the satellite data. These networks must be globally distributed and built for longevity, to provide the robust data necessary to generate improved models for proper interpretation of the observed geophysical signals. NASA has embarked on a Space Geodesy Program with a long-range goal to build, deploy and operate a next generation NASA Space Geodetic Network (SGN). The plan is to build integrated, multi-technique next-generation space geodetic observing systems as the core contribution to a global network designed to produce the higher quality data required to maintain the Terrestrial Reference Frame and provide information essential for fully realizing the measurement potential of the current and coming generation of Earth Observing spacecraft. Phase 1 of this project has been funded to (1) Establish and demonstrate a next-generation prototype integrated Space Geodetic Station at Goddard's Geophysical and Astronomical Observatory (GGAO), including next-generation SLR and VLBI systems along with modern GNSS and DORIS; (2) Complete ongoing Network Design Studies that describe the appropriate number and distribution of next-generation Space Geodetic Stations for an improved global network; (3) Upgrade analysis capability to handle the next-generation data; (4) Implement a modern

  12. Generation 'Next' and nuclear power

    International Nuclear Information System (INIS)

    Sergeev, A.A.

    2001-01-01

    My generation was labeled by Russian mass media as generation 'Next.' My technical education is above average. My current position is as a mechanical engineer in the leading research and development institute for Russian nuclear engineering for peaceful applications. It is noteworthy to point out that many of our developments were really first-of-a-kind in the history of engineering. However, it is difficult to grasp the importance of these accomplishments, especially since the progress of nuclear technologies is at a standstill. Can generation 'Next' be independent in their attitude towards nuclear power or shall we rely on the opinions of elder colleagues in our industry? (authors)

  13. Scalable and massively parallel Monte Carlo photon transport simulations for heterogeneous computing platforms.

    Science.gov (United States)

    Yu, Leiming; Nina-Paravecino, Fanny; Kaeli, David; Fang, Qianqian

    2018-01-01

    We present a highly scalable Monte Carlo (MC) three-dimensional photon transport simulation platform designed for heterogeneous computing systems. Through the development of a massively parallel MC algorithm using the Open Computing Language framework, this research extends our existing graphics processing unit (GPU)-accelerated MC technique to a highly scalable vendor-independent heterogeneous computing environment, achieving significantly improved performance and software portability. A number of parallel computing techniques are investigated to achieve portable performance over a wide range of computing hardware. Furthermore, multiple thread-level and device-level load-balancing strategies are developed to obtain efficient simulations using multiple central processing units and GPUs. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  14. Time-dependent density-functional theory in massively parallel computer architectures: the OCTOPUS project.

    Science.gov (United States)

    Andrade, Xavier; Alberdi-Rodriguez, Joseba; Strubbe, David A; Oliveira, Micael J T; Nogueira, Fernando; Castro, Alberto; Muguerza, Javier; Arruabarrena, Agustin; Louie, Steven G; Aspuru-Guzik, Alán; Rubio, Angel; Marques, Miguel A L

    2012-06-13

    Octopus is a general-purpose density-functional theory (DFT) code, with a particular emphasis on the time-dependent version of DFT (TDDFT). In this paper we present the ongoing efforts to achieve the parallelization of octopus. We focus on the real-time variant of TDDFT, where the time-dependent Kohn-Sham equations are directly propagated in time. This approach has great potential for execution in massively parallel systems such as modern supercomputers with thousands of processors and graphics processing units (GPUs). For harvesting the potential of conventional supercomputers, the main strategy is a multi-level parallelization scheme that combines the inherent scalability of real-time TDDFT with a real-space grid domain-partitioning approach. A scalable Poisson solver is critical for the efficiency of this scheme. For GPUs, we show how using blocks of Kohn-Sham states provides the required level of data parallelism and that this strategy is also applicable for code optimization on standard processors. Our results show that real-time TDDFT, as implemented in octopus, can be the method of choice for studying the excited states of large molecular systems in modern parallel architectures.

  15. Time-dependent density-functional theory in massively parallel computer architectures: the octopus project

    Science.gov (United States)

    Andrade, Xavier; Alberdi-Rodriguez, Joseba; Strubbe, David A.; Oliveira, Micael J. T.; Nogueira, Fernando; Castro, Alberto; Muguerza, Javier; Arruabarrena, Agustin; Louie, Steven G.; Aspuru-Guzik, Alán; Rubio, Angel; Marques, Miguel A. L.

    2012-06-01

    Octopus is a general-purpose density-functional theory (DFT) code, with a particular emphasis on the time-dependent version of DFT (TDDFT). In this paper we present the ongoing efforts to achieve the parallelization of octopus. We focus on the real-time variant of TDDFT, where the time-dependent Kohn-Sham equations are directly propagated in time. This approach has great potential for execution in massively parallel systems such as modern supercomputers with thousands of processors and graphics processing units (GPUs). For harvesting the potential of conventional supercomputers, the main strategy is a multi-level parallelization scheme that combines the inherent scalability of real-time TDDFT with a real-space grid domain-partitioning approach. A scalable Poisson solver is critical for the efficiency of this scheme. For GPUs, we show how using blocks of Kohn-Sham states provides the required level of data parallelism and that this strategy is also applicable for code optimization on standard processors. Our results show that real-time TDDFT, as implemented in octopus, can be the method of choice for studying the excited states of large molecular systems in modern parallel architectures.

  16. Time-dependent density-functional theory in massively parallel computer architectures: the octopus project

    International Nuclear Information System (INIS)

    Andrade, Xavier; Aspuru-Guzik, Alán; Alberdi-Rodriguez, Joseba; Rubio, Angel; Strubbe, David A; Louie, Steven G; Oliveira, Micael J T; Nogueira, Fernando; Castro, Alberto; Muguerza, Javier; Arruabarrena, Agustin; Marques, Miguel A L

    2012-01-01

    Octopus is a general-purpose density-functional theory (DFT) code, with a particular emphasis on the time-dependent version of DFT (TDDFT). In this paper we present the ongoing efforts to achieve the parallelization of octopus. We focus on the real-time variant of TDDFT, where the time-dependent Kohn-Sham equations are directly propagated in time. This approach has great potential for execution in massively parallel systems such as modern supercomputers with thousands of processors and graphics processing units (GPUs). For harvesting the potential of conventional supercomputers, the main strategy is a multi-level parallelization scheme that combines the inherent scalability of real-time TDDFT with a real-space grid domain-partitioning approach. A scalable Poisson solver is critical for the efficiency of this scheme. For GPUs, we show how using blocks of Kohn-Sham states provides the required level of data parallelism and that this strategy is also applicable for code optimization on standard processors. Our results show that real-time TDDFT, as implemented in octopus, can be the method of choice for studying the excited states of large molecular systems in modern parallel architectures. (topical review)

  17. The Next Great Generation?

    Science.gov (United States)

    Brownstein, Andrew

    2000-01-01

    Discusses ideas from a new book, "Millennials Rising: The Next Great Generation," (by Neil Howe and William Strauss) suggesting that youth culture is on the cusp of a radical shift with the generation beginning with this year's college freshmen who are typically team oriented, optimistic, and poised for greatness on a global scale. Includes a…

  18. Development of technology for next generation reactor - Development of next generation reactor in Korea -

    Energy Technology Data Exchange (ETDEWEB)

    Park, Jong Kyun; Chang, Moon Heuy; Hwang, Yung Dong [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of); and others

    1993-09-01

    The next generation reactor development project aims at overall development of the related technology and obtainment of the associated license by 2001. The development direction is to determine the reactor type and to establish the design concept in 1994. For analysis of foreign next generation reactor development trends, level-1 PSA, fuel cycle analysis and computer code development are performed on System 80+ and AP600. In particular, for design characteristics analysis and volume upgrade of AP600, nuclear fuel and reactor core design analysis, coolant circuit design analysis, mechanical structure design analysis and safety analysis, etc., are performed. (Author).

  19. Tailoring next-generation biofuels and their combustion in next-generation engines

    Energy Technology Data Exchange (ETDEWEB)

    Gladden, John Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wu, Weihua [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Taatjes, Craig A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Scheer, Adam Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Turner, Kevin M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Yu, Eizadora T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); O' Bryan, Greg [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Powell, Amy Jo [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gao, Connie W. [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)

    2013-11-01

    Increasing energy costs, the dependence on foreign oil supplies, and environmental concerns have emphasized the need to produce sustainable renewable fuels and chemicals. The strategy for producing next-generation biofuels must include efficient processes for biomass conversion to liquid fuels and the fuels must be compatible with current and future engines. Unfortunately, biofuel development generally takes place without any consideration of combustion characteristics, and combustion scientists typically measure biofuels properties without any feedback to the production design. We seek to optimize the fuel/engine system by bringing combustion performance, specifically for advanced next-generation engines, into the development of novel biosynthetic fuel pathways. Here we report an innovative coupling of combustion chemistry, from fundamentals to engine measurements, to the optimization of fuel production using metabolic engineering. We have established the necessary connections among the fundamental chemistry, engine science, and synthetic biology for fuel production, building a powerful framework for co-development of engines and biofuels.

  20. Massively parallel unsupervised single-particle cryo-EM data clustering via statistical manifold learning.

    Directory of Open Access Journals (Sweden)

    Jiayi Wu

    Full Text Available Structural heterogeneity in single-particle cryo-electron microscopy (cryo-EM data represents a major challenge for high-resolution structure determination. Unsupervised classification may serve as the first step in the assessment of structural heterogeneity. However, traditional algorithms for unsupervised classification, such as K-means clustering and maximum likelihood optimization, may classify images into wrong classes with decreasing signal-to-noise-ratio (SNR in the image data, yet demand increased computational costs. Overcoming these limitations requires further development of clustering algorithms for high-performance cryo-EM data processing. Here we introduce an unsupervised single-particle clustering algorithm derived from a statistical manifold learning framework called generative topographic mapping (GTM. We show that unsupervised GTM clustering improves classification accuracy by about 40% in the absence of input references for data with lower SNRs. Applications to several experimental datasets suggest that our algorithm can detect subtle structural differences among classes via a hierarchical clustering strategy. After code optimization over a high-performance computing (HPC environment, our software implementation was able to generate thousands of reference-free class averages within hours in a massively parallel fashion, which allows a significant improvement on ab initio 3D reconstruction and assists in the computational purification of homogeneous datasets for high-resolution visualization.

  1. Massively parallel unsupervised single-particle cryo-EM data clustering via statistical manifold learning.

    Science.gov (United States)

    Wu, Jiayi; Ma, Yong-Bei; Congdon, Charles; Brett, Bevin; Chen, Shuobing; Xu, Yaofang; Ouyang, Qi; Mao, Youdong

    2017-01-01

    Structural heterogeneity in single-particle cryo-electron microscopy (cryo-EM) data represents a major challenge for high-resolution structure determination. Unsupervised classification may serve as the first step in the assessment of structural heterogeneity. However, traditional algorithms for unsupervised classification, such as K-means clustering and maximum likelihood optimization, may classify images into wrong classes with decreasing signal-to-noise-ratio (SNR) in the image data, yet demand increased computational costs. Overcoming these limitations requires further development of clustering algorithms for high-performance cryo-EM data processing. Here we introduce an unsupervised single-particle clustering algorithm derived from a statistical manifold learning framework called generative topographic mapping (GTM). We show that unsupervised GTM clustering improves classification accuracy by about 40% in the absence of input references for data with lower SNRs. Applications to several experimental datasets suggest that our algorithm can detect subtle structural differences among classes via a hierarchical clustering strategy. After code optimization over a high-performance computing (HPC) environment, our software implementation was able to generate thousands of reference-free class averages within hours in a massively parallel fashion, which allows a significant improvement on ab initio 3D reconstruction and assists in the computational purification of homogeneous datasets for high-resolution visualization.

  2. Tolerating correlated failures in Massively Parallel Stream Processing Engines

    DEFF Research Database (Denmark)

    Su, L.; Zhou, Y.

    2016-01-01

    Fault-tolerance techniques for stream processing engines can be categorized into passive and active approaches. A typical passive approach periodically checkpoints a processing task's runtime states and can recover a failed task by restoring its runtime state using its latest checkpoint. On the other hand, an active approach usually employs backup nodes to run replicated tasks. Upon failure, the active replica can take over the processing of the failed task with minimal latency. However, both approaches have their own inadequacies in Massively Parallel Stream Processing Engines (MPSPE...

  3. cellGPU: Massively parallel simulations of dynamic vertex models

    Science.gov (United States)

    Sussman, Daniel M.

    2017-10-01

    Vertex models represent confluent tissue by polygonal or polyhedral tilings of space, with the individual cells interacting via force laws that depend on both the geometry of the cells and the topology of the tessellation. This dependence on the connectivity of the cellular network introduces several complications to performing molecular-dynamics-like simulations of vertex models, and in particular makes parallelizing the simulations difficult. cellGPU addresses this difficulty and lays the foundation for massively parallelized, GPU-based simulations of these models. This article discusses its implementation for a pair of two-dimensional models, and compares the typical performance that can be expected between running cellGPU entirely on the CPU versus its performance when running on a range of commercial and server-grade graphics cards. By implementing the calculation of topological changes and forces on cells in a highly parallelizable fashion, cellGPU enables researchers to simulate time- and length-scales previously inaccessible via existing single-threaded CPU implementations.
    Program Files doi: http://dx.doi.org/10.17632/6j2cj29t3r.1
    Licensing provisions: MIT
    Programming language: CUDA/C++
    Nature of problem: Simulations of off-lattice "vertex models" of cells, in which the interaction forces depend on both the geometry and the topology of the cellular aggregate.
    Solution method: Highly parallelized GPU-accelerated dynamical simulations in which the force calculations and the topological features can be handled on either the CPU or GPU.
    Additional comments: The code is hosted at https://gitlab.com/dmsussman/cellGPU, with documentation additionally maintained at http://dmsussman.gitlab.io/cellGPUdocumentation

  4. Molecular Diagnostics in Pathology: Time for a Next-Generation Pathologist?

    Science.gov (United States)

    Fassan, Matteo

    2018-03-01

    - Comprehensive molecular investigations of mainstream carcinogenic processes have led to the use of effective molecular targeted agents in most cases of solid tumors in clinical settings. - To update readers regarding the evolving role of the pathologist in the therapeutic decision-making process and the introduction of next-generation technologies into pathology practice. - Current literature on the topic, primarily sourced from the PubMed (National Center for Biotechnology Information, Bethesda, Maryland) database, was reviewed. - Adequate evaluation of cytologic-based and tissue-based predictive diagnostic biomarkers largely depends on both proper pathologic characterization and customized processing of biospecimens. Moreover, increased requests for molecular testing have paralleled the recent, sharp decrease in tumor material to be analyzed, material that currently comprises cytology specimens or, at minimum, small biopsies in most cases of metastatic/advanced disease. Traditional diagnostic pathology has been completely revolutionized by the introduction of next-generation technologies, which provide multigene, targeted mutational profiling, even in the most complex of clinical cases. Combining traditional and molecular knowledge, pathologists integrate the morphological, clinical, and molecular dimensions of a disease, leading to a proper diagnosis and, therefore, the most-appropriate tailored therapy.

  5. Cluster cosmology with next-generation surveys.

    Science.gov (United States)

    Ascaso, B.

    2017-03-01

    The advent of next-generation surveys will provide a large number of cluster detections that will serve as the basis for constraining cosmological parameters using cluster counts. The two main observational ingredients needed are the cluster selection function and the calibration of the mass-observable relation. In this talk, we present the methodology designed to obtain robust predictions of both ingredients based on realistic cosmological simulations mimicking the following next-generation surveys: J-PAS, LSST and Euclid. We display recent results on the selection functions for these surveys together with others coming from other next-generation surveys such as eROSITA, ACTpol and SPTpol. We notice that the optical and IR surveys will reach the lowest masses between 0.3next-generation surveys and introduce very preliminary results.

  6. Massively parallel diffuse optical tomography

    Energy Technology Data Exchange (ETDEWEB)

    Sandusky, John V.; Pitts, Todd A.

    2017-09-05

    Diffuse optical tomography systems and methods are described herein. In a general embodiment, the diffuse optical tomography system comprises a plurality of sensor heads, the plurality of sensor heads comprising respective optical emitter systems and respective sensor systems. A sensor head in the plurality of sensors heads is caused to act as an illuminator, such that its optical emitter system transmits a transillumination beam towards a portion of a sample. Other sensor heads in the plurality of sensor heads act as observers, detecting portions of the transillumination beam that radiate from the sample in the fields of view of the respective sensory systems of the other sensor heads. Thus, sensor heads in the plurality of sensors heads generate sensor data in parallel.

  7. A Massively Parallel Solver for the Mechanical Harmonic Analysis of Accelerator Cavities

    International Nuclear Information System (INIS)

    2015-01-01

    ACE3P is a 3D massively parallel simulation suite developed at SLAC National Accelerator Laboratory that can perform coupled electromagnetic, thermal and mechanical studies. Effectively utilizing supercomputer resources, ACE3P has become a key simulation tool for particle accelerator R&D. A new frequency-domain solver to perform mechanical harmonic response analysis of accelerator components has been developed within the existing parallel framework. This solver is designed to determine the frequency response of the mechanical system to external harmonic excitations for time-efficient, accurate analysis of large-scale problems. Coupled with the ACE3P electromagnetic modules, this capability complements a set of multi-physics tools for a comprehensive study of microphonics in superconducting accelerating cavities, in order to understand the RF response and feedback requirements for the operational reliability of a particle accelerator. (auth)

  8. Application of Massively Parallel Sequencing in the Clinical Diagnostic Testing of Inherited Cardiac Conditions

    Directory of Open Access Journals (Sweden)

    Ivone U. S. Leong

    2014-06-01

    Full Text Available Sudden cardiac death in people between the ages of 1–40 years is a devastating event and is frequently caused by several heritable cardiac disorders. These disorders include cardiac ion channelopathies, such as long QT syndrome, catecholaminergic polymorphic ventricular tachycardia and Brugada syndrome and cardiomyopathies, such as hypertrophic cardiomyopathy and arrhythmogenic right ventricular cardiomyopathy. Through careful molecular genetic evaluation of DNA from sudden death victims, the causative gene mutation can be uncovered, and the rest of the family can be screened and preventative measures implemented in at-risk individuals. The current screening approach in most diagnostic laboratories uses Sanger-based sequencing; however, this method is time consuming and labour intensive. The development of massively parallel sequencing has made it possible to produce millions of sequence reads simultaneously and is potentially an ideal approach to screen for mutations in genes that are associated with sudden cardiac death. This approach offers mutation screening at reduced cost and turnaround time. Here, we will review the current commercially available enrichment kits, massively parallel sequencing (MPS platforms, downstream data analysis and its application to sudden cardiac death in a diagnostic environment.

  9. Next Generation Inverter

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Zilai [General Motors LLC, Detroit, MI (United States); Gough, Charles [General Motors LLC, Detroit, MI (United States)

    2016-04-22

    The goal of this Cooperative Agreement was the development of a Next Generation Inverter for General Motors’ electrified vehicles, including battery electric vehicles, range extended electric vehicles, plug-in hybrid electric vehicles and hybrid electric vehicles. The inverter is a critical electronics component that converts battery power (DC) to and from the electric power for the motor (AC).

  10. Technical presentation: Next Generation Oscilloscopes

    CERN Multimedia

    PH Department

    2011-01-01

    Rohde & Schwarz "Next Generation Oscilloscopes" - Introduction and Presentation
    Wednesday 23 March, 09:30 to 11:30 (open end), Bldg. 13-2-005. Language: English.
    Agenda:
    09:30  Presentation "Next Generation Oscilloscopes" from Rohde & Schwarz - RTO / RTM in theory and practice (Gerard Walker)
    10:15  Technical design details from R&D (Dr. Markus Freidhof)
    10:45  Scope and Probe Roadmap (confidential) (Guido Schulze)
    11:00  Open Discussion - feedback, first impressions, wishes, needs and requirements from CERN (All)
    11:30  Expert Talks, Hands on (All)
    Speakers: Mr. Dr. Markus Freidhof, Head of R&D Oscilloscopes, Rohde & Schwarz, Germany; Mr. Guido Schulze, ...

  11. From Massively Parallel Algorithms and Fluctuating Time Horizons to Nonequilibrium Surface Growth

    International Nuclear Information System (INIS)

    Korniss, G.; Toroczkai, Z.; Novotny, M. A.; Rikvold, P. A.

    2000-01-01

    We study the asymptotic scaling properties of a massively parallel algorithm for discrete-event simulations where the discrete events are Poisson arrivals. The evolution of the simulated time horizon is analogous to a nonequilibrium surface. Monte Carlo simulations and a coarse-grained approximation indicate that the macroscopic landscape in the steady state is governed by the Edwards-Wilkinson Hamiltonian. Since the efficiency of the algorithm corresponds to the density of local minima in the associated surface, our results imply that the algorithm is asymptotically scalable. (c) 2000 The American Physical Society
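
    The update rule analyzed in the record can be reproduced in a few lines: each processing element advances its local virtual time by an exponential increment only if it is not ahead of its neighbors, and the spread of the resulting time horizon plays the role of surface roughness. Sizes and step counts below are illustrative.

```python
# Sketch of the conservative parallel discrete-event update: a site may advance
# its local simulated time (Poisson-arrival increment) only when it is a local
# minimum of the virtual-time horizon on a ring of processing elements.
import numpy as np

rng = np.random.default_rng(3)
n_pe, n_steps = 1000, 10_000
tau = np.zeros(n_pe)                              # local virtual times, ring topology

utilization = 0.0
for _ in range(n_steps):
    left = np.roll(tau, 1)
    right = np.roll(tau, -1)
    can_update = (tau <= left) & (tau <= right)   # local minima may proceed
    tau = np.where(can_update, tau + rng.exponential(size=n_pe), tau)
    utilization += can_update.mean()

print("mean fraction of PEs working per step:", round(utilization / n_steps, 3))
print("time-horizon width (std of tau):", round(float(tau.std()), 2))
```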

  12. Passive and partially active fault tolerance for massively parallel stream processing engines

    DEFF Research Database (Denmark)

    Su, Li; Zhou, Yongluan

    2018-01-01

    On the other hand, an active approach usually employs backup nodes to run replicated tasks. Upon failure, the active replica can take over the processing of the failed task with minimal latency. However, both approaches have their own inadequacies in Massively Parallel Stream Processing Engines (MPSPE...... also propose effective and efficient algorithms to optimize a partially active replication plan to maximize the quality of tentative outputs. We implemented PPA on top of Storm, an open-source MPSPE, and conducted extensive experiments using both real and synthetic datasets to verify the effectiveness...

  13. Next generation toroidal devices

    International Nuclear Information System (INIS)

    Yoshikawa, Shoichi

    1998-10-01

    A general survey of the possible approaches for the next generation toroidal devices was made. Either surprisingly or obviously (depending on one's view), the technical constraints along with the scientific considerations lead to a fairly limited set of systems for the most favorable approach for the next generation devices. Specifically, if the magnetic field strength of 5 T or above is to be created by superconducting coils, it imposes a minimum on the aspect ratio of the tokamak that is slightly higher than now contemplated for the ITER design. Similar technical constraints make the minimum linear size of a stellarator large. Scientifically, it is indicated that a tokamak 1.5 times larger in linear dimension should be able to produce power economically, especially if a hybrid reactor is allowed. For the next stellarator, it is strongly suggested that some kind of helical axis is necessary both for the (almost) absolute confinement of high energy particles and for high stability and equilibrium beta limits. The author still favors a heliac most. Although it may not have been clearly stated in the main text, the stability afforded by the shearless layer may be exploited fully in a stellarator. (author)

  14. Next generation toroidal devices

    Energy Technology Data Exchange (ETDEWEB)

    Yoshikawa, Shoichi [Princeton Plasma Physics Lab., Princeton Univ., NJ (United States)

    1998-10-01

    A general survey of the possible approaches for the next generation toroidal devices was made. Either surprisingly or obviously (depending on one's view), the technical constraints along with the scientific considerations lead to a fairly limited set of systems for the most favorable approach for the next generation devices. Specifically, if the magnetic field strength of 5 T or above is to be created by superconducting coils, it imposes a minimum on the aspect ratio of the tokamak that is slightly higher than now contemplated for the ITER design. Similar technical constraints make the minimum linear size of a stellarator large. Scientifically, it is indicated that a tokamak 1.5 times larger in linear dimension should be able to produce power economically, especially if a hybrid reactor is allowed. For the next stellarator, it is strongly suggested that some kind of helical axis is necessary both for the (almost) absolute confinement of high energy particles and for high stability and equilibrium beta limits. The author still favors a heliac most. Although it may not have been clearly stated in the main text, the stability afforded by the shearless layer may be exploited fully in a stellarator. (author)

  15. Massively Parallel Interrogation of Aptamer Sequence, Structure and Function

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, N O; Tok, J B; Tarasow, T M

    2008-02-08

    Optimization of high affinity reagents is a significant bottleneck in medicine and the life sciences. The ability to synthetically create thousands of permutations of a lead high-affinity reagent and survey the properties of individual permutations in parallel could potentially relieve this bottleneck. Aptamers are single-stranded oligonucleotide affinity reagents isolated by in vitro selection processes and as a class have been shown to bind a wide variety of target molecules. Methodology/Principal Findings. High density DNA microarray technology was used to synthesize, in situ, arrays of approximately 3,900 aptamer sequence permutations in triplicate. These sequences were interrogated on-chip for their ability to bind the fluorescently-labeled cognate target, immunoglobulin E, resulting in the parallel execution of thousands of experiments. Fluorescence intensity at each array feature was well resolved and shown to be a function of the sequence present. The data demonstrated high intra- and interchip correlation between the same features as well as among the sequence triplicates within a single array. Consistent with aptamer-mediated IgE binding, fluorescence intensity correlated strongly with specific aptamer sequences and the concentration of IgE applied to the array. The massively parallel sequence-function analyses provided by this approach confirmed the importance of a consensus sequence found in all 21 of the original IgE aptamer sequences and support a common stem:loop structure as being the secondary structure underlying IgE binding. The microarray application, data and results presented illustrate an efficient, high information content approach to optimizing aptamer function. It also provides a foundation from which to better understand and manipulate this important class of high affinity biomolecules.

  16. Massively parallel interrogation of aptamer sequence, structure and function.

    Directory of Open Access Journals (Sweden)

    Nicholas O Fischer

    Full Text Available BACKGROUND: Optimization of high affinity reagents is a significant bottleneck in medicine and the life sciences. The ability to synthetically create thousands of permutations of a lead high-affinity reagent and survey the properties of individual permutations in parallel could potentially relieve this bottleneck. Aptamers are single-stranded oligonucleotide affinity reagents isolated by in vitro selection processes and as a class have been shown to bind a wide variety of target molecules. METHODOLOGY/PRINCIPAL FINDINGS: High density DNA microarray technology was used to synthesize, in situ, arrays of approximately 3,900 aptamer sequence permutations in triplicate. These sequences were interrogated on-chip for their ability to bind the fluorescently-labeled cognate target, immunoglobulin E, resulting in the parallel execution of thousands of experiments. Fluorescence intensity at each array feature was well resolved and shown to be a function of the sequence present. The data demonstrated high intra- and inter-chip correlation between the same features as well as among the sequence triplicates within a single array. Consistent with aptamer-mediated IgE binding, fluorescence intensity correlated strongly with specific aptamer sequences and the concentration of IgE applied to the array. CONCLUSION AND SIGNIFICANCE: The massively parallel sequence-function analyses provided by this approach confirmed the importance of a consensus sequence found in all 21 of the original IgE aptamer sequences and support a common stem:loop structure as being the secondary structure underlying IgE binding. The microarray application, data and results presented illustrate an efficient, high information content approach to optimizing aptamer function. It also provides a foundation from which to better understand and manipulate this important class of high affinity biomolecules.

  17. High fidelity thermal-hydraulic analysis using CFD and massively parallel computers

    International Nuclear Information System (INIS)

    Weber, D.P.; Wei, T.Y.C.; Brewster, R.A.; Rock, Daniel T.; Rizwan-uddin

    2000-01-01

    Thermal-hydraulic analyses play an important role in design and reload analysis of nuclear power plants. These analyses have historically relied on early generation computational fluid dynamics capabilities, originally developed in the 1960s and 1970s. Over the last twenty years, however, dramatic improvements in both computational fluid dynamics codes in the commercial sector and in computing power have taken place. These developments offer the possibility of performing large scale, high fidelity, core thermal hydraulics analysis. Such analyses will allow a determination of the conservatism employed in traditional design approaches and possibly justify the operation of nuclear power systems at higher powers without compromising safety margins. The objective of this work is to demonstrate such a large scale analysis approach using a state of the art CFD code, STAR-CD, and the computing power of massively parallel computers, provided by IBM. A high fidelity representation of a current generation PWR was analyzed with the STAR-CD CFD code and the results were compared to traditional analyses based on the VIPRE code. Current design methodology typically involves a simplified representation of the assemblies, where a single average pin is used in each assembly to determine the hot assembly from a whole core analysis. After determining this assembly, increased refinement is used in the hot assembly, and possibly some of its neighbors, to refine the analysis for purposes of calculating DNBR. This latter calculation is performed with sub-channel codes such as VIPRE. The modeling simplifications that are used involve the approximate treatment of surrounding assemblies and coarse representation of the hot assembly, where the subchannel is the lowest level of discretization. In the high fidelity analysis performed in this study, both restrictions have been removed. Within the hot assembly, several hundred thousand to several million computational zones have been used, to

  18. A Next Generation BioPhotonics Workstation

    DEFF Research Database (Denmark)

    Glückstad, Jesper; Palima, Darwin; Tauro, Sandeep

    2011-01-01

    We are developing a Next Generation BioPhotonics Workstation to be applied in research on regulated microbial cell growth including their underlying physiological mechanisms, in vivo characterization of cell constituents and manufacturing of nanostructures and meta-materials.

  19. Flexbar 3.0 - SIMD and multicore parallelization.

    Science.gov (United States)

    Roehr, Johannes T; Dieterich, Christoph; Reinert, Knut

    2017-09-15

    High-throughput sequencing machines can process many samples in a single run. For Illumina systems, sequencing reads are barcoded with an additional DNA tag that is contained in the respective sequencing adapters. The recognition of barcode and adapter sequences is hence commonly needed for the analysis of next-generation sequencing data. Flexbar performs demultiplexing based on barcodes and adapter trimming for such data. The massive amounts of data generated on modern sequencing machines demand that this preprocessing is done as efficiently as possible. We present Flexbar 3.0, the successor of the popular program Flexbar. It now employs twofold parallelism: multi-threading and additionally SIMD vectorization. Both types of parallelism are used to speed up the computation of pair-wise sequence alignments, which are used for the detection of barcodes and adapters. Furthermore, new features were included to cover a wide range of applications. We evaluated the performance of Flexbar based on a simulated sequencing dataset. Our program outcompetes other tools in terms of speed and is among the best tools in the presented quality benchmark. https://github.com/seqan/flexbar. johannes.roehr@fu-berlin.de or knut.reinert@fu-berlin.de. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
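
    A heavily simplified sketch of the demultiplexing task is shown below: each read prefix is assigned to the nearest barcode within a mismatch budget, with reads processed in parallel. Flexbar itself uses banded pair-wise alignments with SIMD vectorization and multithreading; the barcodes, reads and mismatch limit here are made-up examples, not its interface.

```python
# Toy barcode demultiplexing sketch: assign each read to the closest known
# barcode (Hamming distance on the read prefix) within a mismatch budget.
from multiprocessing import Pool

BARCODES = {"ACGT": "sample_A", "TTAG": "sample_B"}  # hypothetical barcodes
MAX_MISMATCHES = 1

def assign(read: str) -> str:
    prefix = read[:4]
    best, best_d = "unassigned", MAX_MISMATCHES + 1
    for bc, sample in BARCODES.items():
        d = sum(a != b for a, b in zip(prefix, bc))  # Hamming distance
        if d < best_d:
            best, best_d = sample, d
    return best

if __name__ == "__main__":
    reads = ["ACGTGGGA", "TTAGCCAT", "ACCTGGAA", "GGGGAAAA"]
    with Pool(processes=2) as pool:                  # reads are independent
        print(pool.map(assign, reads))
```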

  20. MCBooster: a library for fast Monte Carlo generation of phase-space decays on massively parallel platforms.

    Science.gov (United States)

    Alves Júnior, A. A.; Sokoloff, M. D.

    2017-10-01

    MCBooster is a header-only, C++11-compliant library that provides routines to generate and perform calculations on large samples of phase space Monte Carlo events. To achieve superior performance, MCBooster can perform most of its calculations in parallel using CUDA- and OpenMP-enabled devices. MCBooster is built on top of the Thrust library and runs on Linux systems. This contribution summarizes the main features of MCBooster. A basic description of the user interface and some examples of applications are provided, along with measurements of performance in a variety of environments.
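
    As a rough illustration of the data-parallel workload MCBooster targets, the sketch below generates a large sample of two-body phase-space decays with vectorized NumPy operations. It does not use the MCBooster API or CUDA; the masses (in GeV) are example values only.

    # Illustrative two-body phase-space generation (not the MCBooster API).
    import numpy as np

    def two_body_decay(M, m1, m2, n_events, seed=0):
        rng = np.random.default_rng(seed)
        # Breakup momentum from the Källén function (fixed for a two-body decay).
        lam = (M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)
        p = np.sqrt(lam) / (2.0 * M)
        # Isotropic decay directions: uniform in cos(theta) and phi.
        cos_t = rng.uniform(-1.0, 1.0, n_events)
        phi = rng.uniform(0.0, 2.0 * np.pi, n_events)
        sin_t = np.sqrt(1.0 - cos_t**2)
        px = p * sin_t * np.cos(phi)
        py = p * sin_t * np.sin(phi)
        pz = p * cos_t
        e1 = np.full(n_events, np.sqrt(p**2 + m1**2))
        e2 = np.full(n_events, np.sqrt(p**2 + m2**2))
        # Four-vectors (E, px, py, pz); daughter 2 is back to back with daughter 1.
        return (np.stack([e1, px, py, pz], axis=1),
                np.stack([e2, -px, -py, -pz], axis=1))

    if __name__ == "__main__":
        d1, d2 = two_body_decay(M=5.279, m1=0.494, m2=0.140, n_events=1_000_000)
        print("generated", len(d1), "events; <E1> =", d1[:, 0].mean())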

  1. Next Generation Microchannel Heat Exchangers

    CERN Document Server

    Ohadi, Michael; Dessiatoun, Serguei; Cetegen, Edvin

    2013-01-01

    In Next Generation Microchannel Heat Exchangers, the authors focus on the new generation of highly efficient heat exchangers and present novel data and technical expertise not available in the open literature. Next-generation microchannels offer record-high heat transfer coefficients with pressure drops much lower than those of conventional microchannel heat exchangers. These inherent features promise fast penetration into many new markets, including high-heat-flux cooling of electronics, waste heat recovery and energy efficiency enhancement applications, alternative energy systems, as well as applications in mass exchangers and chemical reactor systems. The combination of up-to-the-minute research findings and technical know-how makes this book very timely as the search for high-performance heat and mass exchangers that can cut costs in materials consumption intensifies.

  2. RES-E-NEXT: Next Generation of RES-E Policy Instruments

    Energy Technology Data Exchange (ETDEWEB)

    Miller, M.; Bird, L.; Cochran, J.; Milligan, M.; Bazilian, M. [National Renewable Energy Laboratory, Golden, CO (United States); Denny, E.; Dillon, J.; Bialek, J.; O’Malley, M. [Ecar Limited (Ireland); Neuhoff, K. [DIW Berlin (Germany)

    2013-07-04

    The RES-E-NEXT study identifies policies that are required for the next phase of renewable energy support. The study analyses policy options that secure high shares of renewable electricity generation and adequate grid infrastructure, enhance flexibility and ensure an appropriate market design. Measures have limited costs or even save money, and policies can be gradually implemented.

  3. Progress on next generation linear colliders

    International Nuclear Information System (INIS)

    Ruth, R.D.

    1989-01-01

    In this paper, I focus on reviewing the issues and progress on a next generation linear collider with the general parameters of energy, luminosity, length, power, technology. The energy range is dictated by physics with a mass reach well beyond LEP, although somewhat short of SSC. The luminosity is that required to obtain 10^3-10^4 units of R_0 per year. The length is consistent with a site on Stanford land with collisions occurring on the SLAC site. The power was determined by economic considerations. Finally, the technology was limited by the desire to have a next generation linear collider before the next century. 25 refs., 3 figs., 6 tabs

  4. Next generation of accelerators

    International Nuclear Information System (INIS)

    Richter, B.

    1979-01-01

    Existing high-energy accelerators are reviewed, along with those under construction or being designed. Finally, some of the physics issues which go into setting machine parameters, and some of the features of the design of next generation electron and proton machines are discussed

  5. Prospects for next-generation e+e- linear colliders

    International Nuclear Information System (INIS)

    Ruth, R.D.

    1990-02-01

    The purpose of this paper is to review progress in the US towards a next generation linear collider. During 1988, there were three workshops held on linear colliders: ''Physics of Linear Colliders,'' in Capri, Italy, June 14--18, 1988; Snowmass 88 (Linear Collider subsection) June 27--July 15, 1988; and SLAC International Workshop on Next Generation Linear Colliders, November 28--December 9, 1988. In this paper, I focus on reviewing the issues and progress on a next generation linear collider. The energy range is dictated by physics with a mass reach well beyond LEP, although somewhat short of SSC. The luminosity is that required to obtain 10^3-10^4 units of R_0 per year. The length is consistent with a site on Stanford land with collisions occurring on the SLAC site; the power was determined by economic considerations. Finally, the technology was limited by the desire to have a next generation linear collider by the next century. 37 refs., 3 figs., 6 tabs

  6. Test generation for digital circuits using parallel processing

    Science.gov (United States)

    Hartmann, Carlos R.; Ali, Akhtar-Uz-Zaman M.

    1990-12-01

    The problem of test generation for digital logic circuits is NP-hard. Recently, the availability of low-cost, high-performance parallel machines has spurred interest in developing fast parallel algorithms for computer-aided design and test. This report describes a method of applying a 15-valued logic system for digital logic circuit test vector generation in a parallel programming environment. A concept called fault site testing allows for test generation, in parallel, that targets more than one fault at a given location. The multi-valued logic system allows results obtained by distinct processors and/or processes to be merged by means of simple set intersections. A machine-independent description is given for the proposed algorithm.
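
    A minimal sketch of the merging step described above is shown next, assuming a hypothetical per-fault routine: each worker returns the set of input vectors that detect its targeted fault, and results are combined by simple set intersection. The fault names and vector sets are invented for illustration and do not come from the report.

    # Illustrative merge of per-processor results by set intersection.
    from functools import reduce
    from multiprocessing import Pool

    def candidate_vectors(fault):
        """Hypothetical per-fault test generation: return the set of 3-bit
        input vectors that detect `fault` (canned example data)."""
        table = {
            "a_stuck_at_0": {0b101, 0b111, 0b110},
            "b_stuck_at_1": {0b101, 0b100, 0b110},
        }
        return table[fault]

    if __name__ == "__main__":
        faults = ["a_stuck_at_0", "b_stuck_at_1"]   # faults at one fault site
        with Pool(2) as pool:
            per_fault = pool.map(candidate_vectors, faults)
        common = reduce(set.intersection, per_fault)
        print("vectors detecting all targeted faults:", sorted(map(bin, common)))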

  7. Multiplexed microsatellite recovery using massively parallel sequencing

    Science.gov (United States)

    Jennings, T.N.; Knaus, B.J.; Mullins, T.D.; Haig, S.M.; Cronn, R.C.

    2011-01-01

    Conservation and management of natural populations requires accurate and inexpensive genotyping methods. Traditional microsatellite, or simple sequence repeat (SSR), marker analysis remains a popular genotyping method because of the comparatively low cost of marker development, ease of analysis and high power of genotype discrimination. With the availability of massively parallel sequencing (MPS), it is now possible to sequence microsatellite-enriched genomic libraries in multiplex pools. To test this approach, we prepared seven microsatellite-enriched, barcoded genomic libraries from diverse taxa (two conifer trees, five birds) and sequenced these on one lane of the Illumina Genome Analyzer using paired-end 80-bp reads. In this experiment, we screened 6.1 million sequences and identified 356958 unique microreads that contained di- or trinucleotide microsatellites. Examination of four species shows that our conversion rate from raw sequences to polymorphic markers compares favourably to Sanger- and 454-based methods. The advantage of multiplexed MPS is that the staggering capacity of modern microread sequencing is spread across many libraries; this reduces sample preparation and sequencing costs to less than $400 (USD) per species. This price is sufficiently low that microsatellite libraries could be prepared and sequenced for all 1373 organisms listed as 'threatened' and 'endangered' in the United States for under $0.5M (USD).
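
    The screening step for repeat-containing reads can be sketched with a simple regular-expression filter. The minimum repeat count below is an arbitrary cutoff chosen for the example, not the criterion used in the study.

    # Illustrative di-/trinucleotide microsatellite screen for sequencing reads.
    import re

    # At least 6 tandem copies of a 2-3 bp motif (assumed threshold).
    SSR = re.compile(r"([ACGT]{2,3})\1{5,}")

    def find_ssrs(reads):
        hits = []
        for read in reads:
            m = SSR.search(read)
            if m:
                motif = m.group(1)
                hits.append((read, motif, len(m.group(0)) // len(motif)))
        return hits

    if __name__ == "__main__":
        reads = [
            "TTGACACACACACACACAGGT",      # (CA)n dinucleotide repeat
            "GGATGATGATGATGATGATGATCC",   # (GAT)n trinucleotide repeat
            "ACGTACGTAACGTTGCA",          # no repeat
        ]
        for read, motif, copies in find_ssrs(reads):
            print(f"{motif} x{copies}: {read}")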

  8. GPAW - massively parallel electronic structure calculations with Python-based software

    DEFF Research Database (Denmark)

    Enkovaara, Jussi; Romero, Nichols A.; Shende, Sameer

    2011-01-01

    While dynamic, interpreted languages, such as Python, can increase programmer efficiency, they cannot compete directly with the raw performance of compiled languages. However, by using an interpreted language together with a compiled language, it is possible to have most of the productivity-enhancing features together with good numerical performance. We have used this approach in implementing the electronic structure simulation software GPAW using the combination of the Python and C programming languages. While the chosen approach works well in standard workstations and Unix environments, massively parallel supercomputing systems can present some challenges in porting, debugging and profiling the software. In this paper we describe some details of the implementation and discuss the advantages and challenges of the combined Python/C approach. We show that despite the challenges ...
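
    A toy example of the division of labor described above, with NumPy's compiled routines standing in for GPAW's custom C extensions: the high-level driver stays in Python while the numerically heavy kernel runs in compiled code.

    # Interpreted driver, compiled kernel: the pattern behind the Python/C split.
    import time
    import numpy as np

    def dense_matvec_python(a, x):
        """Pure-Python kernel: clear but slow."""
        return [sum(row[j] * x[j] for j in range(len(x))) for row in a]

    def dense_matvec_compiled(a, x):
        """Same operation dispatched to compiled C inside NumPy."""
        return a @ x

    if __name__ == "__main__":
        n = 500
        a = np.random.rand(n, n)
        x = np.random.rand(n)

        t0 = time.perf_counter()
        dense_matvec_python(a.tolist(), x.tolist())
        t1 = time.perf_counter()
        dense_matvec_compiled(a, x)
        t2 = time.perf_counter()
        print(f"pure Python: {t1 - t0:.4f} s, compiled kernel: {t2 - t1:.6f} s")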

  9. Overview of NASA's Next Generation Air Transportation System (NextGen) Research

    Science.gov (United States)

    Swenson, Harry N.

    2009-01-01

    This slide presentation is an overview of the research for the Next Generation Air Transportation System (NextGen). Included is a review of the current air transportation system and the challenges of air transportation research. Also included is a review of the current research highlights and significant accomplishments.

  10. Mobile Phones Democratize and Cultivate Next-Generation Imaging, Diagnostics and Measurement Tools

    Science.gov (United States)

    Ozcan, Aydogan

    2014-01-01

    In this article, I discuss some of the emerging applications and the future opportunities and challenges created by the use of mobile phones and their embedded components for the development of next-generation imaging, sensing, diagnostics and measurement tools. The massive volume of mobile phone users, which has now reached ~7 billion, drives the rapid improvements of the hardware, software and high-end imaging and sensing technologies embedded in our phones, transforming the mobile phone into a cost-effective and yet extremely powerful platform to run e.g., biomedical tests and perform scientific measurements that would normally require advanced laboratory instruments. This rapidly evolving and continuing trend will help us transform how medicine, engineering and sciences are practiced and taught globally. PMID:24647550

  11. Next Generation Social Networks

    DEFF Research Database (Denmark)

    Sørensen, Lene Tolstrup; Skouby, Knud Erik

    2008-01-01

    Different online networks for communities of people who share interests, or individuals who present themselves through user-produced content, are what make up the social networking of today. The purpose of this paper is to discuss perceived user requirements for the next generation of social networks. The paper ...

  12. Next generation CANDU plants

    International Nuclear Information System (INIS)

    Hedges, K.R.; Yu, S.K.W.

    1998-01-01

    Future CANDU designs will continue to meet the emerging design and performance requirements expected by the operating utilities. The next generation CANDU products will integrate new technologies into both the product features as well as into the engineering and construction work processes associated with delivering the products. The timely incorporation of advanced design features is the approach adopted for the development of the next generation of CANDU. AECL's current products consist of the 700 MW Class CANDU 6 and the 900 MW Class CANDU 9. Evolutionary improvements are continuing with our CANDU products to enhance their adaptability to meet customers' ever-increasing need for higher output. Our key product drivers are improved safety, environmental protection and improved cost effectiveness. Towards these goals we have made excellent progress in Research and Development and our investments are continuing in areas such as fuel channels and passive safety. Our long-term focus is utilizing the fuel cycle flexibility of CANDU reactors as part of the long-term energy mix.

  13. Optical Subsystems for Next Generation Access Networks

    DEFF Research Database (Denmark)

    Lazaro, J.A; Polo, V.; Schrenk, B.

    2011-01-01

    Recent optical technologies are providing higher flexibility to next generation access networks: on the one hand, providing progressive FTTx and specifically FTTH deployment, progressively shortening the copper access network; on the other hand, also opening fixed-mobile convergence solutions ... in next generation PON architectures. An overview is provided of the optical subsystems developed for the implementation of the proposed NG-Access Networks.

  14. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo; Kronbichler, Martin; Bangerth, Wolfgang

    2010-01-01

    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.

  15. Massively Parallel Finite Element Programming

    KAUST Repository

    Heister, Timo

    2010-01-01

    Today's large finite element simulations require parallel algorithms to scale on clusters with thousands or tens of thousands of processor cores. We present data structures and algorithms to take advantage of the power of high performance computers in generic finite element codes. Existing generic finite element libraries often restrict the parallelization to parallel linear algebra routines. This is a limiting factor when solving on more than a few hundreds of cores. We describe routines for distributed storage of all major components coupled with efficient, scalable algorithms. We give an overview of our effort to enable the modern and generic finite element library deal.II to take advantage of the power of large clusters. In particular, we describe the construction of a distributed mesh and develop algorithms to fully parallelize the finite element calculation. Numerical results demonstrate good scalability. © 2010 Springer-Verlag.

  16. Massive parallel 3D PIC simulation of negative ion extraction

    Science.gov (United States)

    Revel, Adrien; Mochalskyy, Serhiy; Montellano, Ivar Mauricio; Wünderlich, Dirk; Fantz, Ursel; Minea, Tiberiu

    2017-09-01

    The 3D PIC-MCC code ONIX is dedicated to modeling Negative hydrogen/deuterium Ion (NI) extraction and co-extraction of electrons from radio-frequency driven, low pressure plasma sources. It provides valuable insight into the complex phenomena involved in the extraction process. In previous calculations, a mesh size larger than the Debye length was used, implying numerical electron heating. Important steps have been achieved in terms of computation performance and parallelization efficiency, allowing successful massively parallel calculations (4096 cores), imperative to resolve the Debye length. In addition, the numerical algorithms have been improved in terms of grid treatment, i.e., the electric field near the complex geometry boundaries (plasma grid) is calculated more accurately. The revised model preserves the full 3D treatment, but can take advantage of a highly refined mesh. ONIX was used to investigate the role of the mesh size, the re-injection scheme for lost particles (extracted or wall absorbed), and the electron thermalization process on the calculated extracted current and plasma characteristics. It is demonstrated that all numerical schemes give the same NI current distribution for extracted ions. Concerning the electrons, the pair-injection technique is found to be well adapted to simulate the sheath in front of the plasma grid.

  17. Building next-generation converged networks theory and practice

    CERN Document Server

    Pathan, Al-Sakib Khan

    2013-01-01

    Supplying a comprehensive introduction to next-generation networks, Building Next-Generation Converged Networks: Theory and Practice strikes a balance between how and why things work and how to make them work. It compiles recent advancements along with basic issues from the wide range of fields related to next generation networks. Containing the contributions of 56 industry experts and researchers from 16 different countries, the book presents relevant theoretical frameworks and the latest research. It investigates new technologies such as IPv6 over Low Power Wireless Personal Area Network (6LoWPAN) ...

  18. Parallel paving: An algorithm for generating distributed, adaptive, all-quadrilateral meshes on parallel computers

    Energy Technology Data Exchange (ETDEWEB)

    Lober, R.R.; Tautges, T.J.; Vaughan, C.T.

    1997-03-01

    Paving is an automated mesh generation algorithm which produces all-quadrilateral elements. It can additionally generate these elements in varying sizes such that the resulting mesh adapts to a function distribution, such as an error function. While powerful, conventional paving is a very serial algorithm in its operation. Parallel paving is the extension of serial paving into parallel environments to perform the same meshing functions as conventional paving only on distributed, discretized models. This extension allows large, adaptive, parallel finite element simulations to take advantage of paving's meshing capabilities for h-remap remeshing. A significantly modified version of the CUBIT mesh generation code has been developed to host the parallel paving algorithm and demonstrate its capabilities on both two dimensional and three dimensional surface geometries and compare the resulting parallel produced meshes to conventionally paved meshes for mesh quality and algorithm performance. Sandia's "tiling" dynamic load balancing code has also been extended to work with the paving algorithm to retain parallel efficiency as subdomains undergo iterative mesh refinement.

  19. Key thrusts in next generation CANDU. Annex 10

    International Nuclear Information System (INIS)

    Shalaby, B.A.; Torgerson, D.F.; Duffey, R.B.

    2002-01-01

    Current electricity markets and the competitiveness of other generation options such as CCGT have influenced the directions of future nuclear generation. The next generation CANDU has used its key characteristics as the basis to leapfrog into a new design featuring improved economics, enhanced passive safety, enhanced operability and demonstrated fuel cycle flexibility. Many enabling technologies spinning off current CANDU design features are used in the next generation design. Some of these technologies have been developed in support of existing plants and near-term designs, while others will need to be developed and tested. This paper will discuss the key principles driving the next generation CANDU design and the fuel cycle flexibility of the CANDU system, which provides synergism with the PWR fuel cycle. (author)

  20. Massively parallel performance of neutron transport response matrix algorithms

    International Nuclear Information System (INIS)

    Hanebutte, U.R.; Lewis, E.E.

    1993-01-01

    Massively parallel red/black response matrix algorithms for the solution of within-group neutron transport problems are implemented on the Connection Machines CM-2, CM-200 and CM-5. The response matrices are derived from the diamond-difference and linear-linear nodal discrete ordinate and variational nodal P3 approximations. The unaccelerated performance of the iterative procedure is examined relative to the maximum rated performances of the machines. The effects of processor partition size, of virtual processor ratio and of problem size are examined in detail. For the red/black algorithm, the ratio of inter-node communication to computing times is found to be quite small, normally of the order of ten percent or less. Performance increases with problem size and with virtual processor ratio, within the memory-per-physical-processor limitation. Algorithm adaptation to coarser-grain machines is straightforward, with total computing time being virtually inversely proportional to the number of physical processors. (orig.)

  1. Simultaneous genomic identification and profiling of a single cell using semiconductor-based next generation sequencing

    Directory of Open Access Journals (Sweden)

    Manabu Watanabe

    2014-09-01

    Full Text Available Combining single-cell methods and next-generation sequencing should provide a powerful means to understand single-cell biology and obviate the effects of sample heterogeneity. Here we report a single-cell identification method and seamless cancer gene profiling using semiconductor-based massively parallel sequencing. A549 cells (an adenocarcinomic human alveolar basal epithelial cell line) were used as a model. Single-cell capture was performed using laser capture microdissection (LCM) with an Arcturus® XT system, and a captured single cell and a bulk population of A549 cells (≈10^6 cells) were subjected to whole genome amplification (WGA). For cell identification, a multiplex PCR method (AmpliSeq™ SNP HID panel) was used to enrich 136 highly discriminatory SNPs with a genotype concordance probability of 10^-31 to 10^-35. For cancer gene profiling, we used mutation profiling that was performed in parallel using a hotspot panel for 50 cancer-related genes. Sequencing was performed using a semiconductor-based benchtop sequencer. The distribution of sequence reads for both HID and Cancer panel amplicons was consistent across these samples. For the bulk population of cells, the percentages of sequence covered at coverage of more than 100× were 99.04% for the HID panel and 98.83% for the Cancer panel, while for the single cell the percentages of sequence covered at coverage of more than 100× were 55.93% for the HID panel and 65.96% for the Cancer panel. Partial amplification failure or randomly distributed non-amplified regions across samples from single cells during the WGA procedures, or random allele drop-out, probably caused these differences. However, comparative analyses showed that this method successfully discriminated a single A549 cancer cell from a bulk population of A549 cells. Thus, our approach provides a powerful means to overcome tumor sample heterogeneity when searching for somatic mutations.

  2. Designing the next generation (fifth generation computers)

    International Nuclear Information System (INIS)

    Wallich, P.

    1983-01-01

    A description is given of the designs necessary to develop fifth generation computers. An analysis is offered of problems and developments in parallelism, VLSI, artificial intelligence, knowledge engineering and natural language processing. Software developments are outlined including logic programming, object-oriented programming and exploratory programming. Computer architecture is detailed including concurrent computer architecture

  3. Next Generation Biopharmaceuticals: Product Development.

    Science.gov (United States)

    Mathaes, Roman; Mahler, Hanns-Christian

    2018-04-11

    Therapeutic proteins show rapid market growth. The relatively young biotech industry already represents 20% of the total global pharma market. The biotech industry environment has traditionally been fast-paced and intellectually stimulating. Nowadays the top ten best-selling drugs are dominated by monoclonal antibodies (mAbs). Despite mAbs being the biggest medical breakthrough in the last 25 years, technical innovation does not stand still. The goal remains to preserve the benefits of a conventional mAb (serum half-life and specificity) whilst further improving efficacy and safety, and to open new and better avenues for treating patients, e.g., improving the potency of molecules, target binding, tissue penetration, tailored pharmacokinetics, and reduced adverse effects or immunogenicity. The next generation of biopharmaceuticals can pose specific chemistry, manufacturing, and control (CMC) challenges. In contrast to conventional proteins, next-generation biopharmaceuticals often require lyophilization of the final drug product to ensure storage stability over the shelf life. In addition, next-generation biopharmaceuticals require analytical methods that cover the different possible degradation patterns and pathways, and product development is a long way from being straightforward. The element of "prior knowledge" does not exist equally for most novel formats compared to antibodies, and thus the assessment of critical quality attributes (CQAs) and the definition of CQA assessment criteria and specifications is difficult, especially in early-stage development.

  4. My-Forensic-Loci-queries (MyFLq) framework for analysis of forensic STR data generated by massive parallel sequencing.

    Science.gov (United States)

    Van Neste, Christophe; Vandewoestyne, Mado; Van Criekinge, Wim; Deforce, Dieter; Van Nieuwerburgh, Filip

    2014-03-01

    Forensic scientists are currently investigating how to transition from capillary electrophoresis (CE) to massive parallel sequencing (MPS) for analysis of forensic DNA profiles. MPS offers several advantages over CE such as virtually unlimited multiplexing of loci, combining both short tandem repeat (STR) and single nucleotide polymorphism (SNP) loci, small amplicons without constraints of size separation, more discrimination power, deep mixture resolution and sample multiplexing. We present our bioinformatic framework My-Forensic-Loci-queries (MyFLq) for analysis of MPS forensic data. For allele calling, the framework uses a MySQL reference allele database with automatically determined regions of interest (ROIs) by a generic maximal flanking algorithm, which makes it possible to use any STR or SNP forensic locus. Python scripts were designed to automatically make allele calls starting from raw MPS data. We also present a method to assess the usefulness and overall performance of a forensic locus with respect to MPS, as well as methods to estimate whether an unknown allele, whose sequence is not present in the MySQL database, is in fact a new allele or a sequencing error. The MyFLq framework was applied to an Illumina MiSeq dataset of a forensic Illumina amplicon library, generated from multilocus STR polymerase chain reaction (PCR) on both single contributor samples and multiple person DNA mixtures. Although the multilocus PCR was not yet optimized for MPS in terms of amplicon length or locus selection, excellent results were obtained for most loci. The results show a high signal-to-noise ratio, correct allele calls, and a low limit of detection for minor DNA contributors in mixed DNA samples. Technically, forensic MPS affords great promise for routine implementation in forensic genomics. The method is also applicable to adjacent disciplines such as molecular autopsy in legal medicine and in mitochondrial DNA research. Copyright © 2013 The Authors.
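
    The flank-based extraction of a locus region of interest can be sketched as follows. This is not the MyFLq implementation; the flanking sequences, reads and repeat unit are invented for illustration.

    # Illustrative flank-based allele tallying for one STR locus.
    from collections import Counter

    FLANKS = {"locus1": ("ACGTTG", "TTGCAA")}   # hypothetical 5'/3' flanks

    def call_alleles(reads, locus):
        left, right = FLANKS[locus]
        counts = Counter()
        for read in reads:
            i = read.find(left)
            if i < 0:
                continue
            j = read.find(right, i + len(left))
            if j < 0:
                continue
            counts[read[i + len(left):j]] += 1   # region of interest between flanks
        return counts

    if __name__ == "__main__":
        reads = (["ACGTTG" + "ATAG" * 8 + "TTGCAA"] * 90 +
                 ["ACGTTG" + "ATAG" * 9 + "TTGCAA"] * 85 +
                 ["ACGTTG" + "ATAG" * 8 + "TTG"] * 5)     # truncated, unusable read
        for allele, n in call_alleles(reads, "locus1").most_common():
            print(f"{len(allele) // 4} repeat units: {n} reads")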

  5. NASA's Next Generation Space Geodesy Network

    Science.gov (United States)

    Desai, S. D.; Gross, R. S.; Hilliard, L.; Lemoine, F. G.; Long, J. L.; Ma, C.; McGarry, J. F.; Merkowitz, S. M.; Murphy, D.; Noll, C. E.

    2012-01-01

    NASA's Space Geodesy Project (SGP) is developing a prototype core site for a next generation Space Geodetic Network (SGN). Each of the sites in this planned network co-locate current state-of-the-art stations from all four space geodetic observing systems, GNSS, SLR, VLBI, and DORIS, with the goal of achieving modern requirements for the International Terrestrial Reference Frame (ITRF). In particular, the driving ITRF requirements for this network are 1.0 mm in accuracy and 0.1 mm/yr in stability, a factor of 10-20 beyond current capabilities. Development of the prototype core site, located at NASA's Geophysical and Astronomical Observatory at the Goddard Space Flight Center, started in 2011 and will be completed by the end of 2013. In January 2012, two operational GNSS stations, GODS and GOON, were established at the prototype site within 100 m of each other. Both stations are being proposed for inclusion into the IGS network. In addition, work is underway for the inclusion of next generation SLR and VLBI stations along with a modern DORIS station. An automated survey system is being developed to measure inter-technique vector ties, and network design studies are being performed to define the appropriate number and distribution of these next generation space geodetic core sites that are required to achieve the driving ITRF requirements. We present the status of this prototype next generation space geodetic core site, results from the analysis of data from the established geodetic stations, and results from the ongoing network design studies.

  6. Quantification of massively parallel sequencing libraries - a comparative study of eight methods

    DEFF Research Database (Denmark)

    Hussing, Christian; Kampmann, Marie-Louise; Mogensen, Helle Smidt

    2018-01-01

    Quantification of massively parallel sequencing libraries is important for acquisition of monoclonal beads or clusters prior to clonal amplification and to avoid large variations in library coverage when multiple samples are included in one sequencing analysis. No gold standard for quantification ... estimates, followed by Qubit and electrophoresis-based instruments (Bioanalyzer, TapeStation, GX Touch, and Fragment Analyzer), while SYBR Green and TaqMan based qPCR assays gave the lowest estimates. qPCR gave more accurate predictions of sequencing coverage than Qubit and TapeStation did. Costs, time consumption, workflow simplicity, and the ability to quantify multiple samples are discussed. Technical specifications, advantages, and disadvantages of the various methods are pointed out.

  7. Next Generation Solar Collectors for CSP

    Energy Technology Data Exchange (ETDEWEB)

    Molnar, Attila [3M Company, St. Paul, MN (United States); Charles, Ruth [3M Company, St. Paul, MN (United States)

    2014-07-31

    The intent of “Next Generation Solar Collectors for CSP” program was to develop key technology elements for collectors in Phase 1 (Budget Period 1), design these elements in Phase 2 (Budget Period 2) and to deploy and test the final collector in Phase 3 (Budget Period 3). 3M and DOE mutually agreed to terminate the program at the end of Budget Period 1, primarily due to timeline issues. However, significant advancements were achieved in developing a next generation reflective material and panel that has the potential to significantly improve the efficiency of CSP systems.

  8. Parallel Algorithms for the Exascale Era

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Laboratory

    2016-10-19

    New parallel algorithms are needed to reach the Exascale level of parallelism with millions of cores. We look at some of the research developed by students in projects at LANL. The research blends ideas from the early days of computing while weaving in the fresh approach brought by students new to the field of high performance computing. We look at reproducibility of global sums and why it is important to parallel computing. Next we look at how the concept of hashing has led to the development of more scalable algorithms suitable for next-generation parallel computers. Nearly all of this work has been done by undergraduates and published in leading scientific journals.
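
    The reproducibility problem for global sums mentioned above can be shown in a few lines: floating-point addition is not associative, so the result of a parallel reduction can depend on how the data were split among workers, while a fixed reduction order restores bitwise reproducibility. The data set below is contrived to make the discrepancy deterministic; the sketch is illustrative and is not taken from the LANL student projects.

    # Order-dependent naive reduction versus a fixed-order pairwise reduction.
    def block_sum(data, nworkers):
        """Mimic a parallel reduction: each worker sums a contiguous block
        sequentially, then the partial sums are combined."""
        n = len(data)
        bounds = [n * w // nworkers for w in range(nworkers + 1)]
        return sum(sum(data[bounds[w]:bounds[w + 1]]) for w in range(nworkers))

    def pairwise_sum(data):
        """Fixed binary-tree reduction: the combination order depends only on
        the data layout, not on the number of workers."""
        vals = list(data)
        while len(vals) > 1:
            if len(vals) % 2:
                vals.append(0.0)
            vals = [vals[i] + vals[i + 1] for i in range(0, len(vals), 2)]
        return vals[0]

    if __name__ == "__main__":
        data = [2.0**53, 1.0, 1.0, -2.0**53]
        print("1 worker :", block_sum(data, 1))   # 0.0, both 1.0s are absorbed
        print("2 workers:", block_sum(data, 2))   # 1.0, a different rounding path
        print("pairwise :", pairwise_sum(data))   # identical for any worker count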

  9. A National Demonstration Project Building the Next Generation

    International Nuclear Information System (INIS)

    Keuter, Dan; Hughey, Kenneth; Melancon, Steve; Quinn, Edward 'Ted'

    2002-01-01

    energy situation, and in a way which supports U.S. environmental objectives. A key element of this effort will be the reestablishment and maintenance of an industrial base, which can be accessed in response to changing national energy needs. Right now, in a cooperative program through the U.S. Department of Energy, U.S. and Russian dollars are paying for over 700 Russian nuclear scientists and engineers to complete design work on the Gas Turbine - Modular Helium Reactor (GT-MHR), a next generation nuclear power plant that is meltdown-proof, substantially more efficient than the existing generation of reactors, creates substantially less waste and is extremely proliferation resistant. To date, the Russians are providing world-class engineering design work, resulting in the program being on track to begin construction of this first-of-a-kind reactor by the end of 2005. Just as important, in parallel with this effort, a number of key U.S. utilities are speaking with Congress and the Administration to 'piggyback' off this U.S./Russian effort to promote a joint private-public partnership to construct in parallel a similar first-of-a-kind reactor in the U.S. (authors)

  10. Decreasing Data Analytics Time: Hybrid Architecture MapReduce-Massive Parallel Processing for a Smart Grid

    Directory of Open Access Journals (Sweden)

    Abdeslam Mehenni

    2017-03-01

    Full Text Available As our populations grow in a world of limited resources, enterprises seek ways to lighten our load on the planet. The idea of modifying consumer behavior appears as a foundation for smart grids. Enterprises demonstrate the value available from deep analysis of electricity consumption histories, consumers' messages, outage alerts, etc. Enterprises mine massive structured and unstructured data. In a nutshell, smart grids result in a flood of data that needs to be analyzed, to better adjust to demand and give customers more ability to delve into their power consumption. Simply put, smart grids will increasingly have a flexible data warehouse attached to them. The key driver for the adoption of data management strategies is clearly the need to handle and analyze the large amounts of information utilities are now faced with. New approaches to data integration are emerging; Hadoop is in fact now being used by utilities to help manage the huge growth in data whilst maintaining coherence of the data warehouse. In this paper we define a new Meter Data Management System architecture repository that differs from three leading MDMSs, in which we use the MapReduce programming model for ETL and a parallel DBMS for query statements (massively parallel processing, MPP).
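
    A toy sketch of the MapReduce side of such a hybrid architecture is given below; the record layout and field names are invented for illustration, and a real deployment would run the equivalent job in Hadoop before loading the results into the MPP warehouse.

    # Illustrative map/reduce ETL step over raw smart-meter readings.
    from collections import defaultdict

    def map_record(line):
        """Map: parse one raw reading into a (meter_id, kWh) pair."""
        meter_id, _timestamp, kwh = line.split(",")
        return meter_id, float(kwh)

    def reduce_by_key(pairs):
        """Reduce: sum the kWh values per meter."""
        totals = defaultdict(float)
        for key, value in pairs:
            totals[key] += value
        return dict(totals)

    if __name__ == "__main__":
        raw = [
            "meter-001,2017-03-01T00:00,0.42",
            "meter-002,2017-03-01T00:00,1.10",
            "meter-001,2017-03-01T01:00,0.38",
        ]
        print(reduce_by_key(map(map_record, raw)))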

  11. Advanced Material Strategies for Next-Generation Additive Manufacturing.

    Science.gov (United States)

    Chang, Jinke; He, Jiankang; Mao, Mao; Zhou, Wenxing; Lei, Qi; Li, Xiao; Li, Dichen; Chua, Chee-Kai; Zhao, Xin

    2018-01-22

    Additive manufacturing (AM) has drawn tremendous attention in various fields. In recent years, great efforts have been made to develop novel additive manufacturing processes such as micro-/nano-scale 3D printing, bioprinting, and 4D printing for the fabrication of complex 3D structures with high resolution, living components, and multimaterials. The development of advanced functional materials is important for the implementation of these novel additive manufacturing processes. Here, a state-of-the-art review on advanced material strategies for novel additive manufacturing processes is provided, mainly including conductive materials, biomaterials, and smart materials. The advantages, limitations, and future perspectives of these materials for additive manufacturing are discussed. It is believed that the innovations of material strategies in parallel with the evolution of additive manufacturing processes will provide numerous possibilities for the fabrication of complex smart constructs with multiple functions, which will significantly widen the application fields of next-generation additive manufacturing.

  12. Advanced Material Strategies for Next-Generation Additive Manufacturing

    Directory of Open Access Journals (Sweden)

    Jinke Chang

    2018-01-01

    Full Text Available Additive manufacturing (AM) has drawn tremendous attention in various fields. In recent years, great efforts have been made to develop novel additive manufacturing processes such as micro-/nano-scale 3D printing, bioprinting, and 4D printing for the fabrication of complex 3D structures with high resolution, living components, and multimaterials. The development of advanced functional materials is important for the implementation of these novel additive manufacturing processes. Here, a state-of-the-art review on advanced material strategies for novel additive manufacturing processes is provided, mainly including conductive materials, biomaterials, and smart materials. The advantages, limitations, and future perspectives of these materials for additive manufacturing are discussed. It is believed that the innovations of material strategies in parallel with the evolution of additive manufacturing processes will provide numerous possibilities for the fabrication of complex smart constructs with multiple functions, which will significantly widen the application fields of next-generation additive manufacturing.

  13. Advanced Material Strategies for Next-Generation Additive Manufacturing

    Science.gov (United States)

    Chang, Jinke; He, Jiankang; Zhou, Wenxing; Lei, Qi; Li, Xiao; Li, Dichen

    2018-01-01

    Additive manufacturing (AM) has drawn tremendous attention in various fields. In recent years, great efforts have been made to develop novel additive manufacturing processes such as micro-/nano-scale 3D printing, bioprinting, and 4D printing for the fabrication of complex 3D structures with high resolution, living components, and multimaterials. The development of advanced functional materials is important for the implementation of these novel additive manufacturing processes. Here, a state-of-the-art review on advanced material strategies for novel additive manufacturing processes is provided, mainly including conductive materials, biomaterials, and smart materials. The advantages, limitations, and future perspectives of these materials for additive manufacturing are discussed. It is believed that the innovations of material strategies in parallel with the evolution of additive manufacturing processes will provide numerous possibilities for the fabrication of complex smart constructs with multiple functions, which will significantly widen the application fields of next-generation additive manufacturing. PMID:29361754

  14. Massively parallel electrical conductivity imaging of the subsurface: Applications to hydrocarbon exploration

    Science.gov (United States)

    Newman, Gregory A.; Commer, Michael

    2009-07-01

    Three-dimensional (3D) geophysical imaging is now receiving considerable attention for electrical conductivity mapping of potential offshore oil and gas reservoirs. The imaging technology employs controlled source electromagnetic (CSEM) and magnetotelluric (MT) fields and treats geological media exhibiting transverse anisotropy. Moreover when combined with established seismic methods, direct imaging of reservoir fluids is possible. Because of the size of the 3D conductivity imaging problem, strategies are required exploiting computational parallelism and optimal meshing. The algorithm thus developed has been shown to scale to tens of thousands of processors. In one imaging experiment, 32,768 tasks/processors on the IBM Watson Research Blue Gene/L supercomputer were successfully utilized. Over a 24 hour period we were able to image a large scale field data set that previously required over four months of processing time on distributed clusters based on Intel or AMD processors utilizing 1024 tasks on an InfiniBand fabric. Electrical conductivity imaging using massively parallel computational resources produces results that cannot be obtained otherwise and are consistent with timeframes required for practical exploration problems.

  15. Massively parallel electrical conductivity imaging of the subsurface: Applications to hydrocarbon exploration

    International Nuclear Information System (INIS)

    Newman, Gregory A; Commer, Michael

    2009-01-01

    Three-dimensional (3D) geophysical imaging is now receiving considerable attention for electrical conductivity mapping of potential offshore oil and gas reservoirs. The imaging technology employs controlled source electromagnetic (CSEM) and magnetotelluric (MT) fields and treats geological media exhibiting transverse anisotropy. Moreover when combined with established seismic methods, direct imaging of reservoir fluids is possible. Because of the size of the 3D conductivity imaging problem, strategies are required exploiting computational parallelism and optimal meshing. The algorithm thus developed has been shown to scale to tens of thousands of processors. In one imaging experiment, 32,768 tasks/processors on the IBM Watson Research Blue Gene/L supercomputer were successfully utilized. Over a 24 hour period we were able to image a large scale field data set that previously required over four months of processing time on distributed clusters based on Intel or AMD processors utilizing 1024 tasks on an InfiniBand fabric. Electrical conductivity imaging using massively parallel computational resources produces results that cannot be obtained otherwise and are consistent with timeframes required for practical exploration problems.

  16. COSMOS: Python library for massively parallel workflows.

    Science.gov (United States)

    Gafni, Erik; Luquette, Lovelace J; Lancaster, Alex K; Hawkins, Jared B; Jung, Jae-Yoon; Souilmi, Yassine; Wall, Dennis P; Tonellato, Peter J

    2014-10-15

    Efficient workflows to shepherd clinically generated genomic data through the multiple stages of a next-generation sequencing pipeline are of critical importance in translational biomedical science. Here we present COSMOS, a Python library for workflow management that allows formal description of pipelines and partitioning of jobs. In addition, it includes a user interface for tracking the progress of jobs, abstraction of the queuing system and fine-grained control over the workflow. Workflows can be created on traditional computing clusters as well as cloud-based services. Source code is available for academic non-commercial research purposes. Links to code and documentation are provided at http://lpm.hms.harvard.edu and http://wall-lab.stanford.edu. dpwall@stanford.edu or peter_tonellato@hms.harvard.edu. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
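
    The sketch below illustrates the general idea of declaring a pipeline as a dependency graph and dispatching its stages in order, which is what allows independent jobs to be partitioned across a cluster or cloud back end. It does not use the COSMOS API; the stage names and sample identifier are placeholders.

    # Minimal DAG-style pipeline dispatch (illustrative, not the COSMOS API).
    from graphlib import TopologicalSorter

    def align(sample):         return f"{sample}.bam"
    def call_variants(sample): return f"{sample}.vcf"
    def annotate(sample):      return f"{sample}.annotated.vcf"

    stages = {"align": align, "call_variants": call_variants, "annotate": annotate}

    # stage -> set of stages it depends on
    pipeline = {
        "align": set(),
        "call_variants": {"align"},
        "annotate": {"call_variants"},
    }

    if __name__ == "__main__":
        sample = "sampleA"   # placeholder sample name
        for stage in TopologicalSorter(pipeline).static_order():
            print(f"running {stage}: produced {stages[stage](sample)}")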

  17. Next generation PWR

    International Nuclear Information System (INIS)

    Tanaka, Toshihiko; Fukuda, Toshihiko; Usui, Shuji

    2001-01-01

    Development of LWRs for power generation in Japan has aimed at upgrading reliability, safety, operability, maintainability and economy, as well as at increasing unit capacity, in order to steadily expand nuclear generating capacity since commercial nuclear power generation began in 1970. The ABWR (advanced BWR), one of the most promising LWRs in the world, is already in commercial operation in Japan, and the APWR (advanced PWR), with the largest output in the world, is approaching practical use. Development of the APWR in Japan began in the 1980s, and construction of the first unit is planned for early this century. However, large changes in social circumstances now place extreme demands on the economics of nuclear power generation, and an improved APWR development reactor is therefore being promoted through collaboration between five PWR generation companies and the Mitsubishi Electric Co., Ltd. As a first step of this development, the effects of the changing social circumstances on nuclear power stations were investigated in order to establish the design requirements for the next generation PWR. This paper describes the outline, reactor core design, safety concept and safety evaluation of the APWR+, and the development of an innovative PWR. (G.K.)

  18. Massively parallel simulator of optical coherence tomography of inhomogeneous turbid media.

    Science.gov (United States)

    Malektaji, Siavash; Lima, Ivan T; Escobar I, Mauricio R; Sherif, Sherif S

    2017-10-01

    An accurate and practical simulator for Optical Coherence Tomography (OCT) could be an important tool to study the underlying physical phenomena in OCT such as multiple light scattering. Recently, many researchers have investigated simulation of OCT of turbid media, e.g., tissue, using Monte Carlo methods. The main drawback of these earlier simulators is the long computational time required to produce accurate results. We developed a massively parallel simulator of OCT of inhomogeneous turbid media that obtains both Class I diffusive reflectivity, due to ballistic and quasi-ballistic scattered photons, and Class II diffusive reflectivity due to multiply scattered photons. This Monte Carlo-based simulator is implemented on graphic processing units (GPUs), using the Compute Unified Device Architecture (CUDA) platform and programming model, to exploit the parallel nature of propagation of photons in tissue. It models an arbitrary shaped sample medium as a tetrahedron-based mesh and uses an advanced importance sampling scheme. This new simulator speeds up simulations of OCT of inhomogeneous turbid media by about two orders of magnitude. To demonstrate this result, we have compared the computation times of our new parallel simulator and its serial counterpart using two samples of inhomogeneous turbid media. We have shown that our parallel implementation reduced simulation time of OCT of the first sample medium from 407 min to 92 min by using a single GPU card, to 12 min by using 8 GPU cards and to 7 min by using 16 GPU cards. For the second sample medium, the OCT simulation time was reduced from 209 h to 35.6 h by using a single GPU card, and to 4.65 h by using 8 GPU cards, and to only 2 h by using 16 GPU cards. Therefore our new parallel simulator is considerably more practical to use than its central processing unit (CPU)-based counterpart. Our new parallel OCT simulator could be a practical tool to study the different physical phenomena underlying OCT

  19. Parallel plasma fluid turbulence calculations

    International Nuclear Information System (INIS)

    Leboeuf, J.N.; Carreras, B.A.; Charlton, L.A.; Drake, J.B.; Lynch, V.E.; Newman, D.E.; Sidikman, K.L.; Spong, D.A.

    1994-01-01

    The study of plasma turbulence and transport is a complex problem of critical importance for fusion-relevant plasmas. To this day, the fluid treatment of plasma dynamics is the best approach to realistic physics at the high resolution required for certain experimentally relevant calculations. Core and edge turbulence in a magnetic fusion device have been modeled using state-of-the-art, nonlinear, three-dimensional, initial-value fluid and gyrofluid codes. Parallel implementation of these models on diverse platforms--vector parallel (National Energy Research Supercomputer Center's CRAY Y-MP C90), massively parallel (Intel Paragon XP/S 35), and serial parallel (clusters of high-performance workstations using the Parallel Virtual Machine protocol)--offers a variety of paths to high resolution and significant improvements in real-time efficiency, each with its own advantages. The largest and most efficient calculations have been performed at the 200 Mword memory limit on the C90 in dedicated mode, where an overlap of 12 to 13 out of a maximum of 16 processors has been achieved with a gyrofluid model of core fluctuations. The richness of the physics captured by these calculations is commensurate with the increased resolution and efficiency and is limited only by the ingenuity brought to the analysis of the massive amounts of data generated

  20. Research in Parallel Algorithms and Software for Computational Aerosciences

    Science.gov (United States)

    Domel, Neal D.

    1996-01-01

    Phase 1 is complete for the development of a computational fluid dynamics (CFD) parallel code with automatic grid generation and adaptation for the Euler analysis of flow over complex geometries. SPLITFLOW, an unstructured Cartesian grid code developed at Lockheed Martin Tactical Aircraft Systems, has been modified for a distributed memory/massively parallel computing environment. The parallel code is operational on an SGI network, Cray J90 and C90 vector machines, SGI Power Challenge, and Cray T3D and IBM SP2 massively parallel machines. Parallel Virtual Machine (PVM) is the message passing protocol for portability to various architectures. A domain decomposition technique was developed which enforces dynamic load balancing to improve solution speed and memory requirements. A host/node algorithm distributes the tasks. The solver parallelizes very well, and scales with the number of processors. Partially parallelized and non-parallelized tasks consume most of the wall clock time in a very fine-grain environment. Timing comparisons on a Cray C90 demonstrate that Parallel SPLITFLOW runs 2.4 times faster on 8 processors than its non-parallel counterpart autotasked over 8 processors.

  1. Massive Exploration of Perturbed Conditions of the Blood Coagulation Cascade through GPU Parallelization

    Directory of Open Access Journals (Sweden)

    Paolo Cazzaniga

    2014-01-01

    The use of high-performance computing solutions is motivated by the need to perform large numbers of in silico analyses to study the behavior of biological systems in different conditions, which requires a computing power that usually exceeds the capability of standard desktop computers. In this work we present coagSODA, a CUDA-powered computational tool that was purposely developed for the analysis of a large mechanistic model of the blood coagulation cascade (BCC), defined according to both mass-action kinetics and Hill functions. coagSODA allows the execution of parallel simulations of the dynamics of the BCC by automatically deriving the system of ordinary differential equations and then exploiting the numerical integration algorithm LSODA. We present the biological results achieved with a massive exploration of perturbed conditions of the BCC, carried out with one-dimensional and bi-dimensional parameter sweep analysis, and show that GPU-accelerated parallel simulations of this model can increase the computational performance up to a 181× speedup compared to the corresponding sequential simulations.
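
    A scaled-down sketch of such a parameter-sweep analysis is given below, assuming SciPy is available: a toy two-species mass-action model (not the blood coagulation cascade) is integrated with LSODA via scipy.integrate.odeint for each point of a one-dimensional sweep, with the sweep distributed over CPU cores rather than a GPU.

    # Illustrative one-dimensional parameter sweep with the LSODA integrator.
    from multiprocessing import Pool

    import numpy as np
    from scipy.integrate import odeint

    def model(y, t, k1, k2):
        a, b = y
        return [-k1 * a, k1 * a - k2 * b]       # A -> B -> (degradation)

    def simulate(k1):
        t = np.linspace(0.0, 10.0, 200)
        sol = odeint(model, [1.0, 0.0], t, args=(k1, 0.5))
        return k1, sol[:, 1].max()              # peak concentration of B

    if __name__ == "__main__":
        sweep = np.linspace(0.1, 5.0, 16)        # swept values of rate constant k1
        with Pool() as pool:
            for k1, peak in pool.map(simulate, sweep):
                print(f"k1 = {k1:.2f}  ->  max [B] = {peak:.3f}")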

  2. Beam dynamics calculations and particle tracking using massively parallel processors

    International Nuclear Information System (INIS)

    Ryne, R.D.; Habib, S.

    1995-01-01

    During the past decade massively parallel processors (MPPs) have slowly gained acceptance within the scientific community. At present these machines typically contain a few hundred to one thousand off-the-shelf microprocessors and a total memory of up to 32 GBytes. The potential performance of these machines is illustrated by the fact that a month long job on a high end workstation might require only a few hours on an MPP. The acceptance of MPPs has been slow for a variety of reasons. For example, some algorithms are not easily parallelizable. Also, in the past these machines were difficult to program. But in recent years the development of Fortran-like languages such as CM Fortran and High Performance Fortran have made MPPs much easier to use. In the following we will describe how MPPs can be used for beam dynamics calculations and long term particle tracking

  3. Use of massively parallel computing to improve modelling accuracy within the nuclear sector

    Directory of Open Access Journals (Sweden)

    L M Evans

    2016-06-01

    This work presents recent advancements in three techniques: uncertainty quantification (UQ); cellular automata finite element (CAFE); and image-based finite element methods (IBFEM). Case studies are presented demonstrating their suitability for use in nuclear engineering, made possible by advancements in parallel computing hardware that is projected to be available to industry within the next decade at a cost of the order of $100k.

  4. Touch imprint cytology with massively parallel sequencing (TIC-seq): a simple and rapid method to snapshot genetic alterations in tumors.

    Science.gov (United States)

    Amemiya, Kenji; Hirotsu, Yosuke; Goto, Taichiro; Nakagomi, Hiroshi; Mochizuki, Hitoshi; Oyama, Toshio; Omata, Masao

    2016-12-01

    Identifying genetic alterations in tumors is critical for molecular targeting of therapy. In the clinical setting, formalin-fixed paraffin-embedded (FFPE) tissue is usually employed for genetic analysis. However, DNA extracted from FFPE tissue is often not suitable for analysis because of its low levels and poor quality. Additionally, FFPE sample preparation is time-consuming. To provide early treatment for cancer patients, a more rapid and robust method is required for precision medicine. We present a simple method for genetic analysis, called touch imprint cytology combined with massively parallel sequencing (TIC-seq), to detect somatic mutations in tumors. We prepared FFPE tissues and TIC specimens from tumors in nine lung cancer patients and one patient with breast cancer. We found that the quality and quantity of TIC DNA was higher than that of FFPE DNA, which requires microdissection to enrich DNA from target tissues. Targeted sequencing using a next-generation sequencer obtained sufficient sequence data using TIC DNA. Most (92%) somatic mutations in lung primary tumors were found to be consistent between TIC and FFPE DNA. We also applied TIC DNA to primary and metastatic tumor tissues to analyze tumor heterogeneity in a breast cancer patient, and showed that common and distinct mutations among primary and metastatic sites could be classified into two distinct histological subtypes. TIC-seq is an alternative and feasible method to analyze genomic alterations in tumors by simply touching the cut surface of specimens to slides. © 2016 The Authors. Cancer Medicine published by John Wiley & Sons Ltd.

  5. Implementation, capabilities, and benchmarking of Shift, a massively parallel Monte Carlo radiation transport code

    International Nuclear Information System (INIS)

    Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.

    2015-01-01

    This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000 ® problems. These benchmark and scaling studies show promising results

  6. Convergence analysis of a class of massively parallel direction splitting algorithms for the Navier-Stokes equations in simple domains

    KAUST Repository

    Guermond, Jean-Luc; Minev, Peter D.; Salgado, Abner J.

    2012-01-01

    We provide a convergence analysis for a new fractional timestepping technique for the incompressible Navier-Stokes equations based on direction splitting. This new technique is of linear complexity, unconditionally stable and convergent, and suitable for massive parallelization. © 2012 American Mathematical Society.

  7. Real-Time Optimization and Control of Next-Generation Distribution

    Science.gov (United States)

    Under its Grid Modernization portfolio, NREL's Real-Time Optimization and Control of Next-Generation Distribution Infrastructure effort is developing a system-theoretic distribution network management framework that unifies real-time voltage and ...

  8. A scalable approach to modeling groundwater flow on massively parallel computers

    International Nuclear Information System (INIS)

    Ashby, S.F.; Falgout, R.D.; Tompson, A.F.B.

    1995-12-01

    We describe a fully scalable approach to the simulation of groundwater flow on a hierarchy of computing platforms, ranging from workstations to massively parallel computers. Specifically, we advocate the use of scalable conceptual models in which the subsurface model is defined independently of the computational grid on which the simulation takes place. We also describe a scalable multigrid algorithm for computing the groundwater flow velocities. We are thus able to leverage both the engineer's time spent developing the conceptual model and the computing resources used in the numerical simulation. We have successfully employed this approach at the LLNL site, where we have run simulations ranging in size from just a few thousand spatial zones (on workstations) to more than eight million spatial zones (on the CRAY T3D), all using the same conceptual model.
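    The record above advocates a scalable multigrid solver. As a reference point for readers unfamiliar with multigrid, the sketch below implements a textbook geometric V-cycle for a 1D Poisson problem (weighted-Jacobi smoothing, full-weighting restriction, linear prolongation). It is a generic toy, not the LLNL code; in a massively parallel setting each level's grid, smoother, and transfer operators would be distributed across processors.

```python
import numpy as np

def smooth(u, f, h, sweeps=3, omega=2.0 / 3.0):
    """Weighted-Jacobi smoother for -u'' = f with zero Dirichlet boundaries."""
    for _ in range(sweeps):
        u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def restrict(r):
    """Full-weighting restriction onto a grid with half as many intervals."""
    rc = np.zeros((len(r) + 1) // 2)
    rc[1:-1] = 0.25 * r[1:-3:2] + 0.5 * r[2:-2:2] + 0.25 * r[3:-1:2]
    return rc

def prolong(ec, n_fine):
    """Linear interpolation of a coarse-grid correction back to the fine grid."""
    e = np.zeros(n_fine)
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return e

def v_cycle(u, f, h):
    """One recursive V-cycle for the 1D Poisson model problem."""
    if len(u) <= 3:                       # coarsest level: one interior unknown
        u[1] = 0.5 * (h * h * f[1] + u[0] + u[2])
        return u
    u = smooth(u, f, h)                   # pre-smoothing
    ec = v_cycle(np.zeros((len(u) + 1) // 2), restrict(residual(u, f, h)), 2 * h)
    u += prolong(ec, len(u))              # coarse-grid correction
    return smooth(u, f, h)                # post-smoothing

if __name__ == "__main__":
    n = 129                               # 2**k + 1 grid points on [0, 1]
    h = 1.0 / (n - 1)
    x = np.linspace(0.0, 1.0, n)
    f = np.pi**2 * np.sin(np.pi * x)      # exact solution is sin(pi * x)
    u = np.zeros(n)
    for _ in range(10):
        u = v_cycle(u, f, h)
    print("max error:", np.max(np.abs(u - np.sin(np.pi * x))))
```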

  9. Multi-gigabit optical interconnects for next-generation on-board digital equipment

    Science.gov (United States)

    Venet, Norbert; Favaro, Henri; Sotom, Michel; Maignan, Michel; Berthon, Jacques

    2017-11-01

    Parallel optical interconnects are experimentally assessed as a technology that may offer the high-throughput data communication capabilities required by next-generation on-board digital processing units. An optical backplane interconnect was breadboarded, on the basis of a digital transparent processor that provides flexible connectivity and variable bandwidth in telecom missions with multi-beam antenna coverage. The unit selected for the demonstration required that more than tens of Gbit/s be supported by the backplane. The demonstration made use of commercial parallel optical link modules at 850 nm wavelength, with 12 channels running at up to 2.5 Gbit/s. A flexible optical fibre circuit was developed to route board-to-board connections. It was plugged into the optical transmitter and receiver modules through 12-fibre MPO connectors. BER below 10^-14 and optical link budgets in excess of 12 dB were measured, which would allow broadcasting to be integrated. Integration of the optical backplane interconnect was successfully demonstrated by validating the overall digital processor functionality.

  10. Hacking the next generation

    CERN Document Server

    Dhanjani, Nitesh; Hardin, Brett

    2009-01-01

    With the advent of rich Internet applications, the explosion of social media, and the increased use of powerful cloud computing infrastructures, a new generation of attackers has added cunning new techniques to its arsenal. For anyone involved in defending an application or a network of systems, Hacking: The Next Generation is one of the few books to identify a variety of emerging attack vectors. You'll not only find valuable information on new hacks that attempt to exploit technical flaws, you'll also learn how attackers take advantage of individuals via social networking sites, and abuse

  11. Identification and characterization of Highlands J virus from a Mississippi sandhill crane using unbiased next-generation sequencing

    Science.gov (United States)

    Ip, Hon S.; Wiley, Michael R.; Long, Renee; Gustavo, Palacios; Shearn-Bochsler, Valerie; Whitehouse, Chris A.

    2014-01-01

    Advances in massively parallel DNA sequencing platforms, commonly termed next-generation sequencing (NGS) technologies, have greatly reduced time, labor, and cost associated with DNA sequencing. Thus, NGS has become a routine tool for new viral pathogen discovery and will likely become the standard for routine laboratory diagnostics of infectious diseases in the near future. This study demonstrated the application of NGS for the rapid identification and characterization of a virus isolated from the brain of an endangered Mississippi sandhill crane. This bird was part of a population restoration effort and was found in an emaciated state several days after Hurricane Isaac passed over the refuge in Mississippi in 2012. Post-mortem examination had identified trichostrongyliasis as the possible cause of death, but because a virus with morphology consistent with a togavirus was isolated from the brain of the bird, an arboviral etiology was strongly suspected. Because individual molecular assays for several known arboviruses were negative, unbiased NGS by Illumina MiSeq was used to definitively identify and characterize the causative viral agent. Whole genome sequencing and phylogenetic analysis revealed the viral isolate to be the Highlands J virus, a known avian pathogen. This study demonstrates the use of unbiased NGS for the rapid detection and characterization of an unidentified viral pathogen and the application of this technology to wildlife disease diagnostics and conservation medicine.

  12. Inter-laboratory evaluation of the EUROFORGEN Global ancestry-informative SNP panel by massively parallel sequencing using the Ion PGM™

    DEFF Research Database (Denmark)

    Eduardoff, M; Gross, T E; Santos, C

    2016-01-01

    Seq™ PCR primers was designed for the Global AIM-SNPs to perform massively parallel sequencing using the Ion PGM™ system. This study assessed individual SNP genotyping precision using the Ion PGM™, the forensic sensitivity of the multiplex using dilution series, degraded DNA plus simple mixtures...

  13. Detection of reverse transcriptase termination sites using cDNA ligation and massive parallel sequencing

    DEFF Research Database (Denmark)

    Kielpinski, Lukasz J; Boyd, Mette; Sandelin, Albin

    2013-01-01

    Detection of reverse transcriptase termination sites is important in many different applications, such as structural probing of RNAs, rapid amplification of cDNA 5' ends (5' RACE), cap analysis of gene expression, and detection of RNA modifications and protein-RNA cross-links. The throughput...... of these methods can be increased by applying massive parallel sequencing technologies.Here, we describe a versatile method for detection of reverse transcriptase termination sites based on ligation of an adapter to the 3' end of cDNA with bacteriophage TS2126 RNA ligase (CircLigase™). In the following PCR...

  14. The NASA Next Generation Stirling Technology Program Overview

    Science.gov (United States)

    Schreiber, J. G.; Shaltens, R. K.; Wong, W. A.

    2005-12-01

    NASA's Science Mission Directorate is developing the next generation of Stirling technology for future Radioisotope Power Systems (RPS) for surface and deep space missions. The next generation Stirling convertor is one of two advanced power conversion technologies currently being developed for future NASA missions, and is capable of operating both in planetary atmospheres and in deep space environments. The Stirling convertor (a free-piston engine integrated with a linear alternator) produces about 90 We(ac) and has a specific power of about 90 We/kg. Operating conditions of T_hot = 850 °C and T_rej = 90 °C result in an estimated Stirling convertor efficiency of about 40 percent. Using the next generation Stirling convertor in a future RPS, the "system" specific power is estimated at 8 We/kg. The design lifetime is three years on the surface of Mars and fourteen years for deep space missions. Electrical power of about 160 We (BOM) is produced by two (2) free-piston Stirling convertors heated by two (2) General Purpose Heat Source (GPHS) modules. This development is being performed by Sunpower (Athens, OH) with Pratt & Whitney Rocketdyne (Canoga Park, CA) under contract to Glenn Research Center (GRC), Cleveland, Ohio. GRC is guiding the independent testing and technology development for the next generation Stirling generator.

  15. Rhamnolipids--next generation surfactants?

    Science.gov (United States)

    Müller, Markus Michael; Kügler, Johannes H; Henkel, Marius; Gerlitzki, Melanie; Hörmann, Barbara; Pöhnlein, Martin; Syldatk, Christoph; Hausmann, Rudolf

    2012-12-31

    The demand for bio-based processes and materials in the petrochemical industry has significantly increased during the last decade because of the anticipated depletion of petroleum. This trend can be ascribed to three main causes: (1) the increased use of renewable resources for the chemical synthesis of already established product classes, (2) the replacement of chemical synthesis of already established product classes by new biotechnological processes based on renewable resources, and (3) the biotechnological production of new molecules with new features or better performance than already established comparable chemically synthesized products. All three approaches are currently being pursued for surfactant production. Biosurfactants are a very promising and interesting substance class because they are based on renewable resources, sustainable, and biologically degradable. Alkyl polyglycosides are chemically synthesized biosurfactants established on the surfactant market. The first microbiological biosurfactants on the market were sophorolipids. Of all currently known biosurfactants, rhamnolipids have the highest potential for becoming the next generation of biosurfactants introduced on the market. Although the metabolic pathways and genetic regulation of biosynthesis are known qualitatively, the quantitative understanding relevant for bioreactor cultivation is still missing. Additionally, high product titers have been described exclusively with vegetable oil as the sole carbon source in combination with Pseudomonas aeruginosa strains. Competitive productivity is still out of reach for heterologous hosts or non-pathogenic natural producer strains. Thus, on the one hand there is a need to gain a deeper understanding of the regulation of rhamnolipid production at the process and cellular level during bioreactor cultivations. On the other hand, there is a need for metabolizable renewable substrates which do not compete with food and feed. A sustainable bioeconomy approach should

  16. Physical Configuration of the Next Generation Home Network

    Science.gov (United States)

    Terada, Shohei; Kakishima, Yu; Hanawa, Dai; Oguchi, Kimio

    The number of broadband users is rapidly increasing worldwide. Japan already has over 10 million FTTH users. Another trend is the rapid digitalization of home electrical equipment, e.g. digital cameras and hard disc recorders. These trends will encourage the emergence of the next generation home network. In this paper, we introduce an image of the next generation home network and describe the five domains into which home devices can be classified. We then clarify the optimum medium with which to configure the network, given the requirements imposed by the home environment. Wiring cable lengths for three network topologies are calculated. Results gained from the next generation home network implemented on the first-phase testbed are shown. Finally, our conclusions are given.

  17. A Faster Parallel Algorithm and Efficient Multithreaded Implementations for Evaluating Betweenness Centrality on Massive Datasets

    Energy Technology Data Exchange (ETDEWEB)

    Madduri, Kamesh; Ediger, David; Jiang, Karl; Bader, David A.; Chavarria-Miranda, Daniel

    2009-02-15

    We present a new lock-free parallel algorithm for computing betweenness centrality of massive small-world networks. With minor changes to the data structures, our algorithm also achieves better spatial cache locality compared to previous approaches. Betweenness centrality is a key algorithm kernel in HPCS SSCA#2, a benchmark extensively used to evaluate the performance of emerging high-performance computing architectures for graph-theoretic computations. We design optimized implementations of betweenness centrality and the SSCA#2 benchmark for two hardware multithreaded systems: a Cray XMT system with the Threadstorm processor, and a single-socket Sun multicore server with the UltraSPARC T2 processor. For a small-world network of 134 million vertices and 1.073 billion edges, the 16-processor XMT system and the 8-core Sun Fire T5120 server achieve TEPS scores (an algorithmic performance count for the SSCA#2 benchmark) of 160 million and 90 million respectively, which corresponds to more than a 2X performance improvement over the previous parallel implementations. To better characterize the performance of these multithreaded systems, we correlate the SSCA#2 performance results with data from the memory-intensive STREAM and RandomAccess benchmarks. Finally, we demonstrate the applicability of our implementation to analyze massive real-world datasets by computing approximate betweenness centrality for a large-scale IMDb movie-actor network.
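    The fast multithreaded implementations described above build on Brandes' betweenness centrality accumulation. For reference, a minimal sequential Python version of that standard kernel is sketched below on a tiny hypothetical graph; the parallel variants distribute the outer loop over source vertices and use lock-free updates during accumulation.

```python
from collections import deque, defaultdict

def betweenness_centrality(adj):
    """Brandes' algorithm for an unweighted graph. `adj` maps each vertex to an
    iterable of neighbours; every vertex must appear as a key."""
    bc = dict.fromkeys(adj, 0.0)
    for s in adj:
        # single-source shortest paths via BFS
        stack, preds = [], defaultdict(list)
        sigma = dict.fromkeys(adj, 0.0); sigma[s] = 1.0
        dist = dict.fromkeys(adj, -1); dist[s] = 0
        q = deque([s])
        while q:
            v = q.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # dependency accumulation in reverse BFS order
        delta = dict.fromkeys(adj, 0.0)
        while stack:
            w = stack.pop()
            for v in preds[w]:
                delta[v] += (sigma[v] / sigma[w]) * (1.0 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

if __name__ == "__main__":
    graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # a small path graph
    print(betweenness_centrality(graph))
```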

  18. Toward green next-generation passive optical networks

    Science.gov (United States)

    Srivastava, Anand

    2015-01-01

    Energy efficiency has become an increasingly important aspect of designing access networks, due to both increased concerns about global warming and increased network costs related to energy consumption. Among the access, metro, and core segments, the access network constitutes a substantial part of the per-subscriber network energy consumption and is regarded as the bottleneck for increased network energy efficiency. One of the main opportunities for reducing network energy consumption lies in efficiency improvements of the customer premises equipment (CPE). Access networks in general are designed for low utilization while supporting high peak access rates. The combination of a large contribution to overall network power consumption and low utilization implies large potential for CPE power-saving modes in which functionality is powered off during periods of idleness. The next-generation passive optical network (NG-PON), considered one of the most promising optical access networks, has notably matured in the past few years and is envisioned to evolve massively in the near future. This trend will increase the power requirements of NG-PON and could undermine its attractiveness. This paper first provides a comprehensive survey of previously reported studies tackling this problem. A novel solution framework is then introduced, which aims to explore the maximum design dimensions and achieve the best possible power saving while maintaining the QoS requirements of each type of service.

  19. Parallel grid generation algorithm for distributed memory computers

    Science.gov (United States)

    Moitra, Stuti; Moitra, Anutosh

    1994-01-01

    A parallel grid-generation algorithm and its implementation on the Intel iPSC/860 computer are described. The grid-generation scheme is based on an algebraic formulation of homotopic relations. Methods for utilizing the inherent parallelism of the grid-generation scheme are described, and implementation of multiple levels of parallelism on multiple-instruction multiple-data machines is indicated. The algorithm is capable of providing near orthogonality and spacing control at solid boundaries while requiring minimal interprocessor communications. Results obtained on the Intel hypercube for a blended wing-body configuration are used to demonstrate the effectiveness of the algorithm. Fortran implementations based on the native programming model of the iPSC/860 computer and the Express system of software tools are reported. Computational gains in execution time speed-up ratios are given.

  20. Massive parallelization of a 3D finite difference electromagnetic forward solution using domain decomposition methods on multiple CUDA enabled GPUs

    Science.gov (United States)

    Schultz, A.

    2010-12-01

    3D forward solvers lie at the core of inverse formulations used to image the variation of electrical conductivity within the Earth's interior. This property is associated with variations in temperature, composition, phase, presence of volatiles, and in specific settings, the presence of groundwater, geothermal resources, oil/gas or minerals. The high cost of 3D solutions has been a stumbling block to wider adoption of 3D methods. Parallel algorithms for modeling frequency domain 3D EM problems have not achieved wide scale adoption, with emphasis on fairly coarse grained parallelism using MPI and similar approaches. The communications bandwidth as well as the latency required to send and receive network communication packets is a limiting factor in implementing fine grained parallel strategies, inhibiting wide adoption of these algorithms. Leading Graphics Processor Unit (GPU) companies now produce GPUs with hundreds of GPU processor cores per die. The footprint, in silicon, of the GPU's restricted instruction set is much smaller than the general purpose instruction set required of a CPU. Consequently, the density of processor cores on a GPU can be much greater than on a CPU. GPUs also have local memory, registers and high speed communication with host CPUs, usually through PCIe type interconnects. The extremely low cost and high computational power of GPUs provides the EM geophysics community with an opportunity to achieve fine grained (i.e. massive) parallelization of codes on low cost hardware. The current generation of GPUs (e.g. NVidia Fermi) provides 3 billion transistors per chip die, with nearly 500 processor cores and up to 6 GB of fast (DDR5) GPU memory. This latest generation of GPU supports fast hardware double precision (64 bit) floating point operations of the type required for frequency domain EM forward solutions. Each Fermi GPU board can sustain nearly 1 TFLOP in double precision, and multiple boards can be installed in the host computer system. We

  1. Massively parallel signature sequencing and bioinformatics analysis identifies up-regulation of TGFBI and SOX4 in human glioblastoma.

    Directory of Open Access Journals (Sweden)

    Biaoyang Lin

    BACKGROUND: A comprehensive network-based understanding of molecular pathways abnormally altered in glioblastoma multiforme (GBM) is essential for developing effective therapeutic approaches for this deadly disease. METHODOLOGY/PRINCIPAL FINDINGS: Applying a next generation sequencing technology, massively parallel signature sequencing (MPSS), we identified a total of 4535 genes that are differentially expressed between normal brain and GBM tissue. The expression changes of three up-regulated genes, CHI3L1, CHI3L2, and FOXM1, and two down-regulated genes, neurogranin and L1CAM, were confirmed by quantitative PCR. Pathway analysis revealed that TGF-beta pathway-related genes were significantly up-regulated in GBM tumor samples. An integrative pathway analysis of the TGF-beta signaling network identified two alternative TGF-beta signaling pathways mediated by SOX4 (sex determining region Y-box 4) and TGFBI (transforming growth factor beta induced). Quantitative RT-PCR and immunohistochemistry staining demonstrated that SOX4 and TGFBI expression is elevated in GBM tissues compared with normal brain tissues at both the RNA and protein levels. In vitro functional studies confirmed that TGFBI and SOX4 expression is increased by TGF-beta stimulation and decreased by a specific inhibitor of TGF-beta receptor 1 kinase. CONCLUSIONS/SIGNIFICANCE: Our MPSS database for GBM and normal brain tissues provides a useful resource for the scientific community. The identification of non-SMAD mediated TGF-beta signaling pathways acting through SOX4 and TGFBI (GENE ID: 7045) in GBM indicates that these alternative pathways should be considered, in addition to the canonical SMAD-mediated pathway, in the development of new therapeutic strategies targeting TGF-beta signaling in GBM. Finally, the construction of an extended TGF-beta signaling network with overlaid gene expression changes between GBM and normal brain extends our understanding of the biology of GBM.

  2. Massively parallel simulations of strong electronic correlations: Realistic Coulomb vertex and multiplet effects

    Science.gov (United States)

    Baumgärtel, M.; Ghanem, K.; Kiani, A.; Koch, E.; Pavarini, E.; Sims, H.; Zhang, G.

    2017-07-01

    We discuss the efficient implementation of general impurity solvers for dynamical mean-field theory. We show that both Lanczos and quantum Monte Carlo in different flavors (Hirsch-Fye, continuous-time hybridization- and interaction-expansion) exhibit excellent scaling on massively parallel supercomputers. We apply these algorithms to simulate realistic model Hamiltonians including the full Coulomb vertex, crystal-field splitting, and spin-orbit interaction. We discuss how to remove the sign problem in the presence of non-diagonal crystal-field and hybridization matrices. We show how to extract the physically observable quantities from imaginary time data, in particular correlation functions and susceptibilities. Finally, we present benchmarks and applications for representative correlated systems.

  3. Multi-mode sensor processing on a dynamically reconfigurable massively parallel processor array

    Science.gov (United States)

    Chen, Paul; Butts, Mike; Budlong, Brad; Wasson, Paul

    2008-04-01

    This paper introduces a novel computing architecture that can be reconfigured in real time to adapt on demand to multi-mode sensor platforms' dynamic computational and functional requirements. This 1 teraOPS reconfigurable Massively Parallel Processor Array (MPPA) has 336 32-bit processors. The programmable 32-bit communication fabric provides streamlined inter-processor connections with deterministically high performance. Software programmability, scalability, ease of use, and fast reconfiguration time (ranging from microseconds to milliseconds) are the most significant advantages over FPGAs and DSPs. This paper introduces the MPPA architecture, its programming model, and methods of reconfigurability. An MPPA platform for reconfigurable computing is based on a structural object programming model. Objects are software programs running concurrently on hundreds of 32-bit RISC processors and memories. They exchange data and control through a network of self-synchronizing channels. A common application design pattern on this platform, called a work farm, is a parallel set of worker objects, with one input and one output stream. Statically configured work farms with homogeneous and heterogeneous sets of workers have been used in video compression and decompression, network processing, and graphics applications.
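    The "work farm" pattern mentioned above, a parallel set of worker objects with one input and one output stream, can be sketched with ordinary Python multiprocessing as below. This only illustrates the pattern; on the actual MPPA the workers are programs running on RISC cores connected by self-synchronizing hardware channels, and all names here are hypothetical.

```python
import multiprocessing as mp

def worker(in_q, out_q):
    """One 'worker object': read items from the input stream, process them,
    and write results to the output stream."""
    for item in iter(in_q.get, None):        # None is the shutdown sentinel
        out_q.put((item, item * item))       # stand-in for the real per-item kernel

def work_farm(items, n_workers=4):
    """Fan a stream of work items out to a farm of workers and collect results."""
    items = list(items)
    in_q, out_q = mp.Queue(), mp.Queue()
    procs = [mp.Process(target=worker, args=(in_q, out_q)) for _ in range(n_workers)]
    for p in procs:
        p.start()
    for item in items:
        in_q.put(item)
    for _ in procs:                          # one sentinel per worker
        in_q.put(None)
    results = [out_q.get() for _ in items]   # results may arrive out of order
    for p in procs:
        p.join()
    return results

if __name__ == "__main__":
    print(work_farm(range(8)))
```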

  4. Thermal hydrodynamic modeling and simulation of hot-gas duct for next-generation nuclear reactor

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Injun [School of Mechanical Engineering, Yeungnam University, Gyeongsan 712-749 (Korea, Republic of); Hong, Sungdeok; Kim, Chansoo [Korea Atomic Energy Research Institute, Daejeon 305-353 (Korea, Republic of); Bai, Cheolho; Hong, Sungyull [School of Mechanical Engineering, Yeungnam University, Gyeongsan 712-749 (Korea, Republic of); Shim, Jaesool, E-mail: jshim@ynu.ac.kr [School of Mechanical Engineering, Yeungnam University, Gyeongsan 712-749 (Korea, Republic of)

    2016-12-15

    Highlights:
    • A nonlinear thermal hydrodynamic model is presented to examine a hot-gas duct (HGD) used in a fourth-generation nuclear power reactor.
    • Experiments and simulation were compared to validate the nonlinear porous model.
    • Natural convection and radiation are considered to study their effect on the surface temperature of the HGD.
    • The local Nusselt number is obtained for the optimum design of a possible next-generation HGD.
    Abstract: A very high-temperature gas-cooled reactor (VHTR) is a fourth-generation nuclear power reactor that requires an intermediate loop consisting of a hot-gas duct (HGD), an intermediate heat exchanger (IHX), and a process heat exchanger for massive hydrogen production. In this study, a mathematical model and simulation were developed for the HGD in a small-scale nitrogen gas loop that was designed and manufactured by the Korea Atomic Energy Research Institute. These were used to investigate the effect of various important factors on the surface of the HGD. In the modeling, a porous model was considered for the Kaowool insulator inside the HGD. Natural convection and radiation are included in the model. For validation, the modeled external surface temperatures are compared with experimental results obtained while changing the inlet temperature of the nitrogen working fluid. The simulation results show very good agreement with the experiments. The external surface temperatures of the HGD are obtained with respect to the porosity of the insulator, the emissivity of radiation, and the pressure of the working fluid. The local Nusselt number is also obtained for the optimum design of a possible next-generation HGD.

  5. Performance evaluations of advanced massively parallel platforms based on gyrokinetic toroidal five-dimensional Eulerian code GT5D

    International Nuclear Information System (INIS)

    Idomura, Yasuhiro; Jolliet, Sebastien

    2010-01-01

    A gyrokinetic toroidal five-dimensional Eulerian code, GT5D, is ported to six advanced massively parallel platforms and comprehensive benchmark tests are performed. A parallelisation technique based on physical properties of the gyrokinetic equation is presented. By extending the parallelisation technique with a hybrid parallel model, the scalability of the code is improved on platforms with multi-core processors. In the benchmark tests, good scalability is confirmed up to several thousand cores on every platform, and a maximum sustained performance of ∼18.6 Tflops is achieved using 16384 cores of BX900. (author)

  6. Next generation vaccines.

    Science.gov (United States)

    Riedmann, Eva M

    2011-07-01

    In February this year, about 100 delegates gathered for three days in Vienna (Austria) for the Next Generation Vaccines conference. The meeting held in the Vienna Hilton Hotel from 23rd-25th February 2011 had a strong focus on biotech and industry. The conference organizer Jacob Fleming managed to put together a versatile program ranging from the future generation of vaccines to manufacturing, vaccine distribution and delivery, to regulatory and public health issues. Carefully selected top industry experts presented first-hand experience and shared solutions for overcoming the latest challenges in the field of vaccinology. The program also included several case study presentations on novel vaccine candidates in different stages of development. An interactive pre-conference workshop as well as interactive panel discussions during the meeting allowed all delegates to gain new knowledge and become involved in lively discussions on timely, interesting and sometimes controversial topics related to vaccines.

  7. NOAA NEXt-Generation RADar (NEXRAD) Products

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset consists of Level III weather radar products collected from Next-Generation Radar (NEXRAD) stations located in the contiguous United States, Alaska,...

  8. The Next Generation Science Standards

    Science.gov (United States)

    Pruitt, Stephen L.

    2015-01-01

    The Next Generation Science Standards (NGSS Lead States 2013) were released almost two years ago. Work tied to the NGSS, their adoption, and implementation continues to move forward around the country. Stephen L. Pruitt, senior vice president, science, at Achieve, an independent, nonpartisan, nonprofit education reform organization that was a lead…

  9. Transitioning NWChem to the Next Generation of Manycore Machines

    Energy Technology Data Exchange (ETDEWEB)

    Bylaska, Eric J.; Apra, Edoardo; Kowalski, Karol; Jacquelin, Mathias; De Jong, Wibe A.; Vishnu, Abhinav; Palmer, Bruce J.; Daily, Jeffrey A.; Straatsma, Tjerk P.; Hammond, Jeff R.; Klemm, Michael

    2017-11-09

    The NorthWest Chemistry (NWChem) modeling software is a popular molecular chemistry simulation package that was designed from the start to work on massively parallel processing supercomputers [6, 28, 49]. It contains an umbrella of modules that today includes self-consistent field (SCF), second-order Møller-Plesset perturbation theory (MP2), coupled cluster, multi-configuration self-consistent field (MCSCF), selected configuration interaction (CI), tensor contraction engine (TCE) many-body methods, density functional theory (DFT), time-dependent density functional theory (TDDFT), real-time time-dependent density functional theory, pseudopotential plane-wave density functional theory (PSPW), band structure (BAND), ab initio molecular dynamics, Car-Parrinello molecular dynamics, classical molecular dynamics (MD), QM/MM, AIMD/MM, GIAO NMR, COSMO, COSMO-SMD, and RISM solvation models, free energy simulations, reaction path optimization, and parallel-in-time methods, among other capabilities [22]. Moreover, new capabilities continue to be added with each new release.

  10. Pharmacokinetic and pharmacodynamic considerations for the next generation protein therapeutics.

    Science.gov (United States)

    Shah, Dhaval K

    2015-10-01

    Increasingly sophisticated protein engineering efforts have been undertaken lately to generate protein therapeutics with desired properties. This has resulted in the discovery of the next generation of protein therapeutics, which include: engineered antibodies, immunoconjugates, bi/multi-specific proteins, antibody mimetic novel scaffolds, and engineered ligands/receptors. These novel protein therapeutics possess unique physicochemical properties and act via a unique mechanism-of-action, which collectively makes their pharmacokinetics (PK) and pharmacodynamics (PD) different than other established biological molecules. Consequently, in order to support the discovery and development of these next generation molecules, it becomes important to understand the determinants controlling their PK/PD. This review discusses the determinants that a PK/PD scientist should consider during the design and development of next generation protein therapeutics. In addition, the role of systems PK/PD models in enabling rational development of the next generation protein therapeutics is emphasized.

  11. IPv6: The Next Generation Internet Protocol

    Indian Academy of Sciences (India)

    addressing, new generation internet. ... required the creation of the next generation of Internet ... IPv6 standards have defined the following Extension headers ... addresses are represented as x:x:x:x:x:x:x:x, where each x is the hexadecimal ...
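    As a small illustration of the textual representation mentioned in the fragment above, Python's standard ipaddress module can expand and compress an IPv6 address between its eight hexadecimal groups and the "::" shorthand; the example address is an illustrative documentation prefix, not taken from the record.

```python
import ipaddress

# Eight 16-bit groups written in hexadecimal; "::" compresses a run of zero groups.
addr = ipaddress.IPv6Address("2001:0db8:0000:0000:0000:0000:0000:0001")
print(addr.exploded)    # 2001:0db8:0000:0000:0000:0000:0000:0001
print(addr.compressed)  # 2001:db8::1
```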

  12. Next generation farms at Fermilab

    International Nuclear Information System (INIS)

    Cudzewicz, R.; Giacchetti, L.; Leininger, M.; Levshina, T.; Pasetes, R.; Schweitzer, M.; Wolbers, S.

    1997-01-01

    The current generation of UNIX farms at Fermilab is rapidly approaching the end of its useful life. The workstations were purchased during 1991-1992 and represented the most cost-effective computing available at that time. New workstations are being acquired to upgrade the UNIX farms, in order to provide large amounts of computing for reconstruction of data collected during the 1996-1997 fixed-target run, as well as to provide simulation computing for CMS, the Auger project, accelerator calculations, and other projects that require massive amounts of CPU. 4 refs., 1 fig., 2 tabs

  13. Differences Between Distributed and Parallel Systems

    Energy Technology Data Exchange (ETDEWEB)

    Brightwell, R.; Maccabe, A.B.; Rissen, R.

    1998-10-01

    Distributed systems have been studied for twenty years and are now coming into wider use as fast networks and powerful workstations become more readily available. In many respects a massively parallel computer resembles a network of workstations and it is tempting to port a distributed operating system to such a machine. However, there are significant differences between these two environments and a parallel operating system is needed to get the best performance out of a massively parallel system. This report characterizes the differences between distributed systems, networks of workstations, and massively parallel systems and analyzes the impact of these differences on operating system design. In the second part of the report, we introduce Puma, an operating system specifically developed for massively parallel systems. We describe Puma portals, the basic building blocks for message passing paradigms implemented on top of Puma, and show how the differences observed in the first part of the report have influenced the design and implementation of Puma.

  14. Next-generation wireless technologies 4G and beyond

    CERN Document Server

    Chilamkurti, Naveen; Chaouchi, Hakima

    2013-01-01

    This comprehensive text/reference examines the various challenges to secure, efficient and cost-effective next-generation wireless networking. Topics and features: presents the latest advances, standards and technical challenges in a broad range of emerging wireless technologies; discusses cooperative and mesh networks, delay tolerant networks, and other next-generation networks such as LTE; examines real-world applications of vehicular communications, broadband wireless technologies, RFID technology, and energy-efficient wireless communications; introduces developments towards the 'Internet o

  15. CHOLLA: A NEW MASSIVELY PARALLEL HYDRODYNAMICS CODE FOR ASTROPHYSICAL SIMULATION

    International Nuclear Information System (INIS)

    Schneider, Evan E.; Robertson, Brant E.

    2015-01-01

    We present Computational Hydrodynamics On ParaLLel Architectures (Cholla), a new three-dimensional hydrodynamics code that harnesses the power of graphics processing units (GPUs) to accelerate astrophysical simulations. Cholla models the Euler equations on a static mesh using state-of-the-art techniques, including the unsplit Corner Transport Upwind algorithm, a variety of exact and approximate Riemann solvers, and multiple spatial reconstruction techniques including the piecewise parabolic method (PPM). Using GPUs, Cholla evolves the fluid properties of thousands of cells simultaneously and can update over 10 million cells per GPU-second while using an exact Riemann solver and PPM reconstruction. Owing to the massively parallel architecture of GPUs and the design of the Cholla code, astrophysical simulations with physically interesting grid resolutions (≳256^3) can easily be computed on a single device. We use the Message Passing Interface library to extend calculations onto multiple devices and demonstrate nearly ideal scaling beyond 64 GPUs. A suite of test problems highlights the physical accuracy of our modeling and provides a useful comparison to other codes. We then use Cholla to simulate the interaction of a shock wave with a gas cloud in the interstellar medium, showing that the evolution of the cloud is highly dependent on its density structure. We reconcile the computed mixing time of a turbulent cloud with a realistic density distribution destroyed by a strong shock with the existing analytic theory for spherical cloud destruction by describing the system in terms of its median gas density.

  16. CHOLLA: A NEW MASSIVELY PARALLEL HYDRODYNAMICS CODE FOR ASTROPHYSICAL SIMULATION

    Energy Technology Data Exchange (ETDEWEB)

    Schneider, Evan E.; Robertson, Brant E. [Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721 (United States)

    2015-04-15

    We present Computational Hydrodynamics On ParaLLel Architectures (Cholla), a new three-dimensional hydrodynamics code that harnesses the power of graphics processing units (GPUs) to accelerate astrophysical simulations. Cholla models the Euler equations on a static mesh using state-of-the-art techniques, including the unsplit Corner Transport Upwind algorithm, a variety of exact and approximate Riemann solvers, and multiple spatial reconstruction techniques including the piecewise parabolic method (PPM). Using GPUs, Cholla evolves the fluid properties of thousands of cells simultaneously and can update over 10 million cells per GPU-second while using an exact Riemann solver and PPM reconstruction. Owing to the massively parallel architecture of GPUs and the design of the Cholla code, astrophysical simulations with physically interesting grid resolutions (≳256^3) can easily be computed on a single device. We use the Message Passing Interface library to extend calculations onto multiple devices and demonstrate nearly ideal scaling beyond 64 GPUs. A suite of test problems highlights the physical accuracy of our modeling and provides a useful comparison to other codes. We then use Cholla to simulate the interaction of a shock wave with a gas cloud in the interstellar medium, showing that the evolution of the cloud is highly dependent on its density structure. We reconcile the computed mixing time of a turbulent cloud with a realistic density distribution destroyed by a strong shock with the existing analytic theory for spherical cloud destruction by describing the system in terms of its median gas density.
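    Both records above describe a GPU Euler solver built from per-interface Riemann solves. The sketch below shows one such ingredient, an HLL approximate Riemann flux for the 1D Euler equations, in plain NumPy; it is a generic textbook flux with simplified wave-speed estimates and an assumed adiabatic index, not Cholla's CUDA implementation.

```python
import numpy as np

GAMMA = 5.0 / 3.0   # assumed adiabatic index

def primitive(U):
    """Convert conserved variables (rho, rho*u, E) to (rho, u, p)."""
    rho, mom, E = U
    u = mom / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * u**2)
    return rho, u, p

def euler_flux(U):
    rho, u, p = primitive(U)
    return np.array([rho * u, rho * u**2 + p, (U[2] + p) * u])

def hll_flux(UL, UR):
    """HLL approximate Riemann flux between a left and a right state.
    GPU hydro codes evaluate one such flux per cell interface, which is why
    the update maps naturally onto thousands of concurrent threads."""
    rhoL, uL, pL = primitive(UL)
    rhoR, uR, pR = primitive(UR)
    cL = np.sqrt(GAMMA * pL / rhoL)
    cR = np.sqrt(GAMMA * pR / rhoR)
    sL = min(uL - cL, uR - cR)           # simple Davis-type wave-speed estimates
    sR = max(uL + cL, uR + cR)
    FL, FR = euler_flux(UL), euler_flux(UR)
    if sL >= 0.0:
        return FL
    if sR <= 0.0:
        return FR
    return (sR * FL - sL * FR + sL * sR * (UR - UL)) / (sR - sL)

if __name__ == "__main__":
    # Sod-like states: (rho, rho*u, E) with E = p/(gamma-1) + 0.5*rho*u**2
    UL = np.array([1.0, 0.0, 1.0 / (GAMMA - 1.0)])
    UR = np.array([0.125, 0.0, 0.1 / (GAMMA - 1.0)])
    print(hll_flux(UL, UR))
```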

  17. Efficient Sphere Detector Algorithm for Massive MIMO using GPU Hardware Accelerator

    KAUST Repository

    Arfaoui, Mohamed-Amine

    2016-06-01

    To further enhance the capacity of next-generation wireless communication systems, massive MIMO has recently appeared as a necessary enabling technology to achieve high-performance signal processing for large-scale multiple antennas. However, massive MIMO systems inevitably generate signal processing overheads, which translate into an ever-increasing rate of complexity; such a system may therefore not maintain the inherent real-time requirement of wireless systems. We redesign the non-linear sphere decoder method to increase the performance of the system, cast most memory-bound computations into compute-bound operations to reduce the overall complexity, and maintain real-time processing thanks to the GPU computational power. We show a comprehensive complexity and performance analysis on an unprecedented MIMO system scale, which can ease the design phase toward simulating future massive MIMO wireless systems.

  18. Efficient Sphere Detector Algorithm for Massive MIMO using GPU Hardware Accelerator

    KAUST Repository

    Arfaoui, Mohamed-Amine; Ltaief, Hatem; Rezki, Zouheir; Alouini, Mohamed-Slim; Keyes, David E.

    2016-01-01

    To further enhance the capacity of next-generation wireless communication systems, massive MIMO has recently appeared as a necessary enabling technology to achieve high-performance signal processing for large-scale multiple antennas. However, massive MIMO systems inevitably generate signal processing overheads, which translate into an ever-increasing rate of complexity; such a system may therefore not maintain the inherent real-time requirement of wireless systems. We redesign the non-linear sphere decoder method to increase the performance of the system, cast most memory-bound computations into compute-bound operations to reduce the overall complexity, and maintain real-time processing thanks to the GPU computational power. We show a comprehensive complexity and performance analysis on an unprecedented MIMO system scale, which can ease the design phase toward simulating future massive MIMO wireless systems.
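    Both records above redesign a sphere decoder for massive MIMO. As a baseline illustration of what a sphere decoder does, the sketch below performs a depth-first tree search with radius pruning for a small real-valued system y = Hx + n; it is a toy maximum-likelihood detector under assumed BPSK-like symbols, not the authors' GPU algorithm, and all names and sizes are hypothetical.

```python
import numpy as np

def sphere_decode(H, y, alphabet=(-1.0, 1.0)):
    """Depth-first sphere decoder for a small real-valued MIMO system.
    After QR factoring H, the search proceeds layer by layer from the last
    symbol, pruning branches whose partial cost already exceeds the best
    full solution found so far."""
    n = H.shape[1]
    Q, R = np.linalg.qr(H)
    z = Q.T @ y
    best = {"cost": np.inf, "x": None}

    def search(level, x, partial):
        if partial >= best["cost"]:
            return                              # prune: outside the current sphere
        if level < 0:
            best["cost"], best["x"] = partial, x.copy()
            return
        for s in alphabet:
            x[level] = s
            resid = z[level] - R[level, level:] @ x[level:]
            search(level - 1, x, partial + resid**2)

    search(n - 1, np.zeros(n), 0.0)
    return best["x"], best["cost"]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    H = rng.standard_normal((4, 4))
    x_true = rng.choice([-1.0, 1.0], size=4)
    y = H @ x_true + 0.05 * rng.standard_normal(4)
    x_hat, cost = sphere_decode(H, y)
    print("sent:", x_true, "detected:", x_hat)
```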

  19. Phase space simulation of collisionless stellar systems on the massively parallel processor

    International Nuclear Information System (INIS)

    White, R.L.

    1987-01-01

    A numerical technique for solving the collisionless Boltzmann equation describing the time evolution of a self-gravitating fluid in phase space was implemented on the Massively Parallel Processor (MPP). The code performs calculations on a two-dimensional phase-space grid (with one space and one velocity dimension). Some results from the calculations are presented. The execution speed of the code is comparable to the speed of a single processor of a Cray X-MP. Advantages and disadvantages of the MPP architecture for this type of problem are discussed. The nearest-neighbor connectivity of the MPP array does not pose a significant obstacle. Future MPP-like machines should have much more local memory and easier access to staging memory and disks in order to be effective for this type of problem.

  20. NNLO massive corrections to Bhabha scattering and theoretical precision of BabaYaga@NLO

    International Nuclear Information System (INIS)

    Carloni Calame, C.M.; Nicrosini, O.; Piccinini, F.; Riemann, T.; Worek, M.

    2011-12-01

    We provide an exact calculation of next-to-next-to-leading order (NNLO) massive corrections to Bhabha scattering in QED, relevant for precision luminosity monitoring at meson factories. Using realistic reference event selections, exact numerical results for leptonic and hadronic corrections are given and compared with the corresponding approximate predictions of the event generator BabaYaga@NLO. It is shown that the NNLO massive corrections are necessary for luminosity measurements with per mille precision. At the same time they are found to be well accounted for in the generator at an accuracy level below the one per mille. An update of the total theoretical precision of BabaYaga@NLO is presented and possible directions for a further error reduction are sketched. (orig.)

  1. Parallel hierarchical radiosity rendering

    Energy Technology Data Exchange (ETDEWEB)

    Carter, Michael [Iowa State Univ., Ames, IA (United States)

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

  2. Next Generation of Photovoltaics New Concepts

    CERN Document Server

    Vega, Antonio; López, Antonio

    2012-01-01

    This book presents new concepts for a next generation of PV. Among these concepts are: multijunction solar cells, multiple excitation solar cells (or how to take advantage of high-energy photons for the creation of more than one electron-hole pair), intermediate band solar cells (or how to take advantage of below-band-gap energy photons) and related technologies (for quantum dots, nitrides, thin films), and advanced light management approaches (plasmonics). Written by world-class experts in next generation photovoltaics, this book is an essential reference guide accessible to both beginners and experts working with solar cell technology. The book deeply analyzes the current state-of-the-art of the new photovoltaic approaches and outlines the implementation paths of these advanced devices. Topics addressed range from the fundamentals to the description of the state-of-the-art of the new types of solar cells.

  3. Next generation biofuel engineering in prokaryotes

    Science.gov (United States)

    Gronenberg, Luisa S.; Marcheschi, Ryan J.; Liao, James C.

    2014-01-01

    Next-generation biofuels must be compatible with current transportation infrastructure and be derived from environmentally sustainable resources that do not compete with food crops. Many bacterial species have unique properties advantageous to the production of such next-generation fuels. However, no single species possesses all characteristics necessary to make high quantities of fuels from plant waste or CO2. Species containing a subset of the desired characteristics are used as starting points for engineering organisms with all desired attributes. Metabolic engineering of model organisms has yielded high titer production of advanced fuels, including alcohols, isoprenoids and fatty acid derivatives. Technical developments now allow engineering of native fuel producers, as well as lignocellulolytic and autotrophic bacteria, for the production of biofuels. Continued research on multiple fronts is required to engineer organisms for truly sustainable and economical biofuel production. PMID:23623045

  4. Next generation of photovoltaics. New concepts

    Energy Technology Data Exchange (ETDEWEB)

    Cristobal Lopez, Ana Belen; Marti Vega, Antonio; Luque Lopez, Antonio (eds.) [Univ. Politecnica de Madrid (Spain). Inst. de Energia Solar E.T.S.I. Telecomunicacion

    2012-07-01

    This book presents new concepts for a next generation of PV. Among these concepts are: multijunction solar cells, multiple excitation solar cells (or how to take advantage of high-energy photons for the creation of more than one electron-hole pair), intermediate band solar cells (or how to take advantage of below-band-gap energy photons) and related technologies (for quantum dots, nitrides, thin films), and advanced light management approaches (plasmonics). Written by world-class experts in next generation photovoltaics, this book is an essential reference guide accessible to both beginners and experts working with solar cell technology. The book deeply analyzes the current state-of-the-art of the new photovoltaic approaches and outlines the implementation paths of these advanced devices. Topics addressed range from the fundamentals to the description of the state-of-the-art of the new types of solar cells. (orig.)

  5. Genotypic tropism testing by massively parallel sequencing: qualitative and quantitative analysis

    Directory of Open Access Journals (Sweden)

    Thiele Bernhard

    2011-05-01

    Background: Inferring viral tropism from genotype is a fast and inexpensive alternative to phenotypic testing. While being highly predictive when performed on clonal samples, sensitivity of predicting CXCR4-using (X4) variants drops substantially in clinical isolates. This is mainly attributed to minor variants not detected by standard bulk-sequencing. Massively parallel sequencing (MPS) detects single clones, thereby being much more sensitive. Using this technology we wanted to improve genotypic prediction of coreceptor usage. Methods: Plasma samples from 55 antiretroviral-treated patients tested for coreceptor usage with the Monogram Trofile Assay were sequenced with standard population-based approaches. Fourteen of these samples were selected for further analysis with MPS. Tropism was predicted from each sequence with geno2pheno[coreceptor]. Results: Prediction based on bulk-sequencing yielded 59.1% sensitivity and 90.9% specificity compared to the Trofile assay. With MPS, 7600 reads were generated on average per isolate. Minorities of sequences with high confidence in CXCR4-usage were found in all samples, irrespective of phenotype. When using the default false-positive rate of geno2pheno[coreceptor] (10%), and defining a minority cutoff of 5%, the results were concordant in all but one isolate. Conclusions: The combination of MPS and coreceptor usage prediction results in a fast and accurate alternative to phenotypic assays. The detection of X4-viruses in all isolates suggests that coreceptor usage as well as fitness of minorities is important for therapy outcome. The high sensitivity of this technology in combination with a quantitative description of the viral population may allow implementing meaningful cutoffs for predicting response to CCR5-antagonists in the presence of X4-minorities.

  6. Genotypic tropism testing by massively parallel sequencing: qualitative and quantitative analysis.

    Science.gov (United States)

    Däumer, Martin; Kaiser, Rolf; Klein, Rolf; Lengauer, Thomas; Thiele, Bernhard; Thielen, Alexander

    2011-05-13

    Inferring viral tropism from genotype is a fast and inexpensive alternative to phenotypic testing. While being highly predictive when performed on clonal samples, sensitivity of predicting CXCR4-using (X4) variants drops substantially in clinical isolates. This is mainly attributed to minor variants not detected by standard bulk-sequencing. Massively parallel sequencing (MPS) detects single clones thereby being much more sensitive. Using this technology we wanted to improve genotypic prediction of coreceptor usage. Plasma samples from 55 antiretroviral-treated patients tested for coreceptor usage with the Monogram Trofile Assay were sequenced with standard population-based approaches. Fourteen of these samples were selected for further analysis with MPS. Tropism was predicted from each sequence with geno2pheno[coreceptor]. Prediction based on bulk-sequencing yielded 59.1% sensitivity and 90.9% specificity compared to the trofile assay. With MPS, 7600 reads were generated on average per isolate. Minorities of sequences with high confidence in CXCR4-usage were found in all samples, irrespective of phenotype. When using the default false-positive-rate of geno2pheno[coreceptor] (10%), and defining a minority cutoff of 5%, the results were concordant in all but one isolate. The combination of MPS and coreceptor usage prediction results in a fast and accurate alternative to phenotypic assays. The detection of X4-viruses in all isolates suggests that coreceptor usage as well as fitness of minorities is important for therapy outcome. The high sensitivity of this technology in combination with a quantitative description of the viral population may allow implementing meaningful cutoffs for predicting response to CCR5-antagonists in the presence of X4-minorities.
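    The two records above combine per-read geno2pheno[coreceptor] predictions with a 10% false-positive-rate threshold and a 5% minority cutoff. The sketch below only illustrates how such thresholds might be applied to a list of per-read false-positive rates; the values, function name, and classification rule are hypothetical stand-ins, not the published pipeline or the geno2pheno API.

```python
def call_tropism(read_fprs, fpr_threshold=0.10, minority_cutoff=0.05):
    """Classify a sample from per-read false-positive rates (hypothetical values;
    the real pipeline derives one FPR per sequenced read from its V3 sequence).
    A read with FPR <= fpr_threshold counts as predicted X4; the sample is
    reported as X4-containing if that minority reaches `minority_cutoff`."""
    if not read_fprs:
        raise ValueError("no reads supplied")
    x4_reads = sum(1 for fpr in read_fprs if fpr <= fpr_threshold)
    x4_fraction = x4_reads / len(read_fprs)
    return {
        "reads": len(read_fprs),
        "x4_fraction": x4_fraction,
        "call": "X4-containing" if x4_fraction >= minority_cutoff else "R5",
    }

if __name__ == "__main__":
    import random
    random.seed(1)
    # mostly R5-like reads (high FPR) with a ~7% X4-like minority (low FPR)
    fprs = [random.uniform(0.2, 0.9) for _ in range(930)] + \
           [random.uniform(0.001, 0.05) for _ in range(70)]
    print(call_tropism(fprs))
```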

  7. Illuminating the evolution of equids and rodents with next-generation sequencing of ancient specimens

    DEFF Research Database (Denmark)

    Mouatt, Julia Thidamarth Vilstrup

    The sequencing of ancient DNA provides perspectives on the genetic history of past populations and extinct species. However, ancient DNA research presents specific limitations, mostly due to DNA survival, damage and contamination. Yet with stringent laboratory procedures, the sensitivity of target enrichment methods, and the massive throughput and latest advances within DNA sequencing, the field of ancient DNA has flourished in recent years. Those advances have even enabled the sequencing of complete genomes from the past, moving the field into genomic sciences. In this thesis we have used these latest developments within ancient DNA research, including target enrichment capture and Next-Generation Sequencing, to address a range of evolutionary questions related to two major mammalian groups, equids and rodents. In particular we have resolved phylogenetic relationships within equids using complete mitochond...

  8. Galaxy LIMS for next-generation sequencing

    NARCIS (Netherlands)

    Scholtalbers, J.; Rossler, J.; Sorn, P.; Graaf, J. de; Boisguerin, V.; Castle, J.; Sahin, U.

    2013-01-01

    SUMMARY: We have developed a laboratory information management system (LIMS) for a next-generation sequencing (NGS) laboratory within the existing Galaxy platform. The system provides lab technicians standard and customizable sample information forms, barcoded submission forms, tracking of input

  9. Massively Parallel Assimilation of TOGA/TAO and Topex/Poseidon Measurements into a Quasi Isopycnal Ocean General Circulation Model Using an Ensemble Kalman Filter

    Science.gov (United States)

    Keppenne, Christian L.; Rienecker, Michele; Borovikov, Anna Y.; Suarez, Max

    1999-01-01

    A massively parallel ensemble Kalman filter (EnKF) is used to assimilate temperature data from the TOGA/TAO array and altimetry from TOPEX/POSEIDON into a Pacific basin version of the NASA Seasonal-to-Interannual Prediction Project (NSIPP) quasi-isopycnal ocean general circulation model. The EnKF is an approximate Kalman filter in which the error-covariance propagation step is modeled by the integration of multiple instances of a numerical model. An estimate of the true error covariances is then inferred from the distribution of the ensemble of model state vectors. This implementation of the filter takes advantage of the inherent parallelism in the EnKF algorithm by running all the model instances concurrently. The Kalman filter update step also occurs in parallel by having each processor handle the observations that occur in the region of physical space for which it is responsible. The massively parallel data assimilation system is validated by withholding some of the data and then quantifying the extent to which the withheld information can be inferred from the assimilation of the remaining data. The distributions of the forecast and analysis error covariances predicted by the EnKF are also examined.
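    The record above describes an ensemble Kalman filter whose error covariances come from an ensemble of concurrently run model instances. The sketch below shows a generic stochastic EnKF analysis step in NumPy, with the forecast covariance estimated from ensemble anomalies; the dimensions, observation operator, and noise levels are illustrative assumptions, not the NSIPP system.

```python
import numpy as np

def enkf_analysis(X, y, H, R, rng):
    """Stochastic EnKF analysis step.
    X : (n_state, n_ens) forecast ensemble
    y : (n_obs,) observation vector
    H : (n_obs, n_state) linear observation operator
    R : (n_obs, n_obs) observation-error covariance
    The ensemble itself supplies the forecast error covariance, which is the
    idea behind propagating many model instances concurrently."""
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)          # ensemble anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)
    P_HT = A @ HA.T / (n_ens - 1)                  # P_f H^T estimated from the ensemble
    S = HA @ HA.T / (n_ens - 1) + R                # innovation covariance
    K = P_HT @ np.linalg.inv(S)                    # Kalman gain
    # perturbed observations: one noisy copy of y per ensemble member
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=n_ens).T
    return X + K @ (Y - HX)

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    n_state, n_obs, n_ens = 10, 3, 50
    X = rng.standard_normal((n_state, n_ens))
    H = np.zeros((n_obs, n_state)); H[[0, 1, 2], [0, 4, 9]] = 1.0
    R = 0.1 * np.eye(n_obs)
    y = rng.standard_normal(n_obs)
    print(enkf_analysis(X, y, H, R, rng).shape)
```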

  10. Next-generation Digital Earth.

    Science.gov (United States)

    Goodchild, Michael F; Guo, Huadong; Annoni, Alessandro; Bian, Ling; de Bie, Kees; Campbell, Frederick; Craglia, Max; Ehlers, Manfred; van Genderen, John; Jackson, Davina; Lewis, Anthony J; Pesaresi, Martino; Remetey-Fülöpp, Gábor; Simpson, Richard; Skidmore, Andrew; Wang, Changlin; Woodgate, Peter

    2012-07-10

    A speech of then-Vice President Al Gore in 1998 created a vision for a Digital Earth, and played a role in stimulating the development of a first generation of virtual globes, typified by Google Earth, that achieved many but not all the elements of this vision. The technical achievements of Google Earth, and the functionality of this first generation of virtual globes, are reviewed against the Gore vision. Meanwhile, developments in technology continue, the era of "big data" has arrived, the general public is more and more engaged with technology through citizen science and crowd-sourcing, and advances have been made in our scientific understanding of the Earth system. However, although Google Earth stimulated progress in communicating the results of science, there continue to be substantial barriers in the public's access to science. All these factors prompt a reexamination of the initial vision of Digital Earth, and a discussion of the major elements that should be part of a next generation.

  11. A Next Generation Sequencing custom gene panel as first line diagnostic tool for atypical cases of syndromic obesity: Application in a case of Alström syndrome.

    Science.gov (United States)

    Maltese, Paolo E; Iarossi, Giancarlo; Ziccardi, Lucia; Colombo, Leonardo; Buzzonetti, Luca; Crinò, Antonino; Tezzele, Silvia; Bertelli, Matteo

    2018-02-01

    Obesity phenotype can be manifested as an isolated trait or accompanied by multisystem disorders as part of a syndromic picture. In both situations, the same molecular pathways may be involved to different degrees. This evidence is stronger in syndromic obesity, in which phenotypes of different syndromes may overlap. In these cases, genetic testing can unequivocally provide a final diagnosis. Here we describe a patient who met the diagnostic criteria for Alström syndrome only during adolescence. Genetic testing was requested at 25 years of age for a final confirmation of the diagnosis. The genetic diagnosis of Alström syndrome was obtained through a Next Generation Sequencing genetic test approach using a custom-designed gene panel of 47 genes associated with syndromic and non-syndromic obesity. Genetic analysis revealed a novel homozygous frameshift variant p.(Arg1550Lysfs*10) on exon 8 of the ALMS1 gene. This case shows the need for a revision of the diagnostic criteria guidelines, as a consequence of the recent advent of massively parallel sequencing technology. Indications for genetic testing reported in the currently accepted diagnostic criteria for Alström syndrome were drafted when sequencing was expensive and time consuming. Nowadays, Next Generation Sequencing testing could be considered a first-line diagnostic tool not only for Alström syndrome but, more generally, for all atypical or not clearly distinguishable cases of syndromic obesity, thus avoiding delayed diagnosis and treatment. Early diagnosis permits better follow-up and pre-symptomatic interventions. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  12. Next-Generation Sequencing of Tubal Intraepithelial Carcinomas.

    Science.gov (United States)

    McDaniel, Andrew S; Stall, Jennifer N; Hovelson, Daniel H; Cani, Andi K; Liu, Chia-Jen; Tomlins, Scott A; Cho, Kathleen R

    2015-11-01

    High-grade serous carcinoma (HGSC) is the most prevalent and lethal form of ovarian cancer. HGSCs frequently arise in the distal fallopian tubes rather than the ovary, developing from small precursor lesions called serous tubal intraepithelial carcinomas (TICs, or more specifically, STICs). While STICs have been reported to harbor TP53 mutations, detailed molecular characterizations of these lesions are lacking. We performed targeted next-generation sequencing (NGS) on formalin-fixed, paraffin-embedded tissue from 4 women, 2 with HGSC and 2 with uterine endometrioid carcinoma (UEC) who were diagnosed as having synchronous STICs. We detected concordant mutations in both HGSCs with synchronous STICs, including TP53 mutations as well as assumed germline BRCA1/2 alterations, confirming a clonal association between these lesions. Next-generation sequencing confirmed the presence of a STIC clonally unrelated to 1 case of UEC, and NGS of the other tubal lesion diagnosed as a STIC unexpectedly supported the lesion as a micrometastasis from the associated UEC. We demonstrate that targeted NGS can identify genetic alterations in minute lesions, such as TICs, and confirm TP53 mutations as early driving events for HGSC. Next-generation sequencing also demonstrated unexpected associations between presumed STICs and synchronous carcinomas, providing evidence that some TICs are actually metastases rather than HGSC precursors.

  13. Detection of arboviruses and other micro-organisms in experimentally infected mosquitoes using massively parallel sequencing.

    Directory of Open Access Journals (Sweden)

    Sonja Hall-Mendelin

    Full Text Available Human disease incidence attributed to arbovirus infection is increasing throughout the world, with effective control interventions limited by issues of sustainability, insecticide resistance and the lack of effective vaccines. Several promising control strategies are currently under development, such as the release of mosquitoes trans-infected with virus-blocking Wolbachia bacteria. Implementation of any control program is dependent on effective virus surveillance and a thorough understanding of virus-vector interactions. Massively parallel sequencing has enormous potential for providing comprehensive genomic information that can be used to assess many aspects of arbovirus ecology, as well as to evaluate novel control strategies. To demonstrate proof-of-principle, we analyzed Aedes aegypti or Aedes albopictus experimentally infected with dengue, yellow fever or chikungunya viruses. Random amplification was used to prepare sufficient template for sequencing on the Personal Genome Machine. Viral sequences were present in all infected mosquitoes. In addition, in most cases, we were also able to identify the mosquito species and mosquito micro-organisms, including the bacterial endosymbiont Wolbachia. Importantly, naturally occurring Wolbachia strains could be differentiated from strains that had been trans-infected into the mosquito. The method allowed us to assemble near full-length viral genomes and detect other micro-organisms without prior sequence knowledge, in a single reaction. This is a step toward the application of massively parallel sequencing as an arbovirus surveillance tool. It has the potential to provide insight into virus transmission dynamics, and has applicability to the post-release monitoring of Wolbachia in mosquito populations.

  14. Optimizing a massive parallel sequencing workflow for quantitative miRNA expression analysis.

    Directory of Open Access Journals (Sweden)

    Francesca Cordero

    Full Text Available BACKGROUND: Massive Parallel Sequencing (MPS) methods can extend and improve the knowledge obtained by conventional microarray technology, both for mRNAs and short non-coding RNAs, e.g. miRNAs. The processing methods used to extract and interpret the information are an important aspect of dealing with the vast amounts of data generated from short read sequencing. Although the number of computational tools for MPS data analysis is constantly growing, their strengths and weaknesses as part of a complex analytical pipeline have not yet been well investigated. PRIMARY FINDINGS: A benchmark MPS miRNA dataset, resembling a situation in which miRNAs are spiked into biological replication experiments, was assembled by merging a publicly available MPS spike-in miRNA data set with MPS data derived from healthy donor peripheral blood mononuclear cells. Using this data set we observed that short read counts are strongly underestimated for duplicated miRNAs if the whole genome is used as the reference. Furthermore, the sensitivity of miRNA detection depends strongly on the primary tool used in the analysis. Of the six aligners tested, specifically devoted to miRNA detection, SHRiMP and MicroRazerS show the highest sensitivity. Differential expression estimation is quite efficient. Of the five tools investigated, two of them (DESeq, baySeq) show very good specificity and sensitivity in the detection of differential expression. CONCLUSIONS: The results provided by our analysis allow the definition of a clear and simple optimized analytical workflow for miRNA digital quantitative analysis.

  15. Optimizing a massive parallel sequencing workflow for quantitative miRNA expression analysis.

    Science.gov (United States)

    Cordero, Francesca; Beccuti, Marco; Arigoni, Maddalena; Donatelli, Susanna; Calogero, Raffaele A

    2012-01-01

    Massive Parallel Sequencing (MPS) methods can extend and improve the knowledge obtained by conventional microarray technology, both for mRNAs and short non-coding RNAs, e.g. miRNAs. The processing methods used to extract and interpret the information are an important aspect of dealing with the vast amounts of data generated from short read sequencing. Although the number of computational tools for MPS data analysis is constantly growing, their strengths and weaknesses as part of a complex analytical pipeline have not yet been well investigated. A benchmark MPS miRNA dataset, resembling a situation in which miRNAs are spiked into biological replication experiments, was assembled by merging a publicly available MPS spike-in miRNA data set with MPS data derived from healthy donor peripheral blood mononuclear cells. Using this data set we observed that short read counts are strongly underestimated for duplicated miRNAs if the whole genome is used as the reference. Furthermore, the sensitivity of miRNA detection depends strongly on the primary tool used in the analysis. Of the six aligners tested, specifically devoted to miRNA detection, SHRiMP and MicroRazerS show the highest sensitivity. Differential expression estimation is quite efficient. Of the five tools investigated, two of them (DESeq, baySeq) show very good specificity and sensitivity in the detection of differential expression. The results provided by our analysis allow the definition of a clear and simple optimized analytical workflow for miRNA digital quantitative analysis.
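
    As a schematic complement to the workflow above, the snippet below shows the shape of the quantification step: tallying reads per miRNA from an alignment table and computing library-size-normalized log2 fold changes between two conditions. The file format and pseudocount are assumptions made only for this sketch; it does not reproduce any of the tools benchmarked in the study.

        import math
        from collections import Counter

        # Tally reads per miRNA from a tab-separated "read_id<TAB>mirna_id" alignment table
        # (a hypothetical format used only for this illustration).
        def count_mirnas(alignment_tsv):
            counts = Counter()
            with open(alignment_tsv) as fh:
                for line in fh:
                    _, mirna = line.rstrip("\n").split("\t")[:2]
                    counts[mirna] += 1
            return counts

        # Library-size-normalized log2 fold change between two conditions, with a pseudocount.
        def log2_fold_changes(counts_a, counts_b, pseudocount=1.0):
            tot_a, tot_b = sum(counts_a.values()), sum(counts_b.values())
            mirnas = set(counts_a) | set(counts_b)
            return {m: math.log2(((counts_a[m] + pseudocount) / tot_a)
                                 / ((counts_b[m] + pseudocount) / tot_b))
                    for m in mirnas}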

  16. Next Generation NASA Initiative for Space Geodesy

    Science.gov (United States)

    Merkowitz, S. M.; Desai, S.; Gross, R. S.; Hilliard, L.; Lemoine, F. G.; Long, J. L.; Ma, C.; McGarry, J. F.; Murphy, D.; Noll, C. E.; et al.

    2012-01-01

    Space geodesy measurement requirements have become more and more stringent as our understanding of the physical processes and our modeling techniques have improved. In addition, current and future spacecraft will have ever-increasing measurement capability and will lead to increasingly sophisticated models of changes in the Earth system. Ground-based space geodesy networks with enhanced measurement capability will be essential to meeting these oncoming requirements and properly interpreting the satellite data. These networks must be globally distributed and built for longevity, to provide the robust data necessary to generate improved models for proper interpretation of the observed geophysical signals. These requirements have been articulated by the Global Geodetic Observing System (GGOS). The NASA Space Geodesy Project (SGP) is developing a prototype core site as the basis for a next generation Space Geodetic Network (SGN) that would be NASA's contribution to a global network designed to produce the higher quality data required to maintain the Terrestrial Reference Frame and provide information essential for fully realizing the measurement potential of the current and coming generation of Earth Observing spacecraft. Each of the sites in the SGN would include co-located, state-of-the-art systems from all four space geodetic observing techniques (GNSS, SLR, VLBI, and DORIS). The prototype core site is being developed at NASA's Geophysical and Astronomical Observatory at Goddard Space Flight Center. The project commenced in 2011 and is scheduled for completion in late 2013. In January 2012, two multiconstellation GNSS receivers, GODS and GODN, were established at the prototype site as part of the local geodetic network. Development and testing are also underway on the next generation SLR and VLBI systems along with a modern DORIS station. An automated survey system is being developed to measure inter-technique vector ties, and network design studies are being

  17. Aptaligner: automated software for aligning pseudorandom DNA X-aptamers from next-generation sequencing data.

    Science.gov (United States)

    Lu, Emily; Elizondo-Riojas, Miguel-Angel; Chang, Jeffrey T; Volk, David E

    2014-06-10

    Next-generation sequencing results from bead-based aptamer libraries have demonstrated that traditional DNA/RNA alignment software is insufficient. This is particularly true for X-aptamers containing specialty bases (W, X, Y, Z, ...) that are identified by special encoding. Thus, we sought an automated program that uses the inherent design scheme of bead-based X-aptamers to create a hypothetical reference library and Markov modeling techniques to provide improved alignments. Aptaligner provides this feature as well as length error and noise level cutoff features, is parallelized to run on multiple central processing units (cores), and sorts sequences from a single chip into projects and subprojects.

  18. Computations on the massively parallel processor at the Goddard Space Flight Center

    Science.gov (United States)

    Strong, James P.

    1991-01-01

    Described are four significant algorithms implemented on the massively parallel processor (MPP) at the Goddard Space Flight Center. Two are in the area of image analysis. Of the other two, one is a mathematical simulation experiment and the other deals with the efficient transfer of data between distantly separated processors in the MPP array. The first algorithm presented is the automatic determination of elevations from stereo pairs. The second algorithm solves mathematical logistic equations capable of producing both ordered and chaotic (or random) solutions. This work can potentially lead to the simulation of artificial life processes. The third algorithm is the automatic segmentation of images into reasonable regions based on some similarity criterion, while the fourth is an implementation of a bitonic sort of data which significantly overcomes the nearest neighbor interconnection constraints on the MPP for transferring data between distant processors.
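
    To make the last of these concrete, the sketch below is a serial expression of the bitonic sorting network on which such data-transfer algorithms are based; on a SIMD array the compare-exchange steps run in lockstep across processors. This is a generic illustration, not the MPP implementation, and it assumes the input length is a power of two.

        # Serial sketch of bitonic sort (illustrative only; the MPP runs the same
        # compare-exchange network in parallel across its processor array).
        def bitonic_sort(seq, ascending=True):
            if len(seq) <= 1:
                return list(seq)
            mid = len(seq) // 2
            first = bitonic_sort(seq[:mid], True)    # ascending half
            second = bitonic_sort(seq[mid:], False)  # descending half -> bitonic sequence
            return _bitonic_merge(first + second, ascending)

        def _bitonic_merge(seq, ascending):
            if len(seq) <= 1:
                return list(seq)
            mid = len(seq) // 2
            seq = list(seq)
            for i in range(mid):                      # compare-exchange step
                if (seq[i] > seq[i + mid]) == ascending:
                    seq[i], seq[i + mid] = seq[i + mid], seq[i]
            return (_bitonic_merge(seq[:mid], ascending) +
                    _bitonic_merge(seq[mid:], ascending))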

  19. An FPGA-Based Massively Parallel Neuromorphic Cortex Simulator.

    Science.gov (United States)

    Wang, Runchun M; Thakur, Chetan S; van Schaik, André

    2018-01-01

    This paper presents a massively parallel and scalable neuromorphic cortex simulator designed for simulating large and structurally connected spiking neural networks, such as complex models of various areas of the cortex. The main novelty of this work is the abstraction of a neuromorphic architecture into clusters represented by minicolumns and hypercolumns, analogously to the fundamental structural units observed in neurobiology. Without this approach, simulating large-scale fully connected networks needs prohibitively large memory to store look-up tables for point-to-point connections. Instead, we use a novel architecture, based on the structural connectivity in the neocortex, such that all the required parameters and connections can be stored in on-chip memory. The cortex simulator can be easily reconfigured for simulating different neural networks without any change in hardware structure by programming the memory. A hierarchical communication scheme allows one neuron to have a fan-out of up to 200 k neurons. As a proof-of-concept, an implementation on one Altera Stratix V FPGA was able to simulate 20 million to 2.6 billion leaky-integrate-and-fire (LIF) neurons in real time. We verified the system by emulating a simplified auditory cortex (with 100 million neurons). This cortex simulator achieved a low power dissipation of 1.62 μW per neuron. With the advent of commercially available FPGA boards, our system offers an accessible and scalable tool for the design, real-time simulation, and analysis of large-scale spiking neural networks.
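
    For reference, the leaky-integrate-and-fire update that such a simulator evaluates for every neuron at every time step can be written in a few lines. The sketch below is a software illustration with assumed parameter values; the FPGA design evaluates the same recurrence in hardware, with connectivity held in on-chip memory.

        import numpy as np

        # Vectorized leaky-integrate-and-fire (LIF) update (software sketch only;
        # all parameter values are assumptions for illustration).
        def lif_step(v, i_syn, dt=1e-3, tau=20e-3, v_rest=-65e-3,
                     v_thresh=-50e-3, v_reset=-65e-3, r_m=1e8):
            v = v + dt / tau * (-(v - v_rest) + r_m * i_syn)  # leaky integration
            spiked = v >= v_thresh                            # threshold crossing
            v = np.where(spiked, v_reset, v)                  # reset spiking neurons
            return v, spiked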

  20. An FPGA-Based Massively Parallel Neuromorphic Cortex Simulator

    Directory of Open Access Journals (Sweden)

    Runchun M. Wang

    2018-04-01

    Full Text Available This paper presents a massively parallel and scalable neuromorphic cortex simulator designed for simulating large and structurally connected spiking neural networks, such as complex models of various areas of the cortex. The main novelty of this work is the abstraction of a neuromorphic architecture into clusters represented by minicolumns and hypercolumns, analogously to the fundamental structural units observed in neurobiology. Without this approach, simulating large-scale fully connected networks needs prohibitively large memory to store look-up tables for point-to-point connections. Instead, we use a novel architecture, based on the structural connectivity in the neocortex, such that all the required parameters and connections can be stored in on-chip memory. The cortex simulator can be easily reconfigured for simulating different neural networks without any change in hardware structure by programming the memory. A hierarchical communication scheme allows one neuron to have a fan-out of up to 200 k neurons. As a proof-of-concept, an implementation on one Altera Stratix V FPGA was able to simulate 20 million to 2.6 billion leaky-integrate-and-fire (LIF) neurons in real time. We verified the system by emulating a simplified auditory cortex (with 100 million neurons). This cortex simulator achieved a low power dissipation of 1.62 μW per neuron. With the advent of commercially available FPGA boards, our system offers an accessible and scalable tool for the design, real-time simulation, and analysis of large-scale spiking neural networks.

  1. Next-generation fiber lasers enabled by high-performance components

    Science.gov (United States)

    Kliner, D. A. V.; Victor, B.; Rivera, C.; Fanning, G.; Balsley, D.; Farrow, R. L.; Kennedy, K.; Hampton, S.; Hawke, R.; Soukup, E.; Reynolds, M.; Hodges, A.; Emery, J.; Brown, A.; Almonte, K.; Nelson, M.; Foley, B.; Dawson, D.; Hemenway, D. M.; Urbanek, W.; DeVito, M.; Bao, L.; Koponen, J.; Gross, K.

    2018-02-01

    Next-generation industrial fiber lasers enable challenging applications that cannot be addressed with legacy fiber lasers. Key features of next-generation fiber lasers include robust back-reflection protection, high power stability, wide power tunability, high-speed modulation and waveform generation, and facile field serviceability. These capabilities are enabled by high-performance components, particularly pump diodes and optical fibers, and by advanced fiber laser designs. We summarize the performance and reliability of nLIGHT diodes, fibers, and next-generation industrial fiber lasers at power levels of 500 W - 8 kW. We show back-reflection studies with up to 1 kW of back-reflected power, power-stability measurements in cw and modulated operation exhibiting sub-1% stability over a 5 - 100% power range, and high-speed modulation (100 kHz) and waveform generation with a bandwidth 20x higher than standard fiber lasers. We show results from representative applications, including cutting and welding of highly reflective metals (Cu and Al) for production of Li-ion battery modules and processing of carbon fiber reinforced polymers.

  2. Application of Raptor-M3G to reactor dosimetry problems on massively parallel architectures - 026

    International Nuclear Information System (INIS)

    Longoni, G.

    2010-01-01

    The solution of complex 3-D radiation transport problems requires significant resources both in terms of computation time and memory availability. Therefore, parallel algorithms and multi-processor architectures are required to solve large 3-D radiation transport problems efficiently. This paper presents the application of RAPTOR-M3G (Rapid Parallel Transport Of Radiation - Multiple 3D Geometries) to reactor dosimetry problems. RAPTOR-M3G is a newly developed parallel computer code designed to solve the discrete ordinates (SN) equations on multi-processor computer architectures. This paper presents the results for a reactor dosimetry problem using a 3-D model of a commercial 2-loop pressurized water reactor (PWR). The accuracy and performance of RAPTOR-M3G will be analyzed and the numerical results obtained from the calculation will be compared directly to measurements of the neutron field in the reactor cavity air gap. The parallel performance of RAPTOR-M3G on massively parallel architectures, where the number of computing nodes is on the order of hundreds, will be analyzed with up to four hundred processors. The performance results will be presented based on two supercomputing architectures: the Pople supercomputer operated by the Pittsburgh Supercomputing Center and the Westinghouse computer cluster. The Westinghouse computer cluster is equipped with a standard Ethernet network connection and an InfiniBand interconnect capable of a bandwidth in excess of 20 Gbit/s. Therefore, the impact of the network architecture on RAPTOR-M3G performance will be analyzed as well. (authors)

  3. Running ATLAS workloads within massively parallel distributed applications using Athena Multi-Process framework (AthenaMP)

    CERN Document Server

    Calafiura, Paolo; The ATLAS collaboration; Seuster, Rolf; Tsulaia, Vakhtang; van Gemmeren, Peter

    2015-01-01

    AthenaMP is a multi-process version of the ATLAS reconstruction and data analysis framework Athena. By leveraging Linux fork and copy-on-write, it allows the sharing of memory pages between event processors running on the same compute node with little to no change in the application code. Originally targeted to optimize the memory footprint of reconstruction jobs, AthenaMP has demonstrated that it can reduce the memory usage of certain configurations of ATLAS production jobs by a factor of 2. AthenaMP has also evolved to become the parallel event-processing core of the recently developed ATLAS infrastructure for fine-grained event processing (Event Service), which makes it possible to run AthenaMP inside massively parallel distributed applications on hundreds of compute nodes simultaneously. We present the architecture of AthenaMP, various strategies implemented by AthenaMP for scheduling workload to worker processes (for example: Shared Event Queue and Shared Distributor of Event Tokens) and the usage of AthenaMP in the...

  4. Running ATLAS workloads within massively parallel distributed applications using Athena Multi-Process framework (AthenaMP)

    CERN Document Server

    Calafiura, Paolo; Seuster, Rolf; Tsulaia, Vakhtang; van Gemmeren, Peter

    2015-01-01

    AthenaMP is a multi-process version of the ATLAS reconstruction, simulation and data analysis framework Athena. By leveraging Linux fork and copy-on-write, it allows for sharing of memory pages between event processors running on the same compute node with little to no change in the application code. Originally targeted to optimize the memory footprint of reconstruction jobs, AthenaMP has demonstrated that it can reduce the memory usage of certain configurations of ATLAS production jobs by a factor of 2. AthenaMP has also evolved to become the parallel event-processing core of the recently developed ATLAS infrastructure for fine-grained event processing (Event Service) which allows to run AthenaMP inside massively parallel distributed applications on hundreds of compute nodes simultaneously. We present the architecture of AthenaMP, various strategies implemented by AthenaMP for scheduling workload to worker processes (for example: Shared Event Queue and Shared Distributor of Event Tokens) and the usage of Ath...

  5. Accelerating next generation sequencing data analysis with system level optimizations.

    Science.gov (United States)

    Kathiresan, Nagarajan; Temanni, Ramzi; Almabrazi, Hakeem; Syed, Najeeb; Jithesh, Puthen V; Al-Ali, Rashid

    2017-08-22

    Next generation sequencing (NGS) data analysis is highly compute intensive. In-memory computing, vectorization, bulk data transfer, and CPU frequency scaling are some of the hardware features in modern computing architectures. To get the best execution time and utilize these hardware features, it is necessary to tune the system-level parameters before running the application. We studied GATK HaplotypeCaller, which is part of common NGS workflows and consumes more than 43% of the total execution time. Multiple GATK 3.x versions were benchmarked and the execution time of HaplotypeCaller was optimized through various system-level parameters, which included: (i) tuning the parallel garbage collection and kernel shared memory to simulate in-memory computing, (ii) architecture-specific tuning in the PairHMM library for vectorization, (iii) including Java 1.8 features through GATK source code compilation and building a runtime environment for parallel sorting and bulk data transfer, and (iv) switching CPU frequency scaling from the default 'on-demand' mode to 'performance' mode to accelerate the Java multi-threading. As a result, the HaplotypeCaller execution time was reduced by 82.66% in GATK 3.3 and 42.61% in GATK 3.7. Overall, the execution time of the NGS pipeline was reduced to 70.60% and 34.14% for GATK 3.3 and GATK 3.7, respectively.
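
    The flavor of the system-level tuning described above can be illustrated by how the tool is launched. The sketch below assumes a typical GATK 3.x GenomeAnalysisTK.jar invocation; the heap size, garbage-collection thread count, and file paths are placeholders and do not reproduce the exact settings used in the study.

        import subprocess

        # Illustrative launch of GATK 3.x HaplotypeCaller with JVM parallel-GC tuning of the
        # kind described above. Paths and numeric values are placeholders.
        cmd = [
            "java",
            "-Xmx32g",                      # large heap to keep data in memory
            "-XX:+UseParallelGC",           # parallel garbage collection
            "-XX:ParallelGCThreads=16",     # GC thread count tuned to the node
            "-jar", "GenomeAnalysisTK.jar",
            "-T", "HaplotypeCaller",
            "-R", "reference.fasta",
            "-I", "sample.bam",
            "-o", "sample.vcf",
        ]
        subprocess.run(cmd, check=True)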

  6. Is Monte Carlo embarrassingly parallel?

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Delft Nuclear Consultancy, IJsselzoom 2, 2902 LB Capelle aan den IJssel (Netherlands)

    2012-07-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results, but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Also other time losses in the parallel calculation are identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
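
    The cycle-end rendezvous described above can be made concrete with a toy mpi4py loop: every rank simulates its own share of histories, but no rank can start the next cycle until a global reduction has combined the fission-source counts and the multiplication-factor estimate. The history model below is a stand-in, not a transport calculation.

        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rng = np.random.default_rng(comm.Get_rank())

        histories_per_rank = 10_000
        for cycle in range(10):
            # Each rank simulates its share of histories (stand-in for real transport).
            local_fissions = rng.poisson(lam=1.0, size=histories_per_rank)
            local_sites = int(local_fissions.sum())

            # Rendezvous: global reduction to form k_eff and the next cycle's source size.
            total_sites = comm.allreduce(local_sites, op=MPI.SUM)
            k_eff = total_sites / (histories_per_rank * comm.Get_size())
            if comm.Get_rank() == 0:
                print(f"cycle {cycle}: k_eff estimate {k_eff:.4f}")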

  7. Is Monte Carlo embarrassingly parallel?

    International Nuclear Information System (INIS)

    Hoogenboom, J. E.

    2012-01-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results, but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Also other time losses in the parallel calculation are identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)

  8. Integration of microbiological, epidemiological and next generation sequencing technologies data for the managing of nosocomial infections

    Directory of Open Access Journals (Sweden)

    Matteo Brilli

    2018-02-01

    Full Text Available At its core, the work of clinical microbiologists consists in the retrieving of a few bytes of information (species identification; metabolic capacities; staining and antigenic properties; antibiotic resistance profiles, etc.) from pathogenic agents. The development of next generation sequencing technologies (NGS), and the possibility to determine the entire genome of bacterial pathogens, fungi and protozoans, will likely introduce a breakthrough in the amount of information generated by clinical microbiology laboratories: from bytes to Megabytes of information for a single isolate. In parallel, the development of novel informatics tools, designed for the management and analysis of the so-called Big Data, offers the possibility to search for patterns in databases collecting genomic and microbiological information on the pathogens, as well as epidemiological data and information on the clinical parameters of the patients. Nosocomial infections and antibiotic resistance will likely represent major challenges for clinical microbiologists in the next decades. In this paper, we describe how bacterial genomics based on NGS, integrated with novel informatics tools, could contribute to the control of hospital infections and multi-drug resistant pathogens.

  9. User's Guide for TOUGH2-MP - A Massively Parallel Version of the TOUGH2 Code

    International Nuclear Information System (INIS)

    Earth Sciences Division; Zhang, Keni; Zhang, Keni; Wu, Yu-Shu; Pruess, Karsten

    2008-01-01

    TOUGH2-MP is a massively parallel (MP) version of the TOUGH2 code, designed for computationally efficient parallel simulation of isothermal and nonisothermal flows of multicomponent, multiphase fluids in one, two, and three-dimensional porous and fractured media. In recent years, computational requirements have become increasingly intensive in large or highly nonlinear problems for applications in areas such as radioactive waste disposal, CO2 geological sequestration, environmental assessment and remediation, reservoir engineering, and groundwater hydrology. The primary objective of developing the parallel-simulation capability is to significantly improve the computational performance of the TOUGH2 family of codes. The particular goal for the parallel simulator is to achieve orders-of-magnitude improvement in computational time for models with ever-increasing complexity. TOUGH2-MP is designed to perform parallel simulation on multi-CPU computational platforms. An earlier version of TOUGH2-MP (V1.0) was based on the TOUGH2 Version 1.4 with EOS3, EOS9, and T2R3D modules, a software previously qualified for applications in the Yucca Mountain project, and was designed for execution on CRAY T3E and IBM SP supercomputers. The current version of TOUGH2-MP (V2.0) includes all fluid property modules of the standard version TOUGH2 V2.0. It provides computationally efficient capabilities using supercomputers, Linux clusters, or multi-core PCs, and also offers many user-friendly features. The parallel simulator inherits all process capabilities from V2.0 together with additional capabilities for handling fractured media from V1.4. This report provides a quick starting guide on how to set up and run the TOUGH2-MP program for users with a basic knowledge of running the (standard) version TOUGH2 code. The report also gives a brief technical description of the code, including a discussion of parallel methodology, code structure, as well as mathematical and numerical methods used

  10. AugerNext: innovative research studies for the next generation ground-based ultra-high energy cosmic ray experiment

    Directory of Open Access Journals (Sweden)

    Haungs Andreas

    2013-06-01

    Full Text Available The findings so far of the Pierre Auger Observatory and also of the Telescope Array define the requirements for a possible next generation experiment: it needs to be considerably increased in size, it needs a better sensitivity to composition, and it should cover the full sky. AugerNext aims to perform innovative research studies in order to prepare a proposal fulfilling these demands. Such R&D studies are primarily focused on the following areas: (i) consolidation of the detection of cosmic rays using MHz radio antennas; (ii) proof-of-principle of cosmic-ray microwave detection; (iii) test of the large-scale application of a new generation of photo-sensors; (iv) generalization of data communication techniques; (v) development of new ways of muon detection with surface arrays. These AugerNext studies on new innovative detection methods for a next generation cosmic-ray experiment are performed at the Pierre Auger Observatory. The AugerNext consortium presently consists of fourteen partner institutions from nine European countries supported by a network of European funding agencies, and it is a principal element of the ASPERA/ApPEC strategic roadmaps.

  11. New materials for next-generation commercial transports

    National Research Council Canada - National Science Library

    Committee on New Materials for Advanced Civil Aircraft, Commission on Engineering and Technical Systems, National Research Council

    ... civil aircraft throughout their service life. The committee investigated the new materials and structural concepts that are likely to be incorporated into next generation commercial aircraft and the factors influencing application decisions...

  12. Preparation of next generation set of group cross sections. 3

    International Nuclear Information System (INIS)

    Kaneko, Kunio

    2002-03-01

    This fiscal year, based on the examination results concerning the evaluation energy range of heavy-element unresolved resonance cross sections, the upper energy limit of the range where ultra-fine-group cross sections are produced was raised to 50 keV, and an improvement of the group cross section processing system was promoted. At the same time, reflecting the results of studies carried out so far, a function producing delayed neutron data was added to the general-purpose group cross section processing system; thus the preparation of the general-purpose group cross section processing system has been completed. On the other hand, the energy structure, data constitution and data contents of the next generation group cross section set were determined, and the specification of a 151-group next generation group cross section set was defined. Based on the above specification, a concrete library format of the next generation cross section set has been determined. After having carried out the above-described work, using the general-purpose group cross section processing system completed in this study together with the JENDL-3.2 evaluated nuclear data, the 151-group next generation group cross sections for 92 nuclides and the ultra-fine-group resonance cross section library for 29 nuclides have been prepared. Using the 151-group next generation group cross section set and the ultra-fine-group resonance cross section library, a benchmark test calculation of fast reactors has been performed with an advanced lattice calculation code. It was confirmed, by comparing the result with that of a continuous-energy Monte Carlo code, that the 151-group next generation cross section set has sufficient accuracy. (author)

  13. Cloud identification using genetic algorithms and massively parallel computation

    Science.gov (United States)

    Buckles, Bill P.; Petry, Frederick E.

    1996-01-01

    As a Guest Computational Investigator under the NASA administered component of the High Performance Computing and Communication Program, we implemented a massively parallel genetic algorithm on the MasPar SIMD computer. Experiments were conducted using Earth Science data in the domains of meteorology and oceanography. Results obtained in these domains are competitive with, and in most cases better than, similar problems solved using other methods. In the meteorological domain, we chose to identify clouds using AVHRR spectral data. Four cloud speciations were used, although most researchers settle for three. Results were remarkably consistent across all tests (91% accuracy). Refinements of this method may lead to more timely and complete information for Global Circulation Models (GCMs) that are prevalent in weather forecasting and global environment studies. In the oceanographic domain, we chose to identify ocean currents from a spectrometer having similar characteristics to AVHRR. Here the results were mixed (60% to 80% accuracy). If one is willing to run the experiment several times (say 10), it is acceptable to claim the higher accuracy rating. This problem has never been successfully automated. Therefore, these results are encouraging even though less impressive than the cloud experiment. Successful conclusion of an automated ocean current detection system would impact coastal fishing, naval tactics, and the study of micro-climates. Finally we contributed to the basic knowledge of GA (genetic algorithm) behavior in parallel environments. We developed better knowledge of the use of subpopulations in the context of shared breeding pools and the migration of individuals. Rigorous experiments were conducted based on quantifiable performance criteria. While much of the work confirmed current wisdom, for the first time we were able to submit conclusive evidence. The software developed under this grant was placed in the public domain. An extensive user
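
    The subpopulation-and-migration idea mentioned above is the classic island model. The sketch below is a generic serial illustration with a toy one-max fitness function, not the AVHRR cloud classifier; population sizes, migration interval, and operators are arbitrary choices for illustration.

        import random

        # Minimal island-model genetic algorithm (toy fitness: count of 1-bits).
        def evolve_islands(n_islands=4, pop_size=20, genome_len=32,
                           generations=50, migrate_every=10):
            def fitness(g):
                return sum(g)
            islands = [[[random.randint(0, 1) for _ in range(genome_len)]
                        for _ in range(pop_size)] for _ in range(n_islands)]
            for gen in range(generations):
                for pop in islands:
                    pop.sort(key=fitness, reverse=True)
                    parents = pop[: pop_size // 2]            # truncation selection
                    children = []
                    while len(children) < pop_size - len(parents):
                        a, b = random.sample(parents, 2)
                        cut = random.randrange(1, genome_len)  # one-point crossover
                        child = a[:cut] + b[cut:]
                        i = random.randrange(genome_len)       # point mutation
                        child[i] ^= 1
                        children.append(child)
                    pop[:] = parents + children
                if gen % migrate_every == 0:                   # ring migration of best individuals
                    best = [max(pop, key=fitness) for pop in islands]
                    for i, pop in enumerate(islands):
                        pop[-1] = best[(i - 1) % n_islands]
            return max((max(pop, key=fitness) for pop in islands), key=fitness)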

  14. Parallel computing works

    Energy Technology Data Exchange (ETDEWEB)

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C³P), a five-year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  15. Transitioning NWChem to the Next Generation of Manycore Machines

    Energy Technology Data Exchange (ETDEWEB)

    Bylaska, Eric J. [Pacific Northwest National Laboratory (PNNL); Apra, E [Pacific Northwest National Laboratory (PNNL); Kowalski, Karol [Pacific Northwest National Laboratory (PNNL); Jacquelin, Mathias [Lawrence Berkeley National Laboratory (LBNL); De Jong, Bert [Lawrence Berkeley National Laboratory (LBNL); Vishnu, Abhinav [ORNL; Palmer, Bruce [Lawrence Berkeley National Laboratory (LBNL); Daily, Jeff [Lawrence Berkeley National Laboratory (LBNL); Straatsma, T.P. [ORNL; Hammond, Jeffrey R. [ORNL; Klemm, Michael [Intel Corporation

    2017-11-01

    The NorthWest chemistry (NWChem) modeling software is a popular molecular chemistry simulation software that was designed from the start to work on massively parallel processing supercomputers [1-3]. It contains an umbrella of modules that today includes self-consistent field (SCF), second order Møller-Plesset perturbation theory (MP2), coupled cluster (CC), multiconfiguration self-consistent field (MCSCF), selected configuration interaction (CI), tensor contraction engine (TCE) many body methods, density functional theory (DFT), time-dependent density functional theory (TDDFT), real-time time-dependent density functional theory, pseudopotential plane-wave density functional theory (PSPW), band structure (BAND), ab initio molecular dynamics (AIMD), Car-Parrinello molecular dynamics (MD), classical MD, hybrid quantum mechanics molecular mechanics (QM/MM), hybrid ab initio molecular dynamics molecular mechanics (AIMD/MM), gauge independent atomic orbital nuclear magnetic resonance (GIAO NMR), conductor like screening solvation model (COSMO), conductor-like screening solvation model based on density (COSMO-SMD), and reference interaction site model (RISM) solvation models, free energy simulations, reaction path optimization, parallel in time, among other capabilities [4]. Moreover, new capabilities continue to be added with each new release.

  16. Space-charge-dominated beam dynamics simulations using the massively parallel processors (MPPs) of the Cray T3D

    International Nuclear Information System (INIS)

    Liu, H.

    1996-01-01

    Computer simulations using the multi-particle code PARMELA with a three-dimensional point-by-point space charge algorithm have turned out to be very helpful in supporting injector commissioning and operations at Thomas Jefferson National Accelerator Facility (Jefferson Lab, formerly called CEBAF). However, this algorithm, which defines a typical N² problem in CPU time scaling, is very time-consuming when N, the number of macro-particles, is large. Therefore, it is attractive to use massively parallel processors (MPPs) to speed up the simulations. Motivated by this, the authors modified the space charge subroutine for using the MPPs of the Cray T3D. The techniques used to parallelize and optimize the code on the T3D are discussed in this paper. The performance of the code on the T3D is examined in comparison with a Cray C90 parallel vector processing supercomputer and an HP 735/15 high-end workstation
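
    The N² scaling comes from the all-pairs interaction itself, which the sketch below spells out for a softened Coulomb-like force between macro-particles. Constants, units, and the softening length are schematic, and this is not the PARMELA subroutine; the point is that the pairwise loop is what gets distributed across the T3D processing elements.

        import numpy as np

        # Toy point-to-point space-charge kernel with O(N^2) cost.
        def space_charge_forces(positions, charge=1.0, eps=1e-6):
            # positions: (N, 3) macro-particle coordinates
            diff = positions[:, None, :] - positions[None, :, :]        # (N, N, 3) pairwise separations
            dist3 = (np.sum(diff**2, axis=-1) + eps**2) ** 1.5          # softened cube of distance
            np.fill_diagonal(dist3, np.inf)                             # exclude self-interaction
            return charge**2 * np.sum(diff / dist3[..., None], axis=1)  # (N, 3) net force per particle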

  17. Practical parallel computing

    CERN Document Server

    Morse, H Stephen

    1994-01-01

    Practical Parallel Computing provides information pertinent to the fundamental aspects of high-performance parallel processing. This book discusses the development of parallel applications on a variety of equipment.Organized into three parts encompassing 12 chapters, this book begins with an overview of the technology trends that converge to favor massively parallel hardware over traditional mainframes and vector machines. This text then gives a tutorial introduction to parallel hardware architectures. Other chapters provide worked-out examples of programs using several parallel languages. Thi

  18. Implementing the Next Generation Science Standards

    Science.gov (United States)

    Penuel, William R.; Harris, Christopher J.; DeBarger, Angela Haydel

    2015-01-01

    The Next Generation Science Standards embody a new vision for science education grounded in the idea that science is both a body of knowledge and a set of linked practices for developing knowledge. The authors describe strategies that they suggest school and district leaders consider when designing strategies to support NGSS implementation.

  19. OpenCL Implementation of a Parallel Universal Kriging Algorithm for Massive Spatial Data Interpolation on Heterogeneous Systems

    Directory of Open Access Journals (Sweden)

    Fang Huang

    2016-06-01

    Full Text Available In some digital Earth engineering applications, spatial interpolation algorithms are required to process and analyze large amounts of data. Due to its powerful computing capacity, heterogeneous computing has been used in many applications for data processing in various fields. In this study, we explore the design and implementation of a parallel universal kriging spatial interpolation algorithm using the OpenCL programming model on heterogeneous computing platforms for massive geospatial data processing. This study focuses primarily on transforming the hotspots in serial algorithms, i.e., the universal kriging interpolation function, into the corresponding kernel function in OpenCL. We also employ parallelization and optimization techniques in our implementation to improve the code performance. Finally, based on the results of experiments performed on two different high performance heterogeneous platforms, i.e., an NVIDIA graphics processing unit system and an Intel Xeon Phi system (MIC), we show that the parallel universal kriging algorithm can achieve the highest speedup of up to 40× with a single computing device and the highest speedup of up to 80× with multiple devices.
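
    For orientation, the serial hotspot referred to above amounts to solving an augmented linear system per prediction location. The sketch below is a generic universal-kriging solve with a Gaussian covariance and a linear drift, written with NumPy for clarity; it is not the paper's OpenCL kernel, and the covariance model and parameters are assumptions.

        import numpy as np

        # Generic universal kriging: covariance block plus drift (unbiasedness) constraints.
        def universal_kriging(xy, values, xy_new, range_=1.0, sill=1.0):
            def cov(a, b):
                d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
                return sill * np.exp(-(d / range_) ** 2)

            def drift(p):                                   # linear drift basis: [1, x, y]
                return np.column_stack([np.ones(len(p)), p])

            n, F = len(xy), drift(xy)
            K = cov(xy, xy)
            A = np.block([[K, F], [F.T, np.zeros((F.shape[1], F.shape[1]))]])
            rhs = np.vstack([cov(xy, xy_new), drift(xy_new).T])
            weights = np.linalg.solve(A, rhs)[:n]           # kriging weights per new location
            return weights.T @ values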

  20. Next-Generation Sequencing Platforms

    Science.gov (United States)

    Mardis, Elaine R.

    2013-06-01

    Automated DNA sequencing instruments embody an elegant interplay among chemistry, engineering, software, and molecular biology and have built upon Sanger's founding discovery of dideoxynucleotide sequencing to perform once-unfathomable tasks. Combined with innovative physical mapping approaches that helped to establish long-range relationships between cloned stretches of genomic DNA, fluorescent DNA sequencers produced reference genome sequences for model organisms and for the reference human genome. New types of sequencing instruments that permit amazing acceleration of data-collection rates for DNA sequencing have been developed. The ability to generate genome-scale data sets is now transforming the nature of biological inquiry. Here, I provide an historical perspective of the field, focusing on the fundamental developments that predated the advent of next-generation sequencing instruments and providing information about how these instruments work, their application to biological research, and the newest types of sequencers that can extract data from single DNA molecules.

  1. Unbundling in Current Broadband and Next-Generation Ultra-Broadband Access Networks

    Science.gov (United States)

    Gaudino, Roberto; Giuliano, Romeo; Mazzenga, Franco; Valcarenghi, Luca; Vatalaro, Francesco

    2014-05-01

    This article overviews the methods that are currently under investigation for implementing multi-operator open-access/shared-access techniques in next-generation access ultra-broadband architectures, starting from the traditional "unbundling-of-the-local-loop" techniques implemented in legacy twisted-pair digital subscriber line access networks. A straightforward replication of these copper-based unbundling-of-the-local-loop techniques is usually not feasible on next-generation access networks, including fiber-to-the-home point-to-multipoint passive optical networks. To investigate this issue, the article first gives a concise description of traditional copper-based unbundling-of-the-local-loop solutions, then focuses on both next-generation access hybrid fiber-copper digital subscriber line fiber-to-the-cabinet scenarios and on fiber to the home, by accounting for the mix of regulatory and technological reasons driving the next-generation access migration path, focusing mostly on the European situation.

  2. Two-phase flow steam generator simulations on parallel computers using domain decomposition method

    International Nuclear Information System (INIS)

    Belliard, M.

    2003-01-01

    Within the framework of the Domain Decomposition Method (DDM), we present industrial steady state two-phase flow simulations of PWR Steam Generators (SG) using iteration-by-sub-domain methods: standard and Adaptive Dirichlet/Neumann methods (ADN). The averaged mixture balance equations are solved by a Fractional-Step algorithm, jointly with the Crank-Nicolson scheme and the Finite Element Method. The algorithm works with overlapping or non-overlapping sub-domains and with conforming or nonconforming meshing. Computations are run on PC networks or on massively parallel mainframe computers. A CEA code-linker and the PVM package are used (master-slave context). SG mock-up simulations, involving up to 32 sub-domains, highlight the efficiency (speed-up, scalability) and the robustness of the chosen approach. With the DDM, the computational problem size is easily increased to about 1,000,000 cells and the CPU time is significantly reduced. The difficulties related to industrial use are also discussed. (author)

  3. Mobile e-Learning for Next Generation Communication Environment

    Science.gov (United States)

    Wu, Tin-Yu; Chao, Han-Chieh

    2008-01-01

    This article develops an environment for mobile e-learning that includes an interactive course, virtual online labs, an interactive online test, and lab-exercise training platform on the fourth generation mobile communication system. The Next Generation Learning Environment (NeGL) promotes the term "knowledge economy." Inter-networking…

  4. Parallel Polarization State Generation.

    Science.gov (United States)

    She, Alan; Capasso, Federico

    2016-05-17

    The control of polarization, an essential property of light, is of wide scientific and technological interest. The general problem of generating arbitrary time-varying states of polarization (SOP) has always been mathematically formulated by a series of linear transformations, i.e. a product of matrices, imposing a serial architecture. Here we show a parallel architecture described by a sum of matrices. The theory is experimentally demonstrated by modulating spatially separated polarization components of a laser with a digital micromirror device and subsequently beam combining them. This method greatly expands the parameter space for engineering devices that control polarization. Consequently, performance characteristics, such as speed, stability, and spectral range, are entirely dictated by the technologies of optical intensity modulation, including absorption, reflection, emission, and scattering. This opens up important prospects for polarization state generation (PSG) with unique performance characteristics with applications in spectroscopic ellipsometry, spectropolarimetry, communications, imaging, and security.
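
    A toy numerical reading of the "sum of matrices" idea: instead of passing one beam through a cascade of elements, independently intensity-modulated components with fixed polarizations are combined. The Jones vectors and weights below are arbitrary examples, not the device's actual components.

        import numpy as np

        # Jones vectors of the spatially separated, fixed-polarization paths (examples only).
        horizontal = np.array([1.0, 0.0], dtype=complex)
        diagonal = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
        circular = np.array([1.0, 1j], dtype=complex) / np.sqrt(2)

        def combined_state(weights, components=(horizontal, diagonal, circular)):
            field = sum(w * c for w, c in zip(weights, components))  # coherent superposition
            return field / np.linalg.norm(field)                     # normalized output SOP

        print(combined_state([0.5, 0.3, 0.2]))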

  5. The Secret Life of Exosomes: What Bees Can Teach Us About Next-Generation Therapeutics.

    Science.gov (United States)

    Marbán, Eduardo

    2018-01-16

    Mechanistic exploration has pinpointed nanosized extracellular vesicles, known as exosomes, as key mediators of the benefits of cell therapy. Exosomes appear to recapitulate the benefits of cells and more. As durable azoic entities, exosomes have numerous practical and conceptual advantages over cells. Will cells end up just being used to manufacture exosomes, or will they find lasting value as primary therapeutic agents? Here, a venerable natural process-the generation of honey-serves as an instructive parable. Flowers make nectar, which bees collect and process into honey. Cells make conditioned medium, which laboratory workers collect and process into exosomes. Unlike flowers, honey is durable, compact, and nutritious, but these facts do not negate the value of flowers themselves. The parallels suggest new ways of thinking about next-generation therapeutics. Copyright © 2018 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  6. A Window Into Clinical Next-Generation Sequencing-Based Oncology Testing Practices.

    Science.gov (United States)

    Nagarajan, Rakesh; Bartley, Angela N; Bridge, Julia A; Jennings, Lawrence J; Kamel-Reid, Suzanne; Kim, Annette; Lazar, Alexander J; Lindeman, Neal I; Moncur, Joel; Rai, Alex J; Routbort, Mark J; Vasalos, Patricia; Merker, Jason D

    2017-12-01

    Detection of acquired variants in cancer is a paradigm of precision medicine, yet little has been reported about clinical laboratory practices across a broad range of laboratories. We used College of American Pathologists proficiency testing survey results to report on next-generation sequencing-based oncology testing practices. Survey results from more than 250 laboratories currently performing molecular oncology testing were used to determine laboratory trends in next-generation sequencing-based oncology testing. These data provide key information about the number of laboratories that currently offer or are planning to offer next-generation sequencing-based oncology testing. Furthermore, we present data from 60 laboratories performing next-generation sequencing-based oncology testing regarding specimen requirements and assay characteristics. The findings indicate that most laboratories are performing tumor-only targeted sequencing to detect single-nucleotide variants and small insertions and deletions, using desktop sequencers and predesigned commercial kits. Despite these trends, a diversity of approaches to testing exists. This information should be useful to further inform a variety of topics, including national discussions involving clinical laboratory quality systems, regulation and oversight of next-generation sequencing-based oncology testing, and precision oncology efforts in a data-driven manner.

  7. A third-generation density-functional-theory-based method for calculating canonical molecular orbitals of large molecules.

    Science.gov (United States)

    Hirano, Toshiyuki; Sato, Fumitoshi

    2014-07-28

    We used grid-free modified Cholesky decomposition (CD) to develop a density-functional-theory (DFT)-based method for calculating the canonical molecular orbitals (CMOs) of large molecules. Our method can be used to calculate standard CMOs, analytically compute exchange-correlation terms, and maximise the capacity of next-generation supercomputers. Cholesky vectors were first analytically downscaled using low-rank pivoted CD and CD with adaptive metric (CDAM). The obtained Cholesky vectors were distributed and stored on each computer node in a parallel computer, and the Coulomb, Fock exchange, and pure exchange-correlation terms were calculated by multiplying the Cholesky vectors without evaluating molecular integrals in self-consistent field iterations. Our method enables DFT and massively distributed memory parallel computers to be used in order to very efficiently calculate the CMOs of large molecules.
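
    The downscaling step referred to above relies on the standard low-rank pivoted Cholesky factorization, of which the following is a generic NumPy sketch (not the authors' code); the tolerance and rank cap are placeholders.

        import numpy as np

        # Low-rank pivoted Cholesky of a symmetric positive semidefinite matrix M, so M ≈ L @ L.T.
        # Stops when the largest remaining diagonal element falls below `tol`.
        def pivoted_cholesky(M, tol=1e-8, max_rank=None):
            M = np.asarray(M, dtype=float)
            n = M.shape[0]
            max_rank = max_rank or n
            L = np.zeros((n, max_rank))
            diag = np.diag(M).copy()                 # residual diagonal
            for k in range(max_rank):
                p = int(np.argmax(diag))             # pivot: largest remaining diagonal entry
                if diag[p] < tol:
                    return L[:, :k]
                L[:, k] = (M[:, p] - L[:, :k] @ L[p, :k]) / np.sqrt(diag[p])
                diag -= L[:, k] ** 2
            return L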

  8. Next-generation mammalian genetics toward organism-level systems biology.

    Science.gov (United States)

    Susaki, Etsuo A; Ukai, Hideki; Ueda, Hiroki R

    2017-01-01

    Organism-level systems biology in mammals aims to identify, analyze, control, and design molecular and cellular networks executing various biological functions in mammals. In particular, system-level identification and analysis of molecular and cellular networks can be accelerated by next-generation mammalian genetics. Mammalian genetics without crossing, in which all production and phenotyping studies of genome-edited animals are completed within a single generation, drastically reduces the time, space, and effort of conducting systems research. Next-generation mammalian genetics is based on recent technological advancements in genome editing and developmental engineering. The process begins with the introduction of double-strand breaks into genomic DNA by using site-specific endonucleases, which results in highly efficient genome editing in mammalian zygotes or embryonic stem cells. By using nuclease-mediated genome editing in zygotes, or ~100% embryonic stem cell-derived mouse technology, whole-body knock-out and knock-in mice can be produced within a single generation. These emerging technologies allow us to produce multiple knock-out or knock-in strains in a high-throughput manner. In this review, we discuss the basic concepts and related technologies as well as current challenges and future opportunities for next-generation mammalian genetics in organism-level systems biology.

  9. Towards Next Generation BI Systems

    DEFF Research Database (Denmark)

    Varga, Jovan; Romero, Oscar; Pedersen, Torben Bach

    2014-01-01

    Next generation Business Intelligence (BI) systems require integration of heterogeneous data sources and a strong user-centric orientation. Both needs entail machine-processable metadata to enable automation and allow end users to gain access to relevant data for their decision making processes....... This framework is based on the findings of a survey of current user-centric approaches mainly focusing on query recommendation assistance. Finally, we discuss the benefits of the framework and present the plans for future work....

  10. Parallel phase model : a programming model for high-end parallel machines with manycores.

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Junfeng (Syracuse University, Syracuse, NY); Wen, Zhaofang; Heroux, Michael Allen; Brightwell, Ronald Brian

    2009-04-01

    This paper presents a parallel programming model, Parallel Phase Model (PPM), for next-generation high-end parallel machines based on a distributed memory architecture consisting of a networked cluster of nodes with a large number of cores on each node. PPM has a unified high-level programming abstraction that facilitates the design and implementation of parallel algorithms to exploit both the parallelism of the many cores and the parallelism at the cluster level. The programming abstraction will be suitable for expressing both fine-grained and coarse-grained parallelism. It includes a few high-level parallel programming language constructs that can be added as an extension to an existing (sequential or parallel) programming language such as C; and the implementation of PPM also includes a light-weight runtime library that runs on top of an existing network communication software layer (e.g. MPI). Design philosophy of PPM and details of the programming abstraction are also presented. Several unstructured applications that inherently require high-volume random fine-grained data accesses have been implemented in PPM with very promising results.
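
    The PPM language constructs themselves are not reproduced here. Purely as an illustration of the two levels of parallelism the model unifies, the sketch below pairs one MPI rank per node (cluster-level parallelism) with a pool of worker processes per node (core-level parallelism); all names, sizes, and the toy workload are placeholders, and this is not PPM syntax.

        from mpi4py import MPI
        from concurrent.futures import ProcessPoolExecutor

        def local_work(chunk):
            return sum(x * x for x in chunk)          # stand-in for fine-grained node-local work

        def main():
            comm = MPI.COMM_WORLD
            rank = comm.Get_rank()
            data = list(range(rank * 1000, (rank + 1) * 1000))   # this rank's share of a global array
            chunks = [data[i::8] for i in range(8)]              # split across 8 cores on the node
            with ProcessPoolExecutor(max_workers=8) as pool:     # core-level parallelism
                local_total = sum(pool.map(local_work, chunks))
            global_total = comm.reduce(local_total, op=MPI.SUM, root=0)  # cluster-level reduction
            if rank == 0:
                print("global sum of squares:", global_total)

        if __name__ == "__main__":
            main()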

  11. Data Analysis and Next Generation Assessments

    Science.gov (United States)

    Pon, Kathy

    2013-01-01

    For the last decade, much of the work of California school administrators has been shaped by the accountability of the No Child Left Behind Act. Now as they stand at the precipice of Common Core Standards and next generation assessments, it is important to reflect on the proficiency educators have attained in using data to improve instruction and…

  12. Revealing the Physics of Galactic Winds Through Massively-Parallel Hydrodynamics Simulations

    Science.gov (United States)

    Schneider, Evan Elizabeth

    This thesis documents the hydrodynamics code Cholla and a numerical study of multiphase galactic winds. Cholla is a massively-parallel, GPU-based code designed for astrophysical simulations that is freely available to the astrophysics community. A static-mesh Eulerian code, Cholla is ideally suited to carrying out massive simulations (> 2048³ cells) that require very high resolution. The code incorporates state-of-the-art hydrodynamics algorithms including third-order spatial reconstruction, exact and linearized Riemann solvers, and unsplit integration algorithms that account for transverse fluxes on multidimensional grids. Operator-split radiative cooling and a dual-energy formalism for high Mach number flows are also included. An extensive test suite demonstrates Cholla's superior ability to model shocks and discontinuities, while the GPU-native design makes the code extremely computationally efficient - speeds of 5-10 million cell updates per GPU-second are typical on current hardware for 3D simulations with all of the aforementioned physics. The latter half of this work comprises a comprehensive study of the mixing between a hot, supernova-driven wind and cooler clouds representative of those observed in multiphase galactic winds. Both adiabatic and radiatively-cooling clouds are investigated. The analytic theory of cloud-crushing is applied to the problem, and adiabatic turbulent clouds are found to be mixed with the hot wind on similar timescales as the classic spherical case (4-5 t_cc) with an appropriate rescaling of the cloud-crushing time. Radiatively cooling clouds survive considerably longer, and the differences in evolution between turbulent and spherical clouds cannot be reconciled with a simple rescaling. The rapid incorporation of low-density material into the hot wind implies efficient mass-loading of hot phases of galactic winds. At the same time, the extreme compression of high-density cloud material leads to long-lived but slow-moving clumps

  13. SWAMP+: multiple subsequence alignment using associative massive parallelism

    Energy Technology Data Exchange (ETDEWEB)

    Steinfadt, Shannon Irene [Los Alamos National Laboratory; Baker, Johnnie W [KENT STATE UNIV.

    2010-10-18

    A new parallel algorithm SWAMP+ incorporates the Smith-Waterman sequence alignment on an associative parallel model known as ASC. It is a highly sensitive parallel approach that expands traditional pairwise sequence alignment. This is the first parallel algorithm to provide multiple non-overlapping, non-intersecting subsequence alignments with the accuracy of Smith-Waterman. The efficient algorithm provides multiple alignments similar to BLAST while creating a better workflow for the end users. The parallel portions of the code run in O(m+n) time using m processors. When m = n, the algorithmic analysis becomes O(n) with a coefficient of two, yielding a linear speedup. Implementation of the algorithm on the SIMD ClearSpeed CSX620 confirms this theoretical linear speedup with real timings.
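
    To make the underlying recurrence concrete, the sketch below shows a plain (serial) Smith-Waterman matrix fill in Python with a linear gap penalty; the O(m+n) parallel time quoted above is consistent with evaluating each anti-diagonal of this matrix simultaneously, since cells on one anti-diagonal depend only on the previous two. This is an illustrative restatement of the classic algorithm with arbitrary scoring parameters, not the ASC/SWAMP+ implementation.

      # Illustrative serial Smith-Waterman fill (linear gap penalty).
      # Cells on an anti-diagonal i + j = const are mutually independent,
      # which SIMD/associative machines exploit for O(m+n) parallel time.
      def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
          m, n = len(a), len(b)
          H = [[0] * (n + 1) for _ in range(m + 1)]
          best = 0
          for i in range(1, m + 1):
              for j in range(1, n + 1):
                  s = match if a[i - 1] == b[j - 1] else mismatch
                  H[i][j] = max(0,
                                H[i - 1][j - 1] + s,   # match/mismatch
                                H[i - 1][j] + gap,     # deletion
                                H[i][j - 1] + gap)     # insertion
                  best = max(best, H[i][j])
          return best

      print(smith_waterman("ACACACTA", "AGCACACA"))  # best local alignment score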

  14. Achieving universal access to next generation networks

    DEFF Research Database (Denmark)

    Falch, Morten; Henten, Anders

    The paper examines investment dimensions of next generation networks in a universal service perspective in a European context. The question is how new network infrastructures for getting access to communication, information and entertainment services in the present and future information society...

  15. Next generation breeding.

    Science.gov (United States)

    Barabaschi, Delfina; Tondelli, Alessandro; Desiderio, Francesca; Volante, Andrea; Vaccino, Patrizia; Valè, Giampiero; Cattivelli, Luigi

    2016-01-01

    The genomic revolution of the past decade has greatly improved our understanding of the genetic make-up of living organisms. The sequencing of crop genomes has completely changed our vision and interpretation of genome organization and evolution. Re-sequencing allows the identification of an unlimited number of markers as well as the analysis of germplasm allelic diversity based on allele mining approaches. High throughput marker technologies coupled with advanced phenotyping platforms provide new opportunities for discovering marker-trait associations which can sustain genomic-assisted breeding. The availability of genome sequencing information is enabling genome editing (site-specific mutagenesis), to obtain gene sequences desired by breeders. This review illustrates how next generation sequencing-derived information can be used to tailor genomic tools for different breeders' needs to revolutionize crop improvement. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  16. Structural variation discovery in the cancer genome using next generation sequencing: Computational solutions and perspectives

    Science.gov (United States)

    Liu, Biao; Conroy, Jeffrey M.; Morrison, Carl D.; Odunsi, Adekunle O.; Qin, Maochun; Wei, Lei; Trump, Donald L.; Johnson, Candace S.; Liu, Song; Wang, Jianmin

    2015-01-01

    Somatic Structural Variations (SVs) are a complex collection of chromosomal mutations that could directly contribute to carcinogenesis. Next Generation Sequencing (NGS) technology has emerged as the primary means of interrogating the SVs of the cancer genome in recent investigations. Sophisticated computational methods are required to accurately identify the SV events and delineate their breakpoints from the massive amounts of reads generated by an NGS experiment. In this review, we provide an overview of current analytic tools used for SV detection in NGS-based cancer studies. We summarize the features of common SV groups and the primary types of NGS signatures that can be used in SV detection methods. We discuss the principles and key similarities and differences of existing computational programs and comment on unresolved issues related to this research field. The aim of this article is to provide a practical guide of relevant concepts, computational methods, software tools and important factors for analyzing and interpreting NGS data for the detection of SVs in the cancer genome. PMID:25849937

  17. Next-Generation Sequencing: From Understanding Biology to Personalized Medicine

    Directory of Open Access Journals (Sweden)

    Benjamin Meder

    2013-03-01

    Full Text Available Within just a few years, the new methods for high-throughput next-generation sequencing have generated completely novel insights into the heritability and pathophysiology of human disease. In this review, we wish to highlight the benefits of the current state-of-the-art sequencing technologies for genetic and epigenetic research. We illustrate how these technologies help to constantly improve our understanding of genetic mechanisms in biological systems and summarize the progress made so far. This can be exemplified by the case of heritable heart muscle diseases, so-called cardiomyopathies. Here, next-generation sequencing is able to identify novel disease genes, and first clinical applications demonstrate the successful translation of this technology into personalized patient care.

  18. Parallel computing works!

    CERN Document Server

    Fox, Geoffrey C; Messina, Guiseppe C

    2014-01-01

    A clear illustration of how parallel computers can be successfully applied to large-scale scientific computations. This book demonstrates how a variety of applications in physics, biology, mathematics and other sciences were implemented on real parallel computers to produce new scientific results. It investigates issues of fine-grained parallelism relevant for future supercomputers with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configure different massively parallel machines, design and implement basic system software, and develop

  19. Handover Based IMS Registration Scheme for Next Generation Mobile Networks

    Directory of Open Access Journals (Sweden)

    Shireen Tahira

    2017-01-01

    Full Text Available Next generation mobile networks aim to provide faster speed and more capacity along with energy efficiency to support video streaming and massive data sharing in social and communication networks. In these networks, user equipment has to register with IP Multimedia Subsystem (IMS which promises quality of service to the mobile users that frequently move across different access networks. After each handover caused due to mobility, IMS provides IPSec Security Association establishment and authentication phases. The main issue is that unnecessary reregistration after every handover results in latency and communication overhead. To tackle these issues, this paper presents a lightweight Fast IMS Mobility (FIM registration scheme that avoids unnecessary conventional registration phases such as security associations, authentication, and authorization. FIM maintains a flag to avoid deregistration and sends a subsequent message to provide necessary parameters to IMS servers after mobility. It also handles the change of IP address for user equipment and transferring the security associations from old to new servers. We have validated the performance of FIM by developing a testbed consisting of IMS servers and user equipment. The experimental results demonstrate the performance supremacy of FIM. It reduces media disruption time, number of messages, and packet loss up to 67%, 100%, and 61%, respectively, as compared to preliminaries.

  20. Next Generation Drivetrain Development and Test Program

    Energy Technology Data Exchange (ETDEWEB)

    Keller, Jonathan; Erdman, Bill; Blodgett, Doug; Halse, Chris; Grider, Dave

    2015-11-03

    This presentation was given at the Wind Energy IQ conference in Bremen, Germany, November 30 through December 2, 2015. It focused on the next-generation drivetrain architecture and drivetrain technology development and testing (including gearbox and inverter software and medium-voltage inverter modules).

  1. PFLOTRAN User Manual: A Massively Parallel Reactive Flow and Transport Model for Describing Surface and Subsurface Processes

    Energy Technology Data Exchange (ETDEWEB)

    Lichtner, Peter C. [OFM Research, Redmond, WA (United States); Hammond, Glenn E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lu, Chuan [Idaho National Lab. (INL), Idaho Falls, ID (United States); Karra, Satish [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bisht, Gautam [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Andre, Benjamin [National Center for Atmospheric Research, Boulder, CO (United States); Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Mills, Richard [Intel Corporation, Portland, OR (United States); Univ. of Tennessee, Knoxville, TN (United States); Kumar, Jitendra [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-01-20

    PFLOTRAN solves a system of generally nonlinear partial differential equations describing multi-phase, multicomponent and multiscale reactive flow and transport in porous materials. The code is designed to run on massively parallel computing architectures as well as workstations and laptops (e.g. Hammond et al., 2011). Parallelization is achieved through domain decomposition using the PETSc (Portable Extensible Toolkit for Scientific Computation) libraries for the parallelization framework (Balay et al., 1997). PFLOTRAN has been developed from the ground up for parallel scalability and has been run on up to 2^18 processor cores with problem sizes up to 2 billion degrees of freedom. Written in object-oriented Fortran 90, the code requires the latest compilers compatible with Fortran 2003. At the time of this writing this requires gcc 4.7.x, Intel 12.1.x and PGI compilers. As a requirement of running problems with a large number of degrees of freedom, PFLOTRAN allows reading input data that is too large to fit into memory allotted to a single processor core. The current limitation to the problem size PFLOTRAN can handle is the limitation of the HDF5 file format used for parallel I/O to 32-bit integers. Noting that 2^32 = 4,294,967,296, this gives an estimate of the maximum problem size that can currently be run with PFLOTRAN. Hopefully this limitation will be remedied in the near future.

  2. Heterogeneous next-generation wireless network interference model-and its applications

    KAUST Repository

    Mahmood, Nurul Huda

    2014-04-01

    Next-generation wireless systems facilitating better utilisation of the scarce radio spectrum have emerged as a response to inefficient and rigid spectrum assignment policies. These are comprised of intelligent radio nodes that opportunistically operate in the radio spectrum of existing primary systems, yet unwanted interference at the primary receivers is unavoidable. In order to design efficient next-generation systems and to minimise the adverse effect of their interference, it is necessary to realise how the resulting interference impacts the performance of the primary systems. In this work, a generalised framework for the interference analysis of such a next-generation system is presented where the next-generation transmitters may transmit randomly with different transmit powers. The analysis is built around a model developed for the statistical representation of the interference at the primary receivers, which is then used to evaluate various performance measures of the primary system. Applications of the derived interference model in designing the next-generation network system parameters are also demonstrated. Such an approach provides a unified and generalised framework, the use of which allows a wide range of performance metrics to be evaluated. Findings of the analytical performance analyses are confirmed through extensive computer-based Monte-Carlo simulations. © 2012 John Wiley & Sons, Ltd.

  3. High-performance integrated virtual environment (HIVE): a robust infrastructure for next-generation sequence data analysis.

    Science.gov (United States)

    Simonyan, Vahan; Chumakov, Konstantin; Dingerdissen, Hayley; Faison, William; Goldweber, Scott; Golikov, Anton; Gulzar, Naila; Karagiannis, Konstantinos; Vinh Nguyen Lam, Phuc; Maudru, Thomas; Muravitskaja, Olesja; Osipova, Ekaterina; Pan, Yang; Pschenichnov, Alexey; Rostovtsev, Alexandre; Santana-Quintero, Luis; Smith, Krista; Thompson, Elaine E; Tkachenko, Valery; Torcivia-Rodriguez, John; Voskanian, Alin; Wan, Quan; Wang, Jing; Wu, Tsung-Jung; Wilson, Carolyn; Mazumder, Raja

    2016-01-01

    The High-performance Integrated Virtual Environment (HIVE) is a distributed storage and compute environment designed primarily to handle next-generation sequencing (NGS) data. This multicomponent cloud infrastructure provides secure web access for authorized users to deposit, retrieve, annotate and compute on NGS data, and to analyse the outcomes using web interface visual environments appropriately built in collaboration with research and regulatory scientists and other end users. Unlike many massively parallel computing environments, HIVE uses a cloud control server which virtualizes services, not processes. It is both very robust and flexible due to the abstraction layer introduced between computational requests and operating system processes. The novel paradigm of moving computations to the data, instead of moving data to computational nodes, has proven to be significantly less taxing for both hardware and network infrastructure. The honeycomb data model developed for HIVE integrates metadata into an object-oriented model. Its distinction from other object-oriented databases is in the additional implementation of a unified application program interface to search, view and manipulate data of all types. This model simplifies the introduction of new data types, thereby minimizing the need for database restructuring and streamlining the development of new integrated information systems. The honeycomb model employs a highly secure hierarchical access control and permission system, allowing determination of data access privileges in a finely granular manner without flooding the security subsystem with a multiplicity of rules. HIVE infrastructure will allow engineers and scientists to perform NGS analysis in a manner that is both efficient and secure. HIVE is actively supported in public and private domains, and project collaborations are welcomed. Database URL: https://hive.biochemistry.gwu.edu. © The Author(s) 2016. Published by Oxford University Press.

  4. Wideband aperture array using RF channelizers and massively parallel digital 2D IIR filterbank

    Science.gov (United States)

    Sengupta, Arindam; Madanayake, Arjuna; Gómez-García, Roberto; Engeberg, Erik D.

    2014-05-01

    Wideband receive-mode beamforming applications in wireless location, electronically-scanned antennas for radar, RF sensing, microwave imaging and wireless communications require digital aperture arrays that offer a relatively constant far-field beam over several octaves of bandwidth. Several beamforming schemes including the well-known true time-delay and the phased array beamformers have been realized using either finite impulse response (FIR) or fast Fourier transform (FFT) digital filter-sum based techniques. These beamforming algorithms offer the desired selectivity at the cost of a high computational complexity and frequency-dependent far-field array patterns. A novel approach to receiver beamforming is the use of massively parallel 2-D infinite impulse response (IIR) fan filterbanks for the synthesis of relatively frequency independent RF beams at an order of magnitude lower multiplier complexity compared to FFT or FIR filter based conventional algorithms. The 2-D IIR filterbanks demand fast digital processing that can support several octaves of RF bandwidth, and fast analog-to-digital converters (ADCs) for RF-to-bits type direct conversion of wideband antenna element signals. Fast digital implementation platforms that can realize the high-precision recursive filter structures necessary for real-time beamforming at RF radio bandwidths are also desired. We propose a novel technique that combines a passive RF channelizer, multichannel ADC technology, and single-phase massively parallel 2-D IIR digital fan filterbanks, realized at low complexity using FPGA and/or ASIC technology. There exists native support for a larger bandwidth than the maximum clock frequency of the digital implementation technology. We also strive to achieve More-than-Moore throughput by processing a wideband RF signal having content with N-fold (B = N Fclk/2) bandwidth compared to the maximum clock frequency Fclk Hz of the digital VLSI platform under consideration. Such an increase in bandwidth is
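
    As a back-of-the-envelope illustration of the N-fold bandwidth relation B = N Fclk/2 (the numbers below are hypothetical and not taken from the paper): with an N-channel RF channelizer feeding N parallel ADC/filterbank lanes, each lane only has to satisfy Nyquist for its own sub-band, so the aggregate processed bandwidth grows linearly with the channel count.

      # Hypothetical numbers illustrating B = N * Fclk / 2 for a channelized array.
      n_channels = 8          # RF channelizer outputs (assumed)
      f_clk_hz = 1.0e9        # digital clock of each 2-D IIR filterbank lane (assumed)

      per_lane_bw = f_clk_hz / 2.0             # Nyquist bandwidth handled per lane
      aggregate_bw = n_channels * per_lane_bw  # total RF bandwidth covered

      print(f"per-lane: {per_lane_bw/1e9:.1f} GHz, aggregate: {aggregate_bw/1e9:.1f} GHz")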

  5. 76 FR 49776 - The Development and Evaluation of Next-Generation Smallpox Vaccines; Public Workshop

    Science.gov (United States)

    2011-08-11

    ...] The Development and Evaluation of Next-Generation Smallpox Vaccines; Public Workshop AGENCY: Food and... Evaluation of Next-Generation Smallpox Vaccines.'' The purpose of the public workshop is to identify and discuss the key issues related to the development and evaluation of next-generation smallpox vaccines. The...

  6. A massively parallel algorithm for the collision probability calculations in the Apollo-II code using the PVM library

    International Nuclear Information System (INIS)

    Stankovski, Z.

    1995-01-01

    The collision probability method in neutron transport, as applied to 2D geometries, consumes a great amount of computer time; for a typical 2D assembly calculation about 90% of the computing time is consumed in the collision probability evaluations. Consequently RZ or 3D calculations became prohibitive. In this paper we present a simple but efficient parallel algorithm based on the message passing host/node programming model. Parallelization was applied to the energy group treatment. Such an approach permits parallelization of the existing code, requiring only limited modifications. Sequential/parallel computer portability is preserved, which is a necessary condition for an industrial code. Sequential performance is also preserved. The algorithm is implemented on a CRAY 90 coupled to a 128 processor T3D computer, a 16 processor IBM SP1 and a network of workstations, using the Public Domain PVM library. The tests were executed for a 2D geometry with the standard 99-group library. All results were very satisfactory, the best ones with the IBM SP1. Because of the heterogeneity of the workstation network, we did not expect high performance from this architecture. The same source code was used for all computers. A more impressive advantage of this algorithm will appear in the calculations of the SAPHYR project (with the future fine multigroup library of about 8000 groups) with a massively parallel computer, using several hundreds of processors. (author). 5 refs., 6 figs., 2 tabs
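
    The host/node pattern described here (a coordinating process handing out energy groups, worker processes computing the collision probabilities for their groups and returning results) can be sketched as follows. The original work uses PVM and Fortran, so this mpi4py version with a dummy per-group kernel and a static cyclic assignment is only a schematic of the task distribution, not the Apollo-II code.

      # Schematic of host/node distribution of energy groups (not the Apollo-II/PVM code).
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      N_GROUPS = 99  # standard 99-group library mentioned in the abstract

      def collision_probabilities(group):
          """Placeholder for the per-group collision probability calculation."""
          rng = np.random.default_rng(group)
          return rng.random((4, 4))  # dummy Pij matrix for a tiny 4-region problem

      # Rank 0 acts as host; worker ranks take energy groups in a cyclic assignment.
      my_groups = [g for g in range(N_GROUPS)
                   if g % max(size - 1, 1) == (rank - 1)] if rank > 0 else []
      my_results = {g: collision_probabilities(g) for g in my_groups}

      # Host gathers the per-group results from all workers.
      all_results = comm.gather(my_results, root=0)
      if rank == 0:
          merged = {g: pij for part in all_results for g, pij in part.items()}
          print(f"received collision probability matrices for {len(merged)} groups")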

  7. A massively parallel algorithm for the collision probability calculations in the Apollo-II code using the PVM library

    International Nuclear Information System (INIS)

    Stankovski, Z.

    1995-01-01

    The collision probability method in neutron transport, as applied to 2D geometries, consumes a great amount of computer time; for a typical 2D assembly calculation about 90% of the computing time is consumed in the collision probability evaluations. Consequently RZ or 3D calculations became prohibitive. In this paper the author presents a simple but efficient parallel algorithm based on the message passing host/node programming model. Parallelization was applied to the energy group treatment. Such an approach permits parallelization of the existing code, requiring only limited modifications. Sequential/parallel computer portability is preserved, which is a necessary condition for an industrial code. Sequential performance is also preserved. The algorithm is implemented on a CRAY 90 coupled to a 128 processor T3D computer, a 16 processor IBM SP1 and a network of workstations, using the Public Domain PVM library. The tests were executed for a 2D geometry with the standard 99-group library. All results were very satisfactory, the best ones with the IBM SP1. Because of the heterogeneity of the workstation network, the author did not expect high performance from this architecture. The same source code was used for all computers. A more impressive advantage of this algorithm will appear in the calculations of the SAPHYR project (with the future fine multigroup library of about 8000 groups) with a massively parallel computer, using several hundreds of processors

  8. Exploring the potential of second-generation sequencing in diverse biological contexts

    DEFF Research Database (Denmark)

    Fordyce, Sarah Louise

    Second generation sequencing (SGS) has revolutionized the study of DNA, allowing massively parallel sequencing of nucleic acids with unprecedented depths of coverage. The research undertaken in this thesis occurred in parallel with the increased accessibility of SGS platforms for routine genetic

  9. Applications of nanotechnology, next generation sequencing and microarrays in biomedical research.

    Science.gov (United States)

    Elingaramil, Sauli; Li, Xiaolong; He, Nongyue

    2013-07-01

    Next-generation sequencing technologies, microarrays and advances in bionanotechnology have had an enormous impact on research within a short time frame. This impact appears certain to increase further as many biomedical institutions are now acquiring these prevailing new technologies. Beyond conventional sampling of genome content, wide-ranging applications are rapidly evolving for next-generation sequencing, microarrays and nanotechnology. To date, these technologies have been applied in a variety of contexts, including whole-genome sequencing, targeted resequencing and discovery of transcription factor binding sites, noncoding RNA expression profiling and molecular diagnostics. This paper thus discusses current applications of nanotechnology, next-generation sequencing technologies and microarrays in biomedical research and highlights the transforming potential these technologies offer.

  10. Next generation surveillance system (NGSS)

    International Nuclear Information System (INIS)

    Aparo, Massimo

    2006-01-01

    Development of 'functional requirements' for transparency systems may offer a near-term mode of regional cooperation. New requirements under development at the IAEA may provide a foundation for this potential activity. The Next Generation Surveillance System (NGSS) will become the new IAEA remote monitoring system. Under the new requirements the NGSS would attempt to use more commercial components to reduce cost, increase radiation survivability and further increase reliability. The NGSS must be available in two years due to rapidly approaching obsolescence in the existing DCM family. (author)

  11. Optimizing the next generation optical access networks

    DEFF Research Database (Denmark)

    Amaya Fernández, Ferney Orlando; Soto, Ana Cardenas; Tafur Monroy, Idelfonso

    2009-01-01

    Several issues in the design and optimization of the next generation optical access network (NG-OAN) are presented. The noise, the distortion and the fiber optic nonlinearities are considered to optimize the video distribution link in a passive optical network (PON). A discussion of the effect...

  12. IPv6: The Next Generation Internet Protocol

    Indian Academy of Sciences (India)

    IPv6: The Next Generation Internet Protocol - IPv4 and its Shortcomings. Harsha Srinath. Resonance – Journal of Science Education, General Article, Volume 8, Issue 3, March 2003, pp. 33-41.

  13. IPv6: The Next Generation Internet Protocol

    Indian Academy of Sciences (India)

    IPv6: The Next Generation Internet Protocol - New Features in IPv6. Harsha Srinath. Resonance – Journal of Science Education, General Article, Volume 8, Issue 4, April 2003, pp. 8-16.

  14. Next-Generation Sequencing for Binary Protein-Protein Interactions

    Directory of Open Access Journals (Sweden)

    Bernhard eSuter

    2015-12-01

    Full Text Available The yeast two-hybrid (Y2H) system exploits host cell genetics in order to display binary protein-protein interactions (PPIs) via defined and selectable phenotypes. Numerous improvements have been made to this method, adapting the screening principle for diverse applications, including drug discovery and the scale-up for proteome-wide interaction screens in human and other organisms. Here we discuss a systematic workflow and analysis scheme for screening data generated by Y2H and related assays that includes high-throughput selection procedures, readout of comprehensive results via next-generation sequencing (NGS), and the interpretation of interaction data via quantitative statistics. The novel assays and tools will serve the broader scientific community to harness the power of NGS technology to address PPI networks in health and disease. We discuss examples of how this next-generation platform can be applied to address specific questions in diverse fields of biology and medicine.

  15. Next Generation Safeguards Initiative: Human Capital Development

    International Nuclear Information System (INIS)

    Scholz, M.; Irola, G.; Glynn, K.

    2015-01-01

    Since 2008, the Human Capital Development (HCD) subprogramme of the U.S. National Nuclear Security Administration's (NNSA) Next Generation Safeguards Initiative (NGSI) has supported the recruitment, education, training, and retention of the next generation of international safeguards professionals to meet the needs of both the International Atomic Energy Agency (IAEA) and the United States. Specifically, HCD's efforts respond to data indicating that 82% of safeguards experts at U.S. Laboratories will have left the workforce within 15 years. This paper provides an update on the status of the subprogramme since its last presentation at the IAEA Safeguards Symposium in 2010. It highlights strengthened, integrated efforts in the areas of graduate and post-doctoral fellowships, young and midcareer professional support, short safeguards courses, and university engagement. It also discusses lessons learned from the U.S. experience in safeguards education and training as well as the importance of long-range strategies to develop a cohesive, effective, and efficient human capital development approach. (author)

  16. Massive MIMO meets small cell backhaul and cooperation

    CERN Document Server

    Yang, Howard H

    2017-01-01

    This brief explores the utilization of large antenna arrays in massive multiple-input-multiple-output (MIMO) for both interference suppression, where it can improve cell-edge user rates, and for wireless backhaul in small cell networks, where macro base stations can forward data to small access points in an energy efficient way. Massive MIMO is deemed as a critical technology for next generation wireless technology. By deploying an antenna array that has active elements in excess of the number of users, massive MIMO not only provides tremendous diversity gain but also powers new aspects for network design to improve performance. This brief investigates a better utilization of the excessive spatial dimensions to improve network performance. It combines random matrix theory and stochastic geometry to develop an analytical framework that accounts for all the key features of a network, including number of antenna array, base station density, inter-cell interference, random base station deployment, and network tra...

  17. Next Generation Sequencing of Ancient DNA: Requirements, Strategies and Perspectives

    Directory of Open Access Journals (Sweden)

    Michael Knapp

    2010-07-01

    Full Text Available The invention of next-generation sequencing has revolutionized almost all fields of genetics, but few have profited from it as much as the field of ancient DNA research. From its beginnings as an interesting but rather marginal discipline, ancient DNA research is now on its way into the centre of evolutionary biology. In less than a year from its invention, next-generation sequencing had increased the amount of DNA sequence data available from extinct organisms by several orders of magnitude. Ancient DNA research is now not only adding a temporal aspect to evolutionary studies and allowing for the observation of evolution in real time, it also provides important data to help understand the origins of our own species. Here we review progress that has been made in next-generation sequencing of ancient DNA over the past five years and evaluate sequencing strategies and future directions.

  18. NOAA Next Generation Radar (NEXRAD) Level 3 Products

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset consists of Level 3 weather radar products collected from Next-Generation Radar (NEXRAD) stations located in the contiguous United States, Alaska,...

  19. Parallel algorithms for online trackfinding at PANDA

    Energy Technology Data Exchange (ETDEWEB)

    Bianchi, Ludovico; Ritman, James; Stockmanns, Tobias [IKP, Forschungszentrum Juelich GmbH (Germany); Herten, Andreas [JSC, Forschungszentrum Juelich GmbH (Germany); Collaboration: PANDA-Collaboration

    2016-07-01

    The PANDA experiment, one of the four scientific pillars of the FAIR facility currently under construction in Darmstadt, is a next-generation particle detector that will study collisions of antiprotons with beam momenta of 1.5-15 GeV/c on a fixed proton target. Because of the broad physics scope and the similar signature of signal and background events, PANDA's strategy for data acquisition is to continuously record data from the whole detector and use this global information to perform online event reconstruction and filtering. A real-time rejection factor of up to 1000 must be achieved to match the incoming data rate for offline storage, making all components of the data processing system computationally very challenging. Online particle track identification and reconstruction is an essential step, since track information is used as input in all following phases. Online tracking algorithms must ensure a delicate balance between high tracking efficiency and quality, and minimal computational footprint. For this reason, a massively parallel solution exploiting multiple Graphics Processing Units (GPUs) is under investigation. The talk presents the core concepts of the algorithms being developed for primary trackfinding, along with details of their implementation on GPUs.

  20. Next Generation CANDU: Conceptual Design for a Short Construction Schedule

    International Nuclear Information System (INIS)

    Hopwood, Jerry M.; Love, Ian J.W.; Elgohary, Medhat; Fairclough, Neville

    2002-01-01

    Atomic Energy of Canada Ltd. (AECL) has very successful experience in implementing new construction methods at the Qinshan (Phase III) twin unit CANDU 6 plant in China. This paper examines the construction method that must be implemented during the conceptual design phase of a project if short construction schedules are to be met. A project schedule of 48 months has been developed for the nth unit of NG (Next Generation) CANDU, with a 42-month construction period from First Concrete to In-Service. An overall construction strategy has been developed involving paralleling project activities that are normally conducted in series. Many parts of the plant will be fabricated as modules and be installed using heavy lift cranes. The Reactor Building (RB), being on the critical path, has been the focus of considerable assessment, looking at alternative ways of applying the construction strategy to this building. A construction method has been chosen which will result in excess of 80% of internal work being completed as modules or as very streamlined traditional construction. This method is being further evaluated as the detailed layout proceeds. Other areas of the plant have been integrated into the schedule and new construction methods are being applied to these so that further modularization and even greater paralleling of activities will be achieved. It is concluded that the optimized construction method is a requirement, which must be implemented through all phases of design to make a 42-month construction schedule a reality. If the construction methods are appropriately chosen, the schedule reductions achieved will make nuclear more competitive. (authors)

  1. Next generation sensors and systems

    CERN Document Server

    2016-01-01

    Written by experts in their areas of research, this book outlines the current status of the fundamentals and analytical concepts, modelling and design issues, technical details and practical applications of different types of sensors, and discusses the trends for the next generation of sensors and systems in the area of sensing technology. The book will be useful as a reference for engineers and scientists, especially post-graduate students, in their research on wearable sensors, devices and technologies.

  2. Diagnostics of Primary Immunodeficiencies through Next Generation Sequencing

    Directory of Open Access Journals (Sweden)

    Vera Gallo

    2016-11-01

    Full Text Available Background: Recently, a growing number of novel genetic defects underlying primary immunodeficiencies (PID) have been identified, increasing the number of PID up to more than 250 well-defined forms. Next-generation sequencing (NGS) technologies and proper filtering strategies greatly contributed to this rapid evolution, providing the possibility to rapidly and simultaneously analyze large numbers of genes or the whole exome. Objective: To evaluate the role of targeted next-generation sequencing and whole exome sequencing in the diagnosis of a case series, characterized by complex or atypical clinical features suggesting a PID, difficult to diagnose using the current diagnostic procedures. Methods: We retrospectively analyzed genetic variants identified through targeted next-generation sequencing or whole exome sequencing in 45 patients with complex PID of unknown etiology. Results: 40 variants were identified using targeted next-generation sequencing, while 5 were identified using whole exome sequencing. Newly identified genetic variants were classified into 4 groups: (I) variations associated with a well-defined PID; (II) variations associated with atypical features of a well-defined PID; (III) functionally relevant variations potentially involved in the immunological features; (IV) non-diagnostic genotype, in whom the link with phenotype is missing. We reached a conclusive genetic diagnosis in 7/45 patients (~16%). Among them, 4 patients presented with a typical well-defined PID. In the remaining 3 cases, mutations were associated with unexpected clinical features, expanding the phenotypic spectrum of typical PIDs. In addition, we identified 31 variants in 10 patients with complex phenotype, individually not causative per se of the disorder. Conclusion: NGS technologies represent a cost-effective and rapid first-line genetic approach for the evaluation of complex PIDs. Whole exome sequencing, despite a moderately higher cost compared to targeted sequencing, is

  3. Next-Generation Pathology.

    Science.gov (United States)

    Caie, Peter D; Harrison, David J

    2016-01-01

    The field of pathology is rapidly transforming from a semiquantitative and empirical science toward a big data discipline. Large data sets from across multiple omics fields may now be extracted from a patient's tissue sample. Tissue is, however, complex, heterogeneous, and prone to artifact. A reductionist view of tissue and disease progression, which does not take this complexity into account, may lead to single biomarkers failing in clinical trials. The integration of standardized multi-omics big data and the retention of valuable information on spatial heterogeneity are imperative to model complex disease mechanisms. Mathematical modeling through systems pathology approaches is the ideal medium to distill the significant information from these large, multi-parametric, and hierarchical data sets. Systems pathology may also predict the dynamical response of disease progression or response to therapy regimens from a static tissue sample. Next-generation pathology will incorporate big data with systems medicine in order to personalize clinical practice for both prognostic and predictive patient care.

  4. Potential of OFDM for next generation optical access

    Science.gov (United States)

    Fritzsche, Daniel; Weis, Erik; Breuer, Dirk

    2011-01-01

    This paper shows the requirements for next generation optical access (NGOA) networks and analyzes the potential of OFDM (orthogonal frequency division multiplexing) for the use in such network scenarios. First, we show the motivation for NGOA systems based on the future requirements on FTTH access systems and list the advantages of OFDM in such scenarios. In the next part, the basics of OFDM and different methods to generate and detect optical OFDM signals are explained and analyzed. At the transmitter side the options include intensity modulation and the more advanced field modulation of the optical OFDM signal. At the receiver there is the choice between direct detection and coherent detection. As the result of this discussion we show our vision of the future use of OFDM in optical access networks.

  5. Neutronics activities for next generation devices

    International Nuclear Information System (INIS)

    Gohar, Y.

    1985-01-01

    Neutronics activities for next generation devices are the subject of this paper. The main activities include TFCX and FPD blanket/shield studies, neutronic aspects of ETR/INTOR critical issues, and neutronics computational modules for the tokamak system code and tandem mirror reactor system code. Trade-off analyses, optimization studies, design problem investigations and computational model development for reactor parametric studies carried out for these activities are summarized

  6. Inter-laboratory evaluation of the EUROFORGEN Global ancestry-informative SNP panel by massively parallel sequencing using the Ion PGM™.

    Science.gov (United States)

    Eduardoff, M; Gross, T E; Santos, C; de la Puente, M; Ballard, D; Strobl, C; Børsting, C; Morling, N; Fusco, L; Hussing, C; Egyed, B; Souto, L; Uacyisrael, J; Syndercombe Court, D; Carracedo, Á; Lareu, M V; Schneider, P M; Parson, W; Phillips, C; Parson, W; Phillips, C

    2016-07-01

    The EUROFORGEN Global ancestry-informative SNP (AIM-SNPs) panel is a forensic multiplex of 128 markers designed to differentiate an individual's ancestry from amongst the five continental population groups of Africa, Europe, East Asia, Native America, and Oceania. A custom multiplex of AmpliSeq™ PCR primers was designed for the Global AIM-SNPs to perform massively parallel sequencing using the Ion PGM™ system. This study assessed individual SNP genotyping precision using the Ion PGM™, the forensic sensitivity of the multiplex using dilution series, degraded DNA plus simple mixtures, and the ancestry differentiation power of the final panel design, which required substitution of three original ancestry-informative SNPs with alternatives. Fourteen populations that had not been previously analyzed were genotyped using the custom multiplex and these studies allowed assessment of genotyping performance by comparison of data across five laboratories. Results indicate a low level of genotyping error can still occur from sequence misalignment caused by homopolymeric tracts close to the target SNP, despite careful scrutiny of candidate SNPs at the design stage. Such sequence misalignment required the exclusion of component SNP rs2080161 from the Global AIM-SNPs panel. However, the overall genotyping precision and sensitivity of this custom multiplex indicates the Ion PGM™ assay for the Global AIM-SNPs is highly suitable for forensic ancestry analysis with massively parallel sequencing. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  7. Parallel finite elements with domain decomposition and its pre-processing

    International Nuclear Information System (INIS)

    Yoshida, A.; Yagawa, G.; Hamada, S.

    1993-01-01

    This paper describes a parallel finite element analysis using a domain decomposition method, and the pre-processing for the parallel calculation. Computer simulations are about to replace experiments in various fields, and the scale of the models to be simulated tends to be extremely large. On the other hand, the computational environment has changed drastically in recent years. In particular, parallel processing on massively parallel computers or computer networks is considered to be a promising technique. In order to achieve high efficiency in such a parallel computation environment, large task granularity and a well-balanced workload distribution are key issues. It is also important to reduce the cost of pre-processing in such parallel FEM. From this point of view, the authors developed a domain decomposition FEM with an automatic and dynamic task-allocation mechanism and an automatic mesh generation/domain subdivision system for it. (author)

  8. gCUP: rapid GPU-based HIV-1 co-receptor usage prediction for next-generation sequencing.

    Science.gov (United States)

    Olejnik, Michael; Steuwer, Michel; Gorlatch, Sergei; Heider, Dominik

    2014-11-15

    Next-generation sequencing (NGS) has a large potential in HIV diagnostics, and genotypic prediction models have been developed and successfully tested in recent years. However, albeit being highly accurate, these computational models lack the computational efficiency to reach their full potential. In this study, we demonstrate the use of graphics processing units (GPUs) in combination with a computational prediction model for HIV tropism. Our new model named gCUP, parallelized and optimized for GPU, is highly accurate and can classify >175 000 sequences per second on an NVIDIA GeForce GTX 460. The computational efficiency of our new model is the next step to enable NGS technologies to reach clinical significance in HIV diagnostics. Moreover, our approach is not limited to HIV tropism prediction, but can also be easily adapted to other settings, e.g. drug resistance prediction. The source code can be downloaded at http://www.heiderlab.de. Contact: d.heider@wz-straubing.de. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  9. A two-stage flow-based intrusion detection model for next-generation networks.

    Science.gov (United States)

    Umer, Muhammad Fahad; Sher, Muhammad; Bi, Yaxin

    2018-01-01

    The next-generation network provides state-of-the-art access-independent services over converged mobile and fixed networks. Security in the converged network environment is a major challenge. Traditional packet and protocol-based intrusion detection techniques cannot be used in next-generation networks due to slow throughput, low accuracy and their inability to inspect encrypted payload. An alternative solution for protection of next-generation networks is to use network flow records for detection of malicious activity in the network traffic. The network flow records are independent of access networks and user applications. In this paper, we propose a two-stage flow-based intrusion detection system for next-generation networks. The first stage uses an enhanced unsupervised one-class support vector machine which separates malicious flows from normal network traffic. The second stage uses a self-organizing map which automatically groups malicious flows into different alert clusters. We validated the proposed approach on two flow-based datasets and obtained promising results.
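
    A minimal sketch of that two-stage idea is shown below, using scikit-learn on synthetic flow features: a one-class SVM trained on benign flows flags anomalous ones, and the flagged flows are then grouped into alert clusters. The paper's second stage is a self-organizing map; k-means is used here purely as a stand-in, and all feature values are synthetic.

      # Minimal two-stage sketch: one-class SVM -> clustering of flagged flows.
      # Synthetic data; k-means stands in for the paper's self-organizing map.
      import numpy as np
      from sklearn.svm import OneClassSVM
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      normal_flows = rng.normal(0.0, 1.0, size=(500, 6))      # e.g. duration, bytes, packets, ...
      test_flows = np.vstack([rng.normal(0.0, 1.0, size=(100, 6)),
                              rng.normal(5.0, 1.0, size=(40, 6))])  # last 40 are "malicious"

      # Stage 1: learn the boundary of normal traffic, flag outliers (-1).
      stage1 = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(normal_flows)
      flagged = test_flows[stage1.predict(test_flows) == -1]

      # Stage 2: group the flagged flows into alert clusters for the analyst.
      if len(flagged) >= 3:
          alert_cluster = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(flagged)
          print(f"{len(flagged)} flows flagged, cluster sizes: {np.bincount(alert_cluster)}")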

  10. An efficient parallel algorithm for matrix-vector multiplication

    Energy Technology Data Exchange (ETDEWEB)

    Hendrickson, B.; Leland, R.; Plimpton, S.

    1993-03-01

    The multiplication of a vector by a matrix is the kernel computation of many algorithms in scientific computation. A fast parallel algorithm for this calculation is therefore necessary if one is to make full use of the new generation of parallel supercomputers. This paper presents a high performance, parallel matrix-vector multiplication algorithm that is particularly well suited to hypercube multiprocessors. For an n x n matrix on p processors, the communication cost of this algorithm is O(n/√p + log(p)), independent of the matrix sparsity pattern. The performance of the algorithm is demonstrated by employing it as the kernel in the well-known NAS conjugate gradient benchmark, where a run time of 6.09 seconds was observed. This is the best published performance on this benchmark achieved to date using a massively parallel supercomputer.
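
    For illustration, the sketch below distributes the matrix by row blocks with mpi4py and gathers the result. This simple 1-D decomposition communicates O(n) data per rank, whereas the algorithm described above uses a 2-D (hypercube-friendly) decomposition to reach the O(n/√p + log p) communication cost, so the code is only a baseline for comparison, not the paper's algorithm.

      # Baseline row-block parallel mat-vec with mpi4py (not the paper's 2-D algorithm).
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, p = comm.Get_rank(), comm.Get_size()

      n = 1024                                   # global problem size (toy value)
      rows = np.array_split(np.arange(n), p)[rank]

      rng = np.random.default_rng(rank)
      A_local = rng.random((len(rows), n))       # this rank's block of rows of A
      x = np.empty(n)
      if rank == 0:
          x = np.random.default_rng(42).random(n)
      comm.Bcast(x, root=0)                      # every rank needs the full vector

      y_local = A_local @ x                      # local partial result
      y_parts = comm.allgather(y_local)          # collect row blocks from all ranks
      y = np.concatenate(y_parts)                # full y = A x on every rank

      if rank == 0:
          print("||y|| =", np.linalg.norm(y))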

  11. Radio resource management for next generation mobile communication systems

    DEFF Research Database (Denmark)

    Wang, Hua

    The key feature of the next generation (4G) mobile communication system is the ability to deliver a variety of multimedia services with different Quality-of-Service (QoS) requirements. Compared to the third generation (3G) mobile communication systems, the 4G mobile communication system introduces several

  12. Parallel processor programs in the Federal Government

    Science.gov (United States)

    Schneck, P. B.; Austin, D.; Squires, S. L.; Lehmann, J.; Mizell, D.; Wallgren, K.

    1985-01-01

    In 1982, a report dealing with the nation's research needs in high-speed computing called for increased access to supercomputing resources for the research community, research in computational mathematics, and increased research in the technology base needed for the next generation of supercomputers. Since that time a number of programs addressing future generations of computers, particularly parallel processors, have been started by U.S. government agencies. The present paper provides a description of the largest government programs in parallel processing. Established in fiscal year 1985 by the Institute for Defense Analyses for the National Security Agency, the Supercomputing Research Center will pursue research to advance the state of the art in supercomputing. Attention is also given to the DOE applied mathematical sciences research program, the NYU Ultracomputer project, the DARPA multiprocessor system architectures program, NSF research on multiprocessor systems, ONR activities in parallel computing, and NASA parallel processor projects.

  13. Energy and luminosity requirements for the next generation of linear colliders

    International Nuclear Information System (INIS)

    Amaldi, U.

    1987-01-01

    In order to gain new knowledge ('new physics') from 'next generation' linear colliders, energy and luminosity are important variables when considering the design of these new elementary particle probes. The standard model of the electroweak interaction is reviewed, and stipulations for searches for the postulated Higgs particle, a new neutral Z particle, a new quark, and a neutral lepton with next generation colliders are given

  14. Dependable Hydrogen and Industrial Heat Generation from the Next Generation Nuclear Plant

    Energy Technology Data Exchange (ETDEWEB)

    Charles V. Park; Michael W. Patterson; Vincent C. Maio; Piyush Sabharwall

    2009-03-01

    The Department of Energy is working with industry to develop a next generation, high-temperature gas-cooled nuclear reactor (HTGR) as a part of the effort to supply the US with abundant, clean and secure energy. The Next Generation Nuclear Plant (NGNP) project, led by the Idaho National Laboratory, will demonstrate the ability of the HTGR to generate hydrogen, electricity, and high-quality process heat for a wide range of industrial applications. Substituting HTGR power for traditional fossil fuel resources reduces the cost and supply vulnerability of natural gas and oil, and reduces or eliminates greenhouse gas emissions. As authorized by the Energy Policy Act of 2005, industry leaders are developing designs for the construction of a commercial prototype producing up to 600 MWt of power by 2021. This paper describes a variety of critical applications that are appropriate for the HTGR with an emphasis placed on applications requiring a clean and reliable source of hydrogen. An overview of the NGNP project status and its significant technology development efforts are also presented.

  15. Perspectives on the development of next generation reactor systems safety analysis codes

    International Nuclear Information System (INIS)

    Zhang, H.

    2015-01-01

    'Full text:' Existing reactor system analysis codes, such as RELAP5-3D and TRAC, have gained worldwide success in supporting reactor safety analyses, as well as design and licensing of new reactors. These codes are important assets to the nuclear engineering research community, as well as to the nuclear industry. However, most of these codes were originally developed during the 1970s, and it has become necessary to develop next-generation reactor system analysis codes for several reasons. Firstly, as new reactor designs emerge, there are new challenges emerging in numerical simulations of reactor systems such as long-lasting transients and multi-physics phenomena. These new requirements are beyond the range of applicability of the existing system analysis codes. Advanced modeling and numerical methods must be taken into consideration to improve the existing capabilities. Secondly, by developing next-generation reactor system analysis codes, the knowledge (know-how) in two-phase flow modeling and the highly complex constitutive models will be transferred to the young generation of nuclear engineers. And thirdly, all computer codes have a limited shelf life. It becomes less and less cost-effective to maintain a legacy code, due to the fast change of computer hardware and software environments. There are several critical perspectives in terms of developing next-generation reactor system analysis codes: 1) The success of the next-generation codes must be built upon the success of the existing codes. The knowledge of the existing codes, not just simply the manuals and codes, but knowing why and how, must be transferred to the next-generation codes. The next-generation codes should encompass the capability of the existing codes. The shortcomings of existing codes should be identified, understood, and properly categorized, for example into model deficiencies or numerical method deficiencies. 2) State-of-the-art models and numerical methods must be considered to

  16. Perspectives on the development of next generation reactor systems safety analysis codes

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, H., E-mail: Hongbin.Zhang@inl.gov [Idaho National Laboratory, Idaho Falls, ID (United States)

    2015-07-01

    'Full text:' Existing reactor system analysis codes, such as RELAP5-3D and TRAC, have gained worldwide success in supporting reactor safety analyses, as well as design and licensing of new reactors. These codes are important assets to the nuclear engineering research community, as well as to the nuclear industry. However, most of these codes were originally developed during the 1970s, and it has become necessary to develop next-generation reactor system analysis codes for several reasons. Firstly, as new reactor designs emerge, there are new challenges emerging in numerical simulations of reactor systems such as long-lasting transients and multi-physics phenomena. These new requirements are beyond the range of applicability of the existing system analysis codes. Advanced modeling and numerical methods must be taken into consideration to improve the existing capabilities. Secondly, by developing next-generation reactor system analysis codes, the knowledge (know-how) in two-phase flow modeling and the highly complex constitutive models will be transferred to the young generation of nuclear engineers. And thirdly, all computer codes have a limited shelf life. It becomes less and less cost-effective to maintain a legacy code, due to the fast change of computer hardware and software environments. There are several critical perspectives in terms of developing next-generation reactor system analysis codes: 1) The success of the next-generation codes must be built upon the success of the existing codes. The knowledge of the existing codes, not just simply the manuals and codes, but knowing why and how, must be transferred to the next-generation codes. The next-generation codes should encompass the capability of the existing codes. The shortcomings of existing codes should be identified, understood, and properly categorized, for example into model deficiencies or numerical method deficiencies. 2) State-of-the-art models and numerical methods must be considered to

  17. Towards Interactive Visual Exploration of Parallel Programs using a Domain-Specific Language

    KAUST Repository

    Klein, Tobias; Bruckner, Stefan; Grö ller, M. Eduard; Hadwiger, Markus; Rautek, Peter

    2016-01-01

    The use of GPUs and the massively parallel computing paradigm have become wide-spread. We describe a framework for the interactive visualization and visual analysis of the run-time behavior of massively parallel programs, especially OpenCL kernels. This facilitates understanding a program's function and structure, finding the causes of possible slowdowns, locating program bugs, and interactively exploring and visually comparing different code variants in order to improve performance and correctness. Our approach enables very specific, user-centered analysis, both in terms of the recording of the run-time behavior and the visualization itself. Instead of having to manually write instrumented code to record data, simple code annotations tell the source-to-source compiler which code instrumentation to generate automatically. The visualization part of our framework then enables the interactive analysis of kernel run-time behavior in a way that can be very specific to a particular problem or optimization goal, such as analyzing the causes of memory bank conflicts or understanding an entire parallel algorithm.

  18. Towards Interactive Visual Exploration of Parallel Programs using a Domain-Specific Language

    KAUST Repository

    Klein, Tobias

    2016-04-19

    The use of GPUs and the massively parallel computing paradigm have become wide-spread. We describe a framework for the interactive visualization and visual analysis of the run-time behavior of massively parallel programs, especially OpenCL kernels. This facilitates understanding a program's function and structure, finding the causes of possible slowdowns, locating program bugs, and interactively exploring and visually comparing different code variants in order to improve performance and correctness. Our approach enables very specific, user-centered analysis, both in terms of the recording of the run-time behavior and the visualization itself. Instead of having to manually write instrumented code to record data, simple code annotations tell the source-to-source compiler which code instrumentation to generate automatically. The visualization part of our framework then enables the interactive analysis of kernel run-time behavior in a way that can be very specific to a particular problem or optimization goal, such as analyzing the causes of memory bank conflicts or understanding an entire parallel algorithm.

  19. Synchronization System for Next Generation Light Sources

    Energy Technology Data Exchange (ETDEWEB)

    Zavriyev, Anton [MagiQ Technologies, Inc., Somerville, MA (United States)

    2014-03-27

    An alternative synchronization technique, one that would allow explicit control of the pulse train including its repetition rate and delay, is clearly desired. We propose such a scheme. Our method is based on optical interferometry and permits synchronization of the pulse trains generated by two independent mode-locked lasers. As next-generation x-ray sources will be driven by a clock signal derived from a mode-locked optical source, our technique will provide a way to synchronize the x-ray probe with the optical pump pulses.

  20. Mobility Models for Next Generation Wireless Networks Ad Hoc, Vehicular and Mesh Networks

    CERN Document Server

    Santi, Paolo

    2012-01-01

    Mobility Models for Next Generation Wireless Networks: Ad Hoc, Vehicular and Mesh Networks provides the reader with an overview of mobility modelling, encompassing both theoretical and practical aspects related to the challenging mobility modelling task. It also: Provides up-to-date coverage of mobility models for next generation wireless networksOffers an in-depth discussion of the most representative mobility models for major next generation wireless network application scenarios, including WLAN/mesh networks, vehicular networks, wireless sensor networks, and

  1. PCR-Free Enrichment of Mitochondrial DNA from Human Blood and Cell Lines for High Quality Next-Generation DNA Sequencing.

    Directory of Open Access Journals (Sweden)

    Meetha P Gould

    Full Text Available Recent advances in sequencing technology allow for accurate detection of mitochondrial sequence variants, even those in low abundance at heteroplasmic sites. Considerable sequencing cost savings can be achieved by enriching samples for mitochondrial (relative to nuclear) DNA. Reduction in nuclear DNA (nDNA) content can also help to avoid false positive variants resulting from nuclear mitochondrial sequences (numts). We isolate intact mitochondrial organelles from both human cell lines and blood components using two separate methods: a magnetic bead binding protocol and differential centrifugation. DNA is extracted and further enriched for mitochondrial DNA (mtDNA) by an enzyme digest. Only 1 ng of the purified DNA is necessary for library preparation and next generation sequence (NGS) analysis. Enrichment methods are assessed and compared using mtDNA (versus nDNA) content as a metric, measured by using real-time quantitative PCR and NGS read analysis. Among the various strategies examined, the optimal is differential centrifugation isolation followed by exonuclease digest. This strategy yields >35% mtDNA reads in blood and cell lines, which corresponds to hundreds-fold enrichment over baseline. The strategy also avoids false variant calls that, as we show, can be induced by the long-range PCR approaches that are the current standard in enrichment procedures. This optimization procedure allows mtDNA enrichment for efficient and accurate massively parallel sequencing, enabling NGS from samples with small amounts of starting material. This will decrease costs by increasing the number of samples that may be multiplexed, ultimately facilitating efforts to better understand mitochondria-related diseases.
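
    The enrichment metric described above (mtDNA versus nDNA content, measured by qPCR and by NGS read counts) reduces to simple arithmetic. The sketch below, using invented read counts, computes the fraction of mtDNA reads and the fold enrichment over an unenriched baseline; it illustrates the metric only and is not the authors' pipeline.

      def mtdna_fraction(mt_reads, total_reads):
          # fraction of sequencing reads that map to the mitochondrial genome
          return mt_reads / total_reads

      # hypothetical counts: whole-cell DNA vs. differential centrifugation + exonuclease digest
      baseline = mtdna_fraction(mt_reads=2_000, total_reads=1_000_000)    # ~0.2%
      enriched = mtdna_fraction(mt_reads=380_000, total_reads=1_000_000)  # >35%, as reported
      print(f"baseline {baseline:.2%}, enriched {enriched:.2%}, "
            f"fold enrichment {enriched / baseline:.0f}x")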

  2. Precision medicine for cancer with next-generation functional diagnostics.

    Science.gov (United States)

    Friedman, Adam A; Letai, Anthony; Fisher, David E; Flaherty, Keith T

    2015-12-01

    Precision medicine is about matching the right drugs to the right patients. Although this approach is technology agnostic, in cancer there is a tendency to make precision medicine synonymous with genomics. However, genome-based cancer therapeutic matching is limited by incomplete biological understanding of the relationship between phenotype and cancer genotype. This limitation can be addressed by functional testing of live patient tumour cells exposed to potential therapies. Recently, several 'next-generation' functional diagnostic technologies have been reported, including novel methods for tumour manipulation, molecularly precise assays of tumour responses and device-based in situ approaches; these address the limitations of the older generation of chemosensitivity tests. The promise of these new technologies suggests a future diagnostic strategy that integrates functional testing with next-generation sequencing and immunoprofiling to precisely match combination therapies to individual cancer patients.

  3. LiNbO3: A photovoltaic substrate for massive parallel manipulation and patterning of nano-objects

    International Nuclear Information System (INIS)

    Carrascosa, M.; García-Cabañes, A.; Jubera, M.; Ramiro, J. B.; Agulló-López, F.

    2015-01-01

    The application of evanescent photovoltaic (PV) fields, generated by visible illumination of Fe:LiNbO3 substrates, for parallel massive trapping and manipulation of micro- and nano-objects is critically reviewed. The technique has been often referred to as photovoltaic or photorefractive tweezers. The main advantage of the new method is that the involved electrophoretic and/or dielectrophoretic forces do not require any electrodes and large scale manipulation of nano-objects can be easily achieved using the patterning capabilities of light. The paper describes the experimental techniques for particle trapping and the main reported experimental results obtained with a variety of micro- and nano-particles (dielectric and conductive) and different illumination configurations (single beam, holographic geometry, and spatial light modulator projection). The report also pays attention to the physical basis of the method, namely, the coupling of the evanescent photorefractive fields to the dielectric response of the nano-particles. The role of a number of physical parameters such as the contrast and spatial periodicities of the illumination pattern or the particle deposition method is discussed. Moreover, the main properties of the obtained particle patterns in relation to potential applications are summarized, and first demonstrations reviewed. Finally, the PV method is discussed in comparison to other patterning strategies, such as those based on the pyroelectric response and the electric fields associated to domain poling of ferroelectric materials.

  4. Next-generation storm tracking for minimizing service interruption

    Energy Technology Data Exchange (ETDEWEB)

    Sznaider, R. [Meteorlogix, Minneapolis, MN (United States)

    2002-08-01

    Several technological changes have taken place in the field of weather radar since its discovery during World War II. A wide variety of industries have benefited over the years from conventional weather radar displays, providing assistance in forecasting and estimating the potential severity of storms. The characteristics of individual storm cells can now be derived from the next-generation of weather radar systems (NEXRAD). The determination of which storm cells possess distinct features such as large hail or developing tornadoes was made possible through the fusing of various pieces of information with radar pictures. To exactly determine when and where a storm will hit, this data can be combined and overlaid into a display that includes the geographical physical landmarks of a specific region. Combining Geographic Information Systems (GIS) and storm tracking provides a more complete, timely and accurate forecast, which clearly benefits the electric utilities industries. The generation and production of energy are dependent on how hot or cold it will be today and tomorrow. The author described each major feature of this next-generation weather radar system. 9 figs.

  5. Educating the next generation of nature entrepreneurs

    Science.gov (United States)

    Judith C. Jobse; Loes Witteveen; Judith Santegoets; Daan van der Linde

    2015-01-01

    With this paper, it is illustrated that a focus on entrepreneurship training in the nature and wilderness sector is relevant for diverse organisations and situations. The first curricula on nature entrepreneurship are currently being developed. In this paper the authors describe a project that focusses on educating the next generation of nature entrepreneurs, reflect...

  6. The Next Generation: Students Discuss Archaeology in the 21st Century

    OpenAIRE

    Sands, Ashley; Butler, Kristin

    2010-01-01

    The Next Generation Project is a multi-agent, multi-directional cultural diplomacy effort. The need for communication among emerging archaeologists has never been greater. Increasingly, archaeological sites are impacted by military activity, destroyed through the development of dams and building projects, and torn apart through looting. The Next Generation Project works to develop communication via social networking sites online and through in-person meetings at international conferences. As ...

  7. Introduction to massively-parallel computing in high-energy physics

    CERN Document Server

    AUTHOR|(CDS)2083520

    1993-01-01

    Ever since computers were first used for scientific and numerical work, there has existed an "arms race" between the technical development of faster computing hardware, and the desires of scientists to solve larger problems in shorter time-scales. However, the vast leaps in processor performance achieved through advances in semi-conductor science have reached a hiatus as the technology comes up against the physical limits of the speed of light and quantum effects. This has led all high performance computer manufacturers to turn towards a parallel architecture for their new machines. In these lectures we will introduce the history and concepts behind parallel computing, and review the various parallel architectures and software environments currently available. We will then introduce programming methodologies that allow efficient exploitation of parallel machines, and present case studies of the parallelization of typical High Energy Physics codes for the two main classes of parallel computing architecture (S...

  8. NIRS report of investigations for the development of the next generation PET apparatus. FY 2000

    International Nuclear Information System (INIS)

    2001-03-01

    This is a summary of study reports from representative technology fields for the development of the next generation PET apparatus aimed at 3-D imaging, and is intended to support future smooth cooperation between the fields. The investigation started in April 2000 at the National Institute of Radiological Sciences (NIRS) with the cooperation of other facilities, universities and companies. The report includes the following chapters: Detector volume and geometrical efficiency - design criterion for the next generation PET; Scintillator for PET; An investigation of detector and front-end electronics for the next generation PET; A measurement system of depth of interaction; Detector simulator; Development of an evaluation system for PET detectors; On the signal processing system for the next generation PET; List-mode data acquisition method for the next generation PET; List-mode data acquisition simulator; Image reconstruction; A Monte Carlo simulator for the next generation PET scanners; Out-of-field-of-view (FOV) radioactivity; and Published papers and presentations. (N.I.)

  9. Exploiting parallel R in the cloud with SPRINT.

    Science.gov (United States)

    Piotrowski, M; McGilvary, G A; Sloan, T M; Mewissen, M; Lloyd, A D; Forster, T; Mitchell, L; Ghazal, P; Hill, J

    2013-01-01

    Advances in DNA microarray devices and next-generation massively parallel DNA sequencing platforms have led to an exponential growth in data availability, but the arising opportunities require adequate computing resources. High Performance Computing (HPC) in the cloud offers an affordable way of meeting this need. Bioconductor, a popular tool for high-throughput genomic data analysis, is distributed as add-on modules for the R statistical programming language, but R has no native capabilities for exploiting multi-processor architectures. SPRINT is an R package that enables easy access to HPC for genomics researchers. This paper investigates setting up and running SPRINT-enabled genomic analyses on Amazon's Elastic Compute Cloud (EC2), the advantages of submitting applications to EC2 from different parts of the world, and whether resource underutilization can improve application performance. The SPRINT parallel implementations of correlation, permutation testing, partitioning around medoids and the multi-purpose papply have been benchmarked on data sets of various sizes on Amazon EC2. Jobs have been submitted from both the UK and Thailand to investigate monetary differences. It is possible to obtain good, scalable performance, but the level of improvement depends on the nature of the algorithm. Resource underutilization can further improve the time to result. The end-user's location impacts costs due to factors such as local taxation. Although not designed to satisfy HPC requirements, Amazon EC2 and cloud computing in general provide an interesting alternative and open new possibilities for smaller organisations with limited funds.
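
    SPRINT itself is an R package, so the sketch below is only a language-neutral illustration in Python of the embarrassingly parallel pattern behind functions such as its permutation testing: the replicates are split across worker processes, much as an HPC or EC2 back end would split them across nodes. None of the names reflect SPRINT's actual API.

      from multiprocessing import Pool
      import random

      def permutation_stat(seed, n=10_000):
          # one permutation replicate of a toy test statistic
          rng = random.Random(seed)
          return sum(rng.choice((-1, 1)) for _ in range(n)) / n

      if __name__ == "__main__":
          with Pool(processes=4) as pool:          # stand-in for cluster or cloud workers
              stats = pool.map(permutation_stat, range(1_000))
          print("smallest null-distribution values:", sorted(stats)[:5])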

  10. The next generation of power reactors - safety characteristics

    International Nuclear Information System (INIS)

    Modro, S.M.

    1995-01-01

    The next generation of commercial nuclear power reactors is characterized by a new approach to achieving reliability of their safety systems. In contrast to current generation reactors, these designs apply passive safety features that rely on gravity-driven transfer processes or stored energy, such as gas-pressurized accumulators or electric batteries. This paper discusses the passive safety system of the AP600 and Simplified Boiling Water Reactor (SBWR) designs

  11. Galaxy bispectrum from massive spinning particles

    Science.gov (United States)

    Moradinezhad Dizgah, Azadeh; Lee, Hayden; Muñoz, Julian B.; Dvorkin, Cora

    2018-05-01

    Massive spinning particles, if present during inflation, lead to a distinctive bispectrum of primordial perturbations, the shape and amplitude of which depend on the masses and spins of the extra particles. This signal, in turn, leaves an imprint in the statistical distribution of galaxies; in particular, as a non-vanishing galaxy bispectrum, which can be used to probe the masses and spins of these particles. In this paper, we present for the first time a new theoretical template for the bispectrum generated by massive spinning particles, valid for a general triangle configuration. We then proceed to perform a Fisher-matrix forecast to assess the potential of two next-generation spectroscopic galaxy surveys, EUCLID and DESI, to constrain the primordial non-Gaussianity sourced by these extra particles. We model the galaxy bispectrum using tree-level perturbation theory, accounting for redshift-space distortions and the Alcock-Paczynski effect, and forecast constraints on the primordial non-Gaussianity parameters marginalizing over all relevant biases and cosmological parameters. Our results suggest that these surveys would potentially be sensitive to any primordial non-Gaussianity with an amplitude larger than fNL≈ 1, for massive particles with spins 2, 3, and 4. Interestingly, if non-Gaussianities are present at that level, these surveys will be able to infer the masses of these spinning particles to within tens of percent. If detected, this would provide a very clear window into the particle content of our Universe during inflation.
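
    The Fisher-matrix forecast mentioned above follows the standard construction: the bispectrum B is differentiated with respect to the parameters of interest and weighted by the inverse covariance of the measured triangle configurations. The expression below is the generic textbook form (in schematic notation), not a statement of the paper's specific binning or covariance model.

      F_{ij} = \sum_{(k_1,k_2,k_3)} \frac{\partial B(k_1,k_2,k_3)}{\partial \theta_i}\,
               \mathrm{Cov}^{-1}\,
               \frac{\partial B(k_1,k_2,k_3)}{\partial \theta_j},
      \qquad \sigma(\theta_i) \geq \sqrt{(F^{-1})_{ii}}.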

  12. Modular and efficient ozone systems based on massively parallel chemical processing in microchannel plasma arrays: performance and commercialization

    Science.gov (United States)

    Kim, M.-H.; Cho, J. H.; Park, S.-J.; Eden, J. G.

    2017-08-01

    Plasmachemical systems based on the production of a specific molecule (O3) in literally thousands of microchannel plasmas simultaneously have been demonstrated, developed and engineered over the past seven years, and commercialized. At the heart of this new plasma technology is the plasma chip, a flat aluminum strip fabricated by photolithographic and wet chemical processes and comprising 24-48 channels, micromachined into nanoporous aluminum oxide, with embedded electrodes. By integrating 4-6 chips into a module, the mass output of an ozone microplasma system is scaled linearly with the number of modules operating in parallel. A 115 g/hr (2.7 kg/day) ozone system, for example, is realized by the combined output of 18 modules comprising 72 chips and 1,800 microchannels. The implications of this plasma processing architecture for scaling ozone production capability, and reducing capital and service costs when introducing redundancy into the system, are profound. In contrast to conventional ozone generator technology, microplasma systems operate reliably (albeit with reduced output) in ambient air and humidity levels up to 90%, a characteristic attributable to the water adsorption/desorption properties and electrical breakdown strength of nanoporous alumina. Extensive testing has documented chip and system lifetimes (MTBF) beyond 5,000 hours, and efficiencies >130 g/kWh when oxygen is the feedstock gas. Furthermore, the weight and volume of microplasma systems are a factor of 3-10 lower than those for conventional ozone systems of comparable output. Massively-parallel plasmachemical processing offers functionality, performance, and commercial value beyond that afforded by conventional technology, and is currently in operation in more than 30 countries worldwide.
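
    The linear scaling quoted above is plain arithmetic; a minimal sketch using only the figures given in the abstract (module, chip and channel counts, and the 115 g/hr output) is shown below.

      # figures quoted in the abstract: 18 modules = 72 chips = 1,800 microchannels = 115 g/hr
      modules, chips, channels, output_g_per_hr = 18, 72, 1_800, 115

      per_module = output_g_per_hr / modules               # ~6.4 g/hr per module
      print(f"chips per module: {chips // modules}, channels per chip: {channels // chips}")
      print(f"output per module: {per_module:.1f} g/hr, total: {output_g_per_hr * 24 / 1000:.1f} kg/day")

      # linear scaling: doubling the module count doubles the mass output
      print(f"36 modules -> {36 * per_module:.0f} g/hr")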

  13. Next generation solar energy. From fundamentals to applications

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2011-07-01

    Within the International Conference between 12th and 14th December, 2011 in Erlangen (Federal Republic of Germany) the following lectures were presented: (1) The opto-electronic physics required to approach the Shockley-Queisser efficiency limit (E. Yablonovitch); (2) The Shockley-Queisser-limit and beyond (G.H. Bauer); (3) Designing composite nanomaterials for photovoltaic devices (B. Rech); (4) Light-Material interactions in energy conversion (H. Atwater); (5) Functional imaging of hybrid nanostructures - Visualizing mechanisms of solar energy utilization (L. Lauhon); (6) Are photosynthetic proteins suitable for PV applications (Y. Rosenwaks); (7) Detailed balance limit in photovoltaic systems (U. Rau); (8) Plasmonics and nanophotonics for next generation photovoltaics (E. Garnett); (9) Dispersion, wave propagation and efficiency analysis of nanowire solar cells (B. Witzigmann); (10) Application of nanostructures to next generation photovoltaics - Opportunities and challenges from an industrial research perspective (L. Tsakalakos); (11) Triplet states in organic and organometallic photovoltaic cells (K.S. Schanze); (12) New photoelectrode architectures (J.T. Hupp); (13) Dendrimers for optoelectronic and photovoltaic applications (P. Ceroni); (14) Photon management with luminescent materials (J. Goldschmidt); (15) Economical aspects of next generation solar cell technologies (W. Hoffmann); (16) Scalability in solar energy conversion - First-row transition metal-based chromophores for dye-sensitized solar cells (J. McCusker); (17) Designing organic materials for photovoltaic devices (A. Harriman); (18) Molecular photovoltaics - What can we learn from model studies (B. Albinsson); (19) Porphyrin-sensitised titanium dioxide solar cells (D. Officer); (20) Light-harvesting: Charge separation, and charge-transportation properties of novel materials for organic photovoltaics (H. Imahori); (21) Phthalocyanines for molecular photovoltaics (T. Torres); (22) Photophysics of

  14. Next Generation, Si-Compatible Materials and Devices in the Si-Ge-Sn System

    Science.gov (United States)

    2015-10-09

    The work initially focused on growth of next generation Ge1-ySny alloys on Ge-buffered Si wafers via UHV CVD depositions of Ge3H8 and SnD4. (AFRL-AFOSR-VA-TR-2016-0044, John Kouvetakis, Arizona State University, final report.)

  15. A quantitative assessment of the Hadoop framework for analyzing massively parallel DNA sequencing data.

    Science.gov (United States)

    Siretskiy, Alexey; Sundqvist, Tore; Voznesenskiy, Mikhail; Spjuth, Ola

    2015-01-01

    New high-throughput technologies, such as massively parallel sequencing, have transformed the life sciences into a data-intensive field. The most common e-infrastructure for analyzing this data consists of batch systems that are based on high-performance computing resources; however, the bioinformatics software that is built on this platform does not scale well in the general case. Recently, the Hadoop platform has emerged as an interesting option to address the challenges of increasingly large datasets with distributed storage, distributed processing, built-in data locality, fault tolerance, and an appealing programming methodology. In this work we introduce metrics and report on a quantitative comparison between Hadoop and a single node of conventional high-performance computing resources for the tasks of short read mapping and variant calling. We calculate efficiency as a function of data size and observe that the Hadoop platform is more efficient for biologically relevant data sizes in terms of computing hours for both split and un-split data files. We also quantify the advantages of the data locality provided by Hadoop for NGS problems, and show that a classical architecture with network-attached storage will not scale when computing resources increase in numbers. Measurements were performed using ten datasets of different sizes, up to 100 gigabases, using the pipeline implemented in Crossbow. To make a fair comparison, we implemented an improved preprocessor for Hadoop with better performance for splittable data files. For improved usability, we implemented a graphical user interface for Crossbow in a private cloud environment using the CloudGene platform. All of the code and data in this study are freely available as open source in public repositories. From our experiments we can conclude that the improved Hadoop pipeline scales better than the same pipeline on high-performance computing resources, we also conclude that Hadoop is an economically viable
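
    The abstract reports efficiency as a function of data size without stating the formula; a common way to express such a comparison is the conventional parallel efficiency, the speedup over a single node divided by the number of nodes. The sketch below uses that standard definition with invented timings and is not necessarily the exact metric of the paper.

      def parallel_efficiency(single_node_hours, cluster_hours, nodes):
          # conventional definition: speedup divided by the number of nodes used
          speedup = single_node_hours / cluster_hours
          return speedup / nodes

      # invented example timings for a large read-mapping run
      print(f"efficiency: {parallel_efficiency(single_node_hours=120, cluster_hours=10, nodes=16):.2f}")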

  16. HLA typing: Conventional techniques v.next-generation sequencing

    African Journals Online (AJOL)

    The existing techniques have contributed significantly to our current knowledge of allelic diversity. At present, sequence-based typing (SBT) methods, in particular next-generation sequencing. (NGS), provide the highest possible resolution. NGS platforms were initially only used for genomic sequencing, but also showed.

  17. Parallel visualization on leadership computing resources

    Energy Technology Data Exchange (ETDEWEB)

    Peterka, T; Ross, R B [Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, IL 60439 (United States); Shen, H-W [Department of Computer Science and Engineering, Ohio State University, Columbus, OH 43210 (United States); Ma, K-L [Department of Computer Science, University of California at Davis, Davis, CA 95616 (United States); Kendall, W [Department of Electrical Engineering and Computer Science, University of Tennessee at Knoxville, Knoxville, TN 37996 (United States); Yu, H, E-mail: tpeterka@mcs.anl.go [Sandia National Laboratories, California, Livermore, CA 94551 (United States)

    2009-07-01

    Changes are needed in the way that visualization is performed, if we expect the analysis of scientific data to be effective at the petascale and beyond. By using similar techniques as those used to parallelize simulations, such as parallel I/O, load balancing, and effective use of interprocess communication, the supercomputers that compute these datasets can also serve as analysis and visualization engines for them. Our team is assessing the feasibility of performing parallel scientific visualization on some of the most powerful computational resources of the U.S. Department of Energy's National Laboratories in order to pave the way for analyzing the next generation of computational results. This paper highlights some of the conclusions of that research.

  18. Parallel visualization on leadership computing resources

    International Nuclear Information System (INIS)

    Peterka, T; Ross, R B; Shen, H-W; Ma, K-L; Kendall, W; Yu, H

    2009-01-01

    Changes are needed in the way that visualization is performed, if we expect the analysis of scientific data to be effective at the petascale and beyond. By using similar techniques as those used to parallelize simulations, such as parallel I/O, load balancing, and effective use of interprocess communication, the supercomputers that compute these datasets can also serve as analysis and visualization engines for them. Our team is assessing the feasibility of performing parallel scientific visualization on some of the most powerful computational resources of the U.S. Department of Energy's National Laboratories in order to pave the way for analyzing the next generation of computational results. This paper highlights some of the conclusions of that research.

  19. Next-generation sequencing for endocrine cancers: Recent advances and challenges.

    Science.gov (United States)

    Suresh, Padmanaban S; Venkatesh, Thejaswini; Tsutsumi, Rie; Shetty, Abhishek

    2017-05-01

    Contemporary molecular biology research tools have enriched numerous areas of biomedical research that address challenging diseases, including endocrine cancers (pituitary, thyroid, parathyroid, adrenal, testicular, ovarian, and neuroendocrine cancers). These tools have placed several intriguing clues before the scientific community. Endocrine cancers pose a major challenge in health care and research despite considerable attempts by researchers to understand their etiology. Microarray analyses have provided gene signatures from many cells, tissues, and organs that can differentiate healthy states from diseased ones, and even show patterns that correlate with stages of a disease. Microarray data can also elucidate the responses of endocrine tumors to therapeutic treatments. The rapid progress in next-generation sequencing methods has overcome many of the initial challenges of these technologies, and their advantages over microarray techniques have enabled them to emerge as valuable aids for clinical research applications (prognosis, identification of drug targets, etc.). A comprehensive review describing the recent advances in next-generation sequencing methods and their application in the evaluation of endocrine and endocrine-related cancers is lacking. The main purpose of this review is to illustrate the concepts that collectively constitute our current view of the possibilities offered by next-generation sequencing technological platforms, challenges to relevant applications, and perspectives on the future of clinical genetic testing of patients with endocrine tumors. We focus on recent discoveries in the use of next-generation sequencing methods for clinical diagnosis of endocrine tumors in patients and conclude with a discussion on persisting challenges and future objectives.

  20. The next generation CANDU 6

    International Nuclear Information System (INIS)

    Hopwood, J.M.

    1999-01-01

    AECL's product line of CANDU 6 and CANDU 9 nuclear power plants are adapted to respond to changing market conditions, experience feedback and technological development by a continuous improvement process of design evolution. The CANDU 6 Nuclear Power Plant design is a successful family of nuclear units, with the first four units entering service in 1983, and the most recent entering service this year. A further four CANDU 6 units are under construction. Starting in 1996, a focused forward-looking development program is under way at AECL to incorporate a series of individual improvements and integrate them into the CANDU 6, leading to the evolutionary development of the next-generation enhanced CANDU 6. The CANDU 6 improvements program includes all aspects of an NPP project, including engineering tools improvements, design for improved constructability, scheduling for faster, more streamlined commissioning, and improved operating performance. This enhanced CANDU 6 product will combine the benefits of design provenness (drawing on the more than 70 reactor-years experience of the seven operating CANDU 6 units), with the advantages of an evolutionary next-generation design. Features of the enhanced CANDU 6 design include: Advanced Human Machine Interface - built around the Advanced CANDU Control Centre; Advanced fuel design - using the newly demonstrated CANFLEX fuel bundle; Improved Efficiency based on improved utilization of waste heat; Streamlined System Design - including simplifications to improve performance and safety system reliability; Advanced Engineering Tools, -- featuring linked electronic databases from 3D CADDS, equipment specification and material management; Advanced Construction Techniques - based on open top equipment installation and the use of small skid mounted modules; Options defined for Passive Heat Sink capability and low-enrichment core optimization. (author)

  1. Bioinformatics for Next Generation Sequencing Data

    Directory of Open Access Journals (Sweden)

    Alberto Magi

    2010-09-01

    Full Text Available The emergence of next-generation sequencing (NGS) platforms imposes increasing demands on statistical methods and bioinformatic tools for the analysis and management of the huge amounts of data generated by these technologies. Even at the early stages of their commercial availability, a large number of software tools already exist for analyzing NGS data. These tools fit into several general categories, including alignment of sequence reads to a reference, base-calling and/or polymorphism detection, de novo assembly from paired or unpaired reads, structural variant detection, and genome browsing. This manuscript aims to guide readers in choosing the available computational tools that can be used to address the several steps of the data analysis workflow.

  2. Next Generation HeliMag UXO Mapping Technology

    Science.gov (United States)

    2010-01-01

    Ancillary instrumentation records aircraft height above ground and attitude. A fluxgate magnetometer is used to allow for aeromagnetic compensation ... This Next Generation HeliMag Unexploded Ordnance (UXO) Mapping system ... for deployment of seven total-field magnetometers on a Kevlar-reinforced boom mounted on a Bell 206L helicopter. The objectives of this

  3. Random number generators for large-scale parallel Monte Carlo simulations on FPGA

    Science.gov (United States)

    Lin, Y.; Wang, F.; Liu, B.

    2018-05-01

    Through parallelization, field programmable gate arrays (FPGAs) can achieve unprecedented speeds in large-scale parallel Monte Carlo (LPMC) simulations. FPGAs present both new constraints and new opportunities for the implementation of random number generators (RNGs), which are key elements of any Monte Carlo (MC) simulation system. Using empirical and application-based tests, this study evaluates all four RNGs used in previous FPGA-based MC studies, together with newly proposed FPGA implementations of two well-known high-quality RNGs suitable for LPMC studies on FPGA. One of the newly proposed FPGA implementations, a parallel version of the additive lagged Fibonacci generator (Parallel ALFG), is found to be the best among the evaluated RNGs in fulfilling the needs of LPMC simulations on FPGA.
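
    An additive lagged Fibonacci generator (ALFG) is defined by the recurrence x_n = (x_{n-s} + x_{n-r}) mod M with lags r > s. The sketch below implements that recurrence in Python with the commonly used lags (24, 55) and a 32-bit modulus; these parameters are assumptions chosen for illustration, not necessarily those of the FPGA implementation evaluated in the paper. A parallel version would typically give each stream its own independently seeded lag table.

      class ALFG:
          """Additive lagged Fibonacci generator: x_n = (x_{n-s} + x_{n-r}) mod 2^32."""
          def __init__(self, seed, s=24, r=55):
              self.s, self.r = s, r
              # fill the lag table with a simple LCG so the recurrence can start
              state, self.buf = seed, []
              for _ in range(r):
                  state = (1664525 * state + 1013904223) % 2**32
                  self.buf.append(state)

          def next(self):
              x = (self.buf[-self.s] + self.buf[-self.r]) % 2**32
              self.buf.append(x)
              self.buf.pop(0)
              return x

      gen = ALFG(seed=12345)
      print([gen.next() for _ in range(5)])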

  4. Application of Next Generation Sequencing on Genetic Testing

    DEFF Research Database (Denmark)

    Li, Jian

    The discovery of genetic factors behind an increasing number of human diseases, together with growing public awareness of genetics, has caused demand for genetic testing to rise rapidly. However, traditional genetic testing methods cannot meet all of these requirements. Next generation seq...

  5. HLA typing: Conventional techniques v. next-generation sequencing ...

    African Journals Online (AJOL)

    Background. The large number of population-specific polymorphisms present in the HLA complex in the South African (SA) population reduces the probability of finding an adequate HLA-matched donor for individuals in need of an unrelated haematopoietic stem cell transplantation (HSCT). Next-generation sequencing ...

  6. NGSS and the Next Generation of Science Teachers

    Science.gov (United States)

    Bybee, Rodger W.

    2014-01-01

    This article centers on the "Next Generation Science Standards" (NGSS) and their implications for teacher development, particularly at the undergraduate level. After an introduction to NGSS and the influence of standards in the educational system, the article addresses specific educational shifts--interconnecting science and engineering…

  7. Next-generation sequencing offers new insights into DNA degradation

    DEFF Research Database (Denmark)

    Overballe-Petersen, Søren; Orlando, Ludovic Antoine Alexandre; Willerslev, Eske

    2012-01-01

    The processes underlying DNA degradation are central to various disciplines, including cancer research, forensics and archaeology. The sequencing of ancient DNA molecules on next-generation sequencing platforms provides direct measurements of cytosine deamination, depurination and fragmentation rates that previously were obtained only from extrapolations of results from in vitro kinetic experiments performed over short timescales. For example, recent next-generation sequencing of ancient DNA reveals purine bases as one of the main targets of postmortem hydrolytic damage, through base elimination and strand breakage. It also shows substantially increased rates of DNA base-loss at guanosine. In this review, we argue that the latter results from an electron resonance structure unique to guanosine, rather than adenosine having an extra resonance structure over guanosine as previously suggested.

  8. Technology for the Next-Generation-Mobile User Experience

    Science.gov (United States)

    Delagi, Greg

    The current mobile-handset market is a vital and growing one, being driven by technology advances, including increased bandwidth and processing performance, as well as reduced power consumption and improved screen technologies. The 3G/4G handsets of today are multimedia internet devices with increased screen size, HD video and gaming, interactive touch screens, HD camera and camcorders, as well as incredible social, entertainment, and productivity applications. While mobile-technology advancements to date have made us more social in many ways, new advancements over the next decade will bring us to the next level, allowing mobile users to experience new types of "virtual" social interactions with all the senses. The mobile handsets of the future will be smart autonomous-lifestyle devices with a multitude of incorporated sensors, applications and display options, all designed to make your life easier and more productive! With future display media, including 3D imaging, virtual interaction and conferencing will be possible, making every call feel like you are in the same room, providing an experience far beyond today's video conferencing technology. 3D touch-screen with integrated image-projection technologies will work in conjunction with gesturing to bring a new era of intuitive mobile device applications, interaction, and information sharing. Looking to the future, there are many challenges to be faced in delivering a smart mobile companion device that will meet the user demands. One demand will be for the availability of new and compelling services, and features on the "mobile companion". These mobile companions will be more than just Internet devices, and will function as on-the-go workstations, allowing users to function as if they were sitting in front of their computer in the office or at home. The massive amounts of data that will be transmitted through, to and from these mobile companions will require immense improvements in system performance, including

  9. Enabling inspection solutions for future mask technologies through the development of massively parallel E-Beam inspection

    Science.gov (United States)

    Malloy, Matt; Thiel, Brad; Bunday, Benjamin D.; Wurm, Stefan; Jindal, Vibhu; Mukhtar, Maseeh; Quoi, Kathy; Kemen, Thomas; Zeidler, Dirk; Eberle, Anna Lena; Garbowski, Tomasz; Dellemann, Gregor; Peters, Jan Hendrik

    2015-09-01

    The new device architectures and materials being introduced for sub-10nm manufacturing, combined with the complexity of multiple patterning and the need for improved hotspot detection strategies, have pushed current wafer inspection technologies to their limits. In parallel, gaps in mask inspection capability are growing as new generations of mask technologies are developed to support these sub-10nm wafer manufacturing requirements. In particular, the challenges associated with nanoimprint and extreme ultraviolet (EUV) mask inspection require new strategies that enable fast inspection at high sensitivity. The tradeoffs between sensitivity and throughput for optical and e-beam inspection are well understood. Optical inspection offers the highest throughput and is the current workhorse of the industry for both wafer and mask inspection. E-beam inspection offers the highest sensitivity but has historically lacked the throughput required for widespread adoption in the manufacturing environment. It is unlikely that continued incremental improvements to either technology will meet tomorrow's requirements, and therefore a new inspection technology approach is required; one that combines the high-throughput performance of optical with the high-sensitivity capabilities of e-beam inspection. To support the industry in meeting these challenges SUNY Poly SEMATECH has evaluated disruptive technologies that can meet the requirements for high volume manufacturing (HVM), for both the wafer fab [1] and the mask shop. Highspeed massively parallel e-beam defect inspection has been identified as the leading candidate for addressing the key gaps limiting today's patterned defect inspection techniques. As of late 2014 SUNY Poly SEMATECH completed a review, system analysis, and proof of concept evaluation of multiple e-beam technologies for defect inspection. A champion approach has been identified based on a multibeam technology from Carl Zeiss. This paper includes a discussion on the

  10. Thermal generation of the magnetic field in the surface layers of massive stars

    Science.gov (United States)

    Urpin, V.

    2017-11-01

    A new magnetic field-generation mechanism based on the Nernst effect is considered in hot massive stars. This mechanism can operate in the upper atmospheres of O and B stars where departures from the LTE form a region with the inverse temperature gradient.

  11. NOAA Next Generation Radar (NEXRAD) Level 2 Base Data

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset consists of Level II weather radar data collected from Next-Generation Radar (NEXRAD) stations located in the contiguous United States, Alaska, Hawaii,...

  12. 76 FR 77939 - Proposed Provision of Navigation Services for the Next Generation Air Transportation System...

    Science.gov (United States)

    2011-12-15

    ... DEPARTMENT OF TRANSPORTATION Federal Aviation Administration 14 CFR Parts 91, 121, 125, 129, and 135 Proposed Provision of Navigation Services for the Next Generation Air Transportation System (Next...) navigation infrastructure to enable performance-based navigation (PBN) as part of the Next Generation Air...

  13. GASPRNG: GPU accelerated scalable parallel random number generator library

    Science.gov (United States)

    Gao, Shuang; Peterson, Gregory D.

    2013-04-01

    Graphics processors represent a promising technology for accelerating computational science applications. Many computational science applications require fast and scalable random number generation with good statistical properties, so they use the Scalable Parallel Random Number Generators library (SPRNG). We present the GPU Accelerated SPRNG library (GASPRNG) to accelerate SPRNG in GPU-based high performance computing systems. GASPRNG includes code for a host CPU and CUDA code for execution on NVIDIA graphics processing units (GPUs) along with a programming interface to support various usage models for pseudorandom numbers and computational science applications executing on the CPU, GPU, or both. This paper describes the implementation approach used to produce high performance and also describes how to use the programming interface. The programming interface allows a user to be able to use GASPRNG the same way as SPRNG on traditional serial or parallel computers as well as to develop tightly coupled programs executing primarily on the GPU. We also describe how to install GASPRNG and use it. To help illustrate linking with GASPRNG, various demonstration codes are included for the different usage models. GASPRNG on a single GPU shows up to 280x speedup over SPRNG on a single CPU core and is able to scale for larger systems in the same manner as SPRNG. Because GASPRNG generates identical streams of pseudorandom numbers as SPRNG, users can be confident about the quality of GASPRNG for scalable computational science applications. Catalogue identifier: AEOI_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEOI_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: UTK license. No. of lines in distributed program, including test data, etc.: 167900 No. of bytes in distributed program, including test data, etc.: 1422058 Distribution format: tar.gz Programming language: C and CUDA. Computer: Any PC or

  14. Cloud Sourcing – Next Generation Outsourcing?

    OpenAIRE

    Muhic, Mirella; Johansson, Björn

    2014-01-01

    Although Cloud Sourcing has been around for some time it could be questioned what actually is known about it. This paper presents a literature review on the specific question if Cloud Sourcing could be seen as the next generation of outsourcing. The reason for doing this is that from an initial sourcing study we found that the sourcing decisions seems to go in the direction of outsourcing as a service which could be described as Cloud Sourcing. Whereas some are convinced that Cloud Sourcing r...

  15. Securing Networks from Modern Threats using Next Generation Firewalls

    OpenAIRE

    Delgiusto, Valter

    2016-01-01

    Classic firewalls have long been unable to cope with modern threats that ordinary Internet users are exposed to. This thesis discusses their successors - the next-generation firewalls. The first part of the thesis describes modern threats and attacks. We described in detail the DoS and APT attacks, which are among the most frequent and which may cause most damage to the system under attack. Then we explained the theoretical basics of firewalls and described the functionalities of next gen...

  16. Parallel computing by Monte Carlo codes MVP/GMVP

    International Nuclear Information System (INIS)

    Nagaya, Yasunobu; Nakagawa, Masayuki; Mori, Takamasa

    2001-01-01

    General-purpose Monte Carlo codes MVP/GMVP are well-vectorized and thus enable high-speed Monte Carlo calculations. To achieve further speedups, we parallelized the codes on different types of parallel computing platforms, or by using the standard parallelization library MPI. The platforms used for benchmark calculations were a distributed-memory vector-parallel computer (Fujitsu VPP500), a distributed-memory massively parallel computer (Intel Paragon) and distributed-memory scalar-parallel computers (Hitachi SR2201, IBM SP2). In general, linear speedup could be obtained for large-scale problems, but parallelization efficiency decreased as the batch size per processing element (PE) became smaller. It was also found that the statistical uncertainty of assembly powers was less than 0.1% for a PWR full-core calculation with more than 10 million histories, which took about 1.5 hours with massively parallel computing. (author)
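
    The quoted statistical uncertainty is consistent with the usual Monte Carlo convergence, in which the relative standard error falls as one over the square root of the number of histories. A small sketch under that standard assumption (the per-history spread is an invented example value):

      import math

      def relative_error(per_history_spread, histories):
          # standard 1/sqrt(N) Monte Carlo convergence of the standard error
          return per_history_spread / math.sqrt(histories)

      # a per-history relative spread of ~300% shrinks below 0.1% after 10^7 histories
      print(f"{relative_error(3.0, 10_000_000):.4%}")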

  17. Next generation of energy production systems

    International Nuclear Information System (INIS)

    Rouault, J.; Garnier, J.C.; Carre, F.

    2003-01-01

    This document gathers the slides that have been presented at the Gedepeon conference. Gedepeon is a research group involving scientists from Cea (French atomic energy commission), CNRS (national center of scientific research), EDF (electricity of France) and Framatome that is devoted to the study of new energy sources and particularly to the study of the future generations of nuclear systems. The contributions have been classed into 9 topics: 1) gas cooled reactors, 2) molten salt reactors (MSBR), 3) the recycling of plutonium and americium, 4) reprocessing of molten salt reactor fuels, 5) behavior of graphite under radiation, 6) metallic materials for molten salt reactors, 7) refractory fuels of gas cooled reactors, 8) the nuclear cycle for the next generations of nuclear systems, and 9) organization of research programs on the new energy sources

  18. Improvements on non-equilibrium and transport Green function techniques: The next-generation TRANSIESTA

    Science.gov (United States)

    Papior, Nick; Lorente, Nicolás; Frederiksen, Thomas; García, Alberto; Brandbyge, Mads

    2017-03-01

    We present novel methods implemented within the non-equilibrium Green function code (NEGF) TRANSIESTA based on density functional theory (DFT). Our flexible, next-generation DFT-NEGF code handles devices with one or multiple electrodes (Ne ≥ 1) with individual chemical potentials and electronic temperatures. We describe its novel methods for electrostatic gating, contour optimizations, and assertion of charge conservation, as well as the newly implemented algorithms for optimized and scalable matrix inversion, performance-critical pivoting, and hybrid parallelization. Additionally, a generic NEGF "post-processing" code (TBTRANS/PHTRANS) for electron and phonon transport is presented with several novelties such as Hamiltonian interpolations, Ne ≥ 1 electrode capability, bond-currents, generalized interface for user-defined tight-binding transport, transmission projection using eigenstates of a projected Hamiltonian, and fast inversion algorithms for large-scale simulations easily exceeding 106 atoms on workstation computers. The new features of both codes are demonstrated and bench-marked for relevant test systems.

  19. Novel nanostructures for next generation dye-sensitized solar cells

    KAUST Repository

    Tétreault, Nicolas; Grätzel, Michael

    2012-01-01

    Herein, we review our latest advancements in nanostructured photoanodes for next generation photovoltaics in general and dye-sensitized solar cells in particular. Bottom-up self-assembly techniques are developed to fabricate large-area 3D

  20. Rapid profiling of the antigen regions recognized by serum antibodies using massively parallel sequencing of antigen-specific libraries.

    KAUST Repository

    Domina, Maria; Lanza Cariccio, Veronica; Benfatto, Salvatore; D'Aliberti, Deborah; Venza, Mario; Borgogni, Erica; Castellino, Flora; Biondo, Carmelo; D'Andrea, Daniel; Grassi, Luigi; Tramontano, Anna; Teti, Giuseppe; Felici, Franco; Beninati, Concetta

    2014-01-01

    There is a need for techniques capable of identifying the antigenic epitopes targeted by polyclonal antibody responses during deliberate or natural immunization. Although successful, traditional phage library screening is laborious and can map only some of the epitopes. To accelerate and improve epitope identification, we have employed massive sequencing of phage-displayed antigen-specific libraries using the Illumina MiSeq platform. This enabled us to precisely identify the regions of a model antigen, the meningococcal NadA virulence factor, targeted by serum antibodies in vaccinated individuals and to rank hundreds of antigenic fragments according to their immunoreactivity. We found that next generation sequencing can significantly empower the analysis of antigen-specific libraries by allowing simultaneous processing of dozens of library/serum combinations in less than two days, including the time required for antibody-mediated library selection. Moreover, compared with traditional plaque picking, the new technology (named Phage-based Representation OF Immuno-Ligand Epitope Repertoire or PROFILER) provides superior resolution in epitope identification. PROFILER seems ideally suited to streamline and guide rational antigen design, adjuvant selection, and quality control of newly produced vaccines. Furthermore, this method is also susceptible to find important applications in other fields covered by traditional quantitative serology.
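
    Ranking antigenic fragments by immunoreactivity from deep-sequencing data essentially amounts to comparing each fragment's read frequency after antibody-mediated selection with its frequency in the unselected library. The sketch below performs that enrichment calculation on invented read counts; it is a conceptual illustration, not the PROFILER analysis code.

      def enrichment(selected_counts, library_counts):
          # relative frequency after antibody selection vs. in the unselected library
          sel_total = sum(selected_counts.values())
          lib_total = sum(library_counts.values())
          return {frag: (selected_counts.get(frag, 0) / sel_total)
                        / (library_counts[frag] / lib_total)
                  for frag in library_counts}

      library_reads  = {"fragA": 900, "fragB": 1100, "fragC": 1000}   # invented read counts
      selected_reads = {"fragA": 50,  "fragB": 2600, "fragC": 350}
      ranked = sorted(enrichment(selected_reads, library_reads).items(), key=lambda kv: -kv[1])
      print(ranked)   # fragB ranks highest, i.e. the most immunoreactive fragment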

  1. Rapid profiling of the antigen regions recognized by serum antibodies using massively parallel sequencing of antigen-specific libraries.

    Directory of Open Access Journals (Sweden)

    Maria Domina

    Full Text Available There is a need for techniques capable of identifying the antigenic epitopes targeted by polyclonal antibody responses during deliberate or natural immunization. Although successful, traditional phage library screening is laborious and can map only some of the epitopes. To accelerate and improve epitope identification, we have employed massive sequencing of phage-displayed antigen-specific libraries using the Illumina MiSeq platform. This enabled us to precisely identify the regions of a model antigen, the meningococcal NadA virulence factor, targeted by serum antibodies in vaccinated individuals and to rank hundreds of antigenic fragments according to their immunoreactivity. We found that next generation sequencing can significantly empower the analysis of antigen-specific libraries by allowing simultaneous processing of dozens of library/serum combinations in less than two days, including the time required for antibody-mediated library selection. Moreover, compared with traditional plaque picking, the new technology (named Phage-based Representation OF Immuno-Ligand Epitope Repertoire or PROFILER) provides superior resolution in epitope identification. PROFILER seems ideally suited to streamline and guide rational antigen design, adjuvant selection, and quality control of newly produced vaccines. Furthermore, this method is also susceptible to find important applications in other fields covered by traditional quantitative serology.

  2. Rapid profiling of the antigen regions recognized by serum antibodies using massively parallel sequencing of antigen-specific libraries.

    KAUST Repository

    Domina, Maria

    2014-12-04

    There is a need for techniques capable of identifying the antigenic epitopes targeted by polyclonal antibody responses during deliberate or natural immunization. Although successful, traditional phage library screening is laborious and can map only some of the epitopes. To accelerate and improve epitope identification, we have employed massive sequencing of phage-displayed antigen-specific libraries using the Illumina MiSeq platform. This enabled us to precisely identify the regions of a model antigen, the meningococcal NadA virulence factor, targeted by serum antibodies in vaccinated individuals and to rank hundreds of antigenic fragments according to their immunoreactivity. We found that next generation sequencing can significantly empower the analysis of antigen-specific libraries by allowing simultaneous processing of dozens of library/serum combinations in less than two days, including the time required for antibody-mediated library selection. Moreover, compared with traditional plaque picking, the new technology (named Phage-based Representation OF Immuno-Ligand Epitope Repertoire or PROFILER) provides superior resolution in epitope identification. PROFILER seems ideally suited to streamline and guide rational antigen design, adjuvant selection, and quality control of newly produced vaccines. Furthermore, this method is also susceptible to find important applications in other fields covered by traditional quantitative serology.

  3. Next generation HOM-damping

    Science.gov (United States)

    Marhauser, Frank

    2017-06-01

    Research and development for superconducting radio-frequency cavities has made enormous progress over the last decades from the understanding of theoretical limitations to the industrial mass fabrication of cavities for large-scale particle accelerators. Key technologies remain hot topics due to continuously growing demands on cavity performance, particularly when in pursuit of high quality beams at higher beam currents or higher luminosities than currently achievable. This relates to higher order mode (HOM) damping requirements. Meeting the desired beam properties implies avoiding coupled multi-bunch or beam break-up instabilities depending on the machine and beam parameters that will set the acceptable cavity impedance thresholds. The use of cavity HOM-dampers is crucial to absorb the wakefields, comprised by all beam-induced cavity Eigenmodes, to beam-dynamically safe levels and to reduce the heat load at cryogenic temperature. Cavity damping concepts may vary, but are principally based on coaxial and waveguide couplers as well as beam line absorbers or any combination. Next generation energy recovery linacs and circular colliders call for cavities with strong HOM-damping that can exceed the state-of-the-art, while the operating mode efficiency shall not be significantly compromised concurrently. This imposes major challenges given the rather limited damping concepts. A detailed survey of established cavities is provided scrutinizing the achieved damping performance, shortcomings, and potential improvements. The scaling of the highest passband mode impedances is numerically evaluated in dependence on the number of cells for a single-cell up to a nine-cell cavity, which reveals the increased probability of trapped modes. This is followed by simulations for single-cell and five-cell cavities, which incorporate multiple damping schemes to assess the most efficient concepts. The usage and viability of on-cell dampers is elucidated for the single-cell cavity since it

  4. BATCH-GE: Batch analysis of Next-Generation Sequencing data for genome editing assessment

    Science.gov (United States)

    Boel, Annekatrien; Steyaert, Woutert; De Rocker, Nina; Menten, Björn; Callewaert, Bert; De Paepe, Anne; Coucke, Paul; Willaert, Andy

    2016-01-01

    Targeted mutagenesis by the CRISPR/Cas9 system is currently revolutionizing genetics. The ease of this technique has enabled genome engineering in-vitro and in a range of model organisms and has pushed experimental dimensions to unprecedented proportions. Due to its tremendous progress in terms of speed, read length, throughput and cost, Next-Generation Sequencing (NGS) has been increasingly used for the analysis of CRISPR/Cas9 genome editing experiments. However, the current tools for genome editing assessment lack flexibility and fall short in the analysis of large amounts of NGS data. Therefore, we designed BATCH-GE, an easy-to-use bioinformatics tool for batch analysis of NGS-generated genome editing data, available from https://github.com/WouterSteyaert/BATCH-GE.git. BATCH-GE detects and reports indel mutations and other precise genome editing events and calculates the corresponding mutagenesis efficiencies for a large number of samples in parallel. Furthermore, this new tool provides flexibility by allowing the user to adapt a number of input variables. The performance of BATCH-GE was evaluated in two genome editing experiments, aiming to generate knock-out and knock-in zebrafish mutants. This tool will not only contribute to the evaluation of CRISPR/Cas9-based experiments, but will be of use in any genome editing experiment and has the ability to analyze data from every organism with a sequenced genome. PMID:27461955
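
    The mutagenesis efficiency that BATCH-GE reports comes down to the fraction of aligned reads carrying an editing event at the target site. The sketch below computes that fraction from invented per-read classifications; it is an illustration of the quantity, not code taken from the BATCH-GE repository linked above.

      def editing_efficiency(read_calls):
          # read_calls: per-read labels, e.g. "wild_type", "indel", "knock_in"
          edited = sum(1 for call in read_calls if call != "wild_type")
          return edited / len(read_calls)

      # invented example for one sample
      sample = ["wild_type"] * 620 + ["indel"] * 350 + ["knock_in"] * 30
      print(f"mutagenesis efficiency: {editing_efficiency(sample):.1%}")   # 38.0%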

  5. Next Generation Protein Interactomes for Plant Systems Biology and Biomass Feedstock Research

    Energy Technology Data Exchange (ETDEWEB)

    Ecker, Joseph Robert [The Salk Inst. for Biological Studies, La Jolla, CA (United States). Genome Analysis and Plant Biology Lab.; Trigg, Shelly [The Salk Inst. for Biological Studies, La Jolla, CA (United States). Genome Analysis and Plant Biology Lab.; Univ. of California, San Diego, CA (United States). Biological Sciences Dept.; Garza, Renee [The Salk Inst. for Biological Studies, La Jolla, CA (United States). Genome Analysis and Plant Biology Lab.; Song, Haili [The Salk Inst. for Biological Studies, La Jolla, CA (United States). Genome Analysis and Plant Biology Lab.; MacWilliams, Andrew [The Salk Inst. for Biological Studies, La Jolla, CA (United States). Genome Analysis and Plant Biology Lab.; Nery, Joseph [The Salk Inst. for Biological Studies, La Jolla, CA (United States). Genome Analysis and Plant Biology Lab.; Reina, Joaquin [The Salk Inst. for Biological Studies, La Jolla, CA (United States). Genome Analysis and Plant Biology Lab.; Bartlett, Anna [The Salk Inst. for Biological Studies, La Jolla, CA (United States). Genome Analysis and Plant Biology Lab.; Castanon, Rosa [The Salk Inst. for Biological Studies, La Jolla, CA (United States). Genome Analysis and Plant Biology Lab.; Goubil, Adeline [The Salk Inst. for Biological Studies, La Jolla, CA (United States). Genome Analysis and Plant Biology Lab.; Feeney, Joseph [The Salk Inst. for Biological Studies, La Jolla, CA (United States). Genome Analysis and Plant Biology Lab.; O' Malley, Ronan [The Salk Inst. for Biological Studies, La Jolla, CA (United States). Genome Analysis and Plant Biology Lab.; Huang, Shao-shan Carol [The Salk Inst. for Biological Studies, La Jolla, CA (United States). Genome Analysis and Plant Biology Lab.; Zhang, Zhuzhu [The Salk Inst. for Biological Studies, La Jolla, CA (United States). Genome Analysis and Plant Biology Lab.; Galli, Mary [The Salk Inst. for Biological Studies, La Jolla, CA (United States). Genome Analysis and Plant Biology Lab.

    2016-11-30

    Biofuel crop cultivation is a necessary step towards a sustainable future, making genomic studies of these crops a priority. While the technology platforms that currently exist for studying non-model crop species, like switchgrass or sorghum, have yielded large quantities of genomic and expression data, a large gap still exists between molecular mechanism and phenotype. Molecular activity at the level of protein-protein interactions has recently begun to bridge this gap, providing a more global perspective. Interactome analysis has defined more specific functional roles of proteins based on their interaction partners, neighborhoods, and other network features, making it possible to distinguish unique modules of immune response to different plant pathogens (Jiang, Dong, and Zhang 2016). As we work towards cultivating hardier biofuel crops, interactome data will lead to uncovering crop-specific defense and development networks. However, the collection of protein interaction data has been limited to expensive, time-consuming, hard-to-scale assays that mostly require cloned ORF collections. For these reasons, we have developed a highly scalable, economical, and sensitive yeast two-hybrid assay, ProCREate, that can be universally applied to generate proteome-wide primary interactome data. ProCREate enables en masse pooling and massively parallel sequencing for the identification of interacting proteins by exploiting Cre-lox recombination. ProCREate can be used to screen ORF/cDNA libraries from feedstock plant tissues. The interactome data generated will yield deeper insight into many molecular processes and pathways that can be used to guide improvement of feedstock productivity and sustainability.

  6. Preparing the Next Generation of Educators for Democracy

    Science.gov (United States)

    Embry-Jenlink, Karen

    2018-01-01

    In the keynote address of the 42nd annual meeting of the Southeastern Regional Educators Association (SRATE), ATE President Karen Embry-Jenlink examines the critical role of teacher educators in preparing the next generation of citizens and leaders to sustain democracy. Drawing from historic and current events and personal experience,…

  7. Next Generation Science Standards: All Standards, All Students

    Science.gov (United States)

    Lee, Okhee; Miller, Emily C.; Januszyk, Rita

    2014-01-01

    The Next Generation Science Standards (NGSS) offer a vision of science teaching and learning that presents both learning opportunities and demands for all students, particularly student groups that have traditionally been underserved in science classrooms. The NGSS have addressed issues of diversity and equity from their inception, and the NGSS…

  8. A task parallel implementation of fast multipole methods

    KAUST Repository

    Taura, Kenjiro; Nakashima, Jun; Yokota, Rio; Maruyama, Naoya

    2012-01-01

    This paper describes a task parallel implementation of ExaFMM, an open source implementation of fast multipole methods (FMM), using a lightweight task parallel library MassiveThreads. Although there have been many attempts on parallelizing FMM
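
    The fork-join tasking pattern that such task-parallel tree codes rely on can be illustrated with a small Python sketch; here concurrent.futures stands in for a lightweight tasking runtime such as MassiveThreads, and the per-cell kernel is a hypothetical placeholder rather than ExaFMM code.

```python
# Illustrative sketch only: the fork-join traversal pattern used by task-parallel
# tree codes, with Python's concurrent.futures standing in for a lightweight
# tasking runtime. Tasks are spawned near the root and recursion is serial below.
from concurrent.futures import ThreadPoolExecutor

def evaluate_cell(cell):
    """Placeholder for a per-cell interaction kernel (hypothetical)."""
    return sum(cell)

def traverse(cells, pool, depth=0, max_parallel_depth=2):
    """Recursively split the cell list; fork tasks for the top levels only."""
    if len(cells) == 1:
        return evaluate_cell(cells[0])
    mid = len(cells) // 2
    left, right = cells[:mid], cells[mid:]
    if depth < max_parallel_depth:
        future = pool.submit(traverse, left, pool, depth + 1, max_parallel_depth)
        right_sum = traverse(right, pool, depth + 1, max_parallel_depth)
        return future.result() + right_sum
    return traverse(left, pool, depth + 1) + traverse(right, pool, depth + 1)

if __name__ == "__main__":
    cells = [[float(i + j) for j in range(8)] for i in range(16)]
    with ThreadPoolExecutor(max_workers=4) as pool:
        print(traverse(cells, pool))  # sum over all cells
```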

  9. Next generation advanced nuclear reactor designs

    International Nuclear Information System (INIS)

    Turgut, M. H.

    2009-01-01

    Growing energy demand driven by technological developments and the increase in the world population, together with gradually diminishing energy resources, has made nuclear power an indispensable option. Renewable energy sources like solar, wind and geothermal may be suited to meeting some local needs. Environmentally friendly nuclear energy, a suitable solution for large-scale demands, is evolving towards highly economical, advanced next generation reactors that incorporate technological developments and years of operating experience. The enhancement of safety and reliability, facilitation of maintainability, and full compatibility with the environment are the goals of the new generation reactors. The protection of investment and property is considered as well as the protection of the environment and mankind. These reactors have become economically attractive compared to fossil-fired units through the use of standard designs, the replacement of some active systems by passive ones, reduced construction time and increased operating lifetime. Evolutionary designs were introduced first by improving conventional plants; revolutionary systems, denoted as Generation IV, were then conceived to meet future needs. Investigations of advanced, proliferation-resistant fuel cycle technologies have been initiated to minimize the radioactive waste burden by using new generation fast reactors and ADS transmuters.

  10. Next-Generation Sequencing in the Mycology Lab.

    Science.gov (United States)

    Zoll, Jan; Snelders, Eveline; Verweij, Paul E; Melchers, Willem J G

    New state-of-the-art sequencing techniques offer valuable tools both for the detection of mycobiota and for understanding the molecular mechanisms of resistance against antifungal compounds and of virulence. The introduction of new sequencing platforms with enhanced capacity and reduced costs for sequence analysis provides a potentially powerful tool for mycological diagnosis and research. In this review, we summarize the applications of next-generation sequencing techniques in mycology.

  11. Fiber to the home: next generation network

    Science.gov (United States)

    Yang, Chengxin; Guo, Baoping

    2006-07-01

    A Fiber To The Home (FTTH) strategy for next generation networks capable of carrying converged telephone, television (TV), very high-speed internet, and very high-speed bi-directional data services (such as video-on-demand (VOD) and gaming) is presented. The potential market is analyzed. The barriers and appropriate strategies are also discussed, along with several technical issues such as powering methods, optical fiber cables, and different network architectures.

  12. Next generation multi-particle event generators for the MSSM

    International Nuclear Information System (INIS)

    Reuter, J.; Kilian, W.; Hagiwara, K.; Krauss, F.; Schumann, S.; Rainwater, D.

    2005-12-01

    We present a next generation of multi-particle Monte Carlo (MC) event generators for the LHC and ILC for the MSSM, namely the three program packages MadGraph/MadEvent, WHIZARD/O'Mega and Sherpa/AMEGIC++. The interesting but difficult phenomenology of supersymmetric models at the upcoming colliders demands a corresponding complexity and maturity from simulation tools. This includes multi-particle final states, reducible and irreducible backgrounds, spin correlations, real emission of photons and gluons, etc., which are incorporated in the programs presented here. The framework of a model with such a huge particle content and as complicated as the MSSM makes stringent tests and comparisons of the codes indispensable. Various tests show agreement among the three different programs; the tables of cross sections produced in these tests may serve as a future reference for other codes. Furthermore, first MSSM physics analyses performed with these programs are presented here. (orig.)

  13. Next-Generation Thermal Infrared Multi-Body Radiometer Experiment (TIMBRE)

    Science.gov (United States)

    Kenyon, M.; Mariani, G.; Johnson, B.; Brageot, E.; Hayne, P.

    2016-10-01

    We have developed an instrument concept called TIMBRE which belongs to the important class of instruments called thermal imaging radiometers (TIRs). TIMBRE is the next-generation TIR with unparalleled performance compared to the state-of-the-art.

  14. Next generation ATCA control infrastructure for the CMS Phase-2 upgrades

    CERN Document Server

    Smith, Wesley; Svetek, Aleš; Tikalsky, Jes; Fobes, Robert; Dasu, Sridhara; Vicente, Marcelo

    2017-01-01

    A next generation control infrastructure to be used in Advanced TCA (ATCA) blades at the CMS experiment is being designed and tested. Several ATCA systems are being prepared for the High-Luminosity LHC (HL-LHC) and will be installed at CMS during technical stops. The next generation control infrastructure will provide all the necessary hardware, firmware and software required in these systems, decreasing development time and increasing flexibility. The complete infrastructure includes an Intelligent Platform Management Controller (IPMC), a Module Management Controller (MMC) and an Embedded Linux Mezzanine (ELM) processing card.

  15. What can next generation sequencing do for you? Next generation sequencing as a valuable tool in plant research

    OpenAIRE

    Bräutigam, Andrea; Gowik, Udo

    2010-01-01

    Next generation sequencing (NGS) technologies have opened fascinating opportunities for the analysis of plants with and without a sequenced genome on a genomic scale. During the last few years, NGS methods have become widely available and cost effective. They can be applied to a wide variety of biological questions, from the sequencing of complete eukaryotic genomes and transcriptomes, to the genome-scale analysis of DNA-protein interactions. In this review, we focus on the use of NGS for pla...

  16. Engineering-Based Thermal CFD Simulations on Massive Parallel Systems

    KAUST Repository

    Frisch, Jérôme; Mundani, Ralf-Peter; Rank, Ernst; van Treeck, Christoph

    2015-01-01

    The development of parallel Computational Fluid Dynamics (CFD) codes is a challenging task that entails efficient parallelization concepts and strategies in order to achieve good scalability values when running those codes on modern supercomputers

  17. Targeted enrichment strategies for next-generation plant biology

    Science.gov (United States)

    Richard Cronn; Brian J. Knaus; Aaron Liston; Peter J. Maughan; Matthew Parks; John V. Syring; Joshua. Udall

    2012-01-01

    The dramatic advances offered by modern DNA sequencers continue to redefine the limits of what can be accomplished in comparative plant biology. Even with recent achievements, however, plant genomes present obstacles that can make it difficult to execute large-scale population and phylogenetic studies on next-generation sequencing platforms. Factors like large genome...

  18. Next Generation Science Standards: Adoption and Implementation Workbook

    Science.gov (United States)

    Peltzman, Alissa; Rodriguez, Nick

    2013-01-01

    The Next Generation Science Standards (NGSS) represent the culmination of years of collaboration and effort by states, science educators and experts from across the United States. Based on the National Research Council's "A Framework for K-12 Science Education" and developed in partnership with 26 lead states, the NGSS, when…

  19. Framework for Leading Next Generation Science Standards Implementation

    Science.gov (United States)

    Stiles, Katherine; Mundry, Susan; DiRanna, Kathy

    2017-01-01

    In response to the need to develop leaders to guide the implementation of the Next Generation Science Standards (NGSS), the Carnegie Corporation of New York provided funding to WestEd to develop a framework that defines the leadership knowledge and actions needed to effectively implement the NGSS. The development of the framework entailed…

  20. Validation of Metagenomic Next-Generation Sequencing Tests for Universal Pathogen Detection.

    Science.gov (United States)

    Schlaberg, Robert; Chiu, Charles Y; Miller, Steve; Procop, Gary W; Weinstock, George

    2017-06-01

    - Metagenomic sequencing can be used for the detection of any pathogen using unbiased, shotgun next-generation sequencing (NGS), without the need for sequence-specific amplification. Proof-of-concept has been demonstrated in infectious disease outbreaks of unknown causes and in patients with suspected infections but negative results for conventional tests. Metagenomic NGS tests hold great promise to improve infectious disease diagnostics, especially in immunocompromised and critically ill patients. - To discuss challenges and provide example solutions for validating metagenomic pathogen detection tests in clinical laboratories. A summary of current regulatory requirements, largely based on prior guidance for NGS testing in constitutional genetics and oncology, is provided. - Examples from 2 separate validation studies are provided for steps from assay design, and validation of wet bench and bioinformatics protocols, to quality control and assurance. - Although laboratory and data analysis workflows are still complex, metagenomic NGS tests for infectious diseases are increasingly being validated in clinical laboratories. Many parallels exist to NGS tests in other fields. Nevertheless, specimen preparation, rapidly evolving data analysis algorithms, and incomplete reference sequence databases are idiosyncratic to the field of microbiology and often overlooked.

  1. Mobility management techniques for the next-generation wireless networks

    Science.gov (United States)

    Sun, Junzhao; Howie, Douglas P.; Sauvola, Jaakko J.

    2001-10-01

    Tremendous market demand is pushing the development of mobile communications faster than ever before, leading to the emergence of many new advanced techniques. With the convergence of mobile and wireless communications with Internet services, the boundary between mobile personal telecommunications and wireless computer networks is disappearing. Wireless networks of the next generation need the support of all the advances in new architectures, standards, and protocols. Mobility management is an important issue in the area of mobile communications, which can best be solved at the network layer. One of the key features of the next generation wireless networks is an all-IP infrastructure. This paper discusses mobility management schemes for the next generation mobile networks based on extending IP's functions with mobility support. A global hierarchical framework model for the mobility management of wireless networks is presented, in which mobility management is divided into two complementary tasks: macro mobility and micro mobility. As the macro mobility solution, the basic principle of Mobile IP is introduced, together with its optimization schemes and the advances in IPv6. The disadvantages of Mobile IP for solving the micro mobility problem are analyzed, on the basis of which three main proposals are discussed as micro mobility solutions for mobile communications, including Hierarchical Mobile IP (HMIP), Cellular IP, and Handoff-Aware Wireless Access Internet Infrastructure (HAWAII). A unified model is also described in which the different micro mobility solutions can coexist simultaneously in mobile networks.

  2. Next Generation Summer School

    Science.gov (United States)

    Eugenia, Marcu

    2013-04-01

    On 21.06.2010 the "Next Generation" Summer School opened its doors to its first students. They were introduced to the world of astronomy through astronomical observations, astronomy and radio-astronomy lectures, and laboratory projects meant to initiate them into modern radio astronomy and radio communications. The didactic programme was structured as follows: 1) Astronomical elements from the visible spectrum (lectures + practical projects) 2) Radio astronomy elements (lectures + practical projects) 3) Radio communication basics (didactic-recreational games) The students and professors were accommodated at the agrotourism pension "Popasul Iancului", situated 800 m from the Marisel Observatory. The first day (summer solstice day) began with a practical activity: determination of the meridian from shadow measurements (the direction of a vertical marker's shadow when it has the smallest length). The experiment is very instructive and interesting because it combines notions of physics, spatial geometry and basic astronomy. The next day the activities took place in four stages: the students processed the experimental data obtained on the first day (on sheets of millimetre paper they plotted the length of the shadow against time), each team built its own sundial (points were given for the design and functionality of these sundials), the four teams reproduced important constellations on cardboard with phosphorescent sticky stars, and the students, accompanied by the professors, took a hiking trip in the surroundings, using a GPS to record the coordinates of points of interest; at the end of the day the students produced a small map of the central Marisel area based on the GPS data. On the third day, the students were introduced to basic notions of radio astronomy and the principal categories of artificial Earth satellites: low Earth orbit satellites (LEO), medium Earth orbit satellites (MEO) and geostationary satellites (GEO

  3. Next-generation batteries and fuel cells for commercial, military, and space applications

    CERN Document Server

    Jha, A R

    2012-01-01

    Distilling complex theoretical physical concepts into an understandable technical framework, Next-Generation Batteries and Fuel Cells for Commercial, Military, and Space Applications describes primary and secondary (rechargeable) batteries for various commercial, military, spacecraft, and satellite applications for covert communications, surveillance, and reconnaissance missions. It emphasizes the cost, reliability, longevity, and safety of the next generation of high-capacity batteries for applications where high energy density, minimum weight and size, and reliability in harsh conditions are

  4. Convergence of wireless, wireline, and photonics next generation networks

    CERN Document Server

    Iniewski, Krzysztof

    2010-01-01

    Filled with illustrations and practical examples from industry, this book provides a brief but comprehensive introduction to the next-generation wireless networks that will soon replace more traditional wired technologies. Written by a mixture of top industrial experts and key academic professors, it is the only book available that covers both wireless networks (such as wireless local area and personal area networks) and optical networks (such as long-haul and metropolitan networks) in one volume. It gives engineers and engineering students the necessary knowledge to meet challenges of next-ge

  5. Lessons learned from microsatellite development for nonmodel organisms using 454 pyrosequencing

    Czech Academy of Sciences Publication Activity Database

    Schoebel, C. N.; Brodbeck, S.; Buehler, D.; Cornejo, C.; Gajurel, J.; Hartikainen, H.; Keller, D.; Leys, M.; Říčanová, Štěpánka; Segelbacher, G.; Werth, S.; Csencsics, D.

    2013-01-01

    Vol. 26, No. 3 (2013), pp. 600-611 ISSN 1010-061X Institutional support: RVO:68081766 Keywords: comparative studies * conservation genetics * massively parallel sequencing * next generation sequencing technology * population genetics * shotgun sequencing Subject RIV: EB - Genetics; Molecular Biology Impact factor: 3.483, year: 2013

  6. Efficient Cryptography for the Next Generation Secure Cloud

    Science.gov (United States)

    Kupcu, Alptekin

    2010-01-01

    Peer-to-peer (P2P) systems, and client-server type storage and computation outsourcing constitute some of the major applications that the next generation cloud schemes will address. Since these applications are just emerging, it is the perfect time to design them with security and privacy in mind. Furthermore, considering the high-churn…

  7. Statistical Approaches for Next-Generation Sequencing Data

    OpenAIRE

    Qiao, Dandi

    2012-01-01

    During the last two decades, genotyping technology has advanced rapidly, which enabled the tremendous success of genome-wide association studies (GWAS) in the search of disease susceptibility loci (DSLs). However, only a small fraction of the overall predicted heritability can be explained by the DSLs discovered. One possible explanation for this "missing heritability" phenomenon is that many causal variants are rare. The recent development of high-throughput next-generation sequencing (NGS) ...

  8. Architectural and Algorithmic Requirements for a Next-Generation System Analysis Code

    Energy Technology Data Exchange (ETDEWEB)

    V.A. Mousseau

    2010-05-01

    This document presents high-level architectural and system requirements for a next-generation system analysis code (NGSAC) to support reactor safety decision-making by plant operators and others, especially in the context of light water reactor plant life extension. The capabilities of NGSAC will be different from those of current-generation codes, not only because computers have evolved significantly in the generations since the current paradigm was first implemented, but because the decision-making processes that need the support of next-generation codes are very different from the decision-making processes that drove the licensing and design of the current fleet of commercial nuclear power reactors. The implications of these newer decision-making processes for NGSAC requirements are discussed, and resulting top-level goals for the NGSAC are formulated. From these goals, the general architectural and system requirements for the NGSAC are derived.

  9. ERP II: Next-generation Extended Enterprise Resource Planning

    DEFF Research Database (Denmark)

    Møller, Charles

    2004-01-01

    ERP II (ERP/2) is a new concept introduced by the Gartner Group in 2000 to label the latest extensions of ERP systems. The purpose of this paper is to explore the next generation of ERP systems, the Extended Enterprise Resource Planning (EERP or as we prefer to use: e...... impact on extended enterprise architecture.....

  10. ERP II - Next-generation Extended Enterprise Resource Planning

    DEFF Research Database (Denmark)

    Møller, Charles

    2003-01-01

    ERP II (ERP/2) is a new concept introduced by the Gartner Group in 2000 to label the latest extensions of ERP systems. The purpose of this paper is to explore the next generation of ERP systems, the Extended Enterprise Resource Planning (EERP or as we prefer to use: e...... impact on extended enterprise architecture....

  11. Parallel Application Performance on Two Generations of Intel Xeon HPC Platforms

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Christopher H.; Long, Hai; Sides, Scott; Vaidhynathan, Deepthi; Jones, Wesley

    2015-10-15

    Two next-generation node configurations hosting the Haswell microarchitecture were tested with a suite of microbenchmarks and application examples, and compared with a current Ivy Bridge production node on NREL's Peregrine high-performance computing cluster. A primary conclusion from this study is that the additional cores are of little value to individual task performance: limitations to application parallelism, or resource contention among concurrently running but independent tasks, limit effective utilization of these added cores. Hyperthreading generally impacts throughput negatively, but can improve performance in the absence of detailed attention to runtime workflow configuration. The observations offer some guidance for procurement of future HPC systems at NREL. First, raw core count must be balanced with available resources, particularly memory bandwidth. Balance-of-system will determine value more than processor capability alone. Second, hyperthreading continues to be largely irrelevant to the workloads that are commonly seen, and were tested here, at NREL. Finally, perhaps the most impactful enhancement to productivity might occur through enabling multiple concurrent jobs per node. Given the right type and size of workload, more may be achieved by doing many slow things at once than fast things in order.

  12. Geometric representation of the generator of duality in massless and massive p-form field theories

    International Nuclear Information System (INIS)

    Contreras, Ernesto; Martinez, Yisely; Leal, Lorenzo

    2010-01-01

    We study the invariance under duality transformations in massless and massive p-form field theories and obtain the Noether generators of the infinitesimal transformations that correspond to this symmetry. These generators can be realized in geometrical representations that generalize the loop representation of the Maxwell field, allowing for a geometrical interpretation which is studied.

  13. Next Generation Geothermal Power Plants

    Energy Technology Data Exchange (ETDEWEB)

    Brugman, John; Hattar, Mai; Nichols, Kenneth; Esaki, Yuri

    1995-09-01

    A number of current and prospective power plant concepts were investigated to evaluate their potential to serve as the basis of the next generation geothermal power plant (NGGPP). The NGGPP has been envisaged as a power plant that would be more cost competitive (than current geothermal power plants) with fossil fuel power plants, would efficiently use resources and mitigate the risk of reservoir under-performance, and minimize or eliminate emission of pollutants and consumption of surface and ground water. Power plant concepts were analyzed using resource characteristics at ten different geothermal sites located in the western United States. Concepts were developed into viable power plant processes, capital costs were estimated and levelized busbar costs determined. Thus, the study results should be considered as useful indicators of the commercial viability of the various power plant concepts that were investigated. Broadly, the different power plant concepts that were analyzed in this study fall into the following categories: commercial binary and flash plants, advanced binary plants, advanced flash plants, flash/binary hybrid plants, and fossil/geothermal hybrid plants. Commercial binary plants were evaluated using commercial isobutane as a working fluid; both air-cooling and water-cooling were considered. Advanced binary concepts included cycles using synchronous turbine-generators, cycles with metastable expansion, and cycles utilizing mixtures as working fluids. Dual flash steam plants were used as the model for the commercial flash cycle. The following advanced flash concepts were examined: dual flash with rotary separator turbine, dual flash with steam reheater, dual flash with hot water turbine, and subatmospheric flash. Both dual flash and binary cycles were combined with other cycles to develop a number of hybrid cycles: dual flash binary bottoming cycle, dual flash backpressure turbine binary cycle, dual flash gas turbine cycle, and binary gas turbine

  14. Prolonged photo-carriers generated in a massive-and-anisotropic Dirac material.

    Science.gov (United States)

    Nurmamat, Munisa; Ishida, Yukiaki; Yori, Ryohei; Sumida, Kazuki; Zhu, Siyuan; Nakatake, Masashi; Ueda, Yoshifumi; Taniguchi, Masaki; Shin, Shik; Akahama, Yuichi; Kimura, Akio

    2018-06-13

    Transient electron-hole pairs generated in semiconductors can exhibit unconventional excitonic condensation. Anisotropy in the carrier mass is considered key to prolonging the lifetime of the pairs, and hence to stabilizing the condensation. Here we employ time- and angle-resolved photoemission spectroscopy to explore the dynamics of photo-generated carriers in black phosphorus. The electronic structure above the Fermi level has been successfully observed, and massive and anisotropic Dirac-type dispersions are confirmed; more importantly, we directly observe that the photo-carriers generated across the direct band gap have lifetimes exceeding 400 ps. Our finding confirms that black phosphorus is a suitable platform for excitonic condensation, and also opens an avenue for future applications in broadband mid-infrared BP-based optoelectronic devices.

  15. Environmental Information for the U.S. Next Generation Air Transportation System (NextGen)

    Science.gov (United States)

    Murray, J.; Miner, C.; Pace, D.; Minnis, P.; Mecikalski, J.; Feltz, W.; Johnson, D.; Iskendarian, H.; Haynes, J.

    2009-09-01

    It is estimated that weather is responsible for approximately 70% of all air traffic delays and cancellations in the United States. Annually, this produces an overall economic loss of nearly $40B. The FAA and NASA have determined that weather impacts and other environmental constraints on the U.S. National Airspace System (NAS) will increase to the point of system unsustainability unless the NAS is radically transformed. A Next Generation Air Transportation System (NextGen) is planned to accommodate the anticipated demand for increased system capacity and the super-density operations that this transformation will entail. The heart of the environmental information component that is being developed for NextGen will be a 4-dimensional data cube which will include a single authoritative source comprising probabilistic weather information for NextGen Air Traffic Management (ATM) systems. Aviation weather constraints and safety hazards typically comprise meso-scale, storm-scale and microscale observables that can significantly impact both terminal and enroute aviation operations. With these operational impacts in mind, functional and performance requirements for the NextGen weather system were established which require significant improvements in observation and forecasting capabilities. This will include satellite observations from geostationary and/or polar-orbiting hyperspectral sounders, multi-spectral imagers, lightning mappers, space weather monitors and other environmental observing systems. It will also require improved in situ and remotely sensed observations from ground-based and airborne systems. These observations will be used to better understand and to develop forecasting applications for convective weather, in-flight icing, turbulence, ceilings and visibility, volcanic ash, space weather and the environmental impacts of aviation. Cutting-edge collaborative research efforts and results from NASA, NOAA and the FAA which address these phenomena are summarized

  16. Significant Association between Sulfate-Reducing Bacteria and Uranium-Reducing Microbial Communities as Revealed by a Combined Massively Parallel Sequencing-Indicator Species Approach

    OpenAIRE

    Cardenas, Erick; Wu, Wei-Min; Leigh, Mary Beth; Carley, Jack; Carroll, Sue; Gentry, Terry; Luo, Jian; Watson, David; Gu, Baohua; Ginder-Vogel, Matthew; Kitanidis, Peter K.; Jardine, Philip M.; Zhou, Jizhong; Criddle, Craig S.; Marsh, Terence L.

    2010-01-01

    Massively parallel sequencing has provided a more affordable and high-throughput method to study microbial communities, although it has mostly been used in an exploratory fashion. We combined pyrosequencing with a strict indicator species statistical analysis to test if bacteria specifically responded to ethanol injection that successfully promoted dissimilatory uranium(VI) reduction in the subsurface of a uranium contamination plume at the Oak Ridge Field Research Center in Tennessee. Remedi...

  17. The contribution of next generation sequencing to epilepsy genetics

    DEFF Research Database (Denmark)

    Møller, Rikke S.; Dahl, Hans A.; Helbig, Ingo

    2015-01-01

    During the last decade, next generation sequencing technologies such as targeted gene panels, whole exome sequencing and whole genome sequencing have led to an explosion of gene identifications in monogenic epilepsies including both familial epilepsies and severe epilepsies, often referred to as ...

  18. Converged Wireless Networking and Optimization for Next Generation Services

    Directory of Open Access Journals (Sweden)

    J. Rodriguez

    2010-01-01

    Full Text Available The Next Generation Network (NGN) vision is tending towards the convergence of internet and mobile services, providing the impetus for new market opportunities that combine the appealing services of the internet with the roaming capability of mobile networks. However, this convergence does not go far enough, and with the emergence of new coexistence scenarios, there is a clear need to evolve the current architecture to provide cost-effective end-to-end communication. The LOOP project, a EUREKA-CELTIC driven initiative, is one piece in the jigsaw, helping European industry to sustain a leading role in telecommunications and manufacturing of high-value products and machinery by delivering pioneering converged wireless networking solutions that can be successfully demonstrated. This paper provides an overview of the LOOP project and the key achievements that have been funneled into first prototypes for showcasing next generation services for operators and process manufacturers.

  19. Cost and schedule reduction for next-generation Candu

    International Nuclear Information System (INIS)

    Hopwood, J.M.; Yu, S.; Pakan, M.; Soulard, M.

    2002-01-01

    AECL has developed a suite of technologies for Candu reactors that enable the next step in the evolution of the Candu family of heavy-water-moderated fuel-channel reactors. These technologies have been combined in the design for the Advanced Candu Reactor (ACR), AECL's next generation Candu power plant. The ACR design builds extensively on the existing Candu experience base, but includes innovations, in design and in delivery technology, that provide very substantial reductions in capital cost and in project schedules. In this paper, main features of next generation design and delivery are summarized, to provide the background basis for the cost and schedule reductions that have been achieved. In particular the paper outlines the impact of the innovative design steps for ACR: - Selection of slightly enriched fuel bundle design; - Use of light water coolant in place of traditional Candu heavy water coolant; - Compact core design with unique reactor physics benefits; - Optimized coolant and turbine system conditions. In addition to the direct cost benefits arising from efficiency improvement, and from the reduction in heavy water, the next generation Candu configuration results in numerous additional indirect cost benefits, including: - Reduction in number and complexity of reactivity mechanisms; - Reduction in number of heavy water auxiliary systems; - Simplification in heat transport and its support systems; - Simplified human-machine interface. The paper also describes the ACR approach to design for constructability. The application of module assembly and open-top construction techniques, based on Candu and other worldwide experience, has been proven to generate savings in both schedule durations and overall project cost, by reducing premium on-site activities, and by improving efficiency of system and subsystem assembly. AECL's up-to-date experience in the use of 3-D CADDS and related engineering tools has also been proven to reduce both engineering and

  20. Resonance analysis in parallel voltage-controlled Distributed Generation inverters

    DEFF Research Database (Denmark)

    Wang, Xiongfei; Blaabjerg, Frede; Chen, Zhe

    2013-01-01

    Thanks to the fast responses of the inner voltage and current control loops, the dynamic behavior of parallel voltage-controlled Distributed Generation (DG) inverters not only relies on the stability of load sharing among them, but is also subject to the interactions between the voltage control loops...

  1. Massive Supergravity and Deconstruction

    CERN Document Server

    Gregoire, Thomas; Schwartz, Matthew D; Shadmi, Yael

    2004-01-01

    We present a simple superfield Lagrangian for massive supergravity. It comprises the minimal supergravity Lagrangian with interactions as well as mass terms for the metric superfield and the chiral compensator. This is the natural generalization of the Fierz-Pauli Lagrangian for massive gravity which comprises mass terms for the metric and its trace. We show that the on-shell bosonic and fermionic fields are degenerate and have the appropriate spins: 2, 3/2, 3/2 and 1. We then study this interacting Lagrangian using goldstone superfields. We find that a chiral multiplet of goldstones gets a kinetic term through mixing, just as the scalar goldstone does in the non-supersymmetric case. This produces Planck scale (Mpl) interactions with matter and all the discontinuities and unitarity bounds associated with massive gravity. In particular, the scale of strong coupling is (Mpl m^4)^1/5, where m is the multiplet's mass. Next, we consider applications of massive supergravity to deconstruction. We estimate various qu...

  2. Next-Generation Sequencing of Antibody Display Repertoires

    Directory of Open Access Journals (Sweden)

    Romain Rouet

    2018-02-01

    Full Text Available In vitro selection technology has transformed the development of therapeutic monoclonal antibodies. Using methods such as phage, ribosome, and yeast display, high affinity binders can be selected from diverse repertoires. Here, we review strategies for the next-generation sequencing (NGS of phage- and other antibody-display libraries, as well as NGS platforms and analysis tools. Moreover, we discuss recent examples relating to the use of NGS to assess library diversity, clonal enrichment, and affinity maturation.

  3. Next-generation digital information storage in DNA.

    Science.gov (United States)

    Church, George M; Gao, Yuan; Kosuri, Sriram

    2012-09-28

    Digital information is accumulating at an astounding rate, straining our ability to store and archive it. DNA is among the most dense and stable information media known. The development of new technologies in both DNA synthesis and sequencing make DNA an increasingly feasible digital storage medium. We developed a strategy to encode arbitrary digital information in DNA, wrote a 5.27-megabit book using DNA microchips, and read the book by using next-generation DNA sequencing.
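
    A minimal sketch of the basic idea (mapping two bits to one nucleotide) is given below; the encoding actually used in the study differs and also adds addressing and redundancy, so this is purely illustrative.

```python
# A minimal, illustrative bit-to-base mapping (two bits per nucleotide). This is
# NOT the encoding used in the study, which also includes addressing and error
# tolerance; it only sketches the idea of DNA as a digital storage medium.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Encode bytes as a DNA string, four bases per input byte."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(seq: str) -> bytes:
    """Recover the original bytes from a DNA string produced by encode()."""
    bits = "".join(BASE_TO_BITS[base] for base in seq)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

if __name__ == "__main__":
    message = b"next-gen"
    dna = encode(message)
    assert decode(dna) == message
    print(dna)  # 32 bases encoding the 8-byte message
```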

  4. Popular Imagination and Identity Politics: Reading the Future in "Star Trek: The Next Generation."

    Science.gov (United States)

    Ott, Brian L.; Aoki, Eric

    2001-01-01

    Analyzes the television series "Star Trek: The Next Generation." Theorizes the relationship between collective visions of the future and the identity politics of the present. Argues that "The Next Generation" invites audiences to participate in a shared sense of the future that constrains human agency and (re)produces the…

  5. Next Generation Life Support (NGLS): Rapid Cycle Amine Swing Bed

    Data.gov (United States)

    National Aeronautics and Space Administration — The Rapid Cycle Amine (RCA) swingbed has been identified as a technology with high potential to meet the stringent requirements for the next generation spacesuit's...

  6. Solar-bound weakly interacting massive particles a no-frills phenomenology

    CERN Document Server

    Collar, J I

    1999-01-01

    The case for a stable population of solar-bound Earth-crossing Weakly Interacting Massive Particles (WIMPs) is reviewed. A practical general expression for their speed distribution in the laboratory frame is derived under basic assumptions. If such a population exists (even with a conservative phase-space density), the next generation of large-mass, low-threshold underground bolometers should bring about a sizable enhancement in WIMP sensitivity. Finally, a characteristic yearly modulation in their recoil signal, arising from the ellipticity of the Earth's orbit, is presented.

  7. Modelling of control system architecture for next-generation accelerators

    International Nuclear Information System (INIS)

    Liu, Shi-Yao; Kurokawa, Shin-ichi

    1990-01-01

    Functional, hardware and software system architectures define the fundamental structure of control systems. Modelling is a protocol of system architecture used in system design. This paper reviews various modelling approaches adopted in the past ten years and suggests a new modelling approach for next-generation accelerators. (author)

  8. Next Generation UV Coronagraph Instrumentation for Solar Cycle-24

    Indian Academy of Sciences (India)

    2016-01-27

    New concepts for next generation instrumentation include imaging ultraviolet spectrocoronagraphs and large aperture ultraviolet coronagraph spectrometers. An imaging instrument would be the first to obtain absolute spectral line intensities of the extended corona over a wide field of view. Such images ...

  9. Power Electronics for the Next Generation Wind Turbine System

    DEFF Research Database (Denmark)

    Ma, Ke

    This book presents recent studies on the power electronics used for the next generation wind turbine system. Some criteria and tools for evaluating and improving the critical performances of the wind power converters have been proposed and established. The book addresses some emerging problems...

  10. Development of a framework for the neutronics analysis system for next generation (3)

    International Nuclear Information System (INIS)

    Yokoyama, Kenji; Hirai, Yasushi; Hyoudou, Hideaki; Tatsumi, Masahiro

    2010-02-01

    Development of innovative analysis methods and models in fundamental studies for next-generation nuclear reactor systems is in progress. In order to efficiently and effectively reflect the latest analysis methods and models in the primary design of commercial reactors and/or in-core fuel management for power reactors, a next-generation analysis system, MARBLE, has been developed. The next-generation analysis system provides solutions to the following requirements: (1) flexibility, extensibility and user-friendliness, so that new methods and models can be applied rapidly and effectively in fundamental studies, (2) quantitative proof of solution accuracy and an adaptive scoping range for design studies, (3) coupling analysis among different study domains for the purpose of rationalizing plant systems and improving reliability, (4) maintainability and reusability for system extensions for the purpose of total quality management and development efficiency. The next-generation analysis system supports many fields, such as thermal-hydraulic analysis, structural analysis, reactor physics, etc., and the reactor physics analysis system for fast reactors is being studied first. As for reactor physics analysis methods for fast reactors, the JUPITER standard analysis methods have been established based on past studies. However, the conventional analysis system has been extremely inefficient, owing to a lack of functionality, when changing analysis targets and/or modeling levels. Therefore, we have developed the next-generation analysis system for reactor physics, which reproduces the JUPITER standard analysis method developed so far and newly realizes burnup and design analysis for fast reactors as well as functions for cross section adjustment. In the present study, we examined in detail the existing design and implementation of the ZPPR critical experiment analysis database, followed by unification of models within the framework of the next-generation analysis system by

  11. A MapReduce-Based Parallel Frequent Pattern Growth Algorithm for Spatiotemporal Association Analysis of Mobile Trajectory Big Data

    Directory of Open Access Journals (Sweden)

    Dawen Xia

    2018-01-01

    Full Text Available Frequent pattern mining is an effective approach for spatiotemporal association analysis of mobile trajectory big data in data-driven intelligent transportation systems. While existing parallel algorithms have been successfully applied to frequent pattern mining of large-scale trajectory data, two major challenges are how to overcome the inherent defects of Hadoop in coping with taxi trajectory big data, including massive small files, and how to discover implicit spatiotemporal frequent patterns with MapReduce. To address these challenges, this paper presents a MapReduce-based Parallel Frequent Pattern growth (MR-PFP) algorithm to analyze the spatiotemporal characteristics of taxi operations using large-scale taxi trajectories, with massive small file processing strategies, on a Hadoop platform. More specifically, we first implement three methods, namely Hadoop Archives (HAR), CombineFileInputFormat (CFIF), and Sequence Files (SF), to overcome the existing defects of Hadoop and then propose two strategies based on their performance evaluations. Next, we incorporate SF into the Frequent Pattern growth (FP-growth) algorithm and then implement the optimized FP-growth algorithm on a MapReduce framework. Finally, we analyze the characteristics of taxi operations in both spatial and temporal dimensions with MR-PFP in parallel. The results demonstrate that MR-PFP is superior to the existing Parallel FP-growth (PFP) algorithm in efficiency and scalability.
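
    The map/reduce-style support counting that such parallel frequent-pattern mining builds on can be sketched in plain Python; this illustrates only the pattern, not the authors' MR-PFP code, and the taxi records shown are hypothetical.

```python
# Illustrative sketch (not MR-PFP itself): map/reduce-style support counting for
# frequent item pairs, emulated in plain Python on hypothetical taxi records.
from collections import Counter
from itertools import combinations

def map_phase(trajectory):
    """Emit candidate zone pairs (e.g. pick-up/drop-off zones) from one record."""
    return [tuple(sorted(pair)) for pair in combinations(set(trajectory), 2)]

def reduce_phase(emitted, min_support):
    """Aggregate pair counts across all records and keep the frequent ones."""
    counts = Counter(pair for record in emitted for pair in record)
    return {pair: c for pair, c in counts.items() if c >= min_support}

if __name__ == "__main__":
    # Hypothetical taxi records: each lists the zones visited by one trip.
    records = [["A", "B", "C"], ["A", "B"], ["B", "C"], ["A", "B", "D"]]
    emitted = [map_phase(r) for r in records]          # "map" over records
    print(reduce_phase(emitted, min_support=2))         # "reduce": frequent pairs
```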

  12. The de novo assembly of mitochondrial genomes of the extinct passenger pigeon (Ectopistes migratorius) with next generation sequencing.

    Directory of Open Access Journals (Sweden)

    Chih-Ming Hung

    Full Text Available The information from ancient DNA (aDNA) provides an unparalleled opportunity to infer phylogenetic relationships and population history of extinct species and to investigate genetic evolution directly. However, the degraded and fragmented nature of aDNA has posed technical challenges for studies based on conventional PCR amplification. In this study, we present an approach based on next generation sequencing to efficiently sequence the complete mitochondrial genome (mitogenome) of two extinct passenger pigeons (Ectopistes migratorius) using de novo assembly of massive short (90 bp), paired-end or single-end reads. Although varying levels of human contamination and low levels of postmortem nucleotide lesion were observed, they did not impact sequencing accuracy. Our results demonstrated that the de novo assembly of shotgun sequence reads could be a potent approach to sequence mitogenomes, and offered an efficient way to infer evolutionary history of extinct species.

  13. The De Novo Assembly of Mitochondrial Genomes of the Extinct Passenger Pigeon (Ectopistes migratorius) with Next Generation Sequencing

    Science.gov (United States)

    Hung, Chih-Ming; Lin, Rong-Chien; Chu, Jui-Hua; Yeh, Chia-Fen; Yao, Chiou-Ju; Li, Shou-Hsien

    2013-01-01

    The information from ancient DNA (aDNA) provides an unparalleled opportunity to infer phylogenetic relationships and population history of extinct species and to investigate genetic evolution directly. However, the degraded and fragmented nature of aDNA has posed technical challenges for studies based on conventional PCR amplification. In this study, we present an approach based on next generation sequencing to efficiently sequence the complete mitochondrial genome (mitogenome) of two extinct passenger pigeons (Ectopistes migratorius) using de novo assembly of massive short (90 bp), paired-end or single-end reads. Although varying levels of human contamination and low levels of postmortem nucleotide lesion were observed, they did not impact sequencing accuracy. Our results demonstrated that the de novo assembly of shotgun sequence reads could be a potent approach to sequence mitogenomes, and offered an efficient way to infer evolutionary history of extinct species. PMID:23437111

  14. Effects of parallel planning on agreement production.

    Science.gov (United States)

    Veenstra, Alma; Meyer, Antje S; Acheson, Daniel J

    2015-11-01

    An important issue in current psycholinguistics is how the time course of utterance planning affects the generation of grammatical structures. The current study investigated the influence of parallel activation of the components of complex noun phrases on the generation of subject-verb agreement. Specifically, the lexical interference account (Gillespie & Pearlmutter, 2011b; Solomon & Pearlmutter, 2004) predicts more agreement errors (i.e., attraction) for subject phrases in which the head and local noun mismatch in number (e.g., the apple next to the pears) when nouns are planned in parallel than when they are planned in sequence. We used a speeded picture description task that yielded sentences such as the apple next to the pears is red. The objects mentioned in the noun phrase were either semantically related or unrelated. To induce agreement errors, pictures sometimes mismatched in number. In order to manipulate the likelihood of parallel processing of the objects and to test the hypothesized relationship between parallel processing and the rate of agreement errors, the pictures were either placed close together or far apart. Analyses of the participants' eye movements and speech onset latencies indicated slower processing of the first object and stronger interference from the related (compared to the unrelated) second object in the close than in the far condition. Analyses of the agreement errors yielded an attraction effect, with more errors in mismatching than in matching conditions. However, the magnitude of the attraction effect did not differ across the close and far conditions. Thus, spatial proximity encouraged parallel processing of the pictures, which led to interference of the associated conceptual and/or lexical representation, but, contrary to the prediction, it did not lead to more attraction errors. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Next-generation Sequencing-based genomic profiling: Fostering innovation in cancer care?

    Directory of Open Access Journals (Sweden)

    Gustavo S. Fernandes

    Full Text Available OBJECTIVES: With the development of next-generation sequencing (NGS) technologies, DNA sequencing has been increasingly utilized in clinical practice. Our goal was to investigate the impact of genomic evaluation on treatment decisions for heavily pretreated patients with metastatic cancer. METHODS: We analyzed metastatic cancer patients from a single institution whose cancers had progressed after all available standard-of-care therapies and whose tumors underwent next-generation sequencing analysis. We determined the percentage of patients who received any therapy directed by the test, and its efficacy. RESULTS: From July 2013 to December 2015, 185 consecutive patients were tested using a commercially available next-generation sequencing-based test, and 157 patients were eligible. Sixty-six patients (42.0%) were female, and 91 (58.0%) were male. The mean age at diagnosis was 52.2 years, and the mean number of pre-test lines of systemic treatment was 2.7. One hundred and seventy-seven patients (95.6%) had at least one identified gene alteration. Twenty-four patients (15.2%) underwent systemic treatment directed by the test result. Of these, one patient had a complete response, four (16.7%) had partial responses, two (8.3%) had stable disease, and 17 (70.8%) had disease progression as the best result. The median progression-free survival time with matched therapy was 1.6 months, and the median overall survival was 10 months. CONCLUSION: We identified a high prevalence of gene alterations using a next-generation sequencing test. Although some benefit was associated with the matched therapy, most of the patients had disease progression as the best response, indicating the limited biological potential and unclear clinical relevance of this practice.

  16. Developing the next generation of nuclear workers at OPG

    International Nuclear Information System (INIS)

    Spekkens, P.

    2007-01-01

    This presentation is about developing the next generation of nuclear workers at Ontario Power Generation (OPG). Industry developments are creating an urgent need to hire, train and retain new staff. OPG has an aggressive hiring campaign. The training organization is challenged to accommodate the influx of new staff. Collaborating with colleges and universities is increasing the supply of qualified recruits with an interest in nuclear. Programs for functional and leadership training have been developed. Knowledge retention is urgently required

  17. Heterogeneous next-generation wireless network interference model-and its applications

    KAUST Repository

    Mahmood, Nurul Huda; Yilmaz, Ferkan; Alouini, Mohamed-Slim; Øien, Geir Egil

    2014-01-01

    Next-generation wireless systems facilitating better utilisation of the scarce radio spectrum have emerged as a response to inefficient and rigid spectrum assignment policies. These are comprised of intelligent radio nodes that opportunistically

  18. Epidemiological Studies to Support the Development of Next Generation Influenza Vaccines.

    Science.gov (United States)

    Petrie, Joshua G; Gordon, Aubree

    2018-03-26

    The National Institute of Allergy and Infectious Diseases recently published a strategic plan for the development of a universal influenza vaccine. This plan focuses on improving understanding of influenza infection, the development of influenza immunity, and rational design of new vaccines. Epidemiological studies such as prospective, longitudinal cohort studies are essential to the completion of these objectives. In this review, we discuss the contributions of epidemiological studies to our current knowledge of vaccines and correlates of immunity, and how they can contribute to the development and evaluation of the next generation of influenza vaccines. These studies have been critical in monitoring the effectiveness of current influenza vaccines, identifying issues such as low vaccine effectiveness, reduced effectiveness among those who receive repeated vaccination, and issues related to egg adaptation during the manufacturing process. Epidemiological studies have also identified population-level correlates of protection that can inform the design and development of next generation influenza vaccines. Going forward, there is an enduring need for epidemiological studies to continue advancing knowledge of correlates of protection and the development of immunity, to evaluate and monitor the effectiveness of next generation influenza vaccines, and to inform recommendations for their use.

  19. Next generation multi-material 3D food printer concept

    NARCIS (Netherlands)

    Klomp, D.J.; Anderson, P.D.

    2017-01-01

    3D food printing is a new rapidly developing technology capable of creating food structures that are impossible to create with normal processing techniques. Challenges in this field are creating texture and multi-material food products. To address these challenges a next generation food printer will

  20. Next-generation sequencing approaches to understanding the oral microbiome

    NARCIS (Netherlands)

    Zaura, E.

    2012-01-01

    Until recently, the focus in dental research has been on studying a small fraction of the oral microbiome—so-called opportunistic pathogens. With the advent of next-generation sequencing (NGS) technologies, researchers now have the tools that allow for profiling of the microbiomes and metagenomes at

  1. Big Data Perspective and Challenges in Next Generation Networks

    Directory of Open Access Journals (Sweden)

    Kashif Sultan

    2018-06-01

    Full Text Available With the development towards the next generation of cellular networks, i.e., 5G, the focus has shifted towards meeting higher data rate requirements and exploiting the potential of micro cells and millimeter-wave spectrum. The goals for next generation networks are very high data rates, low latency and the handling of big data. The achievement of these goals definitely requires newer architecture designs, upgraded technologies with possible backward support, better security algorithms and intelligent decision making capability. In this survey, we identify the opportunities which can be provided by 5G networks and discuss the underlying challenges towards the implementation and realization of the goals of 5G. This survey also provides a discussion on the recent developments made towards standardization, the architectures which may be potential candidates for deployment and the energy concerns in 5G networks. Finally, the paper presents a big data perspective and the potential of machine learning for optimization and decision making in 5G networks.

  2. Recent progress in nanostructured next-generation field emission devices

    International Nuclear Information System (INIS)

    Mittal, Gaurav; Lahiri, Indranil

    2014-01-01

    Field emission has been known to mankind for more than a century, and extensive research in this field for the last 40–50 years has led to development of exciting applications such as electron sources, miniature x-ray devices, display materials, etc. In the last decade, large-area field emitters were projected as an important material to revolutionize healthcare and medical devices, and space research. With the advent of nanotechnology and advancements related to carbon nanotubes, field emitters are demonstrating highly enhanced performance and novel applications. Next-generation emitters need ultra-high emission current density, high brightness, excellent stability and reproducible performance. Novel design considerations and application of new materials can lead to achievement of these capabilities. This article presents an overview of recent developments in this field and their effects on improved performance of field emitters. These advancements are demonstrated to hold great potential for application in next-generation field emission devices. (topical review)

  3. Recent progress in nanostructured next-generation field emission devices

    Science.gov (United States)

    Mittal, Gaurav; Lahiri, Indranil

    2014-08-01

    Field emission has been known to mankind for more than a century, and extensive research in this field for the last 40-50 years has led to development of exciting applications such as electron sources, miniature x-ray devices, display materials, etc. In the last decade, large-area field emitters were projected as an important material to revolutionize healthcare and medical devices, and space research. With the advent of nanotechnology and advancements related to carbon nanotubes, field emitters are demonstrating highly enhanced performance and novel applications. Next-generation emitters need ultra-high emission current density, high brightness, excellent stability and reproducible performance. Novel design considerations and application of new materials can lead to achievement of these capabilities. This article presents an overview of recent developments in this field and their effects on improved performance of field emitters. These advancements are demonstrated to hold great potential for application in next-generation field emission devices.

  4. Hierarchical Image Segmentation of Remotely Sensed Data using Massively Parallel GNU-LINUX Software

    Science.gov (United States)

    Tilton, James C.

    2003-01-01

    A hierarchical set of image segmentations is a set of several image segmentations of the same image at different levels of detail in which the segmentations at coarser levels of detail can be produced from simple merges of regions at finer levels of detail. In [1], Tilton et al. described an approach for producing hierarchical segmentations (called HSEG) and gave a progress report on exploiting these hierarchical segmentations for image information mining. The HSEG algorithm is a hybrid of region growing and constrained spectral clustering that produces a hierarchical set of image segmentations based on detected convergence points. In the main, HSEG employs the hierarchical stepwise optimization (HSWO) approach to region growing, which was described as early as 1989 by Beaulieu and Goldberg. The HSWO approach seeks to produce segmentations that are more optimized than those produced by more classic approaches to region growing (e.g., Horowitz and Pavlidis [3]). In addition, HSEG optionally interjects, between HSWO region-growing iterations, merges of spatially non-adjacent regions (i.e., spectrally based merging or clustering) constrained by a threshold derived from the previous HSWO region-growing iteration. While the addition of constrained spectral clustering improves the utility of the segmentation results, especially for larger images, it also significantly increases HSEG's computational requirements. To counteract this, a computationally efficient, recursive, divide-and-conquer implementation of HSEG (RHSEG) was devised, which includes special code to avoid processing artifacts caused by RHSEG's recursive subdivision of the image data. The recursive nature of RHSEG makes for a straightforward parallel implementation. This paper describes the HSEG algorithm, its recursive formulation (referred to as RHSEG), and the implementation of RHSEG using massively parallel GNU-LINUX software. Results with Landsat TM data are included comparing RHSEG with classic
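    To make the hierarchical stepwise merging idea concrete, the sketch below applies a greatly simplified version of it to a 1-D signal: at each iteration the pair of adjacent regions with the most similar means is merged, and segmentation snapshots are kept at the iterations with the largest merge costs as a crude stand-in for HSEG's detected convergence points. The function and variable names are illustrative and are not taken from the HSEG or RHSEG code.

      import numpy as np

      def hierarchical_merge_1d(values, n_snapshots=3):
          """Toy hierarchical stepwise merging on a 1-D signal.

          Each region is (start, end, mean).  The two adjacent regions with
          the most similar means are merged first; the segmentations kept at
          the largest merge costs form a small hierarchy of levels of detail.
          """
          regions = [(i, i + 1, float(v)) for i, v in enumerate(values)]
          history = []                  # (merge cost, segmentation after the merge)
          while len(regions) > 1:
              costs = [abs(regions[i][2] - regions[i + 1][2])
                       for i in range(len(regions) - 1)]
              k = int(np.argmin(costs))
              a, b = regions[k], regions[k + 1]
              na, nb = a[1] - a[0], b[1] - b[0]
              merged = (a[0], b[1], (a[2] * na + b[2] * nb) / (na + nb))
              regions = regions[:k] + [merged] + regions[k + 2:]
              history.append((costs[k], list(regions)))
          # Keep the segmentations recorded at the largest merge costs.
          top = sorted(range(len(history)), key=lambda i: -history[i][0])[:n_snapshots]
          return [history[i][1] for i in sorted(top)]

      signal = [1, 1, 2, 8, 9, 9, 3, 3]
      for level, seg in enumerate(hierarchical_merge_1d(signal)):
          print("level", level, [(s, e, round(m, 1)) for s, e, m in seg])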

  5. Carrier ethernet network control plane based on the Next Generation Network

    DEFF Research Database (Denmark)

    Fu, Rong; Wang, Yanmeng; Berger, Michael Stubert

    2008-01-01

    This paper presents a step towards the realization of a Carrier Ethernet control plane based on the next generation network (NGN). Specifically, transport MPLS (T-MPLS) is taken as the transport technology in Carrier Ethernet. The paper begins with an overview of the evolving architecture of the next generation network (NGN). The definition of Carrier Ethernet (CE), an essential candidate among the NGN transport technologies, is also introduced. The second part of the paper describes the contribution on the T-MPLS based Carrier Ethernet network with a control plane based on NGN … at illustrating the improvement of the Carrier Ethernet network with the NGN control plane.

  6. Development of source term evaluation method for Korean Next Generation Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Keon Jae; Cheong, Jae Hak; Park, Jin Baek; Kim, Guk Gee [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1997-10-15

    In its former phase, this project investigated several design features of the radioactive waste processing system, methods to predict nuclide concentrations in the primary coolant, the basic concept of the next generation reactor, and its safety goals. In the present phase, several source term prediction methods are evaluated comprehensively. The detailed contents of this project are: model evaluation of nuclide concentrations in the reactor coolant system; evaluation of primary and secondary coolant concentrations of a reference nuclear power plant (NPP); investigation of the parameters for source term evaluation, namely basic PWR parameters, operational parameters, the radionuclide removal systems, and adjustment values of the reference NPP; and a suggested source term prediction method for the next generation NPP.

  7. Next generation of relativistic heavy ion accelerators

    International Nuclear Information System (INIS)

    Grunder, H.; Leemann, C.; Selph, F.

    1978-06-01

    Results are presented of exploratory and preliminary studies of a next generation of heavy ion accelerators. The conclusion is reached that useful luminosities are feasible in a colliding beam facility for relativistic heavy ions. Such an accelerator complex may be laid out in such a way as to provide extracted beams for fixed-target operation, thereby allowing experimentation in an energy region overlapping with that presently available. These dual goals seem achievable without undue complications or penalties with respect to cost and/or performance

  8. Thermonuclear ignition in the next generation tokamaks

    International Nuclear Information System (INIS)

    Johner, J.

    1989-04-01

    Extrapolation of the experimental rules describing energy confinement and magnetohydrodynamic stability limits in known tokamaks shows that stable thermonuclear ignition equilibria should exist in this configuration if the product a·B_t^x of the machine dimensions and a power of the magnetic field is large enough. Quantitative application of this result to several next-generation tokamak projects shows that such equilibria could exist in these devices, which would also have enough additional heating power to make ignition effectively accessible
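    Written out, the criterion quoted above takes the hedged form (both the exponent and the constant depend on the confinement scaling law assumed)

      a\,B_t^{\,x} \;\gtrsim\; C,

    where a is the machine dimension (minor radius), B_t the toroidal magnetic field, x the exponent fixed by the energy-confinement scaling, and C a constant set by the confinement and MHD-stability limits.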

  9. Next Generation Antibody Therapeutics Using Bispecific Antibody Technology.

    Science.gov (United States)

    Igawa, Tomoyuki

    2017-01-01

    Nearly fifty monoclonal antibodies have been approved to date, and the market for monoclonal antibodies is expected to continue to grow. Since global competition in the field of antibody therapeutics is intense, we need to establish novel antibody engineering technologies to provide true benefit for patients, with differentiated product values. Bispecific antibodies are among the next generation of antibody therapeutics that can bind to two different target antigens by the two arms of immunoglobulin G (IgG) molecule, and are thus believed to be applicable to various therapeutic needs. Until recently, large scale manufacturing of human IgG bispecific antibody was impossible. We have established a technology, named asymmetric re-engineering technology (ART)-Ig, to enable large scale manufacturing of bispecific antibodies. Three examples of next generation antibody therapeutics using ART-Ig technology are described. Recent updates on bispecific antibodies against factor IXa and factor X for the treatment of hemophilia A, bispecific antibodies against a tumor specific antigen and T cell surface marker CD3 for cancer immunotherapy, and bispecific antibodies against two different epitopes of soluble antigen with pH-dependent binding property for the elimination of soluble antigen from plasma are also described.

  10. Feasibility and application on steam injector for next-generation reactor

    International Nuclear Information System (INIS)

    Narabayashi, Tadashi; Ishiyama, Takenori; Miyano, Hiroshi; Nei, Hiromichi; Shioiri, Akio

    1991-01-01

    A feasibility study has been conducted on a steam injector for a next generation reactor. The steam injector is a simple, compact, passive water injection device, suitable for example for a Passive Core Injection System (PCIS) or a Passive Containment Cooling System (PCCS), because it starts up easily without AC power. An analysis model for the steam injector characteristics has been developed and investigated with a visualized fundamental test of a two-stage Steam Injector System (SIS) for the PCIS and a one-stage low-pressure SIS for the PCCS. The test results showed good agreement with the analysis results. Both the analysis and the test results showed that the SIS can work over a very wide range of steam pressures and is applicable to the PCIS or PCCS of next generation reactors. (author)

  11. Results of Analyses of the Next Generation Solvent for Parsons

    International Nuclear Information System (INIS)

    Peters, T.; Washington, A.; Fink, S.

    2012-01-01

    Savannah River National Laboratory (SRNL) prepared a nominal 150 gallon batch of Next Generation Solvent (NGS) for Parsons. This material was then analyzed and tested for cesium mass transfer efficiency. The bulk of the results indicate that the solvent is qualified as acceptable for use in the upcoming pilot-scale testing at Parsons Technology Center. This report describes the analysis and testing of a batch of Next Generation Solvent (NGS) prepared in support of pilot-scale testing in the Parsons Technology Center. A total of ∼150 gallons of NGS solvent was prepared in late November of 2011. Details for the work are contained in a controlled laboratory notebook. Analysis of the Parsons NGS solvent indicates that the material is acceptable for use. SRNL is continuing to improve the analytical method for the guanidine.

  12. Next-generation text-mining mediated generation of chemical response-specific gene sets for interpretation of gene expression data

    NARCIS (Netherlands)

    K.M. Hettne (Kristina); J. Boorsma (Jeffrey); D.A.M. van Dartel (Dorien A M); J.J. Goeman (Jelle); E.C. de Jong (Esther); A.H. Piersma (Aldert); R.H. Stierum (Rob); J. Kleinjans (Jos); J.A. Kors (Jan)

    2013-01-01

    Background: Availability of chemical response-specific lists of genes (gene sets) for pharmacological and/or toxic effect prediction for compounds is limited. We hypothesize that more gene sets can be created by next-generation text mining (next-gen TM), and that these can be used with

  13. Seamless-merging-oriented parallel inverse lithography technology

    International Nuclear Information System (INIS)

    Yang Yiwei; Shi Zheng; Shen Shanhu

    2009-01-01

    Inverse lithography technology (ILT), a promising resolution enhancement technology (RET) for next-generation IC manufacturing, has the capability to push lithography to its limit. However, existing ILT methods are either time-consuming, because the whole layout is optimized in a single process, or not accurate enough, because blocks are simply merged in the parallel process. The seamless-merging-oriented parallel ILT method proposed in this paper is fast because of the parallel process; most importantly, convergence enhancement penalty terms (CEPT) introduced into the parallel ILT optimization take the environment of each work unit into consideration, as well as environmental change, through target updating. This method increases the similarity of the overlapped area between guard-bands and work units, makes the merging process nearly seamless, and hence reduces hot-spots. The experimental results show that seamless-merging-oriented parallel ILT not only accelerates the optimization process but also significantly improves the quality of ILT.
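    The guard-band idea can be illustrated with a schematic one-dimensional sketch (illustrative only: the real ILT cost function, the CEPT formulation and the mask representation are far more involved, and all names and parameters below are assumptions). Each work unit is optimized together with an overlapping guard band, and a quadratic penalty keeps the guard-band values close to the frozen surrounding mask so that neighbouring units merge almost seamlessly; the per-unit problems are independent and could run in parallel.

      import numpy as np

      def optimize_tiles(target, tile=32, guard=4, penalty=1.0,
                         outer_iters=10, inner_steps=20, lr=0.2):
          """Schematic block-parallel optimization with overlapping guard bands."""
          n = target.shape[0]
          mask = np.zeros(n)
          for _ in range(outer_iters):
              new_mask = mask.copy()
              for s in range(0, n, tile):                     # independent work units
                  lo, hi = max(0, s - guard), min(n, s + tile + guard)
                  block = mask[lo:hi].copy()
                  env = block.copy()                          # frozen environment
                  in_guard = np.ones(hi - lo, dtype=bool)
                  in_guard[s - lo:s - lo + min(tile, n - s)] = False
                  for _ in range(inner_steps):
                      grad = block - target[lo:hi]            # toy data-fidelity term
                      # CEPT-like term: keep guard pixels consistent with surroundings.
                      grad[in_guard] += penalty * (block - env)[in_guard]
                      block -= lr * grad
                  core = slice(s - lo, s - lo + min(tile, n - s))
                  new_mask[s:s + min(tile, n - s)] = block[core]
              mask = new_mask
          return mask

      target = (np.sin(np.linspace(0, 6 * np.pi, 128)) > 0).astype(float)
      print("max error:", float(np.abs(optimize_tiles(target) - target).max()))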

  14. The "Next Generation Science Standards" and the Life Sciences

    Science.gov (United States)

    Bybee, Rodger W.

    2013-01-01

    Publication of the "Next Generation Science Standards" will be just short of two decades since publication of the "National Science Education Standards" (NRC 1996). In that time, biology and science education communities have advanced, and the new standards will reflect that progress (NRC 1999, 2007, 2009; Kress and Barrett…

  15. Electric vehicle charge patterns and the electricity generation mix and competitiveness of next generation vehicles

    International Nuclear Information System (INIS)

    Masuta, Taisuke; Murata, Akinobu; Endo, Eiichi

    2014-01-01

    Highlights: • The energy system of the whole of Japan is analyzed in this study. • An advanced model based on MARKAL is used for the energy system analysis. • The impact of charge patterns of EVs on the electricity generation mix is evaluated. • Technology competitiveness of the next generation vehicles is also evaluated. - Abstract: The nuclear accident of 2011 brought about a reconsideration of the future electricity generation mix of power systems in Japan. A debate on whether to phase out nuclear power plants and replace them with renewable energy sources is taking place. Demand-side management becomes increasingly important in future Japanese power systems with a large-scale integration of renewable energy sources. This paper considers the charge control of electric vehicles (EVs) through demand-side management. There have been many studies of the control or operation methods of EVs known as vehicle-to-grid (V2G), and it is important to evaluate both their short-term and long-term operation. In this study, we employ an energy system model to evaluate the impact of the charge patterns of EVs on both the electricity generation mix and the technology competitiveness of the next generation vehicles. An advanced energy system model based on Market Allocation (MARKAL) is used to consider power system control in detail

  16. Modeling the video distribution link in the Next Generation Optical Access Networks

    International Nuclear Information System (INIS)

    Amaya, F; Cardenas, A; Tafur, I

    2011-01-01

    In this work we present a model for the design and optimization of the video distribution link in the next generation optical access network. We analyze the video distribution performance in an SCM-WDM link, including noise, distortion and fiber optic nonlinearities. Additionally, the model considers the effect of distributed Raman amplification, used to extend the capacity and the reach of the optical link. The model uses the nonlinear Schrödinger equation to obtain the capacity limits and design constraints of next generation optical access networks.
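    For context, the propagation model named above is usually written in the following standard form for the slowly varying field envelope A(z, t); the exact variant used by the authors is not reproduced here, and the distributed-gain term g_R(z) is only indicative of how Raman amplification can enter:

      \frac{\partial A}{\partial z} \;=\; -\frac{\alpha - g_R(z)}{2}\,A \;-\; i\,\frac{\beta_2}{2}\,\frac{\partial^2 A}{\partial t^2} \;+\; i\,\gamma\,|A|^2 A,

    where α is the fiber attenuation, β_2 the group-velocity dispersion and γ the Kerr nonlinearity coefficient.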

  17. Three-dimensional gyrokinetic particle-in-cell simulation of plasmas on a massively parallel computer: Final report on LDRD Core Competency Project, FY 1991--FY 1993

    International Nuclear Information System (INIS)

    Byers, J.A.; Williams, T.J.; Cohen, B.I.; Dimits, A.M.

    1994-01-01

    One of the programs of the Magnetic Fusion Energy (MFE) Theory and Computations Program is studying the anomalous transport of thermal energy across the field lines in the core of a tokamak. We use the method of gyrokinetic particle-in-cell simulation in this study. For this LDRD project we employed massively parallel processing, new algorithms, and new formal techniques to improve this research. Specifically, we sought to take steps toward: studying experimentally relevant parameters in our simulations, learning parallel computing so that it becomes a resource for our group, and achieving a 100× speedup over our starting-point Cray-2 simulation code's performance

  18. The study of methodologies of software development for the next generation of HEP detector software

    International Nuclear Information System (INIS)

    Ding Yuzheng; Wang Taijie; Dai Guiliang

    1997-01-01

    The authors discuss the characteristics of the next generation of HEP (High Energy Physics) detector software, and describe the basic strategy for using object-oriented methodologies, languages and tools in the development of the next generation of HEP detector software

  19. Challenges in the Setup of Large-scale Next-Generation Sequencing Analysis Workflows

    Directory of Open Access Journals (Sweden)

    Pranav Kulkarni

    Full Text Available While Next-Generation Sequencing (NGS can now be considered an established analysis technology for research applications across the life sciences, the analysis workflows still require substantial bioinformatics expertise. Typical challenges include the appropriate selection of analytical software tools, the speedup of the overall procedure using HPC parallelization and acceleration technology, the development of automation strategies, data storage solutions and finally the development of methods for full exploitation of the analysis results across multiple experimental conditions. Recently, NGS has begun to expand into clinical environments, where it facilitates diagnostics enabling personalized therapeutic approaches, but is also accompanied by new technological, legal and ethical challenges. There are probably as many overall concepts for the analysis of the data as there are academic research institutions. Among these concepts are, for instance, complex IT architectures developed in-house, ready-to-use technologies installed on-site as well as comprehensive Everything as a Service (XaaS solutions. In this mini-review, we summarize the key points to consider in the setup of the analysis architectures, mostly for scientific rather than diagnostic purposes, and provide an overview of the current state of the art and challenges of the field.

  20. Next-generation text-mining mediated generation of chemical response-specific gene sets for interpretation of gene expression data

    NARCIS (Netherlands)

    Hettne, K.M.; Boorsma, A.; Dartel, D.A. van; Goeman, J.J.; Jong, E. de; Piersma, A.H.; Stierum, R.H.; Kleinjans, J.C.; Kors, J.A.

    2013-01-01

    BACKGROUND: Availability of chemical response-specific lists of genes (gene sets) for pharmacological and/or toxic effect prediction for compounds is limited. We hypothesize that more gene sets can be created by next-generation text mining (next-gen TM), and that these can be used with gene set

  1. Next-generation text-mining mediated generation of chemical response-specific gene sets for interpretation of gene expression data

    NARCIS (Netherlands)

    Hettne, K.M.; Boorsma, A.; Dartel, van D.A.M.; Goeman, J.J.; Jong, de E.; Piersma, A.H.; Stierum, R.H.; Kleinjans, J.C.; Kors, J.A.

    2013-01-01

    Background: Availability of chemical response-specific lists of genes (gene sets) for pharmacological and/or toxic effect prediction for compounds is limited. We hypothesize that more gene sets can be created by next-generation text mining (next-gen TM), and that these can be used with gene set

  2. Multiple Access Techniques for Next Generation Wireless: Recent Advances and Future Perspectives

    Directory of Open Access Journals (Sweden)

    Shree Krishna Sharma

    2016-01-01

    Full Text Available Advances in multiple access techniques have been one of the key drivers in moving from one cellular generation to another. Starting from the first generation, several multiple access techniques have been explored in different generations and various emerging multiplexing/multiple access techniques are being investigated for the next generation of cellular networks. In this context, this paper first provides a detailed review of the existing Space Division Multiple Access (SDMA) related work. Subsequently, it highlights the main features and the drawbacks of various existing and emerging multiplexing/multiple access techniques. Finally, we propose a novel concept of clustered orthogonal signature division multiple access for the next generation of cellular networks. The proposed concept envisions employing joint antenna coding in order to enhance the orthogonality of SDMA beams with the objective of enhancing the spectral efficiency of future cellular networks.

  3. Next generation science standards available for comment

    Science.gov (United States)

    Asher, Pranoti

    2012-05-01

    The first public draft of the Next Generation Science Standards (NGSS) is now available for public comment. Feedback on the standards is sought from people who have a stake in science education, including individuals in the K-12, higher education, business, and research communities. Development of NGSS is a state-led effort to define the content and practices students need to learn from kindergarten through high school. NGSS will be based on the U.S. National Research Council's report Framework for K-12 Science Education.

  4. Design Principles of Next-Generation Digital Gaming for Education.

    Science.gov (United States)

    Squire, Kurt; Jenkins, Henry; Holland, Walter; Miller, Heather; O'Driscoll, Alice; Tan, Katie Philip; Todd, Katie.

    2003-01-01

    Discusses the rapid growth of digital games, describes research at MIT that is exploring the potential of digital games for supporting learning, and offers hypotheses about the design of next-generation educational video and computer games. Highlights include simulations and games; and design principles, including context and using information to…

  5. Identification and Characterization of Key Human Performance Issues and Research in the Next Generation Air Transportation System (NextGen)

    Science.gov (United States)

    Lee, Paul U.; Sheridan, Tom; Poage, james L.; Martin, Lynne Hazel; Jobe, Kimberly K.

    2010-01-01

    This report identifies key human-performance-related issues associated with Next Generation Air Transportation System (NextGen) research in the NASA NextGen-Airspace Project. Four Research Focus Areas (RFAs) in the NextGen-Airspace Project - namely Separation Assurance (SA), Airspace Super Density Operations (ASDO), Traffic Flow Management (TFM), and Dynamic Airspace Configuration (DAC) - were examined closely. In the course of the research, it was determined that the identified human performance issues needed to be analyzed in the context of NextGen operations rather than through basic human factors research. The main gaps in human factors research in NextGen were found in the need for accurate identification of key human-systems related issues within the context of specific NextGen concepts and better design of the operational requirements for those concepts. By focusing on human-system related issues for individual concepts, key human performance issues for the four RFAs were identified and described in this report. In addition, mixed equipage airspace with components of two RFAs was characterized to illustrate potential human performance issues that arise from the integration of multiple concepts.

  6. 75 FR 82387 - Next Generation Risk Assessment Public Dialogue Conference

    Science.gov (United States)

    2010-12-30

    ... ENVIRONMENTAL PROTECTION AGENCY [FRL-9246-7] Next Generation Risk Assessment Public Dialogue Conference AGENCY: Environmental Protection Agency (EPA). ACTION: Notice of Public Dialogue Conference... methods with the National Institutes of Environmental Health Sciences' National Toxicology Program, Center...

  7. TIA: algorithms for development of identity-linked SNP islands for analysis by massively parallel DNA sequencing.

    Science.gov (United States)

    Farris, M Heath; Scott, Andrew R; Texter, Pamela A; Bartlett, Marta; Coleman, Patricia; Masters, David

    2018-04-11

    Single nucleotide polymorphisms (SNPs) located within the human genome have been shown to have utility as markers of identity in the differentiation of DNA from individual contributors. Massively parallel DNA sequencing (MPS) technologies and human genome SNP databases allow for the design of suites of identity-linked target regions, amenable to sequencing in a multiplexed and massively parallel manner. Therefore, tools are needed for leveraging the genotypic information found within SNP databases for the discovery of genomic targets that can be evaluated on MPS platforms. The SNP island target identification algorithm (TIA) was developed as a user-tunable system to leverage SNP information within databases. Using data within the 1000 Genomes Project SNP database, human genome regions were identified that contain globally ubiquitous identity-linked SNPs and that were responsive to targeted resequencing on MPS platforms. Algorithmic filters were used to exclude target regions that did not conform to user-tunable SNP island target characteristics. To validate the accuracy of TIA for discovering these identity-linked SNP islands within the human genome, SNP island target regions were amplified from 70 contributor genomic DNA samples using the polymerase chain reaction. Multiplexed amplicons were sequenced using the Illumina MiSeq platform, and the resulting sequences were analyzed for SNP variations. 166 putative identity-linked SNPs were targeted in the identified genomic regions. Of the 309 SNPs that provided discerning power across individual SNP profiles, 74 previously undefined SNPs were identified during evaluation of targets from individual genomes. Overall, DNA samples of 70 individuals were uniquely identified using a subset of the suite of identity-linked SNP islands. TIA offers a tunable genome search tool for the discovery of targeted genomic regions that are scalable in the population frequency and numbers of SNPs contained within the SNP island regions
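    A minimal sketch of the windowing-and-filtering idea behind a tool like TIA is shown below (illustrative only: the island definition, the field names and all thresholds are assumptions, not the published algorithm). SNPs along a chromosome are grouped into candidate islands no wider than roughly one amplicon, and only islands that satisfy user-tunable criteria such as a minimum SNP count and a minimum minor-allele frequency are kept.

      from dataclasses import dataclass

      @dataclass
      class Snp:
          pos: int      # chromosomal position
          maf: float    # global minor-allele frequency

      def find_snp_islands(snps, max_span=250, min_snps=3, min_maf=0.2):
          """Group position-sorted SNPs into islands and apply tunable filters."""
          islands, current = [], []
          for snp in snps:
              # Close the current island when the next SNP would stretch it
              # beyond roughly one amplicon length.
              if current and snp.pos - current[0].pos > max_span:
                  islands.append(current)
                  current = []
              current.append(snp)
          if current:
              islands.append(current)
          # Discard islands with too few SNPs or insufficiently polymorphic SNPs.
          return [isl for isl in islands
                  if len(isl) >= min_snps and min(s.maf for s in isl) >= min_maf]

      snps = [Snp(100, 0.45), Snp(160, 0.30), Snp(220, 0.41),
              Snp(900, 0.05), Snp(1400, 0.38), Snp(1460, 0.33), Snp(1500, 0.44)]
      for isl in find_snp_islands(snps):
          print(f"island {isl[0].pos}-{isl[-1].pos} with {len(isl)} SNPs")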

  8. The challenges of M2M massive access in wireless cellular networks

    Directory of Open Access Journals (Sweden)

    Andrea Biral

    2015-02-01

    Full Text Available The next generation of communication systems, which is commonly referred to as 5G, is expected to support, besides the traditional voice and data services, new communication paradigms, such as Internet of Things (IoT and Machine-to-Machine (M2M services, which involve communication between Machine-Type Devices (MTDs in a fully automated fashion, thus, without or with minimal human intervention. Although the general requirements of 5G systems are progressively taking shape, the technological issues raised by such a vision are still partially unclear. Nonetheless, general consensus has been reached upon some specific challenges, such as the need for 5G wireless access networks to support massive access by MTDs, as a consequence of the proliferation of M2M services. In this paper, we describe the main challenges raised by the M2M vision, focusing in particular on the problems related to the support of massive MTD access in current cellular communication systems. Then we analyze the most common approaches proposed in the literature to enable the coexistence of conventional and M2M services in the current and next generation of cellular wireless systems. We finally conclude by pointing out the research challenges that require further investigation in order to provide full support to the M2M paradigm.

  9. Quantitative miRNA expression analysis: comparing microarrays with next-generation sequencing

    DEFF Research Database (Denmark)

    Willenbrock, Hanni; Salomon, Jesper; Søkilde, Rolf

    2009-01-01

    Recently, next-generation sequencing has been introduced as a promising, new platform for assessing the copy number of transcripts, while the existing microarray technology is considered less reliable for absolute, quantitative expression measurements. Nonetheless, so far, results from the two technologies have only been compared based on biological data, leading to the conclusion that, although they are somewhat correlated, expression values differ significantly. Here, we use synthetic RNA samples, resembling human microRNA samples, to find that microarray expression measures actually correlate better with sample RNA content than expression measures obtained from sequencing data. In addition, microarrays appear highly sensitive and perform equivalently to next-generation sequencing in terms of reproducibility and relative ratio quantification.

  10. RIPng- A next Generation Routing Protocal (IPv6) | Falaye | Journal ...

    African Journals Online (AJOL)

    ... Information Protocol Next Generation (RIPng) owing to the current depletion rate of IPv4. ... that support the Internet Protocol Version 6 (IPv6).addressing scheme. ... A brief history is given; its various versions are discussed, and detailed ...

  11. Clinical utility of a 377 gene custom next-generation sequencing ...

    Indian Academy of Sciences (India)

    JEN BEVILACQUA

    2017-07-26

    Jul 26, 2017 ... Clinical utility of a 377 gene custom next-generation sequencing epilepsy panel ... number of genes, making it a very attractive option for a condition as .... clinical value of various test offerings to guide decision making.

  12. Optimum fuel allocation in parallel steam generator systems

    International Nuclear Information System (INIS)

    Bollettini, U.; Cangioli, E.; Cerri, G.; Rome Univ. 'La Sapienza'; Trento Univ.

    1991-01-01

    An optimization procedure was developed to allocate fuels among parallel steam generators. The procedure takes into account the level of performance deterioration connected with the loading history (fossil fuel allocation and maintenance) of each steam generator. The optimization objective function is the system hourly cost, with the overall steam demand satisfied as a constraint. Costs are due to fuel and electric power supply as well as to plant depreciation and maintenance. In order to easily update the state of each steam generator, particular care was taken in the general formulation of the steam production function by adopting a special efficiency-load curve description based on a deterioration scaling parameter. The influence of the characteristic time interval length on the optimum operation result is investigated. A special implementation of the method based on minimum cost paths is suggested
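    A hedged numerical sketch of the allocation problem is given below (the cost model, the efficiency-load curve and the deterioration scaling are simplified placeholders, not the authors' formulation, and all names and figures are assumptions): the total hourly fuel cost of several boilers is minimized subject to the constraint that their combined steam output meets the overall demand, with each boiler's efficiency curve scaled by its deterioration parameter.

      import numpy as np
      from scipy.optimize import minimize

      # Per-boiler placeholder data: fuel price, peak efficiency, deterioration factor.
      fuel_price = np.array([20.0, 22.0, 18.0])   # cost per unit of fuel energy
      eta_max    = np.array([0.90, 0.88, 0.85])   # peak efficiency when new
      deterior   = np.array([1.00, 0.95, 0.90])   # 1.0 = new, <1.0 = degraded
      cap        = np.array([80.0, 80.0, 80.0])   # per-boiler steam capacity
      demand     = 150.0                          # required steam output per hour

      def efficiency(load_frac, k):
          """Simple concave efficiency-load curve scaled by a deterioration factor."""
          return deterior[k] * eta_max[k] * (1.0 - 0.3 * (1.0 - load_frac) ** 2)

      def hourly_cost(steam):
          """Hourly fuel cost: steam output divided by efficiency, priced per boiler."""
          return sum(fuel_price[k] * q / efficiency(q / cap[k], k)
                     for k, q in enumerate(steam))

      res = minimize(
          hourly_cost,
          x0=np.full(3, demand / 3),
          bounds=[(1e-3, c) for c in cap],
          constraints=[{"type": "eq", "fun": lambda q: q.sum() - demand}],
      )
      print("allocation:", np.round(res.x, 1), "hourly cost:", round(float(res.fun), 1))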

  13. AgMIP: Next Generation Models and Assessments

    Science.gov (United States)

    Rosenzweig, C.

    2014-12-01

    Next steps in developing next-generation crop models fall into several categories: significant improvements in simulation of important crop processes and responses to stress; extension from simplified crop models to complex cropping systems models; and scaling up from site-based models to landscape, national, continental, and global scales. Crop processes that require major leaps in understanding and simulation in order to narrow uncertainties around how crops will respond to changing atmospheric conditions include genetics; carbon, temperature, water, and nitrogen; ozone; and nutrition. The field of crop modeling has been built on a single crop-by-crop approach. It is now time to create a new paradigm, moving from 'crop' to 'cropping system.' A first step is to set up the simulation technology so that modelers can rapidly incorporate multiple crops within fields, and multiple crops over time. Then the response of these more complex cropping systems can be tested under different sustainable intensification management strategies utilizing the updated simulation environments. Model improvements for diseases, pests, and weeds include developing process-based models for important diseases, frameworks for coupling air-borne diseases to crop models, gathering significantly more data on crop impacts, and enabling the evaluation of pest management strategies. Most smallholder farming in the world involves integrated crop-livestock systems that cannot be represented by crop modeling alone. Thus, next-generation cropping system models need to include key linkages to livestock. Livestock linkages to be incorporated include growth and productivity models for grasslands and rangelands as well as the usual annual crops. There are several approaches for scaling up, including use of gridded models and development of simpler quasi-empirical models for landscape-scale analysis. On the assessment side, AgMIP is leading a community process for coordinated contributions to IPCC AR6

  14. The Next Generation Science Standards: The Features and Challenges

    Science.gov (United States)

    Pruitt, Stephen L.

    2014-01-01

    Beginning in January of 2010, the Carnegie Corporation of New York funded a two-step process to develop a new set of state developed science standards intended to prepare students for college and career readiness in science. These new internationally benchmarked science standards, the Next Generation Science Standards (NGSS) were completed in…

  15. Raytheon's next generation compact inline cryocooler architecture

    Science.gov (United States)

    Schaefer, B. R.; Bellis, L.; Ellis, M. J.; Conrad, T.

    2014-01-01

    Since the 1970s, Raytheon has developed, built, tested and integrated high performance cryocoolers. Our versatile designs for single and multi-stage cryocoolers provide reliable operation for temperatures from 10 to 200 Kelvin with power levels ranging from 50 W to nearly 600 W. These advanced cryocoolers incorporate clearance seals, flexure suspensions, hermetic housings and dynamic balancing to provide long service life and reliable operation in all relevant environments. Today, sensors face a multitude of cryocooler integration challenges such as exported disturbance, efficiency, scalability, maturity, and cost. As a result, cryocooler selection is application dependent, oftentimes requiring extensive trade studies to determine the most suitable architecture. To optimally meet the needs of next generation passive IR sensors, the Compact Inline Raytheon Stirling 1-Stage (CI-RS1), Compact Inline Raytheon Single Stage Pulse Tube (CI-RP1) and Compact Inline Raytheon Hybrid Stirling/Pulse Tube 2-Stage (CI-RSP2) cryocoolers are being developed to satisfy this suite of requirements. This lightweight, compact, efficient, low vibration cryocooler combines proven 1-stage (RS1 or RP1) and 2-stage (RSP2) cold-head architectures with an inventive set of warm-end mechanisms into a single cooler module, allowing the moving mechanisms for the compressor and the Stirling displacer to be consolidated onto a common axis and in a common working volume. The CI cryocooler is a significant departure from the current Stirling cryocoolers in which the compressor mechanisms are remote from the Stirling displacer mechanism. Placing all of the mechanisms in a single volume and on a single axis provides benefits in terms of package size (30% reduction), mass (30% reduction), thermodynamic efficiency (>20% improvement) and exported vibration performance (≤25 mN peak in all three orthogonal axes at frequencies from 1 to 500 Hz). The main benefit of axial symmetry is that proven balancing

  16. Raytheon's next generation compact inline cryocooler architecture

    International Nuclear Information System (INIS)

    Schaefer, B. R.; Bellis, L.; Ellis, M. J.; Conrad, T.

    2014-01-01

    Since the 1970s, Raytheon has developed, built, tested and integrated high performance cryocoolers. Our versatile designs for single and multi-stage cryocoolers provide reliable operation for temperatures from 10 to 200 Kelvin with power levels ranging from 50 W to nearly 600 W. These advanced cryocoolers incorporate clearance seals, flexure suspensions, hermetic housings and dynamic balancing to provide long service life and reliable operation in all relevant environments. Today, sensors face a multitude of cryocooler integration challenges such as exported disturbance, efficiency, scalability, maturity, and cost. As a result, cryocooler selection is application dependent, oftentimes requiring extensive trade studies to determine the most suitable architecture. To optimally meet the needs of next generation passive IR sensors, the Compact Inline Raytheon Stirling 1-Stage (CI-RS1), Compact Inline Raytheon Single Stage Pulse Tube (CI-RP1) and Compact Inline Raytheon Hybrid Stirling/Pulse Tube 2-Stage (CI-RSP2) cryocoolers are being developed to satisfy this suite of requirements. This lightweight, compact, efficient, low vibration cryocooler combines proven 1-stage (RS1 or RP1) and 2-stage (RSP2) cold-head architectures with an inventive set of warm-end mechanisms into a single cooler module, allowing the moving mechanisms for the compressor and the Stirling displacer to be consolidated onto a common axis and in a common working volume. The CI cryocooler is a significant departure from the current Stirling cryocoolers in which the compressor mechanisms are remote from the Stirling displacer mechanism. Placing all of the mechanisms in a single volume and on a single axis provides benefits in terms of package size (30% reduction), mass (30% reduction), thermodynamic efficiency (>20% improvement) and exported vibration performance (≤25 mN peak in all three orthogonal axes at frequencies from 1 to 500 Hz). The main benefit of axial symmetry is that proven balancing

  17. Career Advancement: Meeting the Challenges Confronting the Next Generation of Endocrinologists and Endocrine Researchers.

    Science.gov (United States)

    Santen, Richard J; Joham, Anju; Fishbein, Lauren; Vella, Kristen R; Ebeling, Peter R; Gibson-Helm, Melanie; Teede, Helena

    2016-12-01

    Challenges and opportunities face the next generation (Next-Gen) of endocrine researchers and clinicians, the lifeblood of the field of endocrinology for the future. A symposium jointly sponsored by The Endocrine Society and the Endocrine Society of Australia was convened to discuss approaches to addressing the present and future Next-Gen needs. Data were collected by literature review, assessment of previously completed questionnaires, commissioning of a new questionnaire, and summarization of symposium discussions. Next-Gen endocrine researchers face diminishing grant funding in inflation-adjusted terms. The average age of individuals being awarded their first independent investigator funding has increased to age 45 years. For clinicians, a workforce gap exists between endocrinologists needed and those currently trained. Clinicians in practice are increasingly becoming employees of integrated hospital systems, resulting in greater time spent on nonclinical issues. Workforce data and published reviews identify challenges specifically related to early career women in endocrinology. Recommendations to address these issues encompassed the areas of grant support for research, mentoring, education, templates for career development, specific programs offered to Next-Gen members by senior colleagues as outlined in the text, networking, team science, and life/work integration. Endocrine societies focusing on Next-Gen members provide a powerful mechanism to support these critical areas. A concerted effort to empower, train, and support the next generation of clinical endocrinologists and endocrine researchers is necessary to ensure the viability and vibrancy of our discipline and to optimize our contributions to improving health outcomes. Collaborative engagement of endocrine societies globally will be necessary to support our next generation moving forward.

  18. "ASTRO 101" Course Materials 2.0: Next Generation Lecture Tutorials and Beyond

    Science.gov (United States)

    Slater, Stephanie; Grazier, Kevin

    2015-01-01

    Early efforts to create course materials were often local in scale and were based on "gut instinct," classroom experience, and observation. While subsequent efforts were often based on those same instincts and observations of classrooms, they also incorporated the results of many years of education research. These "second generation" course materials, such as lecture tutorials, relied heavily on research indicating that instructors need to actively engage students in the learning process. While imperfect, these curricular innovations have provided evidence that research-based materials can be constructed, can easily be disseminated to a broad audience, and can provide measurable improvement in student learning across many settings. In order to improve upon this prior work, next generation materials must build upon the strengths of these innovations while engineering in findings from education research, cognitive science, and instructor feedback. A next wave of materials, including a set of next generation lecture tutorials, has been constructed with attention to the body of research on student motivation and cognitive load, and these materials are responsive to our body of knowledge on learning difficulties related to specific content in the domain. From instructor feedback, these materials have been constructed to have broader coverage of the materials typically taught in an ASTRO 101 course, to take less class time, and to be more affordable for students. This next generation of lecture tutorials may serve as a template of the ways in which course materials can be reengineered to respond to current instructor and student needs.

  19. Next generation Zero-Code control system UI

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Developing ergonomic user interfaces for control systems is challenging, especially during machine upgrade and commissioning, where several small changes may suddenly be required. Zero-code systems, such as *Inspector*, provide agile features for creating and maintaining control system interfaces. Moreover, these next generation Zero-code systems bring simplicity and uniformity and break the boundaries between Users and Developers. In this talk we present *Inspector*, a CERN-made Zero-code application development system, and we introduce the major differences and advantages of using Zero-code control systems to develop operational UIs.

  20. Beamstrahlung spectra in next generation linear colliders

    Energy Technology Data Exchange (ETDEWEB)

    Barklow, T.; Chen, P. (Stanford Linear Accelerator Center, Menlo Park, CA (United States)); Kozanecki, W. (DAPNIA-SPP, CEN-Saclay (France))

    1992-04-01

    For the next generation of linear colliders, the energy loss due to beamstrahlung during the collision of the e⁺e⁻ beams is expected to substantially influence the effective center-of-mass energy distribution of the colliding particles. In this paper, we first derive analytical formulae for the electron and photon energy spectra under multiple beamstrahlung processes, and for the e⁺e⁻ and γγ differential luminosities. We then apply our formulation to various classes of 500 GeV e⁺e⁻ linear collider designs currently under study.
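    For orientation (standard linear-collider notation rather than the formulae derived in the paper), the severity of beamstrahlung is commonly characterized by the average beamstrahlung parameter

      \Upsilon_{\mathrm{avg}} \;\approx\; \frac{5}{6}\,\frac{N\,r_e^{2}\,\gamma}{\alpha\,\sigma_z\,(\sigma_x+\sigma_y)},

    where N is the bunch population, r_e the classical electron radius, γ the beam Lorentz factor, α the fine-structure constant and σ_x, σ_y, σ_z the bunch dimensions at the interaction point; the fractional energy loss, and hence the distortion of the luminosity spectrum, grows rapidly with Υ.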