WorldWideScience

Sample records for supercomputers array processes

  1. Advanced parallel processing with supercomputer architectures

    International Nuclear Information System (INIS)

    Hwang, K.

    1987-01-01

This paper investigates advanced parallel processing techniques and innovative hardware/software architectures that can be applied to boost the performance of supercomputers. Critical issues in architectural choices, parallel languages, compiling techniques, resource management, concurrency control, programming environments, parallel algorithms, and performance enhancement methods are examined, and the best answers are presented. The authors cover advanced processing techniques suitable for supercomputers, high-end mainframes, minisupers, and array processors. The coverage emphasizes vectorization, multitasking, multiprocessing, and distributed computing. In order to achieve these operation modes, parallel languages, smart compilers, synchronization mechanisms, load balancing methods, the mapping of parallel algorithms, operating system functions, application libraries, and multidiscipline interactions are investigated to ensure high performance. At the end, they assess the potential of optical and neural technologies for developing future supercomputers.

  2. SUPERCOMPUTERS FOR AIDING ECONOMIC PROCESSES WITH REFERENCE TO THE FINANCIAL SECTOR

    Directory of Open Access Journals (Sweden)

    Jerzy Balicki

    2014-12-01

Full Text Available The article discusses the use of supercomputers to support business processes, with particular emphasis on the financial sector. Reference is made to selected projects that support economic development. In particular, we propose the use of supercomputers to perform artificial intelligence methods in banking. The proposed methods, combined with modern technology, enable a significant increase in the competitiveness of enterprises and banks by adding new functionality.

  3. ParaBTM: A Parallel Processing Framework for Biomedical Text Mining on Supercomputers.

    Science.gov (United States)

    Xing, Yuting; Wu, Chengkun; Yang, Xi; Wang, Wei; Zhu, En; Yin, Jianping

    2018-04-27

    A prevailing way of extracting valuable information from biomedical literature is to apply text mining methods on unstructured texts. However, the massive amount of literature that needs to be analyzed poses a big data challenge to the processing efficiency of text mining. In this paper, we address this challenge by introducing parallel processing on a supercomputer. We developed paraBTM, a runnable framework that enables parallel text mining on the Tianhe-2 supercomputer. It employs a low-cost yet effective load balancing strategy to maximize the efficiency of parallel processing. We evaluated the performance of paraBTM on several datasets, utilizing three types of named entity recognition tasks as demonstration. Results show that, in most cases, the processing efficiency can be greatly improved with parallel processing, and the proposed load balancing strategy is simple and effective. In addition, our framework can be readily applied to other tasks of biomedical text mining besides NER.
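The paper describes paraBTM's load balancing only as "low-cost yet effective." As one illustration of what such a strategy can look like, the sketch below uses greedy longest-processing-time assignment of documents to workers, with document size standing in for processing cost. The function name and cost model are assumptions for illustration, not paraBTM's actual implementation.

```python
import heapq

def balance(doc_sizes, n_workers):
    """Greedy LPT load balancing: assign each document, largest first,
    to the currently least-loaded worker. Returns one list of document
    indices per worker."""
    heap = [(0, w) for w in range(n_workers)]  # (current load, worker id)
    heapq.heapify(heap)
    bins = [[] for _ in range(n_workers)]
    # Visit documents in order of decreasing size, keeping original indices.
    for idx in sorted(range(len(doc_sizes)), key=lambda i: -doc_sizes[i]):
        load, w = heapq.heappop(heap)
        bins[w].append(idx)
        heapq.heappush(heap, (load + doc_sizes[idx], w))
    return bins
```

Each worker then runs its own NER tasks independently, so the slowest worker, not the total corpus size, bounds the parallel running time.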

  4. Introduction to Reconfigurable Supercomputing

    CERN Document Server

    Lanzagorta, Marco; Rosenberg, Robert

    2010-01-01

This book covers technologies, applications, tools, languages, procedures, advantages, and disadvantages of reconfigurable supercomputing using Field Programmable Gate Arrays (FPGAs). The target audience is the community of users of High Performance Computers (HPC) who may benefit from porting their applications into a reconfigurable environment. As such, this book is intended to guide the HPC user through the many algorithmic considerations, hardware alternatives, usability issues, programming languages, and design tools that need to be understood before embarking on the creation of reconfigurable ...

  5. What is supercomputing?

    International Nuclear Information System (INIS)

    Asai, Kiyoshi

    1992-01-01

Supercomputing means high-speed computation using a supercomputer. Supercomputers and the technical term ''supercomputing'' have come into wide use over the past ten years. The performances of the main computers installed so far in the Japan Atomic Energy Research Institute are compared. There are two methods of increasing computing speed with existing circuit elements: parallel processor systems and vector processor systems. CRAY-1 was the first successful vector computer. Supercomputing technology was first applied to meteorological organizations in foreign countries, and to aviation and atomic energy research institutes in Japan. Supercomputing for atomic energy depends on the trend of technical development in atomic energy, and its contents divide into increasing the computing speed of existing simulation calculations and accelerating new technical developments in atomic energy. Examples of supercomputing in the Japan Atomic Energy Research Institute are reported. (K.I.)

  6. Building more powerful less expensive supercomputers using Processing-In-Memory (PIM) LDRD final report.

    Energy Technology Data Exchange (ETDEWEB)

    Murphy, Richard C.

    2009-09-01

This report details the accomplishments of the 'Building More Powerful Less Expensive Supercomputers Using Processing-In-Memory (PIM)' LDRD ('PIM LDRD', number 105809) for FY07-FY09. Latency dominates all levels of supercomputer design. Within a node, increasing memory latency, relative to processor cycle time, limits CPU performance. Between nodes, the same increase in relative latency impacts scalability. Processing-In-Memory (PIM) is an architecture that directly addresses this problem using enhanced chip fabrication technology and machine organization. PIMs combine high-speed logic and dense, low-latency, high-bandwidth DRAM, and lightweight threads that tolerate latency by performing useful work during memory transactions. This work examines the potential of PIM-based architectures to support mission critical Sandia applications and an emerging class of more data intensive informatics applications. This work has resulted in a stronger architecture/implementation collaboration between 1400 and 1700. Additionally, key technology components have impacted vendor roadmaps, and we are in the process of pursuing these new collaborations. This work has the potential to impact future supercomputer design and construction, reducing power and increasing performance. This final report is organized as follows: this summary chapter discusses the impact of the project (Section 1), provides an enumeration of publications and other public discussion of the work (Section 1), and concludes with a discussion of future work and impact from the project (Section 1). The appendix contains reprints of the refereed publications resulting from this work.

  7. Supercomputational science

    CERN Document Server

    Wilson, S

    1990-01-01

In contemporary research, the supercomputer now ranks, along with radio telescopes, particle accelerators and the other apparatus of "big science", as an expensive resource, which is nevertheless essential for state of the art research. Supercomputers are usually provided as shared central facilities. However, unlike telescopes and accelerators, they find a wide range of applications extending across a broad spectrum of research activity. The difference in performance between a "good" and a "bad" computer program on a traditional serial computer may be a factor of two or three, but on a contemporary supercomputer it can easily be a factor of one hundred or even more! Furthermore, this factor is likely to increase with future generations of machines. In keeping with the large capital and recurrent costs of these machines, it is appropriate to devote effort to training and familiarization so that supercomputers are employed to best effect. This volume records the lectures delivered at a Summer School ...

  8. KAUST Supercomputing Laboratory

    KAUST Repository

    Bailey, April Renee

    2011-11-15

    KAUST has partnered with IBM to establish a Supercomputing Research Center. KAUST is hosting the Shaheen supercomputer, named after the Arabian falcon famed for its swiftness of flight. This 16-rack IBM Blue Gene/P system is equipped with 4 gigabyte memory per node and capable of 222 teraflops, making KAUST campus the site of one of the world’s fastest supercomputers in an academic environment. KAUST is targeting petaflop capability within 3 years.

  9. KAUST Supercomputing Laboratory

    KAUST Repository

    Bailey, April Renee; Kaushik, Dinesh; Winfer, Andrew

    2011-01-01

    KAUST has partnered with IBM to establish a Supercomputing Research Center. KAUST is hosting the Shaheen supercomputer, named after the Arabian falcon famed for its swiftness of flight. This 16-rack IBM Blue Gene/P system is equipped with 4 gigabyte memory per node and capable of 222 teraflops, making KAUST campus the site of one of the world’s fastest supercomputers in an academic environment. KAUST is targeting petaflop capability within 3 years.

  10. An Interface for Biomedical Big Data Processing on the Tianhe-2 Supercomputer.

    Science.gov (United States)

    Yang, Xi; Wu, Chengkun; Lu, Kai; Fang, Lin; Zhang, Yong; Li, Shengkang; Guo, Guixin; Du, YunFei

    2017-12-01

Big data, cloud computing, and high-performance computing (HPC) are at the verge of convergence. Cloud computing is already playing an active part in big data processing with the help of big data frameworks like Hadoop and Spark. The recent upsurge of high-performance computing in China provides extra possibilities and capacity to address the challenges associated with big data. In this paper, we propose Orion, a big data interface on the Tianhe-2 supercomputer, to enable big data applications to run on Tianhe-2 via a single command or a shell script. Orion supports multiple users, and each user can launch multiple tasks. It minimizes the effort needed to initiate big data applications on the Tianhe-2 supercomputer via automated configuration. Orion follows the "allocate-when-needed" paradigm, and it avoids the idle occupation of computational resources. We tested the utility and performance of Orion using a big genomic dataset and achieved a satisfactory performance on Tianhe-2 with very few modifications to existing applications that were implemented in Hadoop/Spark. In summary, Orion provides a practical and economical interface for big data processing on Tianhe-2.
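The "allocate-when-needed" paradigm the abstract describes, acquiring compute resources only for the lifetime of a task and releasing them immediately afterwards, can be sketched generically. The context manager below is an illustration of the idea only, not Orion's actual interface; the acquire/release callables are placeholders for whatever the underlying resource manager provides.

```python
from contextlib import contextmanager

@contextmanager
def allocate_when_needed(acquire, release):
    """Hold compute nodes only for the lifetime of a job, releasing
    them immediately afterwards rather than leaving them idle."""
    nodes = acquire()          # e.g. request nodes from the batch system
    try:
        yield nodes            # run the big data task on these nodes
    finally:
        release(nodes)         # always give the nodes back, even on error
```

A job wrapped this way never occupies nodes while waiting for input or after its output is written, which is the economy the paper attributes to Orion.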

  11. Enabling department-scale supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Greenberg, D.S.; Hart, W.E.; Phillips, C.A.

    1997-11-01

The Department of Energy (DOE) national laboratories have one of the longest and most consistent histories of supercomputer use. The authors summarize the architecture of DOE's new supercomputers that are being built for the Accelerated Strategic Computing Initiative (ASCI). The authors then argue that in the near future scaled-down versions of these supercomputers with petaflop-per-weekend capabilities could become widely available to hundreds of research and engineering departments. The availability of such computational resources will allow simulation of physical phenomena to become a full-fledged third branch of scientific exploration, along with theory and experimentation. They describe the ASCI and other supercomputer applications at Sandia National Laboratories, and discuss which lessons learned from Sandia's long history of supercomputing can be applied in this new setting.

  12. FPS scientific computers and supercomputers in chemistry

    International Nuclear Information System (INIS)

    Curington, I.J.

    1987-01-01

    FPS Array Processors, scientific computers, and highly parallel supercomputers are used in nearly all aspects of compute-intensive computational chemistry. A survey is made of work utilizing this equipment, both published and current research. The relationship of the computer architecture to computational chemistry is discussed, with specific reference to Molecular Dynamics, Quantum Monte Carlo simulations, and Molecular Graphics applications. Recent installations of the FPS T-Series are highlighted, and examples of Molecular Graphics programs running on the FPS-5000 are shown

  13. High Performance Networks From Supercomputing to Cloud Computing

    CERN Document Server

    Abts, Dennis

    2011-01-01

Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication-intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network enables distributed applications ...

  14. The ETA10 supercomputer system

    International Nuclear Information System (INIS)

    Swanson, C.D.

    1987-01-01

The ETA Systems, Inc. ETA 10 is a next-generation supercomputer featuring multiprocessing, a large hierarchical memory system, high performance input/output, and network support for both batch and interactive processing. Advanced technology used in the ETA 10 includes liquid nitrogen cooled CMOS logic with 20,000 gates per chip, a single printed circuit board for each CPU, and high density static and dynamic MOS memory chips. Software for the ETA 10 includes an underlying kernel that supports multiple user environments, a new ETA FORTRAN compiler with an advanced automatic vectorizer, a multitasking library and debugging tools. Possible developments for future supercomputers from ETA Systems are discussed. (orig.)

  15. Computational Dimensionalities of Global Supercomputing

    Directory of Open Access Journals (Sweden)

    Richard S. Segall

    2013-12-01

Full Text Available This Invited Paper pertains to the subject of my Plenary Keynote Speech at the 17th World Multi-Conference on Systemics, Cybernetics and Informatics (WMSCI 2013), held in Orlando, Florida on July 9-12, 2013. The title of my Plenary Keynote Speech was: "Dimensionalities of Computation: from Global Supercomputing to Data, Text and Web Mining", but this Invited Paper will focus only on the "Computational Dimensionalities of Global Supercomputing" and is based upon a summary of the contents of several individual articles that have been previously written with myself as lead author and published in [75], [76], [77], [78], [79], [80] and [11]. The topics of the Plenary Speech included Overview of Current Research in Global Supercomputing [75], Open-Source Software Tools for Data Mining Analysis of Genomic and Spatial Images using High Performance Computing [76], Data Mining Supercomputing with SAS™ JMP® Genomics ([77], [79], [80]), and Visualization by Supercomputing Data Mining [81]. ______________________ [11.] Committee on the Future of Supercomputing, National Research Council (2003), The Future of Supercomputing: An Interim Report, ISBN-13: 978-0-309-09016-2, http://www.nap.edu/catalog/10784.html [75.] Segall, Richard S.; Zhang, Qingyu and Cook, Jeffrey S. (2013), "Overview of Current Research in Global Supercomputing", Proceedings of Forty-Fourth Meeting of Southwest Decision Sciences Institute (SWDSI), Albuquerque, NM, March 12-16, 2013. [76.] Segall, Richard S. and Zhang, Qingyu (2010), "Open-Source Software Tools for Data Mining Analysis of Genomic and Spatial Images using High Performance Computing", Proceedings of 5th INFORMS Workshop on Data Mining and Health Informatics, Austin, TX, November 6, 2010. [77.] Segall, Richard S., Zhang, Qingyu and Pierce, Ryan M. (2010), "Data Mining Supercomputing with SAS™ JMP® Genomics: Research-in-Progress", Proceedings of 2010 Conference on Applied Research in Information Technology, sponsored by

  16. Japanese supercomputer technology

    International Nuclear Information System (INIS)

    Buzbee, B.L.; Ewald, R.H.; Worlton, W.J.

    1982-01-01

    In February 1982, computer scientists from the Los Alamos National Laboratory and Lawrence Livermore National Laboratory visited several Japanese computer manufacturers. The purpose of these visits was to assess the state of the art of Japanese supercomputer technology and to advise Japanese computer vendors of the needs of the US Department of Energy (DOE) for more powerful supercomputers. The Japanese foresee a domestic need for large-scale computing capabilities for nuclear fusion, image analysis for the Earth Resources Satellite, meteorological forecast, electrical power system analysis (power flow, stability, optimization), structural and thermal analysis of satellites, and very large scale integrated circuit design and simulation. To meet this need, Japan has launched an ambitious program to advance supercomputer technology. This program is described

  17. An assessment of worldwide supercomputer usage

    Energy Technology Data Exchange (ETDEWEB)

    Wasserman, H.J.; Simmons, M.L.; Hayes, A.H.

    1995-01-01

    This report provides a comparative study of advanced supercomputing usage in Japan and the United States as of Spring 1994. It is based on the findings of a group of US scientists whose careers have centered on programming, evaluating, and designing high-performance supercomputers for over ten years. The report is a follow-on to an assessment of supercomputing technology in Europe and Japan that was published in 1993. Whereas the previous study focused on supercomputer manufacturing capabilities, the primary focus of the current work was to compare where and how supercomputers are used. Research for this report was conducted through both literature studies and field research in Japan.

  18. Integration of Panda Workload Management System with supercomputers

    Science.gov (United States)

    De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Novikov, A.; Oleynik, D.; Panitkin, S.; Poyda, A.; Read, K. F.; Ryabinkin, E.; Teslyuk, A.; Velikhov, V.; Wells, J. C.; Wenaus, T.

    2016-09-01

The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3+ petaFLOPS, the next LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of the PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads.

  19. The ETA systems plans for supercomputers

    International Nuclear Information System (INIS)

    Swanson, C.D.

    1987-01-01

The ETA 10 from ETA Systems is a Class VII supercomputer featuring multiprocessing, a large hierarchical memory system, high performance input/output, and network support for both batch and interactive processing. Advanced technology used in the ETA 10 includes liquid nitrogen cooled CMOS logic with 20,000 gates per chip, a single printed circuit board for each CPU, and high density static and dynamic MOS memory chips. Software for the ETA 10 includes an underlying kernel that supports multiple user environments, a new ETA FORTRAN compiler with an advanced automatic vectorizer, a multitasking library and debugging tools. Possible developments for future supercomputers from ETA Systems are discussed.

  20. Status of supercomputers in the US

    International Nuclear Information System (INIS)

    Fernbach, S.

    1985-01-01

Current supercomputers, that is, the Class VI machines which first became available in 1976, are being delivered in greater quantity than ever before. In addition, manufacturers are busily working on Class VII machines to be ready for delivery in CY 1987. Mainframes are being modified or designed to take on some features of the supercomputers, and new companies are springing up everywhere with the intent of either competing directly in the supercomputer arena or providing entry-level systems from which to graduate to supercomputers. Even well-established organizations like IBM and CDC are adding machines with vector instructions to their repertoires. Japanese-manufactured supercomputers are also being introduced into the U.S. Will these begin to compete with those of U.S. manufacture? Are they truly competitive? It turns out that, from both the hardware and software points of view, they may be superior. We may be facing the same problems in supercomputers that we faced in video systems.

  1. Large scale simulations of lattice QCD thermodynamics on Columbia Parallel Supercomputers

    International Nuclear Information System (INIS)

    Ohta, Shigemi

    1989-01-01

The Columbia Parallel Supercomputer project aims at the construction of a parallel processing, multi-gigaflop computer optimized for numerical simulations of lattice QCD. The project has three stages: a 16-node, 1/4 GF machine completed in April 1985; a 64-node, 1 GF machine completed in August 1987; and a 256-node, 16 GF machine now under construction. The machines all share a common architecture: a two-dimensional torus formed from a rectangular array of N1 x N2 independent and identical processors. A processor is capable of operating in a multi-instruction multi-data mode, except for periods of synchronous interprocessor communication with its four nearest neighbors. Here the thermodynamics simulations on the two working machines are reported. (orig./HSI)
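On a two-dimensional torus built from an N1 x N2 rectangular array, each processor's four nearest neighbors follow directly from wraparound row and column arithmetic. The sketch below illustrates that mapping; the row-major linear numbering of processors is an assumption made here for clarity, not the Columbia machines' documented addressing scheme.

```python
def torus_neighbors(p, n1, n2):
    """Four nearest neighbors (north, south, east, west) of processor p
    on an n1 x n2 torus, with processors numbered row-major and edges
    wrapping around."""
    r, c = divmod(p, n2)                  # row and column of processor p
    return [
        ((r - 1) % n1) * n2 + c,          # north (wraps top to bottom)
        ((r + 1) % n1) * n2 + c,          # south
        r * n2 + (c + 1) % n2,            # east (wraps right to left)
        r * n2 + (c - 1) % n2,            # west
    ]
```

The wraparound means every processor, including those on the array's edge, has exactly four neighbors, which is what permits the synchronous nearest-neighbor communication phases the abstract describes.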

  2. Visualization environment of the large-scale data of JAEA's supercomputer system

    Energy Technology Data Exchange (ETDEWEB)

    Sakamoto, Kensaku [Japan Atomic Energy Agency, Center for Computational Science and e-Systems, Tokai, Ibaraki (Japan); Hoshi, Yoshiyuki [Research Organization for Information Science and Technology (RIST), Tokai, Ibaraki (Japan)

    2013-11-15

In research and development across various fields of nuclear energy, visualization of calculated data is especially useful for understanding simulation results in an intuitive way. Many researchers who run simulations on the supercomputer at the Japan Atomic Energy Agency (JAEA) are used to transferring calculated data files from the supercomputer to their local PCs for visualization. In recent years, as the size of calculated data has grown with improvements in supercomputer performance, reducing visualization processing time and making efficient use of the JAEA network have become necessary. As a solution, we introduced a remote visualization system which is able to utilize parallel processors on the supercomputer and to reduce the usage of network resources by transferring only the data of the intermediate visualization process. This paper reports a study on the performance of image processing with the remote visualization system. The visualization processing time is measured, and the influence of network speed is evaluated by varying the drawing mode, the size of visualization data, and the number of processors. Based on this study, a guideline is provided to show how the remote visualization system can be used effectively. An upgrade policy for the next system is also shown. (author)

  3. Supercomputing and related national projects in Japan

    International Nuclear Information System (INIS)

    Miura, Kenichi

    1985-01-01

    Japanese supercomputer development activities in the industry and research projects are outlined. Architecture, technology, software, and applications of Fujitsu's Vector Processor Systems are described as an example of Japanese supercomputers. Applications of supercomputers to high energy physics are also discussed. (orig.)

  4. Supercomputer applications in nuclear research

    International Nuclear Information System (INIS)

    Ishiguro, Misako

    1992-01-01

The utilization of supercomputers in the Japan Atomic Energy Research Institute is mainly reported. The fields of atomic energy research which use supercomputers frequently and the contents of their computation are outlined. What vectorizing means is explained simply, and nuclear fusion, nuclear reactor physics, the thermal-hydraulic safety of nuclear reactors, the parallelism inherent in atomic energy computations of fluids and others, the algorithm for vector treatment, and the speed increase gained by vectorizing are discussed. At present the Japan Atomic Energy Research Institute uses two FACOM VP 2600/10 systems and three M-780 systems. The contents of computation have changed from criticality computation around 1970, through the analysis of LOCA after the TMI accident, to nuclear fusion research, the design of new types of reactors, and reactor safety assessment at present. The method of using computers has likewise advanced from batch processing to time-sharing processing, from one-dimensional to three-dimensional computation, from steady, linear to unsteady, nonlinear computation, from experimental analysis to numerical simulation, and so on. (K.I.)
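Vectorization, as the abstract notes, hinges on whether a loop's iterations are independent. The sketch below contrasts a vectorizable loop with one blocked by a loop-carried dependence; it is written in Python purely for illustration, whereas the JAERI codes discussed here would have been FORTRAN.

```python
def axpy(a, x, y):
    """y := a*x + y, elementwise. Every iteration is independent of the
    others, so a vectorizing compiler can issue many iterations at once
    as vector instructions."""
    return [a * xi + yi for xi, yi in zip(x, y)]

def prefix_sum(x):
    """s[i] = s[i-1] + x[i]: each iteration reads the previous result,
    a loop-carried dependence that blocks straightforward vectorization."""
    out, s = [], 0
    for xi in x:
        s += xi
        out.append(s)
    return out
```

Restructuring algorithms so that their inner loops look like the first form rather than the second is essentially "the algorithm for vector treatment" the abstract refers to.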

  5. Computational fluid dynamics: complex flows requiring supercomputers. January 1975-July 1988 (Citations from the INSPEC: Information Services for the Physics and Engineering Communities data base). Report for January 1975-July 1988

    International Nuclear Information System (INIS)

    1988-08-01

This bibliography contains citations concerning computational fluid dynamics (CFD), a new method in computational science to perform complex flow simulations in three dimensions. Applications include aerodynamic design and analysis for aircraft, rockets, missiles, and automobiles; heat-transfer studies; and combustion processes. Included are references to supercomputers, array processors, and parallel processors where needed for complete, integrated design. Also included are software packages and grid-generation techniques required to apply CFD numerical solutions. Numerical methods for fluid dynamics, not requiring supercomputers, are found in a separate published search. (Contains 83 citations fully indexed and including a title list.)

  6. Computational plasma physics and supercomputers

    International Nuclear Information System (INIS)

    Killeen, J.; McNamara, B.

    1984-09-01

    The Supercomputers of the 80's are introduced. They are 10 to 100 times more powerful than today's machines. The range of physics modeling in the fusion program is outlined. New machine architecture will influence particular codes, but parallel processing poses new coding difficulties. Increasing realism in simulations will require better numerics and more elaborate mathematics

  7. JINR supercomputer of the module type for event parallel analysis

    International Nuclear Information System (INIS)

    Kolpakov, I.F.; Senner, A.E.; Smirnov, V.A.

    1987-01-01

A model of a supercomputer performing 50 million operations per second is suggested. Its realization would allow one to solve JINR data analysis problems for large spectrometers (in particular, the DELPHI collaboration). The suggested modular supercomputer is based on a commercially available 32-bit microprocessor with a processing rate of about 1 MFLOPS. The processors are combined by means of standard VME buses. A MicroVAX II host computer organizes the operation of the system. Data input and output are realized via the MicroVAX II peripherals. Users' software is based on FORTRAN-77. The supercomputer is connected to a JINR network port, and all JINR users get access to the suggested system

  8. TOP500 Supercomputers for June 2004

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2004-06-23

23rd Edition of TOP500 List of World's Fastest Supercomputers Released: Japan's Earth Simulator Enters Third Year in Top Position MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 23rd edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2004) at the International Supercomputer Conference in Heidelberg, Germany.

  9. TOP500 Supercomputers for June 2005

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2005-06-22

25th Edition of TOP500 List of World's Fastest Supercomputers Released: DOE/LLNL BlueGene/L and IBM Gain Top Positions MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 25th edition of the TOP500 list of the world's fastest supercomputers was released today (June 22, 2005) at the 20th International Supercomputing Conference (ISC2005) in Heidelberg, Germany.

  10. ATLAS Software Installation on Supercomputers

    CERN Document Server

    Undrus, Alexander; The ATLAS collaboration

    2018-01-01

PowerPC and high performance computers (HPC) are important resources for computing in the ATLAS experiment. The future LHC data processing will require more resources than Grid computing, currently using approximately 100,000 cores at well over 100 sites, can provide. Supercomputers are extremely powerful, as they use the resources of hundreds of thousands of CPUs joined together. However, their architectures have different instruction sets. ATLAS binary software distributions for x86 chipsets do not fit these architectures, as emulation of these chipsets results in a huge performance loss. This presentation describes the methodology of ATLAS software installation from source code on supercomputers. The installation procedure includes downloading the ATLAS code base as well as the source of about 50 external packages, such as ROOT and Geant4, followed by compilation, and rigorous unit and integration testing. The presentation reports the application of this procedure at the Titan HPC and Summit PowerPC at Oak Ridge Computin...

  11. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of a software-only implementation to a GPU-accelerated one. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the 40GB parallel NAND Flash disk array, the Fusion-io. The Fusion system specs are as follows
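N-gram profiling for language classification, the text application the report mentions accelerating with an FPGA, is commonly done by ranking a text's most frequent character n-grams and comparing the ranking against per-language reference profiles (the "out-of-place" measure). The sketch below illustrates that general technique in software only; it is not the project's FPGA implementation, and the function names are placeholders.

```python
from collections import Counter

def ngram_profile(text, n=2, top=10):
    """Most frequent character n-grams of a text, highest first."""
    text = " " + text.lower() + " "      # pad so word edges form n-grams
    grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    return [g for g, _ in grams.most_common(top)]

def out_of_place(profile, reference):
    """Sum of rank displacements between two profiles; smaller means
    more similar. N-grams absent from the reference pay a fixed penalty."""
    penalty = len(reference)
    return sum(
        abs(i - reference.index(g)) if g in reference else penalty
        for i, g in enumerate(profile)
    )
```

Classification then amounts to computing a document's profile once and picking the language whose reference profile gives the smallest out-of-place distance; the per-n-gram counting is the part that parallelizes naturally onto an FPGA.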

  12. TOP500 Supercomputers for November 2003

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2003-11-16

    22nd Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 22nd edition of the TOP500 list of the world's fastest supercomputers was released today (November 16, 2003). The Earth Simulator supercomputer retains the number one position with its Linpack benchmark performance of 35.86 Tflop/s ("teraflops", or trillions of calculations per second). It was built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan.

  13. A training program for scientific supercomputing users

    Energy Technology Data Exchange (ETDEWEB)

    Hanson, F.; Moher, T.; Sabelli, N.; Solem, A.

    1988-01-01

    There is a need for a mechanism to transfer supercomputing technology into the hands of scientists and engineers in such a way that they will acquire a foundation of knowledge that will permit the integration of supercomputing as a tool in their research. Most computing center training emphasizes computer-specific information about how to use a particular computer system; most academic programs teach concepts to computer scientists. Only a few brief courses and new programs are designed for computational scientists. This paper describes an eleven-week training program aimed principally at graduate and postdoctoral students in computationally intensive fields. The program is designed to balance the specificity of computing center courses, the abstractness of computer science courses, and the personal contact of traditional apprentice approaches. It is based on the experience of computer scientists and computational scientists, and consists of seminars and clinics given by many visiting and local faculty. It covers a variety of supercomputing concepts, issues, and practices related to architecture, operating systems, software design, numerical considerations, code optimization, graphics, communications, and networks. Its research component encourages understanding of scientific computing and supercomputer hardware issues. Flexibility in thinking about computing needs is emphasized by the use of several different supercomputer architectures, such as the Cray X-MP/48 at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, the IBM 3090 600E/VF at the Cornell National Supercomputer Facility, and the Alliant FX/8 at the Advanced Computing Research Facility at Argonne National Laboratory. 11 refs., 6 tabs.

  14. INTEL: Intel based systems move up in supercomputing ranks

    CERN Multimedia

    2002-01-01

    "The TOP500 supercomputer rankings released today at the Supercomputing 2002 conference show a dramatic increase in the number of Intel-based systems being deployed in high-performance computing (HPC) or supercomputing areas" (1/2 page).

  15. World's fastest supercomputer opens up to users

    Science.gov (United States)

    Xin, Ling

    2016-08-01

    China's latest supercomputer - Sunway TaihuLight - has claimed the crown as the world's fastest computer according to the latest TOP500 list, released at the International Supercomputer Conference in Frankfurt in late June.

  16. Micro-mechanical Simulations of Soils using Massively Parallel Supercomputers

    Directory of Open Access Journals (Sweden)

    David W. Washington

    2004-06-01

    Full Text Available In this research a computer program, Trubal version 1.51, based on the Discrete Element Method, was converted to run on a Connection Machine (CM-5), a massively parallel supercomputer with 512 nodes, to expedite the computational times of simulating geotechnical boundary value problems. The dynamic memory algorithm in the Trubal program did not perform efficiently on the CM-2 machine with its Single Instruction Multiple Data (SIMD) architecture, due to the communication overhead involved in global array reductions, global array broadcasts and random data movement. Therefore, the dynamic memory algorithm in the Trubal program was converted to a static memory arrangement, and the Trubal program was successfully converted to run on CM-5 machines. The converted program was called "TRUBAL for Parallel Machines" (TPM). Simulating two physical triaxial experiments and comparing the simulation results with Trubal simulations validated the TPM program. With a 512-node CM-5 machine, TPM produced a nine-fold speedup, demonstrating the inherent parallelism within algorithms based on the Discrete Element Method.
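The Discrete Element Method tracks every particle individually and resolves contacts with a simple force law at each time step. As a minimal illustration of that idea (not the Trubal/TPM code itself; all parameters below are invented), the sketch integrates a head-on 1D collision of two particles using a linear contact spring and explicit time stepping:

```python
# Minimal 1D Discrete Element Method sketch: two particles, linear contact
# spring, explicit (symplectic Euler) integration. Hypothetical parameters;
# illustrative only, not the Trubal/TPM implementation.

def dem_two_particles(steps=20000, dt=1e-5):
    m = 1.0          # particle mass [kg]
    r = 0.05         # particle radius [m]
    k = 1e5          # contact spring stiffness [N/m]
    x = [0.0, 0.2]   # initial positions [m]
    v = [1.0, -1.0]  # initial velocities [m/s] (head-on collision)
    for _ in range(steps):
        overlap = 2 * r - (x[1] - x[0])
        f = k * overlap if overlap > 0 else 0.0   # repulsive contact force
        a = [-f / m, f / m]                       # equal and opposite
        for i in range(2):
            v[i] += a[i] * dt
            x[i] += v[i] * dt
    return x, v

x, v = dem_two_particles()
print("velocities after collision:", v)
```

With no damping in the contact law the collision is elastic, so the particles rebound with (approximately) their initial speeds while total momentum stays zero; a production DEM code adds damping, friction, rotation, and many thousands of particles, which is what makes the method attractive for parallel machines.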

  17. OpenMP Performance on the Columbia Supercomputer

    Science.gov (United States)

    Haoqiang, Jin; Hood, Robert

    2005-01-01

    This presentation discusses the Columbia supercomputer, one of the world's fastest, providing 61 TFLOPs (as of 10/20/04). It was conceived, designed, built, and deployed in just 120 days: a 20-node supercomputer built on proven 512-processor nodes. It is the largest SGI system in the world, with over 10,000 Intel Itanium 2 processors; it provides the largest node size incorporating commodity parts (512 processors) and the largest shared-memory environment (2048 processors), and with 88% efficiency it tops the scalar systems on the Top500 list.

  18. Sensor array signal processing

    CERN Document Server

    Naidu, Prabhakar S

    2009-01-01

    Chapter One: An Overview of Wavefields
    1.1 Types of Wavefields and the Governing Equations
    1.2 Wavefield in open space
    1.3 Wavefield in bounded space
    1.4 Stochastic wavefield
    1.5 Multipath propagation
    1.6 Propagation through random medium
    1.7 Exercises
    Chapter Two: Sensor Array Systems
    2.1 Uniform linear array (ULA)
    2.2 Planar array
    2.3 Distributed sensor array
    2.4 Broadband sensor array
    2.5 Source and sensor arrays
    2.6 Multi-component sensor array
    2.7 Exercises
    Chapter Three: Frequency Wavenumber Processing
    3.1 Digital filters in the w-k domain
    3.2 Mapping of 1D into 2D filters
    3.3 Multichannel Wiener filters
    3.4 Wiener filters for ULA and UCA
    3.5 Predictive noise cancellation
    3.6 Exercises
    Chapter Four: Source Localization: Frequency Wavenumber Spectrum
    4.1 Frequency wavenumber spectrum
    4.2 Beamformation
    4.3 Capon's w-k spectrum
    4.4 Maximum entropy w-k spectrum
    4.5 Doppler-Azimuth Processing
    4.6 Exercises
    Chapter Five: Source Localization: Subspace Methods
    5.1 Subspace methods (Narrowband)
    5.2 Subspace methods (B...

  19. Ultrascalable petaflop parallel supercomputer

    Science.gov (United States)

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Chiu, George [Cross River, NY; Cipolla, Thomas M [Katonah, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Hall, Shawn [Pleasantville, NY; Haring, Rudolf A [Cortlandt Manor, NY; Heidelberger, Philip [Cortlandt Manor, NY; Kopcsay, Gerard V [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Salapura, Valentina [Chappaqua, NY; Sugavanam, Krishnan [Mahopac, NY; Takken, Todd [Brewster, NY

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.
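On a torus interconnect like the one described above, a packet can travel either way around the ring in each dimension, so the minimal hop count per dimension is the shorter of the two directions, summed over all dimensions. A small sketch of that distance calculation (the dimensions and coordinates in the example are made up for illustration):

```python
# Minimal-hop distance between two nodes on a d-dimensional torus.
# Per dimension the packet may go either way around the ring, so the
# distance is min(delta, size - delta); the total is the sum over dims.

def torus_distance(a, b, dims):
    assert len(a) == len(b) == len(dims)
    dist = 0
    for ai, bi, size in zip(a, b, dims):
        delta = abs(ai - bi) % size
        dist += min(delta, size - delta)
    return dist

# Example on a hypothetical 8x8x8 3D torus:
print(torus_distance((0, 0, 0), (7, 4, 1), (8, 8, 8)))  # 1 + 4 + 1 = 6
```

The wrap-around links are what keep the network diameter low as node counts grow, which is why torus topologies recur in the machines on this list.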

  20. Supercomputing - Use Cases, Advances, The Future (1/2)

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Supercomputing has become a staple of science and the poster child for aggressive developments in silicon technology, energy efficiency and programming. In this series we examine the key components of supercomputing setups and the various advances – recent and past – that made headlines and delivered bigger and bigger machines. We also take a closer look at the future prospects of supercomputing, and the extent of its overlap with high throughput computing, in the context of main use cases ranging from oil exploration to market simulation. On the first day, we will focus on the history and theory of supercomputing, the top500 list and the hardware that makes supercomputers tick. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and initiated projects with the private sector (e.g. HP an...

  1. Supercomputing - Use Cases, Advances, The Future (2/2)

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Supercomputing has become a staple of science and the poster child for aggressive developments in silicon technology, energy efficiency and programming. In this series we examine the key components of supercomputing setups and the various advances – recent and past – that made headlines and delivered bigger and bigger machines. We also take a closer look at the future prospects of supercomputing, and the extent of its overlap with high throughput computing, in the context of main use cases ranging from oil exploration to market simulation. On the second day, we will focus on software and software paradigms driving supercomputers, workloads that need supercomputing treatment, advances in technology and possible future developments. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and i...

  2. Using a Cray Y-MP as an array processor for a RISC Workstation

    Science.gov (United States)

    Lamaster, Hugh; Rogallo, Sarah J.

    1992-01-01

    As microprocessors increase in power, the economics of centralized computing has changed dramatically. At the beginning of the 1980's, mainframes and supercomputers were often considered to be cost-effective machines for scalar computing. Today, microprocessor-based RISC (reduced-instruction-set computer) systems have displaced many uses of mainframes and supercomputers. Supercomputers are still cost competitive when processing jobs that require both large memory size and high memory bandwidth. One such application is array processing. Certain numerical operations are appropriate to execute in a Remote Procedure Call (RPC)-based environment. Matrix multiplication is an example of an operation that can involve a sufficient number of arithmetic operations to amortize the cost of an RPC call. An experiment is described which demonstrates that matrix multiplication can be executed remotely on a large system to speed its execution over that experienced on a workstation.
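Whether an operation is worth shipping over RPC comes down to arithmetic work growing faster than data volume: an n x n matrix multiplication performs roughly 2n^3 flops on about 3n^2 words moved, so compute eventually dominates transfer. A back-of-the-envelope model of the break-even point (all rates and latencies below are invented for illustration, not measurements from the Cray Y-MP experiment):

```python
# Break-even estimate for offloading n x n matrix multiplication over RPC.
# Remote execution pays off when local compute time exceeds
# RPC latency + transfer time + remote compute time.
# All performance numbers are hypothetical.

def remote_pays_off(n, local_flops=1e8, remote_flops=1e9,
                    bandwidth_words=1e6, latency_s=0.01):
    flops = 2.0 * n**3          # multiply-adds for n x n matmul
    words = 3.0 * n**2          # A and B sent, C returned
    t_local = flops / local_flops
    t_remote = latency_s + words / bandwidth_words + flops / remote_flops
    return t_local > t_remote

# Small matrices lose to RPC overhead; large ones amortize it.
print(remote_pays_off(32))    # prints False
print(remote_pays_off(1024))  # prints True
```

Under these assumed rates a 32 x 32 multiply is dominated by latency and transfer, while a 1024 x 1024 multiply runs several times faster remotely, matching the abstract's observation that only sufficiently arithmetic-heavy operations amortize the RPC cost.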

  3. Computational plasma physics and supercomputers. Revision 1

    International Nuclear Information System (INIS)

    Killeen, J.; McNamara, B.

    1985-01-01

    The supercomputers of the 1980s are introduced. They are 10 to 100 times more powerful than today's machines. The range of physics modeling in the fusion program is outlined. New machine architectures will influence particular models, but parallel processing poses new programming difficulties. Increasing realism in simulations will require better numerics and more elaborate mathematical models

  4. Desktop supercomputer: what can it do?

    Science.gov (United States)

    Bogdanov, A.; Degtyarev, A.; Korkhov, V.

    2017-12-01

    The paper addresses the issues of solving complex problems that require using supercomputers or multiprocessor clusters available for most researchers nowadays. Efficient distribution of high performance computing resources according to actual application needs has been a major research topic since high-performance computing (HPC) technologies became widely introduced. At the same time, comfortable and transparent access to these resources was a key user requirement. In this paper we discuss approaches to build a virtual private supercomputer available at user's desktop: a virtual computing environment tailored specifically for a target user with a particular target application. We describe and evaluate possibilities to create the virtual supercomputer based on light-weight virtualization technologies, and analyze the efficiency of our approach compared to traditional methods of HPC resource management.

  5. Adaptability of supercomputers to nuclear computations

    International Nuclear Information System (INIS)

    Asai, Kiyoshi; Ishiguro, Misako; Matsuura, Toshihiko.

    1983-01-01

    Recently in the field of scientific and technical calculation, the usefulness of supercomputers represented by the CRAY-1 has been recognized, and they are utilized in various countries. The rapid computation of supercomputers is based on vector processing. The authors investigated the adaptability to vector computation of about 40 typical atomic energy codes over the past six years. Based on the results of this investigation, the adaptability of the vector computation capability of supercomputers to atomic energy codes, issues regarding their utilization, and future prospects are explained. The adaptability of individual calculation codes to vector computation depends largely on the algorithm and program structure used in the codes. The speedup achieved by pipeline vector systems, the investigation at the Japan Atomic Energy Research Institute and its results, and examples of vectorizing codes for atomic energy, environmental safety and nuclear fusion are reported. The speedup for the 40 examples ranged from 1.5 to 9.0. It can be said that the adaptability of supercomputers to atomic energy codes is fairly good. (Kako, I.)

  6. Development of seismic tomography software for hybrid supercomputers

    Science.gov (United States)

    Nikitin, Alexandr; Serdyukov, Alexandr; Duchkov, Anton

    2015-04-01

    Seismic tomography is a technique used for computing velocity model of geologic structure from first arrival travel times of seismic waves. The technique is used in processing of regional and global seismic data, in seismic exploration for prospecting and exploration of mineral and hydrocarbon deposits, and in seismic engineering for monitoring the condition of engineering structures and the surrounding host medium. As a consequence of development of seismic monitoring systems and increasing volume of seismic data, there is a growing need for new, more effective computational algorithms for use in seismic tomography applications with improved performance, accuracy and resolution. To achieve this goal, it is necessary to use modern high performance computing systems, such as supercomputers with hybrid architecture that use not only CPUs, but also accelerators and co-processors for computation. The goal of this research is the development of parallel seismic tomography algorithms and software package for such systems, to be used in processing of large volumes of seismic data (hundreds of gigabytes and more). These algorithms and software package will be optimized for the most common computing devices used in modern hybrid supercomputers, such as Intel Xeon CPUs, NVIDIA Tesla accelerators and Intel Xeon Phi co-processors. In this work, the following general scheme of seismic tomography is utilized. Using the eikonal equation solver, arrival times of seismic waves are computed based on assumed velocity model of geologic structure being analyzed. In order to solve the linearized inverse problem, tomographic matrix is computed that connects model adjustments with travel time residuals, and the resulting system of linear equations is regularized and solved to adjust the model. The effectiveness of parallel implementations of existing algorithms on target architectures is considered. During the first stage of this work, algorithms were developed for execution on
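The linearized inversion step described above amounts to a regularized least-squares problem: given a tomographic matrix G relating model adjustments to travel-time residuals d, the update m minimizes ||Gm - d||^2 + lam ||m||^2, i.e. it solves (G^T G + lam I) m = G^T d. A toy dense sketch of that step (Tikhonov regularization with a hand-rolled Gaussian elimination; the two-ray example data are invented, and real tomography codes use sparse matrices and iterative solvers):

```python
# Toy Tikhonov-regularized least squares for one linearized tomography step:
# solve (G^T G + lam*I) m = G^T d. Dense and tiny for illustration only.

def solve_linear(A, b):
    # Gaussian elimination with partial pivoting on the augmented matrix.
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

def tikhonov_update(G, d, lam=1e-3):
    # Form the regularized normal equations (G^T G + lam*I) m = G^T d.
    n = len(G[0])
    GtG = [[sum(G[k][i] * G[k][j] for k in range(len(G)))
            + (lam if i == j else 0.0) for j in range(n)] for i in range(n)]
    Gtd = [sum(G[k][i] * d[k] for k in range(len(G))) for i in range(n)]
    return solve_linear(GtG, Gtd)

# Hypothetical example: two rays crossing two cells (entries are ray path
# lengths per cell) and their travel-time residuals.
G = [[1.0, 0.5], [0.2, 1.0]]
d = [0.8, 0.6]
print(tikhonov_update(G, d))
```

With a small lam the recovered update reproduces the residuals almost exactly; raising lam trades data fit for a smaller, more stable model adjustment, which is the regularization mentioned in the abstract.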

  7. Desktop supercomputer: what can it do?

    International Nuclear Information System (INIS)

    Bogdanov, A.; Degtyarev, A.; Korkhov, V.

    2017-01-01

    The paper addresses the issues of solving complex problems that require using supercomputers or multiprocessor clusters available for most researchers nowadays. Efficient distribution of high performance computing resources according to actual application needs has been a major research topic since high-performance computing (HPC) technologies became widely introduced. At the same time, comfortable and transparent access to these resources was a key user requirement. In this paper we discuss approaches to build a virtual private supercomputer available at user's desktop: a virtual computing environment tailored specifically for a target user with a particular target application. We describe and evaluate possibilities to create the virtual supercomputer based on light-weight virtualization technologies, and analyze the efficiency of our approach compared to traditional methods of HPC resource management.

  8. ArrayBridge: Interweaving declarative array processing with high-performance computing

    Energy Technology Data Exchange (ETDEWEB)

    Xing, Haoyuan [The Ohio State Univ., Columbus, OH (United States); Floratos, Sofoklis [The Ohio State Univ., Columbus, OH (United States); Blanas, Spyros [The Ohio State Univ., Columbus, OH (United States); Byna, Suren [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Prabhat, Prabhat [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Wu, Kesheng [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Brown, Paul [Paradigm4, Inc., Waltham, MA (United States)

    2017-05-04

    Scientists are increasingly turning to datacenter-scale computers to produce and analyze massive arrays. Despite decades of database research that extols the virtues of declarative query processing, scientists still write, debug and parallelize imperative HPC kernels even for the most mundane queries. This impedance mismatch has been partly attributed to the cumbersome data loading process; in response, the database community has proposed in situ mechanisms to access data in scientific file formats. Scientists, however, desire more than a passive access method that reads arrays from files. This paper describes ArrayBridge, a bi-directional array view mechanism for scientific file formats, that aims to make declarative array manipulations interoperable with imperative file-centric analyses. Our prototype implementation of ArrayBridge uses HDF5 as the underlying array storage library and seamlessly integrates into the SciDB open-source array database system. In addition to fast querying over external array objects, ArrayBridge produces arrays in the HDF5 file format just as easily as it can read from it. ArrayBridge also supports time travel queries from imperative kernels through the unmodified HDF5 API, and automatically deduplicates between array versions for space efficiency. Our extensive performance evaluation in NERSC, a large-scale scientific computing facility, shows that ArrayBridge exhibits statistically indistinguishable performance and I/O scalability to the native SciDB storage engine.

  9. Multi-petascale highly efficient parallel supercomputer

    Science.gov (United States)

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen -Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Smith, Brian; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2015-07-14

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaOPS-scale computing, at decreased cost, power and footprint, that allows for a maximum packaging density of processing nodes from an interconnect point of view. The Supercomputer exploits technological advances in VLSI that enable a computing model where many processors can be integrated into a single Application Specific Integrated Circuit (ASIC). Each ASIC computing node comprises a system-on-chip ASIC utilizing four or more processors integrated into one die, each having full access to all system resources, enabling adaptive partitioning of the processors to functions such as compute or messaging I/O on an application-by-application basis and, preferably, adaptive partitioning of functions in accordance with various algorithmic phases within an application; if I/O or other processors are underutilized, they can participate in computation or communication. Nodes are interconnected by a five-dimensional torus network with DMA that maximizes the throughput of packet communications between nodes and minimizes latency.

  10. TOP500 Supercomputers for November 2004

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2004-11-08

    24th Edition of TOP500 List of World's Fastest Supercomputers Released: DOE/IBM BlueGene/L and NASA/SGI's Columbia gain Top Positions MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 24th edition of the TOP500 list of the world's fastest supercomputers was released today (November 8, 2004) at the SC2004 Conference in Pittsburgh, Pa.

  11. TOP500 Supercomputers for June 2003

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2003-06-23

    21st Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 21st edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2003). The Earth Simulator supercomputer built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan, with its Linpack benchmark performance of 35.86 Tflop/s (teraflops, or trillions of calculations per second), retains the number one position. The number 2 position is held by the re-measured ASCI Q system at Los Alamos National Laboratory. With 13.88 Tflop/s, it is the second system ever to exceed the 10 Tflop/s mark. ASCI Q was built by Hewlett-Packard and is based on the AlphaServer SC computer system.

  12. TOP500 Supercomputers for June 2002

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2002-06-20

    19th Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 19th edition of the TOP500 list of the world's fastest supercomputers was released today (June 20, 2002). The recently installed Earth Simulator supercomputer at the Earth Simulator Center in Yokohama, Japan, is as expected the clear new number 1. Its performance of 35.86 Tflop/s (trillions of calculations per second) running the Linpack benchmark is almost five times higher than the performance of the now No. 2 IBM ASCI White system at Lawrence Livermore National Laboratory (7.2 Tflop/s). This powerful leapfrogging to the top by a system so much faster than the previous top system is unparalleled in the history of the TOP500.

  13. Status reports of supercomputing astrophysics in Japan

    International Nuclear Information System (INIS)

    Nakamura, Takashi; Nagasawa, Mikio

    1990-01-01

    The Workshop on Supercomputing Astrophysics was held at the National Laboratory for High Energy Physics (KEK, Tsukuba) from August 31 to September 2, 1989. More than 40 physicists and astronomers attended and discussed many topics in an informal atmosphere. The main purpose of this workshop was to focus on the theoretical activities in computational astrophysics in Japan. It also aimed to promote effective collaboration between numerical experimentalists working on supercomputing techniques. The various subjects of the presented papers on hydrodynamics, plasma physics, gravitating systems, radiative transfer and general relativity are all stimulating. In fact, these numerical calculations have become possible in Japan owing to the power of Japanese supercomputers such as the HITAC S820, Fujitsu VP400E and NEC SX-2. (J.P.N.)

  14. Intelligent Personal Supercomputer for Solving Scientific and Technical Problems

    Directory of Open Access Journals (Sweden)

    Khimich, O.M.

    2016-09-01

    Full Text Available A new domestic intelligent personal supercomputer of hybrid architecture, Inparkom_pg, for the mathematical modeling of processes in the defense industry, engineering, construction, etc. was developed. Intelligent software for the automatic investigation and solution of computational mathematics tasks with approximately given data of different structures was designed. Applied software to support mathematical modeling problems in construction, welding and filtration processes was implemented.

  15. Comprehensive efficiency analysis of supercomputer resource usage based on system monitoring data

    Science.gov (United States)

    Mamaeva, A. A.; Shaykhislamov, D. I.; Voevodin, Vad V.; Zhumatiy, S. A.

    2018-03-01

    One of the main problems of modern supercomputers is the low efficiency of their usage, which leads to significant idle time of computational resources and, in turn, to a decrease in the speed of scientific research. This paper presents three approaches to studying the efficiency of supercomputer resource usage based on monitoring data analysis. The first approach analyzes computing resource utilization statistics, which makes it possible to identify different typical classes of programs, to explore the structure of the supercomputer job flow and to track overall trends in supercomputer behavior. The second approach is aimed specifically at analyzing off-the-shelf software packages and libraries installed on the supercomputer, since the efficiency of their usage is becoming an increasingly important factor for the efficient functioning of the entire supercomputer. Within the third approach, abnormal jobs – jobs with abnormally inefficient behavior that differs significantly from the standard behavior of the overall supercomputer job flow – are detected. For each approach, results obtained in practice at the Supercomputer Center of Moscow State University are demonstrated.
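One simple way to flag abnormal jobs from monitoring data is to score each job's resource utilization against the overall job flow and flag large deviations; this z-score sketch is only an illustration of the idea, not the method actually used at the MSU center, and the field names, thresholds and job data are invented:

```python
# Flag jobs whose CPU utilization deviates strongly from the job-flow
# average. A simple z-score detector; all data here are hypothetical.
import statistics

def abnormal_jobs(jobs, threshold=2.0):
    utils = [j["cpu_util"] for j in jobs]
    mean = statistics.mean(utils)
    stdev = statistics.stdev(utils)
    return [j["id"] for j in jobs
            if stdev > 0 and abs(j["cpu_util"] - mean) / stdev > threshold]

jobs = [{"id": i, "cpu_util": u}
        for i, u in enumerate([0.82, 0.78, 0.85, 0.80, 0.79, 0.03, 0.81])]
print(abnormal_jobs(jobs))  # the near-idle job 5 stands out
```

Real systems would combine many sensors (CPU, memory bandwidth, network, I/O) and more robust statistics, but the principle of comparing each job against the aggregate job flow is the same.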

  16. Comments on the parallelization efficiency of the Sunway TaihuLight supercomputer

    OpenAIRE

    Végh, János

    2016-01-01

    In the world of supercomputers, the large number of processors requires minimizing the inefficiencies of parallelization, which appear as a sequential part of the program from the point of view of Amdahl's law. The recently suggested new figure of merit is applied to the recently presented supercomputer, and the timeline of "Top 500" supercomputers is scrutinized using the metric. It is demonstrated that, in addition to the computing performance and power consumption, the new supercomputer i...
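Amdahl's law, invoked above, bounds the speedup of a program with sequential fraction s running its parallel part on n processors: S(n) = 1 / (s + (1 - s)/n), which approaches 1/s no matter how many processors are added. A quick numeric illustration (the 1% sequential fraction is an arbitrary example):

```python
# Amdahl's law: speedup of a program with sequential fraction s when its
# parallelizable part runs on n processors.

def amdahl_speedup(s, n):
    return 1.0 / (s + (1.0 - s) / n)

# Even 1% sequential code caps the speedup at 100x, regardless of how
# many processors the supercomputer provides:
for n in (10, 100, 10000, 10**7):
    print(n, round(amdahl_speedup(0.01, n), 1))
```

This is exactly why the abstract treats the sequential fraction as the figure of merit for massively parallel machines: with millions of cores, even tiny serial portions dominate the achievable speedup.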

  17. Simulation of x-rays in refractive structure by the Monte Carlo method using the supercomputer SKIF

    International Nuclear Information System (INIS)

    Yaskevich, Yu.R.; Kravchenko, O.I.; Soroka, I.I.; Chembrovskij, A.G.; Kolesnik, A.S.; Serikova, N.V.; Petrov, P.V.; Kol'chevskij, N.N.

    2013-01-01

    Software 'Xray-SKIF' for the simulation of X-rays in refractive structures by the Monte Carlo method using the supercomputer SKIF BSU was developed. The program generates a large number of rays propagated from a source to the refractive structure. Ray trajectories are calculated under the assumption of geometrical optics. Absorption inside the refractive structure is calculated for each ray. Dynamic arrays are used to store the calculated ray parameters, which makes it possible to restore the X-ray field distributions very quickly at different detector positions. It was found that increasing the number of processors leads to a proportional decrease in calculation time: simulating 10^8 X-rays on the supercomputer with 1 processor and with 30 processors took 3 hours and 6 minutes, respectively. 10^9 X-rays were calculated by the 'Xray-SKIF' software, which allows the X-ray field after the refractive structure to be reconstructed with a spatial resolution of 1 micron. (authors)
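Per-ray absorption in a simulation like this is typically governed by the Beer-Lambert law: over a path of length t through material with attenuation coefficient mu, a ray survives with probability exp(-mu*t). A minimal Monte Carlo sketch of that attenuation step (not the Xray-SKIF code; the slab geometry and constants are invented):

```python
# Monte Carlo absorption of X-rays traversing a uniform slab, following
# the Beer-Lambert law: survival probability exp(-mu * thickness).
# Illustrative only; geometry and constants are hypothetical.
import math
import random

def transmitted_fraction(n_rays, mu, thickness, seed=42):
    rng = random.Random(seed)
    survived = 0
    for _ in range(n_rays):
        # Sample the free path before absorption; 1 - random() lies in
        # (0, 1], so the log is always finite. The ray is transmitted if
        # its free path exceeds the slab thickness.
        free_path = -math.log(1.0 - rng.random()) / mu
        if free_path > thickness:
            survived += 1
    return survived / n_rays

frac = transmitted_fraction(100_000, mu=2.0, thickness=0.5)
print(frac, math.exp(-2.0 * 0.5))  # Monte Carlo estimate vs analytic value
```

Because every ray is independent, this loop parallelizes trivially across processors, which is consistent with the near-linear scaling reported in the abstract.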

  18. Tryton Supercomputer Capabilities for Analysis of Massive Data Streams

    Directory of Open Access Journals (Sweden)

    Krawczyk Henryk

    2015-09-01

    Full Text Available The recently deployed supercomputer Tryton, located in the Academic Computer Center of Gdansk University of Technology, provides great means for massive parallel processing. Moreover, the status of the Center as one of the main network nodes in the PIONIER network enables the fast and reliable transfer of data produced by miscellaneous devices scattered across the whole country. Typical examples of such data are streams containing radio-telescope and satellite observations. Their analysis, especially under real-time constraints, can be challenging and requires the usage of dedicated software components. We propose a solution for such parallel analysis using the supercomputer, supervised by the KASKADA platform, which in conjunction with immersive 3D visualization techniques can be used to solve problems such as pulsar detection and chronometry or oil-spill simulation on the sea surface.

  19. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    Energy Technology Data Exchange (ETDEWEB)

    De, K [University of Texas at Arlington; Jha, S [Rutgers University; Klimentov, A [Brookhaven National Laboratory (BNL); Maeno, T [Brookhaven National Laboratory (BNL); Nilsson, P [Brookhaven National Laboratory (BNL); Oleynik, D [University of Texas at Arlington; Panitkin, S [Brookhaven National Laboratory (BNL); Wells, Jack C [ORNL; Wenaus, T [Brookhaven National Laboratory (BNL)

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at the integration of the PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the MIRA supercomputer at the Argonne Leadership Computing Facility (ALCF), the supercomputer at the National Research Center Kurchatov Institute, IT4 in Ostrava and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes.
This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for ATLAS since September 2015.
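The dispatch pattern behind such light-weight MPI wrappers — each rank independently picks a disjoint slice of single-threaded payloads, so no inter-rank communication is needed — can be sketched as follows. This is an illustrative round-robin partitioning, not the actual PanDA pilot code; the helper names are hypothetical.

```python
# Illustrative rank-based dispatch for running independent single-threaded
# workloads in parallel: rank r of `size` ranks takes every size-th task.
# (Hypothetical helpers; not the PanDA pilot implementation.)

def tasks_for_rank(tasks, rank, size):
    """Return the slice of tasks that MPI rank `rank` of `size` would run."""
    return tasks[rank::size]

def run_all(tasks, size):
    """Simulate the whole job: every rank processes its slice independently."""
    results = {}
    for rank in range(size):
        for task in tasks_for_rank(tasks, rank, size):
            results[task] = rank          # record which rank ran which task
    return results

if __name__ == "__main__":
    assignment = run_all(list(range(10)), size=4)
    print(assignment)  # tasks 0,4,8 on rank 0; 1,5,9 on rank 1; etc.
```

Because each slice is disjoint and the payloads are independent, a real wrapper only needs the rank and communicator size from MPI; the batch queue sees a single parallel job.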

  20. LOFAR, the low frequency array

    Science.gov (United States)

    Vermeulen, R. C.

    2012-09-01

    LOFAR, the Low Frequency Array, is a next-generation radio telescope designed by ASTRON, with antenna stations concentrated in the north of the Netherlands and currently spread into Germany, France, Sweden and the United Kingdom; plans for more LOFAR stations exist in several other countries. Utilizing a novel, phased-array design, LOFAR is optimized for the largely unexplored low frequency range between 30 and 240 MHz. Digital beam-forming techniques make the LOFAR system agile and allow for rapid re-pointing of the telescopes as well as the potential for multiple simultaneous observations. Processing (e.g. cross-correlation) takes place in the LOFAR BlueGene/P supercomputer, and associated post-processing facilities. With its dense core (inner few km) array and long (more than 1000 km) interferometric baselines, LOFAR reaches unparalleled sensitivity and resolution in the low frequency radio regime. The International LOFAR Telescope (ILT) is now issuing its first call for observing projects that will be peer reviewed and selected for observing starting in December. Part of the allocations will be made on the basis of a fully Open Skies policy; there are also reserved fractions assigned by national consortia in return for contributions from their country to the ILT. In this invited talk, the gradually expanding complement of operationally verified observing modes and capabilities are reviewed, and some of the exciting first astronomical results are presented.

  1. Multi-petascale highly efficient parallel supercomputer

    Science.gov (United States)

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen-Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2018-05-15

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaflop-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC). The ASIC nodes are interconnected by a five-dimensional torus network that optimally maximizes the throughput of packet communications between nodes and minimizes latency. The network implements a collective network and a global asynchronous network that provides global barrier and notification functions. Integrated in the node design is a list-based prefetcher. The memory system implements transactional memory, thread-level speculation, and a multiversioning cache that at the same time improves the soft error rate; it supports DMA functionality allowing for parallel message-passing processing.
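The latency advantage of a torus comes from its wraparound links: along each dimension the minimal hop count is the shorter of the direct and the wrapped path. A small illustrative calculation (a sketch of the routing metric, not IBM's implementation):

```python
# Illustrative: minimal hop count between two nodes of a torus network with
# wraparound links. Each coordinate contributes min(|a-b|, dim-|a-b|) hops.

def torus_hops(a, b, dims):
    """Minimal number of hops between coordinates a and b on a torus `dims`."""
    return sum(min(abs(x - y), d - abs(x - y)) for x, y, d in zip(a, b, dims))

if __name__ == "__main__":
    dims = (4, 4, 4, 8, 2)                 # hypothetical 5-D torus shape
    # (0,0,0,0,0) -> (3,0,0,7,1): each far coordinate wraps in one hop
    print(torus_hops((0, 0, 0, 0, 0), (3, 0, 0, 7, 1), dims))  # 3
```

On a mesh without wraparound the same pair would be 3 + 7 + 1 = 11 hops, which is why the torus topology bounds worst-case latency so effectively.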

  2. Applications of supercomputing and the utility industry: Calculation of power transfer capabilities

    International Nuclear Information System (INIS)

    Jensen, D.D.; Behling, S.R.; Betancourt, R.

    1990-01-01

    Numerical models and iterative simulation using supercomputers can furnish cost-effective answers to utility industry problems that are all but intractable using conventional computing equipment. An example of the use of supercomputers by the utility industry is the determination of power transfer capability limits for power transmission systems. This work has the goal of markedly reducing the run time of transient stability codes used to determine power distributions following major system disturbances. To date, run times of several hours on a conventional computer have been reduced to several minutes on state-of-the-art supercomputers, with further improvements anticipated to reduce run times to less than a minute. In spite of the potential advantages of supercomputers, few utilities have sufficient need for a dedicated in-house supercomputing capability. This problem is resolved using a supercomputer center serving a geographically distributed user base coupled via high speed communication networks

  3. Supercomputers to transform Science

    CERN Multimedia

    2006-01-01

    "New insights into the structure of space and time, climate modeling, and the design of novel drugs, are but a few of the many research areas that will be transformed by the installation of three supercomputers at the University of Bristol." (1/2 page)

  4. Convex unwraps its first grown-up supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Manuel, T.

    1988-03-03

    Convex Computer Corp.'s new supercomputer family is even more of an industry blockbuster than its first system. At a tenfold jump in performance, it's far from just an incremental upgrade over its first minisupercomputer, the C-1. The heart of the new family, the new C-2 processor, churning at 50 million floating-point operations/s, spawns a group of systems whose performance could pass for some fancy supercomputers, namely those of the Cray Research Inc. family. When added to the C-1, Convex's five new supercomputers create the C series, a six-member product group offering a performance range from 20 to 200 Mflops. They mark an important transition for Convex from a one-product high-tech startup to a multinational company with a wide-ranging product line. It's a tough transition but the Richardson, Texas, company seems to be doing it. The extended product line propels Convex into the upper end of the minisupercomputer class and nudges it into the low end of the big supercomputers. It positions Convex in an uncrowded segment of the market in the $500,000 to $1 million range offering 50 to 200 Mflops of performance. The company is making this move because the minisuper area, which it pioneered, quickly became crowded with new vendors, causing prices and gross margins to drop drastically.

  5. Supercomputer debugging workshop 1991 proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.

    1991-01-01

    This report discusses the following topics on supercomputer debugging: distributed debugging; user interface to debugging tools and standards; debugging optimized codes; debugging parallel codes; and debugger performance and interface as analysis tools. (LSP)

  6. Supercomputer debugging workshop 1991 proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.

    1991-12-31

    This report discusses the following topics on supercomputer debugging: distributed debugging; user interface to debugging tools and standards; debugging optimized codes; debugging parallel codes; and debugger performance and interface as analysis tools. (LSP)

  7. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    Science.gov (United States)

    Klimentov, A.; De, K.; Jha, S.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Wells, J.; Wenaus, T.

    2016-10-01

    The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than the Grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the

  8. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    International Nuclear Information System (INIS)

    Klimentov, A; Maeno, T; Nilsson, P; Panitkin, S; Wenaus, T; De, K; Oleynik, D; Jha, S; Wells, J

    2016-01-01

    The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than the Grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the

  9. Automatic discovery of the communication network topology for building a supercomputer model

    Science.gov (United States)

    Sobolev, Sergey; Stefanov, Konstantin; Voevodin, Vadim

    2016-10-01

    The Research Computing Center of Lomonosov Moscow State University is developing the Octotron software suite for automatic monitoring and mitigation of emergency situations in supercomputers so as to maximize hardware reliability. The suite is based on a software model of the supercomputer. The model uses a graph to describe the computing system components and their interconnections. One of the most complex components of a supercomputer that needs to be included in the model is its communication network. This work describes the proposed approach for automatically discovering the Ethernet communication network topology in a supercomputer and its description in terms of the Octotron model. This suite automatically detects computing nodes and switches, collects information about them and identifies their interconnections. The application of this approach is demonstrated on the "Lomonosov" and "Lomonosov-2" supercomputers.
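The graph description of components and interconnections that such a model uses can be illustrated with a minimal sketch. The data format and helper names below are assumptions for illustration, not Octotron's actual model or API:

```python
# Illustrative: represent discovered supercomputer components (nodes, switches)
# and their links as an adjacency map, and flag computing nodes for which
# discovery found no uplink (a likely cabling or switch fault).
# Data format is assumed; not the Octotron suite's real model.

from collections import defaultdict

def build_topology(links):
    """links: iterable of (component_a, component_b) pairs -> adjacency map."""
    adj = defaultdict(set)
    for a, b in links:
        adj[a].add(b)
        adj[b].add(a)
    return adj

def unconnected_nodes(adj, nodes):
    """Computing nodes with no discovered interconnection."""
    return [n for n in nodes if not adj.get(n)]

if __name__ == "__main__":
    links = [("node-1", "switch-A"), ("node-2", "switch-A"),
             ("switch-A", "switch-B")]
    adj = build_topology(links)
    print(unconnected_nodes(adj, ["node-1", "node-2", "node-3"]))  # ['node-3']
```

A monitoring rule over such a graph (e.g. "every node must reach a switch") is exactly the kind of invariant an automatic model check can enforce.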

  10. PNNL supercomputer to become largest computing resource on the Grid

    CERN Multimedia

    2002-01-01

    Hewlett Packard announced that the US DOE Pacific Northwest National Laboratory will connect a 9.3-teraflop HP supercomputer to the DOE Science Grid. This will be the largest supercomputer attached to a computer grid anywhere in the world (1 page).

  11. Direct exploitation of a top 500 Supercomputer for Analysis of CMS Data

    International Nuclear Information System (INIS)

    Cabrillo, I; Cabellos, L; Marco, J; Fernandez, J; Gonzalez, I

    2014-01-01

    The Altamira Supercomputer hosted at the Instituto de Fisica de Cantabria (IFCA) entered operation in summer 2012. Its last-generation FDR InfiniBand network, used for message passing in parallel jobs, also supports the connection to General Parallel File System (GPFS) servers, enabling efficient simultaneous processing of multiple data-demanding jobs. Sharing a common GPFS system and a single LDAP-based identification with the existing Grid clusters at IFCA allows CMS researchers to exploit the large instantaneous capacity of this supercomputer to execute analysis jobs. The detailed experience describing this opportunistic use for skimming and final analysis of CMS 2012 data for a specific physics channel, resulting in an order of magnitude reduction of the waiting time, is presented.

  12. Dust modelling and forecasting in the Barcelona Supercomputing Center: Activities and developments

    Energy Technology Data Exchange (ETDEWEB)

    Perez, C; Baldasano, J M; Jimenez-Guerrero, P; Jorba, O; Haustein, K; Basart, S [Earth Sciences Department. Barcelona Supercomputing Center. Barcelona (Spain); Cuevas, E [Izaña Atmospheric Research Center. Agencia Estatal de Meteorologia, Tenerife (Spain); Nickovic, S [Atmospheric Research and Environment Branch, World Meteorological Organization, Geneva (Switzerland)], E-mail: carlos.perez@bsc.es

    2009-03-01

    The Barcelona Supercomputing Center (BSC) is the National Supercomputer Facility in Spain, hosting MareNostrum, one of the most powerful Supercomputers in Europe. The Earth Sciences Department of BSC operates daily regional dust and air quality forecasts and conducts intensive modelling research for short-term operational prediction. This contribution summarizes the latest developments and current activities in the field of sand and dust storm modelling and forecasting.

  13. Dust modelling and forecasting in the Barcelona Supercomputing Center: Activities and developments

    International Nuclear Information System (INIS)

    Perez, C; Baldasano, J M; Jimenez-Guerrero, P; Jorba, O; Haustein, K; Basart, S; Cuevas, E; Nickovic, S

    2009-01-01

    The Barcelona Supercomputing Center (BSC) is the National Supercomputer Facility in Spain, hosting MareNostrum, one of the most powerful Supercomputers in Europe. The Earth Sciences Department of BSC operates daily regional dust and air quality forecasts and conducts intensive modelling research for short-term operational prediction. This contribution summarizes the latest developments and current activities in the field of sand and dust storm modelling and forecasting.

  14. An Optimized Parallel FDTD Topology for Challenging Electromagnetic Simulations on Supercomputers

    Directory of Open Access Journals (Sweden)

    Shugang Jiang

    2015-01-01

    It may not be a challenge to run a Finite-Difference Time-Domain (FDTD) code for electromagnetic simulations on a supercomputer with more than 10,000 CPU cores; however, making FDTD code work with the highest efficiency is a challenge. In this paper, the performance of parallel FDTD is optimized through MPI (Message Passing Interface) virtual topology, based on which a communication model is established. The general rules of optimal topology are presented according to the model. The performance of the method is tested and analyzed on three high performance computing platforms with different architectures in China. Simulations including an airplane with a 700-wavelength wingspan and a complex microstrip antenna array with nearly 2000 elements are performed very efficiently using a maximum of 10240 CPU cores.
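The core idea of a communication model for topology choice can be sketched as follows: for a fixed process count P and domain size, pick the 3-D process grid that minimizes each subdomain's halo-exchange surface (the data volume traded with neighbors each time step). This is a toy version of the kind of model the paper describes, with all details assumed:

```python
# Toy communication model for choosing an MPI virtual topology for FDTD:
# among all factorizations px*py*pz == P, minimize the per-process surface
# area of the subdomain, which is proportional to halo data exchanged.
# Illustrative only; the paper's actual model is more detailed.

from itertools import product

def factorizations(p):
    """All (px, py, pz) with px * py * pz == p."""
    return [(a, b, p // (a * b))
            for a, b in product(range(1, p + 1), repeat=2)
            if p % (a * b) == 0]

def halo_area(nx, ny, nz, px, py, pz):
    """Surface area of one subdomain (proportional to communication volume)."""
    sx, sy, sz = nx / px, ny / py, nz / pz
    return 2 * (sx * sy + sy * sz + sx * sz)

def best_topology(nx, ny, nz, p):
    """Process grid minimizing halo exchange for an nx*ny*nz domain."""
    return min(factorizations(p), key=lambda f: halo_area(nx, ny, nz, *f))

if __name__ == "__main__":
    # For a cubic domain, cubic subdomains minimize surface-to-volume ratio.
    print(best_topology(256, 256, 256, 64))  # (4, 4, 4)
```

For non-cubic domains or anisotropic network costs the optimum shifts away from the cubic split, which is precisely why a per-platform communication model is worth building.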

  15. ALMA Array Operations Group process overview

    Science.gov (United States)

    Barrios, Emilio; Alarcon, Hector

    2016-07-01

    ALMA science operations activities in Chile are the responsibility of the Department of Science Operations, which consists of three groups: the Array Operations Group (AOG), the Program Management Group (PMG) and the Data Management Group (DMG). The AOG includes the array operators and has the mission to provide support for science observations, operating the array safely and efficiently. The poster describes the AOG process, management and operational tools.

  16. Supercomputers and the mathematical modeling of high complexity problems

    International Nuclear Information System (INIS)

    Belotserkovskii, Oleg M

    2010-01-01

    This paper is a review of many works carried out by members of our scientific school in past years. The general principles of constructing numerical algorithms for high-performance computers are described. Several techniques are highlighted and these are based on the method of splitting with respect to physical processes and are widely used in computing nonlinear multidimensional processes in fluid dynamics, in studies of turbulence and hydrodynamic instabilities and in medicine and other natural sciences. The advances and developments related to the new generation of high-performance supercomputing in Russia are presented.

  17. The Pawsey Supercomputer geothermal cooling project

    Science.gov (United States)

    Regenauer-Lieb, K.; Horowitz, F.; Western Australian Geothermal Centre Of Excellence, T.

    2010-12-01

    The Australian Government has funded the Pawsey supercomputer in Perth, Western Australia, providing computational infrastructure intended to support the future operations of the Australian Square Kilometre Array radiotelescope and to boost next-generation computational geosciences in Australia. Supplementary funds have been directed to the development of a geothermal exploration well to research the potential for direct heat use applications at the Pawsey Centre site. Cooling the Pawsey supercomputer may be achieved by geothermal heat exchange rather than by conventional electrical power cooling, thus reducing the carbon footprint of the Pawsey Centre and demonstrating an innovative green technology that is widely applicable in industry and urban centres across the world. The exploration well is scheduled to be completed in 2013, with drilling due to commence in the third quarter of 2011. One year is allocated to finalizing the design of the exploration, monitoring and research well. Success in the geothermal exploration and research program will result in an industrial-scale geothermal cooling facility at the Pawsey Centre, and will provide a world-class student training environment in geothermal energy systems. A similar system is partially funded and in advanced planning to provide base-load air-conditioning for the main campus of the University of Western Australia. Both systems are expected to draw ~80-95 degrees C water from aquifers lying between 2000 and 3000 meters depth from naturally permeable rocks of the Perth sedimentary basin. The geothermal water will be run through absorption chilling devices, which only require heat (as opposed to mechanical work) to power a chilled water stream adequate to meet the cooling requirements. Once the heat has been removed from the geothermal water, licensing issues require the water to be re-injected back into the aquifer system. These systems are intended to demonstrate the feasibility of powering large-scale air

  18. Supercomputers Of The Future

    Science.gov (United States)

    Peterson, Victor L.; Kim, John; Holst, Terry L.; Deiwert, George S.; Cooper, David M.; Watson, Andrew B.; Bailey, F. Ron

    1992-01-01

    Report evaluates supercomputer needs of five key disciplines: turbulence physics, aerodynamics, aerothermodynamics, chemistry, and mathematical modeling of human vision. Predicts these fields will require computer speeds greater than 10^18 floating-point operations per second (FLOPS) and memory capacities greater than 10^15 words. Also, new parallel computer architectures and new structured numerical methods will make the necessary speed and capacity available.

  19. Integration of Titan supercomputer at OLCF with ATLAS Production System

    CERN Document Server

    AUTHOR|(SzGeCERN)643806; The ATLAS collaboration; De, Kaushik; Klimentov, Alexei; Nilsson, Paul; Oleynik, Danila; Padolski, Siarhei; Panitkin, Sergey; Wenaus, Torre

    2017-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of Petabytes of data and the rate of data processing already exceeds Exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this paper we will describe a project aimed at integration of ATLAS Production System with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). Current approach utilizes modified PanDA Pilot framework for jo...

  20. Integration of Titan supercomputer at OLCF with ATLAS production system

    CERN Document Server

    Panitkin, Sergey; The ATLAS collaboration

    2016-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of Petabytes of data and the rate of data processing already exceeds Exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this talk we will describe a project aimed at integration of ATLAS Production System with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). Current approach utilizes modified PanDA Pilot framework for job...

  1. Supercomputer algorithms for reactivity, dynamics and kinetics of small molecules

    International Nuclear Information System (INIS)

    Lagana, A.

    1989-01-01

    Even for small systems, the accurate characterization of reactive processes is so demanding of computer resources as to suggest the use of supercomputers having vector and parallel facilities. The full advantages of vector and parallel architectures can sometimes be obtained by simply modifying existing programs, vectorizing the manipulation of vectors and matrices, and requiring the parallel execution of independent tasks. More often, however, a significant time saving can be obtained only when the computer code undergoes a deeper restructuring, requiring a change in the computational strategy or, more radically, the adoption of a different theoretical treatment. This book discusses supercomputer strategies based upon exact and approximate methods aimed at calculating the electronic structure and the reactive properties of small systems. The book shows how, in recent years, intense design activity has led to the ability to calculate accurate electronic structures for reactive systems, exact and high-level approximations to three-dimensional reactive dynamics, and to efficient directive and declaratory software for the modelling of complex systems

  2. NASA Advanced Supercomputing Facility Expansion

    Science.gov (United States)

    Thigpen, William W.

    2017-01-01

    The NASA Advanced Supercomputing (NAS) Division enables advances in high-end computing technologies and in modeling and simulation methods to tackle some of the toughest science and engineering challenges facing NASA today. The name "NAS" has long been associated with leadership and innovation throughout the high-end computing (HEC) community. We play a significant role in shaping HEC standards and paradigms, and provide leadership in the areas of large-scale InfiniBand fabrics, Lustre open-source filesystems, and hyperwall technologies. We provide an integrated high-end computing environment to accelerate NASA missions and make revolutionary advances in science. Pleiades, a petaflop-scale supercomputer, is used by scientists throughout the U.S. to support NASA missions, and is ranked among the most powerful systems in the world. One of our key focus areas is in modeling and simulation to support NASA's real-world engineering applications and make fundamental advances in modeling and simulation methods.

  3. Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.

    Energy Technology Data Exchange (ETDEWEB)

    Younge, Andrew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pedretti, Kevin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grant, Ryan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brightwell, Ron [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-05-01

    While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community not only as a processing component to large HPC simulations, but also as standalone scientific tools for knowledge discovery. With the path towards Exascale, new HPC runtime systems are also emerging in a way that differs from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputers needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on an XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, both evaluating single-node performance as well as weak scaling of a 32-node virtual cluster. Overall, we find single-node performance of our solution using KVM on a Cray is very efficient, with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.

  4. Theory and applications of spherical microphone array processing

    CERN Document Server

    Jarrett, Daniel P; Naylor, Patrick A

    2017-01-01

    This book presents the signal processing algorithms that have been developed to process the signals acquired by a spherical microphone array. Spherical microphone arrays can be used to capture the sound field in three dimensions and have received significant interest from researchers and audio engineers. Algorithms for spherical array processing are different to corresponding algorithms already known in the literature of linear and planar arrays because the spherical geometry can be exploited to great beneficial effect. The authors aim to advance the field of spherical array processing by helping those new to the field to study it efficiently and from a single source, as well as by offering a way for more experienced researchers and engineers to consolidate their understanding, adding either or both of breadth and depth. The level of the presentation corresponds to graduate studies at MSc and PhD level. This book begins with a presentation of some of the essential mathematical and physical theory relevant to ...

  5. Supercomputers and quantum field theory

    International Nuclear Information System (INIS)

    Creutz, M.

    1985-01-01

    A review is given of why recent simulations of lattice gauge theories have resulted in substantial demands from particle theorists for supercomputer time. These calculations have yielded first principle results on non-perturbative aspects of the strong interactions. An algorithm for simulating dynamical quark fields is discussed. 14 refs

  6. Supercomputations and big-data analysis in strong-field ultrafast optical physics: filamentation of high-peak-power ultrashort laser pulses

    Science.gov (United States)

    Voronin, A. A.; Panchenko, V. Ya; Zheltikov, A. M.

    2016-06-01

    High-intensity ultrashort laser pulses propagating in gas media or in condensed matter undergo complex nonlinear spatiotemporal evolution where temporal transformations of optical field waveforms are strongly coupled to an intricate beam dynamics and ultrafast field-induced ionization processes. At the level of laser peak powers orders of magnitude above the critical power of self-focusing, the beam exhibits modulation instabilities, producing random field hot spots and breaking up into multiple noise-seeded filaments. This problem is described by a (3+1)-dimensional nonlinear field evolution equation, which needs to be solved jointly with the equation for ultrafast ionization of a medium. Analysis of this problem, which is equivalent to solving a billion-dimensional evolution problem, is only possible by means of supercomputer simulations augmented with coordinated big-data processing of large volumes of information acquired through theory-guiding experiments and supercomputations. Here, we review the main challenges of supercomputations and big-data processing encountered in strong-field ultrafast optical physics and discuss strategies to confront these challenges.
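Field evolution equations of this class are typically integrated by split-step methods: a linear (dispersive) half-step in the spectral domain, a nonlinear (Kerr-type) step in the physical domain. A toy 1-D analogue, with dimensionless illustrative scales (a real filamentation model adds diffraction, ionization and full 3-D geometry), might look like:

```python
# Toy 1-D split-step integrator, illustrating the numerical pattern behind
# (3+1)-D nonlinear field evolution codes: dispersion applied via FFT,
# Kerr-type self-phase modulation applied pointwise. Dimensionless units.

import numpy as np

def split_step(field, steps, dz=0.01, beta2=-1.0, gamma=1.0):
    n = field.size
    w = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / n)         # angular-frequency grid
    disp = np.exp(0.5j * beta2 * w**2 * dz)              # half dispersion step
    for _ in range(steps):
        field = np.fft.ifft(disp * np.fft.fft(field))    # linear half-step
        field *= np.exp(1j * gamma * np.abs(field)**2 * dz)  # nonlinear step
        field = np.fft.ifft(disp * np.fft.fft(field))    # linear half-step
    return field

if __name__ == "__main__":
    t = np.linspace(-5, 5, 256)
    a0 = (1.0 / np.cosh(t)).astype(complex)              # sech input pulse
    a1 = split_step(a0, steps=100)
    # both sub-steps are phase-only, so pulse energy should be conserved
    print(abs(np.sum(np.abs(a1)**2) - np.sum(np.abs(a0)**2)) < 1e-6)
```

In the full problem each of these steps acts on a 3-D field at every propagation slice, which is what drives the memory and FLOP budgets toward supercomputer scale.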

  7. The Applicability of Incoherent Array Processing to IMS Seismic Array Stations

    Science.gov (United States)

    Gibbons, S. J.

    2012-04-01

    The seismic arrays of the International Monitoring System for the CTBT differ greatly in size and geometry, with apertures ranging from below 1 km to over 60 km. Large and medium aperture arrays with large inter-site spacings complicate the detection and estimation of high frequency phases since signals are often incoherent between sensors. Many such phases, typically from events at regional distances, remain undetected since pipeline algorithms often consider only frequencies low enough to allow coherent array processing. High frequency phases that are detected are frequently attributed qualitatively incorrect backazimuth and slowness estimates and are consequently not associated with the correct event hypotheses. This can lead to missed events both due to a lack of contributing phase detections and by corruption of event hypotheses by spurious detections. Continuous spectral estimation can be used for phase detection and parameter estimation on the largest aperture arrays, with phase arrivals identified as local maxima on beams of transformed spectrograms. The estimation procedure in effect measures group velocity rather than phase velocity and the ability to estimate backazimuth and slowness requires that the spatial extent of the array is large enough to resolve time-delays between envelopes with a period of approximately 4 or 5 seconds. The NOA, AKASG, YKA, WRA, and KURK arrays have apertures in excess of 20 km and spectrogram beamforming on these stations provides high quality slowness estimates for regional phases without additional post-processing. Seven arrays with aperture between 10 and 20 km (MJAR, ESDC, ILAR, KSRS, CMAR, ASAR, and EKA) can provide robust parameter estimates subject to a smoothing of the resulting slowness grids, most effectively achieved by convolving the measured slowness grids with the array response function for a 4 or 5 second period signal. The MJAR array in Japan recorded high SNR Pn signals for both the 2006 and 2009 North Korea

  8. Toward a Proof of Concept Cloud Framework for Physics Applications on Blue Gene Supercomputers

    International Nuclear Information System (INIS)

    Dreher, Patrick; Scullin, William; Vouk, Mladen

    2015-01-01

    Traditional high performance supercomputers are capable of delivering large sustained state-of-the-art computational resources to physics applications over extended periods of time using batch processing mode operating environments. However, today there is an increasing demand for more complex workflows that involve large fluctuations in the levels of HPC physics computational requirements during the simulations. Some of the workflow components may also require a richer set of operating system features and schedulers than normally found in a batch oriented HPC environment. This paper reports on progress toward a proof of concept design that implements a cloud framework onto BG/P and BG/Q platforms at the Argonne Leadership Computing Facility. The BG/P implementation utilizes the Kittyhawk utility and the BG/Q platform uses an experimental heterogeneous FusedOS operating system environment. Both platforms use the Virtual Computing Laboratory as the cloud computing system embedded within the supercomputer. This proof of concept design allows a cloud to be configured so that it can capitalize on the specialized infrastructure capabilities of a supercomputer and the flexible cloud configurations without resorting to virtualization. Initial testing of the proof of concept system is done using the lattice QCD MILC code. These types of user reconfigurable environments have the potential to deliver experimental schedulers and operating systems within a working HPC environment for physics computations that may be different from the native OS and schedulers on production HPC supercomputers. (paper)

  9. Use of high performance networks and supercomputers for real-time flight simulation

    Science.gov (United States)

    Cleveland, Jeff I., II

    1993-01-01

    In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be consistent in processing time and be completed in as short a time as possible. These operations include simulation mathematical model computation and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to the Computer Automated Measurement and Control (CAMAC) technology which resulted in a factor of ten increase in the effective bandwidth and reduced latency of modules necessary for simulator communication. This technology extension is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC are completing the development of the use of supercomputers for mathematical model computation to support real-time flight simulation. This includes the development of a real-time operating system and development of specialized software and hardware for the simulator network. This paper describes the data acquisition technology and the development of supercomputing for flight simulation.

  10. Mistral Supercomputer Job History Analysis

    OpenAIRE

    Zasadziński, Michał; Muntés-Mulero, Victor; Solé, Marc; Ludwig, Thomas

    2018-01-01

    In this technical report, we show insights and results of operational data analysis from the petascale supercomputer Mistral, ranked as the 42nd most powerful in the world as of January 2018. Data sources include hardware monitoring data, job scheduler history, topology, and hardware information. We explore job state sequences, spatial distribution, and electric power patterns.

  11. Model-based processing for underwater acoustic arrays

    CERN Document Server

    Sullivan, Edmund J

    2015-01-01

    This monograph presents a unified approach to model-based processing for underwater acoustic arrays. The use of physical models in passive array processing is not a new idea, but it has been used on a case-by-case basis, and as such, lacks any unifying structure. This work views all such processing methods as estimation procedures, which then can be unified by treating them all as a form of joint estimation based on a Kalman-type recursive processor, which can be recursive either in space or time, depending on the application. This is done for three reasons. First, the Kalman filter provides a natural framework for the inclusion of physical models in a processing scheme. Second, it allows poorly known model parameters to be jointly estimated along with the quantities of interest. This is important, since in certain areas of array processing already in use, such as those based on matched-field processing, the so-called mismatch problem either degrades performance or, indeed, prevents any solution at all. Third...
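The Kalman-type recursive processor at the heart of this monograph can be illustrated with a minimal scalar sketch; the noise parameters and measurement sequence below are illustrative assumptions, not values from the book:

```python
# Minimal scalar Kalman filter: recursively refine an estimate of a constant
# quantity from noisy measurements. The process noise q, measurement noise r,
# and the data are assumed values chosen only for this illustration.

def kalman_estimate(measurements, q=1e-5, r=0.1 ** 2):
    x, p = 0.0, 1.0            # initial state estimate and its variance
    estimates = []
    for z in measurements:
        p = p + q              # predict: variance grows by process noise
        k = p / (p + r)        # Kalman gain
        x = x + k * (z - x)    # update with the measurement innovation
        p = (1 - k) * p        # reduced posterior variance
        estimates.append(x)
    return estimates

noisy = [0.39, 0.50, 0.48, 0.29, 0.25, 0.32, 0.34, 0.48, 0.41, 0.45]
est = kalman_estimate(noisy)
print(round(est[-1], 3))
```

In the monograph's setting the recursion runs over space or time across the array, and poorly known model parameters are appended to the state vector and estimated jointly; the scalar version above only shows the predict/update skeleton.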

  12. Interactive real-time nuclear plant simulations on a UNIX based supercomputer

    International Nuclear Information System (INIS)

    Behling, S.R.

    1990-01-01

    Interactive real-time nuclear plant simulations are critically important to train nuclear power plant engineers and operators. In addition, real-time simulations can be used to test the validity and timing of plant technical specifications and operational procedures. To accurately and confidently simulate a nuclear power plant transient in real-time, sufficient computer resources must be available. Since some important transients cannot be simulated using preprogrammed responses or non-physical models, commonly used simulation techniques may not be adequate. However, the power of a supercomputer allows one to accurately calculate the behavior of nuclear power plants even during very complex transients. Many of these transients can be calculated in real-time or quicker on the fastest supercomputers. The concept of running interactive real-time nuclear power plant transients on a supercomputer has been tested. This paper describes the architecture of the simulation program, the techniques used to establish real-time synchronization, and other issues related to the use of supercomputers in a new and potentially very important area. (author)

  13. The Fermilab Advanced Computer Program multi-array processor system (ACPMAPS): A site oriented supercomputer for theoretical physics

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.

    1988-08-01

    The ACP Multi-Array Processor System (ACPMAPS) is a highly cost effective, local memory parallel computer designed for floating point intensive grid based problems. The processing nodes of the system are single board array processors based on the FORTRAN and C programmable Weitek XL chip set. The nodes are connected by a network of very high bandwidth 16 port crossbar switches. The architecture is designed to achieve the highest possible cost effectiveness while maintaining a high level of programmability. The primary application of the machine at Fermilab will be lattice gauge theory. The hardware is supported by a transparent site oriented software system called CANOPY which shields theorist users from the underlying node structure. 4 refs., 2 figs

  14. Porting Ordinary Applications to Blue Gene/Q Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Maheshwari, Ketan C.; Wozniak, Justin M.; Armstrong, Timothy; Katz, Daniel S.; Binkowski, T. Andrew; Zhong, Xiaoliang; Heinonen, Olle; Karpeyev, Dmitry; Wilde, Michael

    2015-08-31

    Efficiently porting ordinary applications to Blue Gene/Q supercomputers is a significant challenge. Codes are often originally developed without considering advanced architectures and related tool chains. Science needs frequently lead users to want to run large numbers of relatively small jobs (often called many-task computing, an ensemble, or a workflow), which can conflict with supercomputer configurations. In this paper, we discuss techniques developed to execute ordinary applications over leadership class supercomputers. We use the high-performance Swift parallel scripting framework and build two workflow execution techniques-sub-jobs and main-wrap. The sub-jobs technique, built on top of the IBM Blue Gene/Q resource manager Cobalt's sub-block jobs, lets users submit multiple, independent, repeated smaller jobs within a single larger resource block. The main-wrap technique is a scheme that enables C/C++ programs to be defined as functions that are wrapped by a high-performance Swift wrapper and that are invoked as a Swift script. We discuss the needs, benefits, technicalities, and current limitations of these techniques. We further discuss the real-world science enabled by these techniques and the results obtained.

  15. Efficient processing of two-dimensional arrays with C or C++

    Science.gov (United States)

    Donato, David I.

    2017-07-20

    Because fast and efficient serial processing of raster-graphic images and other two-dimensional arrays is a requirement in land-change modeling and other applications, the effects of 10 factors on the runtimes for processing two-dimensional arrays with C and C++ are evaluated in a comparative factorial study. This study’s factors include the choice among three C or C++ source-code techniques for array processing; the choice of Microsoft Windows 7 or a Linux operating system; the choice of 4-byte or 8-byte array elements and indexes; and the choice of 32-bit or 64-bit memory addressing. This study demonstrates how programmer choices can reduce runtimes by 75 percent or more, even after compiler optimizations. Ten points of practical advice for faster processing of two-dimensional arrays are offered to C and C++ programmers. Further study and the development of a C and C++ software test suite are recommended.
Key words: array processing, C, C++, compiler, computational speed, land-change modeling, raster-graphic image, two-dimensional array, software efficiency
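The access-pattern choice at issue in this study can be sketched in a few lines. The two traversals below mirror the row-order versus column-order decision a C or C++ programmer faces; array sizes are arbitrary, and the large cache penalty of strided access appears in compiled languages rather than in this Python illustration:

```python
# Row-order vs column-order traversal of a two-dimensional array.
# In C or C++, visiting elements in storage (row-major) order is typically
# much faster because of cache behaviour; this sketch only contrasts the
# two access patterns, with sizes chosen arbitrarily for the example.

ROWS, COLS = 300, 400
grid = [[r * COLS + c for c in range(COLS)] for r in range(ROWS)]

def sum_row_order(a):
    total = 0
    for row in a:              # contiguous, storage-order access
        for v in row:
            total += v
    return total

def sum_col_order(a):
    total = 0
    for c in range(COLS):      # strided access: jumps between rows
        for r in range(ROWS):
            total += a[r][c]
    return total

assert sum_row_order(grid) == sum_col_order(grid)  # same result either way
print(sum_row_order(grid))
```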

  16. Experimental investigation of the ribbon-array ablation process

    International Nuclear Information System (INIS)

    Li Zhenghong; Xu Rongkun; Chu Yanyun; Yang Jianlun; Xu Zeping; Ye Fan; Chen Faxin; Xue Feibiao; Ning Jiamin; Qin Yi; Meng Shijian; Hu Qingyuan; Si Fenni; Feng Jinghua; Zhang Faqiang; Chen Jinchuan; Li Linbo; Chen Dingyang; Ding Ning; Zhou Xiuwen

    2013-01-01

    Ablation processes of ribbon-array loads, as well as wire-array loads for comparison, were investigated on the Qiangguang-1 accelerator. The ultraviolet framing images indicate that the ribbon-array loads have stable passages of currents, which produce axially uniform ablated plasma. The end-on x-ray framing camera observed the azimuthally modulated distribution of the early ablated ribbon-array plasma and the shrink process of the x-ray radiation region. Magnetic probes measured the total and precursor currents of ribbon-array and wire-array loads, and there exists no evident difference between the precursor currents of the two types of loads. The proportion of the precursor current to the total current is 15% to 20%, and the start time of the precursor current is about 25 ns later than that of the total current. The melting time of the load material is about 16 ns, when the inward drift velocity of the ablated plasma is taken to be 1.5 × 10^7 cm/s.

  17. Flux-Level Transit Injection Experiments with NASA Pleiades Supercomputer

    Science.gov (United States)

    Li, Jie; Burke, Christopher J.; Catanzarite, Joseph; Seader, Shawn; Haas, Michael R.; Batalha, Natalie; Henze, Christopher; Christiansen, Jessie; Kepler Project, NASA Advanced Supercomputing Division

    2016-06-01

    Flux-Level Transit Injection (FLTI) experiments are executed with NASA's Pleiades supercomputer for the Kepler Mission. The latest release (9.3, January 2016) of the Kepler Science Operations Center Pipeline is used in the FLTI experiments. Their purpose is to validate the Analytic Completeness Model (ACM), which can be computed for all Kepler target stars, thereby enabling exoplanet occurrence rate studies. Pleiades, a facility of NASA's Advanced Supercomputing Division, is one of the world's most powerful supercomputers and represents NASA's state-of-the-art technology. We discuss the details of implementing the FLTI experiments on the Pleiades supercomputer. For example, taking into account that ~16 injections are generated by one core of the Pleiades processors in an hour, the “shallow” FLTI experiment, in which ~2000 injections are required per target star, can be done for 16% of all Kepler target stars in about 200 hours. Stripping down the transit search to bare bones, i.e. only searching adjacent high/low periods at high/low pulse durations, makes the computationally intensive FLTI experiments affordable. The design of the FLTI experiments and the analysis of the resulting data are presented in “Validating an Analytic Completeness Model for Kepler Target Stars Based on Flux-level Transit Injection Experiments” by Catanzarite et al. (#2494058).Kepler was selected as the 10th mission of the Discovery Program. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.
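A back-of-envelope check of the throughput figures quoted above can be sketched as follows; the injections-per-core-hour and injections-per-star numbers come from the abstract, while the total Kepler target count (~200,000 stars) is an outside assumption used only for illustration:

```python
# Rough cost arithmetic for the "shallow" FLTI experiment described above.
# ~16 injections per core-hour and ~2000 injections per target star are
# taken from the abstract; the assumed total of ~200,000 Kepler targets is
# an illustrative outside figure, not a number from the record.

injections_per_core_hour = 16
injections_per_star = 2000
core_hours_per_star = injections_per_star / injections_per_core_hour
print(core_hours_per_star)          # core-hours needed for one target star

targets = 200_000                   # assumed Kepler target count
stars_covered = 0.16 * targets      # the quoted 16% coverage
total_core_hours = stars_covered * core_hours_per_star
cores_for_200_hours = total_core_hours / 200
print(int(cores_for_200_hours))     # cores needed to finish in ~200 hours
```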

  18. Adaptive motion compensation in sonar array processing

    NARCIS (Netherlands)

    Groen, J.

    2006-01-01

    In recent years, sonar performance has mainly improved via a significant increase in array aperture, signal bandwidth and computational power. This thesis aims at improving sonar array processing techniques based on these three steps forward. In applications such as anti-submarine warfare and mine

  19. Integrating Scientific Array Processing into Standard SQL

    Science.gov (United States)

    Misev, Dimitar; Bachhuber, Johannes; Baumann, Peter

    2014-05-01

    We live in a time that is dominated by data. Data storage is cheap and more applications than ever accrue vast amounts of data. Storing the emerging multidimensional data sets efficiently, however, and allowing them to be queried by their inherent structure, is a challenge many databases have to face today. Despite the fact that multidimensional array data is almost always linked to additional, non-array information, array databases have mostly developed separately from relational systems, resulting in a disparity between the two database categories. The current SQL standard and SQL DBMSs support arrays - and, in an extension, also multidimensional arrays - but do so in a very rudimentary and inefficient way. This poster demonstrates the practicality of an SQL extension for array processing, implemented in a proof-of-concept multi-faceted system that manages a federation of array and relational database systems, providing transparent, efficient and scalable access to the heterogeneous data in them.

  20. Extracting the Textual and Temporal Structure of Supercomputing Logs

    Energy Technology Data Exchange (ETDEWEB)

    Jain, S; Singh, I; Chandra, A; Zhang, Z; Bronevetsky, G

    2009-05-26

    Supercomputers are prone to frequent faults that adversely affect their performance, reliability and functionality. System logs collected on these systems are a valuable resource of information about their operational status and health. However, their massive size, complexity, and lack of standard format makes it difficult to automatically extract information that can be used to improve system management. In this work we propose a novel method to succinctly represent the contents of supercomputing logs, by using textual clustering to automatically find the syntactic structures of log messages. This information is used to automatically classify messages into semantic groups via an online clustering algorithm. Further, we describe a methodology for using the temporal proximity between groups of log messages to identify correlated events in the system. We apply our proposed methods to two large, publicly available supercomputing logs and show that our technique features nearly perfect accuracy for online log-classification and extracts meaningful structural and temporal message patterns that can be used to improve the accuracy of other log analysis techniques.
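The syntactic-structure idea behind this log-classification method can be sketched in simplified form; the paper uses online textual clustering, whereas the digit-masking template below is only an illustrative stand-in:

```python
# Simplified sketch of syntactic log grouping: tokenize each message, mask
# tokens containing digits, and group messages that share the resulting
# template. The hypothetical log lines and the masking rule are assumptions
# standing in for the paper's online clustering algorithm.

import re

def template(message):
    tokens = message.split()
    return " ".join("<*>" if re.search(r"\d", t) else t for t in tokens)

logs = [
    "node n012 temperature 71C",
    "node n045 temperature 66C",
    "link L3 retry count 4",
    "link L7 retry count 9",
]

groups = {}
for line in logs:
    groups.setdefault(template(line), []).append(line)

print(len(groups))   # distinct message templates found
```

Messages sharing a template form a candidate semantic group; temporal proximity between groups can then be used to flag correlated events, as the paper describes.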

  1. Personal Supercomputing for Monte Carlo Simulation Using a GPU

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Jae-Yong; Koo, Yang-Hyun; Lee, Byung-Ho [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2008-05-15

    Since the usability, accessibility, and maintenance of a personal computer (PC) are very good, a PC is a useful computer simulation tool for researchers. It has enough calculation power to simulate a small-scale system, given the improved performance of a PC's CPU. However, if a system is large or involves long time scales, we need a cluster computer or supercomputer. Recently great changes have occurred in the PC calculation environment. A graphics processing unit (GPU) on a graphics card, previously used only to calculate display data, has a calculation capability superior to a PC's CPU. This GPU calculation performance matches that of a supercomputer in 2000. Although it has such great calculation potential, it is not easy to program a simulation code for a GPU, owing to the difficult programming techniques required to convert a calculation matrix to a 3D rendering image using graphics APIs. In 2006, NVIDIA provided a Software Development Kit (SDK) as the programming environment for NVIDIA's graphics cards, called the Compute Unified Device Architecture (CUDA). It makes programming on the GPU easy without knowledge of the graphics APIs. This paper describes the basic architectures of NVIDIA's GPU and CUDA, and carries out a performance benchmark for the Monte Carlo simulation.

  2. Personal Supercomputing for Monte Carlo Simulation Using a GPU

    International Nuclear Information System (INIS)

    Oh, Jae-Yong; Koo, Yang-Hyun; Lee, Byung-Ho

    2008-01-01

    Since the usability, accessibility, and maintenance of a personal computer (PC) are very good, a PC is a useful computer simulation tool for researchers. It has enough calculation power to simulate a small-scale system, given the improved performance of a PC's CPU. However, if a system is large or involves long time scales, we need a cluster computer or supercomputer. Recently great changes have occurred in the PC calculation environment. A graphics processing unit (GPU) on a graphics card, previously used only to calculate display data, has a calculation capability superior to a PC's CPU. This GPU calculation performance matches that of a supercomputer in 2000. Although it has such great calculation potential, it is not easy to program a simulation code for a GPU, owing to the difficult programming techniques required to convert a calculation matrix to a 3D rendering image using graphics APIs. In 2006, NVIDIA provided a Software Development Kit (SDK) as the programming environment for NVIDIA's graphics cards, called the Compute Unified Device Architecture (CUDA). It makes programming on the GPU easy without knowledge of the graphics APIs. This paper describes the basic architectures of NVIDIA's GPU and CUDA, and carries out a performance benchmark for the Monte Carlo simulation
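A minimal CPU-side Monte Carlo of the kind that benefits from GPU porting can be sketched as follows; the pi-estimation task, sample count, and seed are arbitrary choices for the example, not the benchmark used in these records:

```python
# Minimal Monte Carlo baseline of the sort ported to CUDA: estimate pi by
# sampling random points in the unit square and counting hits inside the
# quarter circle. Sample count and seed are arbitrary; a GPU version would
# evaluate many independent sample batches in parallel, one per thread.

import random

def estimate_pi(n_samples, seed=1):
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n_samples

print(round(estimate_pi(100_000), 2))
```

Because every sample is independent, the loop maps directly onto data-parallel hardware, which is why such simulations were an early showcase for CUDA.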

  3. Computational Science with the Titan Supercomputer: Early Outcomes and Lessons Learned

    Science.gov (United States)

    Wells, Jack

    2014-03-01

    Modeling and simulation with petascale computing has supercharged the process of innovation and understanding, dramatically accelerating time-to-insight and time-to-discovery. This presentation will focus on early outcomes from the Titan supercomputer at the Oak Ridge National Laboratory. Titan has over 18,000 hybrid compute nodes consisting of both CPUs and GPUs. In this presentation, I will discuss the lessons we have learned in deploying Titan and preparing applications to move from conventional CPU architectures to a hybrid machine. I will present early results of materials applications running on Titan and the implications for the research community as we prepare for exascale supercomputers in the next decade. Lastly, I will provide an overview of user programs at the Oak Ridge Leadership Computing Facility with specific information on how researchers may apply for allocations of computing resources. This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.

  4. Exploiting Thread Parallelism for Ocean Modeling on Cray XC Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sarje, Abhinav [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Jacobsen, Douglas W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Williams, Samuel W. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ringler, Todd [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Oliker, Leonid [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-05-01

    The incorporation of increasing core counts in modern processors used to build state-of-the-art supercomputers is driving application development towards exploitation of thread parallelism, in addition to distributed memory parallelism, with the goal of delivering efficient high-performance codes. In this work we describe the exploitation of threading and our experiences with it with respect to a real-world ocean modeling application code, MPAS-Ocean. We present detailed performance analysis and comparisons of various approaches and configurations for threading on the Cray XC series supercomputers.

  5. A supercomputing application for reactors core design and optimization

    International Nuclear Information System (INIS)

    Hourcade, Edouard; Gaudier, Fabrice; Arnaud, Gilles; Funtowiez, David; Ammar, Karim

    2010-01-01

    Advanced nuclear reactor designs are often intuition-driven processes where designers first develop or use simplified simulation tools for each physical phenomenon involved. Through the project development, complexity in each discipline increases, and the implementation of chaining/coupling capabilities adapted to a supercomputing optimization process is often postponed to a later step, so that the task gets increasingly challenging. In the context of renewal in reactor designs, first-realization projects are often run in parallel with advanced design, although they are very dependent on final options. As a consequence, tools to globally assess/optimize reactor core features, with the accuracy of the on-going design methods, are needed. This should be possible within reasonable simulation time and without advanced computer skills at the project management scale. Also, these tools should easily cope with modeling progress in each discipline throughout the project lifetime. An early-stage development of a multi-physics package adapted to supercomputing is presented. The URANIE platform, developed at CEA and based on the Data Analysis Framework ROOT, is very well adapted to this approach. It allows diversified sampling techniques (SRS, LHS, qMC), fitting tools (neural networks, ...) and optimization techniques (genetic algorithms). Also, database management and visualization are made very easy. In this paper, we present the various implementation steps of this core physics tool, where neutronics, thermo-hydraulics, and fuel mechanics codes are run simultaneously. A relevant example of optimization of nuclear reactor safety characteristics will be presented. Also, the flexibility of the URANIE tool will be illustrated with the presentation of several approaches to improve Pareto front quality. (author)

  6. QCD on the BlueGene/L Supercomputer

    International Nuclear Information System (INIS)

    Bhanot, G.; Chen, D.; Gara, A.; Sexton, J.; Vranas, P.

    2005-01-01

    In June 2004 QCD was simulated for the first time at sustained speed exceeding 1 TeraFlops in the BlueGene/L supercomputer at the IBM T.J. Watson Research Lab. The implementation and performance of QCD in the BlueGene/L is presented

  7. QCD on the BlueGene/L Supercomputer

    Science.gov (United States)

    Bhanot, G.; Chen, D.; Gara, A.; Sexton, J.; Vranas, P.

    2005-03-01

    In June 2004 QCD was simulated for the first time at sustained speed exceeding 1 TeraFlops in the BlueGene/L supercomputer at the IBM T.J. Watson Research Lab. The implementation and performance of QCD in the BlueGene/L is presented.

  8. Proceedings of the first energy research power supercomputer users symposium

    International Nuclear Information System (INIS)

    1991-01-01

    The Energy Research Power Supercomputer Users Symposium was arranged to showcase the richness of science that has been pursued and accomplished in this program through the use of supercomputers and now high performance parallel computers over the last year: this report is the collection of the presentations given at the Symposium. "Power users" were invited by the ER Supercomputer Access Committee to show that the use of these computational tools and the associated data communications network, ESNet, go beyond merely speeding up computations. Today the work often directly contributes to the advancement of the conceptual developments in their fields and the computational and network resources form the very infrastructure of today's science. The Symposium also provided an opportunity, which is rare in this day of network access to computing resources, for the invited users to compare and discuss their techniques and approaches with those used in other ER disciplines. The significance of new parallel architectures was highlighted by the interesting evening talk given by Dr. Stephen Orszag of Princeton University

  9. Visualization on supercomputing platform level II ASC milestone (3537-1B) results from Sandia.

    Energy Technology Data Exchange (ETDEWEB)

    Geveci, Berk (Kitware, Inc., Clifton Park, NY); Fabian, Nathan; Marion, Patrick (Kitware, Inc., Clifton Park, NY); Moreland, Kenneth D.

    2010-09-01

    This report provides documentation for the completion of the Sandia portion of the ASC Level II Visualization on the platform milestone. This ASC Level II milestone is a joint milestone between Sandia National Laboratories and Los Alamos National Laboratories. This milestone contains functionality required for performing visualization directly on a supercomputing platform, which is necessary for peta-scale visualization. Sandia's contribution concerns in-situ visualization, running a visualization in tandem with a solver. Visualization and analysis of petascale data is limited by several factors which must be addressed as ACES delivers the Cielo platform. Two primary difficulties are: (1) Performance of interactive rendering, which is the most computationally intensive portion of the visualization process. For terascale platforms, commodity clusters with graphics processors (GPUs) have been used for interactive rendering. For petascale platforms, visualization and rendering may be able to run efficiently on the supercomputer platform itself. (2) I/O bandwidth, which limits how much information can be written to disk. If we simply analyze the sparse information that is saved to disk we miss the opportunity to analyze the rich information produced every timestep by the simulation. For the first issue, we are pursuing in-situ analysis, in which simulations are coupled directly with analysis libraries at runtime. This milestone will evaluate the visualization and rendering performance of current and next generation supercomputers in contrast to GPU-based visualization clusters, and evaluate the performance of common analysis libraries coupled with the simulation that analyze and write data to disk during a running simulation. This milestone will explore, evaluate and advance the maturity level of these technologies and their applicability to problems of interest to the ASC program. Scientific simulation on parallel supercomputers is traditionally performed in four

  10. Graphics supercomputer for computational fluid dynamics research

    Science.gov (United States)

    Liaw, Goang S.

    1994-11-01

    The objective of this project is to purchase a state-of-the-art graphics supercomputer to improve the Computational Fluid Dynamics (CFD) research capability at Alabama A & M University (AAMU) and to support the Air Force research projects. A cutting-edge graphics supercomputer system, Onyx VTX, from Silicon Graphics Computer Systems (SGI), was purchased and installed. Other equipment including a desktop personal computer, PC-486 DX2 with a built-in 10-BaseT Ethernet card, a 10-BaseT hub, an Apple Laser Printer Select 360, and a notebook computer from Zenith were also purchased. A reading room has been converted to a research computer lab by adding furniture and an air conditioning unit in order to provide an appropriate working environment for researchers and the purchased equipment. All the purchased equipment was successfully installed and is fully functional. Several research projects, including two existing Air Force projects, are being performed using these facilities.

  11. Oxide nano-rod array structure via a simple metallurgical process

    International Nuclear Information System (INIS)

    Nanko, M; Do, D T M

    2011-01-01

    A simple method for fabricating an oxide nano-rod array structure via a metallurgical process is reported. Some dilute alloys, such as Ni(Al) solid solutions, show internal oxidation with rod-like oxide precipitates during high-temperature oxidation at low oxygen partial pressure. By removing the metal part of the internal oxidation zone, an oxide nano-rod array structure can be developed on the surface of metallic components. In this report, Al2O3 or NiAl2O4 nano-rod array structures were prepared by using a Ni(Al) solid solution. The effects of Cr addition to the Ni(Al) solid solution on internal oxidation are also reported. A pack cementation process for aluminizing the Ni surface was applied to prepare nano-rod array components with a desired shape. Near-net-shape Ni components with an oxide nano-rod array structure on their surface can be prepared by using the pack cementation process and internal oxidation.

  12. Removing Background Noise with Phased Array Signal Processing

    Science.gov (United States)

    Podboy, Gary; Stephens, David

    2015-01-01

    Preliminary results are presented from a test conducted to determine how well microphone phased array processing software could pull an acoustic signal out of background noise. The array consisted of 24 microphones in an aerodynamic fairing designed to be mounted in-flow. The processing was conducted using Functional Beamforming software developed by Optinav combined with cross spectral matrix subtraction. The test was conducted in the free-jet of the Nozzle Acoustic Test Rig at NASA GRC. The background noise was produced by the interaction of the free-jet flow with the solid surfaces in the flow. The acoustic signals were produced by acoustic drivers. The results show that the phased array processing was able to pull the acoustic signal out of the background noise provided the signal was no more than 20 dB below the background noise level measured using a conventional single microphone equipped with an aerodynamic forebody.
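
    The cross-spectral matrix (CSM) subtraction used above can be sketched in a few lines of numpy (a toy model with invented array size and signal and noise levels, not the Optinav software): the CSM of a background-only run is subtracted from the CSM of the contaminated data, removing most of the background from the beamformer output.

```python
import numpy as np

rng = np.random.default_rng(0)
n_mics, n_snapshots = 24, 400

# Synthetic frequency-domain snapshots: one coherent source seen through a
# (hypothetical) steering vector, buried in stronger uncorrelated noise.
steering = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n_mics))
amp = 0.1 * rng.standard_normal(n_snapshots)          # weak source amplitude
signal = steering[:, None] * amp
noise = (rng.standard_normal((n_mics, n_snapshots))
         + 1j * rng.standard_normal((n_mics, n_snapshots))) / np.sqrt(2)

def csm(x):
    """Cross-spectral matrix, averaged over snapshots."""
    return x @ x.conj().T / x.shape[1]

# CSM of the contaminated data minus the CSM of a background-only run.
csm_clean = csm(signal + noise) - csm(noise)

# Beamformer power toward the source, with and without subtraction.
w = steering / np.sqrt(n_mics)
p_raw = np.real(w.conj() @ csm(signal + noise) @ w)
p_clean = np.real(w.conj() @ csm_clean @ w)
```

    In this synthetic example, the beamformer power toward the source drops from roughly the background level to roughly the much weaker source level after the subtraction.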

  13. A workbench for tera-flop supercomputing

    International Nuclear Information System (INIS)

    Resch, M.M.; Kuester, U.; Mueller, M.S.; Lang, U.

    2003-01-01

    Supercomputers currently reach peak performance in the range of TFlop/s. With but one exception - the Japanese Earth Simulator - none of these systems has so far been able to show sustained performance, across a variety of applications, that comes close to its peak performance. Sustained TFlop/s are therefore rarely seen. The reasons are manifold and well known: bandwidth and latency, both for main memory and for the internal network, are the key technical problems. Cache hierarchies with large caches can bring relief but are no remedy. Technical problems, however, are not the only obstacles to scientists fully exploiting the potential of modern supercomputers; organizational issues increasingly come to the forefront. This paper shows the approach of the High Performance Computing Center Stuttgart (HLRS) to delivering sustained TFlop/s performance for a wide range of applications from a large group of users spread over Germany. The core of the concept is the role of the data. Around this we design a simulation workbench that hides the complexity of interacting computers, networks and file systems from the user. (authors)

  14. A visual analytics system for optimizing the performance of large-scale networks in supercomputing systems

    Directory of Open Access Journals (Sweden)

    Takanori Fujiwara

    2018-03-01

    Full Text Available The overall efficiency of an extreme-scale supercomputer largely relies on the performance of its network interconnects. Several state-of-the-art supercomputers use networks based on the increasingly popular Dragonfly topology. It is crucial to study the behavior and performance of different parallel applications running on Dragonfly networks in order to make optimal system configurations and design choices, such as job scheduling and routing strategies. However, studying this temporal network behavior requires a tool to analyze and correlate the numerous sets of multivariate time-series data collected from the Dragonfly’s multi-level hierarchies. This paper presents such a tool, a visual analytics system, for investigating the temporal behavior of a Dragonfly network and optimizing the communication performance of a supercomputer. We couple interactive visualization with time-series analysis methods to help reveal hidden patterns in the network behavior with respect to different parallel applications and system configurations. Our system also provides multiple coordinated views for connecting behaviors observed at different levels of the network hierarchy, which effectively supports visual analysis tasks. We demonstrate the effectiveness of the system with a set of case studies. Our system and findings can help improve not only the communication performance of supercomputing applications, but also the network performance of next-generation supercomputers. Keywords: Supercomputing, Parallel communication network, Dragonfly networks, Time-series data, Performance analysis, Visual analytics

  15. Digital image processing software system using an array processor

    International Nuclear Information System (INIS)

    Sherwood, R.J.; Portnoff, M.R.; Journeay, C.H.; Twogood, R.E.

    1981-01-01

    A versatile array processor-based system for general-purpose image processing was developed. At the heart of this system is an extensive, flexible software package that incorporates the array processor for effective interactive image processing. The software system is described in detail, and its use in a diverse set of applications at LLNL is briefly discussed. 4 figures, 1 table

  16. KfK seminar series on supercomputing and visualization from May to September 1992

    International Nuclear Information System (INIS)

    Hohenhinnebusch, W.

    1993-05-01

    During the period from May 1992 to September 1992, a series of seminars on several topics of supercomputing in different fields of application was held at KfK. The aim was to demonstrate the importance of supercomputing and visualization in numerical simulations of complex physical and technical phenomena. This report contains the collection of all submitted seminar papers. (orig./HP) [de

  17. Studies of implosion processes of nested tungsten wire-array Z-pinch

    International Nuclear Information System (INIS)

    Ning Cheng; Ding Ning; Liu Quan; Yang Zhenhua

    2006-01-01

    The nested wire-array is a promising structured load because it can improve the quality of the Z-pinch plasma and enhance the radiation power of the X-ray source. Based on a zero-dimensional model, the assumption of wire-array collision, and the criterion of an optimized load (maximal load kinetic energy), an optimization of the typical nested wire-array used as a load on the Z machine at Sandia National Laboratories was carried out. It was shown that this load is essentially optimized already. The Z-pinch process of the typical load was then studied numerically by means of a one-dimensional, three-temperature radiation magneto-hydrodynamics (RMHD) code. The results reproduce the dynamics of the Z-pinch and show the implosion trajectory of the nested wire-array and the transfer of drive current between the inner and outer arrays. The experimental and computed X-ray pulses were compared, and the comparison suggests that the wire-array collision assumption is reasonable for nested wire-array Z-pinches, at least at the current level of the Z machine. (authors)
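
    The kind of zero-dimensional model referred to above can be illustrated with a thin-shell ("slug") sketch driven by the magnetic pressure of the drive current, m dv/dt = -mu0 I(t)^2 / (4 pi r). All numbers below are invented, Z-like values; this is not the authors' optimization procedure or RMHD code.

```python
import math

# Zero-dimensional "slug" model of a wire-array implosion: a thin shell
# of line mass m (kg/m) is accelerated inward by the J x B force.
MU0 = 4.0e-7 * math.pi
m = 2.0e-4         # hypothetical line mass of the array, kg/m
r0 = 0.02          # initial array radius, m
i_peak = 20.0e6    # hypothetical peak drive current, A
t_rise = 100.0e-9  # current rise time, s

def current(t):
    """sin^2 current ramp, a common analytic stand-in for the drive pulse."""
    return i_peak * math.sin(0.5 * math.pi * t / t_rise) ** 2

# Forward-Euler integration of m dv/dt = -mu0 I(t)^2 / (4 pi r).
r, v, t, dt = r0, 0.0, 0.0, 1.0e-11
while r > 0.1 * r0:                 # run to 10:1 radial convergence
    v += -MU0 * current(t) ** 2 / (4.0 * math.pi * m * r) * dt
    r += v * dt
    t += dt

kinetic_energy = 0.5 * m * v ** 2   # J per metre of array height
```

    Integrating to 10:1 convergence with these values gives an implosion time of order 100 ns and a kinetic energy per unit length of the kind compared in the maximal-kinetic-energy optimization criterion.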

  18. Array signal processing in the NASA Deep Space Network

    Science.gov (United States)

    Pham, Timothy T.; Jongeling, Andre P.

    2004-01-01

    In this paper, we describe the benefits of arraying and its past as well as expected future use. The signal processing aspects of the array system are described. Field measurements from the tracking of actual spacecraft are also presented.

  19. APD arrays and large-area APDs via a new planar process

    CERN Document Server

    Farrell, R; Vanderpuye, K; Grazioso, R; Myers, R; Entine, G

    2000-01-01

    A fabrication process has been developed which allows the beveled-edge type of avalanche photodiode (APD) to be made without the need for the artful bevel formation steps. This new process, applicable both to APD arrays and to discrete detectors, greatly simplifies manufacture and should lead to significant cost reduction for such photodetectors. This is achieved through a simple innovation that allows isolation around the device or array pixel to be brought into the plane of the surface of the silicon wafer, hence a planar process. A description of the new process is presented along with performance data for a variety of APD device and array configurations. APD array pixel gains in excess of 10 000 have been measured. Array pixel coincidence timing resolution of less than 5 ns has been demonstrated. An energy resolution of 6% for 662 keV gamma-rays using a CsI(Tl) scintillator on a planar-processed large-area APD has been recorded. Discrete APDs with active areas up to 13 cm^2 have been operated.

  20. Development of a Cloud Resolving Model for Heterogeneous Supercomputers

    Science.gov (United States)

    Sreepathi, S.; Norman, M. R.; Pal, A.; Hannah, W.; Ponder, C.

    2017-12-01

    A cloud resolving climate model is needed to reduce major systematic errors in climate simulations due to structural uncertainty in numerical treatments of convection - such as convective storm systems. This research describes the porting effort to enable the SAM (System for Atmosphere Modeling) cloud resolving model on heterogeneous supercomputers using GPUs (Graphics Processing Units). We have isolated a standalone configuration of SAM that is targeted to be integrated into the DOE ACME (Accelerated Climate Modeling for Energy) Earth System model. We have identified key computational kernels from the model and offloaded them to a GPU using the OpenACC programming model. Furthermore, we are investigating various optimization strategies intended to enhance GPU utilization, including loop fusion/fission, coalesced data access and loop refactoring to a higher abstraction level. We will present early performance results, lessons learned, and optimization strategies. The computational platform used in this study is the Summitdev system, an early testbed that is one generation removed from Summit, the next leadership-class supercomputer at Oak Ridge National Laboratory. The system contains 54 nodes, each with 2 IBM POWER8 CPUs and 4 NVIDIA Tesla P100 GPUs. This work is part of a larger project, the ACME-MMF component of the U.S. Department of Energy (DOE) Exascale Computing Project. The ACME-MMF approach addresses structural uncertainty in cloud processes by replacing traditional parameterizations with cloud resolving "superparameterization" within each grid cell of the global climate model. Superparameterization dramatically increases arithmetic intensity, making the MMF approach an ideal strategy to achieve good performance on emerging exascale computing architectures. The goal of the project is to integrate superparameterization into ACME and explore its full potential to scientifically and computationally advance climate simulation and prediction.

  1. Application of Supercomputer Technologies for Simulation Of Socio-Economic Systems

    Directory of Open Access Journals (Sweden)

    Vladimir Valentinovich Okrepilov

    2015-06-01

    Full Text Available To date, extensive experience has been accumulated in the investigation of problems related to quality, the assessment of management systems, and the modeling of economic system sustainability. These studies have created the basis for the development of a new research area - the Economics of Quality. Its tools make it possible to use model simulation to construct mathematical models that adequately reflect the role of quality in the natural, technical, and social regularities governing the functioning of complex socio-economic systems. We firmly believe that the extensive application and development of such models, together with system modeling using supercomputer technologies, will bring research on socio-economic systems to an essentially new level. Moreover, the current research makes a significant contribution to the model simulation of multi-agent social systems and, no less important, belongs to the priority areas in the development of science and technology in our country. This article is devoted to the application of supercomputer technologies in the social sciences, first of all to the technical realization of large-scale agent-based models (ABM). The essence of this tool is that, owing to the growth of computer power, it has become possible to describe the behavior of many separate fragments of a complex system, as socio-economic systems are. The article also discusses the experience of foreign scientists and practitioners in running ABM on supercomputers, presents the example of an ABM developed at CEMI RAS, and analyzes the stages and methods of efficiently mapping the computational kernel of a multi-agent system onto the architecture of a modern supercomputer. Experiments based on model simulation to forecast the population of St. Petersburg under three scenarios, as one of the major factors influencing the development of the socio-economic system and the quality of life of the population, are presented in the

  2. Application of optical processing to adaptive phased array radar

    Science.gov (United States)

    Carroll, C. W.; Vijaya Kumar, B. V. K.

    1988-01-01

    The results of an investigation of the applicability of optical processing to Adaptive Phased Array Radar (APAR) data processing are summarized. Subjects covered include: (1) a new iterative Fourier-transform-based technique to determine the array antenna weight vector such that the resulting antenna pattern has nulls at desired locations; (2) obtaining the optimal Wiener weight vector by both iterative and direct methods on two laboratory Optical Linear Algebra Processing (OLAP) systems; and (3) an investigation of the effects of errors present in OLAP systems on the solution vectors.
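
    Technique (1) can be sketched as an alternating-projection loop: for a uniform linear array the far-field pattern is the DFT of the element weights, so one can alternate between forcing nulls in the pattern domain and restoring the N-element support in the weight domain. The element count, FFT size and null bins below are invented for illustration; this is a generic sketch, not the paper's exact algorithm.

```python
import numpy as np

n_el, n_fft = 16, 256                # 16 elements, pattern on 256 samples
null_bins = [40, 80, 120]            # pattern samples forced to zero

w = np.ones(n_el, dtype=complex)     # start from a uniform taper
for _ in range(200):
    pattern = np.fft.fft(w, n_fft)   # far-field pattern of current weights
    pattern[null_bins] = 0.0         # projection 1: impose the nulls
    w = np.fft.ifft(pattern)[:n_el]  # projection 2: keep only N elements

final = np.abs(np.fft.fft(w, n_fft))
```

    After a few hundred iterations the pattern keeps its main lobe while the samples at the chosen bins are suppressed by orders of magnitude.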

  3. Centralized supercomputer support for magnetic fusion energy research

    International Nuclear Information System (INIS)

    Fuss, D.; Tull, G.G.

    1984-01-01

    High-speed computers with large memories are vital to magnetic fusion energy research. Magnetohydrodynamic (MHD), transport, equilibrium, Vlasov, particle, and Fokker-Planck codes that model plasma behavior play an important role in designing experimental hardware and interpreting the resulting data, as well as in advancing plasma theory itself. The size, architecture, and software of supercomputers to run these codes are often the crucial constraints on the benefits such computational modeling can provide. Hence, vector computers such as the CRAY-1 offer a valuable research resource. To meet the computational needs of the fusion program, the National Magnetic Fusion Energy Computer Center (NMFECC) was established in 1974 at the Lawrence Livermore National Laboratory. Supercomputers at the central computing facility are linked to smaller computer centers at each of the major fusion laboratories by a satellite communication network. In addition to providing large-scale computing, the NMFECC environment stimulates collaboration and the sharing of computer codes and data among the many fusion researchers in a cost-effective manner.

  4. Extending ATLAS Computing to Commercial Clouds and Supercomputers

    CERN Document Server

    Nilsson, P; The ATLAS collaboration; Filipcic, A; Klimentov, A; Maeno, T; Oleynik, D; Panitkin, S; Wenaus, T; Wu, W

    2014-01-01

    The Large Hadron Collider will resume data collection in 2015 with substantially increased computing requirements relative to its first run in 2009-2013. A near doubling of the energy and the data rate, a high level of event pile-up, and detector upgrades mean that the number and complexity of events to be analyzed will increase dramatically. A naive extrapolation of the Run 1 experience suggests that a 5-6 fold increase in computing resources is needed - impossible within the anticipated flat computing budgets of the near future. Consequently, ATLAS is engaged in an ambitious program to expand its computing to all available resources, notably including the opportunistic use of commercial clouds and supercomputers. Such resources present new challenges in managing heterogeneity, supporting data flows, parallelizing workflows, provisioning software, and other aspects of distributed computing, all while minimizing operational load. We present the ATLAS experience to date with clouds and supercomputers, and des...

  5. High temporal resolution mapping of seismic noise sources using heterogeneous supercomputers

    Science.gov (United States)

    Gokhberg, Alexey; Ermert, Laura; Paitz, Patrick; Fichtner, Andreas

    2017-04-01

    Time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems. Significant interest in seismic noise source maps with high temporal resolution (days) is expected from a number of domains, including natural resources exploration, analysis of active earthquake fault zones and volcanoes, and geothermal and hydrocarbon reservoir monitoring. Currently, knowledge of noise sources is insufficient for high-resolution subsurface monitoring applications. Near-real-time seismic data, as well as advanced imaging methods to constrain seismic noise sources, have recently become available. These methods are based on the massive cross-correlation of seismic noise records from all available seismic stations in the region of interest and are therefore very computationally intensive. Heterogeneous massively parallel supercomputing systems introduced in recent years combine conventional multi-core CPUs with GPU accelerators and offer the opportunity for a manifold increase in computing performance. These systems therefore represent an efficient platform for implementing a noise source mapping solution. We present the first results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service that provides seismic noise source maps for Central Europe with high temporal resolution (days to a few weeks, depending on frequency and data availability). The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept in order to provide interested external researchers regular access to the noise source maps.
The solution architecture includes the following sub-systems: (1) data acquisition responsible for
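
    The cross-correlation kernel at the heart of this processing can be sketched for a single station pair (synthetic data and an invented delay; the service applies this to every available pair of stations, which is what makes it so computationally intensive):

```python
import numpy as np

rng = np.random.default_rng(1)
n, lag_true = 50_000, 40     # record length in samples; true delay (made up)

# A shared noise wavefield recorded at two stations as time-shifted copies,
# each contaminated by independent local noise.
source = rng.standard_normal(n + lag_true)
sta_a = source[:n] + 0.5 * rng.standard_normal(n)
sta_b = source[lag_true:] + 0.5 * rng.standard_normal(n)

# Frequency-domain cross-correlation of the two records.
xcorr = np.fft.irfft(np.fft.rfft(sta_a) * np.conj(np.fft.rfft(sta_b)))
lag_est = int(np.argmax(xcorr))
```

    The peak of the correlation recovers the inter-station travel time, here the 40-sample delay built into the synthetic data.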

  6. A Versatile Multichannel Digital Signal Processing Module for Microcalorimeter Arrays

    Science.gov (United States)

    Tan, H.; Collins, J. W.; Walby, M.; Hennig, W.; Warburton, W. K.; Grudberg, P.

    2012-06-01

    Different techniques have been developed for reading out microcalorimeter sensor arrays: individual outputs for small arrays, and time-division, frequency-division or code-division multiplexing for large arrays. Typically, raw waveform data are first read out from the arrays using one of these techniques and then stored on computer hard drives for offline optimum filtering, leading not only to requirements for large storage space but also to limitations on the achievable count rate. A read-out module capable of processing microcalorimeter signals in real time is therefore highly desirable. We have developed multichannel digital signal processing electronics capable of on-board, real-time processing of microcalorimeter sensor signals from multiplexed or individual pixel arrays. It is a 3U PXI module consisting of a standardized core processor board and a set of daughter boards. Each daughter board is designed to interface a specific type of microcalorimeter array to the core processor. The combination of the standardized core with this set of easily designed and modified daughter boards results in a versatile data acquisition module that not only can be easily extended to future detector systems but is also low cost. In this paper, we first present the core processor/daughter board architecture, and then report the performance of an 8-channel daughter board, which digitizes individual pixel outputs at 1 MSPS with 16-bit precision. We also introduce a time-division multiplexing daughter board, which takes in time-division multiplexed signals through fiber-optic cables and then processes the digital signals to generate energy spectra in real time.
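
    The offline "optimum filtering" step that such a module moves on-board can be sketched for the simplest case of stationary white noise, where the optimum filter reduces to a normalized matched filter. The pulse shape, noise level and energy below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(1024)

# Typical two-exponential microcalorimeter pulse shape (made-up time
# constants), normalized so the filter returns the pulse amplitude.
template = np.exp(-t / 200.0) - np.exp(-t / 20.0)
template /= np.dot(template, template)   # unit response to a unit pulse

true_energy = 5.0
pulse = true_energy * (np.exp(-t / 200.0) - np.exp(-t / 20.0))
record = pulse + 0.05 * rng.standard_normal(t.size)

# Optimum (matched) filter for white noise: a single dot product per record.
estimate = np.dot(template, record)
```

    Performing this dot product per pulse on-board, instead of storing every raw waveform, is what removes the storage and count-rate bottlenecks the abstract describes.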

  7. Generic nano-imprint process for fabrication of nanowire arrays

    Energy Technology Data Exchange (ETDEWEB)

    Pierret, Aurelie; Hocevar, Moira; Algra, Rienk E; Timmering, Eugene C; Verschuuren, Marc A; Immink, George W G; Verheijen, Marcel A; Bakkers, Erik P A M [Philips Research Laboratories Eindhoven, High Tech Campus 11, 5656 AE Eindhoven (Netherlands); Diedenhofen, Silke L [FOM Institute for Atomic and Molecular Physics c/o Philips Research Laboratories, High Tech Campus 4, 5656 AE Eindhoven (Netherlands); Vlieg, E, E-mail: e.p.a.m.bakkers@tue.nl [IMM, Solid State Chemistry, Radboud University Nijmegen, Heyendaalseweg 135, 6525 AJ Nijmegen (Netherlands)

    2010-02-10

    A generic process has been developed to grow nearly defect-free arrays of (heterostructured) InP and GaP nanowires. Soft nano-imprint lithography has been used to pattern gold particle arrays on full 2 inch substrates. After lift-off, organic residues remain on the surface, which induce the growth of additional undesired nanowires. We show that cleaning the samples before growth with piranha solution, in combination with a thermal anneal at 550 deg. C for InP and 700 deg. C for GaP, results in uniform nanowire arrays with 1% variation in nanowire length and without undesired extra nanowires. Our chemical cleaning procedure is applicable to other lithographic techniques such as e-beam lithography, and therefore represents a generic process.

  8. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2013-01-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.
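
    The general shape of such a model can be sketched as compute time, plus a memory term in which the cores of a node contend for the node's measured STREAM bandwidth, plus a latency/bandwidth communication term. Every constant below is a hypothetical placeholder, not a value measured in the paper.

```python
# Hedged sketch of a bandwidth-contention + communication performance model.
def predicted_time(cores_per_node,
                   flops=1e9, flop_rate=2e9,          # work per core
                   bytes_moved=4e8, stream_bw=1e10,   # memory traffic / STREAM
                   msgs=100, msg_bytes=1e5,           # messages per process
                   latency=2e-6, net_bw=1e9):
    compute = flops / flop_rate
    # Contention: cores on a node share the measured STREAM bandwidth.
    memory = bytes_moved / (stream_bw / cores_per_node)
    # Simple latency + size/bandwidth message cost.
    comm = msgs * (latency + msg_bytes / net_bw)
    return compute + memory + comm

t_serial = predicted_time(cores_per_node=1)
t_packed = predicted_time(cores_per_node=16)
```

    Packing more cores per node inflates the contention term, which is the effect the STREAM measurements are used to capture in such models.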

  9. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2013-12-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.

  10. Guide to dataflow supercomputing basic concepts, case studies, and a detailed example

    CERN Document Server

    Milutinovic, Veljko; Trifunovic, Nemanja; Giorgi, Roberto

    2015-01-01

    This unique text/reference describes an exciting and novel approach to supercomputing in the DataFlow paradigm. The major advantages and applications of this approach are clearly described, and a detailed explanation of the programming model is provided using simple yet effective examples. The work is developed from a series of lecture courses taught by the authors in more than 40 universities across more than 20 countries, and from research carried out by Maxeler Technologies, Inc. Topics and features: presents a thorough introduction to DataFlow supercomputing for big data problems; revie

  11. Generic nano-imprint process for fabrication of nanowire arrays

    NARCIS (Netherlands)

    Pierret, A.; Hocevar, M.; Diedenhofen, S.L.; Algra, R.E.; Vlieg, E.; Timmering, E.C.; Verschuuren, M.A.; Immink, W.G.G.; Verheijen, M.A.; Bakkers, E.P.A.M.

    2010-01-01

    A generic process has been developed to grow nearly defect-free arrays of (heterostructured) InP and GaP nanowires. Soft nano-imprint lithography has been used to pattern gold particle arrays on full 2 inch substrates. After lift-off, organic residues remain on the surface, which induce the growth of

  12. The Earthscope USArray Array Network Facility (ANF): Evolution of Data Acquisition, Processing, and Storage Systems

    Science.gov (United States)

    Davis, G. A.; Battistuz, B.; Foley, S.; Vernon, F. L.; Eakins, J. A.

    2009-12-01

    Since April 2004, the Earthscope USArray Transportable Array (TA) network has grown to over 400 broadband seismic stations that stream multi-channel data in near real-time to the Array Network Facility in San Diego. In total, over 1.7 terabytes per year of 24-bit, 40-samples-per-second seismic and state-of-health data are recorded from the stations. The ANF provides analysts access to real-time and archived data, as well as state-of-health data, metadata, and interactive tools for station engineers and the public via a website. Additional processing and recovery of missing data from on-site recorders (balers) at the stations is performed before the final data are transmitted to the IRIS Data Management Center (DMC). Assembly of the final data set requires additional storage and processing capability to combine the real-time data with baler data. The infrastructure supporting these diverse computational and storage needs currently consists of twelve virtualized Sun Solaris Zones executing on nine physical server systems. The servers are protected against failure by redundant power, storage, and networking connections. Storage is provided by a hybrid iSCSI and Fibre Channel Storage Area Network (SAN) with access to over 40 terabytes of RAID 5 and 6 storage. Processing tasks are assigned to systems based on parallelization and floating-point calculation needs. On-site buffering at the data-loggers provides protection against short-term network or hardware problems, while backup acquisition systems at the San Diego Supercomputer Center and the DMC protect against catastrophic failure of the primary site. Configuration management and monitoring of these systems are accomplished with open-source (Cfengine, Nagios, Solaris Community Software) and commercial tools (Intermapper). In the evolution from a single server to multiple virtualized server instances, Sun Cluster software was evaluated and found to be unstable in our environment. Shared filesystem

  13. Cellular-automata supercomputers for fluid-dynamics modeling

    International Nuclear Information System (INIS)

    Margolus, N.; Toffoli, T.; Vichniac, G.

    1986-01-01

    We report recent developments in the modeling of fluid dynamics, and give experimental results (including dynamical exponents) obtained using cellular automata machines. Because of their locality and uniformity, cellular automata lend themselves to an extremely efficient physical realization; with a suitable architecture, an amount of hardware resources comparable to that of a home computer can achieve (in the simulation of cellular automata) the performance of a conventional supercomputer
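
    The kind of cellular automaton meant here can be illustrated with a minimal HPP-style lattice gas (an illustrative toy, not the authors' machines): four boolean channels per cell carry particles moving north, east, south and west, and each update touches only a cell and its neighbours, which is exactly the locality and uniformity that make hardware realization so efficient.

```python
import numpy as np

rng = np.random.default_rng(2)
# Four boolean channels (N, E, S, W) per cell of a 32 x 32 periodic lattice.
gas = rng.random((4, 32, 32)) < 0.3

def step(gas):
    n, e, s, w = gas
    # Collision: exactly head-on pairs scatter through 90 degrees,
    # conserving particle number and momentum.
    ns = n & s & ~e & ~w
    ew = e & w & ~n & ~s
    n, s = (n & ~ns) | ew, (s & ~ns) | ew
    e, w = (e & ~ew) | ns, (w & ~ew) | ns
    # Streaming: each channel advects one cell in its direction.
    return np.stack([np.roll(n, -1, axis=0), np.roll(e, 1, axis=1),
                     np.roll(s, 1, axis=0), np.roll(w, -1, axis=1)])

before = int(gas.sum())
for _ in range(100):
    gas = step(gas)
after = int(gas.sum())
```

    Both the collision and streaming steps conserve particle number, so the total count is unchanged after any number of updates.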

  14. Unique Methodologies for Nano/Micro Manufacturing Job Training Via Desktop Supercomputer Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kimball, Clyde [Northern Illinois Univ., DeKalb, IL (United States); Karonis, Nicholas [Northern Illinois Univ., DeKalb, IL (United States); Lurio, Laurence [Northern Illinois Univ., DeKalb, IL (United States); Piot, Philippe [Northern Illinois Univ., DeKalb, IL (United States); Xiao, Zhili [Northern Illinois Univ., DeKalb, IL (United States); Glatz, Andreas [Northern Illinois Univ., DeKalb, IL (United States); Pohlman, Nicholas [Northern Illinois Univ., DeKalb, IL (United States); Hou, Minmei [Northern Illinois Univ., DeKalb, IL (United States); Demir, Veysel [Northern Illinois Univ., DeKalb, IL (United States); Song, Jie [Northern Illinois Univ., DeKalb, IL (United States); Duffin, Kirk [Northern Illinois Univ., DeKalb, IL (United States); Johns, Mitrick [Northern Illinois Univ., DeKalb, IL (United States); Sims, Thomas [Northern Illinois Univ., DeKalb, IL (United States); Yin, Yanbin [Northern Illinois Univ., DeKalb, IL (United States)

    2012-11-21

    This project establishes an initiative in high-speed (Teraflop), large-memory desktop supercomputing for modeling and simulation of dynamic processes important for energy and industrial applications. It provides a training ground that gives current students employable skills in an emerging field, including the skills necessary to access the large supercomputing systems now present at DOE laboratories. It also provides a foundation for NIU faculty to leap beyond their current small cluster facilities. The funding extends faculty and student capability to a new level of analytic skill, with concomitant publication avenues. The components of the Hewlett-Packard computer obtained with the DOE funds create a hybrid combination of a graphics processing system (12 GPU/Teraflops) and a Beowulf CPU system (144 CPUs), the first expandable via the NIU GAEA system to ~60 Teraflops integrated with a 720-CPU Beowulf system. The software is based on access to the NVIDIA/CUDA library and the ability, through multiple MATLAB licenses, to create additional local programs. A number of existing programs are being transferred to the CPU Beowulf cluster. Since the expertise necessary to create the parallel processing applications has only recently been obtained at NIU, this software development effort is at an early stage. The educational program has been initiated via formal tutorials and classroom curricula designed for the coming year. Specifically, the cost focus was on hardware acquisitions and the appointment of graduate students for a wide range of applications in engineering, physics and computer science.

  15. Sampling phased array: a new technique for signal processing and ultrasonic imaging

    OpenAIRE

    Bulavinov, A.; Joneit, D.; Kröning, M.; Bernus, L.; Dalichow, M.H.; Reddy, K.M.

    2006-01-01

    Different signal processing and image reconstruction techniques are applied in ultrasonic non-destructive material evaluation. In recent years, rapid development in the fields of microelectronics and computer engineering has led to the wide application of phased array systems. A new phased array technique, called the "Sampling Phased Array", has been developed at the Fraunhofer Institute for Non-Destructive Testing. It realizes a unique approach to the measurement and processing of ultrasonic signals. The sampling...
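
    The measure-then-process idea behind such a technique can be illustrated with a delay-and-sum reconstruction from a full matrix of transmit-receive A-scans (a generic total-focusing-style sketch with invented geometry and pulse shape, not the Fraunhofer implementation):

```python
import numpy as np

c = 6300.0                        # longitudinal wave speed in steel, m/s
fs = 50e6                         # sample rate, Hz
elems = np.arange(8) * 1e-3       # 8 elements, 1 mm pitch, on the surface
target = np.array([4e-3, 10e-3])  # hypothetical reflector: x = 4 mm, z = 10 mm

def tof(tx, rx, point):
    """Two-way time of flight: transmitter -> point -> receiver."""
    d1 = np.hypot(point[0] - elems[tx], point[1])
    d2 = np.hypot(point[0] - elems[rx], point[1])
    return (d1 + d2) / c

# Synthetic full matrix: a narrow Gaussian echo at each pair's time of flight.
t = np.arange(2048) / fs
fmc = np.empty((8, 8, t.size))
for i in range(8):
    for j in range(8):
        fmc[i, j] = np.exp(-((t - tof(i, j, target)) * 5e6) ** 2)

def focus(point):
    """Delay-and-sum: sample every A-scan at that pair's time of flight."""
    return sum(np.interp(tof(i, j, point), t, fmc[i, j])
               for i in range(8) for j in range(8))

on_target = focus(target)
off_target = focus(np.array([1e-3, 5e-3]))
```

    Summing every pair's A-scan at the computed time of flight focuses the image at the chosen point: the sum is large at the reflector and near zero elsewhere.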

  16. Array processing for seismic surface waves

    Energy Technology Data Exchange (ETDEWEB)

    Marano, S.

    2013-07-01

    This dissertation submitted to the Swiss Federal Institute of Technology ETH in Zurich takes a look at the analysis of surface wave properties which allows geophysicists to gain insight into the structure of the subsoil, thus avoiding more expensive invasive techniques such as borehole drilling. This thesis aims at improving signal processing techniques for the analysis of surface waves in various directions. One main contribution of this work is the development of a method for the analysis of seismic surface waves. The method also deals with the simultaneous presence of multiple waves. Several computational approaches to minimize costs are presented and compared. Finally, numerical experiments that verify the effectiveness of the proposed cost function and resulting array geometry designs are presented. These lead to greatly improved estimation performance in comparison to arbitrary array geometries.
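
    The frequency-wavenumber style of beamforming that underlies such surface-wave array analysis can be sketched in a few lines: steer the array over a grid of horizontal slowness vectors and pick the slowness that maximizes beam power. The geometry, frequency, and slowness values below are illustrative, not taken from the thesis:

    ```python
    import numpy as np

    # Hypothetical 2-D array geometry (metres); names/values are illustrative only.
    coords = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0], [5.0, 5.0]])

    def fk_beam_power(spectra, coords, freq, slowness_grid):
        """Classical frequency-domain beamforming: steer the array over a grid of
        horizontal slowness vectors and return the beam power at one frequency.

        spectra : complex Fourier coefficients of each sensor at `freq`, shape (n_sensors,)
        slowness_grid : candidate slowness vectors (s/m), shape (n_s, 2)
        """
        # Phase delays exp(-i 2*pi*f * s . r) for each sensor/slowness pair
        phase = np.exp(-2j * np.pi * freq * slowness_grid @ coords.T)  # (n_s, n_sensors)
        beam = phase @ spectra / len(spectra)
        return np.abs(beam) ** 2

    # Synthetic plane wave with slowness (sx, sy) = (0.002, 0.0) s/m at 5 Hz
    true_s = np.array([0.002, 0.0])
    freq = 5.0
    spectra = np.exp(2j * np.pi * freq * coords @ true_s)

    sx = np.linspace(-0.004, 0.004, 81)
    grid = np.array([(a, b) for a in sx for b in sx])
    power = fk_beam_power(spectra, coords, freq, grid)
    best = grid[np.argmax(power)]
    print(best)  # recovers a slowness close to (0.002, 0.0)
    ```

    The thesis goes well beyond this (multiple simultaneous waves, optimal array geometries), but the steered beam power above is the basic quantity such methods refine.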

  17. Array processing for seismic surface waves

    International Nuclear Information System (INIS)

    Marano, S.

    2013-01-01

    This dissertation submitted to the Swiss Federal Institute of Technology ETH in Zurich takes a look at the analysis of surface wave properties which allows geophysicists to gain insight into the structure of the subsoil, thus avoiding more expensive invasive techniques such as borehole drilling. This thesis aims at improving signal processing techniques for the analysis of surface waves in various directions. One main contribution of this work is the development of a method for the analysis of seismic surface waves. The method also deals with the simultaneous presence of multiple waves. Several computational approaches to minimize costs are presented and compared. Finally, numerical experiments that verify the effectiveness of the proposed cost function and resulting array geometry designs are presented. These lead to greatly improved estimation performance in comparison to arbitrary array geometries

  18. A FPGA-based signal processing unit for a GEM array detector

    International Nuclear Information System (INIS)

    Yen, W.W.; Chou, H.P.

    2013-06-01

    In the present study, a signal processing unit for a GEM one-dimensional array detector is presented to measure the trajectory of photoelectrons produced by cosmic X-rays. The present GEM array detector system has 16 signal channels. The front-end unit provides timing signals from trigger units and energy signals from charge-sensitive amplifiers. The prototype of the processing unit is implemented using commercial field programmable gate array circuit boards. The FPGA-based system is linked to a personal computer for testing and data analysis. Tests using simulated signals indicated that the FPGA-based signal processing unit has good linearity and is flexible for parameter adjustment under various experimental conditions (authors)

  19. The TeraGyroid Experiment – Supercomputing 2003

    Directory of Open Access Journals (Sweden)

    R.J. Blake

    2005-01-01

    Full Text Available Amphiphiles are molecules with hydrophobic tails and hydrophilic heads. When dispersed in solvents, they self-assemble into complex mesophases including the beautiful cubic gyroid phase. The goal of the TeraGyroid experiment was to study defect pathways and dynamics in these gyroids. The UK's supercomputing and USA's TeraGrid facilities were coupled together, through a dedicated high-speed network, into a single computational Grid for research work that peaked around the Supercomputing 2003 conference. The gyroids were modeled using lattice Boltzmann methods with parameter spaces explored using many 128³-grid-point simulations, this data being used to inform the world's largest three-dimensional time-dependent simulation, with 1024³ grid points. The experiment generated some 2 TBytes of useful data. In terms of Grid technology the project demonstrated the migration of simulations (using Globus middleware) to and fro across the Atlantic, exploiting the availability of resources. Integration of the systems accelerated the time to insight. Distributed visualisation of the output datasets enabled the parameter space of the interactions within the complex fluid to be explored from a number of sites, informed by discourse over the Access Grid. The project was sponsored by EPSRC (UK) and NSF (USA), with trans-Atlantic optical bandwidth provided by British Telecommunications.

  20. Analyzing the Interplay of Failures and Workload on a Leadership-Class Supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Meneses, Esteban [University of Pittsburgh; Ni, Xiang [University of Illinois at Urbana-Champaign; Jones, Terry R [ORNL; Maxwell, Don E [ORNL

    2015-01-01

    The unprecedented computational power of current supercomputers now makes possible the exploration of complex problems in many scientific fields, from genomic analysis to computational fluid dynamics. Modern machines are powerful because they are massive: they assemble millions of cores and a huge quantity of disks, cards, routers, and other components. But it is precisely the size of these machines that clouds the future of supercomputing. A system that comprises many components has a high chance to fail, and to fail often. In order to make the next generation of supercomputers usable, it is imperative to use some type of fault tolerance platform to run applications on large machines. Most fault tolerance strategies can be optimized for the peculiarities of each system and boost efficacy by keeping the system productive. In this paper, we aim to understand how failure characterization can improve resilience in several layers of the software stack: applications, runtime systems, and job schedulers. We examine the Titan supercomputer, one of the fastest systems in the world. We analyze a full year of Titan in production and distill the failure patterns of the machine. By looking into Titan's log files and using the criteria of experts, we provide a detailed description of the types of failures. In addition, we inspect the job submission files and describe how the system is used. Using those two sources, we cross-correlate failures in the machine to executing jobs and provide a picture of how failures affect the user experience. We believe such characterization is fundamental in developing appropriate fault tolerance solutions for Cray systems similar to Titan.

  1. NeuroSeek dual-color image processing infrared focal plane array

    Science.gov (United States)

    McCarley, Paul L.; Massie, Mark A.; Baxter, Christopher R.; Huynh, Buu L.

    1998-09-01

    Several technologies have been developed in recent years to advance the state of the art of IR sensor systems, including affordable dual-color focal planes, on-focal-plane-array biologically inspired image and signal processing techniques, and spectral sensing techniques. Pacific Advanced Technology (PAT) and the Air Force Research Lab Munitions Directorate have developed a system which incorporates the best of these capabilities into a single device. The 'NeuroSeek' device integrates these technologies into an IR focal plane array (FPA) which combines multicolor midwave-IR/longwave-IR radiometric response with on-focal-plane 'smart' neuromorphic analog image processing. The readout and processing very-large-scale-integration (VLSI) chip developed under this effort will be hybridized to a dual-color detector array to produce the NeuroSeek FPA, which will have the capability to fuse multiple pixel-based sensor inputs directly on the focal plane. Great advantages are afforded by application of massively parallel processing algorithms to image data in the analog domain; the high speed and low power consumption of this device mimic operations performed in the human retina.

  2. Highly scalable parallel processing of extracellular recordings of Multielectrode Arrays.

    Science.gov (United States)

    Gehring, Tiago V; Vasilaki, Eleni; Giugliano, Michele

    2015-01-01

    Technological advances of Multielectrode Arrays (MEAs) used for multisite, parallel electrophysiological recordings lead to an ever increasing amount of raw data being generated. Arrays with hundreds up to a few thousand electrodes are slowly seeing widespread use, and the expectation is that more sophisticated arrays will become available in the near future. In order to process the large data volumes resulting from MEA recordings there is a pressing need for new software tools able to process many data channels in parallel. Here we present a new tool for processing MEA data recordings that makes use of new programming paradigms and recent technology developments to unleash the power of modern highly parallel hardware, such as multi-core CPUs with vector instruction sets or GPGPUs. Our tool builds on and complements existing MEA data analysis packages. It shows high scalability and can be used to speed up some performance-critical pre-processing steps such as data filtering and spike detection, helping to make the analysis of larger data sets tractable.
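
    Spike detection of the kind named above is essentially a vectorisable threshold test, which is why it parallelises so well across channels. A minimal single-channel sketch (not the package's actual implementation; the threshold rule and synthetic data are assumptions):

    ```python
    import numpy as np

    def detect_spikes(trace, fs, thresh_sd=5.0, refractory_ms=1.0):
        """Simple negative-threshold-crossing spike detector, vectorised with NumPy.
        The threshold is a multiple of a robust noise estimate (median absolute
        deviation), a common choice for extracellular recordings.
        """
        noise = np.median(np.abs(trace)) / 0.6745   # robust noise s.d. estimate
        thr = thresh_sd * noise
        # Indices where the trace first drops below -thr
        crossings = np.flatnonzero((trace[1:] < -thr) & (trace[:-1] >= -thr)) + 1
        # Enforce a refractory period so one spike is not counted twice
        min_gap = int(refractory_ms * 1e-3 * fs)
        keep, last = [], -min_gap - 1
        for idx in crossings:
            if idx - last > min_gap:
                keep.append(idx)
                last = idx
        return np.array(keep, dtype=int)

    # Synthetic trace: Gaussian noise plus three large negative deflections
    rng = np.random.default_rng(0)
    fs = 20000.0
    trace = rng.normal(0.0, 1.0, 20000)
    for t in (3000, 9000, 15000):
        trace[t] -= 30.0
    spikes = detect_spikes(trace, fs)
    print(spikes)  # includes indices 3000, 9000, 15000
    ```

    In a many-channel tool the same kernel would simply be mapped over hundreds of channels at once, on CPU vector units or a GPU.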

  3. Assessment of Measurement Distortions in GNSS Antenna Array Space-Time Processing

    Directory of Open Access Journals (Sweden)

    Thyagaraja Marathe

    2016-01-01

    Full Text Available Antenna array processing techniques are studied in GNSS as effective tools to mitigate interference in spatial and spatiotemporal domains. However, without specific considerations, the array processing results in biases and distortions in the cross-ambiguity function (CAF of the ranging codes. In space-time processing (STP the CAF misshaping can happen due to the combined effect of space-time processing and the unintentional signal attenuation by filtering. This paper focuses on characterizing these degradations for different controlled signal scenarios and for live data from an antenna array. The antenna array simulation method introduced in this paper enables one to perform accurate analyses in the field of STP. The effects of relative placement of the interference source with respect to the desired signal direction are shown using overall measurement errors and profile of the signal strength. Analyses of contributions from each source of distortion are conducted individually and collectively. Effects of distortions on GNSS pseudorange errors and position errors are compared for blind, semi-distortionless, and distortionless beamforming methods. The results from characterization can be useful for designing low distortion filters that are especially important for high accuracy GNSS applications in challenging environments.

  4. Superresolution with Seismic Arrays using Empirical Matched Field Processing

    Energy Technology Data Exchange (ETDEWEB)

    Harris, D B; Kvaerna, T

    2010-03-24

    Scattering and refraction of seismic waves can be exploited with empirical matched field processing of array observations to distinguish sources separated by much less than the classical resolution limit. To describe this effect, we use the term 'superresolution', a term widely used in the optics and signal processing literature to denote systems that break the diffraction limit. We illustrate superresolution with Pn signals recorded by the ARCES array in northern Norway, using them to identify the origins with 98.2% accuracy of 549 explosions conducted by closely-spaced mines in northwest Russia. The mines are observed at 340-410 kilometers range and are separated by as little as 3 kilometers. When viewed from ARCES many are separated by just tenths of a degree in azimuth. This classification performance results from an adaptation to transient seismic signals of techniques developed in underwater acoustics for localization of continuous sound sources. Matched field processing is a potential competitor to frequency-wavenumber and waveform correlation methods currently used for event detection, classification and location. It operates by capturing the spatial structure of wavefields incident from a particular source in a series of narrow frequency bands. In the rich seismic scattering environment, closely-spaced sources far from the observing array nonetheless produce distinct wavefield amplitude and phase patterns across the small array aperture. With observations of repeating events, these patterns can be calibrated over a wide band of frequencies (e.g. 2.5-12.5 Hertz) for use in a power estimation technique similar to frequency-wavenumber analysis. The calibrations enable coherent processing at high frequencies at which wavefields normally are considered incoherent under a plane wave model.
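
    The power-estimation step described above can be illustrated with a toy Bartlett (matched field) processor: project the observed per-band sensor vectors onto calibrated templates and pick the source with the highest average power. The templates below are random stand-ins for the calibrated wavefield patterns; all names and values are hypothetical:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_sensors, n_bands = 9, 8

    # Hypothetical calibrated wavefield templates ("matched fields") for two
    # closely spaced sources: one unit-norm complex steering vector per band.
    templates = {
        src: [v / np.linalg.norm(v)
              for v in (rng.normal(size=(n_bands, n_sensors))
                        + 1j * rng.normal(size=(n_bands, n_sensors)))]
        for src in ("mine_A", "mine_B")
    }

    def bartlett_power(obs, steering):
        """Average Bartlett power |w^H x|^2 over narrow frequency bands."""
        return np.mean([np.abs(np.vdot(w, x)) ** 2 for w, x in zip(steering, obs)])

    # Observation: noisy copy of mine_A's calibrated wavefield
    obs = [w * 2.0 + 0.1 * (rng.normal(size=n_sensors) + 1j * rng.normal(size=n_sensors))
           for w in templates["mine_A"]]

    scores = {src: bartlett_power(obs, steering) for src, steering in templates.items()}
    best = max(scores, key=scores.get)
    print(best)  # classifies the event as mine_A
    ```

    The key point the abstract makes is that the templates are *empirical*, calibrated from repeating events, so they capture scattered wavefield structure that a plane-wave model would discard.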

  5. Role of supercomputers in magnetic fusion and energy research programs

    International Nuclear Information System (INIS)

    Killeen, J.

    1985-06-01

    The importance of computer modeling in magnetic fusion (MFE) and energy research (ER) programs is discussed. The need for the most advanced supercomputers is described, and the role of the National Magnetic Fusion Energy Computer Center in meeting these needs is explained

  6. Supercomputer and cluster performance modeling and analysis efforts:2004-2006.

    Energy Technology Data Exchange (ETDEWEB)

    Sturtevant, Judith E.; Ganti, Anand; Meyer, Harold (Hal) Edward; Stevenson, Joel O.; Benner, Robert E., Jr. (.,; .); Goudy, Susan Phelps; Doerfler, Douglas W.; Domino, Stefan Paul; Taylor, Mark A.; Malins, Robert Joseph; Scott, Ryan T.; Barnette, Daniel Wayne; Rajan, Mahesh; Ang, James Alfred; Black, Amalia Rebecca; Laub, Thomas William; Vaughan, Courtenay Thomas; Franke, Brian Claude

    2007-02-01

    This report describes efforts by the Performance Modeling and Analysis Team to investigate performance characteristics of Sandia's engineering and scientific applications on the ASC capability and advanced architecture supercomputers, and Sandia's capacity Linux clusters. Efforts to model various aspects of these computers are also discussed. The goals of these efforts are to quantify and compare Sandia's supercomputer and cluster performance characteristics; to reveal strengths and weaknesses in such systems; and to predict performance characteristics of, and provide guidelines for, future acquisitions and follow-on systems. Described herein are the results obtained from running benchmarks and applications to extract performance characteristics and comparisons, as well as modeling efforts, obtained during the time period 2004-2006. The format of the report, with hypertext links to numerous additional documents, purposefully minimizes the document size needed to disseminate the extensive results from our research.

  7. Plane-wave electronic structure calculations on a parallel supercomputer

    International Nuclear Information System (INIS)

    Nelson, J.S.; Plimpton, S.J.; Sears, M.P.

    1993-01-01

    The development of iterative solutions of Schrodinger's equation in a plane-wave (pw) basis over the last several years has coincided with great advances in the computational power available for performing the calculations. These dual developments have enabled many new and interesting condensed matter phenomena to be studied from a first-principles approach. The authors present a detailed description of the implementation on a parallel supercomputer (hypercube) of the first-order equation-of-motion solution to Schrodinger's equation, using plane-wave basis functions and ab initio separable pseudopotentials. By distributing the plane-waves across the processors of the hypercube many of the computations can be performed in parallel, resulting in decreases in the overall computation time relative to conventional vector supercomputers. This partitioning also provides ample memory for large Fast Fourier Transform (FFT) meshes and the storage of plane-wave coefficients for many hundreds of energy bands. The usefulness of the parallel techniques is demonstrated by benchmark timings for both the FFT's and iterations of the self-consistent solution of Schrodinger's equation for different sized Si unit cells of up to 512 atoms
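
    The plane-wave partitioning pays off because operators diagonal in k-space (the kinetic energy) are applied between FFTs, while local potentials act in real space. A minimal 1-D illustration of this trick (not the paper's parallel implementation; grid sizes are arbitrary):

    ```python
    import numpy as np

    n, L = 64, 10.0
    x = np.linspace(0.0, L, n, endpoint=False)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # plane-wave wavevectors

    def apply_kinetic(psi):
        """T psi = -(1/2) d^2 psi / dx^2, evaluated exactly in the plane-wave basis:
        FFT to k-space, multiply by k^2/2, inverse FFT back."""
        return np.fft.ifft(0.5 * k**2 * np.fft.fft(psi))

    # A single plane wave e^{i k0 x} is an eigenfunction with eigenvalue k0^2/2
    k0 = 2.0 * np.pi * 3 / L
    psi = np.exp(1j * k0 * x)
    t_psi = apply_kinetic(psi)
    print((t_psi / psi).real.mean(), 0.5 * k0**2)  # the two agree
    ```

    In the parallel code the plane-wave coefficients are distributed across processors, so the FFTs become the main communication step.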

  8. Problem solving in nuclear engineering using supercomputers

    International Nuclear Information System (INIS)

    Schmidt, F.; Scheuermann, W.; Schatz, A.

    1987-01-01

    The availability of supercomputers enables the engineer to formulate new strategies for problem solving. One such strategy is the Integrated Planning and Simulation System (IPSS). With integrated systems, simulation models with greater consistency and good agreement with actual plant data can be effectively realized. In the present work some of the basic ideas of IPSS are described, as well as some of the conditions necessary to build such systems. Hardware and software characteristics as realized are outlined. (orig.) [de]

  9. High Temporal Resolution Mapping of Seismic Noise Sources Using Heterogeneous Supercomputers

    Science.gov (United States)

    Paitz, P.; Gokhberg, A.; Ermert, L. A.; Fichtner, A.

    2017-12-01

    The time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems like earthquake fault zones, volcanoes, geothermal and hydrocarbon reservoirs. We present results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service providing seismic noise source maps for Central Europe with high temporal resolution. We use source imaging methods based on the cross-correlation of seismic noise records from all seismic stations available in the region of interest. The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept to provide the interested researchers worldwide with regular access to the noise source maps. The solution architecture includes the following sub-systems: (1) data acquisition responsible for collecting, on a periodic basis, raw seismic records from the European seismic networks, (2) high-performance noise source mapping application responsible for the generation of source maps using cross-correlation of seismic records, (3) back-end infrastructure for the coordination of various tasks and computations, (4) front-end Web interface providing the service to the end-users and (5) data repository. The noise source mapping itself rests on the measurement of logarithmic amplitude ratios in suitably pre-processed noise correlations, and the use of simplified sensitivity kernels. During the implementation we addressed various challenges, in particular, selection of data sources and transfer protocols, automation and monitoring of daily data downloads, ensuring the required data processing performance, design of a general service-oriented architecture for coordination of various sub-systems, and
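
    The core measurement the mapping rests on, amplitude asymmetry between the two branches of a noise cross-correlation, can be sketched as follows. The window length and synthetic records are assumptions, and real processing would include the pre-processing and sensitivity-kernel steps the abstract refers to:

    ```python
    import numpy as np

    def log_energy_ratio(u1, u2, fs, window_s):
        """Cross-correlate two noise records and return the logarithmic ratio of
        positive-lag to negative-lag branch energy: an asymmetric noise source
        distribution makes this ratio nonzero."""
        n = len(u1)
        cc = np.correlate(u1, u2, mode="full")       # lags -(n-1)..(n-1)
        lags = np.arange(-(n - 1), n) / fs
        pos = cc[(lags > 0) & (lags <= window_s)]
        neg = cc[(lags < 0) & (lags >= -window_s)]
        return np.log(np.sum(pos**2) / np.sum(neg**2))

    # Synthetic test: station 2 records a delayed copy of station 1's noise,
    # so the correlation energy piles up on one branch.
    rng = np.random.default_rng(2)
    fs, delay = 100.0, 0.5
    noise = rng.normal(size=2000)
    u1 = noise
    u2 = np.roll(noise, int(delay * fs))
    r = log_energy_ratio(u1, u2, fs, window_s=1.0)
    print(r)  # strongly negative: energy sits on the negative-lag branch
    ```

    Mapping then amounts to inverting many such station-pair ratios for a source distribution, which is the computationally intensive part run on "Piz Daint".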

  10. Total focusing method with correlation processing of antenna array signals

    Science.gov (United States)

    Kozhemyak, O. A.; Bortalevich, S. I.; Loginov, E. L.; Shinyakov, Y. A.; Sukhorukov, M. P.

    2018-03-01

    The article proposes a method of preliminary correlation processing of a complete set of antenna array signals used in the image reconstruction algorithm. The results of experimental studies of 3D reconstruction of various reflectors using and without correlation processing are presented in the article. Software ‘IDealSystem3D’ by IDeal-Technologies was used for experiments. Copper wires of different diameters located in a water bath were used as a reflector. The use of correlation processing makes it possible to obtain more accurate reconstruction of the image of the reflectors and to increase the signal-to-noise ratio. The experimental results were processed using an original program. This program allows varying the parameters of the antenna array and sampling frequency.
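
    For orientation, the underlying total focusing method (sketched here without the article's correlation preprocessing step) is delay-and-sum over the full matrix of transmit/receive A-scans. The array geometry, sampling rate, and point-reflector data below are illustrative assumptions:

    ```python
    import numpy as np

    c = 1480.0                       # speed of sound in water, m/s (assumed)
    fs = 50e6                        # sampling rate, Hz (assumed)
    elem_x = np.arange(16) * 0.6e-3  # 16-element linear array, 0.6 mm pitch

    def tfm_pixel(fmc, x, z):
        """Total focusing method amplitude at one image point (x, z).
        fmc[tx, rx, t] holds the full matrix of A-scans (every tx/rx pair)."""
        d = np.hypot(elem_x - x, z)              # element-to-pixel distances
        tof = (d[:, None] + d[None, :]) / c      # tx leg + rx leg
        idx = np.clip(np.round(tof * fs).astype(int), 0, fmc.shape[2] - 1)
        tx, rx = np.indices(idx.shape)
        return fmc[tx, rx, idx].sum()            # coherent sum over all pairs

    # Synthetic point reflector at (4.5 mm, 10 mm): each A-scan gets a unit
    # sample at the corresponding two-way time of flight.
    n_t = 2048
    fmc = np.zeros((16, 16, n_t))
    xr, zr = 4.5e-3, 10e-3
    d = np.hypot(elem_x - xr, zr)
    for tx in range(16):
        for rx in range(16):
            fmc[tx, rx, int(np.round((d[tx] + d[rx]) / c * fs))] = 1.0

    on_target = tfm_pixel(fmc, xr, zr)
    off_target = tfm_pixel(fmc, xr, zr + 2e-3)
    print(on_target, off_target)  # focusing: on-target amplitude is far larger
    ```

    The article's contribution is to correlate the raw channel signals before this summation, improving reconstruction accuracy and signal-to-noise ratio.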

  11. Visualizing quantum scattering on the CM-2 supercomputer

    International Nuclear Information System (INIS)

    Richardson, J.L.

    1991-01-01

    We implement parallel algorithms for solving the time-dependent Schroedinger equation on the CM-2 supercomputer. These methods are unconditionally stable as well as unitary at each time step and have the advantage of being spatially local and explicit. We show how to visualize the dynamics of quantum scattering using techniques for visualizing complex wave functions. Several scattering problems are solved to demonstrate the use of these methods. (orig.)

  12. Implementation of an Antenna Array Signal Processing Breadboard for the Deep Space Network

    Science.gov (United States)

    Navarro, Robert

    2006-01-01

    The Deep Space Network Large Array will replace/augment 34 and 70 meter antenna assets. The array will mainly be used to support NASA's deep space telemetry, radio science, and navigation requirements. The array project will deploy three complexes, in the western U.S., Australia, and European longitudes, each with 400 12-m downlink antennas, and a DSN central facility at JPL. This facility will remotely conduct all real-time monitoring and control for the network. Signal processing objectives include: providing a means to evaluate the performance of the Breadboard Array's antenna subsystem; designing and building prototype hardware; demonstrating and evaluating proposed signal processing techniques; and gaining experience with various technologies that may be used in the Large Array. Results are summarized.

  13. Computational investigation of hydrokinetic turbine arrays in an open channel using an actuator disk-LES model

    Science.gov (United States)

    Kang, Seokkoo; Yang, Xiaolei; Sotiropoulos, Fotis

    2012-11-01

    While a considerable amount of work has focused on studying the effects and performance of wind farms, very little is known about the performance of hydrokinetic turbine arrays in open channels. Unlike large wind farms, where the vertical fluxes of momentum and energy from the atmospheric boundary layer comprise the main transport mechanisms, the presence of free surface in hydrokinetic turbine arrays inhibits vertical transport. To explore this fundamental difference between wind and hydrokinetic turbine arrays, we carry out LES with the actuator disk model to systematically investigate various layouts of hydrokinetic turbine arrays mounted on the bed of a straight open channel with fully-developed turbulent flow fed at the channel inlet. Mean flow quantities and turbulence statistics within and downstream of the arrays will be analyzed and the effect of the turbine arrays as means for increasing the effective roughness of the channel bed will be extensively discussed. This work was supported by Initiative for Renewable Energy & the Environment (IREE) (Grant No. RO-0004-12), and computational resources were provided by Minnesota Supercomputing Institute.

  14. Design of multiple sequence alignment algorithms on parallel, distributed memory supercomputers.

    Science.gov (United States)

    Church, Philip C; Goscinski, Andrzej; Holt, Kathryn; Inouye, Michael; Ghoting, Amol; Makarychev, Konstantin; Reumann, Matthias

    2011-01-01

    The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters without parallelism. Hence, we propose the design of a multiple sequence alignment algorithm on massively parallel, distributed memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures including sequences and sorted k-mer lists on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E. coli, Shigella and S. pneumoniae genomes. Our implementation returns results matching those of the original algorithm but in 1/2 the time and with 1/4 the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within reasonable time.
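
    A sorted k-mer list of the kind mentioned above is a small data structure: each distinct k-mer maps to its sorted occurrence positions, and shared k-mers between genomes become candidate alignment anchors. A toy sketch (not the BG/P implementation; sequences are invented):

    ```python
    from collections import defaultdict

    def sorted_kmer_list(seq, k):
        """Map each distinct k-mer of `seq` to its start positions, sorted by k-mer."""
        index = defaultdict(list)
        for i in range(len(seq) - k + 1):
            index[seq[i:i + k]].append(i)
        return dict(sorted(index.items()))

    def shared_seeds(seq_a, seq_b, k):
        """Anchor candidates: (k-mer, pos_a, pos_b) wherever both sequences share a k-mer."""
        ia, ib = sorted_kmer_list(seq_a, k), sorted_kmer_list(seq_b, k)
        return [(kmer, pa, pb)
                for kmer in ia.keys() & ib.keys()
                for pa in ia[kmer] for pb in ib[kmer]]

    a = "ACGTACGGT"
    b = "TTACGTAA"
    seeds = shared_seeds(a, b, 4)
    print(seeds)  # e.g. ('ACGT', 0, 2): "ACGT" occurs at a[0:4] and b[2:6]
    ```

    On a distributed-memory machine the point is that such lists can be built and merged per node with a compact footprint, which is what enables hundreds of genomes per compute node.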

  15. SOFTWARE FOR SUPERCOMPUTER SKIF “ProLit-lC” and “ProNRS-lC” FOR FOUNDRY AND METALLURGICAL PRODUCTIONS

    Directory of Open Access Journals (Sweden)

    A. N. Chichko

    2008-01-01

    Full Text Available The data of modeling on the supercomputer system SKIF of the technological process of mold filling by means of the computer system 'ProLIT-lc', and also the data of modeling of the steel pouring process by means of 'ProNRS-lc', are presented. The influence of the number of processors of the multinuclear computer system SKIF on the acceleration and time of modeling of technological processes connected with the production of castings and slugs is shown.

  16. Solution-processed single-wall carbon nanotube transistor arrays for wearable display backplanes

    Directory of Open Access Journals (Sweden)

    Byeong-Cheol Kang

    2018-01-01

    Full Text Available In this paper, we demonstrate solution-processed single-wall carbon nanotube thin-film transistor (SWCNT-TFT) arrays with polymeric gate dielectrics on polymeric substrates for wearable display backplanes, which can be directly attached to the human body. The optimized SWCNT-TFTs without any buffer layer on flexible substrates exhibit a linear field-effect mobility of 1.5 cm²/V·s and a threshold voltage of around 0 V. The statistical plot of the key device metrics extracted from 35 SWCNT-TFTs which were fabricated in different batches at different times conclusively supports that we successfully demonstrated high-performance solution-processed SWCNT-TFT arrays, which demand excellent uniformity in device performance. We also investigate the operational stability of wearable SWCNT-TFT arrays against an applied strain of up to 40%, which is essential for the harsh degree of strain on the human body. We believe that the demonstration of flexible SWCNT-TFT arrays fabricated entirely by solution processing, except for the deposition of the metal electrodes, at process temperatures below 130 °C can open up new routes for wearable display backplanes.

  17. A novel scalable manufacturing process for the production of hydrogel-forming microneedle arrays.

    Science.gov (United States)

    Lutton, Rebecca E M; Larrañeta, Eneko; Kearney, Mary-Carmel; Boyd, Peter; Woolfson, A David; Donnelly, Ryan F

    2015-10-15

    A novel manufacturing process for fabricating microneedle (MN) arrays has been designed and evaluated. The prototype is able to successfully produce 14×14 MN arrays and is easily capable of scale-up, enabling the transition from laboratory to industry and subsequent commercialisation. The method requires the custom design of metal MN master templates to produce silicone MN moulds using an injection moulding process. The MN arrays produced using this novel method were compared with those made by centrifugation, the traditional method of producing aqueous hydrogel-forming MN arrays. The results proved that there was negligible difference between the two methods, with each producing MN arrays of comparable quality. Both types of MN arrays can be successfully inserted into a skin simulant. In both cases the insertion depth was approximately 60% of the needle length, and the height reduction after insertion was in both cases approximately 3%. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. Communication Characterization and Optimization of Applications Using Topology-Aware Task Mapping on Large Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sreepathi, Sarat [ORNL; D' Azevedo, Eduardo [ORNL; Philip, Bobby [ORNL; Worley, Patrick H [ORNL

    2016-01-01

    On large supercomputers, the job scheduling systems may assign a non-contiguous node allocation for user applications depending on available resources. With parallel applications using MPI (Message Passing Interface), the default process ordering does not take into account the actual physical node layout available to the application. This contributes to non-locality in terms of physical network topology and impacts communication performance of the application. In order to mitigate such performance penalties, this work describes techniques to identify suitable task mapping that takes the layout of the allocated nodes as well as the application's communication behavior into account. During the first phase of this research, we instrumented and collected performance data to characterize communication behavior of critical US DOE (United States - Department of Energy) applications using an augmented version of the mpiP tool. Subsequently, we developed several reordering methods (spectral bisection, neighbor join tree etc.) to combine node layout and application communication data for optimized task placement. We developed a tool called mpiAproxy to facilitate detailed evaluation of the various reordering algorithms without requiring full application executions. This work presents a comprehensive performance evaluation (14,000 experiments) of the various task mapping techniques in lowering communication costs on Titan, the leadership class supercomputer at Oak Ridge National Laboratory.
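
    The cost function such reordering methods drive down is commonly the hop-bytes metric: traffic volume between each rank pair weighted by network distance under a given rank-to-node mapping. A toy illustration on an invented traffic matrix and a 1-D chain of nodes (brute force here; the paper's spectral and tree-based methods scale to real machines):

    ```python
    import numpy as np
    from itertools import permutations

    # Toy setting: 4 MPI ranks on a chain of 4 nodes; traffic[i, j] is the
    # volume exchanged between ranks i and j (symmetric, illustrative values).
    traffic = np.array([[0, 1, 9, 0],
                        [1, 0, 0, 9],
                        [9, 0, 0, 0],
                        [0, 9, 0, 0]])

    def hop_bytes(mapping):
        """Hop-bytes cost of a rank->node mapping: sum over rank pairs of
        traffic volume times network distance (chain distance = index difference)."""
        cost = 0
        for i in range(len(mapping)):
            for j in range(i + 1, len(mapping)):
                cost += traffic[i, j] * abs(mapping[i] - mapping[j])
        return cost

    default = (0, 1, 2, 3)                       # default ordering: rank i -> node i
    best = min(permutations(range(4)), key=hop_bytes)
    print(hop_bytes(default), hop_bytes(best))   # reordering cuts hop-bytes from 37 to 19
    ```

    The brute-force `min` over permutations is only feasible for toy sizes, which is exactly why heuristic reorderings (spectral bisection, neighbor-join trees) are needed at Titan's scale.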

  19. Novel Supercomputing Approaches for High Performance Linear Algebra Using FPGAs, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Supercomputing plays a major role in many areas of science and engineering, and it has had tremendous impact for decades in areas such as aerospace, defense, energy,...

  20. BSMBench: a flexible and scalable supercomputer benchmark from computational particle physics

    CERN Document Server

    Bennett, Ed; Del Debbio, Luigi; Jordan, Kirk; Patella, Agostino; Pica, Claudio; Rago, Antonio

    2016-01-01

    Benchmarking plays a central role in the evaluation of High Performance Computing architectures. Several benchmarks have been designed that allow users to stress various components of supercomputers. In order for the figures they provide to be useful, benchmarks need to be representative of the most common real-world scenarios. In this work, we introduce BSMBench, a benchmarking suite derived from Monte Carlo code used in computational particle physics. The advantage of this suite (which can be freely downloaded from http://www.bsmbench.org/) over others is the capacity to vary the relative importance of computation and communication. This enables the tests to simulate various practical situations. To showcase BSMBench, we perform a wide range of tests on various architectures, from desktop computers to state-of-the-art supercomputers, and discuss the corresponding results. Possible future directions of development of the benchmark are also outlined.

  1. A multi-step electrochemical etching process for a three-dimensional micro probe array

    International Nuclear Information System (INIS)

    Kim, Yoonji; Youn, Sechan; Cho, Young-Ho; Park, HoJoon; Chang, Byeung Gyu; Oh, Yong Soo

    2011-01-01

    We present a simple, fast, and cost-effective process for three-dimensional (3D) micro probe array fabrication using multi-step electrochemical metal foil etching. Compared to the previous electroplating (add-on) process, the present electrochemical (subtractive) process results in well-controlled material properties of the metallic microstructures. In the experimental study, we describe the single-step and multi-step electrochemical aluminum foil etching processes. In the single-step process, the depth etch rate and the bias etch rate of an aluminum foil have been measured as 1.50 ± 0.10 and 0.77 ± 0.03 µm min⁻¹, respectively. On the basis of the single-step process results, we have designed and performed the two-step electrochemical etching process for the 3D micro probe array fabrication. The fabricated 3D micro probe array shows vertical and lateral fabrication errors of 15.5 ± 5.8% and 3.3 ± 0.9%, respectively, with a surface roughness of 37.4 ± 9.6 nm. The contact force and the contact resistance of the 3D micro probe array have been measured to be 24.30 ± 0.98 mN and 2.27 ± 0.11 Ω, respectively, for an overdrive of 49.12 ± 1.25 µm.
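
    A quick worked example with the reported etch rates shows why the bias (lateral) etch must be budgeted for in mask design; the target depth below is hypothetical, not from the paper:

    ```python
    # Etch rates reported above (means only; uncertainties omitted)
    depth_rate = 1.50   # µm/min, vertical
    bias_rate = 0.77    # µm/min, lateral undercut per side

    target_depth = 150.0            # µm, a hypothetical foil thickness to etch through
    t = target_depth / depth_rate   # required etch time, minutes
    undercut = bias_rate * t        # lateral loss per side over that time, µm
    print(t, undercut)              # 100.0 min, 77.0 µm of undercut per side
    ```

    With roughly half the vertical rate appearing as lateral undercut, mask features must be pre-compensated accordingly, which is part of what the multi-step process controls.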

  2. Fundamentals of spherical array processing

    CERN Document Server

    Rafaely, Boaz

    2015-01-01

    This book provides a comprehensive introduction to the theory and practice of spherical microphone arrays. It is written for graduate students, researchers and engineers who work with spherical microphone arrays in a wide range of applications.   The first two chapters provide the reader with the necessary mathematical and physical background, including an introduction to the spherical Fourier transform and the formulation of plane-wave sound fields in the spherical harmonic domain. The third chapter covers the theory of spatial sampling, employed when selecting the positions of microphones to sample sound pressure functions in space. Subsequent chapters present various spherical array configurations, including the popular rigid-sphere-based configuration. Beamforming (spatial filtering) in the spherical harmonics domain, including axis-symmetric beamforming, and the performance measures of directivity index and white noise gain are introduced, and a range of optimal beamformers for spherical arrays, includi...
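As a runnable illustration of the spatial filtering the book covers, the sketch below implements a plain delay-and-sum beamformer for microphones on an open sphere. Array size, radius, frequency, and look directions are assumed values; the book's spherical-harmonic-domain formulation is a refinement of this basic idea, not reproduced here.

```python
import numpy as np

# Hedged sketch (not from the book): conventional delay-and-sum beamforming
# for an open-sphere microphone array. Geometry and frequency are assumptions.
rng = np.random.default_rng(0)
Q, r, k = 32, 0.05, 2 * np.pi * 1000 / 343.0   # mics, radius [m], wavenumber

# Quasi-uniform points on the sphere (normalized Gaussian samples).
pos = rng.normal(size=(Q, 3))
pos = r * pos / np.linalg.norm(pos, axis=1, keepdims=True)

def steering(u):
    """Free-field plane-wave steering vector for unit direction u."""
    return np.exp(1j * k * pos @ u)

u_src = np.array([0.0, 0.0, 1.0])              # plane wave arriving from +z
p = steering(u_src)                            # sound pressure at the mics

# Scan a handful of look directions; beam power peaks at the source.
looks = [np.array(v, float) / np.linalg.norm(v) for v in
         ([0, 0, 1], [1, 0, 0], [0, 1, 0], [1, 1, 1])]
power = [abs(np.vdot(steering(u), p)) ** 2 / Q ** 2 for u in looks]
print(int(np.argmax(power)))  # 0: the look direction matching the source
```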

  3. Signal Processing for a Lunar Array: Minimizing Power Consumption

    Science.gov (United States)

    D'Addario, Larry; Simmons, Samuel

    2011-01-01

    Motivation for the study: (1) a Lunar Radio Array for low-frequency, high-redshift Dark Ages/Epoch of Reionization observations (z = 6-50, f = 30-200 MHz); (2) high-precision cosmological measurements of 21 cm H I line fluctuations; (3) probing the universe before first star formation and providing information about the Intergalactic Medium and the evolution of large-scale structures; (4) testing whether the current cosmological model accurately describes the Universe before reionization. The Lunar Radio Array comprises: (1) a radio interferometer based on the far side of the Moon, (1a) necessary for precision measurements, (1b) shielded from earth-based and solar RFI, (1c) with no permanent ionosphere; (2) a minimum collecting area of approximately 1 square km and brightness sensitivity of 10 mK; (3) several technologies that must be developed before deployment. The power needed to process signals from a large array of nonsteerable elements is not prohibitive, even for the Moon, and even in current technology. Two different concepts have been proposed: (1) the Dark Ages Radio Interferometer (DALI) and (2) the Lunar Array for Radio Cosmology (LARC).

  4. Supercomputers and the future of computational atomic scattering physics

    International Nuclear Information System (INIS)

    Younger, S.M.

    1989-01-01

    The advent of the supercomputer has opened new vistas for the computational atomic physicist. Problems of hitherto unparalleled complexity are now being examined using these new machines, and important connections with other fields of physics are being established. This talk briefly reviews some of the most important trends in computational scattering physics and suggests some exciting possibilities for the future. 7 refs., 2 figs

  5. Sampling phased array - a new technique for ultrasonic signal processing and imaging

    OpenAIRE

    Verkooijen, J.; Boulavinov, A.

    2008-01-01

    Over the past 10 years, the improvement in the field of microelectronics and computer engineering has led to significant advances in ultrasonic signal processing and image construction techniques that are currently being applied to non-destructive material evaluation. A new phased array technique, called 'Sampling Phased Array', has been developed in the Fraunhofer Institute for Non-Destructive Testing [1]. It realises a unique approach of measurement and processing of ultrasonic signals. Th...

  6. Watson will see you now: a supercomputer to help clinicians make informed treatment decisions.

    Science.gov (United States)

    Doyle-Lindrud, Susan

    2015-02-01

    IBM has collaborated with several cancer care providers to develop and train the IBM supercomputer Watson to help clinicians make informed treatment decisions. When a patient is seen in clinic, the oncologist can input all of the clinical information into the computer system. Watson will then review all of the data and recommend treatment options based on the latest evidence and guidelines. Once the oncologist makes the treatment decision, this information can be sent directly to the insurance company for approval. Watson has the ability to standardize care and accelerate the approval process, a benefit to the healthcare provider and the patient.

  7. Mathematical methods and supercomputing in nuclear applications. Proceedings. Vol. 2

    International Nuclear Information System (INIS)

    Kuesters, H.; Stein, E.; Werner, W.

    1993-04-01

    All papers of the two volumes are separately indexed in the data base. Main topics are: Progress in advanced numerical techniques, fluid mechanics, on-line systems, artificial intelligence applications, nodal methods reactor kinetics, reactor design, supercomputer architecture, probabilistic estimation of risk assessment, methods in transport theory, advances in Monte Carlo techniques, and man-machine interface. (orig.)

  8. Mathematical methods and supercomputing in nuclear applications. Proceedings. Vol. 1

    International Nuclear Information System (INIS)

    Kuesters, H.; Stein, E.; Werner, W.

    1993-04-01

    All papers of the two volumes are separately indexed in the data base. Main topics are: Progress in advanced numerical techniques, fluid mechanics, on-line systems, artificial intelligence applications, nodal methods reactor kinetics, reactor design, supercomputer architecture, probabilistic estimation of risk assessment, methods in transport theory, advances in Monte Carlo techniques, and man-machine interface. (orig.)

  9. Subspace Dimensionality: A Tool for Automated QC in Seismic Array Processing

    Science.gov (United States)

    Rowe, C. A.; Stead, R. J.; Begnaud, M. L.

    2013-12-01

    Because of the great resolving power of seismic arrays, the application of automated processing to array data is critically important in treaty verification work. A significant problem in array analysis is the inclusion of bad sensor channels in the beamforming process. We are testing an approach to automated, on-the-fly quality control (QC) to aid in the identification of poorly performing sensor channels prior to beam-forming in routine event detection or location processing. The idea stems from methods used for large computer servers, where monitoring traffic at enormous numbers of nodes is impractical on a node-by-node basis; instead, the dimensionality of the node traffic is monitored for anomalies that could represent malware, cyber-attacks or other problems. The technique relies upon the use of subspace dimensionality or principal components of the overall system traffic. The subspace technique is not new to seismology, but its most common application has been limited to comparing waveforms to an a priori collection of templates for detecting highly similar events in a swarm or seismic cluster. In the established template application, a detector functions in a manner analogous to waveform cross-correlation, applying a statistical test to assess the similarity of the incoming data stream to known templates for events of interest. In our approach, we seek not to detect matching signals; instead, we examine the signal subspace dimensionality in much the same way that the method addresses node traffic anomalies in large computer systems. Signal anomalies recorded on seismic arrays affect the dimensional structure of the array-wide time-series. 
We have shown previously that this observation is useful in identifying real seismic events, either by looking at the raw signal or derivatives thereof (entropy, kurtosis), but here we explore the effects of malfunctioning channels on the dimension of the data and its derivatives, and how to leverage this effect for
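The subspace-dimensionality idea described above can be sketched in a few lines: a coherent signal across an array concentrates variance in a few singular values, while a malfunctioning channel inflates the effective dimension. The data below are synthetic and the 95% energy threshold is an assumed choice, not the authors' setting.

```python
import numpy as np

# Hedged sketch of the idea described above, on synthetic array data.
rng = np.random.default_rng(1)
channels, nsamp = 16, 2048
t = np.arange(nsamp)
common = np.sin(2 * np.pi * 0.01 * t)                  # coherent arrival
data = np.outer(np.ones(channels), common) + 0.05 * rng.normal(size=(channels, nsamp))

def effective_dim(x, energy=0.95):
    """Number of singular values capturing `energy` of the total variance."""
    s = np.linalg.svd(x, compute_uv=False)
    frac = np.cumsum(s**2) / np.sum(s**2)
    return int(np.searchsorted(frac, energy) + 1)

d_ok = effective_dim(data)
data[3] = rng.normal(size=nsamp)                       # channel 3 goes bad
d_bad = effective_dim(data)
print(d_ok, d_bad)      # dimensionality grows when a channel misbehaves
assert d_bad > d_ok
```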

  10. Parallel Multivariate Spatio-Temporal Clustering of Large Ecological Datasets on Hybrid Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sreepathi, Sarat [ORNL; Kumar, Jitendra [ORNL; Mills, Richard T. [Argonne National Laboratory; Hoffman, Forrest M. [ORNL; Sripathi, Vamsi [Intel Corporation; Hargrove, William Walter [United States Department of Agriculture (USDA), United States Forest Service (USFS)

    2017-09-01

    A proliferation of data from vast networks of remote sensing platforms (satellites, unmanned aircraft systems (UAS), airborne, etc.), observational facilities (meteorological, eddy covariance, etc.), state-of-the-art sensors, and simulation models offers unprecedented opportunities for scientific discovery. Unsupervised classification is a widely applied data mining approach to derive insights from such data. However, classification of very large data sets is a complex computational problem that requires efficient numerical algorithms and implementations on high performance computing (HPC) platforms. Additionally, increasing power, space, cooling and efficiency requirements have led to the deployment of hybrid supercomputing platforms with complex architectures and memory hierarchies like the Titan system at Oak Ridge National Laboratory. The advent of such accelerated computing architectures offers new challenges and opportunities for big data analytics in general and specifically, large scale cluster analysis in our case. Although there is an existing body of work on parallel cluster analysis, those approaches do not fully meet the needs imposed by the nature and size of our large data sets. Moreover, they had scaling limitations and were mostly limited to traditional distributed memory computing platforms. We present a parallel Multivariate Spatio-Temporal Clustering (MSTC) technique based on k-means cluster analysis that can target hybrid supercomputers like Titan. We developed a hybrid MPI, CUDA and OpenACC implementation that can utilize both CPU and GPU resources on computational nodes. We describe performance results on Titan that demonstrate the scalability and efficacy of our approach in processing large ecological data sets.
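The kernel that such MSTC implementations parallelize is ordinary k-means. The sketch below is not the authors' code: it only shows the data-parallel assignment step and the reduction-style update step that map onto GPU (CUDA/OpenACC) and MPI layers; data and parameters are illustrative.

```python
import numpy as np

def kmeans(x, k, iters=50, seed=0):
    """Plain Lloyd's algorithm; the two steps below are what get parallelized."""
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        # Assignment: each point to its nearest center (data-parallel; the
        # natural candidate for GPU offload).
        d = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # Update: per-cluster means (a reduction across workers), with a
        # random reseed guarding against empty clusters.
        centers = np.array([x[labels == j].mean(axis=0) if np.any(labels == j)
                            else x[rng.integers(len(x))] for j in range(k)])
    return labels, centers

rng = np.random.default_rng(42)
x = np.vstack([rng.normal(-3, 0.3, (100, 2)), rng.normal(3, 0.3, (100, 2))])
labels, centers = kmeans(x, 2)
print(sorted(np.round(centers[:, 0]).astype(int).tolist()))  # typically [-3, 3]
```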

  11. Affordable and accurate large-scale hybrid-functional calculations on GPU-accelerated supercomputers

    Science.gov (United States)

    Ratcliff, Laura E.; Degomme, A.; Flores-Livas, José A.; Goedecker, Stefan; Genovese, Luigi

    2018-03-01

    Performing high accuracy hybrid functional calculations for condensed matter systems containing a large number of atoms is at present computationally very demanding or even out of reach if high quality basis sets are used. We present a highly optimized multiple graphics processing unit implementation of the exact exchange operator which allows one to perform fast hybrid functional density-functional theory (DFT) calculations with systematic basis sets without additional approximations for up to a thousand atoms. With this method hybrid DFT calculations of high quality become accessible on state-of-the-art supercomputers within a time-to-solution that is of the same order of magnitude as traditional semilocal-GGA functionals. The method is implemented in a portable open-source library.

  12. Design and performance characterization of electronic structure calculations on massively parallel supercomputers

    DEFF Research Database (Denmark)

    Romero, N. A.; Glinsvad, Christian; Larsen, Ask Hjorth

    2013-01-01

    Density function theory (DFT) is the most widely employed electronic structure method because of its favorable scaling with system size and accuracy for a broad range of molecular and condensed-phase systems. The advent of massively parallel supercomputers has enhanced the scientific community...

  13. Systolic array processing of the sequential decoding algorithm

    Science.gov (United States)

    Chang, C. Y.; Yao, K.

    1989-01-01

    A systolic array processing technique is applied to implementing the stack algorithm form of the sequential decoding algorithm. It is shown that sorting, a key function in the stack algorithm, can be efficiently realized by a special type of systolic arrays known as systolic priority queues. Compared to the stack-bucket algorithm, this approach is shown to have the advantages that the decoding always moves along the optimal path, that it has a fast and constant decoding speed and that its simple and regular hardware architecture is suitable for VLSI implementation. Three types of systolic priority queues are discussed: random access scheme, shift register scheme and ripple register scheme. The property of the entries stored in the systolic priority queue is also investigated. The results are applicable to many other basic sorting type problems.
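The stack algorithm's reliance on sorting can be made concrete with a toy sketch. Here a software heap stands in for the systolic priority queue, and the branch metric is an invented example, not the paper's hardware design.

```python
import heapq

# Hedged sketch: the stack algorithm's key operation is "extract the best
# path, extend it, reinsert the successors". heapq plays the role of the
# systolic priority queue; metric and code are toy assumptions.
def stack_decode(metric, depth, branches=2):
    """Greedy best-first search over a binary code tree of given depth."""
    heap = [(-0.0, ())]                      # (negated path metric, path)
    while heap:
        neg_m, path = heapq.heappop(heap)    # always extend the best path
        if len(path) == depth:
            return path, -neg_m
        for bit in range(branches):
            m = -neg_m + metric(path, bit)   # accumulate the branch metric
            heapq.heappush(heap, (-m, path + (bit,)))

# Toy metric: reward matching a fixed received sequence.
received = (1, 0, 1, 1)
metric = lambda path, bit: 1.0 if bit == received[len(path)] else -1.0
best, score = stack_decode(metric, len(received))
print(best, score)  # (1, 0, 1, 1) 4.0
```

Because the search always pops the globally best partial path, decoding "always moves along the optimal path", the property the abstract attributes to the systolic realization.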

  14. The Very Large Array Data Processing Pipeline

    Science.gov (United States)

    Kent, Brian R.; Masters, Joseph S.; Chandler, Claire J.; Davis, Lindsey E.; Kern, Jeffrey S.; Ott, Juergen; Schinzel, Frank K.; Medlin, Drew; Muders, Dirk; Williams, Stewart; Geers, Vincent C.; Momjian, Emmanuel; Butler, Bryan J.; Nakazato, Takeshi; Sugimoto, Kanako

    2018-01-01

    We present the VLA Pipeline, software that is part of the larger pipeline processing framework used for both the Karl G. Jansky Very Large Array (VLA) and the Atacama Large Millimeter/submillimeter Array (ALMA), for both interferometric and single dish observations. Through a collection of base code jointly used by the VLA and ALMA, the pipeline builds a hierarchy of classes to execute individual atomic pipeline tasks within the Common Astronomy Software Applications (CASA) package. Each pipeline task contains heuristics designed by the team to actively decide the best processing path and execution parameters for calibration and imaging. The pipeline code is developed and written in Python and uses a "context" structure for tracking the heuristic decisions and processing results. The pipeline "weblog" acts as the user interface for verifying the quality assurance of each calibration and imaging stage. The majority of VLA scheduling blocks above 1 GHz are now processed with the standard continuum recipe of the pipeline and offer a calibrated measurement set as a basic data product to observatory users. In addition, the pipeline is used for processing data from the VLA Sky Survey (VLASS), a seven-year community-driven endeavor started in September 2017 to survey the entire sky down to a declination of -40 degrees at S-band (2-4 GHz). This 5500-hour next-generation large radio survey will explore the time and spectral domains, relying on pipeline processing to generate calibrated measurement sets, polarimetry, and imaging data products that are available to the astronomical community with no proprietary period. Here we present an overview of the pipeline design philosophy, heuristics, and calibration and imaging results produced by the pipeline. 
Future development will include the testing of spectral line recipes, low signal-to-noise heuristics, and serving as a testing platform for science ready data products.The pipeline is developed as part of the CASA software package by an
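The task/context design described above can be caricatured in a few lines. This is purely illustrative structure, not CASA code; the class and stage names are invented.

```python
# Hedged sketch: a pipeline built as a chain of task objects that record
# their heuristic decisions in a shared "context", mirroring the design
# described above. Names and figures are made up.
class Context:
    def __init__(self):
        self.results = []          # decisions/results, in execution order

class Task:
    name = "task"
    def run(self, ctx):
        raise NotImplementedError

class Flagging(Task):
    name = "flagging"
    def run(self, ctx):
        ctx.results.append((self.name, "auto-flagged 2% of data"))  # made-up figure

class Calibration(Task):
    name = "calibration"
    def run(self, ctx):
        ctx.results.append((self.name, "gain tables applied"))

ctx = Context()
for task in (Flagging(), Calibration()):
    task.run(ctx)
print([name for name, _ in ctx.results])  # ['flagging', 'calibration']
```

A "weblog"-style report can then be rendered from `ctx.results` after the run, which is why the shared context, rather than the tasks, owns the record of decisions.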

  15. High speed vision processor with reconfigurable processing element array based on full-custom distributed memory

    Science.gov (United States)

    Chen, Zhe; Yang, Jie; Shi, Cong; Qin, Qi; Liu, Liyuan; Wu, Nanjian

    2016-04-01

    In this paper, a hybrid vision processor based on a compact full-custom distributed memory for near-sensor high-speed image processing is proposed. The proposed processor consists of a reconfigurable processing element (PE) array, a row processor (RP) array, and a dual-core microprocessor. The PE array includes two-dimensional processing elements with a compact full-custom distributed memory. It supports real-time reconfiguration between the PE array and the self-organized map (SOM) neural network. The vision processor is fabricated using a 0.18 µm CMOS technology. The circuit area of the distributed memory is reduced markedly into 1/3 of that of the conventional memory so that the circuit area of the vision processor is reduced by 44.2%. Experimental results demonstrate that the proposed design achieves correct functions.

  16. Integration of PanDA workload management system with Titan supercomputer at OLCF

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00300320; Klimentov, Alexei; Oleynik, Danila; Panitkin, Sergey; Petrosyan, Artem; Vaniachine, Alexandre; Wenaus, Torre; Schovancova, Jaroslava

    2015-01-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, the next LHC data taking run will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of the PanDA WMS with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on Titan's multi-core worker nodes. It also gives PanDA new capability to collect, in real time, information about unused...

  17. Integration of PanDA workload management system with Titan supercomputer at OLCF

    CERN Document Server

    Panitkin, Sergey; The ATLAS collaboration; Klimentov, Alexei; Oleynik, Danila; Petrosyan, Artem; Schovancova, Jaroslava; Vaniachine, Alexandre; Wenaus, Torre

    2015-01-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently uses more than 100,000 cores at well over 100 Grid sites with a peak performance of 0.3 petaFLOPS, the next LHC data taking run will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of the PanDA WMS with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on Titan's multi-core worker nodes. It also gives PanDA new capability to collect, in real tim...

  18. Computational fluid dynamics research at the United Technologies Research Center requiring supercomputers

    Science.gov (United States)

    Landgrebe, Anton J.

    1987-01-01

    An overview of research activities at the United Technologies Research Center (UTRC) in the area of Computational Fluid Dynamics (CFD) is presented. The requirement and use of various levels of computers, including supercomputers, for the CFD activities is described. Examples of CFD directed toward applications to helicopters, turbomachinery, heat exchangers, and the National Aerospace Plane are included. Helicopter rotor codes for the prediction of rotor and fuselage flow fields and airloads were developed with emphasis on rotor wake modeling. Airflow and airload predictions and comparisons with experimental data are presented. Examples are presented of recent parabolized Navier-Stokes and full Navier-Stokes solutions for hypersonic shock-wave/boundary layer interaction, and hydrogen/air supersonic combustion. In addition, other examples of CFD efforts in turbomachinery Navier-Stokes methodology and separated flow modeling are presented. A brief discussion of the 3-tier scientific computing environment is also presented, in which the researcher has access to workstations, mid-size computers, and supercomputers.

  19. Explaining the gap between theoretical peak performance and real performance for supercomputer architectures

    International Nuclear Information System (INIS)

    Schoenauer, W.; Haefner, H.

    1993-01-01

    The basic architectures of vector and parallel computers with their properties are presented. Then the memory size and the arithmetic operations in the context of memory bandwidth are discussed. For the exemplary discussion of a single operation, micro-measurements of the vector triad for the IBM 3090 VF and the CRAY Y-MP/8 are presented. They reveal the details of the losses for a single operation. Then we analyze the global performance of a whole supercomputer by identifying reduction factors that bring down the theoretical peak performance to the poor real performance. The responsibilities of the manufacturer and of the user for these losses are discussed. Then the price-performance ratio for different architectures in a snapshot of January 1991 is briefly mentioned. Finally, some remarks are made about a user-friendly architecture for a supercomputer. (orig.)
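The memory-bandwidth limits discussed above are easy to observe with the vector triad itself, a(i) = b(i) + c(i)·d(i). The sketch below is a toy micro-measurement, not the authors' instrumentation; array sizes and repetition counts are arbitrary choices.

```python
import time
import numpy as np

# Hedged sketch: time the vector triad at a cache-resident and a
# memory-bound size. Absolute numbers depend entirely on the machine.
def triad_gflops(n, reps=20):
    b, c, d = (np.random.rand(n) for _ in range(3))
    a = np.empty(n)
    t0 = time.perf_counter()
    for _ in range(reps):
        np.add(b, c * d, out=a)        # 2 flops per element
    dt = time.perf_counter() - t0
    return 2.0 * n * reps / dt / 1e9

for n in (10**4, 10**6):               # small (cache) vs large (memory) size
    print(f"n={n:>8}: {triad_gflops(n):.2f} GFLOP/s")
```

On most systems the large-n rate falls well below the small-n rate, which is precisely the peak-versus-real gap the paper dissects.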

  20. Adventures in supercomputing: An innovative program for high school teachers

    Energy Technology Data Exchange (ETDEWEB)

    Oliver, C.E.; Hicks, H.R.; Summers, B.G. [Oak Ridge National Lab., TN (United States); Staten, D.G. [Wartburg Central High School, TN (United States)

    1994-12-31

    Within the realm of education, seldom does an innovative program become available with the potential to change an educator's teaching methodology. Adventures in Supercomputing (AiS), sponsored by the U.S. Department of Energy (DOE), is such a program. It is a program for high school teachers that changes the teacher paradigm from a teacher-directed approach of teaching to a student-centered approach. "A student-centered classroom offers better opportunities for development of internal motivation, planning skills, goal setting and perseverance than does the traditional teacher-directed mode". Not only is the process of teaching changed, but the cross-curricula integration within the AiS materials is remarkable. Written from a teacher's perspective, this paper will describe the AiS program and its effects on teachers and students, primarily at Wartburg Central High School, in Wartburg, Tennessee. The AiS program in Tennessee is sponsored by Oak Ridge National Laboratory (ORNL).

  1. Algorithm-structured computer arrays and networks architectures and processes for images, percepts, models, information

    CERN Document Server

    Uhr, Leonard

    1984-01-01

    Computer Science and Applied Mathematics: Algorithm-Structured Computer Arrays and Networks: Architectures and Processes for Images, Percepts, Models, Information examines the parallel-array, pipeline, and other network multi-computers.This book describes and explores arrays and networks, those built, being designed, or proposed. The problems of developing higher-level languages for systems and designing algorithm, program, data flow, and computer structure are also discussed. This text likewise describes several sequences of successively more general attempts to combine the power of arrays wi

  2. Evaluation of existing and proposed computer architectures for future ground-based systems

    Science.gov (United States)

    Schulbach, C.

    1985-01-01

    Parallel processing architectures and techniques used in current supercomputers are described and projections are made of future advances. Presently, the von Neumann sequential processing pattern has been accelerated by having separate I/O processors, interleaved memories, wide memories, independent functional units, and pipelining. Recent supercomputers have featured single-instruction, multiple-data-stream (SIMD) architectures, which have different processors for performing various operations (vector or pipeline processors). Multiple-instruction, multiple-data-stream (MIMD) machines have also been developed. Data flow techniques, wherein program instructions are activated only when data are available, are expected to play a large role in future supercomputers, along with increased parallel processor arrays. The enhanced operational speeds are essential for adequately treating data from future spacecraft remote sensing instruments such as the Thematic Mapper.

  3. Blind Time-Frequency Analysis for Source Discrimination in Multisensor Array Processing

    National Research Council Canada - National Science Library

    Amin, Moeness

    1999-01-01

    .... We have clearly demonstrated, through analysis and simulations, the offerings of time-frequency distributions in solving key problems in sensor array processing, including direction finding, source...

  4. Monitoring and Evaluation of Alcoholic Fermentation Processes Using a Chemocapacitor Sensor Array

    Science.gov (United States)

    Oikonomou, Petros; Raptis, Ioannis; Sanopoulou, Merope

    2014-01-01

    The alcoholic fermentation of Savatiano must variety was initiated under laboratory conditions and monitored daily with a gas sensor array without any pre-treatment steps. The sensor array consisted of eight interdigitated chemocapacitors (IDCs) coated with specific polymers. Two batches of fermented must were tested and also subjected daily to standard chemical analysis. The chemical composition of the two fermenting musts differed from day one of laboratory monitoring (due to different storage conditions of the musts) and due to a deliberate increase of the acetic acid content of one of the musts, during the course of the process, in an effort to spoil the fermenting medium. Sensor array responses to the headspace of the fermenting medium were compared with those obtained either for pure standard ethanol solutions or for ones contaminated with controlled concentrations of impurities. Results of data processing with Principal Component Analysis (PCA) demonstrate that this sensing system could discriminate between a normal and a potentially spoiled grape must fermentation process, so it could be applied during wine production as an auxiliary qualitative control instrument. PMID:25184490
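The PCA step reported above can be sketched as follows. The data here are synthetic stand-ins for the eight-capacitor responses, with an assumed offset playing the role of spoilage; the point is only that a leading principal component can separate the two classes.

```python
import numpy as np

# Hedged sketch of the PCA discrimination step, on synthetic data: the
# eight columns stand in for the eight chemocapacitors.
rng = np.random.default_rng(7)
normal = rng.normal(0.0, 0.1, size=(20, 8))
spoiled = rng.normal(0.0, 0.1, size=(20, 8)) + np.linspace(0.5, 1.5, 8)

x = np.vstack([normal, spoiled])
x -= x.mean(axis=0)                       # center before PCA
_, _, vt = np.linalg.svd(x, full_matrices=False)
scores = x @ vt[:2].T                     # leading two principal components

# For this synthetic data the first PC separates the groups completely
# (up to the usual sign ambiguity of principal components).
assert scores[:20, 0].max() < scores[20:, 0].min() or \
       scores[:20, 0].min() > scores[20:, 0].max()
print("classes separate along PC1")
```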

  5. Integration of Titan supercomputer at OLCF with ATLAS Production System

    Science.gov (United States)

    Barreiro Megino, F.; De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Padolski, S.; Panitkin, S.; Wells, J.; Wenaus, T.; ATLAS Collaboration

    2017-10-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA-managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of petabytes of data, and the rate of data processing already exceeds an exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this paper we describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for job submission to Titan's batch queues and local data management, with lightweight MPI wrappers to run single node workloads in parallel on Titan's multi-core worker nodes. It provides for running standard ATLAS production jobs on unused resources (backfill) on Titan. The system has already allowed ATLAS to collect millions of core-hours per month on Titan and execute hundreds of thousands of jobs, while simultaneously improving Titan's utilization efficiency. We will discuss the details of the implementation, current experience with running the system, as well as future plans aimed at improvements in scalability and efficiency. Notice: This manuscript has been authored, by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to
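The light-weight wrapper idea, one single-threaded payload per core of a multi-core node, can be mimicked on a workstation. The sketch below uses a process pool in place of MPI ranks and an invented payload command; it is not the PanDA pilot code.

```python
import subprocess
import sys
from concurrent.futures import ProcessPoolExecutor

# Hedged sketch: run N independent single-threaded payloads in parallel,
# the way the MPI wrappers described above fill a multi-core allocation.
# concurrent.futures stands in for MPI ranks; the payload is illustrative.
def run_payload(job_id: int) -> str:
    cmd = [sys.executable, "-c", f"print('payload {job_id} done')"]
    out = subprocess.run(cmd, capture_output=True, text=True)
    return out.stdout.strip()

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=4) as pool:   # 4 "ranks" on one node
        results = list(pool.map(run_payload, range(4)))
    print(results)
```

In the real system each rank would instead launch one ATLAS production job, and the wrapper's only role is to fan the jobs out and wait for them.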

  6. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-01-01

    SANAM supercomputer was jointly built by KACST and FIAS in 2012 ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of HPL and STREAM benchmarks.

  7. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-03-13

    SANAM supercomputer was jointly built by KACST and FIAS in 2012 ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of HPL and STREAM benchmarks.

  8. An efficient implementation of a backpropagation learning algorithm on quadrics parallel supercomputer

    International Nuclear Information System (INIS)

    Taraglio, S.; Massaioli, F.

    1995-08-01

    A parallel implementation of a library to build and train Multi Layer Perceptrons via the Back Propagation algorithm is presented. The target machine is the SIMD massively parallel supercomputer Quadrics. Performance measures are provided on three different machines with different number of processors, for two network examples. A sample source code is given
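The algorithm the library implements, back-propagation for a multi-layer perceptron, can be written out in a few lines of NumPy. Everything below (network size, learning rate, the XOR task) is an illustrative assumption; the Quadrics implementation distributes these same matrix products across its SIMD processors.

```python
import numpy as np

# Hedged sketch of batch back-propagation for a one-hidden-layer MLP on XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)

W1, W2 = rng.normal(0, 1, (2, 8)), rng.normal(0, 1, (8, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1)                   # forward pass
    out = sigmoid(h @ W2)
    d_out = (out - y) * out * (1 - out)   # backward pass (chain rule)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 1.0 * h.T @ d_out               # gradient descent updates
    W1 -= 1.0 * X.T @ d_h
print(np.round(out.ravel(), 1))           # typically approaches [0, 1, 1, 0]
```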

  9. High density processing electronics for superconducting tunnel junction x-ray detector arrays

    Energy Technology Data Exchange (ETDEWEB)

    Warburton, W.K., E-mail: bill@xia.com [XIA LLC, 31057 Genstar Road, Hayward, CA 94544 (United States); Harris, J.T. [XIA LLC, 31057 Genstar Road, Hayward, CA 94544 (United States); Friedrich, S. [Lawrence Livermore National Laboratory, Livermore, CA 94550 (United States)

    2015-06-01

    Superconducting tunnel junctions (STJs) are excellent soft x-ray (100–2000 eV) detectors, particularly for synchrotron applications, because of their ability to obtain energy resolutions below 10 eV at count rates approaching 10 kcps. In order to achieve useful solid detection angles with these very small detectors, they are typically deployed in large arrays – currently with 100+ elements, but with 1000 elements being contemplated. In this paper we review a 5-year effort to develop compact, computer controlled low-noise processing electronics for STJ detector arrays, focusing on the major issues encountered and our solutions to them. Of particular interest are our preamplifier design, which can set the STJ operating points under computer control and achieve 2.7 eV energy resolution; our low noise power supply, which produces only 2 nV/√Hz noise at the preamplifier's critical cascode node; our digital processing card that digitizes and digitally processes 32 channels; and an STJ I–V curve scanning algorithm that computes noise as a function of offset voltage, allowing an optimum operating point to be easily selected. With 32 preamplifiers laid out on a custom 3U EuroCard, and the 32 channel digital card in a 3U PXI card format, electronics for a 128 channel array occupy only two small chassis, each the size of a National Instruments 5-slot PXI crate, and allow full array control with simple extensions of existing beam line data collection packages.
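The I–V scanning algorithm described above reduces to a small optimization: sweep the offset voltage, record a noise figure at each point, and pick the minimum. The sketch below uses a synthetic noise curve; the voltage range and quadratic shape are invented, not measured STJ behavior.

```python
import numpy as np

# Hedged sketch of the operating-point selection step, on a synthetic
# noise-vs-offset-voltage curve.
def pick_operating_point(v, noise):
    """Return the offset voltage with the lowest measured noise."""
    return v[int(np.argmin(noise))]

v = np.linspace(0.0, 0.4, 81)                 # assumed mV-scale sweep
noise = 5.0 + 100.0 * (v - 0.13) ** 2         # toy noise curve, minimum at 0.13
print(round(pick_operating_point(v, noise), 3))  # 0.13
```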

  10. Assessment of low-cost manufacturing process sequences. [photovoltaic solar arrays

    Science.gov (United States)

    Chamberlain, R. G.

    1979-01-01

    An extensive research and development activity to reduce the cost of manufacturing photovoltaic solar arrays by a factor of approximately one hundred is discussed. Proposed and actual manufacturing process descriptions were compared to manufacturing costs. An overview of this methodology is presented.

  11. Supercomputing Centers and Electricity Service Providers

    DEFF Research Database (Denmark)

    Patki, Tapasya; Bates, Natalie; Ghatikar, Girish

    2016-01-01

    Supercomputing Centers (SCs) have high and variable power demands, which increase the challenges of the Electricity Service Providers (ESPs) with regards to efficient electricity distribution and reliable grid operation. High penetration of renewable energy generation further exacerbates this problem. In order to develop a symbiotic relationship between the SCs and their ESPs and to support effective power management at all levels, it is critical to understand and analyze how the existing relationships were formed and how these are expected to evolve. In this paper, we first present results from a detailed, quantitative survey-based analysis and compare the perspectives of the European grid and SCs to the ones of the United States (US). We then show that, contrary to expectation, SCs in the US are more open toward cooperating and developing demand-management strategies with their ESPs.

  12. Integration of PanDA workload management system with Titan supercomputer at OLCF

    Science.gov (United States)

    De, K.; Klimentov, A.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.

    2015-12-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, the future LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We describe a project aimed at integration of the PanDA WMS with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multicore worker nodes. It also gives PanDA a new capability to collect, in real time, information about unused worker nodes on Titan, which allows precise definition of the size and duration of jobs submitted to Titan according to available free resources. This capability significantly reduces PanDA job wait time while improving Titan's utilization efficiency. This implementation was tested with a variety of Monte Carlo workloads on Titan and is being tested on several other supercomputing platforms. Notice: This manuscript has been authored by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher, by accepting the manuscript for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.

  13. Accelerating Science Impact through Big Data Workflow Management and Supercomputing

    Directory of Open Access Journals (Sweden)

    De K.

    2016-01-01

    Full Text Available The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. ATLAS, one of the largest collaborations ever assembled in the history of science, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment relies on a heterogeneous distributed computational infrastructure. To manage the workflow for all data processing on hundreds of data centers, the PanDA (Production and Distributed Analysis) Workload Management System is used. An ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF), is being realized within the BigPanDA and megaPanDA projects. These projects are now exploring how PanDA might be used for managing computing jobs that run on supercomputers including OLCF's Titan and NRC-KI HPC2. The main idea is to reuse, as much as possible, existing components of the PanDA system that are already deployed on the LHC Grid for analysis of physics data. The next generation of PanDA will allow many data-intensive sciences employing a variety of computing platforms to benefit from ATLAS experience and proven tools in highly scalable processing.

  14. HEP Computing Tools, Grid and Supercomputers for Genome Sequencing Studies

    Science.gov (United States)

    De, K.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Novikov, A.; Poyda, A.; Tertychnyy, I.; Wenaus, T.

    2017-10-01

    PanDA, the Production and Distributed Analysis Workload Management System, was developed to address the data processing and analysis challenges of the ATLAS experiment at the LHC. Recently PanDA has been extended to run HEP scientific applications on Leadership Class Facilities and supercomputers. The success of the projects using PanDA beyond HEP and the Grid has drawn attention from other compute-intensive sciences such as bioinformatics. Recent advances in Next Generation Genome Sequencing (NGS) technology have led to increasing streams of sequencing data that need to be processed, analysed and made available for bioinformaticians worldwide. Analysis of genome sequencing data using the popular software pipeline PALEOMIX can take a month even on a powerful computing resource. In this paper we describe the adaptation of the PALEOMIX pipeline to run on a distributed computing environment powered by PanDA. To run the pipeline we split the input files into chunks which are processed separately on different nodes as independent PALEOMIX inputs, and finally merge the output files; this is very similar to how ATLAS processes and simulates data. We dramatically decreased the total walltime thanks to automated job (re)submission and brokering within PanDA. Using software tools developed initially for HEP and the Grid reduces payload execution time for mammoth DNA samples from weeks to days.
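The split-and-merge scheme described above is essentially scatter/gather over file chunks. A minimal sketch of the pattern (the record names and the per-chunk "processing" stand in for actual PALEOMIX runs dispatched by PanDA):

```python
import math

def split_into_chunks(records, n_chunks):
    """Split an input file (here a list of records) into roughly equal
    chunks that can be processed independently on different worker nodes."""
    size = math.ceil(len(records) / n_chunks)
    return [records[i:i + size] for i in range(0, len(records), size)]

def merge_outputs(chunk_outputs):
    """Concatenate per-chunk outputs back into a single result,
    preserving the original record order."""
    merged = []
    for out in chunk_outputs:
        merged.extend(out)
    return merged

records = [f"read_{i}" for i in range(10)]
chunks = split_into_chunks(records, 3)
# stand-in for PALEOMIX running on each chunk on a separate node
processed = [[r.upper() for r in chunk] for chunk in chunks]
result = merge_outputs(processed)
```

The walltime win comes from the chunks running concurrently; the merge step is cheap as long as chunk order is preserved.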

  15. Reliability Lessons Learned From GPU Experience With The Titan Supercomputer at Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Gallarno, George [Christian Brothers University; Rogers, James H [ORNL; Maxwell, Don E [ORNL

    2015-01-01

    The high computational capability of graphics processing units (GPUs) is enabling and driving the scientific discovery process at large-scale. The world's second fastest supercomputer for open science, Titan, has more than 18,000 GPUs that computational scientists use to perform scientific simulations and data analysis. Understanding of GPU reliability characteristics, however, is still in its nascent stage since GPUs have only recently been deployed at large-scale. This paper presents a detailed study of GPU errors and their impact on system operations and applications, describing experiences with the 18,688 GPUs on the Titan supercomputer as well as lessons learned in the process of efficient operation of GPUs at scale. These experiences are helpful to HPC sites which already have large-scale GPU clusters or plan to deploy GPUs in the future.

  16. Heat dissipation computations of a HVDC ground electrode using a supercomputer

    International Nuclear Information System (INIS)

    Greiss, H.; Mukhedkar, D.; Lagace, P.J.

    1990-01-01

    This paper reports on the temperature of the soil surrounding a High Voltage Direct Current (HVDC) toroidal ground electrode of practical dimensions, in both homogeneous and non-homogeneous soils, computed at incremental points in time using finite difference methods on a supercomputer. Response curves were computed and plotted at several locations within the soil in the vicinity of the ground electrode for various values of the soil parameters.
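The finite-difference approach mentioned above can be illustrated in one dimension: an explicit (FTCS) update of the heat equation, with the electrode node held at a fixed temperature. All numbers below are illustrative, not values from the paper:

```python
import numpy as np

def heat_step(T, alpha, dx, dt):
    """One explicit finite-difference (FTCS) update of the 1-D heat
    equation dT/dt = alpha * d2T/dx2, interior nodes only."""
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    return Tn

# Soil initially at 10 C; electrode node held at 60 C (illustrative values)
T = np.full(51, 10.0)
T[0] = 60.0
alpha, dx, dt = 1e-6, 0.1, 2000.0   # stability requires alpha*dt/dx**2 <= 0.5
for _ in range(500):
    T = heat_step(T, alpha, dx, dt)
    T[0] = 60.0                      # electrode boundary held constant
```

The incremental-in-time curves from the paper correspond to sampling `T` at fixed node indices after each batch of steps; the actual work used a toroidal geometry rather than this 1-D line.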

  17. Argonne National Lab deploys Force10 networks' massively dense ethernet switch for supercomputing cluster

    CERN Multimedia

    2003-01-01

    "Force10 Networks, Inc. today announced that Argonne National Laboratory (Argonne, IL) has successfully deployed Force10 E-Series switch/routers to connect to the TeraGrid, the world's largest supercomputing grid, sponsored by the National Science Foundation (NSF)" (1/2 page).

  18. Sound Is Sound: Film Sound Techniques and Infrasound Data Array Processing

    Science.gov (United States)

    Perttu, A. B.; Williams, R.; Taisne, B.; Tailpied, D.

    2017-12-01

    A multidisciplinary collaboration between earth scientists and a sound designer/composer was established to explore the possibilities of audification analysis of infrasound array data. Through the process of audification of the infrasound we began to experiment with techniques and processes borrowed from cinema to manipulate the noise content of the signal. The results posed the question: "Would the accuracy of infrasound data array processing be enhanced by employing these techniques?" A new area of research was thus born from this collaboration, highlighting the value of such interactions and the unintended paths that can arise from them. Using a reference event database, infrasound data were processed with these new techniques and the results were compared with existing techniques to assess whether there was any improvement in detection capability for the array. With just under one thousand volcanoes, and a high probability of eruption, Southeast Asia offers a unique opportunity to develop and test techniques for regional monitoring of volcanoes with different technologies. While these volcanoes are monitored locally (e.g. seismometer, infrasound, geodetic and geochemistry networks) and remotely (e.g. satellite and infrasound), there are challenges and limitations to the current monitoring capability. Not only is there a high fraction of cloud cover in the region, making plume observation more difficult via satellite, but there have been examples of local monitoring networks and telemetry being destroyed early in the eruptive sequence. The success of local infrasound studies in identifying explosions at volcanoes, and calculating plume heights from these signals, has led to an interest in retrieving source parameters for the purpose of ash modeling with a regional network independent of cloud cover.

  19. Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks

    Science.gov (United States)

    Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias

    2006-01-01

    The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem and interconnect fabric of five leading supercomputers - SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmark (IMB) results to study the performance of 11 MPI communication functions on these systems.

  20. ExonMiner: Web service for analysis of GeneChip Exon array data

    Directory of Open Access Journals (Sweden)

    Imoto Seiya

    2008-11-01

    Background Some splicing isoform-specific transcriptional regulations are related to disease. Therefore, detection of disease-specific splice variations is the first step toward finding disease-specific transcriptional regulations. The Affymetrix Human Exon 1.0 ST Array can measure exon-level expression profiles that are suitable for finding differentially expressed exons at genome-wide scale. However, exon arrays produce massive datasets that are more than we can handle and analyze on a personal computer. Results We have developed ExonMiner, the first all-in-one web service for analysis of exon array data to detect transcripts that have significantly different splicing patterns in two cells, e.g. normal and cancer cells. ExonMiner can perform the following analyses: (1) data normalization, (2) statistical analysis based on two-way ANOVA, (3) finding transcripts with significantly different splice patterns, (4) efficient visualization based on heatmaps and barplots, and (5) meta-analysis to detect exon-level biomarkers. We implemented ExonMiner on a supercomputer system in order to perform genome-wide analysis for more than 300,000 transcripts in exon array data, which has the potential to reveal aberrant splice variations in cancer cells as exon-level biomarkers. Conclusion ExonMiner is well suited for analysis of exon array data and does not require any installation of software except for internet browsers. All users need to do is access the ExonMiner URL http://ae.hgc.jp/exonminer. Users can analyze a full exon array dataset within hours by high-level statistical analysis with a sound theoretical basis that finds aberrant splice variants as biomarkers.
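Analysis step (2), the two-way ANOVA, tests for a condition-by-exon interaction, which is the signature of a splicing difference between the two cell types. A minimal balanced-design sketch of that interaction test (the data shape and effect sizes are invented for illustration; ExonMiner's actual implementation is not given in this summary):

```python
import numpy as np

def interaction_F(data):
    """F-statistic for the interaction term of a balanced two-way ANOVA.
    data[i, j, k] = expression for condition i, exon j, replicate k."""
    a, b, n = data.shape
    grand = data.mean()
    mA = data.mean(axis=(1, 2))   # per-condition means
    mB = data.mean(axis=(0, 2))   # per-exon means
    mAB = data.mean(axis=2)       # cell means
    # interaction sum of squares: cell deviations not explained by main effects
    ss_int = n * np.sum((mAB - mA[:, None] - mB[None, :] + grand) ** 2)
    ss_err = np.sum((data - mAB[:, :, None]) ** 2)
    ms_int = ss_int / ((a - 1) * (b - 1))
    ms_err = ss_err / (a * b * (n - 1))
    return ms_int / ms_err

rng = np.random.default_rng(1)
data = rng.normal(0.0, 0.1, size=(2, 4, 3))   # 2 conditions, 4 exons, 3 reps
data[1, 2] += 5.0   # exon 2 expressed differently in condition 1 only
F = interaction_F(data)
```

A large interaction F for one exon, against stable main effects, flags exactly the exon-level splicing change this record describes.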

  1. Quantum Hamiltonian Physics with Supercomputers

    International Nuclear Information System (INIS)

    Vary, James P.

    2014-01-01

    The vision of solving the nuclear many-body problem in a Hamiltonian framework with fundamental interactions tied to QCD via Chiral Perturbation Theory is gaining support. The goals are to preserve the predictive power of the underlying theory, to test fundamental symmetries with the nucleus as laboratory and to develop new understandings of the full range of complex quantum phenomena. Advances in theoretical frameworks (renormalization and many-body methods) as well as in computational resources (new algorithms and leadership-class parallel computers) signal a new generation of theory and simulations that will yield profound insights into the origins of nuclear shell structure, collective phenomena and complex reaction dynamics. Fundamental discovery opportunities also exist in such areas as physics beyond the Standard Model of Elementary Particles, the transition between hadronic and quark–gluon dominated dynamics in nuclei and signals that characterize dark matter. I will review some recent achievements and present ambitious consensus plans along with their challenges for a coming decade of research that will build new links between theory, simulations and experiment. Opportunities for graduate students to embark upon careers in the fast developing field of supercomputer simulations are also discussed

  2. Quantum Hamiltonian Physics with Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Vary, James P.

    2014-06-15

    The vision of solving the nuclear many-body problem in a Hamiltonian framework with fundamental interactions tied to QCD via Chiral Perturbation Theory is gaining support. The goals are to preserve the predictive power of the underlying theory, to test fundamental symmetries with the nucleus as laboratory and to develop new understandings of the full range of complex quantum phenomena. Advances in theoretical frameworks (renormalization and many-body methods) as well as in computational resources (new algorithms and leadership-class parallel computers) signal a new generation of theory and simulations that will yield profound insights into the origins of nuclear shell structure, collective phenomena and complex reaction dynamics. Fundamental discovery opportunities also exist in such areas as physics beyond the Standard Model of Elementary Particles, the transition between hadronic and quark–gluon dominated dynamics in nuclei and signals that characterize dark matter. I will review some recent achievements and present ambitious consensus plans along with their challenges for a coming decade of research that will build new links between theory, simulations and experiment. Opportunities for graduate students to embark upon careers in the fast developing field of supercomputer simulations are also discussed.

  3. Focal plane array with modular pixel array components for scalability

    Science.gov (United States)

    Kay, Randolph R; Campbell, David V; Shinde, Subhash L; Rienstra, Jeffrey L; Serkland, Darwin K; Holmes, Michael L

    2014-12-09

    A modular, scalable focal plane array is provided as an array of integrated circuit dice, wherein each die includes a given amount of modular pixel array circuitry. The array of dice effectively multiplies the amount of modular pixel array circuitry to produce a larger pixel array without increasing die size. Desired pixel pitch across the enlarged pixel array is preserved by forming die stacks with each pixel array circuitry die stacked on a separate die that contains the corresponding signal processing circuitry. Techniques for die stack interconnections and die stack placement are implemented to ensure that the desired pixel pitch is preserved across the enlarged pixel array.

  4. Coherent 40 Gb/s SP-16QAM and 80 Gb/s PDM-16QAM in an Optimal Supercomputer Optical Switch Fabric

    DEFF Research Database (Denmark)

    Karinou, Fotini; Borkowski, Robert; Zibar, Darko

    2013-01-01

    We demonstrate, for the first time, the feasibility of using 40 Gb/s SP-16QAM and 80 Gb/s PDM-16QAM in an optimized cell switching supercomputer optical interconnect architecture based on semiconductor optical amplifiers as ON/OFF gates.

  5. Cooperative visualization and simulation in a supercomputer environment

    International Nuclear Information System (INIS)

    Ruehle, R.; Lang, U.; Wierse, A.

    1993-01-01

    The article takes a closer look at the requirements imposed by the idea of integrating all the components into a homogeneous software environment. To this end, several methods for the distribution of applications depending on problem type are discussed. The methods currently available at the University of Stuttgart Computer Center for the distribution of applications are further explained. Finally, the aims and characteristics of a European-sponsored project called PAGEIN are explained, which fits perfectly into the line of developments at RUS. The aim of the project is to experiment with future cooperative working modes of aerospace scientists in a high speed distributed supercomputing environment. Project results will have an impact on the development of real future scientific application environments. (orig./DG)

  6. Physics-based signal processing algorithms for micromachined cantilever arrays

    Science.gov (United States)

    Candy, James V; Clague, David S; Lee, Christopher L; Rudd, Robert E; Burnham, Alan K; Tringe, Joseph W

    2013-11-19

    A method of using physics-based signal processing algorithms for micromachined cantilever arrays. The methods utilize deflection of a micromachined cantilever that represents the chemical, biological, or physical element being detected. One embodiment of the method comprises the steps of modeling the deflection of the micromachined cantilever producing a deflection model, sensing the deflection of the micromachined cantilever and producing a signal representing the deflection, and comparing the signal representing the deflection with the deflection model.
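The model-versus-measurement comparison at the heart of this method can be sketched as a least-squares amplitude fit of the sensed deflection to a physics-based template, with the residual indicating how well the model explains the data. The damped-ring template and noise level below are assumptions for illustration, not the patent's actual deflection model:

```python
import numpy as np

def match_to_model(signal, template):
    """Least-squares amplitude fit of a measured deflection signal to a
    physics-based deflection template; returns the fitted amplitude and
    the RMS residual used to judge agreement with the model."""
    amp = np.dot(signal, template) / np.dot(template, template)
    residual = signal - amp * template
    return amp, float(np.sqrt(np.mean(residual ** 2)))

t = np.linspace(0.0, 1.0, 200)
# assumed model: exponentially damped oscillation of the cantilever
template = np.exp(-3.0 * t) * np.sin(2.0 * np.pi * 12.0 * t)
rng = np.random.default_rng(0)
measured = 2.5 * template + rng.normal(0.0, 0.05, t.size)  # synthetic sensor data
amp, rms = match_to_model(measured, template)
```

A small residual with a significant fitted amplitude indicates the deflection matches the modeled chemical, biological, or physical response.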

  7. Vector and parallel processors in computational science

    International Nuclear Information System (INIS)

    Duff, I.S.; Reid, J.K.

    1985-01-01

    This book presents the papers given at a conference which reviewed the new developments in parallel and vector processing. Topics considered at the conference included hardware (array processors, supercomputers), programming languages, software aids, numerical methods (e.g., Monte Carlo algorithms, iterative methods, finite elements, optimization), and applications (e.g., neutron transport theory, meteorology, image processing)

  8. Feynman diagrams sampling for quantum field theories on the QPACE 2 supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Rappl, Florian

    2016-08-01

    This work discusses the application of Feynman diagram sampling in quantum field theories. The method uses a computer simulation to sample the diagrammatic space obtained in a series expansion. For running large physical simulations powerful computers are obligatory, effectively splitting the thesis into two parts. The first part deals with the method of Feynman diagram sampling. Here the theoretical background of the method itself is discussed. Additionally, important statistical concepts and the theory of the strong force, quantum chromodynamics, are introduced. This sets the context of the simulations. We create and evaluate a variety of models to estimate the applicability of diagrammatic methods. The method is then applied to sample the perturbative expansion of the vertex correction. In the end we obtain the value for the anomalous magnetic moment of the electron. The second part looks at the QPACE 2 supercomputer. This includes a short introduction to supercomputers in general, as well as a closer look at the architecture and the cooling system of QPACE 2. Guiding benchmarks of the InfiniBand network are presented. At the core of this part, a collection of best practices and useful programming concepts is outlined, which enables the development of efficient, yet easily portable, applications for the QPACE 2 system.

  9. Processing and display of medical three dimensional arrays of numerical data using octree encoding

    International Nuclear Information System (INIS)

    Amans, J.L.; Darier, P.

    1985-01-01

    Imaging modalities such as X-ray Computerized Tomography (CT), Nuclear Medicine and Nuclear Magnetic Resonance can produce three-dimensional (3-D) arrays of numerical data describing the internal structures of medical objects. The analysis of 3-D data by synthetic generation of realistic images is an important area of computer graphics and imaging. We are currently developing experimental software that allows the analysis, processing and display of 3-D arrays of numerical data organized in a hierarchical data structure using the OCTREE (octal tree) encoding technique, based on a recursive subdivision of the data volume. The OCTREE encoding structure is an extension of the two-dimensional tree structure, the quadtree, developed for image processing applications. Before any operation, the 3-D array of data is OCTREE encoded; thereafter all processing is concerned with the encoded object. The elementary process for the elaboration of a synthetic image includes: conditioning the volume: volume partition (numerical and spatial segmentation), choice of the view-point..., and two-dimensional display, either by spatial integration (radiography) or by shaded surface representation. This paper introduces these different concepts and specifies the advantages of OCTREE encoding techniques in realizing these operations. Furthermore, the application of the OCTREE encoding scheme to the display of 3-D medical volumes generated from multiple CT scans is presented
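The recursive subdivision behind OCTREE encoding is compact to express: a region collapses to a single leaf when all its voxels agree, and otherwise splits into eight octants. A toy sketch on a small binary volume (illustrative, not the authors' implementation, which also handles numerical segmentation thresholds):

```python
import numpy as np

def build_octree(vol):
    """Recursively subdivide a cubic volume of side 2^n; a node collapses
    to a leaf when all its voxels share one value, else it stores its
    eight child octants."""
    if np.all(vol == vol.flat[0]):
        return int(vol.flat[0])              # uniform region -> single leaf
    h = vol.shape[0] // 2
    return [build_octree(vol[x:x + h, y:y + h, z:z + h])
            for x in (0, h) for y in (0, h) for z in (0, h)]

vol = np.zeros((4, 4, 4), dtype=int)
vol[:2, :2, :2] = 1                          # one uniform octant of value 1
tree = build_octree(vol)
```

The payoff for medical volumes is that large homogeneous regions (air, uniform tissue) collapse to single leaves, so later view-point selection and surface shading only traverse the occupied structure.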

  10. Using adaptive antenna array in LTE with MIMO for space-time processing

    Directory of Open Access Journals (Sweden)

    Abdourahamane Ahmed Ali

    2015-04-01

    Methods for improving existing wireless transmission systems are proposed. The mathematical apparatus is presented and validated by models, whose graphs are shown, for using an adaptive antenna array in LTE with MIMO for space-time processing. The results show that the improvements associated with space-time processing have a positive effect on LTE cell size and throughput.
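A basic form of the space-time processing referred to above is delay-and-sum beamforming with a uniform linear array: weights matched to the steering direction pass the desired signal at full gain and attenuate off-axis arrivals. A sketch with illustrative parameters (8 elements, half-wavelength spacing), not the adaptive weight-update scheme from the paper:

```python
import numpy as np

def array_response(n_elements, d_over_lambda, steer_deg, arrival_deg):
    """Normalized power response of a uniform linear array steered to
    `steer_deg` for a narrowband plane wave arriving from `arrival_deg`."""
    k = np.arange(n_elements)

    def steering(theta_deg):
        # per-element phase progression for a plane wave at angle theta
        theta = np.radians(theta_deg)
        return np.exp(2j * np.pi * d_over_lambda * k * np.sin(theta))

    w = steering(steer_deg) / n_elements      # delay-and-sum weights
    return float(np.abs(np.vdot(w, steering(arrival_deg))) ** 2)

on_target = array_response(8, 0.5, 20.0, 20.0)    # look direction
off_target = array_response(8, 0.5, 20.0, -40.0)  # interference direction
```

An adaptive array updates `w` from the received data instead of fixing it, which is what lets it null interferers and enlarge the usable LTE cell.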

  11. A novel, substrate independent three-step process for the growth of uniform ZnO nanorod arrays

    International Nuclear Information System (INIS)

    Byrne, D.; McGlynn, E.; Henry, M.O.; Kumar, K.; Hughes, G.

    2010-01-01

    We report a three-step deposition process for uniform arrays of ZnO nanorods, involving chemical bath deposition of aligned seed layers followed by nanorod nucleation sites and subsequent vapour phase transport growth of nanorods. This combines chemical bath deposition techniques, which enable substrate-independent seeding and nucleation site generation, with vapour phase transport growth of high crystalline and optical quality ZnO nanorod arrays. Our data indicate that the three-step process produces uniform nanorod arrays with narrow and rather monodisperse rod diameters (∼ 70 nm) across substrates of centimetre dimensions. X-ray photoelectron spectroscopy, scanning electron microscopy and X-ray diffraction were used to study the growth mechanism and characterise the nanostructures.

  12. Sampling phased array, a new technique for ultrasonic signal processing and imaging now available to industry

    OpenAIRE

    Verkooijen, J.; Bulavinov, A.

    2008-01-01

    Over the past 10 years improvements in the fields of microelectronics and computer engineering have led to significant advances in ultrasonic signal processing and image construction techniques that are currently being applied to non-destructive material evaluation. A new phased array technique, called "Sampling Phased Array", has been developed at the Fraunhofer Institute for non-destructive testing [1]. It realizes a unique approach to the measurement and processing of ultrasonic signals. The s...

  13. Arrays of surface-normal electroabsorption modulators for the generation and signal processing of microwave photonics signals

    NARCIS (Netherlands)

    Noharet, Bertrand; Wang, Qin; Platt, Duncan; Junique, Stéphane; Marpaung, D.A.I.; Roeloffzen, C.G.H.

    2011-01-01

    The development of an array of 16 surface-normal electroabsorption modulators operating at 1550 nm is presented. The modulator array is dedicated to the generation and processing of microwave photonics signals, targeting a modulation bandwidth in excess of 5 GHz. The hybrid integration of the

  14. Federal Market Information Technology in the Post Flash Crash Era: Roles for Supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Bethel, E. Wes; Leinweber, David; Ruebel, Oliver; Wu, Kesheng

    2011-09-16

    This paper describes collaborative work between active traders, regulators, economists, and supercomputing researchers to replicate and extend investigations of the Flash Crash and other market anomalies in a National Laboratory HPC environment. Our work suggests that supercomputing tools and methods will be valuable to market regulators in achieving the goal of market safety, stability, and security. Research results using high frequency data and analytics are described, and directions for future development are discussed. Currently the key mechanism for preventing catastrophic market action is the "circuit breaker." We believe a more graduated approach, similar to the "yellow light" approach in motorsports to slow down traffic, might be a better way to achieve the same goal. To enable this objective, we study a number of indicators that could foresee hazards in market conditions and explore options to confirm such predictions. Our tests confirm that Volume Synchronized Probability of Informed Trading (VPIN) and a version of the volume Herfindahl-Hirschman Index (HHI) for measuring market fragmentation can indeed give strong signals ahead of the Flash Crash event on May 6, 2010. This is a preliminary step toward a full-fledged early-warning system for unusual market conditions.
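The HHI mentioned above is the sum of squared market shares, here applied to per-venue trading volumes; higher values mean trading is concentrated in fewer venues, lower values mean fragmentation. A minimal sketch with made-up volumes (the paper's exact volume-weighted variant is not specified in this summary):

```python
def herfindahl_hirschman(volumes):
    """Herfindahl-Hirschman Index over per-venue trading volumes:
    the sum of squared market shares. 1.0 = fully concentrated on one
    venue; 1/n = volume spread evenly over n venues."""
    total = float(sum(volumes))
    return sum((v / total) ** 2 for v in volumes)

concentrated = herfindahl_hirschman([900, 50, 25, 25])   # one dominant venue
fragmented = herfindahl_hirschman([250, 250, 250, 250])  # evenly split
```

As an early-warning input, the index would be recomputed over a rolling window so that sudden shifts in fragmentation show up before liquidity evaporates.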

  15. SAR processing with stepped chirps and phased array antennas.

    Energy Technology Data Exchange (ETDEWEB)

    Doerry, Armin Walter

    2006-09-01

    Wideband radar signals are problematic for phased array antennas. Wideband radar signals can be generated from series or groups of narrow-band signals centered at different frequencies. An equivalent wideband LFM chirp can be assembled from lesser-bandwidth chirp segments in the data processing. The chirp segments can be transmitted as separate narrow-band pulses, each with their own steering phase operation. This overcomes the problematic dilemma of steering wideband chirps with phase shifters alone, that is, without true time-delay elements.
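The segment assembly works because a wideband LFM chirp restricted to a sub-interval is itself a narrower-band LFM at a stepped center frequency f0 = K*t0, plus a fixed phase offset pi*K*t0^2 that keeps the phase continuous across segment boundaries. A sketch with unitless illustrative parameters:

```python
import numpy as np

def full_lfm(K, n, fs):
    """Reference wideband linear-FM chirp exp(j*pi*K*t^2), n samples at fs."""
    t = np.arange(n) / fs
    return np.exp(1j * np.pi * K * t ** 2)

def stepped_lfm(K, n, fs, n_seg):
    """Assemble the same chirp from n_seg narrower-band segments: each is
    a short LFM at stepped center frequency K*t0 with the phase offset
    pi*K*t0**2 that keeps the assembled waveform phase-continuous."""
    m = n // n_seg                      # samples per segment
    tau = np.arange(m) / fs             # time within one segment
    parts = []
    for i in range(n_seg):
        t0 = i * m / fs                 # segment start time
        parts.append(np.exp(1j * (np.pi * K * t0 ** 2
                                  + 2.0 * np.pi * (K * t0) * tau
                                  + np.pi * K * tau ** 2)))
    return np.concatenate(parts)

wide = full_lfm(200.0, 1000, 1000.0)
assembled = stepped_lfm(200.0, 1000, 1000.0, 4)
```

Each narrow-band segment can then be transmitted as its own pulse with its own phase-shifter steering setting, which is the point of the technique for phased arrays without true time delay.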

  16. A fast random number generator for the Intel Paragon supercomputer

    Science.gov (United States)

    Gutbrod, F.

    1995-06-01

    A pseudo-random number generator is presented which makes optimal use of the architecture of the i860-microprocessor and which is expected to have a very long period. It is therefore a good candidate for use on the parallel supercomputer Paragon XP. In the assembler version, it needs 6.4 cycles for a real*4 random number. There is a FORTRAN routine which yields identical numbers up to rare and minor rounding discrepancies, and it needs 28 cycles. The FORTRAN performance on other microprocessors is somewhat better. Arguments for the quality of the generator and some numerical tests are given.

  17. Solution processed bismuth sulfide nanowire array core/silver shuffle shell solar cells

    NARCIS (Netherlands)

    Cao, Y.; Bernechea, M.; Maclachlan, A.; Zardetto, V.; Creatore, M.; Haque, S.A.; Konstantatos, G.

    2015-01-01

    Low bandgap inorganic semiconductor nanowires have served as building blocks in solution-processed solar cells to improve their power conversion capacity and reduce fabrication cost. In this work, we report bismuth sulfide nanowire arrays grown from colloidal seeds on a transparent

  18. NASA's Climate in a Box: Desktop Supercomputing for Open Scientific Model Development

    Science.gov (United States)

    Wojcik, G. S.; Seablom, M. S.; Lee, T. J.; McConaughy, G. R.; Syed, R.; Oloso, A.; Kemp, E. M.; Greenseid, J.; Smith, R.

    2009-12-01

    NASA's High Performance Computing Portfolio in cooperation with its Modeling, Analysis, and Prediction program intends to make its climate and earth science models more accessible to a larger community. A key goal of this effort is to open the model development and validation process to the scientific community at large such that a natural selection process is enabled and results in a more efficient scientific process. One obstacle to others using NASA models is the complexity of the models and the difficulty in learning how to use them. This situation applies not only to scientists who regularly use these models but also to non-typical users who may want to use the models, such as scientists from different domains, policy makers, and teachers. Another obstacle to the use of these models is that access to high performance computing (HPC) accounts, on which the models are run, can be restrictive, with long wait times in job queues and delays caused by an arduous process of obtaining an account, especially for foreign nationals. This project explores the utility of desktop supercomputers in providing a complete ready-to-use toolkit of climate research products to investigators and on-demand access to an HPC system. One objective of this work is to pre-package NASA and NOAA models so that new users will not have to spend significant time porting the models. In addition, the prepackaged toolkit will include tools, such as workflow, visualization, social networking web sites, and analysis tools, to assist users in running the models and analyzing the data. The system architecture to be developed will allow for automatic code updates for each user and an effective means with which to deal with the data that are generated. We plan to investigate several desktop systems, but our work to date has focused on a Cray CX1. Currently, we are investigating the potential capabilities of several non-traditional development environments. While most NASA and NOAA models are

  19. Structural control of ultra-fine CoPt nanodot arrays via electrodeposition process

    Energy Technology Data Exchange (ETDEWEB)

    Wodarz, Siggi [Department of Applied Chemistry, Waseda University, Shinjuku, Tokyo 169-8555 (Japan); Hasegawa, Takashi; Ishio, Shunji [Department of Materials Science, Akita University, Akita City 010-8502 (Japan); Homma, Takayuki, E-mail: t.homma@waseda.jp [Department of Applied Chemistry, Waseda University, Shinjuku, Tokyo 169-8555 (Japan)

    2017-05-15

    CoPt nanodot arrays were fabricated by combining electrodeposition and electron beam lithography (EBL) for use in bit-patterned media (BPM). To achieve precise control of the deposition uniformity and coercivity of the CoPt nanodot arrays, their crystal structure and magnetic properties were tuned by controlling the diffusion state of metal ions from the initial deposition stage through the application of bath agitation. With bath agitation, the composition gradient of the CoPt alloy with thickness was mitigated to give a near-ideal alloy composition of Co:Pt = 80:20, which induces epitaxial-like growth from the Ru substrate, resulting in improved crystal orientation of the hcp (002) structure from the initial deposition stages. Furthermore, cross-sectional transmission electron microscope (TEM) analysis of the nanodots deposited with bath agitation showed CoPt growth along its c-axis oriented in the perpendicular direction, with uniform lattice fringes on the hcp (002) plane from the Ru underlayer interface, a significant factor in inducing perpendicular magnetic anisotropy. Magnetic characterization of the CoPt nanodot arrays showed an increase in the perpendicular coercivity and squareness of the hysteresis loops from 2.0 kOe and 0.64 (without agitation) to 4.0 kOe and 0.87 with bath agitation. Based on the detailed characterization of the nanodot arrays, precise crystal structure control of nanodot arrays with ultra-high recording density by an electrochemical process was successfully demonstrated. - Highlights: • Ultra-fine CoPt nanodot arrays were fabricated by electrodeposition. • Crystallinity of hcp (002) was improved with uniform composition formation. • Uniform formation of hcp lattices leads to an increase in the coercivity.

  20. Frequently updated noise threat maps created with use of supercomputing grid

    Directory of Open Access Journals (Sweden)

    Szczodrak Maciej

    2014-09-01

    Full Text Available Innovative supercomputing grid services devoted to noise threat evaluation are presented. The services described in this paper concern two issues: the first is related to noise mapping, while the second focuses on assessment of the noise dose and its influence on the human hearing system. The discussed services were developed within the PL-Grid Plus Infrastructure, which brings together Polish academic supercomputer centers. Selected experimental results achieved through the usage of the proposed services are presented. The assessment of environmental noise threats includes creation of noise maps using either offline or online data, acquired through a grid of monitoring stations. A concept of estimating the source model parameters based on the measured sound level, for the purpose of creating frequently updated noise maps, is presented. Connecting the noise mapping grid service with a distributed sensor network enables automatic updating of noise maps for a specified time period. Moreover, a unique attribute of the developed software is the estimation of the auditory effects evoked by exposure to noise. The estimation method uses a modified psychoacoustic model of hearing and is based on the calculated noise level values and the given exposure period. Potential use scenarios of the grid services for research or educational purposes are introduced. Presentation of the predicted hearing threshold shift caused by exposure to excessive noise can raise public awareness of noise threats.

  1. PFLOTRAN: Reactive Flow & Transport Code for Use on Laptops to Leadership-Class Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Glenn E.; Lichtner, Peter C.; Lu, Chuan; Mills, Richard T.

    2012-04-18

    PFLOTRAN, a next-generation reactive flow and transport code for modeling subsurface processes, has been designed from the ground up to run efficiently on machines ranging from leadership-class supercomputers to laptops. Based on an object-oriented design, the code is easily extensible to incorporate additional processes. It can interface seamlessly with Fortran 9X, C and C++ codes. Domain decomposition parallelism is employed, with the PETSc parallel framework used to manage parallel solvers, data structures and communication. Features of the code include a modular input file, implementation of high-performance I/O using parallel HDF5, ability to perform multiple realization simulations with multiple processors per realization in a seamless manner, and multiple modes for multiphase flow and multicomponent geochemical transport. Chemical reactions currently implemented in the code include homogeneous aqueous complexing reactions and heterogeneous mineral precipitation/dissolution, ion exchange, surface complexation and a multirate kinetic sorption model. PFLOTRAN has demonstrated petascale performance using 2¹⁷ processor cores with over 2 billion degrees of freedom. Accomplishments achieved to date include applications to the Hanford 300 Area and modeling CO₂ sequestration in deep geologic formations.

  2. Seismic array processing and computational infrastructure for improved monitoring of Alaskan and Aleutian seismicity and volcanoes

    Science.gov (United States)

    Lindquist, Kent Gordon

    We constructed a near-real-time system, called Iceworm, to automate seismic data collection, processing, storage, and distribution at the Alaska Earthquake Information Center (AEIC). Phase-picking, phase association, and interprocess communication components come from Earthworm (U.S. Geological Survey). A new generic, internal format for digital data supports unified handling of data from diverse sources. A new infrastructure for applying processing algorithms to near-real-time data streams supports automated information extraction from seismic wavefields. Integration of Datascope (U. of Colorado) provides relational database management of all automated measurements, parametric information for located hypocenters, and waveform data from Iceworm. Data from 1997 yield 329 earthquakes located by both Iceworm and the AEIC. Of these, 203 have location residuals under 22 km, sufficient for hazard response. Regionalized inversions for local magnitude in Alaska yield M_L calibration curves (log A_0) that differ from the Californian Richter magnitude. The new curve is 0.2 M_L units more attenuative than the Californian curve at 400 km for earthquakes north of the Denali fault. South of the fault, and for a region north of Cook Inlet, the difference is 0.4 M_L. A curve for deep events differs by 0.6 M_L at 650 km. We expand geographic coverage of Alaskan regional seismic monitoring to the Aleutians, the Bering Sea, and the entire Arctic by initiating the processing of four short-period, Alaskan seismic arrays. To show the array stations' sensitivity, we detect and locate two microearthquakes that were missed by the AEIC. An empirical study of the location sensitivity of the arrays predicts improvements over the Alaskan regional network that are shown as map-view contour plots. We verify these predictions by detecting an M_L 3.2 event near Unimak Island with one array. The detection and location of four representative earthquakes illustrates the expansion

  3. Performance characteristics of hybrid MPI/OpenMP implementations of NAS parallel benchmarks SP and BT on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2011-01-01

    The NAS Parallel Benchmarks (NPB) are well-known applications with fixed algorithms for evaluating parallel systems and tools. Multicore supercomputers provide a natural programming paradigm for hybrid programs, whereby OpenMP is used for data sharing among the cores that comprise a node and MPI is used for communication between nodes. In this paper, we use the SP and BT benchmarks of MPI NPB 3.3 as a basis for a comparative approach to implement hybrid MPI/OpenMP versions of SP and BT, and compare the performance of the hybrid SP and BT with their MPI counterparts on large-scale multicore supercomputers. Our performance results indicate that the hybrid SP outperforms the MPI SP by up to 20.76%, and the hybrid BT outperforms the MPI BT by up to 8.58%, on up to 10,000 cores on BlueGene/P at Argonne National Laboratory and Jaguar (Cray XT4/5) at Oak Ridge National Laboratory. We also use performance tools and MPI trace libraries available on these supercomputers to further investigate the performance characteristics of the hybrid SP and BT.

  4. Performance characteristics of hybrid MPI/OpenMP implementations of NAS parallel benchmarks SP and BT on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2011-03-29

    The NAS Parallel Benchmarks (NPB) are well-known applications with fixed algorithms for evaluating parallel systems and tools. Multicore supercomputers provide a natural programming paradigm for hybrid programs, whereby OpenMP is used for data sharing among the cores that comprise a node and MPI is used for communication between nodes. In this paper, we use the SP and BT benchmarks of MPI NPB 3.3 as a basis for a comparative approach to implement hybrid MPI/OpenMP versions of SP and BT, and compare the performance of the hybrid SP and BT with their MPI counterparts on large-scale multicore supercomputers. Our performance results indicate that the hybrid SP outperforms the MPI SP by up to 20.76%, and the hybrid BT outperforms the MPI BT by up to 8.58%, on up to 10,000 cores on BlueGene/P at Argonne National Laboratory and Jaguar (Cray XT4/5) at Oak Ridge National Laboratory. We also use performance tools and MPI trace libraries available on these supercomputers to further investigate the performance characteristics of the hybrid SP and BT.

  5. Processing and display of three-dimensional arrays of numerical data using octree encoding

    International Nuclear Information System (INIS)

    Amans, J.L.; Antoine, M.; Darier, P.

    1986-04-01

    The analysis of three-dimensional (3-D) arrays of numerical data from medical, industrial or scientific imaging, by synthetic generation of realistic images, has been widely developed. Octree encoding, which organizes the volume data in a hierarchical tree structure, has several interesting features for processing 3-D data arrays. The Octree encoding method, based on the recursive subdivision of a 3-D array, is an extension of Quadtree encoding in the two-dimensional plane. We have developed a software package to validate the basic Octree encoding methodology for manipulation and display operations on volume data. This contribution introduces the technique we have used (called the "overlay technique") to project an Octree onto a Quadtree-encoded image plane. The application of this technique to hidden-surface display is presented [fr
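
    The recursive subdivision at the heart of Octree encoding can be sketched in a few lines. The following Python sketch (not from the paper; array sizes and values are illustrative) collapses uniform cubic blocks into single leaves and splits mixed blocks into eight octants, with a matching decoder:

```python
import numpy as np

def octree_encode(vol):
    """Encode a cubic 3-D array (side a power of two) as a nested octree:
    a scalar for a uniform block, else a list of 8 encoded octants."""
    if vol.min() == vol.max():                    # uniform block -> leaf
        return vol.flat[0]
    h = vol.shape[0] // 2                         # recursive subdivision
    return [octree_encode(vol[x:x + h, y:y + h, z:z + h])
            for x in (0, h) for y in (0, h) for z in (0, h)]

def octree_decode(node, side):
    """Rebuild the dense array from the nested encoding."""
    if not isinstance(node, list):
        return np.full((side, side, side), node)
    h = side // 2
    out = np.empty((side, side, side))
    octants = [(x, y, z) for x in (0, h) for y in (0, h) for z in (0, h)]
    for i, (x, y, z) in enumerate(octants):
        out[x:x + h, y:y + h, z:z + h] = octree_decode(node[i], h)
    return out

vol = np.zeros((8, 8, 8))
vol[:4, :4, :4] = 1.0                             # exactly one octant differs
tree = octree_encode(vol)
print(len(tree))                                  # 8 octants, each a single leaf
```

    A uniform octant of any size costs one leaf, which is what makes the representation attractive for large, mostly homogeneous volumes.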

  6. Modeling radiative transport in ICF plasmas on an IBM SP2 supercomputer

    International Nuclear Information System (INIS)

    Johansen, J.A.; MacFarlane, J.J.; Moses, G.A.

    1995-01-01

    At the University of Wisconsin-Madison the authors have integrated a collisional-radiative-equilibrium model into their CONRAD radiation-hydrodynamics code. This integrated package allows them to accurately simulate the transport processes involved in ICF plasmas, including the important effects of self-absorption of line radiation. However, as they increase the amount of atomic structure utilized in their transport models, the computational demands increase nonlinearly. In an attempt to meet this increased computational demand, they have recently embarked on a mission to parallelize the CONRAD program. The parallel CONRAD development is being performed on an IBM SP2 supercomputer. The parallelism is based on a message-passing paradigm and is being implemented using PVM. At the present time they have determined that approximately 70% of the sequential program can be executed in parallel. Accordingly, they expect that the parallel version will yield a speedup on the order of three times that of the sequential version. This translates into only 10 hours of execution time for the parallel version, whereas the sequential version required 30 hours

  7. A Parallel Supercomputer Implementation of a Biological Inspired Neural Network and its use for Pattern Recognition

    International Nuclear Information System (INIS)

    De Ladurantaye, Vincent; Lavoie, Jean; Bergeron, Jocelyn; Parenteau, Maxime; Lu Huizhong; Pichevar, Ramin; Rouat, Jean

    2012-01-01

    A parallel implementation of a large spiking neural network is proposed and evaluated. The neural network implements the binding-by-synchrony process using the Oscillatory Dynamic Link Matcher (ODLM). Scalability, speed and performance are compared for two implementations: Message Passing Interface (MPI) and Compute Unified Device Architecture (CUDA), running on clusters of multicore supercomputers and on NVIDIA graphical processing units, respectively. A global spiking list that represents the state of the neural network at each instant is described. This list indexes each neuron that fires during the current simulation time step, so that the influence of their spikes is simultaneously processed on all computing units. Our implementation shows good scalability for very large networks. A complex and large spiking neural network has been implemented in parallel with success, thus paving the road towards real-life applications based on networks of spiking neurons. MPI offers better scalability than CUDA, while the CUDA implementation on a GeForce GTX 285 gives the best cost-to-performance ratio. When running the neural network on the GTX 285, the processing speed is comparable to the MPI implementation on RQCHP's Mammouth parallel supercomputer with 64 nodes (128 cores).
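
    As a rough illustration of the global spiking list idea, the sketch below (hypothetical network size, weights and update rule; a generic leaky-integrate model, not the ODLM itself) keeps, at each step, the indices of all neurons that cross threshold and applies the influence of their spikes to every neuron at once:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                                    # number of neurons (made up)
w = rng.normal(0.0, 0.05, size=(n, n))     # synaptic weights (made up)
threshold, decay, drive = 1.0, 0.95, 0.1

def step(v):
    """One simulation time step organised around a global spiking list."""
    spiking_list = np.flatnonzero(v >= threshold)  # neurons firing now
    v = v * decay + drive                          # leak plus constant input
    v = v + w[:, spiking_list].sum(axis=1)         # apply all spikes at once
    v[spiking_list] = 0.0                          # reset fired neurons
    return v, spiking_list

v = rng.uniform(0.0, 1.0, size=n)
total_spikes = 0
for _ in range(50):
    v, spikes = step(v)
    total_spikes += spikes.size
print(total_spikes > 0)                            # True: the network is active
```

    In a distributed implementation the short spiking list, rather than the full state vector, is what gets exchanged between computing units each step.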

  8. Fabrication of Aligned Polyaniline Nanofiber Array via a Facile Wet Chemical Process.

    Science.gov (United States)

    Sun, Qunhui; Bi, Wu; Fuller, Thomas F; Ding, Yong; Deng, Yulin

    2009-06-17

    In this work, we demonstrate for the first time a template-free approach to synthesize an aligned polyaniline (PANI) nanofiber array on a passivated gold (Au) substrate via a facile wet chemical process. The Au surface was first modified using 4-aminothiophenol (4-ATP) to afford the surface functionality, followed by oxidative polymerization of aniline (AN) monomer in an aqueous medium using ammonium persulfate as the oxidant and tartaric acid as the doping agent. The results show that a vertically aligned PANI nanofiber array with individual fiber diameters of ca. 100 nm, heights of ca. 600 nm, and a packing density of ca. 40 fibers·µm⁻² was synthesized. Copyright © 2009 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Radar techniques using array antennas

    CERN Document Server

    Wirth, Wulf-Dieter

    2013-01-01

    Radar Techniques Using Array Antennas is a thorough introduction to the possibilities of radar technology based on electronically steerable and active array antennas. Topics covered include array signal processing, array calibration, adaptive digital beamforming, adaptive monopulse, superresolution, pulse compression, sequential detection, target detection with long pulse series, space-time adaptive processing (STAP), moving target detection using synthetic aperture radar (SAR), target imaging, energy management and system parameter relations. The discussed methods are confirmed by simulation studies

  10. Frequency Diverse Array Radar Signal Processing via Space-Range-Doppler Focus (SRDF) Method

    Directory of Open Access Journals (Sweden)

    Chen Xiaolong

    2018-04-01

    Full Text Available To meet the urgent demand for low-observable moving target detection in complex environments, a novel Frequency Diverse Array (FDA) radar signal processing method based on Space-Range-Doppler Focusing (SRDF) is proposed in this paper. The current development status of FDA radar, the design of the array structure, beamforming, and joint estimation of distance and angle are systematically reviewed. The extra degrees of freedom provided by FDA radar are fully utilized; these include the degrees of freedom (DOFs) of the transmitted waveform, the locations of the array elements, the coupling of beam azimuth and distance, and the long dwell time, i.e., DOFs in the joint spatial (angle, distance) and frequency (Doppler) dimensions. Simulation results show that the proposed method has the potential to improve target detection and parameter estimation for weak moving targets in complex environments and has broad application prospects in clutter and interference suppression, moving target refinement, etc.
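
    The range-angle coupling that FDA radar exploits can be illustrated numerically. In the simplified narrowband sketch below (assumed carrier, frequency increment and element count, not taken from the paper), each element m transmits at f0 + m·Δf, so the array factor depends on range R as well as on angle θ:

```python
import numpy as np

c = 3e8                       # speed of light, m/s
f0, df = 10e9, 30e3           # carrier and frequency increment (assumed)
M = 16                        # number of elements
d = c / (2 * f0)              # half-wavelength spacing

def fda_af(theta_deg, R):
    """Normalized array factor of a uniform FDA at angle theta and range R."""
    m = np.arange(M)
    u = f0 * d * np.sin(np.radians(theta_deg)) / c   # angle-dependent term
    phase = 2 * np.pi * m * (u - df * R / c)         # range enters via m * df
    return abs(np.exp(1j * phase).sum()) / M

print(round(fda_af(0.0, 0.0), 3))          # 1.0: full coherence at R = 0
print(round(fda_af(0.0, c / (2 * df)), 3)) # 0.0: same angle, different range
```

    At a fixed angle the pattern peaks and nulls as a function of range, which is the range dependence that SRDF processing exploits as an extra DOF.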

  11. Programmable cellular arrays. Faults testing and correcting in cellular arrays

    International Nuclear Information System (INIS)

    Cercel, L.

    1978-03-01

    A review of recent research on programmable cellular arrays in computing and digital information processing systems is presented, covering both combinational and sequential arrays with fully arbitrary behaviour, or which can provide better implementations of specialized blocks such as arithmetic units, counters, comparators, control systems and memory blocks. The paper also presents applications of cellular arrays in microprogramming, in the implementation of a specialized computer for matrix operations, and in the modeling of universal computing systems. The last section deals with problems of fault testing and correction in cellular arrays. (author)

  12. Assembly and Integration Process of the First High Density Detector Array for the Atacama Cosmology Telescope

    Science.gov (United States)

    Li, Yaqiong; Choi, Steve; Ho, Shuay-Pwu; Crowley, Kevin T.; Salatino, Maria; Simon, Sara M.; Staggs, Suzanne T.; Nati, Federico; Wollack, Edward J.

    2016-01-01

    The Advanced ACTPol (AdvACT) upgrade on the Atacama Cosmology Telescope (ACT) consists of multichroic Transition Edge Sensor (TES) detector arrays to measure the Cosmic Microwave Background (CMB) polarization anisotropies in multiple frequency bands. The first AdvACT detector array, sensitive to both 150 and 230 GHz, is fabricated on a 150 mm diameter wafer and read out with a completely different scheme compared to ACTPol. Approximately 2000 TES bolometers are packed into the wafer, leading to much denser detector packing and readout circuitry. The demonstration of the assembly and integration of the AdvACT arrays is important for the next generation of CMB experiments, which will continue to increase pixel number and density. We present the detailed assembly process of the first AdvACT detector array.

  13. High-throughput fabrication of micrometer-sized compound parabolic mirror arrays by using parallel laser direct-write processing

    International Nuclear Information System (INIS)

    Yan, Wensheng; Gu, Min; Cumming, Benjamin P

    2015-01-01

    Micrometer-sized parabolic mirror arrays have significant applications in both light-emitting diodes and solar cells. However, low fabrication throughput has been identified as a major obstacle to large-scale application of the mirror arrays, due to the serial nature of the conventional method. Here, the mirror arrays are fabricated by parallel laser direct-write processing, which removes this barrier. In addition, it is demonstrated that parallel writing can fabricate complex arrays as well as simple ones, and thus offers wider applications. Optical measurements show that each single mirror confines the full-width at half-maximum value to as small as 17.8 μm at a height of 150 μm whilst providing a transmittance of up to 68.3% at a wavelength of 633 nm, in good agreement with the calculated values. (paper)

  14. Plasma turbulence calculations on supercomputers

    International Nuclear Information System (INIS)

    Carreras, B.A.; Charlton, L.A.; Dominguez, N.; Drake, J.B.; Garcia, L.; Leboeuf, J.N.; Lee, D.K.; Lynch, V.E.; Sidikman, K.

    1991-01-01

    Although the single-particle picture of magnetic confinement is helpful in understanding some basic physics of plasma confinement, it does not give a full description. Collective effects dominate plasma behavior. Any analysis of plasma confinement requires a self-consistent treatment of the particles and fields. The general picture is further complicated because the plasma, in general, is turbulent. The study of fluid turbulence is a rather complex field by itself. In addition to the difficulties of classical fluid turbulence, plasma turbulence studies face the problems caused by the induced magnetic turbulence, which couples back to the fluid. Since the fluid is not a perfect conductor, this turbulence can lead to changes in the topology of the magnetic field structure, causing the magnetic field lines to wander radially. Because the plasma fluid flows along field lines, they carry the particles with them, and this enhances the losses caused by collisions. The changes in topology are critical for the plasma confinement. The study of plasma turbulence and the concomitant transport is a challenging problem. Because of the importance of solving the plasma turbulence problem for controlled thermonuclear research, the high complexity of the problem, and the necessity of attacking the problem with supercomputers, the study of plasma turbulence in magnetic confinement devices is a Grand Challenge problem

  15. Fabricating process of hollow out-of-plane Ni microneedle arrays and properties of the integrated microfluidic device

    Science.gov (United States)

    Zhu, Jun; Cao, Ying; Wang, Hong; Li, Yigui; Chen, Xiang; Chen, Di

    2013-07-01

    Although microfluidic devices that integrate microfluidic chips with hollow out-of-plane microneedle arrays have many advantages in transdermal drug delivery applications, difficulties exist in their fabrication due to the special three-dimensional structures of hollow out-of-plane microneedles. A new, cost-effective process for the fabrication of a hollow out-of-plane Ni microneedle array is presented. The integration of PDMS microchips with the Ni hollow microneedle array and the properties of microfluidic devices are also presented. The integrated microfluidic devices provide a new approach for transdermal drug delivery.

  16. Post-processing Free Quantum Random Number Generator Based on Avalanche Photodiode Array

    International Nuclear Information System (INIS)

    Li Yang; Liao Sheng-Kai; Liang Fu-Tian; Shen Qi; Liang Hao; Peng Cheng-Zhi

    2016-01-01

    Quantum random number generators adopting single-photon detection have been restricted by the non-negligible dead time of avalanche photodiodes (APDs). We propose a new approach based on an APD array to improve the generation rate of random numbers significantly. This method compares the detectors' responses to consecutive optical pulses and generates the random sequence. We implement a demonstration experiment to show its simplicity, compactness and scalability. The generated numbers are shown to be unbiased, post-processing free and ready to use, and their randomness is verified using the National Institute of Standards and Technology (NIST) statistical test suite. The random bit generation efficiency is as high as 32.8% and the potential generation rate adopting the 32 × 32 APD array is up to tens of Gbit/s. (paper)
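
    The compare-consecutive-pulses idea can be mimicked in a few lines. The following Python sketch (simulated detections; the click probability and array size are made up, and real APD statistics are more involved) derives one bit from each pixel that clicks on exactly one of two pulses, which removes bias for any click probability:

```python
import numpy as np

rng = np.random.default_rng(1)

def bits_from_pulse_pairs(p_click=0.3, n_pixels=1024, n_pairs=500):
    """Simulated APD-array bit extraction: each pixel sees two consecutive
    pulses; a click on exactly one of them yields a bit (which pulse clicked),
    and ties are discarded, so the output is unbiased for any p_click."""
    click1 = rng.random((n_pairs, n_pixels)) < p_click   # detection map, pulse 1
    click2 = rng.random((n_pairs, n_pixels)) < p_click   # detection map, pulse 2
    usable = click1 ^ click2                             # exactly one click
    return click1[usable].astype(int)

bits = bits_from_pulse_pairs()
print(round(bits.mean(), 1))   # 0.5: unbiased even though p_click = 0.3
```

    This is essentially von Neumann debiasing applied across the array, which is why no post-processing of the output sequence is needed.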

  17. Controllable 3D architectures of aligned carbon nanotube arrays by multi-step processes

    Science.gov (United States)

    Huang, Shaoming

    2003-06-01

    An effective way to fabricate large area three-dimensional (3D) aligned CNTs pattern based on pyrolysis of iron(II) phthalocyanine (FePc) by two-step processes is reported. The controllable generation of different lengths and selective growth of the aligned CNT arrays on metal-patterned (e.g., Ag and Au) substrate are the bases for generating such 3D aligned CNTs architectures. By controlling experimental conditions 3D aligned CNT arrays with different lengths/densities and morphologies/structures as well as multi-layered architectures can be fabricated in large scale by multi-step pyrolysis of FePc. These 3D architectures could have interesting properties and be applied for developing novel nanotube-based devices.

  18. Astronomical Data Processing Using SciQL, an SQL Based Query Language for Array Data

    Science.gov (United States)

    Zhang, Y.; Scheers, B.; Kersten, M.; Ivanova, M.; Nes, N.

    2012-09-01

    SciQL (pronounced as ‘cycle’) is a novel SQL-based array query language for scientific applications with both tables and arrays as first class citizens. SciQL lowers the entrance fee of adopting relational DBMS (RDBMS) in scientific domains, because it includes functionality often only found in mathematics software packages. In this paper, we demonstrate the usefulness of SciQL for astronomical data processing using examples from the Transient Key Project of the LOFAR radio telescope. In particular, how the LOFAR light-curve database of all detected sources can be constructed, by correlating sources across the spatial, frequency, time and polarisation domains.

  19. Principles of Adaptive Array Processing

    Science.gov (United States)

    2006-09-01

    ACE with and without tapering (homogeneous case). These analytical results are less suited to predicting the detection performance of a real system.
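
    Adaptive beamforming of the kind treated in such reports can be sketched with sample-matrix-inversion (MVDR) weights, w proportional to R⁻¹s. The Python example below (a hypothetical 8-element array with one strong interferer; illustrative only, not taken from the report) shows unit gain in the look direction and a null on the interferer:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 8                                       # elements, half-wavelength spacing

def steering(theta_deg):
    """Steering vector of a uniform linear array."""
    return np.exp(1j * np.pi * np.arange(N) * np.sin(np.radians(theta_deg)))

# K training snapshots: strong interferer at 40 degrees plus unit noise
K = 200
amp = (rng.normal(size=K) + 1j * rng.normal(size=K))[:, None]
X = amp * 10 * steering(40.0) + \
    (rng.normal(size=(K, N)) + 1j * rng.normal(size=(K, N))) / np.sqrt(2)

R = X.conj().T @ X / K                      # sample covariance matrix
s = steering(0.0)                           # look direction (broadside)
w = np.linalg.solve(R, s)                   # SMI / MVDR weights, w ~ R^-1 s
w = w / (w.conj() @ s)                      # unit gain in the look direction

def gain(w, theta_deg):
    return abs(w.conj() @ steering(theta_deg))

print(round(gain(w, 0.0), 3))               # 1.0 toward the look direction
print(gain(w, 40.0))                        # far below the conventional sidelobe
```

    The adapted pattern trades a slight distortion elsewhere for a deep null where the training data contained interference, which is the basic mechanism behind ACE-type detectors as well.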

  20. Numerical Simulation of the Diffusion Processes in Nanoelectrode Arrays Using an Axial Neighbor Symmetry Approximation.

    Science.gov (United States)

    Peinetti, Ana Sol; Gilardoni, Rodrigo S; Mizrahi, Martín; Requejo, Felix G; González, Graciela A; Battaglini, Fernando

    2016-06-07

    Nanoelectrode arrays have introduced a completely new battery of devices with fascinating electrocatalytic, sensitivity, and selectivity properties. To understand and predict the electrochemical response of these arrays, a theoretical framework is needed. Cyclic voltammetry is a well-suited experimental technique for understanding the underlying diffusion and kinetic processes. Previous works describing microelectrode arrays have exploited the interelectrode distance to simulate their behavior as the summation of individual electrodes. This approach becomes limited when the size of the electrodes decreases to the nanometer scale, due to their strong radial effect and the consequent overlapping of the diffusional fields. In this work, we present a computational model able to simulate the electrochemical behavior of arrays working either as the summation of individual electrodes or affected by the overlapping of the diffusional fields, without prior assumptions. Our computational model relies on dividing a regular electrode array into cells. In each of them, there is a central electrode surrounded by neighbor electrodes; these neighbor electrodes are transformed into a ring maintaining the same active electrode area as the summation of the closest neighbor electrodes. Using this axial neighbor symmetry approximation, the problem acquires cylindrical symmetry, making it applicable to any diffusion pattern. The model is validated against micro- and nanoelectrode arrays, showing its ability to predict their behavior and therefore to be used as a design tool.
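
    The axial neighbor symmetry approximation reduces to a small geometric calculation: the n nearest neighbors are replaced by a concentric ring at the neighbor distance whose area equals their summed active area. A minimal sketch, with a made-up electrode radius and pitch (six neighbors, as in a hexagonal cell):

```python
import numpy as np

def neighbor_ring(r0, L, n_neighbors=6):
    """Equivalent-ring radii for the axial neighbor symmetry approximation:
    the n nearest neighbor electrodes (radius r0, at distance L) are replaced
    by a concentric ring at L with the same total active area."""
    area = n_neighbors * np.pi * r0**2       # summed neighbor electrode area
    width = area / (2 * np.pi * L)           # ring width giving equal area at L
    return L - width / 2, L + width / 2

# hypothetical geometry: 50 nm electrodes on a 1 um hexagonal pitch
r_in, r_out = neighbor_ring(r0=50e-9, L=1e-6)
print(np.isclose(np.pi * (r_out**2 - r_in**2),
                 6 * np.pi * (50e-9)**2))    # True: the areas match exactly
```

    Since pi·(r_out² − r_in²) = 2·pi·L·width, choosing width = area / (2·pi·L) preserves the area by construction, and the resulting cell is axisymmetric, so a 2-D cylindrical diffusion solver suffices.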

  1. Reactive flow simulations in complex geometries with high-performance supercomputing

    International Nuclear Information System (INIS)

    Rehm, W.; Gerndt, M.; Jahn, W.; Vogelsang, R.; Binninger, B.; Herrmann, M.; Olivier, H.; Weber, M.

    2000-01-01

    In this paper, we report on a modern field code cluster consisting of state-of-the-art reactive Navier-Stokes and reactive Euler solvers that has been developed on vector and parallel supercomputers at the Research Center Jülich. This field code cluster is used for hydrogen safety analyses of technical systems, for example in the fields of nuclear reactor safety and conventional hydrogen demonstration plants with fuel cells. Emphasis is put on the assessment of combustion loads, which could result from slow, fast or rapid flames, including the transition from deflagration to detonation. As proof tests, the special tools have been applied to specific tasks, based on the comparison of experimental and numerical results, which are in reasonable agreement. (author)

  2. Effect of Source, Surfactant, and Deposition Process on Electronic Properties of Nanotube Arrays

    Directory of Open Access Journals (Sweden)

    Dheeraj Jain

    2011-01-01

    Full Text Available The electronic properties of arrays of carbon nanotubes from several different sources are studied; the sources differ in the manufacturing process used and in average properties such as length, diameter, and chirality. We used several common surfactants to disperse each of these nanotubes and then deposited them on Si wafers from their aqueous solutions using dielectrophoresis. Transport measurements were performed to compare and determine the effect of the different surfactants, deposition processes, and synthesis processes on nanotubes synthesized using CVD, CoMoCAT, laser ablation, and HiPCO.

  3. Lightweight solar array blanket tooling, laser welding and cover process technology

    Science.gov (United States)

    Dillard, P. A.

    1983-01-01

    A two phase technology investigation was performed to demonstrate effective methods for integrating 50 micrometer thin solar cells into ultralightweight module designs. During the first phase, innovative tooling was developed which allows lightweight blankets to be fabricated in a manufacturing environment with acceptable yields. During the second phase, the tooling was improved and the feasibility of laser processing of lightweight arrays was confirmed. The development of the cell/interconnect registration tool and interconnect bonding by laser welding is described.

  4. Processes and Materials for Flexible PV Arrays

    National Research Council Canada - National Science Library

    Gierow, Paul

    2002-01-01

    .... A parallel incentive for the development of flexible PV arrays is the possibility of synergistic advantages for certain types of spacecraft, in particular the Solar Thermal Propulsion (STP) Vehicle...

  5. Advanced ACTPol Multichroic Polarimeter Array Fabrication Process for 150 mm Wafers

    Science.gov (United States)

    Duff, S. M.; Austermann, J.; Beall, J. A.; Becker, D.; Datta, R.; Gallardo, P. A.; Henderson, S. W.; Hilton, G. C.; Ho, S. P.; Hubmayr, J.; Koopman, B. J.; Li, D.; McMahon, J.; Nati, F.; Niemack, M. D.; Pappas, C. G.; Salatino, M.; Schmitt, B. L.; Simon, S. M.; Staggs, S. T.; Stevens, J. R.; Van Lanen, J.; Vavagiakis, E. M.; Ward, J. T.; Wollack, E. J.

    2016-08-01

    Advanced ACTPol (AdvACT) is a third-generation cosmic microwave background receiver to be deployed in 2016 on the Atacama Cosmology Telescope (ACT). Spanning five frequency bands from 25 to 280 GHz and having just over 5600 transition-edge sensor (TES) bolometers, this receiver will exhibit increased sensitivity and mapping speed compared to previously fielded ACT instruments. This paper presents the fabrication processes developed by NIST to scale to large arrays of feedhorn-coupled multichroic AlMn-based TES polarimeters on 150-mm diameter wafers. In addition to describing the streamlined fabrication process which enables high yields of densely packed detectors across larger wafers, we report the details of process improvements for sensor (AlMn) and insulator (SiN_x) materials and microwave structures, and the resulting performance improvements.

  6. Harnessing Petaflop-Scale Multi-Core Supercomputing for Problems in Space Science

    Science.gov (United States)

    Albright, B. J.; Yin, L.; Bowers, K. J.; Daughton, W.; Bergen, B.; Kwan, T. J.

    2008-12-01

    The particle-in-cell kinetic plasma code VPIC has been migrated successfully to the world's fastest supercomputer, Roadrunner, a hybrid multi-core platform built by IBM for the Los Alamos National Laboratory. How this was achieved will be described and examples of state-of-the-art calculations in space science, in particular, the study of magnetic reconnection, will be presented. With VPIC on Roadrunner, we have performed, for the first time, plasma PIC calculations with over one trillion particles, >100× larger than calculations considered "heroic" by community standards. This allows examination of physics at unprecedented scale and fidelity. Roadrunner is an example of an emerging paradigm in supercomputing: the trend toward multi-core systems with deep hierarchies and where memory bandwidth optimization is vital to achieving high performance. Getting VPIC to perform well on such systems is a formidable challenge: the core algorithm is memory bandwidth limited with low compute-to-data ratio and requires random access to memory in its inner loop. That we were able to get VPIC to perform and scale well, achieving >0.374 Pflop/s and linear weak scaling on real physics problems on up to the full 12240-core Roadrunner machine, bodes well for harnessing these machines for our community's needs in the future. Many of the design considerations encountered carry over to other multi-core and accelerated (e.g., via GPU) platforms and we modified VPIC with flexibility in mind. These will be summarized and strategies for how one might adapt a code for such platforms will be shared. Work performed under the auspices of the U.S. DOE by the LANS LLC Los Alamos National Laboratory. Dr. Bowers is a LANL Guest Scientist; he is presently at D. E. Shaw Research LLC, 120 W 45th Street, 39th Floor, New York, NY 10036.

  7. Use of QUADRICS supercomputer as embedded simulator in emergency management systems

    International Nuclear Information System (INIS)

    Bove, R.; Di Costanzo, G.; Ziparo, A.

    1996-07-01

    The experience gained in implementing MRBT, an atmospheric dispersion model for short-duration releases, on a QUADRICS-Q1 supercomputer is reported. A description of the MRBT model is given first. It is an analytical model for studying the spreading of light gases released into the atmosphere by accidental releases. The solution of the diffusion equation is Gaussian-like and yields the concentration of the released pollutant substance as a function of space and time. The QUADRICS architecture is then introduced and the implementation of the model is described. Finally, the integration of the QUADRICS-based model as an embedded simulator in an emergency management system is considered
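The Gaussian-like solution of the diffusion equation mentioned above can be sketched in a few lines. The function below is a generic ground-reflected Gaussian plume with illustrative parameter names and values, not the MRBT implementation itself:

```python
import math

def gaussian_plume(y, z, q, u, sigma_y, sigma_z, h):
    """Ground-reflected Gaussian plume concentration (mass per volume).

    q: emission rate, u: wind speed, h: effective release height;
    sigma_y, sigma_z: dispersion coefficients at the downwind distance
    of interest. Illustrative only -- not the MRBT model itself.
    """
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    # Reflection at the ground adds an image source at height -h.
    vertical = (math.exp(-(z - h)**2 / (2 * sigma_z**2)) +
                math.exp(-(z + h)**2 / (2 * sigma_z**2)))
    return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical
```

The concentration is symmetric about the plume centerline in the crosswind direction, which makes the formula easy to sanity-check.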

  8. MILC Code Performance on High End CPU and GPU Supercomputer Clusters

    Science.gov (United States)

    DeTar, Carleton; Gottlieb, Steven; Li, Ruizi; Toussaint, Doug

    2018-03-01

    With recent developments in parallel supercomputing architecture, many-core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, memory hierarchy, and programming complexity. It has been necessary to adapt the MILC code to these new processors starting with NVIDIA GPUs, and more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.

  9. MILC Code Performance on High End CPU and GPU Supercomputer Clusters

    Directory of Open Access Journals (Sweden)

    DeTar Carleton

    2018-01-01

    Full Text Available With recent developments in parallel supercomputing architecture, many-core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, memory hierarchy, and programming complexity. It has been necessary to adapt the MILC code to these new processors starting with NVIDIA GPUs, and more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.

  10. SUPERCOMPUTER SIMULATION OF CRITICAL PHENOMENA IN COMPLEX SOCIAL SYSTEMS

    Directory of Open Access Journals (Sweden)

    Petrus M.A. Sloot

    2014-09-01

    Full Text Available The paper describes the problem of computer simulation of critical phenomena in complex social systems on petascale computing systems within the complex networks approach. A three-layer system of nested models of complex networks is proposed, including an aggregated analytical model to identify critical phenomena, a detailed model of individualized network dynamics, and a model to adjust the topological structure of a complex network. A scalable parallel algorithm covering all layers of complex network simulation is proposed. Performance of the algorithm is studied on different supercomputing systems. The issues of software and information infrastructure for complex network simulation are discussed, including organization of distributed calculations, crawling of data from social networks, and visualization of results. Applications of the developed methods and technologies are considered, including simulation of criminal network disruption, fast rumor spreading in social networks, evolution of financial networks, and epidemic spreading.
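The rumor-spreading application mentioned above reduces, in its simplest deterministic form, to a reachability computation on the network. A toy stand-in for the detailed network-dynamics layer (the adjacency list and seed node are hypothetical):

```python
from collections import deque

def rumor_reach(adj, seed):
    """Breadth-first spread of a rumor from `seed`: returns the set of
    nodes eventually reached. A toy deterministic stand-in for the
    individualized network-dynamics model described in the abstract."""
    seen = {seed}
    queue = deque([seed])
    while queue:
        node = queue.popleft()
        for neighbor in adj.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen
```

A realistic model would attach per-edge transmission probabilities and time steps; the reachable set computed here is the upper bound such a stochastic process can attain.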

  11. Lectures in Supercomputational Neurosciences Dynamics in Complex Brain Networks

    CERN Document Server

    Graben, Peter beim; Thiel, Marco; Kurths, Jürgen

    2008-01-01

    Computational Neuroscience is a burgeoning field of research where only the combined effort of neuroscientists, biologists, psychologists, physicists, mathematicians, computer scientists, engineers and other specialists, e.g. from linguistics and medicine, seems able to expand the limits of our knowledge. The present volume is an introduction, largely from the physicists' perspective, to the subject matter with in-depth contributions by system neuroscientists. A conceptual model for complex networks of neurons is introduced that incorporates many important features of the real brain, such as various types of neurons, various brain areas, inhibitory and excitatory coupling and the plasticity of the network. The computational implementation on supercomputers, which is introduced and discussed in detail in this book, will enable the readers to modify and adapt the algorithm for their own research. Worked-out examples of applications are presented for networks of Morris-Lecar neurons to model the cortical co...

  12. Conversion of electromagnetic energy in Z-pinch process of single planar wire arrays at 1.5 MA

    International Nuclear Information System (INIS)

    Liangping, Wang; Mo, Li; Juanjuan, Han; Ning, Guo; Jian, Wu; Aici, Qiu

    2014-01-01

    The electromagnetic energy conversion in the Z-pinch process of single planar wire arrays was studied on the Qiangguang generator (1.5 MA, 100 ns). Electrical diagnostics were established to monitor the voltage of the cathode-anode gap and the load current for calculating the electromagnetic energy. A lumped-element circuit model of wire arrays was employed to analyze the electromagnetic energy conversion. The inductance as well as the resistance of a wire array during the Z-pinch process was also investigated. Experimental data indicate that, before the final stagnation, the electromagnetic energy is mainly converted to magnetic and kinetic energy, while ohmic heating can be neglected. The kinetic energy can be responsible for the x-ray radiation before the peak power. After the stagnation, the electromagnetic energy coupled by the load continues increasing and the resistance of the load achieves its maximum of 0.6–1.0 Ω in about 10–20 ns
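The energy bookkeeping described above, electromagnetic energy accumulated from the measured gap voltage and load current, plus magnetic energy from the lumped-element inductance, can be sketched as follows (a generic trapezoidal integration over sampled waveforms, not the authors' analysis code):

```python
def coupled_energy(t, v, i):
    """Cumulative electromagnetic energy delivered to the load,
    E(t) = integral of V(t')*I(t') dt', via the trapezoidal rule.
    t, v, i are equal-length sample lists (s, V, A)."""
    energy = [0.0]
    for k in range(1, len(t)):
        p_prev, p_curr = v[k - 1] * i[k - 1], v[k] * i[k]
        energy.append(energy[-1] + 0.5 * (p_prev + p_curr) * (t[k] - t[k - 1]))
    return energy

def magnetic_energy(inductance, current):
    """Magnetic energy stored in the load inductance, W = L*I^2/2."""
    return 0.5 * inductance * current**2
```

Subtracting the magnetic term (from the circuit-model inductance) from the coupled energy is one way to estimate the kinetic-plus-ohmic share before stagnation.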

  13. Power and thermal efficient numerical processing

    DEFF Research Database (Denmark)

    Liu, Wei; Nannarelli, Alberto

    2015-01-01

    Numerical processing is at the core of applications in many areas ranging from scientific and engineering calculations to financial computing. These applications are usually executed on large servers or supercomputers to exploit their high speed, high level of parallelism and high bandwidth...

  14. Array processor architecture

    Science.gov (United States)

    Barnes, George H. (Inventor); Lundstrom, Stephen F. (Inventor); Shafer, Philip E. (Inventor)

    1983-01-01

    A high speed parallel array data processing architecture fashioned under a computational envelope approach includes a data base memory for secondary storage of programs and data, and a plurality of memory modules interconnected to a plurality of processing modules by a connection network of the Omega gender. Programs and data are fed from the data base memory to the plurality of memory modules, and from there the programs are fed through the connection network to the array of processors (one copy of each program for each processor). Execution of the programs occurs with the processors operating normally quite independently of each other in a multiprocessing fashion. For data dependent operations and other suitable operations, all processors are instructed to finish one given task or program branch before all are instructed to proceed in parallel processing fashion on the next instruction. Even when functioning in the parallel processing mode, however, the processors are not run in lock-step but execute their own copy of the program individually unless or until another overall processor array synchronization instruction is issued.
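The synchronization behavior this patent abstract describes, processors running their own program copies independently until an array-wide instruction forces all of them to finish a task before any proceeds, is essentially a barrier. A minimal sketch using Python threads (the worker logic and sizes are hypothetical, chosen only to make the two phases visible):

```python
import threading

NUM_WORKERS = 4
barrier = threading.Barrier(NUM_WORKERS)
partial = [0] * NUM_WORKERS
totals = [0] * NUM_WORKERS

def worker(rank):
    # Phase 1: each "processor" executes its own program copy independently.
    partial[rank] = rank * rank
    # Array-wide synchronization point: nobody proceeds until all finish.
    barrier.wait()
    # Phase 2: data-dependent work can now safely read every partial result.
    totals[rank] = sum(partial)

threads = [threading.Thread(target=worker, args=(r,)) for r in range(NUM_WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

After the join, every worker has computed the same total (0 + 1 + 4 + 9 = 14), which would not be guaranteed without the barrier.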

  15. Symbolic simulation of engineering systems on a supercomputer

    International Nuclear Information System (INIS)

    Ragheb, M.; Gvillo, D.; Makowitz, H.

    1986-01-01

    Model-Based Production-Rule systems for analysis are developed for the symbolic simulation of Complex Engineering systems on a CRAY X-MP Supercomputer. The Fault-Tree and Event-Tree Analysis methodologies from Systems-Analysis are used for problem representation and are coupled to the Rule-Based System Paradigm from Knowledge Engineering to provide modelling of engineering devices. Modelling is based on knowledge of the structure and function of the device rather than on human expertise alone. To implement the methodology, we developed a Production-Rule Analysis System that uses both backward-chaining and forward-chaining: HAL-1986. The inference engine uses an Induction-Deduction-Oriented antecedent-consequent logic and is programmed in Portable Standard Lisp (PSL). The inference engine is general and can accommodate general modifications and additions to the knowledge base. The methodologies used will be demonstrated using a model for the identification of faults, and subsequent recovery from abnormal situations in Nuclear Reactor Safety Analysis. The use of the exposed methodologies for the prognostication of future device responses under operational and accident conditions using coupled symbolic and procedural programming is discussed

  16. Increasing the specificity and function of DNA microarrays by processing arrays at different stringencies

    DEFF Research Database (Denmark)

    Dufva, Martin; Petersen, Jesper; Poulsen, Lena

    2009-01-01

    DNA microarrays have for a decade been the only platform for genome-wide analysis and have provided a wealth of information about living organisms. DNA microarrays are processed today under one condition only, which puts large demands on assay development because all probes on the array need to f...

  17. Real-time data acquisition and parallel data processing solution for TJ-II Bolometer arrays diagnostic

    Energy Technology Data Exchange (ETDEWEB)

    Barrera, E. [Departamento de Sistemas Electronicos y de Control, Universidad Politecnica de Madrid, Crta. Valencia Km. 7, 28031 Madrid (Spain)]. E-mail: eduardo.barrera@upm.es; Ruiz, M. [Grupo de Investigacion en Instrumentacion y Acustica Aplicada, Universidad Politecnica de Madrid, Crta. Valencia Km. 7, 28031 Madrid (Spain); Lopez, S. [Departamento de Sistemas Electronicos y de Control, Universidad Politecnica de Madrid, Crta. Valencia Km. 7, 28031 Madrid (Spain); Machon, D. [Departamento de Sistemas Electronicos y de Control, Universidad Politecnica de Madrid, Crta. Valencia Km. 7, 28031 Madrid (Spain); Vega, J. [Asociacion EURATOM/CIEMAT para Fusion, 28040 Madrid (Spain); Ochando, M. [Asociacion EURATOM/CIEMAT para Fusion, 28040 Madrid (Spain)

    2006-07-15

    Maps of local plasma emissivity of TJ-II plasmas are determined using three-array cameras of silicon photodiodes (AXUV type from IRD). They are assigned to the top and side ports of the same sector of the vacuum vessel. Each array consists of 20 unfiltered detectors. The signals from each of these detectors are the inputs to an iterative algorithm of tomographic reconstruction. Currently, these signals are acquired by a PXI standard system at approximately 50 kS/s with 12 bits of resolution, and are stored for off-line processing. A 0.5 s discharge generates 3 Mbytes of raw data. The algorithm's load exceeds the CPU capacity of the PXI system's controller in continuous mode, making it infeasible to process the samples in parallel with their acquisition in a PXI standard system. A new architecture model has been developed, making it possible to add one or several processing cards to a standard PXI system. With this model, it is possible to define how to distribute, in real-time, the data from all acquired signals in the system among the processing cards and the PXI controller. This way, by distributing the data processing among the system controller and two processing cards, the processing can be done in parallel with the acquisition. Hence, this system configuration would be able to measure even in long-pulse devices.
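The quoted figures are self-consistent: three 20-detector arrays sampled at 50 kS/s for a 0.5 s discharge, with 12-bit samples stored in 16-bit words, give the 3 Mbytes of raw data mentioned above. A sketch of the arithmetic, plus a hypothetical round-robin split of channels among the controller and two processing cards (the split policy is an assumption, not the paper's actual scheduler):

```python
CHANNELS = 3 * 20          # three AXUV arrays of 20 unfiltered detectors
RATE = 50_000              # samples per second per channel (approx.)
BYTES_PER_SAMPLE = 2       # 12-bit samples stored in 16-bit words
PULSE = 0.5                # discharge length in seconds

raw_bytes = int(CHANNELS * RATE * BYTES_PER_SAMPLE * PULSE)

# Hypothetical round-robin assignment of channels to the PXI controller
# and two add-in processing cards, mirroring the distributed model.
units = ["controller", "card0", "card1"]
assignment = {u: [c for c in range(CHANNELS) if c % len(units) == i]
              for i, u in enumerate(units)}
```

Each processing unit ends up with 20 channels, so the per-unit load scales down by the number of cards added.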

  18. Fabrication process for CMUT arrays with polysilicon electrodes, nanometre precision cavity gaps and through-silicon vias

    International Nuclear Information System (INIS)

    Due-Hansen, J; Poppe, E; Summanwar, A; Jensen, G U; Breivik, L; Wang, D T; Schjølberg-Henriksen, K; Midtbø, K

    2012-01-01

    Capacitive micromachined ultrasound transducers (CMUTs) can be used to realize miniature ultrasound probes. Through-silicon vias (TSVs) allow for close integration of the CMUT and read-out electronics. A fabrication process enabling the realization of a CMUT array with TSVs is being developed. The integrated process requires the formation of highly doped polysilicon electrodes with low surface roughness. A process for polysilicon film deposition, doping, CMP, RIE and thermal annealing that resulted in a film with sheet resistance of 4.0 Ω/□ and a surface roughness of 1 nm rms has been developed. The surface roughness of the polysilicon film was found to increase with higher phosphorus concentrations. The surface roughness also increased when oxygen was present in the thermal annealing ambient. The RIE process for etching CMUT cavities in the doped polysilicon gave a mean etch depth of 59.2 ± 3.9 nm and a uniformity across the wafer ranging from 1.0 to 4.7%. The two presented processes are key processes that enable the fabrication of CMUT arrays suitable for applications in, for instance, intravascular cardiology and gastrointestinal imaging. (paper)

  19. Free-running ADC- and FPGA-based signal processing method for brain PET using GAPD arrays

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Wei [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Department of Nuclear Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, 50 Ilwon-Dong, Gangnam-Gu, Seoul 135-710 (Korea, Republic of); Choi, Yong, E-mail: ychoi.image@gmail.com [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Hong, Key Jo [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Kang, Jihoon [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Department of Nuclear Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, 50 Ilwon-Dong, Gangnam-Gu, Seoul 135-710 (Korea, Republic of); Jung, Jin Ho [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Huh, Youn Suk [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Department of Nuclear Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, 50 Ilwon-Dong, Gangnam-Gu, Seoul 135-710 (Korea, Republic of); Lim, Hyun Keong; Kim, Sang Su [Department of Electronic Engineering, Sogang University, 1 Shinsu-Dong, Mapo-Gu, Seoul 121-742 (Korea, Republic of); Kim, Byung-Tae [Department of Nuclear Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, 50 Ilwon-Dong, Gangnam-Gu, Seoul 135-710 (Korea, Republic of); Chung, Yonghyun [Department of Radiological Science, Yonsei University College of Health Science, 234 Meaji, Heungup Wonju, Kangwon-Do 220-710 (Korea, Republic of)

    2012-02-01

    Currently, for most photomultiplier tube (PMT)-based PET systems, constant fraction discriminators (CFD) and time-to-digital converters (TDC) have been employed to detect gamma ray signal arrival time, whereas Anger logic circuits and peak detection analog-to-digital converters (ADCs) have been implemented to acquire position and energy information of detected events. Compared to PMTs, Geiger-mode avalanche photodiodes (GAPDs) have a variety of advantages, such as compactness, low bias voltage requirement and MRI compatibility. Furthermore, the individual read-out method using a GAPD array coupled 1:1 with a scintillator array can provide better image uniformity than can be achieved using PMT and Anger logic circuits. Recently, a brain PET using 72 GAPD arrays (4 × 4 array, pixel size: 3 mm × 3 mm) coupled 1:1 with LYSO scintillators (4 × 4 array, pixel size: 3 mm × 3 mm × 20 mm) has been developed for simultaneous PET/MRI imaging in our laboratory. Eighteen 64:1 position decoder circuits (PDCs) were used to reduce the GAPD channel number and three off-the-shelf free-running ADC and field programmable gate array (FPGA) combined data acquisition (DAQ) cards were used for data acquisition and processing. In this study, a free-running ADC- and FPGA-based signal processing method was developed for the detection of gamma ray signal arrival time, energy and position information together for each GAPD channel. In the method developed herein, three DAQ cards continuously acquired 18 channels of pre-amplified analog gamma ray signals and 108-bit digital addresses from 18 PDCs. In the FPGA, the digitized gamma ray pulses and digital addresses were processed to generate data packages containing pulse arrival time, baseline value, energy value and GAPD channel ID. Finally, these data packages were saved to a 128 Mbyte on-board synchronous dynamic random access memory (SDRAM) and

  20. Why genetic information processing could have a quantum basis

    Indian Academy of Sciences (India)

    Unknown

    Centre for Theoretical Studies and Supercomputer Education and Research Centre, ... the parent to the offspring, sensory information conveyed by the sense organ to the .... The task involved in genetic information processing is. ASSEMBLY.

  1. Signal processing for solar array monitoring, fault detection, and optimization

    CERN Document Server

    Braun, Henry; Spanias, Andreas

    2012-01-01

    Although the solar energy industry has experienced rapid growth recently, high-level management of photovoltaic (PV) arrays has remained an open problem. As sensing and monitoring technology continues to improve, there is an opportunity to deploy sensors in PV arrays in order to improve their management. In this book, we examine the potential role of sensing and monitoring technology in a PV context, focusing on the areas of fault detection, topology optimization, and performance evaluation/data visualization. First, several types of commonly occurring PV array faults are considered and detection algorithms are described. Next, the potential for dynamic optimization of an array's topology is discussed, with a focus on mitigation of fault conditions and optimization of power output under non-fault conditions. Finally, monitoring system design considerations such as type and accuracy of measurements, sampling rate, and communication protocols are considered. It is our hope that the benefits of monitoring presen...
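As one concrete, and deliberately generic, illustration of the fault-detection problem described above, a string whose output current deviates strongly from the array median can be flagged as a candidate fault. This is a simple outlier check under stated assumptions, not one of the book's specific algorithms:

```python
def flag_faulty_strings(currents, tolerance=0.2):
    """Flag PV strings whose current deviates from the array median by
    more than `tolerance` (as a fraction of the median). Assumes all
    strings see similar irradiance; a generic illustrative check only."""
    ordered = sorted(currents)
    n = len(ordered)
    median = (ordered[n // 2] if n % 2
              else 0.5 * (ordered[n // 2 - 1] + ordered[n // 2]))
    return [idx for idx, amps in enumerate(currents)
            if abs(amps - median) > tolerance * median]
```

Real monitoring systems must also account for partial shading and sensor noise, which is exactly why the book's discussion of measurement accuracy and sampling rate matters.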

  2. Dependently typed array programs don’t go wrong

    NARCIS (Netherlands)

    Trojahner, K.; Grelck, C.

    2009-01-01

    The array programming paradigm adopts multidimensional arrays as the fundamental data structures of computation. Array operations process entire arrays instead of just single elements. This makes array programs highly expressive and introduces data parallelism in a natural way. Array programming

  3. Dependently typed array programs don't go wrong

    NARCIS (Netherlands)

    Trojahner, K.; Grelck, C.

    2008-01-01

    The array programming paradigm adopts multidimensional arrays as the fundamental data structures of computation. Array operations process entire arrays instead of just single elements. This makes array programs highly expressive and introduces data parallelism in a natural way. Array programming

  4. High Performance Simulation of Large-Scale Red Sea Ocean Bottom Seismic Data on the Supercomputer Shaheen II

    KAUST Repository

    Tonellot, Thierry

    2017-02-27

    A combination of both shallow and deepwater, plus islands and coral reefs, are some of the main features contributing to the complexity of subsalt seismic exploration in the Red Sea transition zone. These features often result in degrading effects on seismic images. State-of-the-art ocean bottom acquisition technologies are therefore required to record seismic data with optimal fold and offset, as well as advanced processing and imaging techniques. Numerical simulations of such complex seismic data can help improve acquisition design and also help in customizing, validating and benchmarking the processing and imaging workflows that will be applied on the field data. Subsequently, realistic simulation of wave propagation is a computationally intensive process requiring a realistic model and an efficient 3D wave equation solver. Large-scale computing resources are also required to meet turnaround time compatible with a production time frame. In this work, we present the numerical simulation of an ocean bottom seismic survey to be acquired in the Red Sea transition zone starting in summer 2016. The survey's acquisition geometry comprises nearly 300,000 unique shot locations and 21,000 unique receiver locations, covering about 760 km2. Using well log measurements and legacy 2D seismic lines in this area, a 3D P-wave velocity model was built, with a maximum depth of 7 km. The model was sampled at 10 m in each direction, resulting in more than 5 billion cells. Wave propagation in this model was performed using a 3D finite difference solver in the time domain based on a staggered grid velocity-pressure formulation of acoustodynamics. To ensure that the resulting data could be generated sufficiently fast, the King Abdullah University of Science and Technology (KAUST) supercomputer Shaheen II Cray XC40 was used. A total of 21,000 three-component (pressure and vertical and horizontal velocity) common receiver gathers with a 50 Hz maximum frequency were computed in less

  5. High Performance Simulation of Large-Scale Red Sea Ocean Bottom Seismic Data on the Supercomputer Shaheen II

    KAUST Repository

    Tonellot, Thierry; Etienne, Vincent; Gashawbeza, Ewenet; Curiel, Emesto Sandoval; Khan, Azizur; Feki, Saber; Kortas, Samuel

    2017-01-01

    A combination of both shallow and deepwater, plus islands and coral reefs, are some of the main features contributing to the complexity of subsalt seismic exploration in the Red Sea transition zone. These features often result in degrading effects on seismic images. State-of-the-art ocean bottom acquisition technologies are therefore required to record seismic data with optimal fold and offset, as well as advanced processing and imaging techniques. Numerical simulations of such complex seismic data can help improve acquisition design and also help in customizing, validating and benchmarking the processing and imaging workflows that will be applied on the field data. Subsequently, realistic simulation of wave propagation is a computationally intensive process requiring a realistic model and an efficient 3D wave equation solver. Large-scale computing resources are also required to meet turnaround time compatible with a production time frame. In this work, we present the numerical simulation of an ocean bottom seismic survey to be acquired in the Red Sea transition zone starting in summer 2016. The survey's acquisition geometry comprises nearly 300,000 unique shot locations and 21,000 unique receiver locations, covering about 760 km2. Using well log measurements and legacy 2D seismic lines in this area, a 3D P-wave velocity model was built, with a maximum depth of 7 km. The model was sampled at 10 m in each direction, resulting in more than 5 billion cells. Wave propagation in this model was performed using a 3D finite difference solver in the time domain based on a staggered grid velocity-pressure formulation of acoustodynamics. To ensure that the resulting data could be generated sufficiently fast, the King Abdullah University of Science and Technology (KAUST) supercomputer Shaheen II Cray XC40 was used. 
A total of 21,000 three-component (pressure and vertical and horizontal velocity) common receiver gathers with a 50 Hz maximum frequency were computed in less than

  6. The TESS Science Processing Operations Center

    Science.gov (United States)

    Jenkins, Jon M.; Twicken, Joseph D.; McCauliff, Sean; Campbell, Jennifer; Sanderfer, Dwight; Lung, David; Mansouri-Samani, Masoud; Girouard, Forrest; Tenenbaum, Peter; Klaus, Todd; hide

    2016-01-01

    The Transiting Exoplanet Survey Satellite (TESS) will conduct a search for Earth's closest cousins starting in early 2018 and is expected to discover approximately 1,000 small planets with planetary radii less than 4 Earth radii and measure the masses of at least 50 of these small worlds. The Science Processing Operations Center (SPOC) is being developed at NASA Ames Research Center based on the Kepler science pipeline and will generate calibrated pixels and light curves on the NASA Advanced Supercomputing Division's Pleiades supercomputer. The SPOC will also search for periodic transit events and generate validation products for the transit-like features in the light curves. All TESS SPOC data products will be archived to the Mikulski Archive for Space Telescopes (MAST).

  7. A Spatiotemporal Indexing Approach for Efficient Processing of Big Array-Based Climate Data with MapReduce

    Science.gov (United States)

    Li, Zhenlong; Hu, Fei; Schnase, John L.; Duffy, Daniel Q.; Lee, Tsengdar; Bowen, Michael K.; Yang, Chaowei

    2016-01-01

    Climate observations and model simulations are producing vast amounts of array-based spatiotemporal data. Efficient processing of these data is essential for assessing global challenges such as climate change, natural disasters, and diseases. This is challenging not only because of the large data volume, but also because of the intrinsic high-dimensional nature of geoscience data. To tackle this challenge, we propose a spatiotemporal indexing approach to efficiently manage and process big climate data with MapReduce in a highly scalable environment. Using this approach, big climate data are directly stored in a Hadoop Distributed File System in its original, native file format. A spatiotemporal index is built to bridge the logical array-based data model and the physical data layout, which enables fast data retrieval when performing spatiotemporal queries. Based on the index, a data-partitioning algorithm is applied to enable MapReduce to achieve high data locality, as well as balancing the workload. The proposed indexing approach is evaluated using the National Aeronautics and Space Administration (NASA) Modern-Era Retrospective Analysis for Research and Applications (MERRA) climate reanalysis dataset. The experimental results show that the index can significantly accelerate querying and processing (10× speedup compared to the baseline test using the same computing cluster), while keeping the index-to-data ratio small (0.0328). The applicability of the indexing approach is demonstrated by a climate anomaly detection deployed on a NASA Hadoop cluster. This approach is also able to support efficient processing of general array-based spatiotemporal data in various geoscience domains without special configuration on a Hadoop cluster.
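The index described above bridges logical array coordinates and physical data locations. A minimal sketch of the idea: a coarse spatiotemporal grid key plus hash partitioning of cells across workers for balanced MapReduce load. The cell size and key layout here are illustrative assumptions, not MERRA's actual scheme:

```python
def grid_key(lat, lon, t_index, cell_deg=10.0):
    """Map a (lat, lon, time-step) triple to a coarse spatiotemporal
    grid-cell id. Such a key lets a query touch only the file blocks
    whose cells intersect the query region. Cell size is an assumption."""
    row = int((lat + 90.0) // cell_deg)
    col = int((lon + 180.0) // cell_deg)
    return (t_index, row, col)

def partition(keys, n_workers):
    """Hash-partition grid cells across workers so each map task reads
    a roughly even share of cells (a simple load-balancing stand-in)."""
    buckets = [[] for _ in range(n_workers)]
    for key in keys:
        buckets[hash(key) % n_workers].append(key)
    return buckets
```

A production index would additionally record byte offsets per cell within each native-format file, which is what makes retrieval fast without reformatting the data.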

  8. De Novo Ultrascale Atomistic Simulations On High-End Parallel Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Nakano, A; Kalia, R K; Nomura, K; Sharma, A; Vashishta, P; Shimojo, F; van Duin, A; Goddard, III, W A; Biswas, R; Srivastava, D; Yang, L H

    2006-09-04

    We present a de novo hierarchical simulation framework for first-principles based predictive simulations of materials and their validation on high-end parallel supercomputers and geographically distributed clusters. In this framework, high-end chemically reactive and non-reactive molecular dynamics (MD) simulations explore a wide solution space to discover microscopic mechanisms that govern macroscopic material properties, into which highly accurate quantum mechanical (QM) simulations are embedded to validate the discovered mechanisms and quantify the uncertainty of the solution. The framework includes an embedded divide-and-conquer (EDC) algorithmic framework for the design of linear-scaling simulation algorithms with minimal bandwidth complexity and tight error control. The EDC framework also enables adaptive hierarchical simulation with automated model transitioning assisted by graph-based event tracking. A tunable hierarchical cellular decomposition parallelization framework then maps the O(N) EDC algorithms onto Petaflops computers, while achieving performance tunability through a hierarchy of parameterized cell data/computation structures, as well as its implementation using hybrid Grid remote procedure call + message passing + threads programming. High-end computing platforms such as IBM BlueGene/L, SGI Altix 3000 and the NSF TeraGrid provide an excellent test ground for the framework. On these platforms, we have achieved unprecedented scales of quantum-mechanically accurate and well validated, chemically reactive atomistic simulations--1.06 billion-atom fast reactive force-field MD and 11.8 million-atom (1.04 trillion grid points) quantum-mechanical MD in the framework of the EDC density functional theory on adaptive multigrids--in addition to 134 billion-atom non-reactive space-time multiresolution MD, with the parallel efficiency as high as 0.998 on 65,536 dual-processor BlueGene/L nodes. We have also achieved an automated execution of hierarchical QM

  9. Rapid prototyping of biodegradable microneedle arrays by integrating CO2 laser processing and polymer molding

    International Nuclear Information System (INIS)

    Tu, K T; Chung, C K

    2016-01-01

    An integrated technology of CO2 laser processing and polymer molding has been demonstrated for the rapid prototyping of biodegradable poly-lactic-co-glycolic acid (PLGA) microneedle arrays. Rapid and low-cost CO2 laser processing was used for the fabrication of a high-aspect-ratio microneedle master mold instead of conventional time-consuming and expensive photolithography and etching processes. It is crucial to use flexible polydimethylsiloxane (PDMS) to detach PLGA. However, the direct CO2 laser-ablated PDMS could generate poor surfaces with bulges, scorches, re-solidification and shrinkage. Here, we have combined the polymethyl methacrylate (PMMA) ablation and two-step PDMS casting process to form a PDMS female microneedle mold to eliminate the problem of direct ablation. A self-assembled monolayer of polyethylene glycol was coated to prevent stiction between the two PDMS layers during the peeling-off step in the PDMS-to-PDMS replication. Then the PLGA microneedle array was successfully released by bending the second-cast PDMS mold with flexibility and hydrophobic property. The depth of the polymer microneedles can range from hundreds of micrometers to millimeters. It is linked to the PMMA pattern profile and can be adjusted by CO2 laser power and scanning speed. The proposed integration process is maskless, simple and low-cost for rapid prototyping with a reusable mold. (paper)

  10. Rapid prototyping of biodegradable microneedle arrays by integrating CO2 laser processing and polymer molding

    Science.gov (United States)

    Tu, K. T.; Chung, C. K.

    2016-06-01

    An integrated technology of CO2 laser processing and polymer molding has been demonstrated for the rapid prototyping of biodegradable poly-lactic-co-glycolic acid (PLGA) microneedle arrays. Rapid and low-cost CO2 laser processing was used for the fabrication of a high-aspect-ratio microneedle master mold instead of conventional time-consuming and expensive photolithography and etching processes. It is crucial to use flexible polydimethylsiloxane (PDMS) to detach PLGA. However, the direct CO2 laser-ablated PDMS could generate poor surfaces with bulges, scorches, re-solidification and shrinkage. Here, we have combined the polymethyl methacrylate (PMMA) ablation and two-step PDMS casting process to form a PDMS female microneedle mold to eliminate the problem of direct ablation. A self-assembled monolayer polyethylene glycol was coated to prevent stiction between the two PDMS layers during the peeling-off step in the PDMS-to-PDMS replication. Then the PLGA microneedle array was successfully released by bending the second-cast PDMS mold with flexibility and hydrophobic property. The depth of the polymer microneedles can range from hundreds of micrometers to millimeters. It is linked to the PMMA pattern profile and can be adjusted by CO2 laser power and scanning speed. The proposed integration process is maskless, simple and low-cost for rapid prototyping with a reusable mold.

  11. A dual-directional light-control film with a high-sag and high-asymmetrical-shape microlens array fabricated by a UV imprinting process

    International Nuclear Information System (INIS)

    Lin, Ta-Wei; Liao, Yunn-Shiuan; Chen, Chi-Feng; Yang, Jauh-Jung

    2008-01-01

    A dual-directional light-control film with a high-sag and high-asymmetric-shape long gapless hexagonal microlens array fabricated by an ultraviolet (UV) imprinting process is presented. Such a lens array is designed by ray-tracing simulation and fabricated by a micro-replication process including gray-scale lithography, an electroplating process and UV curing. The shape of the designed lens array is similar to that of a near half-cylindrical lens array with a periodical ripple. Measurements of a prototype show that incident light from a collimated LED with a 12° FWHM dispersion angle is spread differently along the short and long axes. The numerical and experimental results show that the FWHMs of the view angle for angular brightness in the long and short axis directions through the long hexagonal lens are about 34.3° and 18.1°, and 31° and 13°, respectively. Compared with the simulation result, the errors in the long and short axes are about 5% and 16%, respectively. The asymmetric gapless microlens array can thus realize the aim of controlled asymmetric angular brightness. Such a light-control film can be used as a power-saving screen, compared with a conventional diffusing film, for the application of a rear projection display

  12. X-ray imager using solution processed organic transistor arrays and bulk heterojunction photodiodes on thin, flexible plastic substrate

    NARCIS (Netherlands)

    Gelinck, G.H.; Kumar, A.; Moet, D.; Steen, J.L. van der; Shafique, U.; Malinowski, P.E.; Myny, K.; Rand, B.P.; Simon, M.; Rütten, W.; Douglas, A.; Jorritsma, J.; Heremans, P.L.; Andriessen, H.A.J.M.

    2013-01-01

    We describe the fabrication and characterization of a high-quality, large-area active-matrix X-ray/photodetector array using organic photodiodes and organic transistors. All layers, with the exception of the electrodes, are solution processed. Because it is processed on a very thin plastic substrate

  13. Development of a high performance eigensolver on the peta-scale next generation supercomputer system

    International Nuclear Information System (INIS)

    Imamura, Toshiyuki; Yamada, Susumu; Machida, Masahiko

    2010-01-01

    For present supercomputer systems, multicore and multisocket processors are necessary to build a system, and the choice of interconnect is essential. In addition, for effective development of a new code, high-performance, scalable, and reliable numerical software is one of the key items. ScaLAPACK and PETSc are well-known software packages for distributed memory parallel computer systems. It is needless to say that highly tuned software for new architectures like many-core processors must be chosen for real computation. In this study, we present a high-performance and highly scalable eigenvalue solver for the next-generation supercomputer system, the so-called 'K computer'. We have developed two versions, the standard version (eigen_s) and the enhanced performance version (eigen_sx), on the T2K cluster system housed at the University of Tokyo. Eigen_s employs the conventional algorithms: Householder tridiagonalization, the divide and conquer (DC) algorithm, and Householder back-transformation. They are carefully implemented with a blocking technique and a flexible two-dimensional data distribution to reduce the overhead of memory traffic and data transfer, respectively. Eigen_s performs excellently on the T2K system with 4096 cores (theoretical peak 37.6 TFLOPS), showing a fine performance of 3.0 TFLOPS with a two-hundred-thousand-dimensional matrix. The enhanced version, eigen_sx, uses more advanced algorithms: the narrow-band reduction algorithm, DC for band matrices, and the block Householder back-transformation with WY-representation. Even though this version is still at a test stage, it shows 4.7 TFLOPS for a matrix of the same dimension, versus 3.0 TFLOPS for eigen_s. (author)
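
    The conventional path described for eigen_s (tridiagonalize, solve, back-transform) can be illustrated in miniature. The sketch below is a stand-in for the blocked, distributed kernels in the abstract, not their code: an unblocked Householder tridiagonalization in NumPy, checked by verifying that the similarity transform preserves the spectrum.

```python
import numpy as np

def householder_tridiagonalize(A):
    """Unblocked Householder reduction of a symmetric matrix to tridiagonal
    form (eigen_s blocks this step and distributes it in 2-D; this is only
    the textbook algorithm)."""
    A = A.astype(float).copy()
    n = A.shape[0]
    for k in range(n - 2):
        x = A[k + 1:, k]
        v = x.copy()
        v[0] += (1.0 if x[0] >= 0 else -1.0) * np.linalg.norm(x)
        vnorm = np.linalg.norm(v)
        if vnorm == 0.0:
            continue                                   # column already reduced
        v /= vnorm
        H = np.eye(n - k - 1) - 2.0 * np.outer(v, v)   # Householder reflector
        A[k + 1:, k:] = H @ A[k + 1:, k:]              # apply from the left
        A[k:, k + 1:] = A[k:, k + 1:] @ H              # and from the right
    return A

rng = np.random.default_rng(1)
M = rng.random((6, 6))
S = (M + M.T) / 2.0                                    # symmetric test matrix
T = householder_tridiagonalize(S)
assert np.allclose(np.triu(T, 2), 0.0, atol=1e-10)     # tridiagonal shape
assert np.allclose(np.linalg.eigvalsh(T), np.linalg.eigvalsh(S))  # same spectrum
```

The production solver's gains come from doing exactly this reduction with blocked updates (or narrow-band reduction in eigen_sx) so that the dominant cost is matrix-matrix rather than matrix-vector work.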

  14. Optimal control of stretching process of flexible solar arrays on spacecraft based on a hybrid optimization strategy

    Directory of Open Access Journals (Sweden)

    Qijia Yao

    2017-07-01

    Full Text Available The optimal control of a multibody spacecraft during the stretching process of its solar arrays is investigated, and a hybrid optimization strategy based on the Gauss pseudospectral method (GPM) and the direct shooting method (DSM) is presented. First, the elastic deformation of the flexible solar arrays is described approximately by the assumed mode method, and a dynamic model is established via the second Lagrangian equation. Then, the nonholonomic motion planning problem is transformed into a nonlinear programming problem by using GPM. Using a small number of LG points, initial values of the state variables and control variables are obtained. A serial optimization framework is adopted to obtain the approximate optimal solution from a feasible solution. Finally, the control variables are discretized at the LG points, and the precise optimal control inputs are obtained by DSM. The optimal trajectory of the system can then be obtained through numerical integration. Numerical simulation shows that the stretching process of the solar arrays is stable with no detours, and the control inputs satisfy the various constraints of actual conditions. The results indicate that the method is effective with good robustness. Keywords: Motion planning, Multibody spacecraft, Optimal control, Gauss pseudospectral method, Direct shooting method
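
    The LG points at which GPM collocates the dynamics are ordinary Gauss-Legendre nodes. As a small illustration independent of the paper's solver, NumPy can generate them directly, and the quadrature they induce is exact for polynomials of degree up to 2n-1:

```python
import numpy as np

# Legendre-Gauss (LG) nodes and weights, as used for collocation in GPM
nodes, weights = np.polynomial.legendre.leggauss(5)

assert nodes.size == 5 and np.all(np.abs(nodes) < 1.0)   # interior points of [-1, 1]
# a 5-point LG rule integrates polynomials up to degree 2*5-1 = 9 exactly
assert abs(np.sum(weights * nodes ** 8) - 2.0 / 9.0) < 1e-12
assert abs(np.sum(weights) - 2.0) < 1e-12                # integrates 1 exactly
```

In a pseudospectral transcription, the state is represented by a Lagrange polynomial through these nodes and the dynamics are enforced only there, which is why a handful of LG points already yields a good warm start for the shooting refinement.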

  15. Re-inventing electromagnetics - Supercomputing solution of Maxwell's equations via direct time integration on space grids

    International Nuclear Information System (INIS)

    Taflove, A.

    1992-01-01

    This paper summarizes the present state and future directions of applying finite-difference and finite-volume time-domain techniques for Maxwell's equations on supercomputers to model complex electromagnetic wave interactions with structures. Applications so far have been dominated by radar cross section technology, but by no means are limited to this area. In fact, the gains we have made place us on the threshold of being able to make tremendous contributions to non-defense electronics and optical technology. Some of the most interesting research in these commercial areas is summarized. 47 refs

  16. ASIC Readout Circuit Architecture for Large Geiger Photodiode Arrays

    Science.gov (United States)

    Vasile, Stefan; Lipson, Jerold

    2012-01-01

    The objective of this work was to develop a new class of readout integrated circuit (ROIC) arrays to be operated with Geiger avalanche photodiode (GPD) arrays, by integrating multiple functions at the pixel level (smart-pixel or active-pixel technology) in 250-nm CMOS (complementary metal oxide semiconductor) processes. In order to pack a maximum of functions within a minimum pixel size, the ROIC array is a full-custom application-specific integrated circuit (ASIC) design using a mixed-signal CMOS process with compact primitive layout cells. The ROIC array was processed to allow assembly, in bump-bonding technology, with photon-counting infrared detector arrays into 3-D imaging cameras (LADAR). The ROIC architecture was designed to work with either common-anode Si GPD arrays or common-cathode InGaAs GPD arrays. The current ROIC pixel design is hardwired, prior to processing, for one of the two GPD array configurations, with the provision to allow soft reconfiguration to either array (to be implemented in the next ROIC array generation). The ROIC pixel architecture implements the Geiger avalanche quenching, bias, reset, and time-to-digital conversion (TDC) functions in a full-digital design, and uses time-domain over-sampling (vernier) to allow high temporal resolution at low clock rates, increased data yield, and improved utilization of the laser beam.

  17. Remote online process measurements by a fiber optic diode array spectrometer

    International Nuclear Information System (INIS)

    Van Hare, D.R.; Prather, W.S.; O'Rourke, P.E.

    1986-01-01

    The development of remote online monitors for radioactive process streams is an active research area at the Savannah River Laboratory (SRL). A remote offline spectrophotometric measurement system has been developed and used at the Savannah River Plant (SRP) for the past year to determine the plutonium concentration of process solution samples. The system consists of a commercial diode array spectrophotometer modified with fiber optic cables that allow the instrument to be located remotely from the measurement cell. Recently, a fiber optic multiplexer has been developed for this instrument, which allows online monitoring of five locations sequentially. The multiplexer uses a motorized micrometer to drive one of five sets of optical fibers into the optical path of the instrument. A sixth optical fiber is used as an external reference and eliminates the need to flush out process lines to re-reference the spectrophotometer. The fiber optic multiplexer has been installed in a process prototype facility to monitor uranium loading and breakthrough of ion exchange columns. The design of the fiber optic multiplexer is discussed and data from the prototype facility are presented to demonstrate the capabilities of the measurement system

  18. Microneedle array electrode for human EEG recording.

    NARCIS (Netherlands)

    Lüttge, Regina; van Nieuwkasteele-Bystrova, Svetlana Nikolajevna; van Putten, Michel Johannes Antonius Maria; Vander Sloten, Jos; Verdonck, Pascal; Nyssen, Marc; Haueisen, Jens

    2009-01-01

    Microneedle array electrodes for EEG significantly reduce the mounting time, particularly by circumvention of the need for skin preparation by scrubbing. We designed a new replication process for numerous types of microneedle arrays. Here, polymer microneedle array electrodes with 64 microneedles,

  19. Earth and environmental science in the 1980's: Part 1: Environmental data systems, supercomputer facilities and networks

    Science.gov (United States)

    1986-01-01

    Overview descriptions of on-line environmental data systems, supercomputer facilities, and networks are presented. Each description addresses the concepts of content, capability, and user access relevant to the point of view of potential utilization by the Earth and environmental science community. The information on similar systems or facilities is presented in parallel fashion to encourage and facilitate intercomparison. In addition, summary sheets are given for each description, and a summary table precedes each section.

  20. Process Development And Simulation For Cold Fabrication Of Doubly Curved Metal Plate By Using Line Array Roll Set

    International Nuclear Information System (INIS)

    Shim, D. S.; Jung, C. G.; Seong, D. Y.; Yang, D. Y.; Han, J. M.; Han, M. S.

    2007-01-01

    For effective manufacturing of a doubly curved sheet metal, a novel sheet metal forming process is proposed. The suggested process uses a Line Array Roll Set (LARS) composed of a pair of upper and lower roll assemblies in a symmetric manner. The process offers flexibility as compared with the conventional manufacturing processes, because it does not require any complex-shaped die and loss of material by blank-holding is minimized. LARS allows the flexibility of an incremental forming process and adopts the principle of bending deformation, resulting in only a slight deformation in thickness. The rolls composing the line array roll sets are divided into a driving roll row and two idle roll rows. The arrayed rolls in the central lines of the upper and lower roll assemblies are motor-driven so that they deform and transfer the sheet metal using friction between the rolls and the sheet metal. The remaining rolls are idle rolls, generating bending deformation together with the driving rolls. Furthermore, all the rolls are movable in any direction so that they are adaptable to any size or shape of the desired three-dimensional configuration. In the process, the sheet is deformed incrementally as deformation proceeds simultaneously in the rolling and transverse directions, step by step. Consequently, it can be applied to the fabrication of doubly curved ship hull plates by undergoing several passes. In this work, FEM simulations are carried out for verification of the proposed incremental forming system using the chosen design parameters. Based on the results of the simulation, the relationship between the roll set configuration and the curvature of a sheet metal is determined. The process information, such as the forming loads and torques acting on every roll, is analyzed as important data for the design and development of the manufacturing system

  1. 32 bit digital optical computer - A hardware update

    Science.gov (United States)

    Guilfoyle, Peter S.; Carter, James A., III; Stone, Richard V.; Pape, Dennis R.

    1990-01-01

    Such state-of-the-art devices as multielement linear laser diode arrays, multichannel acoustooptic modulators, optical relays, and avalanche photodiode arrays, are presently applied to the implementation of a 32-bit supercomputer's general-purpose optical central processing architecture. Shannon's theorem, Morozov's control operator method (in conjunction with combinatorial arithmetic), and DeMorgan's law have been used to design an architecture whose 100 MHz clock renders it fully competitive with emerging planar-semiconductor technology. Attention is given to the architecture's multichannel Bragg cells, thermal design and RF crosstalk considerations, and the first and second anamorphic relay legs.

  2. CCD and IR array controllers

    Science.gov (United States)

    Leach, Robert W.; Low, Frank J.

    2000-08-01

    A family of controllers has been developed that is powerful and flexible enough to operate a wide range of CCD and IR focal plane arrays in a variety of ground-based applications. These include fast readout of small CCD and IR arrays for adaptive optics applications, slow readout of large CCD and IR mosaics, and single CCD and IR array operation in low background/low noise regimes as well as high background/high speed regimes. The CCD and IR controllers have a common digital core based on user-programmable digital signal processors that are used to generate the array clocking and signal processing signals customized for each application. A fiber optic link passes image data and commands between the controller and VME or PCI interface boards resident in a host computer. CCD signal processing is done with a dual slope integrator operating at speeds of up to one megapixel per second per channel. Signal processing of IR arrays is done either with a dual channel video processor or a four channel video processor that has built-in image memory and a coadder with 32-bit precision for operating high background arrays. Recent developments underway include the implementation of a fast fiber optic data link operating at a speed of 12.5 megapixels per second for fast image transfer from the controller to the host computer, and supporting image acquisition software and device drivers for the PCI interface board for the Sun Solaris, Linux and Windows 2000 operating systems.

  3. Efficient development of memory bounded geo-applications to scale on modern supercomputers

    Science.gov (United States)

    Räss, Ludovic; Omlin, Samuel; Licul, Aleksandar; Podladchikov, Yuri; Herman, Frédéric

    2016-04-01

    Numerical modeling is a key tool in the geosciences. The current challenge is to solve problems that are multi-physics and for which the length scale and the place of occurrence might not be known in advance. Also, the spatial extent of the investigated domain might strongly vary in size, ranging from millimeters for reactive transport to kilometers for glacier erosion dynamics. An efficient way to proceed is to develop simple but robust algorithms that perform well and scale on modern supercomputers and therefore permit very high-resolution simulations. We propose an efficient approach to solve memory-bounded real-world applications on modern supercomputer architectures. We optimize the software to run on our newly acquired state-of-the-art GPU cluster "octopus". Our approach shows promising preliminary results on important geodynamical and geomechanical problems: we have developed a Stokes solver for glacier flow and a poromechanical solver including complex rheologies for nonlinear waves in stressed porous rocks. We solve the system of partial differential equations on a regular Cartesian grid and use an iterative finite difference scheme with preconditioning of the residuals. The MPI communication happens only locally (point-to-point); this method is known to scale linearly by construction. The "octopus" GPU cluster, which we use for the computations, has been designed to achieve maximal data transfer throughput at minimal hardware cost. It is composed of twenty compute nodes, each hosting four Nvidia Titan X GPU accelerators. These high-density nodes are interconnected with a parallel (dual-rail) FDR InfiniBand network. Our efforts show promising preliminary results for the different physics investigated. The glacier flow solver achieves good accuracy in the relevant benchmarks, and the coupled poromechanical solver makes it possible to explain previously unresolvable focused fluid flow as a natural outcome of the porosity setup. In both cases
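
    The serial core of such an iterative finite-difference scheme with residual-based updates fits in a few lines. The following is a minimal, hypothetical Poisson example (damped Jacobi on the residual), not the GPU Stokes or poromechanical solver itself; in the parallel version each rank would own one tile of the Cartesian grid and exchange one-cell halos point-to-point.

```python
import numpy as np

def solve_poisson_2d(f, h, tol=1e-6, max_iter=20_000):
    """Iterate u <- u + tau * r, where r is the residual of -lap(u) = f on a
    5-point stencil with homogeneous Dirichlet boundaries (damped Jacobi)."""
    u = np.zeros_like(f)
    for _ in range(max_iter):
        r = np.zeros_like(u)
        r[1:-1, 1:-1] = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] +
                         u[1:-1, :-2] - 4.0 * u[1:-1, 1:-1]) / h**2 + f[1:-1, 1:-1]
        u += 0.2 * h**2 * r           # pseudo-time step, stable for tau < h^2/4
        if np.max(np.abs(r)) < tol:
            break
    return u

n = 33
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
f = 2.0 * np.pi**2 * np.sin(np.pi * X) * np.sin(np.pi * Y)
u = solve_poisson_2d(f, h)
exact = np.sin(np.pi * X) * np.sin(np.pi * Y)     # manufactured solution
assert np.max(np.abs(u - exact)) < 1e-2
```

Because each update only reads the four nearest neighbors, the stencil is exactly the access pattern that a halo exchange supports, which is why this class of solver scales linearly by construction.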

  4. Acoustic array systems theory, implementation, and application

    CERN Document Server

    Bai, Mingsian R; Benesty, Jacob

    2013-01-01

    Presents a unified framework of far-field and near-field array techniques for noise source identification and sound field visualization, from theory to application. Acoustic Array Systems: Theory, Implementation, and Application provides an overview of microphone array technology with applications in noise source identification and sound field visualization. In the comprehensive treatment of microphone arrays, the topics covered include an introduction to the theory, far-field and near-field array signal processing algorithms, practical implementations, and common applic

  5. Research center Juelich to install Germany's most powerful supercomputer new IBM System for science and research will achieve 5.8 trillion computations per second

    CERN Multimedia

    2002-01-01

    "The Research Center Juelich, Germany, and IBM today announced that they have signed a contract for the delivery and installation of a new IBM supercomputer at the Central Institute for Applied Mathematics" (1/2 page).

  6. Solar array flight dynamic experiment

    Science.gov (United States)

    Schock, Richard W.

    1987-01-01

    The purpose of the Solar Array Flight Dynamic Experiment (SAFDE) is to demonstrate the feasibility of on-orbit measurement and ground processing of large space structures' dynamic characteristics. Test definition or verification provides the dynamic characteristic accuracy required for control systems use. An illumination/measurement system was developed to fly on space shuttle flight STS-41D. The system was designed to dynamically evaluate a large solar array called the Solar Array Flight Experiment (SAFE) that had been scheduled for this flight. The SAFDE system consisted of a set of laser diode illuminators, retroreflective targets, an intelligent star tracker receiver and the associated equipment to power, condition, and record the results. In six tests on STS-41D, data was successfully acquired from 18 retroreflector targets and ground processed, post flight, to define the solar array's dynamic characteristic. The flight experiment proved the viability of on-orbit test definition of large space structures dynamic characteristics. Future large space structures controllability should be greatly enhanced by this capability.

  7. MEGADOCK 4.0: an ultra-high-performance protein-protein docking software for heterogeneous supercomputers.

    Science.gov (United States)

    Ohue, Masahito; Shimoda, Takehiro; Suzuki, Shuji; Matsuzaki, Yuri; Ishida, Takashi; Akiyama, Yutaka

    2014-11-15

    The application of protein-protein docking in large-scale interactome analysis is a major challenge in structural bioinformatics and requires huge computing resources. In this work, we present MEGADOCK 4.0, an FFT-based docking software that makes extensive use of recent heterogeneous supercomputers and shows powerful, scalable performance of >97% strong scaling. MEGADOCK 4.0 is written in C++ with OpenMPI and NVIDIA CUDA 5.0 (or later) and is freely available to all academic and non-profit users at: http://www.bi.cs.titech.ac.jp/megadock. akiyama@cs.titech.ac.jp Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
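
    The FFT trick at the heart of such docking codes scores every rigid translation of one grid against another in O(N log N). A minimal sketch of that correlation step (grid construction, rotational sampling, and MEGADOCK's actual scoring function are all omitted):

```python
import numpy as np

def fft_translation_scores(receptor, ligand):
    """Correlation theorem: one forward/inverse FFT pair evaluates the overlap
    score of every cyclic translation of the ligand grid at once."""
    F = np.fft.fftn(receptor)
    G = np.fft.fftn(ligand)
    return np.real(np.fft.ifftn(F * np.conj(G)))

rng = np.random.default_rng(2)
receptor = rng.random((8, 8, 8))
# fabricate a "ligand" that matches the receptor shifted by (2, 1, 3)
ligand = np.roll(receptor, (-2, -1, -3), axis=(0, 1, 2))
scores = fft_translation_scores(receptor, ligand)
best = np.unravel_index(np.argmax(scores), scores.shape)
assert best == (2, 1, 3)   # the exhaustive translational search recovers the shift
```

The heterogeneous-supercomputer version of this idea simply distributes the many (rotation, receptor, ligand) combinations over MPI ranks and runs each grid correlation on a GPU.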

  8. A highly scalable particle tracking algorithm using partitioned global address space (PGAS) programming for extreme-scale turbulence simulations

    Science.gov (United States)

    Buaria, D.; Yeung, P. K.

    2017-12-01

    A new parallel algorithm utilizing a partitioned global address space (PGAS) programming model to achieve high scalability is reported for particle tracking in direct numerical simulations of turbulent fluid flow. The work is motivated by the desire to obtain Lagrangian information necessary for the study of turbulent dispersion at the largest problem sizes feasible on current and next-generation multi-petaflop supercomputers. A large population of fluid particles is distributed among parallel processes dynamically, based on instantaneous particle positions, such that all of the interpolation information needed for each particle is available either locally on its host process or on neighboring processes holding adjacent sub-domains of the velocity field. With cubic splines as the preferred interpolation method, the new algorithm is designed to minimize the need for communication by transferring between adjacent processes only those spline coefficients determined to be necessary for specific particles. This transfer is implemented very efficiently as a one-sided communication, using Co-Array Fortran (CAF) features which facilitate small data movements between different local partitions of a large global array. The cost of monitoring the transfer of particle properties between adjacent processes for particles migrating across sub-domain boundaries is found to be small. Detailed benchmarks are obtained on the Cray petascale supercomputer Blue Waters at the University of Illinois, Urbana-Champaign. For operations on the particles in an 8192³ simulation (0.55 trillion grid points) on 262,144 Cray XE6 cores, the new algorithm is found to be orders of magnitude faster than a prior algorithm in which each particle is tracked by the same parallel process at all times. This large speedup reduces the additional cost of tracking of order 300 million particles to just over 50% of the cost of computing the Eulerian velocity field at this scale. Improving support of PGAS models on
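
    The dynamic particle distribution described above amounts to mapping each particle to the rank that owns the sub-domain containing it. A toy slab-decomposition version of that mapping (hypothetical names; the production code's decomposition and its CAF one-sided transfers of spline coefficients are more involved):

```python
import numpy as np

def owner_rank(positions, box, nranks):
    """Map each particle to the rank owning its slab of the domain, using only
    the x coordinate (a 1-D slab decomposition for illustration)."""
    slab = box / nranks
    return np.minimum((positions[:, 0] // slab).astype(int), nranks - 1)

rng = np.random.default_rng(3)
pos = rng.random((10_000, 3)) * 2.0 * np.pi      # particles in a (2*pi)^3 box
owners = owner_rank(pos, 2.0 * np.pi, 8)
assert owners.min() == 0 and owners.max() == 7   # all 8 ranks are populated
counts = np.bincount(owners, minlength=8)
assert counts.sum() == 10_000                    # every particle has one owner
```

After each time step, particles whose positions cross a slab boundary would be re-assigned, and only spline coefficients for particles near boundaries would be fetched from the neighboring rank's partition of the global array.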

  9. Dumand-array data-acquisition system

    International Nuclear Information System (INIS)

    Brenner, A.E.; Theriot, D.; Dau, W.D.; Geelhood, B.D.; Harris, F.; Learned, J.G.; Stenger, V.; March, R.; Roos, C.; Shumard, E.

    1982-04-01

    An overall data acquisition approach for DUMAND is described. The scheme assumes one array-to-shore optical fiber transmission line for each string of the array. The basic event sampling period is approx. 13 μsec. All potentially interesting data are transmitted to shore, where the major processing is performed

  10. A Novel Self-aligned and Maskless Process for Formation of Highly Uniform Arrays of Nanoholes and Nanopillars

    Directory of Open Access Journals (Sweden)

    Wu Wei

    2008-01-01

    Full Text Available Fabrication of a large area of periodic structures with deep sub-wavelength features is required in many applications such as solar cells, photonic crystals, and artificial kidneys. We present a low-cost and high-throughput process for the realization of 2D arrays of deep sub-wavelength features using a self-assembled monolayer of hexagonally close-packed (HCP) silica and polystyrene microspheres. This method utilizes the microspheres as super-lenses to fabricate nanohole and pillar arrays over large areas on conventional positive and negative photoresist, and with a high aspect ratio. The period and diameter of the holes and pillars formed with this technique can be controlled precisely and independently. We demonstrate that the method can produce HCP arrays of holes of sub-250 nm size using a conventional photolithography system with a broadband UV source centered at 400 nm. We also present our 3D FDTD modeling, which shows good agreement with the experimental results.

  11. In situ synthesis of protein arrays.

    Science.gov (United States)

    He, Mingyue; Stoevesandt, Oda; Taussig, Michael J

    2008-02-01

    In situ or on-chip protein array methods use cell free expression systems to produce proteins directly onto an immobilising surface from co-distributed or pre-arrayed DNA or RNA, enabling protein arrays to be created on demand. These methods address three issues in protein array technology: (i) efficient protein expression and availability, (ii) functional protein immobilisation and purification in a single step and (iii) protein on-chip stability over time. By simultaneously expressing and immobilising many proteins in parallel on the chip surface, the laborious and often costly processes of DNA cloning, expression and separate protein purification are avoided. Recently employed methods reviewed are PISA (protein in situ array) and NAPPA (nucleic acid programmable protein array) from DNA and puromycin-mediated immobilisation from mRNA.

  12. Automatic Defect Detection for TFT-LCD Array Process Using Quasiconformal Kernel Support Vector Data Description

    Directory of Open Access Journals (Sweden)

    Yi-Hung Liu

    2011-09-01

    Full Text Available Defect detection has been considered an efficient way to increase the yield rate of panels in thin film transistor liquid crystal display (TFT-LCD) manufacturing. In this study we focus on the array process since it is the first and key process in TFT-LCD manufacturing. Various defects occur in the array process, and some of them could cause great damage to the LCD panels. Thus, how to design a method that can robustly detect defects from the images captured from the surface of LCD panels has become crucial. Previously, support vector data description (SVDD) has been successfully applied to LCD defect detection. However, its generalization performance is limited. In this paper, we propose a novel one-class machine learning method, called quasiconformal kernel SVDD (QK-SVDD), to address this issue. The QK-SVDD can significantly improve the generalization performance of the traditional SVDD by introducing the quasiconformal transformation into a predefined kernel. Experimental results, obtained on real LCD images provided by an LCD manufacturer in Taiwan, indicate that the proposed QK-SVDD not only obtains a high defect detection rate of 96%, but also greatly improves the generalization performance of SVDD. The improvement has been shown to be over 30%. In addition, results also show that the QK-SVDD defect detector is able to accomplish the task of defect detection on an LCD image within 60 ms.
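
    The geometry behind SVDD is a minimal enclosing hypersphere in kernel feature space; a sample is flagged as a defect when it falls outside. The toy sketch below replaces the QP-trained sphere (and the quasiconformal kernel transformation of QK-SVDD) with the distance to the feature-space centroid under a plain RBF kernel, so it only illustrates the one-class decision rule:

```python
import numpy as np

def rbf(X, Y, gamma):
    """RBF kernel matrix k(x, y) = exp(-gamma * ||x - y||^2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class SimpleSVDD:
    """Toy data description: distance to the feature-space centroid, with the
    radius set from a quantile of the training distances. Full SVDD solves a
    QP for the sphere; QK-SVDD additionally rescales the kernel."""
    def fit(self, X, gamma=1.0, quantile=0.95):
        self.X, self.gamma = X, gamma
        self.kmean = rbf(X, X, gamma).mean()
        self.r2 = np.quantile(self._dist2(X), quantile)
        return self

    def _dist2(self, Z):
        # ||phi(z) - c||^2 with c the mean of phi(x_i); k(z, z) = 1 for RBF
        return 1.0 - 2.0 * rbf(Z, self.X, self.gamma).mean(axis=1) + self.kmean

    def predict(self, Z):
        return self._dist2(Z) <= self.r2   # True = normal, False = defect

rng = np.random.default_rng(4)
normal = rng.normal(0.0, 0.3, size=(200, 2))      # "defect-free" training set
model = SimpleSVDD().fit(normal, gamma=2.0)
outliers = np.array([[3.0, 3.0], [-4.0, 0.0]])
assert model.predict(outliers).sum() == 0         # far points rejected
assert model.predict(normal).mean() >= 0.9        # most training points accepted
```

In the paper's setting, Z would be feature vectors extracted from LCD panel images, and the quasiconformal factor reshapes the kernel so the boundary hugs the normal class more tightly.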

  13. Phased arrays techniques and split spectrum processing for inspection of thick titanium casting components

    International Nuclear Information System (INIS)

    Banchet, J.; Chahbaz, A.; Sicard, R.; Zellouf, D.E.

    2003-01-01

    In aircraft structures, titanium parts and engine members are critical structural components, and their inspection is crucial. However, these structures are very difficult to inspect ultrasonically because of their large grain structure, which increases noise drastically. In this work, phased array inspection setups were developed to detect small defects such as simulated inclusions and porosity contained in thick titanium casting blocks, which are frequently used in the aerospace industry. A Split Spectrum Processing (SSP)-based algorithm was then implemented on the acquired data by employing a set of parallel bandpass filters with different center frequencies. This process led to a substantial improvement in the signal-to-noise ratio and thus in detectability
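
    Split spectrum processing itself is easy to sketch: the broadband A-scan is passed through a bank of narrow overlapping filters and the sub-band outputs are recombined, here with the common minimization rule. The parameters below are illustrative, not those of the paper:

```python
import numpy as np

def split_spectrum_min(signal, fs, centers, bw):
    """Filter the trace with Gaussian frequency windows and keep, at each
    sample, the minimum magnitude across sub-bands: uncorrelated grain noise
    rarely survives in every band, while a true flaw echo does."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    spectrum = np.fft.rfft(signal)
    bands = [np.fft.irfft(spectrum * np.exp(-0.5 * ((freqs - fc) / bw) ** 2), n)
             for fc in centers]
    return np.min(np.abs(np.array(bands)), axis=0)

fs = 100e6                                    # 100 MHz sampling
t = np.arange(2000) / fs                      # 20 us trace
flaw = np.sin(2 * np.pi * 5e6 * t) * np.exp(-((t - 10e-6) / 0.3e-6) ** 2)
rng = np.random.default_rng(5)
trace = flaw + 0.3 * rng.normal(size=t.size)  # heavy broadband "grain" noise
out = split_spectrum_min(trace, fs, centers=(4.5e6, 5e6, 5.5e6), bw=0.8e6)
# the recombined trace still peaks at the 10 us flaw echo
assert abs(np.argmax(out) / fs - 10e-6) < 2e-6
```

Other recombination rules (polarity thresholding, geometric mean) trade robustness against amplitude fidelity; the minimization rule used here is the simplest to reason about.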

  14. Array capabilities and future arrays

    International Nuclear Information System (INIS)

    Radford, D.

    1993-01-01

    Early results from the new third-generation instruments GAMMASPHERE and EUROGAM are confirming the expectation that such arrays will have a revolutionary effect on the field of high-spin nuclear structure. When completed, GAMMASPHERE will have a resolving power an order of magnitude greater than that of the best second-generation arrays. When combined with other instruments such as particle-detector arrays and fragment mass analysers, the capabilities of the arrays for the study of more exotic nuclei will be further enhanced. In order to better understand the limitations of these instruments, and to design improved future detector systems, it is important to have an intelligible and reliable calculation of the relative resolving power of different instrument designs. The derivation of such a figure of merit will be briefly presented, and the relative sensitivities of arrays currently proposed or under construction presented. The design of TRIGAM, a new third-generation array proposed for Chalk River, will also be discussed. It is instructive to consider how far arrays of Compton-suppressed Ge detectors could be taken. For example, it will be shown that an idealised 'perfect' third-generation array of 1000 detectors has a sensitivity an order of magnitude higher again than that of GAMMASPHERE. Less conventional options for new arrays will also be explored.

  15. Solution-Processed Wide-Bandgap Organic Semiconductor Nanostructures Arrays for Nonvolatile Organic Field-Effect Transistor Memory.

    Science.gov (United States)

    Li, Wen; Guo, Fengning; Ling, Haifeng; Liu, Hui; Yi, Mingdong; Zhang, Peng; Wang, Wenjun; Xie, Linghai; Huang, Wei

    2018-01-01

    In this paper, the development of organic field-effect transistor (OFET) memory device based on isolated and ordered nanostructures (NSs) arrays of wide-bandgap (WBG) small-molecule organic semiconductor material [2-(9-(4-(octyloxy)phenyl)-9H-fluoren-2-yl)thiophene]3 (WG3) is reported. The WG3 NSs are prepared from phase separation by spin-coating blend solutions of WG3/trimethylolpropane (TMP), and then introduced as charge storage elements for nonvolatile OFET memory devices. Compared to the OFET memory device with smooth WG3 film, the device based on WG3 NSs arrays exhibits significant improvements in memory performance including larger memory window (≈45 V), faster switching speed (≈1 s), stable retention capability (>10⁴ s), and reliable switching properties. A quantitative study of the WG3 NSs morphology reveals that enhanced memory performance is attributed to the improved charge trapping/charge-exciton annihilation efficiency induced by increased contact area between the WG3 NSs and pentacene layer. This versatile solution-processing approach to preparing WG3 NSs arrays as charge trapping sites allows for fabrication of high-performance nonvolatile OFET memory devices, which could be applicable to a wide range of WBG organic semiconductor materials. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Wavelet transform-vector quantization compression of supercomputer ocean model simulation output

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J N; Brislawn, C M

    1992-11-12

    We describe a new procedure for efficient compression of digital information for storage and transmission purposes. The algorithm involves a discrete wavelet transform subband decomposition of the data set, followed by vector quantization of the wavelet transform coefficients using application-specific vector quantizers. The new vector quantizer design procedure optimizes the assignment of both memory resources and vector dimensions to the transform subbands by minimizing an exponential rate-distortion functional subject to constraints on both overall bit-rate and encoder complexity. The wavelet-vector quantization method, which originates in digital image compression, is applicable to the compression of other multidimensional data sets possessing some degree of smoothness. In this paper we discuss the use of this technique for compressing the output of supercomputer simulations of global climate models. The data presented here comes from Semtner-Chervin global ocean models run at the National Center for Atmospheric Research and at the Los Alamos Advanced Computing Laboratory.
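
    The two-stage pipeline (wavelet subband decomposition, then quantization of the coefficients) can be illustrated with a one-level Haar transform and a uniform scalar quantizer standing in for the paper's trained, subband-tuned vector quantizers; both substitutions are simplifications for illustration only.

```python
import numpy as np

def haar_1level(x):
    # One-level orthonormal Haar decomposition: approximation + detail.
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def ihaar_1level(a, d):
    # Inverse of the transform above.
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def quantize(c, step):
    # Uniform scalar quantizer -- a stand-in for the application-specific
    # vector quantizers; the step controls the per-subband rate/distortion
    # trade-off.
    return np.round(c / step) * step

x = np.linspace(0.0, 1.0, 64)                 # a smooth toy "model field"
a, d = haar_1level(x)
x_hat = ihaar_1level(quantize(a, 0.05), quantize(d, 0.05))
```

    Because the transform is orthonormal, the reconstruction error is bounded by the quantizer step, and smooth data concentrates its energy in the approximation band, which is what makes the scheme effective on smooth simulation output.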

  17. Cantilever arrays with self-aligned nanotips of uniform height

    International Nuclear Information System (INIS)

    Koelmans, W W; Peters, T; Berenschot, E; De Boer, M J; Siekman, M H; Abelmann, L

    2012-01-01

    Cantilever arrays are employed to increase the throughput of imaging and manipulation at the nanoscale. We present a fabrication process to construct cantilever arrays with nanotips that show a uniform tip–sample distance. Such uniformity is crucial, because in many applications the cantilevers do not feature individual tip–sample spacing control. Uniform cantilever arrays lead to very similar tip–sample interaction within an array, enable non-contact modes for arrays and give better control over the load force in contact modes. The developed process flow uses a single mask to define both tips and cantilevers. An additional mask is required for the back side etch. The tips are self-aligned in the convex corner at the free end of each cantilever. Although we use standard optical contact lithography, we show that the convex corner can be sharpened to a nanometre scale radius by an isotropic underetch step. The process is robust and wafer-scale. The resonance frequencies of the cantilevers within an array are shown to be highly uniform with a relative standard error of 0.26% or lower. The tip–sample distance within an array of up to ten cantilevers is measured to have a standard error around 10 nm. An imaging demonstration using the AFM shows that all cantilevers in the array have a sharp tip with a radius below 10 nm. The process flow for the cantilever arrays finds application in probe-based nanolithography, probe-based data storage, nanomanufacturing and parallel scanning probe microscopy. (paper)

  18. Application of Seismic Array Processing to Tsunami Early Warning

    Science.gov (United States)

    An, C.; Meng, L.

    2015-12-01

    Tsunami wave predictions of the current tsunami warning systems rely on accurate earthquake source inversions of wave height data. They are of limited effectiveness for the near-field areas since the tsunami waves arrive before data are collected. Recent seismic and tsunami disasters have revealed the need for early warning to protect near-source coastal populations. In this work we developed the basis for a tsunami warning system based on rapid earthquake source characterisation through regional seismic array back-projections. We explored rapid earthquake source imaging using onshore dense seismic arrays located at regional distances on the order of 1000 km, which provides faster source images than conventional teleseismic back-projections. We implement this method in a simulated real-time environment, and analysed the 2011 Tohoku earthquake rupture with two clusters of Hi-net stations in Kyushu and Northern Hokkaido, and the 2014 Iquique event with the Earthscope USArray Transportable Array. The results yield reasonable estimates of rupture area, which is approximated by an ellipse and leads to the construction of simple slip models based on empirical scaling of the rupture area, seismic moment and average slip. The slip model is then used as the input of the tsunami simulation package COMCOT to predict the tsunami waves. In the example of the Tohoku event, the earthquake source model can be acquired within 6 minutes from the start of rupture and the simulation of tsunami waves takes less than 2 min, which could facilitate a timely tsunami warning. The predicted arrival time and wave amplitude reasonably fit observations. Based on this method, we propose to develop an automatic warning mechanism that provides rapid near-field warning for areas of high tsunami risk. 
The initial focus will be Japan, Pacific Northwest and Alaska, where dense seismic networks with the capability of real-time data telemetry and open data accessibility, such as the Japanese HiNet (>800
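
    The core of back-projection source imaging, shifting each station's trace by the predicted travel time from a candidate source cell and stacking, can be sketched as follows. This is an illustrative NumPy version with invented synthetic data; the real-time Hi-net/USArray processing involves many more details (filtering, weighting, beam geometry).

```python
import numpy as np

def back_project(waveforms, t_travel, dt):
    # waveforms: (n_sta, n_samp); t_travel[g, s]: predicted travel time
    # from grid cell g to station s. Align each trace on the predicted
    # arrival and stack; the cell that best aligns the energy wins.
    n_grid, n_sta = t_travel.shape
    n_samp = waveforms.shape[1]
    power = np.zeros(n_grid)
    for g in range(n_grid):
        stack = np.zeros(n_samp)
        for s in range(n_sta):
            shift = int(round(t_travel[g, s] / dt))
            stack += np.roll(waveforms[s], -shift)
        power[g] = np.sum(stack ** 2)
    return power

# Synthetic test: impulses radiated from grid cell 1 of 3.
dt = 0.01
t_travel = np.array([[0.20, 0.30],
                     [0.50, 0.70],
                     [0.90, 0.40]])
waves = np.zeros((2, 200))
for s in range(2):
    waves[s, int(round(t_travel[1, s] / dt))] = 1.0
power = back_project(waves, t_travel, dt)
```

    The stack power peaks at the cell whose predicted travel times align the arrivals, here cell 1.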

  19. Cas4-Dependent Prespacer Processing Ensures High-Fidelity Programming of CRISPR Arrays.

    Science.gov (United States)

    Lee, Hayun; Zhou, Yi; Taylor, David W; Sashital, Dipali G

    2018-04-05

    CRISPR-Cas immune systems integrate short segments of foreign DNA as spacers into the host CRISPR locus to provide molecular memory of infection. Cas4 proteins are widespread in CRISPR-Cas systems and are thought to participate in spacer acquisition, although their exact function remains unknown. Here we show that Bacillus halodurans type I-C Cas4 is required for efficient prespacer processing prior to Cas1-Cas2-mediated integration. Cas4 interacts tightly with the Cas1 integrase, forming a heterohexameric complex containing two Cas1 dimers and two Cas4 subunits. In the presence of Cas1 and Cas2, Cas4 processes double-stranded substrates with long 3' overhangs through site-specific endonucleolytic cleavage. Cas4 recognizes PAM sequences within the prespacer and prevents integration of unprocessed prespacers, ensuring that only functional spacers will be integrated into the CRISPR array. Our results reveal the critical role of Cas4 in maintaining fidelity during CRISPR adaptation, providing a structural and mechanistic model for prespacer processing and integration. Copyright © 2018 Elsevier Inc. All rights reserved.

  20. Fully Integrated Linear Single Photon Avalanche Diode (SPAD) Array with Parallel Readout Circuit in a Standard 180 nm CMOS Process

    Science.gov (United States)

    Isaak, S.; Bull, S.; Pitter, M. C.; Harrison, Ian.

    2011-05-01

    This paper reports on the development of a SPAD device and its subsequent use in an actively quenched single-photon counting imaging system; the device was fabricated in a UMC 0.18 μm CMOS process. A low-doped p- guard ring (t-well layer) encircles the active area to prevent premature reverse breakdown. The array is a 16×1 parallel-output SPAD array, which comprises an actively quenched SPAD circuit in each pixel, with the current value set by an external resistor RRef = 300 kΩ. The SPAD current ID was found to increase slowly until breakdown was reached at an excess bias voltage Ve = 11.03 V, and then increase rapidly due to avalanche multiplication. Digital circuitry to control the SPAD array and perform the necessary data processing was designed in VHDL and implemented on an FPGA chip. At room temperature, the dark count was found to be approximately 13 kHz for most of the 16 SPAD pixels, and the dead time was estimated to be 40 ns.
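
    As a small worked example of the reported figures, a 40 ns dead time implies only a tiny correction to the 13 kHz dark-count rate under a standard non-paralyzable dead-time model (the choice of model is an assumption; the record does not state one).

```python
def true_count_rate(measured_hz, dead_time_s):
    # Non-paralyzable dead-time model: after each count the detector is
    # blind for tau seconds, so a measured rate m underestimates the
    # true event rate as n = m / (1 - m * tau).
    loss = measured_hz * dead_time_s
    if loss >= 1.0:
        raise ValueError("measured rate saturates the detector")
    return measured_hz / (1.0 - loss)

corrected = true_count_rate(13e3, 40e-9)   # ~0.05% above the measured rate
```

    At these rates the product m·tau is about 5×10⁻⁴, so dead-time losses are negligible; the correction only becomes significant as the count rate approaches 1/tau (25 MHz here).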

  1. Fabrication of CoZn alloy nanowire arrays: Significant improvement in magnetic properties by annealing process

    International Nuclear Information System (INIS)

    Koohbor, M.; Soltanian, S.; Najafi, M.; Servati, P.

    2012-01-01

    Highlights: ► Increasing the Zn concentration changes the structure of NWs from hcp to amorphous. ► Increasing the Zn concentration significantly reduces the Hc value of NWs. ► Magnetic properties of CoZn NWs can be significantly enhanced by appropriate annealing. ► The pH of electrolyte has no significant effect on the properties of the NW arrays. ► Deposition frequency has considerable effects on the magnetic properties of NWs. - Abstract: Highly ordered arrays of Co1−xZnx (0 ≤ x ≤ 0.74) nanowires (NWs) with diameters of ∼35 nm and high length-to-diameter ratios (up to 150) were fabricated by co-electrodeposition of Co and Zn into pores of anodized aluminum oxide (AAO) templates. The Co and Zn contents of the NWs were adjusted by varying the ratio of Zn and Co ion concentrations in the electrolyte. The effect of the Zn content, electrodeposition conditions (frequency and pH) and annealing on the structural and magnetic properties (e.g., coercivity (Hc) and squareness (Sq)) of NW arrays were investigated using X-ray diffraction (XRD), scanning electron microscopy, electron diffraction, and alternating gradient force magnetometer (AGFM). XRD patterns reveal that an increase in the concentration of Zn ions of the electrolyte forces the hcp crystal structure of Co NWs to change into an amorphous phase, resulting in a significant reduction in Hc. It was found that the magnetic properties of NWs can be significantly improved by appropriate annealing process. The highest values for Hc (2050 Oe) and Sq (0.98) were obtained for NWs electrodeposited using 0.95/0.05 Co:Zn concentrations at 200 Hz and annealed at 575 °C. While the pH of electrolyte is found to have no significant effect on the structural and magnetic properties of the NW arrays, the electrodeposition frequency has considerable effects on the magnetic properties of the NW arrays. The changes in magnetic property of NWs are rooted in a competition between shape anisotropy and

  2. Beam pattern improvement by compensating array nonuniformities in a guided wave phased array

    International Nuclear Information System (INIS)

    Kwon, Hyu-Sang; Lee, Seung-Seok; Kim, Jin-Yeon

    2013-01-01

    This paper presents a simple data processing algorithm which can improve the performance of a uniform circular array based on guided wave transducers. The algorithm, being intended to be used with the delay-and-sum beamformer, effectively eliminates the effects of nonuniformities that can significantly degrade the beam pattern. Nonuniformities can arise intrinsically from the array geometry when the circular array is transformed to a linear array for beam steering and extrinsically from unequal conditions of transducers such as element-to-element variations of sensitivity and directivity. The effects of nonuniformities are compensated by appropriately imposing weight factors on the elements in the projected linear array. Different cases are simulated, where the improvements of the beam pattern, especially the level of the highest sidelobe, are clearly seen, and related issues are discussed. An experiment is performed which uses A0 mode Lamb waves in a steel plate, to demonstrate the usefulness of the proposed method. The discrepancy between theoretical and experimental beam patterns is explained by accounting for near-field effects. (paper)
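
    The compensation idea, imposing per-element weights in the projected linear array before delay-and-sum, can be sketched for the far field. This toy NumPy version uses inverse-sensitivity weighting, which is one plausible reading of the compensation, not necessarily the paper's exact weighting; the element gains are invented.

```python
import numpy as np

def beam_pattern(positions, weights, wavelength, angles):
    # Far-field delay-and-sum array factor of a linear array steered to
    # broadside; each element contributes with its own effective weight.
    k = 2.0 * np.pi / wavelength
    phase = k * np.outer(np.sin(angles), positions)   # (n_angle, n_elem)
    return np.abs(np.exp(1j * phase) @ weights)

pos = 0.5 * np.arange(8)                  # half-wavelength spacing (lambda = 1)
sens = np.array([1.0, 0.8, 1.2, 0.9, 1.1, 0.7, 1.0, 1.3])  # element sensitivities
w = 1.0 / sens                            # compensating weights (assumption)
ang = np.linspace(-np.pi / 2, np.pi / 2, 181)
# Effective aperture = sensitivity * weight, i.e. uniform after compensation:
bp = beam_pattern(pos, sens * w, 1.0, ang)
```

    With the nonuniform sensitivities cancelled, the pattern reverts to the designed uniform-aperture beam: mainlobe at broadside with the expected sidelobe structure.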

  3. Performance Characteristics of Hybrid MPI/OpenMP Scientific Applications on a Large-Scale Multithreaded BlueGene/Q Supercomputer

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2013-01-01

    In this paper, we investigate the performance characteristics of five hybrid MPI/OpenMP scientific applications (two NAS Parallel Benchmarks Multi-Zone codes, SP-MZ and BT-MZ; an earthquake simulation, PEQdyna; an aerospace application, PMLB; and a 3D particle-in-cell application, GTC) on a large-scale multithreaded Blue Gene/Q supercomputer at Argonne National Laboratory, and quantify the performance gap resulting from using different numbers of threads per node. We use performance tools and MPI profile and trace libraries available on the supercomputer to analyze and compare the performance of these hybrid scientific applications as the number of OpenMP threads per node increases, and find that increasing the number of threads beyond some point saturates or worsens performance of these hybrid applications. For the strong-scaling hybrid scientific applications such as SP-MZ, BT-MZ, PEQdyna and PMLB, using 32 threads per node results in much better application efficiency than using 64 threads per node; as the number of threads per node increases, the FPU (floating-point unit) percentage decreases, while the MPI percentage (except for PMLB) and IPC (instructions per cycle) per core (except for BT-MZ) increase. For the weak-scaling hybrid scientific application GTC, the performance trend (relative speedup) is very similar as the number of threads per node increases, no matter how many nodes (32, 128, 512) are used. © 2013 IEEE.

  5. Research to application: Supercomputing trends for the 90's - Opportunities for interdisciplinary computations

    International Nuclear Information System (INIS)

    Shankar, V.

    1991-01-01

    The progression of supercomputing is reviewed from the point of view of computational fluid dynamics (CFD), and multidisciplinary problems impacting the design of advanced aerospace configurations are addressed. The application of full potential and Euler equations to transonic and supersonic problems in the 70s and early 80s is outlined, along with Navier-Stokes computations widespread during the late 80s and early 90s. Multidisciplinary computations currently in progress are discussed, including CFD and aeroelastic coupling for both static and dynamic flexible computations, CFD, aeroelastic, and controls coupling for flutter suppression and active control, and the development of a computational electromagnetics technology based on CFD methods. Attention is given to computational challenges standing in the way of establishing a computational environment that integrates many technologies. 40 refs

  6. Continuous catchment-scale monitoring of geomorphic processes with a 2-D seismological array

    Science.gov (United States)

    Burtin, A.; Hovius, N.; Milodowski, D.; Chen, Y.-G.; Wu, Y.-M.; Lin, C.-W.; Chen, H.

    2012-04-01

    The monitoring of geomorphic processes during extreme climatic events is of primary interest for estimating their impact on landscape dynamics. However, available techniques for surveying surface activity do not provide adequate temporal and/or spatial resolution. Furthermore, these methods hardly investigate the dynamics of the events, since their detection is made a posteriori. To increase our knowledge of landscape evolution and the influence of extreme climatic events on catchment dynamics, we need to develop new tools and procedures. Many past works have shown that seismic signals are relevant for detecting and locating surface processes (landslides, debris flows). During the 2010 typhoon season, we deployed a network of 12 seismometers dedicated to monitoring the surface processes of the Chenyoulan catchment in Taiwan. We test the ability of a two-dimensional array with small inter-station distances (~11 km) to map the geomorphic activity continuously and at catchment scale. The spectral analysis of continuous records shows high-frequency (>1 Hz) seismic energy that is coherent with the occurrence of hillslope and river processes. Using a basic detection algorithm and a location approach based on the analysis of seismic amplitudes, we manage to locate the catchment activity. We mainly observe short-duration events (>300 occurrences) associated with debris falls and bank collapses during daily convective storms, where 69% of occurrences are coherent with the time distribution of precipitation. We also identify a couple of debris flows during a large tropical storm. In contrast, the FORMOSAT imagery does not detect any activity, which somehow reflects the lack of extreme climatic conditions during the experiment. However, high resolution pictures confirm the existence of links between most geomorphic events and existing structures (landslide scars, gullies...). We thus conclude that the activity is dominated by reactivation processes. It
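
    A basic amplitude-based detection algorithm of the kind described, flagging transient energy against the background level, is classically implemented as a short-term-average / long-term-average (STA/LTA) trigger. The sketch below is a generic stdlib illustration of that idea, not the authors' specific detector; the window lengths and threshold are invented.

```python
def sta_lta(x, n_sta, n_lta):
    # Ratio of the short-term average to the long-term average of the
    # absolute amplitude; a transient event (debris fall, bank collapse)
    # pushes the ratio above a trigger threshold.
    ratio = []
    for i in range(n_lta, len(x) + 1):
        sta = sum(abs(v) for v in x[i - n_sta:i]) / n_sta
        lta = sum(abs(v) for v in x[i - n_lta:i]) / n_lta
        ratio.append(sta / lta if lta > 0 else 0.0)
    return ratio

signal = [1.0] * 200
signal[150:160] = [10.0] * 10          # a short high-amplitude event
trig = sta_lta(signal, n_sta=5, n_lta=50)
```

    During steady background the ratio stays near 1; the short burst drives it well above a typical trigger threshold (e.g. 3).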

  7. Flexible eddy current coil arrays

    International Nuclear Information System (INIS)

    Krampfner, Y.; Johnson, D.P.

    1987-01-01

    A novel approach was devised to overcome certain limitations of conventional eddy current testing. The typical single-element hand-wound probe was replaced with a two-dimensional array of spirally wound probe elements deposited on a thin, flexible polyimide substrate. This provides full and reliable coverage of the test area and eliminates the need for scanning. The flexible substrate construction of the array allows the probes to conform to irregular part geometries, such as turbine blades and tubing, thereby eliminating the need for specialized probes for each geometry. Additionally, the batch manufacturing process of the array can yield highly uniform and reproducible coil geometries. The array is driven by a portable computer-based eddy current instrument, smartEDDY™, capable of two-frequency operation, which offers a great deal of versatility and flexibility due to its software-based architecture. The array is coupled to the instrument via an 80-switch multiplexer that can be configured to address up to 1600 probes. The individual array elements may be addressed in any desired sequence, as defined by the software.

  8. High performance adaptive image processing on multi-scale hybrid architectures

    NARCIS (Netherlands)

    Liu, Fangbin

    2015-01-01

    In this age of information explosion, huge amounts of visual data are produced continuously, 24 hours a day, 7 days a week, in both daily life and scientific research. Processing and storing such huge amounts of data pose big challenges. Use of supercomputers tackles the need-for-speed challenge

  9. Uniform illumination rendering using an array of LEDs: a signal processing perspective

    NARCIS (Netherlands)

    Yang, Hongming; Bergmans, J.W.M.; Schenk, T.C.W.; Linnartz, J.P.M.G.; Rietman, R.

    2009-01-01

    An array of a large number of LEDs will be widely used in future indoor illumination systems. In this paper, we investigate the problem of rendering uniform illumination by a regular LED array on the ceiling of a room. We first present two general results on the scaling property of the basic
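
    The rendering problem can be made concrete with the standard Lambertian LED model: the illuminance at a floor point is the sum over LEDs of cos^m(θ)·cosθ/d² (up to the luminous-intensity constant). The stdlib sketch below is a generic illustration of that model; the array layout and Lambertian order are assumptions, not values from the paper.

```python
import math

def illuminance(point, leds, height, m=1.0):
    # Sum Lambertian contributions from each ceiling LED at a floor point:
    #   E ∝ cos(theta)^m * cos(theta) / d^2, with cos(theta) = h / d.
    x, y = point
    total = 0.0
    for lx, ly in leds:
        d = math.sqrt((x - lx) ** 2 + (y - ly) ** 2 + height ** 2)
        cos_t = height / d
        total += (cos_t ** m) * cos_t / (d * d)
    return total

leds = [(i, j) for i in (-1.0, 1.0) for j in (-1.0, 1.0)]  # toy 2x2 array
center = illuminance((0.0, 0.0), leds, height=2.0)
corner = illuminance((3.0, 3.0), leds, height=2.0)
```

    The uniformity question is then how to choose the LED grid spacing (and possibly per-LED dimming) so that this sum varies as little as possible across the work plane.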

  10. Car2x with software defined networks, network functions virtualization and supercomputers technical and scientific preparations for the Amsterdam Arena telecoms fieldlab

    NARCIS (Netherlands)

    Meijer R.J.; Cushing R.; De Laat C.; Jackson P.; Klous S.; Koning R.; Makkes M.X.; Meerwijk A.

    2015-01-01

    In the invited talk 'Car2x with SDN, NFV and supercomputers' we report on how our past work with SDN [1, 2] allows the design of a smart mobility fieldlab in the huge parking lot of the Amsterdam Arena. We explain how we can engineer and test software that handles the complex conditions of the Car2X

  11. Chunking of Large Multidimensional Arrays

    Energy Technology Data Exchange (ETDEWEB)

    Rotem, Doron; Otoo, Ekow J.; Seshadri, Sridhar

    2007-02-28

    Data intensive scientific computations, as well as on-line analytical processing applications, are done on very large datasets that are modeled as k-dimensional arrays. The storage organization of such arrays on disks is done by partitioning the large global array into fixed size hyper-rectangular sub-arrays called chunks or tiles that form the units of data transfer between disk and memory. Typical queries involve the retrieval of sub-arrays in a manner that accesses all chunks that overlap the query results. An important metric of the storage efficiency is the expected number of chunks retrieved over all such queries. The question that immediately arises is "what shapes of array chunks give the minimum expected number of chunks over a query workload?" In this paper we develop two probabilistic mathematical models of the problem and provide exact solutions using steepest descent and geometric programming methods. Experimental results, using synthetic workloads on real life data sets, show that our chunking is much more efficient than the existing approximate solutions.
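
    The metric being optimized has a simple closed form in the basic case: for a range query of extent q along a dimension with chunk side c and a uniformly random query position, the query overlaps (q − 1)/c + 1 chunks on average along that dimension, and dimensions multiply. This stdlib sketch uses that standard formula (a simplification of the paper's full workload models) to show why chunk shape matters.

```python
import math

def expected_chunks(query_shape, chunk_shape):
    # Expected number of chunks a hyper-rectangular range query touches,
    # assuming a uniformly random query position: along each dimension a
    # range of extent q over chunks of side c overlaps (q - 1)/c + 1
    # chunks on average; independent dimensions multiply.
    return math.prod((q - 1) / c + 1 for q, c in zip(query_shape, chunk_shape))

# Two chunk shapes of equal volume (64 cells) under an anisotropic
# query workload of shape 100 x 4:
square = expected_chunks((100, 4), (8, 8))    # ~18.4 chunks per query
matched = expected_chunks((100, 4), (32, 2))  # ~10.2 chunks per query
```

    Elongating the chunks to match the query anisotropy nearly halves the expected I/O, which is the effect the paper's optimization exploits.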

  12. Arraying proteins by cell-free synthesis.

    Science.gov (United States)

    He, Mingyue; Wang, Ming-Wei

    2007-10-01

    Recent advances in life science have led to great motivation for the development of protein arrays to study functions of genome-encoded proteins. While traditional cell-based methods have been commonly used for generating protein arrays, they are usually a time-consuming process with a number of technical challenges. Cell-free protein synthesis offers an attractive system for making protein arrays: not only does it rapidly convert the genetic information into functional proteins without the need for DNA cloning, but it also presents a flexible environment amenable to production of folded proteins or proteins with defined modifications. Recent advancements have made it possible to rapidly generate protein arrays from PCR DNA templates through parallel on-chip protein synthesis. This article reviews current cell-free protein array technologies and their proteomic applications.

  13. Array processors based on Gaussian fraction-free method

    Energy Technology Data Exchange (ETDEWEB)

    Peng, S; Sedukhin, S [Aizu Univ., Aizuwakamatsu, Fukushima (Japan); Sedukhin, I

    1998-03-01

    The design of algorithmic array processors for solving linear systems of equations using fraction-free Gaussian elimination method is presented. The design is based on a formal approach which constructs a family of planar array processors systematically. These array processors are synthesized and analyzed. It is shown that some array processors are optimal in the framework of linear allocation of computations and in terms of number of processing elements and computing time. (author)
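
    The fraction-free (Bareiss) elimination at the heart of these designs keeps every intermediate value an integer, because each division by the previous pivot is exact. A scalar stdlib sketch of the update rule follows (the array processors pipeline these same update steps across processing elements); the example matrix is invented.

```python
def bareiss(M):
    # One-step fraction-free Gaussian elimination on an integer matrix
    # (assumes nonzero leading pivots). The division by the previous
    # pivot is always exact, so no fractions ever appear; the final
    # entry A[n-1][n-1] equals det(M).
    A = [row[:] for row in M]
    n = len(A)
    prev = 1
    for k in range(n - 1):
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                A[i][j] = (A[i][j] * A[k][k] - A[i][k] * A[k][j]) // prev
            A[i][k] = 0
        prev = A[k][k]
    return A

det = bareiss([[2, 1, 1], [1, 3, 2], [1, 0, 0]])[-1][-1]
```

    Staying in exact integer arithmetic avoids both rational-number bookkeeping and floating-point rounding, which is what makes the method attractive for systolic hardware.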

  14. Co-Prime Frequency and Aperture Design for HF Surveillance, Wideband Radar Imaging, and Nonstationary Array Processing

    Science.gov (United States)

    2018-03-10


  15. Upside to downsizing : Acceleware's graphic processor technology propels seismic data processing revolution

    Energy Technology Data Exchange (ETDEWEB)

    Smith, M.

    2009-11-15

    Acceleware has developed graphics processing unit (GPU) technology that is transforming the petroleum industry. The benefits of the technology are its small footprint, low wattage, and high speed. The software brings supercomputing speed to the desktop by leveraging the massive parallel processing capacity of the very latest in GPU technology. This article discussed the GPU technology and its emergence as a powerful supercomputing tool. Acceleware's partnership with California-based NVIDIA was also outlined. The advantages of the technology were also discussed, including its smaller footprint. Acceleware's hardware takes up a fraction of the space and uses up to 70 per cent less power than a traditional central processing unit. By combining Acceleware's core knowledge in making complex algorithms run in parallel with an in-house team of seismic industry experts, the company provides software solutions for seismic data processors that access the massively parallel processing capabilities of GPUs. 1 fig.

  16. Maximum-likelihood methods for array processing based on time-frequency distributions

    Science.gov (United States)

    Zhang, Yimin; Mu, Weifeng; Amin, Moeness G.

    1999-11-01

    This paper proposes a novel time-frequency maximum likelihood (t-f ML) method for direction-of-arrival (DOA) estimation for non-stationary signals, and compares this method with conventional maximum likelihood DOA estimation techniques. Time-frequency distributions localize the signal power in the time-frequency domain, and as such enhance the effective SNR, leading to improved DOA estimation. The localization of signals with different t-f signatures permits the division of the time-frequency domain into smaller regions, each containing fewer signals than those incident on the array. The reduction of the number of signals within different time-frequency regions not only reduces the required number of sensors, but also decreases the computational load in multi-dimensional optimizations. Compared to the recently proposed time-frequency MUSIC (t-f MUSIC), the proposed t-f ML method can be applied in coherent environments, without the need to perform any type of preprocessing that is subject to both array geometry and array aperture.
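
    The premise, that a time-frequency distribution separates sources with distinct t-f signatures into disjoint regions, can be seen with a plain spectrogram. The paper works with dedicated time-frequency distributions; the STFT below is a simpler stand-in, and the two-tone test signal is invented.

```python
import numpy as np

def stft_power(x, n_win, hop):
    # Magnitude-squared short-time Fourier transform: localizes signal
    # power in the time-frequency plane, one Hann-windowed frame per row.
    frames = [x[i:i + n_win] * np.hanning(n_win)
              for i in range(0, len(x) - n_win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)) ** 2

fs = 1000.0
t = np.arange(1000) / fs
# Two "sources" with disjoint t-f signatures: 50 Hz early, 200 Hz late.
x = (np.sin(2 * np.pi * 50 * t) * (t < 0.5)
     + np.sin(2 * np.pi * 200 * t) * (t >= 0.5))
P = stft_power(x, n_win=128, hop=64)
```

    The dominant bin of the first frame sits near 50·128/1000 ≈ 6 and that of the last frame near 200·128/1000 ≈ 26: the two signals occupy separate t-f regions, so each region contains fewer signals than are incident on the array as a whole.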

  17. Analyzing CMOS/SOS fabrication for LSI arrays

    Science.gov (United States)

    Ipri, A. C.

    1978-01-01

    The report discusses a set of design rules that have been developed as a result of work with test arrays. A set of optimum dimensions is given that would maximize process output and correspondingly minimize costs in the fabrication of large-scale integration (LSI) arrays.

  18. A user-friendly web portal for T-Coffee on supercomputers

    Directory of Open Access Journals (Sweden)

    Koetsier Jos

    2011-05-01

    Full Text Available Abstract Background Parallel T-Coffee (PTC) was the first parallel implementation of the T-Coffee multiple sequence alignment tool. It is based on MPI and RMA mechanisms. Its purpose is to reduce the execution time of large-scale sequence alignments. It can be run on distributed-memory clusters, allowing users to align data sets consisting of hundreds of proteins within a reasonable time. However, most of the potential users of this tool are not familiar with the use of grids or supercomputers. Results In this paper we show how PTC can be easily deployed and controlled on a supercomputer architecture using a web portal developed using Rapid. Rapid is a tool for efficiently generating standardized portlets for a wide range of applications, and the approach described here is generic enough to be applied to other applications, or to deploy PTC on different HPC environments. Conclusions The PTC portal allows users to upload a large number of sequences to be aligned by the parallel version of T-Coffee that cannot be aligned by a single machine due to memory and execution time constraints. The web portal provides a user-friendly solution.

  19. Proceedings of the Adaptive Sensor Array Processing Workshop (12th) Held in Lexington, MA on 16-18 March 2004 (CD-ROM)

    National Research Council Canada - National Science Library

    James, F

    2004-01-01

    ...: The twelfth annual workshop on Adaptive Sensor Array Processing presented a diverse agenda featuring new work on adaptive methods for communications, radar and sonar, algorithmic challenges posed...

  20. Applying the sequential neural-network approximation and orthogonal array algorithm to optimize the axial-flow cooling system for rapid thermal processes

    International Nuclear Information System (INIS)

    Hung, Shih-Yu; Shen, Ming-Ho; Chang, Ying-Pin

    2009-01-01

    The sequential neural-network approximation and orthogonal array (SNAOA) method was used to shorten the cooling time for the rapid cooling process such that the normalized maximum resolved stress in the silicon wafer always remained below one. An orthogonal array was first conducted to obtain the initial solution set, which was treated as the initial training sample. Next, a back-propagation sequential neural network was trained to simulate the feasible domain and obtain the optimal parameter setting. The size of the training sample was greatly reduced by the use of the orthogonal array. In addition, a restart strategy was incorporated into the SNAOA so that the search process has a better opportunity to reach a near-global optimum. In this work, we considered three cooling control schemes for the rapid thermal process: (1) a downward axial gas flow cooling scheme; (2) an upward axial gas flow cooling scheme; (3) a dual axial gas flow cooling scheme. Based on the maximum shear stress failure criterion, the other control factors, such as flow rate, inlet diameter, outlet width, chamber height and chamber diameter, were also examined with respect to cooling time. The results showed that the cooling time could be significantly reduced using the SNAOA approach.
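    The SNAOA loop described above (sample with an orthogonal array, fit a surrogate, then search the surrogate for the best setting) can be sketched in a few lines. This toy version uses a Taguchi L9 array and a quadratic least-squares surrogate standing in for the paper's back-propagation network; the cooling-time function is a made-up smooth response, not the thermal model from the study.

```python
import numpy as np

# Taguchi L9(3^3) orthogonal array: 9 runs covering 3 factors at 3 levels.
L9 = np.array([
    [0, 0, 0], [0, 1, 1], [0, 2, 2],
    [1, 0, 1], [1, 1, 2], [1, 2, 0],
    [2, 0, 2], [2, 1, 0], [2, 2, 1],
])
levels = np.linspace(-1.0, 1.0, 3)           # coded factor levels

def cooling_time(x):
    # Hypothetical smooth response standing in for the simulated cooling time.
    return 10.0 + (x[0] - 0.3)**2 + 2*(x[1] + 0.2)**2 + 0.5*(x[2] - 0.1)**2

X = levels[L9]                               # the 9 sampled design points
y = np.array([cooling_time(x) for x in X])

# Fit a quadratic surrogate (stand-in for the neural-network approximation).
def features(X):
    return np.column_stack([np.ones(len(X)), X, X**2])

coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)

# Search the surrogate on a dense grid for the best parameter setting.
g = np.linspace(-1, 1, 21)
grid = np.array(np.meshgrid(g, g, g)).reshape(3, -1).T
best = grid[np.argmin(features(grid) @ coef)]
print(best)
```

Because the toy response is itself quadratic, the surrogate search lands on the true optimum; with a real simulation the fit is approximate, which is why the paper adds a restart strategy.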

  1. Mosaic Process for the Fabrication of an Acoustic Transducer Array

    National Research Council Canada - National Science Library

    2005-01-01

    .... Deriving a geometric shape for the array based on the established performance level. Selecting piezoceramic materials based on considerations related to the performance level and derived geometry...

  2. A semi-automatic calibration method for seismic arrays applied to an Alaskan array

    Science.gov (United States)

    Lindquist, K. G.; Tibuleac, I. M.; Hansen, R. A.

    2001-12-01

    Well-calibrated, small (less than 22 km) aperture seismic arrays are of great importance for event location and characterization. We have implemented the cross-correlation method of Tibuleac and Herrin (Seis. Res. Lett. 1997) as a semi-automatic procedure, applicable to any seismic array. With this we are able to process thousands of phases with several days of computer time on a Sun Blade 1000 workstation. Complicated geology beneath the elements and elevation differences among the array stations made station corrections necessary. A total of 328 core phases (including PcP, PKiKP, PKP, PKKP) were used to determine the static corrections. To demonstrate this application and method, we have analyzed P and PcP arrivals at the ILAR array (Eielson, Alaska) between the years 1995-2000. The arrivals were picked by the PIDC for events (mb>4.0) well located by the USGS. We calculated backazimuth and horizontal velocity residuals for all events and observed large backazimuth residuals for regional and near-regional phases. We discuss the possibility of a dipping Moho (strike E-W, dip N) beneath the array versus other local structure that would produce the residuals.
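    The core of the cross-correlation calibration step above, estimating the relative arrival time of a phase at two array elements from the lag of the cross-correlation peak, can be sketched as follows. The pulse shape, sample rate, and noise level are illustrative inventions, not values from the ILAR data.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 40.0                                    # sample rate, Hz

# Synthetic P-wave pulse, arriving 0.5 s later at the second station.
t = np.arange(0, 30, 1/fs)
pulse = np.exp(-((t - 10.0)/0.3)**2) * np.sin(2*np.pi*2.0*(t - 10.0))
delay_samples = 20                           # true inter-station delay (0.5 s)
a = pulse + 0.02*rng.standard_normal(t.size)
b = np.roll(pulse, delay_samples) + 0.02*rng.standard_normal(t.size)

# Full cross-correlation; the lag of its peak is the relative arrival time.
xc = np.correlate(b, a, mode="full")
lag = np.argmax(xc) - (a.size - 1)
print(lag / fs)                              # estimated delay in seconds
```

Repeating this pick over many phases and stations, then averaging the residuals per element, yields the static station corrections the abstract refers to.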

  3. The Owens Valley Millimeter Array

    International Nuclear Information System (INIS)

    Padin, S.; Scott, S.L.; Woody, D.P.; Scoville, N.Z.; Seling, T.V.

    1991-01-01

    The telescopes and signal processing systems of the Owens Valley Millimeter Array are considered, and improvements in the sensitivity and stability of the instrument are characterized. The instrument can be applied to map sources in the 85 to 115 GHz and 218 to 265 GHz bands with a resolution of about 1 arcsec in the higher frequency band. The operation of the array is fully automated. The current scientific programs for the array encompass high-resolution imaging of protoplanetary/protostellar disk structures, observations of molecular cloud complexes associated with spiral structure in nearby galaxies, and observations of molecular structures in the nuclei of spiral and luminous IRAS galaxies. 9 refs

  4. Fabrication of CoZn alloy nanowire arrays: Significant improvement in magnetic properties by annealing process

    Energy Technology Data Exchange (ETDEWEB)

    Koohbor, M. [Department of Physics, University of Kurdistan, Sanandaj (Iran, Islamic Republic of); Soltanian, S., E-mail: s.soltanian@gmail.com [Department of Physics, University of Kurdistan, Sanandaj (Iran, Islamic Republic of); Department of Electrical and Computer Engineering, University of British Columbia, Vancouver (Canada); Najafi, M. [Department of Physics, University of Kurdistan, Sanandaj (Iran, Islamic Republic of); Department of Physics, Hamadan University of Technology, Hamadan (Iran, Islamic Republic of); Servati, P. [Department of Electrical and Computer Engineering, University of British Columbia, Vancouver (Canada)

    2012-01-05

    Highlights: ► Increasing the Zn concentration changes the structure of NWs from hcp to amorphous. ► Increasing the Zn concentration significantly reduces the Hc value of NWs. ► Magnetic properties of CoZn NWs can be significantly enhanced by appropriate annealing. ► The pH of the electrolyte has no significant effect on the properties of the NW arrays. ► Deposition frequency has considerable effects on the magnetic properties of NWs. - Abstract: Highly ordered arrays of Co1-xZnx (0 ≤ x ≤ 0.74) nanowires (NWs) with diameters of ~35 nm and high length-to-diameter ratios (up to 150) were fabricated by co-electrodeposition of Co and Zn into the pores of anodized aluminum oxide (AAO) templates. The Co and Zn contents of the NWs were adjusted by varying the ratio of Zn and Co ion concentrations in the electrolyte. The effects of the Zn content, electrodeposition conditions (frequency and pH) and annealing on the structural and magnetic properties (e.g., coercivity (Hc) and squareness (Sq)) of the NW arrays were investigated using X-ray diffraction (XRD), scanning electron microscopy, electron diffraction, and an alternating gradient force magnetometer (AGFM). XRD patterns reveal that an increase in the concentration of Zn ions in the electrolyte forces the hcp crystal structure of the Co NWs to change into an amorphous phase, resulting in a significant reduction in Hc. It was found that the magnetic properties of the NWs can be significantly improved by an appropriate annealing process. The highest values of Hc (2050 Oe) and Sq (0.98) were obtained for NWs electrodeposited using 0.95/0.05 Co:Zn concentrations at 200 Hz and annealed at 575 °C. While the pH of the electrolyte is found to have no significant effect on the structural and magnetic properties of the NW arrays, the electrodeposition frequency has considerable effects on

  5. Portable implementation model for CFD simulations. Application to hybrid CPU/GPU supercomputers

    Science.gov (United States)

    Oyarzun, Guillermo; Borrell, Ricard; Gorobets, Andrey; Oliva, Assensi

    2017-10-01

    Nowadays, high performance computing (HPC) systems experience a disruptive moment with a variety of novel architectures and frameworks, without any clarity of which one is going to prevail. In this context, the portability of codes across different architectures is of major importance. This paper presents a portable implementation model based on an algebraic operational approach for direct numerical simulation (DNS) and large eddy simulation (LES) of incompressible turbulent flows using unstructured hybrid meshes. The strategy proposed consists in representing the whole time-integration algorithm using only three basic algebraic operations: sparse matrix-vector product, a linear combination of vectors and dot product. The main idea is based on decomposing the nonlinear operators into a concatenation of two SpMV operations. This provides high modularity and portability. An exhaustive analysis of the proposed implementation for hybrid CPU/GPU supercomputers has been conducted with tests using up to 128 GPUs. The main objective consists in understanding the challenges of implementing CFD codes on new architectures.
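    The three-kernel decomposition described above can be illustrated on a toy problem: a 1-D periodic diffusion operator stands in for the paper's unstructured-mesh operators, and one explicit time-integration loop is expressed purely through SpMV, a linear combination of vectors (axpy), and a dot product, the three portable building blocks the paper identifies.

```python
import numpy as np
import scipy.sparse as sp

# 1-D periodic diffusion stencil as a sparse matrix (a stand-in for the
# unstructured-mesh operators in the paper).
n = 64
L = sp.diags([np.ones(n-1), -2.0*np.ones(n), np.ones(n-1)],
             [-1, 0, 1], format="csr").tolil()
L[0, -1] = 1.0
L[-1, 0] = 1.0                               # periodic wrap-around
L = L.tocsr()

dx = 1.0/n
nu = 0.001
dt = 0.2*dx*dx/nu                            # stable explicit step

u = np.sin(2*np.pi*np.arange(n)*dx)

# Explicit Euler time integration written only with the three kernels:
# SpMV (L @ u), axpy (u + alpha*y), and dot (for a diagnostic norm).
for _ in range(100):
    y = L @ u                                # sparse matrix-vector product
    u = u + (nu*dt/dx**2)*y                  # linear combination of vectors
energy = np.dot(u, u)                        # dot product
print(energy)
```

Because every step reduces to these kernels, porting the solver to a GPU amounts to swapping in device implementations of SpMV, axpy, and dot, which is exactly the portability argument made in the abstract.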

  6. ESPRIT And Uniform Linear Arrays

    Science.gov (United States)

    Roy, R. H.; Goldburg, M.; Ottersten, B. E.; Swindlehurst, A. L.; Viberg, M.; Kailath, T.

    1989-11-01

    ESPRIT is a recently developed and patented technique for high-resolution estimation of signal parameters. It exploits an invariance structure designed into the sensor array to achieve a reduction in computational requirements of many orders of magnitude over previous techniques such as MUSIC, Burg's MEM, and Capon's ML, and in addition achieves performance improvement as measured by parameter estimate error variance. It is also manifestly more robust with respect to sensor errors (e.g. gain, phase, and location errors) than other methods. Whereas ESPRIT requires only that the sensor array possess a single invariance, best visualized by considering two identical but otherwise arbitrary arrays of sensors displaced (but not rotated) with respect to each other, many arrays currently in use in various applications are uniform linear arrays of identical sensor elements. Phased array radars are commonplace in high-resolution direction finding systems, and uniform tapped delay lines (i.e., constant rate A/D converters) are the rule rather than the exception in digital signal processing systems. Such arrays possess many invariances and are amenable to other types of analysis, which is one of the main reasons such structures are so prevalent. Recent developments in high-resolution algorithms of the signal/noise subspace genre, including total least squares (TLS) ESPRIT applied to uniform linear arrays, are summarized. ESPRIT is also shown to be a generalization of the root-MUSIC algorithm (applicable only to the case of uniform linear arrays of omnidirectional sensors and unimodular cisoids). Comparisons with various estimator bounds, including Cramér-Rao bounds, are presented.
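    The single-invariance idea for a uniform linear array can be sketched with a minimal least-squares ESPRIT (the TLS refinement discussed in the abstract is omitted for brevity); the array size, spacing, source angles, and noise level below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 8, 200                    # sensors, snapshots
d = 0.5                          # element spacing in wavelengths
true_deg = np.array([-20.0, 30.0])

# Narrowband ULA snapshots: steering matrix times random source signals.
k = 2*np.pi*d*np.sin(np.deg2rad(true_deg))
A = np.exp(1j*np.outer(np.arange(M), k))
S = (rng.standard_normal((2, N)) + 1j*rng.standard_normal((2, N)))/np.sqrt(2)
X = A @ S + 0.01*(rng.standard_normal((M, N)) + 1j*rng.standard_normal((M, N)))

# Signal subspace from the sample covariance matrix.
R = X @ X.conj().T / N
w, V = np.linalg.eigh(R)
Es = V[:, -2:]                   # eigenvectors of the 2 largest eigenvalues

# ESPRIT invariance: the two subarrays (rows 0..M-2 and 1..M-1) are related
# by a rotation whose eigenvalues encode the arrival angles.
Phi = np.linalg.lstsq(Es[:-1], Es[1:], rcond=None)[0]   # LS ESPRIT
eig = np.linalg.eigvals(Phi)
est = np.rad2deg(np.arcsin(np.angle(eig)/(2*np.pi*d)))
print(np.sort(est))
```

Note that no angular search is performed: the two arrival directions drop out of an eigendecomposition, which is the computational saving over MUSIC-style spectral scans the abstract emphasizes.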

  7. Big Data Challenges for Large Radio Arrays

    Science.gov (United States)

    Jones, Dayton L.; Wagstaff, Kiri; Thompson, David; D'Addario, Larry; Navarro, Robert; Mattmann, Chris; Majid, Walid; Lazio, Joseph; Preston, Robert; Rebbapragada, Umaa

    2012-01-01

    Future large radio astronomy arrays, particularly the Square Kilometre Array (SKA), will be able to generate data at rates far higher than can be analyzed or stored affordably with current practices. This is, by definition, a "big data" problem, and requires an end-to-end solution if future radio arrays are to reach their full scientific potential. Similar data processing, transport, storage, and management challenges face next-generation facilities in many other fields.

  8. Process-morphology scaling relations quantify self-organization in capillary densified nanofiber arrays.

    Science.gov (United States)

    Kaiser, Ashley L; Stein, Itai Y; Cui, Kehang; Wardle, Brian L

    2018-02-07

    Capillary-mediated densification is an inexpensive and versatile approach to tune the application-specific properties and packing morphology of bulk nanofiber (NF) arrays, such as aligned carbon nanotubes. While NF length governs elasto-capillary self-assembly, the geometry of cellular patterns formed by capillary densified NFs cannot be precisely predicted by existing theories. This originates from the NF array effective axial elastic modulus (E), recently quantified to be orders of magnitude lower than expected, and here we show via parametric experimentation and modeling that E determines the width, area, and wall thickness of the resulting cellular pattern. Both experiments and models show that further tuning of the cellular pattern is possible by altering the NF-substrate adhesion strength, which could enable the broad use of this facile approach to predictably pattern NF arrays for high-value applications.

  9. Post-Irradiation Examination of Array Targets - Part I

    Energy Technology Data Exchange (ETDEWEB)

    Icenhour, A.S.

    2004-01-23

    During FY 2001, two arrays, each containing seven neptunium-loaded targets, were irradiated at the Advanced Test Reactor in Idaho to examine the influence of multi-target self-shielding on ²³⁶Pu content and to evaluate fission product release data. One array consisted of seven targets that contained 10 vol% NpO₂ pellets, while the other array consisted of seven targets that contained 20 vol% NpO₂ pellets. The arrays were located in the same irradiation facility but were axially separated to minimize the influence of one array on the other. Each target also contained a dosimeter package, which consisted of a small NpO₂ wire inside a vanadium container. After completion of irradiation and shipment back to the Oak Ridge National Laboratory, nine of the targets (four from the 10 vol% array and five from the 20 vol% array) were punctured for pressure measurement and measurement of ⁸⁵Kr. These nine targets and the associated dosimeters were then chemically processed to measure the residual neptunium, total plutonium production, ²³⁸Pu production, and ²³⁶Pu concentration at discharge. The amount and isotopic composition of fission products were also measured. This report provides the results of the processing and analysis of the nine targets.

  10. Nanofabrication and characterization of ZnO nanorod arrays and branched microrods by aqueous solution route and rapid thermal processing

    International Nuclear Information System (INIS)

    Lupan, Oleg; Chow, Lee; Chai, Guangyu; Roldan, Beatriz; Naitabdi, Ahmed; Schulte, Alfons; Heinrich, Helge

    2007-01-01

    This paper presents an inexpensive and fast fabrication method for one-dimensional (1D) ZnO nanorod arrays and branched two-dimensional (2D), three-dimensional (3D) - nanoarchitectures. Our synthesis technique includes the use of an aqueous solution route and post-growth rapid thermal annealing. It permits rapid and controlled growth of ZnO nanorod arrays of 1D - rods, 2D - crosses, and 3D - tetrapods without the use of templates or seeds. The obtained ZnO nanorods are uniformly distributed on the surface of Si substrates and individual or branched nano/microrods can be easily transferred to other substrates. Process parameters such as concentration, temperature and time, type of substrate and the reactor design are critical for the formation of nanorod arrays with thin diameter and transferable nanoarchitectures. X-ray diffraction, scanning electron microscopy, X-ray photoelectron spectroscopy, transmission electron microscopy and Micro-Raman spectroscopy have been used to characterize the samples

  11. Two-dimensional random arrays for real time volumetric imaging

    DEFF Research Database (Denmark)

    Davidsen, Richard E.; Jensen, Jørgen Arendt; Smith, Stephen W.

    1994-01-01

    Two-dimensional arrays are necessary for a variety of ultrasonic imaging techniques, including elevation focusing, 2-D phase aberration correction, and real time volumetric imaging. In order to reduce system cost and complexity, sparse 2-D arrays have been considered with element geometries selected ad hoc, by algorithm, or by random process. Two random sparse array geometries and a sparse array with a Mills cross receive pattern were simulated and compared to a fully sampled aperture with the same overall dimensions. The sparse arrays were designed to the constraints of the Duke University real time volumetric imaging system, which employs a wide transmit beam and receive mode parallel processing to increase image frame rate. Depth-of-field comparisons were made from simulated on-axis and off-axis beamplots at ranges from 30 to 160 mm for both coaxial and offset transmit and receive...
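    The kind of beamplot comparison described above can be sketched for a single azimuth cut: a randomly thinned aperture is evaluated against the fully sampled one through the far-field array factor. The aperture size, pitch, and thinning fraction here are illustrative, not the Duke system's parameters.

```python
import numpy as np

rng = np.random.default_rng(2)
pitch = 0.5                                  # element pitch in wavelengths
nx = ny = 16                                 # fully sampled 16x16 aperture

xx, yy = np.meshgrid(np.arange(nx), np.arange(ny))
pos = pitch*np.column_stack([xx.ravel(), yy.ravel()])

# Random sparse geometry: keep roughly 25% of the elements.
mask = rng.random(pos.shape[0]) < 0.25
sparse_pos = pos[mask]

def beam(positions, sin_theta):
    # Far-field array factor along one azimuth cut, normalized to broadside.
    phase = 2j*np.pi*np.outer(sin_theta, positions[:, 0])
    p = np.abs(np.exp(phase) @ np.ones(positions.shape[0]))
    return p / positions.shape[0]

st = np.linspace(-1, 1, 401)                 # sin(theta) sweep
full = beam(pos, st)
sparse = beam(sparse_pos, st)
print(full.max(), sparse.max())
```

Both patterns peak at broadside by construction; what a study like the one above quantifies is the raised sidelobe pedestal of the thinned geometry, the price paid for the reduced channel count.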

  12. EDF's experience with supercomputing and challenges ahead - towards multi-physics and multi-scale approaches

    International Nuclear Information System (INIS)

    Delbecq, J.M.; Banner, D.

    2003-01-01

    Nuclear power plants are a major asset of the EDF company. To remain so, in particular in a context of deregulation, competitiveness, safety and public acceptance are three conditions. These stakes apply both to existing plants and to future reactors. The purpose of the presentation is to explain how supercomputing can help EDF to satisfy these requirements. Three examples are described in detail: ensuring optimal use of nuclear fuel under wholly safe conditions, understanding and simulating the material deterioration mechanisms and moving forward with numerical simulation for the performance of EDF's activities. In conclusion, a broader vision of EDF long term R and D in the field of numerical simulation is given and especially of five challenges taken up by EDF together with its industrial and scientific partners. (author)

  13. Study and Design of Differential Microphone Arrays

    CERN Document Server

    Benesty, Jacob

    2013-01-01

    Microphone arrays have attracted a lot of interest over the last few decades since they have the potential to solve many important problems such as noise reduction/speech enhancement, source separation, dereverberation, spatial sound recording, and source localization/tracking, to name a few. However, the design and implementation of microphone arrays with beamforming algorithms is not a trivial task when it comes to processing broadband signals such as speech. Indeed, in most sensor arrangements, the beamformer tends to have a frequency-dependent response. One exception, perhaps, is the family of differential microphone arrays (DMAs) that have the promise to form frequency-independent responses. Moreover, they have the potential to attain high directional gains with small and compact apertures. As a result, this type of microphone arrays has drawn much research and development attention recently. This book is intended to provide a systematic study of DMAs from a signal processing perspective. The primary obj...

  14. Seismometer array station processors

    International Nuclear Information System (INIS)

    Key, F.A.; Lea, T.G.; Douglas, A.

    1977-01-01

    A description is given of the design, construction and initial testing of two types of Seismometer Array Station Processor (SASP), one to work with data stored on magnetic tape in analogue form, the other with data in digital form. The purpose of a SASP is to detect the short period P waves recorded by a UK-type array of 20 seismometers and to edit these onto a digital library tape or disc. The edited data are then processed to obtain a rough location for the source and to produce seismograms (after optimum processing) for analysis by a seismologist. SASPs are an important component in the scheme for monitoring underground explosions advocated by the UK in the Conference of the Committee on Disarmament. With digital input a SASP can operate at 30 times real time using a linear detection process and at 20 times real time using the log detector of Weichert. Although the log detector is slower, it has the advantage over the linear detector that signals with lower signal-to-noise ratio can be detected and spurious large amplitudes are less likely to produce a detection. It is recommended, therefore, that where possible array data should be recorded in digital form for input to a SASP and that the log detector of Weichert be used. Trial runs show that a SASP is capable of detecting signals down to signal-to-noise ratios of about two with very few false detections, and at mid-continental array sites it should be capable of detecting most, if not all, the signals with magnitude above mb 4.5; the UK argues that, given a suitable network, it is realistic to hope that sources of this magnitude and above can be detected and identified by seismological means alone. (author)
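    The detection step of a SASP can be illustrated with a generic short-term/long-term average power detector. This is neither the linear detector nor Weichert's log detector from the report, just a common stand-in that triggers on an amplitude onset in a noisy trace; all signal parameters below are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
fs = 20.0
t = np.arange(0, 60, 1/fs)
sig = rng.standard_normal(t.size)            # background noise
onset = 600                                  # synthetic P arrival at t = 30 s
sig[onset:onset+80] += 4.0*np.sin(2*np.pi*2.0*t[:80])

def sta_lta(x, ns=20, nl=200):
    # Ratio of short-term to long-term average power, via cumulative sums.
    p = x*x
    c = np.concatenate([[0.0], np.cumsum(p)])
    sta = (c[ns:] - c[:-ns]) / ns            # windows ending at each sample
    lta = (c[nl:] - c[:-nl]) / nl
    n = min(sta.size, lta.size)              # align both to the trace end
    return sta[-n:] / (lta[-n:] + 1e-12)

r = sta_lta(sig)
trigger = np.argmax(r > 4.0)                 # first sample exceeding threshold
print(trigger)
```

A declared detection at one element would then be confirmed across the 20-element array before the event is edited onto the library tape, mirroring the processing chain the report describes.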

  15. Deformable wire array: fiber drawn tunable metamaterials

    DEFF Research Database (Denmark)

    Fleming, Simon; Stefani, Alessio; Tang, Xiaoli

    2017-01-01

    By fiber drawing we fabricate a wire array metamaterial, the structure of which can be actively modified. The plasma frequency can be tuned by 50% by compressing the metamaterial; it recovers when released, and the process can be repeated.

  16. Copper-encapsulated vertically aligned carbon nanotube arrays.

    Science.gov (United States)

    Stano, Kelly L; Chapla, Rachel; Carroll, Murphy; Nowak, Joshua; McCord, Marian; Bradford, Philip D

    2013-11-13

    A new procedure is described for the fabrication of vertically aligned carbon nanotubes (VACNTs) that are decorated, and even completely encapsulated, by a dense network of copper nanoparticles. The process involves the conformal deposition of pyrolytic carbon (Py-C) to stabilize the aligned carbon-nanotube structure during processing. The stabilized arrays are mildly functionalized using oxygen plasma treatment to improve wettability, and they are then infiltrated with an aqueous, supersaturated Cu salt solution. Once dried, the salt forms a stabilizing crystal network throughout the array. After calcination and H2 reduction, Cu nanoparticles are left decorating the CNT surfaces. Studies were carried out to determine the optimal processing parameters to maximize Cu content in the composite. These included the duration of Py-C deposition and system process pressure as well as the implementation of subsequent and multiple Cu salt solution infiltrations. The optimized procedure yielded a nanoscale hybrid material where the anisotropic alignment from the VACNT array was preserved, and the mass of the stabilized arrays was increased by over 24-fold because of the addition of Cu. The procedure has been adapted for other Cu salts and can also be used for other metal salts altogether, including Ni, Co, Fe, and Ag. The resulting composite is ideally suited for application in thermal management devices because of its low density, mechanical integrity, and potentially high thermal conductivity. Additionally, further processing of the material via pressing and sintering can yield consolidated, dense bulk composites.

  17. Micromirror array nanostructures for anticounterfeiting applications

    Science.gov (United States)

    Lee, Robert A.

    2004-06-01

    The optical characteristics of pixellated passive micromirror arrays are derived and applied in the context of their use as reflective optically variable device (OVD) nanostructures for the protection of documents from counterfeiting. The traditional design variables of foil based diffractive OVDs are shown to map to a corresponding set of design parameters for reflective optical micromirror array (OMMA) devices. The greatly increased depth characteristics of micromirror array OVDs provide an opportunity for directly printing the OVD microstructure onto the security document in-line with the normal printing process. The micromirror array OVD architecture therefore eliminates the need for hot stamping foil as the carrier of the OVD information, thereby reducing costs. The origination of micromirror array devices via a palette based data format and a combination of electron beam lithography and photolithography techniques is discussed via an artwork example and experimental tests. Finally, the application of the technology to the design of a generic class of devices, which have the interesting property of allowing for both application and customer specific OVD image encoding and data encoding at the end user stage of production, is described. Because of the end user nature of the image and data encoding process, these devices are particularly well suited to ID document applications, and for this reason we refer to this new OVD concept as biometric OVD technology.

  18. Palacios and Kitten : high performance operating systems for scalable virtualized and native supercomputing.

    Energy Technology Data Exchange (ETDEWEB)

    Widener, Patrick (University of New Mexico); Jaconette, Steven (Northwestern University); Bridges, Patrick G. (University of New Mexico); Xia, Lei (Northwestern University); Dinda, Peter (Northwestern University); Cui, Zheng.; Lange, John (Northwestern University); Hudson, Trammell B.; Levenhagen, Michael J.; Pedretti, Kevin Thomas Tauke; Brightwell, Ronald Brian

    2009-09-01

    Palacios and Kitten are new open source tools that enable applications, whether ported or not, to achieve scalable high performance on large machines. They provide a thin layer over the hardware to support both full-featured virtualized environments and native code bases. Kitten is an OS under development at Sandia that implements a lightweight kernel architecture to provide predictable behavior and increased flexibility on large machines, while also providing Linux binary compatibility. Palacios is a VMM that is under development at Northwestern University and the University of New Mexico. Palacios, which can be embedded into Kitten and other OSes, supports existing, unmodified applications and operating systems by using virtualization that leverages hardware technologies. We describe the design and implementation of both Kitten and Palacios. Our benchmarks show that they provide near native, scalable performance. Palacios and Kitten provide an incremental path to using supercomputer resources that is not performance-compromised.

  19. Si Wire-Array Solar Cells

    Science.gov (United States)

    Boettcher, Shannon

    2010-03-01

    Micron-scale Si wire arrays are three-dimensional photovoltaic absorbers that enable orthogonalization of light absorption and carrier collection and hence allow for the utilization of relatively impure Si in efficient solar cell designs. The wire arrays are grown by a vapor-liquid-solid-catalyzed process on a crystalline (111) Si wafer lithographically patterned with an array of metal catalyst particles. Following growth, such arrays can be embedded in polydimethylsiloxane (PDMS) and then peeled from the template growth substrate. The result is an unusual photovoltaic material: a flexible, bendable, wafer-thickness crystalline Si absorber. In this paper I will describe: 1. the growth of high-quality Si wires with controllable doping and the evaluation of their photovoltaic energy-conversion performance using a test electrolyte that forms a rectifying conformal semiconductor-liquid contact; 2. the observation of enhanced absorption in wire arrays exceeding the conventional light trapping limits for planar Si cells of equivalent material thickness; and 3. single-wire and large-area solid-state Si wire-array solar cell results obtained to date, with directions for future cell designs based on optical and device physics. In collaboration with Michael Kelzenberg, Morgan Putnam, Joshua Spurgeon, Daniel Turner-Evans, Emily Warren, Nathan Lewis, and Harry Atwater, California Institute of Technology.

  20. Compressive sensing-based electrostatic sensor array signal processing and exhausted abnormal debris detecting

    Science.gov (United States)

    Tang, Xin; Chen, Zhongsheng; Li, Yue; Yang, Yongmin

    2018-05-01

    When faults happen at gas path components of gas turbines, some sparsely-distributed and charged debris will be generated and released into the exhaust gas. This debris is called abnormal debris. Electrostatic sensors can detect the debris online and thus indicate the faults. It is generally considered that, under a specific working condition, a more serious fault generates more and larger debris, and a piece of larger debris carries more charge. Therefore, the amount and charge of the abnormal debris are important indicators of fault severity. However, because an electrostatic sensor can only detect the superposed effect of all the debris on the electrostatic field, it can hardly identify the amount and position of the debris. Moreover, because the signals of electrostatic sensors depend not only on the charge but also on the position of the debris, and the position information is difficult to acquire, measuring debris charge accurately with the electrostatic detecting method remains a technical difficulty. To solve these problems, a hemisphere-shaped electrostatic sensors' circular array (HSESCA) is used, and an array signal processing method based on compressive sensing (CS) is proposed in this paper. To work within the theoretical framework of CS, the measurement model of the HSESCA is discretized into a sparse representation form by meshing. In this way, the amount and charge of the abnormal debris are described as a sparse vector, which is then reconstructed by constraining its l1-norm when solving an underdetermined equation. In addition, a pre-processing method based on singular value decomposition and a result calibration method based on a weighted-centroid algorithm are applied to ensure the accuracy of the reconstruction. The proposed method is validated by both numerical simulations and experiments. Reconstruction errors, characteristics of the results and some related factors are discussed.
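    The l1-constrained reconstruction step described above can be sketched with a generic iterative soft-thresholding (ISTA) solver; the random measurement matrix is a stand-in for the discretized HSESCA model, and the problem dimensions and debris charges are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, k = 40, 120, 3                 # measurements, grid cells, debris pieces

# Random measurement matrix standing in for the meshed HSESCA model.
A = rng.standard_normal((m, n)) / np.sqrt(m)

x_true = np.zeros(n)                 # sparse charge distribution over the mesh
support = rng.choice(n, size=k, replace=False)
x_true[support] = [5.0, -3.0, 2.0]
b = A @ x_true                       # sensor readings (noise-free here)

# ISTA: iterative soft-thresholding minimizing ||Ax-b||^2 + lam*||x||_1.
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2)**2
x = np.zeros(n)
for _ in range(3000):
    g = x - step * (A.T @ (A @ x - b))          # gradient step on the fit term
    x = np.sign(g) * np.maximum(np.abs(g) - step*lam, 0.0)  # soft threshold

print(np.linalg.norm(x - x_true))
```

Even though the system is underdetermined (40 equations, 120 unknowns), the sparsity prior recovers both the positions and the charges of the debris, which is exactly the ambiguity the abstract says a single superposed reading cannot resolve.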

  1. Microfabricated hollow microneedle array using ICP etcher

    Science.gov (United States)

    Ji, Jing; Tay, Francis E. H.; Miao, Jianmin

    2006-04-01

    This paper presents a developed process for fabrication of hollow silicon microneedle arrays. The inner hollow hole and the fluidic reservoir are fabricated in deep reactive ion etching. The profile of outside needles is achieved by the developed fabrication process, which combined isotropic etching and anisotropic etching with inductively coupled plasma (ICP) etcher. Using the combination of SF6/O2 isotropic etching chemistry and Bosch process, the high aspect ratio 3D and high density microneedle arrays are fabricated. The generated needle external geometry can be controlled by etching variables in the isotropic and anisotropic cases.

  2. Microfabricated hollow microneedle array using ICP etcher

    International Nuclear Information System (INIS)

    Ji Jing; Tay, Francis E H; Miao Jianmin

    2006-01-01

    This paper presents a developed process for fabrication of hollow silicon microneedle arrays. The inner hollow hole and the fluidic reservoir are fabricated in deep reactive ion etching. The profile of outside needles is achieved by the developed fabrication process, which combined isotropic etching and anisotropic etching with inductively coupled plasma (ICP) etcher. Using the combination of SF6/O2 isotropic etching chemistry and Bosch process, the high aspect ratio 3D and high density microneedle arrays are fabricated. The generated needle external geometry can be controlled by etching variables in the isotropic and anisotropic cases

  3. Microfabricated hollow microneedle array using ICP etcher

    Energy Technology Data Exchange (ETDEWEB)

    Ji Jing [Mechanical Engineering National University of Singapore, 119260, Singapore (Singapore); Tay, Francis E H [Mechanical Engineering National University of Singapore, 119260, Singapore (Singapore); Miao Jianmin [MicroMachines Center, School of Mechanical and Aerospace Engineering, Nanyang Technological University, 50 Nanyang Avenue, 639798 (Singapore)

    2006-04-01

    This paper presents a developed process for fabrication of hollow silicon microneedle arrays. The inner hollow hole and the fluidic reservoir are fabricated in deep reactive ion etching. The profile of outside needles is achieved by the developed fabrication process, which combined isotropic etching and anisotropic etching with inductively coupled plasma (ICP) etcher. Using the combination of SF{sub 6}/O{sub 2} isotropic etching chemistry and Bosch process, the high aspect ratio 3D and high density microneedle arrays are fabricated. The generated needle external geometry can be controlled by etching variables in the isotropic and anisotropic cases.

  4. Full image-processing pipeline in field-programmable gate array for a small endoscopic camera

    Science.gov (United States)

    Mostafa, Sheikh Shanawaz; Sousa, L. Natércia; Ferreira, Nuno Fábio; Sousa, Ricardo M.; Santos, Joao; Wäny, Martin; Morgado-Dias, F.

    2017-01-01

    Endoscopy is an imaging procedure used for diagnosis as well as for some surgical purposes. The camera used for the endoscopy should be small and able to produce a good quality image or video, to reduce discomfort of the patients, and to increase the efficiency of the medical team. To achieve these fundamental goals, a small endoscopy camera with a footprint of 1 mm×1 mm×1.65 mm is used. Due to the physical properties of the sensors and human vision system limitations, different image-processing algorithms, such as noise reduction, demosaicking, and gamma correction, among others, are needed to faithfully reproduce the image or video. A full image-processing pipeline is implemented using a field-programmable gate array (FPGA) to accomplish a high frame rate of 60 fps with minimum processing delay. Along with this, a viewer has also been developed to display and control the image-processing pipeline. The control and data transfer are done by a USB 3.0 end point in the computer. The full developed system achieves real-time processing of the image and fits in a Xilinx Spartan-6LX150 FPGA.
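One stage of such a pipeline, gamma correction, is commonly realized in FPGA designs as a precomputed lookup table so that no per-pixel power function has to be evaluated. A minimal software sketch of that idea follows; the gamma value and the 8-bit depth are illustrative assumptions, not parameters taken from the paper.

```python
# Hedged sketch: gamma correction as a lookup table, the form such a
# stage typically takes in an FPGA pipeline. The gamma value and bit
# width are assumptions, not from the paper.
GAMMA = 1.0 / 2.2  # assumed display gamma

# Precompute the 8-bit LUT once (in hardware this would be a small ROM).
LUT = [round(255 * (v / 255) ** GAMMA) for v in range(256)]

def gamma_correct(pixels):
    """Apply the LUT to a flat list of 8-bit pixel values."""
    return [LUT[p] for p in pixels]

corrected = gamma_correct([0, 64, 128, 255])
```

Because the table is computed once, the per-pixel work reduces to a single memory lookup, which is what makes a 60 fps real-time pipeline feasible on modest hardware.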

  5. Configuration Considerations for Low Frequency Arrays

    Science.gov (United States)

    Lonsdale, C. J.

    2005-12-01

    The advance of digital signal processing capabilities has spurred a new effort to exploit the lowest radio frequencies observable from the ground, from ˜10 MHz to a few hundred MHz. Multiple scientifically and technically complementary instruments are planned, including the Mileura Widefield Array (MWA) in the 80-300 MHz range, and the Long Wavelength Array (LWA) in the 20-80 MHz range. The latter instrument will target relatively high angular resolution, and baselines up to a few hundred km. An important practical question for the design of such an array is how to distribute the collecting area on the ground. The answer to this question profoundly affects both cost and performance. In this contribution, the factors which determine the anticipated performance of any such array are examined, paying particular attention to the viability and accuracy of array calibration. It is argued that due to the severity of ionospheric effects in particular, it will be difficult or impossible to achieve routine, high dynamic range imaging with a geographically large low frequency array, unless a large number of physically separate array stations is built. This conclusion is general, is based on the need for adequate sampling of ionospheric irregularities, and is independent of the calibration algorithms and techniques that might be employed. It is further argued that array configuration figures of merit that are traditionally used for higher frequency arrays are inappropriate, and a different set of criteria are proposed.

  6. Performance Analysis and Scaling Behavior of the Terrestrial Systems Modeling Platform TerrSysMP in Large-Scale Supercomputing Environments

    Science.gov (United States)

    Kollet, S. J.; Goergen, K.; Gasper, F.; Shresta, P.; Sulis, M.; Rihani, J.; Simmer, C.; Vereecken, H.

    2013-12-01

    In studies of the terrestrial hydrologic, energy and biogeochemical cycles, integrated multi-physics simulation platforms take a central role in characterizing non-linear interactions, variances and uncertainties of system states and fluxes in reciprocity with observations. Recently developed integrated simulation platforms attempt to honor the complexity of the terrestrial system across multiple time and space scales from the deeper subsurface including groundwater dynamics into the atmosphere. Technically, this requires the coupling of atmospheric, land surface, and subsurface-surface flow models in supercomputing environments, while ensuring a high-degree of efficiency in the utilization of e.g., standard Linux clusters and massively parallel resources. A systematic performance analysis including profiling and tracing in such an application is crucial in the understanding of the runtime behavior, to identify optimum model settings, and is an efficient way to distinguish potential parallel deficiencies. On sophisticated leadership-class supercomputers, such as the 28-rack 5.9 petaFLOP IBM Blue Gene/Q 'JUQUEEN' of the Jülich Supercomputing Centre (JSC), this is a challenging task, but even more so important, when complex coupled component models are to be analysed. Here we want to present our experience from coupling, application tuning (e.g. 5-times speedup through compiler optimizations), parallel scaling and performance monitoring of the parallel Terrestrial Systems Modeling Platform TerrSysMP. The modeling platform consists of the weather prediction system COSMO of the German Weather Service; the Community Land Model, CLM of NCAR; and the variably saturated surface-subsurface flow code ParFlow. The model system relies on the Multiple Program Multiple Data (MPMD) execution model where the external Ocean-Atmosphere-Sea-Ice-Soil coupler (OASIS3) links the component models. TerrSysMP has been instrumented with the performance analysis tool Scalasca and analyzed

  7. Improved chemical identification from sensor arrays using intelligent algorithms

    Science.gov (United States)

    Roppel, Thaddeus A.; Wilson, Denise M.

    2001-02-01

    Intelligent signal processing algorithms are shown to improve identification rates significantly in chemical sensor arrays. This paper focuses on the use of independently derived sensor status information to modify the processing of sensor array data by using a fast, easily-implemented "best-match" approach to filling in missing sensor data. Most fault conditions of interest (e.g., stuck high, stuck low, sudden jumps, excess noise, etc.) can be detected relatively simply by adjunct data processing, or by on-board circuitry. The objective then is to devise, implement, and test methods for using this information to improve the identification rates in the presence of faulted sensors. In one typical example studied, utilizing separately derived, a-priori knowledge about the health of the sensors in the array improved the chemical identification rate by an artificial neural network from below 10 percent correct to over 99 percent correct. While this study focuses experimentally on chemical sensor arrays, the results are readily extensible to other types of sensor platforms.
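The "best-match" fill described above can be sketched as a nearest-neighbor lookup: compare the healthy channels of a faulted reading against a library of complete past responses and copy the missing values from the closest match. The library, distance metric, and values below are illustrative assumptions, not data from the study.

```python
# Hedged sketch of a "best-match" fill for faulted sensor channels.
# The library contents and squared-distance metric are illustrative.

def best_match_fill(reading, faulted, library):
    """reading: list with unreliable entries at indices in `faulted`;
    library: list of complete historical readings of the same length."""
    healthy = [i for i in range(len(reading)) if i not in faulted]

    def dist(ref):
        # distance computed only over channels known to be healthy
        return sum((reading[i] - ref[i]) ** 2 for i in healthy)

    best = min(library, key=dist)
    return [best[i] if i in faulted else reading[i]
            for i in range(len(reading))]

library = [[0.9, 0.8, 0.1, 0.2], [0.1, 0.2, 0.9, 0.8]]
filled = best_match_fill([0.88, 0.79, 0.0, 0.21], faulted={2},
                         library=library)
```

The repaired vector can then be passed to the downstream classifier (a neural network in the study) as if all sensors were healthy.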

  8. 10-100 Gbps Offload NIC for WAN, NLR, and Grid Computing

    Science.gov (United States)

    Awrach, James; Maccabe, Arthur

    2011-01-01

    An extremely fast offload engine system has been developed that operates at 60 Gigabits per second (Gbps), and has scalability to 100 Gbps full-duplex (f-d). This system is based on unique coding and architecture derived from splintered UDP (User Datagram Protocol) offload technology, resulting in a unique FPGA (field-programmable gate array) intellectual property core and firmware. This innovation improves the networking speed of supercomputer clusters by providing an ultra-fast network protocol processing offload from a CPU (central processing unit): an offload engine is inserted into a host backplane and network connections, running on protocol firmware.

  9. Next-Generation Microshutter Arrays for Large-Format Imaging and Spectroscopy

    Science.gov (United States)

    Moseley, Samuel; Kutyrev, Alexander; Brown, Ari; Li, Mary

    2012-01-01

    A next-generation microshutter array, LArge Microshutter Array (LAMA), was developed as a multi-object field selector. LAMA consists of small-scaled microshutter arrays that can be combined to form large-scale microshutter array mosaics. Microshutter actuation is accomplished via electrostatic attraction between the shutter and a counter electrode, and 2D addressing can be accomplished by applying an electrostatic potential between a row of shutters and a column, orthogonal to the row, of counter electrodes. Microelectromechanical system (MEMS) technology is used to fabricate the microshutter arrays. The main feature of the microshutter device is to use a set of standard surface micromachining processes for device fabrication. Electrostatic actuation is used to eliminate the need for macromechanical magnet actuating components. A simplified electrostatic actuation with no macro components (e.g. moving magnets) required for actuation and latching of the shutters will make the microshutter arrays robust and less prone to mechanical failure. Smaller-size individual arrays will help to increase the yield and thus reduce the cost and improve robustness of the fabrication process. Reducing the size of the individual shutter array to about one square inch and building the large-scale mosaics by tiling these smaller-size arrays would further help to reduce the cost of the device due to the higher yield of smaller devices. The LAMA development is based on prior experience acquired while developing microshutter arrays for the James Webb Space Telescope (JWST), but it will have different features. The LAMA modular design permits large-format mosaicking to cover a field of view at least 50 times larger than JWST MSA. The LAMA electrostatic, instead of magnetic, actuation enables operation cycles at least 100 times faster and a mass significantly smaller compared to JWST MSA. Also, standard surface micromachining technology will simplify the fabrication process, increasing

  10. Status report of the end-to-end ASKAP software system: towards early science operations

    Science.gov (United States)

    Guzman, Juan Carlos; Chapman, Jessica; Marquarding, Malte; Whiting, Matthew

    2016-08-01

    The Australian SKA Pathfinder (ASKAP) is a novel centimetre radio synthesis telescope currently in the commissioning phase and located in the midwest region of Western Australia. It comprises 36 x 12 m diameter reflector antennas, each equipped with state-of-the-art and award-winning Phased Array Feed (PAF) technology. The PAFs provide a wide, 30 square degree field-of-view by forming up to 36 separate dual-polarisation beams at once. This results in a high data rate: 70 TB of correlated visibilities in an 8-hour observation, requiring custom-written, high-performance software running in dedicated High Performance Computing (HPC) facilities. The first six antennas equipped with first-generation PAF technology (Mark I), named the Boolardy Engineering Test Array (BETA), have been in use since 2014 as a platform to test PAF calibration and imaging techniques, and along the way they have produced some great science results. Commissioning of the ASKAP Array Release 1, that is, the first six antennas with second-generation PAFs (Mark II), is currently under way. An integral part of the instrument is the Central Processor platform hosted at the Pawsey Supercomputing Centre in Perth, which executes custom-written software pipelines designed specifically to meet the ASKAP imaging requirements of wide field of view and high dynamic range. There are three key hardware components of the Central Processor: the ingest nodes (16 x node cluster), the fast temporary storage (1 PB Lustre file system) and the processing supercomputer (200 TFlop system). This HPC platform is managed and supported by the Pawsey support team. Due to the limited amount of data generated by BETA and the first ASKAP Array Release, the Central Processor platform has been running in a more "traditional" or user-interactive mode. But this is about to change: integration and verification of the online ingest pipeline starts in early 2016, which is required to support the full

  11. Uniform illumination rendering using an array of LEDs: a signal processing perspective

    OpenAIRE

    Yang, Hongming; Bergmans, J.W.M.; Schenk, T.C.W.; Linnartz, J.P.M.G.; Rietman, R.

    2009-01-01

    An array of a large number of LEDs will be widely used in future indoor illumination systems. In this paper, we investigate the problem of rendering uniform illumination by a regular LED array on the ceiling of a room. We first present two general results on the scaling property of the basic illumination pattern, i.e., the light pattern of a single LED, and the setting of LED illumination levels, respectively. Thereafter, we propose to use the relative mean squared error as the cost function ...
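The relative mean-squared-error cost mentioned in the abstract can be sketched as follows, assuming the rendered illuminance at each grid point is a superposition of per-LED basic patterns scaled by dimming levels. The patterns, levels, and target value below are toy numbers, not data from the paper.

```python
# Hedged sketch of a relative MSE cost for LED illumination rendering.
# Each LED contributes its basic pattern scaled by a dimming level;
# the cost measures deviation from a uniform target illuminance.

def rendered(patterns, levels):
    """Total illuminance per grid point: sum of per-LED patterns
    (lists over grid points), each scaled by its dimming level."""
    n_points = len(patterns[0])
    return [sum(lv * p[k] for lv, p in zip(levels, patterns))
            for k in range(n_points)]

def relative_mse(patterns, levels, target):
    total = rendered(patterns, levels)
    return sum((x - target) ** 2 for x in total) / (len(total) * target ** 2)

patterns = [[1.0, 0.5, 0.1], [0.1, 0.5, 1.0]]  # two LEDs, three grid points
cost = relative_mse(patterns, levels=[1.0, 1.0], target=1.0)
```

Minimizing such a cost over the dimming levels is one natural way to pose the uniform-rendering problem as optimization; the paper's actual formulation may differ in detail.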

  12. Automated installation methods for photovoltaic arrays

    Science.gov (United States)

    Briggs, R.; Daniels, A.; Greenaway, R.; Oster, J., Jr.; Racki, D.; Stoeltzing, R.

    1982-11-01

    Since installation expenses constitute a substantial portion of the cost of a large photovoltaic power system, methods for reducing these costs were investigated. The installation of the photovoltaic arrays includes all areas, starting with site preparation (i.e., trenching, wiring, drainage, foundation installation, lightning protection, grounding and installation of the panel) and concluding with the termination of the bus at the power conditioner building. To identify the optimum combination of standard installation procedures and automated/mechanized techniques, the installation process was investigated, including the equipment and hardware available, the photovoltaic array structure systems and interfaces, and the array field and site characteristics. Preliminary designs of hardware for the standard installation method, the automated/mechanized method, and a mix of standard and mechanized procedures were identified to determine which process most effectively reduced installation costs. In addition, costs associated with each type of installation method and with the design, development and fabrication of new installation hardware were generated.

  13. Assessment techniques for a learning-centered curriculum: evaluation design for adventures in supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Helland, B. [Ames Lab., IA (United States); Summers, B.G. [Oak Ridge National Lab., TN (United States)

    1996-09-01

    As the classroom paradigm shifts from being teacher-centered to being learner-centered, student assessments are evolving from typical paper and pencil testing to other methods of evaluation. Students should be probed for understanding, reasoning, and critical thinking abilities rather than their ability to return memorized facts. The assessment of the Department of Energy's pilot program, Adventures in Supercomputing (AiS), offers one example of assessment techniques developed for learner-centered curricula. This assessment has employed a variety of methods to collect student data. Methods of assessment used were traditional testing, performance testing, interviews, short questionnaires via email, and student presentations of projects. The data obtained from these sources have been analyzed by a professional assessment team at the Center for Children and Technology. The results have been used to improve the AiS curriculum and establish the quality of the overall AiS program. This paper will discuss the various methods of assessment used and the results.

  14. The BlueGene/L Supercomputer and Quantum ChromoDynamics

    International Nuclear Information System (INIS)

    Vranas, P; Soltz, R

    2006-01-01

    In summary our update contains: (1) Perfect speedup sustaining 19.3% of peak for the Wilson D D-slash Dirac operator. (2) Measurements of the full Conjugate Gradient (CG) inverter that inverts the Dirac operator. The CG inverter contains two global sums over the entire machine. Nevertheless, our measurements retain perfect speedup scaling demonstrating the robustness of our methods. (3) We ran on the largest BG/L system, the LLNL 64 rack BG/L supercomputer, and obtained a sustained speed of 59.1 TFlops. Furthermore, the speedup scaling of the Dirac operator and of the CG inverter are perfect all the way up to the full size of the machine, 131,072 cores (please see Figure II). The local lattice is rather small (4 x 4 x 4 x 16) while the total lattice has been a lattice QCD vision for thermodynamic studies (a total of 128 x 128 x 256 x 32 lattice sites). This speed is about five times larger than the speed we quoted in our submission. As we have pointed out in our paper QCD is notoriously sensitive to network and memory latencies, has a relatively high communication to computation ratio which cannot be overlapped in BGL in virtual node mode, and as an application is in a class of its own. The above results are thrilling to us and a 30-year-long dream for lattice QCD.

  15. Phased Array Ultrasonic Inspection of Titanium Forgings

    International Nuclear Information System (INIS)

    Howard, P.; Klaassen, R.; Kurkcu, N.; Barshinger, J.; Chalek, C.; Nieters, E.; Sun, Zongqi; Fromont, F. de

    2007-01-01

    Aerospace forging inspections typically use multiple, subsurface-focused sound beams in combination with digital C-scan image acquisition and display. Traditionally, forging inspections have been implemented using multiple single element, fixed focused transducers. Recent advances in phased array technology have made it possible to perform an equivalent inspection using a single phased array transducer. General Electric has developed a system to perform titanium forging inspection based on medical phased array technology and advanced image processing techniques. The components of that system and system performance for titanium inspection will be discussed

  16. Timed arrays wideband and time varying antenna arrays

    CERN Document Server

    Haupt, Randy L

    2015-01-01

    Introduces timed arrays and design approaches to meet the new high performance standards The author concentrates on any aspect of an antenna array that must be viewed from a time perspective. The first chapters briefly introduce antenna arrays and explain the difference between phased and timed arrays. Since timed arrays are designed for realistic time-varying signals and scenarios, the book also reviews wideband signals, baseband and passband RF signals, polarization and signal bandwidth. Other topics covered include time domain, mutual coupling, wideband elements, and dispersion. The auth

  17. Low-Noise CMOS Circuits for On-Chip Signal Processing in Focal-Plane Arrays

    Science.gov (United States)

    Pain, Bedabrata

    The performance of focal-plane arrays can be significantly enhanced through the use of on-chip signal processing. Novel, in-pixel, on-focal-plane, analog signal-processing circuits for high-performance imaging are presented in this thesis. The presence of a high background-radiation is a major impediment for infrared focal-plane array design. An in-pixel, background-suppression scheme, using dynamic analog current memory circuit, is described. The scheme also suppresses spatial noise that results from response non-uniformities of photo-detectors, leading to background limited infrared detector readout performance. Two new, low-power, compact, current memory circuits, optimized for operation at ultra-low current levels required in infrared-detection, are presented. The first one is a self-cascading current memory that increases the output impedance, and the second one is a novel, switch feed-through reducing current memory, implemented using error-current feedback. This circuit can operate with a residual absolute -error of less than 0.1%. The storage-time of the memory is long enough to also find applications in neural network circuits. In addition, a voltage-mode, accurate, low-offset, low-power, high-uniformity, random-access sample-and-hold cell, implemented using a CCD with feedback, is also presented for use in background-suppression and neural network applications. A new, low noise, ultra-low level signal readout technique, implemented by individually counting photo-electrons within the detection pixel, is presented. The output of each unit-cell is a digital word corresponding to the intensity of the photon flux, and the readout is noise free. This technique requires the use of unit-cell amplifiers that feature ultra-high-gain, low-power, self-biasing capability and noise in sub-electron levels. Both single-input and differential-input implementations of such amplifiers are investigated. A noise analysis technique is presented for analyzing sampled

  18. Lithography-free centimeter-long nanochannel fabrication method using an electrospun nanofiber array

    International Nuclear Information System (INIS)

    Park, Suk Hee; Shin, Hyun-Jun; Lee, Sangyoup; Kim, Yong-Hwan; Yang, Dong-Yol; Lee, Jong-Chul

    2012-01-01

    Novel cost-effective methods for polymeric and metallic nanochannel fabrication have been demonstrated using an electrospun nanofiber array. Like other electrospun nanofiber-based nanofabrication methods, our system also showed high throughput as well as cost-effective performance. Unlike other systems, however, our fabrication scheme provides a pseudo-parallel nanofiber array a few centimeters long at a speed of several tens of fibers per second based on our unique inclined-gap fiber collecting system. Pseudo-parallel nanofiber arrays were used either directly for the PDMS molding process or for the metal lift-off process followed by the SiO2 deposition process to produce the nanochannel array. While the PDMS molding process was a simple fabrication based on one-step casting, the metal lift-off process followed by SiO2 deposition allowed fine-tuning of the height and width of nanogrooves from a few micrometers down to sub-hundred nanometers. Nanogrooves were covered either with cover glass or with a PDMS slab and nanochannel connectivity was investigated with a fluorescent dye. Also, nanochannel arrays were used to investigate mobility and conformations of λ-DNA. (paper)

  19. DNA electrophoresis through microlithographic arrays

    International Nuclear Information System (INIS)

    Sevick, E.M.; Williams, D.R.M.

    1996-01-01

    Electrophoresis is one of the most widely used techniques in biochemistry and genetics for size-separating charged molecular chains such as DNA or synthetic polyelectrolytes. The separation is achieved by driving the chains through a gel with an external electric field. As a result of the field and the obstacles that the medium provides, the chains have different mobilities and are physically separated after a given process time. The macroscopically observed mobility scales inversely with chain size: small molecules move through the medium quickly while larger molecules move more slowly. However, electrophoresis remains a tool that has yet to be optimised for most efficient size separation of polyelectrolytes, particularly large polyelectrolytes, e.g. DNA in excess of 30-50 kbp. Microlithographic arrays etched with an ordered pattern of obstacles provide an attractive alternative to gel media, widen the avenues for size separation of polyelectrolytes, and promote a better understanding of the separation process. Its advantages over gels are (1) the ordered array is durable and can be re-used, (2) the array morphology is ordered and can be standardized for specific separation, and (3) calibration with a marker polyelectrolyte is not required as the array is reproduced to high precision. Most importantly, the array geometry can be graduated along the chip so as to expand the size-dependent regime over larger chain lengths and postpone saturation. In order to predict the effect of obstacles upon the chain-length dependence in mobility and hence, size separation, we study the dynamics of single chains using theory and simulation. We present recent work describing: 1) the release kinetics of a single DNA molecule hooked around a point, frictionless obstacle and in both weak and strong field limits, 2) the mobility of a chain impinging upon point obstacles in an ordered array of obstacles, demonstrating the wide range of interactions possible between the chain and
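The inverse scaling of mobility with chain size stated above can be illustrated with a toy calculation: if the mobility varies as mu ~ 1/N for chain length N, the transit time over a fixed distance grows linearly with N, which is what separates the bands. The prefactor and units below are illustrative assumptions.

```python
# Toy model of mobility scaling mu ~ 1/N: transit time over a fixed
# distance L in field E then grows linearly with chain length N.
# mu0, L, and E are illustrative, not physical values from the text.

def transit_time(n_bases, length=1.0, field=1.0, mu0=1.0):
    mobility = mu0 / n_bases          # mu ~ 1/N scaling
    velocity = mobility * field       # drift velocity v = mu * E
    return length / velocity

times = {n: transit_time(n) for n in (1000, 2000, 4000)}
# in this model, doubling the chain length doubles the transit time
```

The saturation mentioned in the abstract corresponds to this linear size dependence flattening out for very long chains, which graduated array geometries are meant to postpone.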

  20. Integration of Antibody Array Technology into Drug Discovery and Development.

    Science.gov (United States)

    Huang, Wei; Whittaker, Kelly; Zhang, Huihua; Wu, Jian; Zhu, Si-Wei; Huang, Ruo-Pan

    Antibody arrays represent a high-throughput technique that enables the parallel detection of multiple proteins with minimal sample volume requirements. In recent years, antibody arrays have been widely used to identify new biomarkers for disease diagnosis or prognosis. Moreover, many academic research laboratories and commercial biotechnology companies are starting to apply antibody arrays in the field of drug discovery. In this review, some technical aspects of antibody array development and the various platforms currently available will be addressed; however, the main focus will be on the discussion of antibody array technologies and their applications in drug discovery. Aspects of the drug discovery process, including target identification, mechanisms of drug resistance, molecular mechanisms of drug action, drug side effects, and the application in clinical trials and in managing patient care, which have been investigated using antibody arrays in recent literature will be examined and the relevance of this technology in progressing this process will be discussed. Protein profiling with antibody array technology, in addition to other applications, has emerged as a successful, novel approach for drug discovery because of the well-known importance of proteins in cell events and disease development.

  1. Backshort-Under-Grid arrays for infrared astronomy

    Science.gov (United States)

    Allen, C. A.; Benford, D. J.; Chervenak, J. A.; Chuss, D. T.; Miller, T. M.; Moseley, S. H.; Staguhn, J. G.; Wollack, E. J.

    2006-04-01

    We are developing a kilopixel, filled bolometer array for space infrared astronomy. The array consists of three individual components, to be merged into a single, working unit; (1) a transition edge sensor bolometer array, operating in the milliKelvin regime, (2) a quarter-wave backshort grid, and (3) superconducting quantum interference device multiplexer readout. The detector array is designed as a filled, square grid of suspended, silicon bolometers with superconducting sensors. The backshort arrays are fabricated separately and will be positioned in the cavities created behind each detector during fabrication. The grids have a unique interlocking feature machined into the walls for positioning and mechanical stability. The spacing of the backshort beneath the detector grid can be set from ~30 to 300 μm, by independently adjusting two process parameters during fabrication. The ultimate goal is to develop a large-format array architecture with background-limited sensitivity, suitable for a wide range of wavelengths and applications, to be directly bump bonded to a multiplexer circuit. We have produced prototype two-dimensional arrays having 8×8 detector elements. We present detector design, fabrication overview, and assembly technologies.

  2. Detailed Diagnostics of the BIOMASS Feed Array Prototype

    DEFF Research Database (Denmark)

    Cappellin, C.; Pivnenko, Sergey; Pontoppidan, K.

    2013-01-01

    of the array had a significant influence on the measured feed pattern. The 3D reconstruction and further post-processing is therefore applied both to the feed array measured data, and a set of simulated data generated by the GRASP software which replicate the series of measurements. The results...

  3. Silver Nanowire Arrays : Fabrication and Applications

    OpenAIRE

    Feng, Yuyi

    2016-01-01

    Nanowire arrays have increasingly received attention for their use in a variety of applications such as surface-enhanced Raman scattering (SERS), plasmonic sensing, and electrodes for photoelectric devices. However, until now, large scale fabrication of device-suitable metallic nanowire arrays on supporting substrates has seen very limited success. This thesis describes my work first on the development of a novel successful processing route for the fabrication of uniform noble metallic (e.g. A...

  4. Compensated readout for high-density MOS-gated memristor crossbar array

    KAUST Repository

    Zidan, Mohammed A.

    2015-01-01

    Leakage current is one of the main challenges facing high-density MOS-gated memristor arrays. In this study, we show that leakage current ruins the memory readout process for high-density arrays, and analyze the tradeoff between the array density and its power consumption. We propose a novel readout technique and its underlying circuitry, which is able to compensate for the transistor leakage-current effect in the high-density gated memristor array.

  5. Integration of Fiber-Optic Sensor Arrays into a Multi-Modal Tactile Sensor Processing System for Robotic End-Effectors

    Directory of Open Access Journals (Sweden)

    Peter Kampmann

    2014-04-01

    Full Text Available With the increasing complexity of robotic missions and the development towards long-term autonomous systems, the need for multi-modal sensing of the environment increases. Until now, the use of tactile sensor systems has been mostly based on sensing one modality of forces in the robotic end-effector. This motivates a multi-modal tactile sensory system, which combines static and dynamic force sensor arrays with an absolute force measurement system. This publication is focused on the development of a compact sensor interface for a fiber-optic sensor array, as optic measurement principles tend to have a bulky interface. Mechanical, electrical and software approaches are combined to realize an integrated structure that provides decentralized data pre-processing of the tactile measurements. Local behaviors are implemented using this setup to show the effectiveness of this approach.

  6. Suppression of 3D coherent noise by areal geophone array; Menteki jushinki array ni yoru sanjigen coherent noise no yokusei

    Energy Technology Data Exchange (ETDEWEB)

    Murayama, R; Nakagami, K; Tanaka, H [Japan National Oil Corp., Tokyo (Japan). Technology Research Center

    1996-05-01

    To improve the quality of data collected by reflection seismic exploration, a lattice was deployed at one point of a traverse line, and the data therefrom were used to study the 3D coherent noise suppression effect of the areal array. The test was conducted at a Japan National Oil Corporation test field in Kashiwazaki City, Niigata Prefecture. The deployed lattice comprised 144 vibration receiving points arrayed at intervals of 8 m, composing an areal array, and 187 vibration generating points arrayed at intervals of 20 m extending over 6.5 km. Data were collected independently at each receiving point in the lattice and processed to compose a large areal array by summing the data from multiple receiving points. Analysis of the records collected at the receiving points in the lattice shows that an enlarged areal array leads to a higher S/N ratio and that different reflection waves are emphasized when the array direction is changed. 1 ref., 6 figs.

  7. Optimised 'on demand' protein arraying from DNA by cell free expression with the 'DNA to Protein Array' (DAPA) technology.

    Science.gov (United States)

    Schmidt, Ronny; Cook, Elizabeth A; Kastelic, Damjana; Taussig, Michael J; Stoevesandt, Oda

    2013-08-02

    We have previously described a protein arraying process based on cell free expression from DNA template arrays (DNA Array to Protein Array, DAPA). Here, we have investigated the influence of different array support coatings (Ni-NTA, Epoxy, 3D-Epoxy and Polyethylene glycol methacrylate (PEGMA)). Their optimal combination yields an increased amount of detected protein and an optimised spot morphology on the resulting protein array compared to the previously published protocol. The specificity of protein capture was improved using a tag-specific capture antibody on a protein repellent surface coating. The conditions for protein expression were optimised to yield the maximum amount of protein or the best detection results using specific monoclonal antibodies or a scaffold binder against the expressed targets. The optimised DAPA system was able to increase by threefold the expression of a representative model protein while conserving recognition by a specific antibody. The amount of expressed protein in DAPA was comparable to those of classically spotted protein arrays. Reaction conditions can be tailored to suit the application of interest. DAPA represents a cost effective, easy and convenient way of producing protein arrays on demand. The reported work is expected to facilitate the application of DAPA for personalized medicine and screening purposes. Copyright © 2013 Elsevier B.V. All rights reserved.

  8. Hydrogen Detection With a Gas Sensor ArrayProcessing and Recognition of Dynamic Responses Using Neural Networks

    Directory of Open Access Journals (Sweden)

    Gwiżdż Patryk

    2015-03-01

    An array consisting of four commercial gas sensors with target specifications for hydrocarbons, ammonia, alcohol, and explosive gases has been constructed and tested. The sensors in the array operate in the dynamic mode upon temperature modulation from 350°C to 500°C. Changes in the sensor operating temperature lead to distinct resistance responses affected by the gas type, its concentration and the humidity level. The measurements are performed at various hydrogen (17-3000 ppm), methane (167-3000 ppm) and propane (167-3000 ppm) concentrations at relative humidity levels of 0-75% RH. The measured dynamic response signals are further processed with the Discrete Fourier Transform. Absolute values of the dc component and the first five harmonics of each sensor are analysed by a feed-forward back-propagation neural network. The ultimate aim of this research is to achieve reliable hydrogen detection despite interference from humidity and residual gases.
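The DFT feature-extraction step described in the abstract can be sketched as follows; the function name and the synthetic sensor response are illustrative, not from the article:

```python
import cmath
import math

def dft_features(cycle, n_harmonics=5):
    """Return |dc| and the magnitudes of the first n harmonics of one
    temperature-modulation cycle of a sensor resistance response."""
    N = len(cycle)
    feats = []
    for k in range(n_harmonics + 1):            # k = 0 is the dc component
        X_k = sum(cycle[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                  for n in range(N))
        feats.append(abs(X_k) / N)
    return feats

# Synthetic response: baseline 10 + fundamental (amp 3) + 2nd harmonic (amp 1)
N = 64
cycle = [10.0 + 3.0 * math.sin(2 * math.pi * n / N)
              + 1.0 * math.sin(4 * math.pi * n / N) for n in range(N)]
feats = dft_features(cycle)   # -> [10.0, 1.5, 0.5, ~0, ~0, ~0]
```

Each sensor thus contributes six magnitudes per modulation cycle, and the concatenated feature vectors feed the neural network classifier.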

  9. rasdaman Array Database: current status

    Science.gov (United States)

    Merticariu, George; Toader, Alexandru

    2015-04-01

    rasdaman (Raster Data Manager) is a Free Open Source Array Database Management System which provides functionality for storing and processing massive amounts of raster data in the form of multidimensional arrays. The user can access, process and delete the data using SQL. The key features of rasdaman are: flexibility (datasets of any dimensionality can be processed with the help of SQL queries), scalability (rasdaman's distributed architecture enables it to seamlessly run on cloud infrastructures while offering an increase in performance with the increase of computation resources), performance (real-time access, processing, mixing and filtering of arrays of any dimensionality) and reliability (legacy communication protocol replaced with a new one based on cutting edge technology - Google Protocol Buffers and ZeroMQ). The data handled by the system include 1D time series, 2D remote sensing imagery, 3D image time series, 3D geophysical data, and 4D atmospheric and climate data. Most of these representations cannot be stored only in the form of raw arrays, as the location information of the contents is also important for having a correct geoposition on Earth. This is defined by ISO 19123 as coverage data. rasdaman provides coverage data support through the Petascope service. Extensions were added on top of rasdaman in order to provide support for the Geoscience community. The following OGC standards are currently supported: Web Map Service (WMS), Web Coverage Service (WCS), and Web Coverage Processing Service (WCPS). The Web Map Service is an extension which provides zoom and pan navigation over images provided by a map server. Starting with version 9.1, rasdaman supports WMS version 1.3. The Web Coverage Service provides capabilities for downloading multi-dimensional coverage data. Support is also provided for several extensions of this service: Subsetting Extension, Scaling Extension, and, starting with version 9.1, Transaction Extension, which

  10. Solving sparse linear least squares problems on some supercomputers by using large dense blocks

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Ostromsky, T; Sameh, A

    1997-01-01

    Efficient subroutines for dense matrix computations have recently been developed and are available on many high-speed computers. On some computers the speed of many dense matrix operations is near to the peak-performance. For sparse matrices storage and operations can be saved by operating on and storing only the nonzero elements. However, the price is a great degradation of the speed of computations on supercomputers (due to the use of indirect addresses, to the need to insert new nonzeros in the sparse storage scheme, to the lack of data locality, etc.). On many high-speed computers a dense matrix technique is preferable to sparse matrix technique when the matrices are not large, because the high computational speed compensates fully the disadvantages of using more arithmetic operations and more storage. For very large matrices the computations must be organized as a sequence of tasks in each...
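The storage trade-off the abstract describes can be made concrete with a toy compressed-sparse-row (CSR) matrix-vector product; the indirect addressing in the sparse version (`x[cols[k]]`) is exactly what degrades speed on vector supercomputers. This is a generic sketch, not the authors' subroutines:

```python
def dense_matvec(A, x):
    # Straight-line multiply-accumulate: vectorizes well on supercomputers.
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def to_csr(A):
    """Store only the nonzeros: values, column indices, row pointers."""
    vals, cols, rowptr = [], [], [0]
    for row in A:
        for j, a in enumerate(row):
            if a != 0.0:
                vals.append(a)
                cols.append(j)
        rowptr.append(len(vals))
    return vals, cols, rowptr

def csr_matvec(vals, cols, rowptr, x):
    # The gather x[cols[k]] is the indirect addressing mentioned above.
    y = []
    for i in range(len(rowptr) - 1):
        y.append(sum(vals[k] * x[cols[k]]
                     for k in range(rowptr[i], rowptr[i + 1])))
    return y

A = [[4.0, 0.0, 0.0],
     [0.0, 0.0, 2.0],
     [1.0, 0.0, 3.0]]
x = [1.0, 2.0, 3.0]
y = csr_matvec(*to_csr(A), x)
assert y == dense_matvec(A, x)   # same result, fewer stored elements
```

The CSR form here stores 4 values instead of 9; for small matrices the dense path still wins on vector hardware, which is the paper's motivation for large dense blocks.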

  11. Simulating the Sky as Seen by the Square Kilometer Array using the MIT Array Performance Simulator (MAPS)

    Science.gov (United States)

    Matthews, Lynn D.; Cappallo, R. J.; Doeleman, S. S.; Fish, V. L.; Lonsdale, C. J.; Oberoi, D.; Wayth, R. B.

    2009-05-01

    The Square Kilometer Array (SKA) is a proposed next-generation radio telescope that will operate at frequencies of 0.1-30 GHz and be 50-100 times more sensitive than existing radio arrays. Meeting the performance goals of this instrument will require innovative new hardware and software developments, a variety of which are now under consideration. Key to evaluating the performance characteristics of proposed SKA designs and testing the feasibility of new data calibration and processing algorithms is the ability to carry out realistic simulations of radio wavelength arrays under a variety of observing conditions. The MIT Array Performance Simulator (MAPS) (http://www.haystack.mit.edu/ast/arrays/maps/index.html) is an observations simulation package designed to achieve this goal. MAPS accepts an input source list or sky model and generates a model visibility set for a user-defined "virtual observatory'', incorporating such factors as array geometry, primary beam shape, field-of-view, and time and frequency resolution. Optionally, effects such as thermal noise, out-of-beam sources, variable station beams, and time/location-dependent ionospheric effects can be included. We will showcase current capabilities of MAPS for SKA applications by presenting results from an analysis of the effects of realistic sky backgrounds on the achievable image fidelity and dynamic range of SKA-like arrays comprising large numbers of small-diameter antennas.
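The core of such a simulator is the computation of model visibilities from an input source list; a minimal point-source sketch follows (not the MAPS implementation, which additionally models station beams, noise, and ionospheric terms):

```python
import cmath

def model_visibility(sources, u, v):
    """Visibility of a point-source sky model on baseline (u, v) in
    wavelengths, with (l, m) direction cosines and flux S:
    V(u, v) = sum_i S_i * exp(-2*pi*j*(u*l_i + v*m_i))."""
    return sum(S * cmath.exp(-2j * cmath.pi * (u * l + v * m))
               for S, l, m in sources)

# A single 1 Jy source at the phase centre gives |V| = 1 Jy on any baseline.
sources = [(1.0, 0.0, 0.0)]
V = model_visibility(sources, u=1000.0, v=-500.0)   # -> (1+0j)
```

A simulator like MAPS evaluates this sum per baseline, time step, and frequency channel for the user-defined virtual observatory, then optionally corrupts the result before imaging.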

  12. Fabrication of large NbSi bolometer arrays for CMB applications

    International Nuclear Information System (INIS)

    Ukibe, M.; Belier, B.; Camus, Ph.; Dobrea, C.; Dumoulin, L.; Fernandez, B.; Fournier, T.; Guillaudin, O.; Marnieros, S.; Yates, S.J.C.

    2006-01-01

    Future cosmic microwave background experiments for high-resolution anisotropy mapping and polarisation detection require large arrays of bolometers at low temperature. We have developed a process to build arrays of antenna-coupled bolometers for that purpose. With adjustment of the Nb{sub x}Si{sub 1-x} alloy composition, the array can be made of high impedance or superconductive (TES) sensors.

  13. Batch fabrication of disposable screen printed SERS arrays.

    Science.gov (United States)

    Qu, Lu-Lu; Li, Da-Wei; Xue, Jin-Qun; Zhai, Wen-Lei; Fossey, John S; Long, Yi-Tao

    2012-03-07

    A novel facile method of fabricating disposable and highly reproducible surface-enhanced Raman spectroscopy (SERS) arrays using screen printing was explored. The screen printing ink containing silver nanoparticles was prepared and printed on supporting materials by a screen printing process to fabricate SERS arrays (6 × 10 printed spots) in large batches. The fabrication conditions, SERS performance and application of these arrays were systematically investigated, and a detection limit of 1.6 × 10^-13 M for rhodamine 6G could be achieved. Moreover, the screen printed SERS arrays exhibited high reproducibility and stability, the spot-to-spot SERS signals showed that the intensity variation was less than 10% and SERS performance could be maintained over 12 weeks. Portable high-throughput analysis of biological samples was accomplished using these disposable screen printed SERS arrays.

  14. Multi-Beam Radio Frequency (RF) Aperture Arrays Using Multiplierless Approximate Fast Fourier Transform (FFT)

    Science.gov (United States)

    2017-08-01

    Keywords: Fourier transform, discrete Fourier transform, digital array processing, antenna beamformers. [Only report-form fragments of the abstract are available; recoverable titles include "Simulation of 2-D Beams Cross Sections" and Figure 1, "N-beam Array Processing System using a Linear Array".]
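The multiplierless approximate-FFT beamformer rests on a standard identity: an N-point DFT taken across the elements of a half-wavelength-spaced uniform linear array forms N simultaneous beams. A minimal sketch (generic DFT beamforming, not the report's approximate multiplierless design; array size and phase convention are illustrative):

```python
import cmath

def dft_beams(snapshot):
    """An N-point DFT across the element snapshot of a uniform linear
    array yields the output powers of N simultaneous spatial beams."""
    N = len(snapshot)
    return [abs(sum(snapshot[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                    for n in range(N)))
            for k in range(N)]

# 8-element array; plane wave arriving with sin(theta) = 2*k0/N
N, k0 = 8, 2
snapshot = [cmath.exp(2j * cmath.pi * k0 * n / N) for n in range(N)]
powers = dft_beams(snapshot)
assert max(range(N), key=lambda k: powers[k]) == k0   # wave lands in beam 2
```

Replacing the exact twiddle factors with low-complexity approximations removes the multipliers, which is the optimization the report pursues.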

  15. Microneedles array with biodegradable tips for transdermal drug delivery

    Science.gov (United States)

    Iliescu, Ciprian; Chen, Bangtao; Wei, Jiashen; Tay, Francis E. H.

    2008-12-01

    The paper presents an enhancement solution for transdermal drug delivery using a microneedle array with biodegradable tips. The microneedle array was fabricated using deep reactive ion etching (DRIE), and the tips were made porous by an electrochemical etching process. The porous silicon microneedle tips can greatly enhance transdermal drug delivery in a minimally invasive, painless and convenient manner; at the same time, they are breakable and biodegradable. The main problem with silicon microneedles is that the tips break off during insertion. The solution proposed is to fabricate the microneedle tip from a biodegradable material - porous silicon. The silicon microneedles are fabricated using the DRIE notching effect of reflected charges on the mask. The process overcomes the difficulty of undercut control of the tips in the classical isotropic silicon etching process. Once the silicon tips were formed, the porous tips were generated using a classical electrochemical anodization process in MeCN/HF/H2O solution. The paper presents experimental results of in vitro release of calcein and BSA through animal skins using a microneedle array with biodegradable tips. Compared to transdermal drug delivery without any enhancer, the microneedle array presented significant enhancement of drug release.

  16. Hybrid Arrays for Chemical Sensing

    Science.gov (United States)

    Kramer, Kirsten E.; Rose-Pehrsson, Susan L.; Johnson, Kevin J.; Minor, Christian P.

    In recent years, multisensory approaches to environment monitoring for chemical detection as well as other forms of situational awareness have become increasingly popular. A hybrid sensor is a multimodal system that incorporates several sensing elements and thus produces data that are multivariate in nature and may be significantly increased in complexity compared to data provided by single-sensor systems. Though a hybrid sensor is itself an array, hybrid sensors are often organized into more complex sensing systems through an assortment of network topologies. Part of the reason for the shift to hybrid sensors is due to advancements in sensor technology and computational power available for processing larger amounts of data. There is also ample evidence to support the claim that a multivariate analytical approach is generally superior to univariate measurements because it provides additional redundant and complementary information (Hall, D. L.; Linas, J., Eds., Handbook of Multisensor Data Fusion, CRC, Boca Raton, FL, 2001). However, the benefits of a multisensory approach are not automatically achieved. Interpretation of data from hybrid arrays of sensors requires the analyst to develop an application-specific methodology to optimally fuse the disparate sources of data generated by the hybrid array into useful information characterizing the sample or environment being observed. Consequently, multivariate data analysis techniques such as those employed in the field of chemometrics have become more important in analyzing sensor array data. Depending on the nature of the acquired data, a number of chemometric algorithms may prove useful in the analysis and interpretation of data from hybrid sensor arrays. It is important to note, however, that the challenges posed by the analysis of hybrid sensor array data are not unique to the field of chemical sensing. Applications in electrical and process engineering, remote sensing, medicine, and of course, artificial

  17. Self-assembly and optical properties of patterned ZnO nanodot arrays

    International Nuclear Information System (INIS)

    Song Yijian; Zheng Maojun; Ma Li

    2007-01-01

    Patterned ZnO nanodot (ND) arrays and a ND-cavity microstructure were realized on an anodic alumina membrane (AAM) surface through a spin-coating sol-gel process, which benefits from the morphology and localized negative charge surface of AAM as well as the optimized sol concentration. The growth mechanism is believed to be a self-assembly process. This provides a simple approach to fabricate semiconductor quantum dot (QD) arrays and a QD-cavity system with its advantage in low cost and mass production. Strong ultra-violet emission, a multi-phonon process, and its special structure-related properties were observed in the patterned ZnO ND arrays

  18. Multilevel photonic modules for millimeter-wave phased-array antennas

    Science.gov (United States)

    Paolella, Arthur C.; Bauerle, Athena; Joshi, Abhay M.; Wright, James G.; Coryell, Louis A.

    2000-09-01

    Millimeter wave phased array systems have antenna element sizes and spacings similar to MMIC chip dimensions by virtue of the operating wavelength. Modules designed with traditional planar packaging techniques are therefore difficult to implement. An advantageous way to maintain a small module footprint compatible with Ka-Band and higher frequency systems is to take advantage of two leading edge technologies: opto-electronic integrated circuits (OEICs) and multilevel packaging technology. Under a Phase II SBIR these technologies are combined to form photonic modules for optically controlled millimeter wave phased array antennas. The proposed module, consisting of an OEIC integrated with a planar antenna array, will operate in the 40 GHz region. The OEIC consists of an InP-based dual-depletion PIN photodetector and distributed amplifier. The multi-level module will be fabricated using an enhanced thick film circuit process. Since the modules are batch fabricated using standard commercial processes, they have the potential to be low cost while maintaining high performance, impacting both military and commercial communications systems.

  19. Fabrication of large NbSi bolometer arrays for CMB applications

    Energy Technology Data Exchange (ETDEWEB)

    Ukibe, M. [AIST, Tsukuba Central 2, Tsukuba, Ibaraki 305-8568 (Japan); CNRS-CSNSM, Bat 104, Orsay Campus F-91405 (France); Belier, B. [CNRS-IEF, Bat 220, Orsay Campus F-91405 (France); Camus, Ph. [CNRS-CRTBT, 25 avenue des Martyrs, Grenoble F-38042 (France)]. E-mail: philippe.camus@grenoble.cnrs.fr; Dobrea, C. [CNRS-CSNSM, Bat 104, Orsay Campus F-91405 (France); Dumoulin, L. [CNRS-CSNSM, Bat 104, Orsay Campus F-91405 (France); Fernandez, B. [CNRS-CRTBT, 25 avenue des Martyrs, Grenoble F-38042 (France); Fournier, T. [CNRS-CRTBT, 25 avenue des Martyrs, Grenoble F-38042 (France); Guillaudin, O. [CNRS-LPSC, 53 avenue des Martyrs, Grenoble F-38042 (France); Marnieros, S. [CNRS-CSNSM, Bat 104, Orsay Campus F-91405 (France); Yates, S.J.C. [CNRS-CSNSM, Bat 104, Orsay Campus F-91405 (France)

    2006-04-15

    Future cosmic microwave background experiments for high-resolution anisotropy mapping and polarisation detection require large arrays of bolometers at low temperature. We have developed a process to build arrays of antenna-coupled bolometers for that purpose. With adjustment of the Nb{sub x}Si{sub 1-x} alloy composition, the array can be made of high impedance or superconductive (TES) sensors.

  20. Cascadia Subduction Zone Earthquake Source Spectra from an Array of Arrays

    Science.gov (United States)

    Gomberg, J. S.; Vidale, J. E.

    2011-12-01

    It is generally accepted that spectral characteristics distinguish 'slow' seismic sources from those of 'ordinary' or 'fast' earthquakes. To explore this difference, we measure ordinary earthquake spectra of about 30 seismic events located near the Cascadia plate interface where ETS regularly occurs. We separate the effects of local site response, regional propagation (attenuation and spreading), and processes near or at the source for a dense dataset recorded on an array of eight seismic micro-arrays. The arrays have apertures of 1-2 km with 21-31 seismographs in each, and are separated by 10-20 km. We assume that the spectrum of each recorded signal may be described by the product of 1) a frequency-dependent site response, 2) propagation effects that include geometric spreading and an exponential decay that varies with distance and frequency, and 3) a frequency-dependent source spectrum. Using more than 1000 seismograms from all events recorded at all sites simultaneously, we solve for frequency-dependent site responses and source spectra, as well as a single regional Q value. We interpret only the slope of the source terms because most earthquakes have magnitudes less than 0, so we expect that their corner frequencies are higher than the recorded passband. The amplitude variation in the site response within the same array sometimes exceeds a factor of 3, which is consistent with the variation seen visually. We see variability in the slopes of the source spectra comparable to the difference between 'slow' and 'fast' events observed in other studies, and which show a strong correlation with source location. Spectral slopes of spatially clustered sources are nearly identical but usually differ from those of clusters at a distance of a few tens of km, and spectral content varies systematically with location within the distribution of events.
While these differences may reflect varying source processes (e.g., rupture velocity, stress drop), the strong correlation
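The multiplicative decomposition described above can be miniaturized to a closed-form toy: taking logs turns the product of source, site, and path terms into a sum, and with the path term assumed known and a zero-sum gauge on the site terms, a 2-event by 2-site set separates exactly. All values below are illustrative; the study solves a much larger joint system that also includes a regional Q:

```python
import math

# ln A_ij = s_i + c_j + ln(path(r_ij)), with path = exp(-alpha*r)/r known here.
alpha = 0.01
true_s = [1.2, 0.4]         # event (source) terms
true_c = [0.3, -0.3]        # site terms, constrained to sum to zero (gauge)
r = [[10.0, 20.0], [15.0, 25.0]]

lnA = [[true_s[i] + true_c[j] - alpha * r[i][j] - math.log(r[i][j])
        for j in range(2)] for i in range(2)]

# Remove the known path term, then separate source and site:
d = [[lnA[i][j] + alpha * r[i][j] + math.log(r[i][j])
      for j in range(2)] for i in range(2)]        # d[i][j] = s_i + c_j
s_est = [(d[i][0] + d[i][1]) / 2 for i in range(2)]   # event average recovers s_i
c_est = [d[0][j] - s_est[0] for j in range(2)]        # residual recovers c_j
```

With many events and sites the same log-linear structure becomes an overdetermined least-squares problem, which is what the simultaneous inversion over 1000+ seismograms solves.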

  1. Process Development of Gallium Nitride Phosphide Core-Shell Nanowire Array Solar Cell

    Science.gov (United States)

    Chuang, Chen

    Dilute nitride GaNP is a promising material for opto-electronic applications due to its band gap tunability. The efficiency of GaNxP1-x/GaNyP1-y core-shell nanowire solar cells (NWSC) is expected to reach as high as 44% with 1% N in the core and 9% N in the shell. By developing such high efficiency NWSCs on silicon substrates, the cost of solar photovoltaics can be further reduced to $61/MWh, which is competitive with the levelized cost of electricity (LCOE) of fossil fuels. Therefore, a suitable NWSC structure and fabrication process need to be developed to achieve this promising NWSC. This thesis is devoted to the development of a fabrication process for GaNxP1-x/GaNyP1-y core-shell nanowire solar cells. The thesis is divided into two major parts. In the first part, previously grown GaP/GaNyP1-y core-shell nanowire samples are used to develop the fabrication process of gallium nitride phosphide nanowire solar cells. The design of the nanowire arrays, passivation layer, polymeric filler spacer, transparent collecting layer and metal contacts is discussed, and the devices are fabricated. The properties of these NWSCs are also characterized to point out future development directions for gallium nitride phosphide NWSCs. In the second part, a nano-hole template made by nanosphere lithography is studied for selective area growth of nanowires to improve the structure of the core-shell NWSC. The fabrication process of the nano-hole templates and the results are presented. To obtain consistent features of the nano-hole template, the Taguchi Method is used to optimize the fabrication process.

  2. Developing infrared array controller with software real time operating system

    Science.gov (United States)

    Sako, Shigeyuki; Miyata, Takashi; Nakamura, Tomohiko; Motohara, Kentaro; Uchimoto, Yuka Katsuno; Onaka, Takashi; Kataza, Hirokazu

    2008-07-01

    Real-time capabilities are required for a controller of a large format array to reduce the dead time caused by readout and data transfer. Real-time processing has previously been achieved with dedicated processors, including DSP, CPLD, and FPGA devices. However, dedicated processors have problems with memory resources, inflexibility, and high cost. Meanwhile, a recent PC has sufficient CPU and memory resources to control an infrared array and to process a large amount of frame data in real time. In this study, we have developed an infrared array controller with a software real-time operating system (RTOS) instead of dedicated processors. A Linux PC equipped with the RTAI extension and a dual-core CPU is used as the main computer, and one of the CPU cores is allocated to real-time processing. A digital I/O board with DMA functions is used as the I/O interface. The signal-processing cores are integrated in the OS kernel as a real-time driver module, which is composed of two virtual devices: the clock processor and the frame processor tasks. The array controller with the RTOS realizes complicated operations easily, flexibly, and at a low cost.

  3. Co-Prime Frequency and Aperture Design for HF Surveillance, Wideband Radar Imaging, and Nonstationary Array Processing

    Science.gov (United States)

    2018-03-01

    ... to develop novel co-prime sampling and array design strategies that achieve high-resolution estimation of spectral power distributions and signal ... by the array geometry and the frequency offset. We overcome this limitation by introducing a novel sparsity-based multi-target localization approach ... "estimation using a sparse uniform linear array with two CW signals of co-prime frequencies," IEEE International Workshop on Computational Advances ...
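A co-prime array pairs two undersampled uniform subarrays with co-prime inter-element spacings; the difference coarray then provides many more virtual lags than there are physical sensors, which is what enables the high-resolution estimation the report targets. A generic construction sketch (the M, N values and the extended 2N-element subarray are conventional choices, not necessarily the report's):

```python
def coprime_array(M, N):
    """Element positions (in half-wavelength units) of an extended co-prime
    array: M elements spaced N units, plus 2N elements spaced M units."""
    return sorted(set(N * m for m in range(M)) |
                  set(M * n for n in range(2 * N)))

def difference_coarray(pos):
    # Virtual lags available to correlation-based estimators.
    return sorted(set(p - q for p in pos for q in pos))

pos = coprime_array(3, 4)          # M = 3, N = 4 are co-prime
diffs = difference_coarray(pos)
assert len(pos) < len(diffs)       # far fewer sensors than virtual lags
```

With M = 3, N = 4 this gives 10 physical positions whose difference coarray contains a contiguous run of lags around zero, so a sparse physical array emulates a much longer filled aperture.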

  4. X-ray microcalorimeter arrays fabricated by surface micromachining

    International Nuclear Information System (INIS)

    Hilton, G.C.; Beall, J.A.; Deiker, S.; Vale, L.R.; Doriese, W.B.; Beyer, Joern; Ullom, J.N.; Reintsema, C.D.; Xu, Y.; Irwin, K.D.

    2004-01-01

    We are developing arrays of Mo/Cu transition edge sensor-based detectors for use as X-ray microcalorimeters and sub-millimeter bolometers. We have fabricated 8x8 pixel X-ray microcalorimeter arrays using surface micromachining. Surface-micromachining techniques hold the promise of scalability to much larger arrays and may allow for the integration of in-plane multiplexer elements. In this paper we describe the surface micromachining process and recent improvements in the device geometry that provide for increased mechanical strength. We also present X-ray and heat pulse spectra collected using these detectors

  5. Fabrication of microlens arrays using a CO2-assisted embossing technique

    International Nuclear Information System (INIS)

    Huang, Tzu-Chien; Chan, Bin-Da; Ciou, Jyun-Kai; Yang, Sen-Yeu

    2009-01-01

    This paper reports a method to fabricate microlens arrays at a low processing temperature and a low pressure. The method is based on embossing a softened polymeric substrate over a mold with micro-hole arrays. Due to the effects of capillary action and surface tension, microlens arrays can be formed. The embossing medium is CO2 gas, which supplies a uniform pressing pressure so that large-area microlens arrays can be fabricated. CO2 gas also acts as a solvent to plasticize the polymer substrates. With the special dissolving ability and isotropic pressing capacity of CO2 gas, microlens arrays can be fabricated at a low temperature (lower than Tg) and free of thermally induced residual stress. Such a combined mechanism of dissolving and embossing with CO2 gas makes the fabrication of microlens arrays direct, without complex processes, and more suitable for optical applications. In the study, it is also found that the sag height of the microlenses changes when different CO2 dissolving pressures and times are used. This makes it easy to fabricate microlens arrays of different geometries without using different molds. The quality, uniformity and optical properties of the fabricated microlens arrays have been verified with measurements of the dimensions, surface smoothness, focal length, transmittance and light intensity through the fabricated microlens arrays.

  6. Microfabricated Silicon Microneedle Array for Transdermal Drug Delivery

    International Nuclear Information System (INIS)

    Ji, J; Tay, F E; Miao Jianmin; Iliescu, C

    2006-01-01

    This paper presents developed processes for silicon microneedle array microfabrication. Three types of microneedle structures were achieved by isotropic etching in inductively coupled plasma (ICP) using SF6/O2 gases, a combination of isotropic etching with deep etching, and wet etching, respectively. A microneedle array with biodegradable porous tips was further developed based on the fabricated microneedles.

  7. Microfabricated Silicon Microneedle Array for Transdermal Drug Delivery

    Energy Technology Data Exchange (ETDEWEB)

    Ji, J [Mechanical Engineering National University of Singapore, 119260, Singapore (Singapore); Tay, F E [Mechanical Engineering National University of Singapore, 119260, Singapore (Singapore); Miao Jianmin [MicroMachines Center, School of Mechanical and Aerospace Engineering, Nanyang Technological University, 50 Nanyang Avenue, 639798 (Singapore); Iliescu, C [Institute of Bioengineering and Nanotechnology, 31 Biopolis Way, Nanos, 04-01, 138669 (Singapore)

    2006-04-01

    This paper presents developed processes for silicon microneedle arrays microfabrication. Three types of microneedles structures were achieved by isotropic etching in inductively coupled plasma (ICP) using SF{sub 6}/O{sub 2} gases, combination of isotropic etching with deep etching, and wet etching, respectively. A microneedle array with biodegradable porous tips was further developed based on the fabricated microneedles.

  8. Array processors: an introduction to their architecture, software, and applications in nuclear medicine

    International Nuclear Information System (INIS)

    King, M.A.; Doherty, P.W.; Rosenberg, R.J.; Cool, S.L.

    1983-01-01

    Array processors are ''number crunchers'' that dramatically enhance the processing power of nuclear medicine computer systems for applications dealing with the repetitive operations involved in digital image processing of large segments of data. The general architecture and the programming of array processors are introduced, along with some applications of array processors to the reconstruction of emission tomographic images, digital image enhancement, and functional image formation.
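The kind of repetitive operation the abstract refers to can be illustrated with a small image convolution, the multiply-accumulate pattern that array processors pipeline in hardware (a generic sketch, not code from the article):

```python
def convolve2d(img, kernel):
    """Valid-mode 2D convolution: the inner multiply-accumulate loops are
    the repetitive work an array processor executes in its pipeline."""
    kh, kw = len(kernel), len(kernel[0])
    H, W = len(img), len(img[0])
    out = [[0.0] * (W - kw + 1) for _ in range(H - kh + 1)]
    for i in range(H - kh + 1):
        for j in range(W - kw + 1):
            out[i][j] = sum(img[i + a][j + b] * kernel[a][b]
                            for a in range(kh) for b in range(kw))
    return out

box = [[1 / 9.0] * 3 for _ in range(3)]     # 3x3 smoothing kernel
img = [[9.0] * 4 for _ in range(4)]         # constant test image
out = convolve2d(img, box)                  # smoothing preserves a constant
```

Filtered backprojection and image enhancement reduce to exactly such kernels applied across every pixel, which is why offloading them to an array processor pays off.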

  9. EDF's experience with supercomputing and challenges ahead - towards multi-physics and multi-scale approaches

    Energy Technology Data Exchange (ETDEWEB)

    Delbecq, J.M.; Banner, D. [Electricite de France (EDF)- R and D Division, 92 - Clamart (France)

    2003-07-01

    Nuclear power plants are a major asset of the EDF company. To remain so, in particular in a context of deregulation, competitiveness, safety and public acceptance are three conditions. These stakes apply both to existing plants and to future reactors. The purpose of the presentation is to explain how supercomputing can help EDF to satisfy these requirements. Three examples are described in detail: ensuring optimal use of nuclear fuel under wholly safe conditions, understanding and simulating the material deterioration mechanisms and moving forward with numerical simulation for the performance of EDF's activities. In conclusion, a broader vision of EDF long term R and D in the field of numerical simulation is given and especially of five challenges taken up by EDF together with its industrial and scientific partners. (author)

  10. Performance Evaluation of an Intel Haswell- and Ivy Bridge-Based Supercomputer Using Scientific and Engineering Applications

    Science.gov (United States)

    Saini, Subhash; Hood, Robert T.; Chang, Johnny; Baron, John

    2016-01-01

    We present a performance evaluation conducted on a production supercomputer of the Intel Xeon Processor E5-2680v3, a twelve-core implementation of the fourth-generation Haswell architecture, and compare it with the Intel Xeon Processor E5-2680v2, an Ivy Bridge implementation of the third-generation Sandy Bridge architecture. Several new architectural features have been incorporated in Haswell, including improvements in all levels of the memory hierarchy as well as improvements to vector instructions and power management. We critically evaluate these new features of Haswell and compare with Ivy Bridge using several low-level benchmarks, including a subset of HPCC, HPCG, and four full-scale scientific and engineering applications. We also present a model to predict the performance of HPCG and Cart3D within 5%, and Overflow within 10% accuracy.

  11. 369 TFlop/s molecular dynamics simulations on the Roadrunner general-purpose heterogeneous supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Swaminarayan, Sriram [Los Alamos National Laboratory; Germann, Timothy C [Los Alamos National Laboratory; Kadau, Kai [Los Alamos National Laboratory; Fossum, Gordon C [IBM CORPORATION

    2008-01-01

    The authors present timing and performance numbers for a short-range parallel molecular dynamics (MD) code, SPaSM, that has been rewritten for the heterogeneous Roadrunner supercomputer. Each Roadrunner compute node consists of two AMD Opteron dual-core microprocessors and four PowerXCell 8i enhanced Cell microprocessors, so that there are four MPI ranks per node, each with one Opteron and one Cell. The interatomic forces are computed on the Cells (each with one PPU and eight SPU cores), while the Opterons are used to direct inter-rank communication and perform I/O-heavy periodic analysis, visualization, and checkpointing tasks. The performance measured for our initial implementation of a standard Lennard-Jones pair potential benchmark reached a peak of 369 TFlop/s double-precision floating-point performance on the full Roadrunner system (27.7% of peak), corresponding to 124 MFlop/s/Watt at a price of approximately 3.69 MFlop/s per dollar. They demonstrate an initial target application, the jetting and ejection of material from a shocked surface.
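The Lennard-Jones pair potential named as the benchmark can be written down directly; a minimal sketch in reduced units (not the SPaSM code itself, which evaluates this force over cell lists on the Cell SPUs):

```python
def lj_energy_force(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair interaction:
    U(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6),  F(r) = -dU/dr."""
    s6 = (sigma / r) ** 6
    U = 4.0 * epsilon * (s6 * s6 - s6)
    F = 24.0 * epsilon * (2.0 * s6 * s6 - s6) / r
    return U, F

# The potential minimum sits at r = 2**(1/6)*sigma, where the force vanishes
# and the well depth equals -epsilon.
r_min = 2.0 ** (1.0 / 6.0)
U, F = lj_energy_force(r_min)   # -> (-1.0, 0.0)
```

Because this kernel is evaluated for every neighboring pair at every timestep, it dominates the flop count, which is why SPaSM offloads it to the Cell coprocessors.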

  12. Automated Array Assembly, Phase 2. Quarterly technical progress report, April-June 1979

    Energy Technology Data Exchange (ETDEWEB)

    Carbajal, B.G.

    1979-07-01

    The Automated Array Assembly Task, Phase 2 of the Low Cost Solar Array (LSA) Project is a process development task. This contract provides for the fabrication of modules from large area Tandem Junction Cells (TJC). The key activities in this contract effort are (a) Large Area TJC including cell design, process verification and cell fabrication and (b) Tandem Junction Module (TJM) including definition of the cell-module interfaces, substrate fabrication, interconnect fabrication and module assembly. The overall goal is to advance solar cell module process technology to meet the 1986 goal of a production capability of 500 megawatts per year at a cost of less than $500 per peak kilowatt. This contract will focus on the Tandem Junction Module process. During this quarter, effort was focused on design and process verification. The large area TJC design was completed and the design verification was completed. Process variation experiments led to refinements in the baseline TJC process. Formed steel substrates were porcelainized. Cell array assembly techniques using infrared soldering are being checked out. Dummy cell arrays up to 5 cell by 5 cell have been assembled using all backside contacts.

  13. Processes for design, construction and utilisation of arrays of light-emitting diodes and light-emitting diode-coupled optical fibres for multi-site brain light delivery.

    Science.gov (United States)

    Bernstein, Jacob G; Allen, Brian D; Guerra, Alexander A; Boyden, Edward S

    2015-05-01

    Optogenetics enables light to be used to control the activity of genetically targeted cells in the living brain. Optical fibers can be used to deliver light to deep targets, and LEDs can be spatially arranged to enable patterned light delivery. In combination, arrays of LED-coupled optical fibers can enable patterned light delivery to deep targets in the brain. Here we describe the process flow for making LED arrays and LED-coupled optical fiber arrays, explaining key optical, electrical, thermal, and mechanical design principles to enable the manufacturing, assembly, and testing of such multi-site targetable optical devices. We also explore accessory strategies such as surgical automation approaches as well as innovations to enable low-noise concurrent electrophysiology.

  14. Dynamical analysis of surface-insulated planar wire array Z-pinches

    Science.gov (United States)

    Li, Yang; Sheng, Liang; Hei, Dongwei; Li, Xingwen; Zhang, Jinhai; Li, Mo; Qiu, Aici

    2018-05-01

    The ablation and implosion dynamics of planar wire array Z-pinches with and without surface insulation are compared and discussed in this paper. The paper first presents a phenomenological model named the ablation and cascade snowplow implosion (ACSI) model, which accounts for the ablation and implosion phases of a planar wire array Z-pinch in a single simulation. The comparison between experimental data and simulation results shows that the ACSI model gives a fairly good description of the dynamical characteristics of planar wire array Z-pinches. Surface insulation introduces notable differences in the ablation phase of planar wire array Z-pinches: the ablation phase is divided into two stages, insulation layer ablation and tungsten wire ablation. The two-stage ablation process of insulated wires is simulated in the ACSI model by updating the formulas describing the ablation process.

  15. The data array, a tool to interface the user to a large data base

    Science.gov (United States)

    Foster, G. H.

    1974-01-01

    Aspects of the processing of spacecraft data are considered. The use of the data array in a large address space as an intermediate form in processing a large scientific data base is advocated. Techniques for efficient indexing in data arrays are reviewed, and the data array method for mapping an arbitrary structure onto a linear address space is shown. A compromise between the two forms is given. The impact of the data array on the user interface is considered, along with implementation.
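
In its simplest form, mapping a multi-dimensional structure onto a linear address space is the familiar row-major index computation; a minimal sketch (the function name and row-major ordering are illustrative assumptions, not the paper's actual scheme):

```python
def linear_index(indices, shape):
    """Row-major (C-order) mapping of a multi-dimensional index onto a
    single linear address: addr = (...(i0*n1 + i1)*n2 + i2)*... + ik."""
    addr = 0
    for i, n in zip(indices, shape):
        addr = addr * n + i
    return addr

# element (1, 2) of a 3 x 4 array lands at linear address 1*4 + 2 = 6
```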

  16. SNP Arrays

    Directory of Open Access Journals (Sweden)

    Jari Louhelainen

    2016-10-01

    Full Text Available The papers published in this Special Issue “SNP Arrays” (Single Nucleotide Polymorphism Arrays) focus on several perspectives associated with arrays of this type. The papers range from a case report to reviews, thereby targeting wider audiences working in this field. Research using SNP arrays often focuses on human cancers, but this Issue expands that focus to include areas such as rare conditions, animal breeding and bioinformatics tools. Given the limited scope, the spectrum of papers is nothing short of remarkable, and even from a technical point of view these papers will contribute to the field at a general level. Three of the papers published in this Special Issue focus on the use of various SNP array approaches in the analysis of three different cancer types. Two of the papers concentrate on two very different rare conditions, applying the SNP arrays slightly differently. Finally, two other papers evaluate the use of SNP arrays in the context of genetic analysis of livestock. The findings reported in these papers help to close gaps in the current literature and also give guidelines for future applications of SNP arrays.

  17. Physical Limitations To Nonuniformity Correction In IR Focal Plane Arrays

    Science.gov (United States)

    Scribner, D. A.; Kruer, M. R.; Gridley, J. C.; Sarkady, K.

    1988-05-01

    Simple nonuniformity correction algorithms currently in use can be severely limited by the nonlinear response characteristics of the individual pixels in an IR focal plane array. Although more complicated multi-point algorithms improve the correction process, they too can be limited by nonlinearities. Furthermore, analysis of single-pixel noise power spectra usually shows some level of 1/f noise. This in turn causes pixel outputs to drift independently of each other, so the spatial noise (often called fixed pattern noise) of the array increases as a function of time since the last calibration. Measurements are presented for two arrays (a HgCdTe hybrid and a Pt:Si CCD) describing pixel nonlinearities, 1/f noise, and residual spatial noise (after nonuniformity correction). Particular emphasis is placed on spatial noise as a function of the time elapsed since the last calibration and the calibration process selected. The resulting spatial noise is examined in terms of its effect on the NEΔT performance of each array tested, and comparisons are made. Finally, a discussion of implications for array developers is given.
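
A simple calibration of the kind whose limitations the paper measures is the two-point correction: fit a per-pixel gain and offset from flat-field frames taken at two blackbody temperatures. A sketch under the assumption of a perfectly linear pixel response (exactly the assumption that nonlinearity and 1/f drift break; the function name is illustrative):

```python
import numpy as np

def two_point_nuc(raw, cold, hot, t_cold, t_hot):
    """Two-point nonuniformity correction: per-pixel gain/offset fitted
    from flat-field frames `cold` and `hot` taken at blackbody
    temperatures t_cold and t_hot, applied to a raw frame."""
    gain = (t_hot - t_cold) / (hot - cold)   # per-pixel gain
    offset = t_cold - gain * cold            # per-pixel offset
    return gain * raw + offset               # frame on a common scale
```

For truly linear pixels this removes fixed pattern noise exactly; real pixels' nonlinearity and 1/f drift leave the residual spatial noise that the paper quantifies as a function of time since calibration.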

  18. A novel method to design sparse linear arrays for ultrasonic phased array.

    Science.gov (United States)

    Yang, Ping; Chen, Bin; Shi, Ke-Ren

    2006-12-22

    In ultrasonic phased array testing, a sparse array can increase the resolution by enlarging the aperture without adding system complexity. Designing a sparse array involves choosing the best or a better configuration from a large number of candidate arrays. We first designed sparse arrays using a genetic algorithm alone, but found that the resulting arrays have poor performance and poor consistency. A method based on the Minimum Redundancy Linear Array was therefore adopted: some elements are first fixed according to the minimum-redundancy array to ensure spatial resolution, and a genetic algorithm is then used to optimize the remaining elements. Sparse arrays designed by this method have much better performance and consistency than arrays designed by a genetic algorithm alone. Both simulation and experiment confirm the effectiveness of the method.
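
The minimum-redundancy idea is to pick element positions whose pairwise spacings (the coarray) cover every lag with as little repetition as possible; a small sketch for checking a candidate layout (illustrative, not the authors' code):

```python
from itertools import combinations

def coarray_lags(positions):
    """Distinct pairwise element spacings (the coarray) of a sparse
    linear array, in units of the element pitch."""
    return sorted({abs(a - b) for a, b in combinations(positions, 2)})

# the classic 4-element minimum-redundancy layout {0, 1, 4, 6} covers
# every lag from 1 to 6 with no repeated spacing
```

A genetic algorithm can then score candidate layouts by, e.g., how completely the coarray fills the aperture and by the simulated beam pattern, optimizing only the elements not fixed by the minimum-redundancy skeleton.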

  19. Optical technology for microwave applications VI and optoelectronic signal processing for phased-array antennas III; Proceedings of the Meeting, Orlando, FL, Apr. 20-23, 1992

    Science.gov (United States)

    Yao, Shi-Kay; Hendrickson, Brian M.

    The following topics related to optical technology for microwave applications are discussed: advanced acoustooptic devices, signal processing device technologies, optical signal processor technologies, microwave and optomicrowave devices, advanced lasers and sources, wideband electrooptic modulators, and wideband optical communications. The topics considered in the discussion of optoelectronic signal processing for phased-array antennas include devices, signal processing, and antenna systems.

  20. Petascale Computational Systems

    OpenAIRE

    Bell, Gordon; Gray, Jim; Szalay, Alex

    2007-01-01

    Computational science is becoming data intensive. Supercomputers must be balanced systems, not just CPU farms but also petascale I/O and networking arrays. Anyone building CyberInfrastructure should allocate resources to support a balanced Tier-1 through Tier-3 design.

  1. Synthesis and characterization of Mn-doped ZnO column arrays

    International Nuclear Information System (INIS)

    Yang Mei; Guo Zhixing; Qiu Kehui; Long Jianping; Yin Guangfu; Guan Denggao; Liu Sutian; Zhou Shijie

    2010-01-01

    Mn-doped ZnO column arrays were successfully synthesized by a conventional sol-gel process. The effects of the Mn/Zn atomic ratio and reaction time were investigated, and the morphology, orientation and optical properties of the Mn-doped ZnO column arrays were characterized by SEM, XRD and photoluminescence (PL) spectroscopy. The results show that a Mn/Zn atomic ratio of 0.1 and a growth time of 12 h are the optimal conditions for the preparation of densely distributed ZnO column arrays. XRD analysis shows that the Mn-doped ZnO column arrays are highly c-axis oriented. For the Mn-doped ZnO column arrays, an obvious increase in photoluminescence intensity is observed at wavelengths of ∼395 nm and ∼413 nm compared to pure ZnO column arrays.

  2. Resonance spectra of diabolo optical antenna arrays

    Directory of Open Access Journals (Sweden)

    Hong Guo

    2015-10-01

    Full Text Available A complete set of diabolo optical antenna arrays with different waist widths and periods was fabricated on a sapphire substrate using standard e-beam lithography and a lift-off process. The fabricated diabolo optical antenna arrays were characterized by measuring transmittance and reflectance with a microscope-coupled FTIR spectrometer. It was found experimentally that reducing the waist width significantly shifts the resonance to longer wavelengths, and that narrowing the antenna waist is more effective than increasing the array period for tuning the resonance wavelength. It was also found that the magnetic field enhancement near the antenna waist is correlated with the shift of the resonance wavelength.

  3. Conducting polymer nanowire arrays for high performance supercapacitors.

    Science.gov (United States)

    Wang, Kai; Wu, Haiping; Meng, Yuena; Wei, Zhixiang

    2014-01-15

    This Review provides a brief summary of the most recent research developments in the fabrication and application of one-dimensional ordered conducting polymer nanostructures (especially nanowire arrays) and their composites as electrodes for supercapacitors. By controlling the nucleation and growth process of polymerization, aligned conducting polymer nanowire arrays and their composites with nano-carbon materials can be prepared by in situ chemical polymerization or electrochemical polymerization without a template. This kind of nanostructure (such as polypyrrole and polyaniline nanowire arrays) possesses high capacitance and superior rate capability, ascribed to the large electrochemical surface area and an optimal ion diffusion path in the ordered nanowire structure, and is proved to be an ideal electrode material for high performance supercapacitors. Furthermore, flexible, micro-scale, threadlike, and multifunctional supercapacitors based on conducting polyaniline nanowire arrays and their composites are introduced. These supercapacitor prototypes utilize the high flexibility, good processability, and large capacitance of conducting polymers, which efficiently extends the usage of supercapacitors in various situations, even in complicated integrated systems of different electronic devices. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Materials preparation and fabrication of pyroelectric polymer/silicon MOSFET detector arrays. Final report

    International Nuclear Information System (INIS)

    Bloomfield, P.

    1992-01-01

    The authors have delivered several 64-element linear arrays of pyroelectric elements fully integrated on silicon wafers with MOS readout devices. They have delivered detailed drawings of the linear arrays to LANL. They have processed a series of two inch wafers per submitted design. Each two inch wafer contains two 64 element arrays. After spin-coating copolymer onto the arrays, vacuum depositing the top electrodes, and polarizing the copolymer films so as to make them pyroelectrically active, each wafer was split in half. The authors developed a thicker oxide coating separating the extended gate electrode (beneath the polymer detector) from the silicon. This should reduce its parasitic capacitance and hence improve the S/N. They provided LANL three processed 64 element sensor arrays. Each array was affixed to a connector panel and selected solder pads of the common ground, the common source voltage supply connections, the 64 individual drain connections, and the 64 drain connections (for direct pyroelectric sensing response rather than the MOSFET action) were wire bonded to the connector panel solder pads. This entails (64 + 64 + 1 + 1) = 130 possible bond connections per 64 element array. This report now details the processing steps and the progress of the individual wafers as they were carried through from beginning to end

  5. Mathematical analysis of the real time array PCR (RTA PCR) process

    NARCIS (Netherlands)

    Dijksman, Johan Frederik; Pierik, A.

    2012-01-01

    Real time array PCR (RTA PCR) is a recently developed biochemical technique that measures amplification curves (like with quantitative real time Polymerase Chain Reaction (qRT PCR)) of a multitude of different templates in a sample. It combines two different methods in order to profit from the

  6. Graphical Environment Tools for Application to Gamma-Ray Energy Tracking Arrays

    Energy Technology Data Exchange (ETDEWEB)

    Todd, Richard A. [RIS Corp.; Radford, David C. [ORNL Physics Div.

    2013-12-30

    Highly segmented, position-sensitive germanium detector systems are being developed for nuclear physics research where traditional electronic signal processing with mixed analog and digital function blocks would be enormously complex and costly. Future systems will be constructed using pipelined processing of high-speed digitized signals as is done in the telecommunications industry. Techniques which provide rapid algorithm and system development for future systems are desirable. This project has used digital signal processing concepts and existing graphical system design tools to develop a set of re-usable modular functions and libraries targeted for the nuclear physics community. Researchers working with complex nuclear detector arrays such as the Gamma-Ray Energy Tracking Array (GRETA) have been able to construct advanced data processing algorithms for implementation in field programmable gate arrays (FPGAs) through application of these library functions using intuitive graphical interfaces.

  7. ATMAD: robust image analysis for Automatic Tissue MicroArray De-arraying.

    Science.gov (United States)

    Nguyen, Hoai Nam; Paveau, Vincent; Cauchois, Cyril; Kervrann, Charles

    2018-04-19

    Over the last two decades, an innovative technology called Tissue Microarray (TMA), which combines multi-tissue and DNA microarray concepts, has been widely used in the field of histology. It consists of a collection of several (up to 1000 or more) tissue samples that are assembled onto a single support - typically a glass slide - according to a design grid (array) layout, in order to allow multiplex analysis by treating numerous samples under identical and standardized conditions. However, during the TMA manufacturing process, the sample positions can be highly distorted from the design grid due to the imprecision when assembling tissue samples and the deformation of the embedding waxes. Consequently, these distortions may lead to severe errors of (histological) assay results when the sample identities are mismatched between the design and its manufactured output. The development of a robust method for de-arraying TMA, which localizes and matches TMA samples with their design grid, is therefore crucial to overcome the bottleneck of this prominent technology. In this paper, we propose an Automatic, fast and robust TMA De-arraying (ATMAD) approach dedicated to images acquired with brightfield and fluorescence microscopes (or scanners). First, tissue samples are localized in the large image by applying a locally adaptive thresholding on the isotropic wavelet transform of the input TMA image. To reduce false detections, a parametric shape model is considered for segmenting ellipse-shaped objects at each detected position. Segmented objects that do not meet the size and the roundness criteria are discarded from the list of tissue samples before being matched with the design grid. Sample matching is performed by estimating the TMA grid deformation under the thin-plate model. Finally, thanks to the estimated deformation, the true tissue samples that were preliminary rejected in the early image processing step are recognized by running a second segmentation step. We
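
ATMAD's detection step applies a locally adaptive threshold in the wavelet domain; as a much cruder stand-in that conveys the "locally adaptive" idea, each image tile can be binarized against its own mean (the function name, tile size and offset are illustrative assumptions, not the paper's method):

```python
import numpy as np

def tile_adaptive_threshold(img, tile=32, offset=0.0):
    """Binarize each tile of the image against that tile's own mean
    (plus an offset), so that uneven illumination across a large TMA
    scan does not defeat a single global threshold."""
    img = np.asarray(img, dtype=float)
    out = np.zeros(img.shape, dtype=bool)
    for i in range(0, img.shape[0], tile):
        for j in range(0, img.shape[1], tile):
            patch = img[i:i + tile, j:j + tile]
            out[i:i + tile, j:j + tile] = patch > patch.mean() + offset
    return out
```

Detections produced this way would then feed the shape-model segmentation and thin-plate grid matching described above.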

  8. New Mexico High School Supercomputing Challenge, 1990--1995: Five years of making a difference to students, teachers, schools, and communities. Progress report

    Energy Technology Data Exchange (ETDEWEB)

    Foster, M.; Kratzer, D.

    1996-02-01

    The New Mexico High School Supercomputing Challenge is an academic program dedicated to increasing interest in science and math among high school students by introducing them to high performance computing. This report provides a summary and evaluation of the first five years of the program, describes the program and shows the impact that it has had on high school students, their teachers, and their communities. Goals and objectives are reviewed and evaluated, growth and development of the program are analyzed, and future directions are discussed.

  9. Micropatterned arrays of porous silicon: toward sensory biointerfaces.

    Science.gov (United States)

    Flavel, Benjamin S; Sweetman, Martin J; Shearer, Cameron J; Shapter, Joseph G; Voelcker, Nicolas H

    2011-07-01

    We describe the fabrication of arrays of porous silicon spots by means of photolithography where a positive photoresist serves as a mask during the anodization process. In particular, photoluminescent arrays and porous silicon spots suitable for further chemical modification and the attachment of human cells were created. The produced arrays of porous silicon were chemically modified by means of a thermal hydrosilylation reaction that facilitated immobilization of the fluorescent dye lissamine, and alternatively, the cell adhesion peptide arginine-glycine-aspartic acid-serine. The latter modification enabled the selective attachment of human lens epithelial cells on the peptide functionalized regions of the patterns. This type of surface patterning, using etched porous silicon arrays functionalized with biological recognition elements, presents a new format of interfacing porous silicon with mammalian cells. Porous silicon arrays with photoluminescent properties produced by this patterning strategy also have potential applications as platforms for in situ monitoring of cell behavior.

  10. Parallel simulation of tsunami inundation on a large-scale supercomputer

    Science.gov (United States)

    Oishi, Y.; Imamura, F.; Sugawara, D.

    2013-12-01

    An accurate prediction of tsunami inundation is important for disaster mitigation purposes. One approach is to approximate the tsunami wave source through an instant inversion analysis using real-time observation data (e.g., Tsushima et al., 2009) and then use the resulting wave source data in an instant tsunami inundation simulation. However, a bottleneck of this approach is the large computational cost of the non-linear inundation simulation and the computational power of recent massively parallel supercomputers is helpful to enable faster than real-time execution of a tsunami inundation simulation. Parallel computers have become approximately 1000 times faster in 10 years (www.top500.org), and so it is expected that very fast parallel computers will be more and more prevalent in the near future. Therefore, it is important to investigate how to efficiently conduct a tsunami simulation on parallel computers. In this study, we are targeting very fast tsunami inundation simulations on the K computer, currently the fastest Japanese supercomputer, which has a theoretical peak performance of 11.2 PFLOPS. One computing node of the K computer consists of 1 CPU with 8 cores that share memory, and the nodes are connected through a high-performance torus-mesh network. The K computer is designed for distributed-memory parallel computation, so we have developed a parallel tsunami model. Our model is based on TUNAMI-N2 model of Tohoku University, which is based on a leap-frog finite difference method. A grid nesting scheme is employed to apply high-resolution grids only at the coastal regions. To balance the computation load of each CPU in the parallelization, CPUs are first allocated to each nested layer in proportion to the number of grid points of the nested layer. Using CPUs allocated to each layer, 1-D domain decomposition is performed on each layer. 
In the parallel computation, three types of communication are necessary: (1) communication to adjacent neighbours for the
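
The load-balancing step described above, where CPUs are allocated to each nested layer in proportion to its number of grid points, can be sketched as a largest-remainder apportionment (the function name and the minimum of one CPU per layer are assumptions for illustration):

```python
def allocate_cpus(grid_points, total_cpus):
    """Apportion CPUs to nested grid layers in proportion to each
    layer's grid-point count, giving every layer at least one CPU and
    handing leftover CPUs to the largest fractional remainders."""
    total = sum(grid_points)
    shares = [g * total_cpus / total for g in grid_points]
    alloc = [max(1, int(s)) for s in shares]
    while sum(alloc) < total_cpus:
        # next CPU goes to the layer whose share is most under-served
        i = max(range(len(shares)), key=lambda k: shares[k] - alloc[k])
        alloc[i] += 1
    return alloc
```

Within each layer, the CPUs assigned this way are then arranged in a 1-D domain decomposition with halo exchange between neighbouring subdomains.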

  11. Developing barbed microtip-based electrode arrays for biopotential measurement.

    Science.gov (United States)

    Hsu, Li-Sheng; Tung, Shu-Wei; Kuo, Che-Hsi; Yang, Yao-Joe

    2014-07-10

    This study involved fabricating barbed microtip-based electrode arrays by using silicon wet etching. KOH anisotropic wet etching was employed to form a standard pyramidal microtip array and HF/HNO3 isotropic etching was used to fabricate barbs on these microtips. To improve the electrical conductance between the tip array on the front side of the wafer and the electrical contact on the back side, a through-silicon via was created during the wet etching process. The experimental results show that the forces required to detach the barbed microtip arrays from human skin, a polydimethylsiloxane (PDMS) polymer, and a polyvinylchloride (PVC) film were larger compared with those required to detach microtip arrays that lacked barbs. The impedances of the skin-electrode interface were measured and the performance levels of the proposed dry electrode were characterized. Electrode prototypes that employed the proposed tip arrays were implemented. Electroencephalogram (EEG) and electrocardiography (ECG) recordings using these electrode prototypes were also demonstrated.

  12. Developing Barbed Microtip-Based Electrode Arrays for Biopotential Measurement

    Directory of Open Access Journals (Sweden)

    Li-Sheng Hsu

    2014-07-01

    Full Text Available This study involved fabricating barbed microtip-based electrode arrays by using silicon wet etching. KOH anisotropic wet etching was employed to form a standard pyramidal microtip array and HF/HNO3 isotropic etching was used to fabricate barbs on these microtips. To improve the electrical conductance between the tip array on the front side of the wafer and the electrical contact on the back side, a through-silicon via was created during the wet etching process. The experimental results show that the forces required to detach the barbed microtip arrays from human skin, a polydimethylsiloxane (PDMS) polymer, and a polyvinylchloride (PVC) film were larger compared with those required to detach microtip arrays that lacked barbs. The impedances of the skin-electrode interface were measured and the performance levels of the proposed dry electrode were characterized. Electrode prototypes that employed the proposed tip arrays were implemented. Electroencephalogram (EEG) and electrocardiography (ECG) recordings using these electrode prototypes were also demonstrated.

  13. Sonochemically Fabricated Microelectrode Arrays for Use as Sensing Platforms

    Directory of Open Access Journals (Sweden)

    Stuart D. Collyer

    2010-05-01

    Full Text Available The development, manufacture, modification and subsequent utilisation of sonochemically formed microelectrode arrays is described for a range of applications. Initial fabrication of the sensing platform utilises ultrasonic ablation of electrochemically insulating polymers deposited upon conductive carbon substrates, forming an array of up to 70,000 microelectrode pores cm⁻². Electrochemical and optical analyses using these arrays, their enhanced signal response and their stir independence are all discussed. The growth of conducting polymeric “mushroom” protrusion arrays with entrapped biological entities, thereby forming biosensors, is detailed. The simplicity and inexpensiveness of this approach, lending itself ideally to mass fabrication, coupled with unrivalled sensitivity and stir independence, makes commercial viability of this process a reality. Applications of microelectrode arrays as functional components within sensors include devices for the detection of chlorine, glucose, ethanol and pesticides. Immunosensors based on microelectrode arrays are described within this monograph for antigens associated with prostate cancer and transient ischemic attacks (strokes).

  14. Imaging spectroscopy using embedded diffractive optical arrays

    Science.gov (United States)

    Hinnrichs, Michele; Hinnrichs, Bradford

    2017-09-01

    Pacific Advanced Technology (PAT) has developed an infrared hyperspectral camera based on diffractive optic arrays. This approach to hyperspectral imaging has been demonstrated in all three infrared bands: SWIR, MWIR and LWIR. The hyperspectral optical system has been integrated into the cold-shield of the sensor, enabling the small size and weight of this infrared hyperspectral sensor. This new and innovative approach to an infrared hyperspectral imaging spectrometer uses micro-optics made up of an area array of diffractive optical elements, where each element is tuned to image a different spectral region on a common focal plane array. The lenslet array is embedded in the cold-shield of the sensor and actuated with a miniature piezo-electric motor. This approach enables rapid infrared spectral imaging, with multiple spectral images collected and processed simultaneously in each frame of the camera. This paper presents our optical-mechanical design approach, which results in an infrared hyperspectral imaging system small enough for a payload on a small satellite, mini-UAV or commercial quadcopter, or for man-portable use. An application in which this spectral imaging technology is used to quantify the mass and volume flow rates of hydrocarbon gases is also presented. The diffractive optical elements used in the lenslet array are blazed gratings, where each lenslet is tuned for a different spectral bandpass. The lenslets are configured in an area array placed a few millimeters above the focal plane and embedded in the cold-shield to reduce the background signal normally associated with the optics. The detector array is divided into sub-images covered by each lenslet. We have developed various systems using different numbers of lenslets in the area array. The size of the focal plane and the diameter of the lenslet array determine the number of different spectral images collected simultaneously in each frame of the camera. 
A 2 x 2 lenslet array will image

  15. Maximum Constrained Directivity of Oversteered End-Fire Sensor Arrays

    Directory of Open Access Journals (Sweden)

    Andrea Trucco

    2015-06-01

    Full Text Available For linear arrays with fixed steering and an inter-element spacing smaller than one half of the wavelength, end-fire steering of a data-independent beamformer offers better directivity than broadside steering. The introduction of a lower bound on the white noise gain ensures the necessary robustness against random array errors and sensor mismatches. However, the optimum broadside performance can be obtained using a simple processing architecture, whereas the optimum end-fire performance requires a more complicated system (because complex weight coefficients are needed). In this paper, we reconsider the oversteering technique as a possible way to simplify the processing architecture of equally spaced end-fire arrays. We propose a method for computing the amount of oversteering and the related real-valued weight vector that allows the constrained directivity to be maximized for a given inter-element spacing. Moreover, we verify that the maximized oversteering performance is very close to the optimum end-fire performance. We conclude that optimized oversteering is a viable method for designing end-fire arrays that have better constrained directivity than broadside arrays but a similar implementation complexity. A numerical simulation is used to perform a statistical analysis, which confirms that the maximized oversteering performance is robust against sensor mismatches.
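
The oversteering idea can be seen in the array factor of an equally spaced line array written in terms of the direction cosine u = cos(θ): ordinary steering places the steering cosine within [−1, 1], while oversteering pushes it beyond the end-fire value of 1, so the main-lobe peak moves into invisible space and the visible beam at end-fire narrows. A sketch with real weights (illustrative only, not the authors' optimization):

```python
import numpy as np

def array_factor(weights, spacing_wl, u_steer, u):
    """|AF| of an equally spaced linear array with real weights.
    spacing_wl: inter-element spacing in wavelengths; u = cos(theta);
    u_steer > 1 corresponds to oversteering past end-fire."""
    n = np.arange(len(weights))
    psi = 2.0 * np.pi * spacing_wl * (u - u_steer)
    return np.abs(np.sum(weights * np.exp(1j * n * psi)))
```

Because only a real progressive phase (equivalently, a delay) plus real weights is needed, the processing architecture stays as simple as the broadside case.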

  16. An efficient method for evaluating RRAM crossbar array performance

    Science.gov (United States)

    Song, Lin; Zhang, Jinyu; Chen, An; Wu, Huaqiang; Qian, He; Yu, Zhiping

    2016-06-01

    An efficient method is proposed in this paper to mitigate the computational burden of resistive random access memory (RRAM) array simulation. Even in the worst-case scenario, the cost of simulating a 4 Mb RRAM array with line resistance is greatly reduced using this method. For 1S1R-RRAM array structures, static and statistical parameters in both the reading and writing processes are simulated. Error analysis is performed to prove the reliability of the algorithm when the line resistance is extremely small compared with the junction resistance. Results show that high precision is maintained even when the size of the simulated RRAM array is reduced by one thousand times, which indicates significant improvements in both computational efficiency and memory requirements.

  17. Parallelism and array processing

    International Nuclear Information System (INIS)

    Zacharov, V.

    1983-01-01

    Modern computing, as well as the historical development of computing, has been dominated by sequential monoprocessing. Yet there is the alternative of parallelism, where several processes may be in concurrent execution. This alternative is discussed in a series of lectures, in which the main developments involving parallelism are considered, both from the standpoint of computing systems and from that of applications that can exploit such systems. The lectures discuss parallelism in a historical context and identify all the main aspects of concurrency in computation up to the present time. Consideration is also given to the important question of what use parallelism might be in the field of data processing. (orig.)

  18. Comparison of Computational and Experimental Microphone Array Results for an 18%-Scale Aircraft Model

    Science.gov (United States)

    Lockard, David P.; Humphreys, William M.; Khorrami, Mehdi R.; Fares, Ehab; Casalino, Damiano; Ravetta, Patricio A.

    2015-01-01

    An 18%-scale, semi-span model is used as a platform for examining the efficacy of microphone array processing using synthetic data from numerical simulations. Two hybrid RANS/LES codes coupled with Ffowcs Williams-Hawkings solvers are used to calculate 97 microphone signals at the locations of an array employed in the NASA LaRC 14x22 tunnel. Conventional, DAMAS, and CLEAN-SC array processing is applied in an identical fashion to the experimental and computational results for three different configurations involving deploying and retracting the main landing gear and a part span flap. Despite the short time records of the numerical signals, the beamform maps are able to isolate the noise sources, and the appearance of the DAMAS synthetic array maps is generally better than those from the experimental data. The experimental CLEAN-SC maps are similar in quality to those from the simulations indicating that CLEAN-SC may have less sensitivity to background noise. The spectrum obtained from DAMAS processing of synthetic array data is nearly identical to the spectrum of the center microphone of the array, indicating that for this problem array processing of synthetic data does not improve spectral comparisons with experiment. However, the beamform maps do provide an additional means of comparison that can reveal differences that cannot be ascertained from spectra alone.
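
Conventional beamforming, the baseline for the DAMAS and CLEAN-SC deconvolution methods compared above, evaluates the array output power at each grid point from the microphone cross-spectral matrix. A simplified frequency-domain sketch (free-field monopole steering vector, no shading or diagonal removal; the function name is an assumption):

```python
import numpy as np

def conventional_beamform(csm, mic_pos, grid_pt, freq, c=343.0):
    """Conventional (delay-and-sum) frequency-domain beamforming:
    output power at one steering grid point, given the microphone
    cross-spectral matrix (CSM) at frequency `freq`."""
    r = np.linalg.norm(mic_pos - grid_pt, axis=1)  # mic-source distances
    k = 2.0 * np.pi * freq / c                     # acoustic wavenumber
    v = np.exp(-1j * k * r) / len(r)               # steering vector
    return np.real(np.conj(v) @ csm @ v)           # beamformer power
```

Scanning this over a grid yields the beamform map; DAMAS and CLEAN-SC then deconvolve the array point-spread function from that map.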

  19. Micromolding for ceramic microneedle arrays

    NARCIS (Netherlands)

    van Nieuwkasteele-Bystrova, Svetlana Nikolajevna; Lüttge, Regina

    2011-01-01

    The fabrication process of ceramic microneedle arrays (MNAs) is presented. This includes the manufacturing of an SU-8/Si-master, its double replication resulting in a PDMS mold for production by micromolding and ceramic sintering. The robustness of the replicated structures was tested by means of

  20. A novel method of microneedle array fabrication using inclined deep x-ray exposure

    International Nuclear Information System (INIS)

    Moon, Sang Jun; Jin, Chun Yan; Lee, Seung S

    2006-01-01

    We report a novel fabrication method for a microneedle array with three-dimensional features, together with its replication by a 'hot-pressing' process using the bio-compatible material PLLA (poly-L-lactide). Using an inclined deep x-ray exposure technique, we fabricate a band-type microneedle array as a single body on a common material base. Since the single-body feature does not cause adhesion problems between the microneedle shanks and the base during the peel-off step, the PMMA (poly(methyl methacrylate)) microneedle array mold insert can be used in a molding process normally reserved for a soft mold material such as PDMS (poly(dimethylsiloxane)). The inclined side exposure also produces complex three-dimensional features in the regions left unexposed during the two successive exposure steps; moreover, the second exposure requires no additional mask alignment after the first. The fabricated band-type mold inserts are assembled into a large-area, patch-type, out-of-plane microneedle array, so the bio-compatible microneedle array can be brought to laboratory-scale mass production with the single-body PMMA mold insert and the 'hot-pressing' process.

  1. A novel method of microneedle array fabrication using inclined deep x-ray exposure

    Energy Technology Data Exchange (ETDEWEB)

    Moon, Sang Jun; Jin, Chun Yan; Lee, Seung S [Department of Mechanical Engineering, Korea Advanced Institute of Science and Technology (KAIST), 373-1, Guseong-dong, Yuseong-dong, Daejeon (Korea, Republic of)

    2006-04-01

    We report a novel fabrication method for a microneedle array with three-dimensional features, together with its replication by a 'hot-pressing' process using the bio-compatible material PLLA (poly-L-lactide). Using an inclined deep x-ray exposure technique, we fabricate a band-type microneedle array as a single body on a common material base. Since the single-body feature does not cause adhesion problems between the microneedle shanks and the base during the peel-off step, the PMMA (poly(methyl methacrylate)) microneedle array mold insert can be used in a molding process normally reserved for a soft mold material such as PDMS (poly(dimethylsiloxane)). The inclined side exposure also produces complex three-dimensional features in the regions left unexposed during the two successive exposure steps; moreover, the second exposure requires no additional mask alignment after the first. The fabricated band-type mold inserts are assembled into a large-area, patch-type, out-of-plane microneedle array, so the bio-compatible microneedle array can be brought to laboratory-scale mass production with the single-body PMMA mold insert and the 'hot-pressing' process.

  2. Solving “Antenna Array Thinning Problem” Using Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Rajashree Jain

    2012-01-01

    Full Text Available Thinning involves reducing the total number of active elements in an antenna array without causing major degradation in system performance. Dynamic thinning is the process of achieving this under real-time conditions. The task is to find a strategic subset of antenna elements to deactivate so that the array retains near-optimum performance. From a mathematical perspective this is a nonlinear, multidimensional problem with multiple objectives and many constraints; a solution cannot be obtained by classical analytical techniques and requires a search algorithm that can reach a practical solution in an optimal manner. The present paper discusses an approach to array thinning using a genetic algorithm. After discussing the basic concepts of antenna arrays, array thinning, dynamic thinning, and the application methodology, simulation results of applying the technique to linear and planar arrays are presented.
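
    The genetic-algorithm formulation can be sketched as a binary optimization: each chromosome switches elements of a linear array on or off, and the fitness is the peak sidelobe level of the resulting array factor. The following is a minimal elitist GA under assumed parameters (32 half-wavelength-spaced elements, crude main-lobe exclusion), not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 32                          # elements at half-wavelength spacing
u = np.linspace(-1, 1, 721)     # u = sin(theta)
n = np.arange(N)
E = np.exp(1j * np.pi * np.outer(n, u))   # element phase matrix, computed once

def peak_sidelobe_db(chrom):
    """Peak sidelobe level (dB) of the array factor for a 0/1 thinning vector."""
    af = np.abs(chrom @ E)
    af /= af.max() + 1e-12
    main = np.abs(u) < 2.0 / N            # crude main-lobe exclusion region
    return 20 * np.log10(af[~main].max() + 1e-12)

def ga(pop_size=40, gens=60, p_mut=0.02):
    pop = rng.integers(0, 2, size=(pop_size, N))
    pop[:, [0, -1]] = 1                   # keep edge elements active (fixed aperture)
    for _ in range(gens):
        fit = np.array([peak_sidelobe_db(c) for c in pop])   # lower is better
        elite = pop[np.argsort(fit)[: pop_size // 2]]
        # single-point crossover among the elite half, then bit-flip mutation
        parents = elite[rng.integers(0, len(elite), size=(pop_size, 2))]
        cut = rng.integers(1, N, size=pop_size)
        child = np.where(np.arange(N) < cut[:, None], parents[:, 0], parents[:, 1])
        mutate = rng.random(child.shape) < p_mut
        pop = np.where(mutate, 1 - child, child)
        pop[:, [0, -1]] = 1
        pop[0] = elite[0]                 # elitism: best survives unchanged
    fit = np.array([peak_sidelobe_db(c) for c in pop])
    return pop[np.argmin(fit)], fit.min()

best, psl = ga()
print(best.sum(), "active elements, peak sidelobe", round(psl, 1), "dB")
```

    The fitness could equally encode multiple objectives (sidelobe level, number of active elements, beamwidth) as a weighted sum, which is one common way the multi-objective aspect mentioned above is handled.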

  3. Estimation of surface impedance using different types of microphone arrays

    DEFF Research Database (Denmark)

    Richard, Antoine Philippe André; Fernandez Grande, Efren; Brunskog, Jonas

    2017-01-01

    This study investigates microphone array methods to measure the angle-dependent surface impedance of acoustic materials. The methods are based on the reconstruction of the sound field on the surface of the material, using a wave expansion formulation. The reconstruction of both the pressure...... and the particle velocity leads to an estimation of the surface impedance for a given angle of incidence. A porous-type absorber sample is tested experimentally in anechoic conditions for different array geometries, sample sizes, incidence angles, and distances between the array and the sample. In particular......, the performances of a rigid spherical array and a double-layer planar array are examined. The use of sparse array processing methods and conventional regularization approaches is studied. In addition, the influence of the size of the sample on the surface impedance estimation is investigated using both...

  4. A criticality safety analysis code using a vectorized Monte Carlo method on the HITAC S-810 supercomputer

    International Nuclear Information System (INIS)

    Morimoto, Y.; Maruyama, H.

    1987-01-01

    A vectorized Monte Carlo criticality safety analysis code has been developed on the vector supercomputer HITAC S-810. In this code, a multi-particle tracking algorithm was adopted for effective utilization of the vector processor. A flight analysis with pseudo-scattering was developed to reduce the computational time needed for flight analysis, which represents the bulk of the computational time; this new algorithm realized a speed-up by a factor of 1.5 over the conventional flight analysis. The code also adopted a multigroup cross-section constants library of the Bondarenko type with 190 groups: 132 for the fast and epithermal regions and 58 for the thermal region. Evaluation work showed that this code reproduces the experimental results to an accuracy of about 1% for the effective neutron multiplication factor. (author)
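
    The two ideas named above, multi-particle tracking and flight analysis with pseudo-scattering (often called Woodcock or delta tracking), can be illustrated with a toy vectorized sketch: a whole batch of particles samples flight distances against a single majorant cross section, and collisions are accepted as real with probability sigma_t/sigma_maj. All geometry and cross-section values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def sigma_t(x):
    """Total cross section (1/cm) in a hypothetical two-region slab."""
    return np.where(x < 5.0, 0.4, 1.0)

sigma_maj = 1.0                       # majorant over the whole geometry
n = 100_000
x = np.zeros(n)                       # all particles start at the left face
alive = np.ones(n, dtype=bool)

collided = 0
while alive.any():
    idx = np.flatnonzero(alive)
    # sample flight distances for every live particle at once (vectorized)
    d = -np.log(rng.random(idx.size)) / sigma_maj
    x[idx] += d
    escaped = x[idx] >= 10.0          # slab thickness 10 cm
    alive[idx[escaped]] = False
    idx = idx[~escaped]
    # accept a real collision with probability sigma_t/sigma_maj;
    # otherwise the collision is virtual and the flight simply continues
    real = rng.random(idx.size) < sigma_t(x[idx]) / sigma_maj
    collided += real.sum()
    alive[idx[real]] = False          # toy model: absorb on first real collision

print(collided / n)                   # collision probability in the slab
```

    The point of the pseudo-scattering trick is that no particle ever needs to search for material boundaries along its flight path, which is what makes the flight analysis loop vectorize cleanly.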

  5. Coupling in reflector arrays

    DEFF Research Database (Denmark)

    Appel-Hansen, Jørgen

    1968-01-01

    In order to reduce the space occupied by a reflector array, it is desirable to arrange the array antennas as close to each other as possible; however, in this case coupling between the array antennas will reduce the reflecting properties of the reflector array. The purpose of the present communication...

  6. Benchmarking Further Single Board Computers for Building a Mini Supercomputer for Simulation of Telecommunication Systems

    Directory of Open Access Journals (Sweden)

    Gábor Lencse

    2016-01-01

    Full Text Available Parallel Discrete Event Simulation (PDES with the conservative synchronization method can be efficiently used for the performance analysis of telecommunication systems because of their good lookahead properties. For PDES, a cost effective execution platform may be built by using single board computers (SBCs, which offer relatively high computation capacity compared to their price or power consumption and especially to the space they take up. A benchmarking method is proposed and its operation is demonstrated by benchmarking ten different SBCs, namely Banana Pi, Beaglebone Black, Cubieboard2, Odroid-C1+, Odroid-U3+, Odroid-XU3 Lite, Orange Pi Plus, Radxa Rock Lite, Raspberry Pi Model B+, and Raspberry Pi 2 Model B+. Their benchmarking results are compared to find out which one should be used for building a mini supercomputer for parallel discrete-event simulation of telecommunication systems. The SBCs are also used to build a heterogeneous cluster and the performance of the cluster is tested, too.

  7. Solar cell array design handbook - The principles and technology of photovoltaic energy conversion

    Science.gov (United States)

    Rauschenbach, H. S.

    1980-01-01

    Photovoltaic solar cell array design and technology for ground-based and space applications are discussed from the user's point of view. Solar array systems are described, with attention given to array concepts, historical development, applications and performance, and the analysis of array characteristics, circuits, components, performance and reliability is examined. Aspects of solar cell array design considered include the design process, photovoltaic system and detailed array design, and the design of array thermal, radiation shielding and electromagnetic components. Attention is then given to the characteristics and design of the separate components of solar arrays, including the solar cells, optical elements and mechanical elements, and the fabrication, testing, environmental conditions and effects and material properties of arrays and their components are discussed.

  8. Ultrathin NbN film superconducting single-photon detector array

    International Nuclear Information System (INIS)

    Smirnov, K; Korneev, A; Minaeva, O; Divochiy, A; Tarkhov, M; Ryabchun, S; Seleznev, V; Kaurova, N; Voronov, B; Gol'tsman, G; Polonsky, S

    2007-01-01

    We report on the fabrication process of a 2 x 2 superconducting single-photon detector (SSPD) array. The SSPD array is made from ultrathin NbN film and is operated at liquid helium temperatures. Each detector is a nanowire-based structure patterned by an electron beam lithography process. Advances in fabrication technology allowed us to produce highly uniform strips and preserve the superconducting properties of the unpatterned film. The SSPDs exhibit up to 30% quantum efficiency in the near infrared and up to 1% at 5 μm wavelength. Owing to the 120 MHz counting rate and 18 ps jitter, a time-domain multiplexing read-out is proposed for large-scale SSPD arrays. The single-pixel SSPD has already found practical application in non-invasive testing of semiconductor very-large-scale integrated circuits, where it significantly outperformed traditional single-photon counting avalanche diodes.

  9. Optimizing laser beam profiles using micro-lens arrays for efficient material processing: applications to solar cells

    Science.gov (United States)

    Hauschild, Dirk; Homburg, Oliver; Mitra, Thomas; Ivanenko, Mikhail; Jarczynski, Manfred; Meinschien, Jens; Bayer, Andreas; Lissotschenko, Vitalij

    2009-02-01

    High-power laser sources are used in various production tools for microelectronic products and solar cells, in applications including annealing, lithography, edge isolation, and dicing and patterning. Besides the right choice of the laser source, suitable high-performance optics for generating the appropriate beam profile and intensity distribution are of great importance for the right processing speed, quality and yield. Equally important for industrial applications is an adequate understanding of the physics of the light-matter interaction behind the process. Simulations of the tool performance carried out in advance can minimize technical and financial risk as well as lead times for prototyping and introduction into series production. LIMO has developed its own software, founded on the Maxwell equations, that takes into account all important physical aspects of the laser-based process: the light source, the beam-shaping optical system and the light-matter interaction. Based on this knowledge, together with a unique free-form micro-lens array production technology and patented micro-optics beam-shaping designs, a number of novel solar cell production tool sub-systems have been built. The basic functionalities, design principles and performance results are presented with a special emphasis on resilience, cost reduction and process reliability.

  10. Fast and Accurate Simulation of the Cray XMT Multithreaded Supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Villa, Oreste; Tumeo, Antonino; Secchi, Simone; Manzano Franco, Joseph B.

    2012-12-31

    Irregular applications, such as data mining and analysis or graph-based computations, show unpredictable memory/network access patterns and control structures. Highly multithreaded architectures with large processor counts, like the Cray MTA-1, MTA-2 and XMT, appear to address their requirements better than commodity clusters. However, the research on highly multithreaded systems is currently limited by the lack of adequate architectural simulation infrastructures due to issues such as size of the machines, memory footprint, simulation speed, accuracy and customization. At the same time, Shared-memory MultiProcessors (SMPs) with multi-core processors have become an attractive platform to simulate large scale machines. In this paper, we introduce a cycle-level simulator of the highly multithreaded Cray XMT supercomputer. The simulator runs unmodified XMT applications. We discuss how we tackled the challenges posed by its development, detailing the techniques introduced to make the simulation as fast as possible while maintaining a high accuracy. By mapping XMT processors (ThreadStorm with 128 hardware threads) to host computing cores, the simulation speed remains constant as the number of simulated processors increases, up to the number of available host cores. The simulator supports zero-overhead switching among different accuracy levels at run-time and includes a network model that takes into account contention. On a modern 48-core SMP host, our infrastructure simulates a large set of irregular applications 500 to 2000 times slower than real time when compared to a 128-processor XMT, while remaining within 10% of accuracy. Emulation is only from 25 to 200 times slower than real time.

  11. A Platform for Manufacturable Stretchable Micro-electrode Arrays

    NARCIS (Netherlands)

    Khoshfetrat Pakazad, S.; Savov, A.; Braam, S.R.; Dekker, R.

    2012-01-01

    A platform for the batch fabrication of pneumatically actuated Stretchable Micro-Electrode Arrays (SMEAs) by using state-of-the-art micro-fabrication techniques and materials is demonstrated. The proposed fabrication process avoids the problems normally associated with processing of thin film

  12. The Kepler DB, a Database Management System for Arrays, Sparse Arrays and Binary Data

    Science.gov (United States)

    McCauliff, Sean; Cote, Miles T.; Girouard, Forrest R.; Middour, Christopher; Klaus, Todd C.; Wohler, Bill

    2010-01-01

    The Kepler Science Operations Center stores pixel values on approximately six million pixels collected every 30 minutes, as well as data products that are generated as a result of running the Kepler science processing pipeline. The Kepler Database (Kepler DB) management system was created to act as the repository of this information. After one year of flight usage, Kepler DB is managing 3 TiB of data and is expected to grow to over 10 TiB over the course of the mission. Kepler DB is a non-relational, transactional database where data are represented as one-dimensional arrays, sparse arrays or binary large objects. We will discuss Kepler DB's APIs, implementation, usage and deployment at the Kepler Science Operations Center.

  13. The Kepler DB: a database management system for arrays, sparse arrays, and binary data

    Science.gov (United States)

    McCauliff, Sean; Cote, Miles T.; Girouard, Forrest R.; Middour, Christopher; Klaus, Todd C.; Wohler, Bill

    2010-07-01

    The Kepler Science Operations Center stores pixel values on approximately six million pixels collected every 30 minutes, as well as data products that are generated as a result of running the Kepler science processing pipeline. The Kepler Database management system (Kepler DB) was created to act as the repository of this information. After one year of flight usage, Kepler DB is managing 3 TiB of data and is expected to grow to over 10 TiB over the course of the mission. Kepler DB is a non-relational, transactional database where data are represented as one-dimensional arrays, sparse arrays or binary large objects. We will discuss Kepler DB's APIs, implementation, usage and deployment at the Kepler Science Operations Center.

  14. Aleutian Array of Arrays (A-cubed) to probe a broad spectrum of fault slip under the Aleutian Islands

    Science.gov (United States)

    Ghosh, A.; LI, B.

    2016-12-01

    The Alaska-Aleutian subduction zone is one of the most seismically active subduction zones on the planet. It is characterized by remarkable along-strike variations in seismic behavior and more than 50 active volcanoes, and it presents a unique opportunity to serve as a natural laboratory for studying subduction zone processes, including fault dynamics. Yet details of the seismicity pattern, the spatiotemporal distribution of slow earthquakes, the nature of the interaction between slow and fast earthquakes, and their implications for tectonic behavior remain unknown. We use a hybrid seismic network approach, installing 3 mini seismic arrays and 5 stand-alone stations to simultaneously image the subduction fault and a nearby volcanic system (Makushin). The arrays and stations are strategically located on Unalaska Island, where prolific tremor activity was detected and located by a solo pilot array in summer 2012. The hybrid network operated in continuous mode between summer 2015 and 2016; one of the three arrays started in summer 2014 and provides additional data covering a longer time span. The pilot array on Akutan Island recorded continuous seismic data for 2 months. An automatic beam-backprojection analysis detects almost daily tremor activity, with an average of more than an hour per day. We imaged two active sources separated by a tremor gap. The western source, right under Unalaska Island, shows the most prolific activity with a hint of steady migration. In addition, we are able to identify more than 10 families of low-frequency earthquakes (LFEs) in this area. They are located within the tremor source area as imaged by the beam-backprojection technique. Application of a matched-filter technique reveals that intervals between LFE activities are shorter during tremor activity and longer during quiet time periods. We expect to present new results from freshly obtained data. The A-cubed experiment is illuminating subduction zone processes under Unalaska Island in unprecedented detail

  15. Low-cost solar array progress and plans

    Science.gov (United States)

    Callaghan, W. T.

    It is pointed out that significant redirection has occurred in the U.S. Department of Energy (DOE) Photovoltaics Program, and thus in the Flat-Plate Solar Array Project (FSA), since the 3rd European Communities Conference. The Silicon Materials Task now has the objective of sponsoring theoretical and experimental research on silicon material refinement technology suitable for photovoltaic flat-plate solar arrays. With respect to the hydrochlorination reaction, a process proof of concept was completed through definition of the reaction kinetics, catalyst, and reaction characteristics. In connection with the dichlorosilane chemical vapor deposition process, a preliminary design was completed of an experimental process system development unit with a capacity of 100 to 200 MT/yr of Si. Attention is also given to the silicon-sheet formation research area, environmental isolation research, the cell and module formation task, the engineering sciences area, and the module performance and failure analysis area.

  16. Concurrent array-based queue

    Science.gov (United States)

    Heidelberger, Philip; Steinmacher-Burow, Burkhard

    2015-01-06

    According to one embodiment, a method for implementing an array-based queue in memory of a memory system that includes a controller includes configuring, in the memory, metadata of the array-based queue. The configuring comprises defining, in metadata, an array start location in the memory for the array-based queue, defining, in the metadata, an array size for the array-based queue, defining, in the metadata, a queue top for the array-based queue and defining, in the metadata, a queue bottom for the array-based queue. The method also includes the controller serving a request for an operation on the queue, the request providing the location in the memory of the metadata of the queue.
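
    A minimal sketch of such an array-based queue, with a metadata record holding the four fields named in the claim (array start location, array size, queue top, queue bottom). The host-language representation and names are illustrative, not taken from the patent:

```python
# The queue lives inside a flat "memory" array; all bookkeeping is kept
# in an explicit metadata record, as in the embodiment described above.
class ArrayQueue:
    def __init__(self, memory, start, size):
        self.memory = memory
        self.meta = {"start": start, "size": size, "top": 0, "bottom": 0}

    def enqueue(self, value):
        m = self.meta
        if m["bottom"] - m["top"] == m["size"]:
            raise OverflowError("queue full")
        # top/bottom are monotonically increasing counters; the modulo
        # maps them into the fixed array region [start, start + size)
        self.memory[m["start"] + m["bottom"] % m["size"]] = value
        m["bottom"] += 1

    def dequeue(self):
        m = self.meta
        if m["top"] == m["bottom"]:
            raise IndexError("queue empty")
        value = self.memory[m["start"] + m["top"] % m["size"]]
        m["top"] += 1
        return value

memory = [None] * 16
q = ArrayQueue(memory, start=4, size=8)
for v in (1, 2, 3):
    q.enqueue(v)
print(q.dequeue(), q.dequeue())   # → 1 2
```

    Keeping top and bottom as free-running counters (rather than wrapped indices) makes the full/empty tests a simple subtraction, which is one common design choice for controller-served queues of this kind.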

  17. The Argonne silicon strip-detector array

    Energy Technology Data Exchange (ETDEWEB)

    Wuosmaa, A H; Back, B B; Betts, R R; Freer, M; Gehring, J; Glagola, B G; Happ, Th; Henderson, D J; Wilt, P [Argonne National Lab., IL (United States); Bearden, I G [Purdue Univ., Lafayette, IN (United States). Dept. of Physics

    1992-08-01

    Many nuclear physics experiments require the ability to analyze events in which large numbers of charged particles are detected and identified simultaneously, with good resolution and high efficiency, either alone or in coincidence with gamma rays. The authors have constructed a compact large-area detector array to measure these processes efficiently and with excellent energy resolution. The array consists of four double-sided silicon strip detectors, each 5x5 cm² in area, with front and back sides divided into 16 strips. To exploit the capability of the device fully, a system to read each strip-detector segment has been designed and constructed, based around a custom-built multi-channel preamplifier. The remainder of the system consists of high-density CAMAC modules, including multi-channel discriminators, charge-sensing analog-to-digital converters, and time-to-digital converters. The array's performance has been evaluated using alpha-particle sources, and in a number of experiments conducted at Argonne and elsewhere. Energy resolutions of ΔE ≈ 20-30 keV have been observed for 5 to 8 MeV alpha particles, as well as time resolutions ΔT ≤ 500 ps. 4 figs.

  18. A hollow stainless steel microneedle array to deliver insulin to a diabetic rat

    International Nuclear Information System (INIS)

    Vinayakumar, K B; Rajanna, K; Kulkarni, Prachit G; Ramachandra, S G; Nayak, M M; Hegde, Gopalkrishna M; Dinesh, N S

    2016-01-01

    A novel fabrication process is described for the development of a hollow stainless steel microneedle array using femtosecond laser micromachining. Using this method, a complicated microstructure can be fabricated in a single process step without using masks. The mechanical stability of the fabricated microneedle array was measured for axial and transverse loading. Skin histology was carried out to study microneedle penetration into rat skin. Fluid flow through the microneedle array was studied for different inlet pressures. The packaging of the microneedle array, to protect the microneedle bores from blockage by dust and other atmospheric contamination, was also considered. Finally, the microneedle array was tested in vivo for insulin delivery to a diabetic rat. The results obtained were compared with standard subcutaneous delivery at the same dose rate and were found to be in good agreement. (paper)

  19. A hollow stainless steel microneedle array to deliver insulin to a diabetic rat

    Science.gov (United States)

    Vinayakumar, K. B.; Kulkarni, Prachit G.; Nayak, M. M.; Dinesh, N. S.; Hegde, Gopalkrishna M.; Ramachandra, S. G.; Rajanna, K.

    2016-06-01

    A novel fabrication process is described for the development of a hollow stainless steel microneedle array using femtosecond laser micromachining. Using this method, a complicated microstructure can be fabricated in a single process step without using masks. The mechanical stability of the fabricated microneedle array was measured for axial and transverse loading. Skin histology was carried out to study microneedle penetration into rat skin. Fluid flow through the microneedle array was studied for different inlet pressures. The packaging of the microneedle array, to protect the microneedle bores from blockage by dust and other atmospheric contamination, was also considered. Finally, the microneedle array was tested in vivo for insulin delivery to a diabetic rat. The results obtained were compared with standard subcutaneous delivery at the same dose rate and were found to be in good agreement.

  20. Supercomputers and parallel computation. Based on the proceedings of a workshop on progress in the use of vector and array processors organised by the Institute of Mathematics and its Applications and held in Bristol, 2-3 September 1982

    International Nuclear Information System (INIS)

    Paddon, D.J.

    1984-01-01

    This book is based on the proceedings of a conference on parallel computing held in 1982. There are 18 papers which cover the following topics: VLSI parallel architectures, the theory of parallel computing and vector and array processor computing. One paper on 'Tough Problems in Reactor Design' is indexed separately. All the contributions are on research done in the United Kingdom. Although much of the experience in array processor computing is associated with the ICL distributed array processor (DAP) and this is reflected in the contributions, the research relating to the ICL DAP is relevant to all types of array processors. (UK)

  1. Tests Of Array Of Flush Pressure Sensors

    Science.gov (United States)

    Larson, Larry J.; Moes, Timothy R.; Siemers, Paul M., III

    1992-01-01

    Report describes tests of an array of pressure sensors connected to small orifices flush with the surface of a 1/7-scale model of the F-14 airplane in a wind tunnel. Part of an effort to determine whether pressure parameters consisting of various sums, differences, and ratios of the measured pressures could be used to accurately compute free-stream values of stagnation pressure, static pressure, angle of attack, angle of sideslip, and Mach number. Such arrays of sensors and associated processing circuitry could be integrated into advanced aircraft as parts of flight-monitoring and -control systems.

  2. APRON: A Cellular Processor Array Simulation and Hardware Design Tool

    Science.gov (United States)

    Barr, David R. W.; Dudek, Piotr

    2009-12-01

    We present a software environment for the efficient simulation of cellular processor arrays (CPAs). This software (APRON) is used to explore algorithms that are designed for massively parallel fine-grained processor arrays, topographic multilayer neural networks, vision chips with SIMD processor arrays, and related architectures. The software uses a highly optimised core combined with a flexible compiler to provide the user with tools for the design of new processor array hardware architectures and the emulation of existing devices. We present performance benchmarks for the software processor array implemented on standard commodity microprocessors. APRON can be configured to use additional processing hardware if necessary and can be used as a complete graphical user interface and development environment for new or existing CPA systems, allowing more users to develop algorithms for CPA systems.

  3. APRON: A Cellular Processor Array Simulation and Hardware Design Tool

    Directory of Open Access Journals (Sweden)

    David R. W. Barr

    2009-01-01

    Full Text Available We present a software environment for the efficient simulation of cellular processor arrays (CPAs. This software (APRON is used to explore algorithms that are designed for massively parallel fine-grained processor arrays, topographic multilayer neural networks, vision chips with SIMD processor arrays, and related architectures. The software uses a highly optimised core combined with a flexible compiler to provide the user with tools for the design of new processor array hardware architectures and the emulation of existing devices. We present performance benchmarks for the software processor array implemented on standard commodity microprocessors. APRON can be configured to use additional processing hardware if necessary and can be used as a complete graphical user interface and development environment for new or existing CPA systems, allowing more users to develop algorithms for CPA systems.

  4. High-Performance Flexible Force and Temperature Sensing Array with a Robust Structure

    Science.gov (United States)

    Kim, Min-Seok; Song, Han-Wook; Park, Yon-Kyu

    We have developed a flexible tactile sensor array capable of sensing physical quantities such as force and temperature with high performance and high spatial resolution. The fabricated tactile sensor consists of an 8 × 8 force-measuring array with 1 mm spacing and a thin metal (copper) temperature sensor. The flexible force-sensing array consists of a sub-millimetre-size bar-shaped semiconductor strain gage array attached to a thin and flexible printed circuit board covered by stretchable elastomeric material on both sides. This design incorporates the benefits of both materials, the semiconductor's high performance and the polymer's mechanical flexibility and robustness, while overcoming the drawbacks of each. Special fabrication processes, the so-called “dry-transfer technique”, have been used to fabricate the tactile sensor along with standard micro-fabrication processes.

  5. Ordered arrays of polymeric nanopores by using inverse nanostructured PTFE surfaces

    International Nuclear Information System (INIS)

    Martín, Jaime; Martín-González, Marisol; Del Campo, Adolfo; Reinosa, Julián J; Fernández, José Francisco

    2012-01-01

    We present a simple, efficient, and high-throughput methodology for the fabrication of ordered nanoporous polymeric surfaces with areas in the range of cm². The procedure is based on a two-stage replication of a master nanostructured pattern. The process starts with the preparation of an ordered array of poly(tetrafluoroethylene) (PTFE) free-standing nanopillars by wetting self-ordered porous anodic aluminum oxide templates with molten PTFE. The nanopillars are 120 nm in diameter and approximately 350 nm long, while the array extends over cm². The PTFE nanostructuring process induces surface hydrocarbonation of the nanopillars, as revealed by confocal Raman microscopy/spectroscopy, which enhances the wettability of the originally hydrophobic material and facilitates its subsequent use as an inverse pattern. Thus, the PTFE nanostructure is then used as a negative master for the fabrication of macroscopic hexagonal arrays of nanopores composed of biocompatible poly(vinyl alcohol). In this particular case, the nanopores are 130–140 nm in diameter and the interpore distance is around 430 nm. Features of such characteristic dimensions are known to be easily recognized by living cells. Moreover, the inverse mold is not destroyed in the pore-array demolding process and can be reused for further pore-array fabrication. Therefore, the developed method allows the high-throughput production of cm²-scale biocompatible nanoporous surfaces that could be interesting as two-dimensional scaffolds for tissue repair or wound healing. Moreover, our approach can be extrapolated to the fabrication of almost any polymer and biopolymer ordered pore array. (paper)

  6. A Hardware Accelerator for Fault Simulation Utilizing a Reconfigurable Array Architecture

    Directory of Open Access Journals (Sweden)

    Sungho Kang

    1996-01-01

    Full Text Available In order to reduce cost and achieve high speed, a new hardware accelerator for fault simulation has been designed. The architecture of the new accelerator is based on a reconfigurable mesh-type processing element (PE) array. Circuit elements at the same topological level are simulated concurrently, as in a pipelined process. A new parallel simulation algorithm expands all of the gates to two-input gates in order to limit the number of faults to two at each gate, so that the faults can be distributed uniformly throughout the PE array. The PE array reconfiguration operation provides a simulation speed advantage by maximizing the use of each PE cell.

  7. Capacitance of a highly ordered array of nanocapacitors: Model and microscopy

    Science.gov (United States)

    Cortés, A.; Celedón, C.; Ulloa, P.; Kepaptsoglou, D.; Häberle, P.

    2011-11-01

    This manuscript briefly describes the process used to build an ordered porous array in an anodic aluminum oxide (AAO) membrane, filled with multiwall carbon nanotubes (MWCNTs). The MWCNTs were grown directly inside the membrane through chemical vapor deposition (CVD). The role of the CNTs is to provide narrow metal electrodes in contact with a dielectric surface barrier, hence forming a capacitor. This procedure allows the construction of an array of 10^10 parallel nano-spherical capacitors/cm². A central part of this contribution is the use of physical parameters obtained from processing transmission electron microscopy (TEM) images to predict the specific capacitance of the AAO arrays. Electrical parameters were obtained by solving Laplace's equation through finite element methods (FEMs).
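The record predicts capacitance by solving Laplace's equation with FEM. As a rough order-of-magnitude cross-check, each nano-spherical element can be approximated by the closed-form concentric-sphere capacitor; the radii, relative permittivity, and element density below are illustrative assumptions, not values from the paper:

```python
import math

# Back-of-the-envelope cross-check for the FEM result: approximate each
# element as a concentric spherical capacitor. Radii, relative
# permittivity, and element density are illustrative assumptions.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def sphere_capacitance(r_inner, r_outer, eps_r):
    """C = 4*pi*eps0*eps_r / (1/r_inner - 1/r_outer), in farads."""
    return 4 * math.pi * EPS0 * eps_r / (1.0 / r_inner - 1.0 / r_outer)

c_element = sphere_capacitance(20e-9, 25e-9, eps_r=9.0)  # alumina-like shell
c_per_cm2 = c_element * 1e10  # ~10^10 elements per cm^2
```

For these assumed radii the element capacitance lands around 0.1 fF, so the array-level specific capacitance is set almost entirely by the element density.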

  8. Three-dimensional digital imaging based on shifted point-array encoding.

    Science.gov (United States)

    Tian, Jindong; Peng, Xiang

    2005-09-10

    An approach to three-dimensional (3D) imaging based on shifted point-array encoding is presented. A point-array structured light pattern is projected sequentially onto the reference plane and onto the object surface under test, forming a pair of point-array images. A mathematical model is established to formulate the imaging process with the pair of point arrays. This formulation allows for a description of the relationship between the range image of the object surface and the lateral displacement of each point in the point-array image. Based on this model, one can reconstruct each 3D range image point by computing the lateral displacement of the corresponding point in the two point-array images. The encoded point array can be shifted digitally along both the lateral and the longitudinal directions step by step to achieve high spatial resolution. Experimental results show good agreement with the theoretical predictions. This method is applicable for implementing 3D imaging of object surfaces with complex topology or large height discontinuities.
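The core of the method is the mapping from each point's lateral displacement to range. A minimal sketch under a classic crossed-optical-axes triangulation model (a simplification of the paper's full formulation; the baseline and reference-distance values are illustrative assumptions):

```python
# Hedged sketch: range from lateral point displacement under a simple
# crossed-optical-axes triangulation model. `baseline` (projector-camera
# separation) and `ref_distance` (camera to reference plane) are
# illustrative assumptions.

def height_from_displacement(dx, baseline, ref_distance):
    """Height of a surface point above the reference plane, given the
    lateral shift dx of its encoded point between the reference image
    and the object image: h = L * dx / (dx + b)."""
    return ref_distance * dx / (dx + baseline)

# One measured displacement per detected point in the array (mm):
shifts = [0.0, 0.8, 1.2, 2.5]
heights = [height_from_displacement(dx, baseline=200.0, ref_distance=1000.0)
           for dx in shifts]
```

Zero displacement maps to zero height, and larger shifts map monotonically to larger heights, which is why per-point displacement measurement suffices to reconstruct the range image.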

  9. Large-area gold nanohole arrays fabricated by one-step method for surface plasmon resonance biochemical sensing.

    Science.gov (United States)

    Qi, Huijie; Niu, Lihong; Zhang, Jie; Chen, Jian; Wang, Shujie; Yang, Jingjing; Guo, Siyi; Lawson, Tom; Shi, Bingyang; Song, Chunpeng

    2018-04-01

    Surface plasmon resonance (SPR) nanosensors based on metallic nanohole arrays have been widely reported to detect binding interactions in biological specimens. A simple and effective method for constructing nanoscale arrays is essential for the development of SPR nanosensors. In this work, we report a one-step method to fabricate nanohole arrays by thermal nanoimprinting in the matrix of an IPS (Intermediate Polymer Stamp). No additional etching process or supporting substrate is required. The preparation process is simple, time-saving, and compatible with roll-to-roll processing, potentially allowing mass production. Moreover, the nanohole arrays were integrated into a detection platform as SPR sensors to investigate different types of biological binding interactions. The results demonstrate that our one-step method can be used to efficiently fabricate large-area, uniform nanohole arrays for biochemical sensing.

  10. A low-power small-area ADC array for IRFPA readout

    Science.gov (United States)

    Zhong, Shengyou; Yao, Libin

    2013-09-01

    The readout integrated circuit (ROIC) is a bridge between the infrared focal plane array (IRFPA) and the image processing circuit in an infrared imaging system. The ROIC is the first part of the signal processing chain and is connected directly to the detectors, so its performance greatly affects the detector and even the whole imaging system. With the development of CMOS technologies, it is possible to digitize the signal inside the ROIC and develop a digital ROIC. A digital ROIC can reduce the complexity of the whole system and improve system reliability. More importantly, it can accommodate a variety of digital signal processing techniques that a traditional analog ROIC cannot. The analog-to-digital converter (ADC) is the most important building block in the digital ROIC. The requirements for ADCs inside the ROIC are low power, high dynamic range, and small area. In this paper we propose an RC hybrid Successive Approximation Register (SAR) ADC as the column ADC for a digital ROIC. In our proposed ADC structure, a resistor ladder is used to generate several reference voltages. The proposed RC hybrid structure not only reduces the area of the capacitor array but also relaxes the capacitor array matching requirement. Theoretical analysis and simulation show that the RC hybrid SAR ADC is suitable for ADC array applications.
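The SAR conversion loop itself is a simple binary search over the DAC code, which is what makes it compact enough for per-column layouts. A behavioral sketch of an ideal N-bit conversion (not the paper's RC hybrid circuit; the ideal comparator and names are assumptions):

```python
# Behavioral sketch of an ideal N-bit successive-approximation ADC.
# This models only the binary-search algorithm, not the paper's RC
# hybrid DAC; `vin`/`vref` and the ideal comparator are assumptions.

def sar_convert(vin, vref, bits=10):
    """Trial each bit from MSB to LSB, keeping it whenever the DAC
    voltage implied by the trial code does not exceed the input."""
    code = 0
    for b in reversed(range(bits)):
        trial = code | (1 << b)
        if trial * vref / (1 << bits) <= vin:  # comparator decision
            code = trial
    return code

code = sar_convert(0.5, 1.0, bits=10)  # mid-scale input -> code 512
```

Each bit costs one comparator cycle, so an N-bit conversion completes in N cycles regardless of the input value.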

  11. Anodic Aluminum Oxide Templates for Nanowire Array Fabrication

    International Nuclear Information System (INIS)

    Nur Ubaidah Saidin; Kok, K.Y.; Ng, I.K.

    2011-01-01

    This paper reports on the process developed to fabricate anodic aluminium oxide (AAO) templates suitable for the fabrication of nanowire arrays. An anodization process has been used to fabricate the AAO templates with pore diameters ranging from 15 nm to 30 nm. Electrodeposition of parallel arrays of high-aspect-ratio nickel nanowires was demonstrated using these fabricated AAO templates. The nanowires produced were characterized using X-ray diffraction (XRD) and scanning electron microscopy (SEM). It was found that the orientations of the electrodeposited nickel nanowires were governed by the deposition current and electrolyte conditions. (author)

  12. Silica needle template fabrication of metal hollow microneedle arrays

    International Nuclear Information System (INIS)

    Zhu, M W; Li, H W; Chen, X L; Tang, Y F; Lu, M H; Chen, Y F

    2009-01-01

    Drug delivery through hollow microneedle (HMN) arrays has now been recognized as one of the most promising techniques because it minimizes the shortcomings of traditional drug delivery methods and has many exciting advantages—pain-free delivery and tunable release rates, for example. However, this drug delivery method has been greatly hindered from mass clinical application by the high fabrication cost of HMN arrays. Hence, we developed a simple and cost-effective procedure using silica needles as templates to mass-fabricate HMN arrays using popular materials and the industrially applicable processes of micro-imprint, hot embossing, electroplating and polishing. Metal HMN arrays of high quality are prepared with great flexibility, with tunable parameters of area, needle length, hollow size and array dimension. This efficient and cost-effective fabrication method can also be applied, after minor alterations, to other applications such as the preparation of optic, acoustic and solar harvesting materials and devices.

  13. Silica needle template fabrication of metal hollow microneedle arrays

    Science.gov (United States)

    Zhu, M. W.; Li, H. W.; Chen, X. L.; Tang, Y. F.; Lu, M. H.; Chen, Y. F.

    2009-11-01

    Drug delivery through hollow microneedle (HMN) arrays has now been recognized as one of the most promising techniques because it minimizes the shortcomings of traditional drug delivery methods and has many exciting advantages—pain-free delivery and tunable release rates, for example. However, this drug delivery method has been greatly hindered from mass clinical application by the high fabrication cost of HMN arrays. Hence, we developed a simple and cost-effective procedure using silica needles as templates to mass-fabricate HMN arrays using popular materials and the industrially applicable processes of micro-imprint, hot embossing, electroplating and polishing. Metal HMN arrays of high quality are prepared with great flexibility, with tunable parameters of area, needle length, hollow size and array dimension. This efficient and cost-effective fabrication method can also be applied, after minor alterations, to other applications such as the preparation of optic, acoustic and solar harvesting materials and devices.

  14. Miniaturized Ultrasound Imaging Probes Enabled by CMUT Arrays with Integrated Frontend Electronic Circuits

    Science.gov (United States)

    Khuri-Yakub, B. (Pierre) T.; Oralkan, Ömer; Nikoozadeh, Amin; Wygant, Ira O.; Zhuang, Steve; Gencel, Mustafa; Choe, Jung Woo; Stephens, Douglas N.; de la Rama, Alan; Chen, Peter; Lin, Feng; Dentinger, Aaron; Wildes, Douglas; Thomenius, Kai; Shivkumar, Kalyanam; Mahajan, Aman; Seo, Chi Hyung; O’Donnell, Matthew; Truong, Uyen; Sahn, David J.

    2010-01-01

    Capacitive micromachined ultrasonic transducer (CMUT) arrays are conveniently integrated with frontend integrated circuits, either monolithically or in a hybrid multichip form. This integration helps reduce the number of active data processing channels for 2D arrays. This approach also preserves the signal integrity for arrays with small elements. Therefore, CMUT arrays integrated with electronic circuits are most suitable for implementing the miniaturized probes required for many intravascular, intracardiac, and endoscopic applications. This paper presents examples of miniaturized CMUT probes utilizing 1D, 2D, and ring arrays with integrated electronics. PMID:21097106

  15. Case for a field-programmable gate array multicore hybrid machine for an image-processing application

    Science.gov (United States)

    Rakvic, Ryan N.; Ives, Robert W.; Lira, Javier; Molina, Carlos

    2011-01-01

    General purpose computer designers have recently begun adding cores to their processors in order to increase performance. For example, Intel has adopted a homogeneous quad-core processor as a base for general purpose computing. PlayStation3 (PS3) game consoles contain a multicore heterogeneous processor known as the Cell, which is designed to perform complex image processing algorithms at a high level. Can modern image-processing algorithms utilize these additional cores? On the other hand, modern advancements in configurable hardware, most notably field-programmable gate arrays (FPGAs) have created an interesting question for general purpose computer designers. Is there a reason to combine FPGAs with multicore processors to create an FPGA multicore hybrid general purpose computer? Iris matching, a repeatedly executed portion of a modern iris-recognition algorithm, is parallelized on an Intel-based homogeneous multicore Xeon system, a heterogeneous multicore Cell system, and an FPGA multicore hybrid system. Surprisingly, the cheaper PS3 slightly outperforms the Intel-based multicore on a core-for-core basis. However, both multicore systems are beaten by the FPGA multicore hybrid system by >50%.
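The repeatedly executed kernel in this record is iris matching. The paper does not give its code, but the standard comparison in iris recognition is a masked fractional Hamming distance, which is what makes the workload easy to spread across cores or FPGA logic. A minimal sketch (the function name and bit-level layout are illustrative assumptions):

```python
# Hedged sketch of the masked fractional Hamming distance at the heart
# of iris matching; codes and masks are packed into Python ints here,
# standing in for the wide bit-vectors used on FPGA or SIMD hardware.

def fractional_hamming(code_a, code_b, mask_a, mask_b):
    """Fraction of disagreeing bits over the mutually valid bits."""
    valid = mask_a & mask_b            # bits trustworthy in both codes
    if valid == 0:
        return 1.0                     # nothing comparable: worst score
    diff = (code_a ^ code_b) & valid
    return bin(diff).count("1") / bin(valid).count("1")

score = fractional_hamming(0b1010, 0b1000, 0b1111, 0b1111)  # 1 of 4 bits differ
```

Because the kernel is pure bitwise logic with no data-dependent branching, it maps equally well onto multicore SIMD units and FPGA fabric, which is what the platform comparison in the paper exploits.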

  16. The performance of disk arrays in shared-memory database machines

    Science.gov (United States)

    Katz, Randy H.; Hong, Wei

    1993-01-01

    In this paper, we examine how disk arrays and shared-memory multiprocessors lead to an effective method for constructing database machines for general-purpose complex query processing. We show that disk arrays can lead to cost-effective storage systems if they are configured from suitably small form-factor disk drives. We introduce the storage system metric data temperature as a way to evaluate how well a disk configuration can sustain its workload, and we show that disk arrays can sustain the same data temperature as a more expensive mirrored-disk configuration. We use the metric to evaluate the performance of disk arrays in XPRS, an operational shared-memory multiprocessor database system being developed at the University of California, Berkeley.
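The data temperature metric described above is simply I/O rate normalized by capacity. A toy comparison of a small-form-factor array against a mirrored pair (all drive counts, IOPS, and capacities are illustrative assumptions, not figures from the paper):

```python
# Toy illustration of the data temperature metric (I/O rate per GB).
# Drive counts, IOPS, and capacities are illustrative assumptions.

def data_temperature(accesses_per_second, capacity_gb):
    """Data temperature a storage configuration can sustain."""
    return accesses_per_second / capacity_gb

IOPS_PER_DRIVE = 60.0
# Eight small drives striped into an array vs. a mirrored pair,
# both presenting 8 GB of usable capacity:
array_temp = data_temperature(8 * IOPS_PER_DRIVE, 8.0)
mirror_temp = data_temperature(2 * IOPS_PER_DRIVE, 8.0)
```

The point of the metric is visible even in this toy: more spindles behind the same usable capacity raise the temperature the configuration can sustain.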

  17. Drying induced upright sliding and reorganization of carbon nanotube arrays

    International Nuclear Information System (INIS)

    Li Qingwen; De Paula, Raymond; Zhang Xiefei; Zheng Lianxi; Arendt, Paul N; Mueller, Fred M; Zhu, Y T; Tu Yi

    2006-01-01

    Driven by capillary force, wet carbon nanotube (CNT) arrays have been found to reorganize into cellular structures upon drying. During the reorganization process, individual CNTs are firmly attached to the substrate and have to lie down on the substrate at cell bottoms, forming closed cells. Here we demonstrate that by modifying catalyst structures, the adhesion of CNTs to the substrate can be weakened. Upon drying such CNT arrays, CNTs may slide away from their original sites on the surface and self-assemble into cellular patterns with open bottoms. It is also found that the sliding distance of CNTs increases with array height, and drying millimetre-tall arrays leads to the sliding of CNTs over a few hundred micrometres and their eventual self-assembly into discrete islands. By introducing regular vacancies in CNT arrays, CNTs may be manipulated into different patterns.

  18. Sorting white blood cells in microfabricated arrays

    Science.gov (United States)

    Castelino, Judith Andrea Rose

    Fractionating white cells in microfabricated arrays presents the potential for detecting cells with abnormal adhesive or deformation properties. A possible application is separating nucleated fetal red blood cells from maternal blood. Since fetal cells are nucleated, it is possible to extract genetic information about the fetus from them. Separating fetal cells from maternal blood would provide a low-cost, noninvasive prenatal diagnosis for genetic defects, which is not currently available. We present results showing that fetal cells penetrate further into our microfabricated arrays than adult cells, and that it is possible to enrich the fetal cell fraction using the arrays. We discuss modifications to the array which would result in further enrichment. Fetal cells are less adhesive and more deformable than adult white cells. To determine which properties limit penetration, we compared the penetration of granulocytes and lymphocytes in arrays with different etch depths, constriction sizes, constriction frequencies, and with different amounts of metabolic activity. The penetration of lymphocytes and granulocytes into constrained and unconstrained arrays differed qualitatively. In constrained arrays, the cells were activated by repeated shearing, and the number of cells stuck as a function of distance fell superexponentially. In unconstrained arrays the number of cells stuck fell more slowly than exponentially. We attribute this result to different subpopulations of cells with different sticking parameters. We determined that penetration in unconstrained arrays was limited by metabolic processes, and that when metabolic activity was reduced penetration was limited by deformability. Fetal cells also contain a different form of hemoglobin with a higher oxygen affinity than adult hemoglobin. Deoxygenated cells are paramagnetic and are attracted to high magnetic field gradients.
We describe a device which can separate cells using 10 μm magnetic wires to deflect the paramagnetic

  19. Spatio-temporal change detection from multidimensional arrays: Detecting deforestation from MODIS time series

    Science.gov (United States)

    Lu, Meng; Pebesma, Edzer; Sanchez, Alber; Verbesselt, Jan

    2016-07-01

    Growing availability of long-term satellite imagery enables change modeling with advanced spatio-temporal statistical methods. Multidimensional arrays naturally match the structure of spatio-temporal satellite data and can provide a clean modeling process for complex spatio-temporal analysis over large datasets. Our case study illustrates the detection of breakpoints in MODIS imagery time series for land cover change in the Brazilian Amazon using the BFAST (Breaks For Additive Season and Trend) change detection framework. BFAST includes an Empirical Fluctuation Process (EFP) to signal change and a process to locate change points in time. We extend the EFP to account for the spatial autocorrelation between spatial neighbors and assess the effects of spatial correlation when applying BFAST to satellite image time series. In addition, we evaluate how sensitive the EFP is to the assumption that its time series residuals are temporally uncorrelated, by modeling them as an autoregressive process. We use arrays as a unified data structure for the modeling process, R to execute the analysis, and an array database management system to scale computation. Our results point to BFAST as a robust approach against mild temporal and spatial correlation, to the use of arrays to ease the modeling process of spatio-temporal change, and towards communicable and scalable analysis.
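BFAST's EFP monitors cumulative sums of model residuals against a critical boundary. A minimal CUSUM-type sketch of that underlying idea (this is not the BFAST implementation, and the boundary value below is an illustrative assumption, not a calibrated critical value):

```python
# Minimal CUSUM-style empirical fluctuation process: not the BFAST
# implementation, just the underlying idea. The boundary value below
# is an illustrative assumption, not a calibrated critical value.

def cusum_path(residuals):
    """Scaled cumulative sum of residuals; under the null of no
    structural change the path hovers near zero."""
    n = len(residuals)
    mean = sum(residuals) / n
    sd = (sum((r - mean) ** 2 for r in residuals) / n) ** 0.5
    path, s = [], 0.0
    for r in residuals:
        s += r - mean
        path.append(s / (sd * n ** 0.5))
    return path

def change_alarm(residuals, boundary=1.0):
    """Alarm when the fluctuation path crosses the boundary."""
    return max(abs(v) for v in cusum_path(residuals)) > boundary

stable = [1.0, -1.0] * 10              # noise around zero
shifted = [0.0] * 10 + [2.0] * 10      # mean shift halfway through
```

A series that merely oscillates around its mean keeps the path small, while a sustained mean shift accumulates drift and crosses the boundary, which is how the EFP raises a change alarm before the change point is located.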

  20. A CMOS ASIC Design for SiPM Arrays.

    Science.gov (United States)

    Dey, Samrat; Banks, Lushon; Chen, Shaw-Pin; Xu, Wenbin; Lewellen, Thomas K; Miyaoka, Robert S; Rudell, Jacques C

    2011-12-01

    Our lab has previously reported on novel board-level readout electronics for an 8×8 silicon photomultiplier (SiPM) array featuring a row/column summation technique to reduce the hardware requirements for signal processing. We are taking the next step by implementing a monolithic CMOS chip based on the row-column architecture. In addition, this paper explores the option of using diagonal summation, as well as calibration to compensate for temperature and process variations. A timing pickoff signal which aligns all of the positioning (spatial channel) pulses in the array is also described. The ASIC design is targeted to be scalable with the detector size and flexible enough to accommodate detectors from different vendors. This paper focuses on circuit implementation issues associated with the design of the ASIC to interface our Phase II MiCES FPGA board with a SiPM array. Moreover, a discussion is provided of strategies to eventually integrate all the analog and mixed-signal electronics with the SiPM, on either a single silicon substrate or a multi-chip module (MCM).

  1. Hollow Nanospheres Array Fabrication via Nano-Conglutination Technology.

    Science.gov (United States)

    Zhang, Man; Deng, Qiling; Xia, Liangping; Shi, Lifang; Cao, Axiu; Pang, Hui; Hu, Song

    2015-09-01

    A hollow nanospheres array is a special nanostructure with great applications in photonics, electronics and biochemistry. Nanofabrication techniques with high resolution are crucial to nanoscience and nanotechnology. This paper presents a novel, nonconventional nano-conglutination technology combining polystyrene sphere (PS) self-assembly, conglutination and a lift-off process to fabricate a hollow nanospheres array with nanoholes. A self-assembled monolayer of PSs was peeled off from the quartz wafer by the thiol-ene adhesive material; the PSs were then removed via a lift-off process, leaving the hollow nanospheres embedded in the thiol-ene substrate. Thiol-ene polymer is a UV-curable material via a "click chemistry" reaction at ambient conditions without oxygen inhibition, and its excellent chemical and physical properties make it attractive as the adhesive material in nano-conglutination technology. Using this technique, a hollow nanospheres array with nanoholes 200 nm in diameter embedded in the rigid thiol-ene substrate was fabricated, which has great potential to serve as a reaction container, catalyst and surface-enhanced Raman scattering substrate.

  2. Investigation on the Photoelectrocatalytic Activity of Well-Aligned TiO2 Nanotube Arrays

    Directory of Open Access Journals (Sweden)

    Xiaomeng Wu

    2012-01-01

    Full Text Available Well-aligned TiO2 nanotube arrays were fabricated by anodizing Ti foil in viscous F−-containing organic electrolytes, and the crystal structure and morphology of the TiO2 nanotube array were characterized and analyzed by XRD, SEM, and TEM, respectively. The photocatalytic activity of the TiO2 nanotube arrays was evaluated in the photocatalytic (PC) and photoelectrocatalytic (PEC) degradation of methylene blue (MB) dye in different supporting solutions. Excellent performance of ca. 97% color removal was reached after 90 min in the PEC process compared to the PC process, which indicates that a certain external potential bias favors the promotion of the electrode reaction rate on the TiO2 nanotube array when it is under illumination. In addition, it is found that a PEC process conducted in supporting solutions with low pH and containing Cl− is also beneficial for accelerating the degradation rate of MB.

  3. Ordered arrays of embedded Ga nanoparticles on patterned silicon substrates

    International Nuclear Information System (INIS)

    Bollani, M; Bietti, S; Sanguinetti, S; Frigeri, C; Chrastina, D; Reyes, K; Smereka, P; Millunchick, J M; Vanacore, G M; Tagliaferri, A; Burghammer, M

    2014-01-01

    We fabricate site-controlled, ordered arrays of embedded Ga nanoparticles on Si, using a combination of substrate patterning and molecular-beam epitaxial growth. The fabrication process consists of two steps. Ga droplets are initially nucleated in an ordered array of inverted pyramidal pits, and then partially crystallized by exposure to an As flux, which promotes the formation of a GaAs shell that seals the Ga nanoparticle within two semiconductor layers. The nanoparticle formation process has been investigated through a combination of extensive chemical and structural characterization and theoretical kinetic Monte Carlo simulations. (papers)

  4. The Next-Generation Very Large Array: Technical Overview

    Science.gov (United States)

    McKinnon, Mark; Selina, Rob

    2018-01-01

    As part of its mandate as a national observatory, the NRAO is looking toward the long-range future of radio astronomy and fostering the long-term growth of the US astronomical community. NRAO has sponsored a series of science and technical community meetings to consider the science mission and design of a next-generation Very Large Array (ngVLA), building on the legacies of the Atacama Large Millimeter/submillimeter Array (ALMA) and the Very Large Array (VLA). The basic ngVLA design emerging from these discussions is an interferometric array with approximately ten times the sensitivity and ten times higher spatial resolution than the VLA and ALMA radio telescopes, optimized for operation in the wavelength range 0.3 cm to 3 cm. The ngVLA would open a new window on the Universe through ultra-sensitive imaging of thermal line and continuum emission down to milli-arcsecond resolution, as well as unprecedented broadband continuum polarimetric imaging of non-thermal processes. The specifications and concepts for major ngVLA system elements are rapidly converging. We will provide an overview of the current system design of the ngVLA. The concepts for major system elements such as the antenna, receiving electronics, and central signal processing will be presented. We will also describe the major development activities that are presently underway to advance the design.

  5. A Sparsity-Based Approach to 3D Binaural Sound Synthesis Using Time-Frequency Array Processing

    Science.gov (United States)

    Cobos, Maximo; Lopez, Jose J.; Spors, Sascha

    2010-12-01

    Localization of sounds in physical space plays a very important role in multiple audio-related disciplines, such as music, telecommunications, and audiovisual productions. Binaural recording is the most commonly used method to provide an immersive sound experience by means of headphone reproduction. However, it requires a very specific recording setup using high-fidelity microphones mounted in a dummy head. In this paper, we present a novel processing framework for binaural sound recording and reproduction that avoids the use of dummy heads, which is especially suitable for immersive teleconferencing applications. The method is based on a time-frequency analysis of the spatial properties of the sound picked up by a simple tetrahedral microphone array, assuming source sparseness. The experiments carried out using simulations and a real-time prototype confirm the validity of the proposed approach.

  6. Multi-mode sensor processing on a dynamically reconfigurable massively parallel processor array

    Science.gov (United States)

    Chen, Paul; Butts, Mike; Budlong, Brad; Wasson, Paul

    2008-04-01

    This paper introduces a novel computing architecture that can be reconfigured in real time to adapt on demand to multi-mode sensor platforms' dynamic computational and functional requirements. This 1 teraOPS reconfigurable Massively Parallel Processor Array (MPPA) has 336 32-bit processors. The programmable 32-bit communication fabric provides streamlined inter-processor connections with deterministically high performance. Software programmability, scalability, ease of use, and fast reconfiguration time (ranging from microseconds to milliseconds) are the most significant advantages over FPGAs and DSPs. This paper introduces the MPPA architecture, its programming model, and methods of reconfigurability. An MPPA platform for reconfigurable computing is based on a structural object programming model. Objects are software programs running concurrently on hundreds of 32-bit RISC processors and memories. They exchange data and control through a network of self-synchronizing channels. A common application design pattern on this platform, called a work farm, is a parallel set of worker objects, with one input and one output stream. Statically configured work farms with homogeneous and heterogeneous sets of workers have been used in video compression and decompression, network processing, and graphics applications.
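The work farm pattern described above maps directly onto ordinary task queues. A minimal sketch with one input stream, a set of identical workers, and one ordered output stream (plain Python threads and queues stand in for the MPPA's processor objects and self-synchronizing channels):

```python
# Minimal work-farm sketch: one input stream, identical workers, one
# output stream. Plain Python threads and queues stand in for the
# MPPA's processor objects and self-synchronizing channels.
import queue
import threading

def work_farm(items, worker_fn, n_workers=4):
    items = list(items)
    inq, outq = queue.Queue(), queue.Queue()

    def worker():
        while True:
            task = inq.get()
            if task is None:           # poison pill: shut this worker down
                return
            idx, x = task
            outq.put((idx, worker_fn(x)))

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for task in enumerate(items):      # feed the input stream
        inq.put(task)
    for _ in threads:
        inq.put(None)
    for t in threads:
        t.join()
    results = [outq.get() for _ in items]
    return [value for _, value in sorted(results)]  # restore order

squares = work_farm(range(8), lambda x: x * x)
```

The farm is defined entirely by its two streams, so swapping a homogeneous worker set for a heterogeneous one (as in the video and network processing examples above) changes only `worker_fn`, not the wiring.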

  7. MISSION-ORIENTED SENSOR ARRAYS AND UAVs – A CASE STUDY ON ENVIRONMENTAL MONITORING

    Directory of Open Access Journals (Sweden)

    N. M. Figueira

    2015-08-01

    Full Text Available This paper presents a new concept of UAV mission design in geomatics, applied to the generation of thematic maps for a multitude of civilian and military applications. We discuss the architecture of Mission-Oriented Sensor Arrays (MOSA), proposed in Figueira et al. (2013), aimed at splitting and decoupling the mission-oriented part of the system (non-safety-critical hardware and software) from the aircraft control systems (safety-critical). As a case study, we present an environmental monitoring application for the automatic generation of thematic maps to track gunshot activity in conservation areas. The MOSA modeled for this application integrates information from a thermal camera and an on-the-ground microphone array. The use of microphone array technology is of particular interest in this paper. These arrays allow estimation of the direction of arrival (DOA) of incoming sound waves. Information about events of interest is obtained by fusing the data provided by the microphone array with information from the thermal image processing captured by the UAV. Preliminary results show the feasibility of the on-the-ground sound processing array and the simulation of the main processing module, to be embedded into a UAV in future work. The main contributions of this paper are the proposed MOSA system, including its concepts, models and architecture.

  8. Array display tool ADT reference manual. Version 1.2

    International Nuclear Information System (INIS)

    Evans, K. Jr.

    1995-12-01

    Array Display Tool (ADT) is a Motif program to display arrays of process variables from the Advanced Photon Source control system. A typical use is to display the horizontal and vertical monitor readings. The screen layout, apart from the menu bar, consists of two types of graphic areas in which the values for the arrays of process variables are shown: display areas, which display one or more arrays as a function of index, and a zoom area. In the zoom area, specified arrays only are displayed as a function of lattice position, along with symbols for the major elements of the lattice. There can be several display areas, but at most one zoom area. When the screen is resized these areas change size proportionally. There are a number of options in the View Menu to change the way the values are displayed. It is also possible via the Options Menu to: (1) store the current values internally; (2) store the values from a snapshot file internally; (3) display one of the stored sets of values along with the current values; (4) display the difference of the current values with one of the stored sets of values; (5) write the current values to a snapshot file. There are several (currently 5) slots in which you can store values internally. In addition you can display the values with specified reference values subtracted.

  9. Electrodeposited highly-ordered manganese oxide nanowire arrays for supercapacitors

    Science.gov (United States)

    Liu, Haifeng; Lu, Bingqiang; Wei, Shuiqiang; Bao, Mi; Wen, Yanxuan; Wang, Fan

    2012-07-01

    Large arrays of well-aligned Mn oxide nanowires were prepared by electrodeposition using anodic aluminum oxide templates. The sizes of the nanowires were tuned by varying the electrolyte solution involved, and MnO2 nanowires 10 μm in length were obtained in a neutral KMnO4 bath in 1 h. MnO2 nanowire arrays grown on a conductive substrate avoid the tedious electrode-making process, and electrochemical characterization demonstrates that the MnO2 nanowire array electrode has good capacitive behavior. Due to the limited mass transport in the narrow spacing, the spacing between neighboring nanowires has a great influence on the electrochemical performance.

  10. In-beam conversion electron spectroscopy using the SACRED array

    International Nuclear Information System (INIS)

    Jones, P.M.; Cann, K.J.; Cocks, J.F.C.; Jones, G.D.; Julin, R.; Schulze, B.; Smith, J.F.; Wilson, A.N.

    1997-01-01

    Conversion electron studies of medium-heavy to heavy nuclear mass systems are important where the internal conversion process begins to dominate over gamma-ray emission. A segmented detector array sensitive to conversion electrons has been used to study multiple conversion electron cascades from nuclear transitions. The application of the silicon array for conversion electron detection (SACRED) to in-beam measurements has been successfully implemented. (orig.). With 2 figs.

  11. Experimental studies of Z-pinches of mixed wire array with aluminum and tungsten

    International Nuclear Information System (INIS)

    Ning Cheng; Li Zhenghong; Hua Xinsheng; Xu Rongkun; Peng Xianjue; Xu Zeping; Yang Jianlun; Guo Cun; Jiang Shilun; Feng Shuping; Yang Libing; Yan Chengli; Song Fengjun; Smirnov, V.P.; Kalinin, Yu.G.; Kingsep, A.S.; Chernenko, A.S.; Grabovsky, E.V.

    2004-01-01

    In a joint experiment between China and Russia, experimental studies of Z-pinches of a mixed wire array of aluminum (Al) and tungsten (W) were carried out on the S-300 generator at the Kurchatov Institute in Russia. The experimental results were compared with those of a single Al array and a single W array, respectively. There are obvious differences between the mixed array and the single arrays in their photon spectral distributions. The intensity of K-series emission lines from the mixed wire array Z-pinch is lower than that from the single Al array. Radiated lines with wavelengths less than 1.6 nm were not found in single W array Z-pinches. In the Z-pinch processes, the area radiating x-rays in the mixed wire array is smaller than that of the single Al array, but is slightly lower than that from the single W array. The FWHM of the x-ray pulse, with a maximal power of 0.3–0.5 TW and a total energy of 10–20 kJ, is about 25 ns, radiated from Z-pinches with a radial convergence of 4–5 on the S-300 generator. The shadow photograph of the mixed wire-array Z-pinch plasma by laser probe shows that the core-corona configuration was formed and the corona was moving toward the center axis during the wire-array plasma formation, that the interface of the plasma is not clear, and that there are a number of structures inside. It also suggests that there was an obvious development of the magneto-Rayleigh-Taylor instability in the Z-pinch process as well.

  12. Gate protective device for SOS array

    Science.gov (United States)

    Meyer, J. E., Jr.; Scott, J. H.

    1972-01-01

    A protective gate device consisting of alternating heavily doped n(+) and p(+) diffusions prevents gate oxide breakdown in silicon-on-sapphire arrays caused by electrostatic discharge from personnel or equipment. The diffusions are easily produced during normal double epitaxial processing. Devices with nine layers had a 27-volt breakdown.

  13. Cyclotron-Resonance-Maser Arrays

    International Nuclear Information System (INIS)

    Kesar, A.; Lei, L.; Dikhtyar, V.; Korol, M.; Jerby, E.

    1999-01-01

    The cyclotron-resonance-maser (CRM) array [1] is a radiation source which consists of CRM elements coupled together under a common magnetic field. Each CRM element employs a low-energy electron beam which performs a cyclotron interaction with the local electromagnetic wave. These waves can be coupled together among the CRM elements, hence the interaction is coherently synchronized in the entire array. The implementation of the CRM-array approach may alleviate several technological difficulties which impede the development of single-beam gyro-devices. Furthermore, it offers new features, such as the phased-array antenna incorporated in the CRM array itself. The CRM-array studies may lead to the development of compact, high-power radiation sources operating at low voltages. This paper introduces new conceptual schemes of CRM arrays, and presents the progress in related theoretical and experimental studies in our laboratory. These include a multi-mode analysis of a CRM array, and a first operation of this device with five carbon-fiber cathodes

  14. Degree-of-Freedom Strengthened Cascade Array for DOD-DOA Estimation in MIMO Array Systems.

    Science.gov (United States)

    Yao, Bobin; Dong, Zhi; Zhang, Weile; Wang, Wei; Wu, Qisheng

    2018-05-14

    In spatial spectrum estimation, the difference co-array can provide extra degrees-of-freedom (DOFs) for promoting parameter identifiability and parameter estimation accuracy. To acquire as many DOFs as possible with a given number of physical sensors, we herein design a novel sensor array geometry named the cascade array. This structure is generated by systematically connecting a uniform linear array (ULA) and a non-uniform linear array, and can provide more DOFs than some existing array structures, though fewer than the upper bound indicated by the minimum redundancy array (MRA). We further apply this cascade array to multiple input multiple output (MIMO) array systems, and propose a novel joint direction of departure (DOD) and direction of arrival (DOA) estimation algorithm, which is based on a reduced-dimensional weighted subspace fitting technique. The algorithm is angle auto-paired and computationally efficient. Theoretical analysis and numerical simulations prove the advantages and effectiveness of the proposed array structure and the related algorithm.
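
    The extra DOFs offered by a difference co-array can be made concrete with a short sketch. The cascade geometry itself is not detailed in this abstract, so a standard two-level nested array stands in for illustration: six physical sensors give a ULA only 11 distinct lags, while a nested layout yields 23.

```python
import numpy as np

def difference_coarray(positions):
    """Return the unique lags of the difference co-array of a sensor geometry."""
    p = np.asarray(positions)
    return np.unique((p[:, None] - p[None, :]).ravel())

# Uniform linear array: 6 sensors at 0..5 -> lags -5..5, i.e. 11 DOFs.
ula = np.arange(6)

# Two-level nested array (3 inner + 3 outer sensors, outer spacing 4):
# its difference co-array is the contiguous lag set -11..11, i.e. 23 DOFs.
nested = np.array([0, 1, 2, 3, 7, 11])

print(len(difference_coarray(ula)))     # 11
print(len(difference_coarray(nested)))  # 23
```

    More distinct lags in the co-array translate directly into more resolvable sources per physical sensor, which is the motivation for co-array-based geometries such as the proposed cascade array.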

  15. Sensor Arrays and Electronic Tongue Systems

    Directory of Open Access Journals (Sweden)

    Manel del Valle

    2012-01-01

    Full Text Available This paper describes recent work performed with electronic tongue systems utilizing electrochemical sensors. The electronic tongue concept is a new trend in sensors that uses arrays of sensors together with chemometric tools to unravel the complex information generated. Initial contributions, and also the most used variant, employ conventional ion-selective electrodes; this variant is named the potentiometric electronic tongue. The second important variant is the one that employs voltammetry for its operation. As the chemometric processing tool, the use of artificial neural networks as the preferred data-processing variant will be described. The use of sensor arrays inserted in flow injection or sequential injection systems will exemplify attempts made to automate the operation of electronic tongues. Significant use of biosensors, mainly enzyme-based, to form what is already named the bioelectronic tongue will also be presented. Application examples will be illustrated with selected study cases from the Sensors and Biosensors Group at the Autonomous University of Barcelona.

  16. Wire Array Solar Cells: Fabrication and Photoelectrochemical Studies

    Science.gov (United States)

    Spurgeon, Joshua Michael

    Despite demand for clean energy to reduce our addiction to fossil fuels, the price of these technologies relative to oil and coal has prevented their widespread implementation. Solar energy has enormous potential as a carbon-free resource but is several times the cost of coal-produced electricity, largely because photovoltaics of practical efficiency require high-quality, pure semiconductor materials. To produce current in a planar junction solar cell, an electron or hole generated deep within the material must travel all the way to the junction without recombining. Radial junction, wire array solar cells, however, have the potential to decouple the directions of light absorption and charge-carrier collection so that a semiconductor with a minority-carrier diffusion length shorter than its absorption depth (i.e., a lower quality, potentially cheaper material) can effectively produce current. The axial dimension of the wires is long enough for sufficient optical absorption while the charge-carriers are collected along the shorter radial dimension in a massively parallel array. This thesis explores the wire array solar cell design by developing potentially low-cost fabrication methods and investigating the energy-conversion properties of the arrays in photoelectrochemical cells. The concept was initially investigated with Cd(Se, Te) rod arrays; however, Si was the primary focus of wire array research because its semiconductor properties make low-quality Si an ideal candidate for improvement in a radial geometry. Fabrication routes for Si wire arrays were explored, including the vapor-liquid-solid growth of wires using SiCl4. Uniform, vertically aligned Si wires were demonstrated in a process that permits control of the wire radius, length, and spacing. A technique was developed to transfer these wire arrays into a low-cost, flexible polymer film, and grow multiple subsequent arrays using a single Si(111) substrate. 
Photoelectrochemical measurements on Si wire array

  17. Calibration of a fluxgate magnetometer array and its application in magnetic object localization

    International Nuclear Information System (INIS)

    Pang, Hongfeng; Luo, Shitu; Zhang, Qi; Li, Ji; Chen, Dixiang; Pan, Mengchun; Luo, Feilu

    2013-01-01

    The magnetometer array is effective for magnetic object detection and localization. Calibration is important to improve the accuracy of the magnetometer array. A magnetic sensor array built with four three-axis DM-050 fluxgate magnetometers is designed, which is connected by a cross aluminum frame. In order to improve the accuracy of the magnetometer array, a calibration process is presented. The calibration process includes magnetometer calibration, coordinate transformation and misalignment calibration. The calibration system consists of a magnetic sensor array, a GSM-19T proton magnetometer, a two-dimensional nonmagnetic rotation platform, a 12 V-dc portable power device and two portable computers. After magnetometer calibration, the RMS error has been decreased from an original value of 125.559 nT to a final value of 1.711 nT (a factor of 74). After alignment, the RMS error of misalignment has been decreased from 1322.3 to 6.0 nT (a factor of 220). Then, the calibrated array deployed on the nonmagnetic rotation platform is used for ferromagnetic object localization. Experimental results show that the estimated errors of X, Y and Z axes are −0.049 m, 0.008 m and 0.025 m, respectively. Thus, the magnetometer array is effective for magnetic object detection and localization in three dimensions. (paper)
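
    The magnetometer-calibration step against a scalar reference can be illustrated with a minimal sketch. All numbers and the diagonal scale-plus-offset error model below are illustrative assumptions, not the paper's actual procedure (which additionally handles coordinate transformation and misalignment); the idea is that readings of a constant-magnitude field must lie on an ellipsoid, which linear least squares can fit.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative ground truth: total field B (as from a proton magnetometer),
# plus hypothetical per-axis scale factors and offsets of the fluxgate triad.
B = 50000.0                              # nT
scale = np.array([1.05, 0.97, 1.02])
bias = np.array([300.0, -150.0, 80.0])   # nT

# Raw readings m = truth/scale + bias over many random orientations.
dirs = rng.normal(size=(200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
m = (B * dirs) / scale + bias

# |S(m - b)| = B linearizes to sum_k u_k m_k^2 + v_k m_k = 1; fit u, v.
A = np.hstack([m**2, m])
p, *_ = np.linalg.lstsq(A, np.ones(len(m)), rcond=None)
u, v = p[:3], p[3:]
b_est = -v / (2 * u)                                  # per-axis offset estimate
s_est = B * np.sqrt(u / (1 + np.sum(u * b_est**2)))   # per-axis scale estimate

rms_before = np.sqrt(np.mean((np.linalg.norm(m, axis=1) - B) ** 2))
rms_after = np.sqrt(np.mean(
    (np.linalg.norm(s_est * (m - b_est), axis=1) - B) ** 2))
print(rms_before, rms_after)   # RMS error drops sharply after calibration
```

    The same principle, applied per sensor and followed by the inter-sensor alignment steps, is what produces the large RMS-error reductions reported in the abstract.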

  18. Calibration of a fluxgate magnetometer array and its application in magnetic object localization

    Science.gov (United States)

    Pang, Hongfeng; Luo, Shitu; Zhang, Qi; Li, Ji; Chen, Dixiang; Pan, Mengchun; Luo, Feilu

    2013-07-01

    The magnetometer array is effective for magnetic object detection and localization. Calibration is important to improve the accuracy of the magnetometer array. A magnetic sensor array built with four three-axis DM-050 fluxgate magnetometers is designed, which is connected by a cross aluminum frame. In order to improve the accuracy of the magnetometer array, a calibration process is presented. The calibration process includes magnetometer calibration, coordinate transformation and misalignment calibration. The calibration system consists of a magnetic sensor array, a GSM-19T proton magnetometer, a two-dimensional nonmagnetic rotation platform, a 12 V-dc portable power device and two portable computers. After magnetometer calibration, the RMS error has been decreased from an original value of 125.559 nT to a final value of 1.711 nT (a factor of 74). After alignment, the RMS error of misalignment has been decreased from 1322.3 to 6.0 nT (a factor of 220). Then, the calibrated array deployed on the nonmagnetic rotation platform is used for ferromagnetic object localization. Experimental results show that the estimated errors of X, Y and Z axes are -0.049 m, 0.008 m and 0.025 m, respectively. Thus, the magnetometer array is effective for magnetic object detection and localization in three dimensions.

  19. DETECTION OF FAST TRANSIENTS WITH RADIO INTERFEROMETRIC ARRAYS

    International Nuclear Information System (INIS)

    Bhat, N. D. R.; Chengalur, J. N.; Gupta, Y.; Prasad, J.; Roy, J.; Kudale, S. S.; Cox, P. J.; Bailes, M.; Burke-Spolaor, S.; Van Straten, W.

    2013-01-01

    Next-generation radio arrays, including the Square Kilometre Array (SKA) and its pathfinders, will open up new avenues for exciting transient science at radio wavelengths. Their innovative designs, comprising a large number of small elements, pose several challenges in digital processing and optimal observing strategies. The Giant Metre-wave Radio Telescope (GMRT) presents an excellent test-bed for developing and validating suitable observing modes and strategies for transient experiments with future arrays. Here we describe the first phase of the ongoing development of a transient detection system for GMRT that is planned to eventually function in a commensal mode with other observing programs. It capitalizes on the GMRT's interferometric and sub-array capabilities, and the versatility of a new software backend. We outline considerations in the plan and design of transient exploration programs with interferometric arrays, and describe a pilot survey that was undertaken to aid in the development of algorithms and associated analysis software. This survey was conducted at 325 and 610 MHz, and covered 360 deg² of the sky with short dwell times. It provides large volumes of real data that can be used to test the efficacies of various algorithms and observing strategies applicable for transient detection. We present examples that illustrate the methodologies of detecting short-duration transients, including the use of sub-arrays for higher resilience to spurious events of terrestrial origin, localization of candidate events via imaging, and the use of a phased array for improved signal detection and confirmation. In addition to demonstrating applications of interferometric arrays for fast transient exploration, our efforts mark important steps in the roadmap toward SKA-era science.

  20. Detection of Fast Transients with Radio Interferometric Arrays

    Science.gov (United States)

    Bhat, N. D. R.; Chengalur, J. N.; Cox, P. J.; Gupta, Y.; Prasad, J.; Roy, J.; Bailes, M.; Burke-Spolaor, S.; Kudale, S. S.; van Straten, W.

    2013-05-01

    Next-generation radio arrays, including the Square Kilometre Array (SKA) and its pathfinders, will open up new avenues for exciting transient science at radio wavelengths. Their innovative designs, comprising a large number of small elements, pose several challenges in digital processing and optimal observing strategies. The Giant Metre-wave Radio Telescope (GMRT) presents an excellent test-bed for developing and validating suitable observing modes and strategies for transient experiments with future arrays. Here we describe the first phase of the ongoing development of a transient detection system for GMRT that is planned to eventually function in a commensal mode with other observing programs. It capitalizes on the GMRT's interferometric and sub-array capabilities, and the versatility of a new software backend. We outline considerations in the plan and design of transient exploration programs with interferometric arrays, and describe a pilot survey that was undertaken to aid in the development of algorithms and associated analysis software. This survey was conducted at 325 and 610 MHz, and covered 360 deg² of the sky with short dwell times. It provides large volumes of real data that can be used to test the efficacies of various algorithms and observing strategies applicable for transient detection. We present examples that illustrate the methodologies of detecting short-duration transients, including the use of sub-arrays for higher resilience to spurious events of terrestrial origin, localization of candidate events via imaging, and the use of a phased array for improved signal detection and confirmation. In addition to demonstrating applications of interferometric arrays for fast transient exploration, our efforts mark important steps in the roadmap toward SKA-era science.

  1. Design of a new electrode array for cochlear implants

    International Nuclear Information System (INIS)

    Kha, H.; Chen, B.

    2010-01-01

    Full text: This study aims to design a new electrode array which can be precisely located beneath the basilar membrane within the cochlear scala tympani. This placement of the electrode array is beneficial for increasing the effectiveness of the electrical stimulation of the auditory nerves and maximising the growth factors delivered into the cochlea for regenerating the progressively lost auditory neurons, thereby significantly improving performance of the cochlear implant systems. Methods: The design process involved two steps. First, the biocompatible nitinol-based shape memory alloy, whose mechanical deformation can be controlled using electrical currents/fields activated by body temperature, was selected. Second, five different designs of the electrode array with embedded nitinol actuators were studied (Table I). The finite element method was employed to predict final positions of these electrode arrays. Results: The electrode array with three 6 mm actuators at 2-8, 8-14 and 14-20 mm from the tip (Fig. 1) was found to be located most closely to the basilar membrane, compared with those in the other four cases. Conclusions: A new nitinol cochlear implant electrode array with three embedded nitinol actuators has been designed. This electrode array is expected to be located beneath the basilar membrane for maximising the delivery of growth factors. Future research will involve the manufacturing of a prototype of this electrode array for use in insertion experiments and neurotrophin release tests.

  2. Blending of phased array data

    Science.gov (United States)

    Duijster, Arno; van Groenestijn, Gert-Jan; van Neer, Paul; Blacquière, Gerrit; Volker, Arno

    2018-04-01

    The use of phased arrays is growing in the non-destructive testing industry and the trend is towards large 2D arrays, but due to limitations, it is currently not possible to record the signals from all elements, resulting in aliased data. In the past, we have presented a data interpolation scheme `beyond spatial aliasing' to overcome this aliasing. In this paper, we present a different approach: blending and deblending of data. On the hardware side, groups of receivers are blended (grouped) in only a few transmit/recording channels. This allows for transmission and recording with all elements, in a shorter acquisition time and with less channels. On the data processing side, this blended data is deblended (separated) by transforming it to a different domain and applying an iterative filtering and thresholding. Two different filtering methods are compared: f-k filtering and wavefield extrapolation filtering. The deblending and filtering methods are demonstrated on simulated experimental data. The wavefield extrapolation filtering proves to outperform f-k filtering. The wavefield extrapolation method can deal with groups of up to 24 receivers, in a phased array of 48 × 48 elements.
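
    The iterative filtering-and-thresholding idea behind deblending can be sketched in one dimension. This is a toy model under assumed conditions (a coherent signal sparse in the frequency domain, blending crosstalk spiky in time), not the paper's f-k or wavefield-extrapolation filters:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
t = np.arange(n)

# Coherent "signal": a few sinusoids, sparse in the frequency domain.
signal = (np.sin(2 * np.pi * 5 * t / n)
          + 0.9 * np.sin(2 * np.pi * 12 * t / n)
          + 0.8 * np.sin(2 * np.pi * 31 * t / n))

# Blending interference: sparse strong spikes in time.
interf = np.zeros(n)
interf[rng.choice(n, 20, replace=False)] = 5.0 * rng.choice([-1.0, 1.0], 20)

blended = signal + interf

# Alternate: keep strong Fourier components (signal estimate), then keep
# strong time-domain residual samples (interference estimate).
sig_est = np.zeros(n)
int_est = np.zeros(n)
for _ in range(10):
    S = np.fft.fft(blended - int_est)
    S[np.abs(S) < 60.0] = 0.0                    # threshold in frequency
    sig_est = np.fft.ifft(S).real
    r = blended - sig_est
    int_est = np.where(np.abs(r) > 2.0, r, 0.0)  # threshold in time

err = np.linalg.norm(sig_est - signal) / np.linalg.norm(signal)
print(err)   # small residual: the two wavefields are largely separated
```

    The paper's methods follow the same alternating pattern but use transforms (f-k, wavefield extrapolation) in which coherent arrivals, rather than sinusoids, become compact.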

  3. A review of array radars

    Science.gov (United States)

    Brookner, E.

    1981-10-01

    Achievements in the area of array radars are illustrated by such activities as the operational deployment of the large high-power, high-range-resolution Cobra Dane; the operational deployment of two all-solid-state high-power, large UHF Pave Paws radars; and the development of the SAM multifunction Patriot radar. This paper reviews the following topics: array radars steered in azimuth and elevation by phase shifting (phase-phase steered arrays); arrays steered + or - 60 deg, limited scan arrays, hemispherical coverage, and omnidirectional coverage arrays; array radars steering electronically in only one dimension, either by frequency or by phase steering; and array radar antennas which use no electronic scanning but instead use array antennas for achieving low antenna sidelobes.

  4. Micro-machined high-frequency (80 MHz) PZT thick film linear arrays.

    Science.gov (United States)

    Zhou, Qifa; Wu, Dawei; Liu, Changgeng; Zhu, Benpeng; Djuth, Frank; Shung, K

    2010-10-01

    This paper presents the development of a micromachined high-frequency linear array using PZT piezoelectric thick films. The linear array has 32 elements with an element width of 24 μm and an element length of 4 mm. Array elements were fabricated by deep reactive ion etching of PZT thick films, which were prepared from spin-coating of PZT sol-gel composite. Detailed fabrication processes, especially PZT thick film etching conditions and a novel transferring-and-etching method, are presented and discussed. Array designs were evaluated by simulation. Experimental measurements show that the array had a center frequency of 80 MHz and a fractional bandwidth (-6 dB) of 60%. An insertion loss of -41 dB and adjacent element crosstalk of -21 dB were found at the center frequency.

  5. Argonne Leadership Computing Facility 2011 annual report : Shaping future supercomputing.

    Energy Technology Data Exchange (ETDEWEB)

    Papka, M.; Messina, P.; Coffey, R.; Drugan, C. (LCF)

    2012-08-16

    The ALCF's Early Science Program aims to prepare key applications for the architecture and scale of Mira and to solidify libraries and infrastructure that will pave the way for other future production applications. Two billion core-hours have been allocated to 16 Early Science projects on Mira. The projects, in addition to promising delivery of exciting new science, are all based on state-of-the-art, petascale, parallel applications. The project teams, in collaboration with ALCF staff and IBM, have undertaken intensive efforts to adapt their software to take advantage of Mira's Blue Gene/Q architecture, which, in a number of ways, is a precursor to future high-performance-computing architecture. The Argonne Leadership Computing Facility (ALCF) enables transformative science that solves some of the most difficult challenges in biology, chemistry, energy, climate, materials, physics, and other scientific realms. Users partnering with ALCF staff have reached research milestones previously unattainable, due to the ALCF's world-class supercomputing resources and expertise in computational science. In 2011, the ALCF's commitment to providing outstanding science and leadership-class resources was honored with several prestigious awards. Research on multiscale brain blood flow simulations was named a Gordon Bell Prize finalist. Intrepid, the ALCF's BG/P system, ranked No. 1 on the Graph 500 list for the second consecutive year. The next-generation BG/Q prototype again topped the Green500 list. Skilled experts at the ALCF enable researchers to conduct breakthrough science on the Blue Gene system in key ways. The Catalyst Team matches project PIs with experienced computational scientists to maximize and accelerate research in their specific scientific domains. The Performance Engineering Team facilitates the effective use of applications on the Blue Gene system by assessing and improving the algorithms used by applications and the techniques used to

  6. Preparation and properties of novel magnetic composite nanostructures: Arrays of nanowires in porous membranes

    International Nuclear Information System (INIS)

    Vazquez, M.; Hernandez-Velez, M.; Asenjo, A.; Navas, D.; Pirota, K.; Prida, V.; Sanchez, O.; Baldonedo, J.L.

    2006-01-01

    In the present work, we introduce our latest achievements in the development of novel highly ordered composite magnetic nanostructures employing anodized nanoporous membranes as precursor templates, where long-range hexagonal symmetry is induced by self-assembly during the anodization process. Subsequent processing such as electroplating, sputtering or pressing is employed to prepare arrays of metallic, semiconductor or polymeric nanowires embedded in oxide or metallic membranes. Particular attention is paid to recent results on controlling the magnetic anisotropy in arrays of metallic nanowires, particularly Co, and nanohole arrays in Ni membranes

  7. Evaluating the networking characteristics of the Cray XC-40 Intel Knights Landing-based Cori supercomputer at NERSC

    Energy Technology Data Exchange (ETDEWEB)

    Doerfler, Douglas [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Austin, Brian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Cook, Brandon [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Deslippe, Jack [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Kandalla, Krishna [Cray Inc, Bloomington, MN (United States); Mendygral, Peter [Cray Inc, Bloomington, MN (United States)

    2017-09-12

    There are many potential issues associated with deploying the Intel Xeon Phi™ (code named Knights Landing [KNL]) manycore processor in a large-scale supercomputer. One in particular is the ability to fully utilize the high-speed communications network, given that the serial performance of a Xeon Phi™ core is a fraction of a Xeon® core. In this paper, we take a look at the trade-offs associated with allocating enough cores to fully utilize the Aries high-speed network versus cores dedicated to computation, e.g., the trade-off between MPI and OpenMP. In addition, we evaluate new features of Cray MPI in support of KNL, such as internode optimizations. We also evaluate one-sided programming models such as Unified Parallel C. We quantify the impact of the above trade-offs and features using a suite of National Energy Research Scientific Computing Center applications.

  8. The EUROBALL array

    International Nuclear Information System (INIS)

    Rossi Alvarez, C.

    1998-01-01

    The quality of the multidetector array EUROBALL is described, with emphasis on the history and formal organization of the related European collaboration. The detector layout is presented together with the electronics and data acquisition capabilities. The status of the instrument, its performance and the main features of some recently developed ancillary detectors will also be described. The EUROBALL array has been operational at the Legnaro National Laboratory (Italy) since April 1997 and is expected to run up to November 1998. The array represents a significant improvement in detector efficiency and sensitivity with respect to the previous generation of multidetector arrays

  9. A polychromator-type near-infrared spectrometer with a high-sensitivity and high-resolution photodiode array detector for pharmaceutical process monitoring on the millisecond time scale.

    Science.gov (United States)

    Murayama, Kodai; Genkawa, Takuma; Ishikawa, Daitaro; Komiyama, Makoto; Ozaki, Yukihiro

    2013-02-01

    In the fine chemicals industry, particularly in the pharmaceutical industry, advanced sensing technologies have recently begun being incorporated into the process line in order to improve safety and quality in accordance with process analytical technology. For estimating the quality of powders without preparation during drug formulation, near-infrared (NIR) spectroscopy has been considered the most promising sensing approach. In this study, we have developed a compact polychromator-type NIR spectrometer equipped with a photodiode (PD) array detector. This detector consists of 640 InGaAs-PD elements with a 20-μm pitch. Some high-specification spectrometers, which use InGaAs-PDs with 512 elements, have a wavelength resolution of about 1.56 nm when covering the 900-1700 nm range. On the other hand, the newly developed detector, with one of the highest PD densities in the world, enables a wavelength resolution below 1.25 nm. Moreover, thanks to the combination with a highly integrated charge-amplifier array circuit, the measurement speed of the detector is two orders of magnitude higher than that of existing PD array detectors. The developed spectrometer is small (120 mm × 220 mm × 200 mm) and light (6 kg), and it contains various key devices including the high-density and high-sensitivity PD array detector, NIR technology, and spectroscopy technology for a spectroscopic analyzer that has the required detection mechanism and high sensitivity for powder measurement, as well as a high-speed measuring function for blenders. Moreover, we have evaluated the characteristics of the developed NIR spectrometer, and the measurement of powder samples confirmed that it has high functionality.
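
    The two quoted resolution figures are simply the spectral span divided by the element count, assuming the 900-1700 nm range is spread evenly across the photodiode array:

```python
# Spectral span covered by the detector, per the abstract (nm).
span = 1700 - 900          # 800 nm

# 512-element InGaAs-PD array: about 1.56 nm per element.
print(span / 512)          # 1.5625

# 640-element array at 20-um pitch: 1.25 nm per element.
print(span / 640)          # 1.25
```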

  10. A polychromator-type near-infrared spectrometer with a high-sensitivity and high-resolution photodiode array detector for pharmaceutical process monitoring on the millisecond time scale

    Science.gov (United States)

    Murayama, Kodai; Genkawa, Takuma; Ishikawa, Daitaro; Komiyama, Makoto; Ozaki, Yukihiro

    2013-02-01

    In the fine chemicals industry, particularly in the pharmaceutical industry, advanced sensing technologies have recently begun being incorporated into the process line in order to improve safety and quality in accordance with process analytical technology. For estimating the quality of powders without preparation during drug formulation, near-infrared (NIR) spectroscopy has been considered the most promising sensing approach. In this study, we have developed a compact polychromator-type NIR spectrometer equipped with a photodiode (PD) array detector. This detector consists of 640 InGaAs-PD elements with a 20-μm pitch. Some high-specification spectrometers, which use InGaAs-PDs with 512 elements, have a wavelength resolution of about 1.56 nm when covering the 900-1700 nm range. On the other hand, the newly developed detector, with one of the highest PD densities in the world, enables a wavelength resolution below 1.25 nm. Moreover, thanks to the combination with a highly integrated charge-amplifier array circuit, the measurement speed of the detector is two orders of magnitude higher than that of existing PD array detectors. The developed spectrometer is small (120 mm × 220 mm × 200 mm) and light (6 kg), and it contains various key devices including the high-density and high-sensitivity PD array detector, NIR technology, and spectroscopy technology for a spectroscopic analyzer that has the required detection mechanism and high sensitivity for powder measurement, as well as a high-speed measuring function for blenders. Moreover, we have evaluated the characteristics of the developed NIR spectrometer, and the measurement of powder samples confirmed that it has high functionality.

  11. Challenging aspects of contemporary cochlear implant electrode array design.

    Science.gov (United States)

    Mistrík, Pavel; Jolly, Claude; Sieber, Daniel; Hochmair, Ingeborg

    2017-12-01

    A design comparison of current perimodiolar and lateral wall electrode arrays of the cochlear implant (CI) is provided. The focus is on functional features such as acoustic frequency coverage and tonotopic mapping, battery consumption and dynamic range. The trauma associated with their insertion is also evaluated. Review of up-to-date literature. Perimodiolar electrode arrays are positioned in the basal turn of the cochlea near the modiolus. They are designed to initiate the action potential in proximity to the neural soma located in the spiral ganglion. On the other hand, lateral wall electrode arrays can be inserted deeper inside the cochlea, as they are located along the lateral wall and such an insertion trajectory is less traumatic. This class of arrays targets primarily surviving neural peripheral processes. Due to their larger insertion depth, lateral wall arrays can deliver lower acoustic frequencies in a manner better corresponding to cochlear tonotopicity. In fact, spiral ganglion sections containing auditory nerve fibres tuned to low acoustic frequencies are located deeper than one and a half turns inside the cochlea. For this reason, a significant frequency mismatch might occur for apical electrodes in perimodiolar arrays, detrimental to speech perception. Tonal languages such as Mandarin might therefore be better treated with lateral wall arrays. On the other hand, closer proximity to the target tissue results in lower psychophysical threshold levels for perimodiolar arrays. However, the maximal comfort level is also lower, paradoxically resulting in a narrower dynamic range than that of lateral wall arrays. Battery consumption is comparable for both types of arrays. Lateral wall arrays are less likely to cause trauma to cochlear structures. As the current trend in cochlear implantation is the maximal protection of residual acoustic hearing, the lateral wall arrays seem more suitable for hearing preservation CI surgeries. Future development could focus on combining the

  12. Testing of ITER central solenoid coil insulation in an array

    International Nuclear Information System (INIS)

    Jayakumar, R.; Martovetsky, N.N.; Perfect, S.A.

    1995-01-01

    A glass-polyimide insulation system has been proposed by the US team for use in the Central Solenoid (CS) coil of the International Thermonuclear Experimental Reactor (ITER) machine, and it is planned to use this system in the CS model coil inner module. The turn insulation will consist of 2 layers of combined prepreg and Kapton. Each layer is 50% overlapped with a butt wrap of prepreg and an overwrap of S glass. The coil layers will be separated by a glass-resin composite and impregnated in a VPI process. Small-scale tests on the various components of the insulation are complete. It is planned to fabricate and test the insulation in a 4 x 4 insulated CS conductor array which will include the layer insulation and be vacuum impregnated. The conductor array will be subjected to 20 thermal cycles and 100000 mechanical load cycles in a liquid nitrogen environment. These loads are similar to those seen in the CS coil design. The insulation will be electrically tested at several stages during mechanical testing. This paper will describe the array configuration, fabrication process, instrumentation, testing configuration, and supporting analyses used in selecting the array and test configurations

  13. Ultra-wideband WDM VCSEL arrays by lateral heterogeneous integration

    Science.gov (United States)

    Geske, Jon

    Advancements in heterogeneous integration are a driving factor in the development of evermore sophisticated and functional electronic and photonic devices. Such advancements will merge the optical and electronic capabilities of different material systems onto a common integrated device platform. This thesis presents a new lateral heterogeneous integration technology called nonplanar wafer bonding. The technique is capable of integrating multiple dissimilar semiconductor device structures on the surface of a substrate in a single wafer bond step, leaving different integrated device structures adjacent to each other on the wafer surface. Material characterization and numerical simulations confirm that the material quality is not compromised during the process. Nonplanar wafer bonding is used to fabricate ultra-wideband wavelength division multiplexed (WDM) vertical-cavity surface-emitting laser (VCSEL) arrays. The optically-pumped VCSEL arrays span 140 nm from 1470 to 1610 nm, a record wavelength span for devices operating in this wavelength range. The array uses eight wavelength channels to span the 140 nm with all channels separated by precisely 20 nm. All channels in the array operate single mode to at least 65°C with output power uniformity of +/- 1 dB. The ultra-wideband WDM VCSEL arrays are a significant first step toward the development of a single-chip source for optical networks based on coarse WDM (CWDM), a low-cost alternative to traditional dense WDM. The CWDM VCSEL arrays make use of fully-oxidized distributed Bragg reflectors (DBRs) to provide the wideband reflectivity required for optical feedback and lasing across 140 nm. In addition, a novel optically-pumped active region design is presented. It is demonstrated, with an analytical model and experimental results, that the new active-region design significantly improves the carrier uniformity in the quantum wells and results in a 50% lasing threshold reduction and a 20°C improvement in the peak

  14. Restoring Low Sidelobe Antenna Patterns with Failed Elements in a Phased Array Antenna

    Science.gov (United States)

    2016-02-01

    optimum low sidelobes are demonstrated in several examples. Index Terms — Array signal processing, beams, linear algebra, phased arrays, shaped...beam antennas. I. INTRODUCTION For many phased array antenna applications, low spatial sidelobes are required, and it is desirable to maintain...represented by a linear combination of low sidelobe beamformers with no failed elements, 's, in a neighborhood around under the constraint that the linear
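The snippet above describes representing a failed-element beamformer in terms of low-sidelobe beamformers with no failed elements. A minimal NumPy sketch of one generic restoration approach — least-squares re-weighting of the surviving elements so the array factor approximates the original low-sidelobe pattern — follows; this is an illustrative stand-in, not necessarily the paper's exact constrained method, and the function name and half-wavelength spacing are assumptions:

```python
import numpy as np

def restore_pattern(weights, failed, n_angles=512, d=0.5):
    """Re-weight the surviving elements of a linear array (spacing d, in
    wavelengths) so the pattern best matches the original low-sidelobe
    pattern in a least-squares sense. Illustrative approach only.
    """
    n = len(weights)
    u = np.linspace(-1.0, 1.0, n_angles)                     # sin(theta) grid
    A = np.exp(2j * np.pi * d * np.outer(u, np.arange(n)))   # (angles, elements)
    desired = A @ weights                                    # failure-free pattern
    alive = [i for i in range(n) if i not in set(failed)]
    w_alive, *_ = np.linalg.lstsq(A[:, alive], desired, rcond=None)
    w_new = np.zeros(n, dtype=complex)
    w_new[alive] = w_alive                                   # failed elements stay zero
    return w_new
```

By construction the least-squares fit can do no worse than simply zeroing the failed element, and in practice it recovers much of the sidelobe performance.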

  15. Remanence coercivity of dot arrays of hcp-CoPt perpendicular films

    Energy Technology Data Exchange (ETDEWEB)

    Mitsuzuka, K; Shimatsu, T; Aoi, H [Research Institute of Electrical Communication, Tohoku University, Sendai, 980-8577 (Japan); Kikuchi, N; Okamoto, S; Kitakami, O, E-mail: shimatsu@riec.tohoku.ac.j [Institute of Multidisciplinary Research for Advanced Materials, Tohoku University, Sendai, 980-8577 (Japan)

    2010-01-01

    The remanence coercivity, H{sub r}, of hcp-CoPt dot arrays with various dot thicknesses, {delta}, (3 and 10 nm) and Pt content (20-30 at%) was experimentally investigated as a function of the dot diameter, D (30-400 nm). All dot arrays showed a single domain state, even after removal of an applied field equal to H{sub r}. The angular dependence of H{sub r} for the dot arrays indicated coherent rotation of the magnetization during nucleation. H{sub r} increased as D decreased in all series of dot arrays with various {delta} and Pt content. Assuming that the nucleation field of a dot is determined by the switching field of the grain having the smallest switching field, we calculated the value of the nucleation field H{sub n}{sup cal} taking account of the c-axis distribution and the distribution of the demagnetizing field in the dot. The values of H{sub r} obtained experimentally are in good agreement with those of H{sub n}{sup cal}, taking account of thermal agitation of the magnetization. This result suggests that the reversal process of hcp-CoPt dot arrays starts with nucleation at the center of the dot followed by a propagation process.
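The coherent-rotation picture invoked for the angular dependence of H{sub r} is the Stoner-Wohlfarth model, whose switching field versus field angle follows the well-known astroid formula. A minimal sketch (normalized to an assumed anisotropy field H{sub K}; the function name is illustrative):

```python
import numpy as np

def sw_switching_field(psi, h_k=1.0):
    """Stoner-Wohlfarth switching field for coherent rotation.

    psi: angle (rad) between the applied field and the easy axis.
    Returns H_sw = H_K / (cos^{2/3}(psi) + sin^{2/3}(psi))^{3/2},
    i.e. H_K at psi = 0 and a minimum of H_K/2 at psi = 45 degrees.
    """
    c = np.abs(np.cos(psi)) ** (2.0 / 3.0)
    s = np.abs(np.sin(psi)) ** (2.0 / 3.0)
    return h_k / (c + s) ** 1.5
```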

  16. Vision communications based on LED array and imaging sensor

    Science.gov (United States)

    Yoo, Jong-Ho; Jung, Sung-Yoon

    2012-11-01

    In this paper, we propose a brand new communication concept, called "vision communication", based on an LED array and an image sensor. This system consists of an LED array as the transmitter and a digital device containing an image sensor, such as a CCD or CMOS sensor, as the receiver. To transmit data, the proposed communication scheme simultaneously uses digital image processing and optical wireless communication techniques. Therefore, a cognitive communication scheme is possible with the help of the recognition techniques used in vision systems. To increase the data rate, our scheme can use an LED array consisting of several multi-spectral LEDs. Because each LED in the array can emit a multi-spectral optical signal such as visible, infrared, or ultraviolet light, the data rate can be increased in a manner similar to the WDM and MIMO techniques used in traditional optical and wireless communications. In addition, this multi-spectral capability also makes it possible to avoid optical noise in the communication environment. In our vision communication scheme, the data packet is composed of Sync. data and information data. The Sync. data is used to detect the transmitter area and calibrate the distorted image snapshots obtained by the image sensor. By matching the optical modulation rate of the LED array to the frame rate (frames per second) of the image sensor, we can decode the information data included in each image snapshot using image processing and optical wireless communication techniques. Through experiments on a practical test-bed system, we confirm the feasibility of the proposed vision communication based on an LED array and an image sensor.
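The decoding step described above — one bit per LED per image snapshot — can be sketched as simple per-cell intensity thresholding. This toy sketch assumes the transmitter region has already been detected and rectified (the role of the Sync. data), with one LED per grid cell; the function name and grid size are illustrative, not from the paper:

```python
import numpy as np

def decode_frame(frame, grid=(4, 4), threshold=None):
    """Decode one rectified image-sensor snapshot of an on/off LED array
    into a list of bits by thresholding the mean intensity of each cell.
    """
    h, w = frame.shape
    gh, gw = grid
    if threshold is None:
        threshold = frame.mean()          # crude global threshold
    bits = []
    for r in range(gh):
        for c in range(gw):
            cell = frame[r * h // gh:(r + 1) * h // gh,
                         c * w // gw:(c + 1) * w // gw]
            bits.append(int(cell.mean() > threshold))
    return bits
```

With the LED modulation rate matched to the sensor frame rate, applying this per snapshot yields one packet payload per frame.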

  17. Gravitational wave searches with pulsar timing arrays: Cancellation of clock and ephemeris noises

    Science.gov (United States)

    Tinto, Massimo

    2018-04-01

    We propose a data processing technique to cancel monopole and dipole noise sources (such as clock and ephemeris noises, respectively) in pulsar timing array searches for gravitational radiation. These noises are the dominant sources of correlated timing fluctuations in the lower part (≈10⁻⁹-10⁻⁸ Hz) of the gravitational wave band accessible by pulsar timing experiments. After deriving the expressions that reconstruct these noises from the timing data, we estimate the gravitational wave sensitivity of our proposed processing technique to single-source signals to be at least one order of magnitude higher than that achievable by directly processing the timing data from an equal-size array. Since arrays can generate pairs of clock- and ephemeris-free timing combinations that are no longer affected by correlated noises, we implement with them the cross-correlation statistic to search for an isotropic stochastic gravitational wave background. We find the resulting optimal signal-to-noise ratio to be more than one order of magnitude larger than that obtainable by correlating pairs of timing data from arrays of equal size.
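The cross-correlation statistic mentioned above works because a stochastic background is common to both data streams while pulsar-intrinsic noise is independent and averages away. A toy illustration (unit variances, white noise, and the function name are all illustrative choices, not the paper's pipeline):

```python
import numpy as np

def cross_corr_stat(r1, r2):
    """Zero-lag cross-correlation statistic of two residual series."""
    return float(np.dot(r1, r2) / len(r1))

rng = np.random.default_rng(1)
n = 200_000
common = rng.standard_normal(n)        # shared stochastic signal, unit variance
r1 = common + rng.standard_normal(n)   # pulsar 1: signal + intrinsic noise
r2 = common + rng.standard_normal(n)   # pulsar 2: signal + intrinsic noise
stat = cross_corr_stat(r1, r2)         # converges to the common-signal variance
```

The independent noise terms contribute zero in expectation, so the statistic estimates the common-signal power even when each series is noise-dominated.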

  18. Servo scanning 3D micro EDM for array micro cavities using on-machine fabricated tool electrodes

    Science.gov (United States)

    Tong, Hao; Li, Yong; Zhang, Long

    2018-02-01

    Array micro cavities are useful in many fields including micro molds, optical devices, biochips and so on. Array servo scanning micro electro discharge machining (EDM), using array micro electrodes with a simple cross-sectional shape, has the advantage of machining complex 3D micro cavities in batches. In this paper, the machining errors caused by offline-fabricated array micro electrodes are analyzed in particular, and then a machining process of array servo scanning micro EDM is proposed using on-machine fabricated array micro electrodes. The array micro electrodes are fabricated on-machine by combined procedures including wire electro discharge grinding, array reverse copying and electrode end trimming. Nine-array tool electrodes with Φ80 µm diameter and 600 µm length are obtained. Furthermore, the proposed process is verified by several machining experiments achieving nine-array hexagonal micro cavities with a top side length of 300 µm, bottom side length of 150 µm, and depth of 112 µm or 120 µm. In the experiments, a chip hump accumulates on the electrode tips, like the built-up edge in mechanical machining, under the conditions of brass workpieces, copper electrodes and a deionized water dielectric. The accumulated hump can be avoided by replacing the water dielectric with an oil dielectric.

  19. Flame Structure and Dynamics for an Array of Premixed Methane-Air Jets

    Science.gov (United States)

    Nigam, Siddharth P.; Lapointe, Caelan; Christopher, Jason D.; Wimer, Nicholas T.; Hayden, Torrey R. S.; Rieker, Gregory B.; Hamlington, Peter E.

    2017-11-01

    Premixed flames have been studied extensively, both experimentally and computationally, and their properties are reasonably well characterized for a range of conditions and configurations. However, the premixed combustion process is potentially much more difficult to predict when many such flames are arranged in a closely spaced array. These arrays must be better understood, in particular, for the design of industrial burners used in chemical and heat treatment processes. Here, the effects of geometric array parameters (e.g., angle and diameter of jet inlets, number of inlets and their respective orientation) and operating conditions (e.g., jet velocities, fuel-air ratio) on flame structure and dynamics are studied using large eddy simulations (LES). The simulations are performed in OpenFOAM using multi-step chemistry for a methane-air mixture, and temperature and chemical composition fields are characterized for a variety of configurations as functions of height above the array. Implications of these results for the design and operation of industrial burners are outlined.

  20. A fast computation method for MUSIC spectrum function based on circular arrays

    Science.gov (United States)

    Du, Zhengdong; Wei, Ping

    2015-02-01

    The large computational cost of the multiple signal classification (MUSIC) spectrum function seriously affects the timeliness of direction finding systems using the MUSIC algorithm, especially in two-dimensional direction of arrival (DOA) estimation of azimuth and elevation with a large antenna array. This paper proposes a fast computation method for the MUSIC spectrum that is suitable for any circular array. First, the circular array is transformed into a virtual uniform circular array. Then, exploiting the cyclic structure of the steering vectors, the inner products in the spatial spectrum calculation are realised by cyclic convolution. The computational cost of the MUSIC spectrum is thus markedly lower than that of the conventional method, making this a very practical approach to MUSIC spectrum computation for circular arrays.
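For context on what is being accelerated, here is a minimal NumPy sketch of the conventional grid-search MUSIC spectrum for a uniform circular array — the inner products between the noise subspace and each steering vector are exactly the operations the paper replaces with cyclic convolutions. The geometry, names, and grid are illustrative assumptions:

```python
import numpy as np

def music_spectrum_uca(X, n_sources, radius, n_grid=360, wavelength=1.0):
    """Conventional 1-D (azimuth) MUSIC spectrum for a uniform circular array.

    X: (n_elements, n_snapshots) complex snapshot matrix.
    radius: array radius in the same units as wavelength.
    Returns the spectrum evaluated on an n_grid-point azimuth grid.
    """
    n_el = X.shape[0]
    R = X @ X.conj().T / X.shape[1]               # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)          # eigenvalues ascending
    En = eigvecs[:, : n_el - n_sources]           # noise subspace
    phi_el = 2 * np.pi * np.arange(n_el) / n_el   # element angular positions
    theta = 2 * np.pi * np.arange(n_grid) / n_grid
    k = 2 * np.pi / wavelength
    # Steering matrix: a_m(theta) = exp(j * k * r * cos(theta - phi_m))
    A = np.exp(1j * k * radius * np.cos(theta[None, :] - phi_el[:, None]))
    proj = En.conj().T @ A                        # the cost-dominating inner products
    return 1.0 / np.sum(np.abs(proj) ** 2, axis=0)
```

Each spectrum point costs O(n_el²) here; the cyclic structure of the virtual-array steering vectors is what lets the paper evaluate all grid points with FFT-based cyclic convolutions instead.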