WorldWideScience

Sample records for supercomputers shlomo weiss

  1. Supercomputational science

    CERN Document Server

    Wilson, S

    1990-01-01

    In contemporary research, the supercomputer now ranks, along with radio telescopes, particle accelerators and the other apparatus of "big science", as an expensive resource, which is nevertheless essential for state-of-the-art research. Supercomputers are usually provided as shared central facilities. However, unlike telescopes and accelerators, they find a wide range of applications which extends across a broad spectrum of research activity. The difference in performance between a "good" and a "bad" computer program on a traditional serial computer may be a factor of two or three, but on a contemporary supercomputer it can easily be a factor of one hundred or even more! Furthermore, this factor is likely to increase with future generations of machines. In keeping with the large capital and recurrent costs of these machines, it is appropriate to devote effort to training and familiarization so that supercomputers are employed to best effect. This volume records the lectures delivered at a Summer School ...

  2. Mario Weiss (1927 – 2008)

    CERN Multimedia

    2008-01-01

    It was with great sadness that we learned that our colleague and friend Mario Weiss passed away on February 11th. A feeling of emptiness overtook us all. Mario was a reassuring reference in the small community of linear accelerator experts, as he continued to come to CERN regularly and discuss accelerator problems with passion for many years after his official retirement. Mario came to CERN in 1960 and in the PS Division worked on beam dynamics of low-energy high-intensity proton beams, soon becoming a world-level expert in the field. He took an active part in the construction of Linac2, where he was responsible for the low-energy beam transport system. In the early 80's he turned his interest to the Radio Frequency Quadrupole (RFQ), a novel concept for acceleration which allows the problems related to bunching and injecting low-energy beams to be overcome. After starting a fruitful collaboration with the Los Alamos scientists, ...

  3. KAUST Supercomputing Laboratory

    KAUST Repository

    Bailey, April Renee

    2011-11-15

    KAUST has partnered with IBM to establish a Supercomputing Research Center. KAUST is hosting the Shaheen supercomputer, named after the Arabian falcon famed for its swiftness of flight. This 16-rack IBM Blue Gene/P system is equipped with 4 gigabytes of memory per node and is capable of 222 teraflops, making the KAUST campus the site of one of the world's fastest supercomputers in an academic environment. KAUST is targeting petaflop capability within 3 years.

  4. Emerging supercomputer architectures

    Energy Technology Data Exchange (ETDEWEB)

    Messina, P.C.

    1987-01-01

    This paper will examine the current and near future trends for commercially available high-performance computers with architectures that differ from the mainstream "supercomputer" systems in use for the last few years. These emerging supercomputer architectures are just beginning to have an impact on the field of high performance computing. 7 refs., 1 tab.

  5. NSF Commits to Supercomputers.

    Science.gov (United States)

    Waldrop, M. Mitchell

    1985-01-01

    The National Science Foundation (NSF) has allocated at least $200 million over the next five years to support four new supercomputer centers. Issues and trends related to this NSF initiative are examined. (JN)

  6. Evaluation of Programs: Reading Carol H. Weiss

    Science.gov (United States)

    Msila, Vuyisile; Setlhako, Angeline

    2013-01-01

    Carol Weiss did much to enhance the role of evaluation in her writings. Her work shows evaluators what affects their roles as they evaluate programs. Furthermore, her theory of change spells out the complexities involved in program evaluation. There are various processes involved in the evaluation of programs. The paper looks at some of the…

  7. 2012 Cliff Weiss Memorial Essay Contest Winners

    Science.gov (United States)

    Techniques: Connecting Education and Careers (J3), 2012

    2012-01-01

    This article presents the winners of the 2012 Cliff Weiss Memorial Essay Contest. They are Naim Owens from Washington, DC, and Colissa Menke from Warrensburg, Missouri. The 2012 essay topic is "How do you feel CTE prepares individuals, including yourself, for a future career?"

  8. Psychoanalytic ethics: Edoardo Weiss, Freud, and Mussolini.

    Science.gov (United States)

    Roazen, P

    1991-10-01

    This paper examines Edoardo Weiss's correspondence with S. Freud regarding Concetta Forzano's analysis and Forzano's efforts to intercede with Mussolini on Freud's behalf after the Nazis invaded Austria in 1938. Freud's reliance on Mussolini can be explained by traditional Viennese attitudes toward Italy, the Duce's protectiveness about Austrian independence, and the relatively benign attitude of the Fascist regime towards Jews.

  9. Mallory-Weiss Tear during Esophagogastroduodenoscopy

    Directory of Open Access Journals (Sweden)

    Ji Wan Kim

    2015-02-01

    Full Text Available Mallory-Weiss tears (MWTs) are mucosal lacerations caused by forceful retching and are typically located at the gastroesophageal junction. Reported cases of MWT with serious complications seen at esophagogastroduodenoscopy are limited. We report an MWT in an 81-year-old woman that presented as gastric perforation during esophagogastroduodenoscopy. We discuss how hiatal hernia, atrophic gastritis and old age may be associated with gastric perforation, in contrast to the typical tears occurring at the gastroesophageal junction.

  10. Energy sciences supercomputing 1990

    Energy Technology Data Exchange (ETDEWEB)

    Mirin, A.A.; Kaiper, G.V. (eds.)

    1990-01-01

    This report contains papers on the following topics: meeting the computational challenge; lattice gauge theory: probing the standard model; supercomputing for the Superconducting Super Collider; an overview of ongoing studies in climate model diagnosis and intercomparison; MHD simulation of the fueling of a tokamak fusion reactor through the injection of compact toroids; gyrokinetic particle simulation of tokamak plasmas; analyzing chaos: a visual essay in nonlinear dynamics; supercomputing and research in theoretical chemistry; Monte Carlo simulations of light nuclei; parallel processing; and scientists of the future: learning by doing.

  11. Supercomputers to transform Science

    CERN Multimedia

    2006-01-01

    "New insights into the structure of space and time, climate modeling, and the design of novel drugs, are but a few of the many research areas that will be transforned by the installation of three supercomputers at the Unversity of Bristol." (1/2 page)

  12. Petaflop supercomputers of China

    Institute of Scientific and Technical Information of China (English)

    Guoliang CHEN

    2010-01-01

    After ten years of development, high performance computing (HPC) in China has made remarkable progress. In November 2010, the NUDT Tianhe-1A and the Dawning Nebulae claimed the 1st and 3rd places, respectively, in the TOP500 list of supercomputers; this is international recognition of the level China has achieved in high-performance computer manufacturing.

  13. Introduction to Reconfigurable Supercomputing

    CERN Document Server

    Lanzagorta, Marco; Rosenberg, Robert

    2010-01-01

    This book covers technologies, applications, tools, languages, procedures, advantages, and disadvantages of reconfigurable supercomputing using Field Programmable Gate Arrays (FPGAs). The target audience is the community of users of High Performance Computers (HPC) who may benefit from porting their applications into a reconfigurable environment. As such, this book is intended to guide the HPC user through the many algorithmic considerations, hardware alternatives, usability issues, programming languages, and design tools that need to be understood before embarking on the creation of reconfigur...

  14. Enabling department-scale supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Greenberg, D.S.; Hart, W.E.; Phillips, C.A.

    1997-11-01

    The Department of Energy (DOE) national laboratories have one of the longest and most consistent histories of supercomputer use. The authors summarize the architecture of DOE's new supercomputers that are being built for the Accelerated Strategic Computing Initiative (ASCI). The authors then argue that in the near future scaled-down versions of these supercomputers with petaflop-per-weekend capabilities could become widely available to hundreds of research and engineering departments. The availability of such computational resources will allow simulation of physical phenomena to become a full-fledged third branch of scientific exploration, along with theory and experimentation. They describe the ASCI and other supercomputer applications at Sandia National Laboratories, and discuss which lessons learned from Sandia's long history of supercomputing can be applied in this new setting.

  15. Ultrascalable petaflop parallel supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Blumrich, Matthias A. (Ridgefield, CT); Chen, Dong (Croton On Hudson, NY); Chiu, George (Cross River, NY); Cipolla, Thomas M. (Katonah, NY); Coteus, Paul W. (Yorktown Heights, NY); Gara, Alan G. (Mount Kisco, NY); Giampapa, Mark E. (Irvington, NY); Hall, Shawn (Pleasantville, NY); Haring, Rudolf A. (Cortlandt Manor, NY); Heidelberger, Philip (Cortlandt Manor, NY); Kopcsay, Gerard V. (Yorktown Heights, NY); Ohmacht, Martin (Yorktown Heights, NY); Salapura, Valentina (Chappaqua, NY); Sugavanam, Krishnan (Mahopac, NY); Takken, Todd (Brewster, NY)

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.
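
    The record above describes interconnect hardware. Purely as an application-level illustration (and not the patented design), the sketch below shows how a parallel code might exchange messages with nearest neighbours on a periodic, torus-shaped process grid; it assumes Python with mpi4py and NumPy available, and the name payload is illustrative.

    ```python
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    ndims = 3
    dims = MPI.Compute_dims(comm.Get_size(), ndims)

    # Periodic in every dimension -> a torus-shaped process grid.
    cart = comm.Create_cart(dims, periods=[True] * ndims, reorder=True)

    # One nearest-neighbour exchange per dimension: the basic communication
    # pattern that torus interconnects are designed to serve.
    payload = np.full(4, cart.Get_rank(), dtype="d")
    for dim in range(ndims):
        src, dst = cart.Shift(dim, 1)
        recv = np.empty_like(payload)
        cart.Sendrecv(payload, dest=dst, recvbuf=recv, source=src)
        # recv now holds the payload of the neighbouring node along this dimension
    ```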

  16. Supercomputer debugging workshop 1991 proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.

    1991-01-01

    This report discusses the following topics on supercomputer debugging: distributed debugging; user interface to debugging tools and standards; debugging optimized codes; debugging parallel codes; and debugger performance and interface as analysis tools. (LSP)

  17. Supercomputer debugging workshop 1991 proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.

    1991-12-31

    This report discusses the following topics on supercomputer debugging: distributed debugging; user interface to debugging tools and standards; debugging optimized codes; debugging parallel codes; and debugger performance and interface as analysis tools. (LSP)

  18. Computational Dimensionalities of Global Supercomputing

    Directory of Open Access Journals (Sweden)

    Richard S. Segall

    2013-12-01

    Full Text Available This Invited Paper pertains to the subject of my Plenary Keynote Speech at the 17th World Multi-Conference on Systemics, Cybernetics and Informatics (WMSCI 2013), held in Orlando, Florida on July 9-12, 2013. The title of my Plenary Keynote Speech was "Dimensionalities of Computation: from Global Supercomputing to Data, Text and Web Mining", but this Invited Paper will focus only on the "Computational Dimensionalities of Global Supercomputing" and is based upon a summary of the contents of several individual articles that have been previously written with myself as lead author and published in [75], [76], [77], [78], [79], [80] and [11]. The topics of the Plenary Speech included Overview of Current Research in Global Supercomputing [75], Open-Source Software Tools for Data Mining Analysis of Genomic and Spatial Images using High Performance Computing [76], Data Mining Supercomputing with SAS™ JMP® Genomics ([77], [79], [80]), and Visualization by Supercomputing Data Mining [81]. ______________________ [11] Committee on the Future of Supercomputing, National Research Council (2003), The Future of Supercomputing: An Interim Report, ISBN-13: 978-0-309-09016-2, http://www.nap.edu/catalog/10784.html [75] Segall, Richard S.; Zhang, Qingyu and Cook, Jeffrey S. (2013), "Overview of Current Research in Global Supercomputing", Proceedings of the Forty-Fourth Meeting of Southwest Decision Sciences Institute (SWDSI), Albuquerque, NM, March 12-16, 2013. [76] Segall, Richard S. and Zhang, Qingyu (2010), "Open-Source Software Tools for Data Mining Analysis of Genomic and Spatial Images using High Performance Computing", Proceedings of the 5th INFORMS Workshop on Data Mining and Health Informatics, Austin, TX, November 6, 2010. [77] Segall, Richard S., Zhang, Qingyu and Pierce, Ryan M. (2010), "Data Mining Supercomputing with SAS™ JMP® Genomics: Research-in-Progress", Proceedings of the 2010 Conference on Applied Research in Information Technology, sponsored by

  19. Microprocessors: from desktops to supercomputers.

    Science.gov (United States)

    Baskett, F; Hennessy, J L

    1993-08-13

    Continuing improvements in integrated circuit technology and computer architecture have driven microprocessors to performance levels that rival those of supercomputers - at a fraction of the price. The use of sophisticated memory hierarchies enables microprocessor-based machines to have very large memories built from commodity dynamic random access memory while retaining the high bandwidth and low access time needed in a high-performance machine. Parallel processors composed of these high-performance microprocessors are becoming the supercomputing technology of choice for scientific and engineering applications. The challenges for these new supercomputers have been in developing multiprocessor architectures that are easy to program and that deliver high performance without extraordinary programming efforts by users. Recent progress in multiprocessor architecture has led to ways to meet these challenges.

  20. World's fastest supercomputer opens up to users

    Science.gov (United States)

    Xin, Ling

    2016-08-01

    China's latest supercomputer - Sunway TaihuLight - has claimed the crown as the world's fastest computer according to the latest TOP500 list, released at the International Supercomputer Conference in Frankfurt in late June.

  1. Improved Access to Supercomputers Boosts Chemical Applications.

    Science.gov (United States)

    Borman, Stu

    1989-01-01

    Supercomputing is described in terms of computing power and abilities. The increase in the availability of supercomputers for use in chemical calculations and modeling is reported. Efforts of the National Science Foundation and Cray Research are highlighted. (CW)

  2. Desktop supercomputers. Advance medical imaging.

    Science.gov (United States)

    Frisiello, R S

    1991-02-01

    Medical imaging tools that radiologists as well as a wide range of clinicians and healthcare professionals have come to depend upon are emerging into the next phase of functionality. The strides being made in supercomputing technologies--including reduction of size and price--are pushing medical imaging to a new level of accuracy and functionality.

  3. An assessment of worldwide supercomputer usage

    Energy Technology Data Exchange (ETDEWEB)

    Wasserman, H.J.; Simmons, M.L.; Hayes, A.H.

    1995-01-01

    This report provides a comparative study of advanced supercomputing usage in Japan and the United States as of Spring 1994. It is based on the findings of a group of US scientists whose careers have centered on programming, evaluating, and designing high-performance supercomputers for over ten years. The report is a follow-on to an assessment of supercomputing technology in Europe and Japan that was published in 1993. Whereas the previous study focused on supercomputer manufacturing capabilities, the primary focus of the current work was to compare where and how supercomputers are used. Research for this report was conducted through both literature studies and field research in Japan.

  4. Euler-Heisenberg-Weiss action for QCD+QED

    CERN Document Server

    Ozaki, Sho; Hattori, Koichi; Itakura, Kazunori

    2015-01-01

    We derive an analytic expression for the one-loop effective action of QCD+QED at zero and finite temperatures by using Schwinger's proper-time method. The result is a nonlinear effective action not only for electromagnetic and chromo-electromagnetic fields but also for the Polyakov loop, and thus reproduces the Euler-Heisenberg action in QED, QCD, and QED+QCD, and also the Weiss potential for the Polyakov loop at finite temperature. As applications of this "Euler-Heisenberg-Weiss" action in QCD+QED, we investigate quark pair production induced by QCD+QED fields at zero temperature and the Polyakov loop in the presence of strong electromagnetic fields. The quark one-loop contribution to the effective potential of the Polyakov loop explicitly breaks the center symmetry, and is found to be enhanced by the magnetic field, which is consistent with the inverse magnetic catalysis observed in lattice QCD simulations.
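
    For orientation, the weak-field limit that such an action must reproduce in the pure QED sector is the standard Euler-Heisenberg Lagrangian, quoted here in its familiar textbook form (natural units, electron mass m, fine-structure constant α); this expression is standard background, not a formula taken from the paper:

    ```latex
    \mathcal{L}_{\mathrm{EH}} \simeq \tfrac{1}{2}\left(\mathbf{E}^{2}-\mathbf{B}^{2}\right)
      + \frac{2\alpha^{2}}{45\,m^{4}}
        \left[\left(\mathbf{E}^{2}-\mathbf{B}^{2}\right)^{2}
              + 7\left(\mathbf{E}\cdot\mathbf{B}\right)^{2}\right]
    ```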

  5. TOP500 Supercomputers for June 2004

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2004-06-23

    23rd Edition of TOP500 List of World's Fastest Supercomputers Released: Japan's Earth Simulator Enters Third Year in Top Position MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 23rd edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2004) at the International Supercomputer Conference in Heidelberg, Germany.

  6. WAIS-IV and WISC-IV Structural Validity: Alternate Methods, Alternate Results. Commentary on Weiss et al. (2013a) and Weiss et al. (2013b)

    Science.gov (United States)

    Canivez, Gary L.; Kush, Joseph C.

    2013-01-01

    Weiss, Keith, Zhu, and Chen (2013a) and Weiss, Keith, Zhu, and Chen (2013b), this issue, report examinations of the factor structure of the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) and Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV), respectively; comparing Wechsler Hierarchical Model (W-HM) and…

  7. WAIS-IV and WISC-IV Structural Validity: Alternate Methods, Alternate Results. Commentary on Weiss et al. (2013a) and Weiss et al. (2013b)

    Science.gov (United States)

    Canivez, Gary L.; Kush, Joseph C.

    2013-01-01

    Weiss, Keith, Zhu, and Chen (2013a) and Weiss, Keith, Zhu, and Chen (2013b), this issue, report examinations of the factor structure of the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) and Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV), respectively; comparing Wechsler Hierarchical Model (W-HM) and…

  8. Flatfoot in Müller-Weiss syndrome: a case series

    Directory of Open Access Journals (Sweden)

    Wang Xu

    2012-08-01

    Full Text Available Introduction: Spontaneous osteonecrosis of the navicular bone in adults is a rare entity, known as Müller-Weiss syndrome. We report here on our experience with six patients with Müller-Weiss syndrome accompanied by flatfoot deformity, but a literature search found no reports on this phenomenon. Because the natural history and treatment are controversial, an understanding of how to manage this deformity may be helpful for surgeons when choosing the most appropriate operative procedure. Case presentation: Six patients (five women, one man; average age, 54 years) with flatfoot caused by osteonecrosis of the navicular bone were followed up between January 2005 and December 2008 (mean follow-up period, 23.2 months). Conservative treatment, such as physical therapy and non-steroidal anti-inflammatory drugs, was used but failed. Physical examinations revealed flattening of the medial arch of the involved foot and mild tenderness at the mid-tarsal joint. Weight-bearing X-rays (anterior-posterior and lateral views), computed tomography, and MRI scans were performed for each case. Talonavicular joint arthrodesis was performed in cases of single talonavicular joint arthritis. Triple arthrodesis was performed in cases of triple joint arthritis to reconstruct the medial arch. Clinical outcomes were assessed using the American Orthopaedic Foot and Ankle Society ankle-hindfoot scale; the scores were 63.0 pre-operatively and 89.8 post-operatively. All patients developed bony fusion. Conclusions: The reason for the development of flatfoot in patients with Müller-Weiss syndrome is unknown. Surgical treatment may achieve favorable outcomes in terms of deformity correction, pain relief, and functional restoration. The choice of operative procedure may differ in patients with both flatfoot and posterior tibial tendon dysfunction.

  9. INTEL: Intel based systems move up in supercomputing ranks

    CERN Multimedia

    2002-01-01

    "The TOP500 supercomputer rankings released today at the Supercomputing 2002 conference show a dramatic increase in the number of Intel-based systems being deployed in high-performance computing (HPC) or supercomputing areas" (1/2 page).

  10. Dyons and Roberge-Weiss transition in lattice QCD

    CERN Document Server

    Bornyakov, V G; Goy, V A; Ilgenfritz, E-M; Martemyanov, B V; Molochkov, A V; Nakamura, Atsushi; Nikolaev, A A; Zakharov, V I

    2016-01-01

    We study lattice QCD with $N_f=2$ Wilson fermions at nonzero imaginary chemical potential and nonzero temperature. We relate the Roberge-Weiss phase transition to the properties of dyons which are constituents of the KvBLL calorons. We present numerical evidence that the characteristic features of the spectral gap of the overlap Dirac operator as a function of an angle modifying the boundary condition are determined by the $Z_3$ sector of the respective imaginary chemical potential. We then demonstrate that dyon excitations in thermal configurations could be responsible (in line with perturbative excitations) for these phenomena.

  11. Comparing Clusters and Supercomputers for Lattice QCD

    CERN Document Server

    Gottlieb, S

    2001-01-01

    Since the development of the Beowulf project to build a parallel computer from commodity PC components, there have been many such clusters built. The MILC QCD code has been run on a variety of clusters and supercomputers. Key design features are identified, and the cost effectiveness of clusters and supercomputers is compared.

  12. Low Cost Supercomputer for Applications in Physics

    Science.gov (United States)

    Ahmed, Maqsood; Ahmed, Rashid; Saeed, M. Alam; Rashid, Haris; Fazal-e-Aleem

    2007-02-01

    Using parallel processing techniques and commodity hardware, Beowulf supercomputers can be built at a much lower cost. Research organizations and educational institutions are using this technique to build their own high-performance clusters. In this paper we discuss the architecture and design of a Beowulf supercomputer and our own experience of building the BURRAQ cluster.

  13. Phase Transition in Conditional Curie-Weiss Model

    CERN Document Server

    Opoku, Alex A; Ansah, Richard

    2016-01-01

    This paper proposes a conditional Curie-Weiss model as a model for opinion formation in a society polarized along two opinions, say opinions 1 and 2. The model comes with interaction strength $\\beta>0$ and bais $h$. Here the population in question is divided into three main groups, namely: Group one consisting of individuals who have decided on opinion 1. Let the proportion of this group be given by $s$. Group two consisting of individauls who have chosen opinion 2. Let $r$ be their proportion. Group three consisting of individuals who are yet to decide and they will decide based on their environmental conditions. Let $1-s-r$ be the proportion of this group. We show that the specific magnetization of the associated conditional Curie-Weiss model has a first order phase transition (discontinuous jump in specific magnetization) at $\\beta^*=\\left(1-s-r\\right)^{-1}$. It is also shown that not all the discontinuous jumps in magnetization will result in phase change. We point out how an extention of this model could...

  14. TOP500 Supercomputers for June 2005

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2005-06-22

    25th Edition of TOP500 List of World's Fastest Supercomputers Released: DOE/LLNL BlueGene/L and IBM gain Top Positions MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 25th edition of the TOP500 list of the world's fastest supercomputers was released today (June 22, 2005) at the 20th International Supercomputing Conference (ISC2005) in Heidelberg, Germany.

  15. TOP500 Supercomputers for November 2003

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2003-11-16

    22nd Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 22nd edition of the TOP500 list of the world's fastest supercomputers was released today (November 16, 2003). The Earth Simulator supercomputer retains the number one position with its Linpack benchmark performance of 35.86 Tflop/s ("teraflops", or trillions of calculations per second). It was built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan.

  16. £16 million investment for 'virtual supercomputer'

    CERN Multimedia

    Holland, C

    2003-01-01

    "The Particle Physics and Astronomy Research Council is to spend 16million [pounds] to create a massive computing Grid, equivalent to the world's second largest supercomputer after Japan's Earth Simulator computer" (1/2 page)

  17. Supercomputers open window of opportunity for nursing.

    Science.gov (United States)

    Meintz, S L

    1993-01-01

    A window of opportunity was opened for nurse researchers with the High Performance Computing and Communications (HPCC) initiative in President Bush's 1992 fiscal-year budget. Nursing research moved into the high-performance computing environment through the University of Nevada Las Vegas/Cray Project for Nursing and Health Data Research (PNHDR). Using the CRAY YMP 2/216 supercomputer, the PNHDR established the validity of a supercomputer platform for nursing research. In addition, the research has identified a paradigm shift in statistical analysis, delineated actual and potential barriers to nursing research in a supercomputing environment, conceptualized a new branch of nursing science called Nurmetrics, and discovered new avenues for nursing research utilizing supercomputing tools.

  18. TOP500 Supercomputers for November 2004

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2004-11-08

    24th Edition of TOP500 List of World's Fastest Supercomputers Released: DOE/IBM BlueGene/L and NASA/SGI's Columbia gain Top Positions MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 24th edition of the TOP500 list of the world's fastest supercomputers was released today (November 8, 2004) at the SC2004 Conference in Pittsburgh, Pa.

  19. Misleading Performance Reporting in the Supercomputing Field

    Directory of Open Access Journals (Sweden)

    David H. Bailey

    1992-01-01

    Full Text Available In a previous humorous note, I outlined 12 ways in which performance figures for scientific supercomputers can be distorted. In this paper, the problem of potentially misleading performance reporting is discussed in detail. Included are some examples that have appeared in recent published scientific papers. This paper also includes some proposed guidelines for reporting performance, the adoption of which would raise the level of professionalism and reduce the level of confusion in the field of supercomputing.

  20. Simulating Galactic Winds on Supercomputers

    Science.gov (United States)

    Schneider, Evan

    2017-01-01

    Galactic winds are a ubiquitous feature of rapidly star-forming galaxies. Observations of nearby galaxies have shown that winds are complex, multiphase phenomena, comprised of outflowing gas at a large range of densities, temperatures, and velocities. Describing how starburst-driven outflows originate, evolve, and affect the circumgalactic medium and gas supply of galaxies is an important challenge for theories of galaxy evolution. In this talk, I will discuss how we are using a new hydrodynamics code, Cholla, to improve our understanding of galactic winds. Cholla is a massively parallel, GPU-based code that takes advantage of specialized hardware on the newest generation of supercomputers. With Cholla, we can perform large, three-dimensional simulations of multiphase outflows, allowing us to track the coupling of mass and momentum between gas phases across hundreds of parsecs at sub-parsec resolution. The results of our recent simulations demonstrate that the evolution of cool gas in galactic winds is highly dependent on the initial structure of embedded clouds. In particular, we find that turbulent density structures lead to more efficient mass transfer from cool to hot phases of the wind. I will discuss the implications of our results both for the incorporation of winds into cosmological simulations, and for interpretations of observed multiphase winds and the circumgalactic medium of nearby galaxies.

  1. GREEN SUPERCOMPUTING IN A DESKTOP BOX

    Energy Technology Data Exchange (ETDEWEB)

    HSU, CHUNG-HSING [Los Alamos National Laboratory]; FENG, WU-CHUN [NON LANL]; CHING, AVERY [NON LANL]

    2007-01-17

    The computer workstation, introduced by Sun Microsystems in 1982, was the tool of choice for scientists and engineers as an interactive computing environment for the development of scientific codes. However, by the mid-1990s, the performance of workstations began to lag behind high-end commodity PCs. This, coupled with the disappearance of BSD-based operating systems in workstations and the emergence of Linux as an open-source operating system for PCs, arguably led to the demise of the workstation as we knew it. Around the same time, computational scientists started to leverage PCs running Linux to create a commodity-based (Beowulf) cluster that provided dedicated computer cycles, i.e., supercomputing for the rest of us, as a cost-effective alternative to large supercomputers, i.e., supercomputing for the few. However, as the cluster movement has matured, with respect to cluster hardware and open-source software, these clusters have become much more like their large-scale supercomputing brethren - a shared (and power-hungry) datacenter resource that must reside in a machine-cooled room in order to operate properly. Consequently, the above observations, when coupled with the ever-increasing performance gap between the PC and cluster supercomputer, provide the motivation for a 'green' desktop supercomputer - a turnkey solution that provides an interactive and parallel computing environment with the approximate form factor of a Sun SPARCstation 1 'pizza box' workstation. In this paper, they present the hardware and software architecture of such a solution as well as its prowess as a developmental platform for parallel codes. In short, imagine a 12-node personal desktop supercomputer that achieves 14 Gflops on Linpack but sips only 185 watts of power at load, resulting in a performance-power ratio that is over 300% better than their reference SMP platform.

  2. A training program for scientific supercomputing users

    Energy Technology Data Exchange (ETDEWEB)

    Hanson, F.; Moher, T.; Sabelli, N.; Solem, A.

    1988-01-01

    There is need for a mechanism to transfer supercomputing technology into the hands of scientists and engineers in such a way that they will acquire a foundation of knowledge that will permit integration of supercomputing as a tool in their research. Most computing center training emphasizes computer-specific information about how to use a particular computer system; most academic programs teach concepts to computer scientists. Only a few brief courses and new programs are designed for computational scientists. This paper describes an eleven-week training program aimed principally at graduate and postdoctoral students in computationally-intensive fields. The program is designed to balance the specificity of computing center courses, the abstractness of computer science courses, and the personal contact of traditional apprentice approaches. It is based on the experience of computer scientists and computational scientists, and consists of seminars and clinics given by many visiting and local faculty. It covers a variety of supercomputing concepts, issues, and practices related to architecture, operating systems, software design, numerical considerations, code optimization, graphics, communications, and networks. Its research component encourages understanding of scientific computing and supercomputer hardware issues. Flexibility in thinking about computing needs is emphasized by the use of several different supercomputer architectures, such as the Cray X/MP48 at the National Center for Supercomputing Applications at University of Illinois at Urbana-Champaign, IBM 3090 600E/VF at the Cornell National Supercomputer Facility, and Alliant FX/8 at the Advanced Computing Research Facility at Argonne National Laboratory. 11 refs., 6 tabs.

  3. Isolation of Cyclopropenylidene Lithium Adducts: The Weiss-Yoshida Reagent

    Science.gov (United States)

    Lavallo, Vincent; Ishida, Yutaka; Donnadieu, Bruno; Bertrand, Guy

    2008-01-01

    A lithium-halogen exchange reaction occurs when the chloro[bis(diisopropylamino)]cyclopropenium tetrafluoroborate salt 1 (X = BF4) is treated with n-butyllithium. The resulting cyclopropenylidene-lithium adduct 3 has been isolated in 45% yield. In the solid state, this compound exists as a polymeric chain with an overall stoichiometry of two LiBF4 per carbene ligand. Addition of 12-crown-4-ether does not liberate the carbene from the lithium cation, but affords a monomeric tertiary complex (60% yield) that includes the crown ether. Moreover, complex 3 can also be synthesized by deprotonation of the bis(diisopropylamino)cyclopropenium tetrafluoroborate salt 2 (X = BF4) with n-butyllithium, whereas using potassium bis(trimethylsilyl)amide the free cyclopropenylidene was isolated in 53% yield. These results as a whole seem to demonstrate that only certain counteranions allow for the isolation of cyclopropenylidene-lithium adducts, and only bases not containing lithium allow for the isolation of the free cyclopropenylidene. The former and the latter presumably prevented Weiss and Yoshida from isolating what would have been the first example of a stable carbene-lithium adduct and a free carbene, respectively. PMID:16986195

  4. [Psychoanalysis and fascism: two incompatible approaches. The difficult role of Edoardo Weiss].

    Science.gov (United States)

    Accerboni, A M

    1988-01-01

    Edoardo Weiss, the only direct disciple of Freud in Italy, returned to Trieste, his native town, in 1919 after a long period of psychoanalytical studies in Vienna. An enthusiastic acceptance of Freud's ideas in the cultural, mainly Jewish, circles in Trieste was parallel to a sort of distrust of the Fascist ideology. In 1930 Weiss decided to move to Rome, where he hoped to be able to found an Italian psychoanalytical movement. The Catholic Church, Fascist ideology, philosophical Idealism and scientific Positivism were all factors hampering the spread of psychoanalysis in Italy. In 1932 Weiss founded the Italian Psychoanalytical Society in Rome with a very small number of followers. The relations between Weiss' newborn Society and the dictatorship were going to be quite troublesome. Ernest Jones was sharply accused by Weiss of misrepresenting his entrée with Mussolini. Thanks to Weiss' efforts the Italian society was acknowledged by the I.P.A. Finally, mention is made of Weiss' forced move to America as a result of the racial laws, and of the consequences for the future of psychoanalysis in Italy.

  5. TOP500 Supercomputers for June 2003

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2003-06-23

    21st Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 21st edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2003). The Earth Simulator supercomputer built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan, with its Linpack benchmark performance of 35.86 Tflop/s (teraflops, or trillions of calculations per second), retains the number one position. The number 2 position is held by the re-measured ASCI Q system at Los Alamos National Laboratory. With 13.88 Tflop/s, it is the second system ever to exceed the 10 Tflop/s mark. ASCI Q was built by Hewlett-Packard and is based on the AlphaServer SC computer system.

  6. TOP500 Supercomputers for June 2002

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2002-06-20

    19th Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 19th edition of the TOP500 list of the world's fastest supercomputers was released today (June 20, 2002). The recently installed Earth Simulator supercomputer at the Earth Simulator Center in Yokohama, Japan, is, as expected, the clear new number 1. Its performance of 35.86 Tflop/s (trillions of calculations per second) running the Linpack benchmark is almost five times higher than the performance of the now No. 2 IBM ASCI White system at Lawrence Livermore National Laboratory (7.2 Tflop/s). This powerful leapfrogging to the top by a system so much faster than the previous top system is unparalleled in the history of the TOP500.

  7. TOP500 Supercomputers for November 2002

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2002-11-15

    20th Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 20th edition of the TOP500 list of the world's fastest supercomputers was released today (November 15, 2002). The Earth Simulator supercomputer installed earlier this year at the Earth Simulator Center in Yokohama, Japan, retains the number one position with its Linpack benchmark performance of 35.86 Tflop/s (trillions of calculations per second). The No. 2 and No. 3 positions are held by two new, identical ASCI Q systems at Los Alamos National Laboratory (7.73 Tflop/s each). These systems are built by Hewlett-Packard and based on the AlphaServer SC computer system.

  8. Input/output behavior of supercomputing applications

    Science.gov (United States)

    Miller, Ethan L.

    1991-01-01

    The collection and analysis of supercomputer I/O traces and their use in a collection of buffering and caching simulations are described. This serves two purposes. First, it gives a model of how individual applications running on supercomputers request file system I/O, allowing system designers to optimize I/O hardware and file system algorithms to that model. Second, the buffering simulations show what resources are needed to maximize the CPU utilization of a supercomputer given a very bursty I/O request rate. By using read-ahead and write-behind in a large solid-state disk, one or two applications were sufficient to fully utilize a Cray Y-MP CPU.
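
    As a toy illustration of the read-ahead idea described above (not the paper's trace-driven simulator; the block granularity, prefetch window and synthetic trace are all invented for the example), the sketch below prefetches a window of blocks on every miss and reports the hit rate obtained on a mostly sequential, occasionally seeking request stream.

    ```python
    import random

    def simulate_read_ahead(trace, window=8):
        """Toy read-ahead buffer: on a miss, fetch the requested block plus the
        next `window` blocks; return the fraction of requests served from the buffer."""
        buffered = set()
        hits = 0
        for block in trace:
            if block in buffered:
                hits += 1
            else:
                buffered.update(range(block, block + window + 1))
        return hits / len(trace)

    # Mostly sequential trace with occasional random seeks, standing in for bursty application I/O.
    trace, block = [], 0
    for _ in range(10_000):
        block = block + 1 if random.random() < 0.95 else random.randrange(1_000_000)
        trace.append(block)
    print(f"hit rate with read-ahead: {simulate_read_ahead(trace):.2%}")
    ```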

  9. GPUs: An Oasis in the Supercomputing Desert

    CERN Document Server

    Kamleh, Waseem

    2012-01-01

    A novel metric is introduced to compare the supercomputing resources available to academic researchers on a national basis. Data from the supercomputing Top 500 and the top 500 universities in the Academic Ranking of World Universities (ARWU) are combined to form the proposed "500/500" score for a given country. Australia scores poorly in the 500/500 metric when compared with other countries with a similar ARWU ranking, an indication that HPC-based researchers in Australia are at a relative disadvantage with respect to their overseas competitors. For HPC problems where single precision is sufficient, commodity GPUs provide a cost-effective means of quenching the computational thirst of otherwise parched Lattice practitioners traversing the Australian supercomputing desert. We explore some of the more difficult terrain in single precision territory, finding that BiCGStab is unreliable in single precision at large lattice sizes. We test the CGNE and CGNR forms of the conjugate gradient method on the normal equa...
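
    CGNR, mentioned above, is ordinary conjugate gradient applied to the normal equations A^H A x = A^H b. The sketch below is a dense NumPy illustration of that idea only; it is not the lattice QCD solver tested in the paper, and the random matrix in the usage lines is a stand-in for a Dirac-type operator.

    ```python
    import numpy as np

    def cgnr(A, b, tol=1e-8, max_iter=2000):
        """Conjugate gradient on the normal equations A^H A x = A^H b (CGNR)."""
        x = np.zeros(A.shape[1], dtype=A.dtype)
        r = b - A @ x                 # residual of the original system
        z = A.conj().T @ r            # residual of the normal equations
        p = z.copy()
        zz = np.vdot(z, z).real
        for _ in range(max_iter):
            Ap = A @ p
            alpha = zz / np.vdot(Ap, Ap).real
            x += alpha * p
            r -= alpha * Ap
            z = A.conj().T @ r
            zz_new = np.vdot(z, z).real
            if np.sqrt(zz_new) < tol:
                break
            p = z + (zz_new / zz) * p
            zz = zz_new
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 50)) + 1j * rng.standard_normal((50, 50))
    b = rng.standard_normal(50).astype(complex)
    x = cgnr(A, b)
    print(np.linalg.norm(A @ x - b))   # should be tiny once CGNR has converged
    ```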

  10. Floating point arithmetic in future supercomputers

    Science.gov (United States)

    Bailey, David H.; Barton, John T.; Simon, Horst D.; Fouts, Martin J.

    1989-01-01

    Considerations in the floating-point design of a supercomputer are discussed. Particular attention is given to word size, hardware support for extended precision, format, and accuracy characteristics. These issues are discussed from the perspective of the Numerical Aerodynamic Simulation Systems Division at NASA Ames. The features believed to be most important for a future supercomputer floating-point design include: (1) a 64-bit IEEE floating-point format with 11 exponent bits, 52 mantissa bits, and one sign bit and (2) hardware support for reasonably fast double-precision arithmetic.
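
    The 64-bit format recommended above (one sign bit, 11 exponent bits, 52 mantissa bits) is what became IEEE 754 double precision. The short sketch below unpacks a Python float into those three fields to make the layout concrete; the helper name decompose is ours, not the paper's.

    ```python
    import struct

    def decompose(x: float):
        """Split a 64-bit IEEE 754 double into its sign, exponent and mantissa fields."""
        bits = struct.unpack(">Q", struct.pack(">d", x))[0]
        sign = bits >> 63
        exponent = (bits >> 52) & 0x7FF      # 11 exponent bits, biased by 1023
        mantissa = bits & ((1 << 52) - 1)    # 52 fraction (mantissa) bits
        return sign, exponent, mantissa

    # -6.5 = -1.625 * 2**2  ->  sign 1, biased exponent 1023 + 2 = 1025, fraction 0.625
    print(decompose(-6.5))
    ```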

  11. Social and emotional loneliness: an examination of Weiss's typology of loneliness.

    Science.gov (United States)

    Russell, D; Cutrona, C E; Rose, J; Yurko, K

    1984-06-01

    This study examined Weiss' conceptualization of social and emotional loneliness. Using data from an extensive survey of undergraduate and graduate students at the University of Iowa, we measured social and emotional loneliness, students' affective and behavioral reactions to loneliness, students' social relationships, and their judgments of the degree to which their relationships supply the six social provisions described by Weiss. As expected, we found differences in the subjective experiences of social and emotional loneliness, although both forms of loneliness were also characterized by a common core of experiences. The results generally supported Weiss's ideas concerning the determinants of social and emotional loneliness. Predictions concerning the affective and behavioral consequences associated with each type of loneliness, however, were only partly supported, although the two forms of loneliness were associated with different affective reactions and coping behaviors. The implications of these findings for Weiss's typology of loneliness are discussed.

  12. Adventures in Supercomputing: An innovative program

    Energy Technology Data Exchange (ETDEWEB)

    Summers, B.G.; Hicks, H.R.; Oliver, C.E.

    1995-06-01

    Within the realm of education, seldom does an innovative program become available with the potential to change an educator's teaching methodology and serve as a spur to systemic reform. The Adventures in Supercomputing (AiS) program, sponsored by the Department of Energy, is such a program. Adventures in Supercomputing is a program for high school and middle school teachers. It has helped to change the teaching paradigm of many of the teachers involved in the program from a teacher-centered classroom to a student-centered classroom. "A student-centered classroom offers better opportunities for development of internal motivation, planning skills, goal setting and perseverance than does the traditional teacher-directed mode". Not only is the process of teaching changed, but evidences of systemic reform are beginning to surface. After describing the program, the authors discuss the teaching strategies being used and the evidences of systemic change in many of the AiS schools in Tennessee.

  13. Effect of shallow donors on Curie–Weiss temperature of Co-doped ZnO

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Shuxia, E-mail: gsx0391@sina.com [Department of Physics, Jiaozuo Teachers College, Jiaozuo 454001 (China); Key Laboratory for Special Functional Materials of Ministry of Education, Henan University, Kaifeng 475004 (China); Li, Jiwu [Department of Physics, Jiaozuo Teachers College, Jiaozuo 454001 (China); Du, Zuliang [Key Laboratory for Special Functional Materials of Ministry of Education, Henan University, Kaifeng 475004 (China)

    2014-12-15

    Co-doped ZnO and Al, Co co-doped ZnO polycrystalline powders were synthesized by a co-precipitation method. The magnetization curves measured at 2 K show neither hysteresis nor remanence for any of the samples. ZnO:Co grown at low temperature has a positive Curie-Weiss temperature Θ, and ZnO:Co grown at high temperature has a negative Θ. But Al-doped ZnO:Co grown at high temperature has a positive Θ. A positive Curie-Weiss temperature Θ is considered to be related to the presence of shallow donors in the samples. - Highlights: • Co-doped ZnO and Al, Co co-doped ZnO polycrystalline powders were synthesized. • No hysteresis is observed for any sample. • The Curie-Weiss temperature Θ changes its sign upon Al doping. • A positive Θ should be related to shallow donors.
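
    The sign of the Curie-Weiss temperature quoted above comes from fitting chi(T) = C/(T - Theta) to the high-temperature susceptibility. The sketch below shows the standard linear fit of 1/chi against T; the numbers are synthetic and purely illustrative, not the paper's measurements.

    ```python
    import numpy as np

    # Synthetic high-temperature susceptibility obeying chi = C / (T - Theta).
    T = np.linspace(150.0, 350.0, 50)      # temperature in K
    C_true, theta_true = 0.02, -40.0       # illustrative Curie constant and Theta (K)
    chi = C_true / (T - theta_true)

    # 1/chi = (T - Theta)/C is linear in T: slope = 1/C, intercept = -Theta/C.
    slope, intercept = np.polyfit(T, 1.0 / chi, 1)
    C_fit = 1.0 / slope
    theta_fit = -intercept * C_fit
    print(f"C = {C_fit:.3g}, Theta = {theta_fit:.1f} K")  # the sign of Theta is the quantity of interest
    ```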

  14. Acute Mallory-Weiss syndrome after cardiopulmonary resuscitation by health care providers in the emergency department

    Institute of Scientific and Technical Information of China (English)

    Dae Hee Kim; Dong Yoon Rhee; Seon Hee Woo; Woon Jeong Lee; Seung Hwan Seol; Won Jung Jeong

    2015-01-01

    A report of a 62-year-old female patient with severe Mallory-Weiss syndrome after successful cardiopulmonary resuscitation (CPR) by health care providers in the emergency department is presented. The bleeding continued for five days, and the patient's total blood loss was estimated to be approximately 3,000 mL. After 7 days, the patient died due to respiratory distress syndrome. Severe Mallory-Weiss syndrome after CPR may occur and should be considered a potentially serious complication of CPR.

  15. The elements of surrealism in Peter Weiss's play Marat/Sade

    Directory of Open Access Journals (Sweden)

    Eloá Heise

    1997-11-01

    Full Text Available The analysis of Peter Weiss's drama The Persecution and Assassination of Jean-Paul Marat as Performed by the Inmates of the Asylum of Charenton Under the Direction of the Marquis de Sade (1964) shows how Weiss made use of surrealist elements, for example the theme of madness and the alogical structure of the play within the play. With this drama the author achieved the greatest worldwide stage success of German theatre since Brecht.

  16. Data-intensive computing on numerically-insensitive supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Ahrens, James P [Los Alamos National Laboratory]; Fasel, Patricia K [Los Alamos National Laboratory]; Habib, Salman [Los Alamos National Laboratory]; Heitmann, Katrin [Los Alamos National Laboratory]; Lo, Li-Ta [Los Alamos National Laboratory]; Patchett, John M [Los Alamos National Laboratory]; Williams, Sean J [Los Alamos National Laboratory]; Woodring, Jonathan L [Los Alamos National Laboratory]; Wu, Joshua [Los Alamos National Laboratory]; Hsu, Chung-Hsing [ONL]

    2010-12-03

    With the advent of the era of petascale supercomputing, via the delivery of the Roadrunner supercomputing platform at Los Alamos National Laboratory, there is a pressing need to address the problem of visualizing massive petascale-sized results. In this presentation, I discuss progress on a number of approaches including in-situ analysis, multi-resolution out-of-core streaming and interactive rendering on the supercomputing platform. These approaches are placed in context by the emerging area of data-intensive supercomputing.

  17. Parallel supercomputers for lattice gauge theory.

    Science.gov (United States)

    Brown, F R; Christ, N H

    1988-03-18

    During the past 10 years, particle physicists have increasingly employed numerical simulation to answer fundamental theoretical questions about the properties of quarks and gluons. The enormous computer resources required by quantum chromodynamic calculations have inspired the design and construction of very powerful, highly parallel, dedicated computers optimized for this work. This article gives a brief description of the numerical structure and current status of these large-scale lattice gauge theory calculations, with emphasis on the computational demands they make. The architecture, present state, and potential of these special-purpose supercomputers is described. It is argued that a numerical solution of low energy quantum chromodynamics may well be achieved by these machines.

  18. Mallory-Weiss tear following cardiac surgery : transoesophageal echoprobe or nasogastric tube?

    NARCIS (Netherlands)

    De Vries, AJ; van der Maaten, JMAA; Laurens, RRP

    2000-01-01

    A case of fatal upper gastrointestinal bleeding from a Mallory-Weiss tear after transoesophageal echocardiography during cardiac surgery is reported. After the echocardiographic examination, which is considered a safe procedure, a nasogastric tube was inserted which immediately revealed bright red b

  19. Ferroelectric Transition and Curie-Weiss Behavior in Some Filled Tungsten Bronze Ceramics

    Science.gov (United States)

    Zhu, Xiao-Li; Chen, Xiang-Ming

    2014-01-01

    Ferroelectric transitions in the filled tungsten bronze ceramics Sr4R2Ti4Nb6O30 and Sr5RTi3Nb7O30 (R = La, Nd, Sm and Eu) and in Ba4Nd2Ti4Nb6O30 are investigated with differential scanning calorimetry (DSC) and by Curie-Weiss law fitting to the dielectric constant. The magnitude of the Curie-Weiss constant, C ~ 10^5, suggests a displacement-type ferroelectric transition in the present compounds. The large values of ΔT, the difference between the dielectric maximum temperature Tm and the Curie-Weiss temperature T0, indicate the difficult formation of ferroelectric domains or polar nanoregions in the present compounds and also the characteristics of a first-order ferroelectric transition. Three categories are suggested for the ferroelectric transition in the above tungsten bronzes. The ferroelectric transition exhibits large thermal hysteresis. According to the DSC results, gradual recovery of the endothermic peak occurs after aging at temperatures below the Curie point, indicating the gradual stabilization of the ferroelectric phase after cooling from the high-temperature paraelectric phase. The relationship between the Curie-Weiss law fitting parameters and the nature of the ferroelectric transition is modified for the filled tungsten bronzes.

  20. On the dielectric Curie-Weiss law and diffuse phase transition in ferroelectrics

    NARCIS (Netherlands)

    Jonker, G.H.

    1983-01-01

    A simple derivation of parabolic 1/εr-T curves is obtained by reconsidering the origin of the dielectric Curie-Weiss law. The only assumption needed is the introduction of a non-linear temperature dependence of the macroscopic dielectric polarization in the macroscopic Clausius-Mossotti equation.
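
    For reference, the macroscopic Clausius-Mossotti relation invoked above has the standard textbook form below (not reproduced from the paper); a Curie-Weiss-type divergence of the relative permittivity follows whenever the right-hand side approaches unity roughly linearly in temperature:

    ```latex
    \frac{\varepsilon_{r}-1}{\varepsilon_{r}+2}=\frac{N\alpha}{3\varepsilon_{0}},
    \qquad
    \varepsilon_{r}\approx\frac{3C'}{T-T_{0}}
    \quad\text{if}\quad
    \frac{N\alpha(T)}{3\varepsilon_{0}}\approx 1-\frac{T-T_{0}}{C'} .
    ```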

  1. Mallory-Weiss tear following cardiac surgery : transoesophageal echoprobe or nasogastric tube?

    NARCIS (Netherlands)

    De Vries, AJ; van der Maaten, JMAA; Laurens, RRP

    A case of fatal upper gastrointestinal bleeding from a Mallory-Weiss tear after transoesophageal echocardiography during cardiac surgery is reported. After the echocardiographic examination, which is considered a safe procedure, a nasogastric tube was inserted which immediately revealed bright red

  2. Assessing Diagnostic Expertise of Counselors Using the Cochran-Weiss-Shanteau (CWS) Index

    NARCIS (Netherlands)

    Witteman, C.L.M.; Weiss, D.J.; Metzmacher, M.

    2012-01-01

    Counseling studies have shown that increasing experience is not always associated with better judgments. However, in such studies performance is assessed against external criteria, which may lack validity. The authors applied the Cochran-Weiss-Shanteau (CWS) index, which assesses the ability to consistently discriminate.

  3. Assessing Diagnostic Expertise of Counselors Using the Cochran-Weiss-Shanteau (CWS) Index

    Science.gov (United States)

    Witteman, Cilia L. M.; Weiss, David J.; Metzmacher, Martin

    2012-01-01

    Counseling studies have shown that increasing experience is not always associated with better judgments. However, in such studies performance is assessed against external criteria, which may lack validity. The authors applied the Cochran-Weiss-Shanteau (CWS) index, which assesses the ability to consistently discriminate. Results showed that novice…

  4. Variational description of Gibbs-non-Gibbs dynamical transitions for the Curie-Weiss model

    NARCIS (Netherlands)

    Fernandez, R.; den Hollander, F.; Martinez, J.

    2013-01-01

    We perform a detailed study of Gibbs-non-Gibbs transitions for the Curie-Weiss model subject to independent spin-flip dynamics (“infinite-temperature” dynamics). We show that, in this setup, the program outlined in van Enter et al. (Moscow Math J 10:687–711, 2010) can be fully completed, namely, Gi

  5. Supercomputing Centers and Electricity Service Providers

    DEFF Research Database (Denmark)

    Patki, Tapasya; Bates, Natalie; Ghatikar, Girish

    2016-01-01

    Supercomputing Centers (SCs) have high and variable power demands, which increase the challenges of the Electricity Service Providers (ESPs) with regard to efficient electricity distribution and reliable grid operation. High penetration of renewable energy generation further exacerbates this problem. ... from a detailed, quantitative survey-based analysis and compare the perspectives of the European grid and SCs to those of the United States (US). We then show that, contrary to the expectation, SCs in the US are more open toward cooperating and developing demand-management strategies with their ESPs ... (LRZ). We conclude that perspectives on demand management are dependent on the electricity market and pricing in the geographical region and on the degree of control that a particular SC has in terms of power-purchase negotiation.

  6. Multi-petascale highly efficient parallel supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen -Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O' Brien, John K.; O' Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Smith, Brian; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2015-07-14

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaOPS-scale computing, at decreased cost, power and footprint, that allows for a maximum packaging density of processing nodes from an interconnect point of view. The supercomputer exploits technological advances in VLSI that enable a computing model where many processors can be integrated into a single Application Specific Integrated Circuit (ASIC). Each ASIC computing node comprises a system-on-chip ASIC utilizing four or more processors integrated into one die, with each having full access to all system resources, enabling adaptive partitioning of the processors to functions such as compute or messaging I/O on an application-by-application basis and, preferably, adaptive partitioning of functions in accordance with various algorithmic phases within an application; if I/O or other processors are underutilized, they can participate in computation or communication. Nodes are interconnected by a five-dimensional torus network with DMA that maximizes the throughput of packet communications between nodes and minimizes latency.

  7. A workbench for tera-flop supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Resch, M.M.; Kuester, U.; Mueller, M.S.; Lang, U. [High Performance Computing Center Stuttgart (HLRS), Stuttgart (Germany)

    2003-07-01

    Supercomputers currently reach a peak performance in the range of TFlop/s. With but one exception - the Japanese Earth Simulator - none of these systems has so far been able to also show a level of sustained performance for a variety of applications that comes close to the peak performance. Sustained TFlop/s are therefore rarely seen. The reasons are manifold and are well known: Bandwidth and latency both for main memory and for the internal network are the key internal technical problems. Cache hierarchies with large caches can bring relief but are no remedy to the problem. However, there are not only technical problems that inhibit the full exploitation by scientists of the potential of modern supercomputers. More and more organizational issues come to the forefront. This paper shows the approach of the High Performance Computing Center Stuttgart (HLRS) to deliver a sustained performance of TFlop/s for a wide range of applications from a large group of users spread over Germany. The core of the concept is the role of the data. Around this we design a simulation workbench that hides the complexity of interacting computers, networks and file systems from the user. (authors)

  8. Seismic signal processing on heterogeneous supercomputers

    Science.gov (United States)

    Gokhberg, Alexey; Ermert, Laura; Fichtner, Andreas

    2015-04-01

    The processing of seismic signals - including the correlation of massive ambient noise data sets - represents an important part of a wide range of seismological applications. It is characterized by large data volumes as well as high computational input/output intensity. Development of efficient approaches towards seismic signal processing on emerging high performance computing systems is therefore essential. Heterogeneous supercomputing systems introduced in recent years provide numerous computing nodes interconnected via high throughput networks, each node containing a mix of processing elements of different architectures, like several sequential processor cores and one or a few graphical processing units (GPU) serving as accelerators. A typical representative of such computing systems is "Piz Daint", a supercomputer of the Cray XC 30 family operated by the Swiss National Supercomputing Center (CSCS), which we used in this research. Heterogeneous supercomputers provide an opportunity for manifold application performance increases and are more energy-efficient; however, they have much higher hardware complexity and are therefore much more difficult to program. The programming effort may be substantially reduced by the introduction of modular libraries of software components that can be reused for a wide class of seismology applications. The ultimate goal of this research is the design of a prototype of such a library, suitable for implementing various seismic signal processing applications on heterogeneous systems. As a representative use case we have chosen an ambient noise correlation application. Ambient noise interferometry has developed into one of the most powerful tools to image and monitor the Earth's interior. Future applications will require the extraction of increasingly small details from noise recordings. To meet this demand, more advanced correlation techniques combined with very large data volumes are needed. This poses new computational problems that…

  9. Most Social Scientists Shun Free Use of Supercomputers.

    Science.gov (United States)

    Kiernan, Vincent

    1998-01-01

    Social scientists, who frequently complain that the federal government spends too little on them, are passing up what scholars in the physical and natural sciences see as the government's best give-aways: free access to supercomputers. Some social scientists say the supercomputers are difficult to use; others find desktop computers provide…

  10. Supercomputing - Use Cases, Advances, The Future (2/2)

    CERN Document Server

    CERN. Geneva

    2017-01-01

    Supercomputing has become a staple of science and the poster child for aggressive developments in silicon technology, energy efficiency and programming. In this series we examine the key components of supercomputing setups and the various advances – recent and past – that made headlines and delivered bigger and bigger machines. We also take a closer look at the future prospects of supercomputing, and the extent of its overlap with high throughput computing, in the context of main use cases ranging from oil exploration to market simulation. On the second day, we will focus on software and software paradigms driving supercomputers, workloads that need supercomputing treatment, advances in technology and possible future developments. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and i...

  11. Will Your Next Supercomputer Come from Costco?

    Energy Technology Data Exchange (ETDEWEB)

    Farber, Rob

    2007-04-15

    A fun topic for April, one that is not an April fool’s joke, is that you can purchase a commodity 200+ Gflop (single-precision) Linux supercomputer for around $600 from your favorite electronic vendor. Yes, it’s true. Just walk in and ask for a Sony Playstation 3 (PS3), take it home and install Linux on it. IBM has provided an excellent tutorial for installing Linux and building applications at http://www-128.ibm.com/developerworks/power/library/pa-linuxps3-1. If you want to raise some eyebrows at work, then submit a purchase request for a Sony PS3 game console and watch the reactions as your paperwork wends its way through the procurement process.

  12. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-03-13

    SANAM supercomputer was jointly built by KACST and FIAS in 2012, ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of HPL and STREAM benchmarks.
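
    For readers unfamiliar with the benchmarks named above: HPL solves a dense linear system to measure floating-point throughput, while STREAM times simple vector kernels to estimate sustainable memory bandwidth. Below is a minimal, illustrative Python/NumPy sketch of the STREAM "triad" kernel, not the official benchmark code; the array size and repetition count are arbitrary choices.

      import time
      import numpy as np

      N = 20_000_000                       # elements per array (~160 MB per float64 array)
      a = np.zeros(N)
      b = np.random.rand(N)
      c = np.random.rand(N)
      scalar = 3.0

      best = float("inf")
      for _ in range(5):                   # report the best of several trials, as STREAM does
          t0 = time.perf_counter()
          a[:] = b + scalar * c            # triad kernel: a = b + s*c
          best = min(best, time.perf_counter() - t0)

      bytes_moved = 3 * N * 8              # read b, read c, write a (float64)
      print(f"Triad bandwidth ~ {bytes_moved / best / 1e9:.1f} GB/s")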

  13. Multiprocessing on supercomputers for computational aerodynamics

    Science.gov (United States)

    Yarrow, Maurice; Mehta, Unmeel B.

    1991-01-01

    Little use is made of multiple processors available on current supercomputers (computers with a theoretical peak performance capability equal to 100 MFLOPS or more) to improve turnaround time in computational aerodynamics. The productivity of a computer user is directly related to this turnaround time. In a time-sharing environment, such an improvement in turnaround time is achieved when multiple processors are used efficiently to execute an algorithm. The concept of multiple instruction, multiple data (MIMD) is applied through multitasking via a strategy that requires relatively minor modifications to an existing code for a single processor. This approach maps the available memory to multiple processors, exploiting the C-Fortran-Unix interface. The existing code is mapped without the need for developing a new algorithm. The procedure for building a code utilizing this approach is automated with the Unix stream editor.
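
    As a loose illustration of the multitasking idea described above (not the authors' C-Fortran-Unix implementation, which maps memory across a Cray's processors), the sketch below splits a grid relaxation sweep across worker processes that all run the same code on different data; the kernel and slab decomposition are hypothetical, and boundary (halo) exchange between slabs is omitted.

      from multiprocessing import Pool
      import numpy as np

      def sweep_slab(slab):
          # Hypothetical per-processor kernel: one Jacobi-style relaxation of a slab.
          return 0.25 * (np.roll(slab, 1, 0) + np.roll(slab, -1, 0)
                         + np.roll(slab, 1, 1) + np.roll(slab, -1, 1))

      if __name__ == "__main__":
          grid = np.random.rand(1024, 1024)
          slabs = np.array_split(grid, 8, axis=0)    # one slab per worker process
          with Pool(processes=8) as pool:
              updated = pool.map(sweep_slab, slabs)  # same instructions, different data (MIMD-style)
          grid = np.vstack(updated)                  # halo exchange omitted for brevity
          print(grid.shape)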

  14. The PMS project Poor Man's Supercomputer

    CERN Document Server

    Csikor, Ferenc; Hegedüs, P; Horváth, V K; Katz, S D; Piróth, A

    2001-01-01

    We briefly describe the Poor Man's Supercomputer (PMS) project that is carried out at Eötvös University, Budapest. The goal is to develop a cost-effective, scalable, fast parallel computer to perform numerical calculations of physical problems that can be implemented on a lattice with nearest neighbour interactions. To reach this goal we developed the PMS architecture using PC components and designed a special, low cost communication hardware and the driver software for Linux OS. Our first implementation of the PMS includes 32 nodes (PMS1). The performance of the PMS1 was tested by Lattice Gauge Theory simulations. Using SU(3) pure gauge theory or bosonic MSSM on the PMS1 computer we obtained a price-per-sustained-performance ratio of $3/Mflops. The design of the special hardware and the communication driver are freely available upon request for non-profit organizations.

  15. The BlueGene/L Supercomputer

    CERN Document Server

    Bhanot, G V; Gara, A; Vranas, P M; Bhanot, Gyan; Chen, Dong; Gara, Alan; Vranas, Pavlos

    2002-01-01

    The architecture of the BlueGene/L massively parallel supercomputer is described. Each computing node consists of a single compute ASIC plus 256 MB of external memory. The compute ASIC integrates two 700 MHz PowerPC 440 integer CPU cores, two 2.8 Gflops floating point units, 4 MB of embedded DRAM as cache, a memory controller for external memory, six 1.4 Gbit/s bi-directional ports for a 3-dimensional torus network connection, three 2.8 Gbit/s bi-directional ports for connecting to a global tree network and a Gigabit Ethernet for I/O. 65,536 of such nodes are connected into a 3-d torus with a geometry of 32x32x64. The total peak performance of the system is 360 Teraflops and the total amount of memory is 16 TeraBytes.

  16. Weiss-Weinstein Family of Error Bounds for Quantum Parameter Estimation

    CERN Document Server

    Lu, Xiao-Ming

    2015-01-01

    To approach the fundamental limits on the estimation precision for random parameters in quantum systems, we propose a quantum version of the Weiss-Weinstein family of lower bounds on estimation errors. The quantum Weiss-Weinstein bounds (QWWB) include the popular quantum Cramér-Rao bound (QCRB) as a special case, and do not require the differentiability of prior distributions and conditional quantum states as the QCRB does; thus, the QWWB is a superior alternative to the QCRB. We show that the QWWB well captures the insurmountable error caused by the ambiguity of the phase in quantum states, which cannot be revealed by the QCRB. Furthermore, we use the QWWB to expose the possible shortcomings of the QCRB when the number of independent and identically distributed systems is not sufficiently large.

  17. DOE Zero Energy Ready Home Case Study: Weiss Building & Development, Downers Grove, Illinois

    Energy Technology Data Exchange (ETDEWEB)

    none,

    2013-09-01

    This single-family home built in a peat bog has underground storage tanks and drainage tanks, blown fiberglass insulation, coated rigid polyisocyanurate, and flashing. The 3,600-square-foot custom home built by Weiss Building & Development LLC is the first home in Illinois certified to the DOE Challenge Home criteria, which require that homes meet the EPA Indoor airPlus guidelines. The builder won a 2013 Housing Innovation Award in the custom builder category.

  18. Ising Critical Behavior of Inhomogeneous Curie-Weiss Models and Annealed Random Graphs

    Science.gov (United States)

    Dommers, Sander; Giardinà, Cristian; Giberti, Claudio; van der Hofstad, Remco; Prioriello, Maria Luisa

    2016-11-01

    We study the critical behavior for inhomogeneous versions of the Curie-Weiss model, where the coupling constant J_ij(β) for the edge ij on the complete graph is given by J_ij(β) = β w_i w_j / (Σ_{k∈[N]} w_k). We call the product form of these couplings the rank-1 inhomogeneous Curie-Weiss model. This model also arises [with inverse temperature β replaced by sinh(β)] from the annealed Ising model on the generalized random graph. We assume that the vertex weights (w_i)_{i∈[N]} are regular, in the sense that their empirical distribution converges and the second moment converges as well. We identify the critical temperatures and exponents for these models, as well as a non-classical limit theorem for the total spin at the critical point. These depend sensitively on the number of finite moments of the weight distribution. When the fourth moment of the weight distribution converges, then the critical behavior is the same as on the (homogeneous) Curie-Weiss model, so that the inhomogeneity is weak. When the fourth moment of the weights converges to infinity, and the weights satisfy an asymptotic power law with exponent τ with τ ∈ (3,5), then the critical exponents depend sensitively on τ. In addition, at criticality, the total spin S_N satisfies that S_N/N^((τ-2)/(τ-1)) converges in law to some limiting random variable whose distribution we explicitly characterize.

  19. Upper Gastrointestinal System Bleeding Associated with Mallory-Weiss Syndrome in a Patient with Prosthetic Mitral Valve Using Warfarin Sodium

    Directory of Open Access Journals (Sweden)

    Banu Şahin Yıldız

    2013-08-01

    Mallory-Weiss syndrome refers to bleeding from tears in the mucosa at the junction of the stomach and esophagus. Bleeding has been recognised as the major treatment-limiting complication in patients with a prosthetic mitral valve on anticoagulant treatment. We report upper gastrointestinal system bleeding associated with Mallory-Weiss syndrome in a patient with a prosthetic mitral valve using warfarin sodium.

  20. World's biggest 'virtual supercomputer' given the go-ahead

    CERN Multimedia

    2003-01-01

    "The Particle Physics and Astronomy Research Council has today announced GBP 16 million to create a massive computing Grid, equivalent to the world's second largest supercomputer after Japan's Earth Simulator computer" (1 page).

  1. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared performance of software-only to GPU-accelerated. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows

  2. Taking ASCI supercomputing to the end game.

    Energy Technology Data Exchange (ETDEWEB)

    DeBenedictis, Erik P.

    2004-03-01

    The ASCI supercomputing program is broadly defined as running physics simulations on progressively more powerful digital computers. What happens if we extrapolate the computer technology to its end? We have developed a model for key ASCI computations running on a hypothetical computer whose technology is parameterized in ways that account for advancing technology. This model includes technology information such as Moore's Law for transistor scaling and developments in cooling technology. The model also includes limits imposed by laws of physics, such as thermodynamic limits on power dissipation, limits on cooling, and the limitation of signal propagation velocity to the speed of light. We apply this model and show that ASCI computations will advance smoothly for another 10-20 years to an 'end game' defined by thermodynamic limits and the speed of light. Performance levels at the end game will vary greatly by specific problem, but will be in the Exaflops to Zetaflops range for currently anticipated problems. We have also found an architecture that would be within a constant factor of giving optimal performance at the end game. This architecture is an evolutionary derivative of the mesh-connected microprocessor (such as ASCI Red Storm or IBM Blue Gene/L). We provide designs for the necessary enhancement to microprocessor functionality and the power-efficiency of both the processor and memory system. The technology we develop in the foregoing provides a 'perfect' computer model with which we can rate the quality of realizable computer designs, both in this writing and as a way of designing future computers. This report focuses on classical computers based on irreversible digital logic, and more specifically on algorithms that simulate space computing, irreversible logic, analog computers, and other ways to address stockpile stewardship that are outside the scope of this report.

  3. Simulating functional magnetic materials on supercomputers.

    Science.gov (United States)

    Gruner, Markus Ernst; Entel, Peter

    2009-07-22

    The recent passing of the petaflop per second landmark by the Roadrunner project at the Los Alamos National Laboratory marks a preliminary peak of an impressive world-wide development in the high-performance scientific computing sector. Also, purely academic state-of-the-art supercomputers such as the IBM Blue Gene/P at Forschungszentrum Jülich allow us nowadays to investigate large systems of the order of 10³ spin polarized transition metal atoms by means of density functional theory. Three applications will be presented where large-scale ab initio calculations contribute to the understanding of key properties emerging from a close interrelation between structure and magnetism. The first two examples discuss the size dependent evolution of equilibrium structural motifs in elementary iron and binary Fe-Pt and Co-Pt transition metal nanoparticles, which are currently discussed as promising candidates for ultra-high-density magnetic data storage media. However, the preference for multiply twinned morphologies at smaller cluster sizes counteracts the formation of a single-crystalline L1₀ phase, which alone provides the required hard magnetic properties. The third application is concerned with the magnetic shape memory effect in the Ni-Mn-Ga Heusler alloy, which is a technologically relevant candidate for magnetomechanical actuators and sensors. In this material strains of up to 10% can be induced by external magnetic fields due to the field induced shifting of martensitic twin boundaries, requiring an extremely high mobility of the martensitic twin boundaries, but also the selection of the appropriate martensitic structure from the rich phase diagram.

  4. Supercomputing - Use Cases, Advances, The Future (1/2)

    CERN Document Server

    CERN. Geneva

    2017-01-01

    Supercomputing has become a staple of science and the poster child for aggressive developments in silicon technology, energy efficiency and programming. In this series we examine the key components of supercomputing setups and the various advances – recent and past – that made headlines and delivered bigger and bigger machines. We also take a closer look at the future prospects of supercomputing, and the extent of its overlap with high throughput computing, in the context of main use cases ranging from oil exploration to market simulation. On the first day, we will focus on the history and theory of supercomputing, the top500 list and the hardware that makes supercomputers tick. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and initiated projects with the private sector (e.g. HP an...

  5. Palestina kak neudavshejesja gossudarstvo / Shlomo Avineri

    Index Scriptorium Estoniae

    Avineri, Shlomo

    2007-01-01

    On the Palestinian conflict. The author takes the view that, although it is easy to blame Palestinian political leaders, the Israeli occupation or US policy for the causes of the crisis, the real causes lie in the Palestinians' inability to overcome historical divisions and to form a unified government that acts in a coordinated way.

  7. An integrated distributed processing interface for supercomputers and workstations

    Energy Technology Data Exchange (ETDEWEB)

    Campbell, J.; McGavran, L.

    1989-01-01

    Access to documentation, communication between multiple processes running on heterogeneous computers, and animation of simulations of engineering problems are typically weak in most supercomputer environments. This presentation will describe how we are improving this situation in the Computer Research and Applications group at Los Alamos National Laboratory. We have developed a tool using UNIX filters and a SunView interface that allows users simple access to documentation via mouse-driven menus. We have also developed a distributed application that integrates a two-point boundary value problem on one of our Cray Supercomputers. It is controlled and displayed graphically by a window interface running on a workstation screen. Our motivation for this research has been to improve the usual typewriter/static interface using language-independent controls to show capabilities of the workstation/supercomputer combination. 8 refs.

  8. System approaches of Weiss and Bertalanffy and their relevance for systems biology today.

    Science.gov (United States)

    Drack, Manfred; Wolkenhauer, Olaf

    2011-06-01

    System approaches in biology have a long history. We focus here on the thinking of Paul A. Weiss and Ludwig von Bertalanffy, who contributed a great deal towards making the system concept operable in biology in the early 20th century. To them, considering whole living systems, which includes their organisation or order, is equally important as the dynamics within systems and the interplay between different levels from molecules over cells to organisms. They also called for taking the intrinsic activity of living systems and the conservation of system states into account. We compare these notions with today's systems biology, which is often a bottom-up approach from molecular dynamics to cellular behaviour. We conclude that bringing together the early heuristics with recent formalisms and novel experimental set-ups can lead to fruitful results and understanding. Copyright © 2011 Elsevier Ltd. All rights reserved.

  9. Recent results from the Swinburne supercomputer software correlator

    Science.gov (United States)

    Tingay, Steven; et al.

    I will describe the development of software correlators on the Swinburne Beowulf supercomputer and recent work using the Cray XD-1 machine. I will also describe recent Australian and global VLBI experiments that have been processed on the Swinburne software correlator, along with imaging results from these data. The role of the software correlator in Australia's eVLBI project will be discussed.

  10. Flux-Level Transit Injection Experiments with NASA Pleiades Supercomputer

    Science.gov (United States)

    Li, Jie; Burke, Christopher J.; Catanzarite, Joseph; Seader, Shawn; Haas, Michael R.; Batalha, Natalie; Henze, Christopher; Christiansen, Jessie; Kepler Project, NASA Advanced Supercomputing Division

    2016-06-01

    Flux-Level Transit Injection (FLTI) experiments are executed with NASA's Pleiades supercomputer for the Kepler Mission. The latest release (9.3, January 2016) of the Kepler Science Operations Center Pipeline is used in the FLTI experiments. Their purpose is to validate the Analytic Completeness Model (ACM), which can be computed for all Kepler target stars, thereby enabling exoplanet occurrence rate studies. Pleiades, a facility of NASA's Advanced Supercomputing Division, is one of the world's most powerful supercomputers and represents NASA's state-of-the-art technology. We discuss the details of implementing the FLTI experiments on the Pleiades supercomputer. For example, taking into account that ~16 injections are generated by one core of the Pleiades processors in an hour, the “shallow” FLTI experiment, in which ~2000 injections are required per target star, can be done for 16% of all Kepler target stars in about 200 hours. Stripping down the transit search to bare bones, i.e. only searching adjacent high/low periods at high/low pulse durations, makes the computationally intensive FLTI experiments affordable. The design of the FLTI experiments and the analysis of the resulting data are presented in “Validating an Analytic Completeness Model for Kepler Target Stars Based on Flux-level Transit Injection Experiments” by Catanzarite et al. (#2494058). Kepler was selected as the 10th mission of the Discovery Program. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.
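
    A back-of-the-envelope check of the throughput figures quoted in the abstract; the total number of Kepler target stars used here is an assumed round figure for illustration.

      injections_per_core_hour = 16
      injections_per_star = 2_000
      stars_total = 200_000               # assumed order of magnitude of Kepler target stars
      fraction_covered = 0.16
      wall_clock_hours = 200

      core_hours_per_star = injections_per_star / injections_per_core_hour     # 125 core-hours
      stars_covered = fraction_covered * stars_total                           # 32,000 stars
      cores_needed = stars_covered * core_hours_per_star / wall_clock_hours    # ~20,000 cores
      print(core_hours_per_star, stars_covered, cores_needed)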

  11. Access to Supercomputers. Higher Education Panel Report 69.

    Science.gov (United States)

    Holmstrom, Engin Inel

    This survey was conducted to provide the National Science Foundation with baseline information on current computer use in the nation's major research universities, including the actual and potential use of supercomputers. Questionnaires were sent to 207 doctorate-granting institutions; after follow-ups, 167 institutions (91% of the institutions…

  12. The Sky's the Limit When Super Students Meet Supercomputers.

    Science.gov (United States)

    Trotter, Andrew

    1991-01-01

    In a few select high schools in the U.S., supercomputers are allowing talented students to attempt sophisticated research projects using simultaneous simulations of nature, culture, and technology not achievable by ordinary microcomputers. Schools can get their students online by entering contests and seeking grants and partnerships with…

  13. INTEGRATION OF PANDA WORKLOAD MANAGEMENT SYSTEM WITH SUPERCOMPUTERS

    Energy Technology Data Exchange (ETDEWEB)

    De, K [University of Texas at Arlington; Jha, S [Rutgers University; Maeno, T [Brookhaven National Laboratory (BNL); Mashinistov, R. [Russian Research Center, Kurchatov Institute, Moscow, Russia; Nilsson, P [Brookhaven National Laboratory (BNL); Novikov, A. [Russian Research Center, Kurchatov Institute, Moscow, Russia; Oleynik, D [University of Texas at Arlington; Panitkin, S [Brookhaven National Laboratory (BNL); Poyda, A. [Russian Research Center, Kurchatov Institute, Moscow, Russia; Ryabinkin, E. [Russian Research Center, Kurchatov Institute, Moscow, Russia; Teslyuk, A. [Russian Research Center, Kurchatov Institute, Moscow, Russia; Tsulaia, V. [Lawrence Berkeley National Laboratory (LBNL); Velikhov, V. [Russian Research Center, Kurchatov Institute, Moscow, Russia; Wen, G. [University of Wisconsin, Madison; Wells, Jack C [ORNL; Wenaus, T [Brookhaven National Laboratory (BNL)

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250000 cores with a peak performance of 0.3+ petaFLOPS, the next LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads.

  14. Integration of Panda Workload Management System with supercomputers

    Science.gov (United States)

    De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Novikov, A.; Oleynik, D.; Panitkin, S.; Poyda, A.; Read, K. F.; Ryabinkin, E.; Teslyuk, A.; Velikhov, V.; Wells, J. C.; Wenaus, T.

    2016-09-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250000 cores with a peak performance of 0.3+ petaFLOPS, the next LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads.
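
    A minimal sketch of the light-weight MPI wrapper idea mentioned above: each MPI rank launches one copy of an otherwise single-threaded payload with rank-specific arguments, so many independent jobs fill a single batch allocation. The payload command and argument names are hypothetical; this is not the actual PanDA pilot code.

      # Launch with, e.g.: mpirun -np 16 python mpi_wrapper.py
      import subprocess
      import sys
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()

      # Hypothetical single-threaded payload; each rank processes its own input chunk.
      cmd = ["./run_payload", "--seed", str(rank), "--input", f"chunk_{rank:04d}.dat"]
      ret = subprocess.call(cmd)

      # Collect exit codes on rank 0 so the wrapper can report overall success or failure.
      codes = comm.gather(ret, root=0)
      if rank == 0:
          failed = [r for r, code in enumerate(codes) if code != 0]
          print("failed ranks:", failed)
          sys.exit(1 if failed else 0)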

  15. Spin-flip dynamics of the Curie-Weiss model : Loss of Gibbsianness with possibly broken symmetry

    NARCIS (Netherlands)

    Kulske, Christof; Le Ny, Arnaud

    2007-01-01

    We study the conditional probabilities of the Curie-Weiss Ising model in vanishing external field under a symmetric independent stochastic spin-flip dynamics and discuss their set of points of discontinuity (bad points). We exhibit a complete analysis of the transition between Gibbsian and non-Gibbs

  16. Applications of parallel supercomputers: Scientific results and computer science lessons

    Energy Technology Data Exchange (ETDEWEB)

    Fox, G.C.

    1989-07-12

    Parallel Computing has come of age with several commercial and in-house systems that deliver supercomputer performance. We illustrate this with several major computations completed or underway at Caltech on hypercubes, transputer arrays and the SIMD Connection Machine CM-2 and AMT DAP. Applications covered are lattice gauge theory, computational fluid dynamics, subatomic string dynamics, statistical and condensed matter physics, theoretical and experimental astronomy, quantum chemistry, plasma physics, grain dynamics, computer chess, graphics ray tracing, and Kalman filters. We use these applications to compare the performance of several advanced architecture computers including the conventional CRAY and ETA-10 supercomputers. We describe which problems are suitable for which computers in terms of a matching between problem and computer architecture. This is part of a set of lessons we draw for hardware, software, and performance. We speculate on the emergence of new academic disciplines motivated by the growing importance of computers. 138 refs., 23 figs., 10 tabs.

  17. Extending ATLAS Computing to Commercial Clouds and Supercomputers

    CERN Document Server

    Nilsson, P; The ATLAS collaboration; Filipcic, A; Klimentov, A; Maeno, T; Oleynik, D; Panitkin, S; Wenaus, T; Wu, W

    2014-01-01

    The Large Hadron Collider will resume data collection in 2015 with substantially increased computing requirements relative to its first 2009-2013 run. A near doubling of the energy and the data rate, high level of event pile-up, and detector upgrades will mean the number and complexity of events to be analyzed will increase dramatically. A naive extrapolation of the Run 1 experience would suggest that a 5-6 fold increase in computing resources is needed - impossible within the anticipated flat computing budgets in the near future. Consequently, ATLAS is engaged in an ambitious program to expand its computing to all available resources, notably including opportunistic use of commercial clouds and supercomputers. Such resources present new challenges in managing heterogeneity, supporting data flows, parallelizing workflows, provisioning software, and other aspects of distributed computing, all while minimizing operational load. We will present the ATLAS experience to date with clouds and supercomputers, and des...

  18. Integration of Titan supercomputer at OLCF with ATLAS production system

    CERN Document Server

    Panitkin, Sergey; The ATLAS collaboration

    2016-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA-managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of petabytes of data, and the rate of data processing already exceeds an exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this talk we will describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for job...

  19. Supercomputers ready for use as discovery machines for neuroscience

    OpenAIRE

    Kunkel, Susanne; Schmidt, Maximilian; Helias, Moritz; Eppler, Jochen Martin; Igarashi, Jun; Masumoto, Gen; Fukai, Tomoki; Ishii, Shin; Plesser, Hans Ekkehard; Morrison, Abigail; Diesmann, Markus

    2013-01-01

    NEST is a widely used tool to simulate biological spiking neural networks [1]. The simulator is subject to continuous development, which is driven by the requirements of the current neuroscientific questions. At present, a major part of the software development focuses on the improvement of the simulator's fundamental data structures in order to enable brain-scale simulations on supercomputers such as the Blue Gene system in Jülich and the K computer in Kobe. Based on our memory-u...

  20. Scientists turn to supercomputers for knowledge about universe

    CERN Multimedia

    White, G

    2003-01-01

    The DOE is funding the computers at the Center for Astrophysical Thermonuclear Flashes which is based at the University of Chicago and uses supercomputers at the nation's weapons labs to study explosions in and on certain stars. The DOE is picking up the project's bill in the hope that the work will help the agency learn to better simulate the blasts of nuclear warheads (1 page).

  1. High Performance Networks From Supercomputing to Cloud Computing

    CERN Document Server

    Abts, Dennis

    2011-01-01

    Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network enables distributed applicati

  2. Study of ATLAS TRT performance with GRID and supercomputers

    Science.gov (United States)

    Krasnopevtsev, D. V.; Klimentov, A. A.; Mashinistov, R. Yu.; Belyaev, N. L.; Ryabinkin, E. A.

    2016-09-01

    One of the most important problems to be solved for ATLAS physics analysis is the reconstruction of proton-proton events with a large number of interactions in the Transition Radiation Tracker. The paper includes Transition Radiation Tracker performance results obtained using the ATLAS GRID and the Kurchatov Institute's Data Processing Center, including its Tier-1 grid site and supercomputer, as well as an analysis of CPU efficiency during these studies.

  3. Cluster Mott insulators and two Curie-Weiss regimes on an anisotropic kagome lattice

    Science.gov (United States)

    Chen, Gang; Kee, Hae-Young; Kim, Yong Baek

    2016-06-01

    Motivated by recent experiments on the quantum-spin-liquid candidate material LiZn2Mo3O8 , we study a single-band extended Hubbard model on an anisotropic kagome lattice with the 1/6 electron filling. Due to the partial filling of the lattice, the intersite repulsive interaction is necessary to generate Mott insulators, where electrons are localized in clusters rather than at lattice sites. It is shown that these cluster Mott insulators are generally U(1) quantum spin liquids with spinon Fermi surfaces. The nature of charge excitations in cluster Mott insulators can be quite different from conventional Mott insulator and we show that there exists a cluster Mott insulator where charge fluctuations around the hexagonal cluster induce a plaquette charge order (PCO). The spinon excitation spectrum in this spin-liquid cluster Mott insulator is reconstructed due to the PCO so that only 1/3 of the total spinon excitations are magnetically active. Based on these results, we propose that the two Curie-Weiss regimes of the spin susceptibility in LiZn2Mo3O8 may be explained by finite-temperature properties of the cluster Mott insulator with the PCO as well as fractionalized spinon excitations. Existing and possible future experiments on LiZn2Mo3O8 , and other Mo-based cluster magnets are discussed in light of these theoretical predictions.

  4. From Thread to Transcontinental Computer: Disturbing Lessons in Distributed Supercomputing

    CERN Document Server

    Groen, Derek

    2015-01-01

    We describe the political and technical complications encountered during the astronomical CosmoGrid project. CosmoGrid is a numerical study on the formation of large scale structure in the universe. The simulations are challenging due to the enormous dynamic range in spatial and temporal coordinates, as well as the enormous computer resources required. In CosmoGrid we dealt with the computational requirements by connecting up to four supercomputers via an optical network and making them operate as a single machine. This was challenging, if only for the fact that the supercomputers of our choice are separated by half the planet: three of them are scattered across Europe and the fourth is in Tokyo. The co-scheduling of multiple computers and the 'gridification' of the code enabled us to achieve an efficiency of up to 93% for this distributed intercontinental supercomputer. In this work, we find that high-performance computing on a grid can be done much more effectively if the sites involved are will...

  5. Proceedings of the first energy research power supercomputer users symposium

    Energy Technology Data Exchange (ETDEWEB)

    1991-01-01

    The Energy Research Power Supercomputer Users Symposium was arranged to showcase the richness of science that has been pursued and accomplished in this program through the use of supercomputers and now high performance parallel computers over the last year: this report is the collection of the presentations given at the Symposium. "Power users" were invited by the ER Supercomputer Access Committee to show that the use of these computational tools and the associated data communications network, ESNet, goes beyond merely speeding up computations. Today the work often directly contributes to the advancement of the conceptual developments in their fields, and the computational and network resources form the very infrastructure of today's science. The Symposium also provided an opportunity, which is rare in this day of network access to computing resources, for the invited users to compare and discuss their techniques and approaches with those used in other ER disciplines. The significance of new parallel architectures was highlighted by the interesting evening talk given by Dr. Stephen Orszag of Princeton University.

  6. Porting Ordinary Applications to Blue Gene/Q Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Maheshwari, Ketan C.; Wozniak, Justin M.; Armstrong, Timothy; Katz, Daniel S.; Binkowski, T. Andrew; Zhong, Xiaoliang; Heinonen, Olle; Karpeyev, Dmitry; Wilde, Michael

    2015-08-31

    Efficiently porting ordinary applications to Blue Gene/Q supercomputers is a significant challenge. Codes are often originally developed without considering advanced architectures and related tool chains. Science needs frequently lead users to want to run large numbers of relatively small jobs (often called many-task computing, an ensemble, or a workflow), which can conflict with supercomputer configurations. In this paper, we discuss techniques developed to execute ordinary applications over leadership class supercomputers. We use the high-performance Swift parallel scripting framework and build two workflow execution techniques-sub-jobs and main-wrap. The sub-jobs technique, built on top of the IBM Blue Gene/Q resource manager Cobalt's sub-block jobs, lets users submit multiple, independent, repeated smaller jobs within a single larger resource block. The main-wrap technique is a scheme that enables C/C++ programs to be defined as functions that are wrapped by a high-performance Swift wrapper and that are invoked as a Swift script. We discuss the needs, benefits, technicalities, and current limitations of these techniques. We further discuss the real-world science enabled by these techniques and the results obtained.

  7. Extracting the Textual and Temporal Structure of Supercomputing Logs

    Energy Technology Data Exchange (ETDEWEB)

    Jain, S; Singh, I; Chandra, A; Zhang, Z; Bronevetsky, G

    2009-05-26

    Supercomputers are prone to frequent faults that adversely affect their performance, reliability and functionality. System logs collected on these systems are a valuable resource of information about their operational status and health. However, their massive size, complexity, and lack of standard format makes it difficult to automatically extract information that can be used to improve system management. In this work we propose a novel method to succinctly represent the contents of supercomputing logs, by using textual clustering to automatically find the syntactic structures of log messages. This information is used to automatically classify messages into semantic groups via an online clustering algorithm. Further, we describe a methodology for using the temporal proximity between groups of log messages to identify correlated events in the system. We apply our proposed methods to two large, publicly available supercomputing logs and show that our technique features nearly perfect accuracy for online log-classification and extracts meaningful structural and temporal message patterns that can be used to improve the accuracy of other log analysis techniques.
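
    A toy illustration of the template-extraction step described above (a simplified stand-in, not the authors' clustering algorithm): variable fields such as numbers and hexadecimal identifiers are masked so that messages with the same syntactic structure collapse onto one key.

      import re
      from collections import defaultdict

      def template(msg):
          # Mask variable tokens so syntactically similar messages share a cluster key.
          msg = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", msg)
          msg = re.sub(r"\d+", "<NUM>", msg)
          return msg

      logs = [
          "node 1023 ddr error at 0x3fa0",   # hypothetical log lines
          "node 77 ddr error at 0x11b2",
          "link retrain on port 4",
      ]

      clusters = defaultdict(list)
      for line in logs:
          clusters[template(line)].append(line)

      for key, members in clusters.items():
          print(len(members), key)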

  8. Model studies of the iron-catalysed Haber-Weiss cycle and the ascorbate-driven Fenton reaction.

    Science.gov (United States)

    Burkitt, M J; Gilbert, B C

    1990-01-01

    Complementary hydroxylation assays and stopped-flow e.s.r. techniques have been employed in the investigation of the effect of various iron chelators (of chemical, biological and clinical importance) on hydroxyl-radical generation via the Haber-Weiss cycle and the ascorbate-driven Fenton reaction. Chelators have been identified which selectively promote or inhibit various reactions involved in hydroxyl-radical generation (for example, NTA and EDTA promote all the reactions of both the Haber-Weiss cycle and the ascorbate-driven Fenton reaction, whereas DTPA and phytate inhibit the recycling of iron in these reactions). The biological chelators succinate and citrate are shown to be relatively poor catalysts of the Haber-Weiss cycle, whereas they are found to be effective catalysts of .OH generation in the ascorbate-driven Fenton reaction. It is also suggested that continuous redox-cycling reactions between iron, oxygen and ascorbate may represent an important mechanism of cell death in biological systems.

  9. Cofactors, coreceptors, and new retroviruses. An interview with Robin A Weiss, PhD. Interview by Mark Mascolini.

    Science.gov (United States)

    Weiss, R A

    1995-02-01

    Dr. Robin A. Weiss, Director of Research at the Chester Beatty Laboratories of the Institute of Cancer Research in London, England presents his thoughts on the subjects of cofactors, coreceptors, and new retroviruses in HIV infection. Dr. Weiss responds to questions in the following areas: balancing basic research and clinical trials, the importance of sheer viral load, the importance of pathogenic cofactors in HIV progression, genetic factors and susceptibility to HIV, possible reasons for long-term nonprogression, the importance of immunotherapy, the difficulty in finding a second receptor as a cofactor necessary for disease progression, and whether more human retroviruses are likely to be discovered. Among Weiss' observations are his beliefs that there should be more of a funding shift into basic research, that evidence is getting stronger for the theory that beating back the viral burden as soon as possible forestalls progression, that it appears possible that some people may have a genetic disposition against becoming HIV infected, that it is just as important to find a preinfection vaccine as it is a post-infection vaccine, and his belief that CD26 is not a coreceptor in AIDS progression.

  10. Dust modelling and forecasting in the Barcelona Supercomputing Center: Activities and developments

    Energy Technology Data Exchange (ETDEWEB)

    Perez, C; Baldasano, J M; Jimenez-Guerrero, P; Jorba, O; Haustein, K; Basart, S [Earth Sciences Department. Barcelona Supercomputing Center. Barcelona (Spain); Cuevas, E [Izaña Atmospheric Research Center. Agencia Estatal de Meteorologia, Tenerife (Spain); Nickovic, S [Atmospheric Research and Environment Branch, World Meteorological Organization, Geneva (Switzerland)], E-mail: carlos.perez@bsc.es

    2009-03-01

    The Barcelona Supercomputing Center (BSC) is the National Supercomputer Facility in Spain, hosting MareNostrum, one of the most powerful Supercomputers in Europe. The Earth Sciences Department of BSC operates daily regional dust and air quality forecasts and conducts intensive modelling research for short-term operational prediction. This contribution summarizes the latest developments and current activities in the field of sand and dust storm modelling and forecasting.

  11. Programming Environment for a High-Performance Parallel Supercomputer with Intelligent Communication

    OpenAIRE

    A. Gunzinger; Bäumle, B.; Frey, M.; Klebl, M.; Kocheisen, M.; Kohler, P.; Morel, R.; Müller, U.; Rosenthal, M.

    1996-01-01

    At the Electronics Laboratory of the Swiss Federal Institute of Technology (ETH) in Zürich, the high-performance parallel supercomputer MUSIC (MUlti processor System with Intelligent Communication) has been developed. As applications like neural network simulation and molecular dynamics show, the Electronics Laboratory supercomputer is absolutely on par with conventional supercomputers, but electric power requirements are reduced by a factor of 1,000, weight is reduced by a factor of...

  12. Numerical simulations of astrophysical problems on massively parallel supercomputers

    Science.gov (United States)

    Kulikov, Igor; Chernykh, Igor; Glinsky, Boris

    2016-10-01

    In this paper, we present the latest version of our numerical model for the simulation of astrophysical object dynamics, and a new realization of our AstroPhi code for Intel Xeon Phi based RSC PetaStream supercomputers. The co-design of a computational model for the description of astrophysical objects is described. The parallel implementation and scalability tests of the AstroPhi code are presented. We achieve a weak scaling efficiency of 73% using 256 Intel Xeon Phi accelerators with 61440 threads.
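
    Weak-scaling efficiency compares the runtime of a fixed per-accelerator workload on one unit with the runtime on N units. A quick illustration with hypothetical timings chosen to be consistent with the 73% reported above:

      def weak_scaling_efficiency(t_one, t_many):
          # Ideal weak scaling keeps the runtime constant as problem size and machine grow together.
          return t_one / t_many

      t_single = 100.0    # hypothetical seconds on 1 Xeon Phi
      t_256 = 137.0       # hypothetical seconds on 256 Xeon Phi accelerators
      print(f"{weak_scaling_efficiency(t_single, t_256):.0%}")   # ~73%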

  13. AENEAS A Custom-built Parallel Supercomputer for Quantum Gravity

    CERN Document Server

    Hamber, H W

    1998-01-01

    Accurate Quantum Gravity calculations, based on the simplicial lattice formulation, are computationally very demanding and require vast amounts of computer resources. A custom-made 64-node parallel supercomputer capable of performing up to 2 × 10¹⁰ floating point operations per second has been assembled entirely out of commodity components, and has been operational for the last ten months. It will allow the numerical computation of a variety of quantities of physical interest in quantum gravity and related field theories, including the estimate of the critical exponents in the vicinity of the ultraviolet fixed point to an accuracy of a few percent.

  14. A special purpose silicon compiler for designing supercomputing VLSI systems

    Science.gov (United States)

    Venkateswaran, N.; Murugavel, P.; Kamakoti, V.; Shankarraman, M. J.; Rangarajan, S.; Mallikarjun, M.; Karthikeyan, B.; Prabhakar, T. S.; Satish, V.; Venkatasubramaniam, P. R.

    1991-01-01

    Design of general/special purpose supercomputing VLSI systems for numeric algorithm execution involves tackling two important aspects, namely their computational and communication complexities. Development of software tools for designing such systems itself becomes complex. Hence a novel design methodology has to be developed. For designing such complex systems a special purpose silicon compiler is needed in which: the computational and communicational structures of different numeric algorithms should be taken into account to simplify the silicon compiler design, the approach is macrocell based, and the software tools at different levels (algorithm down to the VLSI circuit layout) should get integrated. In this paper a special purpose silicon (SPS) compiler based on PACUBE macrocell VLSI arrays for designing supercomputing VLSI systems is presented. It is shown that turn-around time and silicon real estate get reduced over the silicon compilers based on PLA's, SLA's, and gate arrays. The first two silicon compiler characteristics mentioned above enable the SPS compiler to perform systolic mapping (at the macrocell level) of algorithms whose computational structures are of GIPOP (generalized inner product outer product) form. Direct systolic mapping on PLA's, SLA's, and gate arrays is very difficult as they are micro-cell based. A novel GIPOP processor is under development using this special purpose silicon compiler.

  15. Solidification in a Supercomputer: From Crystal Nuclei to Dendrite Assemblages

    Science.gov (United States)

    Shibuta, Yasushi; Ohno, Munekazu; Takaki, Tomohiro

    2015-08-01

    Thanks to the recent progress in high-performance computational environments, the range of applications of computational metallurgy is expanding rapidly. In this paper, cutting-edge simulations of solidification from atomic to microstructural levels performed on a graphics processing unit (GPU) architecture are introduced with a brief introduction to advances in computational studies on solidification. In particular, million-atom molecular dynamics simulations captured the spontaneous evolution of anisotropy in a solid nucleus in an undercooled melt and homogeneous nucleation without any inducing factor, which is followed by grain growth. At the microstructural level, the quantitative phase-field model has been gaining importance as a powerful tool for predicting solidification microstructures. In this paper, the convergence behavior of simulation results obtained with this model is discussed, in detail. Such convergence ensures the reliability of results of phase-field simulations. Using the quantitative phase-field model, the competitive growth of dendrite assemblages during the directional solidification of a binary alloy bicrystal at the millimeter scale is examined by performing two- and three-dimensional large-scale simulations by multi-GPU computation on the supercomputer, TSUBAME2.5. This cutting-edge approach using a GPU supercomputer is opening a new phase in computational metallurgy.

  16. The TeraGyroid Experiment – Supercomputing 2003

    Directory of Open Access Journals (Sweden)

    R.J. Blake

    2005-01-01

    Full Text Available Amphiphiles are molecules with hydrophobic tails and hydrophilic heads. When dispersed in solvents, they self-assemble into complex mesophases including the beautiful cubic gyroid phase. The goal of the TeraGyroid experiment was to study defect pathways and dynamics in these gyroids. The UK's supercomputing and USA's TeraGrid facilities were coupled together, through a dedicated high-speed network, into a single computational Grid for research work that peaked around the Supercomputing 2003 conference. The gyroids were modeled using lattice Boltzmann methods, with parameter spaces explored using many 128³ and …³ grid point simulations, this data being used to inform the world's largest three-dimensional time-dependent simulation with 1024³ grid points. The experiment generated some 2 TBytes of useful data. In terms of Grid technology, the project demonstrated the migration of simulations (using Globus middleware) to and fro across the Atlantic, exploiting the availability of resources. Integration of the systems accelerated the time to insight. Distributed visualisation of the output datasets enabled the parameter space of the interactions within the complex fluid to be explored from a number of sites, informed by discourse over the Access Grid. The project was sponsored by EPSRC (UK) and NSF (USA), with trans-Atlantic optical bandwidth provided by British Telecommunications.

  17. Calibrating Building Energy Models Using Supercomputer Trained Machine Learning Agents

    Energy Technology Data Exchange (ETDEWEB)

    Sanyal, Jibonananda [ORNL; New, Joshua Ryan [ORNL; Edwards, Richard [ORNL; Parker, Lynne Edwards [ORNL

    2014-01-01

    Building Energy Modeling (BEM) is an approach to model the energy usage in buildings for design and retrofit purposes. EnergyPlus is the flagship Department of Energy software that performs BEM for different types of buildings. The input to EnergyPlus often extends to a few thousand parameters, which have to be calibrated manually by an expert for realistic energy modeling. This makes calibration challenging and expensive, and renders building energy modeling infeasible for smaller projects. In this paper, we describe the Autotune research effort, which employs machine learning algorithms to generate agents for the different kinds of standard reference buildings in the U.S. building stock. The parametric space and the variety of building locations and types make this a challenging computational problem necessitating the use of supercomputers. Millions of EnergyPlus simulations are run on supercomputers and are subsequently used to train machine learning algorithms to generate agents. These agents, once created, can then run in a fraction of the time, allowing cost-effective calibration of building models.
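
    As a rough illustration of the agent idea (a sketch only, with made-up feature names and data; the actual Autotune agents, features, and training sets are not specified in this record), a surrogate model can be trained on a parametric sweep of simulation outputs and then queried in a fraction of a second instead of re-running EnergyPlus:

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        # Placeholder parametric sweep: each row is one simulated building configuration,
        # columns stand in for EnergyPlus inputs (insulation R-value, infiltration rate, ...).
        rng = np.random.default_rng(0)
        X = rng.uniform(size=(10_000, 8))                                # sampled input parameters
        y = X @ rng.uniform(size=8) + 0.1 * rng.normal(size=10_000)      # stand-in for simulated energy use

        # The "agent": a cheap surrogate trained on the expensive simulation results.
        surrogate = RandomForestRegressor(n_estimators=100, n_jobs=-1).fit(X, y)

        # Querying the agent replaces a full EnergyPlus run during calibration loops.
        candidate = rng.uniform(size=(1, 8))
        print(surrogate.predict(candidate))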

  18. Optimizing Linpack Benchmark on GPU-Accelerated Petascale Supercomputer

    Institute of Scientific and Technical Information of China (English)

    Feng Wang; Can-Qun Yang; Yun-Fei Du; Juan Chen; Hui-Zhan Yi; Wei-Xia Xu

    2011-01-01

    In this paper we present the programming of the Linpack benchmark on the TianHe-1 system, the first petascale supercomputer system of China and the largest GPU-accelerated heterogeneous system attempted at that time. A hybrid programming model consisting of MPI, OpenMP and streaming computing is described to exploit the task, thread and data parallelism of Linpack. We explain how we optimized the load distribution across the CPUs and GPUs using the two-level adaptive method and describe the implementation in detail. To overcome the low bandwidth of CPU-GPU communication, we present a software pipelining technique to hide the communication overhead. Combined with other traditional optimizations, the Linpack we developed achieved 196.7 GFLOPS on a single compute element of TianHe-1. This result is 70.1% of the peak compute capability and 3.3 times faster than the result obtained using the vendor's library. On the full configuration of TianHe-1, our optimizations resulted in a Linpack performance of 0.563 PFLOPS, which made TianHe-1 the 5th fastest supercomputer on the Top500 list in November 2009.
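
    As a toy illustration of the static half of such a two-level adaptive split (names and numbers below are illustrative, not TianHe-1's actual tuning parameters), the trailing-matrix work of each panel update can be divided in proportion to the measured DGEMM throughput of each device:

        def split_columns(n_cols, cpu_gflops, gpu_gflops):
            """Divide the columns of one trailing-matrix update between CPU and GPU
            in proportion to their measured DGEMM throughput."""
            gpu_share = gpu_gflops / (cpu_gflops + gpu_gflops)
            gpu_cols = int(round(n_cols * gpu_share))
            return n_cols - gpu_cols, gpu_cols   # (cpu_cols, gpu_cols)

        # Example: a node whose GPU sustains roughly 4x the CPU's DGEMM rate.
        cpu_cols, gpu_cols = split_columns(4096, cpu_gflops=70.0, gpu_gflops=280.0)
        print(cpu_cols, gpu_cols)   # the adaptive level would refine this ratio from runtime feedback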

  19. Automatic discovery of the communication network topology for building a supercomputer model

    Science.gov (United States)

    Sobolev, Sergey; Stefanov, Konstantin; Voevodin, Vadim

    2016-10-01

    The Research Computing Center of Lomonosov Moscow State University is developing the Octotron software suite for automatic monitoring and mitigation of emergency situations in supercomputers so as to maximize hardware reliability. The suite is based on a software model of the supercomputer. The model uses a graph to describe the computing system components and their interconnections. One of the most complex components of a supercomputer that needs to be included in the model is its communication network. This work describes the proposed approach for automatically discovering the Ethernet communication network topology in a supercomputer and its description in terms of the Octotron model. This suite automatically detects computing nodes and switches, collects information about them and identifies their interconnections. The application of this approach is demonstrated on the "Lomonosov" and "Lomonosov-2" supercomputers.

  20. Integration of Titan supercomputer at OLCF with ATLAS Production System

    CERN Document Server

    Barreiro Megino, Fernando Harald; The ATLAS collaboration

    2017-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA-managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of petabytes of data, and the rate of data processing already exceeds an exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data-taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this talk we will describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for ...

  1. Lectures in Supercomputational Neurosciences Dynamics in Complex Brain Networks

    CERN Document Server

    Graben, Peter beim; Thiel, Marco; Kurths, Jürgen

    2008-01-01

    Computational Neuroscience is a burgeoning field of research where only the combined effort of neuroscientists, biologists, psychologists, physicists, mathematicians, computer scientists, engineers and other specialists, e.g. from linguistics and medicine, seems to be able to expand the limits of our knowledge. The present volume is an introduction, largely from the physicists' perspective, to the subject matter with in-depth contributions by system neuroscientists. A conceptual model for complex networks of neurons is introduced that incorporates many important features of the real brain, such as various types of neurons, various brain areas, inhibitory and excitatory coupling and the plasticity of the network. The computational implementation on supercomputers, which is introduced and discussed in detail in this book, will enable the readers to modify and adapt the algorithm for their own research. Worked-out examples of applications are presented for networks of Morris-Lecar neurons to model the cortical co...

  2. Modeling the weather with a data flow supercomputer

    Science.gov (United States)

    Dennis, J. B.; Gao, G.-R.; Todd, K. W.

    1984-01-01

    A static concept of data flow architecture is considered for a supercomputer for weather modeling. The machine level instructions are loaded into specific memory locations before computation is initiated, with only one instruction active at a time. The machine would have processing element, functional unit, array memory, memory routing and distribution routing network elements all contained on microprocessors. A value-oriented algorithmic language (VAL) would be employed and would have, as basic operations, simple functions deriving results from operand values. Details of the machine language format, computations with an array and file processing procedures are outlined. A global weather model is discussed in terms of a static architecture and the potential computation rate is analyzed. The results indicate that detailed design studies are warranted to quantify costs and parts fabrication requirements.

  3. Toward the Graphics Turing Scale on a Blue Gene Supercomputer

    CERN Document Server

    McGuigan, Michael

    2008-01-01

    We investigate the raytracing performance that can be achieved on a class of Blue Gene supercomputers. We measure an 822-fold speedup over a Pentium IV on a 6144-processor Blue Gene/L. We measure the computational performance as a function of the number of processors and problem size to determine the scaling performance of the raytracing calculation on the Blue Gene. We find nontrivial scaling behavior at large numbers of processors. We discuss applications of this technology to scientific visualization with advanced lighting and high resolution. We utilize three racks of a Blue Gene/L in our calculations, which is less than three percent of the capacity of the world's largest Blue Gene computer.

  4. Direct numerical simulation of turbulence using GPU accelerated supercomputers

    Science.gov (United States)

    Khajeh-Saeed, Ali; Blair Perot, J.

    2013-02-01

    Direct numerical simulations of turbulence are optimized for up to 192 graphics processors. The results from two large GPU clusters are compared to the performance of corresponding CPU clusters. A number of important algorithm changes are necessary to access the full computational power of graphics processors and these adaptations are discussed. It is shown that the handling of subdomain communication becomes even more critical when using GPU-based supercomputers. The potential for overlap of MPI communication with GPU computation is analyzed and then optimized. Detailed timings reveal that the internal calculations are now so efficient that the operations related to MPI communication are the primary scaling bottleneck at all but the very largest problem sizes that can fit on the hardware. This work gives a glimpse of the CFD performance issues that will dominate many hardware platforms in the near future.

  5. Internal computational fluid mechanics on supercomputers for aerospace propulsion systems

    Science.gov (United States)

    Andersen, Bernhard H.; Benson, Thomas J.

    1987-01-01

    The accurate calculation of three-dimensional internal flowfields for application towards aerospace propulsion systems requires computational resources available only on supercomputers. A survey is presented of three-dimensional calculations of hypersonic, transonic, and subsonic internal flowfields conducted at the Lewis Research Center. A steady state Parabolized Navier-Stokes (PNS) solution of flow in a Mach 5.0, mixed compression inlet, a Navier-Stokes solution of flow in the vicinity of a terminal shock, and a PNS solution of flow in a diffusing S-bend with vortex generators are presented and discussed. All of these calculations were performed on either the NAS Cray-2 or the Lewis Research Center Cray XMP.

  6. Refinement of herpesvirus B-capsid structure on parallel supercomputers.

    Science.gov (United States)

    Zhou, Z H; Chiu, W; Haskell, K; Spears, H; Jakana, J; Rixon, F J; Scott, L R

    1998-01-01

    Electron cryomicroscopy and icosahedral reconstruction are used to obtain the three-dimensional structure of the 1250-A-diameter herpesvirus B-capsid. The centers and orientations of particles in focal pairs of 400-kV, spot-scan micrographs are determined and iteratively refined by common-lines-based local and global refinement procedures. We describe the rationale behind choosing shared-memory multiprocessor computers for executing the global refinement, which is the most computationally intensive step in the reconstruction procedure. This refinement has been implemented on three different shared-memory supercomputers. The speedup and efficiency are evaluated by using test data sets with different numbers of particles and processors. Using this parallel refinement program, we refine the herpesvirus B-capsid from 355-particle images to 13-A resolution. The map shows new structural features and interactions of the protein subunits in the three distinct morphological units: penton, hexon, and triplex of this T = 16 icosahedral particle.

  7. Enhanced autotrophic astaxanthin production from Haematococcus pluvialis under high temperature via heat stress-driven Haber-Weiss reaction.

    Science.gov (United States)

    Hong, Min-Eui; Hwang, Sung Kwan; Chang, Won Seok; Kim, Byung Woo; Lee, Jeewon; Sim, Sang Jun

    2015-06-01

    High temperatures (30-36 °C) inhibited astaxanthin accumulation in Haematococcus pluvialis under photoautotrophic conditions. The depression of carotenogenesis was primarily attributed to excess intracellular less reactive oxygen species (LROS; O2 (-) and H2O2) levels generated under high temperature conditions. Here, we show that the heat stress-driven inefficient astaxanthin production was improved by accelerating the iron-catalyzed Haber-Weiss reaction to convert LROS into more reactive oxygen species (MROS; O2 and OH·), thereby facilitating lipid peroxidation. As a result, during 18 days of photoautotrophic induction, the astaxanthin concentration of cells cultured in high temperatures in the presence of iron (450 μM) was dramatically increased by 75 % (30 °C) and 133 % (36 °C) compared to that of cells exposed to heat stress alone. The heat stress-driven Haber-Weiss reaction will be useful for economically producing astaxanthin by reducing energy cost and enhancing photoautotrophic astaxanthin production, particularly outdoors utilizing natural solar radiation including heat and light for photo-induction of H. pluvialis.
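
    For reference (standard chemistry background, not spelled out in the abstract), the iron-catalyzed Haber-Weiss cycle invoked here is conventionally written as an iron-reduction step followed by the Fenton reaction, with the net effect of converting the less reactive species into the hydroxyl radical:

    \[
    \begin{aligned}
    \mathrm{Fe^{3+} + O_2^{\cdot-}} &\longrightarrow \mathrm{Fe^{2+} + O_2},\\
    \mathrm{Fe^{2+} + H_2O_2} &\longrightarrow \mathrm{Fe^{3+} + OH^{-} + OH^{\cdot}},\\
    \text{net:}\quad \mathrm{O_2^{\cdot-} + H_2O_2} &\longrightarrow \mathrm{O_2 + OH^{-} + OH^{\cdot}}.
    \end{aligned}
    \]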

  8. Solving global shallow water equations on heterogeneous supercomputers.

    Science.gov (United States)

    Fu, Haohuan; Gan, Lin; Yang, Chao; Xue, Wei; Wang, Lanning; Wang, Xinliang; Huang, Xiaomeng; Yang, Guangwen

    2017-01-01

    The scientific demand for more accurate modeling of the climate system calls for more computing power to support higher resolutions, inclusion of more component models, more complicated physics schemes, and larger ensembles. As the recent improvements in computing power mostly come from the increasing number of nodes in a system and the integration of heterogeneous accelerators, how to scale the computing problems onto more nodes and various kinds of accelerators has become a challenge for the model development. This paper describes our efforts on developing a highly scalable framework for performing global atmospheric modeling on heterogeneous supercomputers equipped with various accelerators, such as GPU (Graphic Processing Unit), MIC (Many Integrated Core), and FPGA (Field Programmable Gate Arrays) cards. We propose a generalized partition scheme of the problem domain, so as to keep a balanced utilization of both CPU resources and accelerator resources. With optimizations on both computing and memory access patterns, we manage to achieve around 8 to 20 times speedup when comparing one hybrid GPU or MIC node with one CPU node with 12 cores. Using customized FPGA-based data-flow engines, we see the potential to gain another 5- to 8-fold improvement in performance. On heterogeneous supercomputers, such as Tianhe-1A and Tianhe-2, our framework is capable of achieving ideally linear scaling efficiency, and sustained double-precision performances of 581 Tflops on Tianhe-1A (using 3750 nodes) and 3.74 Pflops on Tianhe-2 (using 8644 nodes). Our study also provides an evaluation of the programming paradigm of various accelerator architectures (GPU, MIC, FPGA) for performing global atmospheric simulation, to form a picture of both the potential performance benefits and the programming effort involved.

  9. Virtualizing Super-Computation On-Board Uas

    Science.gov (United States)

    Salami, E.; Soler, J. A.; Cuadrado, R.; Barrado, C.; Pastor, E.

    2015-04-01

    Unmanned aerial systems (UAS, also known as UAV, RPAS or drones) have a great potential to support a wide variety of aerial remote sensing applications. Most UAS work by acquiring data using on-board sensors for later post-processing. Some require the data gathered to be downlinked to the ground in real-time. However, depending on the volume of data and the cost of the communications, this latter option is not sustainable in the long term. This paper develops the concept of virtualizing super-computation on-board UAS, as a method to ease the operation by facilitating the downlink of high-level information products instead of raw data. Exploiting recent developments in miniaturized multi-core devices is the way to speed up on-board computation. This hardware shall satisfy size, power and weight constraints. Several technologies are appearing with promising results for high performance computing on unmanned platforms, such as the 36 cores of the TILE-Gx36 by Tilera (now EZchip) or the 64 cores of the Epiphany-IV by Adapteva. The strategy for virtualizing super-computation on-board includes the benchmarking for hardware selection, the software architecture and the communications-aware design. A parallelization strategy is given for the 36-core TILE-Gx36 for a UAS in a fire mission or in similar target-detection applications. The results are obtained for payload image processing algorithms and determine in real time the data snapshot to gather and transfer to ground according to the needs of the mission, the processing time, and consumed watts.

  10. Fractal fluctuations and quantum-like chaos in the brain by analysis of variability of brain waves: A new method based on a fractal variance function and random matrix theory: A link with El Naschie fractal Cantorian space-time and V. Weiss and H. Weiss golden ratio in brain

    Energy Technology Data Exchange (ETDEWEB)

    Conte, Elio [Department of Pharmacology and Human Physiology and Tires, Center for Innovative Technologies for Signal Detection and Processing, University of Bari (Italy); School of Advanced International Studies on Theoretical and Nonlinear Methodologies-Bari (Italy)], E-mail: elio.conte@fastwebnet.it; Khrennikov, Andrei [International Center for Mathematical Modelling in Physics and Cognitive Sciences, M.S.I., University of Vaexjoe, S-35195 (Sweden); Federici, Antonio [Department of Pharmacology and Human Physiology and Tires, Center for Innovative Technologies for Signal Detection and Processing, University of Bari (Italy); Zbilut, Joseph P. [Department of Molecular Biophysics and Physiology, Rush University Medical Center, 1653W Congress, Chicago, IL 60612 (United States)

    2009-09-15

    We develop a new method for the analysis of fundamental brain waves as recorded by the EEG. To this purpose we introduce a Fractal Variance Function that is based on the calculation of the variogram. The method is completed by using Random Matrix Theory. Some examples are given. We also discuss the link of such a formulation with the golden ratio found in the brain by H. Weiss and V. Weiss, and with El Naschie's fractal Cantorian space-time theory.
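
    For reference, the variogram underlying the Fractal Variance Function is, in its standard form (an assumption of the usual definition, not a formula quoted from the paper),

    \[
    \gamma(h) \;=\; \tfrac{1}{2}\,\mathbb{E}\!\left[\big(X(t+h) - X(t)\big)^{2}\right],
    \]

    i.e. half the expected squared increment of the signal $X$ over a lag $h$, estimated here from the EEG time series.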

  11. Non-preconditioned conjugate gradient on cell and FPGA-based hybrid supercomputer nodes

    Energy Technology Data Exchange (ETDEWEB)

    Dubois, David H [Los Alamos National Laboratory; Dubois, Andrew J [Los Alamos National Laboratory; Boorman, Thomas M [Los Alamos National Laboratory; Connor, Carolyn M [Los Alamos National Laboratory

    2009-03-10

    This work presents a detailed implementation of a double precision, non-preconditioned Conjugate Gradient algorithm on a Roadrunner heterogeneous supercomputer node. These nodes utilize the Cell Broadband Engine Architecture™ in conjunction with x86 Opteron™ processors from AMD. We implement a common Conjugate Gradient algorithm, on a variety of systems, to compare and contrast performance. Implementation results are presented for the Roadrunner hybrid supercomputer, the SRC Computers, Inc. MAPStation SRC-6 FPGA-enhanced hybrid supercomputer, and AMD Opteron only. In all hybrid implementations wall clock time is measured, including all transfer overhead and compute timings.

  12. Non-preconditioned conjugate gradient on cell and FPGA based hybrid supercomputer nodes

    Energy Technology Data Exchange (ETDEWEB)

    Dubois, David H [Los Alamos National Laboratory; Dubois, Andrew J [Los Alamos National Laboratory; Boorman, Thomas M [Los Alamos National Laboratory; Connor, Carolyn M [Los Alamos National Laboratory

    2009-01-01

    This work presents a detailed implementation of a double precision, non-preconditioned, Conjugate Gradient algorithm on a Roadrunner heterogeneous supercomputer node. These nodes utilize the Cell Broadband Engine Architecture™ in conjunction with x86 Opteron™ processors from AMD. We implement a common Conjugate Gradient algorithm, on a variety of systems, to compare and contrast performance. Implementation results are presented for the Roadrunner hybrid supercomputer, SRC Computers, Inc. MAPStation SRC-6 FPGA enhanced hybrid supercomputer, and AMD Opteron only. In all hybrid implementations wall clock time is measured, including all transfer overhead and compute timings.
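
    Neither record reproduces the algorithm itself; for orientation, a minimal NumPy sketch of the non-preconditioned Conjugate Gradient iteration that these hybrid ports accelerate (the matrix, sizes, and tolerance below are illustrative, not taken from the Roadrunner or SRC-6 implementations):

        import numpy as np

        def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
            """Solve A x = b for symmetric positive-definite A, without preconditioning."""
            x = np.zeros_like(b)
            r = b - A @ x                 # initial residual
            p = r.copy()                  # initial search direction
            rs_old = r @ r
            for _ in range(max_iter or len(b)):
                Ap = A @ p
                alpha = rs_old / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs_old) * p
                rs_old = rs_new
            return x

        # Tiny self-check on a random symmetric positive-definite system.
        M = np.random.rand(100, 100)
        A = M @ M.T + 100 * np.eye(100)
        b = np.random.rand(100)
        assert np.allclose(A @ conjugate_gradient(A, b), b, atol=1e-6)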

  13. Programming Environment for a High-Performance Parallel Supercomputer with Intelligent Communication

    Directory of Open Access Journals (Sweden)

    A. Gunzinger

    1996-01-01

    Full Text Available At the Electronics Laboratory of the Swiss Federal Institute of Technology (ETH) in Zürich, the high-performance parallel supercomputer MUSIC (MUlti processor System with Intelligent Communication) has been developed. As applications like neural network simulation and molecular dynamics show, the Electronics Laboratory supercomputer is absolutely on par with conventional supercomputers, but its electric power requirements are reduced by a factor of 1,000, its weight by a factor of 400, and its price by a factor of 100. Software development is a key issue for such parallel systems. This article focuses on the programming environment of the MUSIC system and on its applications.

  14. ON THE MAKING OF A SYSTEM THEORY OF LIFE: PAUL A WEISS AND LUDWIG VON BERTALANFFY’S CONCEPTUAL CONNECTION

    Science.gov (United States)

    Drack, Manfred; Apfalter, Wilfried; Pouvreau, David

    2010-01-01

    In this article, we review how two eminent Viennese system thinkers, Paul A Weiss and Ludwig von Bertalanffy, began to develop their own perspectives toward a system theory of life in the 1920s. Their work is especially rooted in experimental biology as performed at the Biologische Versuchsanstalt, as well as in philosophy, and they converge in basic concepts. We underline the conceptual connections of their thinking, among them the organism as an organized system, hierarchical organization, and primary activity. With their system thinking, both biologists shared a strong desire to overcome what they viewed as a “mechanistic” approach in biology. Their interpretations are relevant to the renaissance of system thinking in biology—“systems biology.” Unless otherwise noted, all translations are our own. PMID:18217527

  15. On the making of a system theory of life: Paul A Weiss and Ludwig von Bertalanffy's conceptual connection.

    Science.gov (United States)

    Drack, Manfred; Apfalter, Wilfried; Pouvreau, David

    2007-12-01

    In this article, we review how two eminent Viennese system thinkers, Paul A Weiss and Ludwig von Bertalanffy, began to develop their own perspectives toward a system theory of life in the 1920s. Their work is especially rooted in experimental biology as performed at the Biologische Versuchsanstalt, as well as in philosophy, and they converge in basic concepts. We underline the conceptual connections of their thinking, among them the organism as an organized system, hierarchical organization, and primary activity. With their system thinking, both biologists shared a strong desire to overcome what they viewed as a "mechanistic" approach in biology. Their interpretations are relevant to the renaissance of system thinking in biology--"systems biology." Unless otherwise noted, all translations are our own.

  16. The nature of the Roberge-Weiss transition in $N_f=2$ QCD with Wilson fermions

    CERN Document Server

    Philipsen, Owe

    2014-01-01

    At imaginary values of the quark chemical potential $\\mu$, Quantum Chromodynamics shows an interesting phase structure due to an exact center, or Roberge-Weiss (RW), symmetry. This can be used to constrain QCD at real $\\mu$, where the sign problem prevents Monte Carlo simulations of the lattice theory. In previous studies of this region with staggered fermions it was found that the RW endpoint, where the center transition changes from first-order to a crossover, depends non-trivially on the quark mass: for high and low masses, it is a triple point connecting to the deconfinement and chiral transitions, respectively, changing to a second-order endpoint for intermediate mass values. These parameter regions are separated by tricritical points. Here we present a confirmation of these findings using Wilson fermions on $N_\\tau=4$ lattices. In addition, our results provide a successful quantitative check for a heavy quark effective lattice theory at finite density.
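
    For orientation (standard background assumed here, not quoted from the abstract), the Roberge-Weiss symmetry is the periodicity of the QCD partition function in the imaginary chemical potential $\mu = i\mu_I$,

    \[
    Z\!\left(\frac{\mu_I}{T}\right) \;=\; Z\!\left(\frac{\mu_I}{T} + \frac{2\pi k}{N_c}\right), \qquad k \in \mathbb{Z},
    \]

    with first-order transitions between adjacent center sectors at the critical values $\mu_I/T = (2k+1)\pi/N_c$ for temperatures above the RW endpoint discussed in the text.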

  17. Roberge-Weiss transition in $N_\\text{f}=2$ QCD with Wilson fermions and $N_\\tau=6$

    CERN Document Server

    Cuteri, Francesca; Sciarra, Alessandro; Czaban, Christopher; Philipsen, Owe

    2015-01-01

    QCD with imaginary chemical potential is free of the sign problem and exhibits a rich phase structure constraining the phase diagram at real chemical potential. We simulate the critical endpoint of the Roberge-Weiss (RW) transition at imaginary chemical potential for $N_\\text{f}=2$ QCD on $N_\\tau=6$ lattices with standard Wilson fermions. As found on coarser lattices, the RW endpoint is a triple point connecting the deconfinement/chiral transitions in the heavy/light quark mass regions and changes to a second-order endpoint for intermediate masses. These regimes are separated by two tricritical values of the quark mass, which we determine by extracting the critical exponent $\

  18. Requirements for supercomputing in energy research: The transition to massively parallel computing

    Energy Technology Data Exchange (ETDEWEB)

    1993-02-01

    This report discusses: The emergence of a practical path to TeraFlop computing and beyond; requirements of energy research programs at DOE; implementation: supercomputer production computing environment on massively parallel computers; and implementation: user transition to massively parallel computing.

  19. Novel Supercomputing Approaches for High Performance Linear Algebra Using FPGAs Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Supercomputing plays a major role in many areas of science and engineering, and it has had tremendous impact for decades in areas such as aerospace, defense, energy,...

  20. SUPERCOMPUTERS FOR AIDING ECONOMIC PROCESSES WITH REFERENCE TO THE FINANCIAL SECTOR

    Directory of Open Access Journals (Sweden)

    Jerzy Balicki

    2014-12-01

    Full Text Available The article discusses the use of supercomputers to support business processes, with particular emphasis on the financial sector. Reference is made to selected projects that support economic development. In particular, we propose the use of supercomputers to run artificial intelligence methods in banking. The proposed methods, combined with modern technology, enable a significant increase in the competitiveness of enterprises and banks by adding new functionality.

  1. A novel VLSI processor architecture for supercomputing arrays

    Science.gov (United States)

    Venkateswaran, N.; Pattabiraman, S.; Devanathan, R.; Ahmed, Ashaf; Venkataraman, S.; Ganesh, N.

    1993-01-01

    Design of the processor element for general-purpose massively parallel supercomputing arrays is highly complex and cost-ineffective. To overcome this, the architecture and organization of the functional units of the processor element should be such as to suit the diverse computational structures and to simplify the mapping of the complex communication structures of different classes of algorithms. This demands that the computation and communication structures of different classes of algorithms be unified. While unifying the different communication structures is a difficult process, analysis of a wide class of algorithms reveals that their computation structures can be expressed in terms of basic IP, IP, OP, CM, R, SM, and MAA operations. The execution of these operations is unified on the PAcube macro-cell array. Based on this PAcube macro-cell array, we present a novel processor element called the GIPOP processor, which has dedicated functional units to perform the above operations. The architecture and organization of these functional units are such as to satisfy the two important criteria mentioned above. The structure of the macro-cell and the unification process have led to a very regular and simpler design of the GIPOP processor. The production cost of the GIPOP processor is drastically reduced as it is designed on high-performance mask-programmable PAcube arrays.

  2. Accelerating Science Impact through Big Data Workflow Management and Supercomputing

    Science.gov (United States)

    De, K.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Ryabinkin, E.; Wenaus, T.

    2016-02-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. ATLAS, one of the largest collaborations ever assembled in the history of science, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. To manage the workflow for all data processing on hundreds of data centers the PanDA (Production and Distributed Analysis) Workload Management System is used. An ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF), is being realized within the BigPanDA and megaPanDA projects. These projects are now exploring how PanDA might be used for managing computing jobs that run on supercomputers including OLCF's Titan and NRC-KI HPC2. The main idea is to reuse, as much as possible, existing components of the PanDA system that are already deployed on the LHC Grid for analysis of physics data. The next generation of PanDA will allow many data-intensive sciences employing a variety of computing platforms to benefit from ATLAS experience and proven tools in highly scalable processing.

  3. Developing and Deploying Advanced Algorithms to Novel Supercomputing Hardware

    CERN Document Server

    Brunner, Robert J; Myers, Adam D

    2007-01-01

    The objective of our research is to demonstrate the practical usage and orders-of-magnitude speedup of real-world applications by using alternative technologies to support high performance computing. Currently, the main barrier to the widespread adoption of this technology is the lack of development tools and case studies, which typically impedes non-specialists who might otherwise develop applications that could leverage these technologies. By partnering with the Innovative Systems Laboratory at the National Center for Supercomputing Applications, we have obtained access to several novel technologies, including several Field-Programmable Gate Array (FPGA) systems, NVidia Graphics Processing Units (GPUs), and the STI Cell BE platform. Our goal is not only to demonstrate the capabilities of these systems, but also to serve as a guide for others to follow in our path. To date, we have explored the efficacy of the SRC-6 MAP-C and MAP-E and SGI RASC Athena and RC100 reconfigurable computing platforms in supporting a two-point co...

  4. Numerical infinities and infinitesimals in a new supercomputing framework

    Science.gov (United States)

    Sergeyev, Yaroslav D.

    2016-06-01

    Traditional computers are able to work numerically with finite numbers only. The Infinity Computer patented recently in USA and EU gets over this limitation. In fact, it is a computational device of a new kind able to work numerically not only with finite quantities but with infinities and infinitesimals, as well. The new supercomputing methodology is not related to non-standard analysis and does not use either Cantor's infinite cardinals or ordinals. It is founded on Euclid's Common Notion 5 saying `The whole is greater than the part'. This postulate is applied to all numbers (finite, infinite, and infinitesimal) and to all sets and processes (finite and infinite). It is shown that it becomes possible to write down finite, infinite, and infinitesimal numbers by a finite number of symbols as numerals belonging to a positional numeral system with an infinite radix described by a specific ad hoc introduced axiom. Numerous examples of the usage of the introduced computational tools are given during the lecture. In particular, algorithms for solving optimization problems and ODEs are considered among the computational applications of the Infinity Computer. Numerical experiments executed on a software prototype of the Infinity Computer are discussed.
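
    Our reading of the cited positional system (a hedged sketch based on the abstract's description, not a formula quoted from it) is that a quantity is written over the infinite radix ①, e.g.

    \[
    C \;=\; c_{m}\,①^{\,p_m} + \dots + c_{1}\,①^{\,p_1} + c_{0}\,①^{\,0} + c_{-1}\,①^{\,p_{-1}} + \dots,
    \]

    so that, for instance, $5①^{1} + 3①^{0} + 2①^{-1}$ collects an infinite part, a finite part, and an infinitesimal part in a single finite numeral.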

  5. Micro-mechanical Simulations of Soils using Massively Parallel Supercomputers

    Directory of Open Access Journals (Sweden)

    David W. Washington

    2004-06-01

    Full Text Available In this research a computer program, Trubal version 1.51, based on the Discrete Element Method, was converted to run on a Connection Machine (CM-5), a massively parallel supercomputer with 512 nodes, to expedite the computational times of simulating geotechnical boundary value problems. The dynamic memory algorithm in the Trubal program did not perform efficiently on the CM-2 machine with its Single Instruction Multiple Data (SIMD) architecture. This was due to the communication overhead involving global array reductions, global array broadcasts and random data movement. Therefore, the dynamic memory algorithm in the Trubal program was converted to a static memory arrangement and the program was successfully converted to run on CM-5 machines. The converted program was called "TRUBAL for Parallel Machines (TPM)." Simulating two physical triaxial experiments and comparing the simulation results with Trubal simulations validated the TPM program. With a 512-node CM-5 machine, TPM produced a nine-fold speedup, demonstrating the inherent parallelism within algorithms based on the Discrete Element Method.

  6. Astrophysical Supercomputing with GPUs: Critical Decisions for Early Adopters

    Science.gov (United States)

    Fluke, Christopher J.; Barnes, David G.; Barsdell, Benjamin R.; Hassan, Amr H.

    2011-01-01

    General-purpose computing on graphics processing units (GPGPU) is dramatically changing the landscape of high performance computing in astronomy. In this paper, we identify and investigate several key decision areas, with a goal of simplifying the early adoption of GPGPU in astronomy. We consider the merits of OpenCL as an open standard in order to reduce risks associated with coding in a native, vendor-specific programming environment, and present a GPU programming philosophy based on using brute force solutions. We assert that effective use of new GPU-based supercomputing facilities will require a change in approach from astronomers. This will likely include improved programming training, an increased need for software development best practice through the use of profiling and related optimisation tools, and a greater reliance on third-party code libraries. As with any new technology, those willing to take the risks and make the investment of time and effort to become early adopters of GPGPU in astronomy, stand to reap great benefits.

  7. Astrophysical Supercomputing with GPUs: Critical Decisions for Early Adopters

    CERN Document Server

    Fluke, Christopher J; Barsdell, Benjamin R; Hassan, Amr H

    2010-01-01

    General purpose computing on graphics processing units (GPGPU) is dramatically changing the landscape of high performance computing in astronomy. In this paper, we identify and investigate several key decision areas, with a goal of simplifying the early adoption of GPGPU in astronomy. We consider the merits of OpenCL as an open standard in order to reduce risks associated with coding in a native, vendor-specific programming environment, and present a GPU programming philosophy based on using brute force solutions. We assert that effective use of new GPU-based supercomputing facilities will require a change in approach from astronomers. This will likely include improved programming training, an increased need for software development best-practice through the use of profiling and related optimisation tools, and a greater reliance on third-party code libraries. As with any new technology, those willing to take the risks, and make the investment of time and effort to become early adopters of GPGPU in astronomy, s...

  8. Developing Fortran Code for Kriging on the Stampede Supercomputer

    Science.gov (United States)

    Hodgess, Erin

    2016-04-01

    Kriging is easily accessible in the open source statistical language R (R Core Team, 2015) in the gstat (Pebesma, 2004) package. It works very well, but can be slow on large data sets, particularly if the prediction space is large as well. We are working on the Stampede supercomputer at the Texas Advanced Computing Center to develop code using a combination of R and the Message Passing Interface (MPI) bindings to Fortran. We have a function similar to the autofitVariogram function found in the automap (Hiemstra et al., 2008) package and it is very effective. We are comparing R with MPI/Fortran, MPI/Fortran alone, and R with the Rmpi package, which uses bindings to C. We will present results from simulation studies and real-world examples. References: Hiemstra, P.H., Pebesma, E.J., Twenhofel, C.J.W. and G.B.M. Heuvelink, 2008. Real-time automatic interpolation of ambient gamma dose rates from the Dutch Radioactivity Monitoring Network. Computers and Geosciences, accepted for publication. Pebesma, E.J., 2004. Multivariable geostatistics in S: the gstat package. Computers and Geosciences, 30: 683-691. R Core Team, 2015. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/.
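
    As a minimal sketch of the kind of kernel such code parallelizes (the function name, binning, and data layout here are illustrative, not the authors' Fortran/MPI implementation), the classical empirical semivariogram can be written in a few lines of NumPy:

        import numpy as np

        def empirical_variogram(coords, values, n_bins=15, max_dist=None):
            """Matheron estimator of the semivariogram from scattered observations.

            coords : (n, d) array of sample locations
            values : (n,) array of observed values
            Returns bin centers and semivariance estimates."""
            diff = coords[:, None, :] - coords[None, :, :]
            dist = np.sqrt((diff ** 2).sum(-1))
            semi = 0.5 * (values[:, None] - values[None, :]) ** 2
            iu = np.triu_indices_from(dist, k=1)            # each pair counted once
            d, g = dist[iu], semi[iu]
            max_dist = max_dist if max_dist is not None else d.max()
            edges = np.linspace(0.0, max_dist, n_bins + 1)
            idx = np.clip(np.digitize(d, edges) - 1, 0, n_bins - 1)
            centers = 0.5 * (edges[:-1] + edges[1:])
            gamma = np.array([g[idx == b].mean() if np.any(idx == b) else np.nan
                              for b in range(n_bins)])
            return centers, gamma

        # Example with synthetic data; real use would feed monitoring-network coordinates and dose rates.
        rng = np.random.default_rng(1)
        xy = rng.uniform(size=(200, 2))
        z = np.sin(3 * xy[:, 0]) + 0.1 * rng.normal(size=200)
        centers, gamma = empirical_variogram(xy, z)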

  9. Using the multistage cube network topology in parallel supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Siegel, H.J.; Nation, W.G. (Purdue Univ., Lafayette, IN (USA). School of Electrical Engineering); Kruskal, C.P. (Maryland Univ., College Park, MD (USA). Dept. of Computer Science); Napolitano, L.M. Jr. (Sandia National Labs., Livermore, CA (USA))

    1989-12-01

    A variety of approaches to designing the interconnection network to support communications among the processors and memories of supercomputers employing large-scale parallel processing have been proposed and/or implemented. These approaches are often based on the multistage cube topology. This topology is the subject of much ongoing research and study because of the ways in which the multistage cube can be used. The attributes of the topology that make it useful are described. These include O(N log₂ N) cost for an N input/output network, decentralized control, a variety of implementation options, good data permuting capability to support single instruction stream/multiple data stream (SIMD) parallelism, good throughput to support multiple instruction stream/multiple data stream (MIMD) parallelism, and ability to be partitioned into independent subnetworks to support reconfigurable systems. Examples of existing systems that use multistage cube networks are overviewed. The multistage cube topology can be converted into a single-stage network by associating with each switch in the network a processor (and a memory). Properties of systems that use the multistage cube network in this way are also examined.
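
    As a toy sketch of why the destination address alone suffices for routing in such networks (assuming the textbook destination-tag scheme for an omega-type multistage cube; this is not code from any of the systems surveyed), each 2x2 switch simply inspects one bit of the destination, most significant bit first:

        def route_omega(src, dst, n_stages):
            """Simulate destination-tag routing through an omega (multistage cube) network
            with 2**n_stages inputs. Returns the node position after each stage."""
            mask = (1 << n_stages) - 1
            pos, trace = src, []
            for i in range(n_stages):
                # perfect shuffle between stages: rotate the address bits left by one
                pos = ((pos << 1) | (pos >> (n_stages - 1))) & mask
                # the 2x2 switch sets the low bit to the current destination bit (MSB first)
                bit = (dst >> (n_stages - 1 - i)) & 1
                pos = (pos & ~1) | bit
                trace.append(pos)
            assert pos == dst                 # the message always arrives in log2(N) hops
            return trace

        print(route_omega(src=5, dst=3, n_stages=3))   # intermediate positions for an 8-input network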

  10. Accelerating Science Impact through Big Data Workflow Management and Supercomputing

    Directory of Open Access Journals (Sweden)

    De K.

    2016-01-01

    Full Text Available The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. ATLAS, one of the largest collaborations ever assembled in the history of science, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. To manage the workflow for all data processing on hundreds of data centers the PanDA (Production and Distributed Analysis) Workload Management System is used. An ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF), is being realized within the BigPanDA and megaPanDA projects. These projects are now exploring how PanDA might be used for managing computing jobs that run on supercomputers including OLCF’s Titan and NRC-KI HPC2. The main idea is to reuse, as much as possible, existing components of the PanDA system that are already deployed on the LHC Grid for analysis of physics data. The next generation of PanDA will allow many data-intensive sciences employing a variety of computing platforms to benefit from ATLAS experience and proven tools in highly scalable processing.

  11. Supercomputers ready for use as discovery machines for neuroscience

    Directory of Open Access Journals (Sweden)

    Moritz eHelias

    2012-11-01

    Full Text Available NEST is a widely used tool to simulate biological spiking neural networks. Here we explain the improvements, guided by a mathematical model of memory consumption, that enable us to exploit for the first time the computational power of the K supercomputer for neuroscience. Multi-threaded components for wiring and simulation combine 8 cores per MPI process to achieve excellent scaling. K is capable of simulating networks corresponding to a brain area with 10^8 neurons and 10^12 synapses in the worst case scenario of random connectivity; for larger networks of the brain its hierarchical organization can be exploited to constrain the number of communicating computer nodes. We discuss the limits of the software technology, comparing maximum-filling scaling plots for K and the JUGENE BG/P system. The usability of these machines for network simulations has become comparable to running simulations on a single PC. Turn-around times in the range of minutes even for the largest systems enable a quasi-interactive working style and render simulations on this scale a practical tool for computational neuroscience.

  12. Supercomputers ready for use as discovery machines for neuroscience.

    Science.gov (United States)

    Helias, Moritz; Kunkel, Susanne; Masumoto, Gen; Igarashi, Jun; Eppler, Jochen Martin; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus

    2012-01-01

    NEST is a widely used tool to simulate biological spiking neural networks. Here we explain the improvements, guided by a mathematical model of memory consumption, that enable us to exploit for the first time the computational power of the K supercomputer for neuroscience. Multi-threaded components for wiring and simulation combine 8 cores per MPI process to achieve excellent scaling. K is capable of simulating networks corresponding to a brain area with 10(8) neurons and 10(12) synapses in the worst case scenario of random connectivity; for larger networks of the brain its hierarchical organization can be exploited to constrain the number of communicating computer nodes. We discuss the limits of the software technology, comparing maximum filling scaling plots for K and the JUGENE BG/P system. The usability of these machines for network simulations has become comparable to running simulations on a single PC. Turn-around times in the range of minutes even for the largest systems enable a quasi interactive working style and render simulations on this scale a practical tool for computational neuroscience.

  13. High Performance Distributed Computing in a Supercomputer Environment: Computational Services and Applications Issues

    Science.gov (United States)

    Kramer, Williams T. C.; Simon, Horst D.

    1994-01-01

    This tutorial is intended as a practical guide for the uninitiated to the main topics and themes of high-performance computing (HPC), with particular emphasis on distributed computing. The intent is first to provide some guidance and direction in the rapidly growing field of scientific computing using both massively parallel and traditional supercomputers. Because of their considerable potential computational power, loosely or tightly coupled clusters of workstations are increasingly considered a third alternative to both the more conventional supercomputers based on a small number of powerful vector processors and massively parallel processors. Even though many research issues concerning the effective use of workstation clusters and their integration into a large-scale production facility are still unresolved, such clusters are already used for production computing. In this tutorial we will draw on the unique experience gained at the NAS facility at NASA Ames Research Center, where over the last five years massively parallel supercomputers such as the Connection Machines CM-2 and CM-5 from Thinking Machines Corporation and the iPSC/860 (Touchstone Gamma Machine) and Paragon machines from Intel were used in a production supercomputer center alongside traditional vector supercomputers such as the Cray Y-MP and C90.

  14. Low-temperature dynamics of the Curie-Weiss Model: Periodic orbits, multiple histories, and loss of Gibbsianness

    CERN Document Server

    Ermolaev, Victor

    2010-01-01

    We consider the Curie-Weiss model at a given initial temperature in vanishing external field evolving under a Glauber spin-flip dynamics corresponding to a possibly different temperature. We study the limiting conditional probabilities and their continuity properties and discuss their set of points of discontinuity (bad points). We provide a complete analysis of the transition between Gibbsian and non-Gibbsian behavior as a function of time, extending earlier work for the case of independent spin-flip dynamics. For initial temperatures greater than one we prove that the time-evolved measure stays Gibbs forever, for any (possibly low) temperature of the dynamics. In the regime of heating to low temperatures from even lower temperatures, when the initial temperature is smaller than the temperature of the dynamics, and smaller than 1, we prove that the time-evolved measure is Gibbs initially and becomes non-Gibbs after a sharp transition time. We find that this regime is further divided into a region where only symmetr...

  15. Use of remodeled femoral head allograft for tarsal reconstruction in the treatment of müller-weiss disease.

    Science.gov (United States)

    Tan, Anouk; Smulders, Yvonne C M M; Zöphel, Oliver T

    2011-01-01

    Müller-Weiss disease (MWD), spontaneous avascular necrosis of the navicular in adults, is rare. Without treatment, it can result in permanent disability. Operative treatment is often required. MWD was suspected in a 51-year-old woman with spontaneous pain in her right foot. The radiologic tests showed a comma-shaped, deformed navicular and severe talonavicular necrosis and sclerosis. After excision of the necrotic navicular, a 5 × 5 × 3-cm defect appeared. A femoral head bone allograft was remodeled to fit this defect precisely. Autologous cancellous bone was also used. The allograft interposition arthrodesis was stabilized with a low contact plate. The histopathologic results showed avascular osteonecrosis, supporting the diagnosis of MWD. After 12 weeks of non-weight-bearing plaster cast immobilization, the radiographs showed consolidation and no osteolysis. At 6 months after surgery, she was fully weight-bearing. The low contact plate was removed because it impeded exercise. After 10 months, she was walking pain-free. At 14 months after surgery, her radiographs still showed good consolidation, with no sign of osteolysis. The use of a bone allograft to cover a tarsal defect could be a safe and effective operative treatment of MWD that has not yet been reported in English-language studies. This treatment also results in minimal donor site morbidity. Copyright © 2011 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.

  16. Roberge-Weiss transition in $N_f=2$ QCD with staggered fermions and $N_\\tau=6$

    CERN Document Server

    Philipsen, Owe

    2016-01-01

    The QCD phase diagram at imaginary chemical potential exhibits a rich structure and studying it can constrain the phase diagram at real values of the chemical potential. Moreover, at imaginary chemical potential standard numerical techniques based on importance sampling can be applied, since no sign problem is present. In the last decade, a first understanding of the QCD phase diagram at purely imaginary chemical potential has been developed, but most of it is so far based on investigations on coarse lattices ($N_\\tau=4$, $a=0.3\\:$fm). Considering the $N_f=2$ case, at the Roberge-Weiss critical value of the imaginary chemical potential, the chiral/deconfinement transition is first order for light/heavy quark masses and second order for intermediate values of the mass: there are then two tricritical masses, whose position strongly depends on the lattice spacing and on the discretization. On $N_\\tau=4$, we have the chiral $m_\\pi^{\\text{tric.}}=400\\:$MeV with unimproved staggered fermions and $m_\\pi^{\\text{tric....

  17. Large deviations of the finite-time magnetization of the Curie-Weiss random-field Ising model

    Science.gov (United States)

    Paga, Pierre; Kühn, Reimer

    2017-08-01

    We study the large deviations of the magnetization at some finite time in the Curie-Weiss random field Ising model with parallel updating. While relaxation dynamics in an infinite-time horizon gives rise to unique dynamical trajectories [specified by initial conditions and governed by first-order dynamics of the form $m_{t+1} = f(m_t)$], we observe that the introduction of a finite-time horizon and the specification of terminal conditions can generate a host of metastable solutions obeying second-order dynamics. We show that these solutions are governed by a Newtonian-like dynamics in discrete time which permits solutions in terms of both the first-order relaxation ("forward") dynamics and the backward dynamics $m_{t+1} = f^{-1}(m_t)$. Our approach allows us to classify trajectories for a given final magnetization as stable or metastable according to the value of the rate function associated with them. We find that in analogy to the Freidlin-Wentzell description of the stochastic dynamics of escape from metastable states, the dominant trajectories may switch between the two types (forward and backward) of first-order dynamics. Additionally, we show how to compute rate functions when uncertainty in the quenched disorder is introduced.
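
    As a pointer to the type of first-order map meant here (a sketch under the assumption of synchronous heat-bath updates; the paper's exact $f$ is not reproduced in this abstract), the mean-field magnetization of the Curie-Weiss random field model evolves as

    \[
    m_{t+1} \;=\; f(m_t), \qquad f(m) \;=\; \big\langle \tanh\,\beta\,(J m + h) \big\rangle_h ,
    \]

    which reduces to the familiar Curie-Weiss map $f(m) = \tanh(\beta J m)$ when the random field $h$ vanishes.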

  18. Zur Geschichte der Geowissenschaften im Museum für Naturkunde zu Berlin. Teil 3: Von A. G. Werner und R. J. Haüy zu C. S. Weiss – Der Weg von C. S. Weiss zum Direktor des Mineralogischen Museums der Berliner Universität

    OpenAIRE

    G. Hoppe

    2000-01-01

    The appointment of C. S. Weiss to the University of Berlin in 1810 was preceded by developments triggered by the crystallography of the Frenchman R. J. Haüy, in particular by his textbook of mineralogy. These developments were connected with the translation of that textbook and led to the qualification of C. S. Weiss as a mineralogist and crystallographer, as well as to the further development of crystallography within the doctrinal framework of mineralogy. Den Anstoß gab der mit dem Berl...

  19. Cyberdyn supercomputer - a tool for imaging geodinamic processes

    Science.gov (United States)

    Pomeran, Mihai; Manea, Vlad; Besutiu, Lucian; Zlagnean, Luminita

    2014-05-01

    More and more physical processes that develop within the deep interior of our planet, but have a significant impact on the Earth's shape and structure, are becoming subject to numerical modelling using high performance computing facilities. Nowadays, an increasing number of research centers worldwide decide to make use of such powerful and fast computers for simulating complex phenomena involving fluid dynamics and to gain deeper insight into intricate problems of Earth's evolution. With the CYBERDYN cybernetic infrastructure (CCI), the Solid Earth Dynamics Department in the Institute of Geodynamics of the Romanian Academy boldly steps into the 21st century by entering the research area of computational geodynamics. The project that made this advancement possible was jointly supported by the EU and the Romanian Government through the Structural and Cohesion Funds. It lasted for about three years, ending in October 2013. CCI is basically a modern high performance Beowulf-type supercomputer (HPCC), combined with a high performance visualization cluster (HPVC) and a GeoWall. The infrastructure is mainly structured around 1344 cores and 3 TB of RAM. The high speed interconnect is provided by a Qlogic InfiniBand switch, able to transfer up to 40 Gbps. The CCI storage component is a 40 TB Panasas NAS. The operating system is Linux (CentOS). For control and maintenance, the Bright Cluster Manager package is used. The SGE job scheduler manages the job queues. CCI has been designed for a theoretical peak performance of up to 11.2 TFlops. Speed tests showed that a high resolution numerical model (256 × 256 × 128 FEM elements) could be resolved at a mean computational speed of one time step per 30 seconds, while employing only a fraction (20%) of the computing power. After passing the mandatory tests, the CCI has been involved in numerical modelling of various scenarios related to the East Carpathians tectonic and geodynamic evolution, including the Neogene magmatic activity, and the intriguing

  20. Argonne Leadership Computing Facility 2011 annual report : Shaping future supercomputing.

    Energy Technology Data Exchange (ETDEWEB)

    Papka, M.; Messina, P.; Coffey, R.; Drugan, C. (LCF)

    2012-08-16

    The ALCF's Early Science Program aims to prepare key applications for the architecture and scale of Mira and to solidify libraries and infrastructure that will pave the way for other future production applications. Two billion core-hours have been allocated to 16 Early Science projects on Mira. The projects, in addition to promising delivery of exciting new science, are all based on state-of-the-art, petascale, parallel applications. The project teams, in collaboration with ALCF staff and IBM, have undertaken intensive efforts to adapt their software to take advantage of Mira's Blue Gene/Q architecture, which, in a number of ways, is a precursor to future high-performance-computing architecture. The Argonne Leadership Computing Facility (ALCF) enables transformative science that solves some of the most difficult challenges in biology, chemistry, energy, climate, materials, physics, and other scientific realms. Users partnering with ALCF staff have reached research milestones previously unattainable, due to the ALCF's world-class supercomputing resources and expertise in computation science. In 2011, the ALCF's commitment to providing outstanding science and leadership-class resources was honored with several prestigious awards. Research on multiscale brain blood flow simulations was named a Gordon Bell Prize finalist. Intrepid, the ALCF's BG/P system, ranked No. 1 on the Graph 500 list for the second consecutive year. The next-generation BG/Q prototype again topped the Green500 list. Skilled experts at the ALCF enable researchers to conduct breakthrough science on the Blue Gene system in key ways. The Catalyst Team matches project PIs with experienced computational scientists to maximize and accelerate research in their specific scientific domains. The Performance Engineering Team facilitates the effective use of applications on the Blue Gene system by assessing and improving the algorithms used by applications and the techniques used to

  1. Data mining method for anomaly detection in the supercomputer task flow

    Science.gov (United States)

    Voevodin, Vadim; Voevodin, Vladimir; Shaikhislamov, Denis; Nikitenko, Dmitry

    2016-10-01

    The efficiency of most supercomputer applications is extremely low. At the same time, the user rarely even suspects that their applications may be wasting computing resources. Software tools need to be developed to help detect inefficient applications and report them to the users. We suggest an algorithm for detecting anomalies in the supercomputer's task flow, based on data mining methods. System monitoring is used to calculate integral characteristics for every job executed, and the data is used as input for our classification method based on the Random Forest algorithm. The proposed approach can currently classify an application as one of three classes - normal, suspicious and definitely anomalous. The proposed approach has been demonstrated on actual applications running on the "Lomonosov" supercomputer.
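
    As a rough illustration of the classification step described above, the sketch below trains a Random Forest on per-job integral characteristics and assigns one of the three classes. The feature names, toy training data and labels are placeholders, not the authors' actual monitoring schema.

        # Sketch: classify finished jobs as normal / suspicious / anomalous from
        # integral monitoring characteristics using a Random Forest (scikit-learn).
        # Features and training data below are hypothetical placeholders.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        # Each row: [mean CPU load, mean memory usage, mean interconnect bytes/s, mean GPU load]
        X_train = np.array([
            [0.95, 0.60, 2.0e6, 0.80],   # labelled "normal"
            [0.10, 0.05, 1.0e3, 0.00],   # labelled "anomalous" (essentially idle job)
            [0.40, 0.90, 5.0e2, 0.00],   # labelled "suspicious"
        ])
        y_train = ["normal", "anomalous", "suspicious"]

        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X_train, y_train)

        # Classify a newly finished job from its integral characteristics.
        new_job = np.array([[0.12, 0.07, 8.0e2, 0.0]])
        print(clf.predict(new_job))        # e.g. ['anomalous']
        print(clf.predict_proba(new_job))  # per-class confidence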

  2. Case with a Nonreassuring Fetal Status Induced by Massive Hematemesis due to Mallory-Weiss Tear That Required Emergency Cesarean Section at 38 Weeks’ Gestation

    Directory of Open Access Journals (Sweden)

    Takashi Suzuki

    2015-01-01

    Full Text Available We describe a rare case of Mallory-Weiss tear with massive hematemesis at 38 weeks’ gestation. A 35-year-old woman presented with epigastralgia followed by massive hematemesis. An emergency endoscopy indicated active pulsatile bleeding at the esophagocardial junction. Although an emergency endoscopic hemostasis was successful, late decelerations without acceleration on cardiotocogram were observed. Therefore, the patient underwent emergency cesarean section, along with blood transfusion, following the endoscopic hemostasis. The hemoglobin level just before the operation was 5.1 g/dL. We suspected that massive hematemesis induced acute maternal anemia and hypovolemia, which resulted in a nonreassuring fetal status. Hence, urgent endoscopic hemostasis, adequate blood transfusion, and emergency cesarean section were needed. A Mallory-Weiss tear during the third trimester thus carries a risk of massive hematemesis, and urgent blood transfusion, emergency endoscopic hemostasis, and emergency cesarean section may be needed.

  3. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    Energy Technology Data Exchange (ETDEWEB)

    De, K [University of Texas at Arlington]; Jha, S [Rutgers University]; Klimentov, A [Brookhaven National Laboratory (BNL)]; Maeno, T [Brookhaven National Laboratory (BNL)]; Nilsson, P [Brookhaven National Laboratory (BNL)]; Oleynik, D [University of Texas at Arlington]; Panitkin, S [Brookhaven National Laboratory (BNL)]; Wells, Jack C [ORNL]; Wenaus, T [Brookhaven National Laboratory (BNL)]

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the MIRA supercomputer at the Argonne Leadership Computing Facility (ALCF), the supercomputer at the National Research Center Kurchatov Institute, IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation
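
    The light-weight MPI wrapper idea can be illustrated with a short mpi4py sketch in which each MPI rank launches one single-threaded payload on its own core, so that a single batch job fans many serial workloads out across the allocated nodes. The payload script name is a hypothetical placeholder, not part of the actual PanDA pilot.

        # Sketch of a light-weight MPI wrapper: one single-threaded payload per rank.
        # "./payload.sh <rank>" is a hypothetical worker script, not a PanDA component.
        import subprocess
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        # Each rank runs its own independent serial workload.
        ret = subprocess.call(["./payload.sh", str(rank)])

        # Gather return codes on rank 0 so the batch job can report overall success.
        codes = comm.gather(ret, root=0)
        if rank == 0:
            failed = [i for i, c in enumerate(codes) if c != 0]
            print("failed ranks:", failed if failed else "none")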

  4. Guide to dataflow supercomputing basic concepts, case studies, and a detailed example

    CERN Document Server

    Milutinovic, Veljko; Trifunovic, Nemanja; Giorgi, Roberto

    2015-01-01

    This unique text/reference describes an exciting and novel approach to supercomputing in the DataFlow paradigm. The major advantages and applications of this approach are clearly described, and a detailed explanation of the programming model is provided using simple yet effective examples. The work is developed from a series of lecture courses taught by the authors in more than 40 universities across more than 20 countries, and from research carried out by Maxeler Technologies, Inc. Topics and features: presents a thorough introduction to DataFlow supercomputing for big data problems; revie

  5. Supercomputing '91; Proceedings of the 4th Annual Conference on High Performance Computing, Albuquerque, NM, Nov. 18-22, 1991

    Science.gov (United States)

    1991-01-01

    Various papers on supercomputing are presented. The general topics addressed include: program analysis/data dependence, memory access, distributed memory code generation, numerical algorithms, supercomputer benchmarks, latency tolerance, parallel programming, applications, processor design, networks, performance tools, mapping and scheduling, characterization affecting performance, parallelism packaging, computing climate change, combinatorial algorithms, hardware and software performance issues, system issues. (No individual items are abstracted in this volume)

  6. High-Performance Computing: Industry Uses of Supercomputers and High-Speed Networks. Report to Congressional Requesters.

    Science.gov (United States)

    General Accounting Office, Washington, DC. Information Management and Technology Div.

    This report was prepared in response to a request for information on supercomputers and high-speed networks from the Senate Committee on Commerce, Science, and Transportation, and the House Committee on Science, Space, and Technology. The following information was requested: (1) examples of how various industries are using supercomputers to…

  7. Visit of Mr. Shlomo Benizri, Minister of Health, Israel

    CERN Multimedia

    Patrice Loïez

    2000-01-01

    No 11: Left to right: Mr. S. Benizri, Minister, Mr. Y. Buchris, Advisor to the Minister, G. Mikenberg, P. Jenni. No 15 Mr. Y. Buchris, P. Jenni, G. Mikenberg, Mr. Benizri, Mr. R. Keren (Head of Sec., Min. of Health), Prof. B. Rager (Chief scientist, Min. of Health). No 25: Mr. Y. Buchris, Mr. Y. Amikan (Deputy, Director General, Min. of Health), Mr. S. Benizri, H. Hoffmann, G. Mikenberg. No 31: Mr. S. Benizri, Mr. H. Hoffmann.

  8. Mallory-Weiss lesions

    DEFF Research Database (Denmark)

    Lange, J.; Jensen, Lone Susanne

    2010-01-01

    period and to investigate the prognosis of these patients. Material and methods: Data from the patient records of 49 patients with endoscopically verified MW admitted through a five-year period were analysed. At follow-up, 35 patients were alive and contacted. A total of 29 responded. The mean time...... to follow-up from admittance was 42.7 months (range: 10.1-77.1). Results: Haemostasis was achieved in all 49 patients. Sixteen received active therapy during the endoscopic procedure. Haemoglobin at admittance was lower (p = 0.008), the presence of bleeding stigmata higher (p ... of patients receiving blood transfusion higher (p = 0.01) among those receiving active therapy than among the group receiving no therapy at the time of their endoscopy. At follow-up, 50% of those receiving active therapy were dead (eight of 16) compared with 18% (six of 33) in the no-therapy group (p = 0...

  9. Argonne National Lab deploys Force10 networks' massively dense ethernet switch for supercomputing cluster

    CERN Multimedia

    2003-01-01

    "Force10 Networks, Inc. today announced that Argonne National Laboratory (Argonne, IL) has successfully deployed Force10 E-Series switch/routers to connect to the TeraGrid, the world's largest supercomputing grid, sponsored by the National Science Foundation (NSF)" (1/2 page).

  10. Design and performance characterization of electronic structure calculations on massively parallel supercomputers

    DEFF Research Database (Denmark)

    Romero, N. A.; Glinsvad, Christian; Larsen, Ask Hjorth

    2013-01-01

    Density functional theory (DFT) is the most widely employed electronic structure method because of its favorable scaling with system size and accuracy for a broad range of molecular and condensed-phase systems. The advent of massively parallel supercomputers has enhanced the scientific community's ...

  11. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2013-12-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.
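
    A hedged sketch of the general idea (not the paper's exact model): the predicted time per process is the memory traffic divided by the contended share of the node's sustained STREAM bandwidth, plus a parameterized latency/bandwidth communication term. All numbers below are illustrative.

        # Contention-based performance estimate (illustrative sketch only):
        # memory phase under contended STREAM bandwidth + a Hockney-style communication term.
        def predicted_time(bytes_moved_per_core, stream_bw_node, cores_per_node,
                           n_msgs, msg_bytes, latency, net_bw):
            # Each core sees roughly the node's sustained STREAM bandwidth
            # divided among the cores contending for it.
            bw_per_core = stream_bw_node / cores_per_node
            t_mem = bytes_moved_per_core / bw_per_core
            # Parameterized communication model: latency plus bandwidth term per message.
            t_comm = n_msgs * (latency + msg_bytes / net_bw)
            return t_mem + t_comm

        # Example: 8 GB of memory traffic per core on a 16-core node with 80 GB/s
        # sustained STREAM bandwidth, plus 1000 messages of 64 KB each.
        print(predicted_time(8e9, 80e9, 16, 1000, 64e3, 2e-6, 1.5e9))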

  12. The impact of the U.S. supercomputing initiative will be global

    Energy Technology Data Exchange (ETDEWEB)

    Crawford, Dona [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-15

    Last July, President Obama issued an executive order that created a coordinated federal strategy for HPC research, development, and deployment called the U.S. National Strategic Computing Initiative (NSCI). However, this bold, necessary step toward building the next generation of supercomputers has inaugurated a new era for U.S. high performance computing (HPC).

  13. Congressional Panel Seeks To Curb Access of Foreign Students to U.S. Supercomputers.

    Science.gov (United States)

    Kiernan, Vincent

    1999-01-01

    Fearing security problems, a congressional committee on Chinese espionage recommends that foreign students and other foreign nationals be barred from using supercomputers at national laboratories unless they first obtain export licenses from the federal government. University officials dispute the data on which the report is based and find the…

  14. [Experience in simulating the structural and dynamic features of small proteins using table supercomputers].

    Science.gov (United States)

    Kondrat'ev, M S; Kabanov, A V; Komarov, V M; Khechinashvili, N N; Samchenko, A A

    2011-01-01

    The results of theoretical studies of the structural and dynamic features of peptides and small proteins have been presented that were carried out by quantum chemical and molecular dynamics methods in high-performance graphic stations, "table supercomputers", using distributed calculations by the CUDA technology.

  15. Interactive steering of supercomputing simulation for aerodynamic noise radiated from square cylinder; Supercomputer wo mochiita steering system ni yoru kakuchu kara hoshasareru kurikion no suchi kaiseki

    Energy Technology Data Exchange (ETDEWEB)

    Yokono, Y. [Toshiba Corp., Tokyo (Japan)]; Fujita, H. [Tokyo Inst. of Technology, Tokyo (Japan). Precision Engineering Lab.]

    1995-03-25

    This paper describes extensive computer simulation for aerodynamic noise radiated from a square cylinder using an interactive steering supercomputing simulation system. The unsteady incompressible three-dimensional Navier-Stokes equations are solved by the finite volume method using a steering system which can visualize the numerical process during calculation and alter the numerical parameter. Using the fluctuating surface pressure of the square cylinder, the farfield sound pressure is calculated based on Lighthill-Curle's equation. The results are compared with those of low noise wind tunnel experiments, and good agreement is observed for the peak spectrum frequency of the sound pressure level. 14 refs., 10 figs.
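
    For orientation, the compact-body, far-field form of the Lighthill-Curle result used in this kind of calculation expresses the radiated pressure through the retarded-time derivative of the unsteady force that the cylinder exerts on the fluid, the force being the integral of the fluctuating surface pressure. This is the standard textbook form, not necessarily the exact expression implemented by the authors:

        p'(\mathbf{x},t) \approx \frac{x_i}{4\pi c_0 |\mathbf{x}|^2}
            \frac{\partial F_i}{\partial t}\left(t-\frac{|\mathbf{x}|}{c_0}\right),
        \qquad
        F_i(t) = \int_S p(\mathbf{y},t)\, n_i\, \mathrm{d}S(\mathbf{y}),

    where c_0 is the speed of sound, |\mathbf{x}| the observer distance and n_i the surface normal; the sound pressure spectrum then follows from the force history recorded during the simulation.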

  16. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    Science.gov (United States)

    Klimentov, A.; De, K.; Jha, S.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Wells, J.; Wenaus, T.

    2016-10-01

    The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than the Grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and it has been in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the

  17. The TianHe-1A Supercomputer: Its Hardware and Software

    Institute of Scientific and Technical Information of China (English)

    Xue-Jun Yang; Xiang-Ke Liao; Kai Lu; Qing-Feng Hu; Jun-Qiang Song; Jin-Shu Su

    2011-01-01

    This paper presents an overview of the TianHe-1A (TH-1A) supercomputer, which was built by the National University of Defense Technology of China (NUDT). TH-1A adopts a hybrid architecture integrating CPUs and GPUs, and its interconnect network is a proprietary high-speed communication network. The theoretical peak performance of TH-1A is 4700 TFlops, and its LINPACK test result is 2566 TFlops. It was ranked No. 1 on the TOP500 list released in November 2010. TH-1A is now deployed in the National Supercomputer Center in Tianjin and provides high performance computing services. TH-1A has played an important role in many applications, such as oil exploration, weather forecasting, and bio-medical research.

  18. Explaining the Gap between Theoretical Peak Performance and Real Performance for Supercomputer Architectures

    Directory of Open Access Journals (Sweden)

    W. Schönauer

    1994-01-01

    Full Text Available The basic architectures of vector and parallel computers and their properties are presented, followed by a discussion of memory size and arithmetic operations in the context of memory bandwidth. For a single operation, micromeasurements of the vector triad for the IBM 3090 VF and the CRAY Y-MP/8 are presented, revealing in detail the losses for this operation. The global performance of a whole supercomputer is then considered by identifying reduction factors that reduce the theoretical peak performance to the poor real performance. The responsibilities of the manufacturer and of the user for these losses are discussed. The price-performance ratio for different architectures as of January 1991 is briefly mentioned. Finally a user-friendly architecture for a supercomputer is proposed.
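
    The reduction-factor view of the gap can be written compactly. As an illustrative formulation in the spirit of the paper (not its exact notation), if R_peak is the theoretical peak performance and f_1, ..., f_n are the individual reduction factors (memory bandwidth limits, vector length, communication, load imbalance, and so on), then

        R_{\mathrm{real}} = R_{\mathrm{peak}} \prod_{i=1}^{n} f_i, \qquad 0 < f_i \le 1,

    so a handful of moderate factors compounds quickly: for example f_1 = 0.5, f_2 = 0.6, f_3 = 0.4 and f_4 = 0.7 already give R_real of roughly 0.084 R_peak, i.e. well under 10% of peak.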

  19. HACC: Simulating Sky Surveys on State-of-the-Art Supercomputing Architectures

    CERN Document Server

    Habib, Salman; Finkel, Hal; Frontiere, Nicholas; Heitmann, Katrin; Daniel, David; Fasel, Patricia; Morozov, Vitali; Zagaris, George; Peterka, Tom; Vishwanath, Venkatram; Lukic, Zarija; Sehrish, Saba; Liao, Wei-keng

    2014-01-01

    Current and future surveys of large-scale cosmic structure are associated with a massive and complex datastream to study, characterize, and ultimately understand the physics behind the two major components of the 'Dark Universe', dark energy and dark matter. In addition, the surveys also probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework, designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of prog...

  20. Direct exploitation of a top 500 Supercomputer for Analysis of CMS Data

    Science.gov (United States)

    Cabrillo, I.; Cabellos, L.; Marco, J.; Fernandez, J.; Gonzalez, I.

    2014-06-01

    The Altamira Supercomputer, hosted at the Instituto de Fisica de Cantabria (IFCA), entered operation in summer 2012. Its last-generation FDR InfiniBand network, used for message passing in parallel jobs, also supports the connection to General Parallel File System (GPFS) servers, enabling efficient simultaneous processing of multiple data-demanding jobs. Sharing a common GPFS system and a single LDAP-based identification with the existing Grid clusters at IFCA allows CMS researchers to exploit the large instantaneous capacity of this supercomputer to execute analysis jobs. The detailed experience of this opportunistic use for skimming and final analysis of CMS 2012 data for a specific physics channel, resulting in an order-of-magnitude reduction of the waiting time, is presented.

  1. Sandia's network for Supercomputing '95: Validating the progress of Asynchronous Transfer Mode (ATM) switching

    Energy Technology Data Exchange (ETDEWEB)

    Pratt, T.J.; Vahle, O.; Gossage, S.A.

    1996-04-01

    The Advanced Networking Integration Department at Sandia National Laboratories has used the annual Supercomputing conference sponsored by the IEEE and ACM for the past three years as a forum to demonstrate and focus communication and networking developments. For Supercomputing '95, Sandia elected: to demonstrate the functionality and capability of an AT&T Globeview 20 Gbps Asynchronous Transfer Mode (ATM) switch, which represents the core of Sandia's corporate network; to build and utilize a three-node 622 megabit per second Paragon network; and to extend the DOD's ACTS ATM Internet from Sandia, New Mexico to the conference's show floor in San Diego, California, for video demonstrations. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support Sandia's overall strategies in ATM networking.

  2. Supercomputer and cluster performance modeling and analysis efforts:2004-2006.

    Energy Technology Data Exchange (ETDEWEB)

    Sturtevant, Judith E.; Ganti, Anand; Meyer, Harold (Hal) Edward; Stevenson, Joel O.; Benner, Robert E., Jr.; Goudy, Susan Phelps; Doerfler, Douglas W.; Domino, Stefan Paul; Taylor, Mark A.; Malins, Robert Joseph; Scott, Ryan T.; Barnette, Daniel Wayne; Rajan, Mahesh; Ang, James Alfred; Black, Amalia Rebecca; Laub, Thomas William; Vaughan, Courtenay Thomas; Franke, Brian Claude

    2007-02-01

    This report describes efforts by the Performance Modeling and Analysis Team to investigate performance characteristics of Sandia's engineering and scientific applications on the ASC capability and advanced architecture supercomputers, and Sandia's capacity Linux clusters. Efforts to model various aspects of these computers are also discussed. The goals of these efforts are to quantify and compare Sandia's supercomputer and cluster performance characteristics; to reveal strengths and weaknesses in such systems; and to predict performance characteristics of, and provide guidelines for, future acquisitions and follow-on systems. Described herein are the results obtained from running benchmarks and applications to extract performance characteristics and comparisons, as well as modeling efforts, obtained during the time period 2004-2006. The format of the report, with hypertext links to numerous additional documents, purposefully minimizes the document size needed to disseminate the extensive results from our research.

  3. BSMBench: a flexible and scalable supercomputer benchmark from computational particle physics

    CERN Document Server

    Bennett, Ed; Del Debbio, Luigi; Jordan, Kirk; Patella, Agostino; Pica, Claudio; Rago, Antonio

    2016-01-01

    Benchmarking plays a central role in the evaluation of High Performance Computing architectures. Several benchmarks have been designed that allow users to stress various components of supercomputers. In order for the figures they provide to be useful, benchmarks need to be representative of the most common real-world scenarios. In this work, we introduce BSMBench, a benchmarking suite derived from Monte Carlo code used in computational particle physics. The advantage of this suite (which can be freely downloaded from http://www.bsmbench.org/) over others is the capacity to vary the relative importance of computation and communication. This enables the tests to simulate various practical situations. To showcase BSMBench, we perform a wide range of tests on various architectures, from desktop computers to state-of-the-art supercomputers, and discuss the corresponding results. Possible future directions of development of the benchmark are also outlined.
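
    The central knob described above, varying the relative weight of computation and communication, can be mimicked with a toy mpi4py kernel in which each iteration performs a tunable amount of local floating-point work followed by a ring exchange. This illustrates the idea only; it is not BSMBench itself.

        # Toy benchmark with a tunable compute/communication ratio (illustrative only).
        import time
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        WORK = 200                       # relative weight of local computation (tunable)
        local = np.random.rand(1 << 16)
        halo = np.empty_like(local)

        t0 = time.time()
        for _ in range(100):
            for _ in range(WORK):        # compute phase: local floating-point work
                local = np.sqrt(local * local + 1.0)
            right = (rank + 1) % size    # communication phase: nearest-neighbour ring exchange
            left = (rank - 1) % size
            comm.Sendrecv(local, dest=right, recvbuf=halo, source=left)
        if rank == 0:
            print("elapsed:", time.time() - t0, "s with WORK =", WORK)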

  4. Towards 21st century stellar models: Star clusters, supercomputing and asteroseismology

    Science.gov (United States)

    Campbell, S. W.; Constantino, T. N.; D'Orazi, V.; Meakin, C.; Stello, D.; Christensen-Dalsgaard, J.; Kuehn, C.; De Silva, G. M.; Arnett, W. D.; Lattanzio, J. C.; MacLean, B. T.

    2016-09-01

    Stellar models provide a vital basis for many aspects of astronomy and astrophysics. Recent advances in observational astronomy - through asteroseismology, precision photometry, high-resolution spectroscopy, and large-scale surveys - are placing stellar models under greater quantitative scrutiny than ever. The model limitations are being exposed and the next generation of stellar models is needed as soon as possible. The current uncertainties in the models propagate to the later phases of stellar evolution, hindering our understanding of stellar populations and chemical evolution. Here we give a brief overview of the evolution, importance, and substantial uncertainties of core helium burning stars in particular and then briefly discuss a range of methods, both theoretical and observational, that we are using to advance the modelling. This study uses observational data from HST, VLT, AAT, Kepler, and supercomputing resources in Australia provided by the National Computational Infrastructure (NCI) and Pawsey Supercomputing Centre.

  5. Analyzing the Interplay of Failures and Workload on a Leadership-Class Supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Meneses, Esteban [University of Pittsburgh]; Ni, Xiang [University of Illinois at Urbana-Champaign]; Jones, Terry R [ORNL]; Maxwell, Don E [ORNL]

    2015-01-01

    The unprecedented computational power of current supercomputers now makes possible the exploration of complex problems in many scientific fields, from genomic analysis to computational fluid dynamics. Modern machines are powerful because they are massive: they assemble millions of cores and a huge quantity of disks, cards, routers, and other components. But it is precisely the size of these machines that clouds the future of supercomputing. A system that comprises many components has a high chance to fail, and fail often. In order to make the next generation of supercomputers usable, it is imperative to use some type of fault tolerance platform to run applications on large machines. Most fault tolerance strategies can be optimized for the peculiarities of each system and boost efficacy by keeping the system productive. In this paper, we aim to understand how failure characterization can improve resilience in several layers of the software stack: applications, runtime systems, and job schedulers. We examine the Titan supercomputer, one of the fastest systems in the world. We analyze a full year of Titan in production and distill the failure patterns of the machine. By looking into Titan's log files and using the criteria of experts, we provide a detailed description of the types of failures. In addition, we inspect the job submission files and describe how the system is used. Using those two sources, we cross-correlate failures in the machine with executing jobs and provide a picture of how failures affect the user experience. We believe such characterization is fundamental in developing appropriate fault tolerance solutions for Cray systems similar to Titan.
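
    The cross-correlation of failure records with executing jobs described above reduces to interval matching; a minimal sketch, with made-up record formats since the actual Titan log schema is not reproduced here, is:

        # Sketch: attribute machine failure events to the jobs that were running
        # on the affected node when they occurred. Record formats are placeholders.
        from datetime import datetime

        failures = [  # (timestamp, node, failure type)
            (datetime(2014, 3, 2, 14, 5), "c12-3c0s7n2", "GPU_DBE"),
        ]
        jobs = [      # (job id, start, end, set of allocated nodes)
            ("1234567", datetime(2014, 3, 2, 13, 0), datetime(2014, 3, 2, 15, 0),
             {"c12-3c0s7n2", "c12-3c0s7n3"}),
        ]

        def correlate(failures, jobs):
            hits = []
            for when, node, ftype in failures:
                for job_id, start, end, nodes in jobs:
                    # A failure is attributed to a job if it happened inside the
                    # job's execution window on one of the job's nodes.
                    if start <= when <= end and node in nodes:
                        hits.append((job_id, ftype, when))
            return hits

        print(correlate(failures, jobs))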

  6. Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.

    Energy Technology Data Exchange (ETDEWEB)

    Younge, Andrew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]; Pedretti, Kevin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]; Grant, Ryan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]; Brightwell, Ron [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]

    2017-05-01

    While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community not only as a processing component to large HPC simulations, but also as standalone scientific tools for knowledge discovery. With the path towards Exascale, new HPC runtime systems are also emerging in a way that differs from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputing needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on an XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, evaluating both single-node performance and weak scaling of a 32-node virtual cluster. Overall, we find that single-node performance of our solution using KVM on a Cray is very efficient, with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.
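
    As a rough illustration of the provisioning step (managing VMs directly on compute nodes with libvirt and QEMU), the libvirt Python bindings can start a transient KVM guest from a domain XML description. The XML below is a generic minimal example; it does not reflect the Cray-specific Ethernet-over-Aries configuration described in the paper, and the paths and sizes are placeholders.

        # Minimal libvirt sketch: start a transient KVM guest on the local node.
        import libvirt

        domain_xml = """
        <domain type='kvm'>
          <name>vc-node-0</name>
          <memory unit='GiB'>4</memory>
          <vcpu>4</vcpu>
          <os><type arch='x86_64'>hvm</type><boot dev='hd'/></os>
          <devices>
            <disk type='file' device='disk'>
              <driver name='qemu' type='qcow2'/>
              <source file='/scratch/images/vc-node-0.qcow2'/>
              <target dev='vda' bus='virtio'/>
            </disk>
          </devices>
        </domain>
        """

        conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
        dom = conn.createXML(domain_xml, 0)     # define and start a transient domain
        print("started guest:", dom.name())
        conn.close()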

  7. TSP:A Heterogeneous Multiprocessor Supercomputing System Based on i860XP

    Institute of Scientific and Technical Information of China (English)

    黄国勇; 李三立

    1994-01-01

    Numerous new RISC processors provide support for supercomputing. By using the “mini-Cray” i860 superscalar processor, an add-on board has been developed to boost the performance of a real-time system. A parallel heterogeneous multiprocessor supercomputing system, TSP, is constructed. In this paper, we present the system design considerations and describe the architecture of the TSP and its features.

  8. US Department of Energy High School Student Supercomputing Honors Program: A follow-up assessment

    Energy Technology Data Exchange (ETDEWEB)

    1987-01-01

    The US DOE High School Student Supercomputing Honors Program was designed to recognize high school students with superior skills in mathematics and computer science and to provide them with formal training and experience with advanced computer equipment. This document reports on the participants who attended the first such program, which was held at the National Magnetic Fusion Energy Computer Center at the Lawrence Livermore National Laboratory (LLNL) during August 1985.

  9. Application of Supercomputer Technologies for Simulation Of Socio-Economic Systems

    Directory of Open Access Journals (Sweden)

    Vladimir Valentinovich Okrepilov

    2015-06-01

    Full Text Available To date, extensive experience has been accumulated in the investigation of problems related to quality, the assessment of management systems, and the modeling of economic system sustainability. The studies performed have created a basis for the development of a new research area, Economics of Quality. Its tools make it possible to use model simulation to construct mathematical models that adequately reflect the role of quality in the natural, technical and social regularities governing the functioning of complex socio-economic systems. The extensive application and development of models, as well as system modeling using supercomputer technologies, will, in our firm belief, bring the research of socio-economic systems to an essentially new level. Moreover, the current research makes a significant contribution to the model simulation of multi-agent social systems and, no less importantly, belongs to the priority areas of science and technology development in our country. This article is devoted to the application of supercomputer technologies in the social sciences, first of all with regard to the technical realization of large-scale agent-focused models (AFM). The essence of this tool is that, owing to the increase in computer power, it has become possible to describe the behavior of many separate fragments of a complex system, as socio-economic systems are. The article also discusses the experience of foreign scientists and practitioners in running AFM on supercomputers, and analyzes the example of an AFM developed at CEMI RAS, along with the stages and methods of efficiently mapping the computational kernel of a multi-agent system onto the architecture of a modern supercomputer. Experiments based on model simulation for forecasting the population of St. Petersburg according to three scenarios, as one of the major factors influencing the development of the socio-economic system and the quality of life of the population, are presented in the

  10. Design of multiple sequence alignment algorithms on parallel, distributed memory supercomputers.

    Science.gov (United States)

    Church, Philip C; Goscinski, Andrzej; Holt, Kathryn; Inouye, Michael; Ghoting, Amol; Makarychev, Konstantin; Reumann, Matthias

    2011-01-01

    The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters without parallelism. Hence, we propose the design of a multiple sequence alignment algorithm on massively parallel, distributed memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures including sequences and sorted k-mer lists on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E.coli, Shigella and S.pneumoniae genomes. Our implementation returns results matching those of the original algorithm but in 1/2 the time and with 1/4 the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within reasonable time.
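
    A sequential toy version of the sorted k-mer list data structure mentioned above (the real implementation distributes these lists across BG/P compute nodes; only the idea is shown here):

        # Sketch: build sorted k-mer lists, the kind of index used to seed anchors
        # between genomes. Sequential toy version of the idea only.
        def sorted_kmer_list(seq, k):
            # Collect every k-mer with its start position, then sort lexicographically
            # so shared k-mers of two genomes can be found by merging the sorted lists.
            kmers = [(seq[i:i + k], i) for i in range(len(seq) - k + 1)]
            kmers.sort()
            return kmers

        genome_a = "ACGTACGGACGT"
        genome_b = "TTACGTACGAAT"
        k = 4
        list_a = sorted_kmer_list(genome_a, k)
        list_b = sorted_kmer_list(genome_b, k)

        # Shared k-mers (candidate alignment anchors) via a set intersection.
        shared = {km for km, _ in list_a} & {km for km, _ in list_b}
        print(sorted(shared))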

  11. Building more powerful less expensive supercomputers using Processing-In-Memory (PIM) LDRD final report.

    Energy Technology Data Exchange (ETDEWEB)

    Murphy, Richard C.

    2009-09-01

    This report details the accomplishments of the 'Building More Powerful Less Expensive Supercomputers Using Processing-In-Memory (PIM)' LDRD ('PIM LDRD', number 105809) for FY07-FY09. Latency dominates all levels of supercomputer design. Within a node, increasing memory latency, relative to processor cycle time, limits CPU performance. Between nodes, the same increase in relative latency impacts scalability. Processing-In-Memory (PIM) is an architecture that directly addresses this problem using enhanced chip fabrication technology and machine organization. PIMs combine high-speed logic and dense, low-latency, high-bandwidth DRAM, and lightweight threads that tolerate latency by performing useful work during memory transactions. This work examines the potential of PIM-based architectures to support mission-critical Sandia applications and an emerging class of more data-intensive informatics applications. This work has resulted in a stronger architecture/implementation collaboration between 1400 and 1700. Additionally, key technology components have impacted vendor roadmaps, and we are in the process of pursuing these new collaborations. This work has the potential to impact future supercomputer design and construction, reducing power and increasing performance. This final report is organized as follows: this summary chapter discusses the impact of the project (Section 1), provides an enumeration of publications and other public discussion of the work (Section 1), and concludes with a discussion of future work and impact from the project (Section 1). The appendix contains reprints of the refereed publications resulting from this work.

  12. Visualization on supercomputing platform level II ASC milestone (3537-1B) results from Sandia.

    Energy Technology Data Exchange (ETDEWEB)

    Geveci, Berk (Kitware, Inc., Clifton Park, NY); Fabian, Nathan; Marion, Patrick (Kitware, Inc., Clifton Park, NY); Moreland, Kenneth D.

    2010-09-01

    This report provides documentation for the completion of the Sandia portion of the ASC Level II Visualization on the platform milestone. This ASC Level II milestone is a joint milestone between Sandia National Laboratories and Los Alamos National Laboratories. This milestone contains functionality required for performing visualization directly on a supercomputing platform, which is necessary for peta-scale visualization. Sandia's contribution concerns in-situ visualization, running a visualization in tandem with a solver. Visualization and analysis of petascale data is limited by several factors which must be addressed as ACES delivers the Cielo platform. Two primary difficulties are: (1) Performance of interactive rendering, which is most computationally intensive portion of the visualization process. For terascale platforms, commodity clusters with graphics processors(GPUs) have been used for interactive rendering. For petascale platforms, visualization and rendering may be able to run efficiently on the supercomputer platform itself. (2) I/O bandwidth, which limits how much information can be written to disk. If we simply analyze the sparse information that is saved to disk we miss the opportunity to analyze the rich information produced every timestep by the simulation. For the first issue, we are pursuing in-situ analysis, in which simulations are coupled directly with analysis libraries at runtime. This milestone will evaluate the visualization and rendering performance of current and next generation supercomputers in contrast to GPU-based visualization clusters, and evaluate the performance of common analysis libraries coupled with the simulation that analyze and write data to disk during a running simulation. This milestone will explore, evaluate and advance the maturity level of these technologies and their applicability to problems of interest to the ASC program. Scientific simulation on parallel supercomputers is traditionally performed in four

  13. Weiss Max, In the Shadow of Sectarianism. Law, Shi‘ism and the Making of Modern Lebanon, Cambridge, Harvard University Press, 2010, 341 p.

    Directory of Open Access Journals (Sweden)

    Sabrina Mervin

    2012-06-01

    Full Text Available Taking up the title of an article well known to historians of Shi'ism (Hodgson 1955), Max Weiss asks, in the prologue of this book, "How did the Lebanese Shi'a become Sectarian?". He continues with an explanation of what he means by sectarianism, and then examines its relationship to modernity, drawing on the work of Ussama Makdisi and Ayesha Jalal. We will not enter here into debates over the definition and translation of the term ṭâ'ifiyya used by the...

  14. Combined Use of Clips and Nylon Snare (“Tulip-Bundle” as a Rescue Endoscopic Bleeding Control in a Mallory-Weiss Syndrome

    Directory of Open Access Journals (Sweden)

    Hrvoje Ivekovic

    2014-01-01

    Full Text Available Mallory-Weiss syndrome (MWS) accounts for 6–14% of all cases of upper gastrointestinal bleeding. Prognosis of patients with MWS is generally good, with a benign course and rare recurrence of bleeding. However, no strict recommendations exist in regard to the mode of action after a failure of primary endoscopic hemostasis. We report a case of an 83-year-old male with MWS and rebleeding after the initial endoscopic treatment with epinephrine and clips. The final endoscopic control of bleeding was achieved by a combined application of clips and a nylon snare in a “tulip-bundle” fashion. The patient had an uneventful postprocedural clinical course and was discharged from the hospital five days later. To the best of our knowledge, this is the first case report showing the “tulip-bundle” technique as a rescue endoscopic bleeding control in the esophagus.

  15. Combined use of clips and nylon snare ("tulip-bundle") as a rescue endoscopic bleeding control in a Mallory-Weiss syndrome.

    Science.gov (United States)

    Ivekovic, Hrvoje; Radulovic, Bojana; Jankovic, Suzana; Markos, Pave; Rustemovic, Nadan

    2014-01-01

    Mallory-Weiss syndrome (MWS) accounts for 6-14% of all cases of upper gastrointestinal bleeding. Prognosis of patients with MWS is generally good, with a benign course and rare recurrence of bleeding. However, no strict recommendations exist in regard to the mode of action after a failure of primary endoscopic hemostasis. We report a case of an 83-year-old male with MWS and rebleeding after the initial endoscopic treatment with epinephrine and clips. The final endoscopic control of bleeding was achieved by a combined application of clips and a nylon snare in a "tulip-bundle" fashion. The patient had an uneventful postprocedural clinical course and was discharged from the hospital five days later. To the best of our knowledge, this is the first case report showing the "tulip-bundle" technique as a rescue endoscopic bleeding control in the esophagus.

  16. Jackson-Weiss syndrome: Clinical and radiological findings in a large kindred and exclusion of the gene from 7p21 and 5qter

    Energy Technology Data Exchange (ETDEWEB)

    Ades, L.C.; Haan, E.A.; Mulley, J.C.; Senga, I.P.; Morris, L.L.; David, D.J. [Women's and Children's Hospital, North Adelaide (Australia)]

    1994-06-01

    We describe the clinical and radiological manifestations of the Jackson-Weiss syndrome (JWS) in a large South Australian kindred. Radiological abnormalities not previously described in the hands include coned epiphyses, distal and middle phalangeal hypoplasia, and carpal bone malsegmentation. New radiological findings in the feet include coned epiphyses, hallux valgus, phalangeal, tarso-navicular and calcaneo-navicular fusions, and uniform absence of metatarsal fusions. Absence of linkage to eight markers along the short arm of chromosome 7 excluded allelism between JWS and Saethre-Chotzen syndrome at 7p21. No linkage was detected to D5S211, excluding allelism to another recently described cephalosyndactyly syndrome mapping to 5qter. 35 refs., 5 figs., 4 tabs.

  17. HACC: Simulating sky surveys on state-of-the-art supercomputing architectures

    Science.gov (United States)

    Habib, Salman; Pope, Adrian; Finkel, Hal; Frontiere, Nicholas; Heitmann, Katrin; Daniel, David; Fasel, Patricia; Morozov, Vitali; Zagaris, George; Peterka, Tom; Vishwanath, Venkatram; Lukić, Zarija; Sehrish, Saba; Liao, Wei-keng

    2016-01-01

    Current and future surveys of large-scale cosmic structure are associated with a massive and complex datastream to study, characterize, and ultimately understand the physics behind the two major components of the 'Dark Universe', dark energy and dark matter. In addition, the surveys also probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework, designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of programming models and algorithms. It has been demonstrated at scale on Cell- and GPU-accelerated systems, standard multi-core node clusters, and Blue Gene systems. HACC's design allows for ease of portability, and at the same time, high levels of sustained performance on the fastest supercomputers available. We present a description of the design philosophy of HACC, the underlying algorithms and code structure, and outline implementation details for several specific architectures. We show selected accuracy and performance results from some of the largest high resolution cosmological simulations so far performed, including benchmarks evolving more than 3.6 trillion particles.

  18. Integration of PanDA workload management system with Titan supercomputer at OLCF

    Science.gov (United States)

    De, K.; Klimentov, A.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.

    2015-12-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, the future LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multicore worker nodes. It also gives PanDA new capability to collect, in real time, information about unused worker nodes on Titan, which allows precise definition of the size and duration of jobs submitted to Titan according to available free resources. This capability significantly reduces PanDA job wait time while improving Titan's utilization efficiency. This implementation was tested with a variety of Monte-Carlo workloads on Titan and is being tested on several other supercomputing platforms. Notice: This manuscript has been authored by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
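
    The capability of sizing jobs according to the available free resources amounts to fitting work into backfill windows; a schematic version of that sizing logic is sketched below, where the function that reports free windows is a hypothetical stand-in for the real interface to Titan's batch system.

        # Sketch of backfill-aware job sizing. query_backfill_windows() is a
        # hypothetical stand-in for querying the scheduler about unused nodes.
        def query_backfill_windows():
            # Pretend the scheduler reports (free nodes, minutes available) windows.
            return [(300, 25), (1200, 110), (50, 400)]

        def size_job(windows, min_nodes=100, max_walltime=120, margin=0.9):
            best = None
            for nodes, minutes in windows:
                if nodes < min_nodes:
                    continue
                # Leave a safety margin so the job ends before the window closes.
                walltime = min(int(minutes * margin), max_walltime)
                if walltime <= 0:
                    continue
                score = nodes * walltime      # crude "work delivered" metric
                if best is None or score > best[2]:
                    best = (nodes, walltime, score)
            return best

        nodes, walltime, _ = size_job(query_backfill_windows())
        print(f"submit with {nodes} nodes for {walltime} minutes")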

  19. Development of the general interpolants method for the CYBER 200 series of supercomputers

    Science.gov (United States)

    Stalnaker, J. F.; Robinson, M. A.; Spradley, L. W.; Kurzius, S. C.; Thoenes, J.

    1988-01-01

    The General Interpolants Method (GIM) is a 3-D, time-dependent, hybrid procedure for generating numerical analogs of the conservation laws. This study is directed toward the development and application of the GIM computer code for fluid dynamic research applications as implemented for the Cyber 200 series of supercomputers. Elliptic and quasi-parabolic versions of the GIM code are discussed. Turbulence models, both algebraic and differential-equation based, were added to the basic viscous code. An equilibrium reacting chemistry model and an implicit finite difference scheme are also included.

  20. Scalable parallel programming for high performance seismic simulation on petascale heterogeneous supercomputers

    Science.gov (United States)

    Zhou, Jun

    The 1994 Northridge earthquake in Los Angeles, California, killed 57 people, injured over 8,700 and caused an estimated $20 billion in damage. Petascale simulations are needed in California and elsewhere to provide society with a better understanding of the rupture and wave dynamics of the largest earthquakes at shaking frequencies required to engineer safe structures. As heterogeneous supercomputing infrastructures become more common, numerical developments in earthquake system research are particularly challenged by the dependence on accelerator elements to enable "the Big One" simulations with higher frequency and finer resolution. Reducing time to solution and power consumption are two primary focus areas today for the enabling technology of fault rupture dynamics and seismic wave propagation in realistic 3D models of the crust's heterogeneous structure. This dissertation presents scalable parallel programming techniques for high performance seismic simulation running on petascale heterogeneous supercomputers. A real-world earthquake simulation code, AWP-ODC, one of the most advanced earthquake codes to date, was chosen as the base code in this research, and the testbed is based on Titan at Oak Ridge National Laboratory, the world's largest heterogeneous supercomputer. The research work is primarily related to architecture study, computation performance tuning and software system scalability. An earthquake simulation workflow has also been developed to support efficient production sets of simulations. The highlights of the technical development are an aggressive performance optimization focusing on data locality and a notable data communication model that hides the data communication latency. This development results in optimal computation efficiency and throughput for the 13-point stencil code on heterogeneous systems, which can be extended to general high-order stencil codes. Started from scratch, the hybrid CPU/GPU version of AWP
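
    For context, the 13-point stencil referred to above is the centre point plus two neighbours in each direction along the three axes, i.e. a fourth-order finite-difference footprint. A plain NumPy version of one such update, before any GPU or communication-hiding optimization, is sketched below; the coefficients are those of the standard fourth-order Laplacian with unit grid spacing, not necessarily AWP-ODC's.

        # Naive 13-point (fourth-order, 3D) stencil: centre point plus +/-1 and +/-2
        # neighbours along each axis. Interior points only; boundaries are left at zero.
        import numpy as np

        def stencil_13pt(u, c0=-7.5, c1=4.0 / 3.0, c2=-1.0 / 12.0):
            out = np.zeros_like(u)
            core = (slice(2, -2),) * 3
            out[core] = c0 * u[core]
            for axis in range(3):
                for offset, coeff in ((1, c1), (-1, c1), (2, c2), (-2, c2)):
                    out[core] += coeff * np.roll(u, offset, axis=axis)[core]
            return out

        u = np.random.rand(64, 64, 64)
        lap4 = stencil_13pt(u)               # fourth-order Laplacian on the interior
        print(lap4.shape, lap4[32, 32, 32])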

  1. Accelerating Virtual High-Throughput Ligand Docking: current technology and case study on a petascale supercomputer.

    Science.gov (United States)

    Ellingson, Sally R; Dakshanamurthy, Sivanesan; Brown, Milton; Smith, Jeremy C; Baudry, Jerome

    2014-04-25

    In this paper we give the current state of high-throughput virtual screening. We describe a case study of using a task-parallel MPI (Message Passing Interface) version of Autodock4 [1], [2] to run a virtual high-throughput screen of one-million compounds on the Jaguar Cray XK6 Supercomputer at Oak Ridge National Laboratory. We include a description of scripts developed to increase the efficiency of the predocking file preparation and postdocking analysis. A detailed tutorial, scripts, and source code for this MPI version of Autodock4 are available online at http://www.bio.utk.edu/baudrylab/autodockmpi.htm.
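
    The task-parallel MPI arrangement can be sketched as a master/worker task farm: rank 0 hands out one ligand at a time and each worker runs a placeholder docking command on it. This is a schematic of the pattern, not the actual MPI version of Autodock4 distributed by the authors.

        # Master/worker task farm sketch for virtual screening with MPI.
        # Assumes at least one worker rank and more ligands than workers.
        # dock_ligand() is a placeholder for invoking the docking engine.
        import subprocess
        from mpi4py import MPI

        TAG_WORK, TAG_STOP = 1, 2
        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        def dock_ligand(ligand):
            # Placeholder: in reality this would run the docking engine on `ligand`.
            return subprocess.call(["echo", "docking", ligand])

        if rank == 0:
            ligands = [f"ligand_{i:06d}.pdbqt" for i in range(1000)]  # hypothetical names
            status = MPI.Status()
            sent = 0
            # Prime every worker with one task, then keep feeding on each completion.
            for w in range(1, size):
                comm.send(ligands[sent], dest=w, tag=TAG_WORK)
                sent += 1
            for _ in range(len(ligands) - (size - 1)):
                comm.recv(source=MPI.ANY_SOURCE, status=status)
                comm.send(ligands[sent], dest=status.Get_source(), tag=TAG_WORK)
                sent += 1
            for w in range(1, size):
                comm.recv(source=MPI.ANY_SOURCE, status=status)
                comm.send(None, dest=status.Get_source(), tag=TAG_STOP)
        else:
            status = MPI.Status()
            while True:
                task = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
                if status.Get_tag() == TAG_STOP:
                    break
                comm.send(dock_ligand(task), dest=0)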

  2. A New Hydrodynamic Model for Numerical Simulation of Interacting Galaxies on Intel Xeon Phi Supercomputers

    Science.gov (United States)

    Kulikov, Igor; Chernykh, Igor; Tutukov, Alexander

    2016-05-01

    This paper presents a new hydrodynamic model of interacting galaxies based on the joint solution of multicomponent hydrodynamic equations, first moments of the collisionless Boltzmann equation and the Poisson equation for gravity. Using this model, it is possible to formulate a unified numerical method for solving hyperbolic equations. This numerical method has been implemented for hybrid supercomputers with Intel Xeon Phi accelerators. The collision of spiral and disk galaxies considering the star formation process, supernova feedback and molecular hydrogen formation is shown as a simulation result.

  3. Scheduling Supercomputers.

    Science.gov (United States)

    1983-02-01

    no task is scheduled with overlap. Let numpi be the total number of preemptions and idle slots of size at most t0 that are introduced. We see that if ... no usable block remains on Qm-*, then numpi < m-k. Otherwise, numpi ≤ m-k-1. If j > n when this procedure terminates, then all tasks have been scheduled

  4. Grassroots Supercomputing

    CERN Multimedia

    Buchanan, Mark

    2005-01-01

    What started out as a way for SETI to plow through its piles of radio-signal data from deep space has turned into a powerful research tool as computer users across the globe donate their screen-saver time to projects as diverse as climate-change prediction, gravitational-wave searches, and protein folding (4 pages)

  5. Feynman diagrams sampling for quantum field theories on the QPACE 2 supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Rappl, Florian

    2016-08-01

    This work discusses the application of Feynman diagram sampling in quantum field theories. The method uses a computer simulation to sample the diagrammatic space obtained in a series expansion. Running large physical simulations requires powerful computers, which effectively splits the thesis into two parts. The first part deals with the method of Feynman diagram sampling. Here the theoretical background of the method itself is discussed. Additionally, important statistical concepts and the theory of the strong force, quantum chromodynamics, are introduced. This sets the context of the simulations. We create and evaluate a variety of models to estimate the applicability of diagrammatic methods. The method is then applied to sample the perturbative expansion of the vertex correction. In the end we obtain the value for the anomalous magnetic moment of the electron. The second part looks at the QPACE 2 supercomputer. This includes a short introduction to supercomputers in general, as well as a closer look at the architecture and the cooling system of QPACE 2. Guiding benchmarks of the InfiniBand network are presented. At the core of this part, a collection of best practices and useful programming concepts are outlined, which enables the development of efficient, yet easily portable, applications for the QPACE 2 system.

  6. Integration of PanDA workload management system with Titan supercomputer at OLCF

    CERN Document Server

    Panitkin, Sergey; The ATLAS collaboration; Klimentov, Alexei; Oleynik, Danila; Petrosyan, Artem; Schovancova, Jaroslava; Vaniachine, Alexandre; Wenaus, Torre

    2015-01-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently uses more than 100,000 cores at well over 100 Grid sites with a peak performance of 0.3 petaFLOPS, the next LHC data taking run will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. It also gives PanDA new capability to collect, in real tim...

  7. Integration of PanDA workload management system with Titan supercomputer at OLCF

    CERN Document Server

    De, Kaushik; Oleynik, Danila; Panitkin, Sergey; Petrosyan, Artem; Vaniachine, Alexandre; Wenaus, Torre; Schovancova, Jaroslava

    2015-01-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, the next LHC data taking run will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. It also gives PanDA new capability to collect, in real time, information about unused...

  8. PREFACE: HITES 2012: 'Horizons of Innovative Theories, Experiments, and Supercomputing in Nuclear Physics'

    Science.gov (United States)

    Hecht, K. T.

    2012-12-01

    This volume contains the contributions of the speakers of an international conference in honor of Jerry Draayer's 70th birthday, entitled 'Horizons of Innovative Theories, Experiments and Supercomputing in Nuclear Physics'. The list of contributors includes not only international experts in these fields, but also many former collaborators, former graduate students, and former postdoctoral fellows of Jerry Draayer, stressing innovative theories such as special symmetries and supercomputing, both of particular interest to Jerry. The organizers of the conference intended to honor Jerry Draayer not only for his seminal contributions in these fields, but also for his administrative skills at the departmental, university, national and international levels. Signed: Ted Hecht, University of Michigan. Scientific Advisory Committee: Ani Aprahamian (University of Notre Dame), Baha Balantekin (University of Wisconsin), Bruce Barrett (University of Arizona), Umit Catalyurek (Ohio State University), David Dean (Oak Ridge National Laboratory), Jutta Escher (Chair, Lawrence Livermore National Laboratory), Jorge Hirsch (UNAM, Mexico), David Rowe (University of Toronto), Brad Sherill (Michigan State University), Joel Tohline (Louisiana State University), Edward Zganjar (Louisiana State University). Organizing Committee: Jeff Blackmon (Louisiana State University), Mark Caprio (University of Notre Dame), Tomas Dytrych (Louisiana State University), Ana Georgieva (INRNE, Bulgaria), Kristina Launey (Co-chair, Louisiana State University), Gabriella Popa (Ohio University Zanesville), James Vary (Co-chair, Iowa State University). Local Organizing Committee: Laura Linhardt (Louisiana State University), Charlie Rasco (Louisiana State University), Karen Richard (Coordinator, Louisiana State University).

  9. Graph visualization for the analysis of the structure and dynamics of extreme-scale supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Berkbigler, K. P. (Kathryn P.); Bush, B. W. (Brian W.); Davis, Kei,; Hoisie, A. (Adolfy); Smith, S. A. (Steve A.)

    2002-01-01

    We are exploring the development and application of information visualization techniques for the analysis of new extreme-scale supercomputer architectures. Modern supercomputers typically comprise very large clusters of commodity SMPs interconnected by possibly dense and often nonstandard networks. The scale, complexity, and inherent nonlocality of the structure and dynamics of this hardware, and the systems and applications distributed over it, challenge traditional analysis methods. As part of the a la carte team at Los Alamos National Laboratory, who are simulating these advanced architectures, we are exploring advanced visualization techniques and creating tools to provide intuitive exploration, discovery, and analysis of these simulations. This work complements existing and emerging algorithmic analysis tools. Here we give background on the problem domain, a description of a prototypical computer architecture of interest (on the order of 10,000 processors connected by a quaternary fat-tree network), and presentations of several visualizations of the simulation data that make clear the flow of data in the interconnection network.

  10. Groundwater cooling of a supercomputer in Perth, Western Australia: hydrogeological simulations and thermal sustainability

    Science.gov (United States)

    Sheldon, Heather A.; Schaubs, Peter M.; Rachakonda, Praveen K.; Trefry, Michael G.; Reid, Lynn B.; Lester, Daniel R.; Metcalfe, Guy; Poulet, Thomas; Regenauer-Lieb, Klaus

    2015-12-01

    Groundwater cooling (GWC) is a sustainable alternative to conventional cooling technologies for supercomputers. A GWC system has been implemented for the Pawsey Supercomputing Centre in Perth, Western Australia. Groundwater is extracted from the Mullaloo Aquifer at 20.8 °C and passes through a heat exchanger before returning to the same aquifer. Hydrogeological simulations of the GWC system were used to assess its performance and sustainability. Simulations were run with cooling capacities of 0.5 or 2.5 megawatts thermal (MWth), with scenarios representing various combinations of pumping rate, injection temperature and hydrogeological parameter values. The simulated system generates a thermal plume in the Mullaloo Aquifer and overlying Superficial Aquifer. Thermal breakthrough (transfer of heat from injection to production wells) occurred in 2.7-4.3 years for a 2.5 MWth system. Shielding (reinjection of cool groundwater between the injection and production wells) resulted in earlier thermal breakthrough but reduced the rate of temperature increase after breakthrough, such that shielding was beneficial after approximately 5 years of pumping. Increasing injection temperature was preferable to increasing flow rate for maintaining cooling capacity after thermal breakthrough. Thermal impacts on existing wells were small, with up to 10 wells experiencing a temperature increase ≥ 0.1 °C (largest increase 6 °C).

  11. OpenMC:Towards Simplifying Programming for TianHe Supercomputers

    Institute of Scientific and Technical Information of China (English)

    廖湘科; 杨灿群; 唐滔; 易会战; 王锋; 吴强; 薛京灵

    2014-01-01

    Modern petascale and future exascale systems are massively heterogeneous architectures. Developing productive intra-node programming models is crucial toward addressing their programming challenge. We introduce a directive-based intra-node programming model, OpenMC, and show that this new model can achieve ease of programming, high performance, and the degree of portability desired for heterogeneous nodes, especially those in TianHe supercomputers. While existing models are geared towards offloading computations to accelerators (typically one), OpenMC aims to more uniformly and adequately exploit the potential offered by multiple CPUs and accelerators in a compute node. OpenMC achieves this by providing a unified abstraction of hardware resources as workers and facilitating the exploitation of asynchronous task parallelism on the workers. We present an overview of OpenMC, a prototyping implementation, and results from some initial comparisons with OpenMP and hand-written code in developing six applications on two types of nodes from TianHe supercomputers.

  12. Visualization at Supercomputing Centers: The Tale of Little Big Iron and the Three Skinny Guys

    Energy Technology Data Exchange (ETDEWEB)

    Bethel, E. Wes; van Rosendale, John; Southard, Dale; Gaither, Kelly; Childs, Hank; Brugger, Eric; Ahern, Sean

    2010-12-01

    Supercomputing Centers (SC's) are unique resources that aim to enable scientific knowledge discovery through the use of large computational resources, the Big Iron. Design, acquisition, installation, and management of the Big Iron are activities that are carefully planned and monitored. Since these Big Iron systems produce a tsunami of data, it is natural to co-locate visualization and analysis infrastructure as part of the same facility. This infrastructure consists of hardware (Little Iron) and staff (Skinny Guys). Our collective experience suggests that design, acquisition, installation, and management of the Little Iron and Skinny Guys does not receive the same level of treatment as that of the Big Iron. The main focus of this article is to explore different aspects of planning, designing, fielding, and maintaining the visualization and analysis infrastructure at supercomputing centers. Some of the questions we explore in this article include: "How should the Little Iron be sized to adequately support visualization and analysis of data coming off the Big Iron?" and "What sort of capabilities does it need to have?" Related questions concern the size of the visualization support staff: "How big should a visualization program be (number of persons) and what should the staff do?" and "How much of the visualization should be provided as a support service, and how much should applications scientists be expected to do on their own?"

  13. Frequently updated noise threat maps created with use of supercomputing grid

    Directory of Open Access Journals (Sweden)

    Szczodrak Maciej

    2014-09-01

    Innovative supercomputing grid services devoted to noise threat evaluation are presented. The services described in this paper concern two issues: the first is related to noise mapping, while the second focuses on assessment of the noise dose and its influence on the human hearing system. The discussed services were developed within the PL-Grid Plus Infrastructure, which brings together Polish academic supercomputer centers. Selected experimental results achieved through the usage of the proposed services are presented. The assessment of environmental noise threats includes creation of noise maps using either offline or online data acquired through a grid of monitoring stations. A concept of estimating the source model parameters based on the measured sound level, for the purpose of creating frequently updated noise maps, is presented. Connecting the noise mapping grid service with a distributed sensor network makes it possible to automatically update noise maps for a specified time period. Moreover, a unique attribute of the developed software is the estimation of the auditory effects evoked by exposure to noise. The estimation method uses a modified psychoacoustic model of hearing and is based on the calculated noise level values and on the given exposure period. Potential use scenarios of the grid services for research or educational purposes are introduced. Presentation of the predicted hearing threshold shift caused by exposure to excessive noise can raise public awareness of noise threats.

  14. Unique Methodologies for Nano/Micro Manufacturing Job Training Via Desktop Supercomputer Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kimball, Clyde [Northern Illinois Univ., DeKalb, IL (United States); Karonis, Nicholas [Northern Illinois Univ., DeKalb, IL (United States); Lurio, Laurence [Northern Illinois Univ., DeKalb, IL (United States); Piot, Philippe [Northern Illinois Univ., DeKalb, IL (United States); Xiao, Zhili [Northern Illinois Univ., DeKalb, IL (United States); Glatz, Andreas [Northern Illinois Univ., DeKalb, IL (United States); Pohlman, Nicholas [Northern Illinois Univ., DeKalb, IL (United States); Hou, Minmei [Northern Illinois Univ., DeKalb, IL (United States); Demir, Veysel [Northern Illinois Univ., DeKalb, IL (United States); Song, Jie [Northern Illinois Univ., DeKalb, IL (United States); Duffin, Kirk [Northern Illinois Univ., DeKalb, IL (United States); Johns, Mitrick [Northern Illinois Univ., DeKalb, IL (United States); Sims, Thomas [Northern Illinois Univ., DeKalb, IL (United States); Yin, Yanbin [Northern Illinois Univ., DeKalb, IL (United States)

    2012-11-21

    This project establishes an initiative in high speed (Teraflop)/large-memory desktop supercomputing for modeling and simulation of dynamic processes important for energy and industrial applications. It provides a training ground for employment of current students in an emerging field with skills necessary to access the large supercomputing systems now present at DOE laboratories. It also provides a foundation for NIU faculty to make a quantum leap beyond their current small cluster facilities. The funding extends faculty and student capability to a new level of analytic skills with concomitant publication avenues. The components of the Hewlett Packard computer obtained by the DOE funds create a hybrid combination of a Graphics Processing System (12 GPU/Teraflops) and a Beowulf CPU system (144 CPU), the former expandable via the NIU GAEA system to ~60 Teraflops integrated with a 720-CPU Beowulf system. The software is based on access to the NVIDIA/CUDA library and the ability, through multiple MATLAB licenses, to create additional local programs. A number of existing programs are being transferred to the CPU Beowulf Cluster. Since the expertise necessary to create the parallel processing applications has recently been obtained at NIU, this effort for software development is in an early stage. The educational program has been initiated via formal tutorials and classroom curricula designed for the coming year. Specifically, the cost focus was on hardware acquisitions and the appointment of graduate students for a wide range of applications in engineering, physics and computer science.

  15. Supercomputer Assisted Generation of Machine Learning Agents for the Calibration of Building Energy Models

    Energy Technology Data Exchange (ETDEWEB)

    Sanyal, Jibonananda [ORNL; New, Joshua Ryan [ORNL; Edwards, Richard [ORNL

    2013-01-01

    Building Energy Modeling (BEM) is an approach to model the energy usage in buildings for design and retrofit purposes. EnergyPlus is the flagship Department of Energy software that performs BEM for different types of buildings. The input to EnergyPlus can often extend to the order of a few thousand parameters which have to be calibrated manually by an expert for realistic energy modeling. This makes it challenging and expensive, thereby making building energy modeling unfeasible for smaller projects. In this paper, we describe the "Autotune" research which employs machine learning algorithms to generate agents for the different kinds of standard reference buildings in the U.S. building stock. The parametric space and the variety of building locations and types make this a challenging computational problem necessitating the use of supercomputers. Millions of EnergyPlus simulations are run on supercomputers and are subsequently used to train machine learning algorithms to generate agents. These agents, once created, can then run in a fraction of the time, thereby allowing cost-effective calibration of building models.
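    The agent idea sketched above amounts to training a fast surrogate on a large set of expensive simulations. A minimal sketch with scikit-learn follows; the synthetic arrays stand in for EnergyPlus inputs and outputs and are not part of the Autotune project.

```python
# Minimal sketch of the surrogate-agent idea (assumptions: simulation inputs
# and outputs are already collected into arrays; this is not the Autotune code).
# A regressor trained on many expensive simulations can then predict outputs
# in a fraction of the time, enabling cheap calibration loops.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(size=(5000, 8))            # stand-in for sampled input parameters
y = X[:, 0] * 3.0 + np.sin(X[:, 1] * 6) + 0.1 * rng.normal(size=5000)  # stand-in output

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
agent = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("surrogate R^2 on held-out runs:", round(agent.score(X_te, y_te), 3))
```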

  16. Federal Market Information Technology in the Post Flash Crash Era: Roles for Supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Bethel, E. Wes; Leinweber, David; Ruebel, Oliver; Wu, Kesheng

    2011-09-16

    This paper describes collaborative work between active traders, regulators, economists, and supercomputing researchers to replicate and extend investigations of the Flash Crash and other market anomalies in a National Laboratory HPC environment. Our work suggests that supercomputing tools and methods will be valuable to market regulators in achieving the goal of market safety, stability, and security. Research results using high frequency data and analytics are described, and directions for future development are discussed. Currently the key mechanism for preventing catastrophic market action is the “circuit breaker.” We believe a more graduated approach, similar to the “yellow light” approach in motorsports to slow down traffic, might be a better way to achieve the same goal. To enable this objective, we study a number of indicators that could foresee hazards in market conditions and explore options to confirm such predictions. Our tests confirm that Volume Synchronized Probability of Informed Trading (VPIN) and a version of the volume Herfindahl-Hirschman Index (HHI) for measuring market fragmentation can indeed give strong signals ahead of the Flash Crash event on May 6, 2010. This is a preliminary step toward a full-fledged early-warning system for unusual market conditions.
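    Of the indicators mentioned, the volume Herfindahl-Hirschman Index has a particularly simple definition: the sum of squared volume shares across trading venues. A minimal sketch follows (the venue volumes are made-up numbers; VPIN, which involves volume bucketing and order-flow imbalance, is not reproduced here).

```python
# Volume Herfindahl-Hirschman Index (HHI) as a simple fragmentation gauge:
# the sum of squared volume shares across venues. Values near 1 mean trading
# is concentrated at one venue; values near 1/N mean it is evenly fragmented.
def volume_hhi(venue_volumes):
    total = float(sum(venue_volumes))
    if total <= 0:
        raise ValueError("no volume")
    shares = [v / total for v in venue_volumes]
    return sum(s * s for s in shares)

# Example: volume traded in one interval across four hypothetical venues.
print(volume_hhi([5_000_000, 3_000_000, 1_500_000, 500_000]))  # ~0.37
```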

  17. Communication Characterization and Optimization of Applications Using Topology-Aware Task Mapping on Large Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sreepathi, Sarat [ORNL; D' Azevedo, Eduardo [ORNL; Philip, Bobby [ORNL; Worley, Patrick H [ORNL

    2016-01-01

    On large supercomputers, the job scheduling systems may assign a non-contiguous node allocation for user applications depending on available resources. With parallel applications using MPI (Message Passing Interface), the default process ordering does not take into account the actual physical node layout available to the application. This contributes to non-locality in terms of physical network topology and impacts communication performance of the application. In order to mitigate such performance penalties, this work describes techniques to identify a suitable task mapping that takes the layout of the allocated nodes as well as the application's communication behavior into account. During the first phase of this research, we instrumented and collected performance data to characterize the communication behavior of critical US DOE (United States Department of Energy) applications using an augmented version of the mpiP tool. Subsequently, we developed several reordering methods (spectral bisection, neighbor join tree, etc.) to combine node layout and application communication data for optimized task placement. We developed a tool called mpiAproxy to facilitate detailed evaluation of the various reordering algorithms without requiring full application executions. This work presents a comprehensive performance evaluation (14,000 experiments) of the various task mapping techniques in lowering communication costs on Titan, the leadership-class supercomputer at Oak Ridge National Laboratory.
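    As a rough illustration of what topology-aware task mapping tries to do, the toy greedy heuristic below places heavily communicating ranks on nearby nodes given a rank-to-rank traffic matrix and a node-to-node hop-distance matrix. It assumes one rank per node and is far simpler than the spectral-bisection and tree-based reorderings evaluated in the paper.

```python
# Toy greedy task mapping (illustrative only; not the paper's algorithms).
# Given a rank-to-rank traffic matrix and a node-to-node hop-distance matrix,
# place heavily communicating rank pairs on nearby nodes to reduce the total
# traffic-weighted distance. Assumes exactly one rank per node.
import numpy as np

def greedy_map(traffic, hops):
    n = traffic.shape[0]
    order = np.argsort(-traffic.sum(axis=1))       # busiest ranks first
    free_nodes = list(range(n))
    placement = {}                                  # rank -> node
    for r in order:
        best_node, best_cost = None, None
        for node in free_nodes:
            # Cost of putting rank r on this node, given ranks already placed.
            cost = sum(traffic[r, q] * hops[node, placement[q]]
                       for q in placement)
            if best_cost is None or cost < best_cost:
                best_node, best_cost = node, cost
        placement[r] = best_node
        free_nodes.remove(best_node)
    return placement

def total_cost(traffic, hops, placement):
    n = traffic.shape[0]
    return sum(traffic[i, j] * hops[placement[i], placement[j]]
               for i in range(n) for j in range(n))
```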

  18. Federal Council on Science, Engineering and Technology: Committee on Computer Research and Applications, Subcommittee on Science and Engineering Computing: The US Supercomputer Industry

    Energy Technology Data Exchange (ETDEWEB)

    1987-12-01

    The Federal Coordinating Council on Science, Engineering, and Technology (FCCSET) Committee on Supercomputing was chartered by the Director of the Office of Science and Technology Policy in 1982 to examine the status of supercomputing in the United States and to recommend a role for the Federal Government in the development of this technology. In this study, the FCCSET Committee (now called the Subcommittee on Science and Engineering Computing of the FCCSET Committee on Computer Research and Applications) reports on the status of the supercomputer industry and addresses changes that have occurred since issuance of the 1983 and 1985 reports. The review is based upon periodic meetings with and site visits to supercomputer manufacturers, and consultation with experts in high performance scientific computing. White papers have been contributed to this report by industry leaders and supercomputer experts.

  19. A Framework for HI Spectral Source Finding Using Distributed-Memory Supercomputing

    CERN Document Server

    Westerlund, Stefan

    2014-01-01

    The latest generation of radio astronomy interferometers will conduct all sky surveys with data products consisting of petabytes of spectral line data. Traditional approaches to identifying and parameterising the astrophysical sources within this data will not scale to datasets of this magnitude, since the performance of workstations will not keep up with the real-time generation of data. For this reason, it is necessary to employ high performance computing systems consisting of a large number of processors connected by a high-bandwidth network. In order to make use of such supercomputers, substantial modifications must be made to serial source finding code. To ease the transition, this work presents the Scalable Source Finder Framework, a framework providing storage access, networking communication and data composition functionality, which can support a wide range of source finding algorithms provided they can be applied to subsets of the entire image. Additionally, the Parallel Gaussian Source Finder was imp...

  20. Diskless supercomputers: Scalable, reliable I/O for the Tera-Op technology base

    Science.gov (United States)

    Katz, Randy H.; Ousterhout, John K.; Patterson, David A.

    1993-01-01

    Computing is seeing an unprecedented improvement in performance; over the last five years there has been an order-of-magnitude improvement in the speeds of workstation CPU's. At least another order of magnitude seems likely in the next five years, to machines with 500 MIPS or more. The goal of the ARPA Tera-Op program is to realize even larger, more powerful machines, executing as many as a trillion operations per second. Unfortunately, we have seen no comparable breakthroughs in I/O performance; the speeds of I/O devices and the hardware and software architectures for managing them have not changed substantially in many years. We have completed a program of research to demonstrate hardware and software I/O architectures capable of supporting the kinds of internetworked 'visualization' workstations and supercomputers that will appear in the mid 1990s. The project had three overall goals: high performance, high reliability, and a scalable, multipurpose system.

  1. An Optimized Parallel FDTD Topology for Challenging Electromagnetic Simulations on Supercomputers

    Directory of Open Access Journals (Sweden)

    Shugang Jiang

    2015-01-01

    It may not be a challenge to run a Finite-Difference Time-Domain (FDTD) code for electromagnetic simulations on a supercomputer with more than 10 thousand CPU cores; however, to make the FDTD code work with the highest efficiency is a challenge. In this paper, the performance of parallel FDTD is optimized through MPI (message passing interface) virtual topology, based on which a communication model is established. The general rules of optimal topology are presented according to the model. The performance of the method is tested and analyzed on three high performance computing platforms with different architectures in China. Simulations including an airplane with a 700-wavelength wingspan and a complex microstrip antenna array with nearly 2000 elements are performed very efficiently using a maximum of 10240 CPU cores.
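    The MPI virtual topology mechanism referred to above can be sketched generically with mpi4py: a 2-D Cartesian communicator is created over the available ranks and each rank looks up its halo-exchange neighbors. This is a generic illustration, not the authors' optimized FDTD topology.

```python
# Generic mpi4py sketch of an MPI virtual (Cartesian) topology for a 2-D
# domain decomposition, the mechanism tuned for parallel FDTD above;
# this is not the authors' code. Run under mpirun with any number of ranks.
from mpi4py import MPI

comm = MPI.COMM_WORLD
dims = MPI.Compute_dims(comm.Get_size(), [0, 0])     # balanced 2-D grid
cart = comm.Create_cart(dims, periods=[False, False], reorder=True)
coords = cart.Get_coords(cart.Get_rank())

# Neighbors for halo exchange; MPI.PROC_NULL marks a missing neighbor.
west, east = cart.Shift(0, 1)
south, north = cart.Shift(1, 1)
print(f"rank {comm.Get_rank()} -> coords {coords}, "
      f"E={east} W={west} N={north} S={south}")
```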

  2. Large-scale integrated super-computing platform for next generation virtual drug discovery.

    Science.gov (United States)

    Mitchell, Wayne; Matsumoto, Shunji

    2011-08-01

    Traditional drug discovery starts by experimentally screening chemical libraries to find hit compounds that bind to protein targets, modulating their activity. Subsequent rounds of iterative chemical derivatization and rescreening are conducted to enhance the potency, selectivity, and pharmacological properties of hit compounds. Although computational docking of ligands to targets has been used to augment the empirical discovery process, its historical effectiveness has been limited because of the poor correlation of ligand dock scores and experimentally determined binding constants. Recent progress in super-computing, coupled to theoretical insights, allows the calculation of the Gibbs free energy, and therefore accurate binding constants, for usually large ligand-receptor systems. This advance extends the potential of virtual drug discovery. A specific embodiment of the technology, integrating de novo, abstract fragment based drug design, sophisticated molecular simulation, and the ability to calculate thermodynamic binding constants with unprecedented accuracy, is discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.
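    The link between computed free energies and binding constants rests on the standard thermodynamic relation ΔG = RT ln(Kd). A small sketch with illustrative numbers (not taken from the article) follows.

```python
# The standard relation behind "accurate binding constants from free energy":
# dG = RT * ln(Kd), so Kd = exp(dG / RT) for a (negative) binding free energy.
# Purely illustrative numbers; not taken from the article.
import math

R_KCAL = 1.987e-3   # gas constant, kcal / (mol K)

def kd_from_dg(dg_kcal_per_mol, temp_k=298.15):
    """Dissociation constant (molar) from a binding free energy in kcal/mol."""
    return math.exp(dg_kcal_per_mol / (R_KCAL * temp_k))

# A computed dG of -10 kcal/mol corresponds to roughly 50 nM affinity.
print(f"Kd = {kd_from_dg(-10.0) * 1e9:.1f} nM")
```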

  3. Reliability Lessons Learned From GPU Experience With The Titan Supercomputer at Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Gallarno, George [Christian Brothers University; Rogers, James H [ORNL; Maxwell, Don E [ORNL

    2015-01-01

    The high computational capability of graphics processing units (GPUs) is enabling and driving the scientific discovery process at large-scale. The world's second fastest supercomputer for open science, Titan, has more than 18,000 GPUs that computational scientists use to perform scientific simulations and data analysis. Understanding of GPU reliability characteristics, however, is still in its nascent stage since GPUs have only recently been deployed at large-scale. This paper presents a detailed study of GPU errors and their impact on system operations and applications, describing experiences with the 18,688 GPUs on the Titan supercomputer as well as lessons learned in the process of efficient operation of GPUs at scale. These experiences are helpful to HPC sites which already have large-scale GPU clusters or plan to deploy GPUs in the future.

  4. Operational numerical weather prediction on a GPU-accelerated cluster supercomputer

    Science.gov (United States)

    Lapillonne, Xavier; Fuhrer, Oliver; Spörri, Pascal; Osuna, Carlos; Walser, André; Arteaga, Andrea; Gysi, Tobias; Rüdisühli, Stefan; Osterried, Katherine; Schulthess, Thomas

    2016-04-01

    The local area weather prediction model COSMO is used at MeteoSwiss to provide high resolution numerical weather predictions over the Alpine region. In order to benefit from the latest developments in computer technology the model was optimized and adapted to run on Graphical Processing Units (GPUs). Thanks to these model adaptations and the acquisition of a dedicated hybrid supercomputer, a new set of operational applications has been introduced at MeteoSwiss: COSMO-1 (1 km deterministic), COSMO-E (2 km ensemble) and KENDA (data assimilation). These new applications correspond to an increase by a factor of 40 in computational load as compared to the previous operational setup. We present an overview of the approach used to port the COSMO model to GPUs together with a detailed description of, and performance results on, the new hybrid Cray CS-Storm computer, Piz Kesch.

  5. A CPU/MIC Collaborated Parallel Framework for GROMACS on Tianhe-2 Supercomputer.

    Science.gov (United States)

    Peng, Shaoliang; Yang, Shunyun; Su, Wenhe; Zhang, Xiaoyu; Zhang, Tenglilang; Liu, Weiguo; Zhao, Xingming

    2017-06-16

    Molecular Dynamics (MD) is the simulation of the dynamic behavior of atoms and molecules. As the most popular software for molecular dynamics, GROMACS cannot work on large-scale data because of limited computing resources. In this paper, we propose a CPU and Intel® Xeon Phi Many Integrated Core (MIC) collaborated parallel framework to accelerate GROMACS using the offload mode on a MIC coprocessor, with which the performance of GROMACS is improved significantly, especially with the use of the Tianhe-2 supercomputer. Furthermore, we optimize GROMACS so that it can run on both the CPU and MIC at the same time. In addition, we accelerate multi-node GROMACS so that it can be used in practice. Benchmarking on real data, our accelerated GROMACS performs very well and reduces computation time significantly. Source code: https://github.com/tianhe2/gromacs-mic.

  6. Mixed precision numerical weather prediction on hybrid GPU-CPU supercomputers

    Science.gov (United States)

    Lapillonne, Xavier; Osuna, Carlos; Spoerri, Pascal; Osterried, Katherine; Charpilloz, Christophe; Fuhrer, Oliver

    2017-04-01

    A new version of the climate and weather model COSMO that runs faster on traditional high performance computing systems with CPUs as well as on heterogeneous architectures using graphics processing units (GPUs) has been developed. The model was in addition adapted to be able to run in "single precision" mode. After discussing the key changes introduced in this new model version and the tools used in the porting approach, we present 3 applications, namely the MeteoSwiss operational weather prediction system, COSMO-LEPS and the CALMO project, which already take advantage of the performance improvement, up to a factor of 4, by running on GPU systems and using the single precision mode. We discuss how the code changes open new perspectives for scientific research and can enable researchers to get access to a new class of supercomputers.

  7. Palacios and Kitten : high performance operating systems for scalable virtualized and native supercomputing.

    Energy Technology Data Exchange (ETDEWEB)

    Widener, Patrick (University of New Mexico); Jaconette, Steven (Northwestern University); Bridges, Patrick G. (University of New Mexico); Xia, Lei (Northwestern University); Dinda, Peter (Northwestern University); Cui, Zheng.; Lange, John (Northwestern University); Hudson, Trammell B.; Levenhagen, Michael J.; Pedretti, Kevin Thomas Tauke; Brightwell, Ronald Brian

    2009-09-01

    Palacios and Kitten are new open source tools that enable applications, whether ported or not, to achieve scalable high performance on large machines. They provide a thin layer over the hardware to support both full-featured virtualized environments and native code bases. Kitten is an OS under development at Sandia that implements a lightweight kernel architecture to provide predictable behavior and increased flexibility on large machines, while also providing Linux binary compatibility. Palacios is a VMM that is under development at Northwestern University and the University of New Mexico. Palacios, which can be embedded into Kitten and other OSes, supports existing, unmodified applications and operating systems by using virtualization that leverages hardware technologies. We describe the design and implementation of both Kitten and Palacios. Our benchmarks show that they provide near native, scalable performance. Palacios and Kitten provide an incremental path to using supercomputer resources that is not performance-compromised.

  8. Modern Gyrokinetic Particle-In-Cell Simulation of Fusion Plasmas on Top Supercomputers

    CERN Document Server

    Wang, Bei; Tang, William; Ibrahim, Khaled; Madduri, Kamesh; Williams, Samuel; Oliker, Leonid

    2015-01-01

    The Gyrokinetic Toroidal Code at Princeton (GTC-P) is a highly scalable and portable particle-in-cell (PIC) code. It solves the 5D Vlasov-Poisson equation featuring efficient utilization of modern parallel computer architectures at the petascale and beyond. Motivated by the goal of developing a modern code capable of dealing with the physics challenge of increasing problem size with sufficient resolution, new thread-level optimizations have been introduced as well as a key additional domain decomposition. GTC-P's multiple levels of parallelism, including inter-node 2D domain decomposition and particle decomposition, as well as intra-node shared memory partition and vectorization have enabled pushing the scalability of the PIC method to extreme computational scales. In this paper, we describe the methods developed to build a highly parallelized PIC code across a broad range of supercomputer designs. This particularly includes implementations on heterogeneous systems using NVIDIA GPU accelerators and Intel Xeon...

  9. Dawning Nebulae: A PetaFLOPS Supercomputer with a Heterogeneous Structure

    Institute of Scientific and Technical Information of China (English)

    Ning-Hui Sun; Jing Xing; Zhi-Gang Huo; Guang-Ming Tan; Jin Xiong; Bo Li; Can Ma

    2011-01-01

    Dawning Nebulae is a heterogeneous system composed of 9280 multi-core x86 CPUs and 4640 NVIDIA Fermi GPUs. With a Linpack performance of 1.271 petaFLOPS, it was ranked the second in the TOP500 List released in June 2010. In this paper, key issues in the system design of Dawning Nebulae are introduced. System tuning methodologies aiming at petaFLOPS Linpack result are presented, including algorithmic optimization and communication improvement. The design of its file I/O subsystem, including HVFS and the underlying DCFS3, is also described. Performance evaluations show that the Linpack efficiency of each node reaches 69.89%, and 1024-node aggregate read and write bandwidths exceed 100 GB/s and 70 GB/s respectively. The success of Dawning Nebulae has demonstrated the viability of CPU/GPU heterogeneous structure for future designs of supercomputers.

  10. On the history of the geosciences at the Museum für Naturkunde zu Berlin, Part 4: The Mineralogical Museum of the University of Berlin under Christian Samuel Weiss from 1810 to 1856

    Directory of Open Access Journals (Sweden)

    G. Hoppe

    2001-01-01

    The founding of the University of Berlin in 1810 was tied to the takeover of the teaching duties of the dissolved Bergakademie, which continued to exist only in the form of the Bergeleveninstitut (Bergelevenklasse) financing the education of the mining students, and to the takeover of the Royal Mineralienkabinett of the Prussian mining administration, previously used by the Bergakademie, as the Mineralogical Museum of the University. Following the death of D. L. G. Karsten in 1810, the Leipzig physicist and mineralogist C. S. Weiss received the chair of mineralogy, which he held until his death in 1856. Weiss developed Werner's teaching, which encompassed mineralogy including geology, further in its crystallographic aspects, while later two of his students took up other branches of mineralogy alongside him: G. Rose descriptive mineralogy and E. Beyrich geological palaeontology. The expansion of the collections through the museum's own field collecting, donations and purchases continued on a large scale, increasingly also in palaeontology, so that the Mineralogical Museum was well stocked for the whole spectrum of teaching. Weiss's combative character caused numerous points of friction.

  11. Scalability Test of multiscale fluid-platelet model for three top supercomputers

    Science.gov (United States)

    Zhang, Peng; Zhang, Na; Gao, Chao; Zhang, Li; Gao, Yuxiang; Deng, Yuefan; Bluestein, Danny

    2016-07-01

    We have tested the scalability of three supercomputers: the Tianhe-2, Stampede and CS-Storm with multiscale fluid-platelet simulations, in which a highly-resolved and efficient numerical model for nanoscale biophysics of platelets in microscale viscous biofluids is considered. Three experiments involving varying problem sizes were performed: Exp-S: 680,718-particle single-platelet; Exp-M: 2,722,872-particle 4-platelet; and Exp-L: 10,891,488-particle 16-platelet. Our implementation of the multiple time-stepping (MTS) algorithm improved on the performance of single time-stepping (STS) in all experiments. Using MTS, our model achieved the following simulation rates: 12.5, 25.0, 35.5 μs/day for Exp-S and 9.09, 6.25, 14.29 μs/day for Exp-M on Tianhe-2, CS-Storm 16-K80 and Stampede K20. The best rate for Exp-L was 6.25 μs/day for Stampede. Utilizing current advanced HPC resources, the simulation rates achieved by our algorithms bring within reach performing complex multiscale simulations for solving vexing problems at the interface of biology and engineering, such as thrombosis in blood flow, which combines millisecond-scale hematology with microscale blood flow at resolutions of micro-to-nanoscale cellular components of platelets. This study of the performance characteristics of supercomputers running advanced computational algorithms that offer an optimal trade-off to achieve enhanced computational performance demonstrates that such simulations are feasible with currently available HPC resources.

  12. Efficient development of memory bounded geo-applications to scale on modern supercomputers

    Science.gov (United States)

    Räss, Ludovic; Omlin, Samuel; Licul, Aleksandar; Podladchikov, Yuri; Herman, Frédéric

    2016-04-01

    Numerical modeling is now a key tool in the geosciences. The current challenge is to solve problems that are multi-physics and for which the length scale and the place of occurrence might not be known in advance. Also, the spatial extent of the investigated domain might strongly vary in size, ranging from millimeters for reactive transport to kilometers for glacier erosion dynamics. An efficient way to proceed is to develop simple but robust algorithms that perform well and scale on modern supercomputers and therefore permit very high-resolution simulations. We propose an efficient approach to solve memory bounded real-world applications on modern supercomputer architectures. We optimize the software to run on our newly acquired state-of-the-art GPU cluster "octopus". Our approach shows promising preliminary results on important geodynamical and geomechanical problems: we have developed a Stokes solver for glacier flow and a poromechanical solver including complex rheologies for nonlinear waves in stressed porous rocks. We solve the system of partial differential equations on a regular Cartesian grid and use an iterative finite difference scheme with preconditioning of the residuals. The MPI communication happens only locally (point-to-point); this method is known to scale linearly by construction. The "octopus" GPU cluster, which we use for the computations, has been designed to achieve maximal data transfer throughput at minimal hardware cost. It is composed of twenty compute nodes, each hosting four Nvidia Titan X GPU accelerators. These high-density nodes are interconnected with a parallel (dual-rail) FDR InfiniBand network. Our efforts show promising preliminary results for the different physics investigated. The glacier flow solver achieves good accuracy in the relevant benchmarks and the coupled poromechanical solver permits us to explain previously unresolvable focused fluid flow as a natural outcome of the porosity setup. In both cases
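    The communication pattern described above, an iterative finite-difference update with purely point-to-point halo exchanges, can be sketched in one dimension as follows. This is an illustrative mpi4py toy, not the authors' 3-D GPU solvers.

```python
# Minimal 1-D sketch of the pattern described above: an iterative finite-
# difference update with only local (point-to-point) MPI halo exchange.
# Illustrative only; the authors' solvers are 3-D, GPU-based and more complex.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

n_local = 100
u = np.zeros(n_local + 2)            # one ghost cell on each side
if rank == 0:
    u[1] = 1.0                       # a fixed boundary value

for _ in range(500):                 # Jacobi-style relaxation sweeps
    # Exchange ghost cells with neighbors only (scales by construction).
    comm.Sendrecv(u[1:2], dest=left, recvbuf=u[-1:], source=right)
    comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
    u[1:-1] = 0.5 * (u[:-2] + u[2:])
    if rank == 0:
        u[1] = 1.0                   # keep the boundary condition pinned

print(f"rank {rank}: local mean = {u[1:-1].mean():.4f}")
```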

  13. Harnessing Petaflop-Scale Multi-Core Supercomputing for Problems in Space Science

    Science.gov (United States)

    Albright, B. J.; Yin, L.; Bowers, K. J.; Daughton, W.; Bergen, B.; Kwan, T. J.

    2008-12-01

    The particle-in-cell kinetic plasma code VPIC has been migrated successfully to the world's fastest supercomputer, Roadrunner, a hybrid multi-core platform built by IBM for the Los Alamos National Laboratory. How this was achieved will be described and examples of state-of-the-art calculations in space science, in particular, the study of magnetic reconnection, will be presented. With VPIC on Roadrunner, we have performed, for the first time, plasma PIC calculations with over one trillion particles, >100× larger than calculations considered "heroic" by community standards. This allows examination of physics at unprecedented scale and fidelity. Roadrunner is an example of an emerging paradigm in supercomputing: the trend toward multi-core systems with deep hierarchies and where memory bandwidth optimization is vital to achieving high performance. Getting VPIC to perform well on such systems is a formidable challenge: the core algorithm is memory bandwidth limited with low compute-to-data ratio and requires random access to memory in its inner loop. That we were able to get VPIC to perform and scale well, achieving >0.374 Pflop/s and linear weak scaling on real physics problems on up to the full 12240-core Roadrunner machine, bodes well for harnessing these machines for our community's needs in the future. Many of the design considerations encountered commute to other multi-core and accelerated (e.g., via GPU) platforms and we modified VPIC with flexibility in mind. These will be summarized and strategies for how one might adapt a code for such platforms will be shared. Work performed under the auspices of the U.S. DOE by the LANS LLC Los Alamos National Laboratory. Dr. Bowers is a LANL Guest Scientist; he is presently at D. E. Shaw Research LLC, 120 W 45th Street, 39th Floor, New York, NY 10036.

  14. Nature of Roberge-Weiss transition end points for heavy quarks in $N_f=2$ lattice QCD with Wilson fermions

    CERN Document Server

    Wu, Liang-Kai

    2014-01-01

    The phase structure of QCD with imaginary chemical potential provides information on the phase diagram of QCD with real chemical potential. At imaginary chemical potential $i\mu_I=i\pi T$, previous studies show that the Roberge-Weiss (RW) transition end points are triple points at both large and small quark masses, and second order transition points at intermediate quark masses; the triple points and second order points are separated by two tricritical points. We present simulations with $2$-flavor Wilson fermions to investigate the nature of the RW transition end points. The simulations are carried out at 8 values of the hopping parameter $\kappa$ ranging from 0.020 to 0.140 on different lattice volumes. The Binder cumulant, susceptibility and reweighted distribution of the imaginary part of the Polyakov loop are employed to determine the nature of the RW transition end points. The simulations show that the RW transition end point is of first order with $\kappa$ values within the interval $0.020-0.070$ and $0.120-0.140...
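    The Binder cumulant used above has the standard fourth-order definition B4 = <(X-<X>)^4> / <(X-<X>)^2>^2. A minimal sketch of its evaluation from samples of the imaginary part of the Polyakov loop follows; the reweighting step used in the paper is omitted.

```python
# Minimal sketch of the fourth-order Binder cumulant B4 of an observable X
# (for example, samples of the imaginary part of the Polyakov loop):
# B4 = <(X - <X>)^4> / <(X - <X>)^2>^2. Its volume dependence helps
# distinguish first-order, second-order and crossover behavior.
import numpy as np

def binder_cumulant(samples):
    x = np.asarray(samples, dtype=float)
    dx = x - x.mean()
    return (dx**4).mean() / (dx**2).mean() ** 2

# Example with synthetic data: a Gaussian distribution gives B4 close to 3.
rng = np.random.default_rng(0)
print(binder_cumulant(rng.normal(size=100_000)))   # ~3.0
```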

  15. Genetic heterogeneity among craniosynostosis syndromes: Mapping the Saethre-Chotzen syndrome locus between D7S513 and D7S516 and exclusion of Jackson-Weiss and Crouzon syndrome loci from 7p

    Energy Technology Data Exchange (ETDEWEB)

    Lewanda, A.F.; Taylor, E.W.; Li, Xiang; Beloff, M. (Johns Hopkins School of Medicine, Baltimore, MD (United States)); Cohen, M.M. Jr. (Dalhousie Univ., Nova Scotia (Canada)); Jackson, C.E. (Henry Ford Hospital, Detroit, MI (United States)); Day, D. (Texas Dept. of Health, Denton, TX (United States)); Clarren, S.K. (Univ. of Washington School of Medicine, Seattle, WA (United States)); Ortiz, R.; Garcia, C. (Hospital Infantil de Mexico, Distrito Federal (Mexico)) (and others)

    1994-01-01

    Saethre-Chotzen, Crouzon, and Jackson-Weiss syndromes are craniosynostotic autosomal dominant conditions with a wide variability in expression. Saethre-Chotzen has been mapped to chromosome 7p by L. A. Brueton et al., the Greig cephalopolysyndactyly gene was identified at 7p13 by A. Vortkamp et al., and many cases of craniosynostosis have been associated with 7p deletions. The authors confirmed linkage of the Saethre-Chotzen syndrome locus to chromosome 7p. The tightest linkage was to locus D7S493 (Z = 5.04, θ = 0.00), and linkage and haplotype analyses refined the location of the gene to the region between D7S513 and D7S516. Jackson-Weiss and Crouzon syndrome loci were analyzed using markers spanning the entire 7p arm and were excluded, proving that they are nonallelic to Saethre-Chotzen, Greig cephalopolysyndactyly, and the del(7p) syndromes. 29 refs., 1 fig., 2 tabs.

  16. Distinctive aspects of peptic ulcer disease, Dieulafoy's lesion, and Mallory-Weiss syndrome in patients with advanced alcoholic liver disease or cirrhosis.

    Science.gov (United States)

    Nojkov, Borko; Cappell, Mitchell S

    2016-01-07

    To systematically review the data on distinctive aspects of peptic ulcer disease (PUD), Dieulafoy's lesion (DL), and Mallory-Weiss syndrome (MWS) in patients with advanced alcoholic liver disease (aALD), including alcoholic hepatitis or alcoholic cirrhosis. Computerized literature search performed via PubMed using the following medical subject heading terms and keywords: "alcoholic liver disease", "alcoholic hepatitis", "alcoholic cirrhosis", "cirrhosis", "liver disease", "upper gastrointestinal bleeding", "non-variceal upper gastrointestinal bleeding", "PUD", "DL", "Mallory-Weiss tear", and "MWS". While the majority of acute gastrointestinal (GI) bleeding with aALD is related to portal hypertension, about 30%-40% of acute GI bleeding in patients with aALD is unrelated to portal hypertension. Such bleeding constitutes an important complication of aALD because of its frequency, severity, and associated mortality. Patients with cirrhosis have a markedly increased risk of PUD, which further increases with the progression of cirrhosis. Patients with cirrhosis or aALD and peptic ulcer bleeding (PUB) have worse clinical outcomes than other patients with PUB, including uncontrolled bleeding, rebleeding, and mortality. Alcohol consumption, nonsteroidal anti-inflammatory drug use, and portal hypertension may have a pathogenic role in the development of PUD in patients with aALD. Limited data suggest that Helicobacter pylori does not play a significant role in the pathogenesis of PUD in most cirrhotic patients. The frequency of bleeding from DL appears to be increased in patients with aALD. DL may be associated with an especially high mortality in these patients. MWS is strongly associated with heavy alcohol consumption from binge drinking or chronic alcoholism, and is associated with aALD. Patients with aALD have more severe MWS bleeding and are more likely to rebleed when compared to non-cirrhotics. Pre-endoscopic management of acute GI bleeding in patients with a

  17. Distinctive aspects of peptic ulcer disease, Dieulafoy's lesion, and Mallory-Weiss syndrome in patients with advanced alcoholic liver disease or cirrhosis

    Science.gov (United States)

    Nojkov, Borko; Cappell, Mitchell S

    2016-01-01

    AIM: To systematically review the data on distinctive aspects of peptic ulcer disease (PUD), Dieulafoy's lesion (DL), and Mallory-Weiss syndrome (MWS) in patients with advanced alcoholic liver disease (aALD), including alcoholic hepatitis or alcoholic cirrhosis. METHODS: Computerized literature search performed via PubMed using the following medical subject heading terms and keywords: "alcoholic liver disease", "alcoholic hepatitis", "alcoholic cirrhosis", "cirrhosis", "liver disease", "upper gastrointestinal bleeding", "non-variceal upper gastrointestinal bleeding", "PUD", "DL", "Mallory-Weiss tear", and "MWS". RESULTS: While the majority of acute gastrointestinal (GI) bleeding with aALD is related to portal hypertension, about 30%-40% of acute GI bleeding in patients with aALD is unrelated to portal hypertension. Such bleeding constitutes an important complication of aALD because of its frequency, severity, and associated mortality. Patients with cirrhosis have a markedly increased risk of PUD, which further increases with the progression of cirrhosis. Patients with cirrhosis or aALD and peptic ulcer bleeding (PUB) have worse clinical outcomes than other patients with PUB, including uncontrolled bleeding, rebleeding, and mortality. Alcohol consumption, nonsteroidal anti-inflammatory drug use, and portal hypertension may have a pathogenic role in the development of PUD in patients with aALD. Limited data suggest that Helicobacter pylori does not play a significant role in the pathogenesis of PUD in most cirrhotic patients. The frequency of bleeding from DL appears to be increased in patients with aALD. DL may be associated with an especially high mortality in these patients. MWS is strongly associated with heavy alcohol consumption from binge drinking or chronic alcoholism, and is associated with aALD. Patients with aALD have more severe MWS bleeding and are more likely to rebleed when compared to non

  18. High temporal resolution mapping of seismic noise sources using heterogeneous supercomputers

    Science.gov (United States)

    Gokhberg, Alexey; Ermert, Laura; Paitz, Patrick; Fichtner, Andreas

    2017-04-01

    Time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems. Significant interest in seismic noise source maps with high temporal resolution (days) is expected to come from a number of domains, including natural resources exploration, analysis of active earthquake fault zones and volcanoes, as well as geothermal and hydrocarbon reservoir monitoring. Currently, knowledge of noise sources is insufficient for high-resolution subsurface monitoring applications. Near-real-time seismic data, as well as advanced imaging methods to constrain seismic noise sources, have recently become available. These methods are based on the massive cross-correlation of seismic noise records from all available seismic stations in the region of interest and are therefore very computationally intensive. Heterogeneous massively parallel supercomputing systems introduced in recent years combine conventional multi-core CPUs with GPU accelerators and provide an opportunity for a manifold increase in computing performance. Therefore, these systems represent an efficient platform for implementation of a noise source mapping solution. We present the first results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service that provides seismic noise source maps for Central Europe with high temporal resolution (days to few weeks depending on frequency and data availability). The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept in order to provide interested external researchers regular access to the noise source maps. The solution architecture includes the following sub-systems: (1) data acquisition responsible for
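    The computational kernel behind such noise-source imaging is the cross-correlation of long noise records between station pairs. A minimal single-pair numpy sketch follows (synthetic traces; the production service runs this massively in parallel on GPUs).

```python
# Minimal sketch of the basic kernel behind ambient-noise processing: the
# cross-correlation of two station records as a function of lag. Illustrative
# only (numpy, single equal-length trace pair).
import numpy as np

def noise_cross_correlation(trace_a, trace_b, dt):
    a = (trace_a - trace_a.mean()) / trace_a.std()
    b = (trace_b - trace_b.mean()) / trace_b.std()
    cc = np.correlate(a, b, mode="full") / len(a)
    lags = np.arange(-len(a) + 1, len(b)) * dt
    return lags, cc

# Synthetic example: trace_b is trace_a delayed by 2 s, sampled at 100 Hz.
dt = 0.01
rng = np.random.default_rng(0)
trace_a = rng.normal(size=6000)
trace_b = np.roll(trace_a, 200)
lags, cc = noise_cross_correlation(trace_a, trace_b, dt)
print(f"peak correlation at lag {lags[np.argmax(cc)]:.2f} s")   # ~ -2.00 s
```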

  19. Clinical analysis of surgical treatment for adult navicular avascular necrosis (Müller-Weiss disease)

    Institute of Scientific and Technical Information of China (English)

    赵良军; 劳山; 赵劲民; 薄占东; 花奇凯; 罗高斌

    2015-01-01

    Objective: To investigate the surgical approaches to, and outcomes of, treating avascular necrosis of the tarsal navicular (Müller-Weiss disease) in adults. Methods: The clinical data of 9 adult patients with navicular avascular necrosis treated from December 2008 to December 2013 with bone-grafting arthrodesis of the navicular joints and internal fixation were retrospectively analyzed. They were 2 men and 7 women, aged 32 to 60 years (mean, 42 years). By Maceira stage there were 1 case of stage II, 4 cases of stage III, 3 cases of stage IV, and 1 case of stage V. Six cases underwent arthrodesis of the talo-naviculo-cuneiform joints, 2 cases isolated talonavicular arthrodesis, and 1 case triple arthrodesis. Results: The 9 patients were followed up for 12 to 18 months (mean, 15.8 months). Pain and intermittent claudication disappeared in all patients, and bony fusion was achieved in all affected feet after 12 to 17 weeks (mean, 14.6 weeks). At one year after surgery the American Orthopaedic Foot and Ankle Society ankle-hindfoot score was 80 points, with 1 excellent, 7 good, and 1 fair result. At one year the foot length [(15.5±0.8) cm], arch height [(18.6±0.9) mm] and apex angle of the medial longitudinal arch (119.2°±6.4°) all differed significantly from the preoperative values [(14.3±0.9) cm, (10.2±0.7) mm and 136.5°±7.8°] (P<0.05). Conclusion: In the treatment of adult navicular avascular necrosis, satisfactory outcomes can be achieved by choosing an appropriate surgical procedure according to preoperative and intraoperative assessment of the extent of involvement of the navicular and the associated joints.

  20. The company's mainframes join CERN's openlab for DataGrid apps and are pivotal in a new $22 million Supercomputer in the U.K.

    CERN Multimedia

    2002-01-01

    Hewlett-Packard has installed a supercomputer system valued at more than $22 million at the Wellcome Trust Sanger Institute (WTSI) in the U.K. HP has also joined the CERN openlab for DataGrid applications (1 page).

  1. Research center Juelich to install Germany's most powerful supercomputer new IBM System for science and research will achieve 5.8 trillion computations per second

    CERN Multimedia

    2002-01-01

    "The Research Center Juelich, Germany, and IBM today announced that they have signed a contract for the delivery and installation of a new IBM supercomputer at the Central Institute for Applied Mathematics" (1/2 page).

  2. « Ni bas-bleu, ni pot-au-feu » ("Neither bluestocking nor housewife"): the conception of "woman" according to Augusta Moll-Weiss (France, turn of the nineteenth-twentieth centuries)

    Directory of Open Access Journals (Sweden)

    Sandrine Roll

    2009-12-01

    This article analyses the ideas and work of Augusta Moll-Weiss, the head of the École des mères. At the turn of the nineteenth-twentieth centuries, when praise of the housewife was a recurrent theme in the moralizing discourse of both working-class and bourgeois circles, Augusta Moll-Weiss set up a programme of household courses. Her work offers a singular view of what activities regarded as typically feminine could offer women. Far from training only "angels of the home", the courses taught at the École des mères also gave students the chance to prepare for a professional career. Augusta Moll-Weiss imagined a wealth of openings that placed the activities of "caring for others" within the sphere of social work. Her commitment to household education was accompanied by reflection on new domestic models capable of supporting women's entry into the labour market. The rationalization and gendered sharing of domestic tasks, as well as the question of part-time work, were at the heart of her project. On the margins of philanthropy and feminism, Augusta Moll-Weiss thus engaged in a strategy for the recognition of women's role in the civic sphere. After presenting her work in education, this article examines her vision of the "New Housewife" and her approach to feminism.

  3. Earth and environmental science in the 1980's: Part 1: Environmental data systems, supercomputer facilities and networks

    Science.gov (United States)

    1986-01-01

    Overview descriptions of on-line environmental data systems, supercomputer facilities, and networks are presented. Each description addresses the concepts of content, capability, and user access relevant to the point of view of potential utilization by the Earth and environmental science community. The information on similar systems or facilities is presented in parallel fashion to encourage and facilitate intercomparison. In addition, summary sheets are given for each description, and a summary table precedes each section.

  4. A Parallel Supercomputer Implementation of a Biological Inspired Neural Network and its use for Pattern Recognition

    Science.gov (United States)

    de Ladurantaye, Vincent; Lavoie, Jean; Bergeron, Jocelyn; Parenteau, Maxime; Lu, Huizhong; Pichevar, Ramin; Rouat, Jean

    2012-02-01

    A parallel implementation of a large spiking neural network is proposed and evaluated. The neural network implements the binding by synchrony process using the Oscillatory Dynamic Link Matcher (ODLM). Scalability, speed and performance are compared for 2 implementations: Message Passing Interface (MPI) and Compute Unified Device Architecture (CUDA) running on clusters of multicore supercomputers and NVIDIA graphical processing units respectively. A global spiking list that represents at each instant the state of the neural network is described. This list indexes each neuron that fires during the current simulation time so that the influence of their spikes is simultaneously processed on all computing units. Our implementation shows a good scalability for very large networks. A complex and large spiking neural network has been implemented in parallel with success, thus paving the road towards real-life applications based on networks of spiking neurons. MPI offers a better scalability than CUDA, while the CUDA implementation on a GeForce GTX 285 gives the best cost to performance ratio. When running the neural network on the GTX 285, the processing speed is comparable to the MPI implementation on RQCHP's Mammouth parallel machine with 64 nodes (128 cores).
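    The global spiking list described above can be illustrated with a toy time-stepped loop: at each step only the indices of neurons that crossed threshold are collected, and only those columns of the weight matrix contribute input. This is a simplified leaky integrate-and-fire sketch, not the ODLM implementation.

```python
# Toy sketch of the "global spiking list" idea: at every time step only the
# indices of neurons that fired are collected, and only their spikes are
# propagated through the weight matrix. Simplified leaky integrate-and-fire
# dynamics; this is not the ODLM code.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
weights = rng.normal(0.0, 0.05, size=(n, n))
v = rng.uniform(0.0, 1.0, size=n)            # membrane potentials
threshold, leak = 1.0, 0.98

for step in range(100):
    spiking_list = np.flatnonzero(v >= threshold)     # the per-step spike list
    v[spiking_list] = 0.0                             # reset fired neurons
    # Only the columns of fired neurons contribute input this step.
    v = leak * v + weights[:, spiking_list].sum(axis=1) + 0.03
    if step % 20 == 0:
        print(f"step {step}: {spiking_list.size} neurons fired")
```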

  5. Assessment techniques for a learning-centered curriculum: evaluation design for adventures in supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Helland, B. [Ames Lab., IA (United States); Summers, B.G. [Oak Ridge National Lab., TN (United States)

    1996-09-01

    As the classroom paradigm shifts from being teacher-centered to being learner-centered, student assessments are evolving from typical paper and pencil testing to other methods of evaluation. Students should be probed for understanding, reasoning, and critical thinking abilities rather than their ability to return memorized facts. The assessment of the Department of Energy's pilot program, Adventures in Supercomputing (AiS), offers one example of assessment techniques developed for learner-centered curricula. This assessment has employed a variety of methods to collect student data. Methods of assessment used were traditional testing, performance testing, interviews, short questionnaires via email, and student presentations of projects. The data obtained from these sources have been analyzed by a professional assessment team at the Center for Children and Technology. The results have been used to improve the AiS curriculum and establish the quality of the overall AiS program. This paper will discuss the various methods of assessment used and the results.

  6. Massively-parallel electrical-conductivity imaging of hydrocarbons using the Blue Gene/L supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Commer, M.; Newman, G.A.; Carazzone, J.J.; Dickens, T.A.; Green,K.E.; Wahrmund, L.A.; Willen, D.E.; Shiu, J.

    2007-05-16

    Large-scale controlled source electromagnetic (CSEM) three-dimensional (3D) geophysical imaging is now receiving considerable attention for electrical conductivity mapping of potential offshore oil and gas reservoirs. To cope with the typically large computational requirements of the 3D CSEM imaging problem, our strategies exploit computational parallelism and optimized finite-difference meshing. We report on an imaging experiment, utilizing 32,768 tasks/processors on the IBM Watson Research Blue Gene/L (BG/L) supercomputer. Over a 24-hour period, we were able to image a large-scale marine CSEM field data set that previously required over four months of computing time on distributed clusters utilizing 1024 tasks on an Infiniband fabric. The total initial data misfit could be decreased by 67 percent within 72 completed inversion iterations, indicating an electrically resistive region in the southern survey area below a depth of 1500 m below the seafloor. The major part of the residual misfit stems from transmitter-parallel receiver components that have an offset from the transmitter sail line (broadside configuration). Modeling confirms that improved broadside data fits can be achieved by considering anisotropic electrical conductivities. While delivering a satisfactory gross-scale image for the depths of interest, the experiment provides important evidence for the necessity of discriminating between horizontal and vertical conductivities for maximally consistent 3D CSEM inversions.

  7. 369 TFlop/s molecular dynamics simulations on the Roadrunner general-purpose heterogeneous supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Swaminarayan, Sriram [Los Alamos National Laboratory; Germann, Timothy C [Los Alamos National Laboratory; Kadau, Kai [Los Alamos National Laboratory; Fossum, Gordon C [IBM CORPORATION

    2008-01-01

    The authors present timing and performance numbers for a short-range parallel molecular dynamics (MD) code, SPaSM, that has been rewritten for the heterogeneous Roadrunner supercomputer. Each Roadrunner compute node consists of two AMD Opteron dual-core microprocessors and four PowerXCell 8i enhanced Cell microprocessors, so that there are four MPI ranks per node, each with one Opteron and one Cell. The interatomic forces are computed on the Cells (each with one PPU and eight SPU cores), while the Opterons are used to direct inter-rank communication and perform I/O-heavy periodic analysis, visualization, and checkpointing tasks. The performance measured for their initial implementation of a standard Lennard-Jones pair potential benchmark reached a peak of 369 Tflop/s of double-precision floating-point performance on the full Roadrunner system (27.7% of peak), corresponding to 124 MFlop/s per watt at a price of approximately 3.69 MFlop/s per dollar. The authors also demonstrate an initial target application, the jetting and ejection of material from a shocked surface.
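
    A minimal serial sketch of the Lennard-Jones pair force and energy evaluation underlying such a benchmark is given below in reduced units (an illustration of the physics only; the SPaSM implementation organizes this with cell lists and runs it on the Cell SPUs).

    ```c
    /* Lennard-Jones pair kernel in reduced units (epsilon = sigma = 1):
     * returns the pair energy and writes the force on particle i into f[3].
     * Illustrative only; not the SPaSM production kernel. */
    double lj_pair(const double ri[3], const double rj[3], double rcut2, double f[3])
    {
        double dx = ri[0] - rj[0], dy = ri[1] - rj[1], dz = ri[2] - rj[2];
        double r2 = dx * dx + dy * dy + dz * dz;
        f[0] = f[1] = f[2] = 0.0;
        if (r2 >= rcut2 || r2 == 0.0)
            return 0.0;

        double inv_r2 = 1.0 / r2;
        double inv_r6 = inv_r2 * inv_r2 * inv_r2;
        /* U(r) = 4 (r^-12 - r^-6);  F_i = (24/r^2)(2 r^-12 - r^-6) (r_i - r_j) */
        double fscale = 24.0 * inv_r2 * inv_r6 * (2.0 * inv_r6 - 1.0);
        f[0] = fscale * dx;
        f[1] = fscale * dy;
        f[2] = fscale * dz;
        return 4.0 * inv_r6 * (inv_r6 - 1.0);
    }
    ```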

  8. Cell-based Adaptive Mesh Refinement on the GPU with Applications to Exascale Supercomputing

    Science.gov (United States)

    Trujillo, Dennis; Robey, Robert; Davis, Neal; Nicholaeff, David

    2011-10-01

    We present an OpenCL implementation of a cell-based adaptive mesh refinement (AMR) scheme for the shallow water equations. The challenges associated with ensuring algorithmic locality, so as to fully exploit the massive number of parallel threads on the GPU, are discussed. This includes a proof of concept that a cell-based AMR code can be effectively implemented, even on a small scale, in the memory and threading model provided by OpenCL. Additionally, the program requires dynamic memory in order to properly implement the mesh; as this is not supported in the OpenCL 1.1 standard, a combination of CPU memory management and GPU computation effectively implements a dynamic memory allocation scheme. Load balancing is achieved through a new stencil-based implementation of a space-filling curve, eliminating the need for a complete recalculation of the indexing on the mesh. A Cartesian grid hash table scheme to allow fast parallel neighbor accesses is also discussed. Finally, the relative speedup of the GPU-enabled AMR code is compared to the original serial version. We conclude that parallelization using the GPU provides significant speedup for typical numerical applications and is feasible for scientific applications in the next generation of supercomputing.
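
    The stencil-based space-filling curve used for load balancing plays the same role as the more familiar Morton (Z-order) curve: it maps 2D cell coordinates onto a 1D ordering that keeps spatially nearby cells close together. The sketch below shows a plain Morton encoding as an assumed stand-in for illustration; it is not the paper's stencil construction.

    ```c
    /* Illustrative Morton (Z-order) index for 2D cell coordinates; conveys the
     * idea of ordering AMR cells along a space-filling curve. */
    #include <stdint.h>

    static uint64_t spread_bits(uint32_t v)
    {
        uint64_t x = v;                    /* interleave-ready bit spreading */
        x = (x | (x << 16)) & 0x0000FFFF0000FFFFULL;
        x = (x | (x << 8))  & 0x00FF00FF00FF00FFULL;
        x = (x | (x << 4))  & 0x0F0F0F0F0F0F0F0FULL;
        x = (x | (x << 2))  & 0x3333333333333333ULL;
        x = (x | (x << 1))  & 0x5555555555555555ULL;
        return x;
    }

    /* Interleave the bits of (i, j) so cells close in 2D stay close in 1D. */
    uint64_t morton2d(uint32_t i, uint32_t j)
    {
        return spread_bits(i) | (spread_bits(j) << 1);
    }
    ```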

  9. Distributed computing as a virtual supercomputer: Tools to run and manage large-scale BOINC simulations

    Science.gov (United States)

    Giorgino, Toni; Harvey, M. J.; de Fabritiis, Gianni

    2010-08-01

    Distributed computing (DC) projects tackle large computational problems by exploiting the donated processing power of thousands of volunteered computers, connected through the Internet. To efficiently employ the computational resources of one of the world's largest DC efforts, GPUGRID, the project scientists require tools that handle hundreds of thousands of tasks which run asynchronously and generate gigabytes of data every day. We describe RBoinc, an interface that allows computational scientists to embed the DC methodology into the daily work-flow of high-throughput experiments. By extending the Berkeley Open Infrastructure for Network Computing (BOINC), the leading open-source middleware for current DC projects, with mechanisms to submit and manage large-scale distributed computations from individual workstations, RBoinc turns distributed grids into cost-effective virtual resources that can be employed by researchers in work-flows similar to conventional supercomputers. The GPUGRID project is currently using RBoinc for all of its in silico experiments based on molecular dynamics methods, including the determination of binding free energies and free energy profiles in all-atom models of biomolecules.

  10. A user-friendly web portal for T-Coffee on supercomputers

    Directory of Open Access Journals (Sweden)

    Koetsier Jos

    2011-05-01

    Background: Parallel T-Coffee (PTC) was the first parallel implementation of the T-Coffee multiple sequence alignment tool. It is based on MPI and RMA mechanisms. Its purpose is to reduce the execution time of large-scale sequence alignments. It can be run on distributed memory clusters, allowing users to align data sets consisting of hundreds of proteins within a reasonable time. However, most of the potential users of this tool are not familiar with the use of grids or supercomputers. Results: In this paper we show how PTC can be easily deployed and controlled on a supercomputer architecture using a web portal developed using Rapid. Rapid is a tool for efficiently generating standardized portlets for a wide range of applications, and the approach described here is generic enough to be applied to other applications or to deploy PTC on different HPC environments. Conclusions: The PTC portal allows users to upload a large number of sequences to be aligned by the parallel version of T-Coffee that cannot be aligned by a single machine due to memory and execution time constraints. The web portal provides a user-friendly solution.

  11. PFLOTRAN: Reactive Flow & Transport Code for Use on Laptops to Leadership-Class Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Glenn E.; Lichtner, Peter C.; Lu, Chuan; Mills, Richard T.

    2012-04-18

    PFLOTRAN, a next-generation reactive flow and transport code for modeling subsurface processes, has been designed from the ground up to run efficiently on machines ranging from leadership-class supercomputers to laptops. Based on an object-oriented design, the code is easily extensible to incorporate additional processes. It can interface seamlessly with Fortran 9X, C and C++ codes. Domain decomposition parallelism is employed, with the PETSc parallel framework used to manage parallel solvers, data structures and communication. Features of the code include a modular input file, implementation of high-performance I/O using parallel HDF5, the ability to perform multiple-realization simulations with multiple processors per realization in a seamless manner, and multiple modes for multiphase flow and multicomponent geochemical transport. Chemical reactions currently implemented in the code include homogeneous aqueous complexing reactions and heterogeneous mineral precipitation/dissolution, ion exchange, surface complexation and a multirate kinetic sorption model. PFLOTRAN has demonstrated petascale performance using 2^17 processor cores with over 2 billion degrees of freedom. Accomplishments achieved to date include applications to the Hanford 300 Area and modeling CO2 sequestration in deep geologic formations.

  12. A user-friendly web portal for T-Coffee on supercomputers.

    Science.gov (United States)

    Rius, Josep; Cores, Fernando; Solsona, Francesc; van Hemert, Jano I; Koetsier, Jos; Notredame, Cedric

    2011-05-12

    Parallel T-Coffee (PTC) was the first parallel implementation of the T-Coffee multiple sequence alignment tool. It is based on MPI and RMA mechanisms. Its purpose is to reduce the execution time of large-scale sequence alignments. It can be run on distributed memory clusters, allowing users to align data sets consisting of hundreds of proteins within a reasonable time. However, most of the potential users of this tool are not familiar with the use of grids or supercomputers. In this paper we show how PTC can be easily deployed and controlled on a supercomputer architecture using a web portal developed using Rapid. Rapid is a tool for efficiently generating standardized portlets for a wide range of applications, and the approach described here is generic enough to be applied to other applications or to deploy PTC on different HPC environments. The PTC portal allows users to upload a large number of sequences to be aligned by the parallel version of T-Coffee that cannot be aligned by a single machine due to memory and execution time constraints. The web portal provides a user-friendly solution.

  13. Benchmarking Further Single Board Computers for Building a Mini Supercomputer for Simulation of Telecommunication Systems

    Directory of Open Access Journals (Sweden)

    Gábor Lencse

    2016-01-01

    Parallel Discrete Event Simulation (PDES) with the conservative synchronization method can be used efficiently for the performance analysis of telecommunication systems because of their good lookahead properties. For PDES, a cost-effective execution platform may be built using single board computers (SBCs), which offer relatively high computation capacity compared to their price and power consumption, and especially to the space they take up. A benchmarking method is proposed and its operation is demonstrated by benchmarking ten different SBCs, namely Banana Pi, Beaglebone Black, Cubieboard2, Odroid-C1+, Odroid-U3+, Odroid-XU3 Lite, Orange Pi Plus, Radxa Rock Lite, Raspberry Pi Model B+, and Raspberry Pi 2 Model B+. Their benchmarking results are compared to find out which one should be used for building a mini supercomputer for parallel discrete-event simulation of telecommunication systems. The SBCs are also used to build a heterogeneous cluster, and the performance of the cluster is tested, too.

  14. Bringing ATLAS production to HPC resources - A use case with the Hydra supercomputer of the Max Planck Society

    Science.gov (United States)

    Kennedy, J. A.; Kluth, S.; Mazzaferro, L.; Walker, Rodney

    2015-12-01

    The possible usage of HPC resources by ATLAS is now becoming viable due to the changing nature of these systems, and it is also very attractive due to the need for increasing amounts of simulated data. In recent years the architecture of HPC systems has evolved, moving away from specialized monolithic systems to a more generic Linux-type platform. This change means that the deployment of non-HPC-specific codes has become much easier. The timing of this evolution perfectly suits the needs of ATLAS and opens a new window of opportunity. The ATLAS experiment at CERN will begin a period of high-luminosity data taking in 2015. This high-luminosity phase will be accompanied by a need for increasing amounts of simulated data, which is expected to exceed the capabilities of the current Grid infrastructure. ATLAS aims to address this need by opportunistically accessing resources such as cloud and HPC systems. This paper presents the results of a pilot project undertaken by ATLAS and the MPP/RZG to provide access to the Hydra supercomputer facility. Hydra is the supercomputer of the Max Planck Society; it is a Linux-based machine with over 80,000 cores and 4,000 physical nodes located at the RZG near Munich. This paper describes the work undertaken to integrate Hydra into the ATLAS production system by using the NorduGrid ARC-CE and other standard Grid components. The customization of these components and the strategies for HPC usage are discussed, as well as possibilities for future directions.

  15. Health care public reporting utilization - user clusters, web trails, and usage barriers on Germany's public reporting portal Weisse-Liste.de.

    Science.gov (United States)

    Pross, Christoph; Averdunk, Lars-Henrik; Stjepanovic, Josip; Busse, Reinhard; Geissler, Alexander

    2017-04-21

    Quality of care public reporting provides structural, process and outcome information to facilitate hospital choice and strengthen quality competition. Yet, evidence indicates that patients rarely use this information in their decision-making, due to limited awareness of the data and complex and conflicting information. While there is enthusiasm among policy makers for public reporting, clinicians and researchers doubt its overall impact. Almost no study has analyzed how users behave on public reporting portals, which information they seek out and when they abort their search. This study employs web-usage mining techniques on server log data covering 17 million user actions on Germany's premier provider transparency portal Weisse-Liste.de (WL.de) between 2012 and 2015. Postal code and ICD search requests facilitate identification of geographical and treatment-area usage patterns. User clustering helps to identify user types based on parameters such as session length, referrer and page topic visited. First-level Markov chains illustrate common click paths and premature exits. In 2015, the WL.de Hospital Search portal had 2,750 daily users, with 25% mobile traffic, a bounce rate of 38%, and 48% of users examining hospital quality information. From 2013 to 2015, user traffic grew at 38% annually. On average users spent 7 min on the portal, with 7.4 clicks and 54 s between clicks. Users request information for many oncologic and orthopedic conditions for which no process or outcome quality indicators are available. Ten distinct user types, with particular usage patterns and interests, are identified. In particular, the different types of professional and non-professional users need to be addressed differently to avoid high premature exit rates at several key steps in the information search and view process. Of all users, 37% enter hospital information correctly upon entry, while 47% require support in their hospital search. Several onsite and offsite improvement options are

  16. Influence of Earth crust composition on continental collision style in Precambrian conditions: Results of supercomputer modelling

    Science.gov (United States)

    Zavyalov, Sergey; Zakharov, Vladimir

    2016-04-01

    A number of issues concerning Precambrian geodynamics still remain unsolved because of the uncertainty of many physical (thermal regime, lithosphere thickness, crust thickness, etc.) and chemical (mantle composition, crust composition) parameters, which differed considerably compared to present-day values. In this work, we show results of numerical supercomputations based on a petrological and thermomechanical 2D model, which simulates the process of collision between two continental plates, each 80-160 km thick, with various convergence rates ranging from 5 to 15 cm/year. In the model, the upper mantle temperature is 150-200 °C higher than the modern value, while the continental crust radiogenic heat production is higher than the present value by a factor of 1.5. These settings correspond to Archean conditions. The present study investigates the dependence of collision style on various continental crust parameters, especially on crust composition. The following three archetypal settings of continental crust composition are examined: 1) completely felsic continental crust; 2) basic lower crust and felsic upper crust; 3) basic upper crust and felsic lower crust (hereinafter referred to as inverted crust). Modeling results show that collision with completely felsic crust is unlikely. In the case of basic lower crust, continental subduction and subsequent continental rock exhumation can take place. Therefore, formation of ultra-high-pressure metamorphic rocks is possible. Continental subduction also occurs in the case of inverted continental crust. However, in the latter case, the exhumation of felsic rocks is blocked by the upper basic layer and their subsequent interaction depends on their volume ratio. Thus, if the total inverted crust thickness is about 15 km and the thicknesses of the two layers are equal, felsic rocks cannot be exhumed. If the total thickness is 30 to 40 km and that of the felsic layer is 20 to 25 km, it breaks through the basic layer leading to

  17. Preliminary Test of the Werry-Weiss-Peters Activity Rating Scale

    Institute of Scientific and Technical Information of China (English)

    姜秀举; 苏林雁; 罗学荣

    2001-01-01

    Objective: To evaluate the utility of the Werry-Weiss-Peters Activity Rating Scale (WWPARS) for children with a Chinese background. Methods: In the city of Changsha, 465 normal children aged 8-12 in two primary schools and 27 ADHD children in an outpatient department were tested with the WWPARS. Psychometric properties of the scale were analyzed for reliability and validity. Results: While levels of activity increased with age for all children, there was no significant difference between the two groups assessed. The scale attained good reliability and validity. Conclusion: This scale can be applied to the evaluation of activity levels of Chinese children.

  18. The PVM (Parallel Virtual Machine) system: Supercomputer level concurrent computation on a network of IBM RS/6000 power stations

    Energy Technology Data Exchange (ETDEWEB)

    Sunderam, V.S. (Emory Univ., Atlanta, GA (USA). Dept. of Mathematics and Computer Science); Geist, G.A. (Oak Ridge National Lab., TN (USA))

    1991-01-01

    The PVM (Parallel Virtual Machine) system enables supercomputer level concurrent computations to be performed on interconnected networks of heterogeneous computer systems. Specifically, a network of 13 IBM RS/6000 powerstations has been successfully used to execute production quality runs of superconductor modeling codes at more than 250 Mflops. This work demonstrates the effectiveness of cooperative concurrent processing for high performance applications, and shows that supercomputer level computations may be attained at a fraction of the cost on distributed computing platforms. This paper describes the PVM programming environment and user facilities, as they apply to hardware platforms comprising a network of IBM RS/6000 powerstations. The salient design features of PVM will be discussed, including heterogeneity, scalability, multilanguage support, provisions for fault tolerance, the use of multiprocessors and scalar machines, an interactive graphical front end, and support for profiling, tracing, and visual analysis. The PVM system has been used extensively, and a range of production quality concurrent applications have been successfully executed using PVM on a variety of networked platforms. The paper will mention representative examples, and discuss two in detail. The first is a material sciences problem that was originally developed on a Cray 2. This application code calculates the electronic structure of metallic alloys from first principles and is based on the KKR-CPA algorithm. The second is a molecular dynamics simulation for calculating materials properties. Performance results for both applications on networks of RS/6000 powerstations will be presented, and accompanied by discussions of the other advantages of PVM and its potential as a complement or alternative to conventional supercomputers.
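
    The PVM primitives described above follow a pack/send/receive/unpack pattern. A toy master-side example is sketched below; it assumes the standard PVM 3 library and a hypothetical 'worker' executable, and is not taken from the superconductor or molecular dynamics applications discussed.

    ```c
    /* Toy PVM 3 master: spawn workers across the virtual machine, send each an
     * integer work-unit id, and collect one integer result back. */
    #include <stdio.h>
    #include "pvm3.h"

    #define NWORKERS 4

    int main(void)
    {
        int tids[NWORKERS];
        int n = pvm_spawn("worker", NULL, PvmTaskDefault, "", NWORKERS, tids);

        for (int i = 0; i < n; i++) {       /* hand out work units */
            pvm_initsend(PvmDataDefault);
            pvm_pkint(&i, 1, 1);
            pvm_send(tids[i], 1);           /* message tag 1 = work */
        }
        for (int i = 0; i < n; i++) {       /* gather results */
            int result;
            pvm_recv(-1, 2);                /* tag 2 = result, any sender */
            pvm_upkint(&result, 1, 1);
            printf("got result %d\n", result);
        }
        pvm_exit();
        return 0;
    }
    ```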

  19. Calculation of Free Energy Landscape in Multi-Dimensions with Hamiltonian-Exchange Umbrella Sampling on Petascale Supercomputer.

    Science.gov (United States)

    Jiang, Wei; Luo, Yun; Maragliano, Luca; Roux, Benoît

    2012-11-13

    An extremely scalable computational strategy is described for calculations of the potential of mean force (PMF) in multidimensions on massively distributed supercomputers. The approach involves coupling thousands of umbrella sampling (US) simulation windows distributed to cover the space of order parameters with a Hamiltonian molecular dynamics replica-exchange (H-REMD) algorithm to enhance the sampling of each simulation. In the present application, US/H-REMD is carried out in a two-dimensional (2D) space and exchanges are attempted alternately along the two axes corresponding to the two order parameters. The US/H-REMD strategy is implemented on the basis of a parallel/parallel multiple copy protocol at the MPI level, and therefore can fully exploit the computing power of large-scale supercomputers. Here the novel technique is illustrated using the leadership supercomputer IBM Blue Gene/P with an application to a typical biomolecular calculation of general interest, namely the binding of calcium ions to the small protein Calbindin D9k. The free energy landscape associated with two order parameters, the distance between the ion and its binding pocket and the root-mean-square deviation (rmsd) of the binding pocket relative to the crystal structure, was calculated using the US/H-REMD method. The results are then used to estimate the absolute binding free energy of the calcium ion to Calbindin D9k. The tests demonstrate that the 2D US/H-REMD scheme greatly accelerates the configurational sampling of the binding pocket, thereby improving the convergence of the potential of mean force calculation.
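
    For reference, an exchange attempt between two neighbouring umbrella windows in a Hamiltonian replica-exchange scheme of this kind is typically accepted with a Metropolis criterion of the following standard form (a textbook expression, not reproduced from the paper itself):

    ```latex
    % Metropolis acceptance for swapping configurations x_i and x_j between
    % neighbouring umbrella windows with biasing potentials U_i and U_j at
    % inverse temperature \beta (standard same-temperature H-REMD criterion):
    P_{\mathrm{acc}}(i \leftrightarrow j) \;=\;
      \min\Bigl(1,\, \exp\bigl[-\beta\,\bigl(U_i(x_j) + U_j(x_i)
        - U_i(x_i) - U_j(x_j)\bigr)\bigr]\Bigr)
    ```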

  20. Scalable geocomputation: evolving an environmental model building platform from single-core to supercomputers

    Science.gov (United States)

    Schmitz, Oliver; de Jong, Kor; Karssenberg, Derek

    2017-04-01

    There is an increasing demand to run environmental models on a big scale: simulations over large areas at high resolution. The heterogeneity of available computing hardware such as multi-core CPUs, GPUs or supercomputers potentially provides significant computing power to fulfil this demand. However, this requires detailed knowledge of the underlying hardware, parallel algorithm design and the implementation thereof in an efficient system programming language. Domain scientists such as hydrologists or ecologists often lack this specific software engineering knowledge; their emphasis is (and should be) on exploratory building and analysis of simulation models. As a result, models constructed by domain specialists mostly do not take full advantage of the available hardware. A promising solution is to separate the model building activity from software engineering by offering domain specialists a model building framework with pre-programmed building blocks that they combine to construct a model. The model building framework, consequently, needs to have built-in capabilities to make full usage of the available hardware. Developing such a framework, providing understandable code for domain scientists while being runtime efficient at the same time, poses several challenges for its developers. For example, optimisations can be performed on individual operations or on the whole model, or tasks need to be generated for a well-balanced execution without explicitly knowing the complexity of the domain problem provided by the modeller. Ideally, a modelling framework supports the optimal use of available hardware whichever combination of model building blocks scientists use. We demonstrate our ongoing work on developing parallel algorithms for spatio-temporal modelling and demonstrate 1) PCRaster, an environmental software framework (http://www.pcraster.eu) providing spatio-temporal model building blocks and 2) parallelisation of about 50 of these building blocks using

  1. Comparison of neuronal spike exchange methods on a Blue Gene/P supercomputer.

    Science.gov (United States)

    Hines, Michael; Kumar, Sameer; Schürmann, Felix

    2011-01-01

    For neural network simulations on parallel machines, interprocessor spike communication can be a significant portion of the total simulation time. The performance of several spike exchange methods using a Blue Gene/P (BG/P) supercomputer has been tested with 8K-128K cores using randomly connected networks of up to 32M cells with 1k connections per cell and 4M cells with 10k connections per cell, i.e., on the order of 4·10^10 connections (K is 1024, M is 1024^2, and k is 1000). The spike exchange methods used are the standard Message Passing Interface (MPI) collective, MPI_Allgather, and several variants of the non-blocking Multisend method, either implemented via non-blocking MPI_Isend or exploiting the possibility of very low overhead direct memory access (DMA) communication available on the BG/P. In all cases, the worst performing method was that using MPI_Isend, due to the high overhead of initiating a spike communication. The two best performing methods had similar performance with very low overhead for the initiation of spike communication: the persistent Multisend method using the Record-Replay feature of the Deep Computing Messaging Framework (DCMF_Multicast), and a two-phase Multisend in which a DCMF_Multicast is used to first send to a subset of phase-one destination cores, which then pass it on to their subset of phase-two destination cores. Departure from ideal scaling for the Multisend methods is almost completely due to load imbalance caused by the large variation in the number of cells that fire on each processor in the interval between synchronization. Spike exchange time itself is negligible since transmission overlaps with computation and is handled by a DMA controller. We conclude that ideal performance scaling will be ultimately limited by imbalance between incoming processor spikes between synchronization intervals. Thus, counterintuitively, maximization of load balance requires that the distribution of cells on processors should not reflect

  2. Genetic heterogeneity among craniosynostosis syndromes: mapping the Saethre-Chotzen syndrome locus between D7S513 and D7S516 and exclusion of Jackson-Weiss and Crouzon syndrome loci from 7p.

    Science.gov (United States)

    Lewanda, A F; Cohen, M M; Jackson, C E; Taylor, E W; Li, X; Beloff, M; Day, D; Clarren, S K; Ortiz, R; Garcia, C

    1994-01-01

    Saethre-Chotzen, Crouzon, and Jackson-Weiss syndromes are craniosynostotic autosomal dominant conditions with a wide variability in expression. Saethre-Chotzen has been mapped to chromosome 7p by L. A. Brueton et al. (1992, J. Med. Genet. 29: 681-685), the Greig cephalopolysyndactyly gene was identified at 7p13 by A. Vortkamp et al. (1991, Nature 352: 539-540), and many cases of craniosynostosis have been associated with 7p deletions. We confirmed linkage of the Saethre-Chotzen syndrome locus to chromosome 7p. The tightest linkage was to locus D7S493 (Z = 5.04, theta = 0.00), and linkage and haplotype analyses refined the location of the gene to the region between D7S513 and D7S516. Jackson-Weiss and Crouzon syndrome loci were analyzed using markers spanning the entire 7p arm and were excluded, proving that they are nonallelic to Saethre-Chotzen, Greig cephalopolysyndactyly, and the del(7p) syndromes.
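
    For readers outside genetics, the lod score Z quoted in this and the related craniosynostosis records below is the base-10 log-odds of linkage at recombination fraction theta; the standard definition (not specific to this study) is:

    ```latex
    % Standard lod score at recombination fraction \theta, with L the
    % likelihood of the pedigree data under linkage vs. free recombination:
    Z(\theta) \;=\; \log_{10}\frac{L(\theta)}{L(\theta = 1/2)}
    ```

    A value of Z of 3 or more is the conventional threshold for declaring linkage, and theta = 0 corresponds to no recombination between the marker and the disease locus.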

  3. Proposal of a Desk-Side Supercomputer with Reconfigurable Data-Paths Using Rapid Single-Flux-Quantum Circuits

    Science.gov (United States)

    Takagi, Naofumi; Murakami, Kazuaki; Fujimaki, Akira; Yoshikawa, Nobuyuki; Inoue, Koji; Honda, Hiroaki

    We propose a desk-side supercomputer with large-scale reconfigurable data-paths (LSRDPs) using superconducting rapid single-flux-quantum (RSFQ) circuits. It has several sets of computing units, each of which consists of a general-purpose microprocessor, an LSRDP and a memory. An LSRDP consists of a large number (e.g., a few thousand) of floating-point units (FPUs) and operand routing networks (ORNs) which connect the FPUs. We reconfigure the LSRDP to fit a computation, i.e., a group of floating-point operations that appears in a ‘for’ loop of numerical programs, by setting the routes in the ORNs before the execution of the loop. We propose to implement the LSRDPs with RSFQ circuits. The processors and the memories can be implemented with semiconductor technology. We expect that a 10 TFLOPS supercomputer, as well as its refrigerating engine, will be housed in a desk-side rack, using a near-future RSFQ process technology such as a 0.35 μm process.

  4. Brief Exploration on Technical Development of Key Applications at Supercomputing Center

    Institute of Scientific and Technical Information of China (English)

    党岗; 程志全

    2013-01-01

    At present, most of China's national supercomputing centers follow a construction model of local government investment with market-oriented application development. Local governments are mainly concerned with high-performance computing applications and services for local enterprises and institutions, so supercomputing centers are often used for ordinary applications, and it is difficult to bring the strategic role of supercomputing fully into play. How to keep these exceptionally capable centers viable, extend their reach, and drive technological innovation has long been a research topic in the field. This paper briefly discusses the challenges faced by the key applications of domestic supercomputing centers and offers several suggestions for orienting those key applications towards serving local development.

  5. Search for identity and political theatre: Peter Weiss's choice of genre in the 1960s and 1970s

    OpenAIRE

    Götze, Karl Heinz

    2013-01-01

    "Search for identity and political theatre" traces Peter Weiss's various artistic endeavours (painting, journalism, cinema, novelistic and autobiographical prose, theatre) during the 1960s and 1970s, showing how his choice of genre revolves around the question of an identity sought through political engagement and along other paths. Which form of writing best expresses the antagonism and contradiction that characterize the entire œuvre of

  6. Coherent 40 Gb/s SP-16QAM and 80 Gb/s PDM-16QAM in an Optimal Supercomputer Optical Switch Fabric

    DEFF Research Database (Denmark)

    Karinou, Fotini; Borkowski, Robert; Zibar, Darko

    2013-01-01

    We demonstrate, for the first time, the feasibility of using 40 Gb/s SP-16QAM and 80 Gb/s PDM-16QAM in an optimized cell switching supercomputer optical interconnect architecture based on semiconductor optical amplifiers as ON/OFF gates....

  7. Car2x with software defined networks, network functions virtualization and supercomputers technical and scientific preparations for the Amsterdam Arena telecoms fieldlab

    NARCIS (Netherlands)

    Meijer R.J.; Cushing R.; De Laat C.; Jackson P.; Klous S.; Koning R.; Makkes M.X.; Meerwijk A.

    2015-01-01

    In the invited talk 'Car2x with SDN, NFV and supercomputers' we report on how our past work with SDN [1, 2] allows the design of a smart mobility fieldlab in the huge parking lot of the Amsterdam Arena. We explain how we can engineer and test software that handles the complex conditions of the Car2X

  9. Nonperturbative Lattice Simulation of High Multiplicity Cross Section Bound in $\phi^4_3$ on Beowulf Supercomputer

    CERN Document Server

    Charng, Y Y

    2001-01-01

    In this thesis, we have investigated the possibility of large cross sections at large multiplicity in weakly coupled three-dimensional $\phi^4$ theory using Monte Carlo simulation methods. We have built a Beowulf supercomputer for this purpose. We use spectral function sum rules to derive a bound on the total cross section, where the quantity determining the bound can be measured by Monte Carlo simulation in Euclidean space. We determine the critical threshold energy for a large high-multiplicity cross section according to the analysis of M.B. Voloshin and of E.N. Argyres, R.M.P. Kleiss, and C.G. Papadopoulos. We compare the simulation results with the perturbative results and see no evidence for large cross sections in the range where tree-diagram estimates suggest they should exist.

  10. Performance Evaluation of an Intel Haswell- and Ivy Bridge-Based Supercomputer Using Scientific and Engineering Applications

    Science.gov (United States)

    Saini, Subhash; Hood, Robert T.; Chang, Johnny; Baron, John

    2016-01-01

    We present a performance evaluation conducted on a production supercomputer of the Intel Xeon Processor E5-2680v3, a twelve-core implementation of the fourth-generation Haswell architecture, and compare it with the Intel Xeon Processor E5-2680v2, an Ivy Bridge implementation of the third-generation Sandy Bridge architecture. Several new architectural features have been incorporated in Haswell, including improvements in all levels of the memory hierarchy as well as improvements to vector instructions and power management. We critically evaluate these new features of Haswell and compare with Ivy Bridge using several low-level benchmarks, including a subset of HPCC, HPCG, and four full-scale scientific and engineering applications. We also present a model that predicts the performance of HPCG and Cart3D to within 5%, and of Overflow to within 10% accuracy.

  11. Parallel workflow manager for non-parallel bioinformatic applications to solve large-scale biological problems on a supercomputer.

    Science.gov (United States)

    Suplatov, Dmitry; Popova, Nina; Zhumatiy, Sergey; Voevodin, Vladimir; Švedas, Vytas

    2016-04-01

    Rapid expansion of online resources providing access to genomic, structural, and functional information associated with biological macromolecules opens an opportunity to gain a deeper understanding of the mechanisms of biological processes through systematic analysis of large datasets. This, however, requires novel strategies to optimally utilize computer processing power. Some methods in bioinformatics and molecular modeling require extensive computational resources. Other algorithms have fast implementations which take at most several hours to analyze a common input on a modern desktop station; however, due to multiple invocations for a large number of subtasks, the full task requires significant computing power. Therefore, an efficient computational solution to large-scale biological problems requires both a wise parallel implementation of resource-hungry methods and a smart workflow to manage multiple invocations of relatively fast algorithms. In this work, a new software tool, mpiWrapper, has been developed to accommodate non-parallel implementations of scientific algorithms within the parallel supercomputing environment. The Message Passing Interface has been implemented to exchange information between nodes. Two specialized threads, one for task management and communication and another for subtask execution, are invoked on each processing unit to avoid deadlock while using blocking calls to MPI. The mpiWrapper can be used to launch all conventional Linux applications without the need to modify their original source codes and supports resubmission of subtasks on node failure. We show that this approach can be used to process huge amounts of biological data efficiently by running non-parallel programs in parallel mode on a supercomputer. The C++ source code and documentation are available from http://biokinet.belozersky.msu.ru/mpiWrapper .
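
    mpiWrapper itself is not reproduced here, but the general pattern it implements (many invocations of a serial program dispatched from an MPI master to workers) can be sketched as follows. This is a hypothetical simplification: the program name './serial_app', the input file naming and the task count are made up, and the real tool additionally uses a dedicated communication thread and resubmits subtasks on node failure.

    ```c
    /* Hypothetical MPI master-worker dispatcher running a serial program
     * once per input file (toy version of the pattern, not mpiWrapper). */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define TAG_REQ  1   /* worker -> master: ready for (more) work */
    #define TAG_WORK 2   /* master -> worker: task id, or -1 = stop */

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int ntasks = 1000;                 /* independent serial subtasks */

        if (rank == 0) {                         /* master: dispatch task ids */
            int next = 0, stopped = 0, dummy, task;
            MPI_Status st;
            while (stopped < size - 1) {
                MPI_Recv(&dummy, 1, MPI_INT, MPI_ANY_SOURCE, TAG_REQ,
                         MPI_COMM_WORLD, &st);
                task = (next < ntasks) ? next++ : -1;
                if (task < 0) stopped++;
                MPI_Send(&task, 1, MPI_INT, st.MPI_SOURCE, TAG_WORK,
                         MPI_COMM_WORLD);
            }
        } else {                                 /* worker: request, run, repeat */
            int task, ready = 0;
            for (;;) {
                MPI_Send(&ready, 1, MPI_INT, 0, TAG_REQ, MPI_COMM_WORLD);
                MPI_Recv(&task, 1, MPI_INT, 0, TAG_WORK, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                if (task < 0) break;
                char cmd[256];                   /* hypothetical serial program */
                snprintf(cmd, sizeof cmd, "./serial_app input_%d.dat", task);
                system(cmd);
            }
        }
        MPI_Finalize();
        return 0;
    }
    ```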

  12. High Performance Simulation of Large-Scale Red Sea Ocean Bottom Seismic Data on the Supercomputer Shaheen II

    KAUST Repository

    Tonellot, Thierry

    2017-02-27

    A combination of both shallow and deep water, plus islands and coral reefs, are some of the main features contributing to the complexity of subsalt seismic exploration in the Red Sea transition zone. These features often result in degrading effects on seismic images. State-of-the-art ocean bottom acquisition technologies are therefore required to record seismic data with optimal fold and offset, as well as advanced processing and imaging techniques. Numerical simulations of such complex seismic data can help improve acquisition design and also help in customizing, validating and benchmarking the processing and imaging workflows that will be applied on the field data. Consequently, realistic simulation of wave propagation is a computationally intensive process requiring a realistic model and an efficient 3D wave equation solver. Large-scale computing resources are also required to meet turnaround times compatible with a production time frame. In this work, we present the numerical simulation of an ocean bottom seismic survey to be acquired in the Red Sea transition zone starting in summer 2016. The survey's acquisition geometry comprises nearly 300,000 unique shot locations and 21,000 unique receiver locations, covering about 760 km². Using well log measurements and legacy 2D seismic lines in this area, a 3D P-wave velocity model was built, with a maximum depth of 7 km. The model was sampled at 10 m in each direction, resulting in more than 5 billion cells. Wave propagation in this model was performed using a 3D finite-difference solver in the time domain based on a staggered-grid velocity-pressure formulation of acoustodynamics. To ensure that the resulting data could be generated sufficiently fast, the King Abdullah University of Science and Technology (KAUST) supercomputer Shaheen II, a Cray XC40, was used. A total of 21,000 three-component (pressure and vertical and horizontal velocity) common receiver gathers with a 50 Hz maximum frequency were computed in less
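
    The staggered-grid velocity-pressure formulation mentioned above integrates the first-order acoustic system, written here in its generic textbook form (the production solver adds source handling, absorbing boundaries and other details not shown; the source term s is included here only for completeness):

    ```latex
    % First-order velocity-pressure form of acoustodynamics with density \rho(x),
    % P-wave velocity c(x), pressure p and particle velocity \mathbf{v}:
    \frac{\partial p}{\partial t} = -\rho c^{2}\,\nabla\!\cdot\mathbf{v} + s(\mathbf{x},t),
    \qquad
    \frac{\partial \mathbf{v}}{\partial t} = -\frac{1}{\rho}\,\nabla p
    ```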

  13. Hurricane Modeling and Supercomputing: Can a global mesoscale model be useful in improving forecasts of tropical cyclogenesis?

    Science.gov (United States)

    Shen, B.; Tao, W.; Atlas, R.

    2007-12-01

    Hurricane modeling, along with guidance from observations, has been used to help construct hurricane theories since the 1960s. CISK (conditional instability of the second kind, Charney and Eliassen 1964; Ooyama 1964, 1969) and WISHE (wind-induced surface heat exchange, Emanuel 1986) are among the well-known theories being used to understand hurricane intensification. For hurricane genesis, observations have indicated the importance of large-scale flows (e.g., the Madden-Julian Oscillation or MJO, Maloney and Hartmann, 2000) on the modulation of hurricane activity. Recent modeling studies have focused on the role of the MJO and Rossby waves (e.g., Ferreira and Schubert, 1996; Aivyer and Molinari, 2003) and/or the interaction of small-scale vortices (e.g., Holland 1995; Simpson et al. 1997; Hendrick et al. 2004), the determinism of which could also be built up by large-scale flows. The aforementioned studies suggest a unified view of hurricane formation, consisting of multiscale processes such as scale transition (e.g., from the MJO to equatorial Rossby waves and from waves to vortices) and scale interactions among vortices, convection, and surface heat and moisture fluxes. To depict the processes in this unified view, a high-resolution global model is needed. During the past several years, supercomputers have enabled the deployment of ultra-high-resolution global models, obtaining remarkable forecasts of hurricane track and intensity (Atlas et al. 2005; Shen et al. 2006). In this work, hurricane genesis is investigated with the aid of a global mesoscale model on the NASA Columbia supercomputer by conducting numerical experiments on the genesis of six consecutive tropical cyclones (TCs) in May 2002. These TCs include two pairs of twin TCs in the Indian Ocean, Supertyphoon Hagibis in the West Pacific Ocean and Hurricane Alma in the East Pacific Ocean. It is found that the model is capable of predicting the genesis of five of these TCs about two to three days in advance. Our

  14. An efficient highly parallel implementation of a large air pollution model on an IBM blue gene supercomputer

    Science.gov (United States)

    Ostromsky, Tz.; Georgiev, K.; Zlatev, Z.

    2012-10-01

    In this paper we discuss an efficient distributed-memory parallelization strategy for the Unified Danish Eulerian Model (UNI-DEM). We apply an improved decomposition strategy to the spatial domain in order to get more parallel tasks (based on the larger number of subdomains) with less communication between them (due to optimization of the overlapping area when the advection-diffusion problem is solved numerically). This kind of rectangular block partitioning (with a square-shape trend) allows us not only to increase significantly the number of potential parallel tasks, but also to reduce the local memory requirements per task, which is critical for the distributed-memory implementation of the higher-resolution/finer-grid versions of UNI-DEM on some parallel systems, and particularly on the IBM BlueGene/P platform, our target hardware. We will show by experiments that our new parallel implementation can use rather efficiently the resources of the powerful IBM BlueGene/P supercomputer, the largest in Bulgaria, up to its full capacity. It turned out to be extremely useful in the large and computationally expensive numerical experiments carried out to calculate some initial data for sensitivity analysis of the Danish Eulerian model.
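
    The rectangular block partitioning described above amounts to giving each MPI task a near-square subdomain of the horizontal grid. A simplified owner-range computation is sketched below (an illustrative fragment, not the UNI-DEM source; the halo needed by the advection-diffusion stencil would extend these ranges on interior edges).

    ```c
    /* Illustrative 2D block decomposition of an NX x NY grid over a px x py
     * process grid (px * py tasks), aiming at near-square blocks. */
    typedef struct { int x0, x1, y0, y1; } Block;   /* owned range, inclusive */

    Block block_of(int task, int NX, int NY, int px, int py)
    {
        int ix = task % px;                 /* position in the process grid */
        int iy = task / px;
        Block b;
        b.x0 = ix * NX / px;                /* integer arithmetic spreads any  */
        b.x1 = (ix + 1) * NX / px - 1;      /* remainder evenly across tasks   */
        b.y0 = iy * NY / py;
        b.y1 = (iy + 1) * NY / py - 1;
        return b;
    }
    ```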

  15. The ASCI Network for SC '99: A Step on the Path to a 100 Gigabit Per Second Supercomputing Network

    Energy Technology Data Exchange (ETDEWEB)

    PRATT,THOMAS J.; TARMAN,THOMAS D.; MARTINEZ,LUIS M.; MILLER,MARC M.; ADAMS,ROGER L.; CHEN,HELEN Y.; BRANDT,JAMES M.; WYCKOFF,PETER S.

    2000-07-24

    This document highlights the DISCOM² distance computing and communication team's activities at the 1999 Supercomputing conference in Portland, Oregon. This conference is sponsored by the IEEE and ACM. Sandia, Lawrence Livermore and Los Alamos National Laboratories have participated in this conference for eleven years. For the last four years the three laboratories have come together at the conference under the DOE's ASCI (Accelerated Strategic Computing Initiative) rubric. Communication support for the ASCI exhibit is provided by the ASCI DISCOM² project. The DISCOM² communication team uses this forum to demonstrate and focus communication and networking developments within the community. At SC 99, DISCOM built a prototype of the next-generation ASCI network, demonstrated remote clustering techniques, demonstrated the capabilities of emerging terabit router products, demonstrated the latest technologies for delivering visualization data to scientific users, and demonstrated the latest in encryption methods, including IP VPN technologies and ATM encryption research. The authors also coordinated the other production networking activities within the booth and between their demonstration partners on the exhibit floor. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support Sandia's overall strategies in ASCI networking.

  16. Combining density functional theory calculations, supercomputing, and data-driven methods to design new materials (Conference Presentation)

    Science.gov (United States)

    Jain, Anubhav

    2017-04-01

    Density functional theory (DFT) simulations solve for the electronic structure of materials starting from the Schrödinger equation. Many case studies have now demonstrated that researchers can often use DFT to design new compounds in the computer (e.g., for batteries, catalysts, and hydrogen storage) before synthesis and characterization in the lab. In this talk, I will focus on how DFT calculations can be executed on large supercomputing resources in order to generate very large data sets on new materials for functional applications. First, I will briefly describe the Materials Project, an effort at LBNL that has virtually characterized over 60,000 materials using DFT and has shared the results with over 17,000 registered users. Next, I will talk about how such data can help discover new materials, describing how preliminary computational screening led to the identification and confirmation of a new family of bulk AMX2 thermoelectric compounds with measured zT reaching 0.8. I will outline future plans for how such data-driven methods can be used to better understand the factors that control thermoelectric behavior, e.g., for the rational design of electronic band structures, in ways that are different from conventional approaches.

  17. BLAS (Basic Linear Algebra Subroutines), linear algebra modules, and supercomputers. Technical report for period ending 15 December 1984

    Energy Technology Data Exchange (ETDEWEB)

    Rice, J.R.

    1984-12-31

    On October 29 and 30, 1984, about 20 people met at Purdue University to consider extensions to the Basic Linear Algebra Subroutines (BLAS) and linear algebra software modules in general. The need for these extensions and new sets of modules is largely due to the advent of new supercomputer architectures, which make it difficult for ordinary coding techniques to achieve even a significant fraction of the potential computing power. The workshop format was one of informal presentations with ample discussion, followed by sessions of general discussion of the issues raised. This report is a summary of the presentations, the issues raised, the conclusions reached and the open-issue discussions. Each participant had an opportunity to comment on this report, but it also clearly reflects the author's filtering of the extensive discussions. Section 2 describes seven proposals for linear algebra software modules and Section 3 describes four presentations on the use of such modules. Discussion summaries are given next: Section 4 covers those issues where near consensus was reached, and Section 5 those that were left open.
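
    The module sets discussed at the workshop eventually evolved into the Level-2 and Level-3 BLAS, whose matrix-matrix kernels vendors tune for each architecture. For orientation, a present-day call through the C interface looks like the sketch below (a generic CBLAS example; CBLAS postdates the 1984 meeting and is not part of this report).

    ```c
    /* Generic CBLAS example: C = alpha*A*B + beta*C with dgemm, the kind of
     * tunable matrix-matrix module the workshop anticipated. */
    #include <cblas.h>

    void gemm_example(int n, const double *A, const double *B, double *C)
    {
        /* Row-major storage, no transposes, alpha = 1.0, beta = 0.0. */
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n,
                    1.0, A, n,
                         B, n,
                    0.0, C, n);
    }
    ```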

  18. Performance characteristics of hybrid MPI/OpenMP implementations of NAS parallel benchmarks SP and BT on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2011-03-29

    The NAS Parallel Benchmarks (NPB) are well-known applications with fixed algorithms for evaluating parallel systems and tools. Multicore supercomputers provide a natural programming paradigm for hybrid programs, whereby OpenMP can be used for data sharing among the cores that comprise a node and MPI can be used for the communication between nodes. In this paper, we use the SP and BT benchmarks of MPI NPB 3.3 as a basis for a comparative approach to implement hybrid MPI/OpenMP versions of SP and BT. In particular, we compare the performance of the hybrid SP and BT with their MPI counterparts on large-scale multicore supercomputers. Our performance results indicate that the hybrid SP outperforms the MPI SP by up to 20.76%, and the hybrid BT outperforms the MPI BT by up to 8.58%, on up to 10,000 cores on BlueGene/P at Argonne National Laboratory and Jaguar (Cray XT4/5) at Oak Ridge National Laboratory. We also use performance tools and MPI trace libraries available on these supercomputers to further investigate the performance characteristics of the hybrid SP and BT.
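
    The hybrid structure compared in the study, MPI between nodes and OpenMP across a node's cores, follows the familiar skeleton sketched below (a generic example, not the NPB SP/BT source).

    ```c
    /* Generic hybrid MPI/OpenMP skeleton: MPI ranks across nodes, OpenMP
     * threads within each rank; only the master thread calls MPI (FUNNELED). */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank, size, nthreads = 1;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double local_sum = 0.0, global_sum = 0.0;
        /* OpenMP parallelism over the cores owned by this MPI rank. */
        #pragma omp parallel reduction(+ : local_sum)
        {
            #pragma omp single
            nthreads = omp_get_num_threads();
            #pragma omp for
            for (int i = 0; i < 1000000; i++)
                local_sum += 1.0 / (1.0 + i);
        }
        /* Inter-node reduction handled by MPI outside the parallel region. */
        MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0,
                   MPI_COMM_WORLD);
        if (rank == 0)
            printf("%d ranks x %d threads, sum = %f\n", size, nthreads, global_sum);

        MPI_Finalize();
        return 0;
    }
    ```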

  19. Supercomputations and big-data analysis in strong-field ultrafast optical physics: filamentation of high-peak-power ultrashort laser pulses

    Science.gov (United States)

    Voronin, A. A.; Panchenko, V. Ya; Zheltikov, A. M.

    2016-06-01

    High-intensity ultrashort laser pulses propagating in gas media or in condensed matter undergo complex nonlinear spatiotemporal evolution where temporal transformations of optical field waveforms are strongly coupled to an intricate beam dynamics and ultrafast field-induced ionization processes. At the level of laser peak powers orders of magnitude above the critical power of self-focusing, the beam exhibits modulation instabilities, producing random field hot spots and breaking up into multiple noise-seeded filaments. This problem is described by a (3+1)-dimensional nonlinear field evolution equation, which needs to be solved jointly with the equation for ultrafast ionization of a medium. Analysis of this problem, which is equivalent to solving a billion-dimensional evolution problem, is only possible by means of supercomputer simulations augmented with coordinated big-data processing of large volumes of information acquired through theory-guiding experiments and supercomputations. Here, we review the main challenges of supercomputations and big-data processing encountered in strong-field ultrafast optical physics and discuss strategies to confront these challenges.
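
    Stripped of the ionization, plasma and higher-order dispersion terms that the full model requires, the (3+1)-dimensional field evolution equation referred to above reduces to a nonlinear envelope equation of the familiar form below (a textbook simplification quoted only to fix notation; it is not the complete equation solved in the simulations):

    ```latex
    % Simplified (3+1)D nonlinear envelope equation for the field A(x, y, \tau; z):
    % transverse diffraction + group-velocity dispersion + Kerr self-focusing only.
    \frac{\partial A}{\partial z}
      = \frac{i}{2k_{0}}\,\nabla_{\perp}^{2}A
      - \frac{i k''}{2}\,\frac{\partial^{2}A}{\partial \tau^{2}}
      + i\,\frac{\omega_{0} n_{2}}{c}\,|A|^{2}A
    ```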

  20. Evidence for locus heterogeneity in acrocephalosyndactyly: a refined localization for the Saethre-Chotzen syndrome locus on distal chromosome 7p--and exclusion of Jackson-Weiss syndrome from craniosynostosis loci on 7p and 5q.

    Science.gov (United States)

    van Herwerden, L; Rose, C S; Reardon, W; Brueton, L A; Weissenbach, J; Malcolm, S; Winter, R M

    1994-04-01

    Craniosynostosis (premature fusion of the skull sutures) occurs as a clinically heterogeneous group of disorders, frequently involving digital abnormalities. We have previously provisionally assigned the gene for one such condition, Saethre-Chotzen syndrome (ACS III), to chromosome 7p. Linkage analysis is now reported between ACS III and dinucleotide repeat loci on distal 7p. The maximum lod scores, Zmax, were 5.57 at a recombination fraction of .05, with D7S488, and 4.74 at a recombination fraction of .05, with D7S493. Only weak linkage, not reaching significance, was found with distal markers (D7S513 and afm281vc9) and a proximal marker (D7S516). Multipoint analysis shows that the disease locus lies between D7S513 and D7S516. Analysis of individual recombinants shows that the most likely position is between D7S493 and D7S516. Linkage data in regard to Jackson-Weiss syndrome demonstrate that this autosomal dominant form of acrocephalosyndactyly does not map to the ACS III region on 7p or to the acrocephalosyndactyly locus on 5q (Boston type). These findings underline the genetic heterogeneity among the different clinical conditions manifesting with acrocephalosyndactyly.

  1. Evidence for locus heterogeneity in acrocephalosyndactyly: A refined localization for the Saethre-Chotzen syndrome locus on distal chromosome 7p-and exclusion of Jackson-Weiss syndrome from craniosynostosis loci on 7p and 5q

    Energy Technology Data Exchange (ETDEWEB)

    Herwerden, L. van; Rose, C.S.P.; Reardon, W.; Malcolm, S.; Winter, R.M. (Institute of Child Health, London (United Kingdom)); Brueton, L.A. (Northwick Park Hospital, Harrow (United Kingdom)); Weissenbach, J. (Human Genome Research Centre, Evry (France))

    1994-04-01

    Craniosynostosis (premature fusion of the skull sutures) occurs as a clinically heterogeneous group of disorders, frequently involving digital abnormalities. The authors have previously provisionally assigned the gene for one such condition, Saethre-Chotzen syndrome (ACS III), to chromosome 7p. Linkage analysis is now reported between ACS III and dinucleotide repeat loci on distal 7p. The maximum lod scores, Zmax, were 5.57 at a recombination fraction of .05, with D7S488, and 4.74 at a recombination fraction of .05, with D7S493. Only weak linkage, not reaching significance, was found with distal markers (D7S513 and afm281vc9) and a proximal marker (D7S516). Multipoint analysis shows that the disease locus lies between D7S513 and D7S516. Analysis of individual recombinants shows that the most likely position is between D7S493 and D7S516. Linkage data in regard to Jackson-Weiss syndrome demonstrate that this autosomal dominant form of acrocephalosyndactyly does not map to the ACS III region on 7p or to the acrocephalosyndactyly locus on 5q (Boston type). These findings underline the genetic heterogeneity among the different clinical conditions manifesting with acrocephalosyndactyly. 20 refs., 3 figs., 2 tabs.

  2. Large-scale Particle Simulations for Debris Flows using Dynamic Load Balance on a GPU-rich Supercomputer

    Science.gov (United States)

    Tsuzuki, Satori; Aoki, Takayuki

    2016-04-01

    Numerical simulation of debris flows involving countless objects is an important topic in fluid dynamics and in many engineering applications. Particle-based methods are a promising approach to carrying out simulations of flows interacting with objects. In this paper, we propose an efficient method to realize a large-scale simulation of fluid-structure interaction by combining the SPH (Smoothed Particle Hydrodynamics) method for the fluid with the DEM (Discrete Element Method) for the objects on a multi-GPU system. By applying space-filling curves to the decomposition of the computational domain, we are able to keep the same number of particles in each decomposed subdomain. In our implementation, several techniques for particle counting and data movement have been introduced. Fragmentation of the memory used for particles occurs during the time integration, and the frequency of de-fragmentation is examined by taking into account the computational load balance and the communication cost between CPU and GPU. A linked-list technique for the particle interactions is introduced to reduce memory usage drastically. It is found that sorting the particle data for the neighbouring-particle list built with the linked-list method greatly improves memory access when performed at a certain interval. The weak and strong scalabilities for an SPH simulation using 111 million particles were measured from 4 GPUs to 512 GPUs for three types of space-filling curves. A large-scale debris-flow simulation of a tsunami with 10,368 floating rubble objects, using 117 million particles, was successfully carried out with 256 GPUs on the TSUBAME 2.5 supercomputer at Tokyo Institute of Technology.
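
    The linked-list neighbour technique mentioned above bins each particle into a background grid cell so that interaction partners are sought only in adjacent cells. A minimal cell linked-list construction is sketched below (an illustrative fragment, not the TSUBAME SPH/DEM code).

    ```c
    /* Illustrative cell linked-list build for particle neighbour search:
     * head[c] holds the last particle inserted into cell c, next[i] chains
     * the rest. Uniform cell size h; 3D grid of ncx*ncy*ncz cells. */
    #include <math.h>

    void build_cell_list(int n, const double *x, const double *y, const double *z,
                         double h, int ncx, int ncy, int ncz,
                         int *head /* size ncx*ncy*ncz */, int *next /* size n */)
    {
        for (int c = 0; c < ncx * ncy * ncz; c++)
            head[c] = -1;

        for (int i = 0; i < n; i++) {
            int cx = (int)floor(x[i] / h);      /* assume particles lie inside */
            int cy = (int)floor(y[i] / h);      /* the grid: 0 <= cx < ncx etc. */
            int cz = (int)floor(z[i] / h);
            int c = (cz * ncy + cy) * ncx + cx;
            next[i] = head[c];                  /* push particle i onto cell c */
            head[c] = i;
        }
        /* Neighbours of particle i are then found by walking the lists of the
         * 27 cells surrounding its own cell. */
    }
    ```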

  3. Statistical correlations and risk analyses techniques for a diving dual phase bubble model and data bank using massively parallel supercomputers.

    Science.gov (United States)

    Wienke, B R; O'Leary, T R

    2008-05-01

    Linking model and data, we detail the LANL diving reduced gradient bubble model (RGBM), its dynamical principles, and its correlation with data in the LANL Data Bank. Table, profile, and meter risks are obtained from likelihood analysis and quoted for air, nitrox, and helitrox no-decompression time limits, repetitive dive tables, and selected mixed-gas and repetitive profiles. Application analyses include the EXPLORER decompression meter algorithm, NAUI tables, University of Wisconsin Seafood Diver tables, comparative NAUI, PADI, and Oceanic NDLs and repetitive dives, comparative nitrogen and helium mixed-gas risks, a USS Perry deep rebreather (RB) exploration dive, a world-record open-circuit (OC) dive, and Woodville Karst Plain Project (WKPP) extreme cave exploration profiles. The algorithm has seen extensive and utilitarian application in mixed-gas diving, both in recreational and technical sectors, and forms the basis for released tables and decompression meters used by scientific, commercial, and research divers. The LANL Data Bank is described, and the methods used to deduce risk are detailed. Risk functions for dissolved gas and bubbles are summarized. Parameters that can be used to estimate profile risk are tallied. To fit data, a modified Levenberg-Marquardt routine is employed with an L2 error norm. Appendices sketch the numerical methods and list reports from field testing for (real) mixed-gas diving. A Monte Carlo-like sampling scheme for fast numerical analysis of the data is also detailed, as a coupled variance-reduction technique and an additional check on the canonical approach to estimating diving risk. The method suggests alternatives to the canonical approach. This work represents a first-time correlation effort linking a dynamical bubble model with deep-stop data. Supercomputing resources are requisite to connect model and data in application.

  4. New Mexico High School Supercomputing Challenge, 1990--1995: Five years of making a difference to students, teachers, schools, and communities. Progress report

    Energy Technology Data Exchange (ETDEWEB)

    Foster, M.; Kratzer, D.

    1996-02-01

    The New Mexico High School Supercomputing Challenge is an academic program dedicated to increasing interest in science and math among high school students by introducing them to high performance computing. This report provides a summary and evaluation of the first five years of the program, describes the program and shows the impact that it has had on high school students, their teachers, and their communities. Goals and objectives are reviewed and evaluated, growth and development of the program are analyzed, and future directions are discussed.

  5. Super-computer architecture

    CERN Document Server

    Hockney, R W

    1977-01-01

    This paper examines the design of the top-of-the-range, scientific, number-crunching computers. The market for such computers is not as large as that for smaller machines, but on the other hand it is by no means negligible. The present work-horse machines in this category are the CDC 7600 and IBM 360/195, and over fifty of the former machines have been sold. The types of installation that form the market for such machines are not only the major scientific research laboratories in the major countries-such as Los Alamos, CERN, Rutherford laboratory-but also major universities or university networks. It is also true that, as with sports cars, innovations made to satisfy the top of the market today often become the standard for the medium-scale computer of tomorrow. Hence there is considerable interest in examining present developments in this area. (0 refs).

  6. Supercomputer debugging workshop '92

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.S.

    1993-02-01

    This report contains papers or viewgraphs on the following topics: The ABCs of Debugging in the 1990s; Cray Computer Corporation; Thinking Machines Corporation; Cray Research, Incorporated; Sun Microsystems, Inc; Kendall Square Research; The Effects of Register Allocation and Instruction Scheduling on Symbolic Debugging; Debugging Optimized Code: Currency Determination with Data Flow; A Debugging Tool for Parallel and Distributed Programs; Analyzing Traces of Parallel Programs Containing Semaphore Synchronization; Compile-time Support for Efficient Data Race Detection in Shared-Memory Parallel Programs; Direct Manipulation Techniques for Parallel Debuggers; Transparent Observation of XENOOPS Objects; A Parallel Software Monitor for Debugging and Performance Tools on Distributed Memory Multicomputers; Profiling Performance of Inter-Processor Communications in an iWarp Torus; The Application of Code Instrumentation Technology in the Los Alamos Debugger; and CXdb: The Road to Remote Debugging.

  7. The GF11 supercomputer

    Science.gov (United States)

    Beetem, J.; Denneau, M.; Weingarten, D.

    1987-01-01

    GF11 is a parallel computer currently under construction at the IBM Yorktown Research Center. The machine incorporates 576 floating-point processors arranged in a modified SIMD architecture. Each has space for 2 Mbytes of memory and is capable of 20 Mflops, giving the total machine a peak of 1.125 Gbytes of memory and 11.52 Gflops. The floating-point processors are interconnected by a dynamically reconfigurable non-blocking switching network. At each machine cycle any of 1024 pre-selected permutations of data can be realized among the processors. The main intended application of GF11 is a class of calculations arising from quantum chromodynamics.

  8. Associative Memories for Supercomputers

    Science.gov (United States)

    1992-12-01

    The Fourier Transform (FFT) is computed. The real part is extracted and a bias equal to its minimum is added to it in order to make all the values positive. (The remainder of the record is a figure caption: Figure 12, photograph of the reconstruction obtained with the plate corresponding to the binary phase.)

  9. Power-constrained supercomputing

    Science.gov (United States)

    Bailey, Peter E.

    As we approach exascale systems, power is turning from an optimization goal to a critical operating constraint. With power bounds imposed by both stakeholders and the limitations of existing infrastructure, achieving practical exascale computing will therefore rely on optimizing performance subject to a power constraint. However, this requirement should not add to the burden of application developers; optimizing the runtime environment given restricted power will primarily be the job of high-performance system software. In this dissertation, we explore this area and develop new techniques that extract maximum performance subject to a particular power constraint. These techniques include a method to find theoretical optimal performance, a runtime system that shifts power in real time to improve performance, and a node-level prediction model for selecting power-efficient operating points. We use a linear programming (LP) formulation to optimize application schedules under various power constraints, where a schedule consists of a DVFS state and number of OpenMP threads for each section of computation between consecutive message passing events. We also provide a more flexible mixed integer-linear (ILP) formulation and show that the resulting schedules closely match schedules from the LP formulation. Across four applications, we use our LP-derived upper bounds to show that current approaches trail optimal, power-constrained performance by up to 41%. This demonstrates limitations of current systems, and our LP formulation provides future optimization approaches with a quantitative optimization target. We also introduce Conductor, a run-time system that intelligently distributes available power to nodes and cores to improve performance. The key techniques used are configuration space exploration and adaptive power balancing. Configuration exploration dynamically selects the optimal thread concurrency level and DVFS state subject to a hardware-enforced power bound. Adaptive power balancing efficiently predicts where critical paths are likely to occur and distributes power to those paths. Greater power, in turn, allows increased thread concurrency levels, CPU frequency/voltage, or both. We describe these techniques in detail and show that, compared to the state-of-the-art technique of using statically predetermined, per-node power caps, Conductor leads to a best-case performance improvement of up to 30%, and an average improvement of 19.1%. At the node level, an accurate power/performance model will aid in selecting the right configuration from a large set of available configurations. We present a novel approach to generate such a model offline using kernel clustering and multivariate linear regression. Our model requires only two iterations to select a configuration, which provides a significant advantage over exhaustive search-based strategies. We apply our model to predict power and performance for different applications using arbitrary configurations, and show that our model, when used with hardware frequency-limiting in a runtime system, selects configurations with significantly higher performance at a given power limit than those chosen by frequency-limiting alone. When applied to a set of 36 computational kernels from a range of applications, our model accurately predicts power and performance; our runtime system based on the model maintains 91% of optimal performance while meeting power constraints 88% of the time. 
When the runtime system violates a power constraint, it exceeds the constraint by only 6% in the average case, while simultaneously achieving 54% more performance than an oracle. Through the combination of the above contributions, we hope to provide guidance and inspiration to research practitioners working on runtime systems for power-constrained environments. We also hope this dissertation will draw attention to the need for software and runtime-controlled power management under power constraints at various levels, from the processor level to the cluster level.
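
    A highly simplified stand-in for such an LP schedule formulation is sketched below: it chooses, for each code section, time-sharing fractions over (DVFS state, thread count) configurations that minimize runtime subject to a per-section average power cap. The runtimes, power numbers, and linearized constraint are illustrative assumptions, not the dissertation's exact model:

        # Toy linear program: for each code section, choose time-sharing fractions
        # x[s, c] over (DVFS state, thread count) configurations c that minimize
        # runtime while keeping each section's average power under a cap.  The
        # runtimes t, powers p, and the linearized constraint are illustrative
        # assumptions, not the dissertation's formulation.
        import numpy as np
        from scipy.optimize import linprog

        t = np.array([[4.0, 3.0, 2.5],       # runtime of section s under config c (s)
                      [6.0, 4.5, 4.0]])
        p = np.array([80.0, 110.0, 140.0])   # node power of each config (W)
        p_cap = 100.0                        # power bound (W)
        S, C = t.shape

        cost = t.flatten()                   # total time = sum_s sum_c x[s,c] * t[s,c]

        A_eq = np.zeros((S, S * C)); b_eq = np.ones(S)   # fractions of each section sum to 1
        for s in range(S):
            A_eq[s, s * C:(s + 1) * C] = 1.0

        # Average power cap, linearized: sum_c x[s,c] * t[s,c] * (p[c] - p_cap) <= 0
        A_ub = np.zeros((S, S * C)); b_ub = np.zeros(S)
        for s in range(S):
            A_ub[s, s * C:(s + 1) * C] = t[s] * (p - p_cap)

        res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0.0, 1.0)] * (S * C))
        print(res.x.reshape(S, C).round(3), res.fun)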

  10. Performance Characteristics of Hybrid MPI/OpenMP Scientific Applications on a Large-Scale Multithreaded BlueGene/Q Supercomputer

    KAUST Repository

    Wu, Xingfu

    2013-07-01

    In this paper, we investigate the performance characteristics of five hybrid MPI/OpenMP scientific applications (two NAS Parallel Benchmarks Multi-Zone codes, SP-MZ and BT-MZ; an earthquake simulation, PEQdyna; an aerospace application, PMLB; and a 3D particle-in-cell application, GTC) on a large-scale multithreaded Blue Gene/Q supercomputer at Argonne National Laboratory, and quantify the performance gap resulting from using different numbers of threads per node. We use the performance tools and MPI profile and trace libraries available on the supercomputer to analyze and compare the performance of these hybrid scientific applications as the number of OpenMP threads per node increases, and find that beyond a certain point increasing the number of threads saturates or worsens the performance of these hybrid applications. For the strong-scaling hybrid applications SP-MZ, BT-MZ, PEQdyna and PMLB, using 32 threads per node results in much better application efficiency than using 64 threads per node; as the number of threads per node increases, the FPU (Floating Point Unit) percentage decreases, while the MPI percentage (except for PMLB) and the IPC (instructions per cycle) per core (except for BT-MZ) increase. For the weak-scaling hybrid application GTC, the performance trend (relative speedup) with increasing numbers of threads per node is very similar no matter how many nodes (32, 128, 512) are used. © 2013 IEEE.

  11. MPI/OpenMP Hybrid Parallel Algorithm of Resolution of Identity Second-Order Møller-Plesset Perturbation Calculation for Massively Parallel Multicore Supercomputers.

    Science.gov (United States)

    Katouda, Michio; Nakajima, Takahito

    2013-12-10

    A new algorithm for massively parallel calculations of electron correlation energy of large molecules based on the resolution of identity second-order Møller-Plesset perturbation (RI-MP2) technique is developed and implemented in the quantum chemistry software NTChem. In this algorithm, a Message Passing Interface (MPI) and Open Multi-Processing (OpenMP) hybrid parallel programming model is applied to attain efficient parallel performance on massively parallel supercomputers. An in-core storage scheme of intermediate data of three-center electron repulsion integrals utilizing the distributed memory is developed to eliminate input/output (I/O) overhead. The parallel performance of the algorithm is tested on massively parallel supercomputers such as the K computer (using up to 45 992 central processing unit (CPU) cores) and a commodity Intel Xeon cluster (using up to 8192 CPU cores). The parallel RI-MP2/cc-pVTZ calculation of two-layer nanographene sheets (C150H30)2 (number of atomic orbitals is 9640) is performed using 8991 nodes and 71 288 CPU cores of the K computer.

  12. Reliability and validity of the Chinese version of Weiss Functional Impairment Scale-Parent form for school age children%Weiss功能缺陷量表父母版的信效度

    Institute of Scientific and Technical Information of China (English)

    钱英; 杜巧新; 曲姗; 王玉凤

    2011-01-01

    Objective: To test the reliability and validity of the Chinese version of the Weiss Functional Impairment Scale-Parent form (WFIRS-P) in China. Methods: A total of 123 outpatients who met the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) diagnostic criteria for attention deficit hyperactivity disorder (ADHD) and 240 normal children were recruited in this study. The parents of the subjects completed the WFIRS-P. At the same time, the parents of 39 outpatients completed the ADHD Rating Scale-IV (ADHD RS-IV) and the Behavior Rating Inventory of Executive Function (BRIEF), and the doctors who made the diagnoses for these 39 outpatients filled in the Global Assessment of Functioning (GAF) to test the criterion validity. One or two weeks later, the parents of 29 outpatients were asked to complete the WFIRS-P again to assess test-retest reliability. Results: The test-retest reliability coefficients were 0.61-0.87 and the Cronbach α coefficients were 0.70-0.92. Subscale scores of the WFIRS-P were significantly correlated with scores on the ADHD RS-IV (r = 0.32-0.50, P < 0.05), the BRIEF (r = 0.23-0.71, P < 0.05) and the GAF (r = -0.29 to -0.59, P < 0.05). Confirmatory factor analysis in LISREL showed that the 5-subscale model of the BRIEF was reasonable (CFI = 0.97 for the control group, 0.89 for the ADHD group; RMSEA < 0.08). Compared with the control group, the ADHD group scored significantly higher on all subscales of the WFIRS-P (Ps < 0.01). Conclusion: The Chinese version of the Weiss Functional Impairment Scale-Parent form (WFIRS-P) has adequate reliability and validity.%目的:评价Weiss功能缺陷量表父母版(WFIRS-P)中文版的信效度.方法:选取符合美国精神障碍诊断与统计手册第四版(DSM-Ⅳ)注意缺陷多动障碍(ADHD)诊断标准的门诊患者123名及正常儿童240名,同时请病例组中39名儿童父母填写执行功能行为评定量表父母版(BRIEF)和ADHD评定量表-Ⅳ(ADHD RS-Ⅳ),并请进行诊断的医师对这39名患
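
    For reference, the two reliability statistics reported above (internal consistency and test-retest correlation) can be computed as in the following sketch, using randomly generated illustrative data rather than the study's data:

        # Internal consistency (Cronbach's alpha) and test-retest reliability
        # (Pearson r) on randomly generated illustrative data, not the study's.
        import numpy as np

        def cronbach_alpha(items):
            """items: (n_subjects, k_items) matrix of item scores."""
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1.0 - item_vars / total_var)

        rng = np.random.default_rng(0)
        trait = rng.normal(0.0, 1.0, size=(30, 1))                 # latent severity
        scores_t1 = trait + rng.normal(0.0, 1.0, size=(30, 10))    # 30 subjects, 10 items
        scores_t2 = scores_t1 + rng.normal(0.0, 0.5, size=scores_t1.shape)  # retest

        alpha = cronbach_alpha(scores_t1)
        retest_r = np.corrcoef(scores_t1.sum(axis=1), scores_t2.sum(axis=1))[0, 1]
        print(f"Cronbach alpha = {alpha:.2f}, test-retest r = {retest_r:.2f}")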

  13. Estrategia metodológica para mejorar el aprendizaje de los estudiantes de 1º grado de secundaria en el área de Historia, Geografía y Economía; I.E. "Karl Weiss", Chiclayo, año 2013

    OpenAIRE

    Serquen Farro, Felipe.

    2014-01-01

    Our research work has the general objective of designing methodological strategies, grounded in the theories of David Ausubel, Robert Gagné and Álvarez de Zayas, to improve the learning of first-grade secondary students in the area of History, Geography and Economics at the I.E. "Karl Weiss", Chiclayo. Methodologically, we applied observation guides, surveys, in-depth interviews and the collection of testimonies. After finishing this part, we proceeded to exa...

  14. Performance Analysis and Scaling Behavior of the Terrestrial Systems Modeling Platform TerrSysMP in Large-Scale Supercomputing Environments

    Science.gov (United States)

    Kollet, S. J.; Goergen, K.; Gasper, F.; Shresta, P.; Sulis, M.; Rihani, J.; Simmer, C.; Vereecken, H.

    2013-12-01

    In studies of the terrestrial hydrologic, energy and biogeochemical cycles, integrated multi-physics simulation platforms take a central role in characterizing non-linear interactions, variances and uncertainties of system states and fluxes in reciprocity with observations. Recently developed integrated simulation platforms attempt to honor the complexity of the terrestrial system across multiple time and space scales from the deeper subsurface including groundwater dynamics into the atmosphere. Technically, this requires the coupling of atmospheric, land surface, and subsurface-surface flow models in supercomputing environments, while ensuring a high degree of efficiency in the utilization of, e.g., standard Linux clusters and massively parallel resources. A systematic performance analysis, including profiling and tracing, is crucial for understanding the runtime behavior of such an application, identifying optimum model settings, and pinpointing potential parallel deficiencies. On sophisticated leadership-class supercomputers, such as the 28-rack 5.9 petaFLOP IBM Blue Gene/Q 'JUQUEEN' of the Jülich Supercomputing Centre (JSC), this is a challenging task, but all the more important when complex coupled component models are to be analysed. Here we present our experience with coupling, application tuning (e.g., a 5-fold speedup through compiler optimizations), parallel scaling and performance monitoring of the parallel Terrestrial Systems Modeling Platform TerrSysMP. The modeling platform consists of the weather prediction system COSMO of the German Weather Service; the Community Land Model, CLM of NCAR; and the variably saturated surface-subsurface flow code ParFlow. The model system relies on the Multiple Program Multiple Data (MPMD) execution model where the external Ocean-Atmosphere-Sea-Ice-Soil coupler (OASIS3) links the component models. TerrSysMP has been instrumented with the performance analysis tool Scalasca and analyzed

  15. Use of QUADRICS supercomputer as embedded simulator in emergency management systems; Utilizzo del calcolatore QUADRICS come simulatore in linea in un sistema di gestione delle emergenze

    Energy Technology Data Exchange (ETDEWEB)

    Bove, R.; Di Costanzo, G.; Ziparo, A. [ENEA, Centro Ricerche Casaccia, Rome (Italy). Dip. Energia

    1996-07-01

    The experience gained in implementing MRBT, an atmospheric dispersion model for short-duration releases, is reported. The model was implemented on a QUADRICS-Q1 supercomputer. First, a description of the MRBT model is given: it is an analytical model for studying the spreading of light gases released into the atmosphere by accidental releases. The solution of the diffusion equation is Gaussian-like and yields the concentration of the released pollutant as a function of space and time. The QUADRICS architecture is then introduced and the implementation of the model is described. Finally, the integration of the QUADRICS-based model as an embedded simulator in an emergency management system is considered.
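
    For orientation, a textbook Gaussian puff solution of the advection-diffusion equation for an instantaneous point release (with placeholder dispersion coefficients, a uniform wind, and ground reflection; not necessarily the MRBT parameterization) can be written as:

        # Textbook Gaussian puff concentration for an instantaneous point release
        # advected by a uniform wind u along x, with ground reflection.  The
        # linear-in-time dispersion coefficients are placeholders, not MRBT's.
        import numpy as np

        def puff_concentration(x, y, z, t, Q=1.0, u=5.0, H=10.0,
                               sx=lambda t: 0.3 * t, sy=lambda t: 0.3 * t,
                               sz=lambda t: 0.2 * t):
            """C(x, y, z, t) for a release of mass Q at the origin at t = 0."""
            sxt, syt, szt = sx(t), sy(t), sz(t)
            norm = Q / ((2.0 * np.pi) ** 1.5 * sxt * syt * szt)
            along = np.exp(-((x - u * t) ** 2) / (2.0 * sxt ** 2))
            cross = np.exp(-(y ** 2) / (2.0 * syt ** 2))
            vert = (np.exp(-((z - H) ** 2) / (2.0 * szt ** 2)) +
                    np.exp(-((z + H) ** 2) / (2.0 * szt ** 2)))   # ground reflection
            return norm * along * cross * vert

        print(puff_concentration(x=50.0, y=0.0, z=2.0, t=10.0))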

  16. Sandia's network for Supercomputing '94: Linking the Los Alamos, Lawrence Livermore, and Sandia National Laboratories using switched multimegabit data service

    Energy Technology Data Exchange (ETDEWEB)

    Vahle, M.O.; Gossage, S.A.; Brenkosh, J.P. [Sandia National Labs., Albuquerque, NM (United States). Advanced Networking Integration Dept.

    1995-01-01

    Supercomputing '94, a high-performance computing and communications conference, was held November 14th through 18th, 1994 in Washington DC. For the past four years, Sandia National Laboratories has used this conference to showcase and focus its communications and networking endeavors. At the 1994 conference, Sandia built a Switched Multimegabit Data Service (SMDS) network running at 44.736 megabits per second linking its private SMDS network between its facilities in Albuquerque, New Mexico and Livermore, California to the convention center in Washington, D.C. For the show, the network was also extended from Sandia, New Mexico to Los Alamos National Laboratory and from Sandia, California to Lawrence Livermore National Laboratory. This paper documents and describes this network and how it was used at the conference.

  17. A Reliability Calculation Method for Web Service Composition Using Fuzzy Reasoning Colored Petri Nets and Its Application on Supercomputing Cloud Platform

    Directory of Open Access Journals (Sweden)

    Ziyun Deng

    2016-09-01

    In order to develop a Supercomputing Cloud Platform (SCP) prototype system using Service-Oriented Architecture (SOA) and Petri nets, we researched some technologies for Web service composition. Specifically, in this paper, we propose a reliability calculation method for Web service compositions, which uses Fuzzy Reasoning Colored Petri Net (FRCPN) to verify the Web service compositions. We put forward a definition of semantic threshold similarity for Web services and a formal definition of FRCPN. We analyzed five kinds of production rules in FRCPN, and applied our method to the SCP prototype. We obtained the reliability value of the end Web service as an indicator of the overall reliability of the FRCPN. The method can test the activity of FRCPN. Experimental results show that the reliability of the Web service composition has a correlation with the number of Web services and the range of reliability transition values.

  18. Enabling Loosely-Coupled Serial Job Execution on the IBM BlueGene/P Supercomputer and the SiCortex SC5832

    CERN Document Server

    Raicu, Ioan; Wilde, Mike; Foster, Ian

    2008-01-01

    Our work addresses the enabling of the execution of highly parallel computations composed of loosely coupled serial jobs with no modifications to the respective applications, on large-scale systems. This approach allows new-and potentially far larger-classes of application to leverage systems such as the IBM Blue Gene/P supercomputer and similar emerging petascale architectures. We present here the challenges of I/O performance encountered in making this model practical, and show results using both micro-benchmarks and real applications on two large-scale systems, the BG/P and the SiCortex SC5832. Our preliminary benchmarks show that we can scale to 4096 processors on the Blue Gene/P and 5832 processors on the SiCortex with high efficiency, and can achieve thousands of tasks/sec sustained execution rates for parallel workloads of ordinary serial applications. We measured applications from two domains, economic energy modeling and molecular dynamics.

  19. Seismic Sensors to Supercomputers: Internet Mapping and Computational Tools for Teaching and Learning about Earthquakes and the Structure of the Earth from Seismology

    Science.gov (United States)

    Meertens, C. M.; Seber, D.; Hamburger, M.

    2004-12-01

    The Internet has become an integral resource in the classrooms and homes of teachers and students. Widespread Web-access to seismic data and analysis tools enhances opportunities for teaching and learning about earthquakes and the structure of the earth from seismic tomography. We will present an overview and demonstration of the UNAVCO Voyager Java- and Javascript-based mapping tools (jules.unavco.org) and the Cornell University/San Diego Supercomputer Center (www.discoverourearth.org) Java-based data analysis and mapping tools. These map tools, datasets, and related educational websites have been developed and tested by collaborative teams of scientific programmers, research scientists, and educators. Dual-use by research and education communities ensures persistence of the tools and data, motivates on-going development, and encourages fresh content. With these tools are curricular materials and on-going evaluation processes that are essential for an effective application in the classroom. The map tools provide not only seismological data and tomographic models of the earth's interior, but also a wealth of associated map data such as topography, gravity, sea-floor age, plate tectonic motions and strain rates determined from GPS geodesy, seismic hazard maps, stress, and a host of geographical data. These additional datasets help to provide context and enable comparisons leading to an integrated view of the planet and the on-going processes that shape it. Emerging Cyberinfrastructure projects such as the NSF-funded GEON Information Technology Research project (www.geongrid.org) are developing grid/web services, advanced visualization software, distributed databases and data sharing methods, concept-based search mechanisms, and grid-computing resources for earth science and education. These developments in infrastructure seek to extend the access to data and to complex modeling tools from the hands of a few researchers to a much broader set of users. The GEON

  20. Research of Customer Segmentation and Differentiated Services in Supercomputing Center%超级计算中心客户细分及差异化服务策略研究

    Institute of Scientific and Technical Information of China (English)

    赵芸卿

    2013-01-01

    This paper applies the K-means method to analyze the data on customers' supercomputer rental in the Supercomputing Center, obtaining a number of customer segments, and puts forward differentiated service strategies accordingly. As a result, we can allocate our supercomputer resources according to these segments, making the services more effective and convenient.%本文以中国科学院计算机网络信息中心超级计算中心(以下简称超级计算中心)客户服务工作为研究对象,运用 K-means 算法对客户进行细分,进而对每类客户群提出相应的差异化服务策略。实施差异化服务策略可以更好地分配资源、提供更有效的客户服务。
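
    A minimal version of the K-means segmentation described above, using hypothetical usage features (core-hours, jobs submitted, average cores per job) rather than the center's actual data, might look like:

        # K-means segmentation on hypothetical usage features; not the center's
        # actual customer data.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(1)
        customers = np.column_stack([
            rng.lognormal(8, 1, 200),    # core-hours consumed
            rng.poisson(40, 200),        # jobs submitted
            rng.lognormal(3, 1, 200),    # average cores per job
        ]).astype(float)

        features = StandardScaler().fit_transform(customers)
        labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)

        for k in range(4):               # per-segment mean usage, e.g. to tailor services
            print(k, customers[labels == k].mean(axis=0).round(1))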

  1. Implementation and scaling of the fully coupled Terrestrial Systems Modeling Platform (TerrSysMP v1.0) in a massively parallel supercomputing environment - a case study on JUQUEEN (IBM Blue Gene/Q)

    Science.gov (United States)

    Gasper, F.; Goergen, K.; Shrestha, P.; Sulis, M.; Rihani, J.; Geimer, M.; Kollet, S.

    2014-10-01

    Continental-scale hyper-resolution simulations constitute a grand challenge in characterizing nonlinear feedbacks of states and fluxes of the coupled water, energy, and biogeochemical cycles of terrestrial systems. Tackling this challenge requires advanced coupling and supercomputing technologies for earth system models that are discussed in this study, utilizing the example of the implementation of the newly developed Terrestrial Systems Modeling Platform (TerrSysMP v1.0) on JUQUEEN (IBM Blue Gene/Q) of the Jülich Supercomputing Centre, Germany. The applied coupling strategies rely on the Multiple Program Multiple Data (MPMD) paradigm using the OASIS suite of external couplers, and require memory and load balancing considerations in the exchange of the coupling fields between different component models and the allocation of computational resources, respectively. Using the advanced profiling and tracing tool Scalasca to determine an optimum load balancing leads to a 19% speedup. In massively parallel supercomputer environments, the coupler OASIS-MCT is recommended, which resolves memory limitations that may be significant in case of very large computational domains and exchange fields as they occur in these specific test cases and in many applications in terrestrial research. However, model I/O and initialization in the petascale range still require major attention, as they constitute true big data challenges in light of future exascale computing resources. Based on a factor-two speedup due to compiler optimizations, a refactored coupling interface using OASIS-MCT and an optimum load balancing, the problem size in a weak scaling study can be increased by a factor of 64 from 512 to 32 768 processes while maintaining parallel efficiencies above 80% for the component models.
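
    The weak-scaling parallel efficiency quoted above is conventionally computed from runtimes at different process counts, as in this small sketch with made-up timings:

        # Weak-scaling parallel efficiency E(N) = T(N0) / T(N) for fixed work per
        # process; the runtimes are made-up placeholders, not the paper's numbers.
        timings = {512: 100.0, 4096: 108.0, 32768: 121.0}   # processes -> runtime (s)
        base = min(timings)                                  # smallest process count
        for n in sorted(timings):
            print(f"{n:6d} processes: weak-scaling efficiency = {timings[base] / timings[n]:.2f}")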

  2. Intel 80860 or I860: The million transistor RISC microprocessor chip with supercomputer capability. April 1988-September 1989 (Citations from the Computer data base). Report for April 1988-September 1989

    Energy Technology Data Exchange (ETDEWEB)

    1989-10-01

    This bibliography contains citations concerning Intel's new microprocessor, which has more than a million transistors and is capable of performing up to 80 million floating-point operations per second (80 Mflops). The I860 (originally code named the N-10 during development) is to be used in workstation-type applications. It will be suited for problems such as fluid dynamics, molecular modeling, structural analysis, and economic modeling which require supercomputer number crunching and advanced graphics. (Contains 64 citations fully indexed and including a title list.)

  3. Collaborating CPU and GPU for large-scale high-order CFD simulations with complex grids on the TianHe-1A supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Chuanfu, E-mail: xuchuanfu@nudt.edu.cn [College of Computer Science, National University of Defense Technology, Changsha 410073 (China); Deng, Xiaogang; Zhang, Lilun [College of Computer Science, National University of Defense Technology, Changsha 410073 (China); Fang, Jianbin [Parallel and Distributed Systems Group, Delft University of Technology, Delft 2628CD (Netherlands); Wang, Guangxue; Jiang, Yi [State Key Laboratory of Aerodynamics, P.O. Box 211, Mianyang 621000 (China); Cao, Wei; Che, Yonggang; Wang, Yongxian; Wang, Zhenghua; Liu, Wei; Cheng, Xinghua [College of Computer Science, National University of Defense Technology, Changsha 410073 (China)

    2014-12-01

    Programming and optimizing complex, real-world CFD codes on current many-core accelerated HPC systems is very challenging, especially when collaborating CPUs and accelerators to fully tap the potential of heterogeneous systems. In this paper, with a tri-level hybrid and heterogeneous programming model using MPI + OpenMP + CUDA, we port and optimize our high-order multi-block structured CFD software HOSTA on the GPU-accelerated TianHe-1A supercomputer. HOSTA adopts two self-developed high-order compact finite difference schemes, WCNS and HDCS, that can simulate flows with complex geometries. We present a dual-level parallelization scheme for efficient multi-block computation on GPUs and perform particular kernel optimizations for high-order CFD schemes. The GPU-only approach achieves a speedup of about 1.3 when comparing one Tesla M2050 GPU with two Xeon X5670 CPUs. To achieve a greater speedup, we collaborate CPU and GPU for HOSTA instead of using a naive GPU-only approach. We present a novel scheme to balance the loads between the store-poor GPU and the store-rich CPU. Taking CPU and GPU load balance into account, we improve the maximum simulation problem size per TianHe-1A node for HOSTA by 2.3×; meanwhile, the collaborative approach can improve the performance by around 45% compared to the GPU-only approach. Further, to scale HOSTA on TianHe-1A, we propose a gather/scatter optimization to minimize PCI-e data transfer times for ghost and singularity data of 3D grid blocks, and overlap the collaborative computation and communication as far as possible using some advanced CUDA and MPI features. Scalability tests show that HOSTA can achieve a parallel efficiency of above 60% on 1024 TianHe-1A nodes. With our method, we have successfully simulated an EET high-lift airfoil configuration containing 800M cells and China's large civil airplane configuration containing 150M cells. To the best of our knowledge, these are the largest-scale CPU–GPU collaborative simulations

  4. Advanced Architectures for Astrophysical Supercomputing

    CERN Document Server

    Barsdell, Benjamin R; Fluke, Christopher J

    2010-01-01

    Astronomers have come to rely on the increasing performance of computers to reduce, analyze, simulate and visualize their data. In this environment, faster computation can mean more science outcomes or the opening up of new parameter spaces for investigation. If we are to avoid major issues when implementing codes on advanced architectures, it is important that we have a solid understanding of our algorithms. A recent addition to the high-performance computing scene that highlights this point is the graphics processing unit (GPU). The hardware originally designed for speeding up graphics rendering in video games is now achieving speed-ups of $O(100\times)$ in general-purpose computation -- performance that cannot be ignored. We are using a generalized approach, based on the analysis of astronomy algorithms, to identify the optimal problem-types and techniques for taking advantage of both current GPU hardware and future developments in computing architectures.

  5. Supercomputer debugging workshop '92

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.S.

    1993-01-01

    This report contains papers or viewgraphs on the following topics: The ABCs of Debugging in the 1990s; Cray Computer Corporation; Thinking Machines Corporation; Cray Research, Incorporated; Sun Microsystems, Inc; Kendall Square Research; The Effects of Register Allocation and Instruction Scheduling on Symbolic Debugging; Debugging Optimized Code: Currency Determination with Data Flow; A Debugging Tool for Parallel and Distributed Programs; Analyzing Traces of Parallel Programs Containing Semaphore Synchronization; Compile-time Support for Efficient Data Race Detection in Shared-Memory Parallel Programs; Direct Manipulation Techniques for Parallel Debuggers; Transparent Observation of XENOOPS Objects; A Parallel Software Monitor for Debugging and Performance Tools on Distributed Memory Multicomputers; Profiling Performance of Inter-Processor Communications in an iWarp Torus; The Application of Code Instrumentation Technology in the Los Alamos Debugger; and CXdb: The Road to Remote Debugging.

  6. Supercomputing "Grid" passes latest test

    CERN Multimedia

    Dumé, Belle

    2005-01-01

    When the Large Hadron Collider (LHC) comes online at CERN in 2007, it will produce more data than any other experiment in the history of physics. Particle physicists have now passed another milestone in their preparations for the LHC by sustaining a continuous flow of 600 megabytes of data per second (MB/s) for 10 days from the Geneva laboratory to seven sites in Europe and the US (1/2 page)

  7. Teacher enhancement at Supercomputing '96

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-02-13

    The SC'96 Education Program provided a three-day professional development experience for middle and high school science, mathematics, and computer technology teachers. The program theme was Computers at Work in the Classroom, and a majority of the sessions were presented by classroom teachers who have had several years' experience in using these technologies with their students. The teachers who attended the program were introduced to classroom applications of computing and networking technologies and were provided to the greatest extent possible with lesson plans, sample problems, and other resources that could immediately be used in their own classrooms. The attached At a Glance Schedule and Session Abstracts describe in detail the three-day SC'96 Education Program. Also included are the SC'96 Education Program evaluation report and the financial report.

  8. Implementation and scaling of the fully coupled Terrestrial Systems Modeling Platform (TerrSysMP) in a massively parallel supercomputing environment – a case study on JUQUEEN (IBM Blue Gene/Q)

    Directory of Open Access Journals (Sweden)

    F. Gasper

    2014-06-01

    Continental-scale hyper-resolution simulations constitute a grand challenge in characterizing non-linear feedbacks of states and fluxes of the coupled water, energy, and biogeochemical cycles of terrestrial systems. Tackling this challenge requires advanced coupling and supercomputing technologies for earth system models that are discussed in this study, utilizing the example of the implementation of the newly developed Terrestrial Systems Modeling Platform (TerrSysMP) on JUQUEEN (IBM Blue Gene/Q) of the Jülich Supercomputing Centre, Germany. The applied coupling strategies rely on the Multiple Program Multiple Data (MPMD) paradigm and require memory and load balancing considerations in the exchange of the coupling fields between different component models and allocation of computational resources, respectively. These considerations can be reached with advanced profiling and tracing tools leading to the efficient use of massively parallel computing environments, which is then mainly determined by the parallel performance of individual component models. However, the problem of model I/O and initialization in the peta-scale range requires major attention, because this constitutes a true big data challenge in the perspective of future exa-scale capabilities, which is unsolved.

  9. Implementation and scaling of the fully coupled Terrestrial Systems Modeling Platform (TerrSysMP) in a massively parallel supercomputing environment - a case study on JUQUEEN (IBM Blue Gene/Q)

    Science.gov (United States)

    Gasper, F.; Goergen, K.; Kollet, S.; Shrestha, P.; Sulis, M.; Rihani, J.; Geimer, M.

    2014-06-01

    Continental-scale hyper-resolution simulations constitute a grand challenge in characterizing non-linear feedbacks of states and fluxes of the coupled water, energy, and biogeochemical cycles of terrestrial systems. Tackling this challenge requires advanced coupling and supercomputing technologies for earth system models that are discussed in this study, utilizing the example of the implementation of the newly developed Terrestrial Systems Modeling Platform (TerrSysMP) on JUQUEEN (IBM Blue Gene/Q) of the Jülich Supercomputing Centre, Germany. The applied coupling strategies rely on the Multiple Program Multiple Data (MPMD) paradigm and require memory and load balancing considerations in the exchange of the coupling fields between different component models and allocation of computational resources, respectively. These considerations can be reached with advanced profiling and tracing tools leading to the efficient use of massively parallel computing environments, which is then mainly determined by the parallel performance of individual component models. However, the problem of model I/O and initialization in the peta-scale range requires major attention, because this constitutes a true big data challenge in the perspective of future exa-scale capabilities, which is unsolved.

  10. Report for CS 698-95 "Directed Research - Performance Modeling": Using Queueing Network Modeling to Analyze the University of San Francisco Keck Cluster Supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, M L

    2005-09-28

    In today's world, the need for computing power is becoming more pressing daily. Our need to process, analyze, and store data is quickly exceeding the capabilities of small self-contained serial machines, such as the modern desktop PC. Initially, this gap was filled by the creation of supercomputers: large-scale self-contained parallel machines. However, current markets, as well as the costs to develop and maintain such machines, are quickly making such machines a rarity, used only in highly specialized environments. A third type of machine exists, however. This relatively new type of machine, known as a cluster, is built from common, and often inexpensive, commodity self-contained desktop machines. But how well do these clustered machines work? There have been many attempts to quantify the performance of clustered computers. One approach, Queueing Network Modeling (QNM), appears to be a potentially useful and rarely tried method of modeling such systems. QNM, which has its beginnings in the modeling of traffic patterns, has expanded, and is now used to model everything from CPU and disk services, to computer systems, to service rates in store checkout lines. This history of successful usage, as well as the correspondence of QNM components to commodity clusters, suggests that QNM can be a useful tool for both the cluster designer, interested in the best value for the cost, and the user of existing machines, interested in performance rates and time-to-solution. So, what is QNM? Queueing Network Modeling is an approach to computer system modeling where the computer is represented as a network of queues and evaluated analytically. How does this correspond to clusters? There is a neat one-to-one relationship between the components of a QNM model and a cluster. For example: A cluster is made from a combination of computational nodes and network switches. Both of these fit nicely with the QNM descriptions of service centers (delay, queueing, and load
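
    A concrete example of the QNM approach described here is exact mean-value analysis (MVA) for a closed queueing network; the service demands below are illustrative placeholders for a hypothetical cluster node, not measured Keck cluster values:

        # Exact mean-value analysis (MVA) for a closed queueing network of
        # queueing centers plus an optional think-time (delay) term Z.  The
        # service demands are illustrative, not measured Keck cluster values.
        def mva(demands, n_customers, think_time=0.0):
            """demands: service demand D_k per center; returns (throughput, response)."""
            q = [0.0] * len(demands)                  # mean queue length at each center
            for n in range(1, n_customers + 1):
                r = [d * (1.0 + qk) for d, qk in zip(demands, q)]  # residence times
                resp = sum(r)
                x = n / (resp + think_time)           # system throughput at population n
                q = [x * rk for rk in r]
            return x, resp

        # hypothetical per-job demands (s): CPU, disk, interconnect
        throughput, response = mva([0.40, 0.12, 0.05], n_customers=16, think_time=1.0)
        print(f"throughput = {throughput:.2f} jobs/s, response time = {response:.2f} s")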

  11. The Tallinntellect / Toomas Hendrik Ilves ; intervjueerinud Michael Weiss

    Index Scriptorium Estoniae

    Ilves, Toomas Hendrik, 1953-

    2013-01-01

    On Russian-US political relations, Estonia's austerity policy, the US approach to totalitarian regimes, the president's favourite writer Vladimir Nabokov, and current politicians' former membership in the Communist Party

  12. Paul Weiss and the genesis of canonical quantization

    Science.gov (United States)

    Rickles, Dean; Blum, Alexander

    2015-12-01

    This paper describes the life and work of a figure who, we argue, was of primary importance during the early years of field quantisation and (albeit more indirectly) quantum gravity. A student of Dirac and Born, he was interned in Canada during the second world war as an enemy alien and after his release never seemed to regain a good foothold in physics, identifying thereafter as a mathematician. He developed a general method of quantizing (linear and non-linear) field theories based on the parameters labelling an arbitrary hypersurface. This method (the `parameter formalism' often attributed to Dirac), though later discarded, was employed (and viewed at the time as an extremely important tool) by the leading figures associated with canonical quantum gravity: Dirac, Pirani and Schild, Bergmann, DeWitt, and others. We argue that he deserves wider recognition for this and other innovations.

  13. Weisse Blätter - Sorge um Informantenschutz

    Index Scriptorium Estoniae

    2010-01-01

    Estonia is debating the protection of journalists' sources. In protest against the planned press law, the major dailies will appear in March with blank front pages. On the conflicting positions of journalists and Minister of Justice Rein Lang

  14. The Tallinntellect / Toomas Hendrik Ilves ; intervjueerinud Michael Weiss

    Index Scriptorium Estoniae

    Ilves, Toomas Hendrik, 1953-

    2013-01-01

    On Russian-US political relations, Estonia's austerity policy, the US approach to totalitarian regimes, the president's favourite writer Vladimir Nabokov, and current politicians' former membership in the Communist Party

  15. Personal semantics: Is it distinct from episodic and semantic memory? An electrophysiological study of memory for autobiographical facts and repeated events in honor of Shlomo Bentin.

    Science.gov (United States)

    Renoult, Louis; Tanguay, Annick; Beaudry, Myriam; Tavakoli, Paniz; Rabipour, Sheida; Campbell, Kenneth; Moscovitch, Morris; Levine, Brian; Davidson, Patrick S R

    2016-03-01

    Declarative memory is thought to consist of two independent systems: episodic and semantic. Episodic memory represents personal and contextually unique events, while semantic memory represents culturally-shared, acontextual factual knowledge. Personal semantics refers to aspects of declarative memory that appear to fall somewhere in between the extremes of episodic and semantic. Examples include autobiographical knowledge and memories of repeated personal events. These two aspects of personal semantics have been studied little and rarely compared to both semantic and episodic memory. We recorded the event-related potentials (ERPs) of 27 healthy participants while they verified the veracity of sentences probing four types of questions: general (i.e., semantic) facts, autobiographical facts, repeated events, and unique (i.e., episodic) events. Behavioral results showed equivalent reaction times in all 4 conditions. True sentences were verified faster than false sentences, except for unique events for which no significant difference was observed. Electrophysiological results showed that the N400 (which is classically associated with retrieval from semantic memory) was maximal for general facts and the LPC (which is classically associated with retrieval from episodic memory) was maximal for unique events. For both ERP components, the two personal semantic conditions (i.e., autobiographical facts and repeated events) systematically differed from semantic memory. In addition, N400 amplitudes also differentiated autobiographical facts from unique events. Autobiographical facts and repeated events did not differ significantly from each other but their corresponding scalp distributions differed from those associated with general facts. Our results suggest that the neural correlates of personal semantics can be distinguished from those of semantic and episodic memory, and may provide clues as to how unique events are transformed to semantic memory.

  16. Design of charging model using supercomputing CAE cloud platform of user feedback mechanism%基于用户反馈机制的超级计算CAE云平台计费模型设计

    Institute of Scientific and Technical Information of China (English)

    马亿旿; 池鹏; 陈磊; 梁小林; 蔡立军

    2015-01-01

    Because the traditional charging model of a CAE cloud platform has many shortcomings, such as not considering user behavior and feedback, a single charging mode that cannot support differentiated services, and poor business flexibility, a plug-in charging model for the supercomputing CAE cloud platform was established and a charging algorithm based on a user feedback mechanism was put forward. The plug-in charging model takes the service as its basic unit and provides different charging schemes for a user's services in the form of plug-ins, which resolves these problems and, to some extent, strengthens the business flexibility of the supercomputing CAE cloud platform. The charging algorithm can dynamically adjust a user's charging parameters according to the user's historical behavior and feedback, and reduce service costs according to the user's activity and importance, which guarantees service quality and improves the user experience.%针对传统 CAE 云平台中计费算法未考虑用户行为与反馈等缺陷以及传统计费模型的模式单一、无法支撑差异化服务、业务灵活性差等缺点,建立一种插件式的超级计算 CAE 云平台计费模型,提出一种基于用户反馈机制的计费算法。插件式计费模型以服务为基本单位,通过插件的形式为用户的服务提供不同的计费方案,从而解决了传统计费模型的模式单一、灵活性差等缺陷,增强超级计算 CAE 云平台的业务动态性。基于用户反馈的计费算法能够根据用户的历史行为和反馈情况,动态调整用户的计费参数,实现了根据用户的活跃度和重要性来减少服务费用的目的,保证了服务质量,提升了用户体验。
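
    A toy rendering of the plug-in charging model with a feedback-driven discount is sketched below; all service names, rates, and the discount rule are hypothetical illustrations, not the platform's actual scheme:

        # Toy plug-in charging model with a user-feedback discount; the service
        # names, rates, and discount rule are hypothetical illustrations.
        from dataclasses import dataclass
        from typing import Callable, Dict

        @dataclass
        class Usage:
            core_hours: float
            storage_gb_months: float

        # Each "plug-in" is simply a pricing function registered per service type.
        PLUGINS: Dict[str, Callable[[Usage], float]] = {
            "batch_compute": lambda u: 0.04 * u.core_hours,
            "storage":       lambda u: 0.02 * u.storage_gb_months,
        }

        def feedback_discount(activity: float, importance: float) -> float:
            """Map user activity/importance scores in [0, 1] to a charge multiplier."""
            return max(0.7, 1.0 - 0.2 * activity - 0.1 * importance)

        def bill(service: str, usage: Usage, activity: float, importance: float) -> float:
            base = PLUGINS[service](usage)           # plug-in computes the base charge
            return base * feedback_discount(activity, importance)

        print(bill("batch_compute", Usage(10_000, 0), activity=0.8, importance=0.5))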

  17. "0" and "1" of Supercomputer: Design of Yinhe Building in the University of Defense Technology%超级计算机的"0"与"1"——国防科技大学银河楼设计

    Institute of Scientific and Technical Information of China (English)

    宋明星; 魏春雨; 尹佳斌

    2011-01-01

    通过对国防科技大学银河楼设计创作中规划布局、空间构成、造型、适宜生态技术处理的分析,阐释了建筑设计手法与超级巨型机研究测试中心功能间的联系与思考.规划布局考虑了主体机房与南、北楼的衔接关系,通过中庭、花园、观光电梯等空间语汇进行了空间的构成,造型手法通过简洁的立柱与玻璃的对比反映计算机语言的0与1,同时在设计中多个平台采用了植被屋顶这一适宜生态技术.%Through the analysis of the planning, spatial composition, modeling and ecological technology of the design of Yinhe Building, this article analyzes the relationship between the architectural design and the demands of the supercomputer labs. The planning focuses on the relationship between the main computer room and the north and the south buildings. The spatial composition is achieved with architectural vocabularies such as atriums, gardens and panoramic lifts, etc. Modeling of the buildings reflects the computer languages: 0 and 1 through the contrast between the column and the glass. And appropriate ecological technology of green roofs is used on several decks.

  18. Computational Fluid Dynamics: Algorithms and Supercomputers

    Science.gov (United States)

    1988-03-01

    became an issue. Hanon Potash, the SCS architect, has often claimed that the key to designing a vector machine is to "super-impose" a scalar design and...of Thompson ([123], Chapter 6.8) is given in the next chapter. 5.4 ITERATIVE ALGORITHMS In order to illustrate restructuring of iterative methods for t...and development of grid generation using Laplace’s and Poisson’s equations has been done by Thompson (1979) and his co-workers [123]. Figure 6.1: Basic

  19. The QCDOC supercomputer: hardware, software, and performance

    CERN Document Server

    Boyle, P A; Wettig, T

    2003-01-01

    An overview is given of the QCDOC architecture, a massively parallel and highly scalable computer optimized for lattice QCD using system-on-a-chip technology. The heart of a single node is the PowerPC-based QCDOC ASIC, developed in collaboration with IBM Research, with a peak speed of 1 GFlop/s. The nodes communicate via high-speed serial links in a 6-dimensional mesh with nearest-neighbor connections. We find that highly optimized four-dimensional QCD code obtains over 50% efficiency in cycle accurate simulations of QCDOC, even for problems of fixed computational difficulty run on tens of thousands of nodes. We also provide an overview of the QCDOC operating system, which manages and runs QCDOC applications on partitions of variable dimensionality. Finally, the SciDAC activity for QCDOC and the message-passing interface QMP specified as a part of the SciDAC effort are discussed for QCDOC. We explain how to make optimal use of QMP routines on QCDOC in conjunction with existing C and C++ lattice QCD codes, inc...

  20. Supercomputer modeling of volcanic eruption dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Kieffer, S.W. [Arizona State Univ., Tempe, AZ (United States); Valentine, G.A. [Los Alamos National Lab., NM (United States); Woo, Mahn-Ling [Arizona State Univ., Tempe, AZ (United States)

    1995-06-01

    Our specific goals are to: (1) provide a set of models based on well-defined assumptions about initial and boundary conditions to constrain interpretations of observations of active volcanic eruptions--including movies of flow front velocities, satellite observations of temperature in plumes vs. time, and still photographs of the dimensions of erupting plumes and flows on Earth and other planets; (2) examine the influence of subsurface conditions on exit plane conditions and plume characteristics, and compare the models of subsurface fluid flow with seismic constraints where possible; (3) relate equations-of-state for magma-gas mixtures to flow dynamics; (4) examine, in some detail, the interaction of the flowing fluid with the conduit walls and ground topography through boundary layer theory so that field observations of erosion and deposition can be related to fluid processes; (5) test the applicability of existing two-phase flow codes for problems related to the generation of volcanic long-period seismic signals; and (6) extend our understanding and simulation capability to problems associated with emplacement of fragmental ejecta from large meteorite impacts.

  1. Using Supercomputers to Probe the Early Universe

    Energy Technology Data Exchange (ETDEWEB)

    Giorgi, Elena Edi [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-17

    For decades physicists have been trying to decipher the first moments after the Big Bang. Using very large telescopes, for example, scientists scan the skies and look at how fast galaxies move. Satellites study the relic radiation left from the Big Bang, called the cosmic microwave background radiation. And finally, particle colliders, like the Large Hadron Collider at CERN, allow researchers to smash protons together and analyze the debris left behind by such collisions. Physicists at Los Alamos National Laboratory, however, are taking a different approach: they are using computers. In collaboration with colleagues at University of California San Diego, the Los Alamos researchers developed a computer code, called BURST, that can simulate conditions during the first few minutes of cosmological evolution.

  2. Supercomputing:HPCMP, Performance Measures and Opportunities

    Science.gov (United States)

    2007-11-02

    (Table excerpt: HPCMP HPC centers, systems, and processor counts.) Distributed centers include the Redstone Technical Test Center (RTTC, SGI Origin 3900) and the Simulations & Analysis Facility (SIMAF, Beowulf Linux cluster). Major Shared Resource Centers (MSRCs) such as the Army Research Laboratory (ARL) operate IBM P3, SGI Origin 3800, IBM P4, Linux Networx, Xeon, IBM Opteron, and SGI Altix cluster systems; listed system sizes range from 24 to 2,372 PEs.

  3. LAPACK: Linear algebra software for supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Bischof, C.H.

    1991-01-01

    This paper presents an overview of the LAPACK library, a portable, public-domain library to solve the most common linear algebra problems. This library provides a uniformly designed set of subroutines for solving systems of simultaneous linear equations, least-squares problems, and eigenvalue problems for dense and banded matrices. We elaborate on the design methodologies incorporated to make the LAPACK codes efficient on today's high-performance architectures. In particular, we discuss the use of block algorithms and the reliance on the Basic Linear Algebra Subprograms. We present performance results that show the suitability of the LAPACK approach for vector uniprocessors and shared-memory multiprocessors. We also address some issues that have to be dealt with in tuning LAPACK for specific architectures. Lastly, we present results that show that the LAPACK software can be adapted with little effort to distributed-memory environments, and we discuss future efforts resulting from this project. 31 refs., 10 figs., 2 tabs.
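
    As a usage illustration, the three problem classes mentioned above can be exercised through SciPy's LAPACK-backed wrappers on a small random example (the specific driver routines chosen by SciPy are an implementation detail):

        # The three LAPACK problem classes, exercised through SciPy's LAPACK-backed
        # wrappers on a small random example.
        import numpy as np
        from scipy import linalg

        rng = np.random.default_rng(42)
        A = rng.standard_normal((5, 5))
        b = rng.standard_normal(5)

        x = linalg.solve(A, b)                         # dense linear system (LU-based driver)
        ls, *_ = linalg.lstsq(rng.standard_normal((8, 3)),
                              rng.standard_normal(8))  # overdetermined least squares
        w, V = linalg.eigh(A + A.T)                    # symmetric eigenvalue problem

        print(np.allclose(A @ x, b), ls.shape, w.shape)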

  4. Foundry provides the network backbone for supercomputing

    CERN Multimedia

    2003-01-01

    Some of the results from the fourth annual High-Performance Bandwidth Challenge, held in conjunction with SC2003, the international conference on high-performance computing and networking which occurred last week in Phoenix, AZ (1/2 page).

  5. Supercomputers and biological sequence comparison algorithms.

    Science.gov (United States)

    Core, N G; Edmiston, E W; Saltz, J H; Smith, R M

    1989-12-01

    Comparison of biological (DNA or protein) sequences provides insight into molecular structure, function, and homology and is increasingly important as the available databases become larger and more numerous. One method of increasing the speed of the calculations is to perform them in parallel. We present the results of initial investigations using two dynamic programming algorithms on the Intel iPSC hypercube and the Connection Machine as well as an inexpensive, heuristically-based algorithm on the Encore Multimax.
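
    The dynamic programming recurrence that such comparisons rely on can be sketched serially as a basic Needleman-Wunsch global alignment score (a generic textbook version, not the specific algorithms benchmarked in the paper):

        # Serial Needleman-Wunsch global alignment score: the kind of dynamic
        # programming recurrence that the cited work runs in parallel.
        def nw_score(a, b, match=1, mismatch=-1, gap=-2):
            n, m = len(a), len(b)
            # score[i][j] = best alignment score of a[:i] and b[:j]
            score = [[0] * (m + 1) for _ in range(n + 1)]
            for i in range(1, n + 1):
                score[i][0] = i * gap
            for j in range(1, m + 1):
                score[0][j] = j * gap
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                    score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
            return score[n][m]

        print(nw_score("GATTACA", "GCATGCU"))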

  6. DOE Zero Energy Ready Home Case Study, Weiss Building & Development, LLC., System Home, River Forest, Illinois

    Energy Technology Data Exchange (ETDEWEB)

    none,

    2013-09-01

    The Passive House Challenge Home located in River Forest, Illinois, is a 5-bedroom, 4.5-bath, 3,600 ft2 two-story home (plus basement) that costs about $237 less per month to operate than a similarly sized home built to the 2009 IECC. Even without solar photovoltaic panels installed, it achieved an impressively low Home Energy Rating System (HERS) score of 27. An ENERGY STAR-rated dishwasher, clothes washer, and refrigerator; an induction cooktop; a condensing clothes dryer; and LED lighting are among the energy-saving devices inside the home. All plumbing fixtures comply with EPA WaterSense criteria. The home was awarded a 2013 Housing Innovation Award in the "systems builder" category.

  7. The Oral History of Evaluation, Part 4: The Professional Evolution of Carol H. Weiss

    Science.gov (United States)

    American Journal of Evaluation, 2006

    2006-01-01

    During the past 3 years, the Oral History Project Study Team, which comprises Jean King, Mel Mark, and Robin Miller, has conducted interviews with individuals who have made signal contributions to the program evaluation field. Their goal was to capture the professional evolution of those who have contributed to the way evaluation in the United…

  8. Weiss oscillations and particle-hole symmetry at the half-filled Landau level

    CERN Document Server

    Cheung, Alfred K C; Mulligan, Michael

    2016-01-01

    Particle-hole symmetry in the lowest Landau level of the two-dimensional electron gas requires the electrical Hall conductivity to equal $\pm e^2/2h$ at half-filling. We study the consequences of weakly broken particle-hole symmetry for magnetoresistance oscillations about half-filling in the presence of an applied periodic one-dimensional electrostatic potential. At fixed electron density, the oscillation minima are asymmetrically biased towards higher magnetic fields, while at fixed magnetic field, the oscillations occur symmetrically as the electron density is varied about half-filling. We find an approximate "sum rule" obeyed for all pairs of oscillation minima that can be tested in experiment. We discuss the implications of our results and approximations for the description of the half-filled Landau level.

  9. 高阶精度CFD应用在天河2系统上的异构并行模拟与性能优化%Heterogeneous Computing and Optimization on Tianhe-2 Supercomputer System for High-Order Accurate CFD Applications

    Institute of Scientific and Technical Information of China (English)

    王勇献; 张理论; 车永刚; 徐传福; 刘巍; 程兴华

    2015-01-01

    There still exist great challenges in simulating large-scale computational fluid dynamics (CFD) applications on contemporary supercomputer systems with many-core heterogeneous architectures such as Tianhe-2, and this is one of the research hotspots in the field. In this paper, we focus on techniques for efficient parallel simulation of large-scale CFD applications with high-order accurate schemes on heterogeneous high-performance computing (HPC) platforms. Approaches and strategies for performance optimization, matched to both the characteristics of the CFD application and the architecture of the heterogeneous HPC platform, are proposed from the perspectives of task decomposition, exploitation of parallelism, optimization of multi-threaded execution, vectorization employing single-instruction multiple-data (SIMD) units, optimization of the cooperation between CPUs and co-processors, and so on. To evaluate these techniques, numerical experiments are performed on the Tianhe-2 supercomputer system with the maximum number of grid points reaching 1.228 × 10^11 and the total number of processors and/or co-processors reaching 590,000. Such a large-scale CFD simulation with a high-order accurate scheme has, to the best of our knowledge, never been attempted before. The optimized code achieves a speedup of 2.6× on the CPU plus co-processor hybrid platform relative to the CPU-only platform, and perfect scalability is also observed in the test results. The present work redefines the frontier of high performance computing for fluid dynamics simulations on heterogeneous platforms.%在当前主流的众核异构高性能计算机平台上开展超大规模计算流体力学(computational fluid dynamics ,CFD)应用的高效并行数值模拟仍然面临着一系列挑战性技术问题,也是该领域的热点研究问题之一.面向天河2高性能异构并行计算

  10. Research on Non-intervention Information Acquisition and Public Sentiment Analysis System for Public Wi-Fi Wireless Networks Based on Supercomputer Platform

    Institute of Scientific and Technical Information of China (English)

    杨明; 舒明雷; 顾卫东; 郭强; 周书旺

    2013-01-01

    An information acquisition and public sentiment analysis system for city-wide public Wi-Fi wireless networks is presented, which uses the petaflops computing platform of the National Supercomputer Center in Ji'nan. Based on non-intervention wireless packet capture technology, Web page recovery and fault-tolerant reassembly technology, multiple text mining techniques, and mass data processing technology, the system can carry out forensics on illegal activity in public Wi-Fi wireless networks, accurately analyse and predict network public sentiment, and provide comprehensive and accurate references for the public-opinion guidance work of the relevant government departments.

  11. JESPP: Joint Experimentation on Scalable Parallel Processors Supercomputers

    Science.gov (United States)

    2010-03-01

    for traceability of individual programmers during the tumult of operations in a simulation bay, where many operators will need to log in, use...of which remains to be apprehended. The author's experience in teaching an introductory course on Data Mining at the Viterbi School of Engineering

  12. Associative memories for supercomputers. Final report, July 1989-January 1991

    Energy Technology Data Exchange (ETDEWEB)

    Esener, S.C.; Marchand, P.; Krishnamoorthy, A.

    1992-12-01

    A motionless head 2-D parallel readout system for optical disks is presented. Its unique features are discussed and it is compared to various parallel access optical storage media. The motionless-head parallel read-out system for optical disks is shown to meet current and near-term future requirements for high performance secondary storage. In order to select a memory architecture compatible with the motionless-head disk, inner-product and outer-product associative memory algorithms are compared in terms of their storage requirements, search times, system complexities, and fault tolerance. Based on this comparison, the page serial, bit-parallel inner-product method is shown to be well suited to implementation with parallel readout optical disk, and opto-electronic XNOR gate arrays, using for instance the Si/PLZT technology. Finally, the associative memory system design is presented.... Memory, Associative memory, Optical disks.

  13. Benchmarking and tuning the MILC code on clusters and supercomputers

    CERN Document Server

    Gottlieb, S

    2002-01-01

    Recently, we have benchmarked and tuned the MILC code on a number of architectures including Intel Itanium and Pentium IV (PIV), dual-CPU Athlon, and the latest Compaq Alpha nodes. Results will be presented for many of these, and we shall discuss some simple code changes that can result in a very dramatic speedup of the KS conjugate gradient on processors with more advanced memory systems such as PIV, IBM SP and Alpha.
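
    The KS (Kogut-Susskind) conjugate gradient named here is a standard Krylov solver applied to a sparse, symmetric positive-definite system; as a reminder of what that kernel does, the following is a minimal sketch of an unpreconditioned conjugate gradient iteration in Python, using a dense NumPy matrix purely for illustration (the names and interface are assumptions, not the MILC implementation).

        import numpy as np

        def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
            """Minimal unpreconditioned CG for a symmetric positive-definite A."""
            x = np.zeros_like(b, dtype=float)
            r = b - A @ x              # initial residual
            p = r.copy()               # initial search direction
            rs_old = r @ r
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rs_old / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs_old) * p
                rs_old = rs_new
            return x

    The dominant cost per iteration is the matrix-vector product, which is why the differences in memory systems between the PIV, IBM SP, and Alpha nodes matter so much for this kernel.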

  14. Parametric Parallel Simulation of Discrete Event Systems on SIMD Supercomputers

    Science.gov (United States)

    1994-05-01

    Fragment of the report's queueing-network formulation: expressions (Eqs. 5.20-5.21) for P(Accepting Arrival @ Node i), P(Accepting Departure @ Node i => Join Node j), and P(Null Event), followed by the remark that the departure rate from node j is 0 when that node is in state 0 and g otherwise.

  15. Sparse matrix-vector multiplication on a reconfigurable supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Dubois, David H [Los Alamos National Laboratory; Dubois, Andrew J [Los Alamos National Laboratory; Boorman, Thomas M [Los Alamos National Laboratory; Connor, Carolyn M [Los Alamos National Laboratory; Poole, Steve [ORNL

    2008-01-01

    Double precision floating point Sparse Matrix-Vector Multiplication (SMVM) is a critical computational kernel used in iterative solvers for systems of sparse linear equations. The poor data locality exhibited by sparse matrices along with the high memory bandwidth requirements of SMVM result in poor performance on general purpose processors. Field Programmable Gate Arrays (FPGAs) offer a possible alternative with their customizable and application-targeted memory sub-system and processing elements. In this work we investigate two separate implementations of the SMVM on an SRC-6 MAPStation workstation. The first implementation investigates the peak performance capability, while the second implementation balances the amount of instantiated logic with the available sustained bandwidth of the FPGA subsystem. Both implementations yield the same sustained performance with the second producing a much more efficient solution. The metrics of processor and application balance are introduced to help provide some insight into the efficiencies of the FPGA and CPU based solutions, explicitly showing the tight coupling of the available bandwidth to peak floating point performance. Due to the FPGA's ability to balance the amount of implemented logic to the available memory bandwidth it can provide a much more efficient solution. Finally, making use of the lessons learned implementing the SMVM, we present a fully implemented nonpreconditioned Conjugate Gradient Algorithm utilizing the second SMVM design.
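
    For reference, the sparse matrix-vector product at the heart of such iterative solvers is usually expressed over a compressed sparse row (CSR) layout; the sketch below is a generic CSR kernel in Python and is not the SRC-6/FPGA implementation described above.

        import numpy as np

        def csr_spmv(values, col_idx, row_ptr, x):
            """Compute y = A @ x for A stored in CSR format:
            values[k] is the k-th nonzero, col_idx[k] its column index,
            and row_ptr[i]:row_ptr[i+1] spans the nonzeros of row i."""
            n_rows = len(row_ptr) - 1
            y = np.zeros(n_rows)
            for i in range(n_rows):
                for k in range(row_ptr[i], row_ptr[i + 1]):
                    y[i] += values[k] * x[col_idx[k]]
            return y

    The irregular, x-indexed loads in the inner loop are exactly the poor-locality accesses that the FPGA memory subsystem is customized to hide.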

  16. Parallel Earthquake Simulations on Large-Scale Multicore Supercomputers

    KAUST Repository

    Wu, Xingfu

    2011-01-01

    Earthquakes are one of the most destructive natural hazards on our planet Earth. Huge earthquakes striking offshore may cause devastating tsunamis, as evidenced by the 11 March 2011 Japan (moment magnitude Mw9.0) and the 26 December 2004 Sumatra (Mw9.1) earthquakes. Earthquake prediction (in terms of the precise time, place, and magnitude of a coming earthquake) is arguably unfeasible in the foreseeable future. To mitigate seismic hazards from future earthquakes in earthquake-prone areas, such as California and Japan, scientists have been using numerical simulations to study earthquake rupture propagation along faults and seismic wave propagation in the surrounding media on ever-advancing modern computers over the past several decades. In particular, ground motion simulations for past and future (possible) significant earthquakes have been performed to understand factors that affect ground shaking in populated areas, and to provide ground shaking characteristics and synthetic seismograms for emergency preparation and design of earthquake-resistant structures. These simulation results can guide the development of more rational seismic provisions, leading to safer, more efficient, and economical structures in earthquake-prone regions.

  17. Supercomputing for weather and climate modelling: convenience or necessity

    CSIR Research Space (South Africa)

    Landman, WA

    2009-12-01

    Full Text Available Weather and climate modelling require dedicated computer infrastructure to run high-resolution, large-ensemble simulations with various models and configurations in order to optimise operational forecasts and climate projections. High...

  18. Multiscale Hy3S: Hybrid stochastic simulation for supercomputers

    Directory of Open Access Journals (Sweden)

    Kaznessis Yiannis N

    2006-02-01

    Full Text Available Abstract Background Stochastic simulation has become a useful tool to both study natural biological systems and design new synthetic ones. By capturing the intrinsic molecular fluctuations of "small" systems, these simulations produce a more accurate picture of single cell dynamics, including interesting phenomena missed by deterministic methods, such as noise-induced oscillations and transitions between stable states. However, the computational cost of the original stochastic simulation algorithm can be high, motivating the use of hybrid stochastic methods. Hybrid stochastic methods partition the system into multiple subsets and describe each subset as a different representation, such as a jump Markov, Poisson, continuous Markov, or deterministic process. By applying valid approximations and self-consistently merging disparate descriptions, a method can be considerably faster, while retaining accuracy. In this paper, we describe Hy3S, a collection of multiscale simulation programs. Results Building on our previous work on developing novel hybrid stochastic algorithms, we have created the Hy3S software package to enable scientists and engineers to both study and design extremely large well-mixed biological systems with many thousands of reactions and chemical species. We have added adaptive stochastic numerical integrators to permit the robust simulation of dynamically stiff biological systems. In addition, Hy3S has many useful features, including embarrassingly parallelized simulations with MPI; special discrete events, such as transcriptional and translation elongation and cell division; mid-simulation perturbations in both the number of molecules of species and reaction kinetic parameters; combinatorial variation of both initial conditions and kinetic parameters to enable sensitivity analysis; use of NetCDF optimized binary format to quickly read and write large datasets; and a simple graphical user interface, written in Matlab, to help users create biological systems and analyze data. We demonstrate the accuracy and efficiency of Hy3S with examples, including a large-scale system benchmark and a complex bistable biochemical network with positive feedback. The software itself is open-sourced under the GPL license and is modular, allowing users to modify it for their own purposes. Conclusion Hy3S is a powerful suite of simulation programs for simulating the stochastic dynamics of networks of biochemical reactions. Its first public version enables computational biologists to more efficiently investigate the dynamics of realistic biological systems.
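
    As background for the hybrid methods above, the exact (and much slower) baseline they accelerate is Gillespie's stochastic simulation algorithm; a minimal sketch with an illustrative birth-death system follows (it is not part of the Hy3S code, and the function names are assumptions).

        import numpy as np

        def gillespie_ssa(x0, stoich, propensity, t_end, seed=0):
            """Exact SSA: x0 are initial molecule counts, stoich[j] is the state
            change of reaction j, propensity(x) returns the reaction rates."""
            rng = np.random.default_rng(seed)
            t, x = 0.0, np.array(x0, dtype=float)
            history = [(t, x.copy())]
            while t < t_end:
                a = propensity(x)
                a0 = a.sum()
                if a0 <= 0.0:
                    break
                t += rng.exponential(1.0 / a0)       # waiting time to next event
                j = rng.choice(len(a), p=a / a0)     # which reaction fires
                x += stoich[j]
                history.append((t, x.copy()))
            return history

        # Illustrative system: 0 -> X at rate k1, X -> 0 at rate k2*X
        k1, k2 = 10.0, 0.1
        traj = gillespie_ssa([0.0],
                             stoich=[np.array([1.0]), np.array([-1.0])],
                             propensity=lambda x: np.array([k1, k2 * x[0]]),
                             t_end=50.0)

    Hybrid methods such as those in Hy3S replace the fast-reaction part of this event loop with Poisson, Langevin, or deterministic updates, keeping the jump-Markov treatment only for the slow, rare reactions.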

  19. Study of ATLAS TRT performance with GRID and supercomputers.

    CERN Document Server

    Krasnopevtsev, Dimitriy; The ATLAS collaboration; Mashinistov, Ruslan; Belyaev, Nikita; Ryabinkin, Evgeny

    2015-01-01

    After the early success in discovering a new particle consistent with the long awaited Higgs boson, Large Hadron Collider experiments are ready for the precision measurements and further discoveries that will be made possible by much higher LHC collision rates from spring 2015. A proper understanding of the detectors performance at high occupancy conditions is important for many on-going physics analyses. The ATLAS Transition Radiation Tracker (TRT) is one of these detectors. TRT is a large straw tube tracking system that is the outermost of the three subsystems of the ATLAS Inner Detector (ID). TRT contributes significantly to the resolution for high-pT tracks in the ID providing excellent particle identification capabilities and electron-pion separation. ATLAS experiment is using Worldwide LHC Computing Grid. WLCG is a global collaboration of computer centers and provides seamless access to computing resources which include data storage capacity, processing power, sensors, visualisation tools and more. WLCG...

  20. ATLAS FTK a - very complex - custom parallel supercomputer

    CERN Document Server

    Kimura, Naoki; The ATLAS collaboration

    2016-01-01

    In the ever-increasing pile-up LHC environment, advanced techniques of analysing the data are implemented in order to increase the rate of relevant physics processes with respect to background processes. The Fast TracKer (FTK) is a track finding implementation at hardware level that is designed to deliver full-scan tracks with $p_{T}$ above 1GeV to the ATLAS trigger system for every L1 accept (at a maximum rate of 100kHz). In order to achieve this performance a highly parallel system was designed and now it is under installation in ATLAS. In the beginning of 2016 it will provide tracks for the trigger system in a region covering the central part of the ATLAS detector, and during the year its coverage will be extended to the full detector. The system relies on matching hits coming from the silicon tracking detectors against 1 billion patterns stored in specially designed ASIC chips (Associative Memory - AM06). In a first stage coarse resolution hits are matched against the patterns and the accepted h...

  1. Integration Of PanDA Workload Management System With Supercomputers

    CERN Document Server

    Klimentov, Alexei; The ATLAS collaboration; Maeno, Tadashi; Mashinistov, Ruslan; Nilsson, Paul; Oleynik, Danila; Panitkin, Sergey; Read, Kenneth; Ryabinkin, Evgeny; Wenaus, Torre

    2015-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 100,000 co...

  3. International Conference Nuclear Theory in the Supercomputing Era 2014

    CERN Document Server

    2014-01-01

    The conference focuses on forefront challenges in physics, namely the fundamentals of nuclear structure and reactions, the origin of the strong inter-nucleon interactions from QCD, and computational nuclear physics with leadership-class computer facilities to provide forefront simulations leading to new discoveries. This is the fourth in the series of NTSE-HITES conferences aimed at bringing together nuclear theorists, computer scientists and applied mathematicians.

  4. Internal fluid mechanics research on supercomputers for aerospace propulsion systems

    Science.gov (United States)

    Miller, Brent A.; Anderson, Bernhard H.; Szuch, John R.

    1988-01-01

    The Internal Fluid Mechanics Division of the NASA Lewis Research Center is combining the key elements of computational fluid dynamics, aerothermodynamic experiments, and advanced computational technology to bring internal computational fluid mechanics (ICFM) to a state of practical application for aerospace propulsion systems. The strategies used to achieve this goal are to: (1) pursue an understanding of flow physics, surface heat transfer, and combustion via analysis and fundamental experiments, (2) incorporate improved understanding of these phenomena into verified 3-D CFD codes, and (3) utilize state-of-the-art computational technology to enhance experimental and CFD research. Presented is an overview of the ICFM program in high-speed propulsion, including work in inlets, turbomachinery, and chemical reacting flows. Ongoing efforts to integrate new computer technologies, such as parallel computing and artificial intelligence, into high-speed aeropropulsion research are described.

  5. San Diego supercomputer center reaches data transfer milestone

    CERN Multimedia

    2002-01-01

    The SDSC's huge, updated tape storage system has illustrated its effectiveness by transferring data at 828 megabytes per second, making it the fastest data archive system according to program director Phil Andrews (1/2 page).

  6. PPARC: World's biggest 'virtual supercomputer' given the go-ahead

    CERN Multimedia

    2003-01-01

    The Particle Physics and Astronomy Research Council has today announced a grant of 16 million pounds to create a massive computing Grid. This Grid, known as GridPP2, will eventually form part of a larger European Grid, to be used to process the data deluge from CERN, when the Large Hadron Collider (LHC), comes online in 2007 (1 page).

  7. Intelligent Testing with Wechsler’s Fourth Editions: Perspectives on the Weiss et al. Studies and the Eight Commentaries

    Science.gov (United States)

    Kaufman, Alan S.

    2013-01-01

    The two featured articles and eight commentaries on the WISC-IV (Wechsler, 2003) and WAIS-IV (Wechsler, 2008) in this special issue of "Journal of Psychoeducational Assessment" are of exceptional quality. As a collective, this special issue greatly advances the field of cognitive assessment by intelligently synthesizing the best of…

  8. Mida tähendab meile holokaust? : võrdlevalt Ameerikast ja Eestist / Anton Weiss-Wendt

    Index Scriptorium Estoniae

    Weiss-Wendt, Anton

    2001-01-01

    On recently published American books devoted to the meaning of the Holocaust: Novick, Peter. The Holocaust in American Life; Cole, Tim. Selling the Holocaust: From Auschwitz to Schindler. How History is Bought, Packaged, and Sold; Finkelstein, Norman. The Holocaust Industry: Reflections on the Exploitation of Jewish Suffering.

  9. Curie-Weiss behavior of Y1-xSrxMnO3 (x = 0 and 0.03)

    Science.gov (United States)

    Thakur, Rajesh K.; Thakur, Rasna; Gaur, N. K.; Bharathi, A.; Kaurav, N.; Okram, G. S.

    2015-06-01

    The effect of bivalent cation Sr-doping on the magnetic properties of multiferroic YMnO3 manganites was systematically studied by DC magnetic measurements. Both of the reported samples were prepared by the solid-state reaction method with composition Y1-xSrxMnO3 (x = 0.00 and 0.03). The X-ray diffraction (XRD) results show that the compounds crystallize in the hexagonal structure with space group P63cm (JCPDS: 25-1079), and a slight increase in the lattice parameter is observed with strontium doping. The magnetisation versus temperature curve shows no clear anomaly near the antiferromagnetic transition temperature (TN); however, the magnetic measurements at 1000 Oe clearly show a slight increase in the magnetisation with increasing strontium content at the Y-site.
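
    For context, the Curie-Weiss behavior named in the title refers to the high-temperature form of the magnetic susceptibility; in standard notation (not taken from this abstract),

        \chi(T) = \frac{C}{T - \theta_{\mathrm{CW}}}, \qquad
        \frac{1}{\chi(T)} = \frac{T - \theta_{\mathrm{CW}}}{C},

    so a linear fit of 1/\chi versus T in the paramagnetic regime above T_N yields the Curie constant C (and hence the effective moment) and the Weiss temperature \theta_{CW}, whose sign indicates whether the dominant interactions are ferromagnetic or antiferromagnetic.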

  12. Computer simulation of sphenopsid architecture. Part II. Calamites multiramis Weiss, as an example of Late Paleozoic arborescent Sphenopsids.

    Science.gov (United States)

    Daviero; Lecoustre

    2000-04-01

    A late Carboniferous arborescent sphenopsid has been modelled for the first time with the AMAP 1 system. The natural entity consisting of the three form species 'Calamites multiramis/Annularia stellata/Calamostachys tuberculata' (respectively the trunk/branches and foliage/cones) representing the aerial part of this plant is reconstructed and its architecture modelled. The different growth stages are extrapolated, generating a dynamic view that did not exist until now. The model is based on the hypothesis that the modelled part is not preformed but results from the successive production and elongation of internodes. This growth led to old ontogenetic stages of the plant in agreement with Remy and Remy's reconstruction (Remy, W., Remy, R., 1977. Die Floren des Erdaltertums. Verlag Glückauf, Essen, 468 pp.). With its verticillate sterile organs and cone-shaped fructifications similar to the extant herbaceous relative Equisetum, this calamite is distinguished from the latter taxon by having possible 'throw-away' phyllomorphic branches. We assumed the presence of a restricted zone of branches located in the apical part of the trunk. Moreover, the production of reproductive organs that succeeds the vegetative stage implies a major photosynthetic phase associated with a monocarpical form of development of the fossil plant.

  13. Cushing Syndrome: Other FAQs

    Science.gov (United States)

    ... Kronenberg, H. M., Shlomo, M., Polonsky, K. S., Larsen P. R. (Eds.). Williams textbook of endocrinology (12th ed.). (chap. 15). Philadelphia, PA: Saunders Elsevier. [top] Abraham, M. R., & Smith, C. V. Adrenal disease and pregnancy. Retrieved April ...

  14. Curie-Weiss behavior of Y{sub 1-x}Sr{sub x}MnO{sub 3} (x = 0 and 0.03)

    Energy Technology Data Exchange (ETDEWEB)

    Thakur, Rajesh K., E-mail: thakur.rajesh2009@gmail.com; Thakur, Rasna; Gaur, N. K. [Department of Physics, Barkatullah University, Bhopal-462026 (India); Bharathi, A. [Condensed Matter Physics Division, Materials Science Group, Indira Gandhi Centre for Atomic Research, Kalpakkam-603102 (India); Kaurav, N. [Department of Physics, Government Holkar Science College, A.B. Road, Indore-452001 (India); Okram, G. S. [Inter-university Consortium for DAE Facilities, Indore-452001 (India)

    2015-06-24

    The effect of bivalent cation Sr-doping on the magnetic properties of multiferroic YMnO{sub 3} manganites was systematically studied by DC magnetic measurements. Both of the reported samples were prepared by the solid-state reaction method with composition Y{sub 1−x}Sr{sub x}MnO{sub 3} (x = 0.00 and 0.03). The X-ray diffraction (XRD) results show that the compounds crystallize in the hexagonal structure with space group P6{sub 3}cm (JCPDS: 25-1079), and a slight increase in the lattice parameter is observed with strontium doping. The magnetisation versus temperature curve shows no clear anomaly near the antiferromagnetic transition temperature (T{sub N}); however, the magnetic measurements at 1000 Oe clearly show a slight increase in the magnetisation with increasing strontium content at the Y-site.

  15. Some Doubts on Weiss' Theory of Intergenerational Equity

    Institute of Scientific and Technical Information of China (English)

    刘卫先

    2010-01-01

    The theory of intergenerational equity, which is widely influential and accepted by most scholars in environmental philosophy, environmental politics, environmental sociology, and environmental law, contains many breaks in its argumentative logic that make the theory internally inconsistent. The so-called future generations and their rights, as well as the fiduciary relationship between generations, are pure fictions; the environmental obligations borne by the present generation correspond not to the "rights of future generations" but to the environmental interests of humanity as a whole. The environmental protection obligations universally borne by people are the proper way forward and destination for the theory of intergenerational equity. Although intergenerational equity cannot be realized in law, taking it as a guideline for our own ethical concern is still of benefit to the protection of the Earth's environmental resources.

  16. La reconstrucción estética de la historia del trabajador (Un diálogo casi posible entre Jünger y Weiss)

    Directory of Open Access Journals (Sweden)

    Molinuevo, José Luis

    1991-10-01

    Full Text Available Not available.

    Two modes of aesthetic reconstruction of history are presented, interwoven with the vicissitudes of a Romantic programme for which freedom is possible only in beauty and the task of Art consists in building a new ethical-political society. The dialogue between the two authors, of such different temperaments, becomes possible once current narrative experiences are taken as historical experiences. Its continuation (not undertaken here) would show some of the limits of contemporary aesthetics of resistance, the assumptions of those historical discourses, and the ethos underlying the hesitant loss of ethical normativity.

  17. Towards 21st Century Stellar Models: Star Clusters, Supercomputing, and Asteroseismology

    DEFF Research Database (Denmark)

    Campbell, S. W.; Constantino, T. N.; D'Orazi, V.;

    2016-01-01

    Stellar models provide a vital basis for many aspects of astronomy and astrophysics. Recent advances in observational astronomy -- through asteroseismology, precision photometry, high-resolution spectroscopy, and large-scale surveys -- are placing stellar models under greater quantitative scrutiny. We give a brief overview of the evolution, importance, and substantial uncertainties of core helium burning stars in particular, and then briefly discuss a range of methods, both theoretical and observational, that we are using to advance the modelling.

  18. Theory, design, and simulation of GASP: A block data flow architecture for gallium arsenide supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Fouts, D.J.

    1990-01-01

    The advantages and disadvantages of using high-speed gallium arsenide (GaAs) logic for implementing digital systems are reviewed. A set of design guidelines is presented for systems that will be constructed with high-speed technologies such as GaAs and silicon emitter coupled logic (ECL). A new class of computer and digital system architectures, known as functionally modular architectures, is defined and explained. Functionally modular architectures are ideal for implementation in GaAs because they adhere to the design guidelines. GASP, a new, functionally modular, block data flow computer architecture is then described. SPICE simulations indicate that if constructed with existing GaAs IC technology, parts of GASP could run at a clock speed of 1 GHz, with the rest of the architecture using a 500 MHz clock. The new architecture uses data flow techniques at a program block level, which allows efficient execution of parallel programs while maintaining reasonably good performance on sequential programs. A simulation study of the architecture's best case and worst case performance is presented. Simulations of GASP executing a highly parallel program indicate that an instruction execution rate of over 30,000 MIPS can be attained with a 65 processor system.

  19. GROMACS: High performance molecular simulations through multi-level parallelism from laptops to supercomputers

    Directory of Open Access Journals (Sweden)

    Mark James Abraham

    2015-09-01

    Full Text Available GROMACS is one of the most widely used open-source and free software codes in chemistry, used primarily for dynamical simulations of biomolecules. It provides a rich set of calculation types, preparation and analysis tools. Several advanced techniques for free-energy calculations are supported. In version 5, it reaches new performance heights, through several new and enhanced parallelization algorithms. These work on every level; SIMD registers inside cores, multithreading, heterogeneous CPU–GPU acceleration, state-of-the-art 3D domain decomposition, and ensemble-level parallelization through built-in replica exchange and the separate Copernicus framework. The latest best-in-class compressed trajectory storage format is supported.

  20. Distributed Processing of PIV images with a low power cluster supercomputer

    Science.gov (United States)

    Smith, Barton; Horne, Kyle; Hauser, Thomas

    2007-11-01

    Recent advances in digital photography and solid-state lasers make it possible to acquire images at up to 3000 frames per second. However, as the ability to acquire large samples very quickly has been realized, processing speed has not kept pace. A 2-D Particle Image Velocimetry (PIV) acquisition computer would require over five hours to process the data that can be acquired in one second with a Time-resolved Stereo PIV (TRSPIV) system. To decrease the computational time, parallel processing using a Beowulf cluster has been applied. At USU we have developed a low-power Beowulf cluster integrated with the data acquisition system of a TRSPIV system. This approach of integrating the PIV system and the Beowulf cluster eliminates the communication time, thus speeding up the process. In addition to improving the practicality of TRSPIV, this system will also be useful to researchers performing any PIV measurement where a large number of samples are required. Our presentation will describe the hardware and software implementation of our approach.
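
    The speedup described here comes from the fact that PIV image pairs can be processed independently of one another; a minimal sketch of that embarrassingly parallel pattern using Python's multiprocessing module is shown below (the process_pair function and file names are placeholders, not the USU cluster implementation).

        from multiprocessing import Pool

        def process_pair(pair):
            """Placeholder: cross-correlate one PIV image pair, return a vector field."""
            frame_a, frame_b = pair
            # ...interrogation-window cross-correlation would go here...
            return {"pair": pair, "vectors": None}

        if __name__ == "__main__":
            pairs = [(f"img_{i:05d}_a.tif", f"img_{i:05d}_b.tif") for i in range(3000)]
            with Pool(processes=16) as pool:            # one worker per available core
                fields = pool.map(process_pair, pairs)  # pairs are independent jobs

    On a Beowulf cluster the same pattern is typically expressed with MPI or a batch scheduler, with acquisition and processing overlapped as described in the abstract.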

  1. Time Parallel Solution of Linear Partial Differential Equations on the Intel Touchstone Delta Supercomputer

    Science.gov (United States)

    Toomarian, N.; Fijany, A.; Barhen, J.

    1993-01-01

    Evolutionary partial differential equations are usually solved by discretization in time and space, and by applying a marching-in-time procedure to data and algorithms potentially parallelized in the spatial domain.
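
    As a concrete example of such a marching-in-time procedure (illustrative only, not the Touchstone Delta implementation), an explicit update of the 1-D heat equation u_t = alpha * u_xx can be written as:

        import numpy as np

        def heat_explicit(u0, alpha, dx, dt, n_steps):
            """March u_t = alpha*u_xx forward in time with an explicit stencil.
            Stable for alpha*dt/dx**2 <= 0.5; end values act as fixed boundaries."""
            u = np.array(u0, dtype=float)
            r = alpha * dt / dx**2
            for _ in range(n_steps):
                u[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
            return u

    The spatial update at each step is what is usually parallelized across processors; the time-parallel approach named in the title instead seeks additional parallelism along the time direction.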

  2. Programmable lithography engine (ProLE) grid-type supercomputer and its applications

    Science.gov (United States)

    Petersen, John S.; Maslow, Mark J.; Gerold, David J.; Greenway, Robert T.

    2003-06-01

    There are many variables that can affect lithographic dependent device yield. Because of this, it is not enough to make optical proximity corrections (OPC) based on the mask type, wavelength, lens, illumination-type and coherence. Resist chemistry and physics along with substrate, exposure, and all post-exposure processing must be considered too. Only a holistic approach to finding imaging solutions will accelerate yield and maximize performance. Since experiments are too costly in both time and money, accomplishing this takes massive amounts of accurate simulation capability. Our solution is to create a workbench that has a set of advanced user applications that utilize best-in-class simulator engines for solving litho-related DFM problems using distributive computing. Our product, ProLE (Programmable Lithography Engine), is an integrated system that combines Petersen Advanced Lithography Inc.'s (PAL's) proprietary applications and cluster management software wrapped around commercial software engines, along with optional commercial hardware and software. It uses the most rigorous lithography simulation engines to solve deep sub-wavelength imaging problems accurately and at speeds that are several orders of magnitude faster than current methods. Specifically, ProLE uses full vector thin-mask aerial image models or, when needed, full across-source 3D electromagnetic field simulation to make accurate aerial image predictions along with calibrated resist models. The ProLE workstation from Petersen Advanced Lithography, Inc., is the first commercial product that makes it possible to do these intensive calculations at a fraction of the time previously required, thus significantly reducing time to market for advanced technology devices. In this work, ProLE is introduced through model comparison to show why vector imaging and rigorous resist models work better than less rigorous models, and then some applications that use our distributive computing solution are shown. Topics covered describe why ProLE solutions are needed from an economic and technical aspect, a high level discussion of how the distributive system works, speed benchmarking, and finally, a brief survey of applications including advanced aberrations for lens sensitivity and flare studies, optical-proximity-correction for a bitcell and an application that will allow evaluation of the potential of a design to have systematic failures during fabrication.

  3. Erasmus Computing Grid : Het Bouwen van een 20 TeraFLOP Virtuelle Supercomputer

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); L.V. de Zeeuw (Luc)

    2007-01-01

    The Erasmus Medical Center (Erasmus MC) and Hogeschool Rotterdam (HR) began a unique collaboration in 2005 to make 95% of the capacity of all their computers, and those of others, available for research and education. This collaboration has led to the Erasmus Computing Grid.

  4. Solving sparse linear least squares problems on some supercomputers by using large dense blocks

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Ostromsky, T; Sameh, A;

    1997-01-01

    The dense matrix technique is preferable to the sparse matrix technique when the matrices are not large, because the high computational speed fully compensates for the disadvantages of using more arithmetic operations and more storage. For very large matrices the computations must be organized as a sequence of tasks in each...... the matrix so that dense blocks can be constructed and treated with some standard software, say LAPACK or NAG. These ideas are implemented for linear least-squares problems. The rectangular matrices (that appear in such problems) are decomposed by an orthogonal method. Results obtained on a CRAY C92A
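
    The orthogonal decomposition used for the rectangular matrices is in the family of QR-based least-squares solvers; a dense-block sketch using the LAPACK routines exposed through NumPy (illustrative only, not the paper's code) is:

        import numpy as np

        def dense_block_lstsq(A, b):
            """Solve min ||A x - b||_2 for a tall dense block A via QR factorization."""
            Q, R = np.linalg.qr(A)              # A = Q R, Q has orthonormal columns
            return np.linalg.solve(R, Q.T @ b)  # back-substitute on the triangular R

        A = np.random.rand(200, 5)              # tall rectangular dense block
        b = np.random.rand(200)
        x = dense_block_lstsq(A, b)

    Grouping the nonzeros of a large sparse problem into such dense blocks trades extra arithmetic and storage for the much higher per-operation speed of dense LAPACK kernels, which is exactly the trade-off discussed in the abstract.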

  5. Comparison of neuronal spike exchange methods on a Blue Gene/P supercomputer

    Directory of Open Access Journals (Sweden)

    Michael eHines

    2011-11-01

    Full Text Available The performance of several spike exchange methods using a Blue Gene/P supercomputer has been tested with 8K to 128K cores using randomly connected networks of up to 32M cells with 1k connections per cell and 4M cells with 10k connections per cell. The spike exchange methods used are the standard Message Passing Interface collective, MPI_Allgather, and several variants of the non-blocking multisend method either implemented via non-blocking MPI_Isend, or exploiting the possibility of very low overhead direct memory access communication available on the Blue Gene/P. In all cases the worst performing method was that using MPI_Isend due to the high overhead of initiating a spike communication. The two best performing methods --- the persistent multisend method using the Record-Replay feature of the Deep Computing Messaging Framework DCMF_Multicast; and a two-phase multisend in which a DCMF_Multicast is used to first send to a subset of phase 1 destination cores which then pass it on to their subset of phase 2 destination cores --- had similar performance with very low overhead for the initiation of spike communication. Departure from ideal scaling for the multisend methods is almost completely due to load imbalance caused by the large variation in the number of cells that fire on each processor in the interval between synchronization. Spike exchange time itself is negligible since transmission overlaps with computation and is handled by a direct memory access controller. We conclude that ideal performance scaling will ultimately be limited by the imbalance in incoming spikes between synchronization intervals. Thus, counterintuitively, maximization of load balance requires that the distribution of cells on processors should not reflect the neural net architecture but be random, so that sets of cells which burst fire together are placed on different processors with their targets spread over as large a set of processors as possible.
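
    For orientation, the simplest of the compared methods, the MPI_Allgather collective, has every rank exchange its locally generated spikes with all other ranks at each synchronization interval; a schematic version using mpi4py (assumed available here; this is not the Blue Gene/P code) looks like:

        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        # Spikes generated locally in this synchronization interval:
        # a list of (global_cell_id, spike_time) tuples.
        local_spikes = [(rank * 1000 + 7, 12.5)]

        # Every rank receives every other rank's spike list (MPI_Allgather semantics).
        all_spikes = comm.allgather(local_spikes)

        # Flatten and deliver to local synaptic targets (delivery logic omitted).
        incoming = [s for spikes in all_spikes for s in spikes]

    The multisend variants studied in the paper avoid sending every spike to every rank, delivering each spike only to the subset of ranks that actually host target cells.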

  6. Defect generation and motion in polyethylene-like crystals, analyzed by simulation with supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Wunderlich, B.; Xenopoulos, A.; Noid, D.W.; Sumpter, B.G. (Oak Ridge National Lab., TN (USA) Tennessee Univ., Knoxville, TN (USA). Dept. of Chemistry)

    1990-01-01

    Defects in polymers were observed by high resolution electron microscopy and inferred from their mechanical and dielectrical behavior. The details of their generation were not known, however, in the past. During the last few years we have been able to extend the molecular dynamics simulation of polyethylene to crystals containing up to 6100 atoms and to times as long as 100 ps. The major observation was that single bond rotations of more than 90 degrees become possible already more than 100 K below the melting temperature. These defects have lifetimes of only a few ps. By coupling to kinks (2g1) they can extend their lifetime considerably. Addition of a thermal, mechanical or dielectric free energy gradient to the thermally created defects seems to be able to account for the microscopic motion needed to explain the macroscopically observed annealing, deformations and relaxation effects. Key to the mechanical and dielectric properties is thus the existence of conformational disorder (condis crystal). 47 refs., 9 figs.

  7. UbiWorld: An environment integrating virtual reality, supercomputing, and design

    Energy Technology Data Exchange (ETDEWEB)

    Disz, T.; Papka, M.E.; Stevens, R. [Argonne National Lab., IL (United States). Mathematics and Computer Science Div.

    1997-07-01

    UbiWorld is a concept being developed by the Futures Laboratory group at Argonne National Laboratory that ties together the notion of ubiquitous computing (Ubicomp) with that of using virtual reality for rapid prototyping. The goal is to develop an environment where one can explore Ubicomp-type concepts without having to build real Ubicomp hardware. The basic notion is to extend object models in a virtual world by using distributed wide area heterogeneous computing technology to provide complex networking and processing capabilities to virtual reality objects.

  8. A Sixty-Year Timeline of the Air Force Maui Optical and Supercomputing Site

    Science.gov (United States)

    2013-01-01

    work from the beginning. This research would not have been as successful without the enthusiastic help and guidance that we received from a number of...Advanced Research Projects Agency (ARPA) proposes the ARPA Midcourse Observation Station (AMOS) as an astronomical-quality observatory for obtaining... guidance on implementation of a basic research program at AMOS. Site Management Duffner, undated 2001 AFRL Det 15 initiates a project to develop and

  9. DNS of MHD turbulent flow via the HELIOS supercomputer system at IFERC-CSC

    Science.gov (United States)

    Satake, Shin-ichi; Kimura, Masato; Yoshimori, Hajime; Kunugi, Tomoaki; Takase, Kazuyuki

    2014-06-01

    Simulation plays an important role in estimating the cooling characteristics of a blanket under the high plasma heating conditions of ITER-BA. The objective of this study is to perform large-scale direct numerical simulation (DNS) of heat transfer in magnetohydrodynamic (MHD) turbulent flow for coolant materials ranging from Flibe to lithium. The coolant flow conditions in ITER-BA are assumed to correspond to high Reynolds and Hartmann numbers. Based on the benchmark results obtained on Helios at IFERC-CSC during Project cycle 1, the maximum DNS target assumed in this study is 116 TB (2048 nodes). Moreover, we tested direct visualization of the large-scale computational results with ParaView. If this large-scale DNS becomes possible, it will contribute greatly to an essential understanding and modelling of MHD turbulent flow and to the design of nuclear fusion reactors.

  10. Using Mitrion-C to Implement Floating-Point Arithmetic on a Cray XD1 Supercomputer

    Science.gov (United States)

    2008-01-01

    ...FPGA's memory and host memory present on the same compute node...memories had a bit-width of 64 bits, or 8 bytes, the

  11. LDRD final report : a lightweight operating system for multi-core capability class supercomputers.

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, Suzanne Marie; Hudson, Trammell B. (OS Research); Ferreira, Kurt Brian; Bridges, Patrick G. (University of New Mexico); Pedretti, Kevin Thomas Tauke; Levenhagen, Michael J.; Brightwell, Ronald Brian

    2010-09-01

    The two primary objectives of this LDRD project were to create a lightweight kernel (LWK) operating system (OS) designed to take maximum advantage of multi-core processors, and to leverage the virtualization capabilities in modern multi-core processors to create a more flexible and adaptable LWK environment. The most significant technical accomplishments of this project were the development of the Kitten lightweight kernel, the co-development of the SMARTMAP intra-node memory mapping technique, and the development and demonstration of a scalable virtualization environment for HPC. Each of these topics is presented in this report by the inclusion of a published or submitted research paper. The results of this project are being leveraged by several ongoing and new research projects.

  12. Supercomputing for molecular dynamics simulations handling multi-trillion particles in nanofluidics

    CERN Document Server

    Heinecke, Alexander; Horsch, Martin; Bungartz, Hans-Joachim

    2015-01-01

    This work presents modern implementations of relevant molecular dynamics algorithms using ls1 mardyn, a simulation program for engineering applications. The text focuses strictly on HPC-related aspects, covering implementation on HPC architectures, taking Intel Xeon and Intel Xeon Phi clusters as representatives of current platforms. The work describes distributed and shared-memory parallelization on these platforms, including load balancing, with a particular focus on the efficient implementation of the compute kernels. The text also discusses the software-architecture of the resulting code.

  13. What would a data scientist do with 10 seconds on a supercomputer?

    Science.gov (United States)

    Nychka, D. W.

    2014-12-01

    The statistical problems of large climate datasets, the flexibility of high-level data languages such as R, and the architectures of current supercomputers have motivated a different paradigm for data analysis problems that are amenable to being parallelized. Part of the switch in thinking is to harness many cores for a short amount of time to produce interactive-like exploratory data analysis for the space-time data sets typically encountered in the geosciences. As motivation we consider the near-interactive analysis of daily observed temperature and rainfall fields for North America over the past 30 years. For certain kinds of analysis the potential is for speedups on the order of a factor of 1000 or more, and this changes traditional workflows of statistical modeling and inference for large geophysical datasets.

  14. Modeling cardiovascular hemodynamics using the lattice Boltzmann method on massively parallel supercomputers

    Science.gov (United States)

    Randles, Amanda Elizabeth

    Accurate and reliable modeling of cardiovascular hemodynamics has the potential to improve understanding of the localization and progression of heart diseases, which are currently the most common cause of death in Western countries. However, building a detailed, realistic model of human blood flow is a formidable mathematical and computational challenge. The simulation must combine the motion of the fluid, the intricate geometry of the blood vessels, continual changes in flow and pressure driven by the heartbeat, and the behavior of suspended bodies such as red blood cells. Such simulations can provide insight into factors like endothelial shear stress that act as triggers for the complex biomechanical events that can lead to atherosclerotic pathologies. Currently, it is not possible to measure endothelial shear stress in vivo, making these simulations a crucial component to understanding and potentially predicting the progression of cardiovascular disease. In this thesis, an approach for efficiently modeling the fluid movement coupled to the cell dynamics in real-patient geometries while accounting for the additional force from the expansion and contraction of the heart will be presented and examined. First, a novel method to couple a mesoscopic lattice Boltzmann fluid model to the microscopic molecular dynamics model of cell movement is elucidated. A treatment of red blood cells as extended structures, a method to handle highly irregular geometries through topology driven graph partitioning, and an efficient molecular dynamics load balancing scheme are introduced. These result in a large-scale simulation of the cardiovascular system, with a realistic description of the complex human arterial geometry, from centimeters down to the spatial resolution of red-blood cells. The computational methods developed to enable scaling of the application to 294,912 processors are discussed, thus empowering the simulation of a full heartbeat. Second, further extensions to enable the modeling of fluids in vessels with smaller diameters and a method for introducing the deformational forces exerted on the arterial flows from the movement of the heart by borrowing concepts from cosmodynamics are presented. These additional forces have a great impact on the endothelial shear stress. Third, the fluid model is extended to not only recover Navier-Stokes hydrodynamics, but also a wider range of Knudsen numbers, which is especially important in micro- and nano-scale flows. The tradeoffs of many optimizations methods such as the use of deep halo level ghost cells that, alongside hybrid programming models, reduce the impact of such higher-order models and enable efficient modeling of extreme regimes of computational fluid dynamics are discussed. Fourth, the extension of these models to other research questions like clogging in microfluidic devices and determining the severity of co-arctation of the aorta is presented. Through this work, a validation of these methods by taking real patient data and the measured pressure value before the narrowing of the aorta and predicting the pressure drop across the co-arctation is shown. Comparison with the measured pressure drop in vivo highlights the accuracy and potential impact of such patient specific simulations. Finally, a method to enable the simulation of longer trajectories in time by discretizing both spatially and temporally is presented. 
In this method, a serial coarse iterator is used to initialize data at discrete time steps for a fine model that runs in parallel. This coarse solver is based on a larger time step and typically a coarser discretization in space. Iterative refinement enables the compute-intensive fine iterator to be modeled with temporal parallelization. The algorithm consists of a series of predictor-corrector iterations completing when the results have converged within a certain tolerance. Combined, these developments allow large fluid models to be simulated for longer time durations than previously possible.
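
    The coarse/fine scheme sketched in the last paragraph is essentially a parareal-style predictor-corrector iteration; a generic minimal sketch (with the parallel fine solves written serially for clarity, and all names illustrative) is:

        import numpy as np

        def parareal(u0, coarse, fine, n_slices, n_iters):
            """Parareal-style iteration: coarse is a cheap propagator over one time
            slice, fine the accurate one; in practice the fine solves run in parallel."""
            U = [u0] + [None] * n_slices
            for n in range(n_slices):                    # serial coarse prediction
                U[n + 1] = coarse(U[n])
            for _ in range(n_iters):
                F = [fine(U[n]) for n in range(n_slices)]      # parallelizable step
                U_new = [u0]
                for n in range(n_slices):                      # serial correction sweep
                    U_new.append(coarse(U_new[n]) + F[n] - coarse(U[n]))
                U = U_new
            return U

        # Example: du/dt = -u over each slice; coarse = 1 Euler step, fine = 100 steps
        dt = 0.1
        coarse = lambda u: u * (1.0 - dt)
        fine = lambda u: u * (1.0 - dt / 100.0) ** 100
        U = parareal(1.0, coarse, fine, n_slices=20, n_iters=3)

    Convergence is declared when successive iterates agree within the chosen tolerance, at which point the fine-resolution trajectory has been recovered with most of the expensive work done concurrently.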

  15. Science Driven Supercomputing Architectures: AnalyzingArchitectural Bottlenecks with Applications and Benchmark Probes

    Energy Technology Data Exchange (ETDEWEB)

    Kamil, S.; Yelick, K.; Kramer, W.T.; Oliker, L.; Shalf, J.; Shan,H.; Strohmaier, E.

    2005-09-26

    There is a growing gap between the peak speed of parallel computing systems and the actual delivered performance for scientific applications. In general this gap is caused by inadequate architectural support for the requirements of modern scientific applications, as commercial applications and the much larger market they represent, have driven the evolution of computer architectures. This gap has raised the importance of developing better benchmarking methodologies to characterize and to understand the performance requirements of scientific applications, to communicate them efficiently to influence the design of future computer architectures. This improved understanding of the performance behavior of scientific applications will allow improved performance predictions, development of adequate benchmarks for identification of hardware and application features that work well or poorly together, and a more systematic performance evaluation in procurement situations. The Berkeley Institute for Performance Studies has developed a three-level approach to evaluating the design of high end machines and the software that runs on them: (1) A suite of representative applications; (2) A set of application kernels; and (3) Benchmarks to measure key system parameters. The three levels yield different type of information, all of which are useful in evaluating systems, and enable NSF and DOE centers to select computer architectures more suited for scientific applications. The analysis will further allow the centers to engage vendors in discussion of strategies to alleviate the present architectural bottlenecks using quantitative information. These may include small hardware changes or larger ones that may be out interest to non-scientific workloads. Providing quantitative models to the vendors allows them to assess the benefits of technology alternatives using their own internal cost-models in the broader marketplace, ideally facilitating the development of future computer architectures more suited for scientific computations. The three levels also come with vastly different investments: the benchmarking efforts require significant rewriting to effectively use a given architecture, which is much more difficult on full applications than on smaller benchmarks.

  18. Air Force Maui Optical and Supercomputing Site (AMOS) Application Briefs 2004

    Science.gov (United States)

    2004-01-01

    behavior of distal residues in HemAT-Hs. Significance and Vision: Characterization of this protein, from the signaling domain of aerotaxis , by theoretical...Bioinformatics project in collaboration with the University of Hawaii at MHPCC. The heme-containing globular protein from the aerotaxis region of

  19. Assessing the Need for Supercomputing Resources Within the Pacific Area of Responsibility

    Science.gov (United States)

    2015-05-26

    database can be divided into smaller shards that are distributed to many nodes, and each node performs the search in parallel with the rest. The desired...subparts that can be solved in parallel, for example, when searching a large database for records satisfying a particular condition. To solve this, the...records are then collected after the parallel searches are completed. However, during the time that the search is being performed on each shard

  20. The Erasmus Computing Grid – Building a Super-Computer for Free

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); A. Abuseiris (Anis); R.M. de Graaf (Rob); M. Lesnussa (Michael); F.G. Grosveld (Frank)

    2011-01-01

    Today advances in scientific research as well as clinical diagnostics and treatment are inevitably connected with information solutions concerning computation power and information storage. The needs for information technology are enormous and are in many cases the limiting factor for...

  1. The Erasmus Computing Grid - Building a Super-Computer for FREE

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); L.V. de Zeeuw (Luc)

    2007-01-01

    Today advances in scientific research as well as clinical diagnostics and treatment are inevitably connected with information solutions concerning computation power and information storage. The needs for information technology are enormous and are in many cases the limiting factor for...

  2. Erasmus Computing Grid: Het bouwen van een 20 Tera-FLOPS Virtuele Supercomputer.

    NARCIS (Netherlands)

    L.V. de Zeeuw (Luc); T.A. Knoch (Tobias); J.H. van den Berg (Jan); F.G. Grosveld (Frank)

    2007-01-01

    The Erasmus Medical Center and Hogeschool Rotterdam began a collaboration in 2005 to make the roughly 95% of unused computing capacity of their computers available for research and education. This collaboration has led to the Erasmus Computing GRID (ECG), a virtual supercomputer.

  3. Installation of the CDC 7600 supercomputer system in the computer centre in 1972

    CERN Multimedia

    Nettz, William

    1972-01-01

    The CDC 7600 was installed in 1972 in the newly built computer centre. It was said to be the largest and most powerful computer system in Europe at that time and remained the fastest machine at CERN for 9 years. It was replaced after 12 years. Dr. Julian Blake (CERN), Dr. Tor Bloch (CERN), Erwin Gasser (Control Data Corporation), Jean-Marie LaPorte (Control Data Corporation), Peter McWilliam (Control Data Corporation), Hans Oeshlein (Control Data Corporation), and Peter Warn (Control Data Corporation) were heavily involved in this project and may appear on the pictures. William Nettz (who took the pictures) was in charge of the installation. Excerpt from CERN annual report 1972: 'Data handling and evaluation is becoming an increasingly important part of physics experiments. In order to meet these requirements a new central computer system, CDC 7600/6400, has been acquired and it was brought into more or less regular service during the year. Some initial hardware problems have disappeared but work has still to...

  4. Optimization of a Power Transient Stability Program on a Vector Supercomputer, Theory and Applications

    Science.gov (United States)

    1990-04-18

    Contents excerpts: System Solution Techniques; Introduction; Gauss-Seidel Method; Newton-Raphson Method; Summary ... is acceptable. Traditionally, load-flow and stability studies have been solved using either the Gauss-Seidel method or the Newton-Raphson method ... The Gauss-Seidel method solves each equation in the system in turn, by assuming a value for each variable, solving the equation ...
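
    To make the contrast with Newton-Raphson concrete, here is a minimal, illustrative Gauss-Seidel iteration for a linear system A x = b; the small diagonally dominant test system is invented for the sketch and is not taken from the thesis:

    def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
        n = len(b)
        x = list(x0) if x0 else [0.0] * n
        for _ in range(max_iter):
            max_change = 0.0
            for i in range(n):
                # Use the newest available values of the other unknowns.
                s = sum(A[i][j] * x[j] for j in range(n) if j != i)
                new_xi = (b[i] - s) / A[i][i]
                max_change = max(max_change, abs(new_xi - x[i]))
                x[i] = new_xi
            if max_change < tol:
                break
        return x

    A = [[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
    b = [15.0, 10.0, 10.0]
    print(gauss_seidel(A, b))  # converges because A is diagonally dominant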

  5. The Erasmus Computing Grid – Building a Super-Computer for Free

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); A. Abuseiris (Anis); R.M. de Graaf (Rob); M. Lesnussa (Michael); F.G. Grosveld (Frank)

    2011-01-01

    textabstractToday advances in scientific research as well as clinical diagnostics and treatment are inevitably connected with information solutions concerning computation power and information storage. The needs for information technology are enormous and are in many cases the limiting factor for

  6. The Erasmus Computing Grid - Building a Super-Computer for FREE

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); L.V. de Zeeuw (Luc)

    2007-01-01

    textabstractToday advances in scientific research as well as clinical diagnostics and treatment are inevitably connected with information solutions concerning computation power and information storage. The needs for information technology are enormous and are in many cases the limiting

  7. Optimization of the computational load of a hypercube supercomputer onboard a mobile robot

    Energy Technology Data Exchange (ETDEWEB)

    Barhen, J.; Toomarian, N.; Protopopescu, V.

    1987-12-01

    A combinatorial optimization methodology is developed, which enables the efficient use of hypercube multiprocessors onboard mobile intelligent robots dedicated to time-critical missions. The methodology is implemented in terms of large-scale concurrent algorithms based either on fast simulated annealing, or on nonlinear asynchronous neural networks. In particular, analytic expressions are given for the effect of single-neuron perturbations on the systems' configuration energy. Compact neuromorphic data structures are used to model effects such as precedence constraints, processor idling times, and task-schedule overlaps. Results for a typical robot-dynamics benchmark are presented.
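
    A toy sketch of the simulated-annealing idea mentioned above, applied to mapping tasks onto processors so that the maximum processor load (standing in for the configuration energy) is minimized; the task costs, cooling schedule and acceptance rule are illustrative assumptions, not the report's fast-annealing or neural-network formulation:

    import math
    import random

    random.seed(0)

    def makespan(assignment, costs, n_procs):
        # Configuration "energy": the load of the most heavily loaded processor.
        load = [0.0] * n_procs
        for task, proc in enumerate(assignment):
            load[proc] += costs[task]
        return max(load)

    def anneal(costs, n_procs, steps=20_000, t0=10.0):
        assign = [random.randrange(n_procs) for _ in costs]
        energy = makespan(assign, costs, n_procs)
        for step in range(steps):
            temp = t0 * (1.0 - step / steps) + 1e-6          # linear cooling
            task = random.randrange(len(costs))
            old = assign[task]
            assign[task] = random.randrange(n_procs)          # perturb one assignment
            new_energy = makespan(assign, costs, n_procs)
            accept = (new_energy <= energy
                      or random.random() < math.exp((energy - new_energy) / temp))
            if accept:
                energy = new_energy
            else:
                assign[task] = old                            # undo the rejected move
        return assign, energy

    costs = [random.uniform(1.0, 10.0) for _ in range(64)]    # per-task run times
    print("best makespan found:", round(anneal(costs, n_procs=8)[1], 2))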

  8. Validation of a multidimensional deterministic nuclear data sensitivity and uncertainty code system: an application needing supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Bidaud, A.; Mastrangelo, V. [Conservatoire National des Arts et Metiers, Laboratoire de Physique (CNAM), 75 - Paris (France); Institut de Physique Nucleaire (IN2P3/CNRS) 91 - Orsay (France); Kodeli, I.; Sartori, E. [OECD NEA Data Bank, 92 - Issy les Moulineaux (France)

    2003-07-01

    The quality of nuclear core modelling is linked to the quality of basic nuclear data such as the probabilities of reaction (i.e., cross sections) between neutrons and the nuclei of the core materials. Perturbation theory, whose applications in nuclear science were largely developed in the sixties, provides tools for estimating the sensitivity of integral parameters such as k-eff, reaction rates, or the breeding ratio to the cross sections. Computation with these tools requires approximations in the simulation of space-, angle- and energy-dependent neutron transport. To minimise the impact of geometry-modelling approximations on the calculation, the use of 3-dimensional multigroup transport codes is recommended. Sensitivity and uncertainty analyses are the tools needed to estimate the accuracy that a code system with its data libraries can achieve, and they can guide users as to the specific need for improved data to carry out reliable simulations. However, as full-scale 3-dimensional models with refined descriptions of the phase space are used, high-performance computers and codes designed to run on parallel architectures are needed to obtain results within acceptable time limits.
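
    The propagation step such analyses commonly rest on is the "sandwich rule", in which the variance of an integral response R is S^T C S, with S the vector of sensitivities of R to the cross sections and C the cross-section covariance matrix. A minimal numerical sketch, with invented sensitivities and covariances, is:

    def sandwich_variance(S, C):
        # Variance of the response: sum over i, j of S_i * C_ij * S_j.
        n = len(S)
        return sum(S[i] * C[i][j] * S[j] for i in range(n) for j in range(n))

    S = [0.30, -0.12, 0.05]                 # illustrative relative sensitivities of k-eff
    C = [[4.0e-4, 1.0e-5, 0.0],             # illustrative relative covariance matrix
         [1.0e-5, 9.0e-4, 2.0e-5],
         [0.0,    2.0e-5, 2.5e-4]]

    variance = sandwich_variance(S, C)
    print(f"relative uncertainty on the response: {variance ** 0.5:.4%}")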

  9. Introductory User’s Guide to the ARL Supercomputer Facility at APG

    Science.gov (United States)

    1993-06-01

    ... 12.4.4 flint ... 13. Interlanguage Communication, 13.1 Fortran and C ... Interlanguage Communication: The considerations expressed in this chapter apply in general to programs whose ... 2079 6.0 (Sections 1 and 8, and Appendix A), UNICOS Standard C Library Reference Manual, SR-2080 6.0 (Interlanguage Communications), UNICOS Math ...

  10. Advancements and performance of iterative methods in industrial applications codes on CRAY parallel/vector supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Poole, G.; Heroux, M. [Engineering Applications Group, Eagan, MN (United States)

    1994-12-31

    This paper focuses on recent work with iterative methods in two widely used industrial applications codes. The ANSYS program, a general-purpose finite element code widely used in structural analysis applications, has now added an iterative solver option. Some results are given from real applications comparing its performance with the traditional parallel/vector frontal solver used in ANSYS. Discussion of the applicability of iterative methods as a general-purpose solver covers robustness as well as memory requirements and CPU performance. The FIDAP program is a widely used CFD code which uses iterative solvers routinely. A brief description of the preconditioners used and of some performance enhancements for CRAY parallel/vector systems is given. The solution of large-scale applications in structures and CFD is illustrated with industry problems solved on CRAY systems.
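
    For readers unfamiliar with the class of methods being compared against the frontal solver, here is a bare-bones (unpreconditioned) conjugate-gradient iteration for a symmetric positive-definite system; the tiny test matrix is an invented example, not one of the ANSYS or FIDAP cases:

    def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
        n = len(b)
        x = [0.0] * n
        r = list(b)                           # residual r = b - A x, with x = 0
        p = list(r)
        rs_old = sum(ri * ri for ri in r)
        for _ in range(max_iter):
            Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
            alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
            x = [x[i] + alpha * p[i] for i in range(n)]
            r = [r[i] - alpha * Ap[i] for i in range(n)]
            rs_new = sum(ri * ri for ri in r)
            if rs_new ** 0.5 < tol:
                break
            p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
            rs_old = rs_new
        return x

    A = [[4.0, 1.0], [1.0, 3.0]]
    b = [1.0, 2.0]
    print(conjugate_gradient(A, b))           # ~[0.0909, 0.6364]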

  11. A visualization environment for supercomputing-based applications in computational mechanics

    Energy Technology Data Exchange (ETDEWEB)

    Pavlakos, C.J.; Schoof, L.A.; Mareda, J.F.

    1993-06-01

    In this paper, we characterize a visualization environment that has been designed and prototyped for a large community of scientists and engineers, with an emphasis on supercomputing-based computational mechanics. The proposed environment makes use of a visualization server concept to provide effective, interactive visualization to the user's desktop. Benefits of using the visualization server approach are discussed. Some thoughts regarding desirable features for visualization server hardware architectures are also addressed. A brief discussion of the software environment is included. The paper concludes by summarizing certain observations which we have made regarding the implementation of such visualization environments.

  12. Novel Supercomputing Approaches for High Performance Linear Algebra Using FPGAs Project

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to develop novel FPGA-based algorithmic technology that will enable unprecedented computational power for the solution of large sparse linear equation...

  13. Integration Of PanDA Workload Management System With Supercomputers for ATLAS

    CERN Document Server

    Oleynik, Danila; The ATLAS collaboration; De, Kaushik; Wenaus, Torre; Maeno, Tadashi; Barreiro Megino, Fernando Harald; Nilsson, Paul; Guan, Wen; Panitkin, Sergey

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production ANd Distributed Analysis system) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more t...

  14. Towards 21st Century Stellar Models: Star Clusters, Supercomputing, and Asteroseismology

    CERN Document Server

    Campbell, S W; D'Orazi, V; Meakin, C; Stello, D; Christensen-Dalsgaard, J; Kuehn, C; De Silva, G M; Arnett, W D; Lattanzio, J C; MacLean, B T

    2015-01-01

    Stellar models provide a vital basis for many aspects of astronomy and astrophysics. Recent advances in observational astronomy -- through asteroseismology, precision photometry, high-resolution spectroscopy, and large-scale surveys -- are placing stellar models under greater quantitative scrutiny than ever. The model limitations are being exposed and the next generation of stellar models is needed as soon as possible. The current uncertainties in the models propagate to the later phases of stellar evolution, hindering our understanding of stellar populations and chemical evolution. Here we give a brief overview of the evolution, importance, and substantial uncertainties of core helium burning stars in particular and then briefly discuss a range of methods, both theoretical and observational, that we are using to advance the modelling.

  15. Parallel Supercomputing PC Cluster and Some Physical Results in Lattice QCD

    Institute of Scientific and Technical Information of China (English)

    LUO Xiang-Qian; MEI Zhong-Hao; Eric B. Gregory; YANG Jie-Chao; WANG Yu-Li; LIN Yin

    2003-01-01

    We describe the construction of a high-performance parallel computer composed of PC components and present some physical results for light hadron and hybrid meson masses from lattice QCD. We also show that the smearing technique is very useful for improving the spectrum calculations.

  16. Towards 21st Century Stellar Models: Star Clusters, Supercomputing, and Asteroseismology

    DEFF Research Database (Denmark)

    Campbell, S. W.; Constantino, T. N.; D'Orazi, V.;

    2016-01-01

    Stellar models provide a vital basis for many aspects of astronomy and astrophysics. Recent advances in observational astronomy -- through asteroseismology, precision photometry, high-resolution spectroscopy, and large-scale surveys -- are placing stellar models under greater quantitative scrutin...

  17. Good Seeing: Best Practices for Sustainable Operations at the Air Force Maui Optical and Supercomputing Site

    Science.gov (United States)

    2016-01-01

    ... instrument switching capabilities, a steady stream of telemetry must be supplied by detectors and actuators situated at all points of possible failure ... remote staff must have constant access to this telemetry. The telescope must be able to be opened and closed remotely, and weather changes must be ... and narrow-field facilities rarely occupy the same site, so this dual capacity is part of AMOS's unique value proposition. Since AFRL took over ...

  18. Ada compiler validation summary report. Certificate number: 891116W1.10191. Intel Corporation, IPSC/2 Ada, Release 1.1, IPSC/2 parallel supercomputer, system resource manager host and IPSC/2 parallel supercomputer, CX-1 nodes target

    Energy Technology Data Exchange (ETDEWEB)

    1989-11-16

    This VSR documents the results of the validation testing performed on an Ada compiler. Testing was carried out for the following purposes: to attempt to identify any language constructs supported by the compiler that do not conform to the Ada Standard; to attempt to identify any language constructs not supported by the compiler but required by the Ada Standard; and to determine that the implementation-dependent behavior is allowed by the Ada Standard. Testing of this compiler was conducted by SofTech, Inc. under the direction of the AVF according to procedures established by the Ada Joint Program Office and administered by the Ada Validation Organization (AVO). On-site testing was completed 16 November 1989 at Aloha, OR.

  19. The GLOBE-Consortium: The Erasmus Computing Grid – Building a Super-Computer at Erasmus MC for FREE

    NARCIS (Netherlands)

    T.A. Knoch (Tobias)

    2005-01-01

    textabstractTo meet the enormous computational needs of live-science research as well as clinical diagnostics and treatment the Hogeschool Rotterdam and the Erasmus Medical Center are currently setting up one of the largest desktop computing grids in the world – The Erasmus Computing Grid.

  20. SMILEI: A collaborative, open-source, multi-purpose PIC code for the next generation of super-computers

    Science.gov (United States)

    Grech, Mickael; Derouillat, J.; Beck, A.; Chiaramello, M.; Grassi, A.; Niel, F.; Perez, F.; Vinci, T.; Fle, M.; Aunai, N.; Dargent, J.; Plotnikov, I.; Bouchard, G.; Savoini, P.; Riconda, C.

    2016-10-01

    Over the last decades, Particle-In-Cell (PIC) codes have been central tools for plasma simulations. Today, new trends in High-Performance Computing (HPC) are emerging, dramatically changing HPC-relevant software design and leaving some, if not most, legacy codes far below the level of performance expected on the new and future massively parallel supercomputers. SMILEI is a new open-source PIC code co-developed by plasma physicists and HPC specialists and applied to a wide range of physics studies, from laser-plasma interaction to astrophysical plasmas. It benefits from an innovative parallelization strategy that relies on a super-domain decomposition allowing for enhanced cache use and efficient dynamic load balancing. Beyond these HPC-related developments, SMILEI also benefits from additional physics modules that handle binary collisions, field and collisional ionization, and radiation back-reaction. This poster presents the SMILEI project, its HPC capabilities, and some of the physics problems tackled with SMILEI.

  1. Optimization of Applications with Non-blocking Neighborhood Collectives via Multisends on the Blue Gene/P Supercomputer.

    Science.gov (United States)

    Kumar, Sameer; Heidelberger, Philip; Chen, Dong; Hines, Michael

    2010-04-19

    We explore the multisend interface as a data mover interface to optimize applications with neighborhood collective communication operations. One of the limitations of the current MPI 2.1 standard is that the vector collective calls require counts and displacements (zero and nonzero bytes) to be specified for all the processors in the communicator. Further, all the collective calls in MPI 2.1 are blocking and do not permit overlap of communication with computation. We present the record-replay persistent optimization to the multisend interface that minimizes the processor overhead of initiating the collective. We present four different case studies with the multisend API on Blue Gene/P: (i) 3D-FFT, (ii) 4D nearest neighbor exchange as used in Quantum Chromodynamics, (iii) NAMD and (iv) the neural network simulator NEURON. Performance results show 1.9× speedup with 32^3 3D-FFTs, 1.9× speedup for 4D nearest neighbor exchange with the 2^4 problem, 1.6× speedup in NAMD and almost 3× speedup in NEURON with 256K cells and 1k connections/cell.
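
    As a purely illustrative aside (not code from the paper), the neighborhood involved in case study (ii), a 4D nearest-neighbor exchange, can be enumerated as follows; the grid shape and rank numbering are assumptions for the sketch and do not reflect the Blue Gene/P mapping:

    def rank_to_coords(rank, dims):
        coords = []
        for d in dims:
            coords.append(rank % d)
            rank //= d
        return coords

    def coords_to_rank(coords, dims):
        rank, stride = 0, 1
        for c, d in zip(coords, dims):
            rank += (c % d) * stride          # wrap-around: torus topology
            stride *= d
        return rank

    def torus_neighbors(rank, dims):
        # Return the 2 * len(dims) ranks reached by +/- 1 steps in each dimension.
        base = rank_to_coords(rank, dims)
        neighbors = []
        for axis in range(len(dims)):
            for step in (-1, +1):
                c = list(base)
                c[axis] += step
                neighbors.append(coords_to_rank(c, dims))
        return neighbors

    print(torus_neighbors(rank=0, dims=(4, 4, 4, 4)))  # 8 neighbors on a 4^4 torus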

  2. Impacts of Academic R&D on High-Tech Manufacturing Products: Tentative Evidence from Supercomputer Data

    Science.gov (United States)

    Le, Thanh; Tang, Kam Ki

    2015-01-01

    This paper empirically examines the impact of academic research on high-tech manufacturing growth of 28 Organisation for Economic Co-operation and Development (OECD) and emerging countries over the 1991-2005 period. A standard research and development (R&D) expenditure based measure is found to be too general to capture the input in high-tech…

  3. The GLOBE-Consortium: The Erasmus Computing Grid – Building a Super-Computer at Erasmus MC for FREE

    NARCIS (Netherlands)

    T.A. Knoch (Tobias)

    2005-01-01

    textabstractTo meet the enormous computational needs of live-science research as well as clinical diagnostics and treatment the Hogeschool Rotterdam and the Erasmus Medical Center are currently setting up one of the largest desktop computing grids in the world – The Erasmus Computing Grid. Curren

  4. Cosmological simulations of galaxy formation: Successes and challenges in the era of supercomputers. Ludwig Biermann Award Lecture 2012

    Science.gov (United States)

    Scannapieco, C.

    2013-06-01

    I use cosmological hydrodynamical simulations to study the formation and evolution of galaxies similar in mass to the Milky Way. First, I use a set of eight simulations where the haloes have a great variety of merger and formation histories, to investigate how similar or diverse these galaxies are at the present epoch, and how their final properties are related to the particular formation history of the galaxy. I find that rotationally-supported disks are present in 7 of the 8 galaxies at z ~ 2-3; however, only half of the galaxies have significant disks at z = 0. Both major mergers and the accretion of gas that is misaligned with the preexisting stellar disk contribute to the transfer of material from the disks to the spheroidal components, lowering the disk-to-total ratios during evolution. I also present and discuss recent results of the Aquila Project, which compares the predictions of 13 different numerical codes for the properties of a galaxy in a Lambda cold dark matter universe. All simulations use a unique initial condition and are analysed in the exact same way, allowing a fair comparison of results. We find large code-to-code variations in stellar masses, star formation rates, galaxy sizes and morphologies. We also find that the way feedback is implemented is the main cause of the differences, although some differences might also result from the use of different numerical techniques. Our results show that state-of-the-art simulations cannot yet uniquely predict the properties of the baryonic component of a galaxy, even when the assembly history of its host halo is fully specified.

  5. Impacts of Academic R&D on High-Tech Manufacturing Products: Tentative Evidence from Supercomputer Data

    Science.gov (United States)

    Le, Thanh; Tang, Kam Ki

    2015-01-01

    This paper empirically examines the impact of academic research on high-tech manufacturing growth of 28 Organisation for Economic Co-operation and Development (OECD) and emerging countries over the 1991-2005 period. A standard research and development (R&D) expenditure based measure is found to be too general to capture the input in high-tech…

  6. Implementation of a distributed adaptive routing algorithm on the intel IPSC (Intel Personal Supercomputer). Master's thesis

    Energy Technology Data Exchange (ETDEWEB)

    Farinelli, T.C.

    1987-12-01

    The purpose of this study was to examine the use of distributed adaptive routing algorithms on concurrent-class computers. The implemented routing algorithm allowed each node to select the next node based on two criteria: the fewest number of hops and the smallest delay time. This study was limited to the comparison of a distributed adaptive routing algorithm, implemented at the applications layer, with the current static routing and with a simulation of the current routing implemented at the applications layer. The comparison with the simulated current static routing provides a measure of the possible performance gain had the adaptive routing algorithm been implemented at the network layer. Each of the three configurations comprised four processes: a Host Process, a Routing Process, a Ring Control Process, and a Network Loading Process. The Host Process controlled the loading of the processes onto the IPSC, the Routing Process controlled the message routing, the Ring Control Process provided the baseline message passing, and the Network Loading Process provided communications congestion on selected links. The metric used to compare the Routing Process performance was the average delay time for passing a message around the ring.
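
    The two selection criteria described in the thesis can be illustrated with a trivial next-hop chooser; the candidate nodes, hop counts and measured link delays below are invented for the example:

    def choose_next_hop(neighbors, hops_to_dest, link_delay):
        # Prefer the neighbor with the fewest remaining hops; break ties on the
        # smallest currently measured link delay.
        return min(neighbors, key=lambda n: (hops_to_dest[n], link_delay[n]))

    neighbors = ["B", "C", "D"]
    hops_to_dest = {"B": 2, "C": 2, "D": 3}      # B and C are equally short paths
    link_delay = {"B": 4.7, "C": 1.2, "D": 0.8}  # but C's link is currently faster
    print(choose_next_hop(neighbors, hops_to_dest, link_delay))  # -> "C"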

  7. Alternatives to relational databases in precision medicine: Comparison of NoSQL approaches for big data storage using supercomputers

    Science.gov (United States)

    Velazquez, Enrique Israel

    Improvements in medical and genomic technologies have dramatically increased the production of electronic data over the last decade. As a result, data management is rapidly becoming a major determinant, and urgent challenge, for the development of Precision Medicine. Although successful data management is achievable using Relational Database Management Systems (RDBMS), exponential data growth is a significant contributor to failure scenarios. Growing amounts of data can also be observed in other sectors, such as economics and business, which, together with the previous facts, suggests that alternate database approaches (NoSQL) may soon be required for efficient storage and management of big databases. However, this hypothesis has been difficult to test in the Precision Medicine field since alternate database architectures are complex to assess and means to integrate heterogeneous electronic health records (EHR) with dynamic genomic data are not easily available. In this dissertation, we present a novel set of experiments for identifying NoSQL database approaches that enable effective data storage and management in Precision Medicine using patients' clinical and genomic information from the cancer genome atlas (TCGA). The first experiment draws on performance and scalability from biologically meaningful queries with differing complexity and database sizes. The second experiment measures performance and scalability in database updates without schema changes. The third experiment assesses performance and scalability in database updates with schema modifications due to dynamic data. We have identified two NoSQL approaches, based on Cassandra and Redis, which seem to be ideal database management systems for our precision medicine queries in terms of performance and scalability. We present NoSQL approaches and show how they can be used to manage clinical and genomic big data. Our research is relevant to public health since we are focusing on one of the main challenges to the development of Precision Medicine and, consequently, investigating a potential solution to the progressively increasing demands on health care.
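
    To make the contrast with a fixed relational schema concrete, the dictionary-backed sketch below mimics the schema-free key/value storage pattern that systems such as Cassandra and Redis provide; the patient keys and fields are invented, and a real deployment would of course go through the database's own client library:

    class KeyValueStore:
        def __init__(self):
            self._data = {}

        def put(self, key, document):
            # Documents need not share a schema, so adding genomic fields later
            # requires no table migration.
            self._data.setdefault(key, {}).update(document)

        def get(self, key):
            return self._data.get(key, {})

    store = KeyValueStore()
    store.put("patient:17", {"age": 54, "diagnosis": "LUAD"})
    store.put("patient:17", {"variants": ["EGFR L858R", "TP53 R273H"]})  # schema grows
    print(store.get("patient:17"))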

  8. Introducing "É VIVO! Virtual Eruptions on a Supercomputer". A DVD aimed at sharing results from numerical simulations of explosive eruptions

    Science.gov (United States)

    de'Michieli Vitturi, M.; Todesco, M.; Neri, A.; Esposti Ongaro, T.; Tola, E.; Rocco, G.

    2011-12-01

    We present a new DVD of the INGV outreach series, aimed at illustrating our research work on pyroclastic flow modeling. Pyroclastic flows (or pyroclastic density currents) are hot, devastating clouds of gas and ashes, generated during explosive eruptions. Understanding their dynamics and impact is crucial for a proper hazard assessment. We employ a 3D numerical model which describes the main features of the multi-phase and multi-component process, from the generation of the flows to their propagation along complex terrains. Our numerical results can be translated into color animations, which describe the temporal evolution of flow variables such as temperature or ash concentration. The animations provide a detailed and effective description of the natural phenomenon which can be used to present this geological process to the general public and to improve the hazard perception in volcanic areas. In our DVD, the computer animations are introduced and commented on by professionals and researchers who deal at various levels with the study of pyroclastic flows and their impact. Their comments are captured as short interviews, edited into a short video (about 10 minutes), which describes the natural process, as well as the model and its applications to some explosive volcanoes like Vesuvio, Campi Flegrei, Mt. St. Helens and Soufriere Hills (Montserrat). The ensemble of different voices and faces provides a direct sense of the multi-disciplinary effort involved in the assessment of pyroclastic flow hazard. The video also introduces the people who address this complex problem, and the personal involvement beyond the scientific results. The full, uncommented animations of the pyroclastic flow propagation on the different volcanic settings are also provided in the DVD, which is meant to be a general, flexible outreach tool.

  9. MRC Routing Algorithm in the ASCI Option Red Supercomputer

    Institute of Scientific and Technical Information of China (English)

    柯鹏; 龙潇

    2005-01-01

    This paper briefly examines wormhole routing on the split 2-D mesh topology of the Option Red machine, analysing in particular several prominent issues such as deadlock, the execution process, and fault tolerance, and finally presents a deadlock-free one-to-one wormhole routing algorithm for single packets.
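
    As background for readers, a classic way to obtain deadlock freedom in a wormhole-routed 2-D mesh is dimension-order (X-then-Y) routing; the sketch below illustrates that generic idea only and is not the specific algorithm derived in the paper:

    def xy_route(src, dst):
        # Yield the sequence of mesh nodes a packet's header flit traverses,
        # resolving the X dimension completely before the Y dimension.
        x, y = src
        dx, dy = dst
        path = [(x, y)]
        while x != dx:
            x += 1 if dx > x else -1
            path.append((x, y))
        while y != dy:
            y += 1 if dy > y else -1
            path.append((x, y))
        return path

    print(xy_route((0, 0), (3, 2)))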

  10. Investigation of supercomputer capabilities for the scalable numerical simulation of computational fluid dynamics problems in industrial applications

    Science.gov (United States)

    Kozelkov, A. S.; Kurulin, V. V.; Lashkin, S. V.; Shagaliev, R. M.; Yalozo, A. V.

    2016-08-01

    Two main issues of the efficient usage of computational fluid dynamics (CFD) in industrial applications—simulation of turbulence and speedup of computations—are analyzed. Results of the investigation of potentials of the eddy-resolving approaches to turbulence simulation in industrial applications with the use of arbitrary unstructured grids are presented. Algorithms for speeding up the scalable high-performance computations based on multigrid technologies are proposed.
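
    The multigrid technology the authors build on can be illustrated with a bare-bones two-grid correction for a 1-D Poisson problem: smooth on the fine grid, approximately solve the residual equation on a coarser grid, correct, and smooth again. The model problem, grid sizes and sweep counts below are arbitrary illustrative choices, not the paper's solver:

    def jacobi(u, f, h, sweeps, w=2.0 / 3.0):
        # Damped Jacobi smoother for -u'' = f with u = 0 at both ends.
        n = len(u)
        for _ in range(sweeps):
            new = u[:]
            for i in range(1, n - 1):
                new[i] = (1 - w) * u[i] + w * 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
            u = new
        return u

    def residual(u, f, h):
        n = len(u)
        r = [0.0] * n
        for i in range(1, n - 1):
            r[i] = f[i] + (u[i - 1] - 2 * u[i] + u[i + 1]) / (h * h)
        return r

    def two_grid(u, f, h):
        u = jacobi(u, f, h, sweeps=3)                       # pre-smooth
        r = residual(u, f, h)
        rc = [r[2 * i] for i in range(len(r) // 2 + 1)]     # restrict (injection)
        ec = jacobi([0.0] * len(rc), rc, 2 * h, sweeps=50)  # approximate coarse solve
        for i in range(1, len(u) - 1):                      # prolong and correct
            ec_i = ec[i // 2] if i % 2 == 0 else 0.5 * (ec[i // 2] + ec[i // 2 + 1])
            u[i] += ec_i
        return jacobi(u, f, h, sweeps=3)                    # post-smooth

    n = 65
    h = 1.0 / (n - 1)
    f = [1.0] * n                                           # -u'' = 1, u(0) = u(1) = 0
    u = [0.0] * n
    for _ in range(40):
        u = two_grid(u, f, h)
    print(max(u))                     # converges toward the exact peak u(0.5) = 0.125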

  11. Scalable fine-grained parallelization of plane-wave-based ab initio molecular dynamics for large supercomputers.

    Science.gov (United States)

    Vadali, Ramkumar V; Shi, Yan; Kumar, Sameer; Kale, Laxmikant V; Tuckerman, Mark E; Martyna, Glenn J

    2004-12-01

    Many systems of great importance in material science, chemistry, solid-state physics, and biophysics require forces generated from an electronic structure calculation, as opposed to an empirically derived force law to describe their properties adequately. The use of such forces as input to Newton's equations of motion forms the basis of the ab initio molecular dynamics method, which is able to treat the dynamics of chemical bond-breaking and -forming events. However, a very large number of electronic structure calculations must be performed to compute an ab initio molecular dynamics trajectory, making the efficiency as well as the accuracy of the electronic structure representation critical issues. One efficient and accurate electronic structure method is the generalized gradient approximation to the Kohn-Sham density functional theory implemented using a plane-wave basis set and atomic pseudopotentials. The marriage of the gradient-corrected density functional approach with molecular dynamics, as pioneered by Car and Parrinello (R. Car and M. Parrinello, Phys Rev Lett 1985, 55, 2471), has been demonstrated to be capable of elucidating the atomic scale structure and dynamics underlying many complex systems at finite temperature. However, despite the relative efficiency of this approach, it has not been possible to obtain parallel scaling of the technique beyond several hundred processors on moderately sized systems using standard approaches. Consequently, the time scales that can be accessed and the degree of phase space sampling are severely limited. To take advantage of next generation computer platforms with thousands of processors such as IBM's BlueGene, a novel scalable parallelization strategy for Car-Parrinello molecular dynamics is developed using the concept of processor virtualization as embodied by the Charm++ parallel programming system. Charm++ allows the diverse elements of a Car-Parrinello molecular dynamics calculation to be interleaved with low latency such that unprecedented scaling is achieved. As a benchmark, a system of 32 water molecules, a common system size employed in the study of the aqueous solvation and chemistry of small molecules, is shown to scale on more than 1500 processors, which is impossible to achieve using standard approaches. This degree of parallel scaling is expected to open new opportunities for scientific inquiry.

  12. Bringing ATLAS production to HPC resources - A use case with the Hydra supercomputer of the Max Planck Society

    CERN Document Server

    Kennedy, John; The ATLAS collaboration; Mazzaferro, Luca; Walker, Rodney

    2015-01-01

    The possible usage of HPC resources by ATLAS is now becoming viable due to the changing nature of these systems and it is also very attractive due to the need for increasing amounts of simulated data. In recent years the architecture of HPC systems has evolved, moving away from specialized monolithic systems, to a more generic Linux type platform. This change means that the deployment of non HPC specific codes has become much easier. The timing of this evolution perfectly suits the needs of ATLAS and opens a new window of opportunity. The ATLAS experiment at CERN will begin a period of high luminosity data taking in 2015. This high luminosity phase will be accompanied by a need for increasing amounts of simulated data which is expected to exceed the capabilities of the current Grid infrastructure. ATLAS aims to address this need by opportunistically accessing resources such as cloud and HPC systems. This paper presents the results of a pilot project undertaken by ATLAS and the MPP and RZG to provide access to...

  13. JPRS Report Near East South Asia.

    Science.gov (United States)

    2007-11-02

    ... conducting party meetings. After Dar'ai, the SHAS secretariat chose MK Shim'on Ben Shlomo as secretary general. The scholars council did not ... Mr Arun Nehru, Mr V C Shukla and Mr Arif Mohammed Khan ... come together to mobilise the people against terrorism. Briefing newsmen later, AICC gen ...

  14. Trafficking of Intracellular Membranes: Mass action model of virus fusion

    NARCIS (Netherlands)

    Nir, Shlomo; Duzgunes, Nejat; Hoekstra, Dick; Ramalho-Santos, Joao; Pedroso de Lima, Maria C

    1995-01-01

    Shlomo Nir, Nejat Düzgüneş, Dick Hoekstra, João Ramalho-Santos, Maria C. Pedroso de Lima. The purpose of this presentation is to describe procedures of analysis of final extents and kinetics of virus fusion with target membranes. The presentation of results will focus on deductions from studies of f

  15. Iterative Decomposition of Water and Fat with Echo Asymmetry and Least-Squares Estimation (IDEAL) (Reeder et al. 2005) Automated Spine Survey Iterative Scan Technique (ASSIST) (Weiss et al. 2006)

    Directory of Open Access Journals (Sweden)

    Kenneth L. Weiss

    2008-01-01

    Full Text Available Background and Purpose: Multi-parametric MRI of the entire spine is technologist-dependent, time consuming, and often limited by inhomogeneous fat suppression. We tested a technique to provide rapid automated total spine MRI screening with improved tissue contrast through optimized fat-water separation. Methods: The entire spine was auto-imaged in two contiguous 35 cm field of view (FOV) sagittal stations, utilizing out-of-phase fast gradient echo (FGRE) and T1 and/or T2 weighted fast spin echo (FSE) IDEAL (Iterative Decomposition of Water and Fat with Echo Asymmetry and Least-squares Estimation) sequences. 18 subjects were studied, one twice at 3.0T (pre and post contrast) and one at both 1.5T and 3.0T, for a total of 20 spine examinations (8 at 1.5T and 12 at 3.0T). Images were independently evaluated by two neuroradiologists and run through Automated Spine Survey Iterative Scan Technique (ASSIST) analysis software for automated vertebral numbering. Results: In all 20 total spine studies, neuroradiologist and computer ASSIST labeling were concordant. In all cases, IDEAL provided uniform fat and water separation throughout the entire 70 cm FOV imaged. Two subjects demonstrated breast metastases and one had a large presumptive schwannoma. 14 subjects demonstrated degenerative disc disease with associated Modic Type I or II changes at one or more levels. FGRE ASSIST afforded subminute, submillimeter in-plane resolution of the entire spine with high contrast between discs and vertebrae at both 1.5 and 3.0T. Marrow signal abnormalities could be particularly well characterized with IDEAL-derived images and parametric maps. Conclusion: IDEAL ASSIST is a promising MRI technique affording a rapid automated high resolution, high contrast survey of the entire spine with optimized tissue characterization.

  16. Study on the Effectiveness of Therapy with Octreotide in Patients with Intractable Mallory-Weiss Syndrome

    Institute of Scientific and Technical Information of China (English)

    林海; 林志辉; 陈贻胜

    2009-01-01

    Objective: To observe the efficacy of octreotide in intractable Mallory-Weiss syndrome (MWS). Methods: Forty-seven MWS patients who were still bleeding after 24 h of conventional treatment were randomized into two groups: a treatment group (n=24) and a control group (n=23). The control group received intravenous omeprazole 40 mg every 12 h alone; the treatment group additionally received octreotide (Sandostatin) at 25 μg/h by infusion pump for 3 consecutive days. Results: The overall response rate in the treatment group was 91.67%, significantly better than the 73.91% of the control group (P<0.01). Conclusion: For patients with intractable MWS and major bleeding in whom conventional treatment is ineffective, adding octreotide is markedly effective.

  17. Moral Hazard and Adverse Selection in the US Banking System: An Analysis Based on the Stiglitz and Weiss (1981) Model

    Institute of Scientific and Technical Information of China (English)

    冯新力

    2012-01-01

    This paper analyses the moral hazard and adverse selection problems present in the banking system, focusing on two aspects: first, the relationship between banks and their customers, and second, the relationship between savings and loan associations and the federal insurer of such institutions. Through a model-based analysis, the paper argues that deposit insurance and government regulation have aggravated moral hazard and adverse selection in the US banking industry, and that the US government should deregulate the financial sector.

  18. Iterative Decomposition of Water and Fat with Echo Asymmetry and Least-Squares Estimation (IDEAL) (Reeder et al. 2005) Automated Spine Survey Iterative Scan Technique (ASSIST) (Weiss et al. 2006)

    OpenAIRE

    Kenneth L. Weiss; Dongmei Sun; Rebecca S. Cornelius; Jane L. Weiss

    2008-01-01

    Background and Purpose: Multi-parametric MRI of the entire spine is technologist-dependent, time consuming, and often limited by inhomogeneous fat suppression. We tested a technique to provide rapid automated total spine MRI screening with improved tissue contrast through optimized fat-water separation.Methods: The entire spine was auto-imaged in two contiguous 35 cm field of view (FOV) sagittal stations, utilizing out-of-phase fast gradient echo (FGRE) and T1 and/or T2 weighted fast spin ech...

  19. Laudation for Dr. Andra Thiel on the Occasion of the Award of the Prize of the Ingrid Weiss/Horst Wiehe Foundation on 23 March 2005 in Dresden

    OpenAIRE

    Hoffmeister, Thomas

    2008-01-01

    Had one asked ANDRA THIEL ten years ago whether she believed she would ever receive a research prize for entomological work, one would probably have been met only with a head-shaking smile. While many entomologists fall under the fascination of insects from early childhood, ANDRA THIEL occupied herself in her youth with fish, amphibians, birds and small mammals; at that time the insects had not yet managed to work their charm on her sufficiently. ...

  20. Interpretation of Japanese Ethical Culture of Organ Transplantation through the Literature of Weiss Kreuz (Baiyan)

    Institute of Scientific and Technical Information of China (English)

    李宁

    2009-01-01

    In his novel, the famous Japanese novelist Junichi Watanabe uses the medical event of a heart transplant to dramatize the fierce clash between the progress of modern medical science and Japan's inherent traditional culture. With an incisive pen he dissects the deep psychology of his characters, allowing us to read the ethical culture surrounding organ transplantation in Japan from another angle.

  1. Review of: Meinders, Beckedorf and Weiss: Overview of the Appeal Proceedings according to the EPC - Überblick über das Beschwerdeverfahren nach dem EPÜ - Apercu sur la procédure de recours selon la CBE

    NARCIS (Netherlands)

    Mulder, C.A.M.

    2014-01-01

    The book Overview of the Appeal Proceedings according to the EPC gives a survey of the procedures before the Boards of Appeal of the European Patent Office (EPO). The text is trilingual. The book provides an easily readable review of appeal proceedings. This book is a helpful tool for European paten

  2. 75 FR 40023 - Notice of Fiscal Year 2011 Safety Grants and Solicitation for Applications

    Science.gov (United States)

    2010-07-13

    ... Priority Grants--Cim Weiss, cim.weiss@dot.gov, 202-366-0275; CMV Operator Safety Training Grants--Julie ... -3030; SaDIP Grants--Cim Weiss, cim.weiss@dot.gov, 202-366-0275; PRISM Grants--Tom Lawler, ...

  3. The Israeli-Palestinian Conflict - A Case Study for the United States Military in Foreign Internal Defense

    Science.gov (United States)

    2005-05-26

    ... context in order to correctly address the problems. Early History of the Jewish and Palestinian Peoples: In the book of Genesis, the Bible ... Calvin Goldscheider, Cultures in Conflict: The Arab Israeli Conflict (Connecticut: Greenwood Press, 2002), xvii-xxii ... upon the land ... Gazit, Shlomo. Trapped Fools: Thirty Years of Israeli Policy in the Territories. Portland: Frank Cass Publishers, 2003. Goldscheider, Calvin ...

  4. Task-parallel message passing interface implementation of Autodock4 for docking of very large databases of compounds using high-performance super-computers.

    Science.gov (United States)

    Collignon, Barbara; Schulz, Roland; Smith, Jeremy C; Baudry, Jerome

    2011-04-30

    A message passing interface (MPI)-based implementation (Autodock4.lga.MPI) of the grid-based docking program Autodock4 has been developed to allow simultaneous and independent docking of multiple compounds on up to thousands of central processing units (CPUs) using the Lamarckian genetic algorithm. The MPI version reads a single binary file containing precalculated grids that represent the protein-ligand interactions, i.e., van der Waals, electrostatic, and desolvation potentials, and needs only two input parameter files for the entire docking run. In comparison, the serial version of Autodock4 reads ASCII grid files and requires one parameter file per compound. The modifications performed result in significantly reduced input/output activity compared with the serial version. Autodock4.lga.MPI scales up to 8192 CPUs with a maximal overhead of 16.3%, of which two thirds is due to input/output operations and one third originates from MPI operations. The optimal docking strategy, which minimizes docking CPU time without lowering the quality of the database enrichments, comprises the docking of ligands preordered from the most to the least flexible and the assignment of the number of energy evaluations as a function of the number of rotatable bonds. In 24 h, on 8192 high-performance computing CPUs, the present MPI version would allow docking to a rigid protein of about 300K small flexible compounds or 11 million rigid compounds.
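
    A sketch of the scheduling heuristic described above: sort ligands from most to least flexible and scale the number of energy evaluations with the number of rotatable bonds. The ligand list and the scaling constants are illustrative assumptions, not the exact values used in Autodock4.lga.MPI:

    def schedule(ligands, base_evals=250_000, per_bond=125_000):
        # Most flexible first, so the longest jobs start earliest and any
        # stragglers at the end of the run are short.
        ordered = sorted(ligands, key=lambda lig: lig["rotatable_bonds"], reverse=True)
        return [(lig["name"], base_evals + per_bond * lig["rotatable_bonds"])
                for lig in ordered]

    ligands = [
        {"name": "cmpd_A", "rotatable_bonds": 2},
        {"name": "cmpd_B", "rotatable_bonds": 9},
        {"name": "cmpd_C", "rotatable_bonds": 5},
    ]
    for name, evals in schedule(ligands):
        print(name, evals)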

  5. Core-Collapse Supernovae as Supercomputing Science: a status report toward 6D simulations with exact Boltzmann neutrino transport in full general relativity

    CERN Document Server

    Kotake, Kei; Yamada, Shoichi; Takiwaki, Tomoya; Kuroda, Takami; Suwa, Yudai; Nagakura, Hiroki

    2012-01-01

    This is a status report on our endeavor to reveal the mechanism of core-collapse supernovae (CCSNe) by large-scale numerical simulations. Multi-dimensionality of the supernova engine, general relativistic magnetohydrodynamics, energy and lepton number transport by neutrinos emitted from the forming neutron star as well as nuclear interactions there, are all believed to play crucial roles in repelling infalling matter and producing energetic explosions. These ingredients are nonlinearly coupled with one another in the dynamics of core-collapse, bounce, and shock expansion. Serious quantitative studies of CCSNe hence make extensive numerical computations mandatory. Since neutrinos are neither in thermal nor in chemical equilibrium in general, their distributions in the phase space should be computed. This is a six dimensional (6D) neutrino transport problem and quite a challenge even for those with an access to the most advanced numerical resources such as the "K computer". To tackle this problem, we have emba...

  6. DNA Sequence Patterns – A Successful Example of Grid Computing in Genome Research and Building Virtual Super-Computers for the Research Commons of e-Societies

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); A. Abuseiris (Anis); M. Lesnussa (Michael); F.N. Kepper (Nick); R.M. de Graaf (Rob); F.G. Grosveld (Frank)

    2011-01-01

    textabstractThe amount of information is growing exponentially with ever-new technologies emerging and is believed to be always at the limit. In contrast, huge resources are obviously available, which are underused in the IT sector, similar as e.g. in the renewable energy sector. Genome research is

  7. Building a Parallel Supercomputer with a Pile-of-PCs Cluster

    Institute of Scientific and Technical Information of China (English)

    黎康保; 陶文正; 许丽华; 黎文楼

    2000-01-01

    In the United States, universities, large laboratories, and research institutions jointly developed the Beowulf PC-cluster supercomputer. This pioneering work shows that a supercomputer can be built from clusters of commodity PCs, which is both a challenge and an opportunity for our country. Building on some study of Beowulf, this paper discusses the structural principles of PC clusters, the operating system platform, and issues such as parallel computing program design and parallel communication program design.

  8. A Hybrid Parallel Strategy Based on String Graph Theory to Improve De Novo DNA Assembly on the TianHe-2 Supercomputer.

    Science.gov (United States)

    Zhang, Feng; Liao, Xiangke; Peng, Shaoliang; Cui, Yingbo; Wang, Bingqiang; Zhu, Xiaoqian; Liu, Jie

    2016-06-01

    The de novo assembly of DNA sequences is increasingly important for biological research in the genomic era. More than a decade after the Human Genome Project, some challenges still exist and new solutions are being explored to improve the de novo assembly of genomes. The string graph assembler (SGA), based on string graph theory, is a new method/tool developed to address these challenges. In this paper, based on an in-depth analysis of SGA, we prove that SGA-based sequence de novo assembly is an NP-complete problem. According to our analysis, SGA outperforms other similar methods/tools in memory consumption but costs much more time, of which 60-70% is spent on index construction. Building on this analysis, we introduce a hybrid parallel optimization algorithm and implement it in the TianHe-2 parallel framework. Simulations are performed with different datasets. For small datasets the optimized solution is 3.06 times faster than before, and for medium-sized datasets it is 1.60 times faster. The results demonstrate an evident performance improvement, with linear scalability for parallel FM-index construction. These results thus contribute significantly to improving the efficiency of de novo assembly of DNA sequences.
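
    To make the index-construction step concrete (the step that, per the analysis above, accounts for 60-70% of SGA's runtime), here is a naive Burrows-Wheeler transform and an FM-index-style occurrence table; real assemblers use far more memory-efficient and parallel constructions, and the toy string is invented:

    def bwt(text):
        text += "$"                                    # unique end-of-string marker
        rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
        return "".join(rot[-1] for rot in rotations)

    def occurrence_table(last_column):
        # For each prefix of the BWT, count occurrences of every symbol
        # (the "Occ" component used by FM-index backward search).
        counts, table = {}, []
        for ch in last_column:
            counts[ch] = counts.get(ch, 0) + 1
            table.append(dict(counts))
        return table

    L = bwt("GATTACA")
    print(L)                        # BWT of the toy read
    print(occurrence_table(L)[-1])  # total symbol counts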

  9. The Erasmus Computing Grid - Building a Super-Computer Virtually for Free at the Erasmus Medical Center and the Hogeschool Rotterdam

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); L.V. de Zeeuw (Luc)

    2006-01-01

    textabstractThe Set-Up of the 20 Teraflop Erasmus Computing Grid: To meet the enormous computational needs of live- science research as well as clinical diagnostics and treatment the Hogeschool Rotterdam and the Erasmus Medical Center are currently setting up one of the largest desktop

  10. Fine-grained parallelization of the Car-Parrinello ab initio molecular dynamics method on the IBM Blue Gene/L supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Bohm, E. [Thomas M. Siebel Center, Univ. of Illinois at Urbana-Champaign, Urbana, IL (United States). Dept. of Computer Science]; Bhatele, A. [Thomas M. Siebel Center, Univ. of Illinois at Urbana-Champaign, Urbana, IL (United States). Dept. of Computer Science]; Kale, L. V. [Thomas M. Siebel Center, Univ. of Illinois at Urbana-Champaign, Urbana, IL (United States). Dept. of Computer Science]; Tuckerman, M. E. [New York Univ., NY (United States). Dept. of Chemistry and Courant Institute of Mathematical Sciences]; Kumar, S. [IBM T. J. Watson Research Center, Yorktown Heights, NY (United States). IBM Research Division]; Gunnels, J. A. [IBM T. J. Watson Research Center, Yorktown Heights, NY (United States). IBM Research Division]; Martyna, G. J. [IBM T. J. Watson Research Center, Yorktown Heights, NY (United States). IBM Research Division]

    2008-01-01

    Important scientific problems can be treated via ab initio-based molecular modeling approaches, wherein atomic forces are derived from an energy function that explicitly considers the electrons. The Car-Parrinello ab initio molecular dynamics (CPAIMD) method is widely used to study small systems containing on the order of 10 to 10^3 atoms. However, the impact of CPAIMD has been limited until recently because of difficulties inherent to scaling the technique beyond processor numbers about equal to the number of electronic states. CPAIMD computations involve a large number of interdependent phases with high interprocessor communication overhead. These phases require the evaluation of various transforms and non-square matrix multiplications that require large interprocessor data movement when efficiently parallelized. Using the Charm++ parallel programming language and runtime system, the phases are discretized into a large number of virtual processors, which are, in turn, mapped flexibly onto physical processors, thereby allowing interleaving of work. Algorithmic and IBM Blue Gene/L(tm) system-specific optimizations are employed to scale the CPAIMD method to at least 30 times the number of electronic states in small systems consisting of 24 to 768 atoms (32 to 1,024 electronic states) in order to demonstrate fine-grained parallelism. The largest systems studied scaled well across the entire machine (20,480 nodes).

  11. The Erasmus Computing Grid - Building a Super-Computer Virtually for Free at the Erasmus Medical Center and the Hogeschool Rotterdam

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); L.V. de Zeeuw (Luc)

    2006-01-01

    textabstractThe Set-Up of the 20 Teraflop Erasmus Computing Grid: To meet the enormous computational needs of live- science research as well as clinical diagnostics and treatment the Hogeschool Rotterdam and the Erasmus Medical Center are currently setting up one of the largest desktop computing

  12. [Support of theoretical high energy physics research at the Supercomputer Computations Research Institute]. Final report, September 30, 1992--July 31, 1997

    Energy Technology Data Exchange (ETDEWEB)

    Bitar, K.M.; Edwards, R.G.; Heller, U.M.; Kennedy, A.D.

    1998-12-15

    The research primarily involved lattice field theory simulations such as Quantum Chromodynamics (QCD) and the Standard Model of electroweak interactions. Among the works completed by the members of the lattice group and their outside collaborators in QCD simulations are extensive hadronic spectrum computations with both Wilson and staggered fermions, and calculations of hadronic matrix elements and wavefunctions. Studies of the QCD {beta} function with two flavors of Wilson fermions, and the study of a possible flavor-parity breaking phase in QCD with two flavors of Wilson fermions have been completed. Studies of the finite temperature behavior of QCD have also been a major activity within the group. Studies of non-relativistic QCD, both for heavy-heavy mesons and for the heavy quark in heavy-light mesons have been done. Combining large N analytic computations within the Higgs sector of the standard model and numerical simulations at N = 4 has yielded a computation of the upper bound of the mass of the Higgs particle, as well as the energy scale above which deviations from the Standard Model may be expected. A major research topic during the second half of the grant period was the study of improved lattice actions, designed to diminish finite lattice spacing effects and thus accelerate the approach to the continuum limit. A new exact Local Hybrid Monte Carlo (overrelaxation) algorithm with a tunable overrelaxation parameter has been developed for pure gauge theories. The characteristics of this algorithm have been investigated. A study of possible instabilities in the global HMC algorithm has been completed.

  13. Analyses of Twelve New Whole Genome Sequences of Cassava Brown Streak Viruses and Ugandan Cassava Brown Streak Viruses from East Africa: Diversity, Supercomputing and Evidence for Further Speciation.

    Directory of Open Access Journals (Sweden)

    Joseph Ndunguru

    Full Text Available Cassava brown streak disease is caused by two devastating viruses, Cassava brown streak virus (CBSV and Ugandan cassava brown streak virus (UCBSV which are frequently found infecting cassava, one of sub-Saharan Africa's most important staple food crops. Each year these viruses cause losses of up to $100 million USD and can leave entire families without their primary food source, for an entire year. Twelve new whole genomes, including seven of CBSV and five of UCBSV were uncovered in this research, doubling the genomic sequences available in the public domain for these viruses. These new sequences disprove the assumption that the viruses are limited by agro-ecological zones, show that current diagnostic primers are insufficient to provide confident diagnosis of these viruses and give rise to the possibility that there may be as many as four distinct species of virus. Utilizing NGS sequencing technologies and proper phylogenetic practices will rapidly increase the solution to sustainable cassava production.

  14. DNA Sequence Patterns – A Successful Example of Grid Computing in Genome Research and Building Virtual Super-Computers for the Research Commons of e-Societies

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); A. Abuseiris (Anis); M. Lesnussa (Michael); F.N. Kepper (Nick); R.M. de Graaf (Rob); F.G. Grosveld (Frank)

    2011-01-01

    textabstractThe amount of information is growing exponentially with ever-new technologies emerging and is believed to be always at the limit. In contrast, huge resources are obviously available, which are underused in the IT sector, similar as e.g. in the renewable energy sector. Genome research is

  15. Disease: H01186 [KEGG MEDICUS

    Lifescience Database Archive (English)

    Full Text Available ... XH, Abdullah MS, Lado-Abeal J, Majed FA, Moeller LC, Boran G, Schomburg L, Weiss RE, Refeto... 05) PMID:19769464 (description, gene) Dumitrescu AM, Di Cosmo C, Liao XH, Weiss RE, Refetoff S. The syndrome ...

  16. Women Facing an Empty Nest, an Overly Full Nest, Mothers with Young Children, and Non-Mothers: A Study of Implicit Theories about Motherhood, Psychological Well-Being, Personal Growth Initiative, and Life Satisfaction

    OpenAIRE

    Pascual del Rio, Mercedes

    2015-01-01

    Motherhood is one of the most important events in a woman's life (Taubman-Ben-Ari, Ben Shlomo and Findler, 2012), with repercussions for her interaction with the social system and, obviously, for her psychological development (Knoester and Eggebeen, 2006). With the intention of exploring this event in depth, the present study analyses the relationship between different situations with respect to motherhood and implicit theories (Rodrigo et al. 1993) about motherhood, psychological well-being...

  17. A Critical Examination of Planning Imperatives Applicable to Hostage Rescue Operations

    Science.gov (United States)

    1984-04-16

    ... referred to as the "Stockholm Syndrome," after a Swedish bank robbery incident in 1973 where the hostages began to identify with the bank robbers and ... Gazit, Shlomo. "Risk, Glory and the Rescue Operation." International Security, Vol. 6, No. 1, Summer 1981, pp. 111-135. ... "Conference of May 16." Department of State Bulletin, Vol. 62, No. 1875, 2 June 1975, pp. 719-722. Lapham, Lewis H. "The Somnambulists." Harper's, Vol. 261, No. ...

  18. Dynamical systems

    CERN Document Server

    Sternberg, Shlomo

    2010-01-01

    Celebrated mathematician Shlomo Sternberg, a pioneer in the field of dynamical systems, created this modern one-semester introduction to the subject for his classes at Harvard University. Its wide-ranging treatment covers one-dimensional dynamics, differential equations, random walks, iterated function systems, symbolic dynamics, and Markov chains. Supplementary materials offer a variety of online components, including PowerPoint lecture slides for professors and MATLAB exercises. "Even though there are many dynamical systems books on the market, this book is bound to become a classic. The ...

  19. Modern Perspectives in Applied Mathematics: Theory and Numerics of PDEs

    Science.gov (United States)

    2015-04-13

    ... Stokes-Fokker-Planck systems. 09:45–10:25 Helena Lopes (Universidade Federal do Rio de Janeiro), Boundary correctors and energy estimates for ... Participant-list fragments: Oleg Diyankov, NeurOK Software LLC; Nira Dyn, Tel Aviv University; Shlomo Engelberg, Jerusalem College of Technology; Michael Fisher ... Duke University; Helena Lopes, Universidade Federal do Rio de Janeiro; Andrew Majda, New York University; Siddhartha Mishra, ETH Zurich; Stanley ...

  20. Variations on a theme by Kepler

    CERN Document Server

    Guillemin, Victor W

    2006-01-01

    This book is based on the Colloquium Lectures presented by Shlomo Sternberg in 1990. The authors delve into the mysterious role that groups, especially Lie groups, play in revealing the laws of nature by focusing on the familiar example of Kepler motion: the motion of a planet under the attraction of the sun according to Kepler's laws. Newton realized that Kepler's second law-that equal areas are swept out in equal times-has to do with the fact that the force is directed radially to the sun. Kepler's second law is really the assertion of the conservation of angular momentum, reflecting the rot

  1. Benefits of Investing in a Supercomputer to Support Weather Forecasting Research: An Example of Benefit-Cost Analysis

    Institute of Scientific and Technical Information of China (English)

    Jeffrey K. Lazo; Jennie S. Rice; Marca L. Hagenstad; 吴先华; 管叶莉

    2010-01-01

    Several economic methods are applicable to estimating the costs and benefits of a potential investment in such a resource: (1) benefit transfer; (2) survey-based non-market valuation; (3) discounting; (4) the value of a statistical life; (5) expert elicitation; (6) influence diagrams; and (7) sensitivity analysis. Based on a study carried out in 2003, the potential benefits of new supercomputing equipment purchased for the US National Oceanic and Atmospheric Administration (NOAA) were specifically assessed. The supercomputer not only helps improve weather forecasting services but could also benefit many economic sectors, including government agencies, parts of private industry, and individual households. Consultation with NOAA staff indicated that these benefits are concentrated in households, certain agricultural crops, and public safety. The potential social benefits of NOAA's new supercomputer are significant. Under certain assumptions, the benefits to households alone range from roughly 34 million to 232 million US dollars (in 2002 dollars). Some agricultural sectors gain benefits of similar magnitude, as do the potential benefits of avoided weather-related deaths. The present value of the average total benefits across these three sectors is about 116 million US dollars (in 2002 dollars). This is only a lower bound on total benefits, since it excludes several other sectors with substantial potential benefits, such as construction and energy. The net present value of NOAA's investment in the new supercomputer is about 105 million US dollars (in 2002 dollars).
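
    The discounting step listed as method (3) above reduces to a simple present-value sum; the benefit stream, discount rate and horizon in the sketch below are invented numbers for illustration, not figures from the NOAA study:

    def present_value(annual_benefit, rate, years):
        # Discount a constant annual benefit stream back to today.
        return sum(annual_benefit / (1 + rate) ** t for t in range(1, years + 1))

    annual_household_benefit = 20e6      # hypothetical $20M per year
    discount_rate = 0.07
    horizon = 10                         # hypothetical years of service

    pv_benefits = present_value(annual_household_benefit, discount_rate, horizon)
    investment = 40e6                    # hypothetical up-front cost
    print(f"NPV = ${pv_benefits - investment:,.0f}")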

  2. 75 FR 59784 - Notice of Fiscal Year 2011 Safety Grants and Solicitation for Applications

    Science.gov (United States)

    2010-09-28

    ... Grants--Carla Vagnini, carla.vagnini@dot.gov, 202-366-3771. MCSAP High Priority Grants--Cim Weiss, cim... -366-0710. CDLPI Grants--Brandon Poarch, brandon.poarch@dot.gov, 202-366-3030. SaDIP Grants--Cim Weiss, cim.weiss@dot.gov, 202-366-0275. PRISM Grants--Tom Lawler, tom.lawler@dot.gov, 202-366-3866. ...

  3. Substance use

    Science.gov (United States)

    Substance abuse; Illicit drug abuse; Narcotic abuse; Hallucinogen abuse ... Arlington, VA: American Psychiatric Publishing. 2013. Weiss RD. Drugs of abuse. In: Goldman L, Schafer AI, eds. Goldman's Cecil ...

  4. From CERN, a data flow averaging 600 megabytes per second for ten consecutive days

    CERN Multimedia

    2005-01-01

    The supercomputing Grid successfully took up its first technological challenge. Eight supercomputing centers sustained a continuous flow of data from CERN in Geneva over the Internet and directed it to seven centers in Europe and the United States.

  5. NASA Altix 512P SSI

    Science.gov (United States)

    Chan, Davin

    2004-01-01

    This paper presents a general overview of NASA Advanced Supercomputing (NAS). The topics include: 1) About NASA Advanced Supercomputing (NAS); 2) System Configuration; 3) Our Experience with the Altix; and 4) Future Plans.

  6. Tera Scale Systems and Applications

    Science.gov (United States)

    Niggley, Chuck; Ciotti, Bob; Parks, John W. (Technical Monitor)

    2002-01-01

    This presentation discusses NASA's efforts to develop terascale systems designed to push the envelope of supercomputing research. Topics covered include: NASA's existing supercomputing facilities and capabilities, NASA's computational challenges in developing these systems, development of a production supercomputer, and potential research projects which could benefit from these types of systems.

  7. Remembering and Celebrating a Realistic Evaluation Visionary

    Science.gov (United States)

    Patton, Michael Quinn

    2013-01-01

    In this article, Michael Quinn Patton pays tribute to Carol Hirschon Weiss, a woman who brought passion to evaluation use. She also brought attention to it, publishing the first article on evaluation in 1967. Weiss' many contributions have come into evaluation currency and overtaken formerly taken-for-granted assumptions. Indeed, her insights…

  8. Paperless Procurement: The Impact of Advanced Automation

    Science.gov (United States)

    1992-09-01

    of similar legislation by individual American states. 2Peter N. Weiss, "Law and Technology: Can They Keep Abreast?" Government Information Quarterly , Volume...Conn.: The Information Economics Press, 1990, p. 315. Peter N. Weiss, "Law and Technology: Can They Keep Abreast?" Government Information Quarterly , Volume

  9. A Devaney Chaotic System Which Is Neither Distributively nor Topologically Chaotic

    Institute of Scientific and Technical Information of China (English)

    Chen Zhi-zhi; Liao Li; Wang Wei

    2013-01-01

    Weiss proved that Devaney chaos does not imply topological chaos and Oprocha pointed out that Devaney chaos does not imply distributional chaos. In this paper, by constructing a simple example which is Devaney chaotic but neither distributively nor topologically chaotic, we give a unified proof for the results of Weiss and Oprocha.

  10. Assessment of the Acute Psychiatric Patient in the Emergency Department: Legal Cases and Caveats

    Science.gov (United States)

    2014-05-01

    1994; 24(4):672–677. 7. Dubin WR, Weiss KJ, Zeccardi JA. Organic brain syndrome: The psychiatric imposter. JAMA. 1993;249(1):60–62. 8. Tintinalli JE...16. Dubin WR, Weiss KJ. Emergency psychiatry. In: Michels R, Cavenar JD, Cooper AM, et al, ed: Psychiatry. Philadelphia: Lippincott-Raven; 1997:1–15

  11. Quantifying the Statistics of Animal Motion: Lévy Flights of the Wandering Albatross

    Science.gov (United States)

    Viswanathan, Gandhimohan M.

    1998-03-01

    Lévy flights are commonly observed in physical and biological systems (see, e.g., M. F. Shlesinger, G. Zaslavsky and U. Frisch, eds., Lévy Flights and Related Topics in Physics, Springer, Berlin, 1995), raising the possibility that similar random walks may be used to describe animal motion. Here we discuss recent findings showing that the Wandering Albatross and other animals may perform Lévy flights when foraging (G. M. Viswanathan, V. Afanasyev, S. V. Buldyrev, E. J. Murphy, P. A. Prince and H. E. Stanley, "Lévy Flight Search Patterns of Wandering Albatrosses," Nature 381, 413-415 (1996)). We further examine how such random walks may confer biological advantages and discuss recent findings which suggest that under certain conditions there is a universal power law exponent which characterizes Lévy flight foraging (G. M. Viswanathan, Sergey V. Buldyrev, Shlomo Havlin, M. G. E. da Luz, E. P. Raposo and H. E. Stanley, preprint).
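
    As a rough, self-contained illustration of the kind of statistics being discussed (a sketch, not the authors' analysis), the following Python snippet draws step lengths from a power-law distribution P(l) ~ l^(-mu) and assembles a two-dimensional random walk; the exponent mu = 2 and the lower cutoff are assumptions chosen for the example.

        import numpy as np

        def levy_step_lengths(n_steps, mu=2.0, l_min=1.0, rng=None):
            """Draw step lengths from P(l) ~ l**(-mu) for l >= l_min (valid for mu > 1),
            using inverse-transform sampling."""
            rng = np.random.default_rng() if rng is None else rng
            u = rng.random(n_steps)
            return l_min * (1.0 - u) ** (-1.0 / (mu - 1.0))

        rng = np.random.default_rng(seed=0)
        lengths = levy_step_lengths(10_000, mu=2.0, rng=rng)       # heavy-tailed step lengths
        angles = rng.uniform(0.0, 2.0 * np.pi, size=lengths.size)  # isotropic headings
        path = np.cumsum(np.c_[lengths * np.cos(angles),
                               lengths * np.sin(angles)], axis=0)
        print(path[-1])  # end point of the walk; a few very long steps dominate the displacement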

  12. A famous diamond designer draws inspiration from the "Da Vinci Diamond"

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    With the enormous success of the novel The Da Vinci Code, author Dan Brown's ideas gained recognition, such as the use of the well-known "golden section" or "golden ratio", which many see as a continuation and harmonization of the essence of aesthetics. Israeli diamond designer Shlomo Cohen, who pioneered the princess cut in 1982, found inspiration in this and created the "Da Vinci" diamond, a 62-facet diamond fashioned with a five-pointed-star cut.

  13. Placebo and deception: a commentary.

    Science.gov (United States)

    Barnhill, Anne; Miller, Franklin G

    2015-02-01

    In a recent article in this Journal, Shlomo Cohen and Haim Shapiro (2013) introduce the concept of "comparable placebo treatments" (CPTs)--placebo treatments with biological effects similar to the drugs they replace--and argue that doctors are not being deceptive when they prescribe or administer CPTs without revealing that they are placebos. We critique two of Cohen and Shapiro's primary arguments. First, Cohen and Shapiro argue that offering undisclosed placebos is not lying to the patient, but rather is making a self-fulfilling prophecy--telling a "lie" that, ideally, will become true. We argue that offering undisclosed placebos is not a "lie" but is a straightforward case of deceptively misleading the patient. Second, Cohen and Shapiro argue that offering undisclosed CPTs is not equivocation. We argue that it typically is equivocation or deception of another sort. If justifiable, undisclosed placebo use will have to be justified as a practice that is deceptive in most instances.

  14. Lattice QCD with commodity hardware and software

    Energy Technology Data Exchange (ETDEWEB)

    Holmgren, D.J. [and others]

    2000-01-25

    Large scale QCD Monte Carlo calculations have typically been performed on either commercial supercomputers or specially built massively parallel computers such as Fermilab's ACPMAPS. Commodity computer systems offer impressive floating point performance-to-cost ratios which exceed those of commercial supercomputers. As high performance networking components approach commodity pricing, it becomes reasonable to assemble a massively parallel supercomputer from commodity parts. The authors describe the work and progress to date of a collaboration working on this problem.

  15. Technology advances and market forces: Their impact on high performance architectures

    Science.gov (United States)

    Best, D. R.

    1978-01-01

    Reasonable projections into future supercomputer architectures and technology require an analysis of the computer industry market environment, the current capabilities and trends within the component industry, and the research activities on computer architecture in the industrial and academic communities. Management, programmer, architect, and user must cooperate to increase the efficiency of supercomputer development efforts. Care must be taken to match the funding, compiler, architecture, and application, with greater attention to testability, maintainability, reliability, and usability than in supercomputer development programs of the past.

  16. 2014 Annual Report - Argonne Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Collins, James R. [Argonne National Lab. (ANL), Argonne, IL (United States); Papka, Michael E. [Argonne National Lab. (ANL), Argonne, IL (United States); Cerny, Beth A. [Argonne National Lab. (ANL), Argonne, IL (United States); Coffey, Richard M. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2014-01-01

    The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines.

  17. 2016 ALCF Science Highlights

    Energy Technology Data Exchange (ETDEWEB)

    Collins, James R. [Argonne National Lab. (ANL), Argonne, IL (United States); Cerny, Beth A. [Argonne National Lab. (ANL), Argonne, IL (United States); Wolf, Laura [Argonne National Lab. (ANL), Argonne, IL (United States); Coffey, Richard M. [Argonne National Lab. (ANL), Argonne, IL (United States); Papka, Michael E. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-01-01

    The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines.

  18. Data Analysis and Assessment Center

    Data.gov (United States)

    Federal Laboratory Consortium — The DoD Supercomputing Resource Center (DSRC) Data Analysis and Assessment Center (DAAC) provides classified facilities to enhance customer interactions with the ARL...

  19. 2015 Annual Report - Argonne Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Collins, James R. [Argonne National Lab. (ANL), Argonne, IL (United States); Papka, Michael E. [Argonne National Lab. (ANL), Argonne, IL (United States); Cerny, Beth A. [Argonne National Lab. (ANL), Argonne, IL (United States); Coffey, Richard M. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2015-01-01

    The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines.

  20. ISLAM IN THE NON-MUSLIM AREAS OF NORTHERN NIGERIA, c

    African Journals Online (AJOL)

    QUADRI Y A

    Even though these spirits are represented by physical objects, they are ..... System in Postcolonial Benin (Nigeria) Society', in African Journal of Legal Studies ... 14 R. Weiss, 'Cyber Sex Expose, Centre City, MN: Hazdelden available online in ...

  1. Disease: H00305 [KEGG MEDICUS

    Lifescience Database Archive (English)

    Full Text Available ...countries. Expert Rev Anti Infect Ther 3:295-306 (2005) PMID:8748271 (marker) Orle KA, Gates CA, Martin DH, Body BA, Weiss JB. Simultaneous PCR detection of Haemophilus ducreyi, Treponema pal

  2. Feodaalne verepulm teleekraanil / Aare Ermel

    Index Scriptorium Estoniae

    Ermel, Aare, 1957-2013

    2011-01-01

    About the US HBO television series "Game of Thrones" ("Troonide mäng"), based on George R. R. Martin's books and created by David Benioff and Dan Weiss, which begins airing on the Estonian-language Fox Life channel on 8 September

  3. Exact Solutions of Bogoyavlenskii Coupled KdV Equations

    Institute of Scientific and Technical Information of China (English)

    HU Heng-Chun; LOU Sen-Yue

    2004-01-01

    The special soliton solutions of the Bogoyavlenskii coupled KdV equations are obtained by means of the standard Weiss-Tabor-Carnevale Painlevé truncation expansion and the nonstandard truncation of a modified Conte's invariant Painlevé expansion.

  5. Literature Review on Concurrent Dual Career Development in the URL (unrestricted Line)

    Science.gov (United States)

    1989-06-01

    Weiss (1977, 1978) reports related applications of social learning theory in management development. Wexley (1984) discusses applications of...Saari, L. M. (1979). Application of social learning theory to training supervisors through behavioral modeling. Journal of Applied Psychology, 64

  6. FEASIBILITY OF MEDICAL MALE CIRCUMCISION IN NYANZA ...

    African Journals Online (AJOL)

    hi-tech

    2004-05-05

    May 5, 2004 ... necessary resources and training to perform the procedure safely. ... of the district health management team. ..... and human immunodeficiency virus transmission in Kenya ... Weiss, H.A., Quigley, M.A., and Hayes, R. J., Male.

  7. Research Article

    African Journals Online (AJOL)

    2016-06-15

    Jun 15, 2016 ... the Fenton and Haber-Weiss reactions and indirectly by inhibiting antioxidant ... metabolic processes such as photosynthesis, respiration or ... The cultures were kept under continuous light with an intensity of 3000 ft-candle at.

  8. 78 FR 50054 - Ocean Transportation Intermediary License Applicants

    Science.gov (United States)

    2013-08-16

    ..., ] Parkville, MO 64152, Officers:, Matthew P. Weiss, Vice President (QI), Chad Earwood, President, Application..., LLC (OFF), 5200 Dallas Highway, Suite 200 301, Powder Springs, GA 30127, Officers:, Mark Weimann...

  9. ELECTROCHEMICAL REMEDIATION TECHNOLOGIES (ECRTS) - IN SITU REMEDIATION OF CONTAMINATED MARINE SEDIMENTS

    Science.gov (United States)

    This Innovative Technology Evaluation Report summarizes the results of the evaluation of the Electrochemical Remediation Technologies (ECRTs) process, developed by P2-Soil Remediation, Inc. (in partnership with Weiss Associates and Electro-Petroleum, Inc.). This evaluation was co...

  10. IMACS 󈨟: Proceedings of the IMACS World Congress on Computation and Applied Mathematics (13th) Held in Dublin, Ireland on July 22-26, 1991. Volume 2. Computational Fluid Dynamics and Wave Propagation, Parallel Computing, Concurrent and Supercomputing, Computational Physics/Computational Chemistry and Evolutionary Systems

    Science.gov (United States)

    1991-01-01

    ... Dipartimento di Matematica del Politecnico di Milano, Piazza Leonardo da Vinci 32 - 20133 Milano ... Paris VI, 1987. HIERARCHIC FINITE ELEMENTS FOR MINDLIN PLATES, Lucia Della Croce and Terenzio Scapolla, Dipartimento di Matematica

  11. Magnetic Properties and Structure of Chromium Niobium Oxide and Iron Tantalum Oxide

    DEFF Research Database (Denmark)

    Nørlund Christensen, A.; Johansson, T.; Lebech, Bente

    1976-01-01

    at 9.3 K. At higher temperatures, the Curie-Weiss law is obeyed and the moment of the Cr3+ ions is (3.85 ± 0.02) μB. FeTaO4 shows a field-dependent susceptibility similar to that observed in dilute random alloys. The Curie-Weiss law is not obeyed within the temperature region investigated....
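
    For orientation, the Curie-Weiss behaviour referred to in this and several later records is the standard high-temperature form of the magnetic susceptibility, written here in conventional notation (an assumption of this note, not a quotation from the paper):

        % Curie-Weiss law for the susceptibility above the magnetic ordering temperature
        \chi(T) \;=\; \frac{C}{T - \theta_{\mathrm{CW}}},
        \qquad
        C \;=\; \frac{N \mu_{\mathrm{eff}}^{2}}{3 k_{\mathrm{B}}},
        \qquad
        \mu_{\mathrm{eff}} \;=\; g\,\mu_{\mathrm{B}}\sqrt{J(J+1)}

    A positive Weiss temperature θ_CW signals predominantly ferromagnetic interactions and a negative one antiferromagnetic interactions; "the Curie-Weiss law is obeyed" simply means that the inverse susceptibility 1/χ is linear in T over the fitted range.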

  12. Surface Chemistry and Diffusion of Single Molecules

    Science.gov (United States)

    1993-12-28

    Adsorbate Complex, submitted to Physical Review Letters . M. J. Abrams and P. S. Weiss, Studying Transient Mobility and Energy Loss Using Scanning...submitted to Physical Review Letters . 2. Papers published in refereed journals J. A. Meyer, S. J. Stranick, J. B. Wang, and P. S. Weiss, Field...E. K. Schweizer and N. D. Lang, Scanning Tunneling Microscopy of Xe at 4K, Physical Review Letters 66, 1189 (1991). J. A. Meyer, Extracting

  13. The Specter of an Oily Bear or Geopolitical Challenges of the Modern Russian Petro-State

    Science.gov (United States)

    2009-02-15

    strength again, when it no longer felt that it had to bargain from a position of weakness, Moscow was quick to annul or renegotiate the agreements in...Michael McFaul and Kathryn Stoner-Weiss, “The Myth of the Authoritarian Model : How Putin’s Crackdown Holds Russia Back”, Foreign Affairs , January...February 2008, http://www.foreignaffairs.org/20080101faessay87105/michael-mcfaul-kathryn-stoner-weiss/the- myth -of-the- authoritarian-model.html; M

  14. Aspects of High-Resolution Gas Chromatography as Applied to the Analysis of Hydrocarbon Fuels and Other Complex Organic Mixtures. Volume 1

    Science.gov (United States)

    1985-02-01

    appendices were reviewed by Jerry Strange. Anita Cochran has been especially helpful in correcting my many grammar errors and has improved the readability of...Mechanism of Response of Flame Ionization Detectors, In; Gas Chromatography, W. Brenner , J. E. Callen, and M. D. Weiss, eds.. Academic Press, New...T. Swanton, Performance of Coated Capillary Columns, In: Gas Chromatography, N. Brenner , J. E. Callen, and M. D. Weiss, eds.. Academic Press, New

  15. Stabilization and Reconstruction Staffing. Developing U.S. Civilian Personnel Capabilities

    Science.gov (United States)

    2008-01-01

    to study how a civilian reserve corps could be created. Briefing from, and discussion with, two of the IDA authors, Scott Feil and Martin Lidy, on...Department and its agencies (USAID). 3 Nina M. Serafino and Martin A. Weiss, Peacekeeping and Conflict Transitions: Background and Congressional...Service, S/CRS Issue Brief for Congress IB94040, September 15, 2005. Serafino, Nina M., and Martin A. Weiss, Peacekeeping and Conflict Transitions

  16. sarA as a Target for the Treatment and Prevention of Staphylococcal Biofilm-Associated Infection

    Science.gov (United States)

    2015-02-01

    limits biofilm formation in Staphylococcus aureus to a degree that can be correlated with increased antibiotic susceptibility and an improved...Staphylococcus aureus to a degree that can be correlated with increased antibiotic susceptibility (Beenken et al., 2003, Weiss et al., 2009a, Weiss et al...Pharmaceutical Sciences Department at the University of Arkansas for Medical Sciences, who has a distinguished history in drug discovery and development

  17. Federal Coordinating Council on Science, Engineering and Technology, Committee on Computer Research and Applications, Subcommittee on Science and Engineering Computing: Annual report, 1987

    Energy Technology Data Exchange (ETDEWEB)

    1988-03-01

    In the past year the committee initiated efforts that resulted in the report "A Research and Development Strategy for High Performance Computing" and will presently provide a government-wide implementation plan to address the technological opportunities made possible by significantly enhanced supercomputer capability. The committee met on a regular basis to review government-supported programs in research, development, and application of new supercomputer technology. The Committee annually visits supercomputer manufacturers to be briefed on their plans for future-generation machines. Cray Research and ETA Systems continue to make progress toward developing more advanced supercomputers. The US supercomputer manufacturers remain dependent upon their emerging Japanese competitors for high-performance ICs, although progress has been made toward achieving more adequate domestic sourcing. Reports by the Defense Science Board and the National Security Council/Economic Policy Council, which addressed semiconductor issues, were completed during the year with advice and input from the Committee. IBM has re-entered the supercomputer marketplace. The current 3090 series with expandable vector processing capability occupies a low-end position in the supercomputer performance spectrum. Subsequent development and marketing by IBM of more powerful machines would have an important and far-reaching impact on the domestic and world supercomputer markets. Computers with massively parallel architectures of thousands of processors are entering the marketplace and are becoming more of a factor on the computational productivity scale.

  18. ICASE: Scientific Visualization Solutions 6

    Science.gov (United States)

    1997-01-01

    ICASE: Institute for Computer Applications in Science and Engineering. Visualizing the results of supercomputer simulations can be a computationally demanding process. Research in applying supercomputing technology to the problem of data visualization is being conducted at ICASE, at NASA Langley. These clips look at the work of ICASE and are illustrated with examples of complex 3D renderings of data sets.

  19. ICASE: Scientific Visualization Solutions 3

    Science.gov (United States)

    1997-01-01

    ICASE: Institute for Computer Applications in Science and Engineering. Visualizing the results of supercomputer simulations can be a computationally demanding process. Research in applying supercomputing technology to the problem of data visualization is being conducted at ICASE, at NASA Langley. These clips look at the work of ICASE and are illustrated with examples of complex 3D renderings of data sets.

  20. ICASE: Scientific Visualization Solutions 1

    Science.gov (United States)

    1997-01-01

    ICASE: Institute for Computer Applications in Science and Engineering. Visualizing the results of supercomputer simulations can be a computationally demanding process. Research in applying supercomputing technology to the problem of data visualization is being conducted at ICASE, at NASA Langley. These clips look at the work of ICASE and are illustrated with examples of complex 3D renderings of data sets.

  1. ICASE: Scientific Visualization Solutions 7

    Science.gov (United States)

    1997-01-01

    ICASE: Institute for Computer Applications in Science and Engineering. Visualizing the results of supercomputer simulations can be a computationally demanding process. Research in applying supercomputing technology to the problem of data visualization is being conducted at ICASE, at NASA Langley. These clips look at the work of ICASE and are illustrated with examples of complex 3D renderings of data sets.

  2. ICASE: Scientific Visualization Solutions 8

    Science.gov (United States)

    1997-01-01

    ICASE: Institute for Computer Applications in Science and Engineering. Visualizing the results of supercomputer simulations can be a computationally demanding process. Research in applying supercomputing technology to the problem of data visualization is being conducted at ICASE, at NASA Langley. These clips look at the work of ICASE and are illustrated with examples of complex 3D renderings of data sets.

  3. ICASE: Scientific Visualization Solutions 2

    Science.gov (United States)

    1997-01-01

    ICASE: Institute for Computer Applications in Science and Engineering. Visualizing the results of supercomputer simulations can be a computationally demanding process. Research in applying supercomputing technology to the problem of data visualization is being conducted at ICASE, at NASA Langley. These clips look at the work of ICASE and are illustrated with examples of complex 3D renderings of data sets.

  4. ICASE: Scientific Visualization Solutions 4

    Science.gov (United States)

    1997-01-01

    ICASE: Institute for Computer Applications in Science and Engineering. Visualizing the results of supercomputer simulations can be a computationally demanding process. Research in applying supercomputing technology to the problem of data visualization is being conducted at ICASE, at NASA Langley. These clips look at the work of ICASE and are illustrated with examples of complex 3D renderings of data sets.

  5. ICASE: Scientific Visualization Solutions 5

    Science.gov (United States)

    1997-01-01

    ICASE: Institute for Computer Applications in Science and Engineering. Visualizing the results of supercomputer simulations can be a computationally demanding process. Research in applying supercomputing technology to the problem of data visualization is being conducted at ICASE, at NASA Langley. These clips look at the work of ICASE and are illustrated with examples of complex 3D renderings of data sets.

  6. Do You Know Who Will Win at Last?

    Institute of Scientific and Technical Information of China (English)

    黄宜尧

    2005-01-01

    In May 1997, a computer beat the world-famous chess player for the first time. IBM's supercomputer, called Deep Blue, defeated Garry Kasparov, the great chess player from Russia, in New York. This was the first time a supercomputer had ever beaten a man in a traditional chess match,

  7. Lawrence Livermore National Laboratory selects Intel Itanium 2 processors for world's most powerful Linux cluster

    CERN Multimedia

    2003-01-01

    "Intel Corporation, system manufacturer California Digital and the University of California at Lawrence Livermore National Laboratory (LLNL) today announced they are building one of the world's most powerful supercomputers. The supercomputer project, codenamed "Thunder," uses nearly 4,000 Intel® Itanium® 2 processors... is expected to be complete in January 2004" (1 page).

  8. Big Data in High Performance Scientific Computing

    Directory of Open Access Journals (Sweden)

    Florica Novăcescu

    2013-05-01

    Full Text Available The main concern of this study is to identify the bottlenecks caused by the deluge of data in the supercomputing community. More precisely, the research targets the big data community and concentrates on identifying the new challenges posed by the huge amounts of data resulting from numerical simulations on supercomputers.

  9. New AGU Mass Media Fellow Initiated College Science Communication Course

    Science.gov (United States)

    Weiss, Peter

    2010-10-01

    Marissa Weiss, this year's AGU Mass Media Fellow, feels so strongly about communicating science that she and a fellow graduate student started a course on the subject. Three years ago, she and the other student in the biogeochemistry and environmental biocomplexity program at Cornell University, Ithaca, N.Y., developed, with the aid of mentors, a semester-long science communication class. The course has since become a regular offering at Cornell, where Weiss defended her dissertation in ecology this past August. A soil ecologist, Weiss showed in her thesis research that nitrogen pollution can slow soil decomposition because of a declining abundance of the microbes that break the soil down.

  10. Reseñas

    Directory of Open Access Journals (Sweden)

    HC Reseñas

    2010-02-01

    Full Text Available SHLOMO BEN AMI, La Dictadura de Primo de Rivera, 1923-1930. Barcelona, Planeta, 1984. DIEZ DE LOS RIOS, M. Teresa y Colaboradores, Documentación sobre la guerra civil en Alicante. Alicante, Instituto Juan Gil-Albert. Diputación Provincial, 1984. LOPEZ LOPEZ, Alejandro, El boicot de la derecha a las reformas de la Segunda República. La minoría agraria, el rechazo constitucional y la cuestión de la tierra. Madrid, Instituto de Estudios Agrarios, Pesqueros y Alimentarios, 1984. SERRANO, C., Final del Imperio. España, 1895-1898. Madrid, Siglo XXI de España eds., 1984, 258 págs. SHUBERT, A., Hacia la revolución. Orígenes sociales del movimiento obrero en Asturias, 1860-1934. Ed. Crítica. Barcelona, 1984. ARON, R., Mémoires. 50 ans de reflexion politique. Paris, Julliard, 1984. 778 págs. AROSTEGUI, J., MARTINEZ, J.A., La Junta de Defensa de Madrid. Noviembre 1936-Abril 1937. Comunidad de Madrid. Madrid, 1984. PABON Y SUAREZ URBINA, J., Narváez y su época. Madrid. Espasa Calpe. Col. Austral. 1983

  11. Synthesis of cellulose by Acetobacter xylinum. VI. Growth on citric acid-cycle intermediates.

    Science.gov (United States)

    GROMET-ELHANAN, Z; HESTRIN, S

    1963-02-01

    Gromet-Elhanan, Zippora (The Hebrew University, Jerusalem, Israel) and Shlomo Hestrin. Synthesis of cellulose by Acetobacter xylinum. VI. Growth on citric acid-cycle intermediates. J. Bacteriol. 85:284-292. 1963.-Acetobacter xylinum could be made to grow on ethanol, acetate, succinate, or l-malate. The growth was accompanied by formation of opaque leathery pellicles on the surface of the growth medium. These pellicles were identified as cellulose on the basis of their chemical properties, solubility behavior, and infrared absorption spectra. Washed-cell suspensions prepared from cultures grown on ethanol or the organic acids, in contrast to washed sugar-grown cells, were able to transform citric-cycle intermediates into cellulose. The variations in the substrate spectrum of cellulose synthesis between sugar-grown cells and organic acids-grown cells were found to be correlated with differences in the oxidative capacity of the cells. The significance of the findings that A. xylinum could be made to grow on ethanol on complex as well as synthetic media is discussed from the viewpoint of the whole pattern of Acetobacter classification.

  12. Structure and magnetic properties of Tb1-xSrxMnO3

    Science.gov (United States)

    N, Hariharan; Elizabeth, Suja

    2015-06-01

    Tb1-xSrxMnO3 (x = 0.1, 0.2, 0.3, 0.4 and 0.5) polycrystalline samples are prepared via the conventional solid-state synthesis route. All samples crystallize in the orthorhombic Pnma space group and possess O'-type distortion. The orthorhombic and octahedral distortions are found to decrease with increasing Sr content. At intermediate distortion (20% and 30% doping levels), Curie-Weiss analysis of the inverse dc magnetic susceptibility data yields a positive Curie-Weiss constant, characteristic of ferromagnetic (FM) interactions. Isothermal magnetization measurements give the highest magnitude of magnetic moment at these compositions.
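
    As an illustration of what such a Curie-Weiss analysis of inverse susceptibility involves (a minimal sketch with synthetic data, not the authors' procedure), a straight-line fit to 1/χ versus T yields the Curie constant and the Weiss temperature:

        import numpy as np

        # In the paramagnetic regime 1/chi = (T - theta)/C, so a linear fit to the
        # inverse susceptibility gives the Curie constant C and Weiss temperature theta.
        # The data below are synthetic placeholders, not measurements from the paper.
        T = np.linspace(150.0, 300.0, 76)          # temperature (K)
        C_true, theta_true = 3.2, 45.0             # assumed values for the synthetic data
        chi = C_true / (T - theta_true)            # model susceptibility (arbitrary units)

        slope, intercept = np.polyfit(T, 1.0 / chi, 1)
        C_fit = 1.0 / slope
        theta_fit = -intercept / slope
        print(f"C = {C_fit:.2f}, theta = {theta_fit:.1f} K")  # theta > 0 suggests FM interactions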

  13. Nasopharyngeal Chondrolipoma

    Directory of Open Access Journals (Sweden)

    A. J. Kinshuck

    2010-01-01

    Full Text Available In this case report, we describe the presentation and treatment of a patient with nasopharyngeal chondrolipoma. Lipomas are common soft tissue tumours, although their incidence in the nasopharynx is very low. A rarer variant of lipoma, chondrolipomas are benign mesenchymal tumours. They are formed by the proliferation of mature adipocytes and contain varying amounts of mature cartilaginous tissue (Weiss, "Enzinger and Weiss's Soft Tissue Tumours", 4th ed., Mosby, St Louis, 2001). This represents the second reported case of a nasopharyngeal chondrolipoma. An endonasal approach to excision has not been previously described.

  14. Naval Law Review, Volume 54, 2007

    Science.gov (United States)

    2007-01-01

    249 Id. §§ 1538(a)(1)(A) – (C). 250 Id. § 1533 (b)(5)(B). 251 Id. § 1537 (b). 252 Smith v. United States, 508 U.S. 223, 228 (1993). 253 Victor v...numbing. Where does it end? The Toledo Blade won a Pulitzer Prize in 2005 for a four part series in October 2004 detailing a summer of murder in...Michael Sallah & Mitch Weiss, Tiger Force, TOLEDO BLADE, Oct. 2004; See also MICHAEL SALLAH & MITCH WEISS, TIGER FORCE (2006). The four part series and

  15. Tailoring the Crystal Structure Toward Optimal Super Conductors

    Science.gov (United States)

    2016-06-23

    ... Rhodes-Wohlfarth ratio qc/qs. (b) Phase diagram of Co2As1-xPx: TC and Weiss temperature (θW) as a function of composition x. For 0.04 ≤ x ≤ 0.85, the compounds are in the hexagonal phase and exhibit ferromagnetic behavior (FM). Above TC, the compounds show Curie-Weiss-like paramagnetism (CWPM).

  16. Effect of coadsorbed CO2 on the magnetic properties of O2 confined in graphitic slit-shaped micropores

    Science.gov (United States)

    Tohdoh, A.; Kaneko, K.

    2001-05-01

    The magnetic susceptibility of coadsorbed O2 and CO2 confined in a slit-shaped graphitic micropore was measured over the temperature range 2-300 K. Coadsorbed CO2 markedly repressed the susceptibility of confined O2 because CO2 restricts the O2 molecular arrangement to form spin clusters. Curie-Weiss plots show that the coadsorbed CO2 reduced the effective spin concentration, while the negative Weiss constant with CO2 was larger than that of pure O2. These results also indicate that the coadsorbed CO2 promotes the formation of smaller clusters of O2 molecules than for pure O2.

  17. Topological Quantum Information in a 3D Neutral Atom Array

    Science.gov (United States)

    2015-01-02

    Final report (12-23-2014) covering 12-01-2008 to 9-30-2014, DARPA/AFOSR grant FA9550-09...1-0041, "Topological Quantum Information in a 3D Neutral Atom Array", David Weiss, Penn State. Approved for public release. Work was performed to build core elements of a quantum computer ... Hamiltonian within our trapped neutral atoms architecture. Keywords: quantum computing, ultracold atoms.

  18. Investigation of optical turbulence in the atmospheric surface layer using scintillometer measurements along a slant path and comparison to ultrasonic anemometer measurements

    CSIR Research Space (South Africa)

    Sprung, D

    2014-09-01

    Full Text Available increasing the accuracy and realism of future optical turbulence calculations”, Meteorol. Atmos. Phys. Vol. 90, 159-164, (2005). [2] Weiss-Wrana, K.R., “Turbulence statistics applied to calculate expected turbulence-induced scintillation effects on electro.... Am., Vol. 54, No.52, (1964). [7] Kunkel, K.E., D.L. Walters, and G.A. Ely, “Behaviour of the temperature structure parameter in a desert basin”, J. Appl. Meteorol., Vol.20, p. 103-136, (1981). [8] Weiss-Wrana, K.R. and L.S. Balfour, “Statistical...

  19. Federal High End Computing (HEC) Information Portal

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — This portal provides information about opportunities to engage in U.S. Federal government high performance computing activities, including supercomputer use,...

  20. High Performance Computing in Science and Engineering '02 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2003-01-01

    This book presents the state of the art in modeling and simulation on supercomputers. Leading German research groups present their results achieved on high-end systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2002. Reports cover all fields of supercomputing simulation, ranging from computational fluid dynamics to computer science. Special emphasis is given to industrially relevant applications. Moreover, by presenting results for both vector systems and microprocessor-based systems, the book makes it possible to compare the performance levels and usability of a variety of supercomputer architectures. It is therefore an indispensable guidebook for assessing the impact of the Japanese Earth Simulator project on supercomputing in the years to come.