WorldWideScience

Sample records for supercomputer center san

  1. Supercomputing Centers and Electricity Service Providers

    DEFF Research Database (Denmark)

    Patki, Tapasya; Bates, Natalie; Ghatikar, Girish

    2016-01-01

    Supercomputing Centers (SCs) have high and variable power demands, which increase the challenges of Electricity Service Providers (ESPs) with regard to efficient electricity distribution and reliable grid operation. High penetration of renewable energy generation further exacerbates this problem. ... from a detailed, quantitative survey-based analysis and compare the perspectives of the European grid and SCs to those of the United States (US). We then show that, contrary to expectation, SCs in the US are more open toward cooperating and developing demand-management strategies with their ESPs ... (LRZ). We conclude that perspectives on demand management depend on the electricity market and pricing in the geographical region and on the degree of control that a particular SC has in terms of power-purchase negotiation.

  2. Dust modelling and forecasting in the Barcelona Supercomputing Center: Activities and developments

    Energy Technology Data Exchange (ETDEWEB)

    Perez, C; Baldasano, J M; Jimenez-Guerrero, P; Jorba, O; Haustein, K; Basart, S [Earth Sciences Department. Barcelona Supercomputing Center. Barcelona (Spain); Cuevas, E [Izaña Atmospheric Research Center. Agencia Estatal de Meteorologia, Tenerife (Spain); Nickovic, S [Atmospheric Research and Environment Branch, World Meteorological Organization, Geneva (Switzerland)], E-mail: carlos.perez@bsc.es

    2009-03-01

    The Barcelona Supercomputing Center (BSC) is the National Supercomputer Facility in Spain, hosting MareNostrum, one of the most powerful Supercomputers in Europe. The Earth Sciences Department of BSC operates daily regional dust and air quality forecasts and conducts intensive modelling research for short-term operational prediction. This contribution summarizes the latest developments and current activities in the field of sand and dust storm modelling and forecasting.

  3. San Diego supercomputer center reaches data transfer milestone

    CERN Multimedia

    2002-01-01

    The SDSC's huge, updated tape storage system has demonstrated its effectiveness by transferring data at 828 megabytes per second, making it the fastest data archive system, according to program director Phil Andrews (1/2 page).

  4. Visualization at Supercomputing Centers: The Tale of Little Big Iron and the Three Skinny Guys

    Energy Technology Data Exchange (ETDEWEB)

    Bethel, E. Wes; van Rosendale, John; Southard, Dale; Gaither, Kelly; Childs, Hank; Brugger, Eric; Ahern, Sean

    2010-12-01

    Supercomputing Centers (SCs) are unique resources that aim to enable scientific knowledge discovery through the use of large computational resources, the Big Iron. Design, acquisition, installation, and management of the Big Iron are activities that are carefully planned and monitored. Since these Big Iron systems produce a tsunami of data, it is natural to co-locate visualization and analysis infrastructure as part of the same facility. This infrastructure consists of hardware (Little Iron) and staff (Skinny Guys). Our collective experience suggests that design, acquisition, installation, and management of the Little Iron and Skinny Guys do not receive the same level of treatment as that of the Big Iron. The main focus of this article is to explore different aspects of planning, designing, fielding, and maintaining the visualization and analysis infrastructure at supercomputing centers. Some of the questions we explore in this article include: "How should the Little Iron be sized to adequately support visualization and analysis of data coming off the Big Iron?" and "What sort of capabilities does it need to have?" Related questions concern the size of the visualization support staff: "How big should a visualization program be (number of persons) and what should the staff do?" and "How much of the visualization should be provided as a support service, and how much should application scientists be expected to do on their own?"

  5. Assessment techniques for a learning-centered curriculum: evaluation design for adventures in supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Helland, B. [Ames Lab., IA (United States); Summers, B.G. [Oak Ridge National Lab., TN (United States)

    1996-09-01

    As the classroom paradigm shifts from being teacher-centered to being learner-centered, student assessments are evolving from typical paper and pencil testing to other methods of evaluation. Students should be probed for understanding, reasoning, and critical thinking abilities rather than their ability to return memorized facts. The assessment of the Department of Energy's pilot program, Adventures in Supercomputing (AiS), offers one example of assessment techniques developed for learner-centered curricula. This assessment has employed a variety of methods to collect student data. Methods of assessment used were traditional testing, performance testing, interviews, short questionnaires via email, and student presentations of projects. The data obtained from these sources have been analyzed by a professional assessment team at the Center for Children and Technology. The results have been used to improve the AiS curriculum and establish the quality of the overall AiS program. This paper will discuss the various methods of assessment used and the results.

  6. KAUST Supercomputing Laboratory

    KAUST Repository

    Bailey, April Renee

    2011-11-15

    KAUST has partnered with IBM to establish a Supercomputing Research Center. KAUST is hosting the Shaheen supercomputer, named after the Arabian falcon famed for its swiftness of flight. This 16-rack IBM Blue Gene/P system is equipped with 4 gigabytes of memory per node and is capable of 222 teraflops, making the KAUST campus the site of one of the world's fastest supercomputers in an academic environment. KAUST is targeting petaflop capability within 3 years.

  7. Research center Juelich to install Germany's most powerful supercomputer new IBM System for science and research will achieve 5.8 trillion computations per second

    CERN Multimedia

    2002-01-01

    "The Research Center Juelich, Germany, and IBM today announced that they have signed a contract for the delivery and installation of a new IBM supercomputer at the Central Institute for Applied Mathematics" (1/2 page).

  8. NSF Commits to Supercomputers.

    Science.gov (United States)

    Waldrop, M. Mitchell

    1985-01-01

    The National Science Foundation (NSF) has allocated at least $200 million over the next five years to support four new supercomputer centers. Issues and trends related to this NSF initiative are examined. (JN)

  9. San Joaquin Valley Aerosol Health Effects Research Center (SAHERC)

    Data.gov (United States)

    Federal Laboratory Consortium — At the San Joaquin Valley Aerosol Health Effects Center, located at the University of California-Davis, researchers will investigate the properties of particles that...

  11. Supercomputational science

    CERN Document Server

    Wilson, S

    1990-01-01

    In contemporary research, the supercomputer now ranks, along with radio telescopes, particle accelerators and the other apparatus of "big science", as an expensive resource, which is nevertheless essential for state-of-the-art research. Supercomputers are usually provided as shared central facilities. However, unlike telescopes and accelerators, they find a wide range of applications which extends across a broad spectrum of research activity. The difference in performance between a "good" and a "bad" computer program on a traditional serial computer may be a factor of two or three, but on a contemporary supercomputer it can easily be a factor of one hundred or even more! Furthermore, this factor is likely to increase with future generations of machines. In keeping with the large capital and recurrent costs of these machines, it is appropriate to devote effort to training and familiarization so that supercomputers are employed to best effect. This volume records the lectures delivered at a Summer School ...

  12. Brief Exploration on Technical Development of Key Applications at Supercomputing Center

    Institute of Scientific and Technical Information of China (English)

    党岗; 程志全

    2013-01-01

    At present, most of China's national supercomputing centers follow a construction model of "local government investment with market-oriented application development". Local governments are more concerned with the high-performance computing applications and services of local enterprises and institutions, so the supercomputing centers are often used for ordinary applications, and it is difficult to fully realize the strategic role of supercomputing. How to keep these exceptionally capable "aircraft carriers" viable, and then let them take new ground and drive technical innovation, has long been a research topic in the field. This paper gives a preliminary discussion of the challenges facing the key applications of domestic supercomputing centers and offers several suggestions for putting those key applications at the service of local development.

  13. The Central San Joaquin Valley Area Health Education Center

    Science.gov (United States)

    Rosinski, Edwin F.

    1978-01-01

    With federal financial support, an area health education center was established in the central San Joaquin Valley of California. The center is a cooperative health sciences education and health care program organized by the University of California and some of the educational and health care institutions of the valley. The center's goals include providing and improving primary health care education, and improving the distribution of health personnel. These goals are achieved through the cooperative development of a number of independent and interdependent activities. An extensive evaluation of the Area Health Education Center has shown that it is a highly effective program. PMID:664636

  14. An assessment of worldwide supercomputer usage

    Energy Technology Data Exchange (ETDEWEB)

    Wasserman, H.J.; Simmons, M.L.; Hayes, A.H.

    1995-01-01

    This report provides a comparative study of advanced supercomputing usage in Japan and the United States as of Spring 1994. It is based on the findings of a group of US scientists whose careers have centered on programming, evaluating, and designing high-performance supercomputers for over ten years. The report is a follow-on to an assessment of supercomputing technology in Europe and Japan that was published in 1993. Whereas the previous study focused on supercomputer manufacturing capabilities, the primary focus of the current work was to compare where and how supercomputers are used. Research for this report was conducted through both literature studies and field research in Japan.

  15. 76 FR 1521 - Security Zone: Fleet Industrial Supply Center Pier, San Diego, CA

    Science.gov (United States)

    2011-01-11

    ... SECURITY Coast Guard 33 CFR Part 165 RIN 1625-AA87 Security Zone: Fleet Industrial Supply Center Pier, San... of the San Diego Bay, San Diego, CA. The existing security zone is around the former Fleet Industrial... Fleet Industrial Supply Center Pier. The pier is no longer owned by the U.S. Navy and the security zone...

  16. Computational Dimensionalities of Global Supercomputing

    Directory of Open Access Journals (Sweden)

    Richard S. Segall

    2013-12-01

    Acxiom Laboratory of Applied Research (ALAR), University of Central Arkansas (UCA), Conway, AR, April 9, 2010. [78.] Segall, Richard S., Zhang, Qingyu and Pierce, Ryan M. (2009), "Visualization by Supercomputing Data Mining", Proceedings of the 4th INFORMS Workshop on Data Mining and System Informatics, San Diego, CA, October 10, 2009. [79.] Segall, Richard S., Zhang, Qingyu, and Pierce, Ryan (2010), "Data Mining Supercomputing with SAS™ JMP® Genomics", Proceedings of the 14th World Multi-Conference on Systemics, Cybernetics and Informatics: WMSCI 2010, Orlando, FL, June 29-July 2, 2010. [80.] Segall, Richard S., Zhang, Qingyu, and Pierce, Ryan (2010), "Data Mining Supercomputing with SAS™ JMP® Genomics", Journal of Systemics, Cybernetics and Informatics (JSCI), Vol. 9, No. 1, 2011, pp. 28-33. [81.] Segall, R.S., Zhang, Q., and Pierce, R.M. (2009), "Visualization by supercomputing data mining", Proceedings of the 4th INFORMS Workshop on Data Mining and System Informatics, San Diego, CA, October 10, 2009

  17. Report for CS 698-95 "Directed Research - Performance Modeling": Using Queueing Network Modeling to Analyze the University of San Francisco Keck Cluster Supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, M L

    2005-09-28

    In today's world, the need for computing power is becoming more pressing daily. Our need to process, analyze, and store data is quickly exceeding the capabilities of small self-contained serial machines, such as the modern desktop PC. Initially, this gap was filled by the creation of supercomputers: large-scale self-contained parallel machines. However, current markets, as well as the costs to develop and maintain such machines, are quickly making such machines a rarity, used only in highly specialized environments. A third type of machine exists, however. This relatively new type of machine, known as a cluster, is built from common, and often inexpensive, commodity self-contained desktop machines. But how well do these clustered machines work? There have been many attempts to quantify the performance of clustered computers. One approach, Queueing Network Modeling (QNM), appears to be a potentially useful and rarely tried method of modeling such systems. QNM, which has its beginnings in the modeling of traffic patterns, has expanded, and is now used to model everything from CPU and disk services, to computer systems, to service rates in store checkout lines. This history of successful usage, as well as the correspondence of QNM components to commodity clusters, suggests that QNM can be a useful tool for both the cluster designer, interested in the best value for the cost, and the user of existing machines, interested in performance rates and time-to-solution. So, what is QNM? Queueing Network Modeling is an approach to computer system modeling where the computer is represented as a network of queues and evaluated analytically. How does this correspond to clusters? There is a neat one-to-one relationship between the components of a QNM model and a cluster. For example: A cluster is made from a combination of computational nodes and network switches. Both of these fit nicely with the QNM descriptions of service centers (delay, queueing, and load
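
    As an illustration of the analytic evaluation that QNM provides (this is a sketch, not code from the report; the service centers and demands below are hypothetical), exact Mean Value Analysis can be applied to a small closed queueing network:

```python
# Illustrative sketch only: exact Mean Value Analysis (MVA) for a closed,
# single-class queueing network. The centers and demands are hypothetical,
# not taken from the Keck cluster study.

def mva(service_demands, n_jobs, think_time=0.0):
    """service_demands[k] = total demand (seconds) at center k per job cycle; n_jobs >= 1."""
    K = len(service_demands)
    queue = [0.0] * K                                        # Q_k(0) = 0
    for n in range(1, n_jobs + 1):
        # residence time at each queueing center: R_k(n) = D_k * (1 + Q_k(n-1))
        resid = [service_demands[k] * (1.0 + queue[k]) for k in range(K)]
        throughput = n / (think_time + sum(resid))           # X(n)
        queue = [throughput * r for r in resid]              # Little's law per center
    return throughput, resid, queue

if __name__ == "__main__":
    # hypothetical demands: compute-node CPU, network switch, file server
    demands = [1.20, 0.15, 0.30]
    X, R, Q = mva(demands, n_jobs=32)
    print(f"throughput = {X:.3f} jobs/s, response time = {sum(R):.3f} s")
    for name, q in zip(["cpu", "switch", "fileserver"], Q):
        print(f"  mean queue length at {name}: {q:.2f}")
```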

  18. 33 CFR 165.1121 - Security Zone: Fleet Supply Center Industrial Pier, San Diego, CA.

    Science.gov (United States)

    2010-07-01

    ... Guard District § 165.1121 Security Zone: Fleet Supply Center Industrial Pier, San Diego, CA. (a... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Security Zone: Fleet Supply Center Industrial Pier, San Diego, CA. 165.1121 Section 165.1121 Navigation and Navigable Waters COAST...

  19. TOP500 Supercomputers for November 2003

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2003-11-16

    22nd Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 22nd edition of the TOP500 list of the world's fastest supercomputers was released today (November 16, 2003). The Earth Simulator supercomputer retains the number one position with its Linpack benchmark performance of 35.86 Tflop/s ("teraflops" or trillions of calculations per second). It was built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan.

  20. Emerging supercomputer architectures

    Energy Technology Data Exchange (ETDEWEB)

    Messina, P.C.

    1987-01-01

    This paper will examine the current and near future trends for commercially available high-performance computers with architectures that differ from the mainstream "supercomputer" systems in use for the last few years. These emerging supercomputer architectures are just beginning to have an impact on the field of high performance computing. 7 refs., 1 tab.

  1. Working Together To Build Beacon Centers in San Francisco: Evaluation Findings from 1998-2000.

    Science.gov (United States)

    Walker, Karen E.; Arbreton, Amy J. A.

    The Beacons Initiative aimed to transform eight public schools (five middle schools and three high schools) into youth and family centers in low-income neighborhoods in San Francisco, California. Using a coalition of local partners and funding from public agencies and foundations, the centers served 7,500 youth and adults between July 1, 1999, and…

  2. A training program for scientific supercomputing users

    Energy Technology Data Exchange (ETDEWEB)

    Hanson, F.; Moher, T.; Sabelli, N.; Solem, A.

    1988-01-01

    There is a need for a mechanism to transfer supercomputing technology into the hands of scientists and engineers in such a way that they will acquire a foundation of knowledge that will permit integration of supercomputing as a tool in their research. Most computing center training emphasizes computer-specific information about how to use a particular computer system; most academic programs teach concepts to computer scientists. Only a few brief courses and new programs are designed for computational scientists. This paper describes an eleven-week training program aimed principally at graduate and postdoctoral students in computationally-intensive fields. The program is designed to balance the specificity of computing center courses, the abstractness of computer science courses, and the personal contact of traditional apprentice approaches. It is based on the experience of computer scientists and computational scientists, and consists of seminars and clinics given by many visiting and local faculty. It covers a variety of supercomputing concepts, issues, and practices related to architecture, operating systems, software design, numerical considerations, code optimization, graphics, communications, and networks. Its research component encourages understanding of scientific computing and supercomputer hardware issues. Flexibility in thinking about computing needs is emphasized by the use of several different supercomputer architectures, such as the Cray X-MP/48 at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, the IBM 3090 600E/VF at the Cornell National Supercomputer Facility, and the Alliant FX/8 at the Advanced Computing Research Facility at Argonne National Laboratory. 11 refs., 6 tabs.

  3. Research on Customer Segmentation and Differentiated Service Strategies in a Supercomputing Center

    Institute of Scientific and Technical Information of China (English)

    赵芸卿

    2013-01-01

    Taking the customer-service work of the Supercomputing Center of the Computer Network Information Center, Chinese Academy of Sciences (hereafter, the Supercomputing Center) as its subject, this paper applies the K-means algorithm to customers' supercomputer usage records to segment the customer base into groups and then proposes corresponding differentiated service strategies for each group. Implementing these differentiated service strategies allows supercomputing resources to be allocated better and customer services to be delivered more effectively and conveniently.
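
    As a rough illustration of the kind of K-means segmentation the paper describes (the actual dataset, features, and code are not reproduced here; the feature names and values below are hypothetical), a minimal sketch might look like this:

```python
# Minimal sketch of K-means customer segmentation as described in the abstract.
# The per-customer usage features are hypothetical.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# hypothetical usage records: [core-hours used, jobs submitted, average job size (cores)]
usage = np.array([
    [120000,  800,  256],
    [ 90000, 1200,   64],
    [   500,   40,   16],
    [250000,  300, 1024],
    [   700,   90,    8],
    [  1800,  150,   32],
])

features = StandardScaler().fit_transform(usage)          # put features on a common scale
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)

for label, row in zip(kmeans.labels_, usage):
    print(label, row)   # each cluster can then be mapped to a differentiated service tier
```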

  4. Glaucoma at the Hamilton Glaucoma Center and the University of California, San Diego

    Institute of Scientific and Technical Information of China (English)

    Robert N. Weinreb

    2011-01-01

    Known for its unique cross-disciplinary investigative programs and clinical excellence, the scientists and clinicians at the Hamilton Glaucoma Center of the University of California, San Diego seek to enhance the discovery and translation of innovative research to clinical glaucoma care to prevent and cure glaucoma blindness. With state-of-the-art laboratory and clinical facilities located on the La Jolla campus (Figure 1), the Center is home to a world-renowned team of scientists and staff. More than 100 post-doctoral fellows in glaucoma, many of whom hold distinguished academic positions throughout the world, have been trained at the Hamilton Glaucoma Center and the University of California, San Diego. At the core of Hamilton Glaucoma Center activities are the outstanding faculty that are described below.

  5. Energy sciences supercomputing 1990

    Energy Technology Data Exchange (ETDEWEB)

    Mirin, A.A.; Kaiper, G.V. (eds.)

    1990-01-01

    This report contains papers on the following topics: meeting the computational challenge; lattice gauge theory: probing the standard model; supercomputing for the superconducting super collider; overview of ongoing studies in climate model diagnosis and intercomparison; MHD simulation of the fueling of a tokamak fusion reactor through the injection of compact toroids; gyrokinetic particle simulation of tokamak plasmas; analyzing chaos: a visual essay in nonlinear dynamics; supercomputing and research in theoretical chemistry; Monte Carlo simulations of light nuclei; parallel processing; and scientists of the future: learning by doing.

  6. TOP500 Supercomputers for June 2003

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2003-06-23

    21st Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; and BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 21st edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2003). The Earth Simulator supercomputer built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan, with its Linpack benchmark performance of 35.86 Tflop/s (teraflops or trillions of calculations per second), retains the number one position. The number 2 position is held by the re-measured ASCI Q system at Los Alamos National Laboratory. With 13.88 Tflop/s, it is the second system ever to exceed the 10 Tflop/s mark. ASCI Q was built by Hewlett-Packard and is based on the AlphaServer SC computer system.

  7. TOP500 Supercomputers for June 2002

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2002-06-20

    19th Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; and BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 19th edition of the TOP500 list of the world's fastest supercomputers was released today (June 20, 2002). The recently installed Earth Simulator supercomputer at the Earth Simulator Center in Yokohama, Japan, is as expected the clear new number 1. Its performance of 35.86 Tflop/s (trillions of calculations per second) running the Linpack benchmark is almost five times higher than the performance of the now No. 2 IBM ASCI White system at Lawrence Livermore National Laboratory (7.2 Tflop/s). This powerful leapfrogging to the top by a system so much faster than the previous top system is unparalleled in the history of the TOP500.

  8. TOP500 Supercomputers for November 2002

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2002-11-15

    20th Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; and BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 20th edition of the TOP500 list of the world's fastest supercomputers was released today (November 15, 2002). The Earth Simulator supercomputer installed earlier this year at the Earth Simulator Center in Yokohama, Japan, with its Linpack benchmark performance of 35.86 Tflop/s (trillions of calculations per second), retains the number one position. The No. 2 and No. 3 positions are held by two new, identical ASCI Q systems at Los Alamos National Laboratory (7.73 Tflop/s each). These systems are built by Hewlett-Packard and based on the AlphaServer SC computer system.

  9. Supercomputers to transform Science

    CERN Multimedia

    2006-01-01

    "New insights into the structure of space and time, climate modeling, and the design of novel drugs, are but a few of the many research areas that will be transforned by the installation of three supercomputers at the Unversity of Bristol." (1/2 page)

  10. Petaflop supercomputers of China

    Institute of Scientific and Technical Information of China (English)

    Guoliang CHEN

    2010-01-01

    After ten years of development, high performance computing (HPC) in China has made remarkable progress. In November, 2010, the NUDT Tianhe-1A and the Dawning Nebulae respectively claimed the 1st and 3rd places in the Top500 Supercomputers List; this recognizes internationally the level that China has achieved in high performance computer manufacturing.

  11. Introduction to Reconfigurable Supercomputing

    CERN Document Server

    Lanzagorta, Marco; Rosenberg, Robert

    2010-01-01

    This book covers technologies, applications, tools, languages, procedures, advantages, and disadvantages of reconfigurable supercomputing using Field Programmable Gate Arrays (FPGAs). The target audience is the community of users of High Performance Computers (HPC) who may benefit from porting their applications into a reconfigurable environment. As such, this book is intended to guide the HPC user through the many algorithmic considerations, hardware alternatives, usability issues, programming languages, and design tools that need to be understood before embarking on the creation of reconfigur

  12. Adventures in Supercomputing: An innovative program

    Energy Technology Data Exchange (ETDEWEB)

    Summers, B.G.; Hicks, H.R.; Oliver, C.E.

    1995-06-01

    Within the realm of education, seldom does an innovative program become available with the potential to change an educator's teaching methodology and serve as a spur to systemic reform. The Adventures in Supercomputing (AiS) program, sponsored by the Department of Energy, is such a program. Adventures in Supercomputing is a program for high school and middle school teachers. It has helped to change the teaching paradigm of many of the teachers involved in the program from a teacher-centered classroom to a student-centered classroom. "A student-centered classroom offers better opportunities for development of internal motivation, planning skills, goal setting and perseverance than does the traditional teacher-directed mode". Not only is the process of teaching changed, but evidences of systemic reform are beginning to surface. After describing the program, the authors discuss the teaching strategies being used and the evidences of systemic change in many of the AiS schools in Tennessee.

  13. Upland habitat weed management plan for the Don Edwards San Francisco Bay National Wildlife Refuge Environmental Education Center

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — The purpose of this plan is to outline the current status of invasive weeds at the Don Edwards San Francisco Bay NWR Environmental Education Center and to provide a...

  14. The new library building at the University of Texas Health Science Center at San Antonio.

    Science.gov (United States)

    Kronick, D A; Bowden, V M; Olivier, E R

    1985-01-01

    The new University of Texas Health Science Center at San Antonio Library opened in June 1983, replacing the 1968 library building. Planning a new library building provides an opportunity for the staff to rethink their philosophy of service. Of paramount concern and importance is the need to convey this philosophy to the architects. This paper describes the planning process and the building's external features, interior layouts, and accommodations for technology. Details of the move to the building are considered and various aspects of the building are reviewed. PMID:3995205

  15. Enabling department-scale supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Greenberg, D.S.; Hart, W.E.; Phillips, C.A.

    1997-11-01

    The Department of Energy (DOE) national laboratories have one of the longest and most consistent histories of supercomputer use. The authors summarize the architecture of DOE's new supercomputers that are being built for the Accelerated Strategic Computing Initiative (ASCI). The authors then argue that in the near future scaled-down versions of these supercomputers with petaflop-per-weekend capabilities could become widely available to hundreds of research and engineering departments. The availability of such computational resources will allow simulation of physical phenomena to become a full-fledged third branch of scientific exploration, along with theory and experimentation. They describe the ASCI and other supercomputer applications at Sandia National Laboratories, and discuss which lessons learned from Sandia's long history of supercomputing can be applied in this new setting.

  16. Ultrascalable petaflop parallel supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Blumrich, Matthias A. (Ridgefield, CT); Chen, Dong (Croton On Hudson, NY); Chiu, George (Cross River, NY); Cipolla, Thomas M. (Katonah, NY); Coteus, Paul W. (Yorktown Heights, NY); Gara, Alan G. (Mount Kisco, NY); Giampapa, Mark E. (Irvington, NY); Hall, Shawn (Pleasantville, NY); Haring, Rudolf A. (Cortlandt Manor, NY); Heidelberger, Philip (Cortlandt Manor, NY); Kopcsay, Gerard V. (Yorktown Heights, NY); Ohmacht, Martin (Yorktown Heights, NY); Salapura, Valentina (Chappaqua, NY); Sugavanam, Krishnan (Mahopac, NY); Takken, Todd (Brewster, NY)

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.
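
    As an illustrative aside (not taken from the patent), the wrap-around links that define a torus interconnect like the one mentioned above can be expressed with simple modular arithmetic; the sketch below lists the six nearest neighbors of a node in a 3D torus.

```python
# Illustrative sketch (not from the patent): the six nearest neighbors of a
# node on a 3D torus interconnect, where each dimension wraps around modulo
# the torus size.

def torus_neighbors(coord, dims):
    """coord = (x, y, z) of a node; dims = (X, Y, Z) torus dimensions."""
    neighbors = []
    for axis in range(3):
        for step in (-1, +1):
            n = list(coord)
            n[axis] = (n[axis] + step) % dims[axis]   # wrap-around link
            neighbors.append(tuple(n))
    return neighbors

# example: node (0, 0, 0) in an 8x8x8 torus links to (7,0,0), (1,0,0), (0,7,0), ...
print(torus_neighbors((0, 0, 0), (8, 8, 8)))
```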

  17. FEASIBILITY STUDY OF ESTABLISHING AN ARTIFICIAL INSEMINATION (AI) CENTER FOR CARABAOS IN SAN ILDEFONSO, BULACAN, PHILIPPINES

    Directory of Open Access Journals (Sweden)

    F.Q. Arrienda II

    2014-10-01

    The productivity of the carabao subsector is influenced by several constraints such as social, technical, economic and policy factors. The need to enhance the local production of carabaos will help local farmers to increase their income. Thus, producing thoroughbred carabaos and improving them genetically is the best response to these constraints. This study was conducted to present the feasibility of establishing an Artificial Insemination (AI) Center and its planned area of operation in Brgy. San Juan, San Ildefonso, Bulacan. The market, production, organizational and financial viability of operating the business was also evaluated. This particular study provides insights into establishing an AI Center. Included in this study is the identification of anticipated problems that could affect the business and recommendations of specific courses of action to counteract these possible problems. Primary data were obtained through interviews with key informants from the Philippine Carabao Center (PCC). To gain insights about the present status of an AI Center, interviews with the technicians of PCC and a private farm were done to get additional information. Secondary data were acquired from various literature and from the San Ildefonso Municipal Office. The proposed area would be 1,500 square meters, allotted for the laboratory and bullpen. The AI Center will operate six days a week and will be open from 8 AM until 5 PM. However, customers or farmers can call the technicians beyond office hours in case of emergency. A total initial investment of Php 3,825,417.39 is needed to establish the AI Center. The whole amount will be sourced from the owner's equity. Financial projection showed an IRR of 30% with a computed NPV of Php 2,415,597.00 and a payback period of 3.97 years. Based on all the market, technical, organizational and financial factors, projections and data analysis, this business endeavor is viable and feasible.

  18. Supercomputer debugging workshop 1991 proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.

    1991-01-01

    This report discusses the following topics on supercomputer debugging: distributed debugging; user interface to debugging tools and standards; debugging optimized codes; debugging parallel codes; and debugger performance and interface as analysis tools. (LSP)

  19. Supercomputer debugging workshop 1991 proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.

    1991-12-31

    This report discusses the following topics on supercomputer debugging: distributed debugging; user interface to debugging tools and standards; debugging optimized codes; debugging parallel codes; and debugger performance and interface as analysis tools. (LSP)

  20. The Erasmus Computing Grid - Building a Super-Computer Virtually for Free at the Erasmus Medical Center and the Hogeschool Rotterdam

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); L.V. de Zeeuw (Luc)

    2006-01-01

    The Set-Up of the 20 Teraflop Erasmus Computing Grid: To meet the enormous computational needs of life-science research as well as clinical diagnostics and treatment, the Hogeschool Rotterdam and the Erasmus Medical Center are currently setting up one of the largest desktop

  1. The Erasmus Computing Grid - Building a Super-Computer Virtually for Free at the Erasmus Medical Center and the Hogeschool Rotterdam

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); L.V. de Zeeuw (Luc)

    2006-01-01

    The Set-Up of the 20 Teraflop Erasmus Computing Grid: To meet the enormous computational needs of life-science research as well as clinical diagnostics and treatment, the Hogeschool Rotterdam and the Erasmus Medical Center are currently setting up one of the largest desktop computing

  2. Microprocessors: from desktops to supercomputers.

    Science.gov (United States)

    Baskett, F; Hennessy, J L

    1993-08-13

    Continuing improvements in integrated circuit technology and computer architecture have driven microprocessors to performance levels that rival those of supercomputers, at a fraction of the price. The use of sophisticated memory hierarchies enables microprocessor-based machines to have very large memories built from commodity dynamic random access memory while retaining the high bandwidth and low access time needed in a high-performance machine. Parallel processors composed of these high-performance microprocessors are becoming the supercomputing technology of choice for scientific and engineering applications. The challenges for these new supercomputers have been in developing multiprocessor architectures that are easy to program and that deliver high performance without extraordinary programming efforts by users. Recent progress in multiprocessor architecture has led to ways to meet these challenges.

  3. World's fastest supercomputer opens up to users

    Science.gov (United States)

    Xin, Ling

    2016-08-01

    China's latest supercomputer - Sunway TaihuLight - has claimed the crown as the world's fastest computer according to the latest TOP500 list, released at the International Supercomputer Conference in Frankfurt in late June.

  4. Improved Access to Supercomputers Boosts Chemical Applications.

    Science.gov (United States)

    Borman, Stu

    1989-01-01

    Supercomputing is described in terms of computing power and abilities. The increase in availability of supercomputers for use in chemical calculations and modeling are reported. Efforts of the National Science Foundation and Cray Research are highlighted. (CW)

  5. Desktop supercomputers. Advance medical imaging.

    Science.gov (United States)

    Frisiello, R S

    1991-02-01

    Medical imaging tools that radiologists as well as a wide range of clinicians and healthcare professionals have come to depend upon are emerging into the next phase of functionality. The strides being made in supercomputing technologies--including reduction of size and price--are pushing medical imaging to a new level of accuracy and functionality.

  6. The BirthPlace collaborative practice model: results from the San Diego Birth Center Study.

    Science.gov (United States)

    Swartz; Jackson; Lang; Ecker; Ganiats; Dickinson; Nguyen

    1998-07-01

    Objective: The search for quality, cost-effective health care programs in the United States is now a major focus in the era of health care reform. New programs need to be evaluated as alternatives are developed in the health care system. The BirthPlace program provides comprehensive perinatal services with certified nurse-midwives and obstetricians working together in an integrated collaborative practice serving a primarily low-income population. Low-risk women are delivered by nurse-midwives in a freestanding birth center (The BirthPlace), which is one component of a larger integrated health network. All others are delivered by team obstetricians at the affiliated tertiary hospital. Wellness, preventive measures, early intervention, and family involvement are emphasized. The San Diego Birth Center Study is a 4-year research project funded by the U.S. Federal Agency for Health Care Policy and Research (#R01-HS07161) to evaluate this program. The National Birth Center Study (NEJM, 1989; 321(26): 1801-11) described the advantages and safety of freestanding birth centers. However, a prospective cohort study with a concurrent comparison group of comparable risk had not been conducted on a collaborative practice-freestanding birth center model to address questions of safety, cost, and patient satisfaction. Methods: The specific aims of this study are to compare this collaborative practice model to the traditional model of perinatal health care (physician providers and hospital delivery). A prospective cohort study comparing these two health care models was conducted with a final expected sample size of approximately 2,000 birth center and 1,350 traditional care subjects. Women were recruited from both the birth center and traditional care programs (private physicians' offices and hospital-based clinics) at the beginning of prenatal care and followed through the end of the perinatal period. Prenatal, intrapartum, postpartum and infant morbidity and mortality are being

  7. Enterprise Resource Planning (ERP) : a case study of Space and Naval Warfare Systems Center San Diego's Project Cabrillo

    OpenAIRE

    Hoffman, Dean M.; Oxendine, Eric

    2002-01-01

    Approved for public release; distribution unlimited. This thesis examines the Enterprise Resource Planning (ERP) pilot implementation conducted at the Space and Naval Warfare Systems Center San Diego (SSC-SD), the first of four Department of the Navy (DON) pilot implementations. Specifically, comparisons are drawn between both successful and unsuccessful ERP implementations within private sector organizations and that of SSC-SD. Any commonalities in implementation challenges could be...

  8. A workbench for tera-flop supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Resch, M.M.; Kuester, U.; Mueller, M.S.; Lang, U. [High Performance Computing Center Stuttgart (HLRS), Stuttgart (Germany)

    2003-07-01

    Supercomputers currently reach a peak performance in the range of TFlop/s. With but one exception - the Japanese Earth Simulator - none of these systems has so far been able to also show a level of sustained performance for a variety of applications that comes close to the peak performance. Sustained TFlop/s are therefore rarely seen. The reasons are manifold and are well known: Bandwidth and latency both for main memory and for the internal network are the key internal technical problems. Cache hierarchies with large caches can bring relief but are no remedy to the problem. However, there are not only technical problems that inhibit the full exploitation by scientists of the potential of modern supercomputers. More and more organizational issues come to the forefront. This paper shows the approach of the High Performance Computing Center Stuttgart (HLRS) to deliver a sustained performance of TFlop/s for a wide range of applications from a large group of users spread over Germany. The core of the concept is the role of the data. Around this we design a simulation workbench that hides the complexity of interacting computers, networks and file systems from the user. (authors)

  9. The GF11 supercomputer

    Science.gov (United States)

    Beetem, J.; Denneau, M.; Weingarten, D.

    1987-01-01

    GF11 is a parallel computer currently under construction at the IBM Yorktown Research Center. The machine incorporates 576 floating-point processors arranged in a modified SIMD architecture. Each has space for 2 Mbytes of memory and is capable of 20 Mflops, giving the total machine a peak of 1.125 Gbytes of memory and 11.52 Gflops. The floating-point processors are interconnected by a dynamically reconfigurable non-blocking switching network. At each machine cycle any of 1024 pre-selected permutations of data can be realized among the processors. The main intended application of GF11 is a class of calculations arising from quantum chromodynamics.
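
    The totals quoted above follow directly from the per-processor figures given in the abstract; a quick check:

```python
# Quick check of the quoted totals from the per-processor figures above:
# 576 processors, each with 2 Mbytes of memory and 20 Mflops of peak performance.
processors = 576
mem_total_gbytes = processors * 2 / 1024     # 1.125 Gbytes
peak_gflops = processors * 20 / 1000         # 11.52 Gflops
print(mem_total_gbytes, peak_gflops)         # -> 1.125 11.52
```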

  10. Hazard-evaluation and technical-assistance report HETA 90-122-l2073, technical assistance to San Francisco General Hospital and Medical Center, San Francisco, California

    Energy Technology Data Exchange (ETDEWEB)

    Moss, C.E.; Seitz, T.

    1990-10-01

    In response to a request from the Director of the Environmental Health and Safety Department of the San Francisco General Hospital and Medical Center, located in San Francisco, California, an evaluation was undertaken of possible hazardous working conditions at that site. Concern existed about exposures to hazards while operating the germicidal lamp at the facility. Germicidal lamps were used to disinfect the air in tuberculosis and aerosolized pentamidine clinics. The workers wore no protective eye wear. All rooms used a 30 watt germicidal lamp. Lower wattage bulbs in the smaller rooms would have reduced occupational ultraviolet (UV) exposure. Reflectance levels of UV radiation were quite high and varied. Worker exposure to germicidal lamp UV levels was dependent on many factors, some of the most important ones being the position of the bulb in the room, age of the bulb, obstruction of the UV radiation by objects near the bulb, and the height of the worker. While there are no consensus guidelines available on ventilation systems designed for areas where germicidal lamps are used, the provision of good room air distribution and mixing is recommended to prevent stagnant air conditions or short circuiting of supply air within the room. Bulb changers need to be aware of the need for protective clothing and gloves for protection from both the UV radiation levels as well as possible glass breakage.

  11. Automatic discovery of the communication network topology for building a supercomputer model

    Science.gov (United States)

    Sobolev, Sergey; Stefanov, Konstantin; Voevodin, Vadim

    2016-10-01

    The Research Computing Center of Lomonosov Moscow State University is developing the Octotron software suite for automatic monitoring and mitigation of emergency situations in supercomputers so as to maximize hardware reliability. The suite is based on a software model of the supercomputer. The model uses a graph to describe the computing system components and their interconnections. One of the most complex components of a supercomputer that needs to be included in the model is its communication network. This work describes the proposed approach for automatically discovering the Ethernet communication network topology in a supercomputer and its description in terms of the Octotron model. This suite automatically detects computing nodes and switches, collects information about them and identifies their interconnections. The application of this approach is demonstrated on the "Lomonosov" and "Lomonosov-2" supercomputers.
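
    As a minimal sketch of the underlying idea (the actual Octotron model format and discovery code are not shown here; networkx and the device names are only assumptions for illustration), a discovered topology of computing nodes and switches can be held as an attributed graph:

```python
# Minimal sketch: describing a discovered Ethernet topology as a graph of
# computing nodes and switches. This is only an illustration of the idea,
# not the actual Octotron model.
import networkx as nx

topology = nx.Graph()

# hypothetical discovery results: devices with attributes, plus the links between them
topology.add_node("switch-01", kind="switch", ports=48)
for i in range(1, 4):
    name = f"node-{i:03d}"
    topology.add_node(name, kind="compute", mgmt_ip=f"10.0.0.{i}")
    topology.add_edge(name, "switch-01", port=i)   # link found during discovery

# a monitoring rule can now walk the graph, e.g. report how many devices each switch serves
for dev, attrs in topology.nodes(data=True):
    if attrs["kind"] == "switch":
        print(dev, "connects", topology.degree(dev), "devices")
```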

  12. Integration of Panda Workload Management System with supercomputers

    Science.gov (United States)

    De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Novikov, A.; Oleynik, D.; Panitkin, S.; Poyda, A.; Read, K. F.; Ryabinkin, E.; Teslyuk, A.; Velikhov, V.; Wells, J. C.; Wenaus, T.

    2016-09-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250000 cores with a peak performance of 0.3+ petaFLOPS, next LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in United States, Europe and Russia (in particular with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF), Supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads
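
    As a minimal sketch of the light-weight MPI wrapper idea described above (the payload command and file names are hypothetical; this is not the actual PanDA pilot code), each MPI rank can launch one single-threaded payload so that a whole multi-core allocation is filled with independent jobs:

```python
# Minimal sketch of an MPI wrapper that runs one single-threaded payload per
# rank, so a batch of independent jobs fills one multi-core allocation.
# The payload command and file names are hypothetical.
import subprocess
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# each rank picks its own input and output, e.g. one Monte-Carlo job per core
cmd = ["./run_simulation",
       f"--input=events_{rank:04d}.dat",
       f"--output=result_{rank:04d}.root"]
ret = subprocess.call(cmd)

# gather return codes on rank 0 so the wrapper can report overall success
codes = comm.gather(ret, root=0)
if rank == 0:
    print("failed ranks:", [i for i, c in enumerate(codes) if c != 0])
```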

  13. Seismic signal processing on heterogeneous supercomputers

    Science.gov (United States)

    Gokhberg, Alexey; Ermert, Laura; Fichtner, Andreas

    2015-04-01

    The processing of seismic signals - including the correlation of massive ambient noise data sets - represents an important part of a wide range of seismological applications. It is characterized by large data volumes as well as high computational input/output intensity. Development of efficient approaches towards seismic signal processing on emerging high performance computing systems is therefore essential. Heterogeneous supercomputing systems introduced in the recent years provide numerous computing nodes interconnected via high throughput networks, every node containing a mix of processing elements of different architectures, like several sequential processor cores and one or a few graphical processing units (GPU) serving as accelerators. A typical representative of such computing systems is "Piz Daint", a supercomputer of the Cray XC 30 family operated by the Swiss National Supercomputing Center (CSCS), which we used in this research. Heterogeneous supercomputers provide an opportunity for manifold application performance increase and are more energy-efficient, however they have much higher hardware complexity and are therefore much more difficult to program. The programming effort may be substantially reduced by the introduction of modular libraries of software components that can be reused for a wide class of seismology applications. The ultimate goal of this research is design of a prototype for such library suitable for implementing various seismic signal processing applications on heterogeneous systems. As a representative use case we have chosen an ambient noise correlation application. Ambient noise interferometry has developed into one of the most powerful tools to image and monitor the Earth's interior. Future applications will require the extraction of increasingly small details from noise recordings. To meet this demand, more advanced correlation techniques combined with very large data volumes are needed. This poses new computational problems that
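
    As a minimal illustration of the core operation in ambient noise correlation (synthetic data and plain NumPy standing in for the optimized heterogeneous implementation discussed here, and the sampling rate is an assumed value), two noise traces can be cross-correlated in the frequency domain to recover their relative delay:

```python
# Minimal sketch of the core operation in ambient noise interferometry:
# cross-correlating two continuous noise recordings. Synthetic data only.
import numpy as np

fs = 20.0                          # assumed sampling rate in Hz
n = 4096
rng = np.random.default_rng(0)
trace_a = rng.standard_normal(n)
trace_b = np.roll(trace_a, 25) + 0.5 * rng.standard_normal(n)   # delayed, noisy copy

# frequency-domain (circular) cross-correlation of the two traces
spec = np.conj(np.fft.rfft(trace_a)) * np.fft.rfft(trace_b)
xcorr = np.fft.irfft(spec, n)
lag = int(np.argmax(xcorr))
if lag > n // 2:
    lag -= n                       # map circular index to a signed lag
print(f"estimated delay of trace_b relative to trace_a: {lag / fs:.2f} s")  # ~ +1.25 s
```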

  14. Data Analysis and Assessment Center

    Data.gov (United States)

    Federal Laboratory Consortium — The DoD Supercomputing Resource Center (DSRC) Data Analysis and Assessment Center (DAAC) provides classified facilities to enhance customer interactions with the ARL...

  15. TOP500 Supercomputers for June 2004

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2004-06-23

    23rd Edition of TOP500 List of World's Fastest Supercomputers Released: Japan's Earth Simulator Enters Third Year in Top Position MANNHEIM, Germany; KNOXVILLE, Tenn.; and BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 23rd edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2004) at the International Supercomputer Conference in Heidelberg, Germany.

  16. Flux-Level Transit Injection Experiments with NASA Pleiades Supercomputer

    Science.gov (United States)

    Li, Jie; Burke, Christopher J.; Catanzarite, Joseph; Seader, Shawn; Haas, Michael R.; Batalha, Natalie; Henze, Christopher; Christiansen, Jessie; Kepler Project, NASA Advanced Supercomputing Division

    2016-06-01

    Flux-Level Transit Injection (FLTI) experiments are executed with NASA's Pleiades supercomputer for the Kepler Mission. The latest release (9.3, January 2016) of the Kepler Science Operations Center Pipeline is used in the FLTI experiments. Their purpose is to validate the Analytic Completeness Model (ACM), which can be computed for all Kepler target stars, thereby enabling exoplanet occurrence rate studies. Pleiades, a facility of NASA's Advanced Supercomputing Division, is one of the world's most powerful supercomputers and represents NASA's state-of-the-art technology. We discuss the details of implementing the FLTI experiments on the Pleiades supercomputer. For example, taking into account that ~16 injections are generated by one core of the Pleiades processors in an hour, the “shallow” FLTI experiment, in which ~2000 injections are required per target star, can be done for 16% of all Kepler target stars in about 200 hours. Stripping down the transit search to bare bones, i.e. only searching adjacent high/low periods at high/low pulse durations, makes the computationally intensive FLTI experiments affordable. The design of the FLTI experiments and the analysis of the resulting data are presented in “Validating an Analytic Completeness Model for Kepler Target Stars Based on Flux-level Transit Injection Experiments” by Catanzarite et al. (#2494058).Kepler was selected as the 10th mission of the Discovery Program. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.
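
    A rough reading of the figures quoted above, treating the total Kepler target count of roughly 200,000 stars as an outside assumption not stated in the abstract:

```python
# Rough arithmetic behind the quoted figures. The ~200,000 total Kepler target
# stars is an assumption; the other numbers come from the abstract above.
injections_per_core_hour = 16
injections_per_star = 2000
core_hours_per_star = injections_per_star / injections_per_core_hour    # 125 core-hours

total_targets = 200_000                                  # assumed Kepler target count
stars_covered = 0.16 * total_targets                     # 16% of targets = 32,000 stars
total_core_hours = stars_covered * core_hours_per_star   # 4.0 million core-hours

wall_clock_hours = 200
cores_needed = total_core_hours / wall_clock_hours       # ~20,000 cores running in parallel
print(core_hours_per_star, stars_covered, cores_needed)
```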

  17. INTEGRATION OF PANDA WORKLOAD MANAGEMENT SYSTEM WITH SUPERCOMPUTERS

    Energy Technology Data Exchange (ETDEWEB)

    De, K [University of Texas at Arlington; Jha, S [Rutgers University; Maeno, T [Brookhaven National Laboratory (BNL); Mashinistov, R. [Russian Research Center, Kurchatov Institute, Moscow, Russia; Nilsson, P [Brookhaven National Laboratory (BNL); Novikov, A. [Russian Research Center, Kurchatov Institute, Moscow, Russia; Oleynik, D [University of Texas at Arlington; Panitkin, S [Brookhaven National Laboratory (BNL); Poyda, A. [Russian Research Center, Kurchatov Institute, Moscow, Russia; Ryabinkin, E. [Russian Research Center, Kurchatov Institute, Moscow, Russia; Teslyuk, A. [Russian Research Center, Kurchatov Institute, Moscow, Russia; Tsulaia, V. [Lawrence Berkeley National Laboratory (LBNL); Velikhov, V. [Russian Research Center, Kurchatov Institute, Moscow, Russia; Wen, G. [University of Wisconsin, Madison; Wells, Jack C [ORNL; Wenaus, T [Brookhaven National Laboratory (BNL)

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250000 cores with a peak performance of 0.3+ petaFLOPS, next LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in United States, Europe and Russia (in particular with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF), Supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of

  18. Examination Of The Influence Of Service Quality On Membership Renewal In Fitness Centers In San Francisco Bay Area

    Directory of Open Access Journals (Sweden)

    Pei Chih Wei

    2010-12-01

    Full Text Available Corporations have to learn how to satisfy their customers’ various demands as the era of interactivity with customers has emerged (Pepper & Rogers, 1999). For fitness centers, customers’ demands are increasing and diversified. Service quality is therefore an index by which customers assess service-producing industries. Furthermore, the concepts of corporate expansion and customer relationship have become the foundation on which service providers build higher profitability through customers’ renewal of membership. The main purpose of this study is to evaluate the impact of service quality on the renewal willingness of fitness center membership. Customers from four fitness centers in the San Francisco Bay Area, USA, were randomly selected for this survey. A total of 50 subjects participated in this survey. The data were analyzed by multiple regression and stepwise regression. The results indicated that service quality has a positive influence on members’ willingness to renew their membership.

  19. Scientists turn to supercomputers for knowledge about universe

    CERN Multimedia

    White, G

    2003-01-01

    The DOE is funding the computers at the Center for Astrophysical Thermonuclear Flashes, which is based at the University of Chicago and uses supercomputers at the nation's weapons labs to study explosions in and on certain stars. The DOE is picking up the project's bill in the hope that the work will help the agency learn to better simulate the blasts of nuclear warheads (1 page).

  20. Study of ATLAS TRT performance with GRID and supercomputers

    Science.gov (United States)

    Krasnopevtsev, D. V.; Klimentov, A. A.; Mashinistov, R. Yu.; Belyaev, N. L.; Ryabinkin, E. A.

    2016-09-01

    One of the most important problems to be solved for ATLAS physics analysis is the reconstruction of proton-proton events with a large number of interactions in the Transition Radiation Tracker. The paper includes Transition Radiation Tracker performance results obtained using the ATLAS GRID and the Kurchatov Institute's Data Processing Center, including a Tier-1 grid site and a supercomputer, as well as an analysis of CPU efficiency during these studies.

  1. INTEL: Intel based systems move up in supercomputing ranks

    CERN Multimedia

    2002-01-01

    "The TOP500 supercomputer rankings released today at the Supercomputing 2002 conference show a dramatic increase in the number of Intel-based systems being deployed in high-performance computing (HPC) or supercomputing areas" (1/2 page).

  2. Comparing Clusters and Supercomputers for Lattice QCD

    CERN Document Server

    Gottlieb, S

    2001-01-01

    Since the development of the Beowulf project to build a parallel computer from commodity PC components, many such clusters have been built. The MILC QCD code has been run on a variety of clusters and supercomputers. Key design features are identified, and the cost effectiveness of clusters and supercomputers is compared.

  3. Low Cost Supercomputer for Applications in Physics

    Science.gov (United States)

    Ahmed, Maqsood; Ahmed, Rashid; Saeed, M. Alam; Rashid, Haris; Fazal-e-Aleem

    2007-02-01

    Using a parallel processing technique and commodity hardware, Beowulf supercomputers can be built at a much lower cost. Research organizations and educational institutions are using this technique to build their own high-performance clusters. In this paper we discuss the architecture and design of a Beowulf supercomputer and our own experience of building the BURRAQ cluster.

  4. TOP500 Supercomputers for June 2005

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2005-06-22

    25th Edition of TOP500 List of World's Fastest Supercomputers Released: DOE/LLNL BlueGene/L and IBM gain Top Positions MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 25th edition of the TOP500 list of the world's fastest supercomputers was released today (June 22, 2005) at the 20th International Supercomputing Conference (ISC2005) in Heidelberg, Germany.

  5. Tomographic Rayleigh-wave group velocities in the Central Valley, California centered on the Sacramento/San Joaquin Delta

    Science.gov (United States)

    Fletcher, Jon Peter B.; Erdem, Jemile; Seats, Kevin; Lawrence, Jesse

    2016-01-01

    If shaking from a local or regional earthquake in the San Francisco Bay region were to rupture levees in the Sacramento/San Joaquin Delta, then brackish water from San Francisco Bay would contaminate the water in the Delta: the source of fresh water for about half of California. As a prelude to a full shear-wave velocity model that can be used in computer simulations and further seismic hazard analysis, we report on the use of ambient noise tomography to build a fundamental-mode, Rayleigh-wave group velocity model for the region around the Sacramento/San Joaquin Delta in the western Central Valley, California. Recordings from the vertical component of about 31 stations were processed to compute the spatial distribution of Rayleigh wave group velocities. Complex coherency between pairs of stations was stacked over 8 months to more than a year. Dispersion curves were determined from 4 to about 18 seconds. We calculated average group velocities for each period and inverted for deviations from the average for a matrix of cells that covered the study area. Smoothing using the first difference is applied. Cells of the model were about 5.6 km in either dimension. Checkerboard tests of resolution, which is dependent on station density, suggest that the resolving ability of the array is reasonably good within the middle of the array, with resolution between 0.2 and 0.4 degrees. Overall, low velocities in the middle of each image reflect the deeper sedimentary syncline in the Central Valley. In detail, the model shows several centers of low velocity that may be associated with gross geologic features such as faulting along the western margin of the Central Valley, oil and gas reservoirs, and large cross-cutting features like the Stockton arch. At shorter periods around 5.5 s, the model’s western boundary between low and high velocities closely follows regional fault geometry and the edge of a residual isostatic gravity low. In the eastern part of the valley, the boundaries
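
    The inversion step described above, solving for cell-by-cell deviations from the average group velocity with first-difference smoothing, can be sketched as a damped least-squares problem. The toy example below uses a random ray-path matrix and synthetic travel-time residuals as stand-in data; it only illustrates the regularized inversion, it is not the authors' code.

        # Toy damped least-squares tomography: travel-time residuals -> slowness perturbations per cell.
        import numpy as np

        rng = np.random.default_rng(0)
        n_paths, n_cells = 200, 50                     # station-pair paths and model cells (illustrative)
        G = rng.uniform(0, 10, (n_paths, n_cells))     # path length of each ray in each cell (stand-in)
        m_true = rng.normal(0, 0.02, n_cells)          # "true" slowness deviations from the average
        d = G @ m_true + rng.normal(0, 0.05, n_paths)  # travel-time residuals with noise

        # First-difference smoothing operator D (penalizes jumps between neighboring cells).
        D = np.eye(n_cells - 1, n_cells, k=1) - np.eye(n_cells - 1, n_cells)
        lam = 5.0                                      # damping/smoothing weight (tuning choice)

        # Solve min ||G m - d||^2 + lam^2 ||D m||^2 by stacking the smoothing rows under G.
        A = np.vstack([G, lam * D])
        b = np.concatenate([d, np.zeros(n_cells - 1)])
        m_est, *_ = np.linalg.lstsq(A, b, rcond=None)
        print("recovered vs true (first 5 cells):", np.round(m_est[:5], 3), np.round(m_true[:5], 3))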

  6. Integrating with users is one thing, but living with them? A case study on loss of space from the Medical Center Library, University of California, San Diego.

    Science.gov (United States)

    Haynes, Craig

    2010-01-01

    The University of California, San Diego (UCSD) Medical Center is the primary hospital for the UCSD School of Medicine. The UCSD Medical Center Library (MCL), a branch of the campus's biomedical library, is located on the medical center campus. In 2007, the medical center administration made a request to MCL for space in its facility to relocate pharmacy administration from the hospital tower. The university librarian brought together a team of library managers to deliberate and develop a proposal, which ultimately accommodated the medical center's request and enhanced some of MCL's public services.

  7. 16 million [pounds] investment for 'virtual supercomputer'

    CERN Multimedia

    Holland, C

    2003-01-01

    "The Particle Physics and Astronomy Research Council is to spend 16million [pounds] to create a massive computing Grid, equivalent to the world's second largest supercomputer after Japan's Earth Simulator computer" (1/2 page)

  8. Supercomputers open window of opportunity for nursing.

    Science.gov (United States)

    Meintz, S L

    1993-01-01

    A window of opportunity was opened for nurse researchers with the High Performance Computing and Communications (HPCC) initiative in President Bush's 1992 fiscal-year budget. Nursing research moved into the high-performance computing environment through the University of Nevada Las Vegas/Cray Project for Nursing and Health Data Research (PNHDR). Using the CRAY YMP 2/216 supercomputer, the PNHDR established the validity of a supercomputer platform for nursing research. In addition, the research has identified a paradigm shift in statistical analysis, delineated actual and potential barriers to nursing research in a supercomputing environment, conceptualized a new branch of nursing science called Nurmetrics, and discovered new avenues for nursing research utilizing supercomputing tools.

  9. TOP500 Supercomputers for November 2004

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2004-11-08

    24th Edition of TOP500 List of World's Fastest Supercomputers Released: DOE/IBM BlueGene/L and NASA/SGI's Columbia gain Top Positions MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 24th edition of the TOP500 list of the world's fastest supercomputers was released today (November 8, 2004) at the SC2004 Conference in Pittsburgh, Pa.

  10. Misleading Performance Reporting in the Supercomputing Field

    Directory of Open Access Journals (Sweden)

    David H. Bailey

    1992-01-01

    Full Text Available In a previous humorous note, I outlined 12 ways in which performance figures for scientific supercomputers can be distorted. In this paper, the problem of potentially misleading performance reporting is discussed in detail. Included are some examples that have appeared in recent published scientific papers. This paper also includes some proposed guidelines for reporting performance, the adoption of which would raise the level of professionalism and reduce the level of confusion in the field of supercomputing.

  11. Simulating Galactic Winds on Supercomputers

    Science.gov (United States)

    Schneider, Evan

    2017-01-01

    Galactic winds are a ubiquitous feature of rapidly star-forming galaxies. Observations of nearby galaxies have shown that winds are complex, multiphase phenomena, comprised of outflowing gas at a large range of densities, temperatures, and velocities. Describing how starburst-driven outflows originate, evolve, and affect the circumgalactic medium and gas supply of galaxies is an important challenge for theories of galaxy evolution. In this talk, I will discuss how we are using a new hydrodynamics code, Cholla, to improve our understanding of galactic winds. Cholla is a massively parallel, GPU-based code that takes advantage of specialized hardware on the newest generation of supercomputers. With Cholla, we can perform large, three-dimensional simulations of multiphase outflows, allowing us to track the coupling of mass and momentum between gas phases across hundreds of parsecs at sub-parsec resolution. The results of our recent simulations demonstrate that the evolution of cool gas in galactic winds is highly dependent on the initial structure of embedded clouds. In particular, we find that turbulent density structures lead to more efficient mass transfer from cool to hot phases of the wind. I will discuss the implications of our results both for the incorporation of winds into cosmological simulations, and for interpretations of observed multiphase winds and the circumgalactic medium of nearby galaxies.

  12. Project Head Start, Summer 1966, Lectures Presented in the Orientation Session for Personnel in the Child Development Centers (San Francisco State College, June 19-24, 1966).

    Science.gov (United States)

    LEWIS, MARY S.

    In June 1966, San Francisco State College conducted an orientation session for the personnel of child care centers in Head Start programs. Following the Washington, D.C., Head Start staff guidelines, the 15 speakers presented such topics as the impact of poverty, health and nutrition needs for disadvantaged children, language development,…

  13. Accuracy of Perceived Estimated Travel Time by EMS to a Trauma Center in San Bernardino County, California

    Directory of Open Access Journals (Sweden)

    Michael M. Neeki

    2016-06-01

    Full Text Available Introduction: Mobilization of trauma resources has the potential to cause ripple effects throughout hospital operations. One major factor affecting efficient utilization of trauma resources is a discrepancy between the prehospital estimated time of arrival (ETA) as communicated by emergency medical services (EMS) personnel and their actual time of arrival (TOA). The current study aimed to assess the accuracy of the perceived prehospital estimated arrival time by EMS personnel in comparison to their actual arrival time at a Level II trauma center in San Bernardino County, California. Methods: This retrospective study included traumas classified as alerts or activations that were transported to Arrowhead Regional Medical Center in 2013. We obtained estimated arrival time and actual arrival time for each transport from the Surgery Department Trauma Registry. The difference between the median of ETA and actual TOA by EMS crews to the trauma center was calculated for these transports. Additional variables assessed included time of day and month during which the transport took place. Results: A total of 2,454 patients classified as traumas were identified in the Surgery Department Trauma Registry. After exclusion of trauma consults, walk-ins, handoffs between agencies, downgraded traumas, traumas missing information, and traumas transported by agencies other than American Medical Response, Ontario Fire, Rialto Fire or San Bernardino County Fire, we included a final sample size of 555 alert and activation classified traumas in the final analysis. When combining all transports by the included EMS agencies, the median of the ETA was 10 minutes and the median of the actual TOA was 22 minutes (median of difference = 9 minutes, p<0.0001). Furthermore, when comparing the difference between trauma alerts and activations, trauma activations demonstrated an equal or larger difference in the median of the estimated and actual time of arrival (p<0.0001). We also found
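
    The headline numbers above distinguish between the difference of the medians (22 − 10 = 12 minutes) and the median of the per-transport differences (9 minutes), which are not the same statistic. The short sketch below, using made-up ETA/TOA pairs rather than the study data, shows how each is computed.

        # Median ETA vs. TOA, and median of paired differences (illustrative data, not the study's).
        import numpy as np

        eta = np.array([6, 10, 14, 8, 12])     # minutes, as radioed by EMS (made up)
        toa = np.array([15, 30, 23, 19, 22])   # minutes, actual arrival (made up)

        print("median ETA:", np.median(eta))                           # 10
        print("median TOA:", np.median(toa))                           # 22
        print("difference of medians:", np.median(toa) - np.median(eta))   # 12
        print("median of paired differences:", np.median(toa - eta))       # 10, a different number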

  14. GREEN SUPERCOMPUTING IN A DESKTOP BOX

    Energy Technology Data Exchange (ETDEWEB)

    HSU, CHUNG-HSING [Los Alamos National Laboratory; FENG, WU-CHUN [NON LANL; CHING, AVERY [NON LANL

    2007-01-17

    The computer workstation, introduced by Sun Microsystems in 1982, was the tool of choice for scientists and engineers as an interactive computing environment for the development of scientific codes. However, by the mid-1990s, the performance of workstations began to lag behind high-end commodity PCs. This, coupled with the disappearance of BSD-based operating systems in workstations and the emergence of Linux as an open-source operating system for PCs, arguably led to the demise of the workstation as we knew it. Around the same time, computational scientists started to leverage PCs running Linux to create a commodity-based (Beowulf) cluster that provided dedicated computer cycles, i.e., supercomputing for the rest of us, as a cost-effective alternative to large supercomputers, i.e., supercomputing for the few. However, as the cluster movement has matured, with respect to cluster hardware and open-source software, these clusters have become much more like their large-scale supercomputing brethren - a shared (and power-hungry) datacenter resource that must reside in a machine-cooled room in order to operate properly. Consequently, the above observations, when coupled with the ever-increasing performance gap between the PC and cluster supercomputer, provide the motivation for a 'green' desktop supercomputer - a turnkey solution that provides an interactive and parallel computing environment with the approximate form factor of a Sun SPARCstation 1 'pizza box' workstation. In this paper, they present the hardware and software architecture of such a solution as well as its prowess as a developmental platform for parallel codes. In short, imagine a 12-node personal desktop supercomputer that achieves 14 Gflops on Linpack but sips only 185 watts of power at load, resulting in a performance-power ratio that is over 300% better than their reference SMP platform.
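
    The performance-power claim above reduces to simple arithmetic: Linpack Gflops divided by watts at load. A quick check using only the figures quoted in the abstract:

        # Performance-per-watt check using the figures quoted in the abstract.
        gflops = 14.0          # Linpack performance of the 12-node desktop box
        watts_at_load = 185.0  # measured power at load
        ratio = gflops * 1000.0 / watts_at_load   # in Mflops per watt
        print(f"{ratio:.1f} Mflops/W")            # ~75.7 Mflops/W
        # The abstract's reference SMP figure is not given; "over 300% better" is relative to that baseline.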

  15. High Performance Distributed Computing in a Supercomputer Environment: Computational Services and Applications Issues

    Science.gov (United States)

    Kramer, Williams T. C.; Simon, Horst D.

    1994-01-01

    This tutorial aims to be a practical guide for the uninitiated to the main topics and themes of high-performance computing (HPC), with particular emphasis on distributed computing. The intent is first to provide some guidance and directions in the rapidly growing field of scientific computing using both massively parallel and traditional supercomputers. Because of their considerable potential computational power, loosely or tightly coupled clusters of workstations are increasingly considered as a third alternative to both the more conventional supercomputers based on a small number of powerful vector processors and massively parallel processors. Even though many research issues concerning the effective use of workstation clusters and their integration into a large-scale production facility are still unresolved, such clusters are already used for production computing. In this tutorial we will draw on the unique experience gained at the NAS facility at NASA Ames Research Center. Over the last five years at NAS, massively parallel supercomputers such as the Connection Machines CM-2 and CM-5 from Thinking Machines Corporation and the iPSC/860 (Touchstone Gamma Machine) and Paragon machines from Intel were used in a production supercomputer center alongside traditional vector supercomputers such as the Cray Y-MP and C90.

  16. Input/output behavior of supercomputing applications

    Science.gov (United States)

    Miller, Ethan L.

    1991-01-01

    The collection and analysis of supercomputer I/O traces and their use in a collection of buffering and caching simulations are described. This serves two purposes. First, it gives a model of how individual applications running on supercomputers request file system I/O, allowing system designers to optimize I/O hardware and file system algorithms to that model. Second, the buffering simulations show what resources are needed to maximize the CPU utilization of a supercomputer given a very bursty I/O request rate. By using read-ahead and write-behind in a large solid-state disk, one or two applications were sufficient to fully utilize a Cray Y-MP CPU.
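
    The effect of read-ahead on a largely sequential request stream can be illustrated with a very small simulation (a didactic sketch, not the trace-driven simulator described in the abstract): prefetching a window of blocks on each miss turns most sequential accesses into buffer hits.

        # Tiny read-ahead buffer simulation over a mostly-sequential block trace (didactic only).
        def misses_with_readahead(trace, window):
            """Count device reads when each miss prefetches `window` consecutive blocks."""
            buffered = set()
            misses = 0
            for block in trace:
                if block not in buffered:
                    misses += 1
                    buffered = set(range(block, block + window))  # fetch the block plus read-ahead
            return misses

        trace = list(range(0, 1000)) + list(range(5000, 5100))    # sequential run plus a seek
        for window in (1, 8, 64):
            print(f"read-ahead window {window:3d}: {misses_with_readahead(trace, window)} device reads")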

  17. GPUs: An Oasis in the Supercomputing Desert

    CERN Document Server

    Kamleh, Waseem

    2012-01-01

    A novel metric is introduced to compare the supercomputing resources available to academic researchers on a national basis. Data from the supercomputing Top 500 and the top 500 universities in the Academic Ranking of World Universities (ARWU) are combined to form the proposed "500/500" score for a given country. Australia scores poorly in the 500/500 metric when compared with other countries with a similar ARWU ranking, an indication that HPC-based researchers in Australia are at a relative disadvantage with respect to their overseas competitors. For HPC problems where single precision is sufficient, commodity GPUs provide a cost-effective means of quenching the computational thirst of otherwise parched Lattice practitioners traversing the Australian supercomputing desert. We explore some of the more difficult terrain in single precision territory, finding that BiCGStab is unreliable in single precision at large lattice sizes. We test the CGNE and CGNR forms of the conjugate gradient method on the normal equa...

  18. Current situation of sexual and reproductive health of men deprived of liberty in the Institutional Care Center of San Jose

    Directory of Open Access Journals (Sweden)

    Dorita Rivas Fonseca

    2013-10-01

    Full Text Available The objective of this research was to determine the current status of sexual and reproductive health among the prisoners of the Institutional Care Center (CAI) of San José. It is a descriptive study. Through strategic sampling, the participation of 102 men was determined. The information was obtained by applying a self-administered questionnaire with closed and open questions. Regarding their socio-demographic profile, it appears that those deprived of their liberty are a very heterogeneous group. As regards sexual and reproductive health, they relate the first concept to the prevention of disease and the second to reproductive aspects; this shows limitations in knowledge on these topics, something that affects daily life activities and self-care. It is concluded that research by obstetric-gynecological nurses on persons deprived of liberty is almost nonexistent, not only in the country but worldwide, especially with the male population. In the case of the CAI prison, health care is not sufficient for the number of inmates housed there (overpopulation of almost 50%); this implies a deterioration in the health and physical condition of these people, as well as in their sexual and reproductive health.

  19. Floating point arithmetic in future supercomputers

    Science.gov (United States)

    Bailey, David H.; Barton, John T.; Simon, Horst D.; Fouts, Martin J.

    1989-01-01

    Considerations in the floating-point design of a supercomputer are discussed. Particular attention is given to word size, hardware support for extended precision, format, and accuracy characteristics. These issues are discussed from the perspective of the Numerical Aerodynamic Simulation Systems Division at NASA Ames. The features believed to be most important for a future supercomputer floating-point design include: (1) a 64-bit IEEE floating-point format with 11 exponent bits, 52 mantissa bits, and one sign bit and (2) hardware support for reasonably fast double-precision arithmetic.
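
    The recommended 64-bit format (1 sign bit, 11 exponent bits, 52 mantissa bits) is exactly the IEEE 754 double-precision layout that later became standard. The small sketch below, which pulls those fields out of a Python float (itself a 64-bit IEEE double), makes the bit budget concrete.

        # Decode the sign / exponent / mantissa fields of a 64-bit IEEE double.
        import struct

        def fields(x):
            bits = int.from_bytes(struct.pack(">d", x), "big")   # 64 raw bits, big-endian
            sign = bits >> 63                                    # 1 bit
            exponent = (bits >> 52) & 0x7FF                      # 11 bits, biased by 1023
            mantissa = bits & ((1 << 52) - 1)                    # 52 bits of fraction
            return sign, exponent, mantissa

        s, e, m = fields(-1.5)
        print(s, e - 1023, hex(m))   # sign=1, unbiased exponent=0, fraction bits = 0x8000000000000 (0.5)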

  20. Sandia's network for supercomputing '95: Validating the progress of Asynchronous Transfer Mode (ATM) switching

    Energy Technology Data Exchange (ETDEWEB)

    Pratt, T.J.; Vahle, O.; Gossage, S.A.

    1996-04-01

    The Advanced Networking Integration Department at Sandia National Laboratories has used the annual Supercomputing conference sponsored by the IEEE and ACM for the past three years as a forum to demonstrate and focus communication and networking developments. For Supercomputing '95, Sandia elected to demonstrate the functionality and capability of an AT&T Globeview 20 Gbps Asynchronous Transfer Mode (ATM) switch, which represents the core of Sandia's corporate network; to build and utilize a three-node 622 megabit per second Paragon network; and to extend the DOD's ACTS ATM Internet from Sandia, New Mexico to the conference's show floor in San Diego, California, for video demonstrations. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support Sandia's overall strategies in ATM networking.

  1. Shear-wave Velocity Model from Rayleigh Wave Group Velocities Centered on the Sacramento/San Joaquin Delta

    Science.gov (United States)

    Fletcher, Jon B.; Erdem, Jemile

    2017-06-01

    Rayleigh wave group velocities obtained from ambient noise tomography are inverted for an upper crustal model of the Central Valley, California, centered on the Sacramento/San Joaquin Delta. Two methods were tried; the first uses SURF96, a least squares routine. It provides a good fit to the data, but convergence is dependent on the starting model. The second uses a genetic algorithm, whose starting model is random. This method was tried at several nodes in the model and compared to the output from SURF96. The genetic code is run five times and the variance of the output of all five models can be used to obtain an estimate of error. SURF96 produces a more regular solution mostly because it is typically run with a smoothing constraint. Models from the genetic code are generally consistent with the SURF96 code sometimes producing lower velocities at depth. The full model, calculated using SURF96, employed a 2-pass strategy, which used a variable damping scheme in the first pass. The resulting model shows low velocities near the surface in the Central Valley with a broad asymmetrical sedimentary basin located close to the western edge of the Central Valley near 122°W longitude. At shallow depths, the Rio Vista Basin is found nestled between the Pittsburgh/Kirby Hills and Midland faults, but a significant basin also seems to exist to the west of the Kirby Hills fault. There are other possible correlations between fast and slow velocities in the Central Valley and geologic features such as the Stockton Arch, oil or gas producing regions and the fault-controlled western boundary of the Central Valley.

  2. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    Energy Technology Data Exchange (ETDEWEB)

    De, K [University of Texas at Arlington; Jha, S [Rutgers University; Klimentov, A [Brookhaven National Laboratory (BNL); Maeno, T [Brookhaven National Laboratory (BNL); Nilsson, P [Brookhaven National Laboratory (BNL); Oleynik, D [University of Texas at Arlington; Panitkin, S [Brookhaven National Laboratory (BNL); Wells, Jack C [ORNL; Wenaus, T [Brookhaven National Laboratory (BNL)

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in United States, Europe and Russia (in particular with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF), MIRA supercomputer at Argonne Leadership Computing Facilities (ALCF), Supercomputer at the National Research Center Kurchatov Institute, IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation

  3. Internal computational fluid mechanics on supercomputers for aerospace propulsion systems

    Science.gov (United States)

    Andersen, Bernhard H.; Benson, Thomas J.

    1987-01-01

    The accurate calculation of three-dimensional internal flowfields for application towards aerospace propulsion systems requires computational resources available only on supercomputers. A survey is presented of three-dimensional calculations of hypersonic, transonic, and subsonic internal flowfields conducted at the Lewis Research Center. A steady state Parabolized Navier-Stokes (PNS) solution of flow in a Mach 5.0, mixed compression inlet, a Navier-Stokes solution of flow in the vicinity of a terminal shock, and a PNS solution of flow in a diffusing S-bend with vortex generators are presented and discussed. All of these calculations were performed on either the NAS Cray-2 or the Lewis Research Center Cray XMP.

  4. Interview with Jennie E. Rodríguez, Executive Director of the Mission Cultural Center for Latino Arts, San Francisco, CA, USA, August 15, 2001

    Directory of Open Access Journals (Sweden)

    Gérard Selbach

    2009-10-01

    Full Text Available Foreword The Mission Cultural Center for Latino Arts (MCCLA) is located at 2868 Mission Street in San Francisco, in a district mainly inhabited by Hispanics and well-known for its numerous murals. The Center was founded in 1977 by artists and community activists who shared “the vision to promote, preserve and develop the Latino cultural arts that reflect the living tradition and experiences of Chicano, Central and South American, and Caribbean people.” August 2001 was as busy at the Center as a...

  5. Refinement of herpesvirus B-capsid structure on parallel supercomputers.

    Science.gov (United States)

    Zhou, Z H; Chiu, W; Haskell, K; Spears, H; Jakana, J; Rixon, F J; Scott, L R

    1998-01-01

    Electron cryomicroscopy and icosahedral reconstruction are used to obtain the three-dimensional structure of the 1250-A-diameter herpesvirus B-capsid. The centers and orientations of particles in focal pairs of 400-kV, spot-scan micrographs are determined and iteratively refined by common-lines-based local and global refinement procedures. We describe the rationale behind choosing shared-memory multiprocessor computers for executing the global refinement, which is the most computationally intensive step in the reconstruction procedure. This refinement has been implemented on three different shared-memory supercomputers. The speedup and efficiency are evaluated by using test data sets with different numbers of particles and processors. Using this parallel refinement program, we refine the herpesvirus B-capsid from 355-particle images to 13-A resolution. The map shows new structural features and interactions of the protein subunits in the three distinct morphological units: penton, hexon, and triplex of this T = 16 icosahedral particle.
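
    The parallel performance evaluation mentioned above boils down to the standard definitions of speedup and efficiency. A two-line reminder, with hypothetical timings since the paper's measured values are not reproduced here:

        # Speedup and efficiency from measured wall-clock times (timings below are hypothetical).
        def speedup_efficiency(t_serial, t_parallel, n_procs):
            s = t_serial / t_parallel      # how many times faster than one processor
            return s, s / n_procs          # efficiency: fraction of ideal linear speedup

        s, e = speedup_efficiency(t_serial=3600.0, t_parallel=260.0, n_procs=16)
        print(f"speedup {s:.1f}x, efficiency {e:.0%}")   # ~13.8x, ~87%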

  6. Data-intensive computing on numerically-insensitive supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Ahrens, James P [Los Alamos National Laboratory; Fasel, Patricia K [Los Alamos National Laboratory; Habib, Salman [Los Alamos National Laboratory; Heitmann, Katrin [Los Alamos National Laboratory; Lo, Li - Ta [Los Alamos National Laboratory; Patchett, John M [Los Alamos National Laboratory; Williams, Sean J [Los Alamos National Laboratory; Woodring, Jonathan L [Los Alamos National Laboratory; Wu, Joshua [Los Alamos National Laboratory; Hsu, Chung - Hsing [ONL

    2010-12-03

    With the advent of the era of petascale supercomputing, via the delivery of the Roadrunner supercomputing platform at Los Alamos National Laboratory, there is a pressing need to address the problem of visualizing massive petascale-sized results. In this presentation, I discuss progress on a number of approaches including in-situ analysis, multi-resolution out-of-core streaming and interactive rendering on the supercomputing platform. These approaches are placed in context by the emerging area of data-intensive supercomputing.

  7. United States Air Force Personalized Medicine and Advanced Diagnostics Program Panel: Representative Research at the San Antonio Military Medical Center

    Science.gov (United States)

    2016-05-20

    Scanned publication-clearance memorandum from the 59th Medical Wing (AETC), Lackland Air Force Base, Texas, concerning a platform presentation at the SAMHS & Universities Research Forum (SURF2016), University of Texas at San Antonio, TX, 05-20-2016; no machine-readable abstract is available.

  8. Parallel supercomputers for lattice gauge theory.

    Science.gov (United States)

    Brown, F R; Christ, N H

    1988-03-18

    During the past 10 years, particle physicists have increasingly employed numerical simulation to answer fundamental theoretical questions about the properties of quarks and gluons. The enormous computer resources required by quantum chromodynamic calculations have inspired the design and construction of very powerful, highly parallel, dedicated computers optimized for this work. This article gives a brief description of the numerical structure and current status of these large-scale lattice gauge theory calculations, with emphasis on the computational demands they make. The architecture, present state, and potential of these special-purpose supercomputers is described. It is argued that a numerical solution of low energy quantum chromodynamics may well be achieved by these machines.

  9. US Department of Energy High School Student Supercomputing Honors Program: A follow-up assessment

    Energy Technology Data Exchange (ETDEWEB)

    1987-01-01

    The US DOE High School Student Supercomputing Honors Program was designed to recognize high school students with superior skills in mathematics and computer science and to provide them with formal training and experience with advanced computer equipment. This document reports on the participants who attended the first such program, which was held at the National Magnetic Fusion Energy Computer Center at the Lawrence Livermore National Laboratory (LLNL) during August 1985.

  10. Multi-petascale highly efficient parallel supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen -Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O' Brien, John K.; O' Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Smith, Brian; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2015-07-14

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaOPS-scale computing, at decreased cost, power and footprint, that allows for a maximum packaging density of processing nodes from an interconnect point of view. The Supercomputer exploits technological advances in VLSI that enable a computing model where many processors can be integrated into a single Application Specific Integrated Circuit (ASIC). Each ASIC computing node comprises a system-on-chip ASIC utilizing four or more processors integrated into one die, with each having full access to all system resources, enabling adaptive partitioning of the processors to functions such as compute or messaging I/O on an application-by-application basis and, preferably, adaptive partitioning of functions in accordance with various algorithmic phases within an application; if I/O or other processors are underutilized, they can participate in computation or communication. Nodes are interconnected by a five-dimensional torus network with DMA that optimally maximizes the throughput of packet communications between nodes and minimizes latency.

  11. Most Social Scientists Shun Free Use of Supercomputers.

    Science.gov (United States)

    Kiernan, Vincent

    1998-01-01

    Social scientists, who frequently complain that the federal government spends too little on them, are passing up what scholars in the physical and natural sciences see as the government's best give-aways: free access to supercomputers. Some social scientists say the supercomputers are difficult to use; others find desktop computers provide…

  12. Supercomputing - Use Cases, Advances, The Future (2/2)

    CERN Document Server

    CERN. Geneva

    2017-01-01

    Supercomputing has become a staple of science and the poster child for aggressive developments in silicon technology, energy efficiency and programming. In this series we examine the key components of supercomputing setups and the various advances – recent and past – that made headlines and delivered bigger and bigger machines. We also take a closer look at the future prospects of supercomputing, and the extent of its overlap with high throughput computing, in the context of main use cases ranging from oil exploration to market simulation. On the second day, we will focus on software and software paradigms driving supercomputers, workloads that need supercomputing treatment, advances in technology and possible future developments. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and i...

  13. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    Science.gov (United States)

    Klimentov, A.; De, K.; Jha, S.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Wells, J.; Wenaus, T.

    2016-10-01

    The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than the grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCF's multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for ALICE and ATLAS experiments and it is in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the

  14. Will Your Next Supercomputer Come from Costco?

    Energy Technology Data Exchange (ETDEWEB)

    Farber, Rob

    2007-04-15

    A fun topic for April, one that is not an April fool’s joke, is that you can purchase a commodity 200+ Gflop (single-precision) Linux supercomputer for around $600 from your favorite electronic vendor. Yes, it’s true. Just walk in and ask for a Sony Playstation 3 (PS3), take it home and install Linux on it. IBM has provided an excellent tutorial for installing Linux and building applications at http://www-128.ibm.com/developerworks/power/library/pa-linuxps3-1. If you want to raise some eyebrows at work, then submit a purchase request for a Sony PS3 game console and watch the reactions as your paperwork wends its way through the procurement process.

  15. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-03-13

    The SANAM supercomputer was jointly built by KACST and FIAS in 2012, ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of the HPL and STREAM benchmarks.

  16. Multiprocessing on supercomputers for computational aerodynamics

    Science.gov (United States)

    Yarrow, Maurice; Mehta, Unmeel B.

    1991-01-01

    Little use is made of multiple processors available on current supercomputers (computers with a theoretical peak performance capability equal to 100 MFLOPS or more) to improve turnaround time in computational aerodynamics. The productivity of a computer user is directly related to this turnaround time. In a time-sharing environment, such improvement in this speed is achieved when multiple processors are used efficiently to execute an algorithm. The concept of multiple instructions and multiple data (MIMD) is applied through multitasking via a strategy that requires relatively minor modifications to an existing code for a single processor. This approach maps the available memory to multiple processors, exploiting the C-Fortran-Unix interface. The existing code is mapped without the need for developing a new algorithm. The procedure for building a code utilizing this approach is automated with the Unix stream editor.

  17. The PMS project Poor Man's Supercomputer

    CERN Document Server

    Csikor, Ferenc; Hegedüs, P; Horváth, V K; Katz, S D; Piróth, A

    2001-01-01

    We briefly describe the Poor Man's Supercomputer (PMS) project that is carried out at Eotvos University, Budapest. The goal is to develop a cost-effective, scalable, fast parallel computer to perform numerical calculations of physical problems that can be implemented on a lattice with nearest-neighbour interactions. To reach this goal we developed the PMS architecture using PC components and designed special, low-cost communication hardware and the driver software for Linux OS. Our first implementation of the PMS includes 32 nodes (PMS1). The performance of the PMS1 was tested by Lattice Gauge Theory simulations. Using SU(3) pure gauge theory or the bosonic MSSM on the PMS1 computer we obtained a $3/Mflops price-per-sustained-performance ratio. The design of the special hardware and the communication driver are freely available upon request for non-profit organizations.

  18. The BlueGene/L Supercomputer

    CERN Document Server

    Bhanot, G V; Gara, A; Vranas, P M; Bhanot, Gyan; Chen, Dong; Gara, Alan; Vranas, Pavlos

    2002-01-01

    The architecture of the BlueGene/L massively parallel supercomputer is described. Each computing node consists of a single compute ASIC plus 256 MB of external memory. The compute ASIC integrates two 700 MHz PowerPC 440 integer CPU cores, two 2.8 Gflops floating point units, 4 MB of embedded DRAM as cache, a memory controller for external memory, six 1.4 Gbit/s bi-directional ports for a 3-dimensional torus network connection, three 2.8 Gbit/s bi-directional ports for connecting to a global tree network and a Gigabit Ethernet for I/O. 65,536 of such nodes are connected into a 3-d torus with a geometry of 32x32x64. The total peak performance of the system is 360 Teraflops and the total amount of memory is 16 TeraBytes.
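
    The quoted aggregate figures follow directly from the per-node numbers in the abstract; the quick check below reproduces them (the product of the per-node rates comes to about 367 Tflops, consistent with the roughly 360 Teraflops quoted).

        # Aggregate peak performance and memory from the per-node figures in the abstract.
        nodes = 32 * 32 * 64                    # 3-d torus geometry -> 65,536 nodes
        gflops_per_node = 2 * 2.8               # two floating point units at 2.8 Gflops each
        mem_per_node_gb = 256 / 1024            # 256 MB external memory per node, in GB

        print(f"nodes:  {nodes}")
        print(f"peak:   {nodes * gflops_per_node / 1000:.0f} Tflops")   # ~367, quoted as ~360 Tflops
        print(f"memory: {nodes * mem_per_node_gb / 1024:.0f} TB")       # 16 TB, as quoted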

  19. Making lemonade from lemons: a case study on loss of space at the Dolph Briscoe, Jr. Library, University of Texas Health Science Center at San Antonio.

    Science.gov (United States)

    Tobia, Rajia C; Feldman, Jonquil D

    2010-01-01

    The setting for this case study is the Dolph Briscoe, Jr. Library, University of Texas Health Science Center at San Antonio, a health sciences campus with medical, dental, nursing, health professions, and graduate schools. During 2008-2009, major renovations to the library building were completed including office space for a faculty development department, multipurpose classrooms, a 24/7 study area, study rooms, library staff office space, and an information commons. The impetus for changes to the library building was the decreasing need to house collections in an increasingly electronic environment, the need for office space for other departments, and growth of the student body. About 40% of the library building was remodeled or repurposed, with a loss of approximately 25% of the library's original space. Campus administration proposed changes to the library building, and librarians worked with administration, architects, and construction managers to seek renovation solutions that meshed with the library's educational mission.

  20. The TianHe-1A Supercomputer: Its Hardware and Software

    Institute of Scientific and Technical Information of China (English)

    Xue-Jun Yang; Xiang-Ke Liao; Kai Lu; Qing-Feng Hu; Jun-Qiang Song; Jin-Shu Su

    2011-01-01

    This paper presents an overview of the TianHe-1A (TH-1A) supercomputer, which was built by the National University of Defense Technology of China (NUDT). TH-1A adopts a hybrid architecture by integrating CPUs and GPUs, and its interconnect network is a proprietary high-speed communication network. The theoretical peak performance of TH-1A is 4700 TFlops, and its LINPACK test result is 2566 TFlops. It was ranked No. 1 on the TOP500 List released in November 2010. TH-1A is now deployed in the National Supercomputer Center in Tianjin and provides high performance computing services. TH-1A has played an important role in many applications, such as oil exploration, weather forecasting, and bio-medical research.

  1. Air Force Personalized Medicine Program Panel: Representative Research at the 59th Medical Wing San Antonio Military Medical Center

    Science.gov (United States)

    2016-05-18

    Scanned briefing slides on the 59 MDW personalized medicine research portfolio, covering genomics-based risk characterization to reduce time to detection of human sepsis, development of a biomarker-based algorithm to predict therapeutic clinical response, BAMC Department of Clinical Investigations laboratory capabilities, and the 59 MDW Center for Molecular Detection; no machine-readable abstract is available.

  2. World's biggest 'virtual supercomputer' given the go-ahead

    CERN Multimedia

    2003-01-01

    "The Particle Physics and Astronomy Research Council has today announced GBP 16 million to create a massive computing Grid, equivalent to the world's second largest supercomputer after Japan's Earth Simulator computer" (1 page).

  3. Accelerating Science Impact through Big Data Workflow Management and Supercomputing

    Science.gov (United States)

    De, K.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Ryabinkin, E.; Wenaus, T.

    2016-02-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. ATLAS, one of the largest collaborations ever assembled in the history of science, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. To manage the workflow for all data processing on hundreds of data centers the PanDA (Production and Distributed Analysis) Workload Management System is used. An ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF), is being realized within the BigPanDA and megaPanDA projects. These projects are now exploring how PanDA might be used for managing computing jobs that run on supercomputers including OLCF's Titan and NRC-KI HPC2. The main idea is to reuse, as much as possible, existing components of the PanDA system that are already deployed on the LHC Grid for analysis of physics data. The next generation of PanDA will allow many data-intensive sciences employing a variety of computing platforms to benefit from ATLAS experience and proven tools in highly scalable processing.

  4. Developing and Deploying Advanced Algorithms to Novel Supercomputing Hardware

    CERN Document Server

    Brunner, Robert J; Myers, Adam D

    2007-01-01

    The objective of our research is to demonstrate the practical usage and orders-of-magnitude speedup of real-world applications by using alternative technologies to support high performance computing. Currently, the main barrier to the widespread adoption of this technology is the lack of development tools and case studies, which typically impedes non-specialists who might otherwise develop applications that could leverage these technologies. By partnering with the Innovative Systems Laboratory at the National Center for Supercomputing Applications, we have obtained access to several novel technologies, including several Field-Programmable Gate Array (FPGA) systems, NVidia Graphics Processing Units (GPUs), and the STI Cell BE platform. Our goal is to not only demonstrate the capabilities of these systems, but to also serve as guides for others to follow in our path. To date, we have explored the efficacy of the SRC-6 MAP-C and MAP-E and SGI RASC Athena and RC100 reconfigurable computing platforms in supporting a two-point co...

  5. Developing Fortran Code for Kriging on the Stampede Supercomputer

    Science.gov (United States)

    Hodgess, Erin

    2016-04-01

    Kriging is easily accessible in the open-source statistical language R (R Core Team, 2015) in the gstat package (Pebesma, 2004). It works very well, but can be slow on large data sets, particularly if the prediction space is large as well. We are working on the Stampede supercomputer at the Texas Advanced Computing Center to develop code using a combination of R and Message Passing Interface (MPI) bindings to Fortran. We have a function similar to autofitVariogram found in the automap package (Hiemstra et al., 2008) and it is very effective. We are comparing R with MPI/Fortran, MPI/Fortran alone, and R with the Rmpi package, which uses bindings to C. We will present results from simulation studies and real-world examples. References: Hiemstra, P.H., Pebesma, E.J., Twenhofel, C.J.W. and G.B.M. Heuvelink, 2008. Real-time automatic interpolation of ambient gamma dose rates from the Dutch Radioactivity Monitoring Network. Computers and Geosciences, accepted for publication. Pebesma, E.J., 2004. Multivariable geostatistics in S: the gstat package. Computers and Geosciences, 30: 683-691. R Core Team, 2015. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/.
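
    For readers unfamiliar with what is being parallelized, the core of kriging is the solution of a small linear system per prediction point, built from a fitted variogram. The sketch below is a generic ordinary-kriging illustration in Python with an exponential variogram; it is not the authors' R/MPI/Fortran code, and the data points and variogram parameters are made up.

        # Generic ordinary kriging at one prediction point (illustrative, not the authors' code).
        import numpy as np

        def exp_variogram(h, nugget, sill, rang):
            # Exponential variogram model gamma(h) with practical range `rang`.
            return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / rang))

        def ordinary_kriging(xy, z, xy0, nugget=0.0, sill=1.0, rang=500.0):
            n = len(z)
            d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)   # pairwise distances
            A = np.ones((n + 1, n + 1))
            A[:n, :n] = exp_variogram(d, nugget, sill, rang)
            A[n, n] = 0.0                                                  # Lagrange-multiplier row/column
            b = np.append(exp_variogram(np.linalg.norm(xy - xy0, axis=1), nugget, sill, rang), 1.0)
            w = np.linalg.solve(A, b)                                      # kriging weights (+ multiplier)
            return float(w[:n] @ z)

        pts = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
        vals = np.array([1.2, 2.3, 1.8, 2.9])
        print(ordinary_kriging(pts, vals, np.array([50.0, 50.0])))         # equal weights -> 2.05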

  6. Accelerating Science Impact through Big Data Workflow Management and Supercomputing

    Directory of Open Access Journals (Sweden)

    De K.

    2016-01-01

    Full Text Available The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. ATLAS, one of the largest collaborations ever assembled in the history of science, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. To manage the workflow for all data processing on hundreds of data centers the PanDA (Production and Distributed Analysis) Workload Management System is used. An ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF), is being realized within the BigPanDA and megaPanDA projects. These projects are now exploring how PanDA might be used for managing computing jobs that run on supercomputers including OLCF's Titan and NRC-KI HPC2. The main idea is to reuse, as much as possible, existing components of the PanDA system that are already deployed on the LHC Grid for analysis of physics data. The next generation of PanDA will allow many data-intensive sciences employing a variety of computing platforms to benefit from ATLAS experience and proven tools in highly scalable processing.

  7. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063, Storage Intensive Supercomputing, during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool, iotrace, developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of a software-only implementation to a GPU-accelerated one. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows
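
    Level-set expansion, the kernel behind the graph benchmark listed as item (2), is a breadth-first traversal that grows the set of reachable vertices one hop per level. A minimal in-memory version is sketched below; the benchmark itself streams an out-of-core graph from disk, and the adjacency list here is only a toy stand-in.

        # Level-set expansion: grow the vertex set outward from a seed set, one hop per level.
        def level_set_expansion(adj, seeds, max_levels):
            visited = set(seeds)
            frontier = set(seeds)
            for _ in range(max_levels):
                nxt = {w for v in frontier for w in adj.get(v, ()) if w not in visited}
                if not nxt:
                    break
                visited |= nxt
                frontier = nxt
            return visited

        toy_graph = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}
        print(level_set_expansion(toy_graph, seeds={0}, max_levels=2))   # {0, 1, 2, 3, 4}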

  8. Taking ASCI supercomputing to the end game.

    Energy Technology Data Exchange (ETDEWEB)

    DeBenedictis, Erik P.

    2004-03-01

    The ASCI supercomputing program is broadly defined as running physics simulations on progressively more powerful digital computers. What happens if we extrapolate the computer technology to its end? We have developed a model for key ASCI computations running on a hypothetical computer whose technology is parameterized in ways that account for advancing technology. This model includes technology information such as Moore's Law for transistor scaling and developments in cooling technology. The model also includes limits imposed by laws of physics, such as thermodynamic limits on power dissipation, limits on cooling, and the limitation of signal propagation velocity to the speed of light. We apply this model and show that ASCI computations will advance smoothly for another 10-20 years to an 'end game' defined by thermodynamic limits and the speed of light. Performance levels at the end game will vary greatly by specific problem, but will be in the Exaflops to Zetaflops range for currently anticipated problems. We have also found an architecture that would be within a constant factor of giving optimal performance at the end game. This architecture is an evolutionary derivative of the mesh-connected microprocessor (such as ASCI Red Storm or IBM Blue Gene/L). We provide designs for the necessary enhancement to microprocessor functionality and the power-efficiency of both the processor and memory system. The technology we develop in the foregoing provides a 'perfect' computer model with which we can rate the quality of realizable computer designs, both in this writing and as a way of designing future computers. This report focuses on classical computers based on irreversible digital logic, and more specifically on algorithms that simulate space computing, irreversible logic, analog computers, and other ways to address stockpile stewardship that are outside the scope of this report.

  9. Simulating functional magnetic materials on supercomputers.

    Science.gov (United States)

    Gruner, Markus Ernst; Entel, Peter

    2009-07-22

    The recent passing of the petaflop per second landmark by the Roadrunner project at the Los Alamos National Laboratory marks a preliminary peak of an impressive world-wide development in the high-performance scientific computing sector. Also, purely academic state-of-the-art supercomputers such as the IBM Blue Gene/P at Forschungszentrum Jülich allow us nowadays to investigate large systems of the order of 10^3 spin polarized transition metal atoms by means of density functional theory. Three applications will be presented where large-scale ab initio calculations contribute to the understanding of key properties emerging from a close interrelation between structure and magnetism. The first two examples discuss the size dependent evolution of equilibrium structural motifs in elementary iron and binary Fe-Pt and Co-Pt transition metal nanoparticles, which are currently discussed as promising candidates for ultra-high-density magnetic data storage media. However, the preference for multiply twinned morphologies at smaller cluster sizes counteracts the formation of a single-crystalline L1_0 phase, which alone provides the required hard magnetic properties. The third application is concerned with the magnetic shape memory effect in the Ni-Mn-Ga Heusler alloy, which is a technologically relevant candidate for magnetomechanical actuators and sensors. In this material strains of up to 10% can be induced by external magnetic fields due to the field induced shifting of martensitic twin boundaries, requiring an extremely high mobility of the martensitic twin boundaries, but also the selection of the appropriate martensitic structure from the rich phase diagram.

  10. Seismic Sensors to Supercomputers: Internet Mapping and Computational Tools for Teaching and Learning about Earthquakes and the Structure of the Earth from Seismology

    Science.gov (United States)

    Meertens, C. M.; Seber, D.; Hamburger, M.

    2004-12-01

    The Internet has become an integral resource in the classrooms and homes of teachers and students. Widespread Web-access to seismic data and analysis tools enhances opportunities for teaching and learning about earthquakes and the structure of the earth from seismic tomography. We will present an overview and demonstration of the UNAVCO Voyager Java- and Javascript-based mapping tools (jules.unavco.org) and the Cornell University/San Diego Supercomputer Center (www.discoverourearth.org) Java-based data analysis and mapping tools. These map tools, datasets, and related educational websites have been developed and tested by collaborative teams of scientific programmers, research scientists, and educators. Dual-use by research and education communities ensures persistence of the tools and data, motivates on-going development, and encourages fresh content. With these tools are curricular materials and on-going evaluation processes that are essential for an effective application in the classroom. The map tools provide not only seismological data and tomographic models of the earth's interior, but also a wealth of associated map data such as topography, gravity, sea-floor age, plate tectonic motions and strain rates determined from GPS geodesy, seismic hazard maps, stress, and a host of geographical data. These additional datasets help to provide context and enable comparisons leading to an integrated view of the planet and the on-going processes that shape it. Emerging Cyberinfrastructure projects such as the NSF-funded GEON Information Technology Research project (www.geongrid.org) are developing grid/web services, advanced visualization software, distributed databases and data sharing methods, concept-based search mechanisms, and grid-computing resources for earth science and education. These developments in infrastructure seek to extend the access to data and to complex modeling tools from the hands of a few researchers to a much broader set of users. The GEON

  11. Defining competencies for education in health care value: recommendations from the University of California, San Francisco Center for Healthcare Value Training Initiative.

    Science.gov (United States)

    Moriates, Christopher; Dohan, Daniel; Spetz, Joanne; Sawaya, George F

    2015-04-01

    Leaders in medical education have increasingly called for the incorporation of cost awareness and health care value into health professions curricula. Emerging efforts have thus far focused on physicians, but foundational competencies need to be defined related to health care value that span all health professions and stages of training. The University of California, San Francisco (UCSF) Center for Healthcare Value launched an initiative in 2012 that engaged a group of educators from all four health professions schools at UCSF: Dentistry, Medicine, Nursing, and Pharmacy. This group created and agreed on a multidisciplinary set of comprehensive competencies related to health care value. The term "competency" was used to describe components within the larger domain of providing high-value care. The group then classified the competencies as beginner, proficient, or expert level through an iterative process and group consensus. The group articulated 21 competencies. The beginner competencies include basic principles of health policy, health care delivery, health costs, and insurance. Proficient competencies include real-world applications of concepts to clinical situations, primarily related to the care of individual patients. The expert competencies focus primarily on systems-level design, advocacy, mentorship, and policy. These competencies aim to identify a standard that may help inform the development of curricula across health professions training. These competencies could be translated into the learning objectives and evaluation methods of resources to teach health care value, and they should be considered in educational settings for health care professionals at all levels of training and across a variety of specialties.

  12. The KhoeSan Early Learning Center Pilot Project: Negotiating Power and Possibility in a South African Institute of Higher Learning

    Science.gov (United States)

    De Wet, Priscilla

    2011-01-01

    As we search for a new paradigm in post-apartheid South Africa, the knowledge base and worldview of the KhoeSan first Indigenous peoples is largely missing. The South African government has established various mechanisms as agents for social change. Institutions of higher learning have implemented transformation programs. KhoeSan peoples, however,…

  13. Supercomputing - Use Cases, Advances, The Future (1/2)

    CERN Document Server

    CERN. Geneva

    2017-01-01

    Supercomputing has become a staple of science and the poster child for aggressive developments in silicon technology, energy efficiency and programming. In this series we examine the key components of supercomputing setups and the various advances – recent and past – that made headlines and delivered bigger and bigger machines. We also take a closer look at the future prospects of supercomputing, and the extent of its overlap with high throughput computing, in the context of main use cases ranging from oil exploration to market simulation. On the first day, we will focus on the history and theory of supercomputing, the top500 list and the hardware that makes supercomputers tick. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and initiated projects with the private sector (e.g. HP an...

  14. San Francisco District Laboratory (SAN)

    Data.gov (United States)

    Federal Laboratory Consortium — Program Capabilities Food Analysis SAN-DO Laboratory has an expert in elemental analysis who frequently performs field inspections of materials. A recently acquired...

  15. Using Supercomputers to Probe the Early Universe

    Energy Technology Data Exchange (ETDEWEB)

    Giorgi, Elena Edi [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-17

    For decades physicists have been trying to decipher the first moments after the Big Bang. Using very large telescopes, for example, scientists scan the skies and look at how fast galaxies move. Satellites study the relic radiation left from the Big Bang, called the cosmic microwave background radiation. And finally, particle colliders, like the Large Hadron Collider at CERN, allow researchers to smash protons together and analyze the debris left behind by such collisions. Physicists at Los Alamos National Laboratory, however, are taking a different approach: they are using computers. In collaboration with colleagues at University of California San Diego, the Los Alamos researchers developed a computer code, called BURST, that can simulate conditions during the first few minutes of cosmological evolution.

  16. An integrated distributed processing interface for supercomputers and workstations

    Energy Technology Data Exchange (ETDEWEB)

    Campbell, J.; McGavran, L.

    1989-01-01

    Access to documentation, communication between multiple processes running on heterogeneous computers, and animation of simulations of engineering problems are typically weak in most supercomputer environments. This presentation will describe how we are improving this situation in the Computer Research and Applications group at Los Alamos National Laboratory. We have developed a tool using UNIX filters and a SunView interface that allows users simple access to documentation via mouse driven menus. We have also developed a distributed application that integrated a two point boundary value problem on one of our Cray Supercomputers. It is controlled and displayed graphically by a window interface running on a workstation screen. Our motivation for this research has been to improve the usual typewriter/static interface using language independent controls to show capabilities of the workstation/supercomputer combination. 8 refs.

  17. Prenatal care according to the NOM-007 norm, which relates to maternal morbidity in a health center in San Luis Potosí (2008)

    Directory of Open Access Journals (Sweden)

    Lucila P. Acosta R

    2012-02-01

    Mother and child mortality reflects the level of social and economic development of a country; therefore, reproductive health is a sanitary priority. Mortality prevention depends directly on the coverage and quality of health services. Objective: to assess the compliance of prenatal care with the NOM 007 norm and its correlation with maternal morbidity in a health center located in San Luis Potosí, Mexico. Methodology: a descriptive, correlational, and quantitative study in which the units of analysis were the medical records of 571 pregnant women cared for during 2008. In order to test the hypothesis, Pearson's r was used. The p value was ≤ 0.05. Results: ages ranged from 13 to 43 years. Additionally, 37.1% of the patients were teenagers and 44.3% began receiving attention during the second trimester of their pregnancy; 38.2% attended at least five medical appointments, and 46.4% had morbidity. For the latter group, urinary infection was the most common condition (224 cases). Prenatal attention was adequate in 2.6% of the cases according to the actions performed. Health promotion actions were the least frequent. Conclusion: the level of compliance with the NOM 007 norm for prenatal care was considered inadequate in 97.4% of the cases and was consistent with maternal morbidity (87.5-100%). This could be related to more frequent appointments for some women and with late treatment, which resulted in less time to perform said actions. Contrary to expectations, greater compliance meant higher maternal morbidity (r = 0.318, p < 0.000).

  18. Supercomputing:HPCMP, Performance Measures and Opportunities

    Science.gov (United States)

    2007-11-02

    [Flattened table excerpt from the original document.] It lists HPCMP systems and processor counts at dedicated and shared resource centers, including the Redstone Technical Test Center (RTTC) with an SGI Origin 3900 (24 PEs), the Simulations & Analysis Facility (SIMAF) with a Beowulf Linux cluster, and Major Shared Resource Center systems at the Army Research Laboratory (ARL) such as IBM P3, SGI Origin 3800, IBM P4, Linux Networx, Xeon, IBM Opteron, and SGI Altix clusters, with system sizes ranging from 24 to 2,372 PEs.

  19. Recent results from the Swinburne supercomputer software correlator

    Science.gov (United States)

    Tingay, Steven; et al.

    I will describe the development of software correlators on the Swinburne Beowulf supercomputer and recent work using the Cray XD-1 machine. I will also describe recent Australian and global VLBI experiments that have been processed on the Swinburne software correlator, along with imaging results from these data. The role of the software correlator in Australia's eVLBI project will be discussed.
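
    For readers unfamiliar with software correlation, the core of an FX-style correlator is simply channelization by FFT followed by cross-multiplication and accumulation; the following toy sketch (not the Swinburne code) recovers a small integer delay between two synthetic antenna streams:

        import numpy as np

        def fx_correlate(x, y, nchan=256):
            """Toy FX correlator: channelize two voltage streams with an FFT,
            cross-multiply, and accumulate to form a cross-spectrum."""
            nseg = min(len(x), len(y)) // nchan
            acc = np.zeros(nchan, dtype=complex)
            for i in range(nseg):
                X = np.fft.fft(x[i*nchan:(i+1)*nchan])
                Y = np.fft.fft(y[i*nchan:(i+1)*nchan])
                acc += np.conj(X) * Y
            return acc / nseg

        rng = np.random.default_rng(0)
        common = rng.normal(size=1 << 16)                 # signal seen by both "antennas"
        x = common + 0.5 * rng.normal(size=common.size)
        y = np.roll(common, 3) + 0.5 * rng.normal(size=common.size)   # 3-sample delay
        vis = fx_correlate(x, y)
        lag = np.argmax(np.abs(np.fft.ifft(vis)))          # peak of the lag spectrum
        print("recovered delay (samples):", lag)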

  20. Access to Supercomputers. Higher Education Panel Report 69.

    Science.gov (United States)

    Holmstrom, Engin Inel

    This survey was conducted to provide the National Science Foundation with baseline information on current computer use in the nation's major research universities, including the actual and potential use of supercomputers. Questionnaires were sent to 207 doctorate-granting institutions; after follow-ups, 167 institutions (91% of the institutions…

  1. The Sky's the Limit When Super Students Meet Supercomputers.

    Science.gov (United States)

    Trotter, Andrew

    1991-01-01

    In a few select high schools in the U.S., supercomputers are allowing talented students to attempt sophisticated research projects using simultaneous simulations of nature, culture, and technology not achievable by ordinary microcomputers. Schools can get their students online by entering contests and seeking grants and partnerships with…

  2. San Francisco District Laboratory (SAN)

    Data.gov (United States)

    Federal Laboratory Consortium — Program Capabilities Food Analysis SAN-DO Laboratory has an expert in elemental analysis who frequently performs field inspections of materials. A recently acquired...

  3. Scalable parallel programming for high performance seismic simulation on petascale heterogeneous supercomputers

    Science.gov (United States)

    Zhou, Jun

    -ODC code is ready now for real world petascale earthquake simulations. This GPU-based code has demonstrated excellent weak scaling up to the full Titan scale and achieved 2.3 PetaFLOPs sustained computation performance in single precision. The production simulation demonstrated the first 0-10 Hz deterministic rough fault simulation. Using the accelerated AWP-ODC, the Southern California Earthquake Center (SCEC) has recently created the physics-based probabilistic seismic hazard analysis model of the Los Angeles region, CyberShake 14.2, as of the time of the dissertation writing. The tensor-valued wavefield code based on this GPU research has dramatically reduced time-to-solution, making a statewide hazard model a goal reachable with existing heterogeneous supercomputers.

  4. Cyberdyn supercomputer - a tool for imaging geodynamic processes

    Science.gov (United States)

    Pomeran, Mihai; Manea, Vlad; Besutiu, Lucian; Zlagnean, Luminita

    2014-05-01

    More and more physical processes that develop within the deep interior of our planet, but have significant impact on the Earth's shape and structure, are becoming subject to numerical modelling using high performance computing facilities. Nowadays, an increasing number of research centers worldwide decide to make use of such powerful and fast computers for simulating complex phenomena involving fluid dynamics and to get deeper insight into intricate problems of Earth's evolution. With the CYBERDYN cybernetic infrastructure (CCI), the Solid Earth Dynamics Department in the Institute of Geodynamics of the Romanian Academy boldly steps into the 21st century by entering the research area of computational geodynamics. The project that made this advancement possible has been jointly supported by the EU and the Romanian Government through the Structural and Cohesion Funds. It lasted for about three years, ending in October 2013. CCI is basically a modern high performance Beowulf-type supercomputer (HPCC), combined with a high performance visualization cluster (HPVC) and a GeoWall. The infrastructure is mainly structured around 1344 cores and 3 TB of RAM. The high speed interconnect is provided by a Qlogic InfiniBand switch, able to transfer up to 40 Gbps. The CCI storage component is a 40 TB Panasas NAS. The operating system is Linux (CentOS). For control and maintenance, the Bright Cluster Manager package is used. The SGE job scheduler manages the job queues. CCI has been designed for a theoretical peak performance of up to 11.2 TFlops. Speed tests showed that a high resolution numerical model (256 × 256 × 128 FEM elements) could be resolved at a mean computational speed of one time step per 30 seconds, by employing only a fraction of the computing power (20%). After passing the mandatory tests, the CCI has been involved in numerical modelling of various scenarios related to the tectonic and geodynamic evolution of the East Carpathians, including the Neogene magmatic activity, and the intriguing

  5. Frequently updated noise threat maps created with use of supercomputing grid

    Directory of Open Access Journals (Sweden)

    Szczodrak Maciej

    2014-09-01

    Innovative supercomputing grid services devoted to noise threat evaluation are presented. The services described in this paper concern two issues: the first is related to noise mapping, while the second focuses on assessment of the noise dose and its influence on the human hearing system. The discussed services were developed within the PL-Grid Plus Infrastructure, which brings together Polish academic supercomputer centers. Selected experimental results achieved by the usage of the proposed services are presented. The assessment of environmental noise threats includes creation of noise maps using either offline or online data, acquired through a grid of monitoring stations. A concept of estimation of the source model parameters based on the measured sound level, for the purpose of creating frequently updated noise maps, is presented. Connecting the noise mapping grid service with a distributed sensor network enables noise maps to be updated automatically for a specified time period. Moreover, a unique attribute of the developed software is the estimation of the auditory effects evoked by exposure to noise. The estimation method uses a modified psychoacoustic model of hearing and is based on the calculated noise level values and on the given exposure period. Potential use scenarios of the grid services for research or educational purposes are introduced. Presentation of the results of predicted hearing threshold shift caused by exposure to excessive noise can raise the public awareness of noise threats.
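
    The auditory-effects estimation relies on a psychoacoustic hearing model that is not reproduced here; a much simpler quantity that noise-mapping services commonly report is the equivalent continuous sound level L_eq over a measurement period, computed from short-term levels as in this generic sketch (illustrative numbers, not the PL-Grid implementation):

        import numpy as np

        def leq(spl_db):
            """Equivalent continuous sound level (dB) from a series of short-term
            sound pressure levels measured over equal intervals (energy average)."""
            spl_db = np.asarray(spl_db, dtype=float)
            return 10.0 * np.log10(np.mean(10.0 ** (spl_db / 10.0)))

        # One hour of per-minute levels: quiet background plus a noisy half hour.
        levels = [55.0] * 30 + [78.0] * 30
        print(f"L_eq over the hour: {leq(levels):.1f} dB")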

  6. A student operated, faculty mentored dental clinic service experience at the University of Texas Health Science Center at San Antonio for the underserved refugee community: an interprofessional approach.

    Science.gov (United States)

    Farokhi, Moshtagh R; Glass, Birgit Junfin; Gureckis, Kevin M

    2014-01-01

    As the number of refugees settling in San Antonio increases, so will their health care needs. Due to limited resources and stress, they suffer from acute and chronic diseases, reducing their potential for success in their new host country. The need for proper health education coupled with a stable holistic health care facility is essential for their future success. In 2009, nursing students began serving the San Antonio refugee population. By 2011, dental and medical students joined to create the student-run San Antonio Refugee Health Clinic (SARHC). SARHC serves the refugees by providing free health care/education while connecting them to San Antonio's primary health care system. Select dental, medical, and nursing students under the mentorship of their faculty operate the SARHC clinic. The students work in collaborative teams where select members of the refugee community and bilingual students provide translational assistance. The nursing students take vital signs and medical students perform physical exams after gathering a history of present illness. Dental students provide oral health/nutritional education and screenings inclusive of head and neck examination and oral cancer risk assessment. Thirty-two dental, 83 medical, and 118 nursing students rotated through the clinic last year, serving patients with the most common chief complaints of dental, musculoskeletal, dermatological, and gastrointestinal nature. The most common dental findings for this population have been dental caries, periodontal disease, and other dental diseases requiring urgent care. Sub-programs such as the student interpreter program, ladies' health education, and the Refugee Accompaniment Health Partnership have resulted from the SARHC initiative to meet the refugees' needs. Currently under development is a future collaboration with local San Antonio clinics such as the San Antonio Christian Dental Clinic to serve as their dental home. The use of this interprofessional model has resulted

  7. Applications of parallel supercomputers: Scientific results and computer science lessons

    Energy Technology Data Exchange (ETDEWEB)

    Fox, G.C.

    1989-07-12

    Parallel computing has come of age with several commercial and in-house systems that deliver supercomputer performance. We illustrate this with several major computations completed or underway at Caltech on hypercubes, transputer arrays and the SIMD Connection Machine CM-2 and AMT DAP. Applications covered are lattice gauge theory, computational fluid dynamics, subatomic string dynamics, statistical and condensed matter physics, theoretical and experimental astronomy, quantum chemistry, plasma physics, grain dynamics, computer chess, graphics ray tracing, and Kalman filters. We use these applications to compare the performance of several advanced architecture computers including the conventional CRAY and ETA-10 supercomputers. We describe which problems are suitable for which computers in terms of a matching between problem and computer architecture. This is part of a set of lessons we draw for hardware, software, and performance. We speculate on the emergence of new academic disciplines motivated by the growing importance of computers. 138 refs., 23 figs., 10 tabs.

  8. Extending ATLAS Computing to Commercial Clouds and Supercomputers

    CERN Document Server

    Nilsson, P; The ATLAS collaboration; Filipcic, A; Klimentov, A; Maeno, T; Oleynik, D; Panitkin, S; Wenaus, T; Wu, W

    2014-01-01

    The Large Hadron Collider will resume data collection in 2015 with substantially increased computing requirements relative to its first 2009-2013 run. A near doubling of the energy and the data rate, a high level of event pile-up, and detector upgrades will mean the number and complexity of events to be analyzed will increase dramatically. A naive extrapolation of the Run 1 experience would suggest that a 5-6 fold increase in computing resources is needed - impossible within the anticipated flat computing budgets in the near future. Consequently ATLAS is engaged in an ambitious program to expand its computing to all available resources, notably including opportunistic use of commercial clouds and supercomputers. Such resources present new challenges in managing heterogeneity, supporting data flows, parallelizing workflows, provisioning software, and other aspects of distributed computing, all while minimizing operational load. We will present the ATLAS experience to date with clouds and supercomputers, and des...

  9. Integration of Titan supercomputer at OLCF with ATLAS production system

    CERN Document Server

    Panitkin, Sergey; The ATLAS collaboration

    2016-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of Petabytes of data, and the rate of data processing already exceeds an Exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this talk we will describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for job...

  10. Supercomputers ready for use as discovery machines for neuroscience

    OpenAIRE

    Kunkel, Susanne; Schmidt, Maximilian; Helias, Moritz; Eppler, Jochen Martin; Igarashi, Jun; Masumoto, Gen; Fukai, Tomoki; Ishii, Shin; Plesser, Hans Ekkehard; Morrison, Abigail; Diesmann, Markus

    2013-01-01

    NEST is a widely used tool to simulate biological spiking neural networks [1]. The simulator is subject to continuous development, which is driven by the requirements of the current neuroscientific questions. At present, a major part of the software development focuses on the improvement of the simulator's fundamental data structures in order to enable brain-scale simulations on supercomputers such as the Blue Gene system in Jülich and the K computer in Kobe. Based on our memory-u...

  11. High Performance Networks From Supercomputing to Cloud Computing

    CERN Document Server

    Abts, Dennis

    2011-01-01

    Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network enables distributed applicati

  12. San Marino.

    Science.gov (United States)

    1985-02-01

    San Marino, an independent republic located in north central Italy, in 1983 had a population of 22,206 growing at an annual rate of 0.9%. The literacy rate is 97% and the infant mortality rate is 9.6/1000. The terrain is mountainous and the climate is moderate. According to local tradition, San Marino was founded by a Christian stonecutter in the 4th century A.D. as a refuge against religious persecution. Its recorded history began in the 9th century, and it has survived assaults on its independence by the papacy, the Malatesta lords of Rimini, Cesare Borgia, Napoleon, and Mussolini. An 1862 treaty with the newly formed Kingdom of Italy has been periodically renewed and amended. The present government is an alliance between the socialists and communists. San Marino has had its own statutes and governmental institutions since the 11th century. Legislative authority at present is vested in a 60-member unicameral parliament. Executive authority is exercised by the 11-member Congress of State, the members of which head the various administrative departments of the government. The posts are divided among the parties which form the coalition government. Judicial authority is partly exercised by Italian magistrates in civil and criminal cases. San Marino's policies are tied to Italy's, and political organizations and labor unions active in Italy are also active in San Marino. Since World War II, there has been intense rivalry between two political coalitions: the Popular Alliance, composed of the Christian Democratic Party and the Independent Social Democratic Party, and the Liberty Committee, a coalition of the Communist Party and the Socialist Party. San Marino's gross domestic product was $137 million and its per capita income was $6290 in 1980. The principal economic activities are farming and livestock raising, along with some light manufacturing. Foreign transactions are dominated by tourism. The government derives most of its revenue from the sale of postage stamps to

  13. From Thread to Transcontinental Computer: Disturbing Lessons in Distributed Supercomputing

    CERN Document Server

    Groen, Derek

    2015-01-01

    We describe the political and technical complications encountered during the astronomical CosmoGrid project. CosmoGrid is a numerical study on the formation of large scale structure in the universe. The simulations are challenging due to the enormous dynamic range in spatial and temporal coordinates, as well as the enormous computer resources required. In CosmoGrid we dealt with the computational requirements by connecting up to four supercomputers via an optical network and making them operate as a single machine. This was challenging, if only for the fact that the supercomputers of our choice are separated by half the planet: three of them are scattered across Europe and the fourth is in Tokyo. The co-scheduling of multiple computers and the 'gridification' of the code enabled us to achieve an efficiency of up to 93% for this distributed intercontinental supercomputer. In this work, we find that high-performance computing on a grid can be done much more effectively if the sites involved are will...

  14. Proceedings of the first energy research power supercomputer users symposium

    Energy Technology Data Exchange (ETDEWEB)

    1991-01-01

    The Energy Research Power Supercomputer Users Symposium was arranged to showcase the richness of science that has been pursued and accomplished in this program through the use of supercomputers and now high performance parallel computers over the last year: this report is the collection of the presentations given at the Symposium. "Power users" were invited by the ER Supercomputer Access Committee to show that the use of these computational tools and the associated data communications network, ESNet, goes beyond merely speeding up computations. Today the work often directly contributes to the advancement of the conceptual developments in their fields and the computational and network resources form the very infrastructure of today's science. The Symposium also provided an opportunity, which is rare in this day of network access to computing resources, for the invited users to compare and discuss their techniques and approaches with those used in other ER disciplines. The significance of new parallel architectures was highlighted by the interesting evening talk given by Dr. Stephen Orszag of Princeton University.

  15. Porting Ordinary Applications to Blue Gene/Q Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Maheshwari, Ketan C.; Wozniak, Justin M.; Armstrong, Timothy; Katz, Daniel S.; Binkowski, T. Andrew; Zhong, Xiaoliang; Heinonen, Olle; Karpeyev, Dmitry; Wilde, Michael

    2015-08-31

    Efficiently porting ordinary applications to Blue Gene/Q supercomputers is a significant challenge. Codes are often originally developed without considering advanced architectures and related tool chains. Science needs frequently lead users to want to run large numbers of relatively small jobs (often called many-task computing, an ensemble, or a workflow), which can conflict with supercomputer configurations. In this paper, we discuss techniques developed to execute ordinary applications over leadership class supercomputers. We use the high-performance Swift parallel scripting framework and build two workflow execution techniques: sub-jobs and main-wrap. The sub-jobs technique, built on top of the IBM Blue Gene/Q resource manager Cobalt's sub-block jobs, lets users submit multiple, independent, repeated smaller jobs within a single larger resource block. The main-wrap technique is a scheme that enables C/C++ programs to be defined as functions that are wrapped by a high-performance Swift wrapper and that are invoked as a Swift script. We discuss the needs, benefits, technicalities, and current limitations of these techniques. We further discuss the real-world science enabled by these techniques and the results obtained.
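
    The Swift and Cobalt sub-block machinery is specific to Blue Gene/Q; as a language-neutral analogy of the sub-jobs idea (a sketch under stated assumptions, not the authors' implementation; simulate.exe and its arguments are hypothetical), many small independent tasks can be bundled into one allocation and drained by a local worker pool:

        """Run many small independent tasks inside one allocation, instead of
        submitting each as its own job (an analogy to the sub-jobs technique)."""
        import subprocess
        from concurrent.futures import ProcessPoolExecutor

        def run_task(task_id):
            # Each "sub-job" is an ordinary executable invocation (hypothetical binary).
            result = subprocess.run(["./simulate.exe", "--case", str(task_id)],
                                    capture_output=True, text=True)
            return task_id, result.returncode

        if __name__ == "__main__":
            tasks = range(1000)                                   # many small, independent cases
            with ProcessPoolExecutor(max_workers=16) as pool:     # workers fill one allocation
                for task_id, rc in pool.map(run_task, tasks):
                    if rc != 0:
                        print(f"task {task_id} failed with exit code {rc}")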

  16. Extracting the Textual and Temporal Structure of Supercomputing Logs

    Energy Technology Data Exchange (ETDEWEB)

    Jain, S; Singh, I; Chandra, A; Zhang, Z; Bronevetsky, G

    2009-05-26

    Supercomputers are prone to frequent faults that adversely affect their performance, reliability and functionality. System logs collected on these systems are a valuable resource of information about their operational status and health. However, their massive size, complexity, and lack of standard format make it difficult to automatically extract information that can be used to improve system management. In this work we propose a novel method to succinctly represent the contents of supercomputing logs, by using textual clustering to automatically find the syntactic structures of log messages. This information is used to automatically classify messages into semantic groups via an online clustering algorithm. Further, we describe a methodology for using the temporal proximity between groups of log messages to identify correlated events in the system. We apply our proposed methods to two large, publicly available supercomputing logs and show that our technique features nearly perfect accuracy for online log-classification and extracts meaningful structural and temporal message patterns that can be used to improve the accuracy of other log analysis techniques.
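
    A minimal version of the syntactic-template idea (not the authors' algorithm) masks variable fields such as numbers and hexadecimal identifiers and then groups messages that share the resulting skeleton:

        import re
        from collections import defaultdict

        MASKS = [
            (re.compile(r"0x[0-9a-fA-F]+"), "<HEX>"),
            (re.compile(r"\b\d+(\.\d+)?\b"), "<NUM>"),
        ]

        def template(message):
            """Reduce a log message to its syntactic skeleton by masking variable fields."""
            for pattern, token in MASKS:
                message = pattern.sub(token, message)
            return message

        def group_by_template(lines):
            groups = defaultdict(list)
            for line in lines:
                groups[template(line)].append(line)
            return groups

        log = [
            "node 17 fan speed 4200 rpm",
            "node 23 fan speed 3900 rpm",
            "memory error at address 0x3fa2b100 on node 5",
            "memory error at address 0x7cc01d20 on node 11",
        ]
        for tpl, members in group_by_template(log).items():
            print(f"{len(members):2d}x  {tpl}")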

  17. Programming Environment for a High-Performance Parallel Supercomputer with Intelligent Communication

    OpenAIRE

    A. Gunzinger; Bäumle, B.; Frey, M.; Klebl, M.; Kocheisen, M.; Kohler, P.; Morel, R.; Müller, U.; Rosenthal, M.

    1996-01-01

    At the Electronics Laboratory of the Swiss Federal Institute of Technology (ETH) in Zürich, the high-performance parallel supercomputer MUSIC (MUlti processor System with Intelligent Communication) has been developed. As applications like neural network simulation and molecular dynamics show, the Electronics Laboratory supercomputer is absolutely on par with those of conventional supercomputers, but electric power requirements are reduced by a factor of 1,000, weight is reduced by a factor of...

  18. 75 FR 71179 - Environmental Impact Statement: San Diego County, CA

    Science.gov (United States)

    2010-11-22

    ... Federal Highway Administration Environmental Impact Statement: San Diego County, CA AGENCY: Federal... Environmental Impact Statement (EIS) will be prepared for a proposed highway project in San Diego County... Community Center, 2258 Island Avenue, San Diego, California 92102. FOR FURTHER INFORMATION CONTACT: Kevin...

  19. Numerical simulations of astrophysical problems on massively parallel supercomputers

    Science.gov (United States)

    Kulikov, Igor; Chernykh, Igor; Glinsky, Boris

    2016-10-01

    In this paper, we propose the latest version of the numerical model for simulation of astrophysical objects dynamics, and a new realization of our AstroPhi code for Intel Xeon Phi based RSC PetaStream supercomputers. The co-design of a computational model for the description of astrophysical objects is described. The parallel implementation and scalability tests of the AstroPhi code are presented. We achieve 73% weak scaling efficiency using 256 Intel Xeon Phi accelerators with 61,440 threads.
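
    Weak scaling efficiency here is the usual ratio of the reference-device time to the N-device time for a problem grown in proportion to N; a quick worked form of the quoted 73% figure (with made-up timings, not the authors' measurements) is:

        def weak_scaling_efficiency(t_ref, t_n):
            """Weak scaling efficiency: time on the reference device count divided by
            time on N devices, with the problem size grown in proportion to N."""
            return t_ref / t_n

        # Illustrative (made-up) timings consistent with 73% efficiency at 256 accelerators.
        t_1_accel = 100.0       # seconds per step, 1 accelerator, base problem size
        t_256_accel = 137.0     # seconds per step, 256 accelerators, 256x problem size
        print(f"weak scaling efficiency: {weak_scaling_efficiency(t_1_accel, t_256_accel):.0%}")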

  20. AENEAS A Custom-built Parallel Supercomputer for Quantum Gravity

    CERN Document Server

    Hamber, H W

    1998-01-01

    Accurate Quantum Gravity calculations, based on the simplicial lattice formulation, are computationally very demanding and require vast amounts of computer resources. A custom-made 64-node parallel supercomputer capable of performing up to 2 × 10^10 floating point operations per second has been assembled entirely out of commodity components, and has been operational for the last ten months. It will allow the numerical computation of a variety of quantities of physical interest in quantum gravity and related field theories, including the estimate of the critical exponents in the vicinity of the ultraviolet fixed point to an accuracy of a few percent.

  1. A special purpose silicon compiler for designing supercomputing VLSI systems

    Science.gov (United States)

    Venkateswaran, N.; Murugavel, P.; Kamakoti, V.; Shankarraman, M. J.; Rangarajan, S.; Mallikarjun, M.; Karthikeyan, B.; Prabhakar, T. S.; Satish, V.; Venkatasubramaniam, P. R.

    1991-01-01

    Design of general/special purpose supercomputing VLSI systems for numeric algorithm execution involves tackling two important aspects, namely their computational and communication complexities. Development of software tools for designing such systems itself becomes complex. Hence a novel design methodology has to be developed. For designing such complex systems a special purpose silicon compiler is needed in which: the computational and communicational structures of different numeric algorithms should be taken into account to simplify the silicon compiler design, the approach is macrocell based, and the software tools at different levels (algorithm down to the VLSI circuit layout) should get integrated. In this paper a special purpose silicon (SPS) compiler based on PACUBE macrocell VLSI arrays for designing supercomputing VLSI systems is presented. It is shown that turn-around time and silicon real estate get reduced over the silicon compilers based on PLA's, SLA's, and gate arrays. The first two silicon compiler characteristics mentioned above enable the SPS compiler to perform systolic mapping (at the macrocell level) of algorithms whose computational structures are of GIPOP (generalized inner product outer product) form. Direct systolic mapping on PLA's, SLA's, and gate arrays is very difficult as they are micro-cell based. A novel GIPOP processor is under development using this special purpose silicon compiler.

  2. Solidification in a Supercomputer: From Crystal Nuclei to Dendrite Assemblages

    Science.gov (United States)

    Shibuta, Yasushi; Ohno, Munekazu; Takaki, Tomohiro

    2015-08-01

    Thanks to the recent progress in high-performance computational environments, the range of applications of computational metallurgy is expanding rapidly. In this paper, cutting-edge simulations of solidification from atomic to microstructural levels performed on a graphics processing unit (GPU) architecture are introduced with a brief introduction to advances in computational studies on solidification. In particular, million-atom molecular dynamics simulations captured the spontaneous evolution of anisotropy in a solid nucleus in an undercooled melt and homogeneous nucleation without any inducing factor, which is followed by grain growth. At the microstructural level, the quantitative phase-field model has been gaining importance as a powerful tool for predicting solidification microstructures. In this paper, the convergence behavior of simulation results obtained with this model is discussed, in detail. Such convergence ensures the reliability of results of phase-field simulations. Using the quantitative phase-field model, the competitive growth of dendrite assemblages during the directional solidification of a binary alloy bicrystal at the millimeter scale is examined by performing two- and three-dimensional large-scale simulations by multi-GPU computation on the supercomputer, TSUBAME2.5. This cutting-edge approach using a GPU supercomputer is opening a new phase in computational metallurgy.

  3. The TeraGyroid Experiment – Supercomputing 2003

    Directory of Open Access Journals (Sweden)

    R.J. Blake

    2005-01-01

    Amphiphiles are molecules with hydrophobic tails and hydrophilic heads. When dispersed in solvents, they self assemble into complex mesophases including the beautiful cubic gyroid phase. The goal of the TeraGyroid experiment was to study defect pathways and dynamics in these gyroids. The UK's supercomputing and USA's TeraGrid facilities were coupled together, through a dedicated high-speed network, into a single computational Grid for research work that peaked around the Supercomputing 2003 conference. The gyroids were modeled using lattice Boltzmann methods, with parameter spaces explored using many 128^3 and other grid-point simulations, this data being used to inform the world's largest three-dimensional time-dependent simulation with 1024^3 grid points. The experiment generated some 2 TBytes of useful data. In terms of Grid technology the project demonstrated the migration of simulations (using Globus middleware) to and fro across the Atlantic, exploiting the availability of resources. Integration of the systems accelerated the time to insight. Distributed visualisation of the output datasets enabled the parameter space of the interactions within the complex fluid to be explored from a number of sites, informed by discourse over the Access Grid. The project was sponsored by EPSRC (UK) and NSF (USA), with trans-Atlantic optical bandwidth provided by British Telecommunications.
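
    For orientation, a lattice Boltzmann step consists of a local BGK collision followed by streaming along discrete lattice directions; the following very small D2Q9 toy on a periodic grid (a generic sketch, not the TeraGyroid amphiphile code) illustrates the structure:

        import numpy as np

        # D2Q9 lattice: discrete velocities and weights of the standard BGK scheme.
        C = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                      [1, 1], [-1, 1], [-1, -1], [1, -1]])
        W = np.array([4/9] + [1/9]*4 + [1/36]*4)
        TAU = 0.8                      # BGK relaxation time (> 0.5 for stability)

        def equilibrium(rho, ux, uy):
            cu = C[:, 0, None, None]*ux + C[:, 1, None, None]*uy
            usq = ux**2 + uy**2
            return W[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

        def step(f):
            # Macroscopic moments.
            rho = f.sum(axis=0)
            ux = (C[:, 0, None, None]*f).sum(axis=0) / rho
            uy = (C[:, 1, None, None]*f).sum(axis=0) / rho
            # BGK collision followed by streaming along each lattice direction.
            f += -(f - equilibrium(rho, ux, uy)) / TAU
            for i, (cx, cy) in enumerate(C):
                f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
            return f

        nx = ny = 64
        rho0 = np.ones((nx, ny))
        rho0[nx//2-4:nx//2+4, ny//2-4:ny//2+4] = 1.05      # small density bump
        f = equilibrium(rho0, np.zeros((nx, ny)), np.zeros((nx, ny)))
        for _ in range(200):
            f = step(f)
        print("total mass conserved:", np.isclose(f.sum(), rho0.sum()))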

  4. Calibrating Building Energy Models Using Supercomputer Trained Machine Learning Agents

    Energy Technology Data Exchange (ETDEWEB)

    Sanyal, Jibonananda [ORNL; New, Joshua Ryan [ORNL; Edwards, Richard [ORNL; Parker, Lynne Edwards [ORNL

    2014-01-01

    Building Energy Modeling (BEM) is an approach to model the energy usage in buildings for design and retrofit purposes. EnergyPlus is the flagship Department of Energy software that performs BEM for different types of buildings. The input to EnergyPlus can often extend to the order of a few thousand parameters, which have to be calibrated manually by an expert for realistic energy modeling. This makes calibration challenging and expensive, rendering building energy modeling infeasible for smaller projects. In this paper, we describe the Autotune research, which employs machine learning algorithms to generate agents for the different kinds of standard reference buildings in the U.S. building stock. The parametric space and the variety of building locations and types make this a challenging computational problem necessitating the use of supercomputers. Millions of EnergyPlus simulations are run on supercomputers and are subsequently used to train machine learning algorithms to generate agents. These agents, once created, can then run in a fraction of the time, thereby allowing cost-effective calibration of building models.
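
    The surrogate idea can be sketched with synthetic data and an off-the-shelf regressor standing in for EnergyPlus runs and the Autotune agents (the parameter names and the fake_energyplus response below are invented for illustration):

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(42)

        # Pretend parameters: [insulation R-value, window U-factor, infiltration rate, setpoint].
        X = rng.uniform([10, 0.2, 0.1, 19], [40, 1.2, 1.0, 24], size=(5000, 4))

        def fake_energyplus(params):
            """Stand-in for an expensive simulation: annual energy use, arbitrary units."""
            r, u, ach, setpoint = params.T
            return 500 - 4*r + 120*u + 80*ach + 6*(setpoint - 21)**2 + rng.normal(0, 5, len(r))

        y = fake_energyplus(X)
        surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

        candidate = np.array([[30.0, 0.5, 0.3, 21.0]])
        print("surrogate prediction:", surrogate.predict(candidate)[0])
        print("direct 'simulation':  ", fake_energyplus(candidate)[0])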

  5. Optimizing Linpack Benchmark on GPU-Accelerated Petascale Supercomputer

    Institute of Scientific and Technical Information of China (English)

    Feng Wang; Can-Qun Yang; Yun-Fei Du; Juan Chen; Hui-Zhan Yi; Wei-Xia Xu

    2011-01-01

    In this paper we present the programming of the Linpack benchmark on the TianHe-1 system, the first petascale supercomputer system of China and the largest GPU-accelerated heterogeneous system ever attempted before. A hybrid programming model consisting of MPI, OpenMP and streaming computing is described to exploit the task parallelism, thread parallelism and data parallelism of the Linpack. We explain how we optimized the load distribution across the CPUs and GPUs using the two-level adaptive method and describe the implementation in detail. To overcome the low bandwidth of CPU-GPU communication, we present a software pipelining technique to hide the communication overhead. Combined with other traditional optimizations, the Linpack we developed achieved 196.7 GFLOPS on a single compute element of TianHe-1. This result is 70.1% of the peak compute capability, and 3.3 times faster than the result obtained using the vendor's library. On the full configuration of TianHe-1 our optimizations resulted in a Linpack performance of 0.563 PFLOPS, which made TianHe-1 the 5th fastest supercomputer on the Top500 list in November 2009.
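
    The software-pipelining idea, overlapping the transfer of the next block with computation on the current one, can be sketched generically (threads and sleeps standing in for PCIe transfers and GPU kernels; not the TianHe-1 Linpack code):

        import time
        from concurrent.futures import ThreadPoolExecutor

        def transfer(block_id):
            time.sleep(0.05)            # stand-in for a host-to-device copy
            return f"data{block_id}"

        def compute(data):
            time.sleep(0.10)            # stand-in for the GPU kernel
            return f"result from {data}"

        def pipelined(n_blocks):
            with ThreadPoolExecutor(max_workers=1) as copier:
                pending = copier.submit(transfer, 0)               # prefetch first block
                for i in range(n_blocks):
                    data = pending.result()                        # wait for current block
                    if i + 1 < n_blocks:
                        pending = copier.submit(transfer, i + 1)   # start next copy early
                    compute(data)                                  # overlaps with that copy

        start = time.time()
        pipelined(8)
        print(f"pipelined: {time.time() - start:.2f} s (vs ~{8*0.15:.2f} s unpipelined)")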

  6. Integration of Titan supercomputer at OLCF with ATLAS Production System

    CERN Document Server

    Barreiro Megino, Fernando Harald; The ATLAS collaboration

    2017-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of Petabytes of data, and the rate of data processing already exceeds an Exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this talk we will describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for ...

  7. Lectures in Supercomputational Neurosciences Dynamics in Complex Brain Networks

    CERN Document Server

    Graben, Peter beim; Thiel, Marco; Kurths, Jürgen

    2008-01-01

    Computational Neuroscience is a burgeoning field of research where only the combined effort of neuroscientists, biologists, psychologists, physicists, mathematicians, computer scientists, engineers and other specialists, e.g. from linguistics and medicine, seems able to expand the limits of our knowledge. The present volume is an introduction, largely from the physicists' perspective, to the subject matter with in-depth contributions by system neuroscientists. A conceptual model for complex networks of neurons is introduced that incorporates many important features of the real brain, such as various types of neurons, various brain areas, inhibitory and excitatory coupling and the plasticity of the network. The computational implementation on supercomputers, which is introduced and discussed in detail in this book, will enable the readers to modify and adapt the algorithm for their own research. Worked-out examples of applications are presented for networks of Morris-Lecar neurons to model the cortical co...
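
    The Morris-Lecar model mentioned above is a two-variable conductance-based neuron; a minimal forward-Euler integration with a common textbook parameter set (a sketch, not the book's implementation) looks like this:

        import numpy as np

        # Morris-Lecar neuron, a standard (type II) textbook parameter set.
        C, g_L, g_Ca, g_K = 20.0, 2.0, 4.4, 8.0          # uF/cm^2, mS/cm^2
        V_L, V_Ca, V_K = -60.0, 120.0, -84.0             # mV
        V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04
        I_ext = 100.0                                    # uA/cm^2, above firing threshold

        def derivatives(V, w):
            m_inf = 0.5 * (1 + np.tanh((V - V1) / V2))
            w_inf = 0.5 * (1 + np.tanh((V - V3) / V4))
            tau_w = 1.0 / np.cosh((V - V3) / (2 * V4))
            dV = (I_ext - g_L*(V - V_L) - g_Ca*m_inf*(V - V_Ca) - g_K*w*(V - V_K)) / C
            dw = phi * (w_inf - w) / tau_w
            return dV, dw

        dt, steps = 0.05, 20000                          # 1000 ms of simulated time
        V, w, prev_V, spikes = -60.0, 0.0, -60.0, 0
        for _ in range(steps):
            dV, dw = derivatives(V, w)
            V, w = V + dt*dV, w + dt*dw
            if prev_V < 0.0 <= V:                        # upward crossing of 0 mV = spike
                spikes += 1
            prev_V = V
        print(f"spikes in {steps*dt:.0f} ms: {spikes}")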

  8. Modeling the weather with a data flow supercomputer

    Science.gov (United States)

    Dennis, J. B.; Gao, G.-R.; Todd, K. W.

    1984-01-01

    A static concept of data flow architecture is considered for a supercomputer for weather modeling. The machine level instructions are loaded into specific memory locations before computation is initiated, with only one instruction active at a time. The machine would have processing element, functional unit, array memory, memory routing and distribution routing network elements all contained on microprocessors. A value-oriented algorithmic language (VAL) would be employed and would have, as basic operations, simple functions deriving results from operand values. Details of the machine language format, computations with an array and file processing procedures are outlined. A global weather model is discussed in terms of a static architecture and the potential computation rate is analyzed. The results indicate that detailed design studies are warranted to quantify costs and parts fabrication requirements.

  9. Toward the Graphics Turing Scale on a Blue Gene Supercomputer

    CERN Document Server

    McGuigan, Michael

    2008-01-01

    We investigate the raytracing performance that can be achieved on a class of Blue Gene supercomputers. We measure an 822-fold speedup over a Pentium IV on a 6144-processor Blue Gene/L. We measure the computational performance as a function of the number of processors and problem size to determine the scaling performance of the raytracing calculation on the Blue Gene. We find nontrivial scaling behavior at large numbers of processors. We discuss applications of this technology to scientific visualization with advanced lighting and high resolution. We utilize three racks of a Blue Gene/L in our calculations, which is less than three percent of the capacity of the world's largest Blue Gene computer.

  10. Direct numerical simulation of turbulence using GPU accelerated supercomputers

    Science.gov (United States)

    Khajeh-Saeed, Ali; Blair Perot, J.

    2013-02-01

    Direct numerical simulations of turbulence are optimized for up to 192 graphics processors. The results from two large GPU clusters are compared to the performance of corresponding CPU clusters. A number of important algorithm changes are necessary to access the full computational power of graphics processors and these adaptations are discussed. It is shown that the handling of subdomain communication becomes even more critical when using GPU based supercomputers. The potential for overlap of MPI communication with GPU computation is analyzed and then optimized. Detailed timings reveal that the internal calculations are now so efficient that the operations related to MPI communication are the primary scaling bottleneck at all but the very largest problem sizes that can fit on the hardware. This work gives a glimpse of the CFD performance issues that will dominate many hardware platforms in the near future.

  11. Solving global shallow water equations on heterogeneous supercomputers.

    Science.gov (United States)

    Fu, Haohuan; Gan, Lin; Yang, Chao; Xue, Wei; Wang, Lanning; Wang, Xinliang; Huang, Xiaomeng; Yang, Guangwen

    2017-01-01

    The scientific demand for more accurate modeling of the climate system calls for more computing power to support higher resolutions, inclusion of more component models, more complicated physics schemes, and larger ensembles. As the recent improvements in computing power mostly come from the increasing number of nodes in a system and the integration of heterogeneous accelerators, how to scale the computing problems onto more nodes and various kinds of accelerators has become a challenge for model development. This paper describes our efforts on developing a highly scalable framework for performing global atmospheric modeling on heterogeneous supercomputers equipped with various accelerators, such as GPU (Graphic Processing Unit), MIC (Many Integrated Core), and FPGA (Field Programmable Gate Array) cards. We propose a generalized partition scheme of the problem domain, so as to keep a balanced utilization of both CPU resources and accelerator resources. With optimizations on both computing and memory access patterns, we manage to achieve around 8 to 20 times speedup when comparing one hybrid GPU or MIC node with one CPU node with 12 cores. Using customized FPGA-based data-flow engines, we see the potential to gain another 5 to 8 times improvement in performance. On heterogeneous supercomputers, such as Tianhe-1A and Tianhe-2, our framework is capable of achieving ideally linear scaling efficiency, and sustained double-precision performance of 581 Tflops on Tianhe-1A (using 3750 nodes) and 3.74 Pflops on Tianhe-2 (using 8644 nodes). Our study also provides an evaluation of the programming paradigm of various accelerator architectures (GPU, MIC, FPGA) for performing global atmospheric simulation, to form a picture of both the potential performance benefits and the programming efforts involved.

  12. Virtualizing Super-Computation On-Board Uas

    Science.gov (United States)

    Salami, E.; Soler, J. A.; Cuadrado, R.; Barrado, C.; Pastor, E.

    2015-04-01

    Unmanned aerial systems (UAS, also known as UAV, RPAS or drones) have a great potential to support a wide variety of aerial remote sensing applications. Most UAS work by acquiring data using on-board sensors for later post-processing. Some require the data gathered to be downlinked to the ground in real-time. However, depending on the volume of data and the cost of the communications, this latter option is not sustainable in the long term. This paper develops the concept of virtualizing super-computation on-board UAS, as a method to ease the operation by facilitating the downlink of high-level information products instead of raw data. Exploiting recent developments in miniaturized multi-core devices is the way to speed up on-board computation. This hardware shall satisfy size, power and weight constraints. Several technologies are appearing with promising results for high performance computing on unmanned platforms, such as the 36 cores of the TILE-Gx36 by Tilera (now EZchip) or the 64 cores of the Epiphany-IV by Adapteva. The strategy for virtualizing super-computation on-board includes benchmarking for hardware selection, the software architecture and the communications-aware design. A parallelization strategy is given for the 36-core TILE-Gx36 for a UAS in a fire mission or in similar target-detection applications. The results are obtained for payload image processing algorithms and determine in real-time the data snapshot to gather and transfer to ground according to the needs of the mission, the processing time, and consumed watts.
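
    The tile-parallel pattern for on-board target detection can be sketched generically (a hypothetical hotspot detector over image tiles, with a thread pool standing in for the 36 cores; not the authors' payload code):

        import numpy as np
        from concurrent.futures import ThreadPoolExecutor

        TILE = 256          # tile edge in pixels; one tile per core in the real layout

        def detect_hotspots(tile_with_origin):
            """Stand-in for a payload detection kernel: count saturated pixels in a tile."""
            (y0, x0), tile = tile_with_origin
            hot = int((tile > 240).sum())
            return (y0, x0, hot) if hot > 50 else None

        def split(image):
            for y0 in range(0, image.shape[0], TILE):
                for x0 in range(0, image.shape[1], TILE):
                    yield (y0, x0), image[y0:y0+TILE, x0:x0+TILE]

        frame = np.random.default_rng(0).integers(0, 200, size=(1024, 1536), dtype=np.uint8)
        frame[300:340, 700:760] = 255                      # synthetic "fire" hotspot

        with ThreadPoolExecutor(max_workers=36) as pool:   # 36 workers ~ one per TILE-Gx36 core
            detections = [d for d in pool.map(detect_hotspots, split(frame)) if d is not None]
        print("downlink-sized product:", detections)       # only coordinates, not the raw frame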

  13. Non-preconditioned conjugate gradient on cell and FPGA-based hybrid supercomputer nodes

    Energy Technology Data Exchange (ETDEWEB)

    Dubois, David H [Los Alamos National Laboratory; Dubois, Andrew J [Los Alamos National Laboratory; Boorman, Thomas M [Los Alamos National Laboratory; Connor, Carolyn M [Los Alamos National Laboratory

    2009-03-10

    This work presents a detailed implementation of a double precision, Non-Preconditioned, Conjugate Gradient algorithm on a Roadrunner heterogeneous supercomputer node. These nodes utilize the Cell Broadband Engine Architecture in conjunction with x86 Opteron processors from AMD. We implement a common Conjugate Gradient algorithm, on a variety of systems, to compare and contrast performance. Implementation results are presented for the Roadrunner hybrid supercomputer, SRC Computers, Inc. MAPStation SRC-6 FPGA enhanced hybrid supercomputer, and AMD Opteron only. In all hybrid implementations wall clock time is measured, including all transfer overhead and compute timings.
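
    For reference, the non-preconditioned Conjugate Gradient iteration being ported is the standard one; a plain NumPy version (a generic sketch, not the Cell or FPGA implementations) is:

        import numpy as np

        def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
            """Standard non-preconditioned CG for a symmetric positive definite A."""
            x = np.zeros_like(b)
            r = b - A @ x
            p = r.copy()
            rs_old = r @ r
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rs_old / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs_old) * p
                rs_old = rs_new
            return x

        # Small SPD test system.
        rng = np.random.default_rng(1)
        M = rng.normal(size=(50, 50))
        A = M @ M.T + 50 * np.eye(50)
        b = rng.normal(size=50)
        x = conjugate_gradient(A, b)
        print("residual norm:", np.linalg.norm(A @ x - b))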

  14. Non-preconditioned conjugate gradient on cell and FPGA based hybrid supercomputer nodes

    Energy Technology Data Exchange (ETDEWEB)

    Dubois, David H [Los Alamos National Laboratory; Dubois, Andrew J [Los Alamos National Laboratory; Boorman, Thomas M [Los Alamos National Laboratory; Connor, Carolyn M [Los Alamos National Laboratory

    2009-01-01

    This work presents a detailed implementation of a double precision, non-preconditioned, Conjugate Gradient algorithm on a Roadrunner heterogeneous supercomputer node. These nodes utilize the Cell Broadband Engine Architecture in conjunction with x86 Opteron processors from AMD. We implement a common Conjugate Gradient algorithm, on a variety of systems, to compare and contrast performance. Implementation results are presented for the Roadrunner hybrid supercomputer, SRC Computers, Inc. MAPStation SRC-6 FPGA enhanced hybrid supercomputer, and AMD Opteron only. In all hybrid implementations wall clock time is measured, including all transfer overhead and compute timings.

  15. Programming Environment for a High-Performance Parallel Supercomputer with Intelligent Communication

    Directory of Open Access Journals (Sweden)

    A. Gunzinger

    1996-01-01

    At the Electronics Laboratory of the Swiss Federal Institute of Technology (ETH) in Zürich, the high-performance parallel supercomputer MUSIC (MUlti processor System with Intelligent Communication) has been developed. As applications like neural network simulation and molecular dynamics show, the Electronics Laboratory supercomputer is absolutely on par with those of conventional supercomputers, but electric power requirements are reduced by a factor of 1,000, weight is reduced by a factor of 400, and price is reduced by a factor of 100. Software development is a key issue of such parallel systems. This article focuses on the programming environment of the MUSIC system and on its applications.

  16. Requirements for supercomputing in energy research: The transition to massively parallel computing

    Energy Technology Data Exchange (ETDEWEB)

    1993-02-01

    This report discusses: The emergence of a practical path to TeraFlop computing and beyond; requirements of energy research programs at DOE; implementation: supercomputer production computing environment on massively parallel computers; and implementation: user transition to massively parallel computing.

  17. Novel Supercomputing Approaches for High Performance Linear Algebra Using FPGAs Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Supercomputing plays a major role in many areas of science and engineering, and it has had tremendous impact for decades in areas such as aerospace, defense, energy,...

  18. SUPERCOMPUTERS FOR AIDING ECONOMIC PROCESSES WITH REFERENCE TO THE FINANCIAL SECTOR

    Directory of Open Access Journals (Sweden)

    Jerzy Balicki

    2014-12-01

    Full Text Available The article discusses the use of supercomputers to support business processes, with particular emphasis on the financial sector. Reference is made to selected projects that support economic development. In particular, we propose the use of supercomputers to perform artificial intelligence methods in banking. The proposed methods, combined with modern technology, enable a significant increase in the competitiveness of enterprises and banks by adding new functionality.

  19. A novel VLSI processor architecture for supercomputing arrays

    Science.gov (United States)

    Venkateswaran, N.; Pattabiraman, S.; Devanathan, R.; Ahmed, Ashaf; Venkataraman, S.; Ganesh, N.

    1993-01-01

    Design of the processor element for general purpose massively parallel supercomputing arrays is highly complex and cost-ineffective. To overcome this, the architecture and organization of the functional units of the processor element should be such as to suit the diverse computational structures and simplify mapping of complex communication structures of different classes of algorithms. This demands that the computation and communication structures of different classes of algorithms be unified. While unifying the different communication structures is a difficult process, analysis of a wide class of algorithms reveals that their computation structures can be expressed in terms of basic IP, IP, OP, CM, R, SM, and MAA operations. The execution of these operations is unified on the PAcube macro-cell array. Based on this PAcube macro-cell array, we present a novel processor element called the GIPOP processor, which has dedicated functional units to perform the above operations. The architecture and organization of these functional units are designed to satisfy the two important criteria mentioned above. The structure of the macro-cell and the unification process have led to a very regular and simpler design of the GIPOP processor. The production cost of the GIPOP processor is drastically reduced as it is designed on high performance mask programmable PAcube arrays.

  20. Numerical infinities and infinitesimals in a new supercomputing framework

    Science.gov (United States)

    Sergeyev, Yaroslav D.

    2016-06-01

    Traditional computers are able to work numerically with finite numbers only. The Infinity Computer, patented recently in the USA and EU, overcomes this limitation. In fact, it is a computational device of a new kind, able to work numerically not only with finite quantities but also with infinities and infinitesimals. The new supercomputing methodology is not related to non-standard analysis and does not use either Cantor's infinite cardinals or ordinals. It is founded on Euclid's Common Notion 5, saying `The whole is greater than the part'. This postulate is applied to all numbers (finite, infinite, and infinitesimal) and to all sets and processes (finite and infinite). It is shown that it becomes possible to write down finite, infinite, and infinitesimal numbers by a finite number of symbols as numerals belonging to a positional numeral system with an infinite radix described by a specific axiom introduced ad hoc. Numerous examples of the usage of the introduced computational tools are given during the lecture. In particular, algorithms for solving optimization problems and ODEs are considered among the computational applications of the Infinity Computer. Numerical experiments executed on a software prototype of the Infinity Computer are discussed.
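
    As a hedged illustration of what such a positional numeral with an infinite radix looks like (following the general form used in Sergeyev's publications; the particular digits below are made up for the example), a number C is recorded as

    \[
    C = c_{p_m}\,\text{①}^{\,p_m} + \dots + c_{p_1}\,\text{①}^{\,p_1} + c_{p_0}\,\text{①}^{\,p_0} + c_{p_{-1}}\,\text{①}^{\,p_{-1}} + \dots + c_{p_{-k}}\,\text{①}^{\,p_{-k}},
    \]

    where ① (grossone) is the infinite radix, the grosspowers p_i may themselves be finite, infinite, or infinitesimal, and the grossdigits c_{p_i} are finite. For instance, 3.2 ①^1 + 14 ①^0 + 5 ①^{-1} bundles an infinite part, a finite part (14, since ①^0 = 1), and an infinitesimal part in a single numeral.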

  1. Micro-mechanical Simulations of Soils using Massively Parallel Supercomputers

    Directory of Open Access Journals (Sweden)

    David W. Washington

    2004-06-01

    Full Text Available In this research a computer program, Trubal version 1.51, based on the Discrete Element Method was converted to run on a Connection Machine (CM-5), a massively parallel supercomputer with 512 nodes, to reduce the computational time of simulating geotechnical boundary value problems. The dynamic memory algorithm in the Trubal program did not perform efficiently on the CM-2 machine with the Single Instruction Multiple Data (SIMD) architecture. This was due to the communication overhead involving global array reductions, global array broadcasts and random data movement. Therefore, the dynamic memory algorithm in the Trubal program was converted to a static memory arrangement, and the program was successfully converted to run on CM-5 machines. The converted program was called "TRUBAL for Parallel Machines (TPM)." Simulating two physical triaxial experiments and comparing simulation results with Trubal simulations validated the TPM program. With a 512-node CM-5 machine, TPM produced a nine-fold speedup, demonstrating the inherent parallelism within algorithms based on the Discrete Element Method.

  2. Astrophysical Supercomputing with GPUs: Critical Decisions for Early Adopters

    Science.gov (United States)

    Fluke, Christopher J.; Barnes, David G.; Barsdell, Benjamin R.; Hassan, Amr H.

    2011-01-01

    General-purpose computing on graphics processing units (GPGPU) is dramatically changing the landscape of high performance computing in astronomy. In this paper, we identify and investigate several key decision areas, with a goal of simplifying the early adoption of GPGPU in astronomy. We consider the merits of OpenCL as an open standard in order to reduce risks associated with coding in a native, vendor-specific programming environment, and present a GPU programming philosophy based on using brute force solutions. We assert that effective use of new GPU-based supercomputing facilities will require a change in approach from astronomers. This will likely include improved programming training, an increased need for software development best practice through the use of profiling and related optimisation tools, and a greater reliance on third-party code libraries. As with any new technology, those willing to take the risks and make the investment of time and effort to become early adopters of GPGPU in astronomy, stand to reap great benefits.

  3. Astrophysical Supercomputing with GPUs: Critical Decisions for Early Adopters

    CERN Document Server

    Fluke, Christopher J; Barsdell, Benjamin R; Hassan, Amr H

    2010-01-01

    General purpose computing on graphics processing units (GPGPU) is dramatically changing the landscape of high performance computing in astronomy. In this paper, we identify and investigate several key decision areas, with a goal of simplifying the early adoption of GPGPU in astronomy. We consider the merits of OpenCL as an open standard in order to reduce risks associated with coding in a native, vendor-specific programming environment, and present a GPU programming philosophy based on using brute force solutions. We assert that effective use of new GPU-based supercomputing facilities will require a change in approach from astronomers. This will likely include improved programming training, an increased need for software development best-practice through the use of profiling and related optimisation tools, and a greater reliance on third-party code libraries. As with any new technology, those willing to take the risks, and make the investment of time and effort to become early adopters of GPGPU in astronomy, s...

  4. Using the multistage cube network topology in parallel supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Siegel, H.J.; Nation, W.G. (Purdue Univ., Lafayette, IN (USA). School of Electrical Engineering); Kruskal, C.P. (Maryland Univ., College Park, MD (USA). Dept. of Computer Science); Napolitano, L.M. Jr. (Sandia National Labs., Livermore, CA (USA))

    1989-12-01

    A variety of approaches to designing the interconnection network to support communications among the processors and memories of supercomputers employing large-scale parallel processing have been proposed and/or implemented. These approaches are often based on the multistage cube topology. This topology is the subject of much ongoing research and study because of the ways in which the multistage cube can be used. The attributes of the topology that make it useful are described. These include O(N log2 N) cost for an N input/output network, decentralized control, a variety of implementation options, good data permuting capability to support single instruction stream/multiple data stream (SIMD) parallelism, good throughput to support multiple instruction stream/multiple data stream (MIMD) parallelism, and ability to be partitioned into independent subnetworks to support reconfigurable systems. Examples of existing systems that use multistage cube networks are overviewed. The multistage cube topology can be converted into a single-stage network by associating with each switch in the network a processor (and a memory). Properties of systems that use the multistage cube network in this way are also examined.
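
    As a hedged illustration of the decentralized control noted above: in an Omega-type multistage cube network with N = 2^n inputs, each of the n stages can set its switch output locally from a single bit of the destination address (destination-tag routing). The toy routine below is a generic textbook sketch, not any particular machine's router, and the bit ordering shown is only one common convention.

    ```python
    # Destination-tag (self-routing) through an Omega-type multistage cube network.
    # Generic textbook sketch; stage/bit ordering conventions vary between designs.
    def destination_tag_route(dst, n_stages):
        """Output port taken at each stage (0 = upper, 1 = lower), MSB first."""
        return [(dst >> (n_stages - 1 - s)) & 1 for s in range(n_stages)]

    # Example: a 16-input network (n_stages = 4) routing any message to output 11.
    print(destination_tag_route(dst=11, n_stages=4))  # [1, 0, 1, 1]
    ```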

  5. Supercomputers ready for use as discovery machines for neuroscience

    Directory of Open Access Journals (Sweden)

    Moritz eHelias

    2012-11-01

    Full Text Available NEST is a widely used tool to simulate biological spiking neural networks. Here we explain the improvements, guided by a mathematical model of memory consumption, that enable us to exploit for the first time the computational power of the K supercomputer for neuroscience. Multi-threaded components for wiring and simulation combine 8 cores per MPI process to achieve excellent scaling. K is capable of simulating networks corresponding to a brain area with 10^8 neurons and 10^12 synapses in the worst case scenario of random connectivity; for larger networks of the brain its hierarchical organization can be exploited to constrain the number of communicating computer nodes. We discuss the limits of the software technology, comparing maximum-filling scaling plots for K and the JUGENE BG/P system. The usability of these machines for network simulations has become comparable to running simulations on a single PC. Turn-around times in the range of minutes even for the largest systems enable a quasi-interactive working style and render simulations on this scale a practical tool for computational neuroscience.

  6. Supercomputers ready for use as discovery machines for neuroscience.

    Science.gov (United States)

    Helias, Moritz; Kunkel, Susanne; Masumoto, Gen; Igarashi, Jun; Eppler, Jochen Martin; Ishii, Shin; Fukai, Tomoki; Morrison, Abigail; Diesmann, Markus

    2012-01-01

    NEST is a widely used tool to simulate biological spiking neural networks. Here we explain the improvements, guided by a mathematical model of memory consumption, that enable us to exploit for the first time the computational power of the K supercomputer for neuroscience. Multi-threaded components for wiring and simulation combine 8 cores per MPI process to achieve excellent scaling. K is capable of simulating networks corresponding to a brain area with 10(8) neurons and 10(12) synapses in the worst case scenario of random connectivity; for larger networks of the brain its hierarchical organization can be exploited to constrain the number of communicating computer nodes. We discuss the limits of the software technology, comparing maximum filling scaling plots for K and the JUGENE BG/P system. The usability of these machines for network simulations has become comparable to running simulations on a single PC. Turn-around times in the range of minutes even for the largest systems enable a quasi interactive working style and render simulations on this scale a practical tool for computational neuroscience.
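
    A rough, hedged illustration of the kind of memory-model reasoning described above (the per-object byte costs below are placeholders, not the values from the NEST model): dividing N neurons and K synapses across M MPI processes bounds the network size that fits in each node's memory.

    ```python
    # Back-of-the-envelope memory model for a distributed network simulation.
    # Byte costs per neuron/synapse are ILLUSTRATIVE placeholders, not NEST's numbers.
    def memory_per_process_gb(n_neurons, n_synapses, n_processes,
                              bytes_per_neuron=1500, bytes_per_synapse=50):
        neurons_local = n_neurons / n_processes
        synapses_local = n_synapses / n_processes
        total_bytes = neurons_local * bytes_per_neuron + synapses_local * bytes_per_synapse
        return total_bytes / 1e9

    # Example: 1e8 neurons and 1e12 synapses spread over 80,000 MPI processes.
    print(f"{memory_per_process_gb(1e8, 1e12, 8e4):.1f} GB per process")
    ```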

  7. Impact evaluation of a healthy lifestyle intervention to reduce cardiovascular disease risk in health centers in San José, Costa Rica and Chiapas, Mexico.

    Science.gov (United States)

    Fort, Meredith P; Murillo, Sandra; López, Erika; Dengo, Ana Laura; Alvarado-Molina, Nadia; de Beausset, Indira; Castro, Maricruz; Peña, Liz; Ramírez-Zea, Manuel; Martínez, Homero

    2015-12-28

    Previous healthy lifestyle interventions based on the Salud para Su Corazón curriculum for Latinos in the United States, and a pilot study in Guatemala, demonstrated improvements in patient knowledge, behavior, and clinical outcomes for adults with hypertension. This article describes the implementation of a healthy lifestyle group education intervention at the primary care health center level in the capital cities of Costa Rica and Chiapas, Mexico for patients with hypertension and/or type 2 diabetes and presents impact evaluation results. Six group education sessions were offered to participants at intervention health centers from November 2011 to December 2012 and participants were followed up for 8 months. The study used a prospective, longitudinal, nonequivalent pretest-posttest comparison group design, and was conducted in parallel in the two countries. Cognitive and behavioral outcome measures were knowledge, self-efficacy, stage-of-change, dietary behavior and physical activity. Clinical outcomes were: body mass index, systolic and diastolic blood pressure, and fasting blood glucose. Group by time differences were assessed using generalized estimating equation models, and a dose-response analysis was conducted for the intervention group. The average number of group education sessions attended in Chiapas was 4 (SD: 2.2) and in Costa Rica, 1.8 (SD: 2.0). In both settings, participation in the study declined by 8-month follow-up. In Costa Rica, intervention group participants showed significant improvements in systolic and diastolic blood pressure and borderline significant improvement for fasting glucose, and significant improvement in the stages-of-change measure vs. the comparison group. In Chiapas, the intervention group showed significant improvement in the stages-of-change measure in relation to the comparison group. Significant improvements were not observed for knowledge, self-efficacy, dietary behavior or physical activity. In Chiapas only, a

  8. Economics of data center optics

    Science.gov (United States)

    Huff, Lisa

    2016-03-01

    Traffic to and from data centers is now reaching Zettabytes/year. Even the smallest of businesses now rely on data centers for revenue generation. And, the largest data centers today are orders of magnitude larger than the supercomputing centers of a few years ago. Until quite recently, for most data center managers, optical data centers were nice to dream about, but not really essential. Today, the all-optical data center - perhaps even an all-single mode fiber (SMF) data center is something that even managers of medium-sized data centers should be considering. Economical transceivers are the key to increased adoption of data center optics. An analysis of current and near future data center optics economics will be discussed in this paper.

  9. Specialty education in periodontics in Japan and the United States: comparison of programs at Nippon Dental University Hospital and the University of Texas Health Science Center at San Antonio.

    Science.gov (United States)

    Osawa, Ginko; Nakaya, Hiroshi; Mealey, Brian L; Kalkwarf, Kenneth; Cochran, David L

    2014-03-01

    Japan has institutions that train qualified postdoctoral students in the field of periodontics; however, Japan does not have comprehensive advanced periodontal programs and national standards for these specialty programs. To help Japanese programs move toward global standards in this area, this study was designed to describe overall differences in periodontics specialty education in Japan and the United States and to compare periodontics faculty members and residents' characteristics and attitudes in two specific programs, one in each country. Periodontal faculty members and residents at Nippon Dental University (NDU) and the University of Texas Health Science Center at San Antonio (UTHSCSA) Dental School participated in the survey study: four faculty members and nine residents at NDU; seven faculty members and thirteen residents at UTHSCSA. Demographic data were collected as well as respondents' attitudes toward and assessment of their programs. The results showed many differences in curriculum structure and clinical performance. In contrast to the UTHSCSA respondents, for example, the residents and faculty members at NDU reported that they did not have enough subject matter and time to learn clinical science. Although the residents at NDU reported seeing more total patients in one month than those at UTHSCSA, they were taught fewer varieties of periodontal treatments. To provide high-quality and consistent education for periodontal residents, Japan needs to establish a set of standards that will have positive consequences for those in Japan who need periodontal treatment.

  10. 77 FR 54811 - Safety Zone; TriRock San Diego, San Diego Bay, San Diego, CA

    Science.gov (United States)

    2012-09-06

    ... SECURITY Coast Guard 33 CFR Part 165 RIN 1625-AA00 Safety Zone; TriRock San Diego, San Diego Bay, San Diego... safety zone upon the navigable waters of the San Diego Bay, San Diego, CA, in support of a bay swim in San Diego Harbor. This safety zone is necessary to provide for the safety of the participants, crew...

  11. 78 FR 58878 - Safety Zone; San Diego Shark Fest Swim; San Diego Bay, San Diego, CA

    Science.gov (United States)

    2013-09-25

    ... SECURITY Coast Guard 33 CFR Part 165 RIN 1625-AA00 Safety Zone; San Diego Shark Fest Swim; San Diego Bay, San Diego, CA AGENCY: Coast Guard, DHS. ACTION: Temporary final rule. SUMMARY: The Coast Guard is establishing a safety zone upon the navigable waters of the San Diego Bay, San Diego, CA, in support of San...

  12. 78 FR 53243 - Safety Zone; TriRock San Diego, San Diego Bay, San Diego, CA

    Science.gov (United States)

    2013-08-29

    ... SECURITY Coast Guard 33 CFR Part 165 RIN 1625-AA00 Safety Zone; TriRock San Diego, San Diego Bay, San Diego... temporary safety zone upon the navigable waters of the San Diego Bay, San Diego, CA, in support of a... Bryan Gollogly, Waterways Management, U.S. Coast Guard Sector San Diego; telephone (619) 278-7656, email...

  13. SSC San Diego Command History Calendar Year 2005

    Science.gov (United States)

    2006-03-01

    SSC San Diego from Capt. Tim Flynn on 18 August 2005. Capt. Unetic joined the Center from SPAWAR Headquarters where he was Executive Assistant to...communications. Rear Adm. Tim Flynn Rear Adm. Tim Flynn served as SSC San Diego Commanding Officer beginning 2 May 2002. Before joining SSC San Diego, he was...Steven Short, Arleen Simbulan, Robert Smith, Gleason Snashall, Dow Street, Weden Teng, Deborah Tharp , Tine Thompson, Viet Tran, Thomas Tucker, Rob Turner

  14. Geology and geochemistry of volcanic centers within the eastern half of the Sonoma volcanic field, northern San Francisco Bay region, California

    Science.gov (United States)

    Sweetkind, Donald S.; Rytuba, James J.; Langenheim, V.E.; Fleck, Robert J.

    2011-01-01

    Volcanic rocks in the Sonoma volcanic field in the northern California Coast Ranges contain heterogeneous assemblages of a variety of compositionally diverse volcanic rocks. We have used field mapping, new and existing age determinations, and 343 new major and trace element analyses of whole-rock samples from lavas and tuff to define for the first time volcanic source areas for many parts of the Sonoma volcanic field. Geophysical data and models have helped to define the thickness of the volcanic pile and the location of caldera structures. Volcanic rocks of the Sonoma volcanic field show a broad range in eruptive style that is spatially variable and specific to an individual eruptive center. Major, minor, and trace-element geochemical data for intracaldera and outflow tuffs and their distal fall equivalents suggest caldera-related sources for the Pinole and Lawlor Tuffs in southern Napa Valley and for the tuff of Franz Valley in northern Napa Valley. Stratigraphic correlations based on similarity in eruptive sequence and style coupled with geochemical data allow an estimate of 30 km of right-lateral offset across the West Napa-Carneros fault zones since ~5 Ma.

  15. Holistic Approach to Data Center Energy Efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Steven W [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-18

    This presentation discusses NREL's Energy System Integrations Facility and NREL's holistic design approach to sustainable data centers that led to the world's most energy-efficient data center. It describes Peregrine, a warm water liquid cooled supercomputer, waste heat reuse in the data center, demonstrated PUE and ERE, and lessons learned during four years of operation.
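
    For context on the PUE and ERE figures mentioned above, both metrics are simple ratios (as defined by The Green Grid); the sketch below uses made-up energy numbers rather than NREL's measured values.

    ```python
    # Power Usage Effectiveness and Energy Reuse Effectiveness (Green Grid metrics).
    # The energy figures are made-up examples, not NREL's measurements.
    def pue(total_facility_energy, it_energy):
        return total_facility_energy / it_energy

    def ere(total_facility_energy, reused_energy, it_energy):
        return (total_facility_energy - reused_energy) / it_energy

    it = 100.0        # IT equipment energy (arbitrary units)
    facility = 106.0  # IT + cooling + power distribution
    reused = 30.0     # waste heat exported, e.g. to heat nearby buildings
    print(f"PUE = {pue(facility, it):.2f}, ERE = {ere(facility, reused, it):.2f}")
    ```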

  16. The TESS science processing operations center

    Science.gov (United States)

    Jenkins, Jon M.; Twicken, Joseph D.; McCauliff, Sean; Campbell, Jennifer; Sanderfer, Dwight; Lung, David; Mansouri-Samani, Masoud; Girouard, Forrest; Tenenbaum, Peter; Klaus, Todd; Smith, Jeffrey C.; Caldwell, Douglas A.; Chacon, A. D.; Henze, Christopher; Heiges, Cory; Latham, David W.; Morgan, Edward; Swade, Daryl; Rinehart, Stephen; Vanderspek, Roland

    2016-08-01

    The Transiting Exoplanet Survey Satellite (TESS) will conduct a search for Earth's closest cousins starting in early 2018 and is expected to discover 1,000 small planets with Rp < 4 R⊕. The TESS Science Processing Operations Center (SPOC) is being developed at NASA Ames Research Center based on the Kepler science pipeline and will generate calibrated pixels and light curves on the NASA Advanced Supercomputing Division's Pleiades supercomputer. The SPOC will also search for periodic transit events and generate validation products for the transit-like features in the light curves. All TESS SPOC data products will be archived to the Mikulski Archive for Space Telescopes (MAST).

  17. COSMOS (County of San Mateo Online System). A Searcher's Manual.

    Science.gov (United States)

    San Mateo County Superintendent of Schools, Redwood City, CA. Educational Resources Center.

    Operating procedures are explained for COSMOS (County of San Mateo Online System), a computerized information retrieval system designed for the San Mateo Educational Resources Center (SMERC), which provides interactive access to both ERIC and a local file of fugitive documents. COSMOS hardware and modem compatibility requirements are reviewed,…

  18. Argonne Leadership Computing Facility 2011 annual report : Shaping future supercomputing.

    Energy Technology Data Exchange (ETDEWEB)

    Papka, M.; Messina, P.; Coffey, R.; Drugan, C. (LCF)

    2012-08-16

    The ALCF's Early Science Program aims to prepare key applications for the architecture and scale of Mira and to solidify libraries and infrastructure that will pave the way for other future production applications. Two billion core-hours have been allocated to 16 Early Science projects on Mira. The projects, in addition to promising delivery of exciting new science, are all based on state-of-the-art, petascale, parallel applications. The project teams, in collaboration with ALCF staff and IBM, have undertaken intensive efforts to adapt their software to take advantage of Mira's Blue Gene/Q architecture, which, in a number of ways, is a precursor to future high-performance-computing architecture. The Argonne Leadership Computing Facility (ALCF) enables transformative science that solves some of the most difficult challenges in biology, chemistry, energy, climate, materials, physics, and other scientific realms. Users partnering with ALCF staff have reached research milestones previously unattainable, due to the ALCF's world-class supercomputing resources and expertise in computational science. In 2011, the ALCF's commitment to providing outstanding science and leadership-class resources was honored with several prestigious awards. Research on multiscale brain blood flow simulations was named a Gordon Bell Prize finalist. Intrepid, the ALCF's BG/P system, ranked No. 1 on the Graph 500 list for the second consecutive year. The next-generation BG/Q prototype again topped the Green500 list. Skilled experts at the ALCF enable researchers to conduct breakthrough science on the Blue Gene system in key ways. The Catalyst Team matches project PIs with experienced computational scientists to maximize and accelerate research in their specific scientific domains. The Performance Engineering Team facilitates the effective use of applications on the Blue Gene system by assessing and improving the algorithms used by applications and the techniques used to

  19. Survival and natality rate observations of California sea lions at San Miguel Island, California conducted by Alaska Fisheries Science Center, National Marine Mammal Laboratory from 1987-09-20 to 2014-09-25 (NCEI Accession 0145167)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The dataset contains initial capture and marking data for California sea lion (Zalophus californianus) pups at San Miguel Island, California and subsequent...

  20. Developing Alternative Placement Criteria for English Courses at City College of San Francisco. Issue Brief

    Science.gov (United States)

    Castrechini, Sebastian

    2013-01-01

    Recognizing the need to improve postsecondary access and success for underrepresented populations, the San Francisco Unified School District (SFUSD), City College of San Francisco (CCSF), the City and County of San Francisco, and key community organizations formed the Bridge to Success initiative in 2009. The John W. Gardner Center for Youth and…

  1. Metabolomics Workbench (MetWB)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Metabolomics Program's Data Repository and Coordinating Center (DRCC), housed at the San Diego Supercomputer Center (SDSC), University of California, San Diego,...

  2. Data mining method for anomaly detection in the supercomputer task flow

    Science.gov (United States)

    Voevodin, Vadim; Voevodin, Vladimir; Shaikhislamov, Denis; Nikitenko, Dmitry

    2016-10-01

    The efficiency of most supercomputer applications is extremely low. At the same time, the user rarely even suspects that their applications may be wasting computing resources. Software tools need to be developed to help detect inefficient applications and report them to the users. We suggest an algorithm for detecting anomalies in the supercomputer's task flow, based on data mining methods. System monitoring is used to calculate integral characteristics for every job executed, and the data is used as input for our classification method based on the Random Forest algorithm. The proposed approach can currently classify an application as one of three classes - normal, suspicious and definitely anomalous. The proposed approach has been demonstrated on actual applications running on the "Lomonosov" supercomputer.
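
    A hedged sketch of the classification step such an approach implies, using scikit-learn's RandomForestClassifier on synthetic per-job integral characteristics (the feature names, labelling rule and data are illustrative assumptions, not the actual monitoring schema of the "Lomonosov" system):

    ```python
    # Hedged sketch: classify jobs as normal / suspicious / anomalous from
    # integral monitoring characteristics. Features and labels are synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_jobs = 1000
    # Assumed features: mean CPU load, MPI traffic, cache miss rate.
    X = np.column_stack([
        rng.uniform(0, 100, n_jobs),   # cpu_load_percent
        rng.uniform(0, 500, n_jobs),   # mpi_mb_per_s
        rng.uniform(0, 0.5, n_jobs),   # cache_miss_rate
    ])
    # Toy labelling rule standing in for expert-labelled job history.
    y = np.where(X[:, 0] < 5, "anomalous",
                 np.where(X[:, 0] < 20, "suspicious", "normal"))

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))
    ```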

  3. EDITORIAL: `Bridging Gravitational Wave Astronomy and Observational Astrophysics', Proceedings of the 13th Gravitational Wave Data Analysis Workshop (GWDAW13) (San Juan, Puerto Rico, 19-22 January 2009), sponsored by the Center for Gravitational Wave Astronomy, The University of Texas at Brownsville and The National Astronomy and Ionosphere Center

    Science.gov (United States)

    Díaz, Mario; Jenet, Fredrick; Mohanty, Soumya

    2009-10-01

    The 13th Gravitational Wave Data Analysis Workshop took place in San Juan, Puerto Rico on 19-22 January 2009. This annual event has become the established venue for presenting and discussing new results and techniques in this crucial subfield of gravitational wave astronomy. A major attraction of the event is that scientists working with all possible instruments gather to discuss their projects and report on the status of their observations. The Center for Gravitational Wave Astronomy at the University of Texas at Brownsville, USA (a National Aeronautics and Space Administration University Research Center and a National Science Foundation Center for Research Excellence in Science and Technology), jointly with the National Astronomy and Ionosphere Center (which operates the Arecibo Observatory), were the proud sponsors of the gathering this time. As in previous years, GWDAW13 was well attended, with more than 100 participants from over 10 countries worldwide. As this issue goes to press, GEO, LIGO and VIRGO are undergoing new scientific runs of their instruments, with the LIGO detectors holding the promise of increasing their operational sensitivity twofold as compared with the observations finished a couple of years ago. This new cycle of observations is a major milestone compared to the previous observations which have been accomplished. Gravitational waves have not been observed yet, but the instrumental sensitivity achieved has started producing relevant astrophysical results. In particular, very recently (Nature, 20 August 2009) a letter from the LIGO Scientific Collaboration http://www.ligo.org and the VIRGO Collaboration http://www.virgo.infn.it has set the most stringent limits yet on the amount of gravitational waves that could have come from the Big Bang in the gravitational wave frequency band where current gravitational wave detectors can observe. These results have put new constraints on the physical characteristics of the early universe. The proximity

  4. Guide to dataflow supercomputing basic concepts, case studies, and a detailed example

    CERN Document Server

    Milutinovic, Veljko; Trifunovic, Nemanja; Giorgi, Roberto

    2015-01-01

    This unique text/reference describes an exciting and novel approach to supercomputing in the DataFlow paradigm. The major advantages and applications of this approach are clearly described, and a detailed explanation of the programming model is provided using simple yet effective examples. The work is developed from a series of lecture courses taught by the authors in more than 40 universities across more than 20 countries, and from research carried out by Maxeler Technologies, Inc. Topics and features: presents a thorough introduction to DataFlow supercomputing for big data problems; revie

  5. Supercomputing '91; Proceedings of the 4th Annual Conference on High Performance Computing, Albuquerque, NM, Nov. 18-22, 1991

    Science.gov (United States)

    1991-01-01

    Various papers on supercomputing are presented. The general topics addressed include: program analysis/data dependence, memory access, distributed memory code generation, numerical algorithms, supercomputer benchmarks, latency tolerance, parallel programming, applications, processor design, networks, performance tools, mapping and scheduling, characterization affecting performance, parallelism packaging, computing climate change, combinatorial algorithms, hardware and software performance issues, system issues. (No individual items are abstracted in this volume)

  6. High-Performance Computing: Industry Uses of Supercomputers and High-Speed Networks. Report to Congressional Requesters.

    Science.gov (United States)

    General Accounting Office, Washington, DC. Information Management and Technology Div.

    This report was prepared in response to a request for information on supercomputers and high-speed networks from the Senate Committee on Commerce, Science, and Transportation, and the House Committee on Science, Space, and Technology. The following information was requested: (1) examples of how various industries are using supercomputers to…

  7. SAN CARLOS APACHE PAPERS.

    Science.gov (United States)

    ROESSEL, ROBERT A., JR.

    The first section of this book covers the historical and cultural background of the San Carlos Apache Indians, as well as an historical sketch of the development of their formal educational system. The second section is devoted to the problems of teachers of the Indian children in Globe and San Carlos, Arizona. It is divided into three parts--(1)…

  8. San Carlo Operaen

    DEFF Research Database (Denmark)

    Holm, Bent

    2005-01-01

    A contextualization of the opera house San Carlo within a cultural-historical framework of representation, with particular focus on the concept of napolalità.

  9. Large scale simulations of the great 1906 San Francisco earthquake

    Science.gov (United States)

    Nilsson, S.; Petersson, A.; Rodgers, A.; Sjogreen, B.; McCandless, K.

    2006-12-01

    As part of a multi-institutional simulation effort, we present large scale computations of the ground motion during the great 1906 San Francisco earthquake using a new finite difference code called WPP. The material database for northern California provided by USGS together with the rupture model by Song et al. is demonstrated to lead to a reasonable match with historical data. In our simulations, the computational domain covered 550 km by 250 km of northern California down to 40 km depth, so a 125 m grid size corresponds to about 2.2 billion grid points. To accommodate these large grids, the simulations were run on 512-1024 processors on one of the supercomputers at Lawrence Livermore National Lab. A wavelet compression algorithm enabled storage of time-dependent volumetric data. Nevertheless, the first 45 seconds of the earthquake still generated 1.2 TByte of disk space and the 3-D post processing was done in parallel.

  10. Argonne National Lab deploys Force10 networks' massively dense ethernet switch for supercomputing cluster

    CERN Multimedia

    2003-01-01

    "Force10 Networks, Inc. today announced that Argonne National Laboratory (Argonne, IL) has successfully deployed Force10 E-Series switch/routers to connect to the TeraGrid, the world's largest supercomputing grid, sponsored by the National Science Foundation (NSF)" (1/2 page).

  11. Design and performance characterization of electronic structure calculations on massively parallel supercomputers

    DEFF Research Database (Denmark)

    Romero, N. A.; Glinsvad, Christian; Larsen, Ask Hjorth

    2013-01-01

    Density functional theory (DFT) is the most widely employed electronic structure method because of its favorable scaling with system size and accuracy for a broad range of molecular and condensed-phase systems. The advent of massively parallel supercomputers has enhanced the scientific community's ...

  12. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2013-12-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.
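
    As a hedged, heavily simplified illustration of this style of modeling (not the authors' framework or fitted parameters): predicted time per step can be decomposed into a compute term, a memory term limited by the bandwidth shared by all cores on a node, and a parameterized communication term.

    ```python
    # Toy hybrid MPI/OpenMP performance model. All constants are illustrative
    # assumptions, not parameters from the paper's validated model.
    def predicted_step_time(flops, bytes_from_memory, message_bytes,
                            cores_per_node, flops_per_core=1e10,
                            node_mem_bandwidth=25e9, latency=2e-6,
                            link_bandwidth=5e9):
        t_compute = flops / (flops_per_core * cores_per_node)
        t_memory = bytes_from_memory / node_mem_bandwidth  # all cores contend for this
        t_comm = latency + message_bytes / link_bandwidth
        # Assume compute and memory traffic overlap; communication does not.
        return max(t_compute, t_memory) + t_comm

    print(predicted_step_time(flops=1e12, bytes_from_memory=4e10,
                              message_bytes=1e6, cores_per_node=16))
    ```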

  13. The impact of the U.S. supercomputing initiative will be global

    Energy Technology Data Exchange (ETDEWEB)

    Crawford, Dona [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-15

    Last July, President Obama issued an executive order that created a coordinated federal strategy for HPC research, development, and deployment called the U.S. National Strategic Computing Initiative (NSCI). However, this bold, necessary step toward building the next generation of supercomputers has inaugurated a new era for U.S. high performance computing (HPC).

  14. Congressional Panel Seeks To Curb Access of Foreign Students to U.S. Supercomputers.

    Science.gov (United States)

    Kiernan, Vincent

    1999-01-01

    Fearing security problems, a congressional committee on Chinese espionage recommends that foreign students and other foreign nationals be barred from using supercomputers at national laboratories unless they first obtain export licenses from the federal government. University officials dispute the data on which the report is based and find the…

  15. [Experience in simulating the structural and dynamic features of small proteins using table supercomputers].

    Science.gov (United States)

    Kondrat'ev, M S; Kabanov, A V; Komarov, V M; Khechinashvili, N N; Samchenko, A A

    2011-01-01

    The results of theoretical studies of the structural and dynamic features of peptides and small proteins are presented, carried out by quantum chemical and molecular dynamics methods on high-performance graphics workstations ("table supercomputers") using distributed calculations with CUDA technology.

  16. Integration Of PanDA Workload Management System With Supercomputers

    CERN Document Server

    Klimentov, Alexei; The ATLAS collaboration; Maeno, Tadashi; Mashinistov, Ruslan; Nilsson, Paul; Oleynik, Danila; Panitkin, Sergey; Read, Kenneth; Ryabinkin, Evgeny; Wenaus, Torre

    2015-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 100,000 co...

  17. High Performance Computing in Science and Engineering '02 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2003-01-01

    This book presents the state-of-the-art in modeling and simulation on supercomputers. Leading German research groups present their results achieved on high-end systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2002. Reports cover all fields of supercomputing simulation, ranging from computational fluid dynamics to computer science. Special emphasis is given to industrially relevant applications. Moreover, by presenting results for both vector systems and micro-processor based systems, the book makes it possible to compare performance levels and usability of a variety of supercomputer architectures. It therefore becomes an indispensable guidebook for assessing the impact of the Japanese Earth Simulator project on supercomputing in the years to come.

  18. Interactive steering of supercomputing simulation for aerodynamic noise radiated from square cylinder; Supercomputer wo mochiita steering system ni yoru kakuchu kara hoshasareru kurikion no suchi kaiseki

    Energy Technology Data Exchange (ETDEWEB)

    Yokono, Y. [Toshiba Corp., Tokyo (Japan); Fujita, H. [Tokyo Inst. of Technology, Tokyo (Japan). Precision Engineering Lab.

    1995-03-25

    This paper describes extensive computer simulation of the aerodynamic noise radiated from a square cylinder using an interactive steering supercomputing simulation system. The unsteady incompressible three-dimensional Navier-Stokes equations are solved by the finite volume method using a steering system which can visualize the numerical process during calculation and alter the numerical parameters. Using the fluctuating surface pressure of the square cylinder, the far-field sound pressure is calculated based on Lighthill-Curle's equation. The results are compared with those of low-noise wind tunnel experiments, and good agreement is observed for the peak spectrum frequency of the sound pressure level. 14 refs., 10 figs.
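
    For reference, the far-field, acoustically compact form of the Lighthill-Curle relation used in such calculations expresses the radiated pressure in terms of the fluctuating surface loading; a standard textbook form (quoted as an aid, not necessarily the exact expression coded by the authors, and up to sign conventions for the surface normal) is

    \[
    p'(\mathbf{x},t) \;\approx\; \frac{1}{4\pi c_0}\,\frac{x_i}{|\mathbf{x}|^{2}}\,\frac{\partial}{\partial t}\oint_{S} p\!\left(\mathbf{y},\,t-\frac{|\mathbf{x}|}{c_0}\right) n_i\,\mathrm{d}S(\mathbf{y}),
    \]

    where p is the fluctuating pressure on the cylinder surface, n_i the surface normal, and c_0 the speed of sound.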

  19. MPI/OpenMP Hybrid Parallel Algorithm of Resolution of Identity Second-Order Møller-Plesset Perturbation Calculation for Massively Parallel Multicore Supercomputers.

    Science.gov (United States)

    Katouda, Michio; Nakajima, Takahito

    2013-12-10

    A new algorithm for massively parallel calculations of the electron correlation energy of large molecules based on the resolution-of-identity second-order Møller-Plesset perturbation (RI-MP2) technique is developed and implemented into the quantum chemistry software NTChem. In this algorithm, a Message Passing Interface (MPI) and Open Multi-Processing (OpenMP) hybrid parallel programming model is applied to attain efficient parallel performance on massively parallel supercomputers. An in-core storage scheme for the intermediate data of three-center electron repulsion integrals utilizing the distributed memory is developed to eliminate input/output (I/O) overhead. The parallel performance of the algorithm is tested on massively parallel supercomputers such as the K computer (using up to 45,992 central processing unit (CPU) cores) and a commodity Intel Xeon cluster (using up to 8192 CPU cores). The parallel RI-MP2/cc-pVTZ calculation of two-layer nanographene sheets (C150H30)2 (number of atomic orbitals is 9640) is performed using 8991 nodes and 71,288 CPU cores of the K computer.
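
    For readers unfamiliar with the method, the quantity being parallelized is the standard RI-MP2 correlation energy (general-literature notation, not necessarily the paper's own):

    \[
    E^{(2)} = \sum_{ij}^{\mathrm{occ}}\sum_{ab}^{\mathrm{vir}} \frac{(ia|jb)\,\bigl[\,2(ia|jb)-(ib|ja)\,\bigr]}{\varepsilon_i+\varepsilon_j-\varepsilon_a-\varepsilon_b}, \qquad (ia|jb) \;\approx\; \sum_{P}^{\mathrm{aux}} B_{ia}^{P} B_{jb}^{P},
    \]

    where i, j label occupied and a, b virtual orbitals, the ε are orbital energies, and the three-center intermediates B_{ia}^{P} (built with the auxiliary basis P) are exactly the data whose distributed in-core storage the algorithm targets.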

  20. Physical conditions for the formation of gabbros and migmatites derived from mafic rocks in the center of Sierra de Valle Fértil, San Juan

    Directory of Open Access Journals (Sweden)

    AlinaM. Tibaldi

    2009-11-01

    The experimental results demonstrate that the transition from mafic igneous rocks to mafic migmatites occurred through isobaric cooling, and that the continued emplacement of magmas must have been the heat source that kept the sequence at granulite facies without allowing rapid cooling. The geology of the central Sierra de Valle Fértil is interpreted as an example of the plutonic-metamorphic core of the Famatinian magmatic arc, where the anomalously high metamorphic gradient reflects the fact that an important volume of mafic magmas reached, and dominated at, paleo-depths of between 16 and 20 kilometers. A sequence of plutonic mafic rocks inter-stratified with both mafic- and metasedimentary-derived migmatites is found along the San Juan valley in the center of the Sierra de Valle Fértil. This natural example shows the transition from igneous to metamorphic petrologic processes which occurred during the crystallization of mafic magmas and the subsequent partial melting of crystallized gabbroic rocks. This work studies the mineralogical changes associated with this petrologic transition. Thermobarometric estimates based on amphibole-plagioclase indicate that the mafic magmas crystallized at around 1100ºC and 5 ± 0.5 kbar. The conditions under which gabbroic rocks were partially melted are estimated using two-pyroxene thermometry and amphibole-plagioclase thermobarometry. Similar physical conditions in the range between 740 and 840ºC and 5 to 6.5 kbar are recovered from mineral assemblages in the mesosomes and leucosomes of mafic migmatites. The main mineral compositional changes that accompanied the partial melting process of the gabbroic rocks are: (1) depletion of aluminium content and Mg# ratio in pyroxenes; (2) depletion of anorthite mole fraction of the plagioclases; and (3) depletion of the Mg# ratio in amphiboles. These mineral compositional variations are consistent with those found by experimentally melting mafic protoliths. Experimental results showed that the temperature for promoting

  1. Study of ATLAS TRT performance with GRID and supercomputers.

    CERN Document Server

    Krasnopevtsev, Dimitriy; The ATLAS collaboration; Mashinistov, Ruslan; Belyaev, Nikita; Ryabinkin, Evgeny

    2015-01-01

    After the early success in discovering a new particle consistent with the long awaited Higgs boson, Large Hadron Collider experiments are ready for the precision measurements and further discoveries that will be made possible by much higher LHC collision rates from spring 2015. A proper understanding of the detectors performance at high occupancy conditions is important for many on-going physics analyses. The ATLAS Transition Radiation Tracker (TRT) is one of these detectors. TRT is a large straw tube tracking system that is the outermost of the three subsystems of the ATLAS Inner Detector (ID). TRT contributes significantly to the resolution for high-pT tracks in the ID providing excellent particle identification capabilities and electron-pion separation. ATLAS experiment is using Worldwide LHC Computing Grid. WLCG is a global collaboration of computer centers and provides seamless access to computing resources which include data storage capacity, processing power, sensors, visualisation tools and more. WLCG...

  2. Study of ATLAS TRT performance with GRID and supercomputers.

    CERN Document Server

    Krasnopevtsev, Dimitriy; The ATLAS collaboration; Belyaev, Nikita; Mashinistov, Ruslan; Ryabinkin, Evgeny

    2015-01-01

    After the early success in discovering a new particle consistent with the long awaited Higgs boson, Large Hadron Collider experiments are ready for the precision measurements and further discoveries that will be made possible by much higher LHC collision rates from spring 2015. A proper understanding of the detectors performance at high occupancy conditions is important for many on-going physics analyses. The ATLAS Transition Radiation Tracker (TRT) is one of these detectors. TRT is a large straw tube tracking system that is the outermost of the three subsystems of the ATLAS Inner Detector (ID). TRT contributes significantly to the resolution for high-pT tracks in the ID providing excellent particle identification capabilities and electron-pion separation. ATLAS experiment is using Worldwide LHC Computing Grid. WLCG is a global collaboration of computer centers and provides seamless access to computing resources which include data storage capacity, processing power, sensors, visualization tools and more. WLCG ...

  3. Internal fluid mechanics research on supercomputers for aerospace propulsion systems

    Science.gov (United States)

    Miller, Brent A.; Anderson, Bernhard H.; Szuch, John R.

    1988-01-01

    The Internal Fluid Mechanics Division of the NASA Lewis Research Center is combining the key elements of computational fluid dynamics, aerothermodynamic experiments, and advanced computational technology to bring internal computational fluid mechanics (ICFM) to a state of practical application for aerospace propulsion systems. The strategies used to achieve this goal are to: (1) pursue an understanding of flow physics, surface heat transfer, and combustion via analysis and fundamental experiments, (2) incorporate improved understanding of these phenomena into verified 3-D CFD codes, and (3) utilize state-of-the-art computational technology to enhance experimental and CFD research. Presented is an overview of the ICFM program in high-speed propulsion, including work in inlets, turbomachinery, and chemical reacting flows. Ongoing efforts to integrate new computer technologies, such as parallel computing and artificial intelligence, into high-speed aeropropulsion research are described.

  4. Explaining the Gap between Theoretical Peak Performance and Real Performance for Supercomputer Architectures

    Directory of Open Access Journals (Sweden)

    W. Schönauer

    1994-01-01

    Full Text Available The basic architectures of vector and parallel computers and their properties are presented followed by a discussion of memory size and arithmetic operations in the context of memory bandwidth. For a single operation micromeasurements of the vector triad for the IBM 3090 VF and the CRAY Y-MP/8 are presented, revealing in detail the losses for this operation. The global performance of a whole supercomputer is then considered by identifying reduction factors that reduce the theoretical peak performance to the poor real performance. The responsibilities of the manufacturer and of the user for these losses are discussed. The price-performance ratio for different architectures as of January 1991 is briefly mentioned. Finally a user-friendly architecture for a supercomputer is proposed.
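
    The "vector triad" mentioned above is the kernel a(i) = b(i) + c(i)*d(i): two floating-point operations per element against four memory streams, which is why it exposes memory-bandwidth losses so clearly. The sketch below is a hedged NumPy stand-in for the Fortran micro-measurements on the IBM 3090 VF and CRAY Y-MP, not a reproduction of them.

    ```python
    # Vector triad a = b + c*d: 2 flops per element, 4 memory streams, so the
    # achievable rate is usually set by memory bandwidth rather than peak flops.
    import time
    import numpy as np

    n = 10_000_000
    b, c, d = (np.random.rand(n) for _ in range(3))

    t0 = time.perf_counter()
    a = b + c * d
    elapsed = time.perf_counter() - t0

    mflops = 2 * n / elapsed / 1e6
    print(f"{elapsed:.4f} s, ~{mflops:.0f} MFLOP/s for the triad")
    ```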

  5. HACC: Simulating Sky Surveys on State-of-the-Art Supercomputing Architectures

    CERN Document Server

    Habib, Salman; Finkel, Hal; Frontiere, Nicholas; Heitmann, Katrin; Daniel, David; Fasel, Patricia; Morozov, Vitali; Zagaris, George; Peterka, Tom; Vishwanath, Venkatram; Lukic, Zarija; Sehrish, Saba; Liao, Wei-keng

    2014-01-01

    Current and future surveys of large-scale cosmic structure are associated with a massive and complex datastream to study, characterize, and ultimately understand the physics behind the two major components of the 'Dark Universe', dark energy and dark matter. In addition, the surveys also probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework, designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of prog...

  6. Direct exploitation of a top 500 Supercomputer for Analysis of CMS Data

    Science.gov (United States)

    Cabrillo, I.; Cabellos, L.; Marco, J.; Fernandez, J.; Gonzalez, I.

    2014-06-01

    The Altamira Supercomputer hosted at the Instituto de Fisica de Cantabria (IFCA) entered operation in summer 2012. Its last-generation FDR InfiniBand network, used for message passing in parallel jobs, also supports the connection to General Parallel File System (GPFS) servers, enabling efficient simultaneous processing of multiple data-demanding jobs. Sharing a common GPFS system and a single LDAP-based identification with the existing Grid clusters at IFCA allows CMS researchers to exploit the large instantaneous capacity of this supercomputer to execute analysis jobs. The detailed experience of this opportunistic use for skimming and final analysis of CMS 2012 data for a specific physics channel, which resulted in an order-of-magnitude reduction of the waiting time, is presented.

  7. Supercomputer and cluster performance modeling and analysis efforts:2004-2006.

    Energy Technology Data Exchange (ETDEWEB)

    Sturtevant, Judith E.; Ganti, Anand; Meyer, Harold (Hal) Edward; Stevenson, Joel O.; Benner, Robert E., Jr. (.,; .); Goudy, Susan Phelps; Doerfler, Douglas W.; Domino, Stefan Paul; Taylor, Mark A.; Malins, Robert Joseph; Scott, Ryan T.; Barnette, Daniel Wayne; Rajan, Mahesh; Ang, James Alfred; Black, Amalia Rebecca; Laub, Thomas William; Vaughan, Courtenay Thomas; Franke, Brian Claude

    2007-02-01

    This report describes efforts by the Performance Modeling and Analysis Team to investigate performance characteristics of Sandia's engineering and scientific applications on the ASC capability and advanced architecture supercomputers, and Sandia's capacity Linux clusters. Efforts to model various aspects of these computers are also discussed. The goals of these efforts are to quantify and compare Sandia's supercomputer and cluster performance characteristics; to reveal strengths and weaknesses in such systems; and to predict performance characteristics of, and provide guidelines for, future acquisitions and follow-on systems. Described herein are the results obtained from running benchmarks and applications to extract performance characteristics and comparisons, as well as modeling efforts, during the period 2004-2006. The format of the report, with hypertext links to numerous additional documents, purposefully minimizes the document size needed to disseminate the extensive results from our research.

  8. BSMBench: a flexible and scalable supercomputer benchmark from computational particle physics

    CERN Document Server

    Bennett, Ed; Del Debbio, Luigi; Jordan, Kirk; Patella, Agostino; Pica, Claudio; Rago, Antonio

    2016-01-01

    Benchmarking plays a central role in the evaluation of High Performance Computing architectures. Several benchmarks have been designed that allow users to stress various components of supercomputers. In order for the figures they provide to be useful, benchmarks need to be representative of the most common real-world scenarios. In this work, we introduce BSMBench, a benchmarking suite derived from Monte Carlo code used in computational particle physics. The advantage of this suite (which can be freely downloaded from http://www.bsmbench.org/) over others is the capacity to vary the relative importance of computation and communication. This enables the tests to simulate various practical situations. To showcase BSMBench, we perform a wide range of tests on various architectures, from desktop computers to state-of-the-art supercomputers, and discuss the corresponding results. Possible future directions of development of the benchmark are also outlined.

  9. Towards 21st century stellar models: Star clusters, supercomputing and asteroseismology

    Science.gov (United States)

    Campbell, S. W.; Constantino, T. N.; D'Orazi, V.; Meakin, C.; Stello, D.; Christensen-Dalsgaard, J.; Kuehn, C.; De Silva, G. M.; Arnett, W. D.; Lattanzio, J. C.; MacLean, B. T.

    2016-09-01

    Stellar models provide a vital basis for many aspects of astronomy and astrophysics. Recent advances in observational astronomy - through asteroseismology, precision photometry, high-resolution spectroscopy, and large-scale surveys - are placing stellar models under greater quantitative scrutiny than ever. The model limitations are being exposed and the next generation of stellar models is needed as soon as possible. The current uncertainties in the models propagate to the later phases of stellar evolution, hindering our understanding of stellar populations and chemical evolution. Here we give a brief overview of the evolution, importance, and substantial uncertainties of core helium burning stars in particular and then briefly discuss a range of methods, both theoretical and observational, that we are using to advance the modelling. This study uses observational data from HST, VLT, AAT, Kepler, and supercomputing resources in Australia provided by the National Computational Infrastructure (NCI) and Pawsey Supercomputing Centre.

  10. Salt Ponds, South San Francisco Bay

    Science.gov (United States)

    2002-01-01

    higher resolution 1000 pixel-wide image The red and green colors of the salt ponds in South San Francisco Bay are brilliant visual markers for astronauts. The STS-111 crew photographed the bay south of the San Mateo bridge in June, 2002. This photograph is timely because a large number of the salt ponds (more than 16,500 acres) that are owned by Cargill, Inc. will be sold in September for wetlands restoration-a restoration project second in size only to the Florida Everglades project. Rough boundaries of the areas to be restored are outlined on the image. Over the past century, more than 80% of San Francisco Bay's wetlands have been filled and developed or diked off for salt mining. San Francisco Bay has supported salt mining since 1854. Cargill has operated most of the bay's commercial salt ponds since 1978, and had already sold thousands of acres to the State of California and the Don Edwards National Wildlife Refuge. This new transaction will increase San Francisco Bay's existing tidal wetlands by 50%. The new wetlands, to be managed by the California Department of Fish and Game and the U.S. Fish and Wildlife Service, will join the Don Edwards National Wildlife Refuge, and provide valuable habitat for birds, fish and other wildlife. The wetlands will contribute to better water quality and flood control in the bay, and open up more coastline for public enjoyment. Additional information: Cargill Salt Ponds (PDF) Turning Salt Into Environmental Gold Salt Ponds on Way to Becoming Wetlands Historic Agreement Reached to Purchase San Francisco Bay Salt Ponds Astronaut photograph STS111-376-3 was provided by the Earth Sciences and Image Analysis Laboratory at Johnson Space Center. Additional images taken by astronauts and cosmonauts can be viewed at the NASA-JSC Gateway to Astronaut Photography of Earth

  11. Analyzing the Interplay of Failures and Workload on a Leadership-Class Supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Meneses, Esteban [University of Pittsburgh; Ni, Xiang [University of Illinois at Urbana-Champaign; Jones, Terry R [ORNL; Maxwell, Don E [ORNL

    2015-01-01

    The unprecedented computational power of current supercomputers now makes possible the exploration of complex problems in many scientific fields, from genomic analysis to computational fluid dynamics. Modern machines are powerful because they are massive: they assemble millions of cores and a huge quantity of disks, cards, routers, and other components. But it is precisely the size of these machines that clouds the future of supercomputing. A system that comprises many components has a high chance to fail, and fail often. In order to make the next generation of supercomputers usable, it is imperative to use some type of fault tolerance platform to run applications on large machines. Most fault tolerance strategies can be optimized for the peculiarities of each system and boost efficacy by keeping the system productive. In this paper, we aim to understand how failure characterization can improve resilience in several layers of the software stack: applications, runtime systems, and job schedulers. We examine the Titan supercomputer, one of the fastest systems in the world. We analyze a full year of Titan in production and distill the failure patterns of the machine. By looking into Titan's log files and using the criteria of experts, we provide a detailed description of the types of failures. In addition, we inspect the job submission files and describe how the system is used. Using those two sources, we cross-correlate failures in the machine to executing jobs and provide a picture of how failures affect the user experience. We believe such characterization is fundamental in developing appropriate fault tolerance solutions for Cray systems similar to Titan.
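
    The cross-correlation of failure records with executing jobs described above can be sketched, in spirit, with a few lines of pandas: each failure is matched to the jobs whose execution window contains its timestamp. The column names and sample records below are illustrative assumptions, not the Titan log schema.

```python
# Illustrative sketch only: cross-referencing a failure log with a job log by
# time overlap. The "node", "time", "start" and "end" columns are assumptions.
import pandas as pd

failures = pd.DataFrame({
    "node": ["c1-0n3", "c2-1n0"],
    "time": pd.to_datetime(["2014-03-01 02:10", "2014-03-02 18:45"]),
    "category": ["GPU DBE", "Lustre"],
})
jobs = pd.DataFrame({
    "job_id": [101, 102, 103],
    "start": pd.to_datetime(["2014-03-01 01:00", "2014-03-02 12:00", "2014-03-02 19:00"]),
    "end":   pd.to_datetime(["2014-03-01 05:00", "2014-03-02 20:00", "2014-03-03 01:00"]),
})

def jobs_hit_by(failure_time):
    """Return jobs whose execution window contains the failure timestamp."""
    mask = (jobs["start"] <= failure_time) & (jobs["end"] >= failure_time)
    return jobs.loc[mask, "job_id"].tolist()

failures["affected_jobs"] = failures["time"].apply(jobs_hit_by)
print(failures)
```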

  12. Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.

    Energy Technology Data Exchange (ETDEWEB)

    Younge, Andrew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pedretti, Kevin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grant, Ryan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brightwell, Ron [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-05-01

    While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community not only as a processing component to large HPC simulations, but also as standalone scientific tools for knowledge discovery. With the path towards Exascale, new HPC runtime systems are also emerging in a way that differs from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputers needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on an XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, both evaluating single-node performance as well as weak scaling of a 32-node virtual cluster. Overall, we find single node performance of our solution using KVM on a Cray is very efficient with near-native performance. However overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.

  13. TSP:A Heterogeneous Multiprocessor Supercomputing System Based on i860XP

    Institute of Scientific and Technical Information of China (English)

    黄国勇; 李三立

    1994-01-01

    Numerous new RISC processors provide support for supercomputing. By using the "mini-Cray" i860 superscalar processor, an add-on board has been developed to boost the performance of a real-time system. A parallel heterogeneous multiprocessor supercomputing system, TSP, has been constructed. In this paper, we present the system design considerations and describe the architecture of the TSP and its features.

  14. Application of Supercomputer Technologies for Simulation Of Socio-Economic Systems

    Directory of Open Access Journals (Sweden)

    Vladimir Valentinovich Okrepilov

    2015-06-01

    Full Text Available To date, extensive experience has been accumulated in the investigation of problems related to quality, the assessment of management systems, and the modeling of economic system sustainability. These studies have created the basis for the development of a new research area — Economics of Quality. Its tools make it possible to use model simulation to construct mathematical models that adequately reflect the role of quality in the natural, technical, and social regularities governing the functioning of complex socio-economic systems. We strongly believe that the extensive application and development of such models, together with system modeling using supercomputer technologies, will bring research on socio-economic systems to an essentially new level. Moreover, the current research makes a significant contribution to the model simulation of multi-agent social systems and, no less important, belongs to the priority areas of science and technology development in our country. This article is devoted to the application of supercomputer technologies in the social sciences, first of all to the technical realization of large-scale agent-focused models (AFM). The essence of this tool is that, owing to the growth of computing power, it has become possible to describe the behavior of many separate fragments of a complex system, as socio-economic systems are. The article also reviews the experience of foreign scientists and practitioners in running AFM on supercomputers, presents an example of an AFM developed at CEMI RAS, and analyzes the stages and methods of efficiently mapping the computational kernel of a multi-agent system onto the architecture of a modern supercomputer. Experiments based on model simulation to forecast the population of St. Petersburg under three scenarios, as one of the major factors influencing the development of the socio-economic system and the quality of life of the population, are presented in the article.

  15. Design of multiple sequence alignment algorithms on parallel, distributed memory supercomputers.

    Science.gov (United States)

    Church, Philip C; Goscinski, Andrzej; Holt, Kathryn; Inouye, Michael; Ghoting, Amol; Makarychev, Konstantin; Reumann, Matthias

    2011-01-01

    The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters without parallelism. Hence, we propose the design of a multiple sequence alignment algorithm on massively parallel, distributed memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures including sequences and sorted k-mer lists on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E. coli, Shigella and S. pneumoniae genomes. Our implementation returns results matching those of the original algorithm but in 1/2 the time and with 1/4 the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within reasonable time.
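
    A minimal sketch of the sorted k-mer lists mentioned above, which serve as the anchoring data structure in progressiveMauve-style alignment, is shown below. In the actual work these structures are distributed across Blue Gene/P nodes; here everything is kept local for clarity, and the toy sequences are made up.

```python
# Illustrative sketch only: per-sequence sorted k-mer lists and shared k-mers
# (candidate alignment anchors). Not the BG/P implementation described above.
from collections import defaultdict

def sorted_kmer_list(seq: str, k: int = 5):
    """Return (kmer, position) pairs sorted by k-mer, enabling binary-search lookups."""
    kmers = [(seq[i:i + k], i) for i in range(len(seq) - k + 1)]
    return sorted(kmers)

def shared_kmers(seq_a: str, seq_b: str, k: int = 5):
    """Find k-mers occurring in both sequences -- candidate alignment anchors."""
    index = defaultdict(list)
    for kmer, pos in sorted_kmer_list(seq_a, k):
        index[kmer].append(pos)
    return [(kmer, index[kmer], pos_b)
            for kmer, pos_b in sorted_kmer_list(seq_b, k) if kmer in index]

if __name__ == "__main__":
    print(shared_kmers("ACGTACGTGGA", "TTACGTACGAA", k=4))
```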

  16. Building more powerful less expensive supercomputers using Processing-In-Memory (PIM) LDRD final report.

    Energy Technology Data Exchange (ETDEWEB)

    Murphy, Richard C.

    2009-09-01

    This report details the accomplishments of the 'Building More Powerful Less Expensive Supercomputers Using Processing-In-Memory (PIM)' LDRD ('PIM LDRD', number 105809) for FY07-FY09. Latency dominates all levels of supercomputer design. Within a node, increasing memory latency, relative to processor cycle time, limits CPU performance. Between nodes, the same increase in relative latency impacts scalability. Processing-In-Memory (PIM) is an architecture that directly addresses this problem using enhanced chip fabrication technology and machine organization. PIMs combine high-speed logic and dense, low-latency, high-bandwidth DRAM, and lightweight threads that tolerate latency by performing useful work during memory transactions. This work examines the potential of PIM-based architectures to support mission critical Sandia applications and an emerging class of more data intensive informatics applications. This work has resulted in a stronger architecture/implementation collaboration between 1400 and 1700. Additionally, key technology components have impacted vendor roadmaps, and we are in the process of pursuing these new collaborations. This work has the potential to impact future supercomputer design and construction, reducing power and increasing performance. This final report is organized as follows: this summary chapter discusses the impact of the project (Section 1), provides an enumeration of publications and other public discussion of the work (Section 1), and concludes with a discussion of future work and impact from the project (Section 1). The appendix contains reprints of the refereed publications resulting from this work.

  17. Sensitive Plants - Center for Natural Lands Management [ds458

    Data.gov (United States)

    California Department of Resources — This dataset represents sensitive plant data collected on Center for Natural Lands Management (CNLM) dedicated nature preserves in San Diego County, California. Data...

  18. Sensitive Wildlife - Center for Natural Lands Management [ds431

    Data.gov (United States)

    California Department of Resources — This dataset represents sensitive wildlife data collected for the Center for Natural Lands Management (CNLM) at dedicated nature preserves in San Diego County,...

  19. Visualization on supercomputing platform level II ASC milestone (3537-1B) results from Sandia.

    Energy Technology Data Exchange (ETDEWEB)

    Geveci, Berk (Kitware, Inc., Clifton Park, NY); Fabian, Nathan; Marion, Patrick (Kitware, Inc., Clifton Park, NY); Moreland, Kenneth D.

    2010-09-01

    This report provides documentation for the completion of the Sandia portion of the ASC Level II Visualization on the platform milestone. This ASC Level II milestone is a joint milestone between Sandia National Laboratories and Los Alamos National Laboratories. This milestone contains functionality required for performing visualization directly on a supercomputing platform, which is necessary for peta-scale visualization. Sandia's contribution concerns in-situ visualization, running a visualization in tandem with a solver. Visualization and analysis of petascale data is limited by several factors which must be addressed as ACES delivers the Cielo platform. Two primary difficulties are: (1) Performance of interactive rendering, which is the most computationally intensive portion of the visualization process. For terascale platforms, commodity clusters with graphics processors (GPUs) have been used for interactive rendering. For petascale platforms, visualization and rendering may be able to run efficiently on the supercomputer platform itself. (2) I/O bandwidth, which limits how much information can be written to disk. If we simply analyze the sparse information that is saved to disk we miss the opportunity to analyze the rich information produced every timestep by the simulation. For the first issue, we are pursuing in-situ analysis, in which simulations are coupled directly with analysis libraries at runtime. This milestone will evaluate the visualization and rendering performance of current and next generation supercomputers in contrast to GPU-based visualization clusters, and evaluate the performance of common analysis libraries coupled with the simulation that analyze and write data to disk during a running simulation. This milestone will explore, evaluate and advance the maturity level of these technologies and their applicability to problems of interest to the ASC program. Scientific simulation on parallel supercomputers is traditionally performed in four

  20. Sandia's network for Supercomputing '94: Linking the Los Alamos, Lawrence Livermore, and Sandia National Laboratories using switched multimegabit data service

    Energy Technology Data Exchange (ETDEWEB)

    Vahle, M.O.; Gossage, S.A.; Brenkosh, J.P. [Sandia National Labs., Albuquerque, NM (United States). Advanced Networking Integration Dept.

    1995-01-01

    Supercomputing '94, a high-performance computing and communications conference, was held November 14th through 18th, 1994 in Washington DC. For the past four years, Sandia National Laboratories has used this conference to showcase and focus its communications and networking endeavors. At the 1994 conference, Sandia built a Switched Multimegabit Data Service (SMDS) network running at 44.736 megabits per second linking its private SMDS network between its facilities in Albuquerque, New Mexico and Livermore, California to the convention center in Washington, D.C. For the show, the network was also extended from Sandia, New Mexico to Los Alamos National Laboratory and from Sandia, California to Lawrence Livermore National Laboratory. This paper documents and describes this network and how it was used at the conference.

  1. San Pascual (1989) n. 272

    OpenAIRE

    Pérez, María Dolores, O.S.C. (Directora)

    1989-01-01

    Editorial. Interview with the Mother Abbess. Offering. San Pascual: third centenary of his canonization and fourth centenary of his death. San Pascual, a universal saint. Pascual Baylón, poet. Sant Pasqual Scout group. Contributions, donations, alms, benefactors. Informative bulletin of the temple of San Pascual in Villarreal.

  2. Hazardous waste reduction efforts of the Navy and DoD in the San Diego, California region

    OpenAIRE

    Kane, Michael W.

    1993-01-01

    Approved for public release; distribution is unlimited. This research investigates the hazardous waste reduction efforts of the Department of Defense and the Navy in the San Diego, California region. It shows that previous efforts to reduce cost and generated waste have not been successful. The study reveals that efforts by Fleet Industrial Supply Center, San Diego should reduce both costs and wastes and that the improvements in the pricing schedule used by the Public Works Center, San Die...

  3. HACC: Simulating sky surveys on state-of-the-art supercomputing architectures

    Science.gov (United States)

    Habib, Salman; Pope, Adrian; Finkel, Hal; Frontiere, Nicholas; Heitmann, Katrin; Daniel, David; Fasel, Patricia; Morozov, Vitali; Zagaris, George; Peterka, Tom; Vishwanath, Venkatram; Lukić, Zarija; Sehrish, Saba; Liao, Wei-keng

    2016-01-01

    Current and future surveys of large-scale cosmic structure are associated with a massive and complex datastream to study, characterize, and ultimately understand the physics behind the two major components of the 'Dark Universe', dark energy and dark matter. In addition, the surveys also probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework, designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of programming models and algorithms. It has been demonstrated at scale on Cell- and GPU-accelerated systems, standard multi-core node clusters, and Blue Gene systems. HACC's design allows for ease of portability, and at the same time, high levels of sustained performance on the fastest supercomputers available. We present a description of the design philosophy of HACC, the underlying algorithms and code structure, and outline implementation details for several specific architectures. We show selected accuracy and performance results from some of the largest high resolution cosmological simulations so far performed, including benchmarks evolving more than 3.6 trillion particles.

  4. Integration of PanDA workload management system with Titan supercomputer at OLCF

    Science.gov (United States)

    De, K.; Klimentov, A.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.

    2015-12-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, the future LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on Titan's multicore worker nodes. It also gives PanDA new capability to collect, in real time, information about unused worker nodes on Titan, which allows precise definition of the size and duration of jobs submitted to Titan according to available free resources. This capability significantly reduces PanDA job wait time while improving Titan's utilization efficiency. This implementation was tested with a variety of Monte-Carlo workloads on Titan and is being tested on several other supercomputing platforms. Notice: This manuscript has been authored, by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
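
    The light-weight MPI wrapper idea described above, running many copies of a single-threaded payload side by side so that a multicore node allocation is filled, can be sketched as follows. This is not the PanDA pilot code; the payload executable name and the per-rank directory layout are assumptions.

```python
# Illustrative sketch only (not the PanDA pilot): launch one serial payload per MPI
# rank, each in its own work directory. "./serial_payload" is a hypothetical binary.
import os
import subprocess
from mpi4py import MPI

rank = MPI.COMM_WORLD.Get_rank()
workdir = f"work_{rank:05d}"
os.makedirs(workdir, exist_ok=True)

# Each rank runs the same serial executable on its own input slice.
result = subprocess.run(
    ["./serial_payload", "--seed", str(rank)],
    cwd=workdir, capture_output=True, text=True,
)

# Gather exit codes so rank 0 can report overall success back to the pilot.
codes = MPI.COMM_WORLD.gather(result.returncode, root=0)
if rank == 0:
    print(f"{codes.count(0)}/{len(codes)} payloads succeeded")
```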

  5. Development of the general interpolants method for the CYBER 200 series of supercomputers

    Science.gov (United States)

    Stalnaker, J. F.; Robinson, M. A.; Spradley, L. W.; Kurzius, S. C.; Thoenes, J.

    1988-01-01

    The General Interpolants Method (GIM) is a 3-D, time-dependent, hybrid procedure for generating numerical analogs of the conservation laws. This study is directed toward the development and application of the GIM computer code for fluid dynamic research applications as implemented for the Cyber 200 series of supercomputers. Elliptic and quasi-parabolic versions of the GIM code are discussed. Turbulence models, both algebraic and differential-equation based, were added to the basic viscous code. An equilibrium reacting chemistry model and an implicit finite difference scheme are also included.

  6. Accelerating Virtual High-Throughput Ligand Docking: current technology and case study on a petascale supercomputer.

    Science.gov (United States)

    Ellingson, Sally R; Dakshanamurthy, Sivanesan; Brown, Milton; Smith, Jeremy C; Baudry, Jerome

    2014-04-25

    In this paper we give the current state of high-throughput virtual screening. We describe a case study of using a task-parallel MPI (Message Passing Interface) version of Autodock4 [1], [2] to run a virtual high-throughput screen of one-million compounds on the Jaguar Cray XK6 Supercomputer at Oak Ridge National Laboratory. We include a description of scripts developed to increase the efficiency of the predocking file preparation and postdocking analysis. A detailed tutorial, scripts, and source code for this MPI version of Autodock4 are available online at http://www.bio.utk.edu/baudrylab/autodockmpi.htm.
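
    A common way to organize such a task-parallel screen is a master/worker farm in which rank 0 hands out ligand files on demand. The sketch below illustrates that pattern only; it is not the Autodock4 MPI implementation, and the ligand file names and dock() placeholder are hypothetical.

```python
# Illustrative sketch only: a dynamic master/worker task farm for virtual screening.
from mpi4py import MPI

TAG_WORK, TAG_DONE, TAG_STOP = 1, 2, 3

def dock(ligand: str) -> str:
    # Placeholder for invoking a docking engine on one ligand file.
    return f"docked {ligand}"

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    ligands = [f"ligand_{i:06d}.pdbqt" for i in range(1000)]   # hypothetical file names
    assert 1 < size <= len(ligands) + 1, "sketch assumes more ligands than workers"
    status, sent, results = MPI.Status(), 0, []
    for worker in range(1, size):                              # prime every worker
        comm.send(ligands[sent], dest=worker, tag=TAG_WORK)
        sent += 1
    for _ in range(len(ligands)):                              # collect results, refill or stop
        results.append(comm.recv(source=MPI.ANY_SOURCE, tag=TAG_DONE, status=status))
        if sent < len(ligands):
            comm.send(ligands[sent], dest=status.Get_source(), tag=TAG_WORK)
            sent += 1
        else:
            comm.send(None, dest=status.Get_source(), tag=TAG_STOP)
    print(f"collected {len(results)} docking results")
else:
    while True:
        task = comm.recv(source=0, tag=MPI.ANY_TAG)
        if task is None:                                       # TAG_STOP
            break
        comm.send(dock(task), dest=0, tag=TAG_DONE)
```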

  7. A New Hydrodynamic Model for Numerical Simulation of Interacting Galaxies on Intel Xeon Phi Supercomputers

    Science.gov (United States)

    Kulikov, Igor; Chernykh, Igor; Tutukov, Alexander

    2016-05-01

    This paper presents a new hydrodynamic model of interacting galaxies based on the joint solution of multicomponent hydrodynamic equations, the first moments of the collisionless Boltzmann equation, and the Poisson equation for gravity. Using this model, it is possible to formulate a unified numerical method for solving hyperbolic equations. This numerical method has been implemented for hybrid supercomputers with Intel Xeon Phi accelerators. As a simulation result, we show the collision of spiral and disk galaxies, taking into account the star formation process, supernova feedback and molecular hydrogen formation.

  8. 33 CFR 110.220 - Pacific Ocean at San Nicolas Island, Calif.; restricted anchorage areas.

    Science.gov (United States)

    2010-07-01

    ... Pacific Ocean at San Nicolas Island, Calif.; restricted anchorage areas. (a) The restricted areas—(1) East area. All waters within a circle having a radius of one nautical mile centered at latitude 33°13′45... approximately 101°, 420 yards, from San Nicolas Island East End Light. (2) West area. Shoreward of a...

  9. Research on Optimal Path of Data Migration among Multisupercomputer Centers

    Directory of Open Access Journals (Sweden)

    Gang Li

    2016-01-01

    Full Text Available Data collaboration between supercomputer centers requires a great deal of data migration. In order to increase the efficiency of data migration, it is necessary to design optimal paths for data transmission among multiple supercomputer centers. Exploiting the fact that a target center which has finished receiving the data can itself act as a new source center that migrates the data to others, we present a parallel scheme for data migration among multiple supercomputer centers with different interconnection topologies, using graph-theoretic analysis and calculations. Finally, we verify that this method is effective via numerical simulation.
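
    The key observation above, that a center which has finished receiving the data can immediately act as a new source, amounts to a round-based broadcast schedule on the interconnection graph. A minimal greedy sketch, with an assumed five-center topology, is given below; it is not the scheme from the paper, only an illustration of the idea.

```python
# Illustrative sketch only: each center that already holds the data feeds one
# uninformed neighbour per round. The example topology is an assumption.
def broadcast_rounds(adjacency, source):
    """Greedy schedule: per round, every informed node sends to one uninformed neighbour."""
    informed, schedule = {source}, []
    while len(informed) < len(adjacency):
        transfers, busy = [], set()
        for sender in sorted(informed):
            if sender in busy:
                continue
            for receiver in adjacency[sender]:
                if receiver not in informed and receiver not in busy:
                    transfers.append((sender, receiver))
                    busy.update({sender, receiver})
                    break
        if not transfers:            # disconnected remainder, nothing more to do
            break
        informed.update(r for _, r in transfers)
        schedule.append(transfers)
    return schedule

centers = {                           # hypothetical interconnection topology
    "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C", "E"], "E": ["D"],
}
for i, round_ in enumerate(broadcast_rounds(centers, "A"), 1):
    print(f"round {i}: {round_}")
```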

  10. Scheduling Supercomputers.

    Science.gov (United States)

    1983-02-01

    no task is scheduled with overlap. Let numpi be the total number of preemptions and idle slots of size at most t0 that are introduced. We see that if no usable block remains on Qm-k, then numpi < m-k. Otherwise, numpi ≤ m-k-1. If j > n when this procedure terminates, then all tasks have been scheduled

  11. Grassroots Supercomputing

    CERN Multimedia

    Buchanan, Mark

    2005-01-01

    What started out as a way for SETI to plow through its piles of radio-signal data from deep space has turned into a powerful research tool as computer users across the globe donate their screen-saver time to projects as diverse as climate-change prediction, gravitational-wave searches, and protein folding (4 pages)

  12. 75 FR 55975 - Safety Zone; San Diego Harbor Shark Fest Swim; San Diego Bay, San Diego, CA

    Science.gov (United States)

    2010-09-15

    ... SECURITY Coast Guard 33 CFR Part 165 RIN 1625-AA00 Safety Zone; San Diego Harbor Shark Fest Swim; San Diego Bay, San Diego, CA AGENCY: Coast Guard, DHS. ACTION: Temporary final rule. SUMMARY: The Coast Guard is establishing a temporary safety zone upon the navigable waters of the San Diego Bay, San Diego, CA, in support...

  13. Discussion on the Function and Design of the Supercomputer Center

    Institute of Scientific and Technical Information of China (English)

    焦建欣

    2013-01-01

    The supercomputer center is a special type of facility within the data center field. Taking the National Supercomputing Center in Shenzhen as an example, this paper discusses the functions of a supercomputer center and the related design considerations.

  14. Structure and mechanics of the San Andreas-San Gregorio fault junction, San Francisco, California

    Science.gov (United States)

    Parsons, Tom; Bruns, Terry R.; Sliter, Ray

    2005-01-01

    The right-lateral San Gregorio and San Andreas faults meet west of the Golden Gate near San Francisco. Coincident seismic reflection and refraction profiling across the San Gregorio and San Andreas faults south of their junction shows the crust between them to have formed shallow extensional basins that are dissected by parallel strike-slip faults. We employ a regional finite element model to investigate the long-term consequences of the fault geometry. Over the course of 2-3 m.y. of slip on the San Andreas-San Gregorio fault system, elongated extensional basins are predicted to form between the two faults. An additional consequence of the fault geometry is that the San Andreas fault is expected to have migrated eastward relative to the San Gregorio fault. We thus propose a model of eastward stepping right-lateral fault formation to explain the observed multiple fault strands and depositional basins. The current manifestation of this process might be the observed transfer of slip from the San Andreas fault east to the Golden Gate fault.

  15. Feynman diagrams sampling for quantum field theories on the QPACE 2 supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Rappl, Florian

    2016-08-01

    This work discusses the application of Feynman diagram sampling in quantum field theories. The method uses a computer simulation to sample the diagrammatic space obtained in a series expansion. Running large physical simulations requires powerful computers, which effectively splits the thesis into two parts. The first part deals with the method of Feynman diagram sampling. Here the theoretical background of the method itself is discussed. Additionally, important statistical concepts and the theory of the strong force, quantum chromodynamics, are introduced. This sets the context of the simulations. We create and evaluate a variety of models to estimate the applicability of diagrammatic methods. The method is then applied to sample the perturbative expansion of the vertex correction. In the end we obtain the value for the anomalous magnetic moment of the electron. The second part looks at the QPACE 2 supercomputer. This includes a short introduction to supercomputers in general, as well as a closer look at the architecture and the cooling system of QPACE 2. Guiding benchmarks of the InfiniBand network are presented. At the core of this part, a collection of best practices and useful programming concepts is outlined, which enables the development of efficient, yet easily portable, applications for the QPACE 2 system.

  16. Integration of PanDA workload management system with Titan supercomputer at OLCF

    CERN Document Server

    Panitkin, Sergey; The ATLAS collaboration; Klimentov, Alexei; Oleynik, Danila; Petrosyan, Artem; Schovancova, Jaroslava; Vaniachine, Alexandre; Wenaus, Torre

    2015-01-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently uses more than 100,000 cores at well over 100 Grid sites with a peak performance of 0.3 petaFLOPS, the next LHC data-taking run will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on Titan's multi-core worker nodes. It also gives PanDA new capability to collect, in real tim...

  17. Integration of PanDA workload management system with Titan supercomputer at OLCF

    CERN Document Server

    De, Kaushik; Oleynik, Danila; Panitkin, Sergey; Petrosyan, Artem; Vaniachine, Alexandre; Wenaus, Torre; Schovancova, Jaroslava

    2015-01-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, the next LHC data-taking run will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on Titan's multi-core worker nodes. It also gives PanDA new capability to collect, in real time, information about unused...

  18. PREFACE: HITES 2012: 'Horizons of Innovative Theories, Experiments, and Supercomputing in Nuclear Physics'

    Science.gov (United States)

    Hecht, K. T.

    2012-12-01

    This volume contains the contributions of the speakers of an international conference in honor of Jerry Draayer's 70th birthday, entitled 'Horizons of Innovative Theories, Experiments and Supercomputing in Nuclear Physics'. The list of contributors includes not only international experts in these fields, but also many former collaborators, former graduate students, and former postdoctoral fellows of Jerry Draayer, stressing innovative theories such as special symmetries and supercomputing, both of particular interest to Jerry. The organizers of the conference intended to honor Jerry Draayer not only for his seminal contributions in these fields, but also for his administrative skills at departmental, university, national and international level. Signed: Ted Hecht, University of Michigan. Conference photograph. Scientific Advisory Committee: Ani Aprahamian (University of Notre Dame), Baha Balantekin (University of Wisconsin), Bruce Barrett (University of Arizona), Umit Catalyurek (Ohio State University), David Dean (Oak Ridge National Laboratory), Jutta Escher, Chair (Lawrence Livermore National Laboratory), Jorge Hirsch (UNAM, Mexico), David Rowe (University of Toronto), Brad Sherrill (Michigan State University), Joel Tohline (Louisiana State University), Edward Zganjar (Louisiana State University). Organizing Committee: Jeff Blackmon (Louisiana State University), Mark Caprio (University of Notre Dame), Tomas Dytrych (Louisiana State University), Ana Georgieva (INRNE, Bulgaria), Kristina Launey, Co-chair (Louisiana State University), Gabriella Popa (Ohio University Zanesville), James Vary, Co-chair (Iowa State University). Local Organizing Committee: Laura Linhardt (Louisiana State University), Charlie Rasco (Louisiana State University), Karen Richard, Coordinator (Louisiana State University).

  19. Graph visualization for the analysis of the structure and dynamics of extreme-scale supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Berkbigler, K. P. (Kathryn P.); Bush, B. W. (Brian W.); Davis, Kei,; Hoisie, A. (Adolfy); Smith, S. A. (Steve A.)

    2002-01-01

    We are exploring the development and application of information visualization techniques for the analysis of new extreme-scale supercomputer architectures. Modern supercomputers typically comprise very large clusters of commodity SMPs interconnected by possibly dense and often nonstandard networks. The scale, complexity, and inherent nonlocality of the structure and dynamics of this hardware, and the systems and applications distributed over it, challenge traditional analysis methods. As part of the a la carte team at Los Alamos National Laboratory, who are simulating these advanced architectures, we are exploring advanced visualization techniques and creating tools to provide intuitive exploration, discovery, and analysis of these simulations. This work complements existing and emerging algorithmic analysis tools. Here we give background on the problem domain, a description of a prototypical computer architecture of interest (on the order of 10,000 processors connected by a quaternary fat-tree network), and presentations of several visualizations of the simulation data that make clear the flow of data in the interconnection network.

  20. Groundwater cooling of a supercomputer in Perth, Western Australia: hydrogeological simulations and thermal sustainability

    Science.gov (United States)

    Sheldon, Heather A.; Schaubs, Peter M.; Rachakonda, Praveen K.; Trefry, Michael G.; Reid, Lynn B.; Lester, Daniel R.; Metcalfe, Guy; Poulet, Thomas; Regenauer-Lieb, Klaus

    2015-12-01

    Groundwater cooling (GWC) is a sustainable alternative to conventional cooling technologies for supercomputers. A GWC system has been implemented for the Pawsey Supercomputing Centre in Perth, Western Australia. Groundwater is extracted from the Mullaloo Aquifer at 20.8 °C and passes through a heat exchanger before returning to the same aquifer. Hydrogeological simulations of the GWC system were used to assess its performance and sustainability. Simulations were run with cooling capacities of 0.5 or 2.5 megawatts thermal (MWth), with scenarios representing various combinations of pumping rate, injection temperature and hydrogeological parameter values. The simulated system generates a thermal plume in the Mullaloo Aquifer and overlying Superficial Aquifer. Thermal breakthrough (transfer of heat from injection to production wells) occurred in 2.7-4.3 years for a 2.5 MWth system. Shielding (reinjection of cool groundwater between the injection and production wells) resulted in earlier thermal breakthrough but reduced the rate of temperature increase after breakthrough, such that shielding was beneficial after approximately 5 years of pumping. Increasing injection temperature was preferable to increasing flow rate for maintaining cooling capacity after thermal breakthrough. Thermal impacts on existing wells were small, with up to 10 wells experiencing a temperature increase ≥ 0.1 °C (largest increase 6 °C).
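
    As a rough illustration of the quantities involved, the groundwater flow needed to reject a given thermal load follows from a simple heat balance, Q = P / (rho · c_p · ΔT). The calculation below assumes a 10 K temperature rise across the heat exchanger, which is an illustrative value rather than a figure from the study.

```python
# Illustrative back-of-the-envelope only: flow rate needed to reject a thermal load.
RHO = 1000.0       # kg/m^3, water density
CP = 4186.0        # J/(kg K), specific heat of water
P_THERMAL = 2.5e6  # W, cooling capacity quoted above (2.5 MWth)
DT = 10.0          # K, assumed rise between extraction and reinjection

flow_m3_per_s = P_THERMAL / (RHO * CP * DT)
print(f"required flow ~ {flow_m3_per_s * 1000:.1f} L/s")   # roughly 60 L/s
```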

  1. OpenMC:Towards Simplifying Programming for TianHe Supercomputers

    Institute of Scientific and Technical Information of China (English)

    廖湘科; 杨灿群; 唐滔; 易会战; 王锋; 吴强; 薛京灵

    2014-01-01

    Modern petascale and future exascale systems are massively heterogeneous architectures. Developing productive intra-node programming models is crucial toward addressing their programming challenge. We introduce a directive-based intra-node programming model, OpenMC, and show that this new model can achieve ease of programming, high performance, and the degree of portability desired for heterogeneous nodes, especially those in TianHe supercomputers. While existing models are geared towards offloading computations to accelerators (typically one), OpenMC aims to more uniformly and adequately exploit the potential offered by multiple CPUs and accelerators in a compute node. OpenMC achieves this by providing a unified abstraction of hardware resources as workers and facilitating the exploitation of asynchronous task parallelism on the workers. We present an overview of OpenMC, a prototyping implementation, and results from some initial comparisons with OpenMP and hand-written code in developing six applications on two types of nodes from TianHe supercomputers.

  2. Unique Methodologies for Nano/Micro Manufacturing Job Training Via Desktop Supercomputer Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kimball, Clyde [Northern Illinois Univ., DeKalb, IL (United States); Karonis, Nicholas [Northern Illinois Univ., DeKalb, IL (United States); Lurio, Laurence [Northern Illinois Univ., DeKalb, IL (United States); Piot, Philippe [Northern Illinois Univ., DeKalb, IL (United States); Xiao, Zhili [Northern Illinois Univ., DeKalb, IL (United States); Glatz, Andreas [Northern Illinois Univ., DeKalb, IL (United States); Pohlman, Nicholas [Northern Illinois Univ., DeKalb, IL (United States); Hou, Minmei [Northern Illinois Univ., DeKalb, IL (United States); Demir, Veysel [Northern Illinois Univ., DeKalb, IL (United States); Song, Jie [Northern Illinois Univ., DeKalb, IL (United States); Duffin, Kirk [Northern Illinois Univ., DeKalb, IL (United States); Johns, Mitrick [Northern Illinois Univ., DeKalb, IL (United States); Sims, Thomas [Northern Illinois Univ., DeKalb, IL (United States); Yin, Yanbin [Northern Illinois Univ., DeKalb, IL (United States)

    2012-11-21

    This project establishes an initiative in high speed (Teraflop)/large-memory desktop supercomputing for modeling and simulation of dynamic processes important for energy and industrial applications. It provides a training ground for employment of current students in an emerging field with skills necessary to access the large supercomputing systems now present at DOE laboratories. It also provides a foundation for NIU faculty to quantum leap beyond their current small cluster facilities. The funding extends faculty and student capability to a new level of analytic skills with concomitant publication avenues. The components of the Hewlett Packard computer obtained by the DOE funds create a hybrid combination of a Graphics Processing System (12 GPU/Teraflops) and a Beowulf CPU system (144 CPU), the first expandable via the NIU GAEA system to ~60 Teraflops integrated with a 720 CPU Beowulf system. The software is based on access to the NVIDIA/CUDA library and the ability through MATLAB multiple licenses to create additional local programs. A number of existing programs are being transferred to the CPU Beowulf Cluster. Since the expertise necessary to create the parallel processing applications has recently been obtained at NIU, this effort for software development is in an early stage. The educational program has been initiated via formal tutorials and classroom curricula designed for the coming year. Specifically, the cost focus was on hardware acquisitions and appointment of graduate students for a wide range of applications in engineering, physics and computer science.

  3. Supercomputer Assisted Generation of Machine Learning Agents for the Calibration of Building Energy Models

    Energy Technology Data Exchange (ETDEWEB)

    Sanyal, Jibonananda [ORNL; New, Joshua Ryan [ORNL; Edwards, Richard [ORNL

    2013-01-01

    Building Energy Modeling (BEM) is an approach to model the energy usage in buildings for design and retrofit purposes. EnergyPlus is the flagship Department of Energy software that performs BEM for different types of buildings. The input to EnergyPlus can often extend to the order of a few thousand parameters, which have to be calibrated manually by an expert for realistic energy modeling. This makes calibration challenging and expensive, thereby making building energy modeling unfeasible for smaller projects. In this paper, we describe the "Autotune" research, which employs machine learning algorithms to generate agents for the different kinds of standard reference buildings in the U.S. building stock. The parametric space and the variety of building locations and types make this a challenging computational problem necessitating the use of supercomputers. Millions of EnergyPlus simulations are run on supercomputers and subsequently used to train machine learning algorithms to generate agents. These agents, once created, can then run in a fraction of the time, thereby allowing cost-effective calibration of building models.
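
    The agent-generation step described above can be pictured as fitting a fast surrogate model to a table of parametric simulation results. The sketch below uses random stand-in data and scikit-learn in place of the actual EnergyPlus outputs and the machine learning pipeline used in the Autotune project.

```python
# Illustrative sketch only (not the Autotune code): fit a cheap surrogate "agent"
# to parametric simulation results. Random data stands in for EnergyPlus output.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(size=(5000, 8))                    # 8 hypothetical building parameters
y = X @ rng.uniform(1, 5, size=8) + rng.normal(scale=0.1, size=5000)  # stand-in energy use

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
agent = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"surrogate R^2 on held-out runs: {agent.score(X_te, y_te):.3f}")
```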

  4. Federal Market Information Technology in the Post Flash Crash Era: Roles for Supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Bethel, E. Wes; Leinweber, David; Ruebel, Oliver; Wu, Kesheng

    2011-09-16

    This paper describes collaborative work between active traders, regulators, economists, and supercomputing researchers to replicate and extend investigations of the Flash Crash and other market anomalies in a National Laboratory HPC environment. Our work suggests that supercomputing tools and methods will be valuable to market regulators in achieving the goal of market safety, stability, and security. Research results using high frequency data and analytics are described, and directions for future development are discussed. Currently the key mechanism for preventing catastrophic market action is the “circuit breaker.” We believe a more graduated approach, similar to the “yellow light” approach in motorsports to slow down traffic, might be a better way to achieve the same goal. To enable this objective, we study a number of indicators that could foresee hazards in market conditions and explore options to confirm such predictions. Our tests confirm that Volume Synchronized Probability of Informed Trading (VPIN) and a version of the volume Herfindahl-Hirschman Index (HHI) for measuring market fragmentation can indeed give strong signals ahead of the Flash Crash event on May 6, 2010. This is a preliminary step toward a full-fledged early-warning system for unusual market conditions.
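
    Of the two indicators mentioned, the volume Herfindahl-Hirschman Index is the simpler to state: it is the sum of squared volume shares across trading venues, approaching 1 for a single dominant venue and 1/N for an even split. A minimal sketch with made-up venue volumes follows.

```python
# Illustrative sketch only: a volume Herfindahl-Hirschman Index as a fragmentation
# measure. The venue volumes below are made-up numbers.
def volume_hhi(volumes):
    total = sum(volumes.values())
    return sum((v / total) ** 2 for v in volumes.values())

venue_volume = {"NYSE": 3.1e6, "NASDAQ": 2.8e6, "BATS": 1.2e6, "ARCA": 0.9e6}
print(f"HHI = {volume_hhi(venue_volume):.3f}")   # ranges from 1/N (even split) to 1.0 (monopoly)
```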

  5. Communication Characterization and Optimization of Applications Using Topology-Aware Task Mapping on Large Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sreepathi, Sarat [ORNL; D' Azevedo, Eduardo [ORNL; Philip, Bobby [ORNL; Worley, Patrick H [ORNL

    2016-01-01

    On large supercomputers, the job scheduling systems may assign a non-contiguous node allocation for user applications depending on available resources. With parallel applications using MPI (Message Passing Interface), the default process ordering does not take into account the actual physical node layout available to the application. This contributes to non-locality in terms of physical network topology and impacts communication performance of the application. In order to mitigate such performance penalties, this work describes techniques to identify suitable task mapping that takes the layout of the allocated nodes as well as the application's communication behavior into account. During the first phase of this research, we instrumented and collected performance data to characterize communication behavior of critical US DOE (United States - Department of Energy) applications using an augmented version of the mpiP tool. Subsequently, we developed several reordering methods (spectral bisection, neighbor join tree etc.) to combine node layout and application communication data for optimized task placement. We developed a tool called mpiAproxy to facilitate detailed evaluation of the various reordering algorithms without requiring full application executions. This work presents a comprehensive performance evaluation (14,000 experiments) of the various task mapping techniques in lowering communication costs on Titan, the leadership class supercomputer at Oak Ridge National Laboratory.
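
    One simple embodiment of topology-aware task mapping is a greedy pass that places the heaviest-communicating ranks first, each on the free node closest to its already-placed partners. The sketch below illustrates that generic idea with a toy communication matrix and node coordinates; it is not the mpiAproxy tool or any of the specific reordering methods evaluated in the paper.

```python
# Illustrative sketch only: greedy topology-aware placement of MPI ranks.
import numpy as np

def greedy_mapping(comm_matrix, node_coords):
    """comm_matrix[i, j]: bytes exchanged by ranks i and j; node_coords: (n, 3) mesh coordinates."""
    n = comm_matrix.shape[0]
    order = np.argsort(-comm_matrix.sum(axis=1))          # heaviest communicators first
    free, placement = set(range(n)), {}
    for rank in map(int, order):
        partners = [placement[p] for p in np.nonzero(comm_matrix[rank])[0] if p in placement]
        if partners:
            centroid = node_coords[partners].mean(axis=0)
            node = min(free, key=lambda f: np.linalg.norm(node_coords[f] - centroid))
        else:
            node = next(iter(free))
        placement[rank] = node
        free.remove(node)
    return placement

comm = np.array([[0, 9, 1, 0], [9, 0, 1, 0], [1, 1, 0, 5], [0, 0, 5, 0]], dtype=float)
coords = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], dtype=float)
print(greedy_mapping(comm, coords))                       # rank -> node index
```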

  6. Risk factors related to the use of illegal drugs: the critical perspective of drug users' relatives and acquaintances at a public health center in San Pedro Sula, Honduras

    Directory of Open Access Journals (Sweden)

    Gladys Magdalena Rodríguez Funes

    2009-01-01

    This article presents quantitative data from a multicenter, cross-sectional study, which was performed at a public health center in San Pedro Sula, Honduras, using multiple methods. The objective of the study was to describe the critical perspective of people who reported being affected by their relationship with an illicit drug user (relative or acquaintance) in terms of risk factors. Data collection was performed using 100 questionnaires. Most participants were women with low education levels. Drug users were mostly men, with an average age of 23.3 years. The most consumed drug was marijuana (78%), followed by crack/cocaine (72%), glue/inhalants (27%), hallucinogens (ecstasy/LSD) (3%), amphetamines/stimulants (1%), and heroin (1%). The identified risk factors include previous experience with alcohol/tobacco, having friends who use drugs, lack of information, low self-esteem, age, and other personal, family and social factors. In conclusion, prevention and protection should be reinforced.

  7. San Cástulo

    OpenAIRE

    Jaramillo, Tania

    2014-01-01

    Why don't you come closer so we can understand each other; we keep falling through the greed of the neighborhood, we lose ourselves on the corner of San Cástulo and fly off to Eleuterio, on a night when the moon watches over us, keeps us, delays the day, and people remain asleep, or awake but fearful of the night, of the police and the criminals, of the rapists and of us, of the nightlife, of that dark place somewhere, where we transform and howl.

  9. Coma blisters sans coma.

    Science.gov (United States)

    Heinisch, Silke; Loosemore, Michael; Cusack, Carrie A; Allen, Herbert B

    2012-09-01

    Coma blisters (CBs) are self-limited lesions that occur in regions of pressure during unconscious states classically induced by barbiturates. We report a case of CBs sans coma that were histologically confirmed in a 41-year-old woman who developed multiple tense abdominal bullae with surrounding erythema following a transatlantic flight. Interestingly, the patient was fully conscious and denied medication use or history of medical conditions. A clinical diagnosis of CBs was confirmed by histopathologic findings of eccrine gland necrosis, a hallmark of these bullous lesions.

  10. Federal Council on Science, Engineering and Technology: Committee on Computer Research and Applications, Subcommittee on Science and Engineering Computing: The US Supercomputer Industry

    Energy Technology Data Exchange (ETDEWEB)

    1987-12-01

    The Federal Coordinating Council on Science, Engineering, and Technology (FCCSET) Committee on Supercomputing was chartered by the Director of the Office of Science and Technology Policy in 1982 to examine the status of supercomputing in the United States and to recommend a role for the Federal Government in the development of this technology. In this study, the FCCSET Committee (now called the Subcommittee on Science and Engineering Computing of the FCCSET Committee on Computer Research and Applications) reports on the status of the supercomputer industry and addresses changes that have occurred since issuance of the 1983 and 1985 reports. The review is based upon periodic meetings with and site visits to supercomputer manufacturers and consultation with experts in high performance scientific computing. White papers have been contributed to this report by industry leaders and supercomputer experts.

  11. San Diego's Capital Planning Process

    Science.gov (United States)

    Lytton, Michael

    2009-01-01

    This article describes San Diego's capital planning process. As part of its capital planning process, the San Diego Unified School District has developed a systematic analysis of functional quality at each of its school sites. The advantage of this approach is that it seeks to develop and apply quantifiable metrics and standards for the more…

  12. Los Angeles og San Francisco

    DEFF Research Database (Denmark)

    Ørstrup, Finn Rude

    1998-01-01

    Compendium prepared for a study trip to Los Angeles and San Francisco, April-May 1998. Kunstakademiets Arkitektskole, Institut 3H.

  13. A Framework for HI Spectral Source Finding Using Distributed-Memory Supercomputing

    CERN Document Server

    Westerlund, Stefan

    2014-01-01

    The latest generation of radio astronomy interferometers will conduct all sky surveys with data products consisting of petabytes of spectral line data. Traditional approaches to identifying and parameterising the astrophysical sources within this data will not scale to datasets of this magnitude, since the performance of workstations will not keep up with the real-time generation of data. For this reason, it is necessary to employ high performance computing systems consisting of a large number of processors connected by a high-bandwidth network. In order to make use of such supercomputers substantial modifications must be made to serial source finding code. To ease the transition, this work presents the Scalable Source Finder Framework, a framework providing storage access, networking communication and data composition functionality, which can support a wide range of source finding algorithms provided they can be applied to subsets of the entire image. Additionally, the Parallel Gaussian Source Finder was imp...

  14. Diskless supercomputers: Scalable, reliable I/O for the Tera-Op technology base

    Science.gov (United States)

    Katz, Randy H.; Ousterhout, John K.; Patterson, David A.

    1993-01-01

    Computing is seeing an unprecedented improvement in performance; over the last five years there has been an order-of-magnitude improvement in the speeds of workstation CPU's. At least another order of magnitude seems likely in the next five years, to machines with 500 MIPS or more. The goal of the ARPA Tera-Op program is to realize even larger, more powerful machines, executing as many as a trillion operations per second. Unfortunately, we have seen no comparable breakthroughs in I/O performance; the speeds of I/O devices and the hardware and software architectures for managing them have not changed substantially in many years. We have completed a program of research to demonstrate hardware and software I/O architectures capable of supporting the kinds of internetworked 'visualization' workstations and supercomputers that will appear in the mid 1990s. The project had three overall goals: high performance, high reliability, and a scalable, multipurpose system.

  15. An Optimized Parallel FDTD Topology for Challenging Electromagnetic Simulations on Supercomputers

    Directory of Open Access Journals (Sweden)

    Shugang Jiang

    2015-01-01

    Full Text Available It may not be a challenge to run a Finite-Difference Time-Domain (FDTD) code for electromagnetic simulations on a supercomputer with more than ten thousand CPU cores; however, making the FDTD code work with the highest efficiency is a challenge. In this paper, the performance of parallel FDTD is optimized through MPI (message passing interface) virtual topology, based on which a communication model is established. The general rules of optimal topology are presented according to the model. The performance of the method is tested and analyzed on three high performance computing platforms with different architectures in China. Simulations including an airplane with a 700-wavelength wingspan and a complex microstrip antenna array with nearly 2000 elements are performed very efficiently using a maximum of 10240 CPU cores.
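
    The MPI virtual-topology mechanism on which such optimizations are built can be exercised with a few lines of mpi4py: a Cartesian communicator is created and each rank discovers its neighbours. This is a generic illustration, not the communication model or the optimal-topology rules developed in the paper.

```python
# Illustrative sketch only: an MPI Cartesian virtual topology and neighbour lookup.
from mpi4py import MPI

comm = MPI.COMM_WORLD
dims = MPI.Compute_dims(comm.Get_size(), [0, 0])       # e.g. 4 ranks -> [2, 2]
cart = comm.Create_cart(dims, periods=[False, False], reorder=True)

coords = cart.Get_coords(cart.Get_rank())
west, east = cart.Shift(direction=0, disp=1)           # neighbours along x
south, north = cart.Shift(direction=1, disp=1)         # neighbours along y
print(f"rank {cart.Get_rank()} at {coords}: x-neighbours ({west}, {east}), "
      f"y-neighbours ({south}, {north})")
```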

  16. Large-scale integrated super-computing platform for next generation virtual drug discovery.

    Science.gov (United States)

    Mitchell, Wayne; Matsumoto, Shunji

    2011-08-01

    Traditional drug discovery starts by experimentally screening chemical libraries to find hit compounds that bind to protein targets, modulating their activity. Subsequent rounds of iterative chemical derivatization and rescreening are conducted to enhance the potency, selectivity, and pharmacological properties of hit compounds. Although computational docking of ligands to targets has been used to augment the empirical discovery process, its historical effectiveness has been limited because of the poor correlation of ligand dock scores and experimentally determined binding constants. Recent progress in super-computing, coupled to theoretical insights, allows the calculation of the Gibbs free energy, and therefore accurate binding constants, for usually large ligand-receptor systems. This advance extends the potential of virtual drug discovery. A specific embodiment of the technology, integrating de novo, abstract fragment based drug design, sophisticated molecular simulation, and the ability to calculate thermodynamic binding constants with unprecedented accuracy, is discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.
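
    The link between a computed binding free energy and a binding constant is the standard relation ΔG = RT ln(Kd). A short worked example follows; the -10 kcal/mol input is an assumed value for illustration, not a result from the cited work.

```python
# Illustrative arithmetic only: converting a binding free energy into a dissociation constant.
import math

R = 1.98720425864083e-3   # kcal/(mol K), gas constant
T = 298.15                # K
dG = -10.0                # kcal/mol, assumed binding free energy

Kd = math.exp(dG / (R * T))        # mol/L
print(f"Kd ~ {Kd:.2e} M")          # ~4.7e-08 M, i.e. tens of nanomolar
```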

  17. Reliability Lessons Learned From GPU Experience With The Titan Supercomputer at Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Gallarno, George [Christian Brothers University]; Rogers, James H. [ORNL]; Maxwell, Don E. [ORNL]

    2015-01-01

    The high computational capability of graphics processing units (GPUs) is enabling and driving the scientific discovery process at large scale. The world's second-fastest supercomputer for open science, Titan, has more than 18,000 GPUs that computational scientists use to perform scientific simulations and data analysis. Understanding of GPU reliability characteristics, however, is still in its nascent stage since GPUs have only recently been deployed at large scale. This paper presents a detailed study of GPU errors and their impact on system operations and applications, describing experiences with the 18,688 GPUs on the Titan supercomputer as well as lessons learned in the process of efficient operation of GPUs at scale. These experiences are helpful to HPC sites which already have large-scale GPU clusters or plan to deploy GPUs in the future.

  18. Operational numerical weather prediction on a GPU-accelerated cluster supercomputer

    Science.gov (United States)

    Lapillonne, Xavier; Fuhrer, Oliver; Spörri, Pascal; Osuna, Carlos; Walser, André; Arteaga, Andrea; Gysi, Tobias; Rüdisühli, Stefan; Osterried, Katherine; Schulthess, Thomas

    2016-04-01

    The local area weather prediction model COSMO is used at MeteoSwiss to provide high-resolution numerical weather predictions over the Alpine region. In order to benefit from the latest developments in computer technology, the model was optimized and adapted to run on Graphical Processing Units (GPUs). Thanks to these model adaptations and the acquisition of a dedicated hybrid supercomputer, a new set of operational applications has been introduced at MeteoSwiss: COSMO-1 (1 km deterministic), COSMO-E (2 km ensemble) and KENDA (data assimilation). These new applications correspond to roughly a 40-fold increase in computational load compared to the previous operational setup. We present an overview of the approach used to port the COSMO model to GPUs, together with a detailed description of, and performance results on, the new hybrid Cray CS-Storm computer, Piz Kesch.

  19. A CPU/MIC Collaborated Parallel Framework for GROMACS on Tianhe-2 Supercomputer.

    Science.gov (United States)

    Peng, Shaoliang; Yang, Shunyun; Su, Wenhe; Zhang, Xiaoyu; Zhang, Tenglilang; Liu, Weiguo; Zhao, Xingming

    2017-06-16

    Molecular Dynamics (MD) is the simulation of the dynamic behavior of atoms and molecules. As the most popular software for molecular dynamics, GROMACS cannot work on large-scale data because of limited computing resources. In this paper, we propose a CPU and Intel® Xeon Phi Many Integrated Core (MIC) collaborated parallel framework to accelerate GROMACS using the offload mode on a MIC coprocessor, which improves the performance of GROMACS significantly, especially on the Tianhe-2 supercomputer. Furthermore, we optimize GROMACS so that it can run on both the CPU and MIC at the same time. In addition, we accelerate multi-node GROMACS so that it can be used in practice. Benchmarking on real data, our accelerated GROMACS performs very well and reduces computation time significantly. Source code: https://github.com/tianhe2/gromacs-mic.
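
    As a rough illustration of the offload mode mentioned above (this is not GROMACS source code; it assumes the Intel compiler's Language Extensions for Offload, and the array contents and toy force kernel are made up), a host program can push a data-parallel loop onto a MIC coprocessor while the CPU remains free for other work:

        /* Illustrative MIC offload sketch: copy x to the coprocessor, run the
         * loop there, copy f back. Other compilers ignore the offload pragma
         * and simply run the loop on the host. */
        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            const int n = 1000000;
            float *x = malloc(n * sizeof(float));
            float *f = malloc(n * sizeof(float));
            for (int i = 0; i < n; i++) x[i] = (float)i;

            #pragma offload target(mic:0) in(x:length(n)) out(f:length(n))
            {
                #pragma omp parallel for
                for (int i = 0; i < n; i++)
                    f[i] = -2.0f * x[i];   /* stand-in for a per-particle force kernel */
            }

            printf("f[10] = %f\n", f[10]);
            free(x);
            free(f);
            return 0;
        }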

  20. Mixed precision numerical weather prediction on hybrid GPU-CPU supercomputers

    Science.gov (United States)

    Lapillonne, Xavier; Osuna, Carlos; Spoerri, Pascal; Osterried, Katherine; Charpilloz, Christophe; Fuhrer, Oliver

    2017-04-01

    A new version of the climate and weather model COSMO has been developed that runs faster on traditional high-performance computing systems with CPUs as well as on heterogeneous architectures using graphics processing units (GPUs). The model was in addition adapted to be able to run in "single precision" mode. After discussing the key changes introduced in this new model version and the tools used in the porting approach, we present three applications, namely the MeteoSwiss operational weather prediction system, COSMO-LEPS and the CALMO project, which already take advantage of the performance improvement of up to a factor of 4 by running on GPU systems and using the single-precision mode. We discuss how the code changes open new perspectives for scientific research and can enable researchers to get access to a new class of supercomputers.

  1. Palacios and Kitten : high performance operating systems for scalable virtualized and native supercomputing.

    Energy Technology Data Exchange (ETDEWEB)

    Widener, Patrick (University of New Mexico); Jaconette, Steven (Northwestern University); Bridges, Patrick G. (University of New Mexico); Xia, Lei (Northwestern University); Dinda, Peter (Northwestern University); Cui, Zheng.; Lange, John (Northwestern University); Hudson, Trammell B.; Levenhagen, Michael J.; Pedretti, Kevin Thomas Tauke; Brightwell, Ronald Brian

    2009-09-01

    Palacios and Kitten are new open source tools that enable applications, whether ported or not, to achieve scalable high performance on large machines. They provide a thin layer over the hardware to support both full-featured virtualized environments and native code bases. Kitten is an OS under development at Sandia that implements a lightweight kernel architecture to provide predictable behavior and increased flexibility on large machines, while also providing Linux binary compatibility. Palacios is a VMM that is under development at Northwestern University and the University of New Mexico. Palacios, which can be embedded into Kitten and other OSes, supports existing, unmodified applications and operating systems by using virtualization that leverages hardware technologies. We describe the design and implementation of both Kitten and Palacios. Our benchmarks show that they provide near native, scalable performance. Palacios and Kitten provide an incremental path to using supercomputer resources that is not performance-compromised.

  2. Modern Gyrokinetic Particle-In-Cell Simulation of Fusion Plasmas on Top Supercomputers

    CERN Document Server

    Wang, Bei; Tang, William; Ibrahim, Khaled; Madduri, Kamesh; Williams, Samuel; Oliker, Leonid

    2015-01-01

    The Gyrokinetic Toroidal Code at Princeton (GTC-P) is a highly scalable and portable particle-in-cell (PIC) code. It solves the 5D Vlasov-Poisson equation featuring efficient utilization of modern parallel computer architectures at the petascale and beyond. Motivated by the goal of developing a modern code capable of dealing with the physics challenge of increasing problem size with sufficient resolution, new thread-level optimizations have been introduced as well as a key additional domain decomposition. GTC-P's multiple levels of parallelism, including inter-node 2D domain decomposition and particle decomposition, as well as intra-node shared memory partition and vectorization have enabled pushing the scalability of the PIC method to extreme computational scales. In this paper, we describe the methods developed to build a highly parallelized PIC code across a broad range of supercomputer designs. This particularly includes implementations on heterogeneous systems using NVIDIA GPU accelerators and Intel Xeon...

  3. Dawning Nebulae: A PetaFLOPS Supercomputer with a Heterogeneous Structure

    Institute of Scientific and Technical Information of China (English)

    Ning-Hui Sun; Jing Xing; Zhi-Gang Huo; Guang-Ming Tan; Jin Xiong; Bo Li; Can Ma

    2011-01-01

    Dawning Nebulae is a heterogeneous system composed of 9280 multi-core x86 CPUs and 4640 NVIDIA Fermi GPUs. With a Linpack performance of 1.271 petaFLOPS, it was ranked second in the TOP500 list released in June 2010. In this paper, key issues in the system design of Dawning Nebulae are introduced. System tuning methodologies aiming at the petaFLOPS Linpack result are presented, including algorithmic optimization and communication improvement. The design of its file I/O subsystem, including HVFS and the underlying DCFS3, is also described. Performance evaluations show that the Linpack efficiency of each node reaches 69.89%, and 1024-node aggregate read and write bandwidths exceed 100 GB/s and 70 GB/s, respectively. The success of Dawning Nebulae has demonstrated the viability of a CPU/GPU heterogeneous structure for future designs of supercomputers.

  4. Scalability Test of multiscale fluid-platelet model for three top supercomputers

    Science.gov (United States)

    Zhang, Peng; Zhang, Na; Gao, Chao; Zhang, Li; Gao, Yuxiang; Deng, Yuefan; Bluestein, Danny

    2016-07-01

    We have tested the scalability of three supercomputers, Tianhe-2, Stampede and CS-Storm, with multiscale fluid-platelet simulations, in which a highly resolved and efficient numerical model for the nanoscale biophysics of platelets in microscale viscous biofluids is considered. Three experiments involving varying problem sizes were performed: Exp-S, a 680,718-particle single-platelet case; Exp-M, a 2,722,872-particle 4-platelet case; and Exp-L, a 10,891,488-particle 16-platelet case. Our implementation of the multiple time-stepping (MTS) algorithm improved on the performance of single time-stepping (STS) in all experiments. Using MTS, our model achieved simulation rates of 12.5, 25.0 and 35.5 μs/day for Exp-S and 9.09, 6.25 and 14.29 μs/day for Exp-M on Tianhe-2, CS-Storm 16-K80 and Stampede K20, respectively. The best rate for Exp-L was 6.25 μs/day on Stampede. Utilizing current advanced HPC resources, the simulation rates achieved by our algorithms bring within reach complex multiscale simulations for solving vexing problems at the interface of biology and engineering, such as thrombosis in blood flow, which combines millisecond-scale hematology with microscale blood flow at resolutions of the micro-to-nanoscale cellular components of platelets. This study of the performance characteristics of supercomputers running advanced computational algorithms that offer an optimal performance trade-off demonstrates that such simulations are feasible with currently available HPC resources.
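
    The multiple time-stepping idea credited with the speedup above can be sketched in a few lines (a schematic only, not the authors' platelet code; the force terms, step counts and the split into cheap and expensive contributions are invented for illustration):

        /* Multiple time-stepping (MTS) skeleton: the cheap force is evaluated
         * every step, the expensive one only every K-th step. */
        #include <stdio.h>

        #define NSTEPS 1000
        #define K      5                     /* refresh slow force every K steps */

        static double fast_force(double x) { return -x; }          /* cheap term  */
        static double slow_force(double x) { return -0.01 * x; }   /* costly term */

        int main(void)
        {
            double x = 1.0, v = 0.0;
            const double dt = 1e-3;
            double f_slow = slow_force(x);

            for (int step = 0; step < NSTEPS; step++) {
                if (step % K == 0)
                    f_slow = slow_force(x);  /* expensive update, done rarely */

                double f = fast_force(x) + f_slow;
                v += f * dt;                 /* semi-implicit Euler update */
                x += v * dt;
            }
            printf("x = %f, v = %f\n", x, v);
            return 0;
        }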

  5. Efficient development of memory bounded geo-applications to scale on modern supercomputers

    Science.gov (United States)

    Räss, Ludovic; Omlin, Samuel; Licul, Aleksandar; Podladchikov, Yuri; Herman, Frédéric

    2016-04-01

    Numerical modeling is a key tool in the geosciences. The current challenge is to solve problems that are multi-physics and for which the length scale and the place of occurrence might not be known in advance. Also, the spatial extent of the investigated domain might vary strongly in size, ranging from millimeters for reactive transport to kilometers for glacier erosion dynamics. An efficient way to proceed is to develop simple but robust algorithms that perform well and scale on modern supercomputers and therefore permit very high-resolution simulations. We propose an efficient approach to solve memory-bounded real-world applications on modern supercomputer architectures. We optimize the software to run on our newly acquired state-of-the-art GPU cluster "octopus". Our approach shows promising preliminary results on important geodynamical and geomechanical problems: we have developed a Stokes solver for glacier flow and a poromechanical solver including complex rheologies for nonlinear waves in stressed porous rocks. We solve the system of partial differential equations on a regular Cartesian grid and use an iterative finite-difference scheme with preconditioning of the residuals. The MPI communication happens only locally (point-to-point); this method is known to scale linearly by construction. The "octopus" GPU cluster, which we use for the computations, has been designed to achieve maximal data-transfer throughput at minimal hardware cost. It is composed of twenty compute nodes, each hosting four Nvidia Titan X GPU accelerators. These high-density nodes are interconnected with a parallel (dual-rail) FDR InfiniBand network. Our efforts show promising preliminary results for the different physics investigated. The glacier flow solver achieves good accuracy in the relevant benchmarks and the coupled poromechanical solver makes it possible to explain previously unresolvable focused fluid flow as a natural outcome of the porosity setup. In both cases
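
    The iterative finite-difference update with preconditioned residuals described above can be illustrated, in a drastically simplified serial form, by a damped Jacobi-type sweep for a 1-D Poisson problem (the real solvers are multi-dimensional, GPU-resident and MPI-parallel; the grid size, damping factor and source term here are arbitrary):

        /* Residual-based iterative stencil update for d2u/dx2 = rhs on [0,1]
         * with u = 0 at both ends. */
        #include <stdio.h>
        #include <math.h>

        #define N 256

        int main(void)
        {
            double u[N] = {0.0}, rhs[N];
            const double dx = 1.0 / (N - 1);
            const double damp = 0.9;                     /* update damping      */
            double max_res = 0.0;

            for (int i = 0; i < N; i++) rhs[i] = 1.0;    /* constant source     */

            for (int iter = 0; iter < 20000; iter++) {
                max_res = 0.0;
                for (int i = 1; i < N - 1; i++) {
                    /* residual of the discrete operator at interior point i */
                    double res = (u[i-1] - 2.0*u[i] + u[i+1]) / (dx*dx) - rhs[i];
                    u[i] += damp * 0.5 * dx*dx * res;    /* damped Jacobi-type step */
                    if (fabs(res) > max_res) max_res = fabs(res);
                }
            }
            printf("max residual after sweeps: %e\n", max_res);
            return 0;
        }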

  6. Harnessing Petaflop-Scale Multi-Core Supercomputing for Problems in Space Science

    Science.gov (United States)

    Albright, B. J.; Yin, L.; Bowers, K. J.; Daughton, W.; Bergen, B.; Kwan, T. J.

    2008-12-01

    The particle-in-cell kinetic plasma code VPIC has been migrated successfully to the world's fastest supercomputer, Roadrunner, a hybrid multi-core platform built by IBM for the Los Alamos National Laboratory. How this was achieved will be described and examples of state-of-the-art calculations in space science, in particular, the study of magnetic reconnection, will be presented. With VPIC on Roadrunner, we have performed, for the first time, plasma PIC calculations with over one trillion particles, >100× larger than calculations considered "heroic" by community standards. This allows examination of physics at unprecedented scale and fidelity. Roadrunner is an example of an emerging paradigm in supercomputing: the trend toward multi-core systems with deep hierarchies and where memory bandwidth optimization is vital to achieving high performance. Getting VPIC to perform well on such systems is a formidable challenge: the core algorithm is memory bandwidth limited with low compute-to-data ratio and requires random access to memory in its inner loop. That we were able to get VPIC to perform and scale well, achieving >0.374 Pflop/s and linear weak scaling on real physics problems on up to the full 12240-core Roadrunner machine, bodes well for harnessing these machines for our community's needs in the future. Many of the design considerations encountered commute to other multi-core and accelerated (e.g., via GPU) platforms and we modified VPIC with flexibility in mind. These will be summarized and strategies for how one might adapt a code for such platforms will be shared. Work performed under the auspices of the U.S. DOE by the LANS LLC Los Alamos National Laboratory. Dr. Bowers is a LANL Guest Scientist; he is presently at D. E. Shaw Research LLC, 120 W 45th Street, 39th Floor, New York, NY 10036.
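
    The memory-access pattern that makes such PIC inner loops bandwidth-bound can be seen in a toy 1-D charge-deposition (scatter) loop; this is purely illustrative and unrelated to the actual VPIC kernels, with the grid size, particle count and weighting chosen arbitrarily:

        /* Cloud-in-cell charge deposition: each particle scatters charge onto
         * its two neighboring grid nodes; the node indices are effectively
         * random with respect to memory order, so little arithmetic is done
         * per byte moved. */
        #include <stdio.h>
        #include <stdlib.h>

        #define NGRID 1024
        #define NPART 100000

        int main(void)
        {
            double rho[NGRID] = {0.0};
            const double dx = 1.0 / NGRID;

            for (int p = 0; p < NPART; p++) {
                double x = (double)rand() / ((double)RAND_MAX + 1.0); /* position in [0,1) */
                int i = (int)(x / dx);                                /* cell index        */
                double w = x / dx - i;                                /* right-node weight */
                rho[i]               += 1.0 - w;                      /* scattered,        */
                rho[(i + 1) % NGRID] += w;                            /* cache-unfriendly  */
            }

            printf("rho[0] = %f\n", rho[0]);
            return 0;
        }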

  7. Construction, water-level, and water-quality data for multiple-well monitoring sites and test wells, Fort Irwin National Training Center, San Bernardino County, California, 2009-12

    Science.gov (United States)

    Kjos, Adam R.; Densmore, Jill N.; Nawikas, Joseph M.; Brown, Anthony A.

    2014-01-01

    Because of increasing water demands at the U.S. Army Fort Irwin National Training Center, the U.S. Geological Survey in cooperation with the U.S. Army carried out a study to evaluate the water quality and potential groundwater supply of undeveloped basins within the U.S. Army Fort Irwin National Training Center. In addition, work was performed in the three developed basins—Langford, Bicycle, and Irwin—proximal to or underlying cantonment to provide information in support of water-resources management and to supplement monitoring in these basins. Between 2009 and 2012, the U.S. Geological Survey installed 41 wells to expand collection of water-resource data within the U.S. Army Fort Irwin National Training Center. Thirty-four monitoring wells (2-inch diameter) were constructed at 14 single- or multiple-well monitoring sites and 7 test wells (8-inch diameter) were installed. The majority of the wells were installed in previously undeveloped or minimally developed basins (Cronise, Red Pass, the Central Corridor area, Superior, Goldstone, and Nelson Basins) proximal to cantonment (primary base housing and infrastructure). Data associated with well construction, water-level monitoring, and water-quality sampling are presented in this report.

  8. 78 FR 39610 - Safety Zone; Big Bay Boom, San Diego Bay; San Diego, CA

    Science.gov (United States)

    2013-07-02

    ... SECURITY Coast Guard 33 CFR Part 165 RIN 1625-AA00 Safety Zone; Big Bay Boom, San Diego Bay; San Diego, CA... temporary safety zones upon the navigable waters of the San Diego Bay for the annual Port of San Diego... Sector San Diego, Coast Guard; telephone 619-278-7261, email d11marineeventssd@uscg.mil . If you have...

  9. 75 FR 38412 - Safety Zone; San Diego POPS Fireworks, San Diego, CA

    Science.gov (United States)

    2010-07-02

    ... SECURITY Coast Guard 33 CFR Part 165 RIN 1625-AA00 Safety Zone; San Diego POPS Fireworks, San Diego, CA... zone on the navigable waters of San Diego Bay in support of the San Diego POPS Fireworks. This safety.... Coast Guard Sector San Diego, CA; telephone 619-278-7262, e-mail Shane.E.Jackson@uscg.mil . If you have...

  10. 78 FR 42027 - Safety Zone; San Diego Bayfair; Mission Bay, San Diego, CA

    Science.gov (United States)

    2013-07-15

    ... SECURITY Coast Guard 33 CFR Part 165 RIN 1625-AA00 Safety Zone; San Diego Bayfair; Mission Bay, San Diego... proposing a temporary safety zone on the navigable waters of Mission Bay in San Diego, CA for the San Diego..., call or email Lieutenant John Bannon, Waterways Management, U.S. Coast Guard Sector San Diego...

  11. 78 FR 29289 - Safety Zone; Big Bay Boom, San Diego Bay, San Diego, CA

    Science.gov (United States)

    2013-05-20

    ... SECURITY Coast Guard 33 CFR Part 165 RIN 1625-AA00 Safety Zone; Big Bay Boom, San Diego Bay, San Diego, CA... establish four temporary safety zones upon the navigable waters of San Diego Bay for the Port of San Diego... Management, U.S. Coast Guard Sector San Diego; telephone (619) 278-7261, email John.E.Bannon@uscg.mil . If...

  12. 78 FR 53245 - Safety Zone; San Diego Bayfair; Mission Bay, San Diego, CA

    Science.gov (United States)

    2013-08-29

    ... SECURITY Coast Guard 33 CFR Part 165 RIN 1625-AA00 Safety Zone; San Diego Bayfair; Mission Bay, San Diego... temporary safety zone upon the navigable waters of Mission Bay in San Diego, CA for the annual San Diego... Management, U.S. Coast Guard Sector San Diego; telephone (619) 278-7261, email John.E.Bannon@uscg.mil . If...

  13. NASA Center for Climate Simulation (NCCS) Presentation

    Science.gov (United States)

    Webster, William P.

    2012-01-01

    The NASA Center for Climate Simulation (NCCS) offers integrated supercomputing, visualization, and data interaction technologies to enhance NASA's weather and climate prediction capabilities. It serves hundreds of users at NASA Goddard Space Flight Center, as well as other NASA centers, laboratories, and universities across the US. Over the past year, NCCS has continued expanding its data-centric computing environment to meet the increasingly data-intensive challenges of climate science. We doubled our Discover supercomputer's peak performance to more than 800 teraflops by adding 7,680 Intel Xeon Sandy Bridge processor cores and, most recently, 240 Intel Xeon Phi Many Integrated Core (MIC) coprocessors. A supercomputing-class analysis system named Dali gives users rapid access to their data on Discover and high-performance software including the Ultra-scale Visualization Climate Data Analysis Tools (UV-CDAT), with interfaces from user desktops and a 17- by 6-foot visualization wall. NCCS also is exploring highly efficient climate data services and management with a new MapReduce/Hadoop cluster while augmenting its data distribution to the science community. Using NCCS resources, NASA completed its modeling contributions to the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report this summer as part of the ongoing Coupled Model Intercomparison Project Phase 5 (CMIP5). Ensembles of simulations run on Discover reached back to the year 1000 to test model accuracy and projected climate change through the year 2300 based on four different scenarios of greenhouse gases, aerosols, and land use. The data resulting from several thousand IPCC/CMIP5 simulations, as well as a variety of other simulation, reanalysis, and observation datasets, are available to scientists and decision makers through an enhanced NCCS Earth System Grid Federation Gateway. Worldwide downloads have totaled over 110 terabytes of data.

  14. Volcano-hazard zonation for San Vicente volcano, El Salvador

    Science.gov (United States)

    Major, J.J.; Schilling, S.P.; Pullinger, C.R.; Escobar, C.D.; Howell, M.M.

    2001-01-01

    San Vicente volcano, also known as Chichontepec, is one of many volcanoes along the volcanic arc in El Salvador. This composite volcano, located about 50 kilometers east of the capital city San Salvador, has a volume of about 130 cubic kilometers, rises to an altitude of about 2180 meters, and towers above major communities such as San Vicente, Tepetitan, Guadalupe, Zacatecoluca, and Tecoluca. In addition to the larger communities that surround the volcano, several smaller communities and coffee plantations are located on or around the flanks of the volcano, and major transportation routes are located near the lowermost southern and eastern flanks of the volcano. The population density and proximity around San Vicente volcano, as well as the proximity of major transportation routes, increase the risk that even small landslides or eruptions, likely to occur again, can have serious societal consequences. The eruptive history of San Vicente volcano is not well known, and there is no definitive record of historical eruptive activity. The last significant eruption occurred more than 1700 years ago, and perhaps long before permanent human habitation of the area. Nevertheless, this volcano has a very long history of repeated, and sometimes violent, eruptions, and at least once a large section of the volcano collapsed in a massive landslide. The oldest rocks associated with a volcanic center at San Vicente are more than 2 million years old. The volcano is composed of remnants of multiple eruptive centers that have migrated roughly eastward with time. Future eruptions of this volcano will pose substantial risk to surrounding communities.

  15. Riparian Habitat - San Joaquin River

    Data.gov (United States)

    California Department of Resources — The immediate focus of this study is to identify, describe and map the extent and diversity of riparian habitats found along the main stem of the San Joaquin River,...

  16. 1906 San Francisco, USA Images

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 1906 San Francisco earthquake was the largest event (magnitude 8.3) to occur in the conterminous United States in the 20th Century. Recent estimates indicate...

  17. High temporal resolution mapping of seismic noise sources using heterogeneous supercomputers

    Science.gov (United States)

    Gokhberg, Alexey; Ermert, Laura; Paitz, Patrick; Fichtner, Andreas

    2017-04-01

    Time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems. Significant interest in seismic noise source maps with high temporal resolution (days) is expected to come from a number of domains, including natural resources exploration, analysis of active earthquake fault zones and volcanoes, as well as geothermal and hydrocarbon reservoir monitoring. Currently, knowledge of noise sources is insufficient for high-resolution subsurface monitoring applications. Near-real-time seismic data, as well as advanced imaging methods to constrain seismic noise sources, have recently become available. These methods are based on the massive cross-correlation of seismic noise records from all available seismic stations in the region of interest and are therefore very computationally intensive. Heterogeneous massively parallel supercomputing systems introduced in recent years combine conventional multi-core CPUs with GPU accelerators and provide an opportunity for a manifold increase in computing performance. Therefore, these systems represent an efficient platform for implementation of a noise source mapping solution. We present the first results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service that provides seismic noise source maps for Central Europe with high temporal resolution (days to a few weeks depending on frequency and data availability). The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept in order to provide interested external researchers regular access to the noise source maps. The solution architecture includes the following sub-systems: (1) data acquisition responsible for
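
    At the heart of the processing described above is the cross-correlation of noise records between station pairs. A minimal time-domain version is sketched below (illustrative only; production systems work in the frequency domain over massive numbers of station pairs, and the synthetic signals and lag window here are made up):

        /* Time-domain cross-correlation of two toy noise records; the lag of
         * the correlation peak estimates the inter-station travel time. */
        #include <stdio.h>

        #define N      1024
        #define MAXLAG 64

        int main(void)
        {
            double a[N], b[N], xcorr[2 * MAXLAG + 1];

            for (int i = 0; i < N; i++) {          /* toy signals: b lags a by 5 samples */
                a[i] = (i % 50 == 0) ? 1.0 : 0.0;
                b[i] = (i % 50 == 5) ? 1.0 : 0.0;
            }

            for (int lag = -MAXLAG; lag <= MAXLAG; lag++) {
                double s = 0.0;
                for (int i = 0; i < N; i++) {
                    int j = i + lag;
                    if (j >= 0 && j < N) s += a[i] * b[j];
                }
                xcorr[lag + MAXLAG] = s;
            }

            int best = 0;
            for (int k = 1; k < 2 * MAXLAG + 1; k++)
                if (xcorr[k] > xcorr[best]) best = k;
            printf("correlation peak at lag %d samples\n", best - MAXLAG);
            return 0;
        }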

  18. Stratigraphy, sedimentology and paleontology of lower Eocene San Jose formation, central San Juan basin, New Mexico

    Energy Technology Data Exchange (ETDEWEB)

    Lucas, S.G.; Smith, L.N. (New Mexico Museum of Natural History, Albuquerque (USA))

    1989-09-01

    The lower Eocene San Jose Formation in the central portion of the San Juan basin (Gobernador-Vigas Canyon area) consists of the Cuba Mesa, Regina, Llaves, and Tapicitos Members. Well log data indicate that, from its 100-m thickness, the Cuba Mesa Member thins toward the basin center and pinches out to the northeast by lat. 36°40'N, long. 107°19'W. The Regina Member has the most extensive outcrops in the central basin, and it decreases in sandstone/mud rock ratio to the north. The Llaves and Tapicitos Members occur only at the highest elevations, are thin due to erosion, and are not mappable as separate units. Well log data and 1,275 m of measured stratigraphic section in the Regina, Llaves, and Tapicitos Members indicate these strata are composed of approximately 35% medium to coarse-grained sandstone and 65% fine-grained sandstone and mud rock. Sedimentology and sediment-dispersal patterns indicate deposition by generally south-flowing streams that had sources to the northwest, northeast, and east. Low-sinuosity, sand-bedded, braided(?) streams shifted laterally across about 1 km-wide channel belts to produce sheet sandstones that are prominent throughout the San Jose Formation. Subtle levees separated channel environments from floodplain and local lacustrine areas. Avulsion relocated channels periodically to areas on the floodplain, resulting in the typically disconnected sheet sandstones within muddy overbank deposits of the Regina Member.

  19. Environmental assessment : Rodent control program : San Joaquin river levee : San Luis National Wildlife Refuge

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — The Lower San Joaquin Levee District (LSJLD) requires that six miles of levee situated along the San Joaquin River on San Luis National Wildlife Refuge (SLNWR) be...

  20. Residencia San Pedro, California

    Directory of Open Access Journals (Sweden)

    Neutra, Richard J.

    1961-01-01

    Full Text Available This house represents yet another approach to the typical large Spanish house, with 7 cm teak ceilings, which Mr. and Mrs. Rados have built and in which they house their large family of children, who already have offspring of their own. Both Mr. and Mrs. Rados descend from Italian shipping families of Trieste, and Mr. Rados himself owns a shipbuilding company in the port of San Pedro, which can be seen from the house itself. The two are truly sociable, affectionate and attentive grandparents. In addition, Mrs. Rados frequently entertains and enjoys looking after the house. The house has therefore been designed to noticeably facilitate this whole range of activities.

  1. The company's mainframes join CERN's openlab for DataGrid apps and are pivotal in a new $22 million Supercomputer in the U.K.

    CERN Multimedia

    2002-01-01

    Hewlett-Packard has installed a supercomputer system valued at more than $22 million at the Wellcome Trust Sanger Institute (WTSI) in the U.K. HP has also joined the CERN openlab for DataGrid applications (1 page).

  2. Earth and environmental science in the 1980's: Part 1: Environmental data systems, supercomputer facilities and networks

    Science.gov (United States)

    1986-01-01

    Overview descriptions of on-line environmental data systems, supercomputer facilities, and networks are presented. Each description addresses the concepts of content, capability, and user access relevant to the point of view of potential utilization by the Earth and environmental science community. The information on similar systems or facilities is presented in parallel fashion to encourage and facilitate intercomparison. In addition, summary sheets are given for each description, and a summary table precedes each section.

  3. A Parallel Supercomputer Implementation of a Biological Inspired Neural Network and its use for Pattern Recognition

    Science.gov (United States)

    de Ladurantaye, Vincent; Lavoie, Jean; Bergeron, Jocelyn; Parenteau, Maxime; Lu, Huizhong; Pichevar, Ramin; Rouat, Jean

    2012-02-01

    A parallel implementation of a large spiking neural network is proposed and evaluated. The neural network implements the binding-by-synchrony process using the Oscillatory Dynamic Link Matcher (ODLM). Scalability, speed and performance are compared for two implementations: Message Passing Interface (MPI) and Compute Unified Device Architecture (CUDA), running on clusters of multicore supercomputers and on NVIDIA graphical processing units, respectively. A global spiking list that represents at each instant the state of the neural network is described. This list indexes each neuron that fires during the current simulation time so that the influence of their spikes is processed simultaneously on all computing units. Our implementation shows good scalability for very large networks. A complex and large spiking neural network has been implemented in parallel with success, thus paving the road towards real-life applications based on networks of spiking neurons. MPI offers better scalability than CUDA, while the CUDA implementation on a GeForce GTX 285 gives the best cost-to-performance ratio. When running the neural network on the GTX 285, the processing speed is comparable to the MPI implementation on RQCHP's Mammouth parallel cluster with 64 nodes (128 cores).

  4. Massively-parallel electrical-conductivity imaging of hydrocarbons using the Blue Gene/L supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Commer, M.; Newman, G.A.; Carazzone, J.J.; Dickens, T.A.; Green, K.E.; Wahrmund, L.A.; Willen, D.E.; Shiu, J.

    2007-05-16

    Large-scale controlled source electromagnetic (CSEM) three-dimensional (3D) geophysical imaging is now receiving considerable attention for electrical conductivity mapping of potential offshore oil and gas reservoirs. To cope with the typically large computational requirements of the 3D CSEM imaging problem, our strategies exploit computational parallelism and optimized finite-difference meshing. We report on an imaging experiment, utilizing 32,768 tasks/processors on the IBM Watson Research Blue Gene/L (BG/L) supercomputer. Over a 24-hour period, we were able to image a large scale marine CSEM field data set that previously required over four months of computing time on distributed clusters utilizing 1024 tasks on an Infiniband fabric. The total initial data misfit could be decreased by 67 percent within 72 completed inversion iterations, indicating an electrically resistive region in the southern survey area below a depth of 1500 m below the seafloor. The major part of the residual misfit stems from transmitter parallel receiver components that have an offset from the transmitter sail line (broadside configuration). Modeling confirms that improved broadside data fits can be achieved by considering anisotropic electrical conductivities. While delivering a satisfactory gross scale image for the depths of interest, the experiment provides important evidence for the necessity of discriminating between horizontal and vertical conductivities for maximally consistent 3D CSEM inversions.

  5. 369 TFlop/s molecular dynamics simulations on the Roadrunner general-purpose heterogeneous supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Swaminarayan, Sriram [Los Alamos National Laboratory]; Germann, Timothy C. [Los Alamos National Laboratory]; Kadau, Kai [Los Alamos National Laboratory]; Fossum, Gordon C. [IBM CORPORATION]

    2008-01-01

    The authors present timing and performance numbers for a short-range parallel molecular dynamics (MD) code, SPaSM, that has been rewritten for the heterogeneous Roadrunner supercomputer. Each Roadrunner compute node consists of two AMD Opteron dual-core microprocessors and four PowerXCell 8i enhanced Cell microprocessors, so that there are four MPI ranks per node, each with one Opteron and one Cell. The interatomic forces are computed on the Cells (each with one PPU and eight SPU cores), while the Opterons are used to direct inter-rank communication and perform I/O-heavy periodic analysis, visualization, and checkpointing tasks. The performance measured for our initial implementation of a standard Lennard-Jones pair potential benchmark reached a peak of 369 Tflop/s double-precision floating-point performance on the full Roadrunner system (27.7% of peak), corresponding to 124 MFlop/Watt/s at a price of approximately 3.69 MFlops/dollar. They demonstrate an initial target application, the jetting and ejection of material from a shocked surface.
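
    For reference, the Lennard-Jones pair interaction used as the benchmark potential above is simple enough to state in a few lines (reduced units with epsilon = sigma = 1; this is a generic textbook kernel, not SPaSM's Cell-optimized implementation):

        /* Lennard-Jones pair energy and force/r for one pair at squared
         * separation r2: U(r) = 4[(1/r)^12 - (1/r)^6]. */
        #include <stdio.h>

        static void lj_pair(double r2, double *f_over_r, double *u)
        {
            double inv_r2 = 1.0 / r2;
            double s6  = inv_r2 * inv_r2 * inv_r2;   /* (sigma/r)^6  */
            double s12 = s6 * s6;                    /* (sigma/r)^12 */
            *u = 4.0 * (s12 - s6);
            *f_over_r = 24.0 * (2.0 * s12 - s6) * inv_r2;
        }

        int main(void)
        {
            double f_over_r, u;
            lj_pair(1.122462 * 1.122462, &f_over_r, &u);   /* near the LJ minimum */
            printf("u = %f, f/r = %f\n", u, f_over_r);
            return 0;
        }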

  6. Cell-based Adaptive Mesh Refinement on the GPU with Applications to Exascale Supercomputing

    Science.gov (United States)

    Trujillo, Dennis; Robey, Robert; Davis, Neal; Nicholaeff, David

    2011-10-01

    We present an OpenCL implementation of a cell-based adaptive mesh refinement (AMR) scheme for the shallow water equations. The challenges associated with ensuring the locality of the algorithm architecture to fully exploit the massive number of parallel threads on the GPU are discussed. This includes a proof of concept that a cell-based AMR code can be effectively implemented, even on a small scale, in the memory and threading model provided by OpenCL. Additionally, the program requires dynamic memory in order to properly implement the mesh; as this is not supported in the OpenCL 1.1 standard, a combination of CPU memory management and GPU computation effectively implements a dynamic memory allocation scheme. Load balancing is achieved through a new stencil-based implementation of a space-filling curve, eliminating the need for a complete recalculation of the indexing on the mesh. A Cartesian grid hash-table scheme to allow fast parallel neighbor accesses is also discussed. Finally, the relative speedup of the GPU-enabled AMR code is compared to the original serial version. We conclude that parallelization using the GPU provides significant speedup for typical numerical applications and is feasible for scientific applications in the next generation of supercomputing.
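
    Space-filling-curve orderings of this kind are often built from Morton (Z-order) indices, obtained by interleaving the bits of the cell coordinates. The sketch below shows only the generic 2-D bit-interleaving trick; the paper's stencil-based curve construction and hash-table neighbor lookup differ in detail:

        /* Z-order (Morton) index for 2-D cell coordinates, the kind of
         * ordering used to keep nearby cells nearby in memory and to
         * partition work among processors. */
        #include <stdio.h>
        #include <stdint.h>

        /* Spread the lower 16 bits of v so that a zero bit separates each bit. */
        static uint32_t part1by1(uint32_t v)
        {
            v &= 0x0000ffff;
            v = (v | (v << 8)) & 0x00ff00ff;
            v = (v | (v << 4)) & 0x0f0f0f0f;
            v = (v | (v << 2)) & 0x33333333;
            v = (v | (v << 1)) & 0x55555555;
            return v;
        }

        static uint32_t morton2d(uint32_t i, uint32_t j)
        {
            return (part1by1(j) << 1) | part1by1(i);
        }

        int main(void)
        {
            for (uint32_t j = 0; j < 4; j++)
                for (uint32_t i = 0; i < 4; i++)
                    printf("cell (%u,%u) -> morton %u\n", i, j, morton2d(i, j));
            return 0;
        }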

  7. Distributed computing as a virtual supercomputer: Tools to run and manage large-scale BOINC simulations

    Science.gov (United States)

    Giorgino, Toni; Harvey, M. J.; de Fabritiis, Gianni

    2010-08-01

    Distributed computing (DC) projects tackle large computational problems by exploiting the donated processing power of thousands of volunteered computers, connected through the Internet. To efficiently employ the computational resources of one of the world's largest DC efforts, GPUGRID, the project scientists require tools that handle hundreds of thousands of tasks which run asynchronously and generate gigabytes of data every day. We describe RBoinc, an interface that allows computational scientists to embed the DC methodology into the daily work-flow of high-throughput experiments. By extending the Berkeley Open Infrastructure for Network Computing (BOINC), the leading open-source middleware for current DC projects, with mechanisms to submit and manage large-scale distributed computations from individual workstations, RBoinc turns distributed grids into cost-effective virtual resources that can be employed by researchers in work-flows similar to conventional supercomputers. The GPUGRID project is currently using RBoinc for all of its in silico experiments based on molecular dynamics methods, including the determination of binding free energies and free energy profiles in all-atom models of biomolecules.

  8. A user-friendly web portal for T-Coffee on supercomputers

    Directory of Open Access Journals (Sweden)

    Koetsier Jos

    2011-05-01

    Full Text Available Abstract Background: Parallel T-Coffee (PTC) was the first parallel implementation of the T-Coffee multiple sequence alignment tool. It is based on MPI and RMA mechanisms. Its purpose is to reduce the execution time of large-scale sequence alignments. It can be run on distributed memory clusters, allowing users to align data sets consisting of hundreds of proteins within a reasonable time. However, most of the potential users of this tool are not familiar with the use of grids or supercomputers. Results: In this paper we show how PTC can be easily deployed and controlled on a supercomputer architecture using a web portal developed using Rapid. Rapid is a tool for efficiently generating standardized portlets for a wide range of applications, and the approach described here is generic enough to be applied to other applications, or to deploy PTC on different HPC environments. Conclusions: The PTC portal allows users to upload a large number of sequences to be aligned by the parallel version of TC that cannot be aligned by a single machine due to memory and execution time constraints. The web portal provides a user-friendly solution.

  9. PFLOTRAN: Reactive Flow & Transport Code for Use on Laptops to Leadership-Class Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Glenn E.; Lichtner, Peter C.; Lu, Chuan; Mills, Richard T.

    2012-04-18

    PFLOTRAN, a next-generation reactive flow and transport code for modeling subsurface processes, has been designed from the ground up to run efficiently on machines ranging from leadership-class supercomputers to laptops. Based on an object-oriented design, the code is easily extensible to incorporate additional processes. It can interface seamlessly with Fortran 9X, C and C++ codes. Domain decomposition parallelism is employed, with the PETSc parallel framework used to manage parallel solvers, data structures and communication. Features of the code include a modular input file, implementation of high-performance I/O using parallel HDF5, ability to perform multiple realization simulations with multiple processors per realization in a seamless manner, and multiple modes for multiphase flow and multicomponent geochemical transport. Chemical reactions currently implemented in the code include homogeneous aqueous complexing reactions and heterogeneous mineral precipitation/dissolution, ion exchange, surface complexation and a multirate kinetic sorption model. PFLOTRAN has demonstrated petascale performance using 2^17 processor cores with over 2 billion degrees of freedom. Accomplishments achieved to date include applications to the Hanford 300 Area and modeling CO₂ sequestration in deep geologic formations.
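
    The division of labor described above, with PETSc managing the parallel data structures and solvers while the application assembles its discretized operators, follows a standard pattern. The sketch below is not PFLOTRAN code; it assumes PETSc 3.5 or later and solves only a toy tridiagonal system, with the solver and preconditioner left to run-time options:

        /* Minimal PETSc usage sketch: assemble a distributed tridiagonal
         * matrix, then let KSP (chosen via command-line options) solve it. */
        #include <petscksp.h>

        int main(int argc, char **argv)
        {
            PetscInitialize(&argc, &argv, NULL, NULL);

            const PetscInt n = 100;
            Mat A;  Vec x, b;  KSP ksp;

            MatCreate(PETSC_COMM_WORLD, &A);
            MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
            MatSetFromOptions(A);
            MatSetUp(A);

            PetscInt istart, iend;
            MatGetOwnershipRange(A, &istart, &iend);
            for (PetscInt i = istart; i < iend; i++) {
                if (i > 0)     MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES);
                if (i < n - 1) MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES);
                MatSetValue(A, i, i, 2.0, INSERT_VALUES);
            }
            MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
            MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

            VecCreate(PETSC_COMM_WORLD, &b);
            VecSetSizes(b, PETSC_DECIDE, n);
            VecSetFromOptions(b);
            VecDuplicate(b, &x);
            VecSet(b, 1.0);                      /* right-hand side */

            KSPCreate(PETSC_COMM_WORLD, &ksp);
            KSPSetOperators(ksp, A, A);
            KSPSetFromOptions(ksp);              /* e.g. -ksp_type cg -pc_type jacobi */
            KSPSolve(ksp, b, x);

            KSPDestroy(&ksp); MatDestroy(&A); VecDestroy(&x); VecDestroy(&b);
            PetscFinalize();
            return 0;
        }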

  10. A user-friendly web portal for T-Coffee on supercomputers.

    Science.gov (United States)

    Rius, Josep; Cores, Fernando; Solsona, Francesc; van Hemert, Jano I; Koetsier, Jos; Notredame, Cedric

    2011-05-12

    Parallel T-Coffee (PTC) was the first parallel implementation of the T-Coffee multiple sequence alignment tool. It is based on MPI and RMA mechanisms. Its purpose is to reduce the execution time of the large-scale sequence alignments. It can be run on distributed memory clusters allowing users to align data sets consisting of hundreds of proteins within a reasonable time. However, most of the potential users of this tool are not familiar with the use of grids or supercomputers. In this paper we show how PTC can be easily deployed and controlled on a super computer architecture using a web portal developed using Rapid. Rapid is a tool for efficiently generating standardized portlets for a wide range of applications and the approach described here is generic enough to be applied to other applications, or to deploy PTC on different HPC environments. The PTC portal allows users to upload a large number of sequences to be aligned by the parallel version of TC that cannot be aligned by a single machine due to memory and execution time constraints. The web portal provides a user-friendly solution.

  11. Benchmarking Further Single Board Computers for Building a Mini Supercomputer for Simulation of Telecommunication Systems

    Directory of Open Access Journals (Sweden)

    Gábor Lencse

    2016-01-01

    Full Text Available Parallel Discrete Event Simulation (PDES) with the conservative synchronization method can be efficiently used for the performance analysis of telecommunication systems because of their good lookahead properties. For PDES, a cost-effective execution platform may be built by using single board computers (SBCs), which offer relatively high computation capacity compared to their price or power consumption, and especially to the space they take up. A benchmarking method is proposed and its operation is demonstrated by benchmarking ten different SBCs, namely Banana Pi, Beaglebone Black, Cubieboard2, Odroid-C1+, Odroid-U3+, Odroid-XU3 Lite, Orange Pi Plus, Radxa Rock Lite, Raspberry Pi Model B+, and Raspberry Pi 2 Model B+. Their benchmarking results are compared to find out which one should be used for building a mini supercomputer for parallel discrete-event simulation of telecommunication systems. The SBCs are also used to build a heterogeneous cluster and the performance of the cluster is tested, too.

  12. Bringing ATLAS production to HPC resources - A use case with the Hydra supercomputer of the Max Planck Society

    Science.gov (United States)

    Kennedy, J. A.; Kluth, S.; Mazzaferro, L.; Walker, Rodney

    2015-12-01

    The possible usage of HPC resources by ATLAS is now becoming viable due to the changing nature of these systems, and it is also very attractive due to the need for increasing amounts of simulated data. In recent years the architecture of HPC systems has evolved, moving away from specialized monolithic systems to a more generic Linux-type platform. This change means that the deployment of non-HPC-specific codes has become much easier. The timing of this evolution perfectly suits the needs of ATLAS and opens a new window of opportunity. The ATLAS experiment at CERN will begin a period of high-luminosity data taking in 2015. This high-luminosity phase will be accompanied by a need for increasing amounts of simulated data, which is expected to exceed the capabilities of the current Grid infrastructure. ATLAS aims to address this need by opportunistically accessing resources such as cloud and HPC systems. This paper presents the results of a pilot project undertaken by ATLAS and the MPP/RZG to provide access to the HYDRA supercomputer facility. Hydra, the supercomputer of the Max Planck Society, is a Linux-based machine with over 80,000 cores and 4,000 physical nodes located at the RZG near Munich. This paper describes the work undertaken to integrate Hydra into the ATLAS production system by using the Nordugrid ARC-CE and other standard Grid components. The customization of these components and the strategies for HPC usage are discussed, as well as possibilities for future directions.

  13. Crustal structure of the coastal and marine San Francisco Bay region, California

    Science.gov (United States)

    Parsons, Tom

    2002-01-01

    As of the time of this writing, the San Francisco Bay region is home to about 6.8 million people, ranking fifth among population centers in the United States. Most of these people live on the coastal lands along San Francisco Bay, the Sacramento River delta, and the Pacific coast. The region straddles the tectonic boundary between the Pacific and North American Plates and is crossed by several strands of the San Andreas Fault system. These faults, which are stressed by about 4 cm of relative plate motion each year, pose an obvious seismic hazard.

  14. Vegetation - San Felipe Valley [ds172

    Data.gov (United States)

    California Department of Resources — This Vegetation Map of the San Felipe Valley Wildlife Area in San Diego County, California is based on vegetation samples collected in the field in 2002 and 2005 and...

  15. 75 FR 39166 - Safety Zone; San Francisco Giants Baseball Game Promotion, San Francisco, CA

    Science.gov (United States)

    2010-07-08

    ... SECURITY Coast Guard 33 CFR Part 165 RIN 1625-AA00 Safety Zone; San Francisco Giants Baseball Game... Bay off San Francisco, CA in support of the San Francisco Giants Baseball Game Promotion. This safety... Giants will sponsor the San Francisco Giants Baseball Game Promotion on July 16, 2010, on the...

  16. 78 FR 21403 - Notice of Inventory Completion: San Francisco State University NAGPRA Program, San Francisco, CA

    Science.gov (United States)

    2013-04-10

    ... NAGPRA Program, c/o Department of Anthropology, San Francisco State University, 1600 Holloway Avenue, San...: Pursuant to 25 U.S.C. 3001(9), the human remains described in this notice represent the physical remains of... NAGPRA Program, c/o Department of Anthropology, San Francisco State University, 1600 Holloway Avenue, San...

  17. 76 FR 55796 - Safety Zone; TriRock Triathlon, San Diego Bay, San Diego, CA

    Science.gov (United States)

    2011-09-09

    ... SECURITY Coast Guard 33 CFR Part 165 RIN 1625-AA00 Safety Zone; TriRock Triathlon, San Diego Bay, San Diego.... Basis and Purpose Competitor Group is sponsoring the TriRock Triathlon, consisting of 2000 swimmers....T11-431 Safety Zone; TriRock Triathlon, San Diego Bay, San Diego, CA. (a) Location. The limits of...

  18. 76 FR 45693 - Safety Zone; San Diego POPS Fireworks, San Diego, CA

    Science.gov (United States)

    2011-08-01

    ... SECURITY Coast Guard 33 CFR Part 165 RIN 1625-AA00 Safety Zone; San Diego POPS Fireworks, San Diego, CA... temporary safety zone on the navigable waters of San Diego Bay in support of the San Diego POPS Fireworks... Diego, CA; telephone (619) 278- 7262, e-mail Shane.E.Jakcson@uscg.mil . If you have questions on viewing...

  19. 78 FR 38584 - Safety Zone; San Diego Symphony Summer POPS Fireworks 2013 Season, San Diego, CA

    Science.gov (United States)

    2013-06-27

    ... SECURITY Coast Guard 33 CFR Part 165 RIN 1625-AA00 Safety Zone; San Diego Symphony Summer POPS Fireworks 2013 Season, San Diego, CA AGENCY: Coast Guard, DHS. ACTION: Temporary final rule. SUMMARY: The Coast Guard is establishing a safety zone on the navigable waters of San Diego Bay in support of the San Diego...

  20. 76 FR 75908 - Notice of Inventory Completion: The University of California, San Diego, San Diego, CA

    Science.gov (United States)

    2011-12-05

    ... National Park Service Notice of Inventory Completion: The University of California, San Diego, San Diego... California on behalf of the University of California, San Diego, have completed an inventory of human remains... contact the University of California, San Diego. Disposition of the human remains and associated funerary...

  1. 77 FR 42647 - Safety Zone: San Diego Symphony POPS Fireworks; San Diego, CA

    Science.gov (United States)

    2012-07-20

    ... SECURITY Coast Guard 33 CFR Part 165 RIN 1625-AA00 Safety Zone: San Diego Symphony POPS Fireworks; San Diego, CA AGENCY: Coast Guard, DHS. ACTION: Temporary final rule. SUMMARY: The Coast Guard is establishing a safety zone on the navigable waters of San Diego Bay in support of the San Diego Symphony POPS...

  2. 75 FR 77756 - Safety Zone; San Diego Parade of Lights Fireworks, San Diego, CA

    Science.gov (United States)

    2010-12-14

    ... SECURITY Coast Guard 33 CFR Part 165 RIN 1625-AA00 Safety Zone; San Diego Parade of Lights Fireworks, San Diego, CA AGENCY: Coast Guard, DHS. ACTION: Temporary final rule. SUMMARY: The Coast Guard is establishing a safety zone upon the navigable water of the San Diego Bay in San Diego, CA in support of the two...

  3. The San Bernabe power substation; La subestacion San Bernabe

    Energy Technology Data Exchange (ETDEWEB)

    Chavez Sanudo, Andres D. [Luz y Fuerza del Centro, Mexico, D. F. (Mexico)]

    1997-12-31

    The first planning studies that gave rise to the San Bernabe substation go back to the year 1985. The main circumstance supporting this decision is the gradual restriction on electric power generation that the Miguel Aleman Hydro System has been experiencing, up to its complete disappearance, in order to give priority to the potable water supply through the Cutzamala pumping system, a major supplier to Mexico City and the State of Mexico. In this document the author describes the construction project of the San Bernabe substation; mention is made of the technological experience gained during construction, and its geographical location is shown, together with its one-line diagram.

  4. Influence of Earth crust composition on continental collision style in Precambrian conditions: Results of supercomputer modelling

    Science.gov (United States)

    Zavyalov, Sergey; Zakharov, Vladimir

    2016-04-01

    A number of issues concerning Precambrian geodynamics remain unsolved because of the uncertainty of many physical (thermal regime, lithosphere thickness, crust thickness, etc.) and chemical (mantle composition, crust composition) parameters, which differed considerably from present-day values. In this work, we show results of numerical supercomputer simulations based on a petrological and thermomechanical 2D model, which simulates the process of collision between two continental plates, each 80-160 km thick, with various convergence rates ranging from 5 to 15 cm/year. In the model, the upper mantle temperature is 150-200 °C higher than the modern value, while the continental crust radiogenic heat production is higher than the present value by a factor of 1.5. These settings correspond to Archean conditions. The present study investigates the dependence of collision style on various continental crust parameters, especially on crust composition. The following three archetypal settings of continental crust composition are examined: 1) completely felsic continental crust; 2) basic lower crust and felsic upper crust; 3) basic upper crust and felsic lower crust (hereinafter referred to as inverted crust). Modeling results show that collision with completely felsic crust is unlikely. In the case of a basic lower crust, continental subduction and subsequent continental rock exhumation can take place. Therefore, formation of ultra-high-pressure metamorphic rocks is possible. Continental subduction also occurs in the case of inverted continental crust. However, in the latter case, the exhumation of felsic rocks is blocked by the upper basic layer and their subsequent interaction depends on their volume ratio. Thus, if the total inverted crust thickness is about 15 km and the thicknesses of the two layers are equal, felsic rocks cannot be exhumed. If the total thickness is 30 to 40 km and that of the felsic layer is 20 to 25 km, it breaks through the basic layer leading to

  5. A Spanish Borderlands Community: San Antonio.

    Science.gov (United States)

    Teja, Jesus F. de la

    2000-01-01

    Discusses the founding of San Antonio, originally San Antonio de Bexar, which, in 1718, came into being as a military settlement involved in Spanish imperial defensive measures. Focuses on the development and continued growth of San Antonio, Texas's most populous city in the 19th century. (CMK)

  6. PV Validation and Bankability Workshop: San Jose, California

    Energy Technology Data Exchange (ETDEWEB)

    Granata, J.; Howard, J.

    2011-12-01

    This report is a collaboration between Sandia National Laboratories, the National Renewable Energy Laboratory, and the Florida Solar Energy Center (FSEC). The report provides feedback from the U.S. Department of Energy's (DOE) Solar Program PV Validation and Bankability Workshop in San Jose, California on August 31, 2011. It focuses on the current state of PV in the United States, private funding to fund U.S. PV industry growth, roles and functions of the regional test center program, and ways to improve the current validation and bankability practices.

  7. Convair Astronautics, San Diego (California

    Directory of Open Access Journals (Sweden)

    Pereira & Luckman, Architects

    1960-05-01

    Full Text Available This brilliant and spectacular industrial complex has been created especially for the research and manufacture of intercontinental rockets and space vehicles for the U.S. Air Force, in the vicinity of San Diego and near the Sycamore Canyon test range.

  8. The Eastern California Shear Zone as the northward extension of the southern San Andreas Fault

    Science.gov (United States)

    Thatcher, Wayne R.; Savage, James C.; Simpson, Robert W.

    2016-01-01

    Cluster analysis offers an agnostic way to organize and explore features of the current GPS velocity field without reference to geologic information or physical models using information only contained in the velocity field itself. We have used cluster analysis of the Southern California Global Positioning System (GPS) velocity field to determine the partitioning of Pacific-North America relative motion onto major regional faults. Our results indicate the large-scale kinematics of the region is best described with two boundaries of high velocity gradient, one centered on the Coachella section of the San Andreas Fault and the Eastern California Shear Zone and the other defined by the San Jacinto Fault south of Cajon Pass and the San Andreas Fault farther north. The ~120 km long strand of the San Andreas between Cajon Pass and Coachella Valley (often termed the San Bernardino and San Gorgonio sections) is thus currently of secondary importance and carries lesser amounts of slip over most or all of its length. We show these first order results are present in maps of the smoothed GPS velocity field itself. They are also generally consistent with currently available, loosely bounded geologic and geodetic fault slip rate estimates that alone do not provide useful constraints on the large-scale partitioning we show here. Our analysis does not preclude the existence of smaller blocks and more block boundaries in Southern California. However, attempts to identify smaller blocks along and adjacent to the San Gorgonio section were not successful.

  9. The PVM (Parallel Virtual Machine) system: Supercomputer level concurrent computation on a network of IBM RS/6000 power stations

    Energy Technology Data Exchange (ETDEWEB)

    Sunderam, V.S. (Emory Univ., Atlanta, GA (USA). Dept. of Mathematics and Computer Science); Geist, G.A. (Oak Ridge National Lab., TN (USA))

    1991-01-01

    The PVM (Parallel Virtual Machine) system enables supercomputer-level concurrent computations to be performed on interconnected networks of heterogeneous computer systems. Specifically, a network of 13 IBM RS/6000 powerstations has been successfully used to execute production quality runs of superconductor modeling codes at more than 250 Mflops. This work demonstrates the effectiveness of cooperative concurrent processing for high performance applications, and shows that supercomputer-level computations may be attained at a fraction of the cost on distributed computing platforms. This paper describes the PVM programming environment and user facilities, as they apply to hardware platforms comprising a network of IBM RS/6000 powerstations. The salient design features of PVM will be discussed, including heterogeneity, scalability, multilanguage support, provisions for fault tolerance, the use of multiprocessors and scalar machines, an interactive graphical front end, and support for profiling, tracing, and visual analysis. The PVM system has been used extensively, and a range of production quality concurrent applications has been successfully executed using PVM on a variety of networked platforms. The paper will mention representative examples, and discuss two in detail. The first is a material sciences problem that was originally developed on a Cray 2. This application code calculates the electronic structure of metallic alloys from first principles and is based on the KKR-CPA algorithm. The second is a molecular dynamics simulation for calculating materials properties. Performance results for both applications on networks of RS/6000 powerstations will be presented, and accompanied by discussions of the other advantages of PVM and its potential as a complement or alternative to conventional supercomputers.
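
    The flavor of the PVM programming environment can be conveyed with a minimal master program (a hedged sketch only: the "worker" executable, message tag and task count are hypothetical and error handling is omitted; the pvm3.h calls themselves are the standard PVM 3 interface):

        /* PVM master: spawn worker tasks across the virtual machine and
         * collect one integer result from each. */
        #include <stdio.h>
        #include <pvm3.h>

        #define NWORKERS   4
        #define TAG_RESULT 1

        int main(void)
        {
            int tids[NWORKERS];

            /* Spawn copies of a (hypothetical) "worker" program anywhere
             * in the parallel virtual machine. */
            int started = pvm_spawn("worker", (char **)0, PvmTaskDefault,
                                    "", NWORKERS, tids);

            for (int i = 0; i < started; i++) {
                int bufid = pvm_recv(-1, TAG_RESULT);   /* any sender       */
                int bytes, tag, tid, value;
                pvm_bufinfo(bufid, &bytes, &tag, &tid); /* who sent it      */
                pvm_upkint(&value, 1, 1);               /* unpack one int   */
                printf("got %d from task t%x\n", value, tid);
            }

            pvm_exit();
            return 0;
        }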

  10. Calculation of Free Energy Landscape in Multi-Dimensions with Hamiltonian-Exchange Umbrella Sampling on Petascale Supercomputer.

    Science.gov (United States)

    Jiang, Wei; Luo, Yun; Maragliano, Luca; Roux, Benoît

    2012-11-13

    An extremely scalable computational strategy is described for calculations of the potential of mean force (PMF) in multiple dimensions on massively distributed supercomputers. The approach involves coupling thousands of umbrella sampling (US) simulation windows, distributed to cover the space of order parameters, with a Hamiltonian molecular dynamics replica-exchange (H-REMD) algorithm to enhance the sampling of each simulation. In the present application, US/H-REMD is carried out in a two-dimensional (2D) space and exchanges are attempted alternately along the two axes corresponding to the two order parameters. The US/H-REMD strategy is implemented on the basis of a parallel/parallel multiple copy protocol at the MPI level, and therefore can fully exploit the computing power of large-scale supercomputers. Here the novel technique is illustrated using the leadership supercomputer IBM Blue Gene/P with an application to a typical biomolecular calculation of general interest, namely the binding of calcium ions to the small protein Calbindin D9k. The free energy landscape associated with two order parameters, the distance between the ion and its binding pocket and the root-mean-square deviation (rmsd) of the binding pocket relative to the crystal structure, was calculated using the US/H-REMD method. The results are then used to estimate the absolute binding free energy of calcium ion to Calbindin D9k. The tests demonstrate that the 2D US/H-REMD scheme greatly accelerates the configurational sampling of the binding pocket, thereby improving the convergence of the potential of mean force calculation.
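
    A hedged sketch of the exchange step underlying such a 2D umbrella-sampling replica-exchange scheme is given below: two neighbouring windows along one order-parameter axis attempt to swap configurations with a Metropolis criterion based on their harmonic biasing potentials. The force constant, window centers, and inverse temperature are illustrative assumptions, not the parameters used in the paper.

      import numpy as np

      rng = np.random.default_rng(0)
      beta = 1.0 / 0.593          # inverse temperature in 1/(kcal/mol) near 300 K (assumed)
      k_umb = 10.0                # harmonic umbrella force constant (assumed)

      def bias(x, center):
          """Harmonic umbrella bias U_i(x) = 0.5 * k * |x - x0_i|^2 in the 2D order-parameter space."""
          return 0.5 * k_umb * np.sum((x - center) ** 2)

      def attempt_exchange(x_i, c_i, x_j, c_j):
          """Metropolis acceptance for swapping configurations between windows i and j."""
          delta = (bias(x_j, c_i) + bias(x_i, c_j)) - (bias(x_i, c_i) + bias(x_j, c_j))
          if rng.random() < np.exp(-beta * delta):
              return x_j, x_i, True      # accepted: configurations change windows
          return x_i, x_j, False

      # Two neighbouring windows along the first order-parameter axis (toy values).
      center_i, center_j = np.array([3.0, 1.0]), np.array([3.5, 1.0])
      x_i, x_j = np.array([3.1, 1.05]), np.array([3.4, 0.95])
      print(attempt_exchange(x_i, center_i, x_j, center_j))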

  11. Scalable geocomputation: evolving an environmental model building platform from single-core to supercomputers

    Science.gov (United States)

    Schmitz, Oliver; de Jong, Kor; Karssenberg, Derek

    2017-04-01

    There is an increasing demand to run environmental models at large scale: simulations over large areas at high resolution. The heterogeneity of available computing hardware, such as multi-core CPUs, GPUs, or supercomputers, potentially provides significant computing power to fulfil this demand. However, this requires detailed knowledge of the underlying hardware, parallel algorithm design, and the implementation thereof in an efficient system programming language. Domain scientists such as hydrologists or ecologists often lack this specific software engineering knowledge; their emphasis is (and should be) on exploratory building and analysis of simulation models. As a result, models constructed by domain specialists mostly do not take full advantage of the available hardware. A promising solution is to separate the model building activity from software engineering by offering domain specialists a model building framework with pre-programmed building blocks that they combine to construct a model. The model building framework, consequently, needs built-in capabilities to make full use of the available hardware. Developing such a framework that provides understandable code for domain scientists while being runtime-efficient poses several challenges for framework developers. For example, optimisations can be performed on individual operations or on the whole model, and tasks need to be generated for a well-balanced execution without explicitly knowing the complexity of the domain problem provided by the modeller. Ideally, a modelling framework supports the optimal use of available hardware whichever combination of model building blocks scientists use. We present our ongoing work on developing parallel algorithms for spatio-temporal modelling and demonstrate 1) PCRaster, an environmental software framework (http://www.pcraster.eu) providing spatio-temporal model building blocks and 2) the parallelisation of about 50 of these building blocks using
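
    As a hedged illustration of the kind of pre-programmed building block such a framework might parallelise behind the scenes (this is not PCRaster's actual API), the sketch below applies a simple cell-by-cell map-algebra operation to raster tiles in parallel with a Python process pool; the tile count, raster size, and the operation itself are assumptions.

      import numpy as np
      from concurrent.futures import ProcessPoolExecutor

      def local_op(tile):
          """A 'local' building block: cell-by-cell computation on one raster tile."""
          return np.sqrt(np.maximum(tile, 0.0))

      def run_blockwise(raster, n_blocks=4, workers=4):
          """Split the raster into row blocks, apply the operation in parallel,
          and stitch the results back together."""
          blocks = np.array_split(raster, n_blocks, axis=0)
          with ProcessPoolExecutor(max_workers=workers) as pool:
              results = list(pool.map(local_op, blocks))
          return np.vstack(results)

      if __name__ == "__main__":
          elevation = np.random.default_rng(1).uniform(0, 100, size=(2000, 2000))
          out = run_blockwise(elevation)
          print(out.shape)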

  12. Comparison of neuronal spike exchange methods on a Blue Gene/P supercomputer.

    Science.gov (United States)

    Hines, Michael; Kumar, Sameer; Schürmann, Felix

    2011-01-01

    For neural network simulations on parallel machines, interprocessor spike communication can be a significant portion of the total simulation time. The performance of several spike exchange methods using a Blue Gene/P (BG/P) supercomputer has been tested with 8K-128K cores using randomly connected networks of up to 32M cells with 1k connections per cell and 4M cells with 10k connections per cell, i.e., on the order of 4×10^10 connections (K is 1024, M is 1024^2, and k is 1000). The spike exchange methods used are the standard Message Passing Interface (MPI) collective, MPI_Allgather, and several variants of the non-blocking Multisend method, either implemented via non-blocking MPI_Isend or exploiting the possibility of very low overhead direct memory access (DMA) communication available on the BG/P. In all cases, the worst performing method was that using MPI_Isend, due to the high overhead of initiating a spike communication. The two best performing methods had similar performance, with very low overhead for the initiation of spike communication: the persistent Multisend method using the Record-Replay feature of the Deep Computing Messaging Framework DCMF_Multicast, and a two-phase multisend in which a DCMF_Multicast is used to first send to a subset of phase-one destination cores, which then pass it on to their subset of phase-two destination cores. Departure from ideal scaling for the Multisend methods is almost completely due to load imbalance caused by the large variation in the number of cells that fire on each processor in the interval between synchronizations. Spike exchange time itself is negligible since transmission overlaps with computation and is handled by a DMA controller. We conclude that ideal performance scaling will ultimately be limited by imbalance in incoming processor spikes between synchronization intervals. Thus, counterintuitively, maximization of load balance requires that the distribution of cells on processors should not reflect
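
    A hedged sketch of the simplest of the compared schemes, the MPI_Allgather-style exchange in which every rank shares the identifiers of its locally fired cells with every other rank, is shown below using mpi4py; the cell counts and firing pattern are invented for illustration, and the Multisend/DCMF variants discussed above are BG/P-specific and not reproduced here.

      # Run with, e.g.:  mpiexec -n 8 python spike_allgather.py
      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      # Each rank owns a block of cell ids; a random subset "fires" this interval.
      cells_per_rank = 1000
      rng = np.random.default_rng(rank)
      local_cells = np.arange(rank * cells_per_rank, (rank + 1) * cells_per_rank)
      fired = local_cells[rng.random(cells_per_rank) < 0.02]

      # Allgather-style exchange: every rank ends up with every rank's spike list,
      # then would filter for source cells its own neurons are connected to.
      all_spikes = np.concatenate(comm.allgather(fired))
      if rank == 0:
          print("total spikes exchanged this interval:", all_spikes.size)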

  13. Proposal of a Desk-Side Supercomputer with Reconfigurable Data-Paths Using Rapid Single-Flux-Quantum Circuits

    Science.gov (United States)

    Takagi, Naofumi; Murakami, Kazuaki; Fujimaki, Akira; Yoshikawa, Nobuyuki; Inoue, Koji; Honda, Hiroaki

    We propose a desk-side supercomputer with large-scale reconfigurable data-paths (LSRDPs) using superconducting rapid single-flux-quantum (RSFQ) circuits. It has several computing units, each consisting of a general-purpose microprocessor, an LSRDP, and a memory. An LSRDP consists of a large number (e.g., a few thousand) of floating-point units (FPUs) and operand routing networks (ORNs) which connect the FPUs. We reconfigure the LSRDP to fit a computation, i.e., a group of floating-point operations that appears in a 'for' loop of numerical programs, by setting the routes in the ORNs before the execution of the loop. We propose to implement the LSRDPs with RSFQ circuits. The processors and the memories can be implemented with semiconductor technology. We expect that a 10 TFLOPS supercomputer, together with its refrigerating engine, can be housed in a desk-side rack using a near-future RSFQ process technology, such as a 0.35 μm process.

  14. San Pascual (1991) Año XXVIII, n. 284

    OpenAIRE

    Pérez, María Dolores, O.S.C. (Directora)

    1991-01-01

    Editorial. Document from the Holy See. San Pascual and the Virgen de Gracia. New magazine dedicated to San Pascual. Nocturnal adoration. From the cloister. Life in the sanctuary of San Pascual. Franciscan Order. Coplas (verses) to San Pascual Baylón. San Pascual in art. Pascualine routes. A l'ombra del claustre (In the shadow of the cloister).

  16. Evidence for Late Oligocene-Early Miocene episode of transtension along San Andreas Fault system in central California

    Energy Technology Data Exchange (ETDEWEB)

    Stanley, R.G.

    1986-04-01

    The San Andreas is one of the most intensely studied fault systems in the world, but many aspects of its kinematic history remain controversial. For example, the period from the late Eocene to early Miocene is widely believed to have been a time of negligible strike-slip movement along the San Andreas fault proper, based on the rough similarity of offset of the Eocene Butano-Point of Rocks submarine fan, the early Miocene Pinnacles-Neenach volcanic center, and an early Miocene shoreline in the northern Gabilan Range and San Emigdio Mountains. Nonetheless, evidence indicates that a late Oligocene-early Miocene episode of transtension, or strike-slip motion with a component of extension, occurred within the San Andreas fault system. The evidence includes: (1) about 22-24 Ma, widespread, synchronous volcanic activity occurred at about 12 volcanic centers along a 400-km-long segment of the central California coast; (2) most of these volcanic centers are located along faults of the San Andreas system, including the San Andreas fault proper, the San Gregorio-Hosgri fault, and the Zayante-Vergeles fault, suggesting that these and other faults were active and served as conduits for magmas rising from below; (3) during the late Oligocene and early Miocene, a pull-apart basin developed adjacent to the San Andreas fault proper in the La Honda basin near Santa Cruz; and (4) during the late Oligocene and early Miocene, active faulting, rapid subsidence, and marine transgression occurred in the La Honda and other sedimentary basins in central California. The amount of right-lateral displacement along the San Andreas fault proper during this transtensional episode is unknown but was probably about 7.5-35 km, based on model studies of pull-apart basin formation. This small amount of movement is well within the range of error in published estimates of the offset of the Eocene to early Miocene geologic features noted.

  17. Evaluation Methodologies for Information Management Systems; Building Digital Tobacco Industry Document Libraries at the University of California, San Francisco Library/Center for Knowledge Management; Experiments with the IFLA Functional Requirements for Bibliographic Records (FRBR); Coming to Term: Designing the Texas Email Repository Model.

    Science.gov (United States)

    Morse, Emile L.; Schmidt, Heidi; Butter, Karen; Rider, Cynthia; Hickey, Thomas B.; O'Neill, Edward T.; Toves, Jenny; Green, Marlan; Soy, Sue; Gunn, Stan; Galloway, Patricia

    2002-01-01

    Includes four articles that discuss evaluation methods for information management systems under the Defense Advanced Research Projects Agency; building digital libraries at the University of California San Francisco's Tobacco Control Archives; IFLA's Functional Requirements for Bibliographic Records; and designing the Texas email repository model…

  18. Species - San Diego Co. [ds121

    Data.gov (United States)

    California Department of Resources — This is the Biological Observation Database point layer representing baseline observations of sensitive species (as defined by the MSCP) throughout San Diego County....

  19. 33 CFR 165.1102 - Security Zone; Naval Base Point Loma; San Diego Bay, San Diego, CA.

    Science.gov (United States)

    2010-07-01

    ... Loma; San Diego Bay, San Diego, CA. 165.1102 Section 165.1102 Navigation and Navigable Waters COAST... Guard District § 165.1102 Security Zone; Naval Base Point Loma; San Diego Bay, San Diego, CA. (a) Location. The following area is a security zone: The water adjacent to the Naval Base Point Loma, San Diego...

  20. COMPUTATIONAL SCIENCE CENTER

    Energy Technology Data Exchange (ETDEWEB)

    DAVENPORT, J.

    2006-11-01

    Computational Science is an integral component of Brookhaven's multi-science mission, and is a reflection of the increased role of computation across all of science. Brookhaven currently has major efforts in data storage and analysis for the Relativistic Heavy Ion Collider (RHIC) and the ATLAS detector at CERN, and in quantum chromodynamics. The Laboratory is host to the QCDOC machines (quantum chromodynamics on a chip), 10 teraflop/s computers which boast 12,288 processors each. There are two here, one for the RIKEN/BNL Research Center and the other supported by DOE for the US Lattice Gauge Community and other scientific users. A 100 teraflop/s supercomputer will be installed at Brookhaven in the coming year, managed jointly by Brookhaven and Stony Brook, and funded by a grant from New York State. This machine will be used for computational science across Brookhaven's entire research program, and also by researchers at Stony Brook and across New York State. With Stony Brook, Brookhaven has formed the New York Center for Computational Science (NYCCS) as a focal point for interdisciplinary computational science, which is closely linked to Brookhaven's Computational Science Center (CSC). The CSC has established a strong program in computational science, with an emphasis on nanoscale electronic structure and molecular dynamics, accelerator design, computational fluid dynamics, medical imaging, parallel computing and numerical algorithms. We have been an active participant in DOE's SciDAC program (Scientific Discovery through Advanced Computing). We are also planning a major expansion in computational biology in keeping with Laboratory initiatives. Additional laboratory initiatives with a dependence on a high level of computation include the development of hydrodynamics models for the interpretation of RHIC data, computational models for the atmospheric transport of aerosols, and models for combustion and for energy utilization. The CSC was formed to

  1. 77 FR 57494 - Safety Zone; Fleet Week Fireworks, San Francisco Bay, San Francisco, CA

    Science.gov (United States)

    2012-09-18

    ... SECURITY Coast Guard 33 CFR Part 165 Safety Zone; Fleet Week Fireworks, San Francisco Bay, San Francisco... will enforce the safety zone for the Fleet Week Fireworks in the Captain of the Port, San Francisco...'' W (NAD83) for the Fleet Week Fireworks in 33 CFR 165.1191, Table 1, item number 25. This safety...

  2. 78 FR 10062 - Safety Zone; Sea World San Diego Fireworks, Mission Bay; San Diego, CA

    Science.gov (United States)

    2013-02-13

    ... SECURITY Coast Guard 33 CFR Part 165 RIN 1625-AA00 Safety Zone; Sea World San Diego Fireworks, Mission Bay; San Diego, CA AGENCY: Coast Guard, DHS. ACTION: Temporary final rule. SUMMARY: The Coast Guard is establishing a safety zone on the navigable waters of Mission Bay in support of the Sea World San Diego...

  3. 77 FR 42649 - Safety Zone: Sea World San Diego Fireworks, Mission Bay; San Diego, CA

    Science.gov (United States)

    2012-07-20

    ... SECURITY Coast Guard 33 CFR Part 165 RIN 1625-AA00 Safety Zone: Sea World San Diego Fireworks, Mission Bay; San Diego, CA AGENCY: Coast Guard, DHS. ACTION: Temporary final rule. SUMMARY: The Coast Guard is... rule, call or email Petty Officer David Varela, Waterways Management, U.S. Coast Guard Sector San Diego...

  4. 77 FR 60899 - Safety Zone; Sea World San Diego Fireworks, Mission Bay; San Diego, CA

    Science.gov (United States)

    2012-10-05

    ... SECURITY Coast Guard 33 CFR Part 165 RIN 1625-AA00 Safety Zone; Sea World San Diego Fireworks, Mission Bay; San Diego, CA AGENCY: Coast Guard, DHS. ACTION: Temporary final rule. SUMMARY: The Coast Guard is establishing a safety zone on the navigable waters of Mission Bay in support of the Sea World San Diego...

  5. 77 FR 42638 - Safety Zone: Sea World San Diego Fireworks, Mission Bay; San Diego, CA

    Science.gov (United States)

    2012-07-20

    ... SECURITY Coast Guard 33 CFR Part 165 RIN 1625-AA00 Safety Zone: Sea World San Diego Fireworks, Mission Bay; San Diego, CA AGENCY: Coast Guard, DHS. ACTION: Temporary final rule. SUMMARY: The Coast Guard is establishing a safety zone on the navigable waters of Mission Bay in support of the Sea World San Diego...

  6. 78 FR 29025 - Sea World San Diego Fireworks 2013 Season; Mission Bay, San Diego, CA

    Science.gov (United States)

    2013-05-17

    ... SECURITY Coast Guard 33 CFR Part 165 RIN 1625-AA00 Sea World San Diego Fireworks 2013 Season; Mission Bay, San Diego, CA AGENCY: Coast Guard, DHS. ACTION: Temporary final rule. SUMMARY: The Coast Guard is establishing a safety zone on the navigable waters of Mission Bay in support of the Sea World San Diego...

  7. 76 FR 46352 - Approval of Noise Compatibility Program for San Diego International, San Diego, CA

    Science.gov (United States)

    2011-08-02

    ... Federal Aviation Administration Approval of Noise Compatibility Program for San Diego International, San Diego, CA AGENCY: Federal Aviation Administration, DOT. ACTION: Notice . SUMMARY: The Federal Aviation Administration (FAA) announces its findings on the noise compatibility program submitted by San Diego Regional...

  8. 78 FR 77597 - Safety Zone; Allied PRA-Solid Works, San Diego Bay; San Diego, CA

    Science.gov (United States)

    2013-12-24

    ... SECURITY Coast Guard 33 CFR Part 165 RIN 1625-AA00 Safety Zone; Allied PRA-Solid Works, San Diego Bay; San Diego, CA AGENCY: Coast Guard, DHS. ACTION: Temporary final rule. SUMMARY: The Coast Guard is establishing a temporary safety zone on the navigable waters of the San Diego Bay in support of a fireworks...

  9. SF Bayweb 2009: Planting the Seeds of an Observing System in the San Francisco Bay

    Science.gov (United States)

    2010-06-01

    UC Berkeley, Berkeley, CA 94720; Toby Garfield, SFSU Romberg Tiburon Center, Tiburon, CA 94920; John Largier, UC Davis / Bodega Marine Laboratory, Bodega Bay, CA 94923. Abstract: A pilot project was recently completed in the San Francisco Bay from May 1-10, 2009, to test the use of advanced

  10. Seasonal Changes of Bioluminescence in Photosynthetic and Heterotrophic Dinoflagellates at San Clemente Island

    Science.gov (United States)

    2012-02-01

    David Lapota, Space and Naval Warfare Systems Center Pacific, USA. Introduction: A significant portion of bioluminescence in all oceans is produced by dinoflagellates. Numerous studies have documented the ubiquitous distribution of bioluminescent dinoflagellates in near surface waters (Seliger et al., 1961; Yentsch and Laird

  11. Solar Feasibility Study May 2013 - San Carlos Apache Tribe

    Energy Technology Data Exchange (ETDEWEB)

    Rapp, Jim [Parametrix; Duncan, Ken [San Carlos Apache Tribe; Albert, Steve [Parametrix

    2013-05-01

    The San Carlos Apache Tribe (Tribe) in the interests of strengthening tribal sovereignty, becoming more energy self-sufficient, and providing improved services and economic opportunities to tribal members and San Carlos Apache Reservation (Reservation) residents and businesses, has explored a variety of options for renewable energy development. The development of renewable energy technologies and generation is consistent with the Tribe’s 2011 Strategic Plan. This Study assessed the possibilities for both commercial-scale and community-scale solar development within the southwestern portions of the Reservation around the communities of San Carlos, Peridot, and Cutter, and in the southeastern Reservation around the community of Bylas. Based on the lack of any commercial-scale electric power transmission between the Reservation and the regional transmission grid, Phase 2 of this Study greatly expanded consideration of community-scale options. Three smaller sites (Point of Pines, Dudleyville/Winkleman, and Seneca Lake) were also evaluated for community-scale solar potential. Three building complexes were identified within the Reservation where the development of site-specific facility-scale solar power would be the most beneficial and cost-effective: Apache Gold Casino/Resort, Tribal College/Skill Center, and the Dudleyville (Winkleman) Casino.

  12. Cuartel San Carlos. Yacimiento veterano

    Directory of Open Access Journals (Sweden)

    Mariana Flores

    2007-01-01

    Full Text Available. The Cuartel San Carlos is a national historic monument (1986) dating from the late eighteenth century (1785-1790), notable for the setbacks suffered during its construction and for withstanding the earthquakes of 1812 and 1900. In 2006 the body responsible for its custody, the Instituto de Patrimonio Cultural of the Ministry of Culture, carried out three stages of archaeological exploration covering the back courtyard (Traspatio), the central courtyard (Patio Central), and the east and west wings of the building. This paper reviews the analysis of the archaeological documentation obtained at the site through that project, named EACUSAC (Estudio Arqueológico del Cuartel San Carlos), which also represents the third campaign carried out at the site. The importance of this historic site lies in its role in the events that gave rise to power struggles during the emergence of the Republic and in the political events of the twentieth century. The site also yielded a broad sample of archaeological material documenting everyday military life, as well as the internal social dynamics that took place in the San Carlos as a strategic place for the defense of the different regimes the country passed through, from the era of Spanish imperialism to the present day.

  13. Downtown revitalization in San Salvador

    OpenAIRE

    Ejeborn, Elisabet; Nedersjö, Julia

    2012-01-01

    The aim of this master’s thesis in spatial planning is to research the conditions in the historic city centre of San Salvador and make a strategy and urban design proposal for the area, but also to investigate the relationship between economic development, public institutions and the public space in this area. The research has been done through literature studies on El Salvador and formal/informal economy, onsite inventory, studies of good examples and interviews with people in the area. In t...

  15. A week of SRI 2003 in San Francisco

    Energy Technology Data Exchange (ETDEWEB)

    Robinson, Art

    2003-09-29

    The Eighth International Conference on Synchrotron Radiation (SRI 2003) ended its August 25-28 run at the Yerba Buena Center for the Arts in San Francisco with almost as many in attendance as at the beginning. The steady attendance was surely a tribute to the quality of the program and the excitement it generated among the more than 700 registrants who gathered for four days of plenary talks, parallel sessions, and posters, as well as facility tours of the ALS and SSRL on August 29.

  16. California State Waters Map Series: offshore of San Gregorio, California

    Science.gov (United States)

    Cochrane, Guy R.; Dartnell, Peter; Greene, H. Gary; Watt, Janet T.; Golden, Nadine E.; Endris, Charles A.; Phillips, Eleyne L.; Hartwell, Stephen R.; Johnson, Samuel Y.; Kvitek, Rikk G.; Erdey, Mercedes D.; Bretz, Carrie K.; Manson, Michael W.; Sliter, Ray W.; Ross, Stephanie L.; Dieter, Bryan E.; Chin, John L.; Cochran, Susan A.; Cochrane, Guy R.; Cochran, Susan A.

    2014-01-01

    In 2007, the California Ocean Protection Council initiated the California Seafloor Mapping Program (CSMP), designed to create a comprehensive seafloor map of high-resolution bathymetry, marine benthic habitats, and geology within the 3-nautical-mile limit of California's State Waters. The CSMP approach is to create highly detailed seafloor maps through collection, integration, interpretation, and visualization of swath sonar data, acoustic backscatter, seafloor video, seafloor photography, high-resolution seismic-reflection profiles, and bottom-sediment sampling data. The map products display seafloor morphology and character, identify potential marine benthic habitats, and illustrate both the surficial seafloor geology and shallow (to about 100 m) subsurface geology. The Offshore of San Gregorio map area is located in northern California, on the Pacific coast of the San Francisco Peninsula about 50 kilometers south of the Golden Gate. The map area lies offshore of the Santa Cruz Mountains, part of the northwest-trending Coast Ranges that run roughly parallel to the San Andreas Fault Zone. The Santa Cruz Mountains lie between the San Andreas Fault Zone and the San Gregorio Fault system. The nearest significant onshore cultural centers in the map area are San Gregorio and Pescadero, both unincorporated communities with populations well under 1,000. Both communities are situated inland of state beaches that share their names. No harbor facilities are within the Offshore of San Gregorio map area. The hilly coastal area is virtually undeveloped grazing land for sheep and cattle. The coastal geomorphology is controlled by late Pleistocene and Holocene slip in the San Gregorio Fault system. A westward bend in the San Andreas Fault Zone, southeast of the map area, coupled with right-lateral movement along the San Gregorio Fault system have caused regional folding and uplift. The coastal area consists of high coastal bluffs and vertical sea cliffs. Coastal promontories in

  17. Coherent 40 Gb/s SP-16QAM and 80 Gb/s PDM-16QAM in an Optimal Supercomputer Optical Switch Fabric

    DEFF Research Database (Denmark)

    Karinou, Fotini; Borkowski, Robert; Zibar, Darko

    2013-01-01

    We demonstrate, for the first time, the feasibility of using 40 Gb/s SP-16QAM and 80 Gb/s PDM-16QAM in an optimized cell switching supercomputer optical interconnect architecture based on semiconductor optical amplifiers as ON/OFF gates....

  18. Car2x with software defined networks, network functions virtualization and supercomputers technical and scientific preparations for the Amsterdam Arena telecoms fieldlab

    NARCIS (Netherlands)

    Meijer R.J.; Cushing R.; De Laat C.; Jackson P.; Klous S.; Koning R.; Makkes M.X.; Meerwijk A.

    2015-01-01

    In the invited talk 'Car2x with SDN, NFV and supercomputers' we report on how our past work with SDN [1, 2] allows the design of a smart mobility fieldlab in the huge parking lot of the Amsterdam Arena. We explain how we can engineer and test software that handles the complex conditions of the Car2X

  20. Predictive Upper Cretaceous to Early Miocene Paleogeography of the San Andreas Fault System

    Science.gov (United States)

    Burnham, K.

    2006-12-01

    Paleogeographic reconstruction of the region of the San Andreas fault was hampered for more than twenty years by the apparent incompatibility of authoritative lithologic correlations. These led to disparate estimates of dextral strike-slip offsets, notably 315 km between Pinnacles and Neenach Volcanics (Matthews, 1976), versus 563 km between Anchor Bay and Eagle Rest Peak (Ross et al., 1973). In addition, estimates of total dextral slip on the San Gregorio fault have ranged from 5 km to 185 km. Sixteen upper Cretaceous and Paleogene conglomerates of the California Coast Ranges, from Anchor Bay to Simi Valley, have been included in a multidisciplinary study. Detailed analysis, including microscopic petrography and microprobe geochemistry, verified Seiders and Cox's (1992) and Wentworth's (1996) correlation of the upper Cretaceous Strata of Anchor Bay with an unnamed conglomerate east of Half Moon Bay. Similar detailed study, with the addition of SHRIMP U/Pb zircon dating, verified that the Paleocene or Eocene Point Reyes Conglomerate at Point Reyes is a tectonically displaced segment of the Carmelo Formation of Point Lobos. These studies centered on identification of matching unique clast varieties, rather than on simply counting general clast types, and included analyses of matrices, fossils, paleocurrents, diagenesis, adjacent rocks, and stratigraphy. The work also led to three new correlations: the Point Reyes Conglomerate with granitic source rock at Point Lobos; a magnetic anomaly at Black Point with a magnetic anomaly near San Gregorio; and the Strata of Anchor Bay with previously established source rock, the potassium-poor Logan Gabbro (Ross et al., 1973) at a more recently recognized location (Brabb and Hanna, 1981; McLaughlin et al., 1996) just east of the San Gregorio fault, south of San Gregorio. From these correlations, an upper Cretaceous early Oligocene paleogeography of the San Andreas fault system was constructed that honors both the Anchor Bay

  1. Nonperturbative Lattice Simulation of High Multiplicity Cross Section Bound in $\phi^4_3$ on Beowulf Supercomputer

    CERN Document Server

    Charng, Y Y

    2001-01-01

    In this thesis, we have investigated the possibility of large cross sections at large multiplicity in weakly coupled three-dimensional $\phi^4$ theory using Monte Carlo simulation methods. We have built a Beowulf supercomputer for this purpose. We use spectral function sum rules to derive a bound on the total cross section, where the quantity determining the bound can be measured by Monte Carlo simulation in Euclidean space. We determine the critical threshold energy for a large high-multiplicity cross section according to the analysis of M.B. Voloshin and of E.N. Argyres, R.M.P. Kleiss, and C.G. Papadopoulos. We compare the simulation results with the perturbative results and see no evidence for large cross sections in the range where tree diagram estimates suggest they should exist.

  2. Performance Evaluation of an Intel Haswell- and Ivy Bridge-Based Supercomputer Using Scientific and Engineering Applications

    Science.gov (United States)

    Saini, Subhash; Hood, Robert T.; Chang, Johnny; Baron, John

    2016-01-01

    We present a performance evaluation conducted on a production supercomputer of the Intel Xeon Processor E5-2680v3, a twelve-core implementation of the fourth-generation Haswell architecture, and compare it with the Intel Xeon Processor E5-2680v2, an Ivy Bridge implementation of the third-generation Sandy Bridge architecture. Several new architectural features have been incorporated in Haswell, including improvements in all levels of the memory hierarchy as well as improvements to vector instructions and power management. We critically evaluate these new features of Haswell and compare with Ivy Bridge using several low-level benchmarks, including a subset of HPCC, HPCG, and four full-scale scientific and engineering applications. We also present a model that predicts the performance of HPCG and Cart3D within 5% accuracy, and Overflow within 10% accuracy.

  3. Aggregate Settling Velocities in San Francisco Estuary Margins

    Science.gov (United States)

    Allen, R. M.; Stacey, M. T.; Variano, E. A.

    2015-12-01

    One way that humans impact aquatic ecosystems is by adding nutrients and contaminants, which can propagate up the food web and cause blooms and die-offs, respectively. Often, these chemicals are attached to fine sediments, and thus where sediments go, so do these anthropogenic influences. Vertical motion of sediments is important for sinking and burial, and also for indirect effects on horizontal transport. The dynamics of sinking sediment (often in aggregates) are complex, thus we need field data to test and validate existing models. San Francisco Bay is well studied and is often used as a test case for new measurement and model techniques (Barnard et al. 2013). Settling velocities for aggregates vary between 4×10^-5 and 1.6×10^-2 m/s along the estuary backbone (Manning and Schoellhamer 2013). Model results from South San Francisco Bay shoals suggest two populations of settling particles, one fast (ws of 9 to 5.8×10^-4 m/s) and one slow (ws of Brand et al. 2015). While the open waters of San Francisco Bay and other estuaries are well studied and modeled, sediment and contaminants often originate from the margin regions, and the margins remain poorly characterized. We conducted a 24 hour field experiment in a channel slough of South San Francisco Bay, and measured settling velocity, turbulence and flow, and suspended sediment concentration. At this margin location, we found average settling velocities of 4-5×10^-5 m/s, and saw settling velocities decrease with decreasing suspended sediment concentration. These results are consistent with, though at the low end of, those seen along the estuary center, and they suggest that the two-population model that has been successful along the shoals may also apply in the margins.
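
    For orientation only (not part of the study above), the snippet below evaluates the classical Stokes settling velocity for a small particle or floc; the diameter, excess density, and viscosity are illustrative assumptions, and real estuarine aggregates are porous, so their effective densities are much lower than those of solid mineral grains.

      # Stokes settling velocity: ws = (rho_p - rho_w) * g * d**2 / (18 * mu)
      g = 9.81            # gravity, m/s^2
      mu = 1.0e-3         # dynamic viscosity of water, Pa*s

      def stokes_ws(d, excess_density):
          """Settling velocity (m/s) of a sphere of diameter d (m) with the given
          excess density (kg/m^3) over the surrounding water."""
          return excess_density * g * d ** 2 / (18.0 * mu)

      # A loosely packed 100-micron floc with ~50 kg/m^3 excess density (assumed).
      print(stokes_ws(100e-6, 50.0))   # roughly 3e-4 m/s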

  4. Parallel workflow manager for non-parallel bioinformatic applications to solve large-scale biological problems on a supercomputer.

    Science.gov (United States)

    Suplatov, Dmitry; Popova, Nina; Zhumatiy, Sergey; Voevodin, Vladimir; Švedas, Vytas

    2016-04-01

    Rapid expansion of online resources providing access to genomic, structural, and functional information associated with biological macromolecules opens an opportunity to gain a deeper understanding of the mechanisms of biological processes through systematic analysis of large datasets. This, however, requires novel strategies to optimally utilize computer processing power. Some methods in bioinformatics and molecular modeling require extensive computational resources. Other algorithms have fast implementations which take at most several hours to analyze a common input on a modern desktop station; however, because they are invoked many times over a large number of subtasks, the full task requires significant computing power. Therefore, an efficient computational solution to large-scale biological problems requires both a wise parallel implementation of resource-hungry methods and a smart workflow to manage multiple invocations of relatively fast algorithms. In this work, a new computer software package, mpiWrapper, has been developed to accommodate non-parallel implementations of scientific algorithms within a parallel supercomputing environment. The Message Passing Interface (MPI) is used to exchange information between nodes. Two specialized threads, one for task management and communication and another for subtask execution, are invoked on each processing unit to avoid deadlock while using blocking calls to MPI. The mpiWrapper can be used to launch all conventional Linux applications without the need to modify their original source codes and supports resubmission of subtasks on node failure. We show that this approach can be used to process huge amounts of biological data efficiently by running non-parallel programs in parallel mode on a supercomputer. The C++ source code and documentation are available from http://biokinet.belozersky.msu.ru/mpiWrapper.
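
    The sketch below is not the authors' mpiWrapper (which is C++), but a hedged, minimal Python illustration of the same master-worker pattern it implements: an MPI master rank hands command lines for a non-parallel program to worker ranks, which run them as external processes and report back for more work. The command list and program name are placeholders.

      # Run with, e.g.:  mpiexec -n 4 python wrapper_sketch.py
      import subprocess
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      if rank == 0:
          # Master: one command line per subtask (placeholder program and inputs).
          tasks = [["echo", f"analysing input_{i}"] for i in range(20)]
          status = MPI.Status()
          outstanding = 0
          # Seed every worker with a first task (or a stop signal if few tasks).
          for w in range(1, size):
              if tasks:
                  comm.send(tasks.pop(), dest=w, tag=1)
                  outstanding += 1
              else:
                  comm.send(None, dest=w, tag=1)
          # Keep feeding whichever worker finishes until the queue is empty.
          while outstanding:
              comm.recv(source=MPI.ANY_SOURCE, tag=2, status=status)
              outstanding -= 1
              w = status.Get_source()
              if tasks:
                  comm.send(tasks.pop(), dest=w, tag=1)
                  outstanding += 1
              else:
                  comm.send(None, dest=w, tag=1)
      else:
          while True:
              cmd = comm.recv(source=0, tag=1)
              if cmd is None:
                  break
              subprocess.run(cmd, check=False)   # launch the unmodified serial tool
              comm.send(rank, dest=0, tag=2)     # report completion, ask for more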

  5. Trouble Brewing in San Francisco. Policy Brief

    Science.gov (United States)

    Buck, Stuart

    2010-01-01

    The city of San Francisco will face enormous budgetary pressures from the growing deficits in public pensions, both at a state and local level. In this policy brief, the author estimates that San Francisco faces an aggregate $22.4 billion liability for pensions and retiree health benefits that are underfunded--including $14.1 billion for the city…

  6. Activity report of Computing Research Center

    Energy Technology Data Exchange (ETDEWEB)

    1997-07-01

    In April 1997, the National Laboratory for High Energy Physics (KEK), the Institute for Nuclear Study of the University of Tokyo (INS), and the Meson Science Laboratory of the Faculty of Science, University of Tokyo, were reorganized into the High Energy Accelerator Research Organization, with the aim of further developing a wide range of accelerator science based on high energy accelerators. Within the new organization, the Applied Research Laboratory comprises four centers that support research activities common to the whole organization and carry out related research and development (R and D), integrating the previous four centers and the related sections at Tanashi. This support covers not only general assistance but also the preparation and R and D of the systems required to promote research and to plan for the future. Computer technology is essential to the development of this research and is shared across the various research programs of the organization. In response to these expectations, the new Computing Research Center is expected to carry out its duties by working and cooperating with researchers on everything from R and D on data analysis for various experiments to computational physics driven by powerful computing capacity such as supercomputers. This report describes the work and present status of the Data Processing Center of KEK in the first chapter and of the computer room of INS in the second chapter, as well as future issues for the Computing Research Center. (G.K.)

  7. Comparative Analysis of Fusion Center Outreach to Fire and EMS Agencies

    Science.gov (United States)

    2015-12-01

    Shooter” Response Strategies,” Homeland Security Affairs 10 (February 2014), article 3, https://www.hsaj.org/articles/253. Each fusion center has...consisting of the counties of Los Angeles, Orange, Riverside, San Bernardino, San Luis Obispo, Santa Barbara, and Ventura, along with all the cities

  8. Integration Of PanDA Workload Management System With Supercomputers for ATLAS

    CERN Document Server

    Oleynik, Danila; The ATLAS collaboration; De, Kaushik; Wenaus, Torre; Maeno, Tadashi; Barreiro Megino, Fernando Harald; Nilsson, Paul; Guan, Wen; Panitkin, Sergey

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production ANd Distributed Analysis system) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more t...

  9. Excel Center

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Citigroup, one of the world's top 500 companies, has now settled in Excel Center, Financial Street. The opening ceremony of Excel Center and the ceremony marking Citigroup's entry into the center were held on March 31. Government leaders of Xicheng District, the Excel CEO, and the Asia-Pacific regional heads of Citibank all participated in the ceremony.

  10. Small angle neutron scattering (SANS and V-SANS) study of asphaltene aggregates in crude oil.

    Science.gov (United States)

    Headen, Thomas F; Boek, Edo S; Stellbrink, Jörg; Scheven, Ulrich M

    2009-01-06

    We report small angle neutron scattering (SANS) experiments on two crude oils. Analysis of the high-Q SANS region has probed the asphaltene aggregates in the nanometer length scale. We find that the radius of gyration decreases with increasing temperature. We show that SANS measurements on crude oils give similar aggregate sizes to those found from SANS measurements of asphaltenes redispersed in deuterated toluene. The combined use of SANS and V-SANS on crude oil samples has allowed the determination of the radius of gyration of large scale asphaltene aggregates of approximately 0.45 µm. This has been achieved by the fitting of Beaucage functions over two size regimes. Analysis of the fitted Beaucage functions at very low-Q has shown that the large scale aggregates are not simply made by aggregation of all the smaller nanoaggregates. Instead, they are two different aggregates coexisting.
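
    As a hedged illustration of the kind of fit mentioned above (not the authors' actual two-level analysis), the sketch below defines a single-level Beaucage unified function, combining a Guinier term and a power-law term, and fits it to scattering data with scipy; the synthetic I(q) data, initial guesses, and parameter values are placeholders.

      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.special import erf

      def beaucage(q, G, Rg, B, P):
          """Single-level Beaucage unified fit: Guinier term plus power-law term."""
          guinier = G * np.exp(-(q * Rg) ** 2 / 3.0)
          q_star = q / erf(q * Rg / np.sqrt(6.0)) ** 3
          return guinier + B * q_star ** (-P)

      # Placeholder 'measured' intensities: synthetic data for Rg = 5 nm aggregates.
      q = np.logspace(-2, 0, 100)                      # 1/nm
      I_obs = beaucage(q, G=100.0, Rg=5.0, B=1e-3, P=4.0)
      I_obs *= 1 + 0.02 * np.random.default_rng(2).normal(size=q.size)

      popt, _ = curve_fit(beaucage, q, I_obs, p0=[50.0, 3.0, 1e-3, 4.0])
      print("fitted radius of gyration (nm):", popt[1])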

  11. 77 FR 6809 - Center for Scientific Review; Notice of Closed Meetings

    Science.gov (United States)

    2012-02-09

    ...: Jacinta Bronte-Tinkew, Ph.D., Scientific Review Officer, Center for Scientific Review, National Institutes... Post Street, San Francisco, CA 94115. Contact Person: Jacinta Bronte-Tinkew, Ph.D., Scientific...

  12. 77 FR 55851 - Center for Scientific Review; Notice of Closed Meetings

    Science.gov (United States)

    2012-09-11

    ... Conference Call). Contact Person: James P Harwood, Ph.D., Scientific Review Officer, Center for Scientific... applications. Place: Hilton Fisherman's Wharf, 2620 Jones Street, San Francisco, CA 94133. Contact Person...

  13. 78 FR 7790 - Center for Scientific Review; Notice of Closed Meetings

    Science.gov (United States)

    2013-02-04

    ... applications. Place: The Westin St. Francis, 335 Powell Street, San Francisco, CA 94102. Contact Person: Daniel F McDonald, Ph.D., Scientific Review Officer, Center for Scientific Review, National Institutes...

  14. EX1103L1: Exploration and Mapping, Galapagos Spreading Center: Mapping, CTD and Tow-yo

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This project will be a transit from San Diego, CA to the Galapagos Spreading Center, where multibeam mapping, CTD casts, and CTD tow-yo operations will be performed....

  15. Vertical tectonic deformation associated with the San Andreas fault zone offshore of San Francisco, California

    Science.gov (United States)

    Ryan, H. F.; Parsons, T.; Sliter, R. W.

    2008-10-01

    A new fault map of the shelf offshore of San Francisco, California shows that faulting occurs as a distributed shear zone that involves many fault strands with the principal displacement taken up by the San Andreas fault and the eastern strand of the San Gregorio fault zone. Structures associated with the offshore faulting show compressive deformation near where the San Andreas fault goes offshore, but deformation becomes extensional several km to the north off of the Golden Gate. Our new fault map serves as the basis for a 3-D finite element model that shows that the block between the San Andreas and San Gregorio fault zone is subsiding at a long-term rate of about 0.2-0.3 mm/yr, with the maximum subsidence occurring northwest of the Golden Gate in the area of a mapped transtensional basin. Although the long-term rates of vertical displacement primarily show subsidence, the model of coseismic deformation associated with the 1906 San Francisco earthquake indicates that uplift on the order of 10-15 cm occurred in the block northeast of the San Andreas fault. Since 1906, 5-6 cm of regional subsidence has occurred in that block. One implication of our model is that the transfer of slip from the San Andreas fault to a fault 5 km to the east, the Golden Gate fault, is not required for the area offshore of San Francisco to be in extension. This has implications for both the deposition of thick Pliocene-Pleistocene sediments (the Merced Formation) observed east of the San Andreas fault, and the age of the Peninsula segment of the San Andreas fault.

  16. High Performance Simulation of Large-Scale Red Sea Ocean Bottom Seismic Data on the Supercomputer Shaheen II

    KAUST Repository

    Tonellot, Thierry

    2017-02-27

    Both shallow and deep water, plus islands and coral reefs, are some of the main features contributing to the complexity of subsalt seismic exploration in the Red Sea transition zone. These features often result in degrading effects on seismic images. State-of-the-art ocean bottom acquisition technologies are therefore required to record seismic data with optimal fold and offset, as well as advanced processing and imaging techniques. Numerical simulations of such complex seismic data can help improve acquisition design and also help in customizing, validating and benchmarking the processing and imaging workflows that will be applied to the field data. Consequently, realistic simulation of wave propagation is a computationally intensive process requiring a realistic model and an efficient 3D wave equation solver. Large-scale computing resources are also required to meet a turnaround time compatible with a production time frame. In this work, we present the numerical simulation of an ocean bottom seismic survey to be acquired in the Red Sea transition zone starting in summer 2016. The survey's acquisition geometry comprises nearly 300,000 unique shot locations and 21,000 unique receiver locations, covering about 760 km². Using well log measurements and legacy 2D seismic lines in this area, a 3D P-wave velocity model was built, with a maximum depth of 7 km. The model was sampled at 10 m in each direction, resulting in more than 5 billion cells. Wave propagation in this model was performed using a 3D finite difference solver in the time domain based on a staggered-grid velocity-pressure formulation of acoustodynamics. To ensure that the resulting data could be generated sufficiently fast, the King Abdullah University of Science and Technology (KAUST) supercomputer Shaheen II, a Cray XC40, was used. A total of 21,000 three-component (pressure and vertical and horizontal velocity) common receiver gathers with a 50 Hz maximum frequency were computed in less
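
    A hedged, minimal 1D sketch of the staggered-grid velocity-pressure formulation mentioned above is given below; the grid size, two-layer velocity model, source wavelet, and time step are illustrative assumptions and bear no relation to the production 3D solver or the Red Sea model.

      import numpy as np

      # 1D acoustic medium: dp/dt = -K dv/dx, dv/dt = -(1/rho) dp/dx (placeholder values)
      nx, dx, nt = 800, 10.0, 1500                    # grid points, spacing (m), time steps
      c = np.full(nx, 2000.0); c[nx // 2:] = 3500.0   # two-layer velocity model (m/s)
      rho = 1000.0
      K = rho * c ** 2                                # bulk modulus
      dt = 0.8 * dx / c.max()                         # CFL-limited time step

      def ricker(t, f0=25.0, t0=0.04):
          """Ricker source wavelet of peak frequency f0 (Hz)."""
          a = (np.pi * f0 * (t - t0)) ** 2
          return (1.0 - 2.0 * a) * np.exp(-a)

      p = np.zeros(nx)        # pressure at integer grid points
      v = np.zeros(nx - 1)    # particle velocity at half grid points
      src = nx // 4
      for n in range(nt):
          # Leapfrog update on the staggered grid.
          v -= dt / (rho * dx) * (p[1:] - p[:-1])
          p[1:-1] -= dt * K[1:-1] / dx * (v[1:] - v[:-1])
          p[src] += ricker(n * dt)                    # inject the source
      print("peak |p| on the grid:", np.abs(p).max())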

  17. Hurricane Modeling and Supercomputing: Can a global mesoscale model be useful in improving forecasts of tropical cyclogenesis?

    Science.gov (United States)

    Shen, B.; Tao, W.; Atlas, R.

    2007-12-01

    Hurricane modeling, along with guidance from observations, has been used to help construct hurricane theories since the 1960s. CISK (conditional instability of the second kind, Charney and Eliassen 1964; Ooyama 1964,1969) and WISHE (wind-induced surface heat exchange, Emanuel 1986) are among the well-known theories being used to understand hurricane intensification. For hurricane genesis, observations have indicated the importance of large-scale flows (e.g., the Madden-Julian Oscillation or MJO, Maloney and Hartmann, 2000) on the modulation of hurricane activity. Recent modeling studies have focused on the role of the MJO and Rossby waves (e.g., Ferreira and Schubert, 1996; Aivyer and Molinari, 2003) and/or the interaction of small-scale vortices (e.g., Holland 1995; Simpson et al. 1997; Hendrick et al. 2004), of which determinism could be also built by large-scale flows. The aforementioned studies suggest a unified view on hurricane formation, consisting of multiscale processes such as scale transition (e.g., from the MJO to Equatorial Rossby Waves and from waves to vortices), and scale interactions among vortices, convection, and surface heat and moisture fluxes. To depict the processes in the unified view, a high-resolution global model is needed. During the past several years, supercomputers have enabled the deployment of ultra-high resolution global models, obtaining remarkable forecasts of hurricane track and intensity (Atlas et al. 2005; Shen et al. 2006). In this work, hurricane genesis is investigated with the aid of a global mesoscale model on the NASA Columbia supercomputer by conducting numerical experiments on the genesis of six consecutive tropical cyclones (TCs) in May 2002. These TCs include two pairs of twin TCs in the Indian Ocean, Supertyphoon Hagibis in the West Pacific Ocean and Hurricane Alma in the East Pacific Ocean. It is found that the model is capable of predicting the genesis of five of these TCs about two to three days in advance. Our

  18. 75 FR 55270 - Safety Zone; NASSCO Launching of USNS Washington Chambers, San Diego Bay, San Diego, CA

    Science.gov (United States)

    2010-09-10

    ... Chambers, San Diego Bay, San Diego, CA AGENCY: Coast Guard, DHS. ACTION: Temporary final rule. SUMMARY: The Coast Guard is establishing a temporary safety zone on the navigable waters of the San Diego Bay in... the Port (COTP) San Diego or his designated representative. DATES: This rule is effective from 9:15 a...

  19. Description of gravity cores from San Pablo Bay and Carquinez Strait, San Francisco Bay, California

    Science.gov (United States)

    Woodrow, Donald L.; John L. Chin,; Wong, Florence L.; Fregoso, Theresa; Jaffe, Bruce E.

    2017-06-27

    Seventy-two gravity cores were collected by the U.S. Geological Survey in 1990, 1991, and 2000 from San Pablo Bay and Carquinez Strait, California. The gravity cores collected within San Pablo Bay contain bioturbated laminated silts and sandy clays, whole and broken bivalve shells (mostly mussels), fossil tube structures, and fine-grained plant or wood fragments. Gravity cores from the channel wall of Carquinez Strait east of San Pablo Bay consist of sand and clay layers, whole and broken bivalve shells (less than in San Pablo Bay), trace fossil tubes, and minute fragments of plant material.

  20. San Juan River, 1962, Overview

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — The aerial photography inventory contains aerial photographs that are retrievable on a frame by frame basis. The inventory contains imagery from various sources...

  1. San Luis Valley waterbird plan : Draft

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — The goal of this plan is "to provide and protect a habitat base of sufficient quality and quantity to maintain healthy viable populations of waterbirds in the San...

  2. Historical methyl mercury in San Francisco Bay

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — San Francisco Bay, California is considered a mercury-impaired watershed. Elevated concentrations of mercury are found in water and sediment as well as fish and...

  3. Bathymetry--Offshore of San Francisco, California

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This part of DS 781 presents data for the bathymetry and shaded-relief maps of Offshore of San Francisco, California (raster data file is included in...

  4. Backscatter A [8101]--Offshore San Gregorio, California

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This part of SIM 3306 presents data for the acoustic-backscatter map (see sheet 3, SIM 3306) of the Offshore of San Gregorio map area, California. Backscatter data...

  5. Habitat--Offshore of San Francisco, California

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This part of DS 781 presents data for the habitat map of the seafloor of the Offshore of San Francisco map area, California. The vector data file is included in...

  6. San Antonio Bay 1986-1989

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The effect of salinity on utilization of shallow-water nursery habitats by aquatic fauna was assessed in San Antonio Bay, Texas. Overall, 272 samples were collected...

  8. Contours--Offshore of San Francisco, California

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This part of DS 781 presents data for the bathymetric contours for several seafloor maps of the Offshore of San Francisco map area, California. The vector data file...

  9. Backscatter B [7125]--Offshore San Gregorio, California

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This part of SIM 3306 presents data for the acoustic-backscatter map (see sheet 3, SIM 3306) of the Offshore of San Gregorio map area, California. Backscatter data...

  10. San Bernardino National Wildlife Refuge contaminant study

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — The San Bernardino National Wildlife Refuge was established in 1982 for the protection and management of endangered desert fishes which are indigenous to the Rio...

  11. An Archeological Survey of the San Diego River

    Science.gov (United States)

    1975-08-27

    Indian populations and to arable land and water, Father Serra moved Mission San Diego de Alcala to the site of Nipoguay, an Indian village located... An Archaeological Survey of the San Diego River; San Diego State University Foundation; Department of the Army, Corps of Engineers.

  12. Science Driven Supercomputing Architectures: AnalyzingArchitectural Bottlenecks with Applications and Benchmark Probes

    Energy Technology Data Exchange (ETDEWEB)

    Kamil, S.; Yelick, K.; Kramer, W.T.; Oliker, L.; Shalf, J.; Shan,H.; Strohmaier, E.

    2005-09-26

    There is a growing gap between the peak speed of parallel computing systems and the actual delivered performance for scientific applications. In general this gap is caused by inadequate architectural support for the requirements of modern scientific applications, as commercial applications, and the much larger market they represent, have driven the evolution of computer architectures. This gap has raised the importance of developing better benchmarking methodologies to characterize and understand the performance requirements of scientific applications and to communicate them efficiently in order to influence the design of future computer architectures. This improved understanding of the performance behavior of scientific applications will allow improved performance predictions, development of adequate benchmarks for identification of hardware and application features that work well or poorly together, and a more systematic performance evaluation in procurement situations. The Berkeley Institute for Performance Studies has developed a three-level approach to evaluating the design of high end machines and the software that runs on them: (1) a suite of representative applications; (2) a set of application kernels; and (3) benchmarks to measure key system parameters. The three levels yield different types of information, all of which are useful in evaluating systems, and enable NSF and DOE centers to select computer architectures more suited for scientific applications. The analysis will further allow the centers to engage vendors in discussion of strategies to alleviate the present architectural bottlenecks using quantitative information. These may include small hardware changes or larger ones that may be of interest to non-scientific workloads. Providing quantitative models to the vendors allows them to assess the benefits of technology alternatives using their own internal cost models in the broader marketplace, ideally facilitating the development of future computer

  13. Science Driven Supercomputing Architectures: AnalyzingArchitectural Bottlenecks with Applications and Benchmark Probes

    Energy Technology Data Exchange (ETDEWEB)

    Kamil, S.; Yelick, K.; Kramer, W.T.; Oliker, L.; Shalf, J.; Shan,H.; Strohmaier, E.

    2005-09-26

    There is a growing gap between the peak speed of parallel computing systems and the actual delivered performance for scientific applications. In general this gap is caused by inadequate architectural support for the requirements of modern scientific applications, as commercial applications, and the much larger market they represent, have driven the evolution of computer architectures. This gap has raised the importance of developing better benchmarking methodologies to characterize and to understand the performance requirements of scientific applications, and to communicate them efficiently in order to influence the design of future computer architectures. This improved understanding of the performance behavior of scientific applications will allow improved performance predictions, development of adequate benchmarks for identification of hardware and application features that work well or poorly together, and a more systematic performance evaluation in procurement situations. The Berkeley Institute for Performance Studies has developed a three-level approach to evaluating the design of high end machines and the software that runs on them: 1) A suite of representative applications; 2) A set of application kernels; and 3) Benchmarks to measure key system parameters. The three levels yield different types of information, all of which are useful in evaluating systems, and enable NSF and DOE centers to select computer architectures more suited for scientific applications. The analysis will further allow the centers to engage vendors in discussion of strategies to alleviate the present architectural bottlenecks using quantitative information. These may include small hardware changes or larger ones that may be of interest to non-scientific workloads. Providing quantitative models to the vendors allows them to assess the benefits of technology alternatives using their own internal cost-models in the broader marketplace, ideally facilitating the development of future computer architectures more suited for scientific
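
    The third level of the approach described above, probes of key system parameters, can be illustrated with a small sketch. The following is not one of the Berkeley Institute for Performance Studies benchmarks; it is a hypothetical STREAM-style triad probe, written in Python/NumPy purely for illustration, that estimates sustainable memory bandwidth, one typical target of such low-level probes.

    # Hypothetical STREAM-style "triad" probe (a = b + s*c) used to estimate
    # sustainable memory bandwidth -- an example of a level-3 system-parameter
    # benchmark, not part of the suite described in the record above.
    import time
    import numpy as np

    N = 50_000_000                         # ~400 MB per double-precision array
    s = 3.0
    b = np.random.rand(N)
    c = np.random.rand(N)
    a = np.empty_like(b)

    best = float("inf")
    for _ in range(5):                     # keep the best of several trials
        t0 = time.perf_counter()
        np.multiply(c, s, out=a)           # a = s * c
        np.add(a, b, out=a)                # a = b + s * c
        best = min(best, time.perf_counter() - t0)

    moved_bytes = 3 * N * 8                # read b, read c, write a
    print(f"approximate triad bandwidth: {moved_bytes / best / 1e9:.1f} GB/s")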

  14. SSC San Diego Strategic Plan. Revision 3

    Science.gov (United States)

    2001-11-01

    Report date: November 2001. Title: SSC San Diego Strategic Plan, Rev 3. ...Business Improvement Group (CBIG), will be composed of the team members listed above, and will be responsible for monitoring the planning and implementation of these objectives. The CBIG will charter sub-groups as necessary. Improve Corporate IT infrastructure: much of the SSC San Diego IT service...

  15. Trouble Brewing in San Diego. Policy Brief

    Science.gov (United States)

    Buck, Stuart

    2010-01-01

    The city of San Diego will face enormous budgetary pressures from the growing deficits in public pensions, both at a state and local level. In this policy brief, the author estimates that San Diego faces a total of $45.4 billion, including $7.95 billion for the county pension system, $5.4 billion for the city pension system, and an estimated $30.7…

  16. An efficient highly parallel implementation of a large air pollution model on an IBM blue gene supercomputer

    Science.gov (United States)

    Ostromsky, Tz.; Georgiev, K.; Zlatev, Z.

    2012-10-01

    In this paper we discuss the efficient distributed-memory parallelization strategy of the Unified Danish Eulerian Model (UNI-DEM). We apply an improved decomposition strategy to the spatial domain in order to get more parallel tasks (based on the larger number of subdomains) with fewer communications between them (due to optimization of the overlapping area when the advection-diffusion problem is solved numerically). This kind of rectangular block partitioning (with a square-shape trend) allows us not only to increase significantly the number of potential parallel tasks, but also to reduce the local memory requirements per task, which is critical for the distributed-memory implementation of the higher-resolution/finer-grid versions of UNI-DEM on some parallel systems, and particularly on the IBM BlueGene/P platform - our target hardware. We will show by experiments that our new parallel implementation can use rather efficiently the resources of the powerful IBM BlueGene/P supercomputer, the largest in Bulgaria, up to its full capacity. It turned out to be extremely useful in the large and computationally expensive numerical experiments, carried out to calculate some initial data for sensitivity analysis of the Danish Eulerian model.
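
    As a rough illustration of the rectangular block partitioning idea, and not the actual UNI-DEM code, the sketch below splits a 2-D grid into a process grid of near-equal contiguous blocks and reports the per-task subdomain extents; the grid size and task counts are made-up values.

    # Hypothetical sketch of rectangular block partitioning of a 2-D grid into
    # near-square subdomains, one per parallel task (not the UNI-DEM code).

    def block_ranges(n, parts):
        """Split n cells into `parts` contiguous chunks of near-equal size."""
        base, extra = divmod(n, parts)
        ranges, start = [], 0
        for p in range(parts):
            size = base + (1 if p < extra else 0)
            ranges.append((start, start + size))
            start += size
        return ranges

    nx, ny = 480, 480                      # made-up grid resolution
    px, py = 8, 8                          # made-up 8 x 8 process grid (64 tasks)

    subdomains = [(rx, ry) for rx in block_ranges(nx, px)
                           for ry in block_ranges(ny, py)]

    for rank, ((x0, x1), (y0, y1)) in enumerate(subdomains[:3]):
        cells = (x1 - x0) * (y1 - y0)
        print(f"task {rank}: x[{x0}:{x1}] x y[{y0}:{y1}] -> {cells} cells")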

  17. The ASCI Network for SC '99: A Step on the Path to a 100 Gigabit Per Second Supercomputing Network

    Energy Technology Data Exchange (ETDEWEB)

    PRATT,THOMAS J.; TARMAN,THOMAS D.; MARTINEZ,LUIS M.; MILLER,MARC M.; ADAMS,ROGER L.; CHEN,HELEN Y.; BRANDT,JAMES M.; WYCKOFF,PETER S.

    2000-07-24

    This document highlights the DISCOM{sup 2} Distance Computing and Communication team's activities at the 1999 Supercomputing conference in Portland, Oregon. This conference is sponsored by the IEEE and ACM. Sandia, Lawrence Livermore and Los Alamos National laboratories have participated in this conference for eleven years. For the last four years the three laboratories have come together at the conference under the DOE's ASCI (Accelerated Strategic Computing Initiative) rubric. Communication support for the ASCI exhibit is provided by the ASCI DISCOM{sup 2} project. The DISCOM{sup 2} communication team uses this forum to demonstrate and focus communication and networking developments within the community. At SC 99, DISCOM built a prototype of the next-generation ASCI network, demonstrated remote clustering techniques, demonstrated the capabilities of the emerging Terabit Router products, demonstrated the latest technologies for delivering visualization data to the scientific users, and demonstrated the latest in encryption methods including IP VPN technologies and ATM encryption research. The authors also coordinated the other production networking activities within the booth and between their demonstration partners on the exhibit floor. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support Sandia's overall strategies in ASCI networking.

  18. Combining density functional theory calculations, supercomputing, and data-driven methods to design new materials (Conference Presentation)

    Science.gov (United States)

    Jain, Anubhav

    2017-04-01

    Density functional theory (DFT) simulations solve for the electronic structure of materials starting from the Schrödinger equation. Many case studies have now demonstrated that researchers can often use DFT to design new compounds in the computer (e.g., for batteries, catalysts, and hydrogen storage) before synthesis and characterization in the lab. In this talk, I will focus on how DFT calculations can be executed on large supercomputing resources in order to generate very large data sets on new materials for functional applications. First, I will briefly describe the Materials Project, an effort at LBNL that has virtually characterized over 60,000 materials using DFT and has shared the results with over 17,000 registered users. Next, I will talk about how such data can help discover new materials, describing how preliminary computational screening led to the identification and confirmation of a new family of bulk AMX2 thermoelectric compounds with measured zT reaching 0.8. I will outline future plans for how such data-driven methods can be used to better understand the factors that control thermoelectric behavior, e.g., for the rational design of electronic band structures, in ways that are different from conventional approaches.
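
    As a loose illustration of the kind of data-driven screening described above, the sketch below filters a table of DFT-computed properties for near-stable ternary compounds in a rough band-gap window. The file name and column names are hypothetical; this is not the Materials Project API or its actual schema.

    # Hypothetical screening sketch over a table of DFT-computed properties.
    # The file name and column names are illustrative only; this is not the
    # Materials Project API or its schema.
    import pandas as pd

    df = pd.read_csv("computed_materials.csv")    # one row per computed compound

    candidates = df[
        (df["energy_above_hull_eV"] < 0.05)       # near-stable phases
        & (df["band_gap_eV"].between(0.1, 0.6))   # rough window for thermoelectrics
        & (df["n_elements"] == 3)                 # ternary (AMX2-like) compositions
    ]

    print(candidates.sort_values("band_gap_eV")[["formula", "band_gap_eV"]].head(10))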

  19. BLAS (Basic Linear Algebra Subroutines), linear algebra modules, and supercomputers. Technical report for period ending 15 December 1984

    Energy Technology Data Exchange (ETDEWEB)

    Rice, J.R.

    1984-12-31

    On October 29 and 30, 1984, about 20 people met at Purdue University to consider extensions to the Basic Linear Algebra Subroutines (BLAS) and linear algebra software modules in general. The need for these extensions and new sets of modules is largely due to the advent of new supercomputer architectures which make it difficult for ordinary coding techniques to achieve even a significant fraction of the potential computing power. The workshop format was one of informal presentations with ample discussions followed by sessions of general discussions of the issues raised. This report is a summary of the presentations, the issues raised, the conclusions reached and the open issue discussions. Each participant had an opportunity to comment on this report, but it also clearly reflects the author's filtering of the extensive discussions. Section 2 describes seven proposals for linear algebra software modules and Section 3 describes four presentations on the use of such modules. Discussion summaries are given next; Section 4 for those where near consensus was reached and Section 5 where the issues were left open.

  20. Abrupt along-strike change in tectonic style: San Andreas Fault zone, San Francisco Peninsula

    Science.gov (United States)

    Zoback, Mary Lou; Jachens, Robert C.; Olson, Jean A.

    1999-05-01

    Seismicity and high-resolution aeromagnetic data are used to define an abrupt change from compressional to extensional tectonism within a 10- to 15-km-wide zone along the San Andreas fault on the San Francisco Peninsula and offshore from the Golden Gate. This 100-km-long section of the San Andreas fault includes the hypocenter of the Mw = 7.8 1906 San Francisco earthquake as well as the highest level of persistent microseismicity along that ˜470-km-long rupture. We define two distinct zones of deformation along this stretch of the fault using well-constrained relocations of all post-1969 earthquakes based on a joint one-dimensional velocity/hypocenter inversion and a redetermination of focal mechanisms. The southern zone is characterized by thrust- and reverse-faulting focal mechanisms with NE trending P axes that indicate "fault-normal" compression in 7- to 10-km-wide zones of deformation on both sides of the San Andreas fault. A 1- to 2-km-wide vertical zone beneath the surface trace of the San Andreas is characterized by its almost complete lack of seismicity. The compressional deformation is consistent with the young, high topography of the Santa Cruz Mountains/Coast Ranges as the San Andreas fault makes a broad restraining left bend (˜10°) through the southernmost peninsula. A zone of seismic quiescence ˜15 km long separates this compressional zone to the south from a zone of combined normal-faulting and strike-slip-faulting focal mechanisms (including a ML = 5.3 earthquake in 1957) on the northernmost peninsula and offshore on the Golden Gate platform. Both linear pseudogravity gradients, calculated from the aeromagnetic data, and seismic reflection data indicate that the San Andreas fault makes an abrupt ˜3-km right step less than 5 km offshore in this northern zone. A similar right-stepping (dilatational) geometry is also observed for the subparallel San Gregorio fault offshore. Persistent seismicity and extensional tectonism occur within the San Andreas

  1. Performance characteristics of hybrid MPI/OpenMP implementations of NAS parallel benchmarks SP and BT on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2011-03-29

    The NAS Parallel Benchmarks (NPB) are well-known applications with fixed algorithms for evaluating parallel systems and tools. Multicore supercomputers provide a natural programming paradigm for hybrid programs, whereby OpenMP can be used for data sharing among the cores that comprise a node and MPI can be used for communication between nodes. In this paper, we use the SP and BT benchmarks of MPI NPB 3.3 as a basis for a comparative approach to implement hybrid MPI/OpenMP versions of SP and BT. In particular, we compare the performance of the hybrid SP and BT with their MPI counterparts on large-scale multicore supercomputers. Our performance results indicate that the hybrid SP outperforms the MPI SP by up to 20.76%, and the hybrid BT outperforms the MPI BT by up to 8.58% on up to 10,000 cores on BlueGene/P at Argonne National Laboratory and Jaguar (Cray XT4/5) at Oak Ridge National Laboratory. We also use performance tools and MPI trace libraries available on these supercomputers to further investigate the performance characteristics of the hybrid SP and BT.

  2. Supercomputations and big-data analysis in strong-field ultrafast optical physics: filamentation of high-peak-power ultrashort laser pulses

    Science.gov (United States)

    Voronin, A. A.; Panchenko, V. Ya; Zheltikov, A. M.

    2016-06-01

    High-intensity ultrashort laser pulses propagating in gas media or in condensed matter undergo complex nonlinear spatiotemporal evolution where temporal transformations of optical field waveforms are strongly coupled to an intricate beam dynamics and ultrafast field-induced ionization processes. At the level of laser peak powers orders of magnitude above the critical power of self-focusing, the beam exhibits modulation instabilities, producing random field hot spots and breaking up into multiple noise-seeded filaments. This problem is described by a (3  +  1)-dimensional nonlinear field evolution equation, which needs to be solved jointly with the equation for ultrafast ionization of a medium. Analysis of this problem, which is equivalent to solving a billion-dimensional evolution problem, is only possible by means of supercomputer simulations augmented with coordinated big-data processing of large volumes of information acquired through theory-guiding experiments and supercomputations. Here, we review the main challenges of supercomputations and big-data processing encountered in strong-field ultrafast optical physics and discuss strategies to confront these challenges.
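
    To give a flavor of the field-evolution solvers discussed above, the sketch below integrates a drastically reduced model: a normalized 1-D nonlinear Schrödinger equation with Kerr nonlinearity only, advanced by the standard split-step Fourier method. It omits diffraction, ionization, and noise seeding, so it illustrates the numerical approach rather than the (3  +  1)-dimensional model itself.

    # Drastically reduced 1-D split-step illustration: a normalized nonlinear
    # Schrodinger equation with Kerr nonlinearity only (no ionization, no
    # diffraction, no noise seeding) -- a sketch of the numerical approach,
    # not the full (3+1)-dimensional propagation model.
    import numpy as np

    nt, t_max = 2048, 20.0
    t = np.linspace(-t_max, t_max, nt, endpoint=False)
    w = 2 * np.pi * np.fft.fftfreq(nt, d=t[1] - t[0])   # angular frequency grid

    gamma, dz, n_steps = 1.0, 1e-3, 2000
    A = np.exp(-t**2).astype(complex)                   # initial Gaussian pulse

    half_linear = np.exp(-0.25j * w**2 * dz)            # half step of dispersion
    for _ in range(n_steps):
        A = np.fft.ifft(half_linear * np.fft.fft(A))
        A *= np.exp(1j * gamma * np.abs(A)**2 * dz)     # full nonlinear step
        A = np.fft.ifft(half_linear * np.fft.fft(A))

    print("peak intensity after propagation:", np.max(np.abs(A))**2)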

  3. Inferring Internet Denial-of-Service Activity

    Science.gov (United States)

    2007-11-02

    David Moore, CAIDA, San Diego Supercomputer Center, University of California, San Diego (dmoore@caida.org)...the local network topology. kc claffy and Colleen Shannon at CAIDA provided support and valuable feedback throughout the project. David Wetherall

  4. Synthesis of SAN-PB-SAN triblock copolymers via a ''living'' copolymerization with macro-photoiniferters

    NARCIS (Netherlands)

    Kroeze, E; de Boer, B.; ten Brinke, G.; Hadziioannou, G

    1996-01-01

    A technique is described for the synthesis of poly((styrene-co-acrylonitrile)-block-butadiene-block-(styrene-co-acrylonitrile)) (SAN-PB-SAN) triblock copolymers through polybutadiene-based photo-iniferters. Dihydroxy- and dicarboxy-terminated polybutadienes were transformed into the chloro-terminate

  5. 76 FR 1386 - Safety Zone; Centennial of Naval Aviation Kickoff, San Diego Bay, San Diego, CA

    Science.gov (United States)

    2011-01-10

    ... SECURITY Coast Guard 33 CFR Part 165 RIN 1625-AA00 Safety Zone; Centennial of Naval Aviation Kickoff, San... in support of the Centennial of Naval Aviation Kickoff. This temporary safety zone is necessary to... Purpose On February 12, 2010, the Centennial of Naval Aviation Kickoff will take place in San Diego Bay...

  6. 77 FR 59969 - Notice of Inventory Completion: San Francisco State University, Department of Anthropology, San...

    Science.gov (United States)

    2012-10-01

    ... Anthropology, San Francisco, CA; Correction AGENCY: National Park Service, Interior. ACTION: Notice; correction... Department of Anthropology). The human remains and associated funerary objects were removed from Marin County... San Francisco State University Department of Anthropology records. In the Federal Register (73...

  7. Una Visita al Viejo San Juan (A Visit to Old San Juan).

    Science.gov (United States)

    Cabello, Victor; And Others

    Written in Spanish, this black and white illustrated booklet provides a tour of Old San Juan, Puerto Rico's oldest and most historic city. Brief historical information is provided on the Perro de San Jeronimo, a statue of a barking dog found in front of the Castillo; Plaza de Colon, a small plaza dedicated to Christopher Columbus; the Catedral de…

  9. 33 CFR 165.754 - Safety Zone: San Juan Harbor, San Juan, PR.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Safety Zone: San Juan Harbor, San Juan, PR. 165.754 Section 165.754 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) PORTS AND WATERWAYS SAFETY REGULATED NAVIGATION AREAS AND LIMITED ACCESS AREAS Specific Regulated Navigation Areas and...

  10. Distribution center

    Institute of Scientific and Technical Information of China (English)

    2004-01-01

    A distribution center is a logistics link that fulfills physical distribution as its main function. Generally speaking, it is a large and highly automated center designed to receive goods from various plants and suppliers, take orders, fill them efficiently, and deliver goods to customers as quickly as possible.

  11. Location and Shallow Structure of the Frijoles Strand of the San Gregorio Fault Zone, Pescadero, California

    Science.gov (United States)

    Fox-Lent, C.; Catchings, R. D.; Rymer, M. J.; Goldman, M. R.; Steedman, C. E.; Prentice, C. S.

    2003-12-01

    The San Gregorio fault is one of the principal faults of the San Andreas fault system in the San Francisco Bay area. Located west of the active trace of the San Andreas fault and near the coast, the San Gregorio fault zone consists of at least two northwest-southeast-trending strands, the Coastways and Frijoles faults. Little is known about the slip history on the San Gregorio, and information for the Frijoles fault is especially scarce, as it lies mostly offshore. To better understand the contribution of the San Gregorio fault zone to slip along the San Andreas fault system, we conducted a high-resolution, seismic imaging investigation of the Frijoles fault to locate near-surface, onshore, branches of the fault that may be suitable for paleoseismic trenching. Our seismic survey consisted of a 590-meter-long, east-west-trending, combined seismic reflection and refraction profile across Butano Creek Valley, in Pescadero, California. The profile included 107 shot points and 120 geophones spaced at 5-m increments. Seismic sources were generated by a Betsy Seisgun in 0.3-m-deep holes. Data were recorded on two Geometrics Strataview RX-60 seismographs at a sampling rate of 0.5 ms. Seismic p-wave velocities, determined by inverting first-arrival refractions using tomographic methods, ranged from 900 m/s in the shallow subsurface to 5000 m/s at 200 m depth, with higher velocities in the western half of the profile. Migrated seismic reflection images show clear, planar layering in the top 100-200 meters on the eastern and western ends of the seismic profile. However, to within the shallow subsurface, a 200-m-long zone near the center of the profile shows disturbed stratigraphic layers with several apparent fault strands approaching within a few meters of the surface. The near-surface locations of the imaged strands suggest that the Frijoles fault has been active in the recent past, although further paleoseismic study is needed to detail the slip history of the San Gregorio

  12. 78 FR 19103 - Safety Zone; Spanish Navy School Ship San Sebastian El Cano Escort; Bahia de San Juan; San Juan, PR

    Science.gov (United States)

    2013-03-29

    ... SECURITY Coast Guard 33 CFR Part 165 RIN 1625-AA00 Safety Zone; Spanish Navy School Ship San Sebastian El... during the transit of the Spanish Navy School Ship San Sebastian El Cano, a public vessel, and during... board the Spanish Navy School Ship San Sebastian El Cano. The inbound escort is scheduled to take...

  13. San Pedro Martir Telescope: Mexican design endeavor

    Science.gov (United States)

    Toledo-Ramirez, Gengis K.; Bringas-Rico, Vicente; Reyes, Noe; Uribe, Jorge; Lopez, Aldo; Tovar, Carlos; Caballero, Xochitl; Del-Llano, Luis; Martinez, Cesar; Macias, Eduardo; Lee, William; Carramiñana, Alberto; Richer, Michael; González, Jesús; Sanchez, Beatriz; Lucero, Diana; Manuel, Rogelio; Segura, Jose; Rubio, Saul; Gonzalez, German; Hernandez, Obed; García, Mary; Lazaro, Jose; Rosales-Ortega, Fabian; Herrera, Joel; Sierra, Gerardo; Serrano, Hazael

    2016-08-01

    The Telescopio San Pedro Martir (TSPM) is a new ground-based optical telescope project, with a 6.5-meter honeycomb primary mirror, to be built in the Observatorio Astronomico Nacional on the Sierra San Pedro Martir (OAN-SPM) located in Baja California, Mexico. The OAN-SPM has an altitude of 2830 meters above sea level; it is among the best locations for astronomical observation in the world. It is located 1830 m higher than the atmospheric inversion layer, with 70% photometric nights, 80% spectroscopic nights and a sky brightness up to 22 mag/arcsec². The TSPM will be suitable for general science projects intended to improve the knowledge of the universe established in the Official Mexican Program for Science, Technology and Innovation 2014-2018. The telescope efforts are headed by two Mexican institutions on behalf of the Mexican astronomical community: the Universidad Nacional Autonoma de Mexico and the Instituto Nacional de Astrofisica, Optica y Electronica. The telescope has been financially supported mainly by the Consejo Nacional de Ciencia y Tecnologia (CONACYT). It is under development by Mexican scientists and engineers from the Center for Engineering and Industrial Development. This development is supported by Mexican-American scientific cooperation, through a partnership with the University of Arizona (UA) and the Smithsonian Astrophysical Observatory (SAO). M3 Engineering and Technology Corporation is in charge of the enclosure and building design. The TSPM will be designed to allow flexibility and possible upgrades in order to maximize resources. Its optical and mechanical designs are based upon those of the Magellan and MMT telescopes. The TSPM primary mirror and its cell will be provided by the INAOE and UA. The telescope will be optimized from the near ultraviolet to the near infrared wavelength range (0.35-2.5 μm), but will allow observations up to 26 μm. The TSPM will initially offer an f/5 Cassegrain focal station. Later, four folded Cassegrain and

  14. Edificio San Cristobal -Alicante- España

    Directory of Open Access Journals (Sweden)

    Navarro Guzmán, Alfonso

    1980-11-01

    Full Text Available The San Cristóbal Building has been constructed in the very center of the city of Alicante and is considered one of the most singular buildings in Europe. Among its outstanding characteristics, in addition to its unique facade of fire-lacquered aluminium, is its extremely deep foundation, lined with hundreds of tons of concrete owing to the geological conditions of the ground. The building has four parking levels, a ground floor, four floors of offices and four dwellings. Its construction, which took two years, represents an advance and a decisive contribution to world architecture.

  15. municipios Maracaibo y San Francisco

    Directory of Open Access Journals (Sweden)

    Rosa E. Ortiz

    2005-01-01

    Full Text Available The study of non-governmental organizations (NGOs) was aimed at characterizing the diversity of groups and associations that exist under the generic term NGO in the municipalities of Maracaibo and San Francisco, the most populous municipalities and the most relevant in the political, economic, social and cultural life of the state of Zulia, Venezuela. An exploratory and descriptive study was designed. The results made it possible to establish that: (a) there is sustained growth of NGOs in these municipalities; (b) the sectors with the greatest concentration are health, education, sport and recreation, and arts and culture; (c) the institutions that promote the most organizations are the church and the State, which in the latter case implies a reduction of the functions that correspond to it, in addition to the minimization of costs and labor conflicts; (d) there is ambivalence in the orientation of their operation: some NGOs implement practices close to the commercialization of services, thereby legitimizing strategies of the neoliberal economic model, while others contribute to strengthening the democratization of public services. It is concluded that the growth and characteristics of NGOs in these municipalities were linked, in the nineties and the beginning of the 2000s, to the application of neoliberal public policies.

  16. San Pascual (2007) Año XLV, n. 344

    OpenAIRE

    2007-01-01

    Editorial. Pontifical coronation and declaration of the canonical patronage of Our Lady of Grace. Following the paths of San Pascual, chapter XIII, Madrid II. Interview with Rosario García García, president of the association of Hijas de María del Rosario. You will be my witnesses (II)! Donations for the new banner of San Pascual. San Pascual and the Virgin of Grace. Life in the sanctuary. The treasures of San Pascual. Miracles of San Pascual VI: San Pascual delivers many from the danger of the sea and other...

  17. San Diego Science Alliance Education Outreach Activities

    Science.gov (United States)

    Blue, Anne P.

    1996-11-01

    The General Atomics Science Education Outreach Activities as well as those of several other San Diego area institutions led to the formation in 1994 of the San Diego Science Alliance. The Science Alliance is a consortium of science-related industries, institutions of research and higher education, museums, medical health networks, and science competitions in support of K-12 science education. Some Alliance accomplishments include printing over 4000 resource catalogs for teachers, workshops presented by over 20 of their business members at the San Diego Science Education Conference, and hosting of 3 eight-week courses for teachers. The Alliance provides an important forum for interaction between schools and teachers and local industries and institutions. The Science Alliance maintains a World Wide Web Home Page at http://www.cerf.net/sd_science/. General Atomics' role in the San Diego Science Alliance will be presented. (Presented by Patricia S. Winter for the General Atomics Science Education Groups and San Diego Science Alliance.)

  18. 77 FR 19755 - Endangered and Threatened Wildlife and Plants; 12-Month Finding on a Petition to List the San...

    Science.gov (United States)

    2012-04-02

    ... Center for Biological Diversity, and the Natural Resources Defense Council to list the San Francisco Bay... Japan (McAllister 1963, pp. 10, 15). Because of its distinctive physical characteristics, the Bay- Delta... (Dege and Brown 2004, p. 59). They spend approximately 21 months of their 24-month life cycle...

  19. The San Bernardino, California, Terror Attack: Two Emergency Departments’ Response

    Directory of Open Access Journals (Sweden)

    Carol Lee, MD

    2016-01-01

    Full Text Available On December 2, 2015, a terror attack in the city of San Bernardino, California killed 14 Americans and injured 22 in the deadliest attack on U.S. soil since September 11, 2001. Although emergency personnel and law enforcement officials frequently deal with multi-casualty incidents (MCIs), what occurred that day required an unprecedented response. Most of the severely injured victims were transported to either Loma Linda University Medical Center (LLUMC) or Arrowhead Regional Medical Center (ARMC). These two hospitals operate two designated trauma centers in the region and played crucial roles during the massive response that followed this attack. In an effort to shed light on our response for others, we provide an account of how these two teaching hospitals prepared for and coordinated the medical care of these victims. In general, both centers were able to quickly mobilize large numbers of staff and resources. Prior disaster drills proved to be invaluable. Both centers witnessed excellent teamwork and coordination involving first responders, law enforcement, administration, and medical personnel from multiple specialty services. Those of us working that day felt safe and protected. Although we did identify areas we could have improved upon, including patchy communication and crowd-control, they were minor in nature and did not affect patient care. MCIs pose major challenges to emergency departments and trauma centers across the country. Responding to such incidents requires an ever-evolving approach as no two incidents will present exactly alike. It is our hope that this article will foster discussion and lead to improvements in management of future MCIs.

  20. The San Bernardino, California, Terror Attack: Two Emergency Departments' Response.

    Science.gov (United States)

    Lee, Carol; Walters, Elizabeth; Borger, Rodney; Clem, Kathleen; Fenati, Gregory; Kiemeney, Michael; Seng, Sakona; Yuen, Ho-Wang; Neeki, Michael; Smith, Dustin

    2016-01-01

    On December 2, 2015, a terror attack in the city of San Bernardino, California killed 14 Americans and injured 22 in the deadliest attack on U.S. soil since September 11, 2001. Although emergency personnel and law enforcement officials frequently deal with multi-casualty incidents (MCIs), what occurred that day required an unprecedented response. Most of the severely injured victims were transported to either Loma Linda University Medical Center (LLUMC) or Arrowhead Regional Medical Center (ARMC). These two hospitals operate two designated trauma centers in the region and played crucial roles during the massive response that followed this attack. In an effort to shed light on our response for others, we provide an account of how these two teaching hospitals prepared for and coordinated the medical care of these victims. In general, both centers were able to quickly mobilize large numbers of staff and resources. Prior disaster drills proved to be invaluable. Both centers witnessed excellent teamwork and coordination involving first responders, law enforcement, administration, and medical personnel from multiple specialty services. Those of us working that day felt safe and protected. Although we did identify areas we could have improved upon, including patchy communication and crowd-control, they were minor in nature and did not affect patient care. MCIs pose major challenges to emergency departments and trauma centers across the country. Responding to such incidents requires an ever-evolving approach as no two incidents will present exactly alike. It is our hope that this article will foster discussion and lead to improvements in management of future MCIs.

  1. Connectionist Models: Proceedings of the Summer School Held in San Diego, California on 1990

    Science.gov (United States)

    1990-01-01

    [The abstract field of this record contains OCR-damaged fragments of the proceedings' participant list; recoverable affiliations include Brown University, the Istituto di Psicologia del C.N.R., the Naval Ocean Systems Center, and the Technical University of Berlin.]

  2. Geologic and hydrologic characterization of coalbed-methane reservoirs in the San Juan basin

    Energy Technology Data Exchange (ETDEWEB)

    Kaiser, W.R. (Univ. of Texas, Austin, TX (United States)); Ayers, W.B. Jr.

    1994-09-01

    Fruitland coals are best developed in the north-central part of the San Juan basin. Coal distribution is controlled by shoreline and fluvial depositional settings. Hydraulic gradient, pressure regime, and hydrochemistry reflect regional permeability contrasts. The most productive (>1 MMcf/D) coalbed wells occur along a structural hinge line in association with a regional permeability barrier (no-flow boundary) at the basin center.

  3. Large-scale Particle Simulations for Debris Flows using Dynamic Load Balance on a GPU-rich Supercomputer

    Science.gov (United States)

    Tsuzuki, Satori; Aoki, Takayuki

    2016-04-01

    Numerical simulation of debris flows including countless objects is an important topic in fluid dynamics and many engineering applications. Particle-based methods are a promising approach to carrying out simulations of flows interacting with objects. In this paper, we propose an efficient method to realize a large-scale simulation of fluid-structure interaction by combining the SPH (Smoothed Particle Hydrodynamics) method for the fluid with the DEM (Discrete Element Method) for the objects on a multi-GPU system. By applying space-filling curves to the decomposition of the computational domain, we are able to contain the same number of particles in each decomposed domain. In our implementation, several techniques for particle counting and data movement have been introduced. Fragmentation of the memory used for particles happens during the time integration, and the frequency of de-fragmentation is examined by taking account of the computational load balance and the communication cost between CPU and GPU. A linked-list technique for the particle interactions is introduced to save memory drastically. It is found that sorting the particle data for the neighboring-particle list using the linked-list method greatly improves memory access when performed at a certain interval. The weak and strong scalabilities for an SPH simulation using 111 million particles were measured from 4 GPUs to 512 GPUs for three types of space-filling curves. A large-scale debris-flow simulation of a tsunami with 10,368 floating rubble objects using 117 million particles was successfully carried out with 256 GPUs on the TSUBAME 2.5 supercomputer at Tokyo Institute of Technology.
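
    The linked-list neighbor search mentioned above can be sketched, in serial form and on a single node, as a cell list: particles are binned into cells of the cutoff size, each cell stores the head of a chain of particle indices, and neighbor candidates are read from the 3x3 block of surrounding cells. The snippet below is a minimal Python illustration of that idea, not the GPU-resident SPH/DEM implementation.

    # Minimal serial sketch of a cell "linked-list" neighbour search in 2-D,
    # the idea behind the memory-saving neighbour list mentioned above
    # (the real code is a GPU-resident SPH/DEM implementation, not this).
    import numpy as np

    rng = np.random.default_rng(0)
    h = 0.05                                   # interaction cutoff = cell size
    pos = rng.random((10_000, 2))              # particle positions in [0, 1)^2

    ncell = int(1.0 / h)
    cell = np.minimum((pos // h).astype(int), ncell - 1)
    cell_id = cell[:, 0] * ncell + cell[:, 1]

    # head[c] = last particle inserted into cell c; nxt[i] = particle before i.
    head = -np.ones(ncell * ncell, dtype=int)
    nxt = -np.ones(len(pos), dtype=int)
    for i, c in enumerate(cell_id):
        nxt[i] = head[c]
        head[c] = i

    def neighbours(i):
        """Indices of particles within distance h of particle i."""
        cx, cy = cell[i]
        found = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                gx, gy = cx + dx, cy + dy
                if 0 <= gx < ncell and 0 <= gy < ncell:
                    j = head[gx * ncell + gy]
                    while j != -1:
                        if j != i and np.sum((pos[j] - pos[i])**2) < h * h:
                            found.append(j)
                        j = nxt[j]
        return found

    print("neighbours of particle 0:", neighbours(0))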

  4. Statistical correlations and risk analyses techniques for a diving dual phase bubble model and data bank using massively parallel supercomputers.

    Science.gov (United States)

    Wienke, B R; O'Leary, T R

    2008-05-01

    Linking model and data, we detail the LANL diving reduced gradient bubble model (RGBM), dynamical principles, and correlation with data in the LANL Data Bank. Table, profile, and meter risks are obtained from likelihood analysis and quoted for air, nitrox, helitrox no-decompression time limits, repetitive dive tables, and selected mixed gas and repetitive profiles. Application analyses include the EXPLORER decompression meter algorithm, NAUI tables, University of Wisconsin Seafood Diver tables, comparative NAUI, PADI, Oceanic NDLs and repetitive dives, comparative nitrogen and helium mixed gas risks, USS Perry deep rebreather (RB) exploration dive, world record open circuit (OC) dive, and Woodville Karst Plain Project (WKPP) extreme cave exploration profiles. The algorithm has seen extensive and utilitarian application in mixed gas diving, both in recreational and technical sectors, and forms the bases for released tables and decompression meters used by scientific, commercial, and research divers. The LANL Data Bank is described, and the methods used to deduce risk are detailed. Risk functions for dissolved gas and bubbles are summarized. Parameters that can be used to estimate profile risk are tallied. To fit data, a modified Levenberg-Marquardt routine is employed with L2 error norm. Appendices sketch the numerical methods, and list reports from field testing for (real) mixed gas diving. A Monte Carlo-like sampling scheme for fast numerical analysis of the data is also detailed, as a coupled variance reduction technique and additional check on the canonical approach to estimating diving risk. The method suggests alternatives to the canonical approach. This work represents a first-time correlation effort linking a dynamical bubble model with deep stop data. Supercomputing resources are requisite to connect model and data in application.
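
    As a toy illustration of fitting a parametric risk function to dive outcomes with a Levenberg-Marquardt routine and an L2 error norm, the sketch below fits a one-parameter exponential hazard to synthetic data with SciPy. The data, the hazard form, and the parameter value are invented for the example; this is not the RGBM, the LANL Data Bank, or the published likelihood analysis.

    # Toy fit of a parametric risk function to binary dive outcomes using a
    # Levenberg-Marquardt routine with an L2 error norm. Synthetic data and a
    # made-up one-parameter hazard; not the RGBM or the LANL Data Bank analysis.
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(1)
    exposure = rng.uniform(0.5, 3.0, 500)                # made-up supersaturation index
    true_k = 0.02
    p_true = 1.0 - np.exp(-true_k * exposure)            # simple exponential risk model
    outcome = (rng.random(500) < p_true).astype(float)   # 1 = event, 0 = clean dive

    def residuals(params):
        k = params[0]
        p = 1.0 - np.exp(-k * exposure)
        return p - outcome                               # L2 norm of these is minimized

    fit = least_squares(residuals, x0=[0.1], method="lm")
    print(f"fitted risk constant k = {fit.x[0]:.4f}  (true value {true_k})")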

  5. Building an Advanced Computing Environment with SAN Support

    Institute of Scientific and Technical Information of China (English)

    Dajian YANG; Mei MA; et al.

    2001-01-01

    The current computing environment of our Computing Center at IHEP uses a SAS (Server Attached Storage) architecture, attaching all the storage devices directly to the machines. This kind of storage strategy cannot properly meet the requirements of our BEPC II/BESIII project. Thus we designed and implemented a SAN-based computing environment, which consists of several computing farms, a three-level storage pool, a set of storage management software, and a web-based data management system. The features of our system include cross-platform data sharing, fast data access, high scalability, convenient storage management, and convenient data management.

  6. Rocks and geology in the San Francisco Bay region

    Science.gov (United States)

    Stoffer, Philip W.

    2002-01-01

    The landscape of the San Francisco Bay region is host to a greater variety of rocks than most other regions in the United States. This introductory guide provides illustrated descriptions of 46 common and important varieties of igneous, sedimentary, and metamorphic rock found in the region. Rock types are described in the context of their identification qualities, how they form, and where they occur in the region. The guide also provides discussion of regional geology, plate tectonics, the rock cycle, and the significance of the selected rock types in relation to both earth history and the impact of mineral resources on the development of the region. Maps and text also provide information on where rocks, fossils, and geologic features can be visited on public lands or in association with public displays in regional museums, park visitor centers, and other public facilities.

  7. Cacao use and the San Lorenzo Olmec.

    Science.gov (United States)

    Powis, Terry G; Cyphers, Ann; Gaikwad, Nilesh W; Grivetti, Louis; Cheong, Kong

    2011-05-24

    Mesoamerican peoples had a long history of cacao use--spanning more than 34 centuries--as confirmed by previous identification of cacao residues on archaeological pottery from Paso de la Amada on the Pacific Coast and the Olmec site of El Manatí on the Gulf Coast. Until now, comparable evidence from San Lorenzo, the premier Olmec capital, was lacking. The present study of theobromine residues confirms the continuous presence and use of cacao products at San Lorenzo between 1800 and 1000 BCE, and documents assorted vessels forms used in its preparation and consumption. One elite context reveals cacao use as part of a mortuary ritual for sacrificial victims, an event that occurred during the height of San Lorenzo's power.

  8. A Case for Historic Joint Rupture of the San Andreas and San Jacinto Faults

    Science.gov (United States)

    Lozos, J.

    2015-12-01

    The ~M7.5 southern California earthquake of 8 December 1812 ruptured the San Andreas Fault from Cajon Pass to at least as far north as Pallet Creek (Biasi et al., 2002). The 1812 rupture has also been identified in trenches at Burro Flats to the south (Yule and Howland, 2001). However, the lack of a record of 1812 at Plunge Creek, between Cajon Pass and Burro Flats (McGill et al., 2002), complicates the interpretation of this event as a straightforward San Andreas rupture. Paleoseismic records of a large early 19th century rupture on the northern San Jacinto Fault (Onderdonk et al., 2013; Kendrick and Fumal, 2005) allow for alternate interpretations of the 1812 earthquake. I use dynamic rupture modeling on the San Andreas-San Jacinto junction to determine which rupture behaviors produce slip patterns consistent with observations of the 1812 event. My models implement realistic fault geometry, a realistic velocity structure, and stress orientations based on seismicity literature. Under these simple assumptions, joint rupture of the two faults is the most common behavior. My modeling rules out a San Andreas-only rupture that is consistent with the data from the 1812 earthquake, and also shows that single fault events are unable to match the average slip per event for either fault. The choice of nucleation point affects the details of rupture directivity and slip distribution, but not the first order result that multi-fault rupture is the preferred behavior. While it cannot be definitively said that joint San Andreas-San Jacinto rupture occurred in 1812, these results are consistent with paleoseismic and historic data. This has implications for the possibility of future multi-fault rupture within the San Andreas system, as well as for interpretation of other paleoseismic events in regions of complex fault interactions.

  9. A Glorious Century of Art Education: San Francisco's Art Institute

    Science.gov (United States)

    Dobbs, Stephen Mark

    1976-01-01

    Author described the life and times of the San Francisco Art Institute and reviewed the forces that made San Francisco a city of more than ordinary awareness of the arts in its civic and civil existence. (Editor/RK)

  10. Species Observations (poly) - San Diego County [ds648

    Data.gov (United States)

    California Department of Resources — Created in 2009, the SanBIOS database serves as a single repository of species observations collected by various departments within the County of San Diego's Land...

  11. Mammal Track Counts - San Diego County, 2010 [ds709

    Data.gov (United States)

    California Department of Resources — The San Diego Tracking Team (SDTT) is a non-profit organization dedicated to promoting the preservation of wildlife habitat in San Diego County through citizen-based...

  12. Coastal Cactus Wren, San Diego Co. - 2009 [ds702

    Data.gov (United States)

    California Department of Resources — The San Diego Multiple Species Conservation program (MSCP) was developed for the conservation of plants and animals in the southeast portion of San Diego County....

  13. Coastal Cactus Wren, San Diego Co. - 2011 [ds708

    Data.gov (United States)

    California Department of Resources — The San Diego Multiple Species Conservation program (MSCP) was developed for the conservation of plants and animals in the southeast portion of San Diego County....

  14. Mammal Track Counts - San Diego County [ds442

    Data.gov (United States)

    California Department of Resources — The San Diego Tracking Team (SDTT) is a non-profit organization dedicated to promoting the preservation of wildlife habitat in San Diego County through citizen-based...

  15. San Marino-China Friendship Association 20 Years Old

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    San Marino, with a population of less than 30,000, is one of the smallest countries in Europe. In such a small country the San Marino-China Friendship Association (SMCFA) was set up and has exerted

  16. Las fortalezas de San Lázaro

    Directory of Open Access Journals (Sweden)

    Enrique Naranjo Martínez

    1961-11-01

    Full Text Available Generally in Cartagena, when speaking of the fortresses on the hill of San Lázaro, people refer to them collectively as the Castillo de San Felipe de Barajas, but in this there seems to be an error. The castle proper was built in 1657, according to the inscription that El Caballero bore on a marble plaque, from which the sonorous voice of the bell gave the alerts to the fortified city in the days of the colony.

  17. Cacao use and the San Lorenzo Olmec

    OpenAIRE

    Powis, Terry G.; Cyphers, Ann; Gaikwad, Nilesh W.; Grivetti, Louis; Cheong, Kong

    2011-01-01

    Mesoamerican peoples had a long history of cacao use—spanning more than 34 centuries—as confirmed by previous identification of cacao residues on archaeological pottery from Paso de la Amada on the Pacific Coast and the Olmec site of El Manatí on the Gulf Coast. Until now, comparable evidence from San Lorenzo, the premier Olmec capital, was lacking. The present study of theobromine residues confirms the continuous presence and use of cacao products at San Lorenzo between 1800 and 1000 BCE, an...

  18. Sediment Deposition, Erosion, and Bathymetric Change in Central San Francisco Bay: 1855-1979

    Science.gov (United States)

    Fregoso, Theresa A.; Foxgrover, Amy C.; Jaffe, Bruce E.

    2008-01-01

    Central San Francisco Bay is the hub of a dynamic estuarine system connecting the San Joaquin and Sacramento River Deltas, Suisun Bay, and San Pablo Bay to the Pacific Ocean and South San Francisco Bay. To understand the role that Central San Francisco Bay plays in sediment transport throughout the system, it is necessary to first determine historical changes in patterns of sediment deposition and erosion from both natural and anthropogenic forces. The first extensive hydrographic survey of Central San Francisco Bay was conducted in 1853 by the National Ocean Service (NOS) (formerly the United States Coast and Geodetic Survey (USCGS)). From 1894 to 1979, four additional surveys, composed of a total of approximately 700,000 bathymetric soundings, were collected within Central San Francisco Bay. Converting these soundings into accurate bathymetric models involved many steps. The soundings were either hand digitized directly from the original USCGS and NOS hydrographic sheets (H-sheets) or obtained digitally from the National Geophysical Data Center's (NGDC) Geophysical Data System (GEODAS) (National Geophysical Data Center, 1996). Soundings were supplemented with contours that were either taken directly from the H-sheets or added in by hand. Shorelines and marsh areas were obtained from topographic sheets. The digitized soundings, depth contours, shorelines, and marsh areas were entered into a geographic information system (GIS) and georeferenced to a common horizontal datum. Using surface modeling software, bathymetric grids with a horizontal resolution of 25 m were developed for each of the five hydrographic surveys. Before analyses of sediment deposition and erosion were conducted, interpolation bias was removed and all of the grids were converted to a common vertical datum. These bathymetric grids were then used to develop bathymetric change maps for subsequent survey periods and to determine long-term changes in deposition and erosion by calculating volumes and
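
    The final differencing step can be sketched simply: given two co-registered 25-m grids already reduced to a common vertical datum, subtract them and sum the positive and negative changes over the cell area. The arrays below are random placeholders standing in for two survey grids; only the bookkeeping is illustrated, not the actual survey data or gridding workflow.

    # Minimal sketch of the differencing step: two co-registered 25-m grids on a
    # common vertical datum (placeholder random arrays here) are subtracted and
    # the positive/negative changes are summed over the cell area.
    import numpy as np

    cell_area = 25.0 * 25.0                          # m^2 for a 25-m grid
    rng = np.random.default_rng(2)
    depth_early = rng.uniform(5.0, 30.0, (400, 400))             # placeholder survey
    depth_late = depth_early + rng.normal(0.0, 0.5, (400, 400))  # placeholder survey

    change = depth_early - depth_late                # positive = shallower = deposition
    deposition = np.nansum(np.where(change > 0, change, 0.0)) * cell_area
    erosion = np.nansum(np.where(change < 0, -change, 0.0)) * cell_area

    print(f"deposition: {deposition / 1e6:.1f} million m^3")
    print(f"erosion:    {erosion / 1e6:.1f} million m^3")
    print(f"net change: {(deposition - erosion) / 1e6:.1f} million m^3")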

  19. Evidence-Centered Design as a Foundation for ALD Development

    Science.gov (United States)

    Plake, Barbara S.; Huff, Kristen; Reshetar, Rosemary

    2009-01-01

    [Slides] presented at the Annual Meeting of National Council on Measurement in Education (NCME) in San Diego, CA in April 2009. This presentation discusses a methodology for directly connecting evidence-centered assessment design (ECD) to score interpretation and use through the development of Achievement level descriptors.

  20. Southern San Francisco Bay Colonial Nesting Bird Census 1995-1996: Don Edwards San Francisco Bay National Wildlife Refuge Lands

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — This report summarizes the 1995-1996 field season of the San Francisco Bay Bird Observatory (SFBBO) Colonial Waterbird Monitoring Study on the Don Edwards San...

  1. 33 CFR 110.210 - San Diego Harbor, CA.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false San Diego Harbor, CA. 110.210... ANCHORAGE REGULATIONS Anchorage Grounds § 110.210 San Diego Harbor, CA. (a) The anchorage grounds. (1... Commander, Naval Base, San Diego, CA. The administration of these anchorages is exercised by the...

  2. 78 FR 48646 - Pike & San Isabel Resource Advisory Committee

    Science.gov (United States)

    2013-08-09

    ... Forest Service Pike & San Isabel Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meetings. SUMMARY: The Pike & San Isabel Resource Advisory Committee will meet in Pueblo, Colorado... at the Supervisor's Office of the Pike & San Isabel National Forests, Cimarron and Comanche...

  3. 75 FR 51749 - Pike & San Isabel Resource Advisory Committee

    Science.gov (United States)

    2010-08-23

    ... No: 2010-20802] DEPARTMENT OF AGRICULTURE Forest Service Pike & San Isabel Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Pike & San Isabel Resource... at the Supervisor's Office of the Pike & San Isabel National Forests, Cimarron and Comanche...

  4. 76 FR 30903 - Pike & San Isabel Resource Advisory Committee

    Science.gov (United States)

    2011-05-27

    ... Forest Service Pike & San Isabel Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Pike & San Isabel Resource Advisory Committee will meet in Pueblo, Colorado... of the Pike & San Isabel National Forests, Cimarron and Comanche National Grasslands (PSICC) at...

  5. 75 FR 65609 - Pike & San Isabel Resource Advisory Committee

    Science.gov (United States)

    2010-10-26

    ... Forest Service Pike & San Isabel Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Pike & San Isabel Resource Advisory Committee will meet in Pueblo, Colorado... Supervisor's Office of the Pike & San Isabel National Forests, Cimarron and Comanche National...

  6. 76 FR 2331 - Pike & San Isabel Resource Advisory Committee

    Science.gov (United States)

    2011-01-13

    ... Forest Service Pike & San Isabel Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Pike & San Isabel Resource Advisory Committee will meet in Pueblo, Colorado... of the Pike & San Isabel National Forests, Cimarron and Comanche National Grasslands (PSICC) at...

  7. 77 FR 50459 - Pike & San Isabel Resource Advisory Committee

    Science.gov (United States)

    2012-08-21

    ... Forest Service Pike & San Isabel Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meetings. SUMMARY: The Pike & San Isabel Resource Advisory Committee will meet in Pueblo, Colorado... will be held at the Supervisor's Office of the Pike & San Isabel National Forests, Cimarron...

  8. 76 FR 27304 - Pike & San Isabel Resource Advisory Committee

    Science.gov (United States)

    2011-05-11

    ... Forest Service Pike & San Isabel Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Pike & San Isabel Resource Advisory Committee will meet in Pueblo, Colorado... Pike & San Isabel National Forests, Cimarron and Comanche National Grasslands (PSICC) at 2840...

  9. 75 FR 78675 - Pike & San Isabel Resource Advisory Committee

    Science.gov (United States)

    2010-12-16

    ... Forest Service Pike & San Isabel Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Pike & San Isabel Resource Advisory Committee will meet in Pueblo, Colorado... of the Pike & San Isabel National Forests, Cimarron and Comanche National Grasslands (PSICC) at...

  10. 76 FR 9540 - Pike & San Isabel Resource Advisory Committee

    Science.gov (United States)

    2011-02-18

    ... Forest Service Pike & San Isabel Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The Pike & San Isabel Resource Advisory Committee will meet in Pueblo, Colorado... of the Pike & San Isabel National Forests, Cimarron and Comanche National Grasslands (PSICC) at...

  11. 76 FR 12692 - San Juan National Forest Resource Advisory Committee

    Science.gov (United States)

    2011-03-08

    ... Forest Service San Juan National Forest Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The San Juan National Forest Resource Advisory Council (RAC) will meet in... comments should be sent to Attn: San Juan National Forest RAC, 15 Burnett Court, Durango, CO 81301...

  12. 76 FR 40876 - San Juan National Forest Resource Advisory Committee

    Science.gov (United States)

    2011-07-12

    ... Forest Service San Juan National Forest Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The San Juan National Forest Resource Advisory Council (RAC) will meet in... Sonoran Meeting Rooms. Written comments should be sent to Attn: San Juan National Forest RAC, 15 Burnett...

  13. 75 FR 48306 - San Juan National Forest Resource Advisory Committee

    Science.gov (United States)

    2010-08-10

    ... Forest Service San Juan National Forest Resource Advisory Committee AGENCY: Forest Service, USDA. ACTION: Notice of meeting. SUMMARY: The San Juan National Forest Resource Advisory Council (RAC) will meet in... comments should be sent to Attn: San Juan National Forest RAC, 15 Burnett Court, Durango, CO 81301...

  14. San Francisco Bay Long Term Management Strategy for Dredging

    Science.gov (United States)

    The San Francisco Bay Long Term Management Strategy (LTMS) is a cooperative effort to develop a new approach to dredging and dredged material disposal in the San Francisco Bay area. The LTMS serves as the Regional Dredging Team for the San Francisco area.

  15. Heidegger y el cristianismo de San Pablo y San Agustín

    Directory of Open Access Journals (Sweden)

    Francisco de Lara

    2007-01-01

    Full Text Available This text attempts to show the sense of the interpretation of Saint Paul and Saint Augustine that Heidegger carries out in his early Freiburg courses. Specifically, it aims to point out the reason why the young Heidegger recovers aspects of Christianity for his philosophical project, and which concrete elements the Epistles of Saint Paul and the Confessions of Saint Augustine provide him. In this way one can appreciate, among other things, the importance Heidegger attached to the emphasis on the self-world (Selbstwelt) and on the temporality that is characteristic of the Christian experience of living

  16. San Juan Islands National Wildlife Refuge : San Juan Wilderness : Wilderness management plan

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — This document is a plan regarding management of the San Juan Wilderness. After introducing the area, it analyzes current management practices against current public...

  17. Gravity cores from San Pablo Bay and Carquinez Strait, San Francisco Bay, California

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This data release contains information on gravity cores that were collected by the U.S. Geological Survey in the area of San Pablo Bay and Carquinez Strait,...

  18. New Mexico High School Supercomputing Challenge, 1990--1995: Five years of making a difference to students, teachers, schools, and communities. Progress report

    Energy Technology Data Exchange (ETDEWEB)

    Foster, M.; Kratzer, D.

    1996-02-01

    The New Mexico High School Supercomputing Challenge is an academic program dedicated to increasing interest in science and math among high school students by introducing them to high performance computing. This report provides a summary and evaluation of the first five years of the program, describes the program and shows the impact that it has had on high school students, their teachers, and their communities. Goals and objectives are reviewed and evaluated, growth and development of the program are analyzed, and future directions are discussed.

  19. San Jose, California: Solar in Action (Brochure)

    Energy Technology Data Exchange (ETDEWEB)

    2011-10-01

    This brochure provides an overview of the challenges and successes of San Jose, CA, a 2008 Solar America City awardee, on the path toward becoming a solar-powered community. Accomplishments, case studies, key lessons learned, and local resource information are given.

  20. San Diego, California: Solar in Action (Brochure)

    Energy Technology Data Exchange (ETDEWEB)

    2011-10-01

    This brochure provides an overview of the challenges and successes of San Diego, CA, a 2007 Solar America City awardee, on the path toward becoming a solar-powered community. Accomplishments, case studies, key lessons learned, and local resource information are given.

  1. Habitat--Offshore of San Gregorio, California

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This part of SIM 3306 presents data for the habitat map of the seafloor (see sheet 7, SIM 3306) of the Offshore of San Gregorio map area, California. The vector data...

  2. San Francisco's New Zoo's Connections for Conservation.

    Science.gov (United States)

    Routman, Emily

    2001-01-01

    Provides information on a redevelopment project at the San Francisco Zoo known as the New Zoo. The explicit goal of the project is to inspire a sense of caring and appreciation for wildlife that is the foundation of a conservation ethic. (DDR)

  3. SANS observations on weakly flocculated dispersions

    DEFF Research Database (Denmark)

    Mischenko, N.; Ourieva, G.; Mortensen, K.;

    1997-01-01

    Structural changes occurring in colloidal dispersions of poly-(methyl metacrylate) (PMMA) particles, sterically stabilized with poly-(12-hydroxystearic acid) (PHSA), while varying the solvent quality, temperature and shear rate, are investigated by small-angle neutron scattering (SANS). For a mod...

  4. San Diego Zoo:Success in Breeding

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    Giant pandas have become very popular in U.S. zoos. One in particular, the San Diego Zoo, has been extremely successful at making the pandas feel at home and getting them to breed. In 1999, it became home to the first surviving panda cub born in the United States.

  6. SANS analysis of aqueous ionic perfluoropolyether micelles

    CERN Document Server

    Gambi, C M C; Chittofrati, A; Pieri, R; Baglioni, P; Teixeira, J

    2002-01-01

    Preliminary SANS results of ionic chlorine terminated perfluoropolyether micelles in water are given. The experimental spectra have been analyzed by a two-shell ellipsoidal model for the micellar form factor and a screened Coulombic plus hard-sphere repulsion potential for the structure factor. (orig.)

  7. San Antonio, Texas: Solar in Action (Brochure)

    Energy Technology Data Exchange (ETDEWEB)

    2011-10-01

    This brochure provides an overview of the challenges and successes of San Antonio, TX, a 2008 Solar America City awardee, on the path toward becoming a solar-powered community. Accomplishments, case studies, key lessons learned, and local resource information are given.

  8. Contours--Offshore of San Gregorio, California

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This part of SIM 3306 presents data for the bathymetric contours for several seafloor maps (see sheets 1, 2, 3, 7, 10, SIM 3306) of the Offshore of San Gregorio map...

  9. Bathymetry--Offshore San Gregorio, California

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This part of SIM 3306 presents data for the bathymetry and shaded-relief maps (see sheet 1 and 2, SIM 3306) of the Offshore of San Gregorio map area, California....

  10. Nathaniel Hawthorne Elementary School: San Antonio, Texas.

    Science.gov (United States)

    American Educator, 1997

    1997-01-01

    Discusses the successful use of Core Knowledge Curriculum in one inner-city elementary school in San Antonio (Texas) that had previously reflected low student achievement, inconsistent attendance, and student behavioral problems. Improvements in these conditions as revealed through teacher observations are highlighted. (GR)

  11. 1986 San Salvador, El Salvador Images

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — At least 1,000 people killed, 10,000 injured, 200,000 homeless and severe damage in the San Salvador area. About 50 fatalities were the result of landslides in the...

  12. 77 FR 20379 - San Diego Gas &

    Science.gov (United States)

    2012-04-04

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY Federal Energy Regulatory Commission San Diego Gas & Electric Company v. Sellers of Energy and Ancillary Services Into Markets Operated by the California Independent System Operator Corporation and the California...

  13. High Performance Computing in Science and Engineering '98 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    1999-01-01

    The book contains reports about the most significant projects from science and industry that are using the supercomputers of the Federal High Performance Computing Center Stuttgart (HLRS). These projects are from different scientific disciplines, with a focus on engineering, physics and chemistry. They were carefully selected in a peer-review process and are showcases for an innovative combination of state-of-the-art physical modeling, novel algorithms and the use of leading-edge parallel computer technology. As HLRS is in close cooperation with industrial companies, special emphasis has been put on the industrial relevance of results and methods.

  14. Mass Storage System Upgrades at the NASA Center for Computational Sciences

    Science.gov (United States)

    Tarshish, Adina; Salmon, Ellen; Macie, Medora; Saletta, Marty

    2000-01-01

    The NASA Center for Computational Sciences (NCCS) provides supercomputing and mass storage services to over 1200 Earth and space scientists. During the past two years, the mass storage system at the NCCS underwent a great deal of change, both major and minor. Tape drives, silo control software, and the mass storage software itself were upgraded, and the mass storage platform was upgraded twice. Some of these upgrades were aimed at achieving year-2000 compliance, while others were simply upgrades to newer and better technologies. In this paper we will describe these upgrades.

  15. High Performance Computing in Science and Engineering '99 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2000-01-01

    The book contains reports about the most significant projects from science and engineering of the Federal High Performance Computing Center Stuttgart (HLRS). They were carefully selected in a peer-review process and are showcases of an innovative combination of state-of-the-art modeling, novel algorithms and the use of leading-edge parallel computer technology. The projects of HLRS are using supercomputer systems operated jointly by university and industry and therefore a special emphasis has been put on the industrial relevance of results and methods.

  16. 75 FR 8106 - Don Edwards San Francisco Bay National Wildlife Refuge, Alameda, Santa Clara, and San Mateo...

    Science.gov (United States)

    2010-02-23

    ... Fish and Wildlife Service Don Edwards San Francisco Bay National Wildlife Refuge, Alameda, Santa Clara... located in Alameda, Santa Clara, and San Mateo Counties of California. We provide this notice in... in Alameda, Santa Clara, and San Mateo Counties of California, consists of several non...

  17. 75 FR 8804 - Safety Zone; NASSCO Launching of USNS Charles Drew, San Diego Bay, San Diego, CA.

    Science.gov (United States)

    2010-02-26

    ... Diego Bay, San Diego, CA. AGENCY: Coast Guard, DHS. ACTION: Temporary final rule. SUMMARY: The Coast Guard is establishing a temporary safety zone on the navigable waters of the San Diego Bay in support of... Captain of the Port (COTP) San Diego or his designated representative. DATES: This rule is effective from...

  18. 77 FR 48532 - Notice of Intent To Repatriate Cultural Items: San Diego State University, San Diego, CA

    Science.gov (United States)

    2012-08-14

    ... National Park Service Notice of Intent To Repatriate Cultural Items: San Diego State University, San Diego, CA AGENCY: National Park Service, Interior. ACTION: Notice. SUMMARY: The San Diego State University... Diego State University Archaeology Collections Management Program. DATES: Representatives of any Indian...

  19. 75 FR 17329 - Safety Zone; Big Bay Fourth of July Fireworks, San Diego Bay, San Diego, CA

    Science.gov (United States)

    2010-04-06

    ... SECURITY Coast Guard 33 CFR Part 165 RIN 1625-AA00 Safety Zone; Big Bay Fourth of July Fireworks, San Diego Bay, San Diego, CA AGENCY: Coast Guard, DHS. ACTION: Notice of proposed rulemaking. SUMMARY: The Coast Guard proposes establishing a temporary safety zone on the navigable waters of the San Diego Bay in...

  20. 76 FR 70480 - Otay River Estuary Restoration Project, South San Diego Bay Unit of the San Diego Bay National...

    Science.gov (United States)

    2011-11-14

    ... Fish and Wildlife Service Otay River Estuary Restoration Project, South San Diego Bay Unit of the San Diego Bay National Wildlife Refuge, California; Environmental Impact Statement AGENCY: Fish and Wildlife... the San Diego Bay National Wildlife Refuge. This notice advises the public that we intend to gather...

  1. 78 FR 48044 - Safety Zone; San Diego International Airport Terminal Two West Grand Opening Fireworks; San Diego...

    Science.gov (United States)

    2013-08-07

    ... [Docket No. USCG-2013-0637] RIN 1625-AA00 Safety Zone; San Diego International Airport Terminal Two West Grand Opening Fireworks; San Diego, CA AGENCY: Coast Guard, DHS. ACTION: Temporary final rule. SUMMARY: The Coast Guard is establishing a safety zone on the navigable waters of the San Diego Bay in support...

  2. Super-computer architecture

    CERN Document Server

    Hockney, R W

    1977-01-01

    This paper examines the design of the top-of-the-range, scientific, number-crunching computers. The market for such computers is not as large as that for smaller machines, but on the other hand it is by no means negligible. The present work-horse machines in this category are the CDC 7600 and IBM 360/195, and over fifty of the former machines have been sold. The types of installation that form the market for such machines are not only the major scientific research laboratories in the major countries-such as Los Alamos, CERN, Rutherford laboratory-but also major universities or university networks. It is also true that, as with sports cars, innovations made to satisfy the top of the market today often become the standard for the medium-scale computer of tomorrow. Hence there is considerable interest in examining present developments in this area. (0 refs).

  3. Supercomputer debugging workshop `92

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.S.

    1993-02-01

    This report contains papers or viewgraphs on the following topics: The ABCs of Debugging in the 1990s; Cray Computer Corporation; Thinking Machines Corporation; Cray Research, Incorporated; Sun Microsystems, Inc; Kendall Square Research; The Effects of Register Allocation and Instruction Scheduling on Symbolic Debugging; Debugging Optimized Code: Currency Determination with Data Flow; A Debugging Tool for Parallel and Distributed Programs; Analyzing Traces of Parallel Programs Containing Semaphore Synchronization; Compile-time Support for Efficient Data Race Detection in Shared-Memory Parallel Programs; Direct Manipulation Techniques for Parallel Debuggers; Transparent Observation of XENOOPS Objects; A Parallel Software Monitor for Debugging and Performance Tools on Distributed Memory Multicomputers; Profiling Performance of Inter-Processor Communications in an iWarp Torus; The Application of Code Instrumentation Technology in the Los Alamos Debugger; and CXdb: The Road to Remote Debugging.

  4. Associative Memories for Supercomputers

    Science.gov (United States)

    1992-12-01

    Transform (FFT) is computed. The real part is extracted and a bias equal to its minimum is added to it in order to make all the values positive. (The remainder of this scanned record is duplicated text and figure-caption residue; the recoverable caption, translated from French, reads: "Figure 12: photograph of the reconstruction obtained with the IOCDL plate corresponding to the binary phase, in rotation, showing ...".)

  5. Power-constrained supercomputing

    Science.gov (United States)

    Bailey, Peter E.

    As we approach exascale systems, power is turning from an optimization goal to a critical operating constraint. With power bounds imposed by both stakeholders and the limitations of existing infrastructure, achieving practical exascale computing will therefore rely on optimizing performance subject to a power constraint. However, this requirement should not add to the burden of application developers; optimizing the runtime environment given restricted power will primarily be the job of high-performance system software. In this dissertation, we explore this area and develop new techniques that extract maximum performance subject to a particular power constraint. These techniques include a method to find theoretical optimal performance, a runtime system that shifts power in real time to improve performance, and a node-level prediction model for selecting power-efficient operating points. We use a linear programming (LP) formulation to optimize application schedules under various power constraints, where a schedule consists of a DVFS state and number of OpenMP threads for each section of computation between consecutive message passing events. We also provide a more flexible mixed integer-linear (ILP) formulation and show that the resulting schedules closely match schedules from the LP formulation. Across four applications, we use our LP-derived upper bounds to show that current approaches trail optimal, power-constrained performance by up to 41%. This demonstrates limitations of current systems, and our LP formulation provides future optimization approaches with a quantitative optimization target. We also introduce Conductor, a run-time system that intelligently distributes available power to nodes and cores to improve performance. The key techniques used are configuration space exploration and adaptive power balancing. Configuration exploration dynamically selects the optimal thread concurrency level and DVFS state subject to a hardware-enforced power bound. Adaptive power balancing efficiently predicts where critical paths are likely to occur and distributes power to those paths. Greater power, in turn, allows increased thread concurrency levels, CPU frequency/voltage, or both. We describe these techniques in detail and show that, compared to the state-of-the-art technique of using statically predetermined, per-node power caps, Conductor leads to a best-case performance improvement of up to 30%, and an average improvement of 19.1%. At the node level, an accurate power/performance model will aid in selecting the right configuration from a large set of available configurations. We present a novel approach to generate such a model offline using kernel clustering and multivariate linear regression. Our model requires only two iterations to select a configuration, which provides a significant advantage over exhaustive search-based strategies. We apply our model to predict power and performance for different applications using arbitrary configurations, and show that our model, when used with hardware frequency-limiting in a runtime system, selects configurations with significantly higher performance at a given power limit than those chosen by frequency-limiting alone. When applied to a set of 36 computational kernels from a range of applications, our model accurately predicts power and performance; our runtime system based on the model maintains 91% of optimal performance while meeting power constraints 88% of the time. 
When the runtime system violates a power constraint, it exceeds the constraint by only 6% in the average case, while simultaneously achieving 54% more performance than an oracle. Through the combination of the above contributions, we hope to provide guidance and inspiration to research practitioners working on runtime systems for power-constrained environments. We also hope this dissertation will draw attention to the need for software and runtime-controlled power management under power constraints at various levels, from the processor level to the cluster level.
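
    The LP formulation itself is not reproduced in this record. As a rough illustration of the idea, the following sketch (a toy relaxation using scipy.optimize.linprog, with invented timing and power numbers, and an average-power rather than instantaneous-power constraint) chooses, for each code section, fractions of its work to run in each configuration so that total runtime is minimized under a power cap.

# Toy power-constrained schedule LP, in the spirit of the formulation described
# above. All numbers are assumptions for illustration, not data from the work.
import numpy as np
from scipy.optimize import linprog

# T[i, c]: time (s) to run all of section i in configuration c (e.g., a DVFS
# state / thread-count pair); P[c]: node power (W) in configuration c.
T = np.array([[10.0, 7.0, 5.5],
              [ 8.0, 6.5, 6.0]])
P = np.array([60.0, 80.0, 110.0])
P_cap = 90.0  # power bound (W)

n_sec, n_cfg = T.shape
cost = T.flatten()                               # minimize total runtime
# Average-power constraint: sum x*T*(P - P_cap) <= 0  <=>  energy/time <= P_cap
A_ub = (T * (P - P_cap)).flatten()[None, :]
b_ub = [0.0]
# Each section's configuration fractions must sum to 1.
A_eq = np.zeros((n_sec, n_sec * n_cfg))
for i in range(n_sec):
    A_eq[i, i * n_cfg:(i + 1) * n_cfg] = 1.0
b_eq = np.ones(n_sec)

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, method="highs")
print("schedule fractions per section:\n", res.x.reshape(n_sec, n_cfg).round(3))
print("total time: %.2f s" % res.fun)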

  6. San Pascual (2013) Año L, n. 366

    OpenAIRE

    Pérez, María Dolores, O.S.C. (Directora)

    2013-01-01

    Editorial. Current news: His Holiness Pope Benedict XVI and his resignation. Pilgrims of the Faith with San Pascual Baylón. Pascualine anniversaries: 15 February 1946, the shrine of San Pascual Baylón, a work by the sculptor José Pascual Ortells, is inaugurated on the Puente del Mar in Valencia. Other anniversaries. San Pascual in ceramics: the ceramic altarpiece of San Pascual, Burriana. San Pascual and the Popes: Pope Benedict XV. Year of Faith: exhibition of crucifixes in the "Pouet del Sant" museum of the Monast...

  7. San Juan Uchucuanicu: évolution historique

    Directory of Open Access Journals (Sweden)

    1975-01-01

    The community of San Juan has been officially recognized since 1939. A first part concerns the organization of the reducción of San Juan around the middle of the 16th century. The fiscal burden weighed heavily on the village, and in the 17th century the crisis was general throughout the Chancay valley. The Christianization of the inhabitants was complete by the middle of that same century. From the end of the 17th century and throughout the 18th, conflicts multiplied between San Juan and the neighboring villages over grazing lands and the possession of water. The second part of the work concerns the relations of the community of San Juan with contemporary Peru: a fiscal burden that remained very heavy at the end of the colonial period, and exactions by the military just before independence. The republican period still saw conflicts with neighboring villages, but also the emergence of families seeking to extract the maximum from the community. The lands were divided and allotted: the deterioration of the traditional communal organization is evident. Conflicts multiplied among small landowners, but also with the neighboring haciendas: a genuine class struggle appeared. The present situation is uncertain, and the weight of the market economy is growing with the exodus of the young. What will the community of San Juan be at the end of this century?

  8. Performance Characteristics of Hybrid MPI/OpenMP Scientific Applications on a Large-Scale Multithreaded BlueGene/Q Supercomputer

    KAUST Repository

    Wu, Xingfu

    2013-07-01

    In this paper, we investigate the performance characteristics of five hybrid MPI/OpenMP scientific applications (two NAS Parallel Benchmarks Multi-Zone codes, SP-MZ and BT-MZ; an earthquake simulation, PEQdyna; an aerospace application, PMLB; and a 3D particle-in-cell application, GTC) on a large-scale multithreaded Blue Gene/Q supercomputer at Argonne National Laboratory, and quantify the performance gap resulting from using different numbers of threads per node. We use performance tools and MPI profile and trace libraries available on the supercomputer to analyze and compare the performance of these hybrid scientific applications as the number of OpenMP threads per node increases, and find that beyond a point, adding threads saturates or worsens performance of these hybrid applications. For the strong-scaling hybrid scientific applications such as SP-MZ, BT-MZ, PEQdyna and PMLB, using 32 threads per node results in much better application efficiency than using 64 threads per node; as the number of threads per node increases, the FPU (floating point unit) percentage decreases, and the MPI percentage (except for PMLB) and IPC (instructions per cycle) per core (except for BT-MZ) increase. For the weak-scaling hybrid scientific application GTC, the performance trend (relative speedup) with increasing numbers of threads per node is very similar no matter how many nodes (32, 128, 512) are used. © 2013 IEEE.
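
    The study's measurement workflow is tied to Blue Gene/Q tooling, but the basic thread-count sweep it describes can be sketched generically. The driver below runs a hypothetical hybrid MPI/OpenMP binary under varying OMP_NUM_THREADS and records wall time; the executable name, rank count, and use of mpirun are assumptions (production Blue Gene/Q jobs would go through the system's own launcher).

# Hypothetical sweep over OpenMP threads per MPI rank for a hybrid benchmark.
import os, subprocess, time

EXE = "./bt-mz.exe"   # placeholder hybrid MPI/OpenMP binary
RANKS = 16            # MPI ranks (assumed)

for threads in (1, 2, 4, 8, 16, 32, 64):
    env = dict(os.environ, OMP_NUM_THREADS=str(threads))
    t0 = time.time()
    subprocess.run(["mpirun", "-np", str(RANKS), EXE], env=env, check=True)
    print(f"{threads:3d} threads/rank: {time.time() - t0:7.2f} s wall time")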

  9. Effects of a major earthquake on calls to regional poison control centers.

    OpenAIRE

    Nathan, A. R.; Olson, K.R.; Everson, G. W.; Kearney, T E; Blanc, P. D.

    1992-01-01

    We retrospectively evaluated the effect of the Loma Prieta earthquake on calls to 2 designated regional poison control centers (San Francisco and Santa Clara) in the area. In the immediate 12 hours after the earthquake, there was an initial drop (31%) in call volume, related to telephone system overload and other technical problems. Calls from Bay Area counties outside of San Francisco and Santa Clara decreased more dramatically than those from within the host counties where the poison contro...

  10. A Retail Center Facing Change: Using Data to Determine Marketing Strategy

    Science.gov (United States)

    Walker, Kristen L.; Curren, Mary T.; Kiesler, Tina

    2013-01-01

    Plaza del Valle is an open-air shopping center in the San Fernando Valley region of Los Angeles. The new marketing manager must review primary and secondary data to determine a target market, a product positioning strategy, and a promotion strategy for the retail shopping center with the ultimate goal of increasing revenue for the Plaza. She is…

  11. 78 FR 65300 - Notice of Availability (NOA) for General Purpose Warehouse and Information Technology Center...

    Science.gov (United States)

    2013-10-31

    ...: Federal Docket Management System Office, 4800 Mark Center Drive, East Tower, 2nd floor, Suite 02G09... of the Secretary Notice of Availability (NOA) for General Purpose Warehouse and Information... Purpose Warehouse and Information Technology Center at Defense Distribution Depot San Joaquin,...

  13. A Result Data Offloading Service for HPC Centers

    Energy Technology Data Exchange (ETDEWEB)

    Monti, Henri [Virginia Polytechnic Institute and State University (Virginia Tech)]; Butt, Ali R [Virginia Polytechnic Institute and State University (Virginia Tech)]; Vazhkudai, Sudharshan S [ORNL]

    2007-01-01

    Modern High-Performance Computing applications are consuming and producing an exponentially increasing amount of data. This increase has led to a significant number of resources being dedicated to data staging in and out of Supercomputing Centers. The typical approach to staging is a direct transfer of application data between the center and the application submission site. Such a direct data transfer approach becomes problematic, especially for staging-out, as (i) the data transfer time increases with the size of the data, and may exceed the time allowed by the center's purge policies; and (ii) the submission site may not be online to receive the data, thus further increasing the chances for output data to be purged. In this paper, we argue for a systematic data staging-out approach that utilizes intermediary data-holding nodes to quickly offload data from the center to the intermediaries, thus avoiding the peril of a purge and addressing the two issues mentioned above. The intermediary nodes provide temporary data storage for the staged-out data and maximize the offload bandwidth by providing multiple data-flow paths from the center to the submission site. Our initial investigation shows such a technique to be effective in addressing the above two issues and providing better QoS guarantees for data retrieval.
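
    As a minimal sketch of this staging-out pattern, the snippet below chunks a result file and pushes the chunks, in parallel and round-robin, to several intermediary storage locations from which the submission site could later pull them. The intermediaries are simulated by local directories; the chunk size, paths, and copy transport are illustrative assumptions, not the paper's implementation.

# Sketch: offload a result file to intermediary nodes via multiple parallel paths.
import os
from concurrent.futures import ThreadPoolExecutor

CHUNK = 4 * 1024 * 1024  # 4 MiB chunks (assumed)

def _write_chunk(dest, data):
    with open(dest, "wb") as out:
        out.write(data)

def offload(result_file, intermediaries):
    """Split result_file into chunks and push chunk i to intermediaries[i % n]."""
    with open(result_file, "rb") as f, ThreadPoolExecutor(len(intermediaries)) as pool:
        jobs, i = [], 0
        while True:
            data = f.read(CHUNK)
            if not data:
                break
            dest = os.path.join(intermediaries[i % len(intermediaries)],
                                f"{os.path.basename(result_file)}.part{i:05d}")
            jobs.append(pool.submit(_write_chunk, dest, data))
            i += 1
        for j in jobs:          # surface any I/O errors
            j.result()
    return i

if __name__ == "__main__":
    staging = ["/tmp/intermediary_a", "/tmp/intermediary_b"]  # stand-ins for remote nodes
    for d in staging:
        os.makedirs(d, exist_ok=True)
    with open("/tmp/result.dat", "wb") as f:                  # dummy simulation output
        f.write(os.urandom(8 * 1024 * 1024))
    print("offloaded", offload("/tmp/result.dat", staging), "chunks")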

  14. The Use of Radar to Improve Rainfall Estimation over the Tennessee and San Joaquin River Valleys

    Science.gov (United States)

    Petersen, Walter A.; Gatlin, Patrick N.; Felix, Mariana; Carey, Lawrence D.

    2010-01-01

    This slide presentation provides an overview of the collaborative radar rainfall project between the Tennessee Valley Authority (TVA), the Von Braun Center for Science & Innovation (VCSI), NASA MSFC and UAHuntsville. Two systems were used in this project: the Advanced Radar for Meteorological & Operational Research (ARMOR) Rainfall Estimation Processing System (AREPS), a demonstration project of real-time radar rainfall using a research radar, and the NEXRAD Rainfall Estimation Processing System (NREPS). The objectives, methodology, some results and validation, operational experience and lessons learned are reviewed. Another project using radar to improve rainfall estimation is in California, specifically the San Joaquin River Valley. It is part of an overall effort to develop an integrated tool to assist water management within the San Joaquin River Valley, which involves integrating several components: (1) radar precipitation estimates, (2) a distributed hydrologic model, and (3) snowfall measurements and surface temperature/moisture measurements. NREPS was selected to provide the precipitation component.

  15. San Cristobal Galapagos wind power project

    Energy Technology Data Exchange (ETDEWEB)

    Tolan, J. [Sgurr Energy, Glasgow (United Kingdom)]

    2009-07-01

    The San Cristobal Galapagos wind power project was described. With its unique endemic flora and fauna, the Galapagos Islands were declared a world heritage site and marine reserve. The San Cristobal wind project was initiated in 1999 to reduce the environmental impacts of energy use on the island, and has been operational since 2007. Three 800 kW wind turbines have been installed in order to reduce the island's diesel generation by 52 per cent. The project's high penetration wind-diesel hybrid system included 300 kW diesel generators, a 13.2 kV utility distribution system, and six 300 kW wind turbines. The project is located outside of Galapagos Petrel flight paths and nesting areas. Turbines from a factory in Spain were used. The wind turbine foundation was constructed from concrete sand and stone mined on the island. Photographs of the installation process were included. tabs., figs.

  16. San Telmo, backpackers y otras globalizaciones

    Directory of Open Access Journals (Sweden)

    Fernando Firmo

    2015-12-01

    This article aims to contribute to the debate on other forms of globalization by presenting an ethnography, carried out in the San Telmo neighborhood, of backpackers who combine travel and work in their experiences. Their objective is to travel while at the same time profiting from it in order to obtain the capital needed to keep moving around the globe. In this text I want to speak about these genuine actors of popular globalization, which puts the focus on alternative, non-hegemonic processes and agents and which, in this case, unfolds in the context of the backpacker experience in San Telmo; my intention is to enrich reflections on globalization from below.

  17. SAFOD Penetrates the San Andreas Fault

    Directory of Open Access Journals (Sweden)

    Mark D. Zoback

    2006-03-01

    SAFOD, the San Andreas Fault Observatory at Depth (Fig. 1), completed an important milestone in July 2005 by drilling through the San Andreas Fault at seismogenic depth. SAFOD is one of three major components of EarthScope, a U.S. National Science Foundation (NSF) initiative being conducted in collaboration with the U.S. Geological Survey (USGS). The International Continental Scientific Drilling Program (ICDP) provides engineering and technical support for the project as well as online access to project data and information (http://www.icdp-online.de/sites/sanandreas/news/news1.html). In 2002, the ICDP, the NSF, and the USGS provided funding for a pilot hole project at the SAFOD site. Twenty scientific papers summarizing the results of the pilot hole project as well as pre-SAFOD site characterization studies were published in Geophysical Research Letters (Vol. 31, Nos. 12 and 15, 2004).

  18. Discovery Along the San Andreas Fault: Relocating Photographs From the 1906 Earthquake in San Francisco and San Mateo Counties

    Science.gov (United States)

    Grove, K.; Prentice, C.; Polly, J.; Yuen, C.; Wu, K.; Zhong, S.; Lopez, J.

    2005-12-01

    April of 2006 will mark the 100-year anniversary of the great 1906 San Francisco earthquake. This earthquake was important not only because of its human tragedy (thousands of dead or homeless people), but also because of its scientific significance. The 8.3 magnitude earthquake ruptured 430 km of the northern San Andreas fault (SAF) and lasted nearly one minute. Investigations after the earthquake led to discoveries that were the beginning of modern earthquake theories and measuring instruments. This was also one of the first large-scale natural disasters to be photographed. Our research group, which is part of the National Science Foundation funded SF-ROCKS program, acquired photographs that were taken shortly after the earthquake in downtown San Francisco and along the SAF in San Mateo County. The SAF photos are part of a Geographical Information System (GIS) database being published on a U.S. Geological Survey web site. The goal of our project was to improve estimates of photograph locations and to compare the landscape features that were visible after the earthquake with the landscape that we see today. We used the GIS database to find initial photo locations, and we then used a high-precision Global Positioning System (GPS) to measure the geographic coordinates of the locations once we matched our view to what we saw in a photo. Where possible, we used a digital camera to retake photos from the same position, to show the difference in the landscape 100 years later. The 1906 photos show fault zone features such as ground rupture, sag ponds, shutter ridges, and offset fences. Changes to the landscape since 1906 have included erosion and grading of the land, building of houses and other structures, and more tree cover compared to previous grassland vegetation. Our project is part of 1906 Earthquake Centennial activities; it is contributing to the photo archive that helps scientists and engineers who study earthquakes and their effects. It will also help the

  19. Centering research

    DEFF Research Database (Denmark)

    Katan, Lina Hauge; Baarts, Charlotte

    and collected 24 portfolios in which students reflect auto-ethnographically on their educational practices. Analyzing this qualitative material, we explore how researchers and students respectively read and write to develop and advance their thinking in those learning processes that the two groups fundamentally...... share as the common aim of both research and education. Despite some similarities, we find that how the two groups engage in and benefit from reading and writing diverges significantly. Thus we have even more reason to believe that centering practice-based teaching on these aspects of research is a good...

  20. Examination of spotted sand bass (Paralabrax maculatofasciatus) pollutant bioaccumulation in San Diego Bay, San Diego, California.

    Science.gov (United States)

    Loflen, Chad L

    2013-01-01

    The spotted sand bass (Paralabrax maculatofasciatus) is an important recreational sport and subsistence food fish within San Diego Bay, a large industrialized harbor in San Diego, California. Despite this importance, few studies examining the species' life history relative to pollutant tissue concentrations and the consumptive fishery exist. This study utilized data from three independent spotted sand bass studies from 1989 to 2002 to investigate PCB, DDT, and mercury tissue concentrations relative to spotted sand bass age and growth in San Diego Bay, with subsequent comparisons to published pollutant advisory levels and fishery regulations for recreational and subsistence consumption of the species. Subsequent analysis focused on examining temporal and spatial differences for different regions of San Diego Bay. Study results for growth confirmed previous work, finding the species to exhibit highly asymptotic growth, making tissue pollutant concentrations at initial take size difficult if not impossible to predict. This was corroborated by independent tissue concentration results for mercury, for which no relationship between fish size and pollutant bioaccumulation was observed. However, a positive though highly variable relationship was observed between fish size and PCB tissue concentration. Despite these findings, a significant proportion of fish exhibited pollutant levels above recommended state recreational angler consumption advisory levels for PCBs and mercury, especially for fish above the minimum take size, making the necessity of at-size predictions less critical. Lastly, no difference in tissue concentration was found temporally or spatially within San Diego Bay.

  1. San Bernardino National Wildlife Refuge Well 10

    Energy Technology Data Exchange (ETDEWEB)

    Ensminger, J.T.; Easterly, C.E.; Ketelle, R.H.; Quarles, H.; Wade, M.C.

    1999-12-01

    The U.S. Geological Survey (USGS), at the request of the U.S. Fish and Wildlife Service, evaluated the water production capacity of an artesian well in the San Bernardino National Wildlife Refuge, Arizona. Water from the well initially flows into a pond containing three federally threatened or endangered fish species, and water from this pond feeds an adjacent pond/wetland containing an endangered plant species.

  2. Bismuth ochers from San Diego Co., California

    Science.gov (United States)

    Schaller, W.T.

    1911-01-01

    The chief points brought out in this paper may be briefly summarized as follows: (1) The existence of natural Bi2O3 has not been established. (2) Natural bismite or bismuth ocher, when pure, is more probably a bismuth hydroxide. (3) The bismuth ochers from San Diego County, California, are either a bismuth hydroxide or bismuth vanadate, pucherite, or mixtures of these two. (4) Pucherite has been found noncrystalline and determined for the first time in the United States.

  3. The San Joaquin Valley Westside Perspective

    Energy Technology Data Exchange (ETDEWEB)

    Quinn, Nigel W.T.; Linneman, J. Christopher; Tanji, Kenneth K.

    2006-03-27

    Salt management has been a challenge to westside farmers since the rapid expansion of irrigated agriculture in the 1900s. The soils in this area are naturally salt-affected, having formed from marine sedimentary rocks rich in sea salts, rendering the shallow groundwater, and drainage return flows discharging into the lower reaches of the San Joaquin River, saline. Salinity problems are affected by the imported water supply from the Delta, where the Sacramento and San Joaquin Rivers combine. Water quality objectives on salinity and boron have been in place for decades to protect beneficial uses of the river. However, it was the selenium-induced avian toxicity that occurred in the evaporation ponds of Kesterson Reservoir (the terminal reservoir of a planned but not completed San Joaquin Basin Master Drain) that changed public attitudes about agricultural drainage and initiated a steady stream of environmental legislation directed at reducing non-point source pollution of the River. Annual and monthly selenium load restrictions and salinity and boron Total Maximum Daily Loads (TMDLs) are the most recent of these policy initiatives. Failure by both State and Federal water agencies to construct a Master Drain facility serving mostly west-side irrigated agriculture has constrained these agencies to consider only In-Valley solutions to ongoing drainage problems. For the Westlands subarea, which has no surface irrigation drainage outlet to the San Joaquin River, innovative drainage reuse systems such as Integrated Farm Drainage Management (IFDM) offer short- to medium-term solutions while more permanent remedies to salt disposal are being investigated. Real-time salinity management, which requires improved coordination of east-side reservoir releases and west-side drainage, offers some relief to Grasslands Basin farmers and wetland managers, allowing greater salinity loading to the River than under a strict TMDL. However, current regulation drives a policy that results in a moratorium on all

  4. Border Security: The San Diego Fence

    Science.gov (United States)

    2007-05-23

    sector is located north of Tijuana and Tecate, Mexican cities with a combined population of 2 million people, and features no natural barriers to entry...more marked in the areas where fencing was constructed within San Diego sector. The USBP’s Imperial Beach and Chula Vista stations saw their...effects on (1) the Tijuana River National Estuarine Research and Reserve; (2) state and federally listed threatened and endangered species; (3) lands

  5. An overview of San Francisco Bay PORTS

    Science.gov (United States)

    Cheng, Ralph T.; McKinnie, David; English, Chad; Smith, Richard E.

    1998-01-01

    The Physical Oceanographic Real-Time System (PORTS) provides observations of tides, tidal currents, and meteorological conditions in real-time. The San Francisco Bay PORTS (SFPORTS) is a decision support system to facilitate safe and efficient maritime commerce. In addition to real-time observations, SFPORTS includes a nowcast numerical model forming a San Francisco Bay marine nowcast system. SFPORTS data and nowcast numerical model results are made available to users through the World Wide Web (WWW). A brief overview of SFPORTS is presented, from the data flow originated at instrument sensors to final results delivered to end users on the WWW. A user-friendly interface for SFPORTS has been designed and implemented. Appropriate field data analysis, nowcast procedures, design and generation of graphics for WWW display of field data and nowcast results are presented and discussed. Furthermore, SFPORTS is designed to support hazardous materials spill prevention and response, and to serve as resources to scientists studying the health of San Francisco Bay ecosystem. The success (or failure) of the SFPORTS to serve the intended user community is determined by the effectiveness of the user interface.

  6. Magnetotelluric Data, Southern San Luis Valley, Colorado

    Science.gov (United States)

    Williams, Jackie M.; Rodriguez, Brian D.

    2007-01-01

    The population of the San Luis Valley region is growing rapidly. The shallow unconfined aquifer and the deeper confined Santa Fe Group aquifer in the San Luis Basin are the main sources of municipal water for the region. Water shortfalls could have serious consequences. Future growth and land management in the region depend on accurate assessment and protection of the region's ground-water resources. An important issue in managing the ground-water resources is a better understanding of the hydrogeology of the Santa Fe Group and the nature of the sedimentary deposits that fill the Rio Grande rift, which contain the principal ground-water aquifers. The U.S. Geological Survey (USGS) is conducting a series of multidisciplinary studies of the San Luis Basin located in southern Colorado. Detailed geologic mapping, high-resolution airborne magnetic surveys, gravity surveys, an electromagnetic survey, called magnetotellurics (MT), and hydrologic and lithologic data are being used to better understand the aquifer systems. The primary goal of the MT survey is to map changes in electrical resistivity with depth that are related to differences in rock type. These various rock types help control the properties of aquifers in the region. This report does not include any interpretation of the data. Its purpose is to release the MT data acquired at the 22 stations shown in figure 1.

  7. Magnetotelluric Data, San Luis Valley, Colorado

    Science.gov (United States)

    Rodriguez, Brian D.; Williams, Jackie M.

    2008-01-01

    The population of the San Luis Valley region is growing. Water shortfalls could have serious consequences. Future growth and land management in the region depend on accurate assessment and protection of the region's ground-water resources. An important issue in managing the ground-water resources is a better understanding of the hydrogeology of the Santa Fe Group and the nature of the sedimentary deposits that fill the Rio Grande rift, which contain the principal ground-water aquifers. The shallow unconfined aquifer and the deeper confined Santa Fe Group aquifer in the San Luis Basin are the main sources of municipal water for the region. The U.S. Geological Survey (USGS) is conducting a series of multidisciplinary studies of the San Luis Basin located in southern Colorado. Detailed geologic mapping, high-resolution airborne magnetic surveys, gravity surveys, an electromagnetic survey (called magnetotellurics, or MT), and hydrologic and lithologic data are being used to better understand the aquifers. The MT survey's primary goal is to map changes in electrical resistivity with depth that are related to differences in rock types. These various rock types help control the properties of aquifers. This report does not include any data interpretation. Its purpose is to release the MT data acquired at 24 stations. Two of the stations were located near Santa Fe, New Mexico, near deep wildcat wells. Well logs from those wells will help tie future interpretations of these data with geologic units from the Santa Fe Group sediments to Precambrian basement.

  8. The center for causal discovery of biomedical knowledge from big data.

    Science.gov (United States)

    Cooper, Gregory F; Bahar, Ivet; Becich, Michael J; Benos, Panayiotis V; Berg, Jeremy; Espino, Jeremy U; Glymour, Clark; Jacobson, Rebecca Crowley; Kienholz, Michelle; Lee, Adrian V; Lu, Xinghua; Scheines, Richard

    2015-11-01

    The Big Data to Knowledge (BD2K) Center for Causal Discovery is developing and disseminating an integrated set of open source tools that support causal modeling and discovery of biomedical knowledge from large and complex biomedical datasets. The Center integrates teams of biomedical and data scientists focused on the refinement of existing and the development of new constraint-based and Bayesian algorithms based on causal Bayesian networks, the optimization of software for efficient operation in a supercomputing environment, and the testing of algorithms and software developed using real data from 3 representative driving biomedical projects: cancer driver mutations, lung disease, and the functional connectome of the human brain. Associated training activities provide both biomedical and data scientists with the knowledge and skills needed to apply and extend these tools. Collaborative activities with the BD2K Consortium further advance causal discovery tools and integrate tools and resources developed by other centers.

  9. Performance Analysis and Scaling Behavior of the Terrestrial Systems Modeling Platform TerrSysMP in Large-Scale Supercomputing Environments

    Science.gov (United States)

    Kollet, S. J.; Goergen, K.; Gasper, F.; Shresta, P.; Sulis, M.; Rihani, J.; Simmer, C.; Vereecken, H.

    2013-12-01

    In studies of the terrestrial hydrologic, energy and biogeochemical cycles, integrated multi-physics simulation platforms take a central role in characterizing non-linear interactions, variances and uncertainties of system states and fluxes in reciprocity with observations. Recently developed integrated simulation platforms attempt to honor the complexity of the terrestrial system across multiple time and space scales, from the deeper subsurface, including groundwater dynamics, into the atmosphere. Technically, this requires the coupling of atmospheric, land surface, and subsurface-surface flow models in supercomputing environments, while ensuring a high degree of efficiency in the utilization of, e.g., standard Linux clusters and massively parallel resources. A systematic performance analysis, including profiling and tracing, is crucial in such an application for understanding the runtime behavior, identifying optimum model settings, and efficiently pinpointing potential parallel deficiencies. On sophisticated leadership-class supercomputers, such as the 28-rack, 5.9-petaFLOP IBM Blue Gene/Q 'JUQUEEN' of the Jülich Supercomputing Centre (JSC), this is a challenging task, but an even more important one when complex coupled component models are to be analysed. Here we present our experience with coupling, application tuning (e.g., a 5-times speedup through compiler optimizations), parallel scaling and performance monitoring of the parallel Terrestrial Systems Modeling Platform TerrSysMP. The modeling platform consists of the weather prediction system COSMO of the German Weather Service; the Community Land Model, CLM, of NCAR; and the variably saturated surface-subsurface flow code ParFlow. The model system relies on the Multiple Program Multiple Data (MPMD) execution model, in which the external Ocean-Atmosphere-Sea-Ice-Soil coupler (OASIS3) links the component models. TerrSysMP has been instrumented with the performance analysis tool Scalasca and analyzed

  10. Historia del Estadio San Marcos de la Universidad Nacional Mayor de San Marcos

    OpenAIRE

    Meza Bazán, Mario Miguel

    2009-01-01

    The history of the San Marcos Stadium of the San Marcos University of Lima serves as a pretext for studying the university's precarious relations with the State in contexts of authoritarianism and clientelism. The stadium, initially planned to be the largest sports arena in Peru, became, with its donation to the University by the dictator Manuel Odría, the site of its future university campus. We note that this donation was made when the stadium had not yet been completed, because it had proved too costly ...

  11. A Study of the San Andreas Slip Rate on the San Francisco Peninsula, California

    Science.gov (United States)

    Feigelson, L. M.; Prentice, C.; Grove, K.; Caskey, J.; Ritz, J. F.; Leslie, S.

    2008-12-01

    The most recent large earthquake on the San Andreas Fault (SAF) along the San Francisco Peninsula was the great San Francisco earthquake of April 18, 1906, when a Mw= 7.8 event ruptured 435-470 km of the northern SAF. The slip rate for this segment of the SAF is incompletely known but is important for clarifying seismic hazard in this highly urbanized region. A previous study south of our site has found an average slip rate of 17±4 mm/yr for the late Holocene on the San Francisco Peninsula segment of the SAF. North of the Golden Gate, the SAF joins the San Gregorio Fault with an estimated slip rate of 6 mm/yr. A trench study north of where the two faults join has produced an average late Holocene slip rate of 24±3 mm/yr. To refine slip-rate estimates for the peninsula segment of the SAF, we excavated a trench across the fault where we located an abandoned channel between the San Andreas and Lower Crystal Springs reservoirs. This abandoned channel marks the time when a new channel cut across the SAF; the new channel has since been offset in a right-lateral sense about 20 m. The measured amount of offset and the age of the youngest fluvial sediments in the abandoned channel will yield a slip rate for the San Francisco Peninsula segment of the SAF. We excavated a trench across the abandoned channel and logged the exposed sediments. Our investigation revealed channel-fill alluvium incised and filled by probable debris flow sediments, and a wide fault zone in bedrock, west of the channel deposits. The most prominent fault is probably the strand that moved in 1906. We completed a total-station survey to more precisely measure the offset stream, and to confirm that the fault exposed in the trench aligns with a fence that is known to have been offset 2.8m during the 1906 earthquake. We interpret the debris flow sediments to represent the last phase of deposition prior to abandonment of the old channel. We collected samples for radiocarbon dating, optically stimulated

  12. EX1103: Exploration and Mapping, Galapagos Spreading Center: Mapping, CTD, Tow-Yo, and ROV on NOAA Ship Okeanos Explorer between 20110608 and 20110728

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This cruise will be composed of two separate legs. The first leg will be a transit from San Diego, CA to the Galapagos Spreading Center, where multibeam mapping, CTD...

  13. EX1103: Exploration and Mapping, Galapagos Spreading Center: Mapping, CTD, Tow-Yo, and ROV on NOAA Ship Okeanos Explorer (EM302)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This cruise will be composed of two separate legs. The first leg will be a transit from San Diego, CA to the Galapagos Spreading Center, where multibeam mapping, CTD...

  14. Use of QUADRICS supercomputer as embedded simulator in emergency management systems; Utilizzo del calcolatore QUADRICS come simulatore in linea in un sistema di gestione delle emergenze

    Energy Technology Data Exchange (ETDEWEB)

    Bove, R.; Di Costanzo, G.; Ziparo, A. [ENEA, Centro Ricerche Casaccia, Rome (Italy). Dip. Energia]

    1996-07-01

    The experience gained in implementing MRBT, an atmospheric dispersion model for short-duration releases, on a QUADRICS-Q1 supercomputer is reported. A description of the MRBT model is given first: it is an analytical model for studying the spreading of light gases released into the atmosphere by accidental releases. The solution of the diffusion equation is Gaussian-like and yields the concentration of the released pollutant as a function of space and time. The QUADRICS architecture is then introduced and the implementation of the model is described. Finally, the integration of the QUADRICS-based model as an on-line simulator in an emergency management system is considered.
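
    The record does not give the MRBT equations. For orientation only, a generic Gaussian puff solution of the advection-diffusion equation for an instantaneous point release of mass Q at effective height H (not necessarily the exact MRBT formulation) is

    C(x,y,z,t) = \frac{Q}{(2\pi)^{3/2}\,\sigma_x\sigma_y\sigma_z}
                 \exp\!\left(-\frac{(x-ut)^{2}}{2\sigma_x^{2}}-\frac{y^{2}}{2\sigma_y^{2}}\right)
                 \left[\exp\!\left(-\frac{(z-H)^{2}}{2\sigma_z^{2}}\right)+\exp\!\left(-\frac{(z+H)^{2}}{2\sigma_z^{2}}\right)\right],

    where u is the mean wind speed along x, the sigmas are travel-time-dependent dispersion coefficients, and the image term in (z+H) accounts for reflection at the ground.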

  15. A Reliability Calculation Method for Web Service Composition Using Fuzzy Reasoning Colored Petri Nets and Its Application on Supercomputing Cloud Platform

    Directory of Open Access Journals (Sweden)

    Ziyun Deng

    2016-09-01

    In order to develop a Supercomputing Cloud Platform (SCP) prototype system using Service-Oriented Architecture (SOA) and Petri nets, we researched several technologies for Web service composition. Specifically, in this paper, we propose a reliability calculation method for Web service compositions, which uses a Fuzzy Reasoning Colored Petri Net (FRCPN) to verify the Web service compositions. We put forward a definition of semantic threshold similarity for Web services and a formal definition of the FRCPN. We analyzed five kinds of production rules in the FRCPN, and applied our method to the SCP prototype. We obtained the reliability value of the end Web service as an indicator of the overall reliability of the FRCPN. The method can test the activity of the FRCPN. Experimental results show that the reliability of the Web service composition is correlated with the number of Web services and the range of reliability transition values.
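
    The FRCPN production rules are not reproduced in the abstract. The sketch below only illustrates the simpler underlying idea of propagating per-service reliabilities through a composition built from sequence, parallel (AND), and probabilistic-choice (XOR) blocks; the service names and reliability values are invented.

# Series-parallel reliability propagation for a Web-service composition (toy model,
# not the paper's fuzzy reasoning colored Petri net).
def composite_reliability(node):
    kind = node[0]
    if kind == "svc":                       # ("svc", name, reliability)
        return node[2]
    children = [composite_reliability(c) for c in node[2]]
    if kind in ("seq", "and"):              # all parts must succeed
        r = 1.0
        for c in children:
            r *= c
        return r
    if kind == "xor":                       # ("xor", branch_probabilities, children)
        return sum(p * c for p, c in zip(node[1], children))
    raise ValueError(f"unknown node kind: {kind}")

workflow = ("seq", None, [
    ("svc", "authenticate", 0.999),
    ("xor", (0.7, 0.3), [
        ("svc", "fast_solver", 0.95),
        ("svc", "safe_solver", 0.99),
    ]),
    ("and", None, [
        ("svc", "store_result", 0.98),
        ("svc", "notify_user", 0.97),
    ]),
])
print("end-to-end reliability: %.4f" % composite_reliability(workflow))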

  16. Enabling Loosely-Coupled Serial Job Execution on the IBM BlueGene/P Supercomputer and the SiCortex SC5832

    CERN Document Server

    Raicu, Ioan; Wilde, Mike; Foster, Ian

    2008-01-01

    Our work addresses enabling the execution, on large-scale systems, of highly parallel computations composed of loosely coupled serial jobs, with no modifications to the respective applications. This approach allows new, and potentially far larger, classes of application to leverage systems such as the IBM Blue Gene/P supercomputer and similar emerging petascale architectures. We present here the challenges of I/O performance encountered in making this model practical, and show results using both micro-benchmarks and real applications on two large-scale systems, the BG/P and the SiCortex SC5832. Our preliminary benchmarks show that we can scale to 4096 processors on the Blue Gene/P and 5832 processors on the SiCortex with high efficiency, and can achieve sustained execution rates of thousands of tasks per second for parallel workloads of ordinary serial applications. We measured applications from two domains, economic energy modeling and molecular dynamics.
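
    As a toy, single-node analogue of this loosely coupled many-task model (the paper dispatches unmodified serial binaries across thousands of Blue Gene/P and SiCortex processors), the sketch below uses a local process pool to run many independent serial commands and reports a sustained task rate; the command and pool size are placeholders.

# Many-task dispatch of ordinary serial programs, scaled down to one node.
import subprocess, time
from concurrent.futures import ProcessPoolExecutor, as_completed

def run_task(task_id):
    # Each "task" is an unmodified serial executable; `hostname` is a stand-in.
    out = subprocess.run(["hostname"], capture_output=True, text=True)
    return task_id, out.stdout.strip()

if __name__ == "__main__":
    start = time.time()
    with ProcessPoolExecutor(max_workers=8) as pool:
        futures = [pool.submit(run_task, i) for i in range(256)]
        done = sum(1 for _ in as_completed(futures))
    print(f"{done} tasks completed, {done / (time.time() - start):.0f} tasks/s sustained")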

  17. Geophysical evidence for wedging in the San Gorgonio Pass structural knot, southern San Andreas fault zone, southern California

    Science.gov (United States)

    Langenheim, V.E.; Jachens, R.C.; Matti, J.C.; Hauksson, E.; Morton, D.M.; Christensen, A.

    2005-01-01

    Geophysical data and surface geology define intertonguing thrust wedges that form the upper crust in the San Gorgonio Pass region. This picture serves as the basis for inferring past fault movements within the San Andreas system, which are fundamental to understanding the tectonic evolution of the San Gorgonio Pass region. Interpretation of gravity data indicates that sedimentary rocks have been thrust at least 5 km in the central part of San Gorgonio Pass beneath basement rocks of the southeast San Bernardino Mountains. Subtle, long-wavelength magnetic anomalies indicate that a magnetic body extends in the subsurface north of San Gorgonio Pass and south under Peninsular Ranges basement, and has a southern edge that is roughly parallel to, but 5-6 km south of, the surface trace of the Banning fault. This deep magnetic body is composed either of upper-plate rocks of San Gabriel Mountains basement or rocks of San Bernardino Mountains basement or both. We suggest that transpression across the San Gorgonio Pass region drove a wedge of Peninsular Ranges basement and its overlying sedimentary cover northward into the San Bernardino Mountains during the Neogene, offsetting the Banning fault at shallow depth. Average rates of convergence implied by this offset are broadly consistent with estimates of convergence from other geologic and geodetic data. Seismicity suggests a deeper detachment surface beneath the deep magnetic body. This interpretation suggests that the fault mapped at the surface evolved not only in map but also in cross-sectional view. Given the multilayered nature of deformation, it is unlikely that the San Andreas fault will rupture cleanly through the complex structures in San Gorgonio Pass. ?? 2005 Geological Society of America.

  18. The GLOBE-Consortium: The Erasmus Computing Grid – Building a Super-Computer at Erasmus MC for FREE

    NARCIS (Netherlands)

    T.A. Knoch (Tobias)

    2005-01-01

    To meet the enormous computational needs of life-science research as well as clinical diagnostics and treatment, the Hogeschool Rotterdam and the Erasmus Medical Center are currently setting up one of the largest desktop computing grids in the world – the Erasmus Computing Grid.

  20. Asymmetric motion along the San Francisco Bay Area faults. Implication for the magnitude of future seismic events

    Science.gov (United States)

    Houlie, N.; Romanowicz, B.

    2007-12-01

    The San Francisco Bay area is one of the most tectonically deformed areas in the world. This deformation is the result of the relative motion of the Pacific and North America plates. A large part of the strain (75%) is accommodated along structures lying in a 50-km-wide strip of land. At least two major seismic events (Mw>6.5) are expected along the San Andreas (SAF) and Hayward (HAY) faults within the next decades. Triggering effects between the two seismic events cannot be excluded. The BARD network is a permanent GPS network comprising 40 GPS sites, installed since 1994 in Northern California. Originally started as a collaborative effort of different Bay Area institutions, since the establishment of the Plate Boundary Observatory it has focused on real-time data acquisition from stations operated by UC Berkeley, with plans for expansion in collaboration with USGS/Menlo Park. The BARD network streams data to the Berkeley Seismological Laboratory in real time (sampling rates of 1 s and 15 s, depending on the site). All sites transmit data using Frame Relay technology, which makes them more robust in the event of an earthquake. Data are archived at the Northern California Earthquake Data Center (NCEDC, http://www.ncedc.org) and are freely available. The BARD network is currently able to provide high-accuracy positions in near real time. Motion across the San Andreas fault may be asymmetric; therefore, the common assumption that the deformation is symmetric across the fault could lead to a biased location of the region of maximum strain in the San Francisco Bay Area. The new location of the maximum static strain based on asymmetry influences estimates of the response of the Hayward Fault to deformation associated with the San Andreas fault. We also present preliminary velocities for PBO sites located in the San Francisco Bay Area and discuss them in the light of a BARD reference frame.