WorldWideScience

Sample records for supercomputer center san

  1. Supercomputing Centers and Electricity Service Providers

    DEFF Research Database (Denmark)

    Patki, Tapasya; Bates, Natalie; Ghatikar, Girish

    2016-01-01

Supercomputing Centers (SCs) have high and variable power demands, which increase the challenges of the Electricity Service Providers (ESPs) with regards to efficient electricity distribution and reliable grid operation. High penetration of renewable energy generation further exacerbates this problem. In order to develop a symbiotic relationship between the SCs and their ESPs and to support effective power management at all levels, it is critical to understand and analyze how the existing relationships were formed and how these are expected to evolve. In this paper, we first present results from a detailed, quantitative survey-based analysis and compare the perspectives of the European grid and SCs to the ones of the United States (US). We then show that contrary to the expectation, SCs in the US are more open toward cooperating and developing demand-management strategies with their ESPs.

  2. Dust modelling and forecasting in the Barcelona Supercomputing Center: Activities and developments

    Energy Technology Data Exchange (ETDEWEB)

Perez, C; Baldasano, J M; Jimenez-Guerrero, P; Jorba, O; Haustein, K; Basart, S [Earth Sciences Department. Barcelona Supercomputing Center. Barcelona (Spain); Cuevas, E [Izaña Atmospheric Research Center. Agencia Estatal de Meteorologia, Tenerife (Spain); Nickovic, S [Atmospheric Research and Environment Branch, World Meteorological Organization, Geneva (Switzerland)], E-mail: carlos.perez@bsc.es

    2009-03-01

    The Barcelona Supercomputing Center (BSC) is the National Supercomputer Facility in Spain, hosting MareNostrum, one of the most powerful Supercomputers in Europe. The Earth Sciences Department of BSC operates daily regional dust and air quality forecasts and conducts intensive modelling research for short-term operational prediction. This contribution summarizes the latest developments and current activities in the field of sand and dust storm modelling and forecasting.

  3. Dust modelling and forecasting in the Barcelona Supercomputing Center: Activities and developments

    International Nuclear Information System (INIS)

    Perez, C; Baldasano, J M; Jimenez-Guerrero, P; Jorba, O; Haustein, K; Basart, S; Cuevas, E; Nickovic, S

    2009-01-01

    The Barcelona Supercomputing Center (BSC) is the National Supercomputer Facility in Spain, hosting MareNostrum, one of the most powerful Supercomputers in Europe. The Earth Sciences Department of BSC operates daily regional dust and air quality forecasts and conducts intensive modelling research for short-term operational prediction. This contribution summarizes the latest developments and current activities in the field of sand and dust storm modelling and forecasting.

  4. Computational fluid dynamics research at the United Technologies Research Center requiring supercomputers

    Science.gov (United States)

    Landgrebe, Anton J.

    1987-01-01

    An overview of research activities at the United Technologies Research Center (UTRC) in the area of Computational Fluid Dynamics (CFD) is presented. The requirement and use of various levels of computers, including supercomputers, for the CFD activities is described. Examples of CFD directed toward applications to helicopters, turbomachinery, heat exchangers, and the National Aerospace Plane are included. Helicopter rotor codes for the prediction of rotor and fuselage flow fields and airloads were developed with emphasis on rotor wake modeling. Airflow and airload predictions and comparisons with experimental data are presented. Examples are presented of recent parabolized Navier-Stokes and full Navier-Stokes solutions for hypersonic shock-wave/boundary layer interaction, and hydrogen/air supersonic combustion. In addition, other examples of CFD efforts in turbomachinery Navier-Stokes methodology and separated flow modeling are presented. A brief discussion of the 3-tier scientific computing environment is also presented, in which the researcher has access to workstations, mid-size computers, and supercomputers.

  5. Assessment techniques for a learning-centered curriculum: evaluation design for adventures in supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Helland, B. [Ames Lab., IA (United States); Summers, B.G. [Oak Ridge National Lab., TN (United States)

    1996-09-01

As the classroom paradigm shifts from being teacher-centered to being learner-centered, student assessments are evolving from typical paper and pencil testing to other methods of evaluation. Students should be probed for understanding, reasoning, and critical thinking abilities rather than their ability to return memorized facts. The assessment of the Department of Energy's pilot program, Adventures in Supercomputing (AiS), offers one example of assessment techniques developed for learner-centered curricula. This assessment has employed a variety of methods to collect student data. Methods of assessment used were traditional testing, performance testing, interviews, short questionnaires via email, and student presentations of projects. The data obtained from these sources have been analyzed by a professional assessment team at the Center for Children and Technology. The results have been used to improve the AiS curriculum and establish the quality of the overall AiS program. This paper will discuss the various methods of assessment used and the results.

  6. KAUST Supercomputing Laboratory

    KAUST Repository

    Bailey, April Renee; Kaushik, Dinesh; Winfer, Andrew

    2011-01-01

KAUST has partnered with IBM to establish a Supercomputing Research Center. KAUST is hosting the Shaheen supercomputer, named after the Arabian falcon famed for its swiftness of flight. This 16-rack IBM Blue Gene/P system is equipped with 4 gigabytes of memory per node and is capable of 222 teraflops, making the KAUST campus the site of one of the world’s fastest supercomputers in an academic environment. KAUST is targeting petaflop capability within 3 years.

  7. KAUST Supercomputing Laboratory

    KAUST Repository

    Bailey, April Renee

    2011-11-15

KAUST has partnered with IBM to establish a Supercomputing Research Center. KAUST is hosting the Shaheen supercomputer, named after the Arabian falcon famed for its swiftness of flight. This 16-rack IBM Blue Gene/P system is equipped with 4 gigabytes of memory per node and is capable of 222 teraflops, making the KAUST campus the site of one of the world’s fastest supercomputers in an academic environment. KAUST is targeting petaflop capability within 3 years.

  8. Visualization at supercomputing centers: the tale of little big iron and the three skinny guys.

    Science.gov (United States)

    Bethel, E W; van Rosendale, J; Southard, D; Gaither, K; Childs, H; Brugger, E; Ahern, S

    2011-01-01

Supercomputing centers are unique resources that aim to enable scientific knowledge discovery by employing large computational resources, the "Big Iron." Design, acquisition, installation, and management of the Big Iron are carefully planned and monitored. Because these Big Iron systems produce a tsunami of data, it's natural to colocate the visualization and analysis infrastructure. This infrastructure consists of hardware (Little Iron) and staff (Skinny Guys). Our collective experience suggests that design, acquisition, installation, and management of the Little Iron and Skinny Guys doesn't receive the same level of treatment as that of the Big Iron. This article explores the following questions about the Little Iron: How should we size the Little Iron to adequately support visualization and analysis of data coming off the Big Iron? What sort of capabilities must it have? Related questions concern the size of visualization support staff: How big should a visualization program be, that is, how many Skinny Guys should it have? What should the staff do? How much of the visualization should be provided as a support service, and how much should applications scientists be expected to do on their own?

  9. Research center Juelich to install Germany's most powerful supercomputer new IBM System for science and research will achieve 5.8 trillion computations per second

    CERN Multimedia

    2002-01-01

    "The Research Center Juelich, Germany, and IBM today announced that they have signed a contract for the delivery and installation of a new IBM supercomputer at the Central Institute for Applied Mathematics" (1/2 page).

  10. San Joaquin Valley Aerosol Health Effects Research Center (SAHERC)

    Data.gov (United States)

    Federal Laboratory Consortium — At the San Joaquin Valley Aerosol Health Effects Center, located at the University of California-Davis, researchers will investigate the properties of particles that...

  11. Supercomputational science

    CERN Document Server

    Wilson, S

    1990-01-01

In contemporary research, the supercomputer now ranks, along with radio telescopes, particle accelerators and the other apparatus of "big science", as an expensive resource, which is nevertheless essential for state of the art research. Supercomputers are usually provided as shared central facilities. However, unlike telescopes and accelerators, they find a wide range of applications which extends across a broad spectrum of research activity. The difference in performance between a "good" and a "bad" computer program on a traditional serial computer may be a factor of two or three, but on a contemporary supercomputer it can easily be a factor of one hundred or even more! Furthermore, this factor is likely to increase with future generations of machines. In keeping with the large capital and recurrent costs of these machines, it is appropriate to devote effort to training and familiarization so that supercomputers are employed to best effect. This volume records the lectures delivered at a Summer School ...

  12. An assessment of worldwide supercomputer usage

    Energy Technology Data Exchange (ETDEWEB)

    Wasserman, H.J.; Simmons, M.L.; Hayes, A.H.

    1995-01-01

    This report provides a comparative study of advanced supercomputing usage in Japan and the United States as of Spring 1994. It is based on the findings of a group of US scientists whose careers have centered on programming, evaluating, and designing high-performance supercomputers for over ten years. The report is a follow-on to an assessment of supercomputing technology in Europe and Japan that was published in 1993. Whereas the previous study focused on supercomputer manufacturing capabilities, the primary focus of the current work was to compare where and how supercomputers are used. Research for this report was conducted through both literature studies and field research in Japan.

  13. Computational Dimensionalities of Global Supercomputing

    Directory of Open Access Journals (Sweden)

    Richard S. Segall

    2013-12-01

Acxiom Laboratory of Applied Research (ALAR), University of Central Arkansas (UCA), Conway, AR, April 9, 2010. [78.] Segall, Richard S., Zhang, Qingyu and Pierce, Ryan M. (2009), "Visualization by Supercomputing Data Mining", Proceedings of the 4th INFORMS Workshop on Data Mining and System Informatics, San Diego, CA, October 10, 2009. [79.] Segall, Richard S., Zhang, Qingyu, and Pierce, Ryan (2010), "Data Mining Supercomputing with SAS™ JMP® Genomics", Proceedings of the 14th World Multi-Conference on Systemics, Cybernetics and Informatics: WMSCI 2010, Orlando, FL, June 29-July 2, 2010. [80.] Segall, Richard S., Zhang, Qingyu, and Pierce, Ryan (2010), "Data Mining Supercomputing with SAS™ JMP® Genomics", Journal of Systemics, Cybernetics and Informatics (JSCI), Vol. 9, No. 1, 2011, pp. 28-33. [81.] Segall, R.S., Zhang, Q., and Pierce, R.M. (2009), "Visualization by Supercomputing Data Mining", Proceedings of the 4th INFORMS Workshop on Data Mining and System Informatics, San Diego, CA, October 10, 2009.

  14. 33 CFR 165.1121 - Security Zone: Fleet Supply Center Industrial Pier, San Diego, CA.

    Science.gov (United States)

    2010-07-01

33 CFR 165.1121 (Navigation and Navigable Waters, Coast Guard District, revised as of 2010-07-01): Security Zone: Fleet Supply Center Industrial Pier, San Diego, CA.

  15. What is supercomputing ?

    International Nuclear Information System (INIS)

    Asai, Kiyoshi

    1992-01-01

Supercomputing means high-speed computation using a supercomputer. Supercomputers and the technical term ''supercomputing'' have spread over the past ten years. The performances of the main computers installed so far in Japan Atomic Energy Research Institute are compared. There are two methods to increase computing speed using existing circuit elements: the parallel processor system and the vector processor system. CRAY-1 was the first successful vector computer. Supercomputing technology was first applied to meteorological organizations in foreign countries, and to aviation and atomic energy research institutes in Japan. Supercomputing for atomic energy depends on the trend of technical development in atomic energy, and its contents are divided into increasing the computing speed of existing simulation calculations and accelerating the new technical development of atomic energy. Examples of supercomputing in Japan Atomic Energy Research Institute are reported. (K.I.)
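The distinction drawn above between serial and vector processing can be illustrated with a small sketch. This example is not from the record; NumPy whole-array operations merely stand in for the single-instruction, multiple-data style that hardware vector units such as the CRAY-1 exploit.

```python
# Illustrative sketch only: contrasting an element-by-element scalar loop with
# a whole-array operation, in the spirit of vector processing. NumPy is used
# here as a software stand-in for hardware vector units.
import numpy as np

def scalar_add(a, b):
    # One element per "instruction" -- the serial style a vector machine avoids.
    out = np.empty_like(a)
    for i in range(len(a)):
        out[i] = a[i] + b[i]
    return out

def vector_add(a, b):
    # Whole-array operation, analogous to a single vector instruction stream.
    return a + b

if __name__ == "__main__":
    a = np.random.rand(1_000_000)
    b = np.random.rand(1_000_000)
    assert np.allclose(scalar_add(a, b), vector_add(a, b))
```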

  16. TOP500 Supercomputers for November 2003

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2003-11-16

22nd Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 22nd edition of the TOP500 list of the world's fastest supercomputers was released today (November 16, 2003). The Earth Simulator supercomputer retains the number one position with its Linpack benchmark performance of 35.86 Tflop/s (''teraflops'' or trillions of calculations per second). It was built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan.

  17. The San Diego Center for Patient Safety: Creating a Research, Education, and Community Consortium

    National Research Council Canada - National Science Library

    Pratt, Nancy; Vo, Kelly; Ganiats, Theodore G; Weinger, Matthew B

    2005-01-01

    In response to the Agency for Healthcare Research and Quality's Developmental Centers of Education and Research in Patient Safety grant program, a group of clinicians and academicians proposed the San...

  18. 76 FR 1521 - Security Zone: Fleet Industrial Supply Center Pier, San Diego, CA

    Science.gov (United States)

    2011-01-11

Security Zone: Fleet Industrial Supply Center Pier, San Diego, CA. AGENCY: Coast Guard, DHS. The existing security zone is around the former Fleet Industrial Supply Center Pier. The security zone encompasses all navigable waters within 100 feet of the former Fleet Industrial Supply Center...

  19. A training program for scientific supercomputing users

    Energy Technology Data Exchange (ETDEWEB)

    Hanson, F.; Moher, T.; Sabelli, N.; Solem, A.

    1988-01-01

    There is need for a mechanism to transfer supercomputing technology into the hands of scientists and engineers in such a way that they will acquire a foundation of knowledge that will permit integration of supercomputing as a tool in their research. Most computing center training emphasizes computer-specific information about how to use a particular computer system; most academic programs teach concepts to computer scientists. Only a few brief courses and new programs are designed for computational scientists. This paper describes an eleven-week training program aimed principally at graduate and postdoctoral students in computationally-intensive fields. The program is designed to balance the specificity of computing center courses, the abstractness of computer science courses, and the personal contact of traditional apprentice approaches. It is based on the experience of computer scientists and computational scientists, and consists of seminars and clinics given by many visiting and local faculty. It covers a variety of supercomputing concepts, issues, and practices related to architecture, operating systems, software design, numerical considerations, code optimization, graphics, communications, and networks. Its research component encourages understanding of scientific computing and supercomputer hardware issues. Flexibility in thinking about computing needs is emphasized by the use of several different supercomputer architectures, such as the Cray X/MP48 at the National Center for Supercomputing Applications at University of Illinois at Urbana-Champaign, IBM 3090 600E/VF at the Cornell National Supercomputer Facility, and Alliant FX/8 at the Advanced Computing Research Facility at Argonne National Laboratory. 11 refs., 6 tabs.

  20. Communications and Collaboration Keep San Francisco VA Medical Center Project on Track

    International Nuclear Information System (INIS)

    Federal Energy Management Program

    2001-01-01

This case study about energy savings performance contracts (ESPCs) presents an overview of how the Veterans Affairs Medical Center in San Francisco established an ESPC and the benefits derived from it. The Federal Energy Management Program instituted these special contracts to help federal agencies finance energy-saving projects at their facilities.

  1. Japanese supercomputer technology

    International Nuclear Information System (INIS)

    Buzbee, B.L.; Ewald, R.H.; Worlton, W.J.

    1982-01-01

    In February 1982, computer scientists from the Los Alamos National Laboratory and Lawrence Livermore National Laboratory visited several Japanese computer manufacturers. The purpose of these visits was to assess the state of the art of Japanese supercomputer technology and to advise Japanese computer vendors of the needs of the US Department of Energy (DOE) for more powerful supercomputers. The Japanese foresee a domestic need for large-scale computing capabilities for nuclear fusion, image analysis for the Earth Resources Satellite, meteorological forecast, electrical power system analysis (power flow, stability, optimization), structural and thermal analysis of satellites, and very large scale integrated circuit design and simulation. To meet this need, Japan has launched an ambitious program to advance supercomputer technology. This program is described

  2. TOP500 Supercomputers for June 2003

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2003-06-23

21st Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 21st edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2003). The Earth Simulator supercomputer built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan, with its Linpack benchmark performance of 35.86 Tflop/s (teraflops or trillions of calculations per second), retains the number one position. The number 2 position is held by the re-measured ASCI Q system at Los Alamos National Laboratory. With 13.88 Tflop/s, it is the second system ever to exceed the 10 Tflop/s mark. ASCI Q was built by Hewlett-Packard and is based on the AlphaServer SC computer system.

  3. TOP500 Supercomputers for June 2002

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2002-06-20

19th Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 19th edition of the TOP500 list of the world's fastest supercomputers was released today (June 20, 2002). The recently installed Earth Simulator supercomputer at the Earth Simulator Center in Yokohama, Japan, is as expected the clear new number 1. Its performance of 35.86 Tflop/s (trillions of calculations per second) running the Linpack benchmark is almost five times higher than the performance of the now No. 2 IBM ASCI White system at Lawrence Livermore National Laboratory (7.2 Tflop/s). This powerful leapfrogging to the top by a system so much faster than the previous top system is unparalleled in the history of the TOP500.

  4. Supercomputers to transform Science

    CERN Multimedia

    2006-01-01

    "New insights into the structure of space and time, climate modeling, and the design of novel drugs, are but a few of the many research areas that will be transforned by the installation of three supercomputers at the Unversity of Bristol." (1/2 page)

  5. Supercomputers Of The Future

    Science.gov (United States)

    Peterson, Victor L.; Kim, John; Holst, Terry L.; Deiwert, George S.; Cooper, David M.; Watson, Andrew B.; Bailey, F. Ron

    1992-01-01

Report evaluates supercomputer needs of five key disciplines: turbulence physics, aerodynamics, aerothermodynamics, chemistry, and mathematical modeling of human vision. Predicts these fields will require computer speed greater than 10^18 floating-point operations per second (FLOPS) and memory capacity greater than 10^15 words. Also, new parallel computer architectures and new structured numerical methods will make necessary speed and capacity available.

  6. Introduction to Reconfigurable Supercomputing

    CERN Document Server

    Lanzagorta, Marco; Rosenberg, Robert

    2010-01-01

This book covers technologies, applications, tools, languages, procedures, advantages, and disadvantages of reconfigurable supercomputing using Field Programmable Gate Arrays (FPGAs). The target audience is the community of users of High Performance Computers (HPC) who may benefit from porting their applications into a reconfigurable environment. As such, this book is intended to guide the HPC user through the many algorithmic considerations, hardware alternatives, usability issues, programming languages, and design tools that need to be understood before embarking on the creation of reconfigurable ...

  7. The new library building at the University of Texas Health Science Center at San Antonio.

    Science.gov (United States)

    Kronick, D A; Bowden, V M; Olivier, E R

    1985-04-01

    The new University of Texas Health Science Center at San Antonio Library opened in June 1983, replacing the 1968 library building. Planning a new library building provides an opportunity for the staff to rethink their philosophy of service. Of paramount concern and importance is the need to convey this philosophy to the architects. This paper describes the planning process and the building's external features, interior layouts, and accommodations for technology. Details of the move to the building are considered and various aspects of the building are reviewed.

  8. Enabling department-scale supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Greenberg, D.S.; Hart, W.E.; Phillips, C.A.

    1997-11-01

The Department of Energy (DOE) national laboratories have one of the longest and most consistent histories of supercomputer use. The authors summarize the architecture of DOE's new supercomputers that are being built for the Accelerated Strategic Computing Initiative (ASCI). The authors then argue that in the near future scaled-down versions of these supercomputers with petaflop-per-weekend capabilities could become widely available to hundreds of research and engineering departments. The availability of such computational resources will allow simulation of physical phenomena to become a full-fledged third branch of scientific exploration, along with theory and experimentation. They describe the ASCI and other supercomputer applications at Sandia National Laboratories, and discuss which lessons learned from Sandia's long history of supercomputing can be applied in this new setting.

  9. 33 CFR 334.1170 - San Pablo Bay, Calif.; gunnery range, Naval Inshore Operations Training Center, Mare Island...

    Science.gov (United States)

    2010-07-01

San Pablo Bay, Calif.; gunnery range, Naval Inshore Operations Training Center, Mare Island, Vallejo (33 CFR 334.1170). (a) The Danger Zone. A sector in San Pablo Bay delineated..., Vallejo, California, will conduct gunnery practice in the area during the period April 1 through September...

  10. Ultrascalable petaflop parallel supercomputer

    Science.gov (United States)

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Chiu, George [Cross River, NY; Cipolla, Thomas M [Katonah, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Hall, Shawn [Pleasantville, NY; Haring, Rudolf A [Cortlandt Manor, NY; Heidelberger, Philip [Cortlandt Manor, NY; Kopcsay, Gerard V [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Salapura, Valentina [Chappaqua, NY; Sugavanam, Krishnan [Mahopac, NY; Takken, Todd [Brewster, NY

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.
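As a rough illustration of the programming-level operations that the torus, collective, and global-barrier networks described above are meant to accelerate, the following mpi4py sketch exercises a nearest-neighbour exchange, a collective reduction, and a global barrier. It assumes mpi4py and an MPI runtime are available; it is not the Blue Gene-specific API.

```python
# Minimal mpi4py sketch (assumption: mpi4py and an MPI runtime are installed).
# It exercises point-to-point traffic, a collective reduction, and a global
# barrier -- the kinds of operations the networks described above carry.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Nearest-neighbour exchange, the kind of traffic a torus network carries.
right = (rank + 1) % size
left = (rank - 1) % size
token = comm.sendrecv(rank, dest=right, source=left)

# Collective reduction over all ranks (collective network).
total = comm.allreduce(rank, op=MPI.SUM)

# Global synchronization point (global barrier/notification network).
comm.Barrier()

if rank == 0:
    print(f"{size} ranks, sum of ranks = {total}, token from left = {token}")
```

Such a script would typically be launched with something like `mpiexec -n 4 python ring.py` (the script name is hypothetical).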

  11. FEASIBILITY STUDY OF ESTABLISHING AN ARTIFICIAL INSEMINATION (AI CENTER FOR CARABAOS IN SAN ILDEFONSO, BULACAN, PHILIPPINES

    Directory of Open Access Journals (Sweden)

    F.Q. Arrienda II

    2014-10-01

Full Text Available The productivity of the carabao subsector is influenced by several constraints such as social, technical, economic and policy factors. The need to enhance the local production of carabaos will help local farmers to increase their income. Thus, producing thorough breeds of carabaos and improving them genetically is the best response to these constraints. This study was conducted to present the feasibility study of establishing an Artificial Insemination (AI) Center and its planned area of operation in Brgy. San Juan, San Ildefonso, Bulacan. The market, production, organizational and financial viability of operating the business would also be evaluated. This particular study will provide insights in establishing an AI Center. Included in this study is the identification of anticipated problems that could affect the business and recommendation of specific courses of action to counteract these possible problems. Primary data were obtained through interviews with key informants from the Philippine Carabao Center (PCC). To gain insights about the present status of an AI Center, interviews with the technicians of PCC and a private farm were done to get additional information. Secondary data were acquired from various literature and from the San Ildefonso Municipal Office. The proposed area would be 1,500 square meters that would be allotted for the laboratory and bullpen. The AI Center will operate six days a week and will be open from 8 AM until 5 PM. However, customers or farmers can call the technicians beyond the office hours in case of emergency. The total initial investment of Php 3,825,417.39 is needed in establishing the AI Center. The whole amount will be sourced from the owner’s equity. Financial projection showed an IRR of 30% with a computed NPV of Php 2,415,597.00 and a payback period of 3.97 years. Based on all the market, technical, organizational, financial factors, projections and data analysis, it is said that this business endeavor is viable and feasible.
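For readers unfamiliar with the financial measures quoted above (NPV, IRR, payback period), the sketch below shows how such figures are derived from a cash-flow series. The annual cash flows and discount rate are hypothetical placeholders, not the study's projections; only the initial investment figure is taken from the record, so the outputs will not reproduce the study's exact results.

```python
# Hedged sketch: computing NPV, IRR and payback period from a cash-flow series.
# The annual cash flows and discount rate below are hypothetical placeholders;
# only the initial investment is taken from the record.
initial_investment = 3_825_417.39          # Php, from the record
cash_flows = [1_200_000.0] * 10            # hypothetical equal annual inflows
discount_rate = 0.20                       # hypothetical required rate of return

def npv(rate, outlay, flows):
    return -outlay + sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(flows))

def irr(outlay, flows, lo=0.0, hi=1.0, tol=1e-7):
    # Bisection on the rate at which NPV crosses zero.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, outlay, flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def payback_years(outlay, flows):
    remaining = outlay
    for year, cf in enumerate(flows, start=1):
        if cf >= remaining:
            return year - 1 + remaining / cf
        remaining -= cf
    return None

print(npv(discount_rate, initial_investment, cash_flows))
print(irr(initial_investment, cash_flows))
print(payback_years(initial_investment, cash_flows))
```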

  12. Supercomputer debugging workshop 1991 proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.

    1991-01-01

This report discusses the following topics on supercomputer debugging: distributed debugging; user interface to debugging tools and standards; debugging optimized codes; debugging parallel codes; and debugger performance and interface as analysis tools. (LSP)

  13. Supercomputer debugging workshop 1991 proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.

    1991-12-31

This report discusses the following topics on supercomputer debugging: distributed debugging; user interface to debugging tools and standards; debugging optimized codes; debugging parallel codes; and debugger performance and interface as analysis tools. (LSP)

  14. Applications of supercomputing and the utility industry: Calculation of power transfer capabilities

    International Nuclear Information System (INIS)

    Jensen, D.D.; Behling, S.R.; Betancourt, R.

    1990-01-01

    Numerical models and iterative simulation using supercomputers can furnish cost-effective answers to utility industry problems that are all but intractable using conventional computing equipment. An example of the use of supercomputers by the utility industry is the determination of power transfer capability limits for power transmission systems. This work has the goal of markedly reducing the run time of transient stability codes used to determine power distributions following major system disturbances. To date, run times of several hours on a conventional computer have been reduced to several minutes on state-of-the-art supercomputers, with further improvements anticipated to reduce run times to less than a minute. In spite of the potential advantages of supercomputers, few utilities have sufficient need for a dedicated in-house supercomputing capability. This problem is resolved using a supercomputer center serving a geographically distributed user base coupled via high speed communication networks

  15. The Erasmus Computing Grid - Building a Super-Computer Virtually for Free at the Erasmus Medical Center and the Hogeschool Rotterdam

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); L.V. de Zeeuw (Luc)

    2006-01-01

The Set-Up of the 20 Teraflop Erasmus Computing Grid: To meet the enormous computational needs of life-science research as well as clinical diagnostics and treatment, the Hogeschool Rotterdam and the Erasmus Medical Center are currently setting up one of the largest desktop

  16. The ETA10 supercomputer system

    International Nuclear Information System (INIS)

    Swanson, C.D.

    1987-01-01

The ETA Systems, Inc. ETA 10 is a next-generation supercomputer featuring multiprocessing, a large hierarchical memory system, high performance input/output, and network support for both batch and interactive processing. Advanced technology used in the ETA 10 includes liquid nitrogen cooled CMOS logic with 20,000 gates per chip, a single printed circuit board for each CPU, and high-density static and dynamic MOS memory chips. Software for the ETA 10 includes an underlying kernel that supports multiple user environments, a new ETA FORTRAN compiler with an advanced automatic vectorizer, a multitasking library and debugging tools. Possible developments for future supercomputers from ETA Systems are discussed. (orig.)

  17. World's fastest supercomputer opens up to users

    Science.gov (United States)

    Xin, Ling

    2016-08-01

    China's latest supercomputer - Sunway TaihuLight - has claimed the crown as the world's fastest computer according to the latest TOP500 list, released at the International Supercomputer Conference in Frankfurt in late June.

  18. Role of supercomputers in magnetic fusion and energy research programs

    International Nuclear Information System (INIS)

    Killeen, J.

    1985-06-01

    The importance of computer modeling in magnetic fusion (MFE) and energy research (ER) programs is discussed. The need for the most advanced supercomputers is described, and the role of the National Magnetic Fusion Energy Computer Center in meeting these needs is explained

  19. Supercomputing and related national projects in Japan

    International Nuclear Information System (INIS)

    Miura, Kenichi

    1985-01-01

    Japanese supercomputer development activities in the industry and research projects are outlined. Architecture, technology, software, and applications of Fujitsu's Vector Processor Systems are described as an example of Japanese supercomputers. Applications of supercomputers to high energy physics are also discussed. (orig.)

  20. Mistral Supercomputer Job History Analysis

    OpenAIRE

    Zasadziński, Michał; Muntés-Mulero, Victor; Solé, Marc; Ludwig, Thomas

    2018-01-01

    In this technical report, we show insights and results of operational data analysis from petascale supercomputer Mistral, which is ranked as 42nd most powerful in the world as of January 2018. Data sources include hardware monitoring data, job scheduler history, topology, and hardware information. We explore job state sequences, spatial distribution, and electric power patterns.

  1. Supercomputers and quantum field theory

    International Nuclear Information System (INIS)

    Creutz, M.

    1985-01-01

    A review is given of why recent simulations of lattice gauge theories have resulted in substantial demands from particle theorists for supercomputer time. These calculations have yielded first principle results on non-perturbative aspects of the strong interactions. An algorithm for simulating dynamical quark fields is discussed. 14 refs

  2. Computational plasma physics and supercomputers

    International Nuclear Information System (INIS)

    Killeen, J.; McNamara, B.

    1984-09-01

    The Supercomputers of the 80's are introduced. They are 10 to 100 times more powerful than today's machines. The range of physics modeling in the fusion program is outlined. New machine architecture will influence particular codes, but parallel processing poses new coding difficulties. Increasing realism in simulations will require better numerics and more elaborate mathematics

  3. The BirthPlace collaborative practice model: results from the San Diego Birth Center Study.

    Science.gov (United States)

    Swartz; Jackson; Lang; Ecker; Ganiats; Dickinson; Nguyen

    1998-07-01

    Objective: The search for quality, cost-effective health care programs in the United States is now a major focus in the era of health care reform. New programs need to be evaluated as alternatives are developed in the health care system. The BirthPlace program provides comprehensive perinatal services with certified nurse-midwives and obstetricians working together in an integrated collaborative practice serving a primarily low-income population. Low-risk women are delivered by nurse-midwives in a freestanding birth center (The BirthPlace), which is one component of a larger integrated health network. All others are delivered by team obstetricians at the affiliated tertiary hospital. Wellness, preventive measures, early intervention, and family involvement are emphasized. The San Diego Birth Center Study is a 4-year research project funded by the U.S. Federal Agency for Health Care Policy and Research (#R01-HS07161) to evaluate this program. The National Birth Center Study (NEJM, 1989; 321(26): 1801-11) described the advantages and safety of freestanding birth centers. However, a prospective cohort study with a concurrent comparison group of comparable risk had not been conducted on a collaborative practice-freestanding birth center model to address questions of safety, cost, and patient satisfaction.Methods: The specific aims of this study are to compare this collaborative practice model to the traditional model of perinatal health care (physician providers and hospital delivery). A prospective cohort study comparing these two health care models was conducted with a final expected sample size of approximately 2,000 birth center and 1,350 traditional care subjects. Women were recruited from both the birth center and traditional care programs (private physicians offices and hospital based clinics) at the beginning of prenatal care and followed through the end of the perinatal period. Prenatal, intrapartum, postpartum and infant morbidity and mortality are being

  4. A workbench for tera-flop supercomputing

    International Nuclear Information System (INIS)

    Resch, M.M.; Kuester, U.; Mueller, M.S.; Lang, U.

    2003-01-01

    Supercomputers currently reach a peak performance in the range of TFlop/s. With but one exception - the Japanese Earth Simulator - none of these systems has so far been able to also show a level of sustained performance for a variety of applications that comes close to the peak performance. Sustained TFlop/s are therefore rarely seen. The reasons are manifold and are well known: Bandwidth and latency both for main memory and for the internal network are the key internal technical problems. Cache hierarchies with large caches can bring relief but are no remedy to the problem. However, there are not only technical problems that inhibit the full exploitation by scientists of the potential of modern supercomputers. More and more organizational issues come to the forefront. This paper shows the approach of the High Performance Computing Center Stuttgart (HLRS) to deliver a sustained performance of TFlop/s for a wide range of applications from a large group of users spread over Germany. The core of the concept is the role of the data. Around this we design a simulation workbench that hides the complexity of interacting computers, networks and file systems from the user. (authors)

  5. The GF11 supercomputer

    International Nuclear Information System (INIS)

    Beetem, J.; Weingarten, D.

    1986-01-01

GF11 is a parallel computer currently under construction at the IBM Yorktown Research Center. The machine incorporates 576 floating-point processors arranged in a modified SIMD architecture. Each has space for 2 Mbytes of memory and is capable of 20 Mflops, giving the total machine a peak of 1.125 Gbytes of memory and 11.52 Gflops. The floating-point processors are interconnected by a dynamically reconfigurable non-blocking switching network. At each machine cycle any of 1024 pre-selected permutations of data can be realized among the processors. The main intended application of GF11 is a class of calculations arising from quantum chromodynamics
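The aggregate figures quoted for GF11 follow directly from the per-processor numbers; a quick arithmetic check (assuming 1 Gflops = 1000 Mflops and 1 Gbyte = 1024 Mbytes):

```python
# Quick arithmetic check of the GF11 aggregate figures quoted above.
processors = 576
mflops_per_processor = 20
mbytes_per_processor = 2

peak_gflops = processors * mflops_per_processor / 1000    # 11.52 Gflops
total_gbytes = processors * mbytes_per_processor / 1024   # 1.125 Gbytes (1 Gbyte = 1024 Mbytes)

print(peak_gflops, total_gbytes)
```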

  6. The GF11 supercomputer

    International Nuclear Information System (INIS)

    Beetem, J.; Denneau, M.; Weingarten, D.

    1985-01-01

GF11 is a parallel computer currently under construction at the IBM Yorktown Research Center. The machine incorporates 576 floating-point processors arranged in a modified SIMD architecture. Each has space for 2 Mbytes of memory and is capable of 20 Mflops, giving the total machine a peak of 1.125 Gbytes of memory and 11.52 Gflops. The floating-point processors are interconnected by a dynamically reconfigurable nonblocking switching network. At each machine cycle any of 1024 pre-selected permutations of data can be realized among the processors. The main intended application of GF11 is a class of calculations arising from quantum chromodynamics

  7. Automatic discovery of the communication network topology for building a supercomputer model

    Science.gov (United States)

    Sobolev, Sergey; Stefanov, Konstantin; Voevodin, Vadim

    2016-10-01

    The Research Computing Center of Lomonosov Moscow State University is developing the Octotron software suite for automatic monitoring and mitigation of emergency situations in supercomputers so as to maximize hardware reliability. The suite is based on a software model of the supercomputer. The model uses a graph to describe the computing system components and their interconnections. One of the most complex components of a supercomputer that needs to be included in the model is its communication network. This work describes the proposed approach for automatically discovering the Ethernet communication network topology in a supercomputer and its description in terms of the Octotron model. This suite automatically detects computing nodes and switches, collects information about them and identifies their interconnections. The application of this approach is demonstrated on the "Lomonosov" and "Lomonosov-2" supercomputers.
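As a loose illustration of the graph-based system description mentioned above, the sketch below models a tiny hypothetical cluster as vertices (compute nodes and switches) and edges (discovered links). The names and the rule being checked are illustrative assumptions, not Octotron's actual model format.

```python
# Hedged sketch of a graph description of a tiny, hypothetical cluster:
# vertices are compute nodes and Ethernet switches, edges are discovered links.
# Names and structure are illustrative only, not Octotron's format.
from collections import defaultdict

vertices = {
    "switch-1": {"type": "switch"},
    "node-001": {"type": "compute_node"},
    "node-002": {"type": "compute_node"},
}

edges = [
    ("node-001", "switch-1"),   # discovered Ethernet link
    ("node-002", "switch-1"),
]

adjacency = defaultdict(set)
for a, b in edges:
    adjacency[a].add(b)
    adjacency[b].add(a)

# A monitoring rule might then check, for example, that every compute node
# is connected to at least one switch.
for name, attrs in vertices.items():
    if attrs["type"] == "compute_node":
        assert any(vertices[n]["type"] == "switch" for n in adjacency[name]), name
```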

  8. NASA Advanced Supercomputing Facility Expansion

    Science.gov (United States)

    Thigpen, William W.

    2017-01-01

    The NASA Advanced Supercomputing (NAS) Division enables advances in high-end computing technologies and in modeling and simulation methods to tackle some of the toughest science and engineering challenges facing NASA today. The name "NAS" has long been associated with leadership and innovation throughout the high-end computing (HEC) community. We play a significant role in shaping HEC standards and paradigms, and provide leadership in the areas of large-scale InfiniBand fabrics, Lustre open-source filesystems, and hyperwall technologies. We provide an integrated high-end computing environment to accelerate NASA missions and make revolutionary advances in science. Pleiades, a petaflop-scale supercomputer, is used by scientists throughout the U.S. to support NASA missions, and is ranked among the most powerful systems in the world. One of our key focus areas is in modeling and simulation to support NASA's real-world engineering applications and make fundamental advances in modeling and simulation methods.

  9. ATLAS Software Installation on Supercomputers

    CERN Document Server

    Undrus, Alexander; The ATLAS collaboration

    2018-01-01

PowerPC and high performance computers (HPC) are important resources for computing in the ATLAS experiment. The future LHC data processing will require more resources than Grid computing, currently using approximately 100,000 cores at well over 100 sites, can provide. Supercomputers are extremely powerful as they use the resources of hundreds of thousands of CPUs joined together. However their architectures have different instruction sets. ATLAS binary software distributions for x86 chipsets do not fit these architectures, as emulation of these chipsets results in huge performance loss. This presentation describes the methodology of ATLAS software installation from source code on supercomputers. The installation procedure includes downloading the ATLAS code base as well as the source of about 50 external packages, such as ROOT and Geant4, followed by compilation, and rigorous unit and integration testing. The presentation reports the application of this procedure at Titan HPC and Summit PowerPC at Oak Ridge Computin...

  10. The GF11 supercomputer

    International Nuclear Information System (INIS)

    Beetem, J.; Denneau, M.; Weingarten, D.

    1985-01-01

    GF11 is a parallel computer currently under construction at the Yorktown Research Center. The machine incorporates 576 floating-point processors arranged in a modified SIMD architecture. Each processor has space for 2 Mbytes of memory and is capable of 20 MFLOPS, giving the total machine a peak of 1.125 Gbytes of memory and 11.52 GFLOPS. The floating-point processors are interconnected by a dynamically reconfigurable non-blocking switching network. At each machine cycle any of 1024 pre-selected permutations of data can be realized among the processors. The main intended application of GF11 is a class of calculations arising from quantum chromodynamics, a proposed theory of the elementary particles which participate in nuclear interactions

  11. Comprehensive efficiency analysis of supercomputer resource usage based on system monitoring data

    Science.gov (United States)

    Mamaeva, A. A.; Shaykhislamov, D. I.; Voevodin, Vad V.; Zhumatiy, S. A.

    2018-03-01

    One of the main problems of modern supercomputers is the low efficiency of their usage, which leads to the significant idle time of computational resources, and, in turn, to the decrease in speed of scientific research. This paper presents three approaches to study the efficiency of supercomputer resource usage based on monitoring data analysis. The first approach performs an analysis of computing resource utilization statistics, which allows to identify different typical classes of programs, to explore the structure of the supercomputer job flow and to track overall trends in the supercomputer behavior. The second approach is aimed specifically at analyzing off-the-shelf software packages and libraries installed on the supercomputer, since efficiency of their usage is becoming an increasingly important factor for the efficient functioning of the entire supercomputer. Within the third approach, abnormal jobs – jobs with abnormally inefficient behavior that differs significantly from the standard behavior of the overall supercomputer job flow – are being detected. For each approach, the results obtained in practice in the Supercomputer Center of Moscow State University are demonstrated.
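A minimal sketch of the third approach's idea, flagging jobs whose behavior deviates strongly from the overall job flow, is given below. The utilization numbers and the z-score threshold are illustrative assumptions; the paper's actual detection method may differ.

```python
# Hedged sketch: flagging "abnormal" jobs as those whose CPU utilization
# deviates strongly from the overall job flow. The threshold and the sample
# data are illustrative; the paper's actual method may differ.
import statistics

# job_id -> average CPU utilization (fraction), hypothetical monitoring data
jobs = {"j1": 0.82, "j2": 0.79, "j3": 0.85, "j4": 0.08, "j5": 0.81, "j6": 0.77}

mean = statistics.mean(jobs.values())
stdev = statistics.stdev(jobs.values())

abnormal = {jid: u for jid, u in jobs.items() if abs(u - mean) / stdev > 2.0}
print(abnormal)   # with this sample data, only "j4" is flagged
```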

  12. Integration of Panda Workload Management System with supercomputers

    Science.gov (United States)

    De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Novikov, A.; Oleynik, D.; Panitkin, S.; Poyda, A.; Read, K. F.; Ryabinkin, E.; Teslyuk, A.; Velikhov, V.; Wells, J. C.; Wenaus, T.

    2016-09-01

The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3+ petaFLOPS, next LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in United States, Europe and Russia (in particular with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF), the Supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads
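A minimal sketch of the light-weight MPI wrapper idea mentioned above: each MPI rank launches one single-threaded payload, so a single batch job can fill a multi-core worker node with independent tasks. The payload script and input naming scheme are hypothetical; this is not PanDA's actual pilot code.

```python
# Hedged sketch of a light-weight MPI wrapper: each rank runs one
# single-threaded payload so a single batch job fills a multi-core node.
# The payload command and input naming scheme are hypothetical.
import subprocess
import sys
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank picks its own input, e.g. input_000.dat, input_001.dat, ...
input_file = f"input_{rank:03d}.dat"
cmd = [sys.executable, "payload.py", input_file]   # hypothetical payload script

ret = subprocess.call(cmd)

# Gather exit codes on rank 0 so the wrapper can report overall success.
codes = comm.gather(ret, root=0)
if rank == 0:
    failed = [i for i, c in enumerate(codes) if c != 0]
    print("failed ranks:", failed)
```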

  13. Status of supercomputers in the US

    International Nuclear Information System (INIS)

    Fernbach, S.

    1985-01-01

Current supercomputers, that is, the Class VI machines which first became available in 1976, are being delivered in greater quantity than ever before. In addition, manufacturers are busily working on Class VII machines to be ready for delivery in CY 1987. Mainframes are being modified or designed to take on some features of the supercomputers, and new companies with the intent of either competing directly in the supercomputer arena or in providing entry-level systems from which to graduate to supercomputers are springing up everywhere. Even well founded organizations like IBM and CDC are adding machines with vector instructions to their repertoires. Japanese-manufactured supercomputers are also being introduced into the U.S. Will these begin to compete with those of U.S. manufacture? Are they truly competitive? It turns out that both from the hardware and software points of view they may be superior. We may be facing the same problems in supercomputers that we faced in video systems.

  14. TOP500 Supercomputers for June 2004

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2004-06-23

23rd Edition of TOP500 List of World's Fastest Supercomputers Released: Japan's Earth Simulator Enters Third Year in Top Position MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 23rd edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2004) at the International Supercomputer Conference in Heidelberg, Germany.

  15. Flux-Level Transit Injection Experiments with NASA Pleiades Supercomputer

    Science.gov (United States)

    Li, Jie; Burke, Christopher J.; Catanzarite, Joseph; Seader, Shawn; Haas, Michael R.; Batalha, Natalie; Henze, Christopher; Christiansen, Jessie; Kepler Project, NASA Advanced Supercomputing Division

    2016-06-01

    Flux-Level Transit Injection (FLTI) experiments are executed with NASA's Pleiades supercomputer for the Kepler Mission. The latest release (9.3, January 2016) of the Kepler Science Operations Center Pipeline is used in the FLTI experiments. Their purpose is to validate the Analytic Completeness Model (ACM), which can be computed for all Kepler target stars, thereby enabling exoplanet occurrence rate studies. Pleiades, a facility of NASA's Advanced Supercomputing Division, is one of the world's most powerful supercomputers and represents NASA's state-of-the-art technology. We discuss the details of implementing the FLTI experiments on the Pleiades supercomputer. For example, taking into account that ~16 injections are generated by one core of the Pleiades processors in an hour, the “shallow” FLTI experiment, in which ~2000 injections are required per target star, can be done for 16% of all Kepler target stars in about 200 hours. Stripping down the transit search to bare bones, i.e. only searching adjacent high/low periods at high/low pulse durations, makes the computationally intensive FLTI experiments affordable. The design of the FLTI experiments and the analysis of the resulting data are presented in “Validating an Analytic Completeness Model for Kepler Target Stars Based on Flux-level Transit Injection Experiments” by Catanzarite et al. (#2494058).Kepler was selected as the 10th mission of the Discovery Program. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.
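A back-of-envelope check of the throughput claim above, assuming roughly 200,000 Kepler target stars (an assumption not stated in the record): ~2000 injections per star at ~16 injections per core-hour is 125 core-hours per star, so covering 16% of the targets in about 200 wall-clock hours implies on the order of 20,000 cores.

```python
# Back-of-envelope check of the throughput figures quoted above. The total
# Kepler target count (~200,000) is an assumption, not stated in the record.
injections_per_core_hour = 16
injections_per_star = 2000
kepler_targets = 200_000            # assumed
fraction_of_targets = 0.16
wall_hours = 200

core_hours_per_star = injections_per_star / injections_per_core_hour   # 125
stars = kepler_targets * fraction_of_targets                           # 32,000
cores_needed = stars * core_hours_per_star / wall_hours                # 20,000

print(core_hours_per_star, stars, cores_needed)
```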

  16. Two-Bin Kanban: Ordering Impact at Navy Medical Center San Diego

    Science.gov (United States)

    2016-06-17

... Urology, and Oral Maxillofacial Surgery (OMFS) departments at NMCSD. The data is statistically significant in 2015 when compared to 2013. ... Procurement Cost and Procurement Efficiency Statistics ...

  17. Tryton Supercomputer Capabilities for Analysis of Massive Data Streams

    Directory of Open Access Journals (Sweden)

    Krawczyk Henryk

    2015-09-01

Full Text Available The recently deployed supercomputer Tryton, located in the Academic Computer Center of Gdansk University of Technology, provides great means for massive parallel processing. Moreover, the status of the Center as one of the main network nodes in the PIONIER network enables the fast and reliable transfer of data produced by miscellaneous devices scattered in the area of the whole country. The typical examples of such data are streams containing radio-telescope and satellite observations. Their analysis, especially with real-time constraints, can be challenging and requires the usage of dedicated software components. We propose a solution for such parallel analysis using the supercomputer, supervised by the KASKADA platform, which, in conjunction with immersive 3D visualization techniques, can be used to solve problems such as pulsar detection and chronometric or oil-spill simulation on the sea surface.

  18. Supercomputer applications in nuclear research

    International Nuclear Information System (INIS)

    Ishiguro, Misako

    1992-01-01

    The utilization of supercomputers in Japan Atomic Energy Research Institute is mainly reported. The fields of atomic energy research which use supercomputers frequently and the contents of their computation are outlined. What is vectorizing is simply explained, and nuclear fusion, nuclear reactor physics, the hydrothermal safety of nuclear reactors, the parallel property that the atomic energy computations of fluids and others have, the algorithm for vector treatment and the effect of speed increase by vectorizing are discussed. At present Japan Atomic Energy Research Institute uses two systems of FACOM VP 2600/10 and three systems of M-780. The contents of computation changed from criticality computation around 1970, through the analysis of LOCA after the TMI accident, to nuclear fusion research, the design of new type reactors and reactor safety assessment at present. Also the method of using computers advanced from batch processing to time sharing processing, from one-dimensional to three dimensional computation, from steady, linear to unsteady nonlinear computation, from experimental analysis to numerical simulation and so on. (K.I.)

  19. INTEL: Intel based systems move up in supercomputing ranks

    CERN Multimedia

    2002-01-01

    "The TOP500 supercomputer rankings released today at the Supercomputing 2002 conference show a dramatic increase in the number of Intel-based systems being deployed in high-performance computing (HPC) or supercomputing areas" (1/2 page).

  20. Tomographic Rayleigh wave group velocities in the Central Valley, California, centered on the Sacramento/San Joaquin Delta

    Science.gov (United States)

    Fletcher, Jon B.; Erdem, Jemile; Seats, Kevin; Lawrence, Jesse

    2016-04-01

    If shaking from a local or regional earthquake in the San Francisco Bay region were to rupture levees in the Sacramento/San Joaquin Delta, then brackish water from San Francisco Bay would contaminate the water in the Delta: the source of freshwater for about half of California. As a prelude to a full shear-wave velocity model that can be used in computer simulations and further seismic hazard analysis, we report on the use of ambient noise tomography to build a fundamental mode, Rayleigh wave group velocity model for the region around the Sacramento/San Joaquin Delta in the western Central Valley, California. Recordings from the vertical component of about 31 stations were processed to compute the spatial distribution of Rayleigh wave group velocities. Complex coherency between pairs of stations was stacked over 8 months to more than a year. Dispersion curves were determined from 4 to about 18 s. We calculated average group velocities for each period and inverted for deviations from the average for a matrix of cells that covered the study area. Smoothing using the first difference is applied. Cells of the model were about 5.6 km in either dimension. Checkerboard tests of resolution, which are dependent on station density, suggest that the resolving ability of the array is reasonably good within the middle of the array with resolution between 0.2 and 0.4°. Overall, low velocities in the middle of each image reflect the deeper sedimentary syncline in the Central Valley. In detail, the model shows several centers of low velocity that may be associated with gross geologic features such as faulting along the western margin of the Central Valley, oil and gas reservoirs, and large crosscutting features like the Stockton arch. At shorter periods around 5.5 s, the model's western boundary between low and high velocities closely follows regional fault geometry and the edge of a residual isostatic gravity low. In the eastern part of the valley, the boundaries of the low
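The inversion style described above, solving for cell-by-cell deviations from the average with first-difference smoothing, can be sketched as a damped least-squares problem. The toy example below uses a 1-D model and synthetic data; it is not the authors' tomography code.

```python
# Hedged sketch of the general inversion style described above: solve for
# model deviations m in G m = d while penalizing first differences between
# adjacent cells (smoothing). 1-D toy problem with synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_paths = 20, 60

# Random ray-path sampling matrix and a smooth "true" model.
G = rng.random((n_paths, n_cells))
m_true = np.sin(np.linspace(0, np.pi, n_cells))
d = G @ m_true + 0.01 * rng.standard_normal(n_paths)

# First-difference operator D: (D m)[i] = m[i+1] - m[i].
D = np.diff(np.eye(n_cells), axis=0)

# Damped least squares: minimize ||G m - d||^2 + lam^2 ||D m||^2.
lam = 1.0
A = np.vstack([G, lam * D])
b = np.concatenate([d, np.zeros(n_cells - 1)])
m_est, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.round(m_est - m_true, 2))
```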

  1. Tomographic Rayleigh-wave group velocities in the Central Valley, California centered on the Sacramento/San Joaquin Delta

    Science.gov (United States)

    Fletcher, Jon Peter B.; Erdem, Jemile; Seats, Kevin; Lawrence, Jesse

    2016-01-01

    If shaking from a local or regional earthquake in the San Francisco Bay region were to rupture levees in the Sacramento/San Joaquin Delta, then brackish water from San Francisco Bay would contaminate the water in the Delta: the source of fresh water for about half of California. As a prelude to a full shear-wave velocity model that can be used in computer simulations and further seismic hazard analysis, we report on the use of ambient noise tomography to build a fundamental-mode, Rayleigh-wave group velocity model for the region around the Sacramento/San Joaquin Delta in the western Central Valley, California. Recordings from the vertical component of about 31 stations were processed to compute the spatial distribution of Rayleigh wave group velocities. Complex coherency between pairs of stations was stacked over 8 months to more than a year. Dispersion curves were determined from 4 to about 18 seconds. We calculated average group velocities for each period and inverted for deviations from the average for a matrix of cells that covered the study area. Smoothing using the first difference is applied. Cells of the model were about 5.6 km in either dimension. Checkerboard tests of resolution, which are dependent on station density, suggest that the resolving ability of the array is reasonably good within the middle of the array with resolution between 0.2 and 0.4 degrees. Overall, low velocities in the middle of each image reflect the deeper sedimentary syncline in the Central Valley. In detail, the model shows several centers of low velocity that may be associated with gross geologic features such as faulting along the western margin of the Central Valley, oil and gas reservoirs, and large cross cutting features like the Stockton arch. At shorter periods around 5.5s, the model’s western boundary between low and high velocities closely follows regional fault geometry and the edge of a residual isostatic gravity low. In the eastern part of the valley, the boundaries

  2. TOP500 Supercomputers for June 2005

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2005-06-22

    25th Edition of TOP500 List of World's Fastest Supercomputers Released: DOE/LLNL BlueGene/L and IBM gain Top Positions MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 25th edition of the TOP500 list of the world's fastest supercomputers was released today (June 22, 2005) at the 20th International Supercomputing Conference (ISC2005) in Heidelberg, Germany.

  3. United States Air Force Personalized Medicine and Advanced Diagnostics Program Panel: Representative Research at the San Antonio Military Medical Center

    Science.gov (United States)

    2016-05-20

    Representative personalized medicine and advanced diagnostics research from the United States Air Force program, presented at the University of Texas at San Antonio/SAMHS & Universities Research Forum (SURF 2016) in San Antonio, TX, on 20 May 2016.

  4. The Pawsey Supercomputer geothermal cooling project

    Science.gov (United States)

    Regenauer-Lieb, K.; Horowitz, F.; Western Australian Geothermal Centre Of Excellence, T.

    2010-12-01

    The Australian Government has funded the Pawsey supercomputer in Perth, Western Australia, providing computational infrastructure intended to support the future operations of the Australian Square Kilometre Array radiotelescope and to boost next-generation computational geosciences in Australia. Supplementary funds have been directed to the development of a geothermal exploration well to research the potential for direct heat use applications at the Pawsey Centre site. Cooling the Pawsey supercomputer may be achieved by geothermal heat exchange rather than by conventional electrical power cooling, thus reducing the carbon footprint of the Pawsey Centre and demonstrating an innovative green technology that is widely applicable in industry and urban centres across the world. The exploration well is scheduled to be completed in 2013, with drilling due to commence in the third quarter of 2011. One year is allocated to finalizing the design of the exploration, monitoring and research well. Success in the geothermal exploration and research program will result in an industrial-scale geothermal cooling facility at the Pawsey Centre, and will provide a world-class student training environment in geothermal energy systems. A similar system is partially funded and in advanced planning to provide base-load air-conditioning for the main campus of the University of Western Australia. Both systems are expected to draw ~80-95 degrees C water from aquifers lying between 2000 and 3000 meters depth from naturally permeable rocks of the Perth sedimentary basin. The geothermal water will be run through absorption chilling devices, which only require heat (as opposed to mechanical work) to power a chilled water stream adequate to meet the cooling requirements. Once the heat has been removed from the geothermal water, licensing issues require the water to be re-injected back into the aquifer system. These systems are intended to demonstrate the feasibility of powering large-scale air
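
    As a back-of-the-envelope illustration (flow rate and temperature drop assumed, not figures from the project) of how much heat such a geothermal loop could deliver to absorption chillers, the available thermal power is simply Q = m_dot * cp * ΔT:

      # Illustrative only: thermal power available from a geothermal loop,
      # Q = m_dot * cp * dT. The flow rate and temperatures are assumed values,
      # not figures from the Pawsey project.
      cp_water = 4186.0          # J/(kg K), specific heat of water
      m_dot = 20.0               # kg/s, assumed production flow rate
      t_in, t_out = 90.0, 60.0   # deg C: assumed supply and reinjection temperatures

      q_thermal = m_dot * cp_water * (t_in - t_out)   # watts
      print(f"heat available to the absorption chillers: {q_thermal / 1e6:.1f} MW_th")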

  5. TOP500 Supercomputers for November 2004

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2004-11-08

    24th Edition of TOP500 List of World's Fastest Supercomputers Released: DOE/IBM BlueGene/L and NASA/SGI's Columbia gain Top Positions MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 24th edition of the TOP500 list of the world's fastest supercomputers was released today (November 8, 2004) at the SC2004 Conference in Pittsburgh, Pa.

  6. Quantum Hamiltonian Physics with Supercomputers

    International Nuclear Information System (INIS)

    Vary, James P.

    2014-01-01

    The vision of solving the nuclear many-body problem in a Hamiltonian framework with fundamental interactions tied to QCD via Chiral Perturbation Theory is gaining support. The goals are to preserve the predictive power of the underlying theory, to test fundamental symmetries with the nucleus as laboratory and to develop new understandings of the full range of complex quantum phenomena. Advances in theoretical frameworks (renormalization and many-body methods) as well as in computational resources (new algorithms and leadership-class parallel computers) signal a new generation of theory and simulations that will yield profound insights into the origins of nuclear shell structure, collective phenomena and complex reaction dynamics. Fundamental discovery opportunities also exist in such areas as physics beyond the Standard Model of Elementary Particles, the transition between hadronic and quark–gluon dominated dynamics in nuclei and signals that characterize dark matter. I will review some recent achievements and present ambitious consensus plans along with their challenges for a coming decade of research that will build new links between theory, simulations and experiment. Opportunities for graduate students to embark upon careers in the fast developing field of supercomputer simulations are also discussed

  7. Plasma turbulence calculations on supercomputers

    International Nuclear Information System (INIS)

    Carreras, B.A.; Charlton, L.A.; Dominguez, N.; Drake, J.B.; Garcia, L.; Leboeuf, J.N.; Lee, D.K.; Lynch, V.E.; Sidikman, K.

    1991-01-01

    Although the single-particle picture of magnetic confinement is helpful in understanding some basic physics of plasma confinement, it does not give a full description. Collective effects dominate plasma behavior. Any analysis of plasma confinement requires a self-consistent treatment of the particles and fields. The general picture is further complicated because the plasma, in general, is turbulent. The study of fluid turbulence is a rather complex field by itself. In addition to the difficulties of classical fluid turbulence, plasma turbulence studies face the problems caused by the induced magnetic turbulence, which couples back to the fluid. Since the fluid is not a perfect conductor, this turbulence can lead to changes in the topology of the magnetic field structure, causing the magnetic field lines to wander radially. Because the plasma flows along the field lines, the wandering lines carry particles with them, and this enhances the losses caused by collisions. The changes in topology are critical for the plasma confinement. The study of plasma turbulence and the concomitant transport is a challenging problem. Because of the importance of solving the plasma turbulence problem for controlled thermonuclear research, the high complexity of the problem, and the necessity of attacking the problem with supercomputers, the study of plasma turbulence in magnetic confinement devices is a Grand Challenge problem

  8. Quantum Hamiltonian Physics with Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Vary, James P.

    2014-06-15

    The vision of solving the nuclear many-body problem in a Hamiltonian framework with fundamental interactions tied to QCD via Chiral Perturbation Theory is gaining support. The goals are to preserve the predictive power of the underlying theory, to test fundamental symmetries with the nucleus as laboratory and to develop new understandings of the full range of complex quantum phenomena. Advances in theoretical frameworks (renormalization and many-body methods) as well as in computational resources (new algorithms and leadership-class parallel computers) signal a new generation of theory and simulations that will yield profound insights into the origins of nuclear shell structure, collective phenomena and complex reaction dynamics. Fundamental discovery opportunities also exist in such areas as physics beyond the Standard Model of Elementary Particles, the transition between hadronic and quark–gluon dominated dynamics in nuclei and signals that characterize dark matter. I will review some recent achievements and present ambitious consensus plans along with their challenges for a coming decade of research that will build new links between theory, simulations and experiment. Opportunities for graduate students to embark upon careers in the fast developing field of supercomputer simulations are also discussed.

  9. Accuracy of Perceived Estimated Travel Time by EMS to a Trauma Center in San Bernardino County, California

    Directory of Open Access Journals (Sweden)

    Michael M. Neeki

    2016-06-01

    Full Text Available Introduction: Mobilization of trauma resources has the potential to cause ripple effects throughout hospital operations. One major factor affecting efficient utilization of trauma resources is a discrepancy between the prehospital estimated time of arrival (ETA) as communicated by emergency medical services (EMS) personnel and their actual time of arrival (TOA). The current study aimed to assess the accuracy of the perceived prehospital estimated arrival time by EMS personnel in comparison to their actual arrival time at a Level II trauma center in San Bernardino County, California. Methods: This retrospective study included traumas classified as alerts or activations that were transported to Arrowhead Regional Medical Center in 2013. We obtained estimated arrival time and actual arrival time for each transport from the Surgery Department Trauma Registry. The difference between the median of ETA and actual TOA by EMS crews to the trauma center was calculated for these transports. Additional variables assessed included time of day and month during which the transport took place. Results: A total of 2,454 patients classified as traumas were identified in the Surgery Department Trauma Registry. After exclusion of trauma consults, walk-ins, handoffs between agencies, downgraded traumas, traumas missing information, and traumas transported by agencies other than American Medical Response, Ontario Fire, Rialto Fire or San Bernardino County Fire, we included a final sample size of 555 alert and activation classified traumas in the final analysis. When combining all transports by the included EMS agencies, the median of the ETA was 10 minutes and the median of the actual TOA was 22 minutes (median of difference = 9 minutes, p<0.0001). Furthermore, when comparing the difference between trauma alerts and activations, trauma activations demonstrated an equal or larger difference in the median of the estimated and actual time of arrival (p<0.0001). We also found
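
    As a small worked illustration (with invented transport times, not the study's data) of the statistics quoted above: the median of the paired ETA/TOA differences need not equal the difference of the two medians, which is why the record can report a 9-minute median difference alongside 10- and 22-minute medians.

      # Invented example times (minutes) illustrating median ETA, median TOA,
      # and the median of paired differences; not data from the study.
      import numpy as np

      eta = np.array([5, 10, 20])     # EMS-reported estimated arrival times
      toa = np.array([22, 30, 21])    # actual arrival times for the same runs

      print("median ETA:", np.median(eta))                                # 10.0
      print("median TOA:", np.median(toa))                                # 22.0
      print("difference of medians:", np.median(toa) - np.median(eta))    # 12.0
      print("median of paired differences:", np.median(toa - eta))        # 17.0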

  10. Desktop supercomputer: what can it do?

    Science.gov (United States)

    Bogdanov, A.; Degtyarev, A.; Korkhov, V.

    2017-12-01

    The paper addresses the issues of solving complex problems that require using supercomputers or multiprocessor clusters, which are available to most researchers nowadays. Efficient distribution of high performance computing resources according to actual application needs has been a major research topic since high-performance computing (HPC) technologies became widely introduced. At the same time, comfortable and transparent access to these resources was a key user requirement. In this paper we discuss approaches to building a virtual private supercomputer available at the user's desktop: a virtual computing environment tailored specifically for a target user with a particular target application. We describe and evaluate possibilities to create the virtual supercomputer based on light-weight virtualization technologies, and analyze the efficiency of our approach compared to traditional methods of HPC resource management.

  11. Desktop supercomputer: what can it do?

    International Nuclear Information System (INIS)

    Bogdanov, A.; Degtyarev, A.; Korkhov, V.

    2017-01-01

    The paper addresses the issues of solving complex problems that require using supercomputers or multiprocessor clusters, which are available to most researchers nowadays. Efficient distribution of high performance computing resources according to actual application needs has been a major research topic since high-performance computing (HPC) technologies became widely introduced. At the same time, comfortable and transparent access to these resources was a key user requirement. In this paper we discuss approaches to building a virtual private supercomputer available at the user's desktop: a virtual computing environment tailored specifically for a target user with a particular target application. We describe and evaluate possibilities to create the virtual supercomputer based on light-weight virtualization technologies, and analyze the efficiency of our approach compared to traditional methods of HPC resource management.

  12. Status reports of supercomputing astrophysics in Japan

    International Nuclear Information System (INIS)

    Nakamura, Takashi; Nagasawa, Mikio

    1990-01-01

    The Workshop on Supercomputing Astrophysics was held at the National Laboratory for High Energy Physics (KEK, Tsukuba) from August 31 to September 2, 1989. More than 40 participants, physicists and astronomers, attended and discussed many topics in an informal atmosphere. The main purpose of this workshop was to survey theoretical activities in computational astrophysics in Japan. It also aimed to promote effective collaboration among numerical experimentalists working on supercomputing techniques. The presented papers, on hydrodynamics, plasma physics, gravitating systems, radiative transfer and general relativity, are all stimulating. In fact, these numerical calculations have now become possible in Japan owing to the power of Japanese supercomputers such as the HITAC S820, Fujitsu VP400E and NEC SX-2. (J.P.N.)

  13. Advanced parallel processing with supercomputer architectures

    International Nuclear Information System (INIS)

    Hwang, K.

    1987-01-01

    This paper investigates advanced parallel processing techniques and innovative hardware/software architectures that can be applied to boost the performance of supercomputers. Critical issues on architectural choices, parallel languages, compiling techniques, resource management, concurrency control, programming environment, parallel algorithms, and performance enhancement methods are examined and the best answers are presented. The authors cover advanced processing techniques suitable for supercomputers, high-end mainframes, minisupers, and array processors. The coverage emphasizes vectorization, multitasking, multiprocessing, and distributed computing. In order to achieve these operation modes, parallel languages, smart compilers, synchronization mechanisms, load balancing methods, mapping parallel algorithms, operating system functions, application library, and multidiscipline interactions are investigated to ensure high performance. At the end, they assess the potentials of optical and neural technologies for developing future supercomputers

  14. Richness and diversity patterns of birds in urban green areas in the center of San Salvador, El Salvador

    Directory of Open Access Journals (Sweden)

    Gabriel L. Vides-Hernández

    2017-10-01

    Full Text Available Increasing urbanization has led to natural ecosystems being constantly replaced by an urban landscape, a process that is very noticeable in El Salvador due to its small territorial extension (21,041 km²) and high population density (291 inhabitants/km²). We performed an inventory in 12 urban green areas with different sizes, shapes and distances from the largest forest area in the metropolitan zone, based on MacArthur and Wilson's (1967) island biogeography theory. We evaluated whether the richness, diversity and equitability of birds were related to the size and distance of the green areas, and whether their shape had any effect on bird richness. We observed a total of 20 bird species and classified them according to their diet (generalist or specialist). We found that distance did not influence bird richness and that there was no interaction between the size and distance variables, but the size of the green area did have an influence. The richness of specialist-diet birds was higher in the more circular green areas than in the irregular ones. We conclude that in the urban center of San Salvador, the presence of large, circular green areas contributes more to specialist-diet bird richness than areas of similar size but irregular shape. Even small areas contribute more to specialist-diet bird richness if their shape is more circular.
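
    One common way to quantify the circular-versus-irregular shape effect described above is a circularity index, 4πA/P², which equals 1 for a perfect circle and decreases as a patch becomes more elongated or irregular; the sketch below applies it to invented park dimensions (the study itself may have used a different shape metric).

      # Illustrative circularity index 4*pi*A / P**2 for green areas; a perfect
      # circle scores 1.0. Areas and perimeters are invented, not study data.
      import math

      def circularity(area_m2: float, perimeter_m: float) -> float:
          return 4.0 * math.pi * area_m2 / perimeter_m ** 2

      parks = {
          "compact park": (10_000.0, 380.0),     # ~1 ha, nearly circular outline
          "irregular strip": (10_000.0, 900.0),  # same area, long irregular outline
      }
      for name, (a, p) in parks.items():
          print(f"{name}: circularity = {circularity(a, p):.2f}")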

  15. Current situation of sexual and reproductive health of men deprived of liberty in the Institutional Care Center of San Jose

    Directory of Open Access Journals (Sweden)

    Dorita Rivas Fonseca

    2013-10-01

    Full Text Available The objective of this research was to determine the current status of the sexual and reproductive health of the prisoners of the Institutional Care Center (CAI) of San Jose. It is a descriptive study. Strategic sampling determined the participation of 102 men. The information was obtained by applying a self-administered questionnaire with closed and open questions. Regarding their socio-demographic profile, the men deprived of liberty are a very heterogeneous group. As regards sexual and reproductive health, they relate the first concept to the prevention of disease and the second to reproductive aspects, which reveals limited knowledge of these topics, something that affects daily life activities and self-care. It is concluded that research by gyneco-obstetric nurses among people deprived of liberty is almost nil, not only in this country but worldwide, especially with the male population. In the case of the CAI prison, health care is not sufficient for the number of inmates housed (overpopulation of almost 50%); this implies a deterioration in the health and physical condition of these people, as well as in their sexual and reproductive health

  16. The ETA systems plans for supercomputers

    International Nuclear Information System (INIS)

    Swanson, C.D.

    1987-01-01

    The ETA10, from ETA Systems, is a class VII supercomputer featuring multiprocessing, a large hierarchical memory system, high performance input/output, and network support for both batch and interactive processing. Advanced technology used in the ETA 10 includes liquid nitrogen cooled CMOS logic with 20,000 gates per chip, a single printed circuit board for each CPU, and high density static and dynamic MOS memory chips. Software for the ETA 10 includes an underlying kernel that supports multiple user environments, a new ETA FORTRAN compiler with an advanced automatic vectorizer, a multitasking library and debugging tools. Possible developments for future supercomputers from ETA Systems are discussed

  17. Centralized supercomputer support for magnetic fusion energy research

    International Nuclear Information System (INIS)

    Fuss, D.; Tull, G.G.

    1984-01-01

    High-speed computers with large memories are vital to magnetic fusion energy research. Magnetohydrodynamic (MHD), transport, equilibrium, Vlasov, particle, and Fokker-Planck codes that model plasma behavior play an important role in designing experimental hardware and interpreting the resulting data, as well as in advancing plasma theory itself. The size, architecture, and software of supercomputers to run these codes are often the crucial constraints on the benefits such computational modeling can provide. Hence, vector computers such as the CRAY-1 offer a valuable research resource. To meet the computational needs of the fusion program, the National Magnetic Fusion Energy Computer Center (NMFECC) was established in 1974 at the Lawrence Livermore National Laboratory. Supercomputers at the central computing facility are linked to smaller computer centers at each of the major fusion laboratories by a satellite communication network. In addition to providing large-scale computing, the NMFECC environment stimulates collaboration and the sharing of computer codes and data among the many fusion researchers in a cost-effective manner

  18. Adaptability of supercomputers to nuclear computations

    International Nuclear Information System (INIS)

    Asai, Kiyoshi; Ishiguro, Misako; Matsuura, Toshihiko.

    1983-01-01

    Recently, in the field of scientific and technical calculation, the usefulness of supercomputers represented by the CRAY-1 has been recognized, and they are utilized in various countries. The high speed of supercomputers rests on their vector-computation capability. The authors have investigated the adaptability of about 40 typical atomic energy codes to vector computation over the past six years. Based on the results of that investigation, the suitability of the vector-computation capability of supercomputers for atomic energy codes, problems in their utilization, and future prospects are explained. The adaptability of individual codes to vector computation depends largely on the algorithms and program structures used in the codes. The speedup achieved by pipeline vector processing, the investigation at the Japan Atomic Energy Research Institute and its results, and examples of vectorizing codes for atomic energy, environmental safety and nuclear fusion are reported. The speedup factors for the 40 examples ranged from 1.5 to 9.0. It can be said that the adaptability of supercomputers to atomic energy codes is fairly good. (Kako, I.)

  19. Computational plasma physics and supercomputers. Revision 1

    International Nuclear Information System (INIS)

    Killeen, J.; McNamara, B.

    1985-01-01

    The Supercomputers of the 80's are introduced. They are 10 to 100 times more powerful than today's machines. The range of physics modeling in the fusion program is outlined. New machine architecture will influence particular models, but parallel processing poses new programming difficulties. Increasing realism in simulations will require better numerics and more elaborate mathematical models

  20. Utilizing Lean Six Sigma Methodology to Improve the Authored Works Command Approval Process at Naval Medical Center San Diego.

    Science.gov (United States)

    Valdez, Michelle M; Liwanag, Maureen; Mount, Charles; Rodriguez, Rechell; Avalos-Reyes, Elisea; Smith, Andrew; Collette, David; Starsiak, Michael; Green, Richard

    2018-03-14

    Inefficiencies in the command approval process for publications and/or presentations negatively impact DoD Graduate Medical Education (GME) residency programs' ability to meet ACGME scholarly activity requirements. A preliminary review of the authored works approval process at Naval Medical Center San Diego (NMCSD) disclosed significant inefficiency, variation in process, and a low level of customer satisfaction. In order to facilitate and encourage scholarly activity at NMCSD, and meet ACGME requirements, the Executive Steering Council (ESC) chartered an interprofessional team to lead a Lean Six Sigma (LSS) Rapid Improvement Event (RIE) project. Two major outcome metrics were identified: (1) the number of authored works submissions containing all required signatures and (2) customer satisfaction with the authored works process. Primary metric baseline data were gathered utilizing a Clinical Investigations database tracking publications and presentations. Secondary metric baseline data were collected via a customer satisfaction survey to GME faculty and residents. The project team analyzed pre-survey data and utilized LSS tools and methodology including a "gemba" (environment) walk, cause and effect diagram, critical to quality tree, voice of the customer, "muda" (waste) chart, and a pre- and post-event value stream map. The team selected an electronic submission system as the intervention most likely to positively impact the RIE project outcome measures. The number of authored works compliant with all required signatures improved from 52% to 100%. Customer satisfaction rated as "completely or mostly satisfied" improved from 24% to 97%. For both outcomes, signature compliance and customer satisfaction, statistical significance was achieved. The team thus used LSS methodology and tools to improve signature compliance and increase customer satisfaction with the authored works approval process, leading to 100% signature compliance, a comprehensive longitudinal repository of all

  1. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    Energy Technology Data Exchange (ETDEWEB)

    De, K [University of Texas at Arlington; Jha, S [Rutgers University; Klimentov, A [Brookhaven National Laboratory (BNL); Maeno, T [Brookhaven National Laboratory (BNL); Nilsson, P [Brookhaven National Laboratory (BNL); Oleynik, D [University of Texas at Arlington; Panitkin, S [Brookhaven National Laboratory (BNL); Wells, Jack C [ORNL; Wenaus, T [Brookhaven National Laboratory (BNL)

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF), the MIRA supercomputer at Argonne Leadership Computing Facilities (ALCF), the supercomputer at the National Research Center Kurchatov Institute, IT4 in Ostrava and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on LCFs' multi-core worker nodes. This implementation
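
    A minimal, hypothetical sketch (not the PanDA pilot itself) of the light-weight MPI wrapper idea described above: each MPI rank launches one independent single-threaded payload, so a whole multi-core node allocation on a leadership-class machine is filled with serial workloads. mpi4py and the payload command line are assumptions made for illustration.

      # Hypothetical MPI wrapper: every rank runs one independent single-threaded
      # payload so an LCF node allocation is filled with serial jobs.
      # Run with, e.g.:  mpirun -n 32 python mpi_wrapper.py
      import subprocess
      import sys
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()

      # Each rank derives its own input/output names from its rank number.
      cmd = ["./payload.exe", f"--input=events_{rank}.dat", f"--output=out_{rank}.root"]
      ret = subprocess.call(cmd)

      # Gather return codes on rank 0 so the wrapper can report success or failure.
      codes = comm.gather(ret, root=0)
      if rank == 0:
          failed = [i for i, c in enumerate(codes) if c != 0]
          print("failed ranks:", failed if failed else "none")
          sys.exit(1 if failed else 0)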

  2. Graphics supercomputer for computational fluid dynamics research

    Science.gov (United States)

    Liaw, Goang S.

    1994-11-01

    The objective of this project is to purchase a state-of-the-art graphics supercomputer to improve the Computational Fluid Dynamics (CFD) research capability at Alabama A & M University (AAMU) and to support the Air Force research projects. A cutting-edge graphics supercomputer system, Onyx VTX, from Silicon Graphics Computer Systems (SGI), was purchased and installed. Other equipment, including a desktop personal computer (a PC-486 DX2 with a built-in 10-BaseT Ethernet card), a 10-BaseT hub, an Apple Laser Printer Select 360, and a notebook computer from Zenith, was also purchased. A reading room has been converted into a research computer lab by adding some furniture and an air conditioning unit in order to provide an appropriate working environment for researchers and the purchased equipment. All the purchased equipment was successfully installed and is fully functional. Several research projects, including two existing Air Force projects, are being performed using these facilities.

  3. FPS scientific computers and supercomputers in chemistry

    International Nuclear Information System (INIS)

    Curington, I.J.

    1987-01-01

    FPS Array Processors, scientific computers, and highly parallel supercomputers are used in nearly all aspects of compute-intensive computational chemistry. A survey is made of work utilizing this equipment, both published and current research. The relationship of the computer architecture to computational chemistry is discussed, with specific reference to Molecular Dynamics, Quantum Monte Carlo simulations, and Molecular Graphics applications. Recent installations of the FPS T-Series are highlighted, and examples of Molecular Graphics programs running on the FPS-5000 are shown

  4. Problem solving in nuclear engineering using supercomputers

    International Nuclear Information System (INIS)

    Schmidt, F.; Scheuermann, W.; Schatz, A.

    1987-01-01

    The availability of supercomputers enables the engineer to formulate new strategies for problem solving. One such strategy is the Integrated Planning and Simulation System (IPSS). With the integrated systems, simulation models with greater consistency and good agreement with actual plant data can be effectively realized. In the present work some of the basic ideas of IPSS are described as well as some of the conditions necessary to build such systems. Hardware and software characteristics as realized are outlined. (orig.) [de

  5. Interview with Jennie E. Rodríguez, Executive Director of the Mission Cultural Center for Latino Arts, San Francisco, CA, USA, August 15, 2001

    Directory of Open Access Journals (Sweden)

    Gérard Selbach

    2009-10-01

    Full Text Available Foreword: The Mission Cultural Center for Latino Arts (MCCLA) is located at 2868 Mission Street in San Francisco, in a district mainly inhabited by Hispanics and well-known for its numerous murals. The Center was founded in 1977 by artists and community activists who shared "the vision to promote, preserve and develop the Latino cultural arts that reflect the living tradition and experiences of Chicano, Central and South American, and Caribbean people." August 2001 was as busy at the Center as a...

  6. Cooperative visualization and simulation in a supercomputer environment

    International Nuclear Information System (INIS)

    Ruehle, R.; Lang, U.; Wierse, A.

    1993-01-01

    The article takes a closer look at the requirements imposed by the idea of integrating all the components into a homogeneous software environment. To this end, several methods for distributing applications according to problem type are discussed. The methods currently available at the University of Stuttgart Computer Center (RUS) for the distribution of applications are then explained. Finally, the aims and characteristics of a European-sponsored project called PAGEIN are described; it fits perfectly into the line of developments at RUS. The aim of the project is to experiment with future cooperative working modes of aerospace scientists in a high-speed distributed supercomputing environment. Project results will have an impact on the development of real future scientific application environments. (orig./DG)

  7. Multi-petascale highly efficient parallel supercomputer

    Science.gov (United States)

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen -Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Smith, Brian; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2015-07-14

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaOPS-scale computing, at decreased cost, power and footprint, and that allows for a maximum packaging density of processing nodes from an interconnect point of view. The Supercomputer exploits technological advances in VLSI that enable a computing model where many processors can be integrated into a single Application Specific Integrated Circuit (ASIC). Each ASIC computing node comprises a system-on-chip ASIC utilizing four or more processors integrated into one die, each having full access to all system resources. This enables adaptive partitioning of the processors to functions such as compute or messaging I/O on an application-by-application basis and, preferably, adaptive partitioning of functions in accordance with various algorithmic phases within an application; if I/O or other processors are underutilized, they can participate in computation or communication. Nodes are interconnected by a five-dimensional torus network with DMA that optimally maximizes the throughput of packet communications between nodes and minimizes latency.
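
    As a hedged illustration of the five-dimensional torus interconnect mentioned above, the sketch below computes a node's ten nearest neighbours (one step in either direction along each of five dimensions, with wrap-around); the torus extents used here are invented, not the machine's actual dimensions.

      # Illustrative sketch: nearest neighbours of a node in a 5-D torus with
      # wrap-around. The extents below are assumed, not a real machine's.
      DIMS = (4, 4, 4, 4, 2)

      def torus_neighbors(coord):
          """Return the 10 neighbours of `coord`: +/-1 along each axis, modulo size."""
          neighbors = []
          for axis, size in enumerate(DIMS):
              for step in (-1, +1):
                  n = list(coord)
                  n[axis] = (n[axis] + step) % size
                  neighbors.append(tuple(n))
          return neighbors

      print(torus_neighbors((0, 0, 0, 0, 0)))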

  8. PNNL supercomputer to become largest computing resource on the Grid

    CERN Multimedia

    2002-01-01

    Hewlett Packard announced that the US DOE Pacific Northwest National Laboratory will connect a 9.3-teraflop HP supercomputer to the DOE Science Grid. This will be the largest supercomputer attached to a computer grid anywhere in the world (1 page).

  9. Supercomputing - Use Cases, Advances, The Future (2/2)

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Supercomputing has become a staple of science and the poster child for aggressive developments in silicon technology, energy efficiency and programming. In this series we examine the key components of supercomputing setups and the various advances – recent and past – that made headlines and delivered bigger and bigger machines. We also take a closer look at the future prospects of supercomputing, and the extent of its overlap with high throughput computing, in the context of main use cases ranging from oil exploration to market simulation. On the second day, we will focus on software and software paradigms driving supercomputers, workloads that need supercomputing treatment, advances in technology and possible future developments. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and i...

  10. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    Science.gov (United States)

    Klimentov, A.; De, K.; Jha, S.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Wells, J.; Wenaus, T.

    2016-10-01

    The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than the grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and it has been in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the

  11. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    International Nuclear Information System (INIS)

    Klimentov, A; Maeno, T; Nilsson, P; Panitkin, S; Wenaus, T; De, K; Oleynik, D; Jha, S; Wells, J

    2016-01-01

    The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than the grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and it has been in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the

  12. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-01-01

    The SANAM supercomputer was jointly built by KACST and FIAS in 2012, ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of the HPL and STREAM benchmarks.
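
    A hedged, NumPy-based sketch of the STREAM triad kernel, a(i) = b(i) + q*c(i), whose sustained memory bandwidth the benchmark reports; the array size and repetition count are arbitrary, and the real benchmark is the official C/Fortran STREAM code rather than this toy.

      # Toy STREAM-triad bandwidth estimate; the real benchmark is the official
      # C/Fortran STREAM code. Array size and repetitions are arbitrary.
      import time
      import numpy as np

      n, q, reps = 20_000_000, 3.0, 10
      b = np.random.rand(n)
      c = np.random.rand(n)
      a = np.empty(n)

      t0 = time.perf_counter()
      for _ in range(reps):
          np.multiply(c, q, out=a)   # a = q * c
          np.add(a, b, out=a)        # a = b + q * c  (triad)
      elapsed = time.perf_counter() - t0

      bytes_moved = 3 * n * 8 * reps   # read b, read c, write a (8-byte doubles)
      print(f"triad bandwidth ~ {bytes_moved / elapsed / 1e9:.1f} GB/s")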

  13. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-03-13

    The SANAM supercomputer was jointly built by KACST and FIAS in 2012, ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of the HPL and STREAM benchmarks.

  14. Multi-petascale highly efficient parallel supercomputer

    Science.gov (United States)

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen-Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2018-05-15

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaflop-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC). The ASIC nodes are interconnected by a five dimensional torus network that optimally maximizes the throughput of packet communications between nodes and minimizes latency. The network implements a collective network and a global asynchronous network that provides global barrier and notification functions. Integrated in the node design is a list-based prefetcher. The memory system implements transactional memory, thread-level speculation, and a multiversioning cache that improves the soft error rate while supporting DMA functionality, allowing for parallel message-passing.

  15. Making lemonade from lemons: a case study on loss of space at the Dolph Briscoe, Jr. Library, University of Texas Health Science Center at San Antonio.

    Science.gov (United States)

    Tobia, Rajia C; Feldman, Jonquil D

    2010-01-01

    The setting for this case study is the Dolph Briscoe, Jr. Library, University of Texas Health Science Center at San Antonio, a health sciences campus with medical, dental, nursing, health professions, and graduate schools. During 2008-2009, major renovations to the library building were completed including office space for a faculty development department, multipurpose classrooms, a 24/7 study area, study rooms, library staff office space, and an information commons. The impetus for changes to the library building was the decreasing need to house collections in an increasingly electronic environment, the need for office space for other departments, and growth of the student body. About 40% of the library building was remodeled or repurposed, with a loss of approximately 25% of the library's original space. Campus administration proposed changes to the library building, and librarians worked with administration, architects, and construction managers to seek renovation solutions that meshed with the library's educational mission.

  16. Adventures in supercomputing: An innovative program for high school teachers

    Energy Technology Data Exchange (ETDEWEB)

    Oliver, C.E.; Hicks, H.R.; Summers, B.G. [Oak Ridge National Lab., TN (United States); Staten, D.G. [Wartburg Central High School, TN (United States)

    1994-12-31

    Within the realm of education, seldom does an innovative program become available with the potential to change an educator's teaching methodology. Adventures in Supercomputing (AiS), sponsored by the U.S. Department of Energy (DOE), is such a program. It is a program for high school teachers that changes the teacher paradigm from a teacher-directed approach of teaching to a student-centered approach. "A student-centered classroom offers better opportunities for development of internal motivation, planning skills, goal setting and perseverance than does the traditional teacher-directed mode". Not only is the process of teaching changed, but the cross-curricula integration within the AiS materials is remarkable. Written from a teacher's perspective, this paper will describe the AiS program and its effects on teachers and students, primarily at Wartburg Central High School, in Wartburg, Tennessee. The AiS program in Tennessee is sponsored by Oak Ridge National Laboratory (ORNL).

  17. California Environmental Vulnerability Assessment (CEVA) Score, San Joaquin Valley CA, 2013, UC Davis Center for Regional Change

    Data.gov (United States)

    U.S. Environmental Protection Agency — This data set is based on a three year study by the UC Davis Center for Regional Change, in affiliation with the Environmental Justice Project of the John Muir...

  18. OpenMP Performance on the Columbia Supercomputer

    Science.gov (United States)

    Haoqiang, Jin; Hood, Robert

    2005-01-01

    This presentation discusses the Columbia supercomputer, one of the world's fastest, delivering 61 TFLOPs (as of 10/20/04). It was conceived, designed, built, and deployed in just 120 days as a 20-node system built on proven 512-processor nodes. It is the largest SGI system in the world, with over 10,000 Intel Itanium 2 processors; it provides the largest node size built from commodity parts (512 processors) and the largest shared-memory environment (2048 processors), and with 88% efficiency it tops the scalar systems on the Top500 list.
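
    As a small worked example of the efficiency figure quoted above: parallel efficiency is the speedup divided by the number of processors. The timings below are hypothetical, chosen only so that the arithmetic reproduces an 88%-style figure.

      # Worked example: parallel efficiency = (T_serial / T_parallel) / n_procs.
      # The timings are hypothetical and serve only to illustrate the definition.
      def parallel_efficiency(t_serial: float, t_parallel: float, n_procs: int) -> float:
          speedup = t_serial / t_parallel
          return speedup / n_procs

      # A job that runs 1802x faster on 2048 processors is about 88% efficient.
      print(f"{parallel_efficiency(t_serial=1802.0, t_parallel=1.0, n_procs=2048):.0%}")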

  19. Accelerating Science Impact through Big Data Workflow Management and Supercomputing

    Directory of Open Access Journals (Sweden)

    De K.

    2016-01-01

    Full Text Available The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. ATLAS, one of the largest collaborations ever assembled in the history of science, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. To manage the workflow for all data processing on hundreds of data centers, the PanDA (Production and Distributed Analysis) Workload Management System is used. An ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF), is being realized within the BigPanDA and megaPanDA projects. These projects are now exploring how PanDA might be used for managing computing jobs that run on supercomputers including OLCF's Titan and NRC-KI HPC2. The main idea is to reuse, as much as possible, existing components of the PanDA system that are already deployed on the LHC Grid for analysis of physics data. The next generation of PanDA will allow many data-intensive sciences employing a variety of computing platforms to benefit from ATLAS experience and proven tools in highly scalable processing.

  20. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared performance of software-only to GPU-accelerated. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows
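
    A toy analog (not the iotrace tool described above) of per-process I/O profiling: wrapping file reads so that bytes read and wall-clock time spent inside read calls are accumulated. Everything here is illustrative.

      # Toy I/O profiler: accumulate bytes read and time spent in read() calls.
      # This is an illustration, not the iotrace tool from the report.
      import os
      import tempfile
      import time

      class ReadProfiler:
          def __init__(self):
              self.bytes_read = 0
              self.io_seconds = 0.0

          def read_file(self, path, chunk=1 << 20):
              with open(path, "rb") as f:
                  while True:
                      t0 = time.perf_counter()
                      data = f.read(chunk)
                      self.io_seconds += time.perf_counter() - t0
                      if not data:
                          break
                      self.bytes_read += len(data)

      # Self-contained demo: profile reads of a small temporary file.
      with tempfile.NamedTemporaryFile(delete=False) as tmp:
          tmp.write(b"x" * (4 << 20))
          path = tmp.name
      prof = ReadProfiler()
      prof.read_file(path)
      os.remove(path)
      print(f"{prof.bytes_read} bytes in {prof.io_seconds:.6f} s of read calls")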

  1. Defining competencies for education in health care value: recommendations from the University of California, San Francisco Center for Healthcare Value Training Initiative.

    Science.gov (United States)

    Moriates, Christopher; Dohan, Daniel; Spetz, Joanne; Sawaya, George F

    2015-04-01

    Leaders in medical education have increasingly called for the incorporation of cost awareness and health care value into health professions curricula. Emerging efforts have thus far focused on physicians, but foundational competencies need to be defined related to health care value that span all health professions and stages of training. The University of California, San Francisco (UCSF) Center for Healthcare Value launched an initiative in 2012 that engaged a group of educators from all four health professions schools at UCSF: Dentistry, Medicine, Nursing, and Pharmacy. This group created and agreed on a multidisciplinary set of comprehensive competencies related to health care value. The term "competency" was used to describe components within the larger domain of providing high-value care. The group then classified the competencies as beginner, proficient, or expert level through an iterative process and group consensus. The group articulated 21 competencies. The beginner competencies include basic principles of health policy, health care delivery, health costs, and insurance. Proficient competencies include real-world applications of concepts to clinical situations, primarily related to the care of individual patients. The expert competencies focus primarily on systems-level design, advocacy, mentorship, and policy. These competencies aim to identify a standard that may help inform the development of curricula across health professions training. These competencies could be translated into the learning objectives and evaluation methods of resources to teach health care value, and they should be considered in educational settings for health care professionals at all levels of training and across a variety of specialties.

  2. The KhoeSan Early Learning Center Pilot Project: Negotiating Power and Possibility in a South African Institute of Higher Learning

    Science.gov (United States)

    De Wet, Priscilla

    2011-01-01

    As we search for a new paradigm in post-apartheid South Africa, the knowledge base and worldview of the KhoeSan first Indigenous peoples is largely missing. The South African government has established various mechanisms as agents for social change. Institutions of higher learning have implemented transformation programs. KhoeSan peoples, however,…

  3. Helping Smokers Quit: New Partners and New Strategies from the University of California, San Francisco Smoking Cessation Leadership Center.

    Science.gov (United States)

    Schroeder, Steven A; Clark, Brian; Cheng, Christine; Saucedo, Catherine B

    2018-01-01

    The Smoking Cessation Leadership Center (SCLC) was established in 2003 to increase the rate of smoking cessation attempts and the likelihood those efforts would succeed. Although smoking remains the number one cause of preventable death and disability, clinicians underperform in smoking cessation. Furthermore, many clinical organizations, governmental agencies, and advocacy groups put little effort into smoking cessation. Initially targeted at increasing the efforts of primary care physicians, SCLC efforts expanded to include many other medical and non-physician disciplines, ultimately engaging 21 separate specialties. Most clinicians and their organizations are daunted by efforts required to become cessation experts. A compromise solution, Ask, Advise, Refer (to telephone quitlines), was crafted. SCLC also stimulated smoking cessation projects in governmental, not-for-profit, and industry groups, including the Veterans Administration, the Health Resources Services Administration, Los Angeles County, and the Joint Commission. SCLC helped CVS pharmacies to stop selling tobacco products and other pharmacies to increase smoking cessation efforts, provided multiple educational offerings, and distributed $6.4 million in industry-supported smoking cessation grants to 55 organizations plus $4 million in direct SCLC grants. Nevertheless, smoking still causes 540,000 annual deaths in the US. SCLC's work in the field of behavioral health is described in a companion article.

  4. SANS studies of polymers

    International Nuclear Information System (INIS)

    Wignall, G.D.

    1984-10-01

    Before small-angle neutron scattering (SANS), chain conformation studies were limited to light and small angle x-ray scattering techniques, usually in dilute solution. SANS from blends of normal and labeled molecules could give direct information on chain conformation in bulk polymers. Water-soluble polymers may be examined in H2O/D2O mixtures using contrast variation methods to provide further information on polymer structure. This paper reviews some of the information provided by this technique using examples of experiments performed at the National Center for Small-Angle Scattering Research (NCSASR).

  5. Supercomputing - Use Cases, Advances, The Future (1/2)

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Supercomputing has become a staple of science and the poster child for aggressive developments in silicon technology, energy efficiency and programming. In this series we examine the key components of supercomputing setups and the various advances – recent and past – that made headlines and delivered bigger and bigger machines. We also take a closer look at the future prospects of supercomputing, and the extent of its overlap with high throughput computing, in the context of main use cases ranging from oil exploration to market simulation. On the first day, we will focus on the history and theory of supercomputing, the top500 list and the hardware that makes supercomputers tick. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and initiated projects with the private sector (e.g. HP an...

  6. JINR supercomputer of the module type for event parallel analysis

    International Nuclear Information System (INIS)

    Kolpakov, I.F.; Senner, A.E.; Smirnov, V.A.

    1987-01-01

    A model of a supercomputer performing 50 million operations per second is suggested. Its realization would allow JINR data analysis problems for large spectrometers (in particular, the DELPHY collaboration) to be solved. The suggested modular supercomputer is based on commercially available 32-bit microprocessors with a processing rate of about 1 MFLOPS each. The processors are combined by means of standard VME buses. A MicroVAX-II host computer organizes the operation of the system. Data input and output are realized via the MicroVAX-II peripherals. Users' software is based on FORTRAN-77. The supercomputer is connected to a JINR network port, and all JINR users get access to the suggested system

  7. Exploiting Thread Parallelism for Ocean Modeling on Cray XC Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sarje, Abhinav [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Jacobsen, Douglas W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Williams, Samuel W. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ringler, Todd [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Oliker, Leonid [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-05-01

    The incorporation of increasing core counts in modern processors used to build state-of-the-art supercomputers is driving application development towards the exploitation of thread parallelism, in addition to distributed memory parallelism, with the goal of delivering efficient high-performance codes. In this work we describe the exploitation of threading and our experiences with it in a real-world ocean modeling application code, MPAS-Ocean. We present detailed performance analysis and comparisons of various approaches and configurations for threading on the Cray XC series supercomputers.

  8. San Francisco District Laboratory (SAN)

    Data.gov (United States)

    Federal Laboratory Consortium — Program Capabilities: Food Analysis. The SAN-DO Laboratory has an expert in elemental analysis who frequently performs field inspections of materials. A recently acquired...

  9. Comments on the parallelization efficiency of the Sunway TaihuLight supercomputer

    OpenAIRE

    Végh, János

    2016-01-01

    In the world of supercomputers, the large number of processors requires minimizing the inefficiencies of parallelization, which appear as a sequential part of the program from the point of view of Amdahl's law. The recently suggested new figure of merit is applied to the recently presented supercomputer, and the timeline of "Top 500" supercomputers is scrutinized using this metric. It is demonstrated that, in addition to the computing performance and power consumption, the new supercomputer i...
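
    The parallelization inefficiency discussed in this record can be illustrated with Amdahl's law. Below is a minimal Python sketch (not taken from the cited paper; the serial fractions are illustrative assumptions, while the core count is that commonly quoted for Sunway TaihuLight) that computes the speedup bound and the resulting parallel efficiency.

        def amdahl_speedup(serial_fraction, n_cores):
            """Upper bound on speedup for a program with a fixed serial fraction."""
            return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

        if __name__ == "__main__":
            n_cores = 10_649_600          # Sunway TaihuLight core count (for illustration)
            for f in (1e-4, 1e-6, 1e-8):  # assumed serial fractions
                s = amdahl_speedup(f, n_cores)
                print(f"serial fraction {f:.0e}: speedup {s:,.0f}, "
                      f"efficiency {s / n_cores:.3%}")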

  10. Convex unwraps its first grown-up supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Manuel, T.

    1988-03-03

    Convex Computer Corp.'s new supercomputer family is even more of an industry blockbuster than its first system. At a tenfold jump in performance, it's far from just an incremental upgrade over its first minisupercomputer, the C-1. The heart of the new family, the new C-2 processor, churning at 50 million floating-point operations/s, spawns a group of systems whose performance could pass for some fancy supercomputers, namely those of the Cray Research Inc. family. When added to the C-1, Convex's five new supercomputers create the C series, a six-member product group offering a performance range from 20 to 200 Mflops. They mark an important transition for Convex from a one-product high-tech startup to a multinational company with a wide-ranging product line. It's a tough transition, but the Richardson, Texas, company seems to be doing it. The extended product line propels Convex into the upper end of the minisupercomputer class and nudges it into the low end of the big supercomputers. It positions Convex in an uncrowded segment of the market in the $500,000 to $1 million range offering 50 to 200 Mflops of performance. The company is making this move because the minisuper area, which it pioneered, quickly became crowded with new vendors, causing prices and gross margins to drop drastically.

  11. QCD on the BlueGene/L Supercomputer

    International Nuclear Information System (INIS)

    Bhanot, G.; Chen, D.; Gara, A.; Sexton, J.; Vranas, P.

    2005-01-01

    In June 2004 QCD was simulated for the first time at sustained speed exceeding 1 TeraFlops in the BlueGene/L supercomputer at the IBM T.J. Watson Research Lab. The implementation and performance of QCD in the BlueGene/L is presented

  12. QCD on the BlueGene/L Supercomputer

    Science.gov (United States)

    Bhanot, G.; Chen, D.; Gara, A.; Sexton, J.; Vranas, P.

    2005-03-01

    In June 2004 QCD was simulated for the first time at sustained speed exceeding 1 TeraFlops in the BlueGene/L supercomputer at the IBM T.J. Watson Research Lab. The implementation and performance of QCD in the BlueGene/L is presented.

  13. Supercomputers and the future of computational atomic scattering physics

    International Nuclear Information System (INIS)

    Younger, S.M.

    1989-01-01

    The advent of the supercomputer has opened new vistas for the computational atomic physicist. Problems of hitherto unparalleled complexity are now being examined using these new machines, and important connections with other fields of physics are being established. This talk briefly reviews some of the most important trends in computational scattering physics and suggests some exciting possibilities for the future. 7 refs., 2 figs

  14. Mathematical methods and supercomputing in nuclear applications. Proceedings. Vol. 2

    International Nuclear Information System (INIS)

    Kuesters, H.; Stein, E.; Werner, W.

    1993-04-01

    All papers of the two volumes are separately indexed in the data base. Main topics are: Progress in advanced numerical techniques, fluid mechanics, on-line systems, artificial intelligence applications, nodal methods reactor kinetics, reactor design, supercomputer architecture, probabilistic estimation of risk assessment, methods in transport theory, advances in Monte Carlo techniques, and man-machine interface. (orig.)

  15. Mathematical methods and supercomputing in nuclear applications. Proceedings. Vol. 1

    International Nuclear Information System (INIS)

    Kuesters, H.; Stein, E.; Werner, W.

    1993-04-01

    All papers of the two volumes are separately indexed in the data base. Main topics are: Progress in advanced numerical techniques, fluid mechanics, on-line systems, artificial intelligence applications, nodal methods reactor kinetics, reactor design, supercomputer architecture, probabilistic estimation of risk assessment, methods in transport theory, advances in Monte Carlo techniques, and man-machine interface. (orig.)

  16. San Francisco folio, California, Tamalpais, San Francisco, Concord, San Mateo, and Haywards quadrangles

    Science.gov (United States)

    Lawson, Andrew Cowper

    1914-01-01

    The five sheets of the San Francisco folio (the Tamalpais, San Francisco, Concord, San Mateo, and Haywards sheets) map a territory lying between latitude 37° 30' and 38° and longitude 122° and 122° 45'. Large parts of four of these sheets cover the waters of the Bay of San Francisco or of the adjacent Pacific Ocean. (See fig. 1.) Within the area mapped are the cities of San Francisco, Oakland, Berkeley, Alameda, San Rafael, and San Mateo, and many smaller towns and villages. These cities, which have a population aggregating about 750,000, together form the largest and most important center of commercial and industrial activity on the west coast of the United States. The natural advantages afforded by a great harbor, where the railways from the east meet the ships from all ports of the world, have determined the site of a flourishing cosmopolitan, commercial city on the shores of San Francisco Bay. The bay is encircled by hilly and mountainous country diversified by fertile valley lands and divides the territory mapped into two rather contrasted parts, the western part being again divided by the Golden Gate. It will therefore be convenient to sketch the geographic features under four headings: (1) the area east of San Francisco Bay; (2) the San Francisco Peninsula; (3) the Marin Peninsula; (4) San Francisco Bay. (See fig. 2.)

  17. Frequently updated noise threat maps created with use of supercomputing grid

    Directory of Open Access Journals (Sweden)

    Szczodrak Maciej

    2014-09-01

    Full Text Available Innovative supercomputing grid services devoted to noise threat evaluation are presented. The services described in this paper concern two issues: the first is related to noise mapping, while the second focuses on assessment of the noise dose and its influence on the human hearing system. The discussed services were developed within the PL-Grid Plus Infrastructure, which brings together Polish academic supercomputer centers. Selected experimental results achieved by the usage of the proposed services are presented. The assessment of environmental noise threats includes creation of noise maps using either offline or online data, acquired through a grid of monitoring stations. A concept of estimating the source model parameters from measured sound levels, for the purpose of creating frequently updated noise maps, is presented. Connecting the noise mapping grid service with a distributed sensor network enables noise maps to be updated automatically for a specified time period. Moreover, a unique attribute of the developed software is the estimation of the auditory effects evoked by exposure to noise. The estimation method uses a modified psychoacoustic model of hearing and is based on the calculated noise level values and on the given exposure period. Potential use scenarios of the grid services for research or educational purposes are introduced. Presentation of the results of predicted hearing threshold shift caused by exposure to excessive noise can raise public awareness of noise threats.

  18. Use of high performance networks and supercomputers for real-time flight simulation

    Science.gov (United States)

    Cleveland, Jeff I., II

    1993-01-01

    In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be consistent in processing time and be completed in as short a time as possible. These operations include simulation mathematical model computation and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to the Computer Automated Measurement and Control (CAMAC) technology which resulted in a factor of ten increase in the effective bandwidth and reduced latency of modules necessary for simulator communication. This technology extension is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC are completing the development of the use of supercomputers for mathematical model computation to support real-time flight simulation. This includes the development of a real-time operating system and development of specialized software and hardware for the simulator network. This paper describes the data acquisition technology and the development of supercomputing for flight simulation.

  19. San Marino.

    Science.gov (United States)

    1985-02-01

    San Marino, an independent republic located in north central Italy, in 1983 had a population of 22,206 growing at an annual rate of 0.9%. The literacy rate is 97% and the infant mortality rate is 9.6/1000. The terrain is mountainous and the climate is moderate. According to local tradition, San Marino was founded by a Christian stonecutter in the 4th century A.D. as a refuge against religious persecution. Its recorded history began in the 9th century, and it has survived assaults on its independence by the papacy, the Malatesta lords of Rimini, Cesare Borgia, Napoleon, and Mussolini. An 1862 treaty with the newly formed Kingdom of Italy has been periodically renewed and amended. The present government is an alliance between the socialists and communists. San Marino has had its own statutes and governmental institutions since the 11th century. Legislative authority at present is vested in a 60-member unicameral parliament. Executive authority is exercised by the 11-member Congress of State, the members of which head the various administrative departments of the government. The posts are divided among the parties which form the coalition government. Judicial authority is partly exercised by Italian magistrates in civil and criminal cases. San Marino's policies are tied to Italy's, and political organizations and labor unions active in Italy are also active in San Marino. Since World War II, there has been intense rivalry between 2 political coalitions: the Popular Alliance, composed of the Christian Democratic Party and the Independent Social Democratic Party, and the Liberty Committee, a coalition of the Communist Party and the Socialist Party. San Marino's gross domestic product was $137 million and its per capita income was $6290 in 1980. The principal economic activities are farming and livestock raising, along with some light manufacturing. Foreign transactions are dominated by tourism. The government derives most of its revenue from the sale of postage stamps to

  20. Building the Teraflops/Petabytes Production Computing Center

    International Nuclear Information System (INIS)

    Kramer, William T.C.; Lucas, Don; Simon, Horst D.

    1999-01-01

    In just one decade, the 1990s, supercomputer centers have undergone two fundamental transitions which require rethinking their operation and their role in high performance computing. The first transition in the early to mid-1990s resulted from a technology change in high performance computing architecture. Highly parallel distributed memory machines built from commodity parts increased the operational complexity of the supercomputer center, and required the introduction of intellectual services as equally important components of the center. The second transition is happening in the late 1990s as centers are introducing loosely coupled clusters of SMPs as their premier high performance computing platforms, while dealing with an ever-increasing volume of data. In addition, increasing network bandwidth enables new modes of use of a supercomputer center, in particular, computational grid applications. In this paper we describe what steps NERSC is taking to address these issues and stay at the leading edge of supercomputing centers.

  1. Integration of Titan supercomputer at OLCF with ATLAS production system

    CERN Document Server

    Panitkin, Sergey; The ATLAS collaboration

    2016-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of Petabytes of data, and the rate of data processing already exceeds an Exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this talk we will describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for job...

  2. Integration of Titan supercomputer at OLCF with ATLAS Production System

    CERN Document Server

    AUTHOR|(SzGeCERN)643806; The ATLAS collaboration; De, Kaushik; Klimentov, Alexei; Nilsson, Paul; Oleynik, Danila; Padolski, Siarhei; Panitkin, Sergey; Wenaus, Torre

    2017-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of Petabytes of data, and the rate of data processing already exceeds an Exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this paper we will describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for jo...

  3. Extending ATLAS Computing to Commercial Clouds and Supercomputers

    CERN Document Server

    Nilsson, P; The ATLAS collaboration; Filipcic, A; Klimentov, A; Maeno, T; Oleynik, D; Panitkin, S; Wenaus, T; Wu, W

    2014-01-01

    The Large Hadron Collider will resume data collection in 2015 with substantially increased computing requirements relative to its first 2009-2013 run. A near doubling of the energy and the data rate, high levels of event pile-up, and detector upgrades will mean the number and complexity of events to be analyzed will increase dramatically. A naive extrapolation of the Run 1 experience would suggest that a 5-6 fold increase in computing resources is needed - impossible within the anticipated flat computing budgets in the near future. Consequently ATLAS is engaged in an ambitious program to expand its computing to all available resources, notably including opportunistic use of commercial clouds and supercomputers. Such resources present new challenges in managing heterogeneity, supporting data flows, parallelizing workflows, provisioning software, and other aspects of distributed computing, all while minimizing operational load. We will present the ATLAS experience to date with clouds and supercomputers, and des...

  4. Data Mining Supercomputing with SAS JMP® Genomics

    Directory of Open Access Journals (Sweden)

    Richard S. Segall

    2011-02-01

    Full Text Available JMP® Genomics is statistical discovery software that can uncover meaningful patterns in high-throughput genomics and proteomics data. JMP® Genomics is designed for biologists, biostatisticians, statistical geneticists, and those engaged in analyzing the vast stores of data that are common in genomic research (SAS, 2009). Data mining was performed using JMP® Genomics on two collections of microarray databases available from the National Center for Biotechnology Information (NCBI) for lung cancer and breast cancer. The Gene Expression Omnibus (GEO) of NCBI serves as a public repository for a wide range of high-throughput experimental data, including the two collections of lung cancer and breast cancer data that were used for this research. The results of applying data mining using JMP® Genomics are shown in this paper with numerous screen shots.

  5. Visualizing quantum scattering on the CM-2 supercomputer

    International Nuclear Information System (INIS)

    Richardson, J.L.

    1991-01-01

    We implement parallel algorithms for solving the time-dependent Schroedinger equation on the CM-2 supercomputer. These methods are unconditionally stable as well as unitary at each time step and have the advantage of being spatially local and explicit. We show how to visualize the dynamics of quantum scattering using techniques for visualizing complex wave functions. Several scattering problems are solved to demonstrate the use of these methods. (orig.)
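
    The abstract above emphasizes time-stepping schemes that remain unitary and stable. As a hedged illustration of that requirement (and not the explicit, spatially local algorithm of the paper), the Python sketch below propagates a 1-D wave packet with the Cayley/Crank-Nicolson form, which is unconditionally stable and exactly unitary for a Hermitian Hamiltonian; the grid size, barrier potential and time step are assumed for demonstration.

        import numpy as np
        from scipy.sparse import diags, identity
        from scipy.sparse.linalg import splu

        # 1-D grid with hbar = m = 1; all parameters are illustrative.
        n, dx, dt = 512, 0.1, 0.005
        x = dx * (np.arange(n) - n / 2)
        psi = np.exp(-(x + 10)**2 / 2 + 2j * x)        # Gaussian packet moving right
        psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

        V = np.where(np.abs(x) < 1.0, 2.0, 0.0)         # square barrier (assumed)
        lap = diags([1, -2, 1], [-1, 0, 1], shape=(n, n)) / dx**2
        H = -0.5 * lap + diags(V)

        # Cayley form: (I + i dt/2 H) psi_new = (I - i dt/2 H) psi_old, which is unitary.
        A = (identity(n) + 0.5j * dt * H).tocsc()
        B = (identity(n) - 0.5j * dt * H).tocsc()
        solve = splu(A).solve

        for _ in range(400):
            psi = solve(B @ psi)

        print("norm after propagation:", np.sum(np.abs(psi)**2) * dx)  # stays ~1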

  6. Development of seismic tomography software for hybrid supercomputers

    Science.gov (United States)

    Nikitin, Alexandr; Serdyukov, Alexandr; Duchkov, Anton

    2015-04-01

    Seismic tomography is a technique used for computing velocity model of geologic structure from first arrival travel times of seismic waves. The technique is used in processing of regional and global seismic data, in seismic exploration for prospecting and exploration of mineral and hydrocarbon deposits, and in seismic engineering for monitoring the condition of engineering structures and the surrounding host medium. As a consequence of development of seismic monitoring systems and increasing volume of seismic data, there is a growing need for new, more effective computational algorithms for use in seismic tomography applications with improved performance, accuracy and resolution. To achieve this goal, it is necessary to use modern high performance computing systems, such as supercomputers with hybrid architecture that use not only CPUs, but also accelerators and co-processors for computation. The goal of this research is the development of parallel seismic tomography algorithms and software package for such systems, to be used in processing of large volumes of seismic data (hundreds of gigabytes and more). These algorithms and software package will be optimized for the most common computing devices used in modern hybrid supercomputers, such as Intel Xeon CPUs, NVIDIA Tesla accelerators and Intel Xeon Phi co-processors. In this work, the following general scheme of seismic tomography is utilized. Using the eikonal equation solver, arrival times of seismic waves are computed based on assumed velocity model of geologic structure being analyzed. In order to solve the linearized inverse problem, tomographic matrix is computed that connects model adjustments with travel time residuals, and the resulting system of linear equations is regularized and solved to adjust the model. The effectiveness of parallel implementations of existing algorithms on target architectures is considered. During the first stage of this work, algorithms were developed for execution on
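
    The linearized inversion step described above reduces to solving a regularized linear system relating model adjustments to travel-time residuals. A minimal NumPy sketch of one common choice (Tikhonov-damped least squares via the normal equations) is shown below; the tomographic matrix, residuals and damping factor are synthetic placeholders, not outputs of the software described in this record.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic tomographic matrix G (rays x model cells) and travel-time residuals d.
        n_rays, n_cells = 200, 50
        G = rng.random((n_rays, n_cells))
        true_dm = rng.normal(scale=0.01, size=n_cells)         # "true" slowness perturbation
        d = G @ true_dm + rng.normal(scale=1e-3, size=n_rays)  # residuals with noise

        # Tikhonov (damped) least squares: minimize ||G dm - d||^2 + lam^2 ||dm||^2
        lam = 0.5
        dm = np.linalg.solve(G.T @ G + lam**2 * np.eye(n_cells), G.T @ d)

        print("rms model error:", np.sqrt(np.mean((dm - true_dm)**2)))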

  7. High Performance Networks From Supercomputing to Cloud Computing

    CERN Document Server

    Abts, Dennis

    2011-01-01

    Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network enables distributed applicati

  8. Intelligent Personal Supercomputer for Solving Scientific and Technical Problems

    Directory of Open Access Journals (Sweden)

    Khimich, O.M.

    2016-09-01

    Full Text Available A new domestic intelligent personal supercomputer of hybrid architecture, Inparkom_pg, was developed for the mathematical modeling of processes in the defense industry, engineering, construction, etc. Intelligent software for the automatic investigation and solution of computational mathematics tasks with approximate data of different structures was designed. Applied software to support mathematical modeling problems in construction, welding and filtration processes was implemented.

  9. Cellular-automata supercomputers for fluid-dynamics modeling

    International Nuclear Information System (INIS)

    Margolus, N.; Toffoli, T.; Vichniac, G.

    1986-01-01

    We report recent developments in the modeling of fluid dynamics, and give experimental results (including dynamical exponents) obtained using cellular automata machines. Because of their locality and uniformity, cellular automata lend themselves to an extremely efficient physical realization; with a suitable architecture, an amount of hardware resources comparable to that of a home computer can achieve (in the simulation of cellular automata) the performance of a conventional supercomputer
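
    Lattice-gas cellular automata of the kind referred to above update each site from purely local information, which is what makes dedicated hardware realizations so efficient. The sketch below is a generic HPP-style lattice gas in NumPy (four particle directions, a head-on collision rule, then streaming); it illustrates the technique only and is not the specific rule set used on the cellular automata machines described in the record.

        import numpy as np

        rng = np.random.default_rng(1)
        H, W = 64, 64
        # Boolean occupation numbers for the four lattice directions: E, W, N, S.
        cells = rng.random((4, H, W)) < 0.2
        E, Wd, N, S = 0, 1, 2, 3

        def step(c):
            # Collision: a head-on (E,W) pair with N,S empty becomes an (N,S) pair, and vice versa.
            ew = c[E] & c[Wd] & ~c[N] & ~c[S]
            ns = c[N] & c[S] & ~c[E] & ~c[Wd]
            out = c.copy()
            out[E] = (c[E] & ~ew) | ns
            out[Wd] = (c[Wd] & ~ew) | ns
            out[N] = (c[N] & ~ns) | ew
            out[S] = (c[S] & ~ns) | ew
            # Streaming: every particle moves one site in its own direction (periodic boundaries).
            out[E] = np.roll(out[E], 1, axis=1)
            out[Wd] = np.roll(out[Wd], -1, axis=1)
            out[N] = np.roll(out[N], -1, axis=0)
            out[S] = np.roll(out[S], 1, axis=0)
            return out

        for _ in range(100):
            cells = step(cells)
        print("particle number (conserved):", int(cells.sum()))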

  10. Porting Ordinary Applications to Blue Gene/Q Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Maheshwari, Ketan C.; Wozniak, Justin M.; Armstrong, Timothy; Katz, Daniel S.; Binkowski, T. Andrew; Zhong, Xiaoliang; Heinonen, Olle; Karpeyev, Dmitry; Wilde, Michael

    2015-08-31

    Efficiently porting ordinary applications to Blue Gene/Q supercomputers is a significant challenge. Codes are often originally developed without considering advanced architectures and related tool chains. Science needs frequently lead users to want to run large numbers of relatively small jobs (often called many-task computing, an ensemble, or a workflow), which can conflict with supercomputer configurations. In this paper, we discuss techniques developed to execute ordinary applications over leadership class supercomputers. We use the high-performance Swift parallel scripting framework and build two workflow execution techniques-sub-jobs and main-wrap. The sub-jobs technique, built on top of the IBM Blue Gene/Q resource manager Cobalt's sub-block jobs, lets users submit multiple, independent, repeated smaller jobs within a single larger resource block. The main-wrap technique is a scheme that enables C/C++ programs to be defined as functions that are wrapped by a high-performance Swift wrapper and that are invoked as a Swift script. We discuss the needs, benefits, technicalities, and current limitations of these techniques. We further discuss the real-world science enabled by these techniques and the results obtained.

  11. Proceedings of the first energy research power supercomputer users symposium

    International Nuclear Information System (INIS)

    1991-01-01

    The Energy Research Power Supercomputer Users Symposium was arranged to showcase the richness of science that has been pursued and accomplished in this program through the use of supercomputers and now high performance parallel computers over the last year: this report is the collection of the presentations given at the Symposium. "Power users" were invited by the ER Supercomputer Access Committee to show that the use of these computational tools and the associated data communications network, ESNet, go beyond merely speeding up computations. Today the work often directly contributes to the advancement of the conceptual developments in their fields and the computational and network resources form the very infrastructure of today's science. The Symposium also provided an opportunity, which is rare in this day of network access to computing resources, for the invited users to compare and discuss their techniques and approaches with those used in other ER disciplines. The significance of new parallel architectures was highlighted by the interesting evening talk given by Dr. Stephen Orszag of Princeton University

  12. Extracting the Textual and Temporal Structure of Supercomputing Logs

    Energy Technology Data Exchange (ETDEWEB)

    Jain, S; Singh, I; Chandra, A; Zhang, Z; Bronevetsky, G

    2009-05-26

    Supercomputers are prone to frequent faults that adversely affect their performance, reliability and functionality. System logs collected on these systems are a valuable resource of information about their operational status and health. However, their massive size, complexity, and lack of standard format makes it difficult to automatically extract information that can be used to improve system management. In this work we propose a novel method to succinctly represent the contents of supercomputing logs, by using textual clustering to automatically find the syntactic structures of log messages. This information is used to automatically classify messages into semantic groups via an online clustering algorithm. Further, we describe a methodology for using the temporal proximity between groups of log messages to identify correlated events in the system. We apply our proposed methods to two large, publicly available supercomputing logs and show that our technique features nearly perfect accuracy for online log-classification and extracts meaningful structural and temporal message patterns that can be used to improve the accuracy of other log analysis techniques.
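
    A heavily simplified illustration of the first step described above (finding the syntactic structures of log messages) is shown below: variable tokens such as numbers and hex addresses are masked so that messages collapse onto shared templates, which can then be counted or clustered. This is a generic sketch with made-up log lines, not the clustering algorithm of the paper.

        import re
        from collections import defaultdict

        # Example log lines (invented for illustration).
        logs = [
            "node 1287 link error on port 3",
            "node 4410 link error on port 7",
            "memory fault at 0x7fa3bc21 on node 1287",
            "memory fault at 0x7fa3bd90 on node 0042",
        ]

        def template(line):
            """Mask variable fields so syntactically similar messages share one key."""
            line = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", line)
            line = re.sub(r"\d+", "<NUM>", line)
            return line

        groups = defaultdict(list)
        for line in logs:
            groups[template(line)].append(line)

        for tmpl, members in groups.items():
            print(f"{len(members):3d}  {tmpl}")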

  13. Wavelet transform-vector quantization compression of supercomputer ocean model simulation output

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J N; Brislawn, C M

    1992-11-12

    We describe a new procedure for efficient compression of digital information for storage and transmission purposes. The algorithm involves a discrete wavelet transform subband decomposition of the data set, followed by vector quantization of the wavelet transform coefficients using application-specific vector quantizers. The new vector quantizer design procedure optimizes the assignment of both memory resources and vector dimensions to the transform subbands by minimizing an exponential rate-distortion functional subject to constraints on both overall bit-rate and encoder complexity. The wavelet-vector quantization method, which originates in digital image compression, is applicable to the compression of other multidimensional data sets possessing some degree of smoothness. In this paper we discuss the use of this technique for compressing the output of supercomputer simulations of global climate models. The data presented here comes from Semtner-Chervin global ocean models run at the National Center for Atmospheric Research and at the Los Alamos Advanced Computing Laboratory.
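
    As a hedged, toy-scale illustration of the pipeline sketched above (transform, then vector quantization of the coefficients), the Python code below applies a one-level Haar transform to a 1-D signal and quantizes blocks of detail coefficients with k-means; the block size and codebook size are arbitrary choices, and the actual method uses subband-specific codebooks designed under a rate-distortion criterion.

        import numpy as np
        from scipy.cluster.vq import kmeans2, vq

        rng = np.random.default_rng(0)
        signal = np.cumsum(rng.normal(size=1024))          # smooth-ish synthetic data

        # One-level Haar transform: averages (approximation) and differences (detail).
        even, odd = signal[0::2], signal[1::2]
        approx = (even + odd) / np.sqrt(2)
        detail = (even - odd) / np.sqrt(2)

        # Vector-quantize the detail coefficients in blocks of 4 samples.
        dim, k = 4, 16
        vectors = detail[: len(detail) // dim * dim].reshape(-1, dim)
        codebook, _ = kmeans2(vectors, k)
        indices, _ = vq(vectors, codebook)                 # compressed representation

        recon = codebook[indices].ravel()
        err = np.sqrt(np.mean((vectors.ravel() - recon) ** 2))
        print(f"codebook of {k} vectors, quantization rmse = {err:.3f}")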

  14. Reactive flow simulations in complex geometries with high-performance supercomputing

    International Nuclear Information System (INIS)

    Rehm, W.; Gerndt, M.; Jahn, W.; Vogelsang, R.; Binninger, B.; Herrmann, M.; Olivier, H.; Weber, M.

    2000-01-01

    In this paper, we report on a modern field code cluster consisting of state-of-the-art reactive Navier-Stokes- and reactive Euler solvers that has been developed on vector- and parallel supercomputers at the research center Juelich. This field code cluster is used for hydrogen safety analyses of technical systems, for example, in the field of nuclear reactor safety and conventional hydrogen demonstration plants with fuel cells. Emphasis is put on the assessment of combustion loads, which could result from slow, fast or rapid flames, including transition from deflagration to detonation. As a sample of proof tests, the special tools have been tested for specific tasks, based on the comparison of experimental and numerical results, which are in reasonable agreement. (author)

  15. Supercomputers and the mathematical modeling of high complexity problems

    International Nuclear Information System (INIS)

    Belotserkovskii, Oleg M

    2010-01-01

    This paper is a review of many works carried out by members of our scientific school in past years. The general principles of constructing numerical algorithms for high-performance computers are described. Several techniques are highlighted and these are based on the method of splitting with respect to physical processes and are widely used in computing nonlinear multidimensional processes in fluid dynamics, in studies of turbulence and hydrodynamic instabilities and in medicine and other natural sciences. The advances and developments related to the new generation of high-performance supercomputing in Russia are presented.

  16. Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks

    Science.gov (United States)

    Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias

    2006-01-01

    The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem and interconnect fabric of five leading supercomputers - SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmark (IMB) results to study the performance of 11 MPI communication functions on these systems.
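
    Benchmarks such as IMB measure point-to-point MPI performance with patterns like ping-pong. A minimal, hedged sketch of such a measurement using mpi4py is shown below (a generic illustration, not code from the benchmark suites); the message size is an assumption, and the script would be run under an MPI launcher with two ranks, e.g. mpiexec -n 2 python pingpong.py (file name hypothetical).

        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        nbytes = 1 << 20                       # 1 MiB message (assumed size)
        buf = np.zeros(nbytes, dtype=np.uint8)
        reps = 100

        comm.Barrier()
        t0 = MPI.Wtime()
        for _ in range(reps):
            if rank == 0:
                comm.Send(buf, dest=1, tag=0)
                comm.Recv(buf, source=1, tag=1)
            elif rank == 1:
                comm.Recv(buf, source=0, tag=0)
                comm.Send(buf, dest=0, tag=1)
        t1 = MPI.Wtime()

        if rank == 0:
            rtt = (t1 - t0) / reps
            print(f"round trip {rtt * 1e6:.1f} us, "
                  f"bandwidth {2 * nbytes / rtt / 1e9:.2f} GB/s")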

  17. A fast random number generator for the Intel Paragon supercomputer

    Science.gov (United States)

    Gutbrod, F.

    1995-06-01

    A pseudo-random number generator is presented which makes optimal use of the architecture of the i860-microprocessor and which is expected to have a very long period. It is therefore a good candidate for use on the parallel supercomputer Paragon XP. In the assembler version, it needs 6.4 cycles for a real*4 random number. There is a FORTRAN routine which yields identical numbers up to rare and minor rounding discrepancies, and it needs 28 cycles. The FORTRAN performance on other microprocessors is somewhat better. Arguments for the quality of the generator and some numerical tests are given.
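
    The record does not specify the generator's recurrence, so as a hedged illustration of the general class (a long-period lagged-Fibonacci generator, whose independent terms map well onto vector hardware), here is a small pure-Python sketch; the lags (24, 55) and the subtractive rule are classic textbook choices, not necessarily those used on the Paragon.

        class LaggedFibonacci:
            """Subtractive lagged-Fibonacci generator: x[n] = (x[n-24] - x[n-55]) mod 1."""

            SHORT_LAG, LONG_LAG = 24, 55

            def __init__(self, seed=12345):
                # Fill the initial state with a simple linear congruential generator.
                state, self.buf = seed, []
                for _ in range(self.LONG_LAG):
                    state = (1103515245 * state + 12345) % (1 << 31)
                    self.buf.append(state / float(1 << 31))

            def next(self):
                x = (self.buf[-self.SHORT_LAG] - self.buf[-self.LONG_LAG]) % 1.0
                self.buf.append(x)
                del self.buf[0]
                return x

        rng = LaggedFibonacci()
        print([round(rng.next(), 6) for _ in range(5)])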

  18. Personal Supercomputing for Monte Carlo Simulation Using a GPU

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Jae-Yong; Koo, Yang-Hyun; Lee, Byung-Ho [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2008-05-15

    Since the usability, accessibility, and maintenance of a personal computer (PC) are very good, a PC is a useful computer simulation tool for researchers. It has enough calculation power to simulate a small scale system, given the improved performance of a PC's CPU. However, if a system is large or involves a long time scale, we need a cluster computer or a supercomputer. Recently great changes have occurred in the PC calculation environment. A graphics processing unit (GPU) on a graphics card, formerly used only to calculate display data, has a calculation capability superior to a PC's CPU. This GPU calculation performance matches that of a supercomputer from 2000. Although it has such great calculation potential, it is not easy to program a simulation code for a GPU due to the difficult programming techniques required to convert a calculation matrix to a 3D rendering image using graphics APIs. In 2006, NVIDIA provided a software development kit (SDK) and programming environment for its graphics cards, called the Compute Unified Device Architecture (CUDA). It makes programming on the GPU easy without knowledge of the graphics APIs. This paper describes the basic architectures of NVIDIA's GPU and CUDA, and carries out a performance benchmark for the Monte Carlo simulation.

  19. Personal Supercomputing for Monte Carlo Simulation Using a GPU

    International Nuclear Information System (INIS)

    Oh, Jae-Yong; Koo, Yang-Hyun; Lee, Byung-Ho

    2008-01-01

    Since the usability, accessibility, and maintenance of a personal computer (PC) are very good, a PC is a useful computer simulation tool for researchers. It has enough calculation power to simulate a small scale system, given the improved performance of a PC's CPU. However, if a system is large or involves a long time scale, we need a cluster computer or a supercomputer. Recently great changes have occurred in the PC calculation environment. A graphics processing unit (GPU) on a graphics card, formerly used only to calculate display data, has a calculation capability superior to a PC's CPU. This GPU calculation performance matches that of a supercomputer from 2000. Although it has such great calculation potential, it is not easy to program a simulation code for a GPU due to the difficult programming techniques required to convert a calculation matrix to a 3D rendering image using graphics APIs. In 2006, NVIDIA provided a software development kit (SDK) and programming environment for its graphics cards, called the Compute Unified Device Architecture (CUDA). It makes programming on the GPU easy without knowledge of the graphics APIs. This paper describes the basic architectures of NVIDIA's GPU and CUDA, and carries out a performance benchmark for the Monte Carlo simulation.
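
    The two records above benchmark a Monte Carlo simulation on a GPU via CUDA. As a deliberately simple, hedged stand-in for the data-parallel style involved (and not the authors' CUDA kernel), the NumPy sketch below estimates pi by throwing random points; on a GPU the same batched, element-wise pattern would be expressed as a kernel executed over many threads.

        import numpy as np

        def estimate_pi(n_samples, batch=1_000_000, seed=0):
            """Monte Carlo estimate of pi using batched, element-wise (data-parallel) ops."""
            rng = np.random.default_rng(seed)
            inside, remaining = 0, n_samples
            while remaining > 0:
                m = min(batch, remaining)
                x = rng.random(m)
                y = rng.random(m)
                inside += int(np.count_nonzero(x * x + y * y <= 1.0))
                remaining -= m
            return 4.0 * inside / n_samples

        print(estimate_pi(10_000_000))   # ~3.1416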

  20. Plane-wave electronic structure calculations on a parallel supercomputer

    International Nuclear Information System (INIS)

    Nelson, J.S.; Plimpton, S.J.; Sears, M.P.

    1993-01-01

    The development of iterative solutions of Schrodinger's equation in a plane-wave (pw) basis over the last several years has coincided with great advances in the computational power available for performing the calculations. These dual developments have enabled many new and interesting condensed matter phenomena to be studied from a first-principles approach. The authors present a detailed description of the implementation on a parallel supercomputer (hypercube) of the first-order equation-of-motion solution to Schrodinger's equation, using plane-wave basis functions and ab initio separable pseudopotentials. By distributing the plane-waves across the processors of the hypercube many of the computations can be performed in parallel, resulting in decreases in the overall computation time relative to conventional vector supercomputers. This partitioning also provides ample memory for large Fast Fourier Transform (FFT) meshes and the storage of plane-wave coefficients for many hundreds of energy bands. The usefulness of the parallel techniques is demonstrated by benchmark timings for both the FFT's and iterations of the self-consistent solution of Schrodinger's equation for different sized Si unit cells of up to 512 atoms
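
    A core kernel in plane-wave codes is applying the Hamiltonian to a wavefunction: the kinetic term is diagonal in reciprocal space, while the local potential is applied on the FFT mesh in real space. The NumPy sketch below illustrates that pattern for a single band in 1-D; the mesh size, potential and units are assumed, and the codes described above distribute the plane-wave coefficients and FFTs across processors rather than working serially like this toy.

        import numpy as np

        n, L = 64, 10.0                               # FFT mesh points and cell length (assumed)
        g = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)  # reciprocal-lattice vectors
        v_real = 0.5 * np.cos(2.0 * np.pi * np.arange(n) / n)  # toy local potential

        rng = np.random.default_rng(0)
        c_g = rng.normal(size=n) + 1j * rng.normal(size=n)     # plane-wave coefficients
        c_g /= np.linalg.norm(c_g)

        def apply_h(c):
            """H|psi>: kinetic term in G-space, local potential applied on the real-space mesh."""
            kinetic = 0.5 * g**2 * c
            psi_r = np.fft.ifft(c)                    # to the real-space FFT mesh
            v_psi = np.fft.fft(v_real * psi_r)        # back to plane-wave coefficients
            return kinetic + v_psi

        h_c = apply_h(c_g)
        energy = np.vdot(c_g, h_c).real               # Rayleigh quotient for this band
        print("band energy estimate:", energy)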

  1. Supercomputer algorithms for reactivity, dynamics and kinetics of small molecules

    International Nuclear Information System (INIS)

    Lagana, A.

    1989-01-01

    Even for small systems, the accurate characterization of reactive processes is so demanding of computer resources as to suggest the use of supercomputers having vector and parallel facilities. The full advantages of vector and parallel architectures can sometimes be obtained by simply modifying existing programs, vectorizing the manipulation of vectors and matrices, and requiring the parallel execution of independent tasks. More often, however, a significant time saving can be obtained only when the computer code undergoes a deeper restructuring, requiring a change in the computational strategy or, more radically, the adoption of a different theoretical treatment. This book discusses supercomputer strategies based upon exact and approximate methods aimed at calculating the electronic structure and the reactive properties of small systems. The book shows how, in recent years, intense design activity has led to the ability to calculate accurate electronic structures for reactive systems, exact and high-level approximations to three-dimensional reactive dynamics, and to efficient directive and declaratory software for the modelling of complex systems
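
    The simplest of the restructurings mentioned above, vectorizing the manipulation of vectors and matrices, can be illustrated in a few lines: an explicit element-by-element loop is replaced by a single whole-array operation that a vector unit (or an optimized library) can execute far more efficiently. The example below is a generic NumPy illustration, not code from the book.

        import numpy as np
        import time

        n = 2_000_000
        a = np.random.random(n)
        b = np.random.random(n)

        # Scalar, loop-oriented formulation of a dot product.
        t0 = time.perf_counter()
        dot_loop = 0.0
        for i in range(n):
            dot_loop += a[i] * b[i]
        t1 = time.perf_counter()

        # Vectorized formulation: one call over whole arrays.
        dot_vec = float(a @ b)
        t2 = time.perf_counter()

        print(f"loop: {t1 - t0:.3f} s, vectorized: {t2 - t1:.3f} s, "
              f"results agree: {np.isclose(dot_loop, dot_vec)}")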

  2. The TeraGyroid Experiment – Supercomputing 2003

    Directory of Open Access Journals (Sweden)

    R.J. Blake

    2005-01-01

    Full Text Available Amphiphiles are molecules with hydrophobic tails and hydrophilic heads. When dispersed in solvents, they self-assemble into complex mesophases including the beautiful cubic gyroid phase. The goal of the TeraGyroid experiment was to study defect pathways and dynamics in these gyroids. The UK's supercomputing and USA's TeraGrid facilities were coupled together, through a dedicated high-speed network, into a single computational Grid for research work that peaked around the Supercomputing 2003 conference. The gyroids were modeled using lattice Boltzmann methods, with parameter spaces explored using many 128³-grid-point simulations, and this data was used to inform the world's largest three-dimensional time-dependent simulation, with 1024³ grid points. The experiment generated some 2 TBytes of useful data. In terms of Grid technology the project demonstrated the migration of simulations (using Globus middleware) to and fro across the Atlantic, exploiting the availability of resources. Integration of the systems accelerated the time to insight. Distributed visualisation of the output datasets enabled the parameter space of the interactions within the complex fluid to be explored from a number of sites, informed by discourse over the Access Grid. The project was sponsored by EPSRC (UK) and NSF (USA), with trans-Atlantic optical bandwidth provided by British Telecommunications.

  3. KfK-seminar series on supercomputing and visualization from May till September 1992

    International Nuclear Information System (INIS)

    Hohenhinnebusch, W.

    1993-05-01

    During the period of May 1992 to September 1992, a series of seminars was held at KfK on several topics of supercomputing in different fields of application. The aim was to demonstrate the importance of supercomputing and visualization in numerical simulations of complex physical and technical phenomena. This report contains the collection of all submitted seminar papers. (orig./HP)

  4. SUPERCOMPUTER SIMULATION OF CRITICAL PHENOMENA IN COMPLEX SOCIAL SYSTEMS

    Directory of Open Access Journals (Sweden)

    Petrus M.A. Sloot

    2014-09-01

    Full Text Available The paper describes the problem of computer simulation of critical phenomena in complex social systems on petascale computing systems within a complex networks approach. A three-layer system of nested models of complex networks is proposed, including an aggregated analytical model to identify critical phenomena, a detailed model of individualized network dynamics, and a model to adjust the topological structure of a complex network. A scalable parallel algorithm covering all layers of complex networks simulation is proposed. Performance of the algorithm is studied on different supercomputing systems. The issues of the software and information infrastructure of complex networks simulation are discussed, including organization of distributed calculations, crawling the data in social networks, and results visualization. The applications of the developed methods and technologies are considered, including simulation of criminal network disruption, fast rumor spreading in social networks, the evolution of financial networks, and epidemic spreading.
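
    One of the applications listed above, fast rumor spreading in social networks, can be prototyped at small scale with an SIR-style process on a random graph before being scaled up on a supercomputer. The sketch below uses networkx and is a generic illustration of such individualized network dynamics, not the three-layer model of the paper; the graph size and spreading probabilities are assumed.

        import random
        import networkx as nx

        random.seed(0)
        G = nx.erdos_renyi_graph(n=2000, p=0.005, seed=0)  # placeholder social network

        p_spread, p_forget = 0.05, 0.02
        spreaders = {0}                     # node 0 starts the rumor
        informed = set(spreaders)
        stiflers = set()

        steps = 0
        while spreaders:
            new_spreaders, new_stiflers = set(), set()
            for u in spreaders:
                for v in G.neighbors(u):
                    if v not in informed and random.random() < p_spread:
                        informed.add(v)
                        new_spreaders.add(v)
                if random.random() < p_forget:   # spreader loses interest, becomes a stifler
                    new_stiflers.add(u)
            spreaders = (spreaders - new_stiflers) | new_spreaders
            stiflers |= new_stiflers
            steps += 1

        print(f"rumor reached {len(informed)} of {G.number_of_nodes()} nodes in {steps} steps")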

  5. Lectures in Supercomputational Neurosciences Dynamics in Complex Brain Networks

    CERN Document Server

    Graben, Peter beim; Thiel, Marco; Kurths, Jürgen

    2008-01-01

    Computational Neuroscience is a burgeoning field of research where only the combined effort of neuroscientists, biologists, psychologists, physicists, mathematicians, computer scientists, engineers and other specialists, e.g. from linguistics and medicine, seem to be able to expand the limits of our knowledge. The present volume is an introduction, largely from the physicists' perspective, to the subject matter with in-depth contributions by system neuroscientists. A conceptual model for complex networks of neurons is introduced that incorporates many important features of the real brain, such as various types of neurons, various brain areas, inhibitory and excitatory coupling and the plasticity of the network. The computational implementation on supercomputers, which is introduced and discussed in detail in this book, will enable the readers to modify and adapt the algortihm for their own research. Worked-out examples of applications are presented for networks of Morris-Lecar neurons to model the cortical co...
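
    For readers who want a concrete starting point, a single Morris-Lecar neuron can be integrated with a simple Euler scheme as below; the parameter values are illustrative textbook-style choices rather than those of the book, and real network studies couple many such units and run them on supercomputers as described above.

        import numpy as np

        # Illustrative Morris-Lecar parameters (assumed, textbook-style values).
        C, g_L, g_Ca, g_K = 20.0, 2.0, 4.4, 8.0
        V_L, V_Ca, V_K = -60.0, 120.0, -84.0
        V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04
        I_ext = 90.0                                  # constant input current

        def m_inf(V): return 0.5 * (1.0 + np.tanh((V - V1) / V2))
        def w_inf(V): return 0.5 * (1.0 + np.tanh((V - V3) / V4))
        def tau_w(V): return 1.0 / np.cosh((V - V3) / (2.0 * V4))

        dt, steps = 0.05, 20000
        V, w = -60.0, 0.0
        trace = np.empty(steps)
        for i in range(steps):
            dV = (I_ext - g_L * (V - V_L) - g_Ca * m_inf(V) * (V - V_Ca)
                  - g_K * w * (V - V_K)) / C
            dw = phi * (w_inf(V) - w) / tau_w(V)
            V += dt * dV
            w += dt * dw
            trace[i] = V

        print("membrane potential range (mV):", trace.min(), trace.max())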

  6. Development of a Cloud Resolving Model for Heterogeneous Supercomputers

    Science.gov (United States)

    Sreepathi, S.; Norman, M. R.; Pal, A.; Hannah, W.; Ponder, C.

    2017-12-01

    A cloud resolving climate model is needed to reduce major systematic errors in climate simulations due to structural uncertainty in numerical treatments of convection, such as convective storm systems. This research describes the porting effort to enable the SAM (System for Atmosphere Modeling) cloud resolving model on heterogeneous supercomputers using GPUs (Graphical Processing Units). We have isolated a standalone configuration of SAM that is targeted to be integrated into the DOE ACME (Accelerated Climate Modeling for Energy) Earth System model. We have identified key computational kernels from the model and offloaded them to a GPU using the OpenACC programming model. Furthermore, we are investigating various optimization strategies intended to enhance GPU utilization, including loop fusion/fission, coalesced data access and loop refactoring to a higher abstraction level. We will present early performance results, lessons learned as well as optimization strategies. The computational platform used in this study is the Summitdev system, an early testbed that is one generation removed from Summit, the next leadership class supercomputer at Oak Ridge National Laboratory. The system contains 54 nodes wherein each node has 2 IBM POWER8 CPUs and 4 NVIDIA Tesla P100 GPUs. This work is part of a larger project, the ACME-MMF component of the U.S. Department of Energy (DOE) Exascale Computing Project. The ACME-MMF approach addresses structural uncertainty in cloud processes by replacing traditional parameterizations with cloud resolving "superparameterization" within each grid cell of the global climate model. Super-parameterization dramatically increases arithmetic intensity, making the MMF approach an ideal strategy to achieve good performance on emerging exascale computing architectures. The goal of the project is to integrate superparameterization into ACME, and explore its full potential to scientifically and computationally advance climate simulation and prediction.

  7. A supercomputing application for reactors core design and optimization

    International Nuclear Information System (INIS)

    Hourcade, Edouard; Gaudier, Fabrice; Arnaud, Gilles; Funtowiez, David; Ammar, Karim

    2010-01-01

    Advanced nuclear reactor designs are often intuition-driven processes where designers first develop or use simplified simulation tools for each physical phenomenon involved. Through the project development, complexity in each discipline increases, and implementation of chaining/coupling capabilities adapted to a supercomputing optimization process is often postponed to a later step, so that the task gets increasingly challenging. In the context of renewed reactor design activity, first-realization projects are often run in parallel with advanced design although they are very dependent on final options. As a consequence, tools to globally assess and optimize reactor core features, with the accuracy of the on-going design methods, are needed. This should be possible within reasonable simulation time and without advanced computer skills at the project management scale. Also, these tools should be ready to easily cope with modeling progress in each discipline through the project lifetime. An early stage development of a multi-physics package adapted to supercomputing is presented. The URANIE platform, developed at CEA and based on the data analysis framework ROOT, is very well adapted to this approach. It allows diversified sampling techniques (SRS, LHS, qMC), fitting tools (neural networks...) and optimization techniques (genetic algorithms). Also, database management and visualization are made very easy. In this paper, we present the various implementation steps of this core physics tool, where neutronics, thermo-hydraulics, and fuel mechanics codes are run simultaneously. A relevant example of optimization of nuclear reactor safety characteristics will be presented. Also, the flexibility of the URANIE tool will be illustrated with the presentation of several approaches to improve Pareto front quality. (author)
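
    URANIE itself is a C++/ROOT-based platform, so as a language-neutral illustration of one of the sampling techniques it offers (Latin Hypercube Sampling over a small design space), here is a short SciPy sketch; the design-variable names and bounds are invented placeholders, not quantities from the paper, and a recent SciPy (with scipy.stats.qmc) is assumed.

        from scipy.stats import qmc

        # Invented design variables: [fuel enrichment (%), coolant flow (kg/s), pin pitch (cm)]
        lower = [3.0, 300.0, 1.10]
        upper = [5.0, 500.0, 1.40]

        sampler = qmc.LatinHypercube(d=3, seed=42)
        unit_sample = sampler.random(n=8)              # 8 design points in [0, 1)^3
        design = qmc.scale(unit_sample, lower, upper)  # map to physical bounds

        for enrichment, flow, pitch in design:
            print(f"enrichment={enrichment:.2f}%  flow={flow:.0f} kg/s  pitch={pitch:.3f} cm")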

  8. Center for Adaptive Optics | Center

    Science.gov (United States)

    [Web-page navigation residue; the recoverable content indicates that the Center for Adaptive Optics, together with UCSC's CfAO and ISEE and Maui Community College, runs education and internship programs, and lists affiliated institutions including the Jacobs Retina Center, the University of California, San Francisco, a university school of optometry, and the Maui Community College Space Grant Program.]

  9. Integration of Titan supercomputer at OLCF with ATLAS Production System

    Science.gov (United States)

    Barreiro Megino, F.; De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Padolski, S.; Panitkin, S.; Wells, J.; Wenaus, T.; ATLAS Collaboration

    2017-10-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of Petabytes of data, and the rate of data processing already exceeds an Exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this paper we will describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for job submission to Titan’s batch queues and local data management, with lightweight MPI wrappers to run single node workloads in parallel on Titan’s multi-core worker nodes. It provides for running standard ATLAS production jobs on unused resources (backfill) on Titan. The system has already allowed ATLAS to collect millions of core-hours per month on Titan and execute hundreds of thousands of jobs, while simultaneously improving Titan’s utilization efficiency. We will discuss the details of the implementation, current experience with running the system, as well as future plans aimed at improvements in scalability and efficiency. Notice: This manuscript has been authored by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to

  10. Novel Supercomputing Approaches for High Performance Linear Algebra Using FPGAs, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Supercomputing plays a major role in many areas of science and engineering, and it has had tremendous impact for decades in areas such as aerospace, defense, energy,...

  11. De Novo Ultrascale Atomistic Simulations On High-End Parallel Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Nakano, A; Kalia, R K; Nomura, K; Sharma, A; Vashishta, P; Shimojo, F; van Duin, A; Goddard, III, W A; Biswas, R; Srivastava, D; Yang, L H

    2006-09-04

    /MD simulation on a Grid consisting of 6 supercomputer centers in the US and Japan (a total of 150 thousand processor-hours), in which the number of processors changes dynamically on demand and resources are allocated and migrated dynamically in response to faults. Furthermore, performance portability has been demonstrated on a wide range of platforms such as BlueGene/L, Altix 3000, and AMD Opteron-based Linux clusters.

  12. SUPERCOMPUTERS FOR AIDING ECONOMIC PROCESSES WITH REFERENCE TO THE FINANCIAL SECTOR

    Directory of Open Access Journals (Sweden)

    Jerzy Balicki

    2014-12-01

    Full Text Available The article discusses the use of supercomputers to support business processes with particular emphasis on the financial sector. A reference was made to selected projects that support economic development. In particular, we propose the use of supercomputers to perform artificial intelligence methods in banking. The proposed methods combined with modern technology enable a significant increase in the competitiveness of enterprises and banks by adding new functionality.

  13. Symbolic simulation of engineering systems on a supercomputer

    International Nuclear Information System (INIS)

    Ragheb, M.; Gvillo, D.; Makowitz, H.

    1986-01-01

    Model-Based Production-Rule systems for analysis are developed for the symbolic simulation of complex engineering systems on a CRAY X-MP Supercomputer. The Fault-Tree and Event-Tree Analysis methodologies from Systems Analysis are used for problem representation and are coupled to the Rule-Based System Paradigm from Knowledge Engineering to provide modelling of engineering devices. Modelling is based on knowledge of the structure and function of the device rather than on human expertise alone. To implement the methodology, we developed a Production-Rule Analysis System that uses both backward-chaining and forward-chaining: HAL-1986. The inference engine uses an Induction-Deduction-Oriented antecedent-consequent logic and is programmed in Portable Standard Lisp (PSL). The inference engine is general and can accommodate general modifications and additions to the knowledge base. The methodologies used will be demonstrated using a model for the identification of faults, and subsequent recovery from abnormal situations in Nuclear Reactor Safety Analysis. The use of the exposed methodologies for the prognostication of future device responses under operational and accident conditions using coupled symbolic and procedural programming is discussed
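
    A minimal, self-contained sketch of the forward-chaining half of such a production-rule system is shown below; the fault facts and rules are invented for illustration (HAL-1986 itself was written in Portable Standard Lisp and also supports backward chaining).

        # Each rule: (set of antecedent facts) -> consequent fact.  Facts and rules are invented.
        rules = [
            ({"low_coolant_flow", "high_core_temp"}, "possible_loss_of_flow"),
            ({"possible_loss_of_flow", "pump_trip_signal"}, "loss_of_flow_accident"),
            ({"loss_of_flow_accident"}, "recommend_reactor_scram"),
        ]

        def forward_chain(facts, rules):
            """Apply rules repeatedly until no new facts can be derived."""
            facts = set(facts)
            changed = True
            while changed:
                changed = False
                for antecedents, consequent in rules:
                    if antecedents <= facts and consequent not in facts:
                        facts.add(consequent)
                        changed = True
            return facts

        initial = {"low_coolant_flow", "high_core_temp", "pump_trip_signal"}
        print(sorted(forward_chain(initial, rules)))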

  14. Micro-mechanical Simulations of Soils using Massively Parallel Supercomputers

    Directory of Open Access Journals (Sweden)

    David W. Washington

    2004-06-01

    Full Text Available In this research a computer program, Trubal version 1.51, based on the Discrete Element Method was converted to run on a Connection Machine (CM-5), a massively parallel supercomputer with 512 nodes, to expedite the computational times of simulating Geotechnical boundary value problems. The dynamic memory algorithm in the Trubal program did not perform efficiently on the CM-2 machine with the Single Instruction Multiple Data (SIMD) architecture. This was due to the communication overhead involving global array reductions, global array broadcast and random data movement. Therefore, the dynamic memory algorithm in the Trubal program was converted to a static memory arrangement and the Trubal program was successfully converted to run on CM-5 machines. The converted program was called "TRUBAL for Parallel Machines (TPM)." Simulating two physical triaxial experiments and comparing simulation results with Trubal simulations validated the TPM program. With a 512-node CM-5 machine, TPM produced a nine-fold speedup, demonstrating the inherent parallelism within algorithms based on the Discrete Element Method.

  15. HEP Computing Tools, Grid and Supercomputers for Genome Sequencing Studies

    Science.gov (United States)

    De, K.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Novikov, A.; Poyda, A.; Tertychnyy, I.; Wenaus, T.

    2017-10-01

    PanDA, the Production and Distributed Analysis Workload Management System, was developed to address the data processing and analysis challenges of the ATLAS experiment at the LHC. Recently PanDA has been extended to run HEP scientific applications on Leadership Class Facilities and supercomputers. The success of the projects using PanDA beyond HEP and the Grid has drawn attention from other compute-intensive sciences such as bioinformatics. Recent advances in Next Generation Genome Sequencing (NGS) technology have led to increasing streams of sequencing data that need to be processed, analysed and made available for bioinformaticians worldwide. Analysis of genome sequencing data using the popular software pipeline PALEOMIX can take a month even when running on a powerful computing resource. In this paper we describe the adaptation of the PALEOMIX pipeline to run on a distributed computing environment powered by PanDA. To run the pipeline we split the input files into chunks, which are processed separately on different nodes as separate PALEOMIX inputs, and finally merge the output files; this is very similar to the way ATLAS processes and simulates data. We dramatically decreased the total walltime thanks to automated job (re)submission and brokering within PanDA. Using software tools initially developed for HEP and the Grid can reduce the payload execution time for mammoth DNA samples from weeks to days.
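
    The split-run-merge pattern described above can be sketched generically. The chunk size, file names and the trivial merge below are hypothetical placeholders; a real PALEOMIX/PanDA deployment involves proper job brokering and BAM-aware merging.

```python
# Illustrative scatter-gather sketch for a chunked sequencing pipeline
# (hypothetical file layout; not the actual PALEOMIX/PanDA integration).
from pathlib import Path

def _write_chunk(path, n, lines):
    out = Path(f"{path}.chunk{n:04d}")
    out.write_text("".join(lines))
    return out

def split_fastq(path, reads_per_chunk=1_000_000):
    """Split a FASTQ file into chunks of N reads (4 lines per read)."""
    chunk_paths, buf, n = [], [], 0
    with open(path) as fh:
        for i, line in enumerate(fh):
            buf.append(line)
            if (i + 1) % (4 * reads_per_chunk) == 0:
                chunk_paths.append(_write_chunk(path, n, buf))
                buf, n = [], n + 1
    if buf:
        chunk_paths.append(_write_chunk(path, n, buf))
    return chunk_paths  # each chunk becomes one independent grid/PanDA-style job

def merge_outputs(chunk_outputs, merged="merged.out"):
    """Naive concatenation merge; merging aligned BAM files would use samtools."""
    with open(merged, "w") as out:
        for p in chunk_outputs:
            out.write(Path(p).read_text())
    return merged
```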

  16. A visual analytics system for optimizing the performance of large-scale networks in supercomputing systems

    Directory of Open Access Journals (Sweden)

    Takanori Fujiwara

    2018-03-01

    Full Text Available The overall efficiency of an extreme-scale supercomputer largely relies on the performance of its network interconnects. Several state-of-the-art supercomputers use networks based on the increasingly popular Dragonfly topology. It is crucial to study the behavior and performance of different parallel applications running on Dragonfly networks in order to make optimal system configurations and design choices, such as job scheduling and routing strategies. However, in order to study this temporal network behavior, we would need a tool to analyze and correlate numerous sets of multivariate time-series data collected from the Dragonfly's multi-level hierarchies. This paper presents such a tool, a visual analytics system, that uses data collected from the Dragonfly network to investigate the temporal behavior and optimize the communication performance of a supercomputer. We coupled interactive visualization with time-series analysis methods to help reveal hidden patterns in the network behavior with respect to different parallel applications and system configurations. Our system also provides multiple coordinated views for connecting behaviors observed at different levels of the network hierarchies, which effectively helps visual analysis tasks. We demonstrate the effectiveness of the system with a set of case studies. Our system and findings can help improve not only the communication performance of supercomputing applications, but also the network performance of next-generation supercomputers. Keywords: Supercomputing, Parallel communication network, Dragonfly networks, Time-series data, Performance analysis, Visual analytics

  17. Specialty education in periodontics in Japan and the United States: comparison of programs at Nippon Dental University Hospital and the University of Texas Health Science Center at San Antonio.

    Science.gov (United States)

    Osawa, Ginko; Nakaya, Hiroshi; Mealey, Brian L; Kalkwarf, Kenneth; Cochran, David L

    2014-03-01

    Japan has institutions that train qualified postdoctoral students in the field of periodontics; however, Japan does not have comprehensive advanced periodontal programs and national standards for these specialty programs. To help Japanese programs move toward global standards in this area, this study was designed to describe overall differences in periodontics specialty education in Japan and the United States and to compare periodontics faculty members and residents' characteristics and attitudes in two specific programs, one in each country. Periodontal faculty members and residents at Nippon Dental University (NDU) and the University of Texas Health Science Center at San Antonio (UTHSCSA) Dental School participated in the survey study: four faculty members and nine residents at NDU; seven faculty members and thirteen residents at UTHSCSA. Demographic data were collected as well as respondents' attitudes toward and assessment of their programs. The results showed many differences in curriculum structure and clinical performance. In contrast to the UTHSCSA respondents, for example, the residents and faculty members at NDU reported that they did not have enough subject matter and time to learn clinical science. Although the residents at NDU reported seeing more total patients in one month than those at UTHSCSA, they were taught fewer varieties of periodontal treatments. To provide high-quality and consistent education for periodontal residents, Japan needs to establish a set of standards that will have positive consequences for those in Japan who need periodontal treatment.

  18. Post-9/11 cancer incidence in World Trade Center-exposed New York City firefighters as compared to a pooled cohort of firefighters from San Francisco, Chicago and Philadelphia (9/11/2001-2009)

    Science.gov (United States)

    Moir, William; Zeig-Owens, Rachel; Daniels, Robert D; Hall, Charles B; Webber, Mayris P; Jaber, Nadia; Yiin, James H; Schwartz, Theresa; Liu, Xiaoxue; Vossbrinck, Madeline; Kelly, Kerry; Prezant, David J

    2016-01-01

    Background We previously reported a modest excess of cancer cases in World Trade Center (WTC) exposed firefighters as compared with the general population. This study aimed to separate the potential carcinogenic effects of firefighting and WTC-exposure by using a cohort of non-WTC-exposed firefighters as the referent group. Methods Relative rates (RRs) for all cancers combined and individual cancer subtypes from 9/11/2001-12/31/2009 were modelled using Poisson regression comparing 11,457 WTC-exposed firefighters to 8,220 non-WTC-exposed firefighters from San Francisco, Chicago, and Philadelphia. Results Compared with non-WTC-exposed firefighters, there was no difference in the RR of all cancers combined for WTC-exposed firefighters (RR=0.96, 95% CI: 0.83–1.12). Thyroid cancer was significantly elevated (RR=3.82, 95% CI: 1.07–20.81) over the entire study; this was attenuated (RR=3.43, 95% CI: 0.94–18.94) and non-significant in a secondary analysis controlling for possible surveillance bias. Prostate cancer was elevated during the latter half (1/1/2005-12/31/2009; RR=1.38, 95% CI: 1.01–1.88). Conclusions Further follow-up is needed with this referent population to assess the relationship between WTC-exposure and cancers with longer latency periods. PMID:27582474

  19. Using the LANSCE irradiation facility to predict the number of fatal soft errors in one of the world's fastest supercomputers

    International Nuclear Information System (INIS)

    Michalak, S.E.; Harris, K.W.; Hengartner, N.W.; Takala, B.E.; Wender, S.A.

    2005-01-01

    Los Alamos National Laboratory (LANL) is home to the Los Alamos Neutron Science Center (LANSCE). LANSCE is a unique facility because its neutron spectrum closely mimics the neutron spectrum at terrestrial and aircraft altitudes, but is many times more intense. Thus, LANSCE provides an ideal setting for accelerated testing of semiconductor and other devices that are susceptible to cosmic-ray-induced neutrons. Many industrial companies use LANSCE to estimate device susceptibility to cosmic-ray-induced neutrons, and it has also been used to test parts from one of LANL's supercomputers, the ASC (Advanced Simulation and Computing Program) Q. This paper discusses our use of the LANSCE facility to study components in Q, including a comparison with failure data from Q.

  20. Evaluating the networking characteristics of the Cray XC-40 Intel Knights Landing-based Cori supercomputer at NERSC

    Energy Technology Data Exchange (ETDEWEB)

    Doerfler, Douglas [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Austin, Brian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Cook, Brandon [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Deslippe, Jack [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Kandalla, Krishna [Cray Inc, Bloomington, MN (United States); Mendygral, Peter [Cray Inc, Bloomington, MN (United States)

    2017-09-12

    There are many potential issues associated with deploying the Intel Xeon Phi™ (code named Knights Landing [KNL]) manycore processor in a large-scale supercomputer. One in particular is the ability to fully utilize the high-speed communications network, given that the serial performance of a Xeon Phi™ core is a fraction of that of a Xeon® core. In this paper, we take a look at the trade-offs associated with allocating enough cores to fully utilize the Aries high-speed network versus cores dedicated to computation, e.g., the trade-off between MPI and OpenMP. In addition, we evaluate new features of Cray MPI in support of KNL, such as internode optimizations. We also evaluate one-sided programming models such as Unified Parallel C. We quantify the impact of the above trade-offs and features using a suite of National Energy Research Scientific Computing Center applications.

  1. Fast and Accurate Simulation of the Cray XMT Multithreaded Supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Villa, Oreste; Tumeo, Antonino; Secchi, Simone; Manzano Franco, Joseph B.

    2012-12-31

    Irregular applications, such as data mining and analysis or graph-based computations, show unpredictable memory/network access patterns and control structures. Highly multithreaded architectures with large processor counts, like the Cray MTA-1, MTA-2 and XMT, appear to address their requirements better than commodity clusters. However, the research on highly multithreaded systems is currently limited by the lack of adequate architectural simulation infrastructures due to issues such as size of the machines, memory footprint, simulation speed, accuracy and customization. At the same time, Shared-memory MultiProcessors (SMPs) with multi-core processors have become an attractive platform to simulate large scale machines. In this paper, we introduce a cycle-level simulator of the highly multithreaded Cray XMT supercomputer. The simulator runs unmodified XMT applications. We discuss how we tackled the challenges posed by its development, detailing the techniques introduced to make the simulation as fast as possible while maintaining a high accuracy. By mapping XMT processors (ThreadStorm with 128 hardware threads) to host computing cores, the simulation speed remains constant as the number of simulated processors increases, up to the number of available host cores. The simulator supports zero-overhead switching among different accuracy levels at run-time and includes a network model that takes into account contention. On a modern 48-core SMP host, our infrastructure simulates a large set of irregular applications 500 to 2000 times slower than real time when compared to a 128-processor XMT, while remaining within 10% of accuracy. Emulation is only from 25 to 200 times slower than real time.

  2. Holistic Approach to Data Center Energy Efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Steven W [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-18

    This presentation discusses NREL's Energy Systems Integration Facility and NREL's holistic design approach to sustainable data centers that led to the world's most energy-efficient data center. It describes Peregrine, a warm-water liquid-cooled supercomputer, waste heat reuse in the data center, the demonstrated PUE and ERE, and lessons learned during four years of operation.
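
    For context, the two efficiency metrics mentioned are conventionally defined as follows (standard industry definitions, not figures taken from this presentation); an ERE below the PUE indicates that waste heat is being productively reused:

```latex
\mathrm{PUE} = \frac{E_{\text{total facility}}}{E_{\text{IT equipment}}},
\qquad
\mathrm{ERE} = \frac{E_{\text{total facility}} - E_{\text{reused}}}{E_{\text{IT equipment}}}
```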

  3. 75 FR 42014 - Proposed Amendment of Class E Airspace; San Clemente, CA

    Science.gov (United States)

    2010-07-20

    ...: Eldon Taylor, Federal Aviation Administration, Operations Support Group, Western Service Center, 1601... an extension to a Class D surface area, at San Clemente Island NALF (Fredrick Sherman Field), San... Clemente Island NALF (Fredrick Sherman Field), CA (Lat. 33[deg]01'22'' N., long. 118[deg]35'19'' W.) San...

  4. Survival and natality rate observations of California sea lions at San Miguel Island, California conducted by Alaska Fisheries Science Center, National Marine Mammal Laboratory from 1987-09-20 to 2014-09-25 (NCEI Accession 0145167)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The dataset contains initial capture and marking data for California sea lion (Zalophus californianus) pups at San Miguel Island, California and subsequent...

  5. Argonne Leadership Computing Facility 2011 annual report : Shaping future supercomputing.

    Energy Technology Data Exchange (ETDEWEB)

    Papka, M.; Messina, P.; Coffey, R.; Drugan, C. (LCF)

    2012-08-16

    The ALCF's Early Science Program aims to prepare key applications for the architecture and scale of Mira and to solidify libraries and infrastructure that will pave the way for other future production applications. Two billion core-hours have been allocated to 16 Early Science projects on Mira. The projects, in addition to promising delivery of exciting new science, are all based on state-of-the-art, petascale, parallel applications. The project teams, in collaboration with ALCF staff and IBM, have undertaken intensive efforts to adapt their software to take advantage of Mira's Blue Gene/Q architecture, which, in a number of ways, is a precursor to future high-performance-computing architecture. The Argonne Leadership Computing Facility (ALCF) enables transformative science that solves some of the most difficult challenges in biology, chemistry, energy, climate, materials, physics, and other scientific realms. Users partnering with ALCF staff have reached research milestones previously unattainable, due to the ALCF's world-class supercomputing resources and expertise in computation science. In 2011, the ALCF's commitment to providing outstanding science and leadership-class resources was honored with several prestigious awards. Research on multiscale brain blood flow simulations was named a Gordon Bell Prize finalist. Intrepid, the ALCF's BG/P system, ranked No. 1 on the Graph 500 list for the second consecutive year. The next-generation BG/Q prototype again topped the Green500 list. Skilled experts at the ALCF enable researchers to conduct breakthrough science on the Blue Gene system in key ways. The Catalyst Team matches project PIs with experienced computational scientists to maximize and accelerate research in their specific scientific domains. The Performance Engineering Team facilitates the effective use of applications on the Blue Gene system by assessing and improving the algorithms used by applications and the techniques used to

  6. Metabolomics Workbench (MetWB)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Metabolomics Program's Data Repository and Coordinating Center (DRCC), housed at the San Diego Supercomputer Center (SDSC), University of California, San Diego,...

  7. Fitting the datum of SANS with Pxy program

    International Nuclear Information System (INIS)

    Sun, Liangwei; Peng, Mei; Chen, Liang

    2009-04-01

    The thesis introduces the basic theory of small-angle neutron scattering and enumerates several approximate laws. It briefly describes the components of the small-angle neutron scattering spectrometer (SANS) and the parameters of the SANS instrument at the Budapest Neutron Center (BNC) in Hungary. During the period of study at the Budapest Neutron Center, wavelength calibration experiments were carried out with SIBE, together with SANS experiments on micelle samples. The experiments are briefly introduced. The Pxy program is used to fit these data, and the results for the wavelength and the sizes of the micelle samples are presented. (authors)

  8. Guide to dataflow supercomputing basic concepts, case studies, and a detailed example

    CERN Document Server

    Milutinovic, Veljko; Trifunovic, Nemanja; Giorgi, Roberto

    2015-01-01

    This unique text/reference describes an exciting and novel approach to supercomputing in the DataFlow paradigm. The major advantages and applications of this approach are clearly described, and a detailed explanation of the programming model is provided using simple yet effective examples. The work is developed from a series of lecture courses taught by the authors in more than 40 universities across more than 20 countries, and from research carried out by Maxeler Technologies, Inc. Topics and features: presents a thorough introduction to DataFlow supercomputing for big data problems; revie

  9. Interactive real-time nuclear plant simulations on a UNIX based supercomputer

    International Nuclear Information System (INIS)

    Behling, S.R.

    1990-01-01

    Interactive real-time nuclear plant simulations are critically important to train nuclear power plant engineers and operators. In addition, real-time simulations can be used to test the validity and timing of plant technical specifications and operational procedures. To accurately and confidently simulate a nuclear power plant transient in real-time, sufficient computer resources must be available. Since some important transients cannot be simulated using preprogrammed responses or non-physical models, commonly used simulation techniques may not be adequate. However, the power of a supercomputer allows one to accurately calculate the behavior of nuclear power plants even during very complex transients. Many of these transients can be calculated in real-time or quicker on the fastest supercomputers. The concept of running interactive real-time nuclear power plant transients on a supercomputer has been tested. This paper describes the architecture of the simulation program, the techniques used to establish real-time synchronization, and other issues related to the use of supercomputers in a new and potentially very important area. (author)

  10. Toward a Proof of Concept Cloud Framework for Physics Applications on Blue Gene Supercomputers

    International Nuclear Information System (INIS)

    Dreher, Patrick; Scullin, William; Vouk, Mladen

    2015-01-01

    Traditional high performance supercomputers are capable of delivering large sustained state-of-the-art computational resources to physics applications over extended periods of time using batch processing mode operating environments. However, today there is an increasing demand for more complex workflows that involve large fluctuations in the levels of HPC physics computational requirements during the simulations. Some of the workflow components may also require a richer set of operating system features and schedulers than normally found in a batch oriented HPC environment. This paper reports on progress toward a proof of concept design that implements a cloud framework onto BG/P and BG/Q platforms at the Argonne Leadership Computing Facility. The BG/P implementation utilizes the Kittyhawk utility and the BG/Q platform uses an experimental heterogeneous FusedOS operating system environment. Both platforms use the Virtual Computing Laboratory as the cloud computing system embedded within the supercomputer. This proof of concept design allows a cloud to be configured so that it can capitalize on the specialized infrastructure capabilities of a supercomputer and the flexible cloud configurations without resorting to virtualization. Initial testing of the proof of concept system is done using the lattice QCD MILC code. These types of user reconfigurable environments have the potential to deliver experimental schedulers and operating systems within a working HPC environment for physics computations that may be different from the native OS and schedulers on production HPC supercomputers. (paper)

  11. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2013-01-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.
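
    As a rough illustration of how such a framework can be assembled (a generic sketch with illustrative notation, not the exact model from this paper), the predicted time per weak-scaling step can be decomposed into compute, memory-contention and communication terms:

```latex
T_{\mathrm{total}}(p,c) \;\approx\; T_{\mathrm{comp}}
  \;+\; \underbrace{\frac{V_{\mathrm{mem}}}{B_{\mathrm{STREAM}}(c)/c}}_{\text{memory bandwidth contention}}
  \;+\; \underbrace{m\,\alpha \;+\; \frac{V_{\mathrm{msg}}}{\beta(p)}}_{\text{parameterized communication}}
```

    Here c is the number of active cores sharing a node's memory system, V_mem the data volume each core moves through memory, B_STREAM(c) the sustained STREAM bandwidth measured with c cores active, m the number of messages, alpha the per-message latency, and beta(p) the effective bandwidth at p MPI processes.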

  12. Argonne National Lab deploys Force10 networks' massively dense ethernet switch for supercomputing cluster

    CERN Multimedia

    2003-01-01

    "Force10 Networks, Inc. today announced that Argonne National Laboratory (Argonne, IL) has successfully deployed Force10 E-Series switch/routers to connect to the TeraGrid, the world's largest supercomputing grid, sponsored by the National Science Foundation (NSF)" (1/2 page).

  13. Design and performance characterization of electronic structure calculations on massively parallel supercomputers

    DEFF Research Database (Denmark)

    Romero, N. A.; Glinsvad, Christian; Larsen, Ask Hjorth

    2013-01-01

    Density function theory (DFT) is the most widely employed electronic structure method because of its favorable scaling with system size and accuracy for a broad range of molecular and condensed-phase systems. The advent of massively parallel supercomputers has enhanced the scientific community...

  14. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2013-12-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.

  15. An efficient implementation of a backpropagation learning algorithm on quadrics parallel supercomputer

    International Nuclear Information System (INIS)

    Taraglio, S.; Massaioli, F.

    1995-08-01

    A parallel implementation of a library to build and train Multi-Layer Perceptrons via the Back-Propagation algorithm is presented. The target machine is the SIMD massively parallel supercomputer Quadrics. Performance measures are provided on three different machines with different numbers of processors, for two network examples. A sample source code is given.
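
    As a reminder of what such a library computes at each training step, here is a minimal NumPy back-propagation update for a one-hidden-layer perceptron. It is a generic textbook sketch, not the Quadrics library's SIMD implementation.

```python
# Minimal one-hidden-layer MLP trained by back-propagation (NumPy sketch).
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 8))               # 64 samples, 8 inputs
y = (X.sum(axis=1, keepdims=True) > 0) * 1.0   # toy binary target

W1, b1 = rng.standard_normal((8, 16)) * 0.1, np.zeros(16)
W2, b2 = rng.standard_normal((16, 1)) * 0.1, np.zeros(1)
lr = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(200):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass for a squared-error loss
    d_out = (out - y) * out * (1.0 - out)
    d_h = (d_out @ W2.T) * (1.0 - h ** 2)
    # gradient-descent weight updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)
```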

  16. Visualization environment of the large-scale data of JAEA's supercomputer system

    Energy Technology Data Exchange (ETDEWEB)

    Sakamoto, Kensaku [Japan Atomic Energy Agency, Center for Computational Science and e-Systems, Tokai, Ibaraki (Japan); Hoshi, Yoshiyuki [Research Organization for Information Science and Technology (RIST), Tokai, Ibaraki (Japan)

    2013-11-15

    In research and development across various fields of nuclear energy, visualization of calculated data is especially useful for understanding simulation results in an intuitive way. Many researchers who run simulations on the supercomputer at the Japan Atomic Energy Agency (JAEA) typically transfer calculated data files from the supercomputer to their local PCs for visualization. In recent years, as the size of calculated data has grown with improvements in supercomputer performance, reducing visualization processing time and using the JAEA network efficiently have become necessary. As a solution, we introduced a remote visualization system that can utilize parallel processors on the supercomputer and reduce network usage by transferring data from an intermediate stage of the visualization process. This paper reports a study on the performance of image processing with the remote visualization system. The visualization processing time is measured and the influence of network speed is evaluated by varying the drawing mode, the size of the visualization data and the number of processors. Based on this study, a guideline is provided to show how the remote visualization system can be used effectively. An upgrade policy for the next system is also presented. (author)

  17. San Francisco Accelerator Conference

    International Nuclear Information System (INIS)

    Southworth, Brian

    1991-01-01

    'Where are today's challenges in accelerator physics?' was the theme of the open session at the San Francisco meeting, the largest ever gathering of accelerator physicists and engineers

  18. NREL Receives Editors' Choice Awards for Supercomputer Research

    Science.gov (United States)

    NREL received Editors' Choice Awards for the Peregrine high-performance computer and the groundbreaking research it made possible.

  19. Public Involvement and Response Plan (Community Relations Plan), Presidio of San Francisco, San Francisco, California

    Science.gov (United States)

    1992-03-01

    passenger ship destination, and tourist attraction. San Francisco's location and cultural and recreational opportunities make it a prime tourism center...

  20. Integration Of PanDA Workload Management System With Supercomputers

    CERN Document Server

    Klimentov, Alexei; The ATLAS collaboration; Maeno, Tadashi; Mashinistov, Ruslan; Nilsson, Paul; Oleynik, Danila; Panitkin, Sergey; Read, Kenneth; Ryabinkin, Evgeny; Wenaus, Torre

    2015-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 100,000 co...

  1. High Performance Computing in Science and Engineering '02 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2003-01-01

    This book presents the state-of-the-art in modeling and simulation on supercomputers. Leading German research groups present their results achieved on high-end systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2002. Reports cover all fields of supercomputing simulation ranging from computational fluid dynamics to computer science. Special emphasis is given to industrially relevant applications. Moreover, by presenting results for both vector systems and microprocessor-based systems, the book makes it possible to compare the performance levels and usability of a variety of supercomputer architectures. It therefore becomes an indispensable guidebook for assessing the impact of the Japanese Earth Simulator project on supercomputing in the years to come.

  2. Supercomputer and cluster performance modeling and analysis efforts:2004-2006.

    Energy Technology Data Exchange (ETDEWEB)

    Sturtevant, Judith E.; Ganti, Anand; Meyer, Harold (Hal) Edward; Stevenson, Joel O.; Benner, Robert E., Jr. (.,; .); Goudy, Susan Phelps; Doerfler, Douglas W.; Domino, Stefan Paul; Taylor, Mark A.; Malins, Robert Joseph; Scott, Ryan T.; Barnette, Daniel Wayne; Rajan, Mahesh; Ang, James Alfred; Black, Amalia Rebecca; Laub, Thomas William; Vaughan, Courtenay Thomas; Franke, Brian Claude

    2007-02-01

    This report describes efforts by the Performance Modeling and Analysis Team to investigate performance characteristics of Sandia's engineering and scientific applications on the ASC capability and advanced architecture supercomputers, and Sandia's capacity Linux clusters. Efforts to model various aspects of these computers are also discussed. The goals of these efforts are to quantify and compare Sandia's supercomputer and cluster performance characteristics; to reveal strengths and weaknesses in such systems; and to predict performance characteristics of, and provide guidelines for, future acquisitions and follow-on systems. Described herein are the results obtained from running benchmarks and applications to extract performance characteristics and comparisons, as well as modeling efforts, obtained during the time period 2004-2006. The format of the report, with hypertext links to numerous additional documents, purposefully minimizes the document size needed to disseminate the extensive results from our research.

  3. BSMBench: a flexible and scalable supercomputer benchmark from computational particle physics

    CERN Document Server

    Bennett, Ed; Del Debbio, Luigi; Jordan, Kirk; Patella, Agostino; Pica, Claudio; Rago, Antonio

    2016-01-01

    Benchmarking plays a central role in the evaluation of High Performance Computing architectures. Several benchmarks have been designed that allow users to stress various components of supercomputers. In order for the figures they provide to be useful, benchmarks need to be representative of the most common real-world scenarios. In this work, we introduce BSMBench, a benchmarking suite derived from Monte Carlo code used in computational particle physics. The advantage of this suite (which can be freely downloaded from http://www.bsmbench.org/) over others is the capacity to vary the relative importance of computation and communication. This enables the tests to simulate various practical situations. To showcase BSMBench, we perform a wide range of tests on various architectures, from desktop computers to state-of-the-art supercomputers, and discuss the corresponding results. Possible future directions of development of the benchmark are also outlined.

  4. Direct exploitation of a top 500 Supercomputer for Analysis of CMS Data

    International Nuclear Information System (INIS)

    Cabrillo, I; Cabellos, L; Marco, J; Fernandez, J; Gonzalez, I

    2014-01-01

    The Altamira Supercomputer hosted at the Instituto de Fisica de Cantabria (IFCA) entered operation in summer 2012. Its latest-generation FDR InfiniBand network, used for message passing in parallel jobs, also supports the connection to General Parallel File System (GPFS) servers, enabling efficient simultaneous processing of multiple data-demanding jobs. Sharing a common GPFS system and a single LDAP-based identification with the existing Grid clusters at IFCA allows CMS researchers to exploit the large instantaneous capacity of this supercomputer to execute analysis jobs. The detailed experience of this opportunistic use for skimming and final analysis of CMS 2012 data for a specific physics channel, resulting in an order of magnitude reduction of the waiting time, is presented.

  5. ParaBTM: A Parallel Processing Framework for Biomedical Text Mining on Supercomputers.

    Science.gov (United States)

    Xing, Yuting; Wu, Chengkun; Yang, Xi; Wang, Wei; Zhu, En; Yin, Jianping

    2018-04-27

    A prevailing way of extracting valuable information from biomedical literature is to apply text mining methods on unstructured texts. However, the massive amount of literature that needs to be analyzed poses a big data challenge to the processing efficiency of text mining. In this paper, we address this challenge by introducing parallel processing on a supercomputer. We developed paraBTM, a runnable framework that enables parallel text mining on the Tianhe-2 supercomputer. It employs a low-cost yet effective load balancing strategy to maximize the efficiency of parallel processing. We evaluated the performance of paraBTM on several datasets, utilizing three types of named entity recognition tasks as demonstration. Results show that, in most cases, the processing efficiency can be greatly improved with parallel processing, and the proposed load balancing strategy is simple and effective. In addition, our framework can be readily applied to other tasks of biomedical text mining besides NER.
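
    One low-cost load-balancing strategy of the kind described (the exact scheme used by paraBTM is not spelled out here) is longest-processing-time-first assignment, where documents are sorted by size and each goes to the currently least-loaded worker. A hypothetical sketch:

```python
# Greedy longest-processing-time-first (LPT) load-balancing sketch.
# Document lengths serve as a cheap proxy for NER processing cost.
import heapq

def assign_documents(doc_lengths, n_workers):
    """Return a list of document-index lists, one per worker."""
    heap = [(0, w) for w in range(n_workers)]      # (current load, worker id)
    heapq.heapify(heap)
    assignment = [[] for _ in range(n_workers)]
    # Largest documents first, each to the least-loaded worker so far.
    for idx in sorted(range(len(doc_lengths)), key=lambda i: -doc_lengths[i]):
        load, worker = heapq.heappop(heap)
        assignment[worker].append(idx)
        heapq.heappush(heap, (load + doc_lengths[idx], worker))
    return assignment

print(assign_documents([120, 30, 75, 300, 45, 90], n_workers=3))
```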

  6. Explaining the gap between theoretical peak performance and real performance for supercomputer architectures

    International Nuclear Information System (INIS)

    Schoenauer, W.; Haefner, H.

    1993-01-01

    The basic architectures of vector and parallel computers and their properties are presented. Then the memory size and the arithmetic operations in the context of memory bandwidth are discussed. For the exemplary discussion of a single operation, micro-measurements of the vector triad for the IBM 3090 VF and the CRAY Y-MP/8 are presented. They reveal the details of the losses for a single operation. Then we analyze the global performance of a whole supercomputer by identifying reduction factors that bring down the theoretical peak performance to the poor real performance. The responsibilities of the manufacturer and of the user for these losses are discussed. Then the price-performance ratio for different architectures in a snapshot of January 1991 is briefly mentioned. Finally, some remarks on a user-friendly architecture for a supercomputer are made. (orig.)
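
    The vector triad mentioned above, a(i) = b(i) + c(i)*d(i), is still a convenient way to expose the gap between peak and real performance. The sketch below times it in NumPy and reports the achieved fraction of an assumed peak; the peak value is a placeholder to be replaced with your machine's figure.

```python
# Vector triad a = b + c*d: measure achieved FLOP rate vs. an assumed peak.
import time
import numpy as np

N = 20_000_000
b, c, d = (np.random.rand(N) for _ in range(3))
a = np.empty(N)

t0 = time.perf_counter()
np.multiply(c, d, out=a)   # a = c*d
np.add(a, b, out=a)        # a = b + c*d
elapsed = time.perf_counter() - t0

flops = 2 * N                       # one multiply + one add per element
achieved = flops / elapsed / 1e9    # GFLOP/s
PEAK_GFLOPS = 100.0                 # placeholder: theoretical peak of your core(s)
print(f"{achieved:.2f} GFLOP/s, {100 * achieved / PEAK_GFLOPS:.1f}% of assumed peak")
```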

  7. Study of ATLAS TRT performance with GRID and supercomputers.

    CERN Document Server

    Krasnopevtsev, Dimitriy; The ATLAS collaboration; Mashinistov, Ruslan; Belyaev, Nikita; Ryabinkin, Evgeny

    2015-01-01

    After the early success in discovering a new particle consistent with the long-awaited Higgs boson, the Large Hadron Collider experiments are ready for the precision measurements and further discoveries that will be made possible by much higher LHC collision rates from spring 2015. A proper understanding of detector performance at high-occupancy conditions is important for many ongoing physics analyses. The ATLAS Transition Radiation Tracker (TRT) is one of these detectors. The TRT is a large straw-tube tracking system that is the outermost of the three subsystems of the ATLAS Inner Detector (ID). The TRT contributes significantly to the resolution for high-pT tracks in the ID, providing excellent particle identification capabilities and electron-pion separation. The ATLAS experiment uses the Worldwide LHC Computing Grid. WLCG is a global collaboration of computer centers and provides seamless access to computing resources which include data storage capacity, processing power, sensors, visualisation tools and more. WLCG...

  8. Study of ATLAS TRT performance with GRID and supercomputers.

    CERN Document Server

    Krasnopevtsev, Dimitriy; The ATLAS collaboration; Belyaev, Nikita; Mashinistov, Ruslan; Ryabinkin, Evgeny

    2015-01-01

    After the early success in discovering a new particle consistent with the long-awaited Higgs boson, the Large Hadron Collider experiments are ready for the precision measurements and further discoveries that will be made possible by much higher LHC collision rates from spring 2015. A proper understanding of detector performance at high-occupancy conditions is important for many ongoing physics analyses. The ATLAS Transition Radiation Tracker (TRT) is one of these detectors. The TRT is a large straw-tube tracking system that is the outermost of the three subsystems of the ATLAS Inner Detector (ID). The TRT contributes significantly to the resolution for high-pT tracks in the ID, providing excellent particle identification capabilities and electron-pion separation. The ATLAS experiment uses the Worldwide LHC Computing Grid. WLCG is a global collaboration of computer centers and provides seamless access to computing resources which include data storage capacity, processing power, sensors, visualization tools and more. WLCG ...

  9. Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.

    Energy Technology Data Exchange (ETDEWEB)

    Younge, Andrew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pedretti, Kevin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grant, Ryan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brightwell, Ron [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-05-01

    While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community not only as a processing component to large HPC simulations, but also as standalone scientific tools for knowledge discovery. With the path towards Exascale, new HPC runtime systems are also emerging in a way that differs from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputers needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on an XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, both evaluating single-node performance as well as weak scaling of a 32-node virtual cluster. Overall, we find single-node performance of our solution using KVM on a Cray is very efficient, with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.

  10. Application of Supercomputer Technologies for Simulation Of Socio-Economic Systems

    Directory of Open Access Journals (Sweden)

    Vladimir Valentinovich Okrepilov

    2015-06-01

    Full Text Available To date, extensive experience has been accumulated in the investigation of problems related to quality, the assessment of management systems, and the modeling of economic system sustainability. These studies have created a basis for the development of a new research area, the Economics of Quality. Its tools make it possible to use model simulation to construct mathematical models that adequately reflect the role of quality in the natural, technical and social regularities governing the functioning of complex socio-economic systems. In our deep conviction, the extensive application and development of models, together with system modeling using supercomputer technologies, will bring research on socio-economic systems to an essentially new level. Moreover, the current research makes a significant contribution to the simulation of multi-agent social systems and, no less importantly, belongs to the priority areas of science and technology development in our country. This article is devoted to the application of supercomputer technologies in the social sciences, first of all to the technical realization of large-scale agent-focused models (AFM). The essence of this tool is that, owing to the increase in computing power, it has become possible to describe the behavior of many separate fragments of a complex system, as socio-economic systems are. The article also deals with the experience of foreign scientists and practitioners in running AFM on supercomputers, and analyzes the example of an AFM developed at CEMI RAS as well as the stages and methods of efficiently mapping the computational kernel of a multi-agent system onto the architecture of a modern supercomputer. Experiments based on model simulation for forecasting the population of St. Petersburg according to three scenarios, as one of the major factors influencing the development of the socio-economic system and the quality of life of the population, are presented in the

  11. Heat dissipation computations of a HVDC ground electrode using a supercomputer

    International Nuclear Information System (INIS)

    Greiss, H.; Mukhedkar, D.; Lagace, P.J.

    1990-01-01

    This paper reports on the temperature of the soil surrounding a High Voltage Direct Current (HVDC) toroidal ground electrode of practical dimensions, in both homogeneous and non-homogeneous soils, computed at incremental points in time using finite difference methods on a supercomputer. Curves of the response were computed and plotted at several locations within the soil in the vicinity of the ground electrode for various values of the soil parameters.
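
    The kind of time stepping referred to can be illustrated with a one-dimensional explicit finite-difference scheme for heat diffusion; this is a generic textbook sketch with made-up parameters, not the paper's electrode model.

```python
# Explicit finite-difference time stepping for 1-D heat diffusion
# dT/dt = alpha * d2T/dx2 (illustrative parameters only).
import numpy as np

alpha, L, nx = 1.0e-6, 10.0, 201   # diffusivity (m^2/s), length (m), grid points
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha           # within explicit stability limit dt <= dx^2/(2*alpha)

T = np.full(nx, 15.0)              # initial soil temperature (deg C)
T[0] = 60.0                        # fixed boundary: heated electrode surface

for step in range(5000):
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[-1] = 15.0                   # far boundary held at ambient temperature

print(T[:5])
```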

  12. Analyzing the Interplay of Failures and Workload on a Leadership-Class Supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Meneses, Esteban [University of Pittsburgh; Ni, Xiang [University of Illinois at Urbana-Champaign; Jones, Terry R [ORNL; Maxwell, Don E [ORNL

    2015-01-01

    The unprecedented computational power of current supercomputers now makes possible the exploration of complex problems in many scientific fields, from genomic analysis to computational fluid dynamics. Modern machines are powerful because they are massive: they assemble millions of cores and a huge quantity of disks, cards, routers, and other components. But it is precisely the size of these machines that clouds the future of supercomputing. A system that comprises many components has a high chance of failing, and failing often. In order to make the next generation of supercomputers usable, it is imperative to use some type of fault tolerance platform to run applications on large machines. Most fault tolerance strategies can be optimized for the peculiarities of each system and boost efficacy by keeping the system productive. In this paper, we aim to understand how failure characterization can improve resilience in several layers of the software stack: applications, runtime systems, and job schedulers. We examine the Titan supercomputer, one of the fastest systems in the world. We analyze a full year of Titan in production and distill the failure patterns of the machine. By looking into Titan's log files and using the criteria of experts, we provide a detailed description of the types of failures. In addition, we inspect the job submission files and describe how the system is used. Using those two sources, we cross-correlate failures in the machine with executing jobs and provide a picture of how failures affect the user experience. We believe such characterization is fundamental in developing appropriate fault tolerance solutions for Cray systems similar to Titan.
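
    Cross-correlating failure events with executing jobs, as described, amounts to an interval join between two logs. A minimal sketch with a hypothetical record format:

```python
# Interval join of failure events against job records (hypothetical log schema).
def jobs_hit_by_failures(jobs, failures):
    """jobs: list of (job_id, start, end); failures: list of (timestamp, component).
    Returns a map from job_id to the failures that occurred while it was running."""
    hits = {}
    for ts, component in failures:
        for job_id, start, end in jobs:
            if start <= ts <= end:
                hits.setdefault(job_id, []).append((ts, component))
    return hits

jobs = [("job42", 100, 500), ("job43", 400, 900)]
failures = [(450, "GPU error"), (950, "node heartbeat lost")]
print(jobs_hit_by_failures(jobs, failures))  # t=450 overlaps both job42 and job43
```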

  13. Design of multiple sequence alignment algorithms on parallel, distributed memory supercomputers.

    Science.gov (United States)

    Church, Philip C; Goscinski, Andrzej; Holt, Kathryn; Inouye, Michael; Ghoting, Amol; Makarychev, Konstantin; Reumann, Matthias

    2011-01-01

    The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters without parallelism. Hence, we propose the design of a multiple sequence alignment algorithm on massively parallel, distributed-memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures including sequences and sorted k-mer lists on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E. coli, Shigella and S. pneumoniae genomes. Our implementation returns results matching those of the original algorithm but in 1/2 the time and with 1/4 the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed-memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within a reasonable time.
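
    The sorted k-mer lists mentioned as a core data structure can be built straightforwardly; the sketch below is a serial illustration only, whereas the paper distributes this work across BG/P nodes.

```python
# Build a sorted k-mer list with source positions for a genome sequence.
def sorted_kmer_list(sequence, k=15):
    """Return (kmer, position) pairs sorted lexicographically by k-mer."""
    kmers = [(sequence[i:i + k], i) for i in range(len(sequence) - k + 1)]
    kmers.sort()
    return kmers

# Shared k-mers between two sequences hint at anchor points for alignment.
def shared_kmers(seq_a, seq_b, k=5):
    set_b = {kmer for kmer, _ in sorted_kmer_list(seq_b, k)}
    return [(kmer, pos) for kmer, pos in sorted_kmer_list(seq_a, k) if kmer in set_b]

print(shared_kmers("ACGTACGTGG", "TTACGTACAA", k=5))
```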

  14. Building more powerful less expensive supercomputers using Processing-In-Memory (PIM) LDRD final report.

    Energy Technology Data Exchange (ETDEWEB)

    Murphy, Richard C.

    2009-09-01

    This report details the accomplishments of the 'Building More Powerful Less Expensive Supercomputers Using Processing-In-Memory (PIM)' LDRD ('PIM LDRD', number 105809) for FY07-FY09. Latency dominates all levels of supercomputer design. Within a node, increasing memory latency, relative to processor cycle time, limits CPU performance. Between nodes, the same increase in relative latency impacts scalability. Processing-In-Memory (PIM) is an architecture that directly addresses this problem using enhanced chip fabrication technology and machine organization. PIMs combine high-speed logic and dense, low-latency, high-bandwidth DRAM, and lightweight threads that tolerate latency by performing useful work during memory transactions. This work examines the potential of PIM-based architectures to support mission-critical Sandia applications and an emerging class of more data-intensive informatics applications. This work has resulted in a stronger architecture/implementation collaboration between 1400 and 1700. Additionally, key technology components have impacted vendor roadmaps, and we are in the process of pursuing these new collaborations. This work has the potential to impact future supercomputer design and construction, reducing power and increasing performance. This final report is organized as follows: this summary chapter discusses the impact of the project (Section 1), provides an enumeration of publications and other public discussion of the work (Section 1), and concludes with a discussion of future work and impact from the project (Section 1). The appendix contains reprints of the refereed publications resulting from this work.

  15. Remembering San Diego

    International Nuclear Information System (INIS)

    Chuyanov, V.

    1999-01-01

    After 6 years of existence, the ITER EDA project in San Diego, USA, was terminated by decision of the US Congress. This article describes how nice it was for everybody as long as it lasted and how sad it is now.

  16. The TESS Science Processing Operations Center

    Science.gov (United States)

    Jenkins, Jon M.; Twicken, Joseph D.; McCauliff, Sean; Campbell, Jennifer; Sanderfer, Dwight; Lung, David; Mansouri-Samani, Masoud; Girouard, Forrest; Tenenbaum, Peter; Klaus, Todd

    2016-01-01

    The Transiting Exoplanet Survey Satellite (TESS) will conduct a search for Earth's closest cousins starting in early 2018 and is expected to discover approximately 1,000 small planets with Rp less than 4 Earth radii and measure the masses of at least 50 of these small worlds. The Science Processing Operations Center (SPOC) is being developed at NASA Ames Research Center based on the Kepler science pipeline and will generate calibrated pixels and light curves on the NASA Advanced Supercomputing Division's Pleiades supercomputer. The SPOC will also search for periodic transit events and generate validation products for the transit-like features in the light curves. All TESS SPOC data products will be archived to the Mikulski Archive for Space Telescopes (MAST).

  17. Visualization on supercomputing platform level II ASC milestone (3537-1B) results from Sandia.

    Energy Technology Data Exchange (ETDEWEB)

    Geveci, Berk (Kitware, Inc., Clifton Park, NY); Fabian, Nathan; Marion, Patrick (Kitware, Inc., Clifton Park, NY); Moreland, Kenneth D.

    2010-09-01

    This report provides documentation for the completion of the Sandia portion of the ASC Level II Visualization on the platform milestone. This ASC Level II milestone is a joint milestone between Sandia National Laboratories and Los Alamos National Laboratory. This milestone contains functionality required for performing visualization directly on a supercomputing platform, which is necessary for peta-scale visualization. Sandia's contribution concerns in-situ visualization, running a visualization in tandem with a solver. Visualization and analysis of petascale data are limited by several factors which must be addressed as ACES delivers the Cielo platform. Two primary difficulties are: (1) Performance of interactive rendering, which is the most computationally intensive portion of the visualization process. For terascale platforms, commodity clusters with graphics processors (GPUs) have been used for interactive rendering. For petascale platforms, visualization and rendering may be able to run efficiently on the supercomputer platform itself. (2) I/O bandwidth, which limits how much information can be written to disk. If we simply analyze the sparse information that is saved to disk, we miss the opportunity to analyze the rich information produced at every timestep by the simulation. For the first issue, we are pursuing in-situ analysis, in which simulations are coupled directly with analysis libraries at runtime. This milestone will evaluate the visualization and rendering performance of current and next-generation supercomputers in contrast to GPU-based visualization clusters, and evaluate the performance of common analysis libraries coupled with the simulation that analyze and write data to disk during a running simulation. This milestone will explore, evaluate and advance the maturity level of these technologies and their applicability to problems of interest to the ASC program. Scientific simulation on parallel supercomputers is traditionally performed in four
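
    The in-situ coupling idea, analysis invoked from the running solver instead of post-processing files, can be sketched abstractly. The solver and analysis functions below are hypothetical placeholders rather than the milestone's actual library interfaces.

```python
# Generic in-situ analysis loop: analysis runs in tandem with the solver,
# so only reduced results (not full fields) go to the I/O path.
# solver_step and in_situ_analysis are hypothetical stand-ins.
import numpy as np

def solver_step(state, rng):
    """Stand-in for one timestep of a real simulation."""
    return state * 0.99 + rng.standard_normal(state.shape) * 0.01

def in_situ_analysis(state, step):
    """Reduce the full field to a few numbers every call, instead of dumping it."""
    return {"step": step, "mean": float(state.mean()), "max": float(state.max())}

rng = np.random.default_rng(0)
state = np.ones((128, 128))
analysis_stride = 10                 # analyze every N steps instead of writing all data
summaries = []
for step in range(100):
    state = solver_step(state, rng)
    if step % analysis_stride == 0:
        summaries.append(in_situ_analysis(state, step))

print(summaries[-1])                 # only compact summaries ever hit the disk
```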

  18. Sandia's network for Supercomputing '94: Linking the Los Alamos, Lawrence Livermore, and Sandia National Laboratories using switched multimegabit data service

    Energy Technology Data Exchange (ETDEWEB)

    Vahle, M.O.; Gossage, S.A.; Brenkosh, J.P. [Sandia National Labs., Albuquerque, NM (United States). Advanced Networking Integration Dept.

    1995-01-01

    Supercomputing '94, a high-performance computing and communications conference, was held November 14th through 18th, 1994 in Washington DC. For the past four years, Sandia National Laboratories has used this conference to showcase and focus its communications and networking endeavors. At the 1994 conference, Sandia built a Switched Multimegabit Data Service (SMDS) network running at 44.736 megabits per second linking its private SMDS network between its facilities in Albuquerque, New Mexico and Livermore, California to the convention center in Washington, D.C. For the show, the network was also extended from Sandia, New Mexico to Los Alamos National Laboratory and from Sandia, California to Lawrence Livermore National Laboratory. This paper documents and describes this network and how it was used at the conference.

  19. Sensitive Wildlife - Center for Natural Lands Management [ds431

    Data.gov (United States)

    California Natural Resource Agency — This dataset represents sensitive wildlife data collected for the Center for Natural Lands Management (CNLM) at dedicated nature preserves in San Diego County,...

  20. SANS-1 Experimental reports of 2000

    International Nuclear Information System (INIS)

    Willumeit, R.; Haramus, V.

    2001-01-01

    The instrument SANS-1 at the Geesthacht neutron facility GeNF was used for scattering experiments in 2000 on 196 of 200 days of reactor and cold source operation. The utilisation was shared between the in-house R and D program and user groups from different universities and research centers. These measurements were performed and analysed by either guest scientists or GKSS staff. The focus of the work in 2000 at the SANS-1 experiment was the structural investigation of hydrogen-containing substances such as biological macromolecules (ribosomes, protein-RNA complexes, protein solutions, glycolipids and membranes) and molecules which are important in the fields of environmental research (refractory organic substances) and technical chemistry (surfactants, micelles). (orig.)

  1. MaMiCo: Transient multi-instance molecular-continuum flow simulation on supercomputers

    Science.gov (United States)

    Neumann, Philipp; Bian, Xin

    2017-11-01

    We present extensions of the macro-micro-coupling tool MaMiCo, which was designed to couple continuum fluid dynamics solvers with discrete particle dynamics. To enable local extraction of smooth flow field quantities especially on rather short time scales, sampling over an ensemble of molecular dynamics simulations is introduced. We provide details on these extensions including the transient coupling algorithm, open boundary forcing, and multi-instance sampling. Furthermore, we validate the coupling in Couette flow using different particle simulation software packages and particle models, i.e. molecular dynamics and dissipative particle dynamics. Finally, we demonstrate the parallel scalability of the molecular-continuum simulations by using up to 65 536 compute cores of the supercomputer Shaheen II located at KAUST.
    Program Files doi: http://dx.doi.org/10.17632/w7rgdrhb85.1
    Licensing provisions: BSD 3-clause
    Programming language: C, C++
    External routines/libraries: For compiling: SCons, MPI (optional)
    Subprograms used: ESPResSo, LAMMPS, ls1 mardyn, waLBerla. For installation procedures of the MaMiCo interfaces, see the README files in the respective code directories located in coupling/interface/impl.
    Journal reference of previous version: P. Neumann, H. Flohr, R. Arora, P. Jarmatz, N. Tchipev, H.-J. Bungartz. MaMiCo: Software design for parallel molecular-continuum flow simulations, Computer Physics Communications 200: 324-335, 2016
    Does the new version supersede the previous version?: Yes. The functionality of the previous version is completely retained in the new version.
    Nature of problem: Coupled molecular-continuum simulation for multi-resolution fluid dynamics: parts of the domain are resolved by molecular dynamics or another particle-based solver whereas large parts are covered by a mesh-based CFD solver, e.g. a lattice Boltzmann automaton.
    Solution method: We couple existing MD and CFD solvers via MaMiCo (macro-micro coupling tool). Data exchange and

  2. 77 FR 49865 - Notice of Availability of an Environmental Impact Statement (EIS) for the San Francisco Veterans...

    Science.gov (United States)

    2012-08-17

    ... National Environmental Policy Act (NEPA) of 1969, as amended, (42 U.S.C. 4331 et seq.), the Council on...) for the San Francisco Veterans Affairs Medical Center (SFVAMC) Long Range Development Plan (LRDP... Francisco Veterans Affairs Medical Center, 4150 Clement Street, San Francisco, CA 94121 or by telephone...

  3. 76 FR 17752 - Notice of Intent To Prepare an Environmental Impact Statement for the San Francisco Veterans...

    Science.gov (United States)

    2011-03-30

    ... Environmental Policy Act (NEPA) of 1969, as amended, (42 U.S.C. 4331 et seq.), the Council on Environmental... the San Francisco Veterans Affairs Medical Center (SFVAMC) Institutional Master Plan AGENCY...: Comments should be addressed to John Pechman, Facility Planner, San Francisco VA Medical Center (001), 4150...

  4. Perspective View, San Andreas Fault

    Science.gov (United States)

    2000-01-01

    The prominent linear feature straight down the center of this perspective view is California's famous San Andreas Fault. The image, created with data from NASA's Shuttle Radar Topography Mission (SRTM), will be used by geologists studying fault dynamics and landforms resulting from active tectonics. This segment of the fault lies west of the city of Palmdale, Calif., about 100 kilometers (about 60 miles) northwest of Los Angeles. The fault is the active tectonic boundary between the North American plate on the right, and the Pacific plate on the left. Relative to each other, the Pacific plate is moving away from the viewer and the North American plate is moving toward the viewer along what geologists call a right lateral strike-slip fault. Two large mountain ranges are visible, the San Gabriel Mountains on the left and the Tehachapi Mountains in the upper right. Another fault, the Garlock Fault lies at the base of the Tehachapis; the San Andreas and the Garlock Faults meet in the center distance near the town of Gorman. In the distance, over the Tehachapi Mountains is California's Central Valley. Along the foothills in the right hand part of the image is the Antelope Valley, including the Antelope Valley California Poppy Reserve. The data used to create this image were acquired by SRTM aboard the Space Shuttle Endeavour, launched on February 11, 2000.This type of display adds the important dimension of elevation to the study of land use and environmental processes as observed in satellite images. The perspective view was created by draping a Landsat satellite image over an SRTM elevation model. Topography is exaggerated 1.5 times vertically. The Landsat image was provided by the United States Geological Survey's Earth Resources Observations Systems (EROS) Data Center, Sioux Falls, South Dakota.SRTM uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour

  5. Integration of PanDA workload management system with Titan supercomputer at OLCF

    Science.gov (United States)

    De, K.; Klimentov, A.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.

    2015-12-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, the future LHC data-taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of the PanDA WMS with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multicore worker nodes. It also gives PanDA the new capability to collect, in real time, information about unused worker nodes on Titan, which allows precise definition of the size and duration of jobs submitted to Titan according to available free resources. This capability significantly reduces PanDA job wait time while improving Titan's utilization efficiency. This implementation was tested with a variety of Monte-Carlo workloads on Titan and is being tested on several other supercomputing platforms. Notice: This manuscript has been authored, by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.

  6. Re-inventing electromagnetics - Supercomputing solution of Maxwell's equations via direct time integration on space grids

    International Nuclear Information System (INIS)

    Taflove, A.

    1992-01-01

    This paper summarizes the present state and future directions of applying finite-difference and finite-volume time-domain techniques for Maxwell's equations on supercomputers to model complex electromagnetic wave interactions with structures. Applications so far have been dominated by radar cross section technology, but by no means are limited to this area. In fact, the gains we have made place us on the threshold of being able to make tremendous contributions to non-defense electronics and optical technology. Some of the most interesting research in these commercial areas is summarized. 47 refs
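    The finite-difference time-domain technique summarized here advances Maxwell's equations by leapfrogging electric and magnetic field updates on a space grid. A minimal, hedged 1D illustration in Python/NumPy (normalized units, a soft Gaussian source, no absorbing boundaries) might look like the following; it is not one of the production codes discussed in the paper.

```python
# Minimal 1D FDTD (Yee) update in normalized units: H and E are staggered in
# space and time and updated from each other's spatial differences.
import numpy as np

nx, nt = 200, 500
ez = np.zeros(nx)          # electric field at integer grid points
hy = np.zeros(nx - 1)      # magnetic field at half grid points

for n in range(nt):
    hy += ez[1:] - ez[:-1]                            # update H from curl of E
    ez[1:-1] += hy[1:] - hy[:-1]                      # update E from curl of H
    ez[nx // 2] += np.exp(-((n - 40) / 12.0) ** 2)    # soft Gaussian source
```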

  7. Watson will see you now: a supercomputer to help clinicians make informed treatment decisions.

    Science.gov (United States)

    Doyle-Lindrud, Susan

    2015-02-01

    IBM has collaborated with several cancer care providers to develop and train the IBM supercomputer Watson to help clinicians make informed treatment decisions. When a patient is seen in clinic, the oncologist can input all of the clinical information into the computer system. Watson will then review all of the data and recommend treatment options based on the latest evidence and guidelines. Once the oncologist makes the treatment decision, this information can be sent directly to the insurance company for approval. Watson has the ability to standardize care and accelerate the approval process, a benefit to the healthcare provider and the patient.

  8. 75 FR 38412 - Safety Zone; San Diego POPS Fireworks, San Diego, CA

    Science.gov (United States)

    2010-07-02

    ...-AA00 Safety Zone; San Diego POPS Fireworks, San Diego, CA AGENCY: Coast Guard, DHS. ACTION: Temporary... waters of San Diego Bay in support of the San Diego POPS Fireworks. This safety zone is necessary to... San Diego POPS Fireworks, which will include fireworks presentations conducted from a barge in San...

  9. Grassroots Supercomputing

    CERN Multimedia

    Buchanan, Mark

    2005-01-01

    What started out as a way for SETI to plow through its piles of radio-signal data from deep space has turned into a powerful research tool as computer users across the globe donate their screen-saver time to projects as diverse as climate-change prediction, gravitational-wave searches, and protein folding (4 pages)

  10. Risk factors related to the use of illegal drugs: the critical perspective of drug users' relatives and acquaintances at a public health center in San Pedro Sula, Honduras

    Directory of Open Access Journals (Sweden)

    Gladys Magdalena Rodríguez Funes

    2009-01-01

    This article presents quantitative data from a multicenter, cross-sectional study, which was performed at a public health center in San Pedro Sula, Honduras, using multiple methods. The objective of the study was to describe the critical perspective of people who reported being affected by their relationship with an illicit drug user (relative or acquaintance) in terms of risk factors. Data collection was performed using 100 questionnaires. Most participants were women with low education levels. Drug users were mostly men, with an average age of 23.3 years. The most consumed drug was marijuana (78%), followed by crack/cocaine (72%), glue/inhalants (27%), hallucinogens (ecstasy/LSD) (3%), amphetamines/stimulants (1%), and heroin (1%). The identified risk factors include previous experience with alcohol/tobacco, having friends who use drugs, lack of information, low self-esteem, age, and other personal, family and social factors. In conclusion, prevention and protection should be reinforced.

  11. Integration of PanDA workload management system with Titan supercomputer at OLCF

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00300320; Klimentov, Alexei; Oleynik, Danila; Panitkin, Sergey; Petrosyan, Artem; Vaniachine, Alexandre; Wenaus, Torre; Schovancova, Jaroslava

    2015-01-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, the next LHC data-taking run will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of the PanDA WMS with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. It also gives PanDA the new capability to collect, in real time, information about unused...

  12. Integration of PanDA workload management system with Titan supercomputer at OLCF

    CERN Document Server

    Panitkin, Sergey; The ATLAS collaboration; Klimentov, Alexei; Oleynik, Danila; Petrosyan, Artem; Schovancova, Jaroslava; Vaniachine, Alexandre; Wenaus, Torre

    2015-01-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently uses more than 100,000 cores at well over 100 Grid sites with a peak performance of 0.3 petaFLOPS, the next LHC data-taking run will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of the PanDA WMS with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. It also gives PanDA new capability to collect, in real tim...

  13. Feynman diagrams sampling for quantum field theories on the QPACE 2 supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Rappl, Florian

    2016-08-01

    This work discusses the application of Feynman diagram sampling in quantum field theories. The method uses a computer simulation to sample the diagrammatic space obtained in a series expansion. For running large physical simulations powerful computers are obligatory, effectively splitting the thesis in two parts. The first part deals with the method of Feynman diagram sampling. Here the theoretical background of the method itself is discussed. Additionally, important statistical concepts and the theory of the strong force, quantum chromodynamics, are introduced. This sets the context of the simulations. We create and evaluate a variety of models to estimate the applicability of diagrammatic methods. The method is then applied to sample the perturbative expansion of the vertex correction. In the end we obtain the value for the anomalous magnetic moment of the electron. The second part looks at the QPACE 2 supercomputer. This includes a short introduction to supercomputers in general, as well as a closer look at the architecture and the cooling system of QPACE 2. Guiding benchmarks of the InfiniBand network are presented. At the core of this part, a collection of best practices and useful programming concepts are outlined, which enables the development of efficient, yet easily portable, applications for the QPACE 2 system.

  14. Communication Characterization and Optimization of Applications Using Topology-Aware Task Mapping on Large Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sreepathi, Sarat [ORNL; D' Azevedo, Eduardo [ORNL; Philip, Bobby [ORNL; Worley, Patrick H [ORNL

    2016-01-01

    On large supercomputers, the job scheduling systems may assign a non-contiguous node allocation for user applications depending on available resources. With parallel applications using MPI (Message Passing Interface), the default process ordering does not take into account the actual physical node layout available to the application. This contributes to non-locality in terms of physical network topology and impacts communication performance of the application. In order to mitigate such performance penalties, this work describes techniques to identify suitable task mapping that takes the layout of the allocated nodes as well as the application's communication behavior into account. During the first phase of this research, we instrumented and collected performance data to characterize communication behavior of critical US DOE (United States - Department of Energy) applications using an augmented version of the mpiP tool. Subsequently, we developed several reordering methods (spectral bisection, neighbor join tree etc.) to combine node layout and application communication data for optimized task placement. We developed a tool called mpiAproxy to facilitate detailed evaluation of the various reordering algorithms without requiring full application executions. This work presents a comprehensive performance evaluation (14,000 experiments) of the various task mapping techniques in lowering communication costs on Titan, the leadership class supercomputer at Oak Ridge National Laboratory.
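    A topology-aware mapping of this kind is typically judged by a hop-weighted communication volume: bytes exchanged between ranks multiplied by the network distance between the nodes they are placed on. The snippet below is a hypothetical Python/NumPy sketch of that cost function; the inputs comm_bytes and hop_dist are assumed to come from a profiler (such as the mpiP data mentioned above) and from the allocation's node coordinates, and this is not the ORNL tooling itself.

```python
import numpy as np

def mapping_cost(comm_bytes, hop_dist, rank_to_node):
    """Hop-weighted communication volume for a candidate rank-to-node placement.

    comm_bytes[i, j] -- bytes exchanged between MPI ranks i and j (profiled)
    hop_dist[a, b]   -- network hops between allocated nodes a and b
    rank_to_node[i]  -- node assigned to rank i by the reordering method
    """
    nodes = np.asarray(rank_to_node)
    return float(np.sum(comm_bytes * hop_dist[nodes[:, None], nodes[None, :]]))

# A reordering (e.g. spectral bisection) is accepted if it lowers this cost
# relative to the default rank order:
#   mapping_cost(C, H, reordered) < mapping_cost(C, H, default)
```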

  15. Federal Market Information Technology in the Post Flash Crash Era: Roles for Supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Bethel, E. Wes; Leinweber, David; Ruebel, Oliver; Wu, Kesheng

    2011-09-16

    This paper describes collaborative work between active traders, regulators, economists, and supercomputing researchers to replicate and extend investigations of the Flash Crash and other market anomalies in a National Laboratory HPC environment. Our work suggests that supercomputing tools and methods will be valuable to market regulators in achieving the goal of market safety, stability, and security. Research results using high frequency data and analytics are described, and directions for future development are discussed. Currently the key mechanisms for preventing catastrophic market action are “circuit breakers.” We believe a more graduated approach, similar to the “yellow light” approach in motorsports to slow down traffic, might be a better way to achieve the same goal. To enable this objective, we study a number of indicators that could foresee hazards in market conditions and explore options to confirm such predictions. Our tests confirm that Volume Synchronized Probability of Informed Trading (VPIN) and a version of the volume Herfindahl-Hirschman Index (HHI) for measuring market fragmentation can indeed give strong signals ahead of the Flash Crash event on May 6, 2010. This is a preliminary step toward a full-fledged early-warning system for unusual market conditions.
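    For concreteness, a volume Herfindahl-Hirschman Index of the kind mentioned above is just the sum of squared volume shares; a reading close to 1 means trading is concentrated in a single venue, while values near 1/n indicate even fragmentation across n venues. A minimal sketch with illustrative numbers (not the authors' indicator code):

```python
def herfindahl_hirschman(volumes):
    """HHI of per-venue volume shares; 1.0 = fully concentrated, 1/n = evenly split."""
    total = float(sum(volumes))
    shares = [v / total for v in volumes]
    return sum(s * s for s in shares)

# Example: traded volume split across five hypothetical venues
print(herfindahl_hirschman([120, 80, 60, 30, 10]))
```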

  16. Unique Methodologies for Nano/Micro Manufacturing Job Training Via Desktop Supercomputer Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kimball, Clyde [Northern Illinois Univ., DeKalb, IL (United States); Karonis, Nicholas [Northern Illinois Univ., DeKalb, IL (United States); Lurio, Laurence [Northern Illinois Univ., DeKalb, IL (United States); Piot, Philippe [Northern Illinois Univ., DeKalb, IL (United States); Xiao, Zhili [Northern Illinois Univ., DeKalb, IL (United States); Glatz, Andreas [Northern Illinois Univ., DeKalb, IL (United States); Pohlman, Nicholas [Northern Illinois Univ., DeKalb, IL (United States); Hou, Minmei [Northern Illinois Univ., DeKalb, IL (United States); Demir, Veysel [Northern Illinois Univ., DeKalb, IL (United States); Song, Jie [Northern Illinois Univ., DeKalb, IL (United States); Duffin, Kirk [Northern Illinois Univ., DeKalb, IL (United States); Johns, Mitrick [Northern Illinois Univ., DeKalb, IL (United States); Sims, Thomas [Northern Illinois Univ., DeKalb, IL (United States); Yin, Yanbin [Northern Illinois Univ., DeKalb, IL (United States)

    2012-11-21

    This project establishes an initiative in high speed (Teraflop)/large-memory desktop supercomputing for modeling and simulation of dynamic processes important for energy and industrial applications. It provides a training ground for employment of current students in an emerging field with skills necessary to access the large supercomputing systems now present at DOE laboratories. It also provides a foundation for NIU faculty to quantum leap beyond their current small cluster facilities. The funding extends faculty and student capability to a new level of analytic skills with concomitant publication avenues. The components of the Hewlett Packard computer obtained by the DOE funds create a hybrid combination of a Graphics Processing System (12 GPU/Teraflops) and a Beowulf CPU system (144 CPU), the first expandable via the NIU GAEA system to ~60 Teraflops integrated with a 720 CPU Beowulf system. The software is based on access to the NVIDIA/CUDA library and the ability through MATLAB multiple licenses to create additional local programs. A number of existing programs are being transferred to the CPU Beowulf Cluster. Since the expertise necessary to create the parallel processing applications has recently been obtained at NIU, this effort for software development is in an early stage. The educational program has been initiated via formal tutorials and classroom curricula designed for the coming year. Specifically, the cost focus was on hardware acquisitions and appointment of graduate students for a wide range of applications in engineering, physics and computer science.

  17. Computational Science with the Titan Supercomputer: Early Outcomes and Lessons Learned

    Science.gov (United States)

    Wells, Jack

    2014-03-01

    Modeling and simulation with petascale computing has supercharged the process of innovation and understanding, dramatically accelerating time-to-insight and time-to-discovery. This presentation will focus on early outcomes from the Titan supercomputer at the Oak Ridge National Laboratory. Titan has over 18,000 hybrid compute nodes consisting of both CPUs and GPUs. In this presentation, I will discuss the lessons we have learned in deploying Titan and preparing applications to move from conventional CPU architectures to a hybrid machine. I will present early results of materials applications running on Titan and the implications for the research community as we prepare for exascale supercomputers in the next decade. Lastly, I will provide an overview of user programs at the Oak Ridge Leadership Computing Facility with specific information on how researchers may apply for allocations of computing resources. This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.

  18. Parallel Multivariate Spatio-Temporal Clustering of Large Ecological Datasets on Hybrid Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sreepathi, Sarat [ORNL; Kumar, Jitendra [ORNL; Mills, Richard T. [Argonne National Laboratory; Hoffman, Forrest M. [ORNL; Sripathi, Vamsi [Intel Corporation; Hargrove, William Walter [United States Department of Agriculture (USDA), United States Forest Service (USFS)

    2017-09-01

    A proliferation of data from vast networks of remote sensing platforms (satellites, unmanned aircraft systems (UAS), airborne etc.), observational facilities (meteorological, eddy covariance etc.), state-of-the-art sensors, and simulation models offer unprecedented opportunities for scientific discovery. Unsupervised classification is a widely applied data mining approach to derive insights from such data. However, classification of very large data sets is a complex computational problem that requires efficient numerical algorithms and implementations on high performance computing (HPC) platforms. Additionally, increasing power, space, cooling and efficiency requirements have led to the deployment of hybrid supercomputing platforms with complex architectures and memory hierarchies like the Titan system at Oak Ridge National Laboratory. The advent of such accelerated computing architectures offers new challenges and opportunities for big data analytics in general and specifically, large scale cluster analysis in our case. Although there is an existing body of work on parallel cluster analysis, those approaches do not fully meet the needs imposed by the nature and size of our large data sets. Moreover, they had scaling limitations and were mostly limited to traditional distributed memory computing platforms. We present a parallel Multivariate Spatio-Temporal Clustering (MSTC) technique based on k-means cluster analysis that can target hybrid supercomputers like Titan. We developed a hybrid MPI, CUDA and OpenACC implementation that can utilize both CPU and GPU resources on computational nodes. We describe performance results on Titan that demonstrate the scalability and efficacy of our approach in processing large ecological data sets.
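    At its core, one iteration of distributed k-means in a scheme like this reduces per-cluster partial sums and counts across ranks. The following is a hedged Python/mpi4py sketch of that step under that assumption; it is not the hybrid MPI/CUDA/OpenACC implementation described above, and the function and variable names are illustrative.

```python
# One distributed k-means iteration: each rank assigns its local observations
# to the nearest centroid, then partial sums and counts are combined with
# MPI Allreduce so every rank obtains the same updated centroids.
import numpy as np
from mpi4py import MPI

def kmeans_step(local_x, centroids, comm=MPI.COMM_WORLD):
    k, d = centroids.shape
    dists = np.linalg.norm(local_x[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)

    local_sum = np.zeros((k, d))
    local_cnt = np.zeros(k)
    for j in range(k):
        mask = labels == j
        local_sum[j] = local_x[mask].sum(axis=0)
        local_cnt[j] = mask.sum()

    global_sum = np.empty_like(local_sum)
    global_cnt = np.empty_like(local_cnt)
    comm.Allreduce(local_sum, global_sum, op=MPI.SUM)
    comm.Allreduce(local_cnt, global_cnt, op=MPI.SUM)
    return global_sum / np.maximum(global_cnt, 1.0)[:, None]
```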

  19. An Interface for Biomedical Big Data Processing on the Tianhe-2 Supercomputer.

    Science.gov (United States)

    Yang, Xi; Wu, Chengkun; Lu, Kai; Fang, Lin; Zhang, Yong; Li, Shengkang; Guo, Guixin; Du, YunFei

    2017-12-01

    Big data, cloud computing, and high-performance computing (HPC) are on the verge of convergence. Cloud computing is already playing an active part in big data processing with the help of big data frameworks like Hadoop and Spark. The recent upsurge of high-performance computing in China provides extra possibilities and capacity to address the challenges associated with big data. In this paper, we propose Orion, a big data interface on the Tianhe-2 supercomputer, to enable big data applications to run on Tianhe-2 via a single command or a shell script. Orion supports multiple users, and each user can launch multiple tasks. It minimizes the effort needed to initiate big data applications on the Tianhe-2 supercomputer via automated configuration. Orion follows the "allocate-when-needed" paradigm, and it avoids the idle occupation of computational resources. We tested the utility and performance of Orion using a big genomic dataset and achieved a satisfactory performance on Tianhe-2 with very few modifications to existing applications that were implemented in Hadoop/Spark. In summary, Orion provides a practical and economical interface for big data processing on Tianhe-2.

  20. Some examples of spin-off technologies: San Carlos de Bariloche; Algunos ejemplos de tecnologias derivadas: San Carlos de Bariloche

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Gabriel O [Comision Nacional de Energia Atomica, San Carlos de Bariloche (Argentina). Centro Atomico Bariloche

    2001-07-01

    The Bariloche Atomic Center (CAB) and the Balseiro Institute, both in San Carlos de Bariloche, are devoted mainly to scientific research and development and to education and training, respectively. Besides providing specialists in physics and nuclear engineering for research centers in Argentina and abroad, both establishments transfer technologies and provide services in different fields such as waste management, metallurgy, forensic sciences, medicine, geology, modeling, archaeology, paleontology, etc.

  1. First experiences with large SAN storage and Linux

    International Nuclear Information System (INIS)

    Wezel, Jos van; Marten, Holger; Verstege, Bernhard; Jaeger, Axel

    2004-01-01

    The use of a storage area network (SAN) with Linux opens possibilities for scalable and affordable large data storage and poses a new challenge for cluster computing. The GridKa center uses a commercial parallel file system to create a highly available high-speed data storage using a combination of Fibre Channel (SAN) and Ethernet (LAN) to optimize between data throughput and costs. This article describes the design, implementation and optimizations of the GridKa storage solution which will offer over 400 TB online storage for 600 nodes. Presented are some throughput measurements of one of the largest Linux-based parallel storage systems in the world

  2. Palacios and Kitten : high performance operating systems for scalable virtualized and native supercomputing.

    Energy Technology Data Exchange (ETDEWEB)

    Widener, Patrick (University of New Mexico); Jaconette, Steven (Northwestern University); Bridges, Patrick G. (University of New Mexico); Xia, Lei (Northwestern University); Dinda, Peter (Northwestern University); Cui, Zheng.; Lange, John (Northwestern University); Hudson, Trammell B.; Levenhagen, Michael J.; Pedretti, Kevin Thomas Tauke; Brightwell, Ronald Brian

    2009-09-01

    Palacios and Kitten are new open source tools that enable applications, whether ported or not, to achieve scalable high performance on large machines. They provide a thin layer over the hardware to support both full-featured virtualized environments and native code bases. Kitten is an OS under development at Sandia that implements a lightweight kernel architecture to provide predictable behavior and increased flexibility on large machines, while also providing Linux binary compatibility. Palacios is a VMM that is under development at Northwestern University and the University of New Mexico. Palacios, which can be embedded into Kitten and other OSes, supports existing, unmodified applications and operating systems by using virtualization that leverages hardware technologies. We describe the design and implementation of both Kitten and Palacios. Our benchmarks show that they provide near native, scalable performance. Palacios and Kitten provide an incremental path to using supercomputer resources that is not performance-compromised.

  3. Large scale simulations of lattice QCD thermodynamics on Columbia Parallel Supercomputers

    International Nuclear Information System (INIS)

    Ohta, Shigemi

    1989-01-01

    The Columbia Parallel Supercomputer project aims at the construction of a parallel-processing, multi-gigaflop computer optimized for numerical simulations of lattice QCD. The project has three stages: a 16-node, 1/4 GF machine completed in April 1985; a 64-node, 1 GF machine completed in August 1987; and a 256-node, 16 GF machine now under construction. The machines all share a common architecture: a two-dimensional torus formed from a rectangular array of N1 x N2 independent and identical processors. A processor is capable of operating in a multi-instruction multi-data mode, except for periods of synchronous interprocessor communication with its four nearest neighbors. Here the thermodynamics simulations on the two working machines are reported. (orig./HSI)

  4. Use of QUADRICS supercomputer as embedded simulator in emergency management systems

    International Nuclear Information System (INIS)

    Bove, R.; Di Costanzo, G.; Ziparo, A.

    1996-07-01

    The experience gained in implementing MRBT, an atmospheric dispersion model for short-duration releases, on a QUADRICS-Q1 supercomputer is reported. First, the MRBT model is described: it is an analytical model for studying the spreading of light gases released into the atmosphere by accidental releases. The solution of the diffusion equation is Gaussian-like and yields the concentration of the released pollutant as a function of space and time. The QUADRICS architecture is then introduced and the implementation of the model is described. Finally, the integration of the QUADRICS-based model as an embedded simulator in an emergency management system is considered.

  5. Affordable and accurate large-scale hybrid-functional calculations on GPU-accelerated supercomputers

    Science.gov (United States)

    Ratcliff, Laura E.; Degomme, A.; Flores-Livas, José A.; Goedecker, Stefan; Genovese, Luigi

    2018-03-01

    Performing high accuracy hybrid functional calculations for condensed matter systems containing a large number of atoms is at present computationally very demanding or even out of reach if high quality basis sets are used. We present a highly optimized multiple graphics processing unit implementation of the exact exchange operator which allows one to perform fast hybrid functional density-functional theory (DFT) calculations with systematic basis sets without additional approximations for up to a thousand atoms. With this method hybrid DFT calculations of high quality become accessible on state-of-the-art supercomputers within a time-to-solution that is of the same order of magnitude as traditional semilocal-GGA functionals. The method is implemented in a portable open-source library.

  6. Research to application: Supercomputing trends for the 90's - Opportunities for interdisciplinary computations

    International Nuclear Information System (INIS)

    Shankar, V.

    1991-01-01

    The progression of supercomputing is reviewed from the point of view of computational fluid dynamics (CFD), and multidisciplinary problems impacting the design of advanced aerospace configurations are addressed. The application of full potential and Euler equations to transonic and supersonic problems in the 70s and early 80s is outlined, along with Navier-Stokes computations widespread during the late 80s and early 90s. Multidisciplinary computations currently in progress are discussed, including CFD and aeroelastic coupling for both static and dynamic flexible computations, CFD, aeroelastic, and controls coupling for flutter suppression and active control, and the development of a computational electromagnetics technology based on CFD methods. Attention is given to the computational challenges standing in the way of establishing a computational environment that encompasses many technologies. 40 refs

  7. MILC Code Performance on High End CPU and GPU Supercomputer Clusters

    Science.gov (United States)

    DeTar, Carleton; Gottlieb, Steven; Li, Ruizi; Toussaint, Doug

    2018-03-01

    With recent developments in parallel supercomputing architecture, many core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, memory hierarchy, and programming complexity. It has been necessary to adapt the MILC code to these new processors starting with NVIDIA GPUs, and more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.

  8. MILC Code Performance on High End CPU and GPU Supercomputer Clusters

    Directory of Open Access Journals (Sweden)

    DeTar Carleton

    2018-01-01

    Full Text Available With recent developments in parallel supercomputing architecture, many core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, memory hierarchy, and programming complexity. It has been necessary to adapt the MILC code to these new processors starting with NVIDIA GPUs, and more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.

  9. Solving sparse linear least squares problems on some supercomputers by using large dense blocks

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Ostromsky, T; Sameh, A

    1997-01-01

    Efficient subroutines for dense matrix computations have recently been developed and are available on many high-speed computers. On some computers the speed of many dense matrix operations is near to the peak performance. For sparse matrices, storage and operations can be saved by operating on and storing only the nonzero elements. However, the price is a great degradation of the speed of computations on supercomputers (due to the use of indirect addresses, the need to insert new nonzeros in the sparse storage scheme, the lack of data locality, etc.). On many high-speed computers a dense matrix technique is therefore preferable to a sparse matrix technique when the matrices are not large, because the high computational speed fully compensates for the disadvantages of using more arithmetic operations and more storage. For very large matrices the computations must be organized as a sequence of tasks in each...

  10. An Optimized Parallel FDTD Topology for Challenging Electromagnetic Simulations on Supercomputers

    Directory of Open Access Journals (Sweden)

    Shugang Jiang

    2015-01-01

    Full Text Available It may not be a challenge to run a Finite-Difference Time-Domain (FDTD) code for electromagnetic simulations on a supercomputer with more than ten thousand CPU cores; however, to make the FDTD code work with the highest efficiency is a challenge. In this paper, the performance of parallel FDTD is optimized through MPI (message passing interface) virtual topology, based on which a communication model is established. The general rules of optimal topology are presented according to the model. The performance of the method is tested and analyzed on three high performance computing platforms with different architectures in China. Simulations including an airplane with a 700-wavelength wingspan, and a complex microstrip antenna array with nearly 2000 elements are performed very efficiently using a maximum of 10240 CPU cores.
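    The MPI virtual-topology idea used here can be illustrated with a 2D Cartesian communicator: neighbors for halo exchange come from a Cartesian shift, and reorder=True lets the MPI runtime place ranks to match the physical network. A small Python sketch assuming mpi4py (not the authors' FDTD code):

```python
# 2D Cartesian MPI virtual topology for a domain-decomposed grid solver.
from mpi4py import MPI

comm = MPI.COMM_WORLD
dims = MPI.Compute_dims(comm.Get_size(), [0, 0])    # balanced 2D process grid
cart = comm.Create_cart(dims, periods=[False, False], reorder=True)

left, right = cart.Shift(direction=0, disp=1)       # neighbors for halo exchange
down, up = cart.Shift(direction=1, disp=1)
print(cart.Get_coords(cart.Get_rank()), (left, right, down, up))
```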

  11. Reliability Lessons Learned From GPU Experience With The Titan Supercomputer at Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Gallarno, George [Christian Brothers University; Rogers, James H [ORNL; Maxwell, Don E [ORNL

    2015-01-01

    The high computational capability of graphics processing units (GPUs) is enabling and driving the scientific discovery process at large-scale. The world's second fastest supercomputer for open science, Titan, has more than 18,000 GPUs that computational scientists use to perform scientific simulations and data analysis. Understanding of GPU reliability characteristics, however, is still in its nascent stage since GPUs have only recently been deployed at large-scale. This paper presents a detailed study of GPU errors and their impact on system operations and applications, describing experiences with the 18,688 GPUs on the Titan supercomputer as well as lessons learned in the process of efficient operation of GPUs at scale. These experiences are helpful to HPC sites which already have large-scale GPU clusters or plan to deploy GPUs in the future.

  12. EDF's experience with supercomputing and challenges ahead - towards multi-physics and multi-scale approaches

    International Nuclear Information System (INIS)

    Delbecq, J.M.; Banner, D.

    2003-01-01

    Nuclear power plants are a major asset of the EDF company. To remain so, particularly in a context of deregulation, three conditions must be met: competitiveness, safety and public acceptance. These stakes apply both to existing plants and to future reactors. The purpose of the presentation is to explain how supercomputing can help EDF to satisfy these requirements. Three examples are described in detail: ensuring optimal use of nuclear fuel under wholly safe conditions, understanding and simulating the material deterioration mechanisms and moving forward with numerical simulation for the performance of EDF's activities. In conclusion, a broader vision of EDF long term R and D in the field of numerical simulation is given and especially of five challenges taken up by EDF together with its industrial and scientific partners. (author)

  13. Development of a high performance eigensolver on the peta-scale next generation supercomputer system

    International Nuclear Information System (INIS)

    Imamura, Toshiyuki; Yamada, Susumu; Machida, Masahiko

    2010-01-01

    For present supercomputer systems, multicore and multisocket processors are necessary to build a system, and the choice of interconnect is essential. In addition, for effective development of a new code, high-performance, scalable, and reliable numerical software is one of the key items. ScaLAPACK and PETSc are well-known software packages for distributed-memory parallel computer systems. It is needless to say that software highly tuned for new architectures such as many-core processors must be chosen for real computation. In this study, we present a high-performance and highly scalable eigenvalue solver for the next-generation supercomputer system, the so-called 'K computer'. We have developed two versions, the standard version (eigen_s) and an enhanced-performance version (eigen_sx), both developed on the T2K cluster system housed at the University of Tokyo. Eigen_s employs the conventional algorithms: Householder tridiagonalization, the divide and conquer (DC) algorithm, and Householder back-transformation. They are carefully implemented with a blocking technique and a flexible two-dimensional data distribution to reduce the overhead of memory traffic and data transfer, respectively. Eigen_s performs excellently on the T2K system with 4096 cores (theoretical peak 37.6 TFLOPS), showing a fine performance of 3.0 TFLOPS on a matrix of dimension two hundred thousand. The enhanced version, eigen_sx, uses more advanced algorithms: the narrow-band reduction algorithm, DC for band matrices, and the block Householder back-transformation with WY representation. Even though this version is still at a test stage, it reaches 4.7 TFLOPS on the same matrix dimension as eigen_s. (author)
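    As a single-node reference point for the pipeline described above (Householder tridiagonalization, divide and conquer, back-transformation), LAPACK's divide-and-conquer driver can be called through SciPy. This is only a serial sanity-check sketch, assuming SciPy >= 1.5 for the driver keyword; it is unrelated to the eigen_s/eigen_sx codes themselves.

```python
# Serial dense symmetric eigensolve via LAPACK's divide-and-conquer ("evd") driver.
import numpy as np
from scipy.linalg import eigh

n = 1000
a = np.random.rand(n, n)
a = (a + a.T) / 2.0                  # symmetric test matrix
w, v = eigh(a, driver="evd")         # tridiagonalize + divide and conquer + back-transform
print(w[:5])                         # smallest eigenvalues
```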

  14. High Temporal Resolution Mapping of Seismic Noise Sources Using Heterogeneous Supercomputers

    Science.gov (United States)

    Paitz, P.; Gokhberg, A.; Ermert, L. A.; Fichtner, A.

    2017-12-01

    The time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems like earthquake fault zones, volcanoes, geothermal and hydrocarbon reservoirs. We present results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service providing seismic noise source maps for Central Europe with high temporal resolution. We use source imaging methods based on the cross-correlation of seismic noise records from all seismic stations available in the region of interest. The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept to provide the interested researchers worldwide with regular access to the noise source maps. The solution architecture includes the following sub-systems: (1) data acquisition responsible for collecting, on a periodic basis, raw seismic records from the European seismic networks, (2) high-performance noise source mapping application responsible for the generation of source maps using cross-correlation of seismic records, (3) back-end infrastructure for the coordination of various tasks and computations, (4) front-end Web interface providing the service to the end-users and (5) data repository. The noise source mapping itself rests on the measurement of logarithmic amplitude ratios in suitably pre-processed noise correlations, and the use of simplified sensitivity kernels. During the implementation we addressed various challenges, in particular, selection of data sources and transfer protocols, automation and monitoring of daily data downloads, ensuring the required data processing performance, design of a general service-oriented architecture for coordination of various sub-systems, and

  15. Harnessing Petaflop-Scale Multi-Core Supercomputing for Problems in Space Science

    Science.gov (United States)

    Albright, B. J.; Yin, L.; Bowers, K. J.; Daughton, W.; Bergen, B.; Kwan, T. J.

    2008-12-01

    The particle-in-cell kinetic plasma code VPIC has been migrated successfully to the world's fastest supercomputer, Roadrunner, a hybrid multi-core platform built by IBM for the Los Alamos National Laboratory. How this was achieved will be described and examples of state-of-the-art calculations in space science, in particular, the study of magnetic reconnection, will be presented. With VPIC on Roadrunner, we have performed, for the first time, plasma PIC calculations with over one trillion particles, >100× larger than calculations considered "heroic" by community standards. This allows examination of physics at unprecedented scale and fidelity. Roadrunner is an example of an emerging paradigm in supercomputing: the trend toward multi-core systems with deep hierarchies and where memory bandwidth optimization is vital to achieving high performance. Getting VPIC to perform well on such systems is a formidable challenge: the core algorithm is memory bandwidth limited with low compute-to-data ratio and requires random access to memory in its inner loop. That we were able to get VPIC to perform and scale well, achieving >0.374 Pflop/s and linear weak scaling on real physics problems on up to the full 12240-core Roadrunner machine, bodes well for harnessing these machines for our community's needs in the future. Many of the design considerations encountered commute to other multi-core and accelerated (e.g., via GPU) platforms and we modified VPIC with flexibility in mind. These will be summarized and strategies for how one might adapt a code for such platforms will be shared. Work performed under the auspices of the U.S. DOE by the LANS LLC Los Alamos National Laboratory. Dr. Bowers is a LANL Guest Scientist; he is presently at D. E. Shaw Research LLC, 120 W 45th Street, 39th Floor, New York, NY 10036.

  16. Efficient development of memory bounded geo-applications to scale on modern supercomputers

    Science.gov (United States)

    Räss, Ludovic; Omlin, Samuel; Licul, Aleksandar; Podladchikov, Yuri; Herman, Frédéric

    2016-04-01

    Numerical modeling is a key tool in the area of geosciences. The current challenge is to solve problems that are multi-physics and for which the length scale and the place of occurrence might not be known in advance. Also, the spatial extent of the investigated domain might strongly vary in size, ranging from millimeters for reactive transport to kilometers for glacier erosion dynamics. An efficient way to proceed is to develop simple but robust algorithms that perform well and scale on modern supercomputers and therefore permit very high-resolution simulations. We propose an efficient approach to solve memory-bounded real-world applications on modern supercomputer architectures. We optimize the software to run on our newly acquired state-of-the-art GPU cluster "octopus". Our approach shows promising preliminary results on important geodynamical and geomechanical problems: we have developed a Stokes solver for glacier flow and a poromechanical solver including complex rheologies for nonlinear waves in stressed porous rocks. We solve the system of partial differential equations on a regular Cartesian grid and use an iterative finite difference scheme with preconditioning of the residuals. The MPI communication happens only locally (point-to-point); this method is known to scale linearly by construction. The "octopus" GPU cluster, which we use for the computations, has been designed to achieve maximal data transfer throughput at minimal hardware cost. It is composed of twenty compute nodes, each hosting four Nvidia Titan X GPU accelerators. These high-density nodes are interconnected with a parallel (dual-rail) FDR InfiniBand network. Our efforts show promising preliminary results for the different physics investigated. The glacier flow solver achieves good accuracy in the relevant benchmarks and the coupled poromechanical solver permits to explain previously unresolvable focused fluid flow as a natural outcome of the porosity setup. In both cases

  17. 76 FR 1386 - Safety Zone; Centennial of Naval Aviation Kickoff, San Diego Bay, San Diego, CA

    Science.gov (United States)

    2011-01-10

    ...-AA00 Safety Zone; Centennial of Naval Aviation Kickoff, San Diego Bay, San Diego, CA AGENCY: Coast... zone on the navigable waters of San Diego Bay in San Diego, CA in support of the Centennial of Naval... February 12, 2010, the Centennial of Naval Aviation Kickoff will take place in San Diego Bay. In support of...

  18. Riparian Habitat - San Joaquin River

    Data.gov (United States)

    California Natural Resource Agency — The immediate focus of this study is to identify, describe and map the extent and diversity of riparian habitats found along the main stem of the San Joaquin River,...

  19. 78 FR 53243 - Safety Zone; TriRock San Diego, San Diego Bay, San Diego, CA

    Science.gov (United States)

    2013-08-29

    ... this rule because the logistical details of the San Diego Bay triathlon swim were not finalized nor... September 22, 2013. (c) Definitions. The following definition applies to this section: Designated...

  20. High temporal resolution mapping of seismic noise sources using heterogeneous supercomputers

    Science.gov (United States)

    Gokhberg, Alexey; Ermert, Laura; Paitz, Patrick; Fichtner, Andreas

    2017-04-01

    Time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems. Significant interest in seismic noise source maps with high temporal resolution (days) is expected to come from a number of domains, including natural resources exploration, analysis of active earthquake fault zones and volcanoes, as well as geothermal and hydrocarbon reservoir monitoring. Currently, knowledge of noise sources is insufficient for high-resolution subsurface monitoring applications. Near-real-time seismic data, as well as advanced imaging methods to constrain seismic noise sources have recently become available. These methods are based on the massive cross-correlation of seismic noise records from all available seismic stations in the region of interest and are therefore very computationally intensive. Heterogeneous massively parallel supercomputing systems introduced in the recent years combine conventional multi-core CPU with GPU accelerators and provide an opportunity for manifold increase and computing performance. Therefore, these systems represent an efficient platform for implementation of a noise source mapping solution. We present the first results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service that provides seismic noise source maps for Central Europe with high temporal resolution (days to few weeks depending on frequency and data availability). The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept in order to provide the interested external researchers the regular access to the noise source maps. The solution architecture includes the following sub-systems: (1) data acquisition responsible for

  1. NASA Center for Climate Simulation (NCCS) Presentation

    Science.gov (United States)

    Webster, William P.

    2012-01-01

    The NASA Center for Climate Simulation (NCCS) offers integrated supercomputing, visualization, and data interaction technologies to enhance NASA's weather and climate prediction capabilities. It serves hundreds of users at NASA Goddard Space Flight Center, as well as other NASA centers, laboratories, and universities across the US. Over the past year, NCCS has continued expanding its data-centric computing environment to meet the increasingly data-intensive challenges of climate science. We doubled our Discover supercomputer's peak performance to more than 800 teraflops by adding 7,680 Intel Xeon Sandy Bridge processor-cores and most recently 240 Intel Xeon Phi Many Integrated Core (MIC) co-processors. A supercomputing-class analysis system named Dali gives users rapid access to their data on Discover and high-performance software including the Ultra-scale Visualization Climate Data Analysis Tools (UV-CDAT), with interfaces from user desktops and a 17- by 6-foot visualization wall. NCCS also is exploring highly efficient climate data services and management with a new MapReduce/Hadoop cluster while augmenting its data distribution to the science community. Using NCCS resources, NASA completed its modeling contributions to the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report this summer as part of the ongoing Coupled Model Intercomparison Project Phase 5 (CMIP5). Ensembles of simulations run on Discover reached back to the year 1000 to test model accuracy and projected climate change through the year 2300 based on four different scenarios of greenhouse gases, aerosols, and land use. The data resulting from several thousand IPCC/CMIP5 simulations, as well as a variety of other simulation, reanalysis, and observation datasets, are available to scientists and decision makers through an enhanced NCCS Earth System Grid Federation Gateway. Worldwide downloads have totaled over 110 terabytes of data.

  2. SANS studies of polymers

    International Nuclear Information System (INIS)

    Wignall, G.D.

    1985-01-01

    Some of the information about polymer structure provided by the application of small-angle neutron scattering is reviewed herein. Information about polymer structure as examined in H2O/D2O mixtures is also provided. Examples of results from experiments performed at the National Center for Small-Angle Scattering Research are included

  3. Crustal structure of the coastal and marine San Francisco Bay region, California

    Science.gov (United States)

    Parsons, Tom

    2002-01-01

    As of the time of this writing, the San Francisco Bay region is home to about 6.8 million people, ranking fifth among population centers in the United States. Most of these people live on the coastal lands along San Francisco Bay, the Sacramento River delta, and the Pacific coast. The region straddles the tectonic boundary between the Pacific and North American Plates and is crossed by several strands of the San Andreas Fault system. These faults, which are stressed by about 4 cm of relative plate motion each year, pose an obvious seismic hazard.

  4. Earth and environmental science in the 1980's: Part 1: Environmental data systems, supercomputer facilities and networks

    Science.gov (United States)

    1986-01-01

    Overview descriptions of on-line environmental data systems, supercomputer facilities, and networks are presented. Each description addresses the concepts of content, capability, and user access relevant to the point of view of potential utilization by the Earth and environmental science community. The information on similar systems or facilities is presented in parallel fashion to encourage and facilitate intercomparison. In addition, summary sheets are given for each description, and a summary table precedes each section.

  5. The BlueGene/L Supercomputer and Quantum ChromoDynamics

    International Nuclear Information System (INIS)

    Vranas, P; Soltz, R

    2006-01-01

    In summary our update contains: (1) Perfect speedup sustaining 19.3% of peak for the Wilson D-slash Dirac operator. (2) Measurements of the full Conjugate Gradient (CG) inverter that inverts the Dirac operator. The CG inverter contains two global sums over the entire machine. Nevertheless, our measurements retain perfect speedup scaling, demonstrating the robustness of our methods. (3) We ran on the largest BG/L system, the LLNL 64-rack BG/L supercomputer, and obtained a sustained speed of 59.1 TFlops. Furthermore, the speedup scaling of the Dirac operator and of the CG inverter is perfect all the way up to the full size of the machine, 131,072 cores (please see Figure II). The local lattice is rather small (4 x 4 x 4 x 16), while the total lattice (128 x 128 x 256 x 32 lattice sites in all) has long been a lattice QCD goal for thermodynamic studies. This speed is about five times the speed we quoted in our submission. As we have pointed out in our paper, QCD is notoriously sensitive to network and memory latencies, has a relatively high communication-to-computation ratio which cannot be overlapped on BG/L in virtual node mode, and as an application is in a class of its own. The above results are thrilling to us and realize a 30-year-long dream for lattice QCD.
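    The sensitivity to global sums mentioned above comes from the two inner products inside each conjugate-gradient iteration. The sketch below shows that structure in Python/mpi4py with a generic operator; it is a schematic CG, not the BG/L QCD inverter, and apply_A (the local matrix application plus any halo exchange) is assumed to be supplied by the caller.

```python
# Schematic distributed conjugate gradient: the only global communication per
# iteration is the two inner products, each realized as an MPI Allreduce.
import numpy as np
from mpi4py import MPI

def global_dot(a, b, comm):
    local = np.array([np.dot(a, b)])
    total = np.zeros(1)
    comm.Allreduce(local, total, op=MPI.SUM)
    return float(total[0])

def cg(apply_A, b_local, comm=MPI.COMM_WORLD, tol=1e-8, maxit=1000):
    x = np.zeros_like(b_local)
    r = b_local.copy()
    p = r.copy()
    rs = global_dot(r, r, comm)
    for _ in range(maxit):
        Ap = apply_A(p)                          # local mat-vec (+ halo exchange)
        alpha = rs / global_dot(p, Ap, comm)     # first global sum
        x += alpha * p
        r -= alpha * Ap
        rs_new = global_dot(r, r, comm)          # second global sum
        if rs_new < tol * tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```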

  6. Parallel supercomputing: Advanced methods, algorithms, and software for large-scale linear and nonlinear problems

    Energy Technology Data Exchange (ETDEWEB)

    Carey, G.F.; Young, D.M.

    1993-12-31

    The program outlined here is directed to research on methods, algorithms, and software for distributed parallel supercomputers. Of particular interest are finite element methods and finite difference methods together with sparse iterative solution schemes for scientific and engineering computations of very large-scale systems. Both linear and nonlinear problems will be investigated. In the nonlinear case, applications with bifurcation to multiple solutions will be considered using continuation strategies. The parallelizable numerical methods of particular interest are a family of partitioning schemes embracing domain decomposition, element-by-element strategies, and multi-level techniques. The methods will be further developed incorporating parallel iterative solution algorithms with associated preconditioners in parallel computer software. The schemes will be implemented on distributed memory parallel architectures such as the CRAY MPP, Intel Paragon, the NCUBE3, and the Connection Machine. We will also consider other new architectures such as the Kendall-Square (KSQ) and proposed machines such as the TERA. The applications will focus on large-scale three-dimensional nonlinear flow and reservoir problems with strong convective transport contributions. These are legitimate grand challenge class computational fluid dynamics (CFD) problems of significant practical interest to DOE. The methods developed and algorithms will, however, be of wider interest.

  7. 369 TFlop/s molecular dynamics simulations on the Roadrunner general-purpose heterogeneous supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Swaminarayan, Sriram [Los Alamos National Laboratory; Germann, Timothy C [Los Alamos National Laboratory; Kadau, Kai [Los Alamos National Laboratory; Fossum, Gordon C [IBM CORPORATION

    2008-01-01

    The authors present timing and performance numbers for a short-range parallel molecular dynamics (MD) code, SPaSM, that has been rewritten for the heterogeneous Roadrunner supercomputer. Each Roadrunner compute node consists of two AMD Opteron dual-core microprocessors and four PowerXCell 8i enhanced Cell microprocessors, so that there are four MPI ranks per node, each with one Opteron and one Cell. The interatomic forces are computed on the Cells (each with one PPU and eight SPU cores), while the Opterons are used to direct inter-rank communication and perform I/O-heavy periodic analysis, visualization, and checkpointing tasks. The performance measured for the initial implementation of a standard Lennard-Jones pair-potential benchmark reached a peak of 369 Tflop/s double-precision floating-point performance on the full Roadrunner system (27.7% of peak), corresponding to 124 MFlop/s per watt and approximately 3.69 MFlop/s per dollar. The authors demonstrate an initial target application, the jetting and ejection of material from a shocked surface.
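
    For readers unfamiliar with the benchmark, the arithmetic that the Cells accelerate is the Lennard-Jones pair interaction. The sketch below shows only that per-pair computation in plain NumPy; the cutoff and parameters are illustrative assumptions, not SPaSM's actual settings, and a production code would add cell lists, domain decomposition and the Cell offload described above.

```python
# Hedged sketch of a Lennard-Jones pair-potential kernel (illustrative only).
import numpy as np

def lj_forces(positions, epsilon=1.0, sigma=1.0, rcut=2.5):
    n = len(positions)
    forces = np.zeros_like(positions)
    potential = 0.0
    for i in range(n - 1):
        d = positions[i + 1:] - positions[i]           # vectors to later particles
        r2 = np.einsum('ij,ij->i', d, d)
        mask = r2 < rcut * rcut
        inv_r2 = sigma * sigma / r2[mask]
        inv_r6 = inv_r2 ** 3
        # force magnitude / r^2: 24*eps*(2*(sigma/r)^12 - (sigma/r)^6) / r^2
        fmag = 24.0 * epsilon * (2.0 * inv_r6 ** 2 - inv_r6) / r2[mask]
        fij = fmag[:, None] * d[mask]                  # force on particle j from i
        forces[i] -= fij.sum(axis=0)                   # Newton's third law
        forces[i + 1:][mask] += fij
        potential += 4.0 * epsilon * (inv_r6 ** 2 - inv_r6).sum()
    return forces, potential

if __name__ == "__main__":
    pos = np.random.rand(64, 3) * 5.0
    f, u = lj_forces(pos)
    print(u, np.abs(f.sum(axis=0)).max())   # net force ~ 0 by Newton's third law
```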

  8. A user-friendly web portal for T-Coffee on supercomputers

    Directory of Open Access Journals (Sweden)

    Koetsier Jos

    2011-05-01

    Full Text Available Abstract Background Parallel T-Coffee (PTC) was the first parallel implementation of the T-Coffee multiple sequence alignment tool. It is based on MPI and RMA mechanisms. Its purpose is to reduce the execution time of large-scale sequence alignments. It can be run on distributed memory clusters, allowing users to align data sets consisting of hundreds of proteins within a reasonable time. However, most of the potential users of this tool are not familiar with the use of grids or supercomputers. Results In this paper we show how PTC can be easily deployed and controlled on a supercomputer architecture using a web portal developed using Rapid. Rapid is a tool for efficiently generating standardized portlets for a wide range of applications, and the approach described here is generic enough to be applied to other applications, or to deploy PTC on different HPC environments. Conclusions The PTC portal allows users to upload a large number of sequences to be aligned by the parallel version of T-Coffee that cannot be aligned by a single machine due to memory and execution time constraints. The web portal provides a user-friendly solution.

  9. A Parallel Supercomputer Implementation of a Biological Inspired Neural Network and its use for Pattern Recognition

    International Nuclear Information System (INIS)

    De Ladurantaye, Vincent; Lavoie, Jean; Bergeron, Jocelyn; Parenteau, Maxime; Lu Huizhong; Pichevar, Ramin; Rouat, Jean

    2012-01-01

    A parallel implementation of a large spiking neural network is proposed and evaluated. The neural network implements the binding-by-synchrony process using the Oscillatory Dynamic Link Matcher (ODLM). Scalability, speed and performance are compared for 2 implementations: Message Passing Interface (MPI) and Compute Unified Device Architecture (CUDA), running on clusters of multicore supercomputers and NVIDIA graphical processing units respectively. A global spiking list that represents at each instant the state of the neural network is described. This list indexes each neuron that fires during the current simulation time so that the influence of their spikes is simultaneously processed on all computing units. Our implementation shows good scalability for very large networks. A complex and large spiking neural network has been implemented in parallel with success, thus paving the road towards real-life applications based on networks of spiking neurons. MPI offers better scalability than CUDA, while the CUDA implementation on a GeForce GTX 285 gives the best cost-to-performance ratio. When running the neural network on the GTX 285, the processing speed is comparable to the MPI implementation on RQCHP's Mammouth parallel cluster with 64 nodes (128 cores).
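
    A minimal sketch of the "global spiking list" idea may help: each MPI rank advances its own block of neurons, and an allgather makes the indices of all neurons that fired this step visible everywhere, so every rank can apply their influence at once. The leaky integrate-and-fire dynamics and the uniform synaptic kick below are placeholders, not the ODLM model itself; mpi4py and NumPy are assumed.

```python
# Hedged sketch of a global spiking list shared across MPI ranks each time step.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

neurons_per_rank = 1000
offset = rank * neurons_per_rank
v = np.random.rand(neurons_per_rank)          # membrane potentials of local neurons
threshold, leak, drive = 1.0, 0.95, 0.08

for step in range(100):
    v = leak * v + drive * np.random.rand(neurons_per_rank)
    fired_local = np.flatnonzero(v >= threshold)
    v[fired_local] = 0.0                      # reset neurons that spiked

    # build the global spiking list for this time step (global neuron indices)
    all_spikes = np.concatenate(comm.allgather(fired_local + offset))

    # every rank now applies the influence of all spikes to its own neurons;
    # a uniform excitatory kick stands in for the real synaptic weight matrix
    v += 0.001 * len(all_spikes)
```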

  10. Modeling radiative transport in ICF plasmas on an IBM SP2 supercomputer

    International Nuclear Information System (INIS)

    Johansen, J.A.; MacFarlane, J.J.; Moses, G.A.

    1995-01-01

    At the University of Wisconsin-Madison the authors have integrated a collisional-radiative-equilibrium model into their CONRAD radiation-hydrodynamics code. This integrated package allows them to accurately simulate the transport processes involved in ICF plasmas, including the important effects of self-absorption of line radiation. However, as they increase the amount of atomic structure utilized in their transport models, the computational demands increase nonlinearly. In an attempt to meet this increased computational demand, they have recently embarked on a mission to parallelize the CONRAD program. The parallel CONRAD development is being performed on an IBM SP2 supercomputer. The parallelism is based on a message passing paradigm, and is being implemented using PVM. At the present time they have determined that approximately 70% of the sequential program can be executed in parallel. Accordingly, they expect that the parallel version will yield a speedup on the order of three times that of the sequential version. This translates into only 10 hours of execution time for the parallel version, whereas the sequential version required 30 hours.
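
    The factor-of-three expectation is simply Amdahl's law applied to the stated 70% parallel fraction; the following check is our own arithmetic, not part of the original abstract:

```latex
% Amdahl's law for parallel fraction p on N processors
S(N) = \frac{1}{(1-p) + p/N}, \qquad
S_{\max} = \lim_{N \to \infty} S(N) = \frac{1}{1-p} = \frac{1}{0.3} \approx 3.3
\quad (p = 0.7).
```

    With a best-case speedup of about 3.3, the 30-hour sequential run drops to roughly 30 h / 3 ≈ 10 h, matching the execution times quoted above.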

  11. Portable implementation model for CFD simulations. Application to hybrid CPU/GPU supercomputers

    Science.gov (United States)

    Oyarzun, Guillermo; Borrell, Ricard; Gorobets, Andrey; Oliva, Assensi

    2017-10-01

    Nowadays, high performance computing (HPC) systems are experiencing a disruptive moment, with a variety of novel architectures and frameworks and no clarity as to which one is going to prevail. In this context, the portability of codes across different architectures is of major importance. This paper presents a portable implementation model based on an algebraic operational approach for direct numerical simulation (DNS) and large eddy simulation (LES) of incompressible turbulent flows using unstructured hybrid meshes. The strategy proposed consists of representing the whole time-integration algorithm using only three basic algebraic operations: the sparse matrix-vector product (SpMV), the linear combination of vectors, and the dot product. The main idea is based on decomposing the nonlinear operators into a concatenation of two SpMV operations. This provides high modularity and portability. An exhaustive analysis of the proposed implementation for hybrid CPU/GPU supercomputers has been conducted, with tests using up to 128 GPUs. The main objective is to understand the challenges of implementing CFD codes on new architectures.
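
    The "three basic algebraic operations" idea can be sketched in a few lines: once a time step is written only in terms of SpMV, vector linear combinations (axpy) and dot products, porting the code reduces to porting those three kernels. The matrices below are random placeholders standing in for the discrete convective and diffusive operators, not the paper's actual discretization; SciPy is assumed.

```python
# Hedged sketch: one explicit time step expressed only through SpMV, axpy, dot.
import numpy as np
import scipy.sparse as sp

n = 10_000
D = sp.random(n, n, density=1e-3, format='csr')   # placeholder diffusive operator
C = sp.random(n, n, density=1e-3, format='csr')   # placeholder convective operator
u = np.random.rand(n)
dt = 1e-3

def spmv(A, x):          # kernel 1: sparse matrix-vector product
    return A @ x

def axpy(a, x, y):       # kernel 2: linear combination of vectors
    return a * x + y

def dot(x, y):           # kernel 3: dot product (used e.g. for monitoring norms)
    return float(np.dot(x, y))

for step in range(10):
    rhs = axpy(-1.0, spmv(C, u), spmv(D, u))      # R(u) = D u - C u
    u = axpy(dt, rhs, u)                          # forward-Euler update
    norm = dot(u, u) ** 0.5                       # solution norm per step
```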

  12. PFLOTRAN: Reactive Flow & Transport Code for Use on Laptops to Leadership-Class Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Glenn E.; Lichtner, Peter C.; Lu, Chuan; Mills, Richard T.

    2012-04-18

    PFLOTRAN, a next-generation reactive flow and transport code for modeling subsurface processes, has been designed from the ground up to run efficiently on machines ranging from leadership-class supercomputers to laptops. Based on an object-oriented design, the code is easily extensible to incorporate additional processes. It can interface seamlessly with Fortran 9X, C and C++ codes. Domain decomposition parallelism is employed, with the PETSc parallel framework used to manage parallel solvers, data structures and communication. Features of the code include a modular input file, implementation of high-performance I/O using parallel HDF5, ability to perform multiple realization simulations with multiple processors per realization in a seamless manner, and multiple modes for multiphase flow and multicomponent geochemical transport. Chemical reactions currently implemented in the code include homogeneous aqueous complexing reactions and heterogeneous mineral precipitation/dissolution, ion exchange, surface complexation and a multirate kinetic sorption model. PFLOTRAN has demonstrated petascale performance using 2^17 processor cores with over 2 billion degrees of freedom. Accomplishments achieved to date include applications to the Hanford 300 Area and modeling CO2 sequestration in deep geologic formations.

  13. Benchmarking Further Single Board Computers for Building a Mini Supercomputer for Simulation of Telecommunication Systems

    Directory of Open Access Journals (Sweden)

    Gábor Lencse

    2016-01-01

    Full Text Available Parallel Discrete Event Simulation (PDES) with the conservative synchronization method can be efficiently used for the performance analysis of telecommunication systems because of their good lookahead properties. For PDES, a cost-effective execution platform may be built by using single board computers (SBCs), which offer relatively high computation capacity compared to their price or power consumption, and especially to the space they take up. A benchmarking method is proposed and its operation is demonstrated by benchmarking ten different SBCs, namely Banana Pi, Beaglebone Black, Cubieboard2, Odroid-C1+, Odroid-U3+, Odroid-XU3 Lite, Orange Pi Plus, Radxa Rock Lite, Raspberry Pi Model B+, and Raspberry Pi 2 Model B+. Their benchmarking results are compared to find out which one should be used for building a mini supercomputer for parallel discrete-event simulation of telecommunication systems. The SBCs are also used to build a heterogeneous cluster and the performance of the cluster is tested, too.

  14. Marketing San Juan Basin gas

    International Nuclear Information System (INIS)

    Posner, D.M.

    1988-01-01

    Marketing natural gas produced in the San Juan Basin of New Mexico and Colorado principally involves four gas pipeline companies with significant facilities in the basin. The system capacity, transportation rates, regulatory status, and market access of each of these companies is evaluated. Because of excess gas supplies available to these pipeline companies, producers can expect improved take levels and prices by selling gas directly to end users and utilities as opposed to selling gas to the pipelines for system supply. The complexities of transporting gas today suggest that the services of an independent gas marketing company may be beneficial to smaller producers with gas supplies in the San Juan Basin

  15. Update: San Andreas Fault experiment

    Science.gov (United States)

    Christodoulidis, D. C.; Smith, D. E.

    1984-01-01

    Satellite laser ranging techniques are used to monitor the broad motion of the tectonic plates comprising the San Andreas Fault System. The San Andreas Fault Experiment (SAFE) has progressed through upgrades made to laser system hardware and improvements in the modeling capabilities of the spaceborne laser targets. Of special note is the 1976 launch of the Laser Geodynamics Satellite (LAGEOS), NASA's only completely dedicated laser satellite. The results of plate motion projected onto this 896 km measured line over the past eleven years are summarized and intercompared.

  16. Developing solar power programs : San Francisco's experience

    International Nuclear Information System (INIS)

    Schwartz, F.

    2006-01-01

    This keynote address discussed an array of solar programs initiated in government-owned buildings in San Francisco. The programs were strongly supported by the city's mayor and the voting public. Because the city is known for its fog and varying microclimates, 11 monitoring stations were set up throughout the city to determine viable locations for the successful application of solar technologies. It was observed that 90 per cent of the available sunshine occurred in the central valley, whereas fog along the Pacific shore was problematic. Seven of the monitoring sites showed excellent results. Relationships with various city departments were described, as well as details of study loads, load profiles, electrical systems, roofs and the structural capabilities of the selected government buildings. There was a focus on developing good relations with the local utility. The Moscone Convention Center was selected for the program's flagship installation, a 675 kW solar project which eventually won the US EPA Green Power Award for 2004 and received high press coverage. Cost of the project was $4.2 million. 825,000 kWh of solar electricity was generated, along with 4,500,000 kWh of electricity saved annually from efficiency measures, resulting in a net reduction of 5,325,000 kWh. Savings on utility bills for the center were an estimated $1,078,000. A pipeline of solar projects followed, with installations at a sewage treatment plant and a large recycling depot. A program of smaller sites included libraries, schools and health facilities. Details of plans to apply solar technology to a 500-acre redevelopment site in southeast San Francisco with an aging and inadequate electrical infrastructure were described. A model of efficient solar housing for the development was presented, with details of insulation, windows, heating, ventilation and air-conditioning (HVAC), water heating, lighting, appliances and a 1.2 kilowatt solar system. Peak demand reductions were also presented. tabs., figs

  17. Vegetation - San Felipe Valley [ds172

    Data.gov (United States)

    California Natural Resource Agency — This Vegetation Map of the San Felipe Valley Wildlife Area in San Diego County, California is based on vegetation samples collected in the field in 2002 and 2005 and...

  18. San Francisco Bay Water Quality Improvement Fund

    Science.gov (United States)

    EPA's grant program to protect and restore San Francisco Bay. The San Francisco Bay Water Quality Improvement Fund (SFBWQIF) has invested in 58 projects, along with 70 partners, contributing to restoring wetlands, improving water quality, and reducing polluted runoff.

  19. The San Bernabe power substation; La subestacion San Bernabe

    Energy Technology Data Exchange (ETDEWEB)

    Chavez Sanudo, Andres D. [Luz y Fuerza del Centro, Mexico, D. F. (Mexico)

    1997-12-31

    The first planning studies that gave rise to the San Bernabe substation go back to 1985. The main circumstance supporting this decision was the gradual restriction on electric power generation experienced by the Miguel Aleman Hydro System, up to its complete disappearance, in order to give priority to the potable water supply through the Cutzamala pumping system, which is an important source for Mexico City and the State of Mexico. In this document the author describes the construction project of the San Bernabe substation; the technological experience gained during construction is discussed, and its geographical location is shown, together with a one-line diagram of the substation.

  20. 33 CFR 165.754 - Safety Zone: San Juan Harbor, San Juan, PR.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Safety Zone: San Juan Harbor, San Juan, PR. 165.754 Section 165.754 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND... Zone: San Juan Harbor, San Juan, PR. (a) Regulated area. A moving safety zone is established in the...

  1. 76 FR 45693 - Safety Zone; San Diego POPS Fireworks, San Diego, CA

    Science.gov (United States)

    2011-08-01

    ...-AA00 Safety Zone; San Diego POPS Fireworks, San Diego, CA AGENCY: Coast Guard, DHS. ACTION: Temporary... San Diego Bay in support of the San Diego POPS Fireworks. This safety zone is necessary to provide for... of the waterway during scheduled fireworks events. Persons and vessels will be prohibited from...

  2. AHPCRC (Army High Performance Computing Research Center) Bulletin. Volume 1, Issue 2

    Science.gov (United States)

    2011-01-01

    area and the researchers working on these projects. Also inside: news from the AHPCRC consortium partners at Morgan State University and the NASA ... Computing Research Center is provided by the supercomputing and research facilities at Stanford University and at the NASA Ames Research Center at ... atomic and molecular level, he said. He noted that "every general would like to have" a Star Trek-like holodeck, where holographic avatars could

  3. Effective Analysis of NGS Metagenomic Data with Ultra-Fast Clustering Algorithms (MICW - Metagenomics Informatics Challenges Workshop: 10K Genomes at a Time)

    Energy Technology Data Exchange (ETDEWEB)

    Li, Weizhong

    2011-10-12

    San Diego Supercomputer Center's Weizhong Li on "Effective Analysis of NGS Metagenomic Data with Ultra-fast Clustering Algorithms" at the Metagenomics Informatics Challenges Workshop held at the DOE JGI on October 12-13, 2011.

  4. Characterization of aerosols in the Metropolitan Area of San Jose

    International Nuclear Information System (INIS)

    Mejias Perez, J.A.

    1997-07-01

    The objective of the present study was to develop a profile of particulate-matter contamination and to characterize the aerosols collected in the Metropolitan Area of San Jose (Costa Rica). For that purpose, a sampling campaign was carried out at three points of the city of San Jose, differentiated by their degree of activity: the center of San Jose (Central Fire Station), San Isidro de Coronado -Canton of Vazquez de Coronado- (Municipality) and Escazu (Municipality). The campaign was carried out from April 4 to July 4, 1996 (summer-winter transition), in two 8-hour periods: 8 a.m. to 4 p.m. and 8 p.m. to 4 a.m. The aerosols were collected using Gent PM-10 samplers, on polycarbonate filters of 0.4 μm and 8 μm in cascade, with an average flow of 15 L/min, and the average composition of the aerosols present was determined. The concentrations of most of the anions were obtained by means of high-resolution ion chromatography, and those of the main cations by atomic absorption spectrophotometry with electrothermal atomization. The spatial and temporal variations of the concentrations were evaluated, as well as their correlation with the meteorological variables. (S. Grainger) [es

  5. ASTER Flyby of San Francisco

    Science.gov (United States)

    2002-01-01

    The Advanced Spaceborne Thermal Emission and Reflection Radiometer, ASTER, is an international project: the instrument was supplied by Japan's Ministry of International Trade and Industry. A joint US/Japan science team developed algorithms for science data products, and is validating instrument performance. With its 14 spectral bands, extremely high spatial resolution, and 15-meter along-track stereo capability, ASTER is the zoom lens of the Terra satellite. The primary mission goals are to characterize the Earth's surface and to monitor dynamic events and processes that influence habitability at human scales. ASTER's monitoring and mapping capabilities are illustrated by this series of images of the San Francisco area. The visible and near-infrared image reveals suspended sediment in the bays, vegetation health, and details of the urban environment. Flying over San Francisco, we see the downtown and shadows of the large buildings. Past the Golden Gate Bridge and Alcatraz Island, we cross San Pablo Bay and enter Suisun Bay. Turning south, we fly over the Berkeley and Oakland Hills. Large salt evaporation ponds come into view at the south end of San Francisco Bay. We turn northward and approach San Francisco Airport. Rather than landing and ending our flight, we see this as only the beginning of a 6-year mission to better understand the habitability of the world on which we live. Image courtesy of MITI, ERSDAC, JAROS, and the U.S./Japan ASTER Science Team.

  6. NASA's Climate in a Box: Desktop Supercomputing for Open Scientific Model Development

    Science.gov (United States)

    Wojcik, G. S.; Seablom, M. S.; Lee, T. J.; McConaughy, G. R.; Syed, R.; Oloso, A.; Kemp, E. M.; Greenseid, J.; Smith, R.

    2009-12-01

    NASA's High Performance Computing Portfolio in cooperation with its Modeling, Analysis, and Prediction program intends to make its climate and earth science models more accessible to a larger community. A key goal of this effort is to open the model development and validation process to the scientific community at large such that a natural selection process is enabled and results in a more efficient scientific process. One obstacle to others using NASA models is the complexity of the models and the difficulty in learning how to use them. This situation applies not only to scientists who regularly use these models but also non-typical users who may want to use the models such as scientists from different domains, policy makers, and teachers. Another obstacle to the use of these models is that access to high performance computing (HPC) accounts, from which the models are implemented, can be restrictive with long wait times in job queues and delays caused by an arduous process of obtaining an account, especially for foreign nationals. This project explores the utility of using desktop supercomputers in providing a complete ready-to-use toolkit of climate research products to investigators and on demand access to an HPC system. One objective of this work is to pre-package NASA and NOAA models so that new users will not have to spend significant time porting the models. In addition, the prepackaged toolkit will include tools, such as workflow, visualization, social networking web sites, and analysis tools, to assist users in running the models and analyzing the data. The system architecture to be developed will allow for automatic code updates for each user and an effective means with which to deal with data that are generated. We plan to investigate several desktop systems, but our work to date has focused on a Cray CX1. Currently, we are investigating the potential capabilities of several non-traditional development environments. While most NASA and NOAA models are

  7. Parallel simulation of tsunami inundation on a large-scale supercomputer

    Science.gov (United States)

    Oishi, Y.; Imamura, F.; Sugawara, D.

    2013-12-01

    An accurate prediction of tsunami inundation is important for disaster mitigation purposes. One approach is to approximate the tsunami wave source through an instant inversion analysis using real-time observation data (e.g., Tsushima et al., 2009) and then use the resulting wave source data in an instant tsunami inundation simulation. However, a bottleneck of this approach is the large computational cost of the non-linear inundation simulation, and the computational power of recent massively parallel supercomputers is helpful for enabling faster-than-real-time execution of a tsunami inundation simulation. Parallel computers have become approximately 1000 times faster in 10 years (www.top500.org), and so it is expected that very fast parallel computers will be more and more prevalent in the near future. Therefore, it is important to investigate how to efficiently conduct a tsunami simulation on parallel computers. In this study, we are targeting very fast tsunami inundation simulations on the K computer, currently the fastest Japanese supercomputer, which has a theoretical peak performance of 11.2 PFLOPS. One computing node of the K computer consists of 1 CPU with 8 cores that share memory, and the nodes are connected through a high-performance torus-mesh network. The K computer is designed for distributed-memory parallel computation, so we have developed a parallel tsunami model. Our model is based on the TUNAMI-N2 model of Tohoku University, which uses a leap-frog finite difference method. A grid nesting scheme is employed to apply high-resolution grids only at the coastal regions. To balance the computation load of each CPU in the parallelization, CPUs are first allocated to each nested layer in proportion to the number of grid points of the nested layer. Using CPUs allocated to each layer, 1-D domain decomposition is performed on each layer. In the parallel computation, three types of communication are necessary: (1) communication to adjacent neighbours for the
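
    The load-balancing rule described above can be sketched in a few lines: ranks are first divided among the nested layers in proportion to each layer's number of grid points, and each layer is then split by a 1-D domain decomposition. The layer sizes below are illustrative, not those of the actual TUNAMI-N2 setup.

```python
# Hedged sketch of proportional rank allocation plus 1-D decomposition per layer.

def allocate_ranks(layer_points, total_ranks):
    """Give each nested layer a share of ranks proportional to its grid points."""
    total_points = sum(layer_points)
    shares = [max(1, round(total_ranks * p / total_points)) for p in layer_points]
    # fix rounding so the shares add up exactly to total_ranks
    while sum(shares) > total_ranks:
        shares[shares.index(max(shares))] -= 1
    while sum(shares) < total_ranks:
        shares[shares.index(min(shares))] += 1
    return shares

def decompose_1d(ny, ranks):
    """1-D row decomposition of a layer with ny rows among 'ranks' processes."""
    base, extra = divmod(ny, ranks)
    rows = [base + (1 if r < extra else 0) for r in range(ranks)]
    starts = [sum(rows[:r]) for r in range(ranks)]
    return list(zip(starts, rows))           # (first row, number of rows) per rank

layers = [(1200, 900), (600, 600), (400, 300)]   # (nx, ny) of each nested layer
points = [nx * ny for nx, ny in layers]
shares = allocate_ranks(points, 64)
for (nx, ny), nranks in zip(layers, shares):
    print(nranks, decompose_1d(ny, nranks)[:2], "...")
```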

  8. ENERGY RESOURCES CENTER

    Energy Technology Data Exchange (ETDEWEB)

    Sternberg, Virginia

    1979-11-01

    First I will give a short history of this Center which has had three names and three moves (and one more in the offing) in three years. Then I will tell you about the accomplishments made in the past year. And last, I will discuss what has been learned and what is planned for the future. The Energy and Environment Information Center (EEIC), as it was first known, was organized in August 1975 in San Francisco as a cooperative venture by the Federal Energy Administration (FEA), Energy Research and Development Administration (ERDA) and the Environmental Protection Agency (EPA). These three agencies planned this effort to assist the public in obtaining information about energy and the environmental aspects of energy. The Public Affairs Offices of FEA, ERDA and EPA initiated the idea of the Center. One member from each agency worked at the Center, with assistance from the Lawrence Berkeley Laboratory Information Research Group (LBL IRG) and with on-site help from the EPA Library. The Center was set up in a corner of the EPA Library. FEA and ERDA each contributed one staff member on a rotating basis to cover the daily operation of the Center and money for books and periodicals. EPA contributed space, staff time for ordering, processing and indexing publications, and additional money for acquisitions. The LBL Information Research Group received funds from ERDA on a 189 FY 1976 research project to assist in the development of the Center as a model for future energy centers.

  9. 78 FR 19103 - Safety Zone; Spanish Navy School Ship San Sebastian El Cano Escort; Bahia de San Juan; San Juan, PR

    Science.gov (United States)

    2013-03-29

    ...-AA00 Safety Zone; Spanish Navy School Ship San Sebastian El Cano Escort; Bahia de San Juan; San Juan... temporary moving safety zone on the waters of Bahia de San Juan during the transit of the Spanish Navy... Channel entrance, and to protect the high ranking officials on board the Spanish Navy School Ship San...

  10. The Eastern California Shear Zone as the northward extension of the southern San Andreas Fault

    Science.gov (United States)

    Thatcher, Wayne R.; Savage, James C.; Simpson, Robert W.

    2016-01-01

    Cluster analysis offers an agnostic way to organize and explore features of the current GPS velocity field without reference to geologic information or physical models using information only contained in the velocity field itself. We have used cluster analysis of the Southern California Global Positioning System (GPS) velocity field to determine the partitioning of Pacific-North America relative motion onto major regional faults. Our results indicate the large-scale kinematics of the region is best described with two boundaries of high velocity gradient, one centered on the Coachella section of the San Andreas Fault and the Eastern California Shear Zone and the other defined by the San Jacinto Fault south of Cajon Pass and the San Andreas Fault farther north. The ~120 km long strand of the San Andreas between Cajon Pass and Coachella Valley (often termed the San Bernardino and San Gorgonio sections) is thus currently of secondary importance and carries lesser amounts of slip over most or all of its length. We show these first order results are present in maps of the smoothed GPS velocity field itself. They are also generally consistent with currently available, loosely bounded geologic and geodetic fault slip rate estimates that alone do not provide useful constraints on the large-scale partitioning we show here. Our analysis does not preclude the existence of smaller blocks and more block boundaries in Southern California. However, attempts to identify smaller blocks along and adjacent to the San Gorgonio section were not successful.
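
    To make the "agnostic" aspect concrete, the sketch below groups station velocities (east and north components only, no geology and no physical model) and maps the resulting labels to reveal boundaries of high velocity gradient. The synthetic velocities and the choice of k-means are our own illustrative assumptions; scikit-learn is assumed to be available, and the study's actual clustering algorithm is not specified in this abstract.

```python
# Hedged sketch of clustering a GPS velocity field without geologic input.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# two synthetic "blocks" moving at different rates (mm/yr), plus noise
block_a = rng.normal(loc=[-35.0, 25.0], scale=1.5, size=(200, 2))
block_b = rng.normal(loc=[-20.0, 15.0], scale=1.5, size=(200, 2))
velocities = np.vstack([block_a, block_b])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(velocities)

# stations sharing a label move coherently; plotting the labels at the station
# coordinates would reveal the boundaries of high velocity gradient
print(np.bincount(labels))
```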

  11. Scalable geocomputation: evolving an environmental model building platform from single-core to supercomputers

    Science.gov (United States)

    Schmitz, Oliver; de Jong, Kor; Karssenberg, Derek

    2017-04-01

    There is an increasing demand to run environmental models on a big scale: simulations over large areas at high resolution. The heterogeneity of available computing hardware such as multi-core CPUs, GPUs or supercomputers potentially provides significant computing power to fulfil this demand. However, this requires detailed knowledge of the underlying hardware, parallel algorithm design and the implementation thereof in an efficient system programming language. Domain scientists such as hydrologists or ecologists often lack this specific software engineering knowledge; their emphasis is (and should be) on exploratory building and analysis of simulation models. As a result, models constructed by domain specialists mostly do not take full advantage of the available hardware. A promising solution is to separate the model building activity from software engineering by offering domain specialists a model building framework with pre-programmed building blocks that they combine to construct a model. The model building framework, consequently, needs to have built-in capabilities to make full usage of the available hardware. Developing such a framework that provides understandable code for domain scientists while being runtime efficient at the same time poses several challenges for developers of such a framework. For example, optimisations can be performed on individual operations or the whole model, or tasks need to be generated for a well-balanced execution without explicitly knowing the complexity of the domain problem provided by the modeller. Ideally, a modelling framework supports the optimal use of available hardware whichever combination of model building blocks scientists use. We present our ongoing work on developing parallel algorithms for spatio-temporal modelling and demonstrate 1) PCRaster, an environmental software framework (http://www.pcraster.eu) providing spatio-temporal model building blocks and 2) parallelisation of about 50 of these building blocks using
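
    The separation of concerns described above can be illustrated with a toy stand-in (this is not the PCRaster API): the domain scientist writes the model as a composition of building blocks, while the framework side is free to dispatch each block to whatever hardware is available.

```python
# Hedged sketch: building blocks on one side, model composition on the other.
import numpy as np

# --- framework side: pre-programmed building blocks with a (trivial) backend ---
def slope(elevation):
    gy, gx = np.gradient(elevation)            # could be dispatched to GPU/threads
    return np.hypot(gx, gy)

def spread(seed, cost):
    return seed + cost                          # placeholder for a real operation

# --- domain-scientist side: the model is just a composition of blocks ---
def runoff_model(elevation, rainfall):
    s = slope(elevation)
    return spread(rainfall, s)

elevation = np.random.rand(512, 512)
rainfall = np.random.rand(512, 512)
result = runoff_model(elevation, rainfall)
```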

  12. 77 FR 34988 - Notice of Inventory Completion: San Diego State University, San Diego, CA

    Science.gov (United States)

    2012-06-12

    .... ACTION: Notice. SUMMARY: San Diego State University Archeology Collections Management Program has... that believes itself to be culturally affiliated with the human remains and associated funerary objects may contact San Diego State University Archeology Collections Management Program. Repatriation of the...

  13. Simulation of x-rays in refractive structure by the Monte Carlo method using the supercomputer SKIF

    International Nuclear Information System (INIS)

    Yaskevich, Yu.R.; Kravchenko, O.I.; Soroka, I.I.; Chembrovskij, A.G.; Kolesnik, A.S.; Serikova, N.V.; Petrov, P.V.; Kol'chevskij, N.N.

    2013-01-01

    The software 'Xray-SKIF' for simulating X-rays in refractive structures by the Monte Carlo method on the supercomputer SKIF BSU has been developed. The program generates a large number of rays propagated from a source to the refractive structure. Ray trajectories are calculated under the assumption of geometrical optics. Absorption is calculated for each ray inside the refractive structure. Dynamic arrays are used to store the calculated ray parameters, which allows the X-ray field distribution to be reconstructed very quickly for different detector positions. It was found that increasing the number of processors leads to a proportional decrease in calculation time: simulating 10^8 X-rays takes about 3 hours on 1 processor and about 6 minutes on 30 processors. 10^9 X-rays were calculated with 'Xray-SKIF', which allows the X-ray field behind the refractive structure to be reconstructed with a spatial resolution of 1 micron. (authors)
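
    A minimal sketch of this kind of Monte Carlo procedure: many rays are generated at a source, given a geometric-optics kick by a refractive lens, attenuated by absorption along their path inside the material, and binned at a detector plane. The parabolic-lens geometry and all material constants below are illustrative assumptions, not those of 'Xray-SKIF'.

```python
# Hedged sketch of Monte Carlo ray tracing with per-ray absorption.
import numpy as np

rng = np.random.default_rng(1)
n_rays = 100_000

aperture = 1.0e-3            # lens aperture [m] (assumed)
radius = 0.2e-3              # apex radius of curvature of the parabolic profile [m]
mu = 30.0                    # linear absorption coefficient [1/mm] (assumed)
delta = 3.0e-6               # refractive-index decrement (assumed)

# source: rays start with a random transverse position and a tiny divergence
y = rng.uniform(-aperture / 2, aperture / 2, n_rays)
theta = rng.normal(0.0, 1.0e-6, n_rays)

# material thickness crossed by each ray in a bi-parabolic lens: t(y) ~ y^2 / R
thickness_mm = (y ** 2 / radius) * 1.0e3
weight = np.exp(-mu * thickness_mm)          # per-ray absorption (Beer-Lambert)

# thin-lens refraction kick toward the axis, focal length f = R / (2*delta)
focal = radius / (2.0 * delta)
theta -= y / focal

# propagate to the detector plane and accumulate the weighted intensity profile
y_det = y + theta * focal
hist, edges = np.histogram(y_det, bins=200, range=(-aperture, aperture), weights=weight)
print("peak bin:", hist.argmax(), "peak intensity:", hist.max())
```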

  14. Species - San Diego Co. [ds121

    Data.gov (United States)

    California Natural Resource Agency — This is the Biological Observation Database point layer representing baseline observations of sensitive species (as defined by the MSCP) throughout San Diego County....

  15. Coherent 40 Gb/s SP-16QAM and 80 Gb/s PDM-16QAM in an Optimal Supercomputer Optical Switch Fabric

    DEFF Research Database (Denmark)

    Karinou, Fotini; Borkowski, Robert; Zibar, Darko

    2013-01-01

    We demonstrate, for the first time, the feasibility of using 40 Gb/s SP-16QAM and 80 Gb/s PDM-16QAM in an optimized cell-switching supercomputer optical interconnect architecture based on semiconductor optical amplifiers as ON/OFF gates.

  16. 75 FR 15611 - Safety Zone; United Portuguese SES Centennial Festa, San Diego Bay, San Diego, CA

    Science.gov (United States)

    2010-03-30

    ...-AA00 Safety Zone; United Portuguese SES Centennial Festa, San Diego Bay, San Diego, CA AGENCY: Coast... navigable waters of the San Diego Bay in support of the United Portuguese SES Centennial Festa. This... Centennial Festa, which will include a fireworks presentation originating from a tug and barge combination in...

  17. 78 FR 34123 - Notice of Inventory Completion: San Francisco State University NAGPRA Program, San Francisco, CA

    Science.gov (United States)

    2013-06-06

    ... completion of an inventory of human remains and associated funerary objects under the control of the San....R50000] Notice of Inventory Completion: San Francisco State University NAGPRA Program, San Francisco, CA... NAGPRA Program has completed an inventory of human remains and associated funerary objects, in...

  18. 78 FR 21403 - Notice of Inventory Completion: San Francisco State University NAGPRA Program, San Francisco, CA

    Science.gov (United States)

    2013-04-10

    ... completion of an inventory of human remains and associated funerary objects under the control of the San....R50000] Notice of Inventory Completion: San Francisco State University NAGPRA Program, San Francisco, CA... NAGPRA Program has completed an inventory of human remains and associated funerary objects, in...

  19. Width and dip of the southern San Andreas Fault at Salt Creek from modeling of geophysical data

    Science.gov (United States)

    Langenheim, Victoria; Athens, Noah D.; Scheirer, Daniel S.; Fuis, Gary S.; Rymer, Michael J.; Goldman, Mark R.; Reynolds, Robert E.

    2014-01-01

    We investigate the geometry and width of the southernmost stretch of the San Andreas Fault zone using new gravity and magnetic data along line 7 of the Salton Seismic Imaging Project. In the Salt Creek area of Durmid Hill, the San Andreas Fault coincides with a complex magnetic signature, with high-amplitude, short-wavelength magnetic anomalies superposed on a broader magnetic anomaly that is at least 5 km wide centered 2–3 km northeast of the fault. Marine magnetic data show that high-frequency magnetic anomalies extend more than 1 km west of the mapped trace of the San Andreas Fault. Modeling of magnetic data is consistent with a moderate to steep (> 50 degrees) northeast dip of the San Andreas Fault, but also suggests that the sedimentary sequence is folded west of the fault, causing the short wavelength of the anomalies west of the fault. Gravity anomalies are consistent with the previously modeled seismic velocity structure across the San Andreas Fault. Modeling of gravity data indicates a steep dip for the San Andreas Fault, but does not resolve unequivocally the direction of dip. Gravity data define a deeper basin, bounded by the Powerline and Hot Springs Faults, than imaged by the seismic experiment. This basin extends southeast of Line 7 for nearly 20 km, with linear margins parallel to the San Andreas Fault. These data suggest that the San Andreas Fault zone is wider than indicated by its mapped surface trace.

  20. Cuartel San Carlos. Yacimiento veterano

    Directory of Open Access Journals (Sweden)

    Mariana Flores

    2007-01-01

    Full Text Available The Cuartel San Carlos is a national historic monument (1986) dating from the end of the 18th century (1785-1790), characterized by the various adversities suffered during its construction and by having withstood the earthquakes of 1812 and 1900. In 2006, the organization in charge of its custody, the Instituto de Patrimonio Cultural of the Ministry of Culture, carried out three stages of archaeological exploration, covering the back courtyard (Traspatio), the central courtyard (Patio Central) and the east and west wings of the building. This paper reviews the analysis of the archaeological documentation obtained at the site through that project, called EACUSAC (Estudio Arqueológico del Cuartel San Carlos), which also represents the third campaign carried out at the site. The importance of this historic site lies in its involvement in the events that gave rise to power struggles during the emergence of the Republic and in the political events of the 20th century. Likewise, a broad sample of archaeological materials was found at the site, documenting everyday military life as well as the internal social dynamics that took place in the San Carlos as a strategic place for the defense of the different regimes the country went through, from the era of Spanish imperialism to the present day.

  1. SANS from interpenetrating polymer networks

    International Nuclear Information System (INIS)

    Markotsis, M.G.; Burford, R.P.; Knott, R.B.; Australian Nuclear Science and Technology Organisation, Menai, NSW; Hanley, T.L.; CRC for Polymers,; Australian Nuclear Science and Technology Organisation, Menai, NSW; Papamanuel, N.

    2003-01-01

    Full text: Interpenetrating polymer networks (IPNs) have been formed by combining two polymeric systems in order to gain enhanced material properties. IPNs are a combination of two or more polymers in network form with one network polymerised and/or crosslinked in the immediate presence of the other(s) [1]. IPNs allow better blending of two or more crosslinked networks. In this study two sets of IPNs were produced and their microstructure studied using a variety of techniques including small angle neutron scattering (SANS). The first system combined a glassy polymer (polystyrene) with an elastomeric polymer (SBS), with the glassy polymer predominating, to give a high-impact plastic. The second set of IPNs contained epichlorohydrin (CO) and nitrile rubber (NBR), and was formed in order to produce novel materials with enhanced chemical and gas barrier properties. In both cases, if the phase mixing is optimised the probability of controlled morphologies and synergistic behaviour is increased. The PS/SBS IPNs were prepared using sequential polymerisation. The primary SBS network was thermally crosslinked, then the polystyrene network was polymerised and crosslinked using gamma irradiation to avoid possible thermal degradation of the butadiene segment of the SBS. Tough transparent systems were produced with no apparent thermal degradation of the polybutadiene segments. The epichlorohydrin/nitrile rubber IPNs were formed by simultaneous thermal crosslinking reactions. The epichlorohydrin network was formed using a lead-based crosslinker, while the nitrile rubber was crosslinked by peroxide methods. Two different crosslinking systems were employed in order to achieve independent crosslinking, thus resulting in an IPN with minimal grafting between the component networks. SANS, transmission electron microscopy (TEM), and atomic force microscopy (AFM) were used to examine the size and shape of the phase domains and investigate any variation with crosslinking level and

  2. Three-dimensional kinetic simulations of whistler turbulence in solar wind on parallel supercomputers

    Science.gov (United States)

    Chang, Ouliang

    The objective of this dissertation is to study the physics of whistler turbulence evolution and its role in energy transport and dissipation in the solar wind plasmas through computational and theoretical investigations. This dissertation presents the first fully three-dimensional (3D) particle-in-cell (PIC) simulations of whistler turbulence forward cascade in a homogeneous, collisionless plasma with a uniform background magnetic field B_o, and the first 3D PIC simulation of whistler turbulence with both forward and inverse cascades. Such computationally demanding research is made possible through the use of massively parallel, high performance electromagnetic PIC simulations on state-of-the-art supercomputers. Simulations are carried out to study characteristic properties of whistler turbulence under variable solar wind fluctuation amplitude (ε_e) and electron beta (β_e), relative contributions to energy dissipation and electron heating in whistler turbulence from the quasilinear scenario and the intermittency scenario, and whistler turbulence preferential cascading direction and wavevector anisotropy. The 3D simulations of whistler turbulence exhibit a forward cascade of fluctuations into a broadband, anisotropic, turbulent spectrum at shorter wavelengths with wavevectors preferentially quasi-perpendicular to B_o. The overall electron heating yields T_∥ > T_⊥ for all ε_e and β_e values, indicating the primary linear wave-particle interaction is Landau damping. But linear wave-particle interactions play a minor role in shaping the wavevector spectrum, whereas nonlinear wave-wave interactions are overall stronger and faster processes, and ultimately determine the wavevector anisotropy. Simulated magnetic energy spectra as a function of wavenumber show a spectral break to steeper slopes, which scales as k_⊥ λ_e ≃ 1 independent of β_e values, where λ_e is the electron inertial length, qualitatively similar to solar wind observations. Specific

  3. Solar Feasibility Study May 2013 - San Carlos Apache Tribe

    Energy Technology Data Exchange (ETDEWEB)

    Rapp, Jim [Parametrix; Duncan, Ken [San Carlos Apache Tribe; Albert, Steve [Parametrix

    2013-05-01

    The San Carlos Apache Tribe (Tribe) in the interests of strengthening tribal sovereignty, becoming more energy self-sufficient, and providing improved services and economic opportunities to tribal members and San Carlos Apache Reservation (Reservation) residents and businesses, has explored a variety of options for renewable energy development. The development of renewable energy technologies and generation is consistent with the Tribe’s 2011 Strategic Plan. This Study assessed the possibilities for both commercial-scale and community-scale solar development within the southwestern portions of the Reservation around the communities of San Carlos, Peridot, and Cutter, and in the southeastern Reservation around the community of Bylas. Based on the lack of any commercial-scale electric power transmission between the Reservation and the regional transmission grid, Phase 2 of this Study greatly expanded consideration of community-scale options. Three smaller sites (Point of Pines, Dudleyville/Winkleman, and Seneca Lake) were also evaluated for community-scale solar potential. Three building complexes were identified within the Reservation where the development of site-specific facility-scale solar power would be the most beneficial and cost-effective: Apache Gold Casino/Resort, Tribal College/Skill Center, and the Dudleyville (Winkleman) Casino.

  4. Evaluation Methodologies for Information Management Systems; Building Digital Tobacco Industry Document Libraries at the University of California, San Francisco Library/Center for Knowledge Management; Experiments with the IFLA Functional Requirements for Bibliographic Records (FRBR); Coming to Term: Designing the Texas Email Repository Model.

    Science.gov (United States)

    Morse, Emile L.; Schmidt, Heidi; Butter, Karen; Rider, Cynthia; Hickey, Thomas B.; O'Neill, Edward T.; Toves, Jenny; Green, Marlan; Soy, Sue; Gunn, Stan; Galloway, Patricia

    2002-01-01

    Includes four articles that discuss evaluation methods for information management systems under the Defense Advanced Research Projects Agency; building digital libraries at the University of California San Francisco's Tobacco Control Archives; IFLA's Functional Requirements for Bibliographic Records; and designing the Texas email repository model…

  5. California State Waters Map Series: offshore of San Gregorio, California

    Science.gov (United States)

    Cochrane, Guy R.; Dartnell, Peter; Greene, H. Gary; Watt, Janet T.; Golden, Nadine E.; Endris, Charles A.; Phillips, Eleyne L.; Hartwell, Stephen R.; Johnson, Samuel Y.; Kvitek, Rikk G.; Erdey, Mercedes D.; Bretz, Carrie K.; Manson, Michael W.; Sliter, Ray W.; Ross, Stephanie L.; Dieter, Bryan E.; Chin, John L.; Cochran, Susan A.; Cochrane, Guy R.; Cochran, Susan A.

    2014-01-01

    In 2007, the California Ocean Protection Council initiated the California Seafloor Mapping Program (CSMP), designed to create a comprehensive seafloor map of high-resolution bathymetry, marine benthic habitats, and geology within the 3-nautical-mile limit of California's State Waters. The CSMP approach is to create highly detailed seafloor maps through collection, integration, interpretation, and visualization of swath sonar data, acoustic backscatter, seafloor video, seafloor photography, high-resolution seismic-reflection profiles, and bottom-sediment sampling data. The map products display seafloor morphology and character, identify potential marine benthic habitats, and illustrate both the surficial seafloor geology and shallow (to about 100 m) subsurface geology. The Offshore of San Gregorio map area is located in northern California, on the Pacific coast of the San Francisco Peninsula about 50 kilometers south of the Golden Gate. The map area lies offshore of the Santa Cruz Mountains, part of the northwest-trending Coast Ranges that run roughly parallel to the San Andreas Fault Zone. The Santa Cruz Mountains lie between the San Andreas Fault Zone and the San Gregorio Fault system. The nearest significant onshore cultural centers in the map area are San Gregorio and Pescadero, both unincorporated communities with populations well under 1,000. Both communities are situated inland of state beaches that share their names. No harbor facilities are within the Offshore of San Gregorio map area. The hilly coastal area is virtually undeveloped grazing land for sheep and cattle. The coastal geomorphology is controlled by late Pleistocene and Holocene slip in the San Gregorio Fault system. A westward bend in the San Andreas Fault Zone, southeast of the map area, coupled with right-lateral movement along the San Gregorio Fault system have caused regional folding and uplift. The coastal area consists of high coastal bluffs and vertical sea cliffs. Coastal promontories in

  6. University of California San Francisco (UCSF-2): Expression Analysis of Superior Cervical Ganglion from Backcrossed TH-MYCN Transgenic Mice | Office of Cancer Genomics

    Science.gov (United States)

    The CTD2 Center at the University of California San Francisco (UCSF-2) used genetic analysis of the peripheral sympathetic nervous system to identify potential therapeutic targets in neuroblastoma.

  7. 33 CFR 334.870 - San Diego Harbor, Calif.; restricted area.

    Science.gov (United States)

    2010-07-01

    ..., Calif.; restricted area. (a) Restricted area at Bravo Pier, Naval Air Station—(1) The area. The water of... delay or loitering. On occasion, access to the bait barges may be delayed for intermittent periods not... Supply Center Pier—(1) The area. The waters of San Diego Bay extending approximately 100 feet out from...

  8. Building a High Performance Computing Infrastructure for Novosibirsk Scientific Center

    International Nuclear Information System (INIS)

    Adakin, A; Chubarov, D; Nikultsev, V; Belov, S; Kaplin, V; Sukharev, A; Zaytsev, A; Kalyuzhny, V; Kuchin, N; Lomakin, S

    2011-01-01

    Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers, hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of the Russian Academy of Sciences, including Budker Institute of Nuclear Physics (BINP), Institute of Computational Technologies (ICT), and Institute of Computational Mathematics and Mathematical Geophysics (ICM and MG). Since each institute has specific requirements on the architecture of the computing farms involved in its research field, there are currently several computing facilities hosted by NSC institutes, each optimized for a particular set of tasks; the largest of these are the NSU Supercomputer Center, the Siberian Supercomputer Center (ICM and MG), and the Grid Computing Facility of BINP. Recently a dedicated optical network with an initial bandwidth of 10 Gbps connecting these three facilities was built in order to make it possible to share the computing resources among the research communities of the participating institutes, thus providing a common platform for building the computing infrastructure for various scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technologies based on the XEN and KVM platforms. The solution implemented was tested thoroughly within the computing environment of the KEDR detector experiment being carried out at BINP, and is foreseen to be applied to the use cases of other HEP experiments in the near future.

  9. Evolution of the Virtualized HPC Infrastructure of Novosibirsk Scientific Center

    International Nuclear Information System (INIS)

    Adakin, A; Chubarov, D; Nikultsev, V; Anisenkov, A; Belov, S; Kaplin, V; Korol, A; Skovpen, K; Sukharev, A; Zaytsev, A; Kalyuzhny, V; Kuchin, N; Lomakin, S

    2012-01-01

    Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers, hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of the Russian Academy of Sciences, including Budker Institute of Nuclear Physics (BINP), Institute of Computational Technologies, and Institute of Computational Mathematics and Mathematical Geophysics (ICM and MG). Since each institute has specific requirements on the architecture of the computing farms involved in its research field, there are currently several computing facilities hosted by NSC institutes, each optimized for a particular set of tasks; the largest of these are the NSU Supercomputer Center, the Siberian Supercomputer Center (ICM and MG), and the Grid Computing Facility of BINP. A dedicated optical network with an initial bandwidth of 10 Gb/s connecting these three facilities was built in order to make it possible to share the computing resources among the research communities, thus increasing the efficiency of operating the existing computing facilities and offering a common platform for building the computing infrastructure for future scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technology based on the XEN and KVM platforms. This contribution gives a thorough review of the present status and future development prospects for the NSC virtualized computing infrastructure and the experience gained while using it for running production data analysis jobs related to HEP experiments being carried out at BINP, especially the KEDR detector experiment at the VEPP-4M electron-positron collider.

  10. Aggregate Settling Velocities in San Francisco Estuary Margins

    Science.gov (United States)

    Allen, R. M.; Stacey, M. T.; Variano, E. A.

    2015-12-01

    One way that humans impact aquatic ecosystems is by adding nutrients and contaminants, which can propagate up the food web and cause blooms and die-offs, respectively. Often, these chemicals are attached to fine sediments, and thus where sediments go, so do these anthropogenic influences. Vertical motion of sediments is important for sinking and burial, and also for indirect effects on horizontal transport. The dynamics of sinking sediment (often in aggregates) are complex, thus we need field data to test and validate existing models. San Francisco Bay is well studied and is often used as a test case for new measurement and model techniques (Barnard et al. 2013). Settling velocities for aggregates vary between 4×10^-5 to 1.6×10^-2 m/s along the estuary backbone (Manning and Schoellhamer 2013). Model results from South San Francisco Bay shoals suggest two populations of settling particles, one fast (w_s of 9 to 5.8×10^-4 m/s) and one slow (w_s of Brand et al. 2015). While the open waters of San Francisco Bay and other estuaries are well studied and modeled, sediment and contaminants often originate from the margin regions, and the margins remain poorly characterized. We conducted a 24-hour field experiment in a channel slough of South San Francisco Bay, and measured settling velocity, turbulence and flow, and suspended sediment concentration. At this margin location, we found average settling velocities of 4–5×10^-5 m/s, and saw settling velocities decrease with decreasing suspended sediment concentration. These results are consistent with, though at the low end of, those seen along the estuary center, and they suggest that the two population model that has been successful along the shoals may also apply in the margins.

  11. Trouble Brewing in San Francisco. Policy Brief

    Science.gov (United States)

    Buck, Stuart

    2010-01-01

    The city of San Francisco will face enormous budgetary pressures from the growing deficits in public pensions, both at a state and local level. In this policy brief, the author estimates that San Francisco faces an aggregate $22.4 billion liability for pensions and retiree health benefits that are underfunded--including $14.1 billion for the city…

  12. San Diego's High School Dropout Crisis

    Science.gov (United States)

    Wilson, James C.

    2012-01-01

    This article highlights San Diego's dropout problem and how much it is costing the city and the state. Most San Diegans do not realize the enormous impact high school dropouts have on their city. The California Dropout Research Project, located at the University of California at Santa Barbara, has estimated the lifetime cost of one class or cohort of…

  13. Some examples of spin-off technologies: San Carlos de Bariloche

    International Nuclear Information System (INIS)

    Meyer, Gabriel O.

    2001-01-01

    The Bariloche Atomic Center (CAB) and the Balseiro Institute, both in San Carlos de Bariloche, are mainly devoted to scientific research and development (the former) and to education and training (the latter). Besides providing specialists in physics and nuclear engineering for research centers in Argentina and abroad, both establishments transfer technologies and provide services in different fields such as waste management, metallurgy, forensic sciences, medicine, geology, modeling, archaeology, paleontology, etc.

  14. Car2x with software defined networks, network functions virtualization and supercomputers technical and scientific preparations for the Amsterdam Arena telecoms fieldlab

    NARCIS (Netherlands)

    Meijer R.J.; Cushing R.; De Laat C.; Jackson P.; Klous S.; Koning R.; Makkes M.X.; Meerwijk A.

    2015-01-01

    In the invited talk 'Car2x with SDN, NFV and supercomputers' we report on how our past work with SDN [1, 2] allows the design of a smart mobility fieldlab in the huge parking lot of the Amsterdam Arena. We explain how we can engineer and test software that handles the complex conditions of the Car2X

  15. MEGADOCK 4.0: an ultra-high-performance protein-protein docking software for heterogeneous supercomputers.

    Science.gov (United States)

    Ohue, Masahito; Shimoda, Takehiro; Suzuki, Shuji; Matsuzaki, Yuri; Ishida, Takashi; Akiyama, Yutaka

    2014-11-15

    The application of protein-protein docking in large-scale interactome analysis is a major challenge in structural bioinformatics and requires huge computing resources. In this work, we present MEGADOCK 4.0, an FFT-based docking software that makes extensive use of recent heterogeneous supercomputers and shows powerful, scalable performance of >97% strong scaling. MEGADOCK 4.0 is written in C++ with OpenMPI and NVIDIA CUDA 5.0 (or later) and is freely available to all academic and non-profit users at: http://www.bi.cs.titech.ac.jp/megadock. akiyama@cs.titech.ac.jp Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
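
    MEGADOCK's actual scoring function is not reproduced here, but the core trick of FFT-based rigid docking, evaluating a correlation score over all grid translations at once, can be sketched in a few lines of NumPy. The grids below are random toy occupancy maps and the score is plain shape overlap; real codes add further terms and sample many rotations, which is where GPUs and heterogeneous supercomputers come in.

      import numpy as np

      # Minimal sketch of the FFT correlation at the core of grid-based rigid docking
      # (a toy shape-overlap score on random grids, not MEGADOCK's scoring function).
      rng = np.random.default_rng(0)
      N = 32                                                     # grid points per axis (assumed)
      receptor = (rng.random((N, N, N)) > 0.97).astype(float)    # toy occupancy grids
      ligand = np.zeros((N, N, N))
      ligand[:5, :5, :5] = (rng.random((5, 5, 5)) > 0.5).astype(float)

      # Overlap score for every translation of the ligand, computed at once via the FFT.
      score = np.fft.ifftn(np.fft.fftn(receptor) * np.conj(np.fft.fftn(ligand))).real

      best = np.unravel_index(np.argmax(score), score.shape)
      print("best translation (grid units):", best, "score:", round(float(score[best]), 2))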

  16. A criticality safety analysis code using a vectorized Monte Carlo method on the HITAC S-810 supercomputer

    International Nuclear Information System (INIS)

    Morimoto, Y.; Maruyama, H.

    1987-01-01

    A vectorized Monte Carlo criticality safety analysis code has been developed on the vector supercomputer HITAC S-810. In this code, a multi-particle tracking algorithm was adopted for effective utilization of the vector processor. A flight analysis with pseudo-scattering was developed to reduce the computational time needed for flight analysis, which represents the bulk of the computational time. This new algorithm realized a speed-up by a factor of 1.5 over the conventional flight analysis. The code also adopted a Bondarenko-type multigroup cross section library with 190 groups: 132 groups for the fast and epithermal regions and 58 groups for the thermal region. Evaluation work showed that this code reproduces the experimental results to an accuracy of about 1% for the effective neutron multiplication factor. (author)
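
    The multi-particle tracking idea, advancing a whole batch of histories with array operations instead of one history at a time, is what makes such a code vectorizable. The sketch below applies the same idea to a deliberately trivial problem in NumPy, a purely absorbing one-group slab with assumed cross section and thickness; no pseudo-scattering or multigroup physics is included.

      import numpy as np

      # Sketch of multi-particle (batch) tracking: every history is advanced by the
      # same array operation, which is the idea behind vectorized Monte Carlo codes.
      rng = np.random.default_rng(42)
      n = 1_000_000            # particles tracked simultaneously
      sigma_t = 0.5            # total macroscopic cross section, 1/cm (assumed)
      thickness = 10.0         # slab thickness, cm (assumed)

      # Sample every particle's flight distance in a single vector operation.
      flights = rng.exponential(scale=1.0 / sigma_t, size=n)
      transmitted = np.count_nonzero(flights > thickness)

      print(f"Monte Carlo transmission:  {transmitted / n:.3e}")
      print(f"analytic exp(-sigma_t*x):  {np.exp(-sigma_t * thickness):.3e}")

    The two printed numbers should agree to within statistical noise, which is the kind of check that precedes timing comparisons between scalar and vectorized tracking.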

  17. EDF's experience with supercomputing and challenges ahead - towards multi-physics and multi-scale approaches

    Energy Technology Data Exchange (ETDEWEB)

    Delbecq, J.M.; Banner, D. [Electricite de France (EDF) - R and D Division, 92 - Clamart (France)]

    2003-07-01

    Nuclear power plants are a major asset of the EDF company. To remain so, in particular in a context of deregulation, competitiveness, safety and public acceptance are three conditions. These stakes apply both to existing plants and to future reactors. The purpose of the presentation is to explain how supercomputing can help EDF to satisfy these requirements. Three examples are described in detail: ensuring optimal use of nuclear fuel under wholly safe conditions, understanding and simulating the material deterioration mechanisms and moving forward with numerical simulation for the performance of EDF's activities. In conclusion, a broader vision of EDF long term R and D in the field of numerical simulation is given and especially of five challenges taken up by EDF together with its industrial and scientific partners. (author)

  18. Performance Evaluation of an Intel Haswell- and Ivy Bridge-Based Supercomputer Using Scientific and Engineering Applications

    Science.gov (United States)

    Saini, Subhash; Hood, Robert T.; Chang, Johnny; Baron, John

    2016-01-01

    We present a performance evaluation conducted on a production supercomputer of the Intel Xeon Processor E5-2680v3, a twelve-core implementation of the fourth-generation Haswell architecture, and compare it with the Intel Xeon Processor E5-2680v2, an Ivy Bridge implementation of the third-generation Sandy Bridge architecture. Several new architectural features have been incorporated in Haswell, including improvements in all levels of the memory hierarchy as well as improvements to vector instructions and power management. We critically evaluate these new features of Haswell and compare with Ivy Bridge using several low-level benchmarks, including a subset of HPCC and HPCG, and four full-scale scientific and engineering applications. We also present a model to predict the performance of HPCG and Cart3D within 5%, and Overflow within 10% accuracy.

  19. COMPUTATIONAL SCIENCE CENTER

    International Nuclear Information System (INIS)

    DAVENPORT, J.

    2006-01-01

    Computational Science is an integral component of Brookhaven's multi-science mission, and is a reflection of the increased role of computation across all of science. Brookhaven currently has major efforts in data storage and analysis for the Relativistic Heavy Ion Collider (RHIC) and the ATLAS detector at CERN, and in quantum chromodynamics. The Laboratory is host for the QCDOC machines (quantum chromodynamics on a chip), 10 teraflop/s computers which boast 12,288 processors each. There are two here, one for the Riken/BNL Research Center and the other supported by DOE for the US Lattice Gauge Community and other scientific users. A 100 teraflop/s supercomputer will be installed at Brookhaven in the coming year, managed jointly by Brookhaven and Stony Brook, and funded by a grant from New York State. This machine will be used for computational science across Brookhaven's entire research program, and also by researchers at Stony Brook and across New York State. With Stony Brook, Brookhaven has formed the New York Center for Computational Science (NYCCS) as a focal point for interdisciplinary computational science, which is closely linked to Brookhaven's Computational Science Center (CSC). The CSC has established a strong program in computational science, with an emphasis on nanoscale electronic structure and molecular dynamics, accelerator design, computational fluid dynamics, medical imaging, parallel computing and numerical algorithms. We have been an active participant in DOE's SciDAC program (Scientific Discovery through Advanced Computing). We are also planning a major expansion in computational biology in keeping with Laboratory initiatives. Additional laboratory initiatives with a dependence on a high level of computation include the development of hydrodynamics models for the interpretation of RHIC data, computational models for the atmospheric transport of aerosols, and models for combustion and for energy utilization. The CSC was formed to bring together

  20. COMPUTATIONAL SCIENCE CENTER

    Energy Technology Data Exchange (ETDEWEB)

    DAVENPORT, J.

    2006-11-01

    Computational Science is an integral component of Brookhaven's multi-science mission, and is a reflection of the increased role of computation across all of science. Brookhaven currently has major efforts in data storage and analysis for the Relativistic Heavy Ion Collider (RHIC) and the ATLAS detector at CERN, and in quantum chromodynamics. The Laboratory is host for the QCDOC machines (quantum chromodynamics on a chip), 10 teraflop/s computers which boast 12,288 processors each. There are two here, one for the Riken/BNL Research Center and the other supported by DOE for the US Lattice Gauge Community and other scientific users. A 100 teraflop/s supercomputer will be installed at Brookhaven in the coming year, managed jointly by Brookhaven and Stony Brook, and funded by a grant from New York State. This machine will be used for computational science across Brookhaven's entire research program, and also by researchers at Stony Brook and across New York State. With Stony Brook, Brookhaven has formed the New York Center for Computational Science (NYCCS) as a focal point for interdisciplinary computational science, which is closely linked to Brookhaven's Computational Science Center (CSC). The CSC has established a strong program in computational science, with an emphasis on nanoscale electronic structure and molecular dynamics, accelerator design, computational fluid dynamics, medical imaging, parallel computing and numerical algorithms. We have been an active participant in DOE's SciDAC program (Scientific Discovery through Advanced Computing). We are also planning a major expansion in computational biology in keeping with Laboratory initiatives. Additional laboratory initiatives with a dependence on a high level of computation include the development of hydrodynamics models for the interpretation of RHIC data, computational models for the atmospheric transport of aerosols, and models for combustion and for energy utilization. The CSC was formed to

  1. Volcano hazards in the San Salvador region, El Salvador

    Science.gov (United States)

    Major, J.J.; Schilling, S.P.; Sofield, D.J.; Escobar, C.D.; Pullinger, C.R.

    2001-01-01

    San Salvador volcano is one of many volcanoes along the volcanic arc in El Salvador (figure 1). This volcano, having a volume of about 110 cubic kilometers, towers above San Salvador, the country’s capital and largest city. The city has a population of approximately 2 million, and a population density of about 2100 people per square kilometer. The city of San Salvador and other communities have gradually encroached onto the lower flanks of the volcano, increasing the risk that even small events may have serious societal consequences. San Salvador volcano has not erupted for more than 80 years, but it has a long history of repeated, and sometimes violent, eruptions. The volcano is composed of remnants of multiple eruptive centers, and these remnants are commonly referred to by several names. The central part of the volcano, which contains a large circular crater, is known as El Boquerón, and it rises to an altitude of about 1890 meters. El Picacho, the prominent peak of highest elevation (1960 meters altitude) to the northeast of the crater, and El Jabali, the peak to the northwest of the crater, represent remnants of an older, larger edifice. The volcano has erupted several times during the past 70,000 years from vents central to the volcano as well as from smaller vents and fissures on its flanks [1] (numerals in brackets refer to end notes in the report). In addition, several small cinder cones and explosion craters are located within 10 kilometers of the volcano. Since about 1200 A.D., eruptions have occurred almost exclusively along, or a few kilometers beyond, the northwest flank of the volcano, and have consisted primarily of small explosions and emplacement of lava flows. However, San Salvador volcano has erupted violently and explosively in the past, even as recently as 800 years ago. When such eruptions occur again, substantial population and infrastructure will be at risk. Volcanic eruptions are not the only events that present a risk to local

  2. Choto-san in the treatment of vascular dementia: a double-blind, placebo-controlled study.

    Science.gov (United States)

    Terasawa, K; Shimada, Y; Kita, T; Yamamoto, T; Tosa, H; Tanaka, N; Saito, Y; Kanaki, E; Goto, S; Mizushima, N; Fujioka, M; Takase, S; Seki, H; Kimura, I; Ogawa, T; Nakamura, S; Araki, G; Maruyama, I; Maruyama, Y; Takaori, S

    1997-03-01

    In an earlier placebo-controlled study, we demonstrated that a kampo (Japanese herbal) medicine called Choto-san (Diao-Teng-San in Chinese) was effective in treating vascular dementia. To evaluate its efficacy using more objective criteria, we carried out a multi-center, double-blind study of Choto-san extract (7.5 g/day) and a placebo, each given three times a day for 12 weeks to patients suffering from this condition. The study enrolled and analyzed 139 patients, 50 males and 89 females, with a mean age of 76.6 years. Choto-san was statistically superior to the placebo in global improvement rating, utility rating, global improvement rating of subjective symptoms, global improvement rating of psychiatric symptoms and global improvement rating of disturbance in daily living activities. Such items as spontaneity of conversation, lack of facial expression, decline in simple mathematical ability, global intellectual ability, nocturnal delirium, sleep disturbance, hallucination or delusion, and putting on and taking off clothes were significantly improved at one or more evaluation points in those taking Choto-san compared to those taking the placebo. Furthermore, the change from baseline in the revised Hasegawa's dementia scale tended to be higher in the Choto-san group than in the placebo group, though without statistical significance. These results suggest that Choto-san is effective in the treatment of vascular dementia. Copyright © 1997 Gustav Fischer Verlag. Published by Elsevier GmbH. All rights reserved.

  3. Integration Of PanDA Workload Management System With Supercomputers for ATLAS

    CERN Document Server

    Oleynik, Danila; The ATLAS collaboration; De, Kaushik; Wenaus, Torre; Maeno, Tadashi; Barreiro Megino, Fernando Harald; Nilsson, Paul; Guan, Wen; Panitkin, Sergey

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses PanDA (Production ANd Distributed Analysis system) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more t...

  4. Adult Basic Learning in an Activity Center: A Demonstration Approach.

    Science.gov (United States)

    Metropolitan Adult Education Program, San Jose, CA.

    Escuela Amistad, an activity center in San Jose, California, is now operating at capacity, five months after its origin. Average daily attendance has been 125 adult students, 18-65, most of whom are females of Mexican-American background. Activities and services provided by the center are: instruction in English as a second language, home…

  5. Enhanced Preliminary Assessment Report: Presidio of San Francisco Military Reservation, San Francisco, California

    Science.gov (United States)

    1989-11-01

    [Garbled table excerpt: the record body consists of extraction-damaged site and analyte tables listing PG&E Gas Plant sites in San Francisco (identifiers CAD981415656 and CAD981415714, near Filmore/Steiner and Bay/North Point/Buchanan/Laguna) and chlordane and metabolite results for Presidio of San Francisco water facilities (Main Clearwell, U.N. Plaza, Laguna Honda Reservoir); the numeric values are not recoverable.]

  6. A case for historic joint rupture of the San Andreas and San Jacinto faults

    OpenAIRE

    Lozos, Julian C.

    2016-01-01

    The San Andreas fault is considered to be the primary plate boundary fault in southern California and the most likely fault to produce a major earthquake. I use dynamic rupture modeling to show that the San Jacinto fault is capable of rupturing along with the San Andreas in a single earthquake, and interpret these results along with existing paleoseismic data and historic damage reports to suggest that this has likely occurred in the historic past. In particular, I find that paleoseismic data...

  7. Activity report of Computing Research Center

    Energy Technology Data Exchange (ETDEWEB)

    1997-07-01

    In April 1997, the National Laboratory for High Energy Physics (KEK), the Institute of Nuclear Study of the University of Tokyo (INS), and the Meson Science Laboratory of the Faculty of Science, University of Tokyo were reorganized into the High Energy Accelerator Research Organization, with the aim of further developing the broad field of accelerator science based on high energy accelerators. Within the new organization, the Applied Research Laboratory comprises four centers that support research activities common to the whole organization and carry out related research and development (R and D), integrating the four existing centers and their related sections in Tanashi. This support covers not only general assistance but also the preparation and R and D of the systems required for the promotion and future planning of the research. Computer technology is essential to this development and can be shared across the organization's research programs. In response, the new Computing Research Center is expected to fulfil its duties in cooperation with researchers across a range extending from R and D on data analysis for various experiments to computational physics driven by powerful computing capacity such as supercomputers. This report describes the work and present state of the KEK Data Processing Center in the first chapter and of the INS computer room in the second chapter, together with future issues for the Computing Research Center. (G.K.)

  8. Description of gravity cores from San Pablo Bay and Carquinez Strait, San Francisco Bay, California

    Science.gov (United States)

    Woodrow, Donald L.; John L. Chin,; Wong, Florence L.; Fregoso, Theresa A.; Jaffe, Bruce E.

    2017-06-27

    Seventy-two gravity cores were collected by the U.S. Geological Survey in 1990, 1991, and 2000 from San Pablo Bay and Carquinez Strait, California. The gravity cores collected within San Pablo Bay contain bioturbated laminated silts and sandy clays, whole and broken bivalve shells (mostly mussels), fossil tube structures, and fine-grained plant or wood fragments. Gravity cores from the channel wall of Carquinez Strait east of San Pablo Bay consist of sand and clay layers, whole and broken bivalve shells (less than in San Pablo Bay), trace fossil tubes, and minute fragments of plant material.

  9. 76 FR 9709 - Water Quality Challenges in the San Francisco Bay/Sacramento-San Joaquin Delta Estuary

    Science.gov (United States)

    2011-02-22

    ... Water Quality Challenges in the San Francisco Bay/Sacramento-San Joaquin Delta Estuary AGENCY... the San Francisco Bay/ Sacramento-San Joaquin Delta Estuary (Bay Delta Estuary) in California. EPA is... programs to address recent significant declines in multiple aquatic species in the Bay Delta Estuary. EPA...

  10. 33 CFR 165.776 - Security Zone; Coast Guard Base San Juan, San Juan Harbor, Puerto Rico

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Security Zone; Coast Guard Base San Juan, San Juan Harbor, Puerto Rico 165.776 Section 165.776 Navigation and Navigable Waters COAST... Guard District § 165.776 Security Zone; Coast Guard Base San Juan, San Juan Harbor, Puerto Rico (a...

  11. 76 FR 22809 - Safety Zone; Bay Ferry II Maritime Security Exercise; San Francisco Bay, San Francisco, CA

    Science.gov (United States)

    2011-04-25

    ... DEPARTMENT OF HOMELAND SECURITY Coast Guard 33 CFR Part 165 [Docket No. USCG-2011-0196] RIN 1625-AA00 Safety Zone; Bay Ferry II Maritime Security Exercise; San Francisco Bay, San Francisco, CA AGENCY... Security Exercise; San Francisco Bay, San Francisco, CA. (a) Location. The limits of this safety zone...

  12. 76 FR 10945 - San Luis Trust Bank, FSB, San Luis Obispo, CA; Notice of Appointment of Receiver

    Science.gov (United States)

    2011-02-28

    ... DEPARTMENT OF THE TREASURY Office of Thrift Supervision San Luis Trust Bank, FSB, San Luis Obispo, CA; Notice of Appointment of Receiver Notice is hereby given that, pursuant to the authority... appointed the Federal Deposit Insurance Corporation as sole Receiver for San Luis Trust Bank, FSB, San Luis...

  13. Performance of BATAN-SANS instrument

    Energy Technology Data Exchange (ETDEWEB)

    Ikram, Abarrul; Insani, Andon [National Nuclear Energy Agency, P and D Centre for Materials Science and Technology, Serpong (Indonesia)

    2003-03-01

    SANS data from some standard samples have been obtained using the BATAN-SANS instrument in Serpong. The experiments were performed for various experimental set-ups involving different detector positions and collimator lengths. This paper briefly describes the BATAN-SANS instrument as well as the data taken in those experiments, followed by a discussion of the results concerning the performance and calibration of the instrument. The standard samples utilized in these experiments include porous silica, polystyrene-polyisoprene, silver behenate, poly ball and polystyrene-poly(ethylene-alt-propylene). Although the results show that the BATAN-SANS instrument is in good shape, there is still considerable room for improvement, especially in the velocity selector and its control system. (author)

  14. AMS San Diego Testbed - Calibration Data

    Data.gov (United States)

    Department of Transportation — The data in this repository were collected from the San Diego, California testbed, namely, I-15 from the interchange with SR-78 in the north to the interchange with...

  15. San Antonio Bay 1986-1989

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The effect of salinity on utilization of shallow-water nursery habitats by aquatic fauna was assessed in San Antonio Bay, Texas. Overall, 272 samples were collected...

  16. San Francisco Bay Interferometric Bathymetry: Area B

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — High resolution sonar data were collected over ultra-shallow areas of the San Francisco Bay estuary system. Bathymetric and acoustic backscatter data were collected...

  17. April 1906 San Francisco, USA Images

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 1906 San Francisco earthquake was the largest event (magnitude 8.3) to occur in the conterminous United States in the 20th Century. Recent estimates indicate...

  18. San Jacinto Tries Management by Objectives

    Science.gov (United States)

    Deegan, William

    1974-01-01

    San Jacinto, California, has adopted a measurable institutional objectives approach to management by objectives. Results reflect not only improved cost effectiveness of community college education, but also more effective educational programs for students. (Author/WM)

  19. Radon emanation on San Andreas Fault

    International Nuclear Information System (INIS)

    King, C.-Y.

    1978-01-01

    It is stated that subsurface radon emanation monitored in shallow dry holes along an active segment of the San Andreas fault in central California shows spatially coherent large temporal variations that seem to be correlated with local seismicity. (author)

  20. SANS observations on weakly flocculated dispersions

    DEFF Research Database (Denmark)

    Mischenko, N.; Ourieva, G.; Mortensen, K.

    1997-01-01

    Structural changes occurring in colloidal dispersions of poly(methyl methacrylate) (PMMA) particles, sterically stabilized with poly(12-hydroxystearic acid) (PHSA), while varying the solvent quality, temperature and shear rate, are investigated by small-angle neutron scattering (SANS). For a moderately concentrated dispersion in a marginal solvent, the transition on cooling from effective stability to a weak attraction is monitored. The degree of attraction is determined in the framework of the sticky spheres model (SSM), and SANS and rheological results are correlated.

  1. Trouble Brewing in San Diego. Policy Brief

    Science.gov (United States)

    Buck, Stuart

    2010-01-01

    The city of San Diego will face enormous budgetary pressures from the growing deficits in public pensions, both at a state and local level. In this policy brief, the author estimates that San Diego faces a total of $45.4 billion, including $7.95 billion for the county pension system, $5.4 billion for the city pension system, and an estimated $30.7…

  2. Toxic phytoplankton in San Francisco Bay

    Science.gov (United States)

    Rodgers, Kristine M.; Garrison, David L.; Cloern, James E.

    1996-01-01

    The Regional Monitoring Program (RMP) was conceived and designed to document the changing distribution and effects of trace substances in San Francisco Bay, with focus on toxic contaminants that have become enriched by human inputs. However, coastal ecosystems like San Francisco Bay also have potential sources of naturally-produced toxic substances that can disrupt food webs and, under extreme circumstances, become threats to public health. The most prevalent source of natural toxins is from blooms of algal species that can synthesize metabolites that are toxic to invertebrates or vertebrates. Although San Francisco Bay is nutrient-rich, it has so far apparently been immune from the epidemic of harmful algal blooms in the world’s nutrient-enriched coastal waters. This absence of acute harmful blooms does not imply that San Francisco Bay has unique features that preclude toxic blooms. No sampling program has been implemented to document the occurrence of toxin-producing algae in San Francisco Bay, so it is difficult to judge the likelihood of such events in the future. This issue is directly relevant to the goals of RMP because harmful species of phytoplankton have the potential to disrupt ecosystem processes that support animal populations, cause severe illness or death in humans, and confound the outcomes of toxicity bioassays such as those included in the RMP. Our purpose here is to utilize existing data on the phytoplankton community of San Francisco Bay to provide a provisional statement about the occurrence, distribution, and potential threats of harmful algae in this Estuary.

  3. Modelling SANS and SAXS data

    International Nuclear Information System (INIS)

    Reynolds, P.

    1999-01-01

    Full text: Small angle scattering data, while on an absolute scale and relatively accurate over large ranges of observables (0.003 Å-1 to 0.1 Å-1), are often relatively featureless. I will address some of the problems this causes, and some of the ways of minimising these, by reference to our recent SANS results. For the benefit of newer chums this will involve discussion of the strengths and weaknesses of data from ISIS (LOQ), Argonne (SAND) and the I.L.L. (D22), and the consequences these have for modelling. The use of simple portable or remote access systems for modelling will be discussed - in particular the IGOR based NIST system of Dr. S. Kline and the VAX based FISH system of Dr. R. Heenan, ISIS. I will illustrate that a wide variety of physically appealing and complete models are now available. If you have reason to believe in a particular microstructure, this belief can now be either falsified, or the microstructure quantified, by fitting to the entire set of scattering patterns over the entire Q-range. For example, only in cases of drastic ignorance need we use only Guinier and Porod analyses, although these may provide useful initial guidance in the modelling. We now rarely need to use oversimplified, logically incomplete models - such as spherical micelles with neglect of intermicellar correlation - now that we possess fast desktop/experimental computers
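
    As a concrete example of the "physically appealing and complete models" mentioned above, the sketch below evaluates the analytic form factor of a monodisperse homogeneous sphere over a typical SANS Q-range. The radius and intensity scale are assumed; a real analysis would add resolution smearing and fit such a model to data in packages like those named in the abstract.

      import numpy as np

      # Analytic form factor of a homogeneous sphere, the simplest "complete model".
      def sphere_form_factor(q, radius):
          """Normalized P(Q) for a homogeneous sphere (radius in the same length units as 1/q)."""
          qr = q * radius
          return (3.0 * (np.sin(qr) - qr * np.cos(qr)) / qr**3) ** 2

      q = np.logspace(np.log10(0.003), np.log10(0.3), 50)   # Q in inverse Angstrom
      radius = 40.0                                          # Angstrom (assumed)
      intensity = 1.0e3 * sphere_form_factor(q, radius)      # arbitrary intensity scale

      print(f"first form-factor minimum expected near Q = 4.49/R = {4.49 / radius:.3f} per Angstrom")
      print(f"I(Q) at Q = {q[0]:.3f}: {intensity[0]:.1f} (arbitrary units)")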

  4. High Performance Simulation of Large-Scale Red Sea Ocean Bottom Seismic Data on the Supercomputer Shaheen II

    KAUST Repository

    Tonellot, Thierry; Etienne, Vincent; Gashawbeza, Ewenet; Curiel, Emesto Sandoval; Khan, Azizur; Feki, Saber; Kortas, Samuel

    2017-01-01

    A combination of both shallow and deepwater, plus islands and coral reefs, are some of the main features contributing to the complexity of subsalt seismic exploration in the Red Sea transition zone. These features often result in degrading effects on seismic images. State-of-the-art ocean bottom acquisition technologies are therefore required to record seismic data with optimal fold and offset, as well as advanced processing and imaging techniques. Numerical simulations of such complex seismic data can help improve acquisition design and also help in customizing, validating and benchmarking the processing and imaging workflows that will be applied on the field data. Subsequently, realistic simulation of wave propagation is a computationally intensive process requiring a realistic model and an efficient 3D wave equation solver. Large-scale computing resources are also required to meet turnaround time compatible with a production time frame. In this work, we present the numerical simulation of an ocean bottom seismic survey to be acquired in the Red Sea transition zone starting in summer 2016. The survey's acquisition geometry comprises nearly 300,000 unique shot locations and 21,000 unique receiver locations, covering about 760 km2. Using well log measurements and legacy 2D seismic lines in this area, a 3D P-wave velocity model was built, with a maximum depth of 7 km. The model was sampled at 10 m in each direction, resulting in more than 5 billion cells. Wave propagation in this model was performed using a 3D finite difference solver in the time domain based on a staggered grid velocity-pressure formulation of acoustodynamics. To ensure that the resulting data could be generated sufficiently fast, the King Abdullah University of Science and Technology (KAUST) supercomputer Shaheen II Cray XC40 was used. A total of 21,000 three-component (pressure and vertical and horizontal velocity) common receiver gathers with a 50 Hz maximum frequency were computed in less than
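
    The solver described above is a 3D staggered-grid velocity-pressure scheme; the sketch below shows the same leapfrog update pattern reduced to one dimension, with assumed velocity, density and source parameters and no absorbing boundaries, just to make the structure of such a kernel explicit.

      import numpy as np

      # 1D sketch of a staggered-grid velocity-pressure acoustic update; the production
      # 3D solver follows the same leapfrog pattern on a much larger grid.
      nx, nt = 400, 300
      dx = 10.0                  # grid spacing, m (assumed)
      c, rho = 1500.0, 1000.0    # wave speed (m/s) and density (kg/m^3), assumed uniform
      dt = 0.5 * dx / c          # satisfies the 1D CFL condition
      f0, t0 = 15.0, 0.08        # Ricker source frequency (Hz) and delay (s), assumed

      p = np.zeros(nx)           # pressure at integer grid points
      v = np.zeros(nx + 1)       # particle velocity at half-integer grid points

      for it in range(nt):
          # velocity update from the pressure gradient (staggered in space and time)
          v[1:-1] -= dt / (rho * dx) * (p[1:] - p[:-1])
          # pressure update from the velocity divergence
          p -= dt * rho * c**2 / dx * (v[1:] - v[:-1])
          # inject a Ricker wavelet at the centre of the model
          arg = (np.pi * f0 * (it * dt - t0))**2
          p[nx // 2] += (1.0 - 2.0 * arg) * np.exp(-arg)

      print("peak |p| after propagation:", float(np.max(np.abs(p))))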

  5. High Performance Simulation of Large-Scale Red Sea Ocean Bottom Seismic Data on the Supercomputer Shaheen II

    KAUST Repository

    Tonellot, Thierry

    2017-02-27

    A combination of both shallow and deepwater, plus islands and coral reefs, are some of the main features contributing to the complexity of subsalt seismic exploration in the Red Sea transition zone. These features often result in degrading effects on seismic images. State-of-the-art ocean bottom acquisition technologies are therefore required to record seismic data with optimal fold and offset, as well as advanced processing and imaging techniques. Numerical simulations of such complex seismic data can help improve acquisition design and also help in customizing, validating and benchmarking the processing and imaging workflows that will be applied on the field data. Subsequently, realistic simulation of wave propagation is a computationally intensive process requiring a realistic model and an efficient 3D wave equation solver. Large-scale computing resources are also required to meet turnaround time compatible with a production time frame. In this work, we present the numerical simulation of an ocean bottom seismic survey to be acquired in the Red Sea transition zone starting in summer 2016. The survey's acquisition geometry comprises nearly 300,000 unique shot locations and 21,000 unique receiver locations, covering about 760 km2. Using well log measurements and legacy 2D seismic lines in this area, a 3D P-wave velocity model was built, with a maximum depth of 7 km. The model was sampled at 10 m in each direction, resulting in more than 5 billion cells. Wave propagation in this model was performed using a 3D finite difference solver in the time domain based on a staggered grid velocity-pressure formulation of acoustodynamics. To ensure that the resulting data could be generated sufficiently fast, the King Abdullah University of Science and Technology (KAUST) supercomputer Shaheen II Cray XC40 was used. A total of 21,000 three-component (pressure and vertical and horizontal velocity) common receiver gathers with a 50 Hz maximum frequency were computed in less

  6. The shape of the invisible halo: N-body simulations on parallel supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Warren, M.S.; Zurek, W.H. (Los Alamos National Lab., NM (USA)); Quinn, P.J. (Australian National Univ., Canberra (Australia). Mount Stromlo and Siding Spring Observatories); Salmon, J.K. (California Inst. of Tech., Pasadena, CA (USA))

    1990-01-01

    We study the shapes of halos and the relationship to their angular momentum content by means of N-body (N ~ 10^6) simulations. Results indicate that in relaxed halos with no apparent substructure: (i) the shape and orientation of the isodensity contours tends to persist throughout the virialised portion of the halo; (ii) most (~70%) of the halos are prolate; (iii) the approximate direction of the angular momentum vector tends to persist throughout the halo; (iv) for spherical shells centered on the core of the halo the magnitude of the specific angular momentum is approximately proportional to their radius; (v) the shortest axis of the ellipsoid which approximates the shape of the halo tends to align with the rotation axis of the halo. This tendency is strongest in the fastest rotating halos. 13 refs., 4 figs.
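
    Two of the quantities discussed above, halo axis ratios and specific angular momentum, are straightforward to compute from particle data. The sketch below does so for a mock prolate particle cloud rather than an actual N-body halo, and uses the simple unweighted second-moment tensor, which is only one of several shape conventions in the literature.

      import numpy as np

      # Axis ratios from the second-moment (shape) tensor and the mean specific
      # angular momentum of a particle set; the "halo" here is a mock prolate cloud.
      rng = np.random.default_rng(1)
      npart = 20_000
      pos = rng.normal(size=(npart, 3)) * np.array([1.0, 0.5, 0.5])   # prolate by construction
      vel = np.cross(np.array([0.0, 0.0, 0.05]), pos) + 0.1 * rng.normal(size=(npart, 3))

      # Shape tensor S_ij = <x_i x_j>; its eigenvalues give the squared principal axes.
      S = pos.T @ pos / npart
      a, b, c = np.sqrt(np.sort(np.linalg.eigvalsh(S))[::-1])
      print(f"axis ratios b/a = {b / a:.2f}, c/a = {c / a:.2f}  (prolate when b/a ~ c/a < 1)")

      # Mean specific angular momentum (equal-mass particles assumed).
      j = np.mean(np.cross(pos, vel), axis=0)
      print("specific angular momentum vector:", np.round(j, 4))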

  7. EX1103L1: Exploration and Mapping, Galapagos Spreading Center: Mapping, CTD and Tow-yo

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This project will be a transit from San Diego, CA to the Galapagos Spreading Center, where multibeam mapping, CTD casts, and CTD tow-yo operations will be performed....

  8. 77 FR 59969 - Notice of Inventory Completion: San Francisco State University, Department of Anthropology, San...

    Science.gov (United States)

    2012-10-01

    ... Inventory Completion: San Francisco State University, Department of Anthropology, San Francisco, CA... Francisco State University, NAGPRA Program (formerly in the Department of Anthropology). The human remains... State University Department of Anthropology records. In the Federal Register (73 FR 30156-30158, May 23...

  9. 78 FR 57482 - Safety Zone; America's Cup Aerobatic Box, San Francisco Bay, San Francisco, CA

    Science.gov (United States)

    2013-09-19

    ...-AA00 Safety Zone; America's Cup Aerobatic Box, San Francisco Bay, San Francisco, CA AGENCY: Coast Guard... America's Cup air shows. These safety zones are established to provide a clear area on the water for... announced by America's Cup Race Management. ADDRESSES: Documents mentioned in this preamble are part of...

  10. 77 FR 42649 - Safety Zone: Sea World San Diego Fireworks, Mission Bay; San Diego, CA

    Science.gov (United States)

    2012-07-20

    ... 1625-AA00 Safety Zone: Sea World San Diego Fireworks, Mission Bay; San Diego, CA AGENCY: Coast Guard... authorized by the Captain of the Port, or his designated representative. DATES: This rule is effective from 8... to ensure the public's safety. B. Basis and Purpose The Ports and Waterways Safety Act gives the...

  11. 75 FR 27432 - Security Zone; Golden Guardian 2010 Regional Exercise; San Francisco Bay, San Francisco, CA

    Science.gov (United States)

    2010-05-17

    ... can better evaluate its effects on them and participate in the rulemaking process. Small businesses... DEPARTMENT OF HOMELAND SECURITY Coast Guard 33 CFR Part 165 [Docket No. USCG-2010-0221] RIN 1625-AA87 Security Zone; Golden Guardian 2010 Regional Exercise; San Francisco Bay, San Francisco, CA AGENCY...

  12. SOFTWARE FOR SUPERCOMPUTER SKIF “ProLit-lC” and “ProNRS-lC” FOR FOUNDRY AND METALLURGICAL PRODUCTIONS

    Directory of Open Access Journals (Sweden)

    A. N. Chichko

    2008-01-01

    Full Text Available The paper presents data on modeling the technological process of mold filling with the computer system 'ProLIT-lc' on the supercomputer system SKIF, as well as data on modeling the steel pouring process with 'ProNRS-lc'. The influence of the number of processors of the multicore SKIF computer system on the speedup and modeling time of the technological processes connected with the production of castings and slugs is shown.

  13. Compact High Resolution SANS using very cold neutrons (VCN-SANS)

    International Nuclear Information System (INIS)

    Kennedy, S.; Yamada, M.; Iwashita, Y.; Geltenbort, P.; Bleuel, M.; Shimizu, H.

    2011-01-01

    SANS (Small Angle Neutron Scattering) is a popular method for elucidation of nano-scale structures. However, science continually challenges SANS to deliver higher performance, prompting exploration of ever more exotic and expensive technologies. We propose a compact high resolution SANS, using very cold neutrons, a magnetic focusing lens and a wide-angle spherical detector. This system will compete with modern 40 m pinhole SANS in one tenth of the length, matching minimum Q, Q-resolution and dynamic range. It will also probe dynamics using the MIEZE method. Our prototype lens (a rotating permanent-magnet sextupole) focuses a pulsed neutron beam over 3-5 nm wavelength and has measured SANS from micelles and polymer blends. (authors)

  14. San Pedro Martir Telescope: Mexican design endeavor

    Science.gov (United States)

    Toledo-Ramirez, Gengis K.; Bringas-Rico, Vicente; Reyes, Noe; Uribe, Jorge; Lopez, Aldo; Tovar, Carlos; Caballero, Xochitl; Del-Llano, Luis; Martinez, Cesar; Macias, Eduardo; Lee, William; Carramiñana, Alberto; Richer, Michael; González, Jesús; Sanchez, Beatriz; Lucero, Diana; Manuel, Rogelio; Segura, Jose; Rubio, Saul; Gonzalez, German; Hernandez, Obed; García, Mary; Lazaro, Jose; Rosales-Ortega, Fabian; Herrera, Joel; Sierra, Gerardo; Serrano, Hazael

    2016-08-01

    The Telescopio San Pedro Martir (TSPM) is a new ground-based optical telescope project, with a 6.5 meter honeycomb primary mirror, to be built in the Observatorio Astronomico Nacional on the Sierra San Pedro Martir (OAN-SPM) located in Baja California, Mexico. The OAN-SPM has an altitude of 2830 meters above sea level and is among the best locations for astronomical observation in the world. It is located 1830 m higher than the atmospheric inversion layer, with 70% photometric nights, 80% spectroscopic nights and a sky brightness of up to 22 mag/arcsec2. The TSPM will be suitable for general science projects intended to improve the knowledge of the universe, as established in the Official Mexican Program for Science, Technology and Innovation 2014-2018. The telescope efforts are headed by two Mexican institutions on behalf of the Mexican astronomical community: the Universidad Nacional Autonoma de Mexico and the Instituto Nacional de Astrofisica, Optica y Electronica. The telescope has been financially supported mainly by the Consejo Nacional de Ciencia y Tecnologia (CONACYT). It is under development by Mexican scientists and engineers from the Center for Engineering and Industrial Development. This development is supported by Mexican-American scientific cooperation, through a partnership with the University of Arizona (UA) and the Smithsonian Astrophysical Observatory (SAO), with M3 Engineering and Technology Corporation in charge of the enclosure and building design. The TSPM will be designed to allow flexibility and possible upgrades in order to maximize resources. Its optical and mechanical designs are based upon those of the Magellan and MMT telescopes. The TSPM primary mirror and its cell will be provided by the INAOE and UA. The telescope will be optimized for the near ultraviolet to the near infrared wavelength range (0.35-2.5 μm), but will allow observations up to 26 μm. The TSPM will initially offer an f/5 Cassegrain focal station. Later, four folded Cassegrain and

  15. Performance characteristics of hybrid MPI/OpenMP implementations of NAS parallel benchmarks SP and BT on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2011-01-01

    The NAS Parallel Benchmarks (NPB) are well-known applications with fixed algorithms for evaluating parallel systems and tools. Multicore supercomputers provide a natural programming paradigm for hybrid programs, whereby OpenMP can be used for data sharing among the cores that comprise a node and MPI can be used for communication between nodes. In this paper, we use the SP and BT benchmarks of MPI NPB 3.3 as a basis for a comparative approach to implement hybrid MPI/OpenMP versions of SP and BT. In particular, we compare the performance of the hybrid SP and BT with their MPI counterparts on large-scale multicore supercomputers. Our performance results indicate that the hybrid SP outperforms the MPI SP by up to 20.76%, and the hybrid BT outperforms the MPI BT by up to 8.58% on up to 10,000 cores on BlueGene/P at Argonne National Laboratory and Jaguar (Cray XT4/5) at Oak Ridge National Laboratory. We also use performance tools and MPI trace libraries available on these supercomputers to further investigate the performance characteristics of the hybrid SP and BT.

  16. Supercomputations and big-data analysis in strong-field ultrafast optical physics: filamentation of high-peak-power ultrashort laser pulses

    Science.gov (United States)

    Voronin, A. A.; Panchenko, V. Ya; Zheltikov, A. M.

    2016-06-01

    High-intensity ultrashort laser pulses propagating in gas media or in condensed matter undergo complex nonlinear spatiotemporal evolution where temporal transformations of optical field waveforms are strongly coupled to an intricate beam dynamics and ultrafast field-induced ionization processes. At the level of laser peak powers orders of magnitude above the critical power of self-focusing, the beam exhibits modulation instabilities, producing random field hot spots and breaking up into multiple noise-seeded filaments. This problem is described by a (3+1)-dimensional nonlinear field evolution equation, which needs to be solved jointly with the equation for ultrafast ionization of a medium. Analysis of this problem, which is equivalent to solving a billion-dimensional evolution problem, is only possible by means of supercomputer simulations augmented with coordinated big-data processing of large volumes of information acquired through theory-guiding experiments and supercomputations. Here, we review the main challenges of supercomputations and big-data processing encountered in strong-field ultrafast optical physics and discuss strategies to confront these challenges.
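
    The full problem is a (3+1)-dimensional field equation coupled to ionization, which is why supercomputers are needed; the sketch below is only a one-dimensional cubic nonlinear Schroedinger analogue solved with the standard split-step Fourier method. The dispersion and Kerr parameters are illustrative and are chosen so that the input pulse is a fundamental soliton, whose preserved peak power provides a convenient correctness check.

      import numpy as np

      # Toy 1D split-step Fourier solver for a cubic nonlinear Schroedinger equation,
      # a drastically reduced analogue of the filamentation problem (no ionization,
      # no transverse beam dynamics). All parameters are illustrative.
      nt, t_win = 1024, 20.0                      # grid points and time window (ps)
      t = np.linspace(-t_win / 2, t_win / 2, nt, endpoint=False)
      omega = 2 * np.pi * np.fft.fftfreq(nt, d=t[1] - t[0])

      beta2, gamma = -0.02, 1.0                   # GVD (ps^2/m) and Kerr coefficient (1/(W m))
      dz, nz = 0.01, 1000                         # step size (m) and number of steps
      P0 = abs(beta2) / gamma                     # fundamental-soliton peak power for a 1 ps sech

      A = np.sqrt(P0) / np.cosh(t)                # sech input pulse
      lin = np.exp(0.5j * beta2 * omega**2 * dz)  # exact linear (dispersive) step in Fourier space

      for _ in range(nz):
          A *= np.exp(0.5j * gamma * np.abs(A)**2 * dz)    # half nonlinear step
          A = np.fft.ifft(lin * np.fft.fft(A))             # full linear step
          A *= np.exp(0.5j * gamma * np.abs(A)**2 * dz)    # half nonlinear step

      print(f"peak power in: {P0:.4f} W   out: {np.max(np.abs(A))**2:.4f} W")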

  17. Performance characteristics of hybrid MPI/OpenMP implementations of NAS parallel benchmarks SP and BT on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2011-03-29

    The NAS Parallel Benchmarks (NPB) are well-known applications with fixed algorithms for evaluating parallel systems and tools. Multicore supercomputers provide a natural programming paradigm for hybrid programs, whereby OpenMP can be used for data sharing among the cores that comprise a node and MPI can be used for communication between nodes. In this paper, we use the SP and BT benchmarks of MPI NPB 3.3 as a basis for a comparative approach to implement hybrid MPI/OpenMP versions of SP and BT. In particular, we compare the performance of the hybrid SP and BT with their MPI counterparts on large-scale multicore supercomputers. Our performance results indicate that the hybrid SP outperforms the MPI SP by up to 20.76%, and the hybrid BT outperforms the MPI BT by up to 8.58% on up to 10,000 cores on BlueGene/P at Argonne National Laboratory and Jaguar (Cray XT4/5) at Oak Ridge National Laboratory. We also use performance tools and MPI trace libraries available on these supercomputers to further investigate the performance characteristics of the hybrid SP and BT.

  18. The ASCI Network for SC '99: A Step on the Path to a 100 Gigabit Per Second Supercomputing Network

    Energy Technology Data Exchange (ETDEWEB)

    PRATT,THOMAS J.; TARMAN,THOMAS D.; MARTINEZ,LUIS M.; MILLER,MARC M.; ADAMS,ROGER L.; CHEN,HELEN Y.; BRANDT,JAMES M.; WYCKOFF,PETER S.

    2000-07-24

    This document highlights the DISCOM² Distance Computing and Communication team's activities at the 1999 Supercomputing conference in Portland, Oregon. This conference is sponsored by the IEEE and ACM. Sandia, Lawrence Livermore and Los Alamos National Laboratories have participated in this conference for eleven years. For the last four years the three laboratories have come together at the conference under the DOE's ASCI (Accelerated Strategic Computing Initiative) rubric. Communication support for the ASCI exhibit is provided by the ASCI DISCOM² project. The DISCOM² communication team uses this forum to demonstrate and focus communication and networking developments within the community. At SC 99, DISCOM built a prototype of the next-generation ASCI network, demonstrated remote clustering techniques, demonstrated the capabilities of the emerging terabit router products, demonstrated the latest technologies for delivering visualization data to the scientific users, and demonstrated the latest in encryption methods, including IP VPN technologies and ATM encryption research. The authors also coordinated the other production networking activities within the booth and between their demonstration partners on the exhibit floor. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support Sandia's overall strategies in ASCI networking.

  19. Combining density functional theory calculations, supercomputing, and data-driven methods to design new materials (Conference Presentation)

    Science.gov (United States)

    Jain, Anubhav

    2017-04-01

    Density functional theory (DFT) simulations solve for the electronic structure of materials starting from the Schrödinger equation. Many case studies have now demonstrated that researchers can often use DFT to design new compounds in the computer (e.g., for batteries, catalysts, and hydrogen storage) before synthesis and characterization in the lab. In this talk, I will focus on how DFT calculations can be executed on large supercomputing resources in order to generate very large data sets on new materials for functional applications. First, I will briefly describe the Materials Project, an effort at LBNL that has virtually characterized over 60,000 materials using DFT and has shared the results with over 17,000 registered users. Next, I will talk about how such data can help discover new materials, describing how preliminary computational screening led to the identification and confirmation of a new family of bulk AMX2 thermoelectric compounds with measured zT reaching 0.8. I will outline future plans for how such data-driven methods can be used to better understand the factors that control thermoelectric behavior, e.g., for the rational design of electronic band structures, in ways that are different from conventional approaches.
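
    The screening workflow described above can be caricatured in a few lines: compute or look up properties for many candidates, then keep only those passing simple stability and band-gap filters before any expensive follow-up. The records and thresholds below are invented placeholders, not Materials Project data or API calls.

      # Hypothetical screening filter in the spirit of the workflow described above.
      candidates = [
          {"id": "AMX2-a", "e_above_hull_eV": 0.00, "band_gap_eV": 1.1},
          {"id": "AMX2-b", "e_above_hull_eV": 0.02, "band_gap_eV": 0.6},
          {"id": "AMX2-c", "e_above_hull_eV": 0.00, "band_gap_eV": 5.8},
          {"id": "AMX2-d", "e_above_hull_eV": 0.15, "band_gap_eV": 0.5},
      ]

      def passes_screen(entry, max_hull_eV=0.05, gap_window_eV=(0.3, 1.5)):
          """Keep near-stable compounds with a band gap in an assumed thermoelectric-friendly window."""
          return (entry["e_above_hull_eV"] <= max_hull_eV
                  and gap_window_eV[0] <= entry["band_gap_eV"] <= gap_window_eV[1])

      shortlist = [c["id"] for c in candidates if passes_screen(c)]
      print("shortlisted candidates:", shortlist)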

  20. Getting To Exascale: Applying Novel Parallel Programming Models To Lab Applications For The Next Generation Of Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Dube, Evi [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Shereda, Charles [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Nau, Lee [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Harris, Lance [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2010-09-27

    As supercomputing moves toward exascale, node architectures will change significantly. CPU core counts on nodes will increase by an order of magnitude or more. Heterogeneous architectures will become more commonplace, with GPUs or FPGAs providing additional computational power. Novel programming models may make better use of on-node parallelism in these new architectures than do current models. In this paper we examine several of these novel models – UPC, CUDA, and OpenCL – to determine their suitability to LLNL scientific application codes. Our study consisted of several phases: we conducted interviews with code teams and selected two codes to port; we learned how to program in the new models and ported the codes; we debugged and tuned the ported applications; we measured results, and documented our findings. We conclude that UPC is a challenge for porting code, Berkeley UPC is not very robust, and UPC is not suitable as a general alternative to OpenMP for a number of reasons. CUDA is well supported and robust but is a proprietary NVIDIA standard, while OpenCL is an open standard. Both are well suited to a specific set of application problems that can be run on GPUs, but some problems are not suited to GPUs. Further study of the landscape of novel models is recommended.

  1. Designing and application of SAN extension interface based on CWDM

    Science.gov (United States)

    Qin, Leihua; Yu, Shengsheng; Zhou, Jingli

    2005-11-01

    As Fibre Channel (FC) becomes the protocol of choice within corporate data centers, enterprises are increasingly deploying SANs in their data centers. In order to mitigate the risk of losing data and to improve data availability, more and more enterprises are adopting storage extension technologies to replicate their business-critical data to a secondary site. Transmitting this information over distance requires a carrier-grade environment with zero data loss, scalable throughput, low jitter, high security and the ability to span long distances. To address these business requirements, there are three basic architectures for storage extension: Storage over Internet Protocol, Storage over Synchronous Optical Network/Synchronous Digital Hierarchy (SONET/SDH) and Storage over Dense Wavelength Division Multiplexing (DWDM). Each approach varies in functionality, complexity, cost, scalability, security, availability, predictable behavior (bandwidth, jitter, latency) and multiple-carrier limitations. Compared with these connectivity technologies, Coarse Wavelength Division Multiplexing (CWDM) is a simplified, low-cost and high-performance connectivity solution for enterprises deploying storage extension. In this paper, we design a storage extension connection over CWDM and test its electrical characteristics and the random read and write performance of a disk array through the CWDM connection; the test results show that the performance of the connection over CWDM is acceptable. Furthermore, we propose three kinds of network architecture for SAN extension based on the CWDM interface. Finally, the credit-based flow control mechanism of FC and the relationship between credits and extension distance are analyzed.
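
    The relationship between buffer-to-buffer credits and extension distance mentioned at the end of the abstract follows from keeping the link full: the outstanding credits must cover the frames in flight during one round trip. The sketch below implements that back-of-the-envelope estimate with textbook constants (maximum FC frame size, 8b/10b encoding, roughly 5 microseconds of fiber latency per kilometer); it is not the paper's own analysis.

      import math

      # Rough estimate of FC buffer-to-buffer credits needed to keep a link full:
      # credits ~ round-trip time / frame serialization time. Constants are typical
      # textbook values, not measurements from this paper.
      def bb_credits_needed(distance_km, line_rate_gbaud, frame_bytes=2148,
                            fiber_us_per_km=5.0, encoding_overhead=10.0 / 8.0):
          frame_time_us = frame_bytes * 8 * encoding_overhead / (line_rate_gbaud * 1e3)
          round_trip_us = 2.0 * distance_km * fiber_us_per_km
          return math.ceil(round_trip_us / frame_time_us)

      for rate, label in [(1.0625, "1GFC"), (2.125, "2GFC"), (4.25, "4GFC")]:
          credits = {d: bb_credits_needed(d, rate) for d in (10, 50, 100)}
          print(label, "credits needed at 10/50/100 km:", credits)

    With these assumptions the estimate reproduces the usual rules of thumb of roughly 0.5, 1 and 2 credits per kilometer at 1, 2 and 4 Gbit/s Fibre Channel.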

  2. SANS facility at the Pitesti 14 MW Triga reactor

    International Nuclear Information System (INIS)

    Ionita, I.; Anghel, E.; Mincu, M.; Datcu, A.; Grabcev, B.; Todireanu, S.; Constantin, F.; Shvetsov, V.; Popescu, G.

    2006-01-01

    Full text of publication follows: At the present time, an important and not yet fully exploited potential is represented by the SANS instruments existing at lower-power reactors and at reactors in developing countries, even if they are generally endowed with simpler equipment and lack the infrastructure to maintain and repair high-technology accessories. The application of SANS at lower-power reactors and in developing countries is nevertheless possible in well-selected topics where only a restricted Q range is required, where the scattering power is expected to be sufficiently high, or where the sample size can be increased at the expense of resolution. Examples of this type of application are: 1) Phase separation and precipitates in materials science, 2) Ultrafine grained materials (nano-crystals, ceramics), 3) Porous materials such as concretes and filter materials, 4) Conformation and entanglements of polymer chains, 5) Aggregates of micelles in microemulsions, gels and colloids, 6) Radiation damage in steels and alloys. The need to install a new SANS facility at the Triga reactor of the Institute of Nuclear Researches in Pitesti, Romania, became pressing especially after the shutting down of the VVRS reactor in Bucharest. A monochromatic neutron beam with 1.5 Angstrom ≤ λ ≤ 5 Angstrom is produced by a mechanical velocity selector with helical slots. The distance between the sample and the detector plane is 5.2 m. The sample width may be fixed between 10 mm and 20 mm. The minimum value of the scattering vector is Q_min = 0.005 Angstrom^-1 while the maximal value is Q_max = 0.5 Angstrom^-1. The relative error is ΔQ/Q_min = 0.5. The cooperation partnership between advanced research centers and smaller ones from developing countries could be fruitful, with the former acting as mentors in solving specific problems. Such a partnership was established between INR Pitesti, Romania and JINR Dubna, Russia. The first step in this cooperation

  3. Cacao use and the San Lorenzo Olmec

    Science.gov (United States)

    Powis, Terry G.; Cyphers, Ann; Gaikwad, Nilesh W.; Grivetti, Louis; Cheong, Kong

    2011-01-01

    Mesoamerican peoples had a long history of cacao use—spanning more than 34 centuries—as confirmed by previous identification of cacao residues on archaeological pottery from Paso de la Amada on the Pacific Coast and the Olmec site of El Manatí on the Gulf Coast. Until now, comparable evidence from San Lorenzo, the premier Olmec capital, was lacking. The present study of theobromine residues confirms the continuous presence and use of cacao products at San Lorenzo between 1800 and 1000 BCE, and documents assorted vessels forms used in its preparation and consumption. One elite context reveals cacao use as part of a mortuary ritual for sacrificial victims, an event that occurred during the height of San Lorenzo's power. PMID:21555564

  4. Decolonizing our plates : analyzing San Diego and vegans of color food politics

    OpenAIRE

    Navarro, Marilisa Cristina

    2011-01-01

    This project focuses on discursive formations of race, gender, class, and sexuality within food justice movements as well as these discursive formations within veganism. In particular, I analyze how mainstream food justice movements in San Diego engage in discourses of colorblindness, universalism, individualism, whiteness, and consumption. I also examine how these movements are centered on possessive individualism, or one's capacity to own private property, as the means through which they se...

  5. 77 FR 66499 - Environmental Impact Statement: San Bernardino and Los Angeles Counties, CA

    Science.gov (United States)

    2012-11-05

    ... San Bernardino, 285 East Hospitality Lane, San Bernardino, California 92408 (2) Sheraton Ontario..., November 13, 2012 from 5-7 p.m. at the Hilton San Bernardino, 285 East Hospitality Lane, San Bernardino...

  6. Rocks and geology in the San Francisco Bay region

    Science.gov (United States)

    Stoffer, Philip W.

    2002-01-01

    The landscape of the San Francisco Bay region is host to a greater variety of rocks than most other regions in the United States. This introductory guide provides illustrated descriptions of 46 common and important varieties of igneous, sedimentary, and metamorphic rock found in the region. Rock types are described in the context of their identification qualities, how they form, and where they occur in the region. The guide also provides discussion of regional geology, plate tectonics, the rock cycle, and the significance of the selected rock types in relation to both earth history and the impact of mineral resources on development in the region. Maps and text also provide information on where rocks, fossils, and geologic features can be visited on public lands or in association with public displays in regional museums, park visitor centers, and other public facilities.

  7. In the San Joaquin Valley, hardly a sprinkle

    International Nuclear Information System (INIS)

    Holson, L.M.

    1993-01-01

    California has declared its six-year drought over, but in the San Joaquin Valley, center of the state's $18.5 billion agriculture industry, it lives on. The water from two weeks of strong rain this winter that swelled reservoirs and piled snow on the mountains is only trickling toward the region's nearly 20,000 farms. Federal water officials are under heavy pressure from the Environmental Protection Agency, which wants to improve water quality, and are worried about the plight of endangered fish in the Sacramento River. So, on March 12 they announced they would send farmers only 40% of the water allotments they got before the drought. The rest is being held against possible shortages. For the once-green valley, another year without water has brought many farmers perilously close to extinction

  8. Mammal Track Counts - San Diego County, 2010 [ds709

    Data.gov (United States)

    California Natural Resource Agency — The San Diego Tracking Team (SDTT) is a non-profit organization dedicated to promoting the preservation of wildlife habitat in San Diego County through citizen-based...

  9. Coastal Cactus Wren, San Diego Co. - 2009 [ds702

    Data.gov (United States)

    California Natural Resource Agency — The San Diego Multiple Species Conservation program (MSCP) was developed for the conservation of plants and animals in the southeast portion of San Diego County....

  10. Coastal Cactus Wren, San Diego Co. - 2011 [ds708

    Data.gov (United States)

    California Natural Resource Agency — The San Diego Multiple Species Conservation program (MSCP) was developed for the conservation of plants and animals in the southeast portion of San Diego County....

  11. Species Observations (poly) - San Diego County [ds648

    Data.gov (United States)

    California Natural Resource Agency — Created in 2009, the SanBIOS database serves as a single repository of species observations collected by various departments within the County of San Diego's Land...

  12. Mammal Track Counts - San Diego County [ds442

    Data.gov (United States)

    California Natural Resource Agency — The San Diego Tracking Team (SDTT) is a non-profit organization dedicated to promoting the preservation of wildlife habitat in San Diego County through citizen-based...

  13. Species Observations (poly) - San Diego County [ds648

    Data.gov (United States)

    California Department of Resources — Created in 2009, the SanBIOS database serves as a single repository of species observations collected by various departments within the County of San Diego's Land...

  14. New generation of docking programs: Supercomputer validation of force fields and quantum-chemical methods for docking.

    Science.gov (United States)

    Sulimov, Alexey V; Kutov, Danil C; Katkova, Ekaterina V; Ilin, Ivan S; Sulimov, Vladimir B

    2017-11-01

    Discovery of new inhibitors of the protein associated with a given disease is the initial and most important stage of the whole process of rational development of new pharmaceutical substances. New inhibitors block the active site of the target protein and the disease is cured. Computer-aided molecular modeling can considerably increase the effectiveness of new inhibitor development. Reliable prediction of target protein inhibition by a small molecule (ligand) is determined by the accuracy of docking programs. Such programs position a ligand in the target protein and estimate the protein-ligand binding energy. The positioning accuracy of modern docking programs is satisfactory. However, the accuracy of binding energy calculations is too low to predict good inhibitors. For effective application of docking programs to the development of new inhibitors, the accuracy of binding energy calculations should be better than 1 kcal/mol. Reasons for the limited accuracy of modern docking programs are discussed. One of the most important factors limiting this accuracy is the imperfection of protein-ligand energy calculations. Results of supercomputer validation of several force fields and quantum-chemical methods for docking are presented. The validation was performed by quasi-docking as follows. First, the low-energy minima spectra of 16 protein-ligand complexes were found by exhaustive minima search in the MMFF94 force field. Second, the energies of the lowest 8192 minima were recalculated with the CHARMM force field and the PM6-D3H4X and PM7 quantum-chemical methods for each complex. The analysis of the minima energies reveals that the docking positioning accuracies of the PM7 and PM6-D3H4X quantum-chemical methods and the CHARMM force field are close to one another and better than the positioning accuracy of the MMFF94 force field. Copyright © 2017 Elsevier Inc. All rights reserved.
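
    A minimal sketch of the re-scoring idea follows. The minima table, the 2 Å hit threshold, and the helper name are hypothetical (only the method names MMFF94, CHARMM and PM7 come from the record); it simply re-ranks a set of local minima by each method's energies and checks whether the lowest-energy minimum lies near the crystallographic pose, which is the positioning-accuracy criterion the abstract refers to.

    ```python
    import numpy as np

    # Hypothetical table of local minima for one protein-ligand complex:
    # RMSD of each minimum to the crystal pose (Angstrom) and its energy
    # re-evaluated by several methods (kcal/mol).  All numbers are invented.
    rmsd = np.array([0.8, 1.5, 3.2, 5.1, 2.4])
    energies = {
        "MMFF94": np.array([-52.1, -54.3, -49.0, -55.2, -50.7]),
        "CHARMM": np.array([-61.4, -60.2, -57.9, -58.8, -59.1]),
        "PM7":    np.array([-70.3, -68.9, -65.1, -66.2, -67.4]),
    }

    def positioning_hit(energy, rmsd, threshold=2.0):
        """True if the lowest-energy minimum lies within `threshold`
        Angstrom RMSD of the crystallographic pose."""
        return rmsd[np.argmin(energy)] <= threshold

    for method, e in energies.items():
        best = rmsd[np.argmin(e)]
        print(f"{method:8s} global-minimum RMSD = {best:.1f} A, "
              f"hit = {positioning_hit(e, rmsd)}")
    ```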

  15. San Francisco Bay Long Term Management Strategy for Dredging

    Science.gov (United States)

    The San Francisco Bay Long Term Management Strategy (LTMS) is a cooperative effort to develop a new approach to dredging and dredged material disposal in the San Francisco Bay area. The LTMS serves as the Regional Dredging Team for the San Francisco area.

  16. 33 CFR 110.120 - San Luis Obispo Bay, Calif.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false San Luis Obispo Bay, Calif. 110... ANCHORAGES ANCHORAGE REGULATIONS Special Anchorage Areas § 110.120 San Luis Obispo Bay, Calif. (a) Area A-1. Area A-1 is the water area bounded by the San Luis Obispo County wharf, the shoreline, a line drawn...

  17. Usage Center

    DEFF Research Database (Denmark)

    Kleinaltenkamp, Michael; Plewa, Carolin; Gudergan, Siegfried

    2017-01-01

    Purpose: The purpose of this paper is to advance extant theorizing around resource integration by conceptualizing and delineating the notion of a usage center. A usage center consists of a combination of interdependent actors that draw on resources across their individual usage processes to create v...

  18. October 1986 San Salvador, El Salvador Images

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — At least 1,000 people killed, 10,000 injured, 200,000 homeless and severe damage in the San Salvador area. About 50 fatalities were the result of landslides in the...

  19. SANS analysis of aqueous ionic perfluoropolyether micelles

    CERN Document Server

    Gambi, C M C; Chittofrati, A; Pieri, R; Baglioni, P; Teixeira, J

    2002-01-01

    Preliminary SANS results of ionic chlorine terminated perfluoropolyether micelles in water are given. The experimental spectra have been analyzed by a two-shell ellipsoidal model for the micellar form factor and a screened Coulombic plus hard-sphere repulsion potential for the structure factor. (orig.)
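
    As a rough illustration of the kind of form-factor modelling used in such SANS analyses, the sketch below evaluates a plain core-shell sphere intensity in the dilute limit (structure factor equal to one). This is a stand-in, not the two-shell ellipsoidal model with screened Coulombic plus hard-sphere structure factor used in the record, and all radii and scattering length densities are invented.

    ```python
    import numpy as np

    def sphere_amplitude(q, r, drho):
        """Scattering amplitude of a homogeneous sphere of radius r (Angstrom)
        with excess scattering length density drho (A^-2)."""
        qr = q * r
        vol = 4.0 / 3.0 * np.pi * r ** 3
        return 3.0 * vol * drho * (np.sin(qr) - qr * np.cos(qr)) / qr ** 3

    def core_shell_intensity(q, r_core, t_shell, rho_core, rho_shell, rho_solv):
        """I(q) for a core-shell sphere in the dilute limit (S(q) = 1)."""
        a = (sphere_amplitude(q, r_core, rho_core - rho_shell)
             + sphere_amplitude(q, r_core + t_shell, rho_shell - rho_solv))
        return a ** 2

    # Invented parameters, purely for illustration.
    q = np.linspace(0.005, 0.3, 200)          # A^-1
    iq = core_shell_intensity(q, r_core=20.0, t_shell=8.0,
                              rho_core=4.0e-6, rho_shell=1.5e-6, rho_solv=6.4e-6)
    print(iq[:3])
    ```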

  20. 77 FR 46115 - Notice of Inventory Completion: San Diego Museum of Man, San Diego, CA

    Science.gov (United States)

    2012-08-02

    ...The San Diego Museum of Man has completed an inventory of human remains in consultation with the appropriate Indian tribe, and has determined that there is a cultural affiliation between the human remains and a present-day Indian tribe. Representatives of any Indian tribe that believes itself to be culturally affiliated with the human remains may contact the San Diego Museum of Man. Repatriation of the human remains to the Indian tribe stated below may occur if no additional claimants come forward.

  1. A case for historic joint rupture of the San Andreas and San Jacinto faults

    Science.gov (United States)

    Lozos, Julian C.

    2016-01-01

    The San Andreas fault is considered to be the primary plate boundary fault in southern California and the most likely fault to produce a major earthquake. I use dynamic rupture modeling to show that the San Jacinto fault is capable of rupturing along with the San Andreas in a single earthquake, and interpret these results along with existing paleoseismic data and historic damage reports to suggest that this has likely occurred in the historic past. In particular, I find that paleoseismic data and historic observations for the ~M7.5 earthquake of 8 December 1812 are best explained by a rupture that begins on the San Jacinto fault and propagates onto the San Andreas fault. This precedent carries the implications that similar joint ruptures are possible in the future and that the San Jacinto fault plays a more significant role in seismic hazard in southern California than previously considered. My work also shows how physics-based modeling can be used for interpreting paleoseismic data sets and understanding prehistoric fault behavior. PMID:27034977

  2. A case for historic joint rupture of the San Andreas and San Jacinto faults.

    Science.gov (United States)

    Lozos, Julian C

    2016-03-01

    The San Andreas fault is considered to be the primary plate boundary fault in southern California and the most likely fault to produce a major earthquake. I use dynamic rupture modeling to show that the San Jacinto fault is capable of rupturing along with the San Andreas in a single earthquake, and interpret these results along with existing paleoseismic data and historic damage reports to suggest that this has likely occurred in the historic past. In particular, I find that paleoseismic data and historic observations for the ~M7.5 earthquake of 8 December 1812 are best explained by a rupture that begins on the San Jacinto fault and propagates onto the San Andreas fault. This precedent carries the implications that similar joint ruptures are possible in the future and that the San Jacinto fault plays a more significant role in seismic hazard in southern California than previously considered. My work also shows how physics-based modeling can be used for interpreting paleoseismic data sets and understanding prehistoric fault behavior.

  3. San Juan Uchucuanicu: évolution historique

    Directory of Open Access Journals (Sweden)

    1975-01-01

    Full Text Available The community of San Juan has been officially recognized since 1939. The first part concerns the organization of the reducción of San Juan around the middle of the 16th century. The fiscal burden weighed heavily on the village, and in the 17th century the crisis was general throughout the Chancay valley. The Christianization of the inhabitants was complete by the middle of that same century. From the end of the 17th century and throughout the 18th, conflicts multiplied between San Juan and the neighboring villages over grazing lands and the possession of water. The second part of the work concerns the relations of the San Juan community with contemporary Peru: a fiscal burden that remained very heavy at the end of the colonial period, and exactions by the military just before independence. The republican period still saw conflicts with the neighboring villages, but also the rise of families seeking to extract the maximum from the community. The lands were divided and allotted: the deterioration of the traditional communal organization is clear. Conflicts multiplied among smallholders, but also with the neighboring haciendas: a genuine class struggle appeared. The present situation is uncertain, and the weight of the market economy grows with the exodus of the young. What will the San Juan community be at the end of this century?

  4. New Mexico High School Supercomputing Challenge, 1990--1995: Five years of making a difference to students, teachers, schools, and communities. Progress report

    Energy Technology Data Exchange (ETDEWEB)

    Foster, M.; Kratzer, D.

    1996-02-01

    The New Mexico High School Supercomputing Challenge is an academic program dedicated to increasing interest in science and math among high school students by introducing them to high performance computing. This report provides a summary and evaluation of the first five years of the program, describes the program and shows the impact that it has had on high school students, their teachers, and their communities. Goals and objectives are reviewed and evaluated, growth and development of the program are analyzed, and future directions are discussed.

  5. The disappearing San of southeastern Africa and their genetic affinities.

    Science.gov (United States)

    Schlebusch, Carina M; Prins, Frans; Lombard, Marlize; Jakobsson, Mattias; Soodyall, Himla

    2016-12-01

    Southern Africa was likely exclusively inhabited by San hunter-gatherers before ~2000 years ago. Around that time, East African groups assimilated with local San groups and gave rise to the Khoekhoe herders. Subsequently, Bantu-speaking farmers, arriving from the north (~1800 years ago), assimilated and displaced San and Khoekhoe groups, a process that intensified with the arrival of European colonists ~350 years ago. In contrast to the western parts of southern Africa, where several Khoe-San groups still live today, the eastern parts are largely populated by Bantu speakers and individuals of non-African descent. Only a few scattered groups with oral traditions of Khoe-San ancestry remain. Advances in genetic research open up new ways to understand the population history of southeastern Africa. We investigate the genomic variation of the remaining individuals from two South African groups with oral histories connecting them to eastern San groups, i.e., the San from Lake Chrissie and the Duma San of the uKhahlamba-Drakensberg. Using ~2.2 million genetic markers, combined with comparative published data sets, we show that the Lake Chrissie San have genetic ancestry from both Khoe-San (likely the ||Xegwi San) and Bantu speakers. Specifically, we found that the Lake Chrissie San are closely related to the current southern San groups (i.e., the Karretjie people). Duma San individuals, on the other hand, were genetically similar to southeastern Bantu speakers from South Africa. This study illustrates how genetic tools can be used to assess hypotheses about the ancestry of people who seemingly lost their historic roots, only recalling a vague oral tradition of their origin.

  6. Super-computer architecture

    CERN Document Server

    Hockney, R W

    1977-01-01

    This paper examines the design of the top-of-the-range, scientific, number-crunching computers. The market for such computers is not as large as that for smaller machines, but on the other hand it is by no means negligible. The present work-horse machines in this category are the CDC 7600 and IBM 360/195, and over fifty of the former machines have been sold. The types of installation that form the market for such machines are not only the major scientific research laboratories in the major countries-such as Los Alamos, CERN, Rutherford laboratory-but also major universities or university networks. It is also true that, as with sports cars, innovations made to satisfy the top of the market today often become the standard for the medium-scale computer of tomorrow. Hence there is considerable interest in examining present developments in this area. (0 refs).

  7. Supercomputer debugging workshop '92

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.S.

    1993-02-01

    This report contains papers or viewgraphs on the following topics: The ABCs of Debugging in the 1990s; Cray Computer Corporation; Thinking Machines Corporation; Cray Research, Incorporated; Sun Microsystems, Inc; Kendall Square Research; The Effects of Register Allocation and Instruction Scheduling on Symbolic Debugging; Debugging Optimized Code: Currency Determination with Data Flow; A Debugging Tool for Parallel and Distributed Programs; Analyzing Traces of Parallel Programs Containing Semaphore Synchronization; Compile-time Support for Efficient Data Race Detection in Shared-Memory Parallel Programs; Direct Manipulation Techniques for Parallel Debuggers; Transparent Observation of XENOOPS Objects; A Parallel Software Monitor for Debugging and Performance Tools on Distributed Memory Multicomputers; Profiling Performance of Inter-Processor Communications in an iWarp Torus; The Application of Code Instrumentation Technology in the Los Alamos Debugger; and CXdb: The Road to Remote Debugging.

  8. Algorithms for supercomputers

    International Nuclear Information System (INIS)

    Alder, B.J.

    1986-01-01

    Better numerical procedures, improved computational power and additional physical insights have contributed significantly to progress in dealing with classical and quantum statistical mechanics problems. Past developments are discussed and future possibilities outlined

  9. Algorithms for supercomputers

    International Nuclear Information System (INIS)

    Alder, B.J.

    1985-12-01

    Better numerical procedures, improved computational power and additional physical insights have contributed significantly to progress in dealing with classical and quantum statistical mechanics problems. Past developments are discussed and future possibilities outlined

  10. High Performance Computing in Science and Engineering '99 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2000-01-01

    The book contains reports about the most significant projects from science and engineering of the Federal High Performance Computing Center Stuttgart (HLRS). They were carefully selected in a peer-review process and are showcases of an innovative combination of state-of-the-art modeling, novel algorithms and the use of leading-edge parallel computer technology. The projects of HLRS are using supercomputer systems operated jointly by university and industry and therefore a special emphasis has been put on the industrial relevance of results and methods.

  11. High Performance Computing in Science and Engineering '98 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    1999-01-01

    The book contains reports about the most significant projects from science and industry that are using the supercomputers of the Federal High Performance Computing Center Stuttgart (HLRS). These projects are from different scientific disciplines, with a focus on engineering, physics and chemistry. They were carefully selected in a peer-review process and are showcases for an innovative combination of state-of-the-art physical modeling, novel algorithms and the use of leading-edge parallel computer technology. As HLRS is in close cooperation with industrial companies, special emphasis has been put on the industrial relevance of results and methods.

  12. Damage Detection Response Characteristics of Open Circuit Resonant (SansEC) Sensors

    Science.gov (United States)

    Dudley, Kenneth L.; Szatkowski, George N.; Smith, Laura J.; Koppen, Sandra V.; Ely, Jay J.; Nguyen, Truong X.; Wang, Chuantong; Ticatch, Larry A.; Mielnik, John J.

    2013-01-01

    The capability to assess the current or future state of the health of an aircraft to improve safety, availability, and reliability while reducing maintenance costs has been a continuous goal for decades. Many companies, commercial entities, and academic institutions have become interested in Integrated Vehicle Health Management (IVHM) and a growing effort of research into "smart" vehicle sensing systems has emerged. Methods to detect damage to aircraft materials and structures have historically relied on visual inspection during pre-flight or post-flight operations by flight and ground crews. More quantitative non-destructive investigations with various instruments and sensors have traditionally been performed when the aircraft is out of operational service during major scheduled maintenance. Through the use of reliable sensors coupled with data monitoring, data mining, and data analysis techniques, the health state of a vehicle can be detected in-situ. NASA Langley Research Center (LaRC) is developing a composite aircraft skin damage detection method and system based on open circuit SansEC (Sans Electric Connection) sensor technology. Composite materials are increasingly used in modern aircraft for reducing weight, improving fuel efficiency, and enhancing the overall design, performance, and manufacturability of airborne vehicles. Materials such as fiberglass reinforced composites (FRC) and carbon-fiber-reinforced polymers (CFRP) are being used to great advantage in airframes, wings, engine nacelles, turbine blades, fairings, fuselage structures, empennage structures, control surfaces and aircraft skins. SansEC sensor technology is a new technical framework for designing, powering, and interrogating sensors to detect various types of damage in composite materials. The source cause of the in-service damage (lightning strike, impact damage, material fatigue, etc.) to the aircraft composite is not relevant. The sensor will detect damage independent of the cause

  13. San Rafael mining and fabrication complex today

    International Nuclear Information System (INIS)

    Navarra, Pablo; Aldebert, Sergio R.

    2005-01-01

    In Mendoza province, 35 km west of San Rafael city, is located a CNEA installation for uranium ore extraction and concentration: the San Rafael Mining and Fabrication Complex. By the middle of the nineties, as a consequence of the very low prices of uranium concentrate in the international market and of the high internal production costs, uranium extraction was stopped. Today, the international price of the concentrate has risen very significantly and the Government has decided to complete the construction of the Atucha II Nuclear Power Station. Moreover, studies have been started for new nuclear power plants. In such circumstances the reactivation of the Complex will ensure the uranium supply for our nuclear power stations, contributing to the improvement of the energy generation mix in our country. (author)

  14. San Telmo, backpackers y otras globalizaciones

    Directory of Open Access Journals (Sweden)

    Fernando Firmo

    2015-12-01

    Full Text Available This article aims to contribute to the debate on other forms of globalization by presenting an ethnography, carried out in the San Telmo neighborhood, of backpackers who combine travel and work in their experiences. Their objective is to travel while profiting from it to obtain the capital they need to keep moving around the globe. In this text I want to speak about these genuine actors of popular globalization, who put the focus on alternative, non-hegemonic processes and agents and who, in this case, carry out their activity in the context of the backpacker experience in San Telmo; my intention is to enrich reflections on globalization from below.

  15. SAFOD Penetrates the San Andreas Fault

    Directory of Open Access Journals (Sweden)

    Mark D. Zoback

    2006-03-01

    Full Text Available SAFOD, the San Andreas Fault Observatory at Depth (Fig. 1), completed an important milestone in July 2005 by drilling through the San Andreas Fault at seismogenic depth. SAFOD is one of three major components of EarthScope, a U.S. National Science Foundation (NSF) initiative being conducted in collaboration with the U.S. Geological Survey (USGS). The International Continental Scientific Drilling Program (ICDP) provides engineering and technical support for the project as well as online access to project data and information (http://www.icdp-online.de/sites/sanandreas/news/news1.html). In 2002, the ICDP, the NSF, and the USGS provided funding for a pilot hole project at the SAFOD site. Twenty scientific papers summarizing the results of the pilot hole project as well as pre-SAFOD site characterization studies were published in Geophysical Research Letters (Vol. 31, Nos. 12 and 15, 2004).

  16. Geology and petrography of the Socoscora Sierra . Province of San Luis. Republica Argentina

    International Nuclear Information System (INIS)

    Carugno Duran, A.

    1998-01-01

    This paper presents a geological and petrographic study of the Sierra de Socoscora, San Luis, Argentina. This range is a block of lower elevation than the Sierra de San Luis, located at its west-central margin. It is formed by a crystalline basement composed of high-grade metamorphic rocks with a penetrative foliation of N-S strike. In this context it is possible to define petrographically the following units: migmatites, which make up a large part of the range; amphibolites; marbles; skarns; mylonites; and pegmatites. These units show amphibolite-facies mineral assemblages and, in some of them, retrograde metamorphism to greenschist facies can be observed. The metamorphic structure is complex and records at least three deformation events.

  17. Patient Workload Profile: National Naval Medical Center (NNMC), Bethesda, MD.

    Science.gov (United States)

    1980-06-01

    WESTEC Services Inc., San Diego, CA; W. T. Rasmussen et al., June 1980. ...provides site workload data for the National Naval Medical Center (NNMC) within the following functional support areas: Patient Appointment...on managing medical and patient data, thereby offering the health care provider and administrator more powerful capabilities in dealing with and...

  18. Hispanics of a San Diego Barrio.

    Science.gov (United States)

    1983-04-01

    The electronic music of Black American discotheques, played loudly on automobile stereo systems or on the oversized "sound boxes" which have more...rider" automobiles, and intense partying are parts of an essentially anti-social image held by the larger San Diego community. Parallels might be drawn...

  19. Pinturas Murales en San Marcos de Salamanca

    Directory of Open Access Journals (Sweden)

    Julián ÁLVAREZ VILLAR

    2009-10-01

    Full Text Available In the first days of September, repair work began on the roof of the church of San Marcos, taking advantage of the time during which the church had to be closed to carry out some repairs. This led the parish priest, Don José Marcos, assisted by the coadjutor Don Leandro Lozano, to interesting discoveries that give even greater artistic value to this interesting church.

  20. San Bernardino National Wildlife Refuge Well 10

    Energy Technology Data Exchange (ETDEWEB)

    Ensminger, J.T.; Easterly, C.E.; Ketelle, R.H.; Quarles, H.; Wade, M.C.

    1999-12-01

    The U.S. Geological Survey (USGS), at the request of the U.S. Fish and Wildlife Service, evaluated the water production capacity of an artesian well in the San Bernardino National Wildlife Refuge, Arizona. Water from the well initially flows into a pond containing three federally threatened or endangered fish species, and water from this pond feeds an adjacent pond/wetland containing an endangered plant species.

  1. An overview of San Francisco Bay PORTS

    Science.gov (United States)

    Cheng, Ralph T.; McKinnie, David; English, Chad; Smith, Richard E.

    1998-01-01

    The Physical Oceanographic Real-Time System (PORTS) provides observations of tides, tidal currents, and meteorological conditions in real-time. The San Francisco Bay PORTS (SFPORTS) is a decision support system to facilitate safe and efficient maritime commerce. In addition to real-time observations, SFPORTS includes a nowcast numerical model forming a San Francisco Bay marine nowcast system. SFPORTS data and nowcast numerical model results are made available to users through the World Wide Web (WWW). A brief overview of SFPORTS is presented, from the data flow originated at instrument sensors to final results delivered to end users on the WWW. A user-friendly interface for SFPORTS has been designed and implemented. Appropriate field data analysis, nowcast procedures, design and generation of graphics for WWW display of field data and nowcast results are presented and discussed. Furthermore, SFPORTS is designed to support hazardous materials spill prevention and response, and to serve as resources to scientists studying the health of San Francisco Bay ecosystem. The success (or failure) of the SFPORTS to serve the intended user community is determined by the effectiveness of the user interface.

  2. Performance Characteristics of Hybrid MPI/OpenMP Scientific Applications on a Large-Scale Multithreaded BlueGene/Q Supercomputer

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2013-01-01

    In this paper, we investigate the performance characteristics of five hybrid MPI/OpenMP scientific applications (two NAS Parallel Benchmarks Multi-Zone, SP-MZ and BT-MZ; an earthquake simulation, PEQdyna; an aerospace application, PMLB; and a 3D particle-in-cell application, GTC) on a large-scale multithreaded Blue Gene/Q supercomputer at Argonne National Laboratory, and quantify the performance gap resulting from using different numbers of threads per node. We use performance tools and MPI profile and trace libraries available on the supercomputer to analyze and compare the performance of these hybrid scientific applications as the number of OpenMP threads per node increases, and find that increasing the number of threads beyond a certain point saturates or worsens the performance of these hybrid applications. For the strong-scaling hybrid scientific applications such as SP-MZ, BT-MZ, PEQdyna and PMLB, using 32 threads per node results in much better application efficiency than using 64 threads per node; as the number of threads per node increases, the FPU (Floating Point Unit) percentage decreases, and the MPI percentage (except for PMLB) and IPC (instructions per cycle) per core (except for BT-MZ) increase. For the weak-scaling hybrid scientific application GTC, the performance trend (relative speedup) is very similar as the number of threads per node increases, regardless of how many nodes (32, 128, 512) are used. © 2013 IEEE.

  3. Performance Characteristics of Hybrid MPI/OpenMP Scientific Applications on a Large-Scale Multithreaded BlueGene/Q Supercomputer

    KAUST Repository

    Wu, Xingfu

    2013-07-01

    In this paper, we investigate the performance characteristics of five hybrid MPI/OpenMP scientific applications (two NAS Parallel Benchmarks Multi-Zone, SP-MZ and BT-MZ; an earthquake simulation, PEQdyna; an aerospace application, PMLB; and a 3D particle-in-cell application, GTC) on a large-scale multithreaded Blue Gene/Q supercomputer at Argonne National Laboratory, and quantify the performance gap resulting from using different numbers of threads per node. We use performance tools and MPI profile and trace libraries available on the supercomputer to analyze and compare the performance of these hybrid scientific applications as the number of OpenMP threads per node increases, and find that increasing the number of threads beyond a certain point saturates or worsens the performance of these hybrid applications. For the strong-scaling hybrid scientific applications such as SP-MZ, BT-MZ, PEQdyna and PMLB, using 32 threads per node results in much better application efficiency than using 64 threads per node; as the number of threads per node increases, the FPU (Floating Point Unit) percentage decreases, and the MPI percentage (except for PMLB) and IPC (instructions per cycle) per core (except for BT-MZ) increase. For the weak-scaling hybrid scientific application GTC, the performance trend (relative speedup) is very similar as the number of threads per node increases, regardless of how many nodes (32, 128, 512) are used. © 2013 IEEE.
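
    The threads-per-node comparison described in these records can be mimicked with a very small measurement harness: one MPI rank per node times a threaded kernel, and the run is repeated with 32 and then 64 threads per rank via OMP_NUM_THREADS. The sketch below assumes mpi4py and a threaded NumPy/BLAS; it is illustrative only and is not the NAS benchmarks or the profiling tools used in the study.

    ```python
    import os
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    threads = int(os.environ.get("OMP_NUM_THREADS", "1"))

    # A threaded matrix multiply stands in for the compute phase.
    n = 2048
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)

    comm.Barrier()
    t0 = MPI.Wtime()
    c = a @ b
    comm.Barrier()
    elapsed = MPI.Wtime() - t0

    # Report the slowest rank, the figure that bounds overall speedup.
    worst = comm.reduce(elapsed, op=MPI.MAX, root=0)
    if rank == 0:
        print(f"ranks={comm.Get_size()} threads/rank={threads} time={worst:.3f}s")
    ```

    Running it twice, for example as OMP_NUM_THREADS=32 mpirun -n 4 python harness.py and again with 64 threads, gives the per-node timing comparison the records discuss (the script name is arbitrary).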

  4. Demonstration of reliability centered maintenance

    International Nuclear Information System (INIS)

    Schwan, C.A.; Morgan, T.A.

    1991-04-01

    Reliability centered maintenance (RCM) is an approach to preventive maintenance planning and evaluation that has been used successfully by other industries, most notably the airlines and military. Now EPRI is demonstrating RCM in the commercial nuclear power industry. Just completed are large-scale, two-year demonstrations at Rochester Gas & Electric (Ginna Nuclear Power Station) and Southern California Edison (San Onofre Nuclear Generating Station). Both demonstrations were begun in the spring of 1988. At each plant, RCM was performed on 12 to 21 major systems. Both demonstrations determined that RCM is an appropriate means to optimize a PM program and improve nuclear plant preventive maintenance on a large scale. Such favorable results had been suggested by three earlier EPRI pilot studies at Florida Power & Light, Duke Power, and Southern California Edison. EPRI selected the Ginna and San Onofre sites because, together, they represent a broad range of utility and plant size, plant organization, plant age, and histories of availability and reliability. Significant steps in each demonstration included: selecting and prioritizing plant systems for RCM evaluation; performing the RCM evaluation steps on selected systems; evaluating the RCM recommendations by a multi-disciplinary task force; implementing the RCM recommendations; establishing a system to track and verify the RCM benefits; and establishing procedures to update the RCM bases and recommendations with time (a living program). 7 refs., 1 tab

  5. Computational fluid dynamics: complex flows requiring supercomputers. January 1975-July 1988 (Citations from the INSPEC: Information Services for the Physics and Engineering Communities data base). Report for January 1975-July 1988

    International Nuclear Information System (INIS)

    1988-08-01

    This bibliography contains citations concerning computational fluid dynamics (CFD), a new method in computational science to perform complex flow simulations in three dimensions. Applications include aerodynamic design and analysis for aircraft, rockets, and missiles, and automobiles; heat-transfer studies; and combustion processes. Included are references to supercomputers, array processors, and parallel processors where needed for complete, integrated design. Also included are software packages and grid-generation techniques required to apply CFD numerical solutions. Numerical methods for fluid dynamics, not requiring supercomputers, are found in a separate published search. (Contains 83 citations fully indexed and including a title list.)

  6. San Marco C-2 (San Marco-4) Post Launch Report No. 1

    Science.gov (United States)

    1974-01-01

    The San Marco C-2 spacecraft, now designated San Marco-4, was successfully launched by a Scout vehicle from the San Marco Platform on 18 February 1974 at 6:05 a.m. EDT. The launch occurred 2 hours 50 minutes into the 3-hour window due to low cloud cover at the launch site. All spacecraft subsystems have been checked and are functioning normally. The protective caps for the two U.S. experiments were ejected and the Omegatron experiment activated on 19 February. The neutral mass spectrometer was activated as scheduled on 22 February after sufficient time to allow for spacecraft outgassing and to avoid the possibility of corona occurring. Both instruments are performing properly and worthwhile scientific data is being acquired.

  7. Development of 40m SANS and Its Utilization Techniques

    International Nuclear Information System (INIS)

    Choi, Sung Min; Kim, Tae Hwan

    2010-06-01

    Small angle neutron scattering (SANS) has been a very powerful tool to study nanoscale (1-100 nm) bulk structures in various materials such as polymers, self-assembled materials, nano-porous materials, nano-magnetic materials, metals and ceramics. Recognizing the importance of the SANS technique, an 8m SANS instrument was installed at the CN beam port of HANARO in 2001. However, without a cold neutron source, the beam intensity is fairly low and the Q-range is rather limited due to the short instrument length. On July 1, 2003, therefore, the HANARO cold neutron research facility project was launched and a state-of-the-art 40m SANS instrument was selected as the top-priority instrument. The development of the 40m SANS instrument was completed as a joint project between the Korea Advanced Institute of Science and Technology and HANARO in 2010. Here, we report the specifications of the state-of-the-art 40m SANS instrument at HANARO

  8. A Retail Center Facing Change: Using Data to Determine Marketing Strategy

    Science.gov (United States)

    Walker, Kristen L.; Curren, Mary T.; Kiesler, Tina

    2013-01-01

    Plaza del Valle is an open-air shopping center in the San Fernando Valley region of Los Angeles. The new marketing manager must review primary and secondary data to determine a target market, a product positioning strategy, and a promotion strategy for the retail shopping center with the ultimate goal of increasing revenue for the Plaza. She is…

  9. City of San Francisco, California street tree resource analysis

    Science.gov (United States)

    E.G. McPherson; J.R. Simpson; P.J. Peper; Q. Xiao

    2004-01-01

    Street trees in San Francisco are comprised of two distinct populations, those managed by the city’s Department of Public Works (DPW) and those managed by private property owners with or without the help of San Francisco’s urban forestry nonprofit, Friends of the Urban Forest (FUF). These two entities believe that the public’s investment in stewardship of San Francisco...

  10. Performance Analysis and Scaling Behavior of the Terrestrial Systems Modeling Platform TerrSysMP in Large-Scale Supercomputing Environments

    Science.gov (United States)

    Kollet, S. J.; Goergen, K.; Gasper, F.; Shresta, P.; Sulis, M.; Rihani, J.; Simmer, C.; Vereecken, H.

    2013-12-01

    In studies of the terrestrial hydrologic, energy and biogeochemical cycles, integrated multi-physics simulation platforms take a central role in characterizing non-linear interactions, variances and uncertainties of system states and fluxes in reciprocity with observations. Recently developed integrated simulation platforms attempt to honor the complexity of the terrestrial system across multiple time and space scales from the deeper subsurface including groundwater dynamics into the atmosphere. Technically, this requires the coupling of atmospheric, land surface, and subsurface-surface flow models in supercomputing environments, while ensuring a high-degree of efficiency in the utilization of e.g., standard Linux clusters and massively parallel resources. A systematic performance analysis including profiling and tracing in such an application is crucial in the understanding of the runtime behavior, to identify optimum model settings, and is an efficient way to distinguish potential parallel deficiencies. On sophisticated leadership-class supercomputers, such as the 28-rack 5.9 petaFLOP IBM Blue Gene/Q 'JUQUEEN' of the Jülich Supercomputing Centre (JSC), this is a challenging task, but even more so important, when complex coupled component models are to be analysed. Here we want to present our experience from coupling, application tuning (e.g. 5-times speedup through compiler optimizations), parallel scaling and performance monitoring of the parallel Terrestrial Systems Modeling Platform TerrSysMP. The modeling platform consists of the weather prediction system COSMO of the German Weather Service; the Community Land Model, CLM of NCAR; and the variably saturated surface-subsurface flow code ParFlow. The model system relies on the Multiple Program Multiple Data (MPMD) execution model where the external Ocean-Atmosphere-Sea-Ice-Soil coupler (OASIS3) links the component models. TerrSysMP has been instrumented with the performance analysis tool Scalasca and analyzed
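
    The scaling analysis mentioned above ultimately reduces to a couple of ratios. The sketch below computes relative speedup and parallel efficiency from wall-clock timings; the core counts and times are invented for illustration and are not TerrSysMP or JUQUEEN results.

    ```python
    # Relative speedup and parallel efficiency from wall-clock timings,
    # as used in strong-scaling analyses.  All numbers are made up.
    core_counts = [512, 1024, 2048, 4096]
    wall_times = [1800.0, 930.0, 495.0, 280.0]   # seconds per simulated day

    base_cores, base_time = core_counts[0], wall_times[0]
    for cores, t in zip(core_counts, wall_times):
        speedup = base_time / t
        ideal = cores / base_cores
        efficiency = speedup / ideal
        print(f"{cores:5d} cores: speedup {speedup:5.2f}x "
              f"(ideal {ideal:4.1f}x), efficiency {efficiency:5.1%}")
    ```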

  11. Chain conformations of ABA triblock coplymers in microphase-separated structures for SANS

    International Nuclear Information System (INIS)

    Matsushita, Y.; Nomura, M.; Watanabe, J.; Mogi, Y.; Noda, I.; Han, C.C.

    1993-01-01

    Single-chain conformations of the center block, polystyrene, of poly(2-vinylpyridine-b-styrene-b-2-vinylpyridine) (PSP) triblock copolymers of the ABA type in bulk were measured by small angle neutron scattering (SANS), while the microphase-separated structures were studied by small angle X-ray scattering (SAXS) and transmission electron microscopy (TEM). From the morphological observations, PSP block copolymers were confirmed to have an alternating lamellar structure both when φs = 0.33 and when φs = 0.5, where φs is the volume fraction of the polystyrene blocks. It was also clarified that the chain dimension of the center blocks of the sample with φs = 0.33 is smaller than that of the sample with φs = 0.5. This result may mean that the center blocks have a bridge-rich conformation when φs = 0.33, while they have a loop-rich conformation when φs = 0.5. (author)

  12. 78 FR 35593 - Special Local Regulation; Christmas Boat Parade, San Juan Harbor; San Juan, PR

    Science.gov (United States)

    2013-06-13

    ... individually or cumulatively have a significant effect on the human environment. This proposed rule involves.... Pearson, Captain, U.S. Coast Guard, Captain of the Port San Juan. [FR Doc. 2013-13994 Filed 6-12-13; 8:45...

  13. Dal "San Marco" al "Vega". (English Title: From "San Marco" to Vega)

    Science.gov (United States)

    Savi, E.

    2017-10-01

    Apart from the two superpowers, Italy has played an important role in astronautics among the other countries. The roots of Italian astronautics run deep into the hottest years of the Cold War, and the field had its first remarkable achievement in the San Marco project. After years of testing advanced technologies, Italy achieved European cooperation and built VEGA, the current Arianespace light launcher.

  14. 75 FR 39166 - Safety Zone; San Francisco Giants Baseball Game Promotion, San Francisco, CA

    Science.gov (United States)

    2010-07-08

    ... San Francisco, CA. The fireworks display is meant for entertainment purposes. This safety zone is... National Technology Transfer and Advancement Act (NTTAA) (15 U.S.C. 272 note) directs agencies to use...), of the Instruction. This rule involves establishing, disestablishing, or changing Regulated...

  15. SANS-II at SINQ: Installation of the former Risø-SANS facility

    DEFF Research Database (Denmark)

    Strunz, P.; Mortensen, K.; Janssen, S.

    2004-01-01

    SANS-II facility at SINQ (Paul Scherrer Institute)-the reinstalled former Riso small-angle neutron scattering instrument-is presented. Its operational characteristics are listed. Approaches for precise determination of wavelength, detector dead time and attenuation factors are described as well. (C...

  16. 78 FR 42027 - Safety Zone; San Diego Bayfair; Mission Bay, San Diego, CA

    Science.gov (United States)

    2013-07-15

    ... safety zones. Thunderboats Unlimited Inc. is sponsoring San Diego Bayfair, which is held on the navigable... distribution of power and responsibilities between the Federal Government and Indian tribes. 12. Energy Effects This proposed rule is not a ``significant energy action'' under Executive Order 13211, Actions...

  17. El San Juan y la Universidad Nacional

    Directory of Open Access Journals (Sweden)

    Victor Manuel Moncayo

    2000-04-01

    Full Text Available Finding a solution to the crisis of the Fundación San Juan de Dios is not a legal problem, nor a matter of the ordinary management of an institution. The crisis is of such magnitude that it exceeds the real capacities of the current organization and, in particular, of its Board of Directors, of those who exercise its legal representation, and of those who collaborate as employees or workers of the institution.

  18. L’alimentation des sans-abri

    OpenAIRE

    Amistani, Carole; Terrolle, Daniel

    2012-01-01

    The diet of the homeless can be analyzed, on the basis of fieldwork, along two lines that are sometimes used together: that of their autonomy and/or that of their dependence on food donations. In the latter case, both the contents and the forms too often reveal an inability to ensure the nutritional balance of these eaters and respect for the many socializing aspects involved in the act of eating. The choice of a social response framed as an "emergency" and by...

  19. Neuroimaging Features of San Luis Valley Syndrome

    Directory of Open Access Journals (Sweden)

    Matthew T. Whitehead

    2015-01-01

    Full Text Available A 14-month-old Hispanic female with a history of double-outlet right ventricle and developmental delay in the setting of recombinant chromosome 8 syndrome was referred for neurologic imaging. Brain MR revealed multiple abnormalities primarily affecting midline structures, including commissural dysgenesis, vermian and brainstem hypoplasia/dysplasia, an interhypothalamic adhesion, and an epidermoid between the frontal lobes that enlarged over time. Spine MR demonstrated hypoplastic C1 and C2 posterior elements, scoliosis, and a borderline low conus medullaris position. Presented herein is the first illustration of neuroimaging findings from a patient with San Luis Valley syndrome.

  20. for presence of hookworms (Uncinaria spp.) on San Miguel Island, California

    Directory of Open Access Journals (Sweden)

    Lyons E. T.

    2016-06-01

    Full Text Available Necropsy and extensive parasitological examination of dead northern elephant seal (NES pups was done on San Miguel Island, California, in February, 2015. The main interest in the current study was to determine if hookworms were present in NESs on San Miguel Island where two hookworm species of the genus Uncinaria are known to be present - Uncinaria lyonsi in California sea lions and Uncinaria lucasi in northern fur seals. Hookworms were not detected in any of the NESs examined: stomachs or intestines of 16 pups, blubber of 13 pups and blubber of one bull. The results obtained in the present study of NESs on San Miguel Island plus similar finding on Año Nuevo State Reserve and The Marine Mammal Center provide strong indication that NES are not appropriate hosts for Uncinaria spp. Hookworm free-living third stage larvae, developed from eggs of California sea lions and northern fur seals, were recovered from sand. It seems that at this time, further search for hookworms in NESs would be nonproductive.

  1. Do PEV Drivers Park Near Publicly Accessible EVSE in San Diego but Not Use Them?

    Energy Technology Data Exchange (ETDEWEB)

    Francfort, James Edward [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-06-01

    The PEV charging stations deployed as part of The EV Project included both residential and non-residential sites. Non-residential sites included EVSE installed in workplace environments, fleet applications and those that were publicly accessible near retail centers, parking lots, and similar locations. The EV Project utilized its Micro-Climate® planning process to determine potential sites for publicly accessible EVSE in San Diego. This process worked with local stakeholders to target EVSE deployment near areas where significant PEV traffic and parking was expected. This planning process is described in The Micro-Climate deployment Process in San Diego1. The EV Project issued its deployment plan for San Diego in November 2010, prior to the sale of PEVs by Nissan and Chevrolet. The Project deployed residential EVSE concurrent with vehicle delivery starting in December 2010. The installation of non-residential EVSE commenced in April 2011 consistent with the original Project schedule, closely following the adoption of PEVs. The residential participation portion of The EV Project was fully subscribed by January 2013 and the non-residential EVSE deployment was essentially completed by August 2013.

  2. Pandemic (H1N1) 2009 Surveillance in Marginalized Populations, Tijuana, Mexico, and West Nile Virus Knowledge among Hispanics, San Diego, California, 2006

    Centers for Disease Control (CDC) Podcasts

    This podcast describes public health surveillance and communication in hard to reach populations in Tijuana, Mexico, and San Diego County, California. Dr. Marian McDonald, Associate Director of CDC's Health Disparities in the National Center for Emerging and Zoonotic Infectious Diseases, discusses the importance of being flexible in determining the most effective media for health communications.

  3. Foreign Language Folio. A Guide to Cultural Resources and Field Trip Opportunities in the San Francisco Bay Area for Teachers and Students of Foreign Languages, 1983-85.

    Science.gov (United States)

    Gonzales, Tony, Ed.; O'Connor, Roger, Ed.

    A listing of San Francisco area cultural resources and opportunities of use to foreign language teachers is presented. Included are the following: museums and galleries, schools, art sources, churches, clubs, cultural centers and organizations, publications and publishing companies, restaurants, food stores and markets, travel and tourism,…

  4. Summaries of research and development activities by using supercomputer system of JAEA in FY2015. April 1, 2015 - March 31, 2016

    International Nuclear Information System (INIS)

    2017-01-01

    Japan Atomic Energy Agency (JAEA) conducts research and development (R and D) in various fields related to nuclear power as a comprehensive institution of nuclear energy R and Ds, and utilizes computational science and technology in many activities. As shown in the fact that about 20 percent of papers published by JAEA are concerned with R and D using computational science, the supercomputer system of JAEA has become an important infrastructure to support computational science and technology. In FY2015, the system was used for R and D aiming to restore Fukushima (nuclear plant decommissioning and environmental restoration) as a priority issue, as well as for JAEA's major projects such as Fast Reactor Cycle System, Fusion R and D and Quantum Beam Science. This report presents a great number of R and D results accomplished by using the system in FY2015, as well as user support, operational records and overviews of the system, and so on. (author)

  5. Summaries of research and development activities by using supercomputer system of JAEA in FY2014. April 1, 2014 - March 31, 2015

    International Nuclear Information System (INIS)

    2016-02-01

    Japan Atomic Energy Agency (JAEA) conducts research and development (R and D) in various fields related to nuclear power as a comprehensive institution of nuclear energy R and Ds, and utilizes computational science and technology in many activities. As shown in the fact that about 20 percent of papers published by JAEA are concerned with R and D using computational science, the supercomputer system of JAEA has become an important infrastructure to support computational science and technology. In FY2014, the system was used for R and D aiming to restore Fukushima (nuclear plant decommissioning and environmental restoration) as a priority issue, as well as for JAEA's major projects such as Fast Reactor Cycle System, Fusion R and D and Quantum Beam Science. This report presents a great number of R and D results accomplished by using the system in FY2014, as well as user support, operational records and overviews of the system, and so on. (author)

  6. Summaries of research and development activities by using supercomputer system of JAEA in FY2013. April 1, 2013 - March 31, 2014

    International Nuclear Information System (INIS)

    2015-02-01

    Japan Atomic Energy Agency (JAEA) conducts research and development (R and D) in various fields related to nuclear power as a comprehensive institution of nuclear energy R and Ds, and utilizes computational science and technology in many activities. As about 20 percent of the papers published by JAEA are concerned with R and D using computational science, the supercomputer system of JAEA has become an important infrastructure to support computational science and technology utilization. In FY2013, the system was used not only for JAEA's major projects such as Fast Reactor Cycle System, Fusion R and D and Quantum Beam Science, but also for R and D aiming to restore Fukushima (nuclear plant decommissioning and environmental restoration) as a priority issue. This report presents a great amount of R and D results accomplished by using the system in FY2013, as well as user support, operational records and overviews of the system, and so on. (author)

  7. The design and implementation of cost-effective algorithms for direct solution of banded linear systems on the vector processor system 32 supercomputer

    Science.gov (United States)

    Samba, A. S.

    1985-01-01

    The problem of solving banded linear systems by direct (non-iterative) techniques on the Vector Processor System (VPS) 32 supercomputer is considered. Two efficient direct methods for solving banded linear systems on the VPS 32 are described. The vector cyclic reduction (VCR) algorithm is discussed in detail. The performance of the VCR on a three-parameter model problem is also illustrated. The VCR is an adaptation of the conventional point cyclic reduction algorithm. The second direct method is the 'Customized Reduction of Augmented Triangles' (CRAT). CRAT has the dominant characteristics of an efficient VPS 32 algorithm. CRAT is tailored to the pipeline architecture of the VPS 32 and as a consequence the algorithm is implicitly vectorizable.
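
    For reference, the sketch below shows serial cyclic reduction for a tridiagonal system, the textbook algorithm that VCR adapts for vector hardware; it is not the VPS 32 implementation from the report, and the small test system at the end is invented.

    ```python
    import numpy as np

    def cyclic_reduction_tridiag(a, b, c, d):
        """Solve a tridiagonal system by (serial) cyclic reduction.

        a: sub-diagonal (a[0] unused), b: main diagonal,
        c: super-diagonal (c[-1] unused), d: right-hand side.
        Cleanest when n = 2**m - 1.
        """
        a, b, c, d = (np.array(v, dtype=float) for v in (a, b, c, d))
        n = len(b)
        a[0] = 0.0
        c[-1] = 0.0
        x = np.zeros(n)

        # Forward phase: fold each "middle" equation's neighbours into it,
        # halving the number of coupled unknowns at every level.
        stride = 1
        while stride < n:
            for i in range(2 * stride - 1, n, 2 * stride):
                lo, hi = i - stride, i + stride
                alpha = a[i] / b[lo]
                gamma = c[i] / b[hi] if hi < n else 0.0
                b[i] -= alpha * c[lo] + (gamma * a[hi] if hi < n else 0.0)
                d[i] -= alpha * d[lo] + (gamma * d[hi] if hi < n else 0.0)
                a[i] = -alpha * a[lo]
                c[i] = -gamma * c[hi] if hi < n else 0.0
            stride *= 2

        # Backward phase: solve the remaining equation, then substitute
        # outward level by level.
        stride //= 2
        while stride >= 1:
            for i in range(stride - 1, n, 2 * stride):
                lo, hi = i - stride, i + stride
                rhs = d[i]
                if lo >= 0:
                    rhs -= a[i] * x[lo]
                if hi < n:
                    rhs -= c[i] * x[hi]
                x[i] = rhs / b[i]
            stride //= 2
        return x

    # Small check against NumPy on a 7x7 system (n = 2**3 - 1).
    n = 7
    sub = np.full(n, -1.0)
    diag = np.full(n, 2.0)
    sup = np.full(n, -1.0)
    rhs = np.arange(1.0, n + 1.0)
    A = np.diag(diag) + np.diag(sub[1:], -1) + np.diag(sup[:-1], 1)
    print(np.allclose(cyclic_reduction_tridiag(sub, diag, sup, rhs),
                      np.linalg.solve(A, rhs)))
    ```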

  8. Summaries of research and development activities by using supercomputer system of JAEA in FY2012. April 1, 2012 - March 31, 2013

    International Nuclear Information System (INIS)

    2014-01-01

    Japan Atomic Energy Agency (JAEA) conducts research and development (R and D) in various fields related to nuclear power as a comprehensive institution of nuclear energy R and Ds, and utilizes computational science and technology in many activities. As more than 20 percent of papers published by JAEA are concerned with R and D using computational science, the supercomputer system of JAEA has become an important infrastructure to support computational science and technology utilization. In FY2012, the system was used not only for JAEA's major projects such as Fast Reactor Cycle System, Fusion R and D and Quantum Beam Science, but also for R and D aiming to restore Fukushima (nuclear plant decommissioning and environmental restoration) as a priority issue. This report presents a great amount of R and D results accomplished by using the system in FY2012, as well as user support, operational records and overviews of the system, and so on. (author)

  9. Summaries of research and development activities by using supercomputer system of JAEA in FY2011. April 1, 2011 - March 31, 2012

    International Nuclear Information System (INIS)

    2013-01-01

    Japan Atomic Energy Agency (JAEA) conducts research and development (R and D) in various fields related to nuclear power as a comprehensive institution of nuclear energy R and Ds, and utilizes computational science and technology in many activities. As more than 20 percent of papers published by JAEA are concerned with R and D using computational science, the supercomputer system of JAEA has become an important infrastructure to support computational science and technology utilization. In FY2011, the system was used for analyses of the accident at the Fukushima Daiichi Nuclear Power Station and establishment of radioactive decontamination plan, as well as the JAEA's major projects such as Fast Reactor Cycle System, Fusion R and D and Quantum Beam Science. This report presents a great amount of R and D results accomplished by using the system in FY2011, as well as user support structure, operational records and overviews of the system, and so on. (author)

  10. Use of QUADRICS supercomputer as embedded simulator in emergency management systems; Utilizzo del calcolatore QUADRICS come simulatore in linea in un sistema di gestione delle emergenze

    Energy Technology Data Exchange (ETDEWEB)

    Bove, R.; Di Costanzo, G.; Ziparo, A. [ENEA, Centro Ricerche Casaccia, Rome (Italy). Dip. Energia

    1996-07-01

    The experience gained in implementing MRBT, an atmospheric dispersion model for short-duration releases, on a QUADRICS-Q1 supercomputer is reported. A description of the MRBT model is given first: it is an analytical model for studying the atmospheric spreading of light gases from accidental releases. The solution of the diffusion equation is Gaussian-like and yields the concentration of the released pollutant as a function of space and time. The QUADRICS architecture is then introduced and the implementation of the model on it is described. Finally, the integration of the QUADRICS-based model as an on-line simulator in an emergency management system is considered.
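
    A Gaussian-type analytical solution of the kind described can be written down in a few lines. The sketch below is a generic instantaneous-puff formula with invented dispersion parameters and release data; it is not the actual MRBT formulation or its QUADRICS implementation.

    ```python
    import numpy as np

    def gaussian_puff(q, x, y, z, t, u, sx, sy, sz):
        """Concentration from an instantaneous point release of mass q,
        advected downwind at speed u, with dispersion parameters sx, sy, sz.
        A textbook Gaussian puff, not the MRBT formulation itself."""
        norm = q / ((2.0 * np.pi) ** 1.5 * sx * sy * sz)
        return (norm
                * np.exp(-((x - u * t) ** 2) / (2.0 * sx ** 2))
                * np.exp(-(y ** 2) / (2.0 * sy ** 2))
                * np.exp(-(z ** 2) / (2.0 * sz ** 2)))

    # Example: concentration 100 m downwind, on the puff axis, 60 s after release.
    print(gaussian_puff(q=1.0, x=100.0, y=0.0, z=0.0, t=60.0,
                        u=2.0, sx=15.0, sy=15.0, sz=8.0))
    ```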

  11. San Francisco Biofuel Program: Brown Grease to Biodiesel Demonstration

    Energy Technology Data Exchange (ETDEWEB)

    Jolis, Domènec [San Francisco Public Utilities Commission, San Francisco, CA (United States); Martis, Mary [San Francisco Public Utilities Commission, San Francisco, CA (United States); Jones, Bonnie [San Francisco Public Utilities Commission, San Francisco, CA (United States); Miot, Alex [San Francisco Public Utilities Commission, San Francisco, CA (United States); Ving, Karri [San Francisco Public Utilities Commission, San Francisco, CA (United States); Sierra, Natalie [San Francisco Public Utilities Commission, San Francisco, CA (United States); Niobi, Morayo [San Francisco Public Utilities Commission, San Francisco, CA (United States)

    2013-03-01

    Municipal wastewater treatment facilities have typically been limited to the role of accepting wastewater, treating it to required levels, and disposing of its treatment residuals. However, a new view is emerging which includes wastewater treatment facilities as regional resource recovery centers. This view is a direct result of increasingly stringent regulations, concerns over energy use, carbon footprint, and worldwide depletion of fossil fuel resources. Resources in wastewater include chemical and thermal energy, as well as nutrients, and water. A waste stream such as residual grease, which concentrates in the drainage from restaurants (referred to as Trap Waste), is a good example of a resource with an energy content that can be recovered for beneficial reuse. If left in wastewater, grease accumulates inside of the wastewater collection system and can lead to increased corrosion and pipe blockages that can cause wastewater overflows. Also, grease in wastewater that arrives at the treatment facility can impair the operation of preliminary treatment equipment and is only partly removed in the primary treatment process. In addition, residual grease increases the demand in treatment materials such as oxygen in the secondary treatment process. When disposed of in landfills, grease is likely to undergo anaerobic decay prior to landfill capping, resulting in the atmospheric release of methane, a greenhouse gas (GHG). This research project was therefore conceptualized and implemented by the San Francisco Public Utilities Commission (SFPUC) to test the feasibility of energy recovery from Trap Waste in the form of Biodiesel or Methane gas.

  12. EX1103: Exploration and Mapping, Galapagos Spreading Center: Mapping, CTD, Tow-Yo, and ROV on NOAA Ship Okeanos Explorer (EM302)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This cruise will be composed of two separate legs. The first leg will be a transit from San Diego, CA to the Galapagos Spreading Center, where multibeam mapping, CTD...

  13. Quick-Reaction Report on the Audit of Defense Base Realignment and Closure Budget Data for Naval Training Center Great Lakes, Illinois

    National Research Council Canada - National Science Library

    Granetto, Paul

    1994-01-01

    .... The Hull Technician School will share building 520 with the Advanced Hull Technician School, which is being realigned from the Naval Training Center San Diego, California, under project P-608T...

  14. EX1103: Exploration and Mapping, Galapagos Spreading Center: Mapping, CTD, Tow-Yo, and ROV on NOAA Ship Okeanos Explorer between 20110608 and 20110728

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This cruise will be composed of two separate legs. The first leg will be a transit from San Diego, CA to the Galapagos Spreading Center, where multibeam mapping, CTD...

  15. Synthetic seismicity for the San Andreas fault

    Directory of Open Access Journals (Sweden)

    S. N. Ward

    1994-06-01

    Full Text Available Because historical catalogs generally span only a few repetition intervals of major earthquakes, they do not provide much constraint on how regularly earthquakes recur. In order to obtain better recurrence statistics and long-term probability estimates for events M ≥ 6 on the San Andreas fault, we apply a seismicity model to this fault. The model is based on the concept of fault segmentation and the physics of static dislocations which allow for stress transfer between segments. Constraints are provided by geological and seismological observations of segment lengths, characteristic magnitudes and long-term slip rates. Segment parameters slightly modified from the Working Group on California Earthquake Probabilities allow us to reproduce observed seismicity over four orders of magnitude. The model yields quite irregular earthquake recurrence patterns. Only the largest events (M ≥ 7.5) are quasi-periodic; small events cluster. Both the average recurrence time and the aperiodicity are also a function of position along the fault. The model results are consistent with paleoseismic data for the San Andreas fault as well as a global set of historical and paleoseismic recurrence data. Thus irregular earthquake recurrence resulting from segment interaction is consistent with a large range of observations.
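
    The record describes the model only qualitatively. The toy sketch below is an illustrative assumption rather than a reproduction of Ward's model: the segment count, loading rates, strengths and the nearest-neighbour coupling rule are invented for illustration, whereas the actual model uses static dislocation theory and segment parameters adapted from the Working Group on California Earthquake Probabilities. Even this simple stress-accumulation-plus-transfer scheme produces irregular recurrence in which small events cluster and only the largest, multi-segment ruptures recur quasi-periodically.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical segment parameters (not the published values): loading rate,
# failure strength, and the fraction of a segment's stress drop passed on
# to its immediate neighbours when it ruptures.
n_seg = 10
load_rate = rng.uniform(0.8, 1.2, n_seg)   # stress units per year
strength = rng.uniform(8.0, 12.0, n_seg)   # failure threshold per segment
coupling = 0.3

stress = rng.uniform(0.0, 1.0, n_seg) * strength
events = []                                # (time, indices of ruptured segments)

t, dt, t_end = 0.0, 0.01, 2000.0
while t < t_end:
    t += dt
    stress += load_rate * dt               # steady tectonic loading
    failed = np.flatnonzero(stress >= strength)
    if failed.size:
        rupture = set(failed.tolist())
        frontier = list(failed)
        # Cascade: a failing segment drops its stress to zero and transfers
        # part of the drop to its neighbours, possibly triggering them too.
        while frontier:
            i = frontier.pop()
            drop = stress[i]
            stress[i] = 0.0
            for j in (i - 1, i + 1):
                if 0 <= j < n_seg:
                    stress[j] += coupling * drop
                    if stress[j] >= strength[j] and j not in rupture:
                        rupture.add(j)
                        frontier.append(j)
        events.append((round(t, 2), sorted(int(k) for k in rupture)))

# Crude size proxy: the number of segments that ruptured together.
sizes = [len(r) for _, r in events]
print(f"{len(events)} events; largest rupture spans {max(sizes)} segments")
```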

  16. LaRC Modeling of Ozone Formation in San Antonio, Texas

    Science.gov (United States)

    Guo, F.; Griffin, R. J.; Bui, A.; Schulze, B.; Wallace, H. W., IV; Flynn, J. H., III; Erickson, M.; Kotsakis, A.; Alvarez, S. L.; Usenko, S.; Sheesley, R. J.; Yoon, S.

    2017-12-01

    Ozone (O3) is one of the most important trace species within the troposphere and results from photochemistry involving emissions from a complex array of sources. Ground-level O3 is detrimental to ecosystems and causes a variety of human health problems including respiratory irritation, asthma and reduction in lung capacity. However, the O3 Design Value in San Antonio, Texas, was in violation of the federal threshold set by the EPA (70 ppb, 8-hr max) based on the average for the most recent three-year period (2014-2016). To understand the sources of high O3 concentrations in this nonattainment area, we assembled and deployed a mobile air quality laboratory and operated it in two locations in the southeast (Traveler's World RV Park) and northwest (University of Texas at San Antonio) of downtown San Antonio during summer 2017 to measure O3 and its precursors, including total nitrogen oxides (NOx) and volatile organic compounds (VOCs). Additional measurements included temperature, relative humidity, pressure, solar radiation, wind speed, wind direction, total reactive nitrogen (NOy), carbon monoxide (CO), and aerosol composition and concentration. We will use the campaign data and the NASA Langley Research Center (LaRC) Zero-Dimensional Box Model (Crawford et al., 1999; Olson et al., 2006) to calculate O3 production rate, NOx and hydroxyl radical chain length, and NOx versus VOCs sensitivity at different times of a day with different photochemical and meteorological conditions. A key to our understanding is to combine model results with measurements of precursor gases, particle chemistry and particle size to support the identification of O3 sources, its major formation pathways, and how the ozone production efficiency (OPE) depends on various factors. The resulting understanding of the causes of high O3 concentrations in the San Antonio area will provide insight into future air quality protection.
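
    The record lists the box-model diagnostics (ozone production rate, radical chain lengths, NOx-versus-VOC sensitivity) without giving formulas. As a hedged reference point, and not necessarily the exact bookkeeping used in the LaRC model for this campaign, the instantaneous gross ozone production rate diagnosed from constrained box-model radical concentrations is commonly written as the rate at which peroxy radicals convert NO to NO2:

    $$
    P(\mathrm{O_3}) \approx k_{\mathrm{HO_2+NO}}[\mathrm{HO_2}][\mathrm{NO}]
      + \sum_i k_{\mathrm{RO_2^{\,i}+NO}}[\mathrm{RO_2^{\,i}}][\mathrm{NO}],
    $$

    with the net rate obtained by subtracting photochemical loss terms (for example O($^1$D) + H$_2$O following O$_3$ photolysis, O$_3$ + HO$_2$, O$_3$ + OH, and NO$_2$ + OH). Comparing how $P(\mathrm{O_3})$ responds to perturbations in NOx and in VOC reactivity is a standard way such a model is used to classify a site as NOx- or VOC-limited.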

  17. Adaptive Management Methods to Protect the California Sacramento-San Joaquin Delta Water Resource

    Science.gov (United States)

    Bubenheim, David

    2016-01-01

    The California Sacramento-San Joaquin River Delta is the hub for California's water supply, conveying water from Northern to Southern California agriculture and communities while supporting important ecosystem services, agriculture, and communities in the Delta. Changes in climate, long-term drought, water quality changes, and expansion of invasive aquatic plants threaten ecosystems, impede ecosystem restoration, and are economically, environmentally, and sociologically detrimental to the San Francisco Bay/California Delta complex. NASA Ames Research Center and the USDA-ARS partnered with the State of California and local governments to develop science-based, adaptive-management strategies for the Sacramento-San Joaquin Delta. The project combines science, operations, and economics related to integrated management scenarios for aquatic weeds to help land and waterway managers make science-informed decisions regarding management and outcomes. The team provides a comprehensive understanding of agricultural and urban land use in the Delta and the major watersheds (San Joaquin/Sacramento) supplying the Delta, and of their interaction with drought and climate impacts on the environment, water quality, and weed growth. The team recommends conservation and modified land-use practices and aids local Delta stakeholders in developing management strategies. New remote sensing tools have been developed to enhance the ability to assess conditions, inform decision support tools, and monitor management practices. Science gaps in understanding how native and invasive plants respond to altered environmental conditions are being filled and provide critical biological response parameters for Delta-SWAT simulation modeling. Operational agencies such as the California Department of Boating and Waterways provide testing and act as initial adopters of the decision support tools. Methods developed by the project can become routine land and water management tools in complex river delta systems.

  18. The San Diego Panasonic Partnership: A Case Study in Restructuring.

    Science.gov (United States)

    Holzman, Michael; Tewel, Kenneth J.

    1992-01-01

    The Panasonic Foundation provides resources for restructuring school districts. The article examines its partnership with the San Diego City School District, highlighting four schools that demonstrate promising practices and guiding principles. It describes recent partnership work on systemic issues, noting the next steps to be taken in San Diego.…

  19. Characterization of alumina using small angle neutron scattering (SANS)

    International Nuclear Information System (INIS)

    Megat Harun Al Rashidn Megat Ahmad; Abdul Aziz Mohamed; Azmi Ibrahim; Che Seman Mahmood; Edy Giri Rachman Putra; Muhammad Rawi Muhammad Zin; Razali Kassim; Rafhayudi Jamro

    2007-01-01

    Alumina powder was synthesized from an aluminium precursor and studied using the small-angle neutron scattering (SANS) technique, complemented with transmission electron microscopy (TEM). XRD measurement confirmed that the alumina produced was high-purity, highly crystalline α-phase. The SANS examination indicates the formation of mass-fractal microstructures with a fractal dimension of about 2.8 in the alumina powder. (Author)

  20. Voice and Valency in San Luis Potosi Huasteco

    Science.gov (United States)

    Munoz Ledo Yanez, Veronica

    2014-01-01

    This thesis presents an analysis of the system of transitivity, voice and valency alternations in Huasteco of San Luis Potosi (Mayan) within a functional-typological framework. The study is based on spoken discourse and elicited data collected in the municipalities of Aquismon and Tancanhuitz de Santos in the state of San Luis Potosi, Mexico. The…

  1. 33 CFR 80.1130 - San Luis Obispo Bay, CA.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false San Luis Obispo Bay, CA. 80.1130 Section 80.1130 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY INTERNATIONAL NAVIGATION RULES COLREGS DEMARCATION LINES Pacific Coast § 80.1130 San Luis Obispo Bay, CA. A line drawn from...

  2. The GLOBE-Consortium: The Erasmus Computing Grid – Building a Super-Computer at Erasmus MC for FREE

    NARCIS (Netherlands)

    T.A. Knoch (Tobias)

    2005-01-01

    To meet the enormous computational needs of life-science research as well as clinical diagnostics and treatment, the Hogeschool Rotterdam and the Erasmus Medical Center are currently setting up one of the largest desktop computing grids in the world – the Erasmus Computing Grid.

  3. SAN MICHELE. ENTRE CIELO Y MAR / San Michele, between sky and sea

    Directory of Open Access Journals (Sweden)

    Pablo Blázquez Jesús

    2012-11-01

    Full Text Available The cemetery is one of the most profound and metaphorical kinds of architecture. The competition for the extension of the San Michele Cemetery, called in 1998 by the Venice municipal administration, is an excellent testing ground on which to analyse the historical context surrounding this type of architecture, and its relationship with the city and the region. The study of this particular case allows us to uncover characters, casual relationships and findings that unfold throughout the text. The history of the San Michele cemetery is also the chronicle of the transformation of the city of Venice and its Lagoon. Interpreting this competition as a research tool, the aim of the paper is to understand the contemporary reality of funerary architecture through the island of San Michele, Venice, and the finalist proposals of Carlos Ferrater, Enric Miralles and David Chipperfield – a history beneath which lie clues that help us reflect on the contemporary cemetery, the city and the territory.

  4. 76 FR 70480 - Otay River Estuary Restoration Project, South San Diego Bay Unit of the San Diego Bay National...

    Science.gov (United States)

    2011-11-14

    ... River Estuary Restoration Project, South San Diego Bay Unit of the San Diego Bay National Wildlife...), intend to prepare an environmental impact statement (EIS) for the proposed Otay River Estuary Restoration... any one of the following methods. Email: [email protected] . Please include ``Otay Estuary NOI'' in the...

  5. Identifying Telemedicine Services to Improve Access to Specialty Care for the Underserved in the San Francisco Safety Net

    Directory of Open Access Journals (Sweden)

    Ken Russell Coelho

    2011-01-01

    Full Text Available Safety-net settings across the country have grappled with providing adequate access to specialty care services. San Francisco General Hospital and Trauma Center, serving as the city's primary safety-net hospital, has also had to struggle with the same issue. With Healthy San Francisco, the City and County of San Francisco's Universal Healthcare mandate, the increased demand for specialty care services has placed a further strain on the system. With the recent passage of California Proposition 1D, infrastructural funds are now set aside to assist in connecting major hospitals with primary care clinics in remote areas all over the state of California, using telemedicine. Based on a selected sample of key informant interviews with local staff physicians, this study provides further insight into the current process of e-referral which uses electronic communication for making referrals to specialty care. It also identifies key services for telemedicine in primary and specialty care settings within the San Francisco public health system. This study concludes with proposals for a framework that seek to increase collaboration between the referring primary care physician and specialist, to prioritize institution of these key services for telemedicine.

  6. Beneficial Reuse of San Ardo Produced Water

    Energy Technology Data Exchange (ETDEWEB)

    Robert A. Liske

    2006-07-31

    This DOE funded study was performed to evaluate the potential for treatment and beneficial reuse of produced water from the San Ardo oilfield in Monterey County, CA. The potential benefits of a successful full-scale implementation of this project include improvements in oil production efficiency and additional recoverable oil reserves as well as the addition of a new reclaimed water resource. The overall project was conducted in two Phases. Phase I identified and evaluated potential end uses for the treated produced water, established treated water quality objectives, reviewed regulations related to treatment, transport, storage and use of the treated produced water, and investigated various water treatment technology options. Phase II involved the construction and operation of a small-scale water treatment pilot facility to evaluate the process's performance on produced water from the San Ardo oilfield. Cost estimates for a potential full-scale facility were also developed. Potential end uses identified for the treated water include (1) agricultural use near the oilfield, (2) use by Monterey County Water Resources Agency (MCWRA) for the Salinas Valley Water Project or Castroville Seawater Intrusion Project, (3) industrial or power plant use in King City, and (4) use for wetlands creation in the Salinas Basin. All of these uses were found to have major obstacles that prevent full-scale implementation. An additional option for potential reuse of the treated produced water was subsequently identified. That option involves using the treated produced water to recharge groundwater in the vicinity of the oil field. The recharge option may avoid the limitations that the other reuse options face. The water treatment pilot process utilized: (1) warm precipitation softening to remove hardness and silica, (2) evaporative cooling to meet downstream temperature limitations and facilitate removal of ammonia, and (3) reverse osmosis (RO) for removal of dissolved salts, boron

  7. Microbial biogeography of San Francisco Bay sediments

    Science.gov (United States)

    Lee, J. A.; Francis, C. A.

    2014-12-01

    The largest estuary on the west coast of North America, San Francisco Bay is an ecosystem of enormous biodiversity, and also enormous human impact. The benthos has experienced dredging, occupation by invasive species, and over a century of sediment input as a result of hydraulic mining. Although the Bay's great cultural and ecological importance has inspired numerous surveys of the benthic macrofauna, to date there has been almost no investigation of the microbial communities on the Bay floor. An understanding of those microbial communities would contribute significantly to our understanding of both the biogeochemical processes (which are driven by the microbiota) and the physical processes (which contribute to microbial distributions) in the Bay. Here, we present the first broad survey of bacterial and archaeal taxa in the sediments of the San Francisco Bay. We conducted 16S rRNA community sequencing of bacteria and archaea in sediment samples taken bimonthly for one year, from five sites spanning the salinity gradient between Suisun and Central Bay, in order to capture the effect of both spatial and temporal environmental variation on microbial diversity. From the same samples we also conducted deep sequencing of a nitrogen-cycling functional gene, nirS, allowing an assessment of evolutionary diversity at a much finer taxonomic scale within an important and widespread functional group of bacteria. We paired these sequencing projects with extensive geochemical metadata as well as information about macrofaunal distribution. Our data reveal a diversity of distinct biogeographical patterns among different taxa: clades ubiquitous across sites; clades that respond to measurable environmental drivers; and clades that show geographical site-specificity. These community datasets allow us to test the hypothesis that salinity is a major driver of both overall microbial community structure and community structure of the denitrifying bacteria specifically; and to assess

  8. San Andreas-sized Strike-slip Fault on Europa

    Science.gov (United States)

    1998-01-01

    subsequent tidal stress causes it to move lengthwise in one direction. Then tidal forces close the fault again, preventing the area from moving back to its original position. Daily tidal cycles produce a steady accumulation of lengthwise offset motions. Here on Earth, unlike Europa, large strike-slip faults like the San Andreas are set in motion by plate tectonic forces. North is to the top of the picture and the sun illuminates the surface from the top. The image, centered at 66 degrees south latitude and 195 degrees west longitude, covers an area approximately 300 by 203 kilometers (185 by 125 miles). The pictures were taken on September 26, 1998 by Galileo's solid-state imaging system. This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov. Background information and educational context for the images can be found at URL http://www.jpl.nasa.gov/galileo/sepo

  9. Aerial radiological survey of the San Onofre Nuclear Generating Station and surrounding area, San Clemente, California

    International Nuclear Information System (INIS)

    Hilton, L.K.

    1980-12-01

    An airborne radiological survey of an 11 km² area surrounding the San Onofre Nuclear Generating Station was made 9 to 17 January 1980. Count rates observed at 60 m altitude were converted to exposure rates at 1 m above the ground and are presented in the form of an isopleth map. Detected radioisotopes and their associated gamma ray exposure rates were consistent with those expected from normal background emitters, except directly over the plant.

  10. San Onofre - the evolution of outage management

    International Nuclear Information System (INIS)

    Slagle, K.A.

    1993-01-01

    With the addition of units 2 and 3 to San Onofre nuclear station in 1983 and 1984, it became evident that a separate group was needed to manage outages. Despite early establishment of a division to handle outages, it was a difficult journey to make the changes needed to achieve short outages. Early organizational emphasis was on developing an error-free operating environment and work culture, which is difficult for a relatively large organization at a three-unit site. The work processes and decision styles were designed to be very deliberate, with many checks and balances. Organizational leadership and accountability were focused in the traditional operations, maintenance, and engineering divisions. Later, our organizational emphasis shifted to achieving engineering excellence. With a sound foundation of operating and engineering excellence, our organizational focus has turned to achieving quality outages. This means accomplishing the right work in a shorter duration and having the units run until the next refueling.

  11. Paleomagnetism of San Cristobal Island, Galapagos

    Science.gov (United States)

    Cox, A.

    1971-01-01

    Isla San Cristobal, the most easterly of the Galapagos Islands, consists of two parts: a large volcano constitutes the southwest half of the island and an irregular apron of small cones and flows makes up the northeast half. As some of the younger flows on the flanks of the large volcano are reversely magnetized, the minimum age of the volcano is 0.7 my, which is the age of the Brunhes-Matuyama reversal boundary. The true age is probably several times greater. The cones and flows to the northeast are all normally magnetized. The between-site angular dispersion of virtual poles is 11.3° - a value consistent with mathematical models for the latitude dependence of geomagnetic secular variation. © 1971.

  12. San Pedro River Aquifer Binational Report

    Science.gov (United States)

    Callegary, James B.; Minjárez Sosa, Ismael; Tapia Villaseñor, Elia María; dos Santos, Placido; Monreal Saavedra, Rogelio; Grijalva Noriega, Franciso Javier; Huth, A. K.; Gray, Floyd; Scott, C. A.; Megdal, Sharon; Oroz Ramos, L. A.; Rangel Medina, Miguel; Leenhouts, James M.

    2016-01-01

    The United States and Mexico share waters in a number of hydrological basins and aquifers that cross the international boundary. Both countries recognize that, in a region of scarce water resources and expanding populations, a greater scientific understanding of these aquifer systems would be beneficial. In light of this, the Mexican and U.S. Principal Engineers of the International Boundary and Water Commission (IBWC) signed the “Joint Report of the Principal Engineers Regarding the Joint Cooperative Process United States-Mexico for the Transboundary Aquifer Assessment Program" on August 19, 2009 (IBWC-CILA, 2009). This IBWC “Joint Report” serves as the framework for U.S.-Mexico coordination and dialogue to implement transboundary aquifer studies. The document clarifies several details about the program such as background, roles, responsibilities, funding, relevance of the international water treaties, and the use of information collected or compiled as part of the program. In the document, it was agreed by the parties involved, which included the IBWC, the Mexican National Water Commission (CONAGUA), the U.S. Geological Survey (USGS), and the Universities of Arizona and Sonora, to study two priority binational aquifers, one in the San Pedro River basin and the other in the Santa Cruz River basin. This report focuses on the Binational San Pedro Basin (BSPB). Reasons for the focus on and interest in this aquifer include the fact that it is shared by the two countries, that the San Pedro River has an elevated ecological value because of the riparian ecosystem that it sustains, and that water resources are needed to sustain the river, existing communities, and continued development. This study describes the aquifer’s characteristics in its binational context; however, most of the scientific work has been undertaken for many years by each country without full knowledge of the conditions on the other side of the border. The general objective of this study is to

  13. San Andreas Fault in the Carrizo Plain

    Science.gov (United States)

    2000-01-01

    The 1,200-kilometer (800-mile) San Andreas is the longest fault in California and one of the longest in North America. This perspective view of a portion of the fault was generated using data from the Shuttle Radar Topography Mission (SRTM), which flew on NASA's Space Shuttle last February, and an enhanced, true-color Landsat satellite image. The view shown looks southeast along the San Andreas where it cuts along the base of the mountains in the Temblor Range near Bakersfield. The fault is the distinctively linear feature to the right of the mountains. To the left of the range is a portion of the agriculturally rich San Joaquin Valley. In the background is the snow-capped peak of Mt. Pinos at an elevation of 2,692 meters (8,831 feet). The complex topography in the area is some of the most spectacular along the course of the fault. To the right of the fault is the famous Carrizo Plain. Dry conditions on the plain have helped preserve the surface trace of the fault, which is scrutinized by both amateur and professional geologists. In 1857, one of the largest earthquakes ever recorded in the United States occurred just north of the Carrizo Plain. With an estimated magnitude of 8.0, the quake severely shook buildings in Los Angeles, caused significant surface rupture along a 350-kilometer (220-mile) segment of the fault, and was felt as far away as Las Vegas, Nev. This portion of the San Andreas is an important area of study for seismologists. For visualization purposes, topographic heights displayed in this image are exaggerated two times. The elevation data used in this image was acquired by SRTM aboard the Space Shuttle Endeavour, launched on February 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on Endeavour in 1994. SRTM was designed to collect three-dimensional measurements of Earth's land surface. To collect the 3-D SRTM data, engineers added a mast 60

  14. San Carlos Apache Tribe - Energy Organizational Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Rapp, James; Albert, Steve

    2012-04-01

    The San Carlos Apache Tribe (SCAT) was awarded $164,000 in late 2011 by the U.S. Department of Energy (U.S. DOE) Tribal Energy Program's "First Steps Toward Developing Renewable Energy and Energy Efficiency on Tribal Lands" Grant Program. This grant funded: (1) the analysis and selection of preferred form(s) of tribal energy organization (this Energy Organization Analysis, hereinafter referred to as "EOA"); (2) start-up staffing and other costs associated with the Phase 1 SCAT energy organization; (3) an intern program; (4) staff training; and (5) tribal outreach and workshops regarding the new organization and SCAT energy programs and projects, including two annual tribal energy summits (2011 and 2012). This report documents the analysis and selection of preferred form(s) of a tribal energy organization.

  15. Paleohydrogeology of the San Joaquin basin, California

    Science.gov (United States)

    Wilson, A.M.; Garven, G.; Boles, J.R.

    1999-01-01

    Mass transport can have a significant effect on chemical diagenetic processes in sedimentary basins. This paper presents results from the first part of a study that was designed to explore the role of an evolving hydrodynamic system in driving mass transport and chemical diagenesis, using the San Joaquin basin of California as a field area. We use coupled hydrogeologic models to establish the paleohydrogeology, thermal history, and behavior of nonreactive solutes in the basin. These models rely on extensive geological information and account for variable-density fluid flow, heat transport, solute transport, tectonic uplift, sediment compaction, and clay dehydration. In our numerical simulations, tectonic uplift and ocean regression led to large-scale changes in fluid flow and composition by strengthening topography-driven fluid flow and allowing deep influx of fresh ground water in the San Joaquin basin. Sediment compaction due to rapid deposition created moderate overpressures, leading to upward flow from depth. The unusual distribution of salinity in the basin reflects influx of fresh ground water to depths of as much as 2 km and dilution of saline fluids by dehydration reactions at depths greater than ???2.5 km. Simulations projecting the future salinity of the basin show marine salinities persisting for more than 10 m.y. after ocean regression. Results also show a change from topography-to compaction-driven flow in the Stevens Sandstone at ca. 5 Ma that coincides with an observed change in the diagenetic sequence. Results of this investigation provide a framework for future hydrologic research exploring the link between fluid flow and diagenesis.

  16. Puente Coronado - San Diego (EE. UU.

    Directory of Open Access Journals (Sweden)

    Editorial, Equipo

    1971-12-01

    Full Text Available This 3.5 km long bridge joining the cities of San Diego and Coronado is one of the longest of its type in the world, and one of the three most important orthotropic-deck bridges in the United States. Its supporting structure consists of reinforced concrete piers resting on piles or on footings, according to whether they stand in the sea or on dry land. The superstructure is made partly of metal plates and partly of box girders. The deck surfacing consists of asphalt-epoxy concrete, 5 cm deep. Special paint was applied to the bridge, comprising layers of vinyl, iron oxide and blue vinyl over a zinc base coat.

  17. Backwater Flooding in San Marcos, TX from the Blanco River

    Science.gov (United States)

    Earl, Richard; Gaenzle, Kyle G.; Hollier, Andi B.

    2016-01-01

    Large sections of San Marcos, TX, were flooded in Oct. 1998, May 2015, and Oct. 2015. Much of the flooding in Oct. 1998 and Oct. 2015 was produced by overbank flooding of the San Marcos River and its tributaries by spills from upstream dams. The May 2015 flooding was almost entirely produced by backwater flooding from the Blanco River, whose confluence is approximately 2.2 miles southeast of downtown. We use the stage height of the Blanco River to generate maps of the areas of San Marcos that are lower than the flood peaks and compare those results with data for the observed extent of flooding in San Marcos. Our preliminary results suggest that the flooding occurred at locations more than 20 feet lower than the maximum stage height of the Blanco River at San Marcos gage (08171350). This suggests that the datum for gage 08171350 or gage 08170500 (San Marcos River at San Marcos), or both, is incorrect. There are plans for the U.S. Army Corps of Engineers to construct a Blanco River bypass that will divert Blanco River floodwaters approximately 2 miles farther downstream, but the $60 million price makes its implementation problematic.

  18. 77 FR 123 - Proposed CERCLA Administrative Cost Recovery Settlement; North Hollywood Operable Unit of the San...

    Science.gov (United States)

    2012-01-03

    ...In accordance with Section 122(i) of the Comprehensive Environmental Response, Compensation, and Liability Act, as amended (``CERCLA''), 42 U.S.C. 9622(i), notice is hereby given of a proposed administrative settlement for recovery of response costs concerning the North Hollywood Operable Unit of the San Fernando Valley Area 1 Superfund Site, located in the vicinity of Los Angeles, California, with the following settling party: Waste Management Recycling & Disposal Services of California, Inc., dba Bradley Landfill & Recycling Center. The settlement requires the settling party to pay a total of $185,734 to the North Hollywood Operable Unit Special Account within the Hazardous Substance Superfund. The settlement also includes a covenant not to sue the settling party pursuant to Section 107(a) of CERCLA, 42 U.S.C. 9607(a). For thirty (30) days following the date of publication of this notice, the Agency will receive written comments relating to the settlement. The Agency will consider all comments received and may modify or withdraw its consent to the settlement if comments received disclose facts or considerations which indicate that the settlement is inappropriate, improper, or inadequate. The Agency's response to any comments received will be available for public inspection at the City of Los Angeles Central Library, Science and Technology Department, 630 West 5th Street, Los Angeles CA 90071 and at the EPA Region 9 Superfund Records Center, Mail Stop SFD-7C, 95 Hawthorne Street, Room 403, San Francisco, CA 94105.

  19. The San values of conflict prevention and avoidance in Platfontein

    Directory of Open Access Journals (Sweden)

    Nina Mollema

    2017-09-01

    Full Text Available The aim of this article is to identify measures that can prevent violent conflict through the maintenance of traditional cultural values that guide conflict avoidance. Moreover, the article focuses on the concepts of conflict prevention and conflict avoidance as applied by the San community of Platfontein. The causes of the inter-communal tensions between the San community members are also examined. A selected conflict situation, that of superstition and witchcraft, is assessed as a factor increasing interpersonal conflict in the Platfontein community. This investigation is made to determine whether the San preventive measures have an impact in the community, so as to prevent ongoing conflicts from escalating further.

  20. The Effect of Bangpungtongsung-san Extracts on Adipocyte Metabolism

    Directory of Open Access Journals (Sweden)

    Sang Min, Lee

    2008-03-01

    Full Text Available Objective: The purpose of this study is to investigate the effects of Bangpungtongsung-san extracts, prepared by two extraction methods (alcohol and water), on preadipocyte proliferation in the 3T3-L1 cell line, on lipolysis of adipocytes from rat epididymis, and on localized fat accumulation in porcine tissue. Methods: Reducing 3T3-L1 proliferation and lipogenesis plays a primary role in reducing obesity. Cell cultures were therefore performed on 3T3-L1 preadipocytes and adipocytes, Sprague-Dawley rats were used for the lipogenesis studies, and cells were treated with 0.01-1 ㎎/㎖ of Bangpungtongsung-san extracts at varying concentrations. Porcine skin including fat tissue was treated with Bangpungtongsung-san extracts in dose-dependent amounts, and the histologic changes after injection of these extracts were investigated. Results: The following results were obtained from the 3T3-L1 preadipocyte proliferation assays, the lipolysis of adipocytes in rats, and the histologic investigation of fat tissue. 1. Bangpungtongsung-san extracts decreased preadipocyte proliferation at the high dosage (1.0 ㎎/㎖). 2. Bangpungtongsung-san extracts decreased the activity of glycerol-3-phosphate dehydrogenase (GPDH) at the high dosage (1.0 ㎎/㎖); in particular, the effect of the alcohol extract became clearer over time at high concentration. 3. Comparing the lipolytic effects of the extracts, the alcohol extract at the high dosage (1.0 ㎎/㎖) showed a greater effect than the water extract. 4. On histological examination of porcine fat tissue treated with Bangpungtongsung-san extracts, the water extract showed a lipolytic effect at the high dosage (10.0 ㎎/㎖), while the alcohol extract showed significant lysis of cell membranes at all concentrations. Conclusion: These results suggest that Bangpungtongsung-san extracts efficiently

  1. San Juanico Hybrid System Technical and Institutional Assessment: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Corbus, D.; Newcomb, C.; Yewdall, Z.

    2004-07-01

    San Juanico is a fishing village of approximately 120 homes in the Municipality of Comondu, Baja California. In April, 1999, a hybrid power system was installed in San Juanico to provide 24-hour power, which was not previously available. Before the installation of the hybrid power system, a field study was conducted to characterize the electrical usage and institutional and social framework of San Juanico. One year after the installation of the hybrid power system a 'post-electrification' study was performed to document the changes that had occurred after the installation. In December of 2003, NREL visited the site to conduct a technical assessment of the system.

  2. Academic Medical Centers as digital health catalysts.

    Science.gov (United States)

    DePasse, Jacqueline W; Chen, Connie E; Sawyer, Aenor; Jethwani, Kamal; Sim, Ida

    2014-09-01

    Emerging digital technologies offer enormous potential to improve quality, reduce cost, and increase patient-centeredness in healthcare. Academic Medical Centers (AMCs) play a key role in advancing medical care through cutting-edge medical research, yet traditional models for invention, validation and commercialization at AMCs have been designed around biomedical initiatives, and are less well suited for new digital health technologies. Recently, two large bi-coastal Academic Medical Centers, the University of California, San Francisco (UCSF) through the Center for Digital Health Innovation (CDHI) and Partners Healthcare through the Center for Connected Health (CCH) have launched centers focused on digital health innovation. These centers show great promise but are also subject to significant financial, organizational, and visionary challenges. We explore these AMC initiatives, which share the following characteristics: a focus on academic research methodology; integration of digital technology in educational programming; evolving models to support "clinician innovators"; strategic academic-industry collaboration and emergence of novel revenue models. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. 78 FR 21399 - Notice of Inventory Completion: Center for Archaeological Research at the University of Texas at...

    Science.gov (United States)

    2013-04-10

    ...-PPWOCRADN0] Notice of Inventory Completion: Center for Archaeological Research at the University of Texas at San Antonio, TX AGENCY: National Park Service, Interior. ACTION: Notice. SUMMARY: The Center for... consultation with the appropriate Indian tribe, and has determined that there is a cultural affiliation between...

  4. Shifting shoals and shattered rocks : How man has transformed the floor of west-central San Francisco Bay

    Science.gov (United States)

    Chin, John L.; Wong, Florence L.; Carlson, Paul R.

    2004-01-01

    San Francisco Bay, one of the world's finest natural harbors and a major center for maritime trade, is referred to as the 'Gateway to the Pacific Rim.' The bay is an urbanized estuary that is considered by many to be the major estuary in the United States most modified by man's activities. The population around the estuary has grown rapidly since the 1850's and now exceeds 7 million people. The San Francisco Bay area's economy ranks as one of the largest in the world, larger even than that of many countries. More than 10 million tourists are estimated to visit the bay region each year. The bay area's population and associated development have increasingly changed the estuary and its environment. San Francisco Bay and the contiguous Sacramento-San Joaquin Delta encompass roughly 1,600 square miles (4,100 km2) and are the outlet of a major watershed that drains more than 40 percent of the land area of the State of California. This watershed provides drinking water for 20 million people (two thirds of the State's population) and irrigates 4.5 million acres of farmland and ranchland. During the past several decades, much has been done to clean up the environment and waters of San Francisco Bay. Conservationist groups have even bought many areas on the margins of the bay with the intention of restoring them to a condition more like the natural marshes they once were. However, many of the major manmade changes to the bay's environment occurred so long ago that the nature of them has been forgotten. In addition, many changes continue to occur today, such as the introduction of exotic species and the loss of commercial and sport fisheries because of declining fish populations. The economy and population of the nine counties that surround the bay continue to grow and put increasing pressure on the bay, both direct and indirect. Therefore, there are mixed signals for the future health and welfare of San Francisco Bay. The San Francisco Bay estuary consists of three

  5. 77 FR 60897 - Safety Zone: America's Cup World Series Finish-Line, San Francisco, CA

    Science.gov (United States)

    2012-10-05

    ... navigable waters of the San Francisco Bay in vicinity of San Francisco West Yacht Harbor Light 2... vicinity of San Francisco West Yacht Harbor Light 2. Unauthorized persons or vessels are prohibited from... San Francisco West Yacht Harbor Light 2. This safety zone establishes a temporary restricted area on...

  6. 75 FR 65985 - Safety Zone: Epic Roasthouse Private Party Firework Display, San Francisco, CA

    Science.gov (United States)

    2010-10-27

    ... the navigable waters of San Francisco Bay 1,000 yards off Epic Roasthouse Restaurant, San Francisco.... Wright, Program Manager, Docket Operations, telephone 202-366-9826. SUPPLEMENTARY INFORMATION: Regulatory... waters of San Francisco Bay, 1,000 yards off Epic Roasthouse Restaurant, San Francisco, CA. The fireworks...

  7. Timber resource statistics for the San Joaquin and southern resource areas of California.

    Science.gov (United States)

    Karen L. Waddell; Patricia M. Bassett

    1997-01-01

    This report is a summary of timber resource statistics for the San Joaquin and Southern Resource Areas of California, which include Alpine, Amador, Calaveras, Fresno, Imperial, Inyo, Kern, Kings, Los Angeles, Madera, Mariposa, Merced, Mono, Orange, Riverside, San Bernardino, San Diego, San Joaquin, Stanislaus, Tulare, and Tuolumne Counties. Data were collected as part...

  8. PROCEEDINGS OF RIKEN BNL RESEARCH CENTER WORKSHOP: HIGH PERFORMANCE COMPUTING WITH QCDOC AND BLUEGENE.

    Energy Technology Data Exchange (ETDEWEB)

    CHRIST,N.; DAVENPORT,J.; DENG,Y.; GARA,A.; GLIMM,J.; MAWHINNEY,R.; MCFADDEN,E.; PESKIN,A.; PULLEYBLANK,W.

    2003-03-11

    Staff of Brookhaven National Laboratory, Columbia University, IBM and the RIKEN BNL Research Center organized a one-day workshop held on February 28, 2003 at Brookhaven to promote the following goals: (1) To explore areas other than QCD applications where the QCDOC and BlueGene/L machines can be applied to good advantage, (2) To identify areas where collaboration among the sponsoring institutions can be fruitful, and (3) To expose scientists to the emerging software architecture. This workshop grew out of an informal visit last fall by BNL staff to the IBM Thomas J. Watson Research Center that resulted in a continuing dialog among participants on issues common to these two related supercomputers. The workshop was divided into three sessions, addressing the hardware and software status of each system, prospective applications, and future directions.

  9. Perspective view, Landsat overlay San Andreas Fault, Palmdale, California

    Science.gov (United States)

    2000-01-01

    The prominent linear feature straight down the center of this perspective view is the San Andreas Fault. This segment of the fault lies near the city of Palmdale, California (the flat area in the right half of the image) about 60 kilometers (37 miles) north of Los Angeles. The fault is the active tectonic boundary between the North American plate on the right, and the Pacific plate on the left. Relative to each other, the Pacific plate is moving away from the viewer and the North American plate is moving toward the viewer along what geologists call a right lateral strike-slip fault. Two large mountain ranges are visible, the San Gabriel Mountains on the left and the Tehachapi Mountains in the upper right. The Lake Palmdale Reservoir, approximately 1.5 kilometers (0.9 miles) across, sits in the topographic depression created by past movement along the fault. Highway 14 is the prominent linear feature starting at the lower left edge of the image and continuing along the far side of the reservoir. The patterns of residential and agricultural development around Palmdale are seen in the Landsat imagery in the right half of the image. SRTM topographic data will be used by geologists studying fault dynamics and landforms resulting from active tectonics. This type of display adds the important dimension of elevation to the study of land use and environmental processes as observed in satellite images. The perspective view was created by draping a Landsat satellite image over an SRTM elevation model. Topography is exaggerated 1.5 times vertically. The Landsat image was provided by the United States Geological Survey's Earth Resources Observations Systems (EROS) Data Center, Sioux Falls, South Dakota. Elevation data used in this image was acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on February 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR

  10. ASTER Images San Francisco Bay Area

    Science.gov (United States)

    2000-01-01

    These images of the San Francisco Bay region were acquired on March 3, 2000 by the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) on NASA's Terra satellite. Each covers an area 60 kilometers (37 miles) wide and 75 kilometers (47 miles) long. With its 14 spectral bands from the visible to the thermal infrared wavelength region, and its high spatial resolution of 15 to 90 meters (about 50 to 300 feet), ASTER will image the Earth for the next 6 years to map and monitor the changing surface of our planet.Upper Left: The color infrared composite uses bands in the visible and reflected infrared. Vegetation is red, urban areas are gray; sediment in the bays shows up as lighter shades of blue. Thanks to the 15 meter (50-foot) spatial resolution, shadows of the towers along the Bay Bridge can be seen.Upper right: A composite of bands in the short wave infrared displays differences in soils and rocks in the mountainous areas. Even though these regions appear entirely vegetated in the visible, enough surface shows through openings in the vegetation to allow the ground to be imaged.Lower left: This composite of multispectral thermal bands shows differences in urban materials in varying colors. Separation of materials is due to differences in thermal emission properties, analogous to colors in the visible.Lower right: This is a color coded temperature image of water temperature, derived from the thermal bands. Warm waters are in white and yellow, colder waters are blue. Suisun Bay in the upper right is fed directly from the cold Sacramento River. As the water flows through San Pablo and San Francisco Bays on the way to the Pacific, the waters warm up.Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) is one of five Earth-observing instruments launched December 18, 1999, on NASA's Terra satellite. The instrument was built by Japan's Ministry of International Trade and Industry. A joint U.S./Japan science team is responsible for

  11. SUSTAINABLE DEVELOPMENT MULTIDISCIPLINARY COMMITTEE OF SAN MIGUEL ALMAYA

    Directory of Open Access Journals (Sweden)

    Carolina Mejía-Madero

    2013-04-01

    Full Text Available This article analyzes the role of the “Sustainable Development Multidisciplinary Committee of San Miguel Almaya,” created in this community of Otomi background in the State of Mexico to continue the tourism activities supported in 2006 by the Federal and State Secretariats of Tourism, with the aim of benefiting from the community's tourism potential, centered on a lagoon and an extinct volcano. The Committee was created in 2010 in order to construct an Eco Tourist Park; although two stages of the park were built, it was never completed, and to give it continuity the local authorities decided to negotiate for further resources. The purpose of the present study is to analyze, through the lens of public policy networks, the role its members played in decision making, in order to determine whether they created the conditions needed to promote tourism and the sustainability of the community. The study was based on the methodology of Cruz (2008) and Zabaleta (2006), which identifies the objectives, interests, resources, capabilities, limitations and attributions, among other elements, that influence the establishment of links; in this case, among the actors of the network formed within the committee. The information was obtained from empirical and documentary research that included meetings with the local authorities. One of the most important findings is that the decisions taken within the committee and the exclusion of some of the actors hindered sustainable development, owing to a lack of negotiation between its members and differences in their objectives and interests, resulting in a lack of commitment and cooperation to solve the normative, economic, ecological and cultural problems of the community that could put the tourism potential of the zone at risk.

  12. San Francisco-Pacifica Coast Landslide Susceptibility 2011

    Data.gov (United States)

    California Natural Resource Agency — The San Francisco-Pacifica Coast grid map was extracted from the California Geological Survey Map Sheet 58 that covers the entire state of California and originally...

  13. San Diego, California 1/3 arc-second DEM

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 1/3-second San Diego, California Elevation Grid provides bathymetric data in ASCII raster format of 1/3-second resolution in geographic coordinates. This grid is...

  14. San Diego Littoral Cell CRSMP Receiver Sites 2009

    Data.gov (United States)

    California Natural Resource Agency — A total of 27 possible placement sites (some with multiple placement footprints) are incorporated into this San Diego Coastal Regional Sediment Management Plan to...

  15. Geological literature on the San Joaquin Valley of California

    Science.gov (United States)

    Maher, J.C.; Trollman, W.M.; Denman, J.M.

    1973-01-01

    The following list of references includes most of the geological literature on the San Joaquin Valley and vicinity in central California (see figure 1) published prior to January 1, 1973. The San Joaquin Valley comprises all or parts of 11 counties -- Alameda, Calaveras, Contra Costa, Fresno, Kern, Kings, Madera, Merced, San Joaquin, Stanislaus, and Tulare (figure 2). As a matter of convenient geographical classification the boundaries of the report area have been drawn along county lines, and to include San Benito and Santa Clara Counties on the west and Mariposa and Tuolumne Counties on the east. Therefore, this list of geological literature includes some publications on the Diablo and Temblor Ranges on the west, the Tehachapi Mountains and Mojave Desert on the south, and the Sierra Nevada Foothills and Mountains on the east.

  16. San Francisco Bay Multi-beam Bathymetry: Area A

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — These multi-beam bathymetric data were collected over shallow subtidal areas in the San Francisco Bay estuary system. Bathymetric and acoustic backscatter data were...

  17. San Francisco Bay Interferometric Side Scan Imagery: Area A

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Backscatter imagery data were collected over shallow subtidal areas in the San Francisco Bay estuary system. Bathymetric and acoustic backscatter data were collected...

  18. National Status and Trends: Bioeffects Program - San Francisco Bay Database

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This study was based on the sediment quality triad (SQT) approach. A stratified probabilistic sampling design was utilized to characterize the San Francisco Bay...

  19. San Diego Littoral Cell CRSMP Potential Offshore Borrow Areas 2009

    Data.gov (United States)

    California Natural Resource Agency — Offshore sediment sources along the entire reach of the San Diego Coastal RSM Plan region were previously identified by SANDAG and used for Regional Beach Sand...

  20. San Diego Littoral Cell CRSMP Receiver Sites 2009

    Data.gov (United States)

    California Department of Resources — A total of 27 possible placement sites (some with multiple placement footprints) are incorporated into this San Diego Coastal Regional Sediment Management Plan to...

  1. Vegetation Mapping - Tecolote Canyon, San Diego Co. [ds656

    Data.gov (United States)

    California Natural Resource Agency — Vegetation mapping has been conducted at various City of San Diego Park and Recreation Open Space lands in support of natural resource management objectives and the...

  2. San Juan, Puerto Rico Tsunami Forecast Grids for MOST Model

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The San Juan, Puerto Rico Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model....

  3. Port San Luis, California Tsunami Forecast Grids for MOST Model

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Port San Luis, California Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST)...

  4. BNSF San Bernardino case study : positive train control risk assessment.

    Science.gov (United States)

    2014-09-01

    The Federal Railroad Administration funded the BNSF San Bernardino Case Study to verify its Generalized Train Movement Simulator (GTMS) risk assessment capabilities on a planned implementation of the I-ETMS PTC system. The analysis explicitly sim...

  5. San Francisco Bay, California 1 arc-second DEM

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 1-second San Francisco Bay, California Elevation Grid provides bathymetric data in ASCII raster format of 1-second resolution in geographic coordinates. This...

  6. Baseline Surveys - Tecolote Canyon, San Diego Co. [ds655]

    Data.gov (United States)

    California Natural Resource Agency — Various resource projects have been conducted in the City of San Diego's Open Space Parks as part of the implementation of the City's Multiple Species Conservation...

  7. San Francisco, California Tsunami Forecast Grids for MOST Model

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The San Francisco, California Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST)...

  8. Rare Plants - City of San Diego [ds455]

    Data.gov (United States)

    California Natural Resource Agency — The Biological Monitoring Plan (BMP; Ogden 1996) for the Multiple Species Conservation Program (MSCP) was developed in 1996 and is a component of the City of San...

  9. San Juan County 2010 Census Roads

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — The TIGER/Line Files are shapefiles and related database files (.dbf) that are an extract of selected geographic and cartographic information from the U.S. Census...
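
    This and the neighboring county records describe TIGER/Line extracts, i.e., shapefiles with companion .dbf attribute tables. As a rough illustration of how such an extract can be inspected, the sketch below loads a roads layer with GeoPandas; the file name and the attribute field referenced are assumptions based on typical TIGER/Line products, not details taken from these records.

      # Minimal sketch, assuming a typical TIGER/Line roads shapefile; the file name
      # and the FULLNAME attribute are illustrative assumptions.
      import geopandas as gpd

      roads = gpd.read_file("tl_2010_35045_roads.shp")   # hypothetical San Juan County, NM extract
      print(roads.crs)                                   # coordinate reference system read from the .prj
      print(len(roads), "road segments")
      print(roads.columns.tolist())                      # attribute fields carried over from the .dbf
      if "FULLNAME" in roads.columns:                    # guard: field names vary by product year
          named = roads[roads["FULLNAME"].notna()]
          print(named[["FULLNAME"]].head())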

  10. San Juan County Current Area Landmark

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — The TIGER/Line Files are shapefiles and related database files (.dbf) that are an extract of selected geographic and cartographic information from the U.S. Census...

  11. San Miguel County 2010 Census Block Groups

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — The TIGER/Line Files are shapefiles and related database files (.dbf) that are an extract of selected geographic and cartographic information from the U.S. Census...

  12. San Juan County 2010 Census Block Groups

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — The TIGER/Line Files are shapefiles and related database files (.dbf) that are an extract of selected geographic and cartographic information from the U.S. Census...

  13. San Juan County 2010 Census Tracts

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — The TIGER/Line Files are shapefiles and related database files (.dbf) that are an extract of selected geographic and cartographic information from the U.S. Census...

  14. San Miguel County 2010 Census Roads

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — The TIGER/Line Files are shapefiles and related database files (.dbf) that are an extract of selected geographic and cartographic information from the U.S. Census...

  15. San Juan County 2010 Census Edges

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — The TIGER/Line Files are shapefiles and related database files (.dbf) that are an extract of selected geographic and cartographic information from the U.S. Census...

  16. San Miguel County 2010 Census Edges

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — The TIGER/Line Files are shapefiles and related database files (.dbf) that are an extract of selected geographic and cartographic information from the U.S. Census...

  17. San Miguel County 2010 Census Tracts

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — The TIGER/Line Files are shapefiles and related database files (.dbf) that are an extract of selected geographic and cartographic information from the U.S. Census...

  18. San Juan County 2010 Census Blocks

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — The TIGER/Line Files are shapefiles and related database files (.dbf) that are an extract of selected geographic and cartographic information from the U.S. Census...

  19. San Miguel County 2010 Census Blocks

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — The TIGER/Line Files are shapefiles and related database files (.dbf) that are an extract of selected geographic and cartographic information from the U.S. Census...

  20. San Miguel County Current Area Landmark

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — The TIGER/Line Files are shapefiles and related database files (.dbf) that are an extract of selected geographic and cartographic information from the U.S. Census...

  1. San Juan County Current Point Landmarks

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — The TIGER/Line Files are shapefiles and related database files (.dbf) that are an extract of selected geographic and cartographic information from the U.S. Census...

  2. San Miguel County Current Point Landmarks

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — The TIGER/Line Files are shapefiles and related database files (.dbf) that are an extract of selected geographic and cartographic information from the U.S. Census...

  3. San Miguel County 2000 Census Blocks

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — TIGER, TIGER/Line, and Census TIGER are registered trademarks of the Bureau of the Census. The Redistricting Census 2000 TIGER/Line files are an extract of selected...

  4. Effects of Choto-san and Chotoko on thiopental-induced sleeping time

    OpenAIRE

    JEENAPONGSA, Rattima; Tohda, Michihisa; Watanabe, Hiroshi

    2003-01-01

    Choto-san has been used for the treatment of centrally regulated disorders such as dementia, hypertension, headache and vertigo. Our laboratory showed that Choto-san improved learning and memory in ischemic mice. Notably, Choto-san-treated animals and animals that underwent occlusion of the common carotid arteries (2VO) slept longer than normal animals. Therefore, this study aimed to clarify the effects of Choto-san and its related component, Chotoko, and Choto-san wi...

  5. Dos edificios administrativos, en San Francisco

    Directory of Open Access Journals (Sweden)

    Skidmore, Owings & Merrill, Arquitectos

    1964-07-01

    Full Text Available The Crown Zellerbach has been built on a large triangular plaza on the outskirts of San Francisco and is one of the most recent tall buildings in the city. The Wells Fargo Bank is also situated on this plaza and is of special interest, both for its shape and for its functional organisation. It has a ground floor, where most of the mercantile activities take place, and a basement containing a council room; the strong rooms, with 2,500 private boxes as well as the bank's own safe; washrooms; mechanical equipment rooms; a rest room; a bar for the employees; and independent stairs. The building has a circular planform, 21.5 m in diameter and 354 m2 in area. The structure is metallic, with the vertical supports along the periphery, spaced every 1.626 m. The enclosing curtain walls are glass and anodized aluminium. The roof has radially distributed metal beams, interconnected by prefabricated concrete units and covered with copper sheeting. This bank, shaped like a hunting lodge and finished with delicate care, contrasts sharply with the powerful volume of the Crown Zellerbach and of other nearby buildings, and adds distinction to the plaza. On a large triangular plaza on the outskirts of San Francisco stand the Crown Zellerbach, one of the city's most recent skyscrapers, and a handsome free-standing pavilion, the Wells Fargo Bank; the rest of the plaza is open to the public. The originality of the latter, in form and organisation, has led us to give it the greater attention: it consists of a ground floor, in which practically all the mercantile activities take place, and a lower floor housing a council room; the safe-deposit department, with 2,500 boxes, and the bank's own vaults; the washrooms; mechanical equipment; and a rest room and bar for the employees, with an independent access stair. It has a circular plan, 21.5 m in diameter and 354 m2 in area. The structure is metallic, with...

  6. San Gregorio mining: general presentation of the enterprise

    International Nuclear Information System (INIS)

    1997-01-01

    This work presents a project of the San Gregorio Mine. This company is responsible for the extraction and beneficiation of the gold ore deposits at San Gregorio and its East extension in Minas de Corrales. For this project an environmental impact study was carried out, as well as an agreement with LATU for the laboratory analyses and for the surface- and groundwater monitoring within the environmental program established by the company.

  7. Cuisine Preference of Local Tourists in San Juan, Batangas, Philippines

    OpenAIRE

    Kalalo, Ryene Selline B.; Cablao, Angelica Lyntte A.; Cabatay, Maricriss P.; Mantal, Charissa P.; Manalo, Rhonalyn T.; Felicen, Sevilla S.

    2014-01-01

    This study aimed to determine the cuisine preference of local tourists in San Juan, Batangas. More specifically, it aimed to describe the demographic profile of the local tourists; to identify the cuisine preferred at different restaurants; to determine whether there is a significant difference when respondents are grouped according to demographic profile; and to determine the cuisine preference of local tourists in San Juan, Batangas. The research design used the descriptive method because it is the most appropria...

  8. Corps sans organes et anamnèse

    DEFF Research Database (Denmark)

    Wilson, Alexander

    2011-01-01

    I trace certain links between Deleuze and Guattari's body without organs and the principles of the general organology described by Bernard Stiegler.

  9. Poziv: Duhovnost i san. San o oružju Franje Asiškoga

    OpenAIRE

    Balajić, Siniša

    2009-01-01

    The dream of arms of Francis of Assisi appears to be an important element in the study not so much of the vocation of Francis of Assisi as of vocation in general. Since the analysis and interpretation of dreams are the concern of anthropology, psychology, philosophy, theology and so on, the framework for understanding this dream draws on various scientific (anthropology, psychology) and theoretical (philosophy, theology-spirituality) principles. We are aware that studying someone's life, and especially someone's dreams, is not at all ...

  10. San Francisco Bay Water Quality Improvement Fund Project Locations, San Francisco CA, 2017, US EPA Region 9

    Data.gov (United States)

    U.S. Environmental Protection Agency — The San Francisco Bay Water Quality Improvement Fund is a competitive grant program that is helping implement TMDLs to improve water quality, protect wetlands, and...

  11. San Francisco Bay Water Quality Improvement Fund Map Service, San Francisco CA, 2012, US EPA Region 9

    Data.gov (United States)

    U.S. Environmental Protection Agency — The San Francisco Bay Water Quality Improvement Fund is a competitive grant program that is helping implement TMDLs to improve water quality, protect wetlands, and...

  12. San Francisco Bay Area Baseline Trash Loading Summary Results, San Francisco Bay Area CA, 2012, US EPA Region 9

    Data.gov (United States)

    U.S. Environmental Protection Agency — The San Francisco Bay Area stormwater permit sets trash control guidelines for discharges through the storm drain system. The permit covers Alameda, Contra Costa,...

  13. 2010 Northern San Francisco Bay Area Lidar: Portions of Alameda, Contra Costa, Marin, Napa, San Francisco, Solano, and Sonoma Counties

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This Light Detection and Ranging (LiDAR) dataset is a survey of northern San Francisco Bay, California. The project area consists of approximately 437 square miles...

  14. Mercury in San Francisco Bay forage fish

    Energy Technology Data Exchange (ETDEWEB)

    Greenfield, Ben K., E-mail: ben@sfei.or [San Francisco Estuary Institute, 7770 Pardee Lane, Oakland, CA 94621 (United States); Jahn, Andrew, E-mail: andyjahn@mac.co [1000 Riverside Drive, Ukiah, CA 95482 (United States)

    2010-08-15

    In the San Francisco Estuary, management actions including tidal marsh restoration could change fish mercury (Hg) concentrations. From 2005 to 2007, small forage fish were collected and analyzed to identify spatial and interannual variation in biotic methylmercury (MeHg) exposure. The average whole body total Hg concentration was 0.052 μg g⁻¹ (wet-weight) for 457 composite samples representing 13 fish species. MeHg constituted 94% of total Hg. At a given length, Hg concentrations were higher in nearshore mudflat and wetland species (Clevelandia ios, Menidia audens, and Ilypnus gilberti), compared to species that move offshore (e.g., Atherinops affinis and Lepidogobius lepidus). Gut content analysis indicated similar diets between Atherinops affinis and Menidia audens, when sampled at the same locations. Hg concentrations were higher in sites closest to the Guadalupe River, which drains a watershed impacted by historic Hg mining. Results demonstrate that despite differences among years and fish species, nearshore forage fish exhibit consistent Hg spatial gradients. - Total mercury in estuarine forage fish varies with species, habitat, and proximity to a historic mercury mine.

  15. Trans Women Doing Sex in San Francisco.

    Science.gov (United States)

    Williams, Colin J; Weinberg, Martin S; Rosenberger, Joshua G

    2016-10-01

    This research investigates the sexuality of trans women (individuals who were assigned male status at birth who currently identify as women), by focusing on the "bodily techniques" (Crossley, 2006) they use in "doing" sexuality. The "doing sexuality" framework not only is modeled after the "doing gender" approach of West and Zimmerman (1987), but also utilizes the idea of "sexual embodiment" to emphasize the agency of trans women as they conceptualize and organize their sexuality in a socially recognized way. This is often difficult as they confront discrimination from medical and legal professionals as well as intimate partners who may find it difficult to adapt to the trans woman's atypical body and conception of gender. However, with a study group of 25 trans women from San Francisco, we found the study participants to be adept at overcoming such hurdles and developing techniques to "do" their sexuality. At the same time, we found trans women's agency constrained by the erotic habitus (Green, 2008) of the wider society. The interplay between innovation and cultural tradition provides an opportunity to fashion a more general model of "doing" sexuality.

  16. Mercury in San Francisco Bay forage fish

    International Nuclear Information System (INIS)

    Greenfield, Ben K.; Jahn, Andrew

    2010-01-01

    In the San Francisco Estuary, management actions including tidal marsh restoration could change fish mercury (Hg) concentrations. From 2005 to 2007, small forage fish were collected and analyzed to identify spatial and interannual variation in biotic methylmercury (MeHg) exposure. The average whole body total Hg concentration was 0.052 μg g⁻¹ (wet-weight) for 457 composite samples representing 13 fish species. MeHg constituted 94% of total Hg. At a given length, Hg concentrations were higher in nearshore mudflat and wetland species (Clevelandia ios, Menidia audens, and Ilypnus gilberti), compared to species that move offshore (e.g., Atherinops affinis and Lepidogobius lepidus). Gut content analysis indicated similar diets between Atherinops affinis and Menidia audens, when sampled at the same locations. Hg concentrations were higher in sites closest to the Guadalupe River, which drains a watershed impacted by historic Hg mining. Results demonstrate that despite differences among years and fish species, nearshore forage fish exhibit consistent Hg spatial gradients. - Total mercury in estuarine forage fish varies with species, habitat, and proximity to a historic mercury mine.

  17. 77 FR 34984 - Notice of Intent To Repatriate a Cultural Item: San Diego Museum of Man, San Diego, CA

    Science.gov (United States)

    2012-06-12

    ...The San Diego Museum of Man, in consultation with the appropriate Indian tribes, has determined that a cultural item meets the definition of unassociated funerary object and repatriation to the Indian tribes stated below may occur if no additional claimants come forward. Representatives of any Indian tribe that believes itself to be culturally affiliated with the cultural item may contact the San Diego Museum of Man.

  18. 33 CFR 165.1182 - Safety/Security Zone: San Francisco Bay, San Pablo Bay, Carquinez Strait, and Suisun Bay, CA.

    Science.gov (United States)

    2010-07-01

    33 CFR 165.1182 (2010-07-01 edition), Navigation and Navigable Waters, Coast Guard, Department of Homeland Security (Continued), Ports and Waterways Safety, Eleventh Coast Guard District: Safety/Security Zone: San Francisco Bay, San Pablo Bay, Carquinez Strait, and Suisun Bay, CA.

  19. Pandemic (H1N1) 2009 Surveillance in Marginalized Populations, Tijuana, Mexico, and West Nile Virus Knowledge among Hispanics, San Diego, California, 2006

    Centers for Disease Control (CDC) Podcasts

    2010-08-10

    This podcast describes public health surveillance and communication in hard-to-reach populations in Tijuana, Mexico, and San Diego County, California. Dr. Marian McDonald, Associate Director for Health Disparities in CDC's National Center for Emerging and Zoonotic Infectious Diseases, discusses the importance of being flexible in determining the most effective media for health communications. Created: 8/10/2010 by National Center for Emerging and Zoonotic Infectious Diseases, National Center for Immunization and Respiratory Diseases. Date Released: 8/10/2010.

  20. Converting positive and negative symptom scores between PANSS and SAPS/SANS.

    Science.gov (United States)

    van Erp, Theo G M; Preda, Adrian; Nguyen, Dana; Faziola, Lawrence; Turner, Jessica; Bustillo, Juan; Belger, Aysenil; Lim, Kelvin O; McEwen, Sarah; Voyvodic, James; Mathalon, Daniel H; Ford, Judith; Potkin, Steven G; Fbirn

    2014-01-01

    The Scale for the Assessment of Positive Symptoms (SAPS), the Scale for the Assessment of Negative Symptoms (SANS), and the Positive and Negative Syndrome Scale for Schizophrenia (PANSS) are the most widely used schizophrenia symptom rating scales, but despite their co-existence for 25 years no easily usable between-scale conversion mechanism exists. The aim of this study was to provide equations for between-scale symptom rating conversions. Two-hundred-and-five schizophrenia patients [mean age±SD=39.5±11.6, 156 males] were assessed with the SANS, SAPS, and PANSS. Pearson's correlations between symptom scores from each of the scales were computed. Linear regression analyses, on data from 176 randomly selected patients, were performed to derive equations for converting ratings between the scales. Intraclass correlations, on data from the remaining 29 patients not included in the regression analyses, were performed to determine rating conversion accuracy. Between-scale positive and negative symptom ratings were highly correlated. Intraclass correlations between the original positive and negative symptom ratings and those obtained via conversion of alternative ratings using the conversion equations were moderate to high (ICCs=0.65 to 0.91). Regression-based equations may be useful for converting schizophrenia symptom severity ratings between the SANS/SAPS and the PANSS, though additional validation is warranted. This study's conversion equations, implemented at http://converteasy.org, may aid in the comparison of medication efficacy studies, in meta- and mega-analyses examining symptoms as moderator variables, and in retrospective combination of symptom data in multi-center data sharing projects that need to pool symptom rating data when such data are obtained using different scales. Copyright © 2013 Elsevier B.V. All rights reserved.
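
    As a rough illustration of the regression-based conversion described above (not the published equations themselves), the sketch below fits a linear mapping from a SAPS summary score to a PANSS positive score on 176 of 205 cases and checks agreement on the 29 held-out cases. The data are synthetic, and Pearson correlation stands in for the intraclass correlations reported in the abstract.

      # Minimal sketch of a regression-based scale conversion with a hold-out check.
      # Synthetic data; the coefficients and variable names are NOT the published equations.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      n = 205
      saps = rng.uniform(0, 20, n)                        # synthetic SAPS positive summary scores
      panss_pos = 7 + 1.1 * saps + rng.normal(0, 2, n)    # synthetic PANSS positive totals

      train, test = np.arange(176), np.arange(176, n)     # mirrors the 176/29 split in the abstract
      slope, intercept = np.polyfit(saps[train], panss_pos[train], 1)

      converted = slope * saps[test] + intercept          # SAPS -> estimated PANSS positive
      r, _ = stats.pearsonr(converted, panss_pos[test])
      print(f"PANSS_pos ~= {slope:.2f} * SAPS + {intercept:.2f}; hold-out r = {r:.2f}")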