WorldWideScience

Sample records for supercomputer center san

  1. Dust modelling and forecasting in the Barcelona Supercomputing Center: Activities and developments

    Energy Technology Data Exchange (ETDEWEB)

    Perez, C; Baldasano, J M; Jimenez-Guerrero, P; Jorba, O; Haustein, K; Basart, S [Earth Sciences Department. Barcelona Supercomputing Center. Barcelona (Spain); Cuevas, E [Izaña Atmospheric Research Center. Agencia Estatal de Meteorología, Tenerife (Spain); Nickovic, S [Atmospheric Research and Environment Branch, World Meteorological Organization, Geneva (Switzerland)], E-mail: carlos.perez@bsc.es

    2009-03-01

    The Barcelona Supercomputing Center (BSC) is the National Supercomputer Facility in Spain, hosting MareNostrum, one of the most powerful supercomputers in Europe. The Earth Sciences Department of BSC operates daily regional dust and air quality forecasts and conducts intensive modelling research for short-term operational prediction. This contribution summarizes the latest developments and current activities in the field of sand and dust storm modelling and forecasting.

  2. Dust modelling and forecasting in the Barcelona Supercomputing Center: Activities and developments

    International Nuclear Information System (INIS)

    Perez, C; Baldasano, J M; Jimenez-Guerrero, P; Jorba, O; Haustein, K; Basart, S; Cuevas, E; Nickovic, S

    2009-01-01

    The Barcelona Supercomputing Center (BSC) is the National Supercomputer Facility in Spain, hosting MareNostrum, one of the most powerful supercomputers in Europe. The Earth Sciences Department of BSC operates daily regional dust and air quality forecasts and conducts intensive modelling research for short-term operational prediction. This contribution summarizes the latest developments and current activities in the field of sand and dust storm modelling and forecasting.

  3. Computational Dimensionalities of Global Supercomputing

    Directory of Open Access Journals (Sweden)

    Richard S. Segall

    2013-12-01

    Acxiom Laboratory of Applied Research (ALAR), University of Central Arkansas (UCA), Conway, AR, April 9, 2010. [78] Segall, Richard S., Zhang, Qingyu and Pierce, Ryan M. (2009), "Visualization by Supercomputing Data Mining", Proceedings of the 4th INFORMS Workshop on Data Mining and System Informatics, San Diego, CA, October 10, 2009. [79] Segall, Richard S., Zhang, Qingyu and Pierce, Ryan (2010), "Data Mining Supercomputing with SAS™ JMP® Genomics", Proceedings of the 14th World Multi-Conference on Systemics, Cybernetics and Informatics: WMSCI 2010, Orlando, FL, June 29-July 2, 2010. [80] Segall, Richard S., Zhang, Qingyu and Pierce, Ryan (2010), "Data Mining Supercomputing with SAS™ JMP® Genomics", Journal of Systemics, Cybernetics and Informatics (JSCI), Vol. 9, No. 1, 2011, pp. 28-33. [81] Segall, R. S., Zhang, Q. and Pierce, R. M. (2009), "Visualization by Supercomputing Data Mining", Proceedings of the 4th INFORMS Workshop on Data Mining and System Informatics, San Diego, CA, October 10, 2009.

  4. KAUST Supercomputing Laboratory

    KAUST Repository

    Bailey, April Renee

    2011-11-15

    KAUST has partnered with IBM to establish a Supercomputing Research Center. KAUST is hosting the Shaheen supercomputer, named after the Arabian falcon famed for its swiftness of flight. This 16-rack IBM Blue Gene/P system is equipped with 4 gigabytes of memory per node and is capable of 222 teraflops, making the KAUST campus the site of one of the world’s fastest supercomputers in an academic environment. KAUST is targeting petaflop capability within 3 years.

  5. KAUST Supercomputing Laboratory

    KAUST Repository

    Bailey, April Renee; Kaushik, Dinesh; Winfer, Andrew

    2011-01-01

    KAUST has partnered with IBM to establish a Supercomputing Research Center. KAUST is hosting the Shaheen supercomputer, named after the Arabian falcon famed for its swiftness of flight. This 16-rack IBM Blue Gene/P system is equipped with 4 gigabytes of memory per node and is capable of 222 teraflops, making the KAUST campus the site of one of the world’s fastest supercomputers in an academic environment. KAUST is targeting petaflop capability within 3 years.
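
    As a sanity check on the figures quoted above: assuming the standard IBM Blue Gene/P rack configuration (1,024 quad-core 850 MHz PowerPC 450 nodes per rack, 4 flops per cycle per core; these per-rack details are an assumption, not stated in the record), both the 222-teraflop and aggregate-memory numbers follow directly. A sketch in Python:

        # Back-of-the-envelope check of the Shaheen figures quoted above.
        # Per-rack configuration is assumed (standard Blue Gene/P), not
        # stated in the record itself.
        RACKS = 16
        NODES_PER_RACK = 1024          # standard Blue Gene/P rack
        CORES_PER_NODE = 4             # quad-core PowerPC 450
        CLOCK_GHZ = 0.85
        FLOPS_PER_CYCLE = 4            # two FPUs with fused multiply-add
        GB_PER_NODE = 4                # stated in the record

        nodes = RACKS * NODES_PER_RACK
        peak_tflops = nodes * CORES_PER_NODE * CLOCK_GHZ * FLOPS_PER_CYCLE / 1e3
        memory_tb = nodes * GB_PER_NODE / 1024
        print(f"{nodes} nodes, {peak_tflops:.1f} Tflop/s peak, {memory_tb:.0f} TB memory")
        # -> 16384 nodes, 222.8 Tflop/s peak, 64 TB memory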

  6. Supercomputing Centers and Electricity Service Providers

    DEFF Research Database (Denmark)

    Patki, Tapasya; Bates, Natalie; Ghatikar, Girish

    2016-01-01

    Supercomputing Centers (SCs) have high and variable power demands, which increase the challenges of the Electricity Service Providers (ESPs) with regards to efficient electricity distribution and reliable grid operation. High penetration of renewable energy generation further exacerbates this problem. In order to develop a symbiotic relationship between the SCs and their ESPs and to support effective power management at all levels, it is critical to understand and analyze how the existing relationships were formed and how these are expected to evolve. In this paper, we first present results from a detailed, quantitative survey-based analysis and compare the perspectives of the European grid and SCs to the ones of the United States (US). We then show that contrary to the expectation, SCs in the US are more open toward cooperating and developing demand-management strategies with their ESPs.

  7. An assessment of worldwide supercomputer usage

    Energy Technology Data Exchange (ETDEWEB)

    Wasserman, H.J.; Simmons, M.L.; Hayes, A.H.

    1995-01-01

    This report provides a comparative study of advanced supercomputing usage in Japan and the United States as of Spring 1994. It is based on the findings of a group of US scientists whose careers have centered on programming, evaluating, and designing high-performance supercomputers for over ten years. The report is a follow-on to an assessment of supercomputing technology in Europe and Japan that was published in 1993. Whereas the previous study focused on supercomputer manufacturing capabilities, the primary focus of the current work was to compare where and how supercomputers are used. Research for this report was conducted through both literature studies and field research in Japan.

  8. A training program for scientific supercomputing users

    Energy Technology Data Exchange (ETDEWEB)

    Hanson, F.; Moher, T.; Sabelli, N.; Solem, A.

    1988-01-01

    There is need for a mechanism to transfer supercomputing technology into the hands of scientists and engineers in such a way that they will acquire a foundation of knowledge that will permit integration of supercomputing as a tool in their research. Most computing center training emphasizes computer-specific information about how to use a particular computer system; most academic programs teach concepts to computer scientists. Only a few brief courses and new programs are designed for computational scientists. This paper describes an eleven-week training program aimed principally at graduate and postdoctoral students in computationally-intensive fields. The program is designed to balance the specificity of computing center courses, the abstractness of computer science courses, and the personal contact of traditional apprentice approaches. It is based on the experience of computer scientists and computational scientists, and consists of seminars and clinics given by many visiting and local faculty. It covers a variety of supercomputing concepts, issues, and practices related to architecture, operating systems, software design, numerical considerations, code optimization, graphics, communications, and networks. Its research component encourages understanding of scientific computing and supercomputer hardware issues. Flexibility in thinking about computing needs is emphasized by the use of several different supercomputer architectures, such as the Cray X-MP/48 at the National Center for Supercomputing Applications at University of Illinois at Urbana-Champaign, IBM 3090 600E/VF at the Cornell National Supercomputer Facility, and Alliant FX/8 at the Advanced Computing Research Facility at Argonne National Laboratory. 11 refs., 6 tabs.

  9. Computational fluid dynamics research at the United Technologies Research Center requiring supercomputers

    Science.gov (United States)

    Landgrebe, Anton J.

    1987-01-01

    An overview of research activities at the United Technologies Research Center (UTRC) in the area of Computational Fluid Dynamics (CFD) is presented. The requirement and use of various levels of computers, including supercomputers, for the CFD activities is described. Examples of CFD directed toward applications to helicopters, turbomachinery, heat exchangers, and the National Aerospace Plane are included. Helicopter rotor codes for the prediction of rotor and fuselage flow fields and airloads were developed with emphasis on rotor wake modeling. Airflow and airload predictions and comparisons with experimental data are presented. Examples are presented of recent parabolized Navier-Stokes and full Navier-Stokes solutions for hypersonic shock-wave/boundary layer interaction, and hydrogen/air supersonic combustion. In addition, other examples of CFD efforts in turbomachinery Navier-Stokes methodology and separated flow modeling are presented. A brief discussion of the 3-tier scientific computing environment is also presented, in which the researcher has access to workstations, mid-size computers, and supercomputers.

  10. TOP500 Supercomputers for November 2003

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2003-11-16

    22nd Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 22nd edition of the TOP500 list of the world's fastest supercomputers was released today (November 16, 2003). The Earth Simulator supercomputer retains the number one position with its Linpack benchmark performance of 35.86 Tflop/s ("teraflops" or trillions of calculations per second). It was built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan.

  11. Assessment techniques for a learning-centered curriculum: evaluation design for adventures in supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Helland, B. [Ames Lab., IA (United States); Summers, B.G. [Oak Ridge National Lab., TN (United States)

    1996-09-01

    As the classroom paradigm shifts from being teacher-centered to being learner-centered, student assessments are evolving from typical paper and pencil testing to other methods of evaluation. Students should be probed for understanding, reasoning, and critical thinking abilities rather than their ability to return memorized facts. The assessment of the Department of Energy's pilot program, Adventures in Supercomputing (AiS), offers one example of assessment techniques developed for learner-centered curricula. This assessment has employed a variety of methods to collect student data. Methods of assessment used were traditional testing, performance testing, interviews, short questionnaires via email, and student presentations of projects. The data obtained from these sources have been analyzed by a professional assessment team at the Center for Children and Technology. The results have been used to improve the AiS curriculum and establish the quality of the overall AiS program. This paper will discuss the various methods of assessment used and the results.

  12. Integration of Panda Workload Management System with supercomputers

    Science.gov (United States)

    De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Nilsson, P.; Novikov, A.; Oleynik, D.; Panitkin, S.; Poyda, A.; Read, K. F.; Ryabinkin, E.; Teslyuk, A.; Velikhov, V.; Wells, J. C.; Wenaus, T.

    2016-09-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3+ petaFLOPS, the next LHC data-taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads
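
    A minimal sketch of the light-weight MPI wrapper idea described above, assuming mpi4py; the payload script name is hypothetical and this is not PanDA's actual pilot code. Each MPI rank runs one single-threaded payload, so a single batch job fills many multi-core worker nodes:

        # One single-threaded payload per MPI rank (illustrative sketch).
        import os
        import subprocess
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        # Give each rank its own working directory and seed.
        workdir = f"work_{rank:05d}"
        os.makedirs(workdir, exist_ok=True)
        result = subprocess.run(["../run_payload.sh", f"--seed={rank}"],  # hypothetical script
                                cwd=workdir)

        # Rank 0 collects exit codes so the batch job reports one status.
        codes = comm.gather(result.returncode, root=0)
        if rank == 0:
            failed = [r for r, c in enumerate(codes) if c != 0]
            print(f"{len(codes)} payloads finished, failed ranks: {failed}")

    Launched under the batch system's MPI launcher (e.g. aprun or srun), one such job occupies thousands of cores while each payload itself remains single-threaded.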

  13. What is supercomputing?

    International Nuclear Information System (INIS)

    Asai, Kiyoshi

    1992-01-01

    Supercomputing means high-speed computation using a supercomputer. Supercomputers and the technical term "supercomputing" have become widespread over the past ten years. The performances of the main computers installed so far at the Japan Atomic Energy Research Institute are compared. There are two methods to increase computing speed using existing circuit elements: parallel processor systems and vector processor systems. The CRAY-1 was the first successful vector computer. Supercomputing technology was first applied to meteorological organizations in foreign countries, and to aviation and atomic energy research institutes in Japan. Supercomputing for atomic energy depends on the trend of technical development in atomic energy, and its contents are divided into the increase of computing speed in existing simulation calculations and the acceleration of new technical developments in atomic energy. Examples of supercomputing at the Japan Atomic Energy Research Institute are reported. (K.I.)

  14. TOP500 Supercomputers for June 2003

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2003-06-23

    21st Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 21st edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2003). The Earth Simulator supercomputer built by NEC and installed last year at the Earth Simulator Center in Yokohama, Japan, with its Linpack benchmark performance of 35.86 Tflop/s (teraflops or trillions of calculations per second), retains the number one position. The number 2 position is held by the re-measured ASCI Q system at Los Alamos National Laboratory. With 13.88 Tflop/s, it is the second system ever to exceed the 10 Tflop/s mark. ASCI Q was built by Hewlett-Packard and is based on the AlphaServer SC computer system.

  15. TOP500 Supercomputers for June 2002

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2002-06-20

    19th Edition of TOP500 List of World's Fastest Supercomputers Released MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a much-anticipated event in the world of high-performance computing, the 19th edition of the TOP500 list of the world's fastest supercomputers was released today (June 20, 2002). The recently installed Earth Simulator supercomputer at the Earth Simulator Center in Yokohama, Japan, is as expected the clear new number 1. Its performance of 35.86 Tflop/s (trillions of calculations per second) running the Linpack benchmark is almost five times higher than the performance of the now No. 2 IBM ASCI White system at Lawrence Livermore National Laboratory (7.2 Tflop/s). This powerful leapfrogging to the top by a system so much faster than the previous top system is unparalleled in the history of the TOP500.

  16. Comprehensive efficiency analysis of supercomputer resource usage based on system monitoring data

    Science.gov (United States)

    Mamaeva, A. A.; Shaykhislamov, D. I.; Voevodin, Vad V.; Zhumatiy, S. A.

    2018-03-01

    One of the main problems of modern supercomputers is the low efficiency of their usage, which leads to significant idle time of computational resources and, in turn, to a decrease in the speed of scientific research. This paper presents three approaches to studying the efficiency of supercomputer resource usage based on monitoring data analysis. The first approach analyzes computing resource utilization statistics, which makes it possible to identify different typical classes of programs, to explore the structure of the supercomputer job flow and to track overall trends in the supercomputer's behavior. The second approach is aimed specifically at analyzing off-the-shelf software packages and libraries installed on the supercomputer, since the efficiency of their usage is becoming an increasingly important factor for the efficient functioning of the entire supercomputer. Within the third approach, abnormal jobs – jobs with abnormally inefficient behavior that differs significantly from the standard behavior of the overall supercomputer job flow – are detected. For each approach, results obtained in practice at the Supercomputer Center of Moscow State University are demonstrated.
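
    As a purely illustrative sketch of the third approach's flavor (not the authors' actual detector): a job can be flagged as abnormal when a monitored metric deviates strongly from the average of the overall job flow, for example by a simple z-score rule. The metric name and threshold here are assumptions:

        import statistics

        def abnormal_jobs(jobs, metric="cpu_load", z_threshold=3.0):
            """Return jobs whose metric deviates > z_threshold sigmas from the mean."""
            values = [j[metric] for j in jobs]
            mean, stdev = statistics.mean(values), statistics.stdev(values)
            return [j for j in jobs
                    if stdev > 0 and abs(j[metric] - mean) / stdev > z_threshold]

        jobs = [{"id": i, "cpu_load": 0.8} for i in range(100)]
        jobs.append({"id": 100, "cpu_load": 0.01})     # idles its allocated cores
        print([j["id"] for j in abnormal_jobs(jobs)])  # -> [100]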

  17. Applications of supercomputing and the utility industry: Calculation of power transfer capabilities

    International Nuclear Information System (INIS)

    Jensen, D.D.; Behling, S.R.; Betancourt, R.

    1990-01-01

    Numerical models and iterative simulation using supercomputers can furnish cost-effective answers to utility industry problems that are all but intractable using conventional computing equipment. An example of the use of supercomputers by the utility industry is the determination of power transfer capability limits for power transmission systems. This work has the goal of markedly reducing the run time of transient stability codes used to determine power distributions following major system disturbances. To date, run times of several hours on a conventional computer have been reduced to several minutes on state-of-the-art supercomputers, with further improvements anticipated to reduce run times to less than a minute. In spite of the potential advantages of supercomputers, few utilities have sufficient need for a dedicated in-house supercomputing capability. This problem is resolved using a supercomputer center serving a geographically distributed user base coupled via high-speed communication networks

  18. Automatic discovery of the communication network topology for building a supercomputer model

    Science.gov (United States)

    Sobolev, Sergey; Stefanov, Konstantin; Voevodin, Vadim

    2016-10-01

    The Research Computing Center of Lomonosov Moscow State University is developing the Octotron software suite for automatic monitoring and mitigation of emergency situations in supercomputers so as to maximize hardware reliability. The suite is based on a software model of the supercomputer. The model uses a graph to describe the computing system components and their interconnections. One of the most complex components of a supercomputer that needs to be included in the model is its communication network. This work describes the proposed approach for automatically discovering the Ethernet communication network topology in a supercomputer and its description in terms of the Octotron model. This suite automatically detects computing nodes and switches, collects information about them and identifies their interconnections. The application of this approach is demonstrated on the "Lomonosov" and "Lomonosov-2" supercomputers.
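
    A minimal sketch of the graph idea behind such a model, with hypothetical component names (Octotron's real model and rule language are far richer): components are vertices, interconnections are edges, and monitoring rules are phrased over the graph.

        from collections import defaultdict

        graph = defaultdict(set)   # component -> set of directly linked components

        def link(a, b):
            graph[a].add(b)
            graph[b].add(a)

        # Topology as discovered: two compute nodes behind one Ethernet switch.
        link("switch-eth-01", "node-001")
        link("switch-eth-01", "node-002")
        link("switch-eth-01", "switch-core")

        def reachable(start):
            """All components reachable from `start` (depth-first search)."""
            seen, stack = set(), [start]
            while stack:
                v = stack.pop()
                if v not in seen:
                    seen.add(v)
                    stack.extend(graph[v] - seen)
            return seen

        # Example rule: every compute node must be reachable from the core switch.
        assert {"node-001", "node-002"} <= reachable("switch-core")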

  19. Supercomputational science

    CERN Document Server

    Wilson, S

    1990-01-01

    In contemporary research, the supercomputer now ranks, along with radio telescopes, particle accelerators and the other apparatus of "big science", as an expensive resource, which is nevertheless essential for state of the art research. Supercomputers are usually provided as shared central facilities. However, unlike telescopes and accelerators, they find a wide range of applications extending across a broad spectrum of research activity. The difference in performance between a "good" and a "bad" computer program on a traditional serial computer may be a factor of two or three, but on a contemporary supercomputer it can easily be a factor of one hundred or even more! Furthermore, this factor is likely to increase with future generations of machines. In keeping with the large capital and recurrent costs of these machines, it is appropriate to devote effort to training and familiarization so that supercomputers are employed to best effect. This volume records the lectures delivered at a Summer School ...

  20. Research center Juelich to install Germany's most powerful supercomputer: new IBM system for science and research will achieve 5.8 trillion computations per second

    CERN Multimedia

    2002-01-01

    "The Research Center Juelich, Germany, and IBM today announced that they have signed a contract for the delivery and installation of a new IBM supercomputer at the Central Institute for Applied Mathematics" (1/2 page).

  1. San Joaquin Valley Aerosol Health Effects Research Center (SAHERC)

    Data.gov (United States)

    Federal Laboratory Consortium — At the San Joaquin Valley Aerosol Health Effects Center, located at the University of California-Davis, researchers will investigate the properties of particles that...

  2. Enabling department-scale supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Greenberg, D.S.; Hart, W.E.; Phillips, C.A.

    1997-11-01

    The Department of Energy (DOE) national laboratories have one of the longest and most consistent histories of supercomputer use. The authors summarize the architecture of DOE's new supercomputers that are being built for the Accelerated Strategic Computing Initiative (ASCI). The authors then argue that in the near future scaled-down versions of these supercomputers with petaflop-per-weekend capabilities could become widely available to hundreds of research and engineering departments. The availability of such computational resources will allow simulation of physical phenomena to become a full-fledged third branch of scientific exploration, along with theory and experimentation. They describe the ASCI and other supercomputer applications at Sandia National Laboratories, and discuss which lessons learned from Sandia's long history of supercomputing can be applied in this new setting.

  3. Metabolomics Workbench (MetWB)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Metabolomics Program's Data Repository and Coordinating Center (DRCC), housed at the San Diego Supercomputer Center (SDSC), University of California, San Diego,...

  4. 33 CFR 165.1121 - Security Zone: Fleet Supply Center Industrial Pier, San Diego, CA.

    Science.gov (United States)

    2010-07-01

    ... Coast Guard District § 165.1121 Security Zone: Fleet Supply Center Industrial Pier, San Diego, CA. (a) ... [33 CFR, Navigation and Navigable Waters, Vol. 2, revised as of 2010-07-01]

  5. Tryton Supercomputer Capabilities for Analysis of Massive Data Streams

    Directory of Open Access Journals (Sweden)

    Krawczyk Henryk

    2015-09-01

    The recently deployed supercomputer Tryton, located in the Academic Computer Center of Gdansk University of Technology, provides great means for massively parallel processing. Moreover, the status of the Center as one of the main network nodes in the PIONIER network enables the fast and reliable transfer of data produced by miscellaneous devices scattered across the whole country. Typical examples of such data are streams containing radio-telescope and satellite observations. Their analysis, especially with real-time constraints, can be challenging and requires the use of dedicated software components. We propose a solution for such parallel analysis using the supercomputer, supervised by the KASKADA platform, which, in conjunction with immersive 3D visualization techniques, can be used to solve problems such as pulsar detection and chronometry, or oil-spill simulation on the sea surface.

  6. Centralized supercomputer support for magnetic fusion energy research

    International Nuclear Information System (INIS)

    Fuss, D.; Tull, G.G.

    1984-01-01

    High-speed computers with large memories are vital to magnetic fusion energy research. Magnetohydrodynamic (MHD), transport, equilibrium, Vlasov, particle, and Fokker-Planck codes that model plasma behavior play an important role in designing experimental hardware and interpreting the resulting data, as well as in advancing plasma theory itself. The size, architecture, and software of supercomputers to run these codes are often the crucial constraints on the benefits such computational modeling can provide. Hence, vector computers such as the CRAY-1 offer a valuable research resource. To meet the computational needs of the fusion program, the National Magnetic Fusion Energy Computer Center (NMFECC) was established in 1974 at the Lawrence Livermore National Laboratory. Supercomputers at the central computing facility are linked to smaller computer centers at each of the major fusion laboratories by a satellite communication network. In addition to providing large-scale computing, the NMFECC environment stimulates collaboration and the sharing of computer codes and data among the many fusion researchers in a cost-effective manner

  7. Japanese supercomputer technology

    International Nuclear Information System (INIS)

    Buzbee, B.L.; Ewald, R.H.; Worlton, W.J.

    1982-01-01

    In February 1982, computer scientists from the Los Alamos National Laboratory and Lawrence Livermore National Laboratory visited several Japanese computer manufacturers. The purpose of these visits was to assess the state of the art of Japanese supercomputer technology and to advise Japanese computer vendors of the needs of the US Department of Energy (DOE) for more powerful supercomputers. The Japanese foresee a domestic need for large-scale computing capabilities for nuclear fusion, image analysis for the Earth Resources Satellite, meteorological forecast, electrical power system analysis (power flow, stability, optimization), structural and thermal analysis of satellites, and very large scale integrated circuit design and simulation. To meet this need, Japan has launched an ambitious program to advance supercomputer technology. This program is described

  8. Status of supercomputers in the US

    International Nuclear Information System (INIS)

    Fernbach, S.

    1985-01-01

    Current supercomputers, that is, the Class VI machines which first became available in 1976, are being delivered in greater quantity than ever before. In addition, manufacturers are busily working on Class VII machines to be ready for delivery in CY 1987. Mainframes are being modified or designed to take on some features of the supercomputers, and new companies, with the intent of either competing directly in the supercomputer arena or providing entry-level systems from which to graduate to supercomputers, are springing up everywhere. Even well-founded organizations like IBM and CDC are adding machines with vector instructions to their repertoires. Japanese-manufactured supercomputers are also being introduced into the U.S. Will these begin to compete with those of U.S. manufacture? Are they truly competitive? It turns out that from both the hardware and software points of view they may be superior. We may be facing the same problems in supercomputers that we faced in video systems

  9. Role of supercomputers in magnetic fusion and energy research programs

    International Nuclear Information System (INIS)

    Killeen, J.

    1985-06-01

    The importance of computer modeling in magnetic fusion (MFE) and energy research (ER) programs is discussed. The need for the most advanced supercomputers is described, and the role of the National Magnetic Fusion Energy Computer Center in meeting these needs is explained

  10. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    Energy Technology Data Exchange (ETDEWEB)

    De, K [University of Texas at Arlington; Jha, S [Rutgers University; Klimentov, A [Brookhaven National Laboratory (BNL); Maeno, T [Brookhaven National Laboratory (BNL); Nilsson, P [Brookhaven National Laboratory (BNL); Oleynik, D [University of Texas at Arlington; Panitkin, S [Brookhaven National Laboratory (BNL); Wells, Jack C [ORNL; Wenaus, T [Brookhaven National Laboratory (BNL)

    2016-01-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, Europe and Russia (in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the MIRA supercomputer at the Argonne Leadership Computing Facility (ALCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation

  11. 76 FR 1521 - Security Zone: Fleet Industrial Supply Center Pier, San Diego, CA

    Science.gov (United States)

    2011-01-11

    ...-AA87 Security Zone: Fleet Industrial Supply Center Pier, San Diego, CA. AGENCY: Coast Guard, DHS. ACTION: ... San Diego, CA. The existing security zone is around the former Fleet Industrial Supply Center Pier. The security zone encompasses all navigable waters within 100 feet of the former Fleet Industrial Supply Center...

  12. Supercomputing and related national projects in Japan

    International Nuclear Information System (INIS)

    Miura, Kenichi

    1985-01-01

    Japanese supercomputer development activities in the industry and research projects are outlined. Architecture, technology, software, and applications of Fujitsu's Vector Processor Systems are described as an example of Japanese supercomputers. Applications of supercomputers to high energy physics are also discussed. (orig.)

  13. Flux-Level Transit Injection Experiments with NASA Pleiades Supercomputer

    Science.gov (United States)

    Li, Jie; Burke, Christopher J.; Catanzarite, Joseph; Seader, Shawn; Haas, Michael R.; Batalha, Natalie; Henze, Christopher; Christiansen, Jessie; Kepler Project, NASA Advanced Supercomputing Division

    2016-06-01

    Flux-Level Transit Injection (FLTI) experiments are executed with NASA's Pleiades supercomputer for the Kepler Mission. The latest release (9.3, January 2016) of the Kepler Science Operations Center Pipeline is used in the FLTI experiments. Their purpose is to validate the Analytic Completeness Model (ACM), which can be computed for all Kepler target stars, thereby enabling exoplanet occurrence rate studies. Pleiades, a facility of NASA's Advanced Supercomputing Division, is one of the world's most powerful supercomputers and represents NASA's state-of-the-art technology. We discuss the details of implementing the FLTI experiments on the Pleiades supercomputer. For example, taking into account that ~16 injections are generated by one core of the Pleiades processors in an hour, the “shallow” FLTI experiment, in which ~2000 injections are required per target star, can be done for 16% of all Kepler target stars in about 200 hours. Stripping down the transit search to bare bones, i.e. only searching adjacent high/low periods at high/low pulse durations, makes the computationally intensive FLTI experiments affordable. The design of the FLTI experiments and the analysis of the resulting data are presented in “Validating an Analytic Completeness Model for Kepler Target Stars Based on Flux-level Transit Injection Experiments” by Catanzarite et al. (#2494058). Kepler was selected as the 10th mission of the Discovery Program. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.
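
    The quoted throughput implies a concrete core budget. As a back-of-the-envelope check (the ~200,000 Kepler target-star count is an outside assumption, not stated in the record):

        injections_per_core_hour = 16
        injections_per_star = 2000
        core_hours_per_star = injections_per_star / injections_per_core_hour  # 125

        targets = 200_000            # assumed total number of Kepler target stars
        stars = targets * 0.16       # 32,000 stars (16% of the targets)

        total_core_hours = stars * core_hours_per_star  # 4.0 million core-hours
        wall_hours = 200
        cores_needed = total_core_hours / wall_hours    # 20,000 cores
        print(f"{core_hours_per_star:.0f} core-hours/star, "
              f"{cores_needed:,.0f} cores busy for {wall_hours} h wall-clock")

    Twenty thousand cores running for 200 hours is a large but feasible slice of Pleiades, which is consistent with the point that the experiment becomes affordable only after stripping the transit search to bare bones.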

  14. TOP500 Supercomputers for June 2004

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2004-06-23

    23rd Edition of TOP500 List of World's Fastest Supercomputers Released: Japan's Earth Simulator Enters Third Year in Top Position MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 23rd edition of the TOP500 list of the world's fastest supercomputers was released today (June 23, 2004) at the International Supercomputer Conference in Heidelberg, Germany.

  15. Building the Teraflops/Petabytes Production Computing Center

    International Nuclear Information System (INIS)

    Kramer, William T.C.; Lucas, Don; Simon, Horst D.

    1999-01-01

    In just one decade, the 1990s, supercomputer centers have undergone two fundamental transitions which require rethinking their operation and their role in high performance computing. The first transition in the early to mid-1990s resulted from a technology change in high performance computing architecture. Highly parallel distributed memory machines built from commodity parts increased the operational complexity of the supercomputer center, and required the introduction of intellectual services as equally important components of the center. The second transition is happening in the late 1990s as centers are introducing loosely coupled clusters of SMPs as their premier high performance computing platforms, while dealing with an ever-increasing volume of data. In addition, increasing network bandwidth enables new modes of use of a supercomputer center, in particular, computational grid applications. In this paper we describe what steps NERSC is taking to address these issues and stay at the leading edge of supercomputing centers.

  16. A workbench for tera-flop supercomputing

    International Nuclear Information System (INIS)

    Resch, M.M.; Kuester, U.; Mueller, M.S.; Lang, U.

    2003-01-01

    Supercomputers currently reach a peak performance in the range of TFlop/s. With but one exception - the Japanese Earth Simulator - none of these systems has so far been able to also show a level of sustained performance for a variety of applications that comes close to the peak performance. Sustained TFlop/s are therefore rarely seen. The reasons are manifold and well known: bandwidth and latency, both for main memory and for the internal network, are the key internal technical problems. Cache hierarchies with large caches can bring relief but are no remedy to the problem. However, it is not only technical problems that inhibit scientists from fully exploiting the potential of modern supercomputers; more and more organizational issues come to the forefront. This paper shows the approach of the High Performance Computing Center Stuttgart (HLRS) to deliver a sustained performance of TFlop/s for a wide range of applications from a large group of users spread over Germany. The core of the concept is the role of the data. Around this we design a simulation workbench that hides the complexity of interacting computers, networks and file systems from the user. (authors)

  17. TOP500 Supercomputers for June 2005

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2005-06-22

    25th Edition of TOP500 List of World's Fastest Supercomputers Released: DOE/LLNL BlueGene/L and IBM gain Top Positions MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 25th edition of the TOP500 list of the world's fastest supercomputers was released today (June 22, 2005) at the 20th International Supercomputing Conference (ISC2005) in Heidelberg, Germany.

  18. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    Science.gov (United States)

    Klimentov, A.; De, K.; Jha, S.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Wells, J.; Wenaus, T.

    2016-10-01

    The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than the Grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the

  19. Integration Of PanDA Workload Management System With Supercomputers for ATLAS and Data Intensive Science

    International Nuclear Information System (INIS)

    Klimentov, A; Maeno, T; Nilsson, P; Panitkin, S; Wenaus, T; De, K; Oleynik, D; Jha, S; Wells, J

    2016-01-01

    The LHC, operating at CERN, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment is relying on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System for managing the workflow for all data processing on over 150 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 250,000 cores with a peak performance of 0.3 petaFLOPS, LHC data taking runs require more resources than the Grid can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with supercomputers in the United States, in particular with the Titan supercomputer at the Oak Ridge Leadership Computing Facility. The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on the LCFs' multi-core worker nodes. This implementation was tested with a variety of Monte-Carlo workloads on several supercomputing platforms for the ALICE and ATLAS experiments and has been in full production for ATLAS since September 2015. We will present our current accomplishments with running PanDA at supercomputers and demonstrate our ability to use PanDA as a portal independent of the

  20. INTEL: Intel based systems move up in supercomputing ranks

    CERN Multimedia

    2002-01-01

    "The TOP500 supercomputer rankings released today at the Supercomputing 2002 conference show a dramatic increase in the number of Intel-based systems being deployed in high-performance computing (HPC) or supercomputing areas" (1/2 page).

  1. World's fastest supercomputer opens up to users

    Science.gov (United States)

    Xin, Ling

    2016-08-01

    China's latest supercomputer - Sunway TaihuLight - has claimed the crown as the world's fastest computer according to the latest TOP500 list, released at the International Supercomputer Conference in Frankfurt in late June.

  2. OpenMP Performance on the Columbia Supercomputer

    Science.gov (United States)

    Haoqiang, Jin; Hood, Robert

    2005-01-01

    This presentation discusses the Columbia supercomputer, one of the world's fastest, providing 61 TFLOPs (as of 10/20/04). It was conceived, designed, built, and deployed in just 120 days: a 20-node supercomputer built on proven 512-processor nodes. It is the largest SGI system in the world, with over 10,000 Intel Itanium 2 processors, and provides the largest node size incorporating commodity parts (512) and the largest shared-memory environment (2048); with 88% efficiency it tops the scalar systems on the Top500 list.

  3. The San Diego Center for Patient Safety: Creating a Research, Education, and Community Consortium

    National Research Council Canada - National Science Library

    Pratt, Nancy; Vo, Kelly; Ganiats, Theodore G; Weinger, Matthew B

    2005-01-01

    In response to the Agency for Healthcare Research and Quality's Developmental Centers of Education and Research in Patient Safety grant program, a group of clinicians and academicians proposed the San...

  4. Supercomputing - Use Cases, Advances, The Future (1/2)

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Supercomputing has become a staple of science and the poster child for aggressive developments in silicon technology, energy efficiency and programming. In this series we examine the key components of supercomputing setups and the various advances – recent and past – that made headlines and delivered bigger and bigger machines. We also take a closer look at the future prospects of supercomputing, and the extent of its overlap with high throughput computing, in the context of main use cases ranging from oil exploration to market simulation. On the first day, we will focus on the history and theory of supercomputing, the top500 list and the hardware that makes supercomputers tick. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and initiated projects with the private sector (e.g. HP an...

  5. Supercomputing - Use Cases, Advances, The Future (2/2)

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Supercomputing has become a staple of science and the poster child for aggressive developments in silicon technology, energy efficiency and programming. In this series we examine the key components of supercomputing setups and the various advances – recent and past – that made headlines and delivered bigger and bigger machines. We also take a closer look at the future prospects of supercomputing, and the extent of its overlap with high throughput computing, in the context of main use cases ranging from oil exploration to market simulation. On the second day, we will focus on software and software paradigms driving supercomputers, workloads that need supercomputing treatment, advances in technology and possible future developments. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and i...

  6. Advanced parallel processing with supercomputer architectures

    International Nuclear Information System (INIS)

    Hwang, K.

    1987-01-01

    This paper investigates advanced parallel processing techniques and innovative hardware/software architectures that can be applied to boost the performance of supercomputers. Critical issues on architectural choices, parallel languages, compiling techniques, resource management, concurrency control, programming environment, parallel algorithms, and performance enhancement methods are examined and the best answers are presented. The authors cover advanced processing techniques suitable for supercomputers, high-end mainframes, minisupers, and array processors. The coverage emphasizes vectorization, multitasking, multiprocessing, and distributed computing. In order to achieve these operation modes, parallel languages, smart compilers, synchronization mechanisms, load balancing methods, mapping parallel algorithms, operating system functions, application library, and multidiscipline interactions are investigated to ensure high performance. At the end, they assess the potentials of optical and neural technologies for developing future supercomputers

  7. Desktop supercomputer: what can it do?

    Science.gov (United States)

    Bogdanov, A.; Degtyarev, A.; Korkhov, V.

    2017-12-01

    The paper addresses the issues of solving complex problems that require using supercomputers or multiprocessor clusters, which are available to most researchers nowadays. Efficient distribution of high performance computing resources according to actual application needs has been a major research topic since high-performance computing (HPC) technologies became widely introduced. At the same time, comfortable and transparent access to these resources was a key user requirement. In this paper we discuss approaches to building a virtual private supercomputer available at the user's desktop: a virtual computing environment tailored specifically for a target user with a particular target application. We describe and evaluate possibilities to create the virtual supercomputer based on light-weight virtualization technologies, and analyze the efficiency of our approach compared to traditional methods of HPC resource management.

  8. Adaptability of supercomputers to nuclear computations

    International Nuclear Information System (INIS)

    Asai, Kiyoshi; Ishiguro, Misako; Matsuura, Toshihiko.

    1983-01-01

    Recently, in the field of scientific and technical calculation, the usefulness of supercomputers represented by the CRAY-1 has been recognized, and they are utilized in various countries. The rapid computation of supercomputers is based on vector computation. The authors investigated the adaptability to vector computation of about 40 typical atomic energy codes over the past six years. Based on the results of this investigation, the adaptability of atomic energy codes to the vector computation capability of supercomputers, problems regarding their utilization, and the future prospects are explained. The adaptability of individual calculation codes to vector computation depends largely on the algorithm and program structure used in the codes. The speed-up achieved by pipeline vector systems, the investigation at the Japan Atomic Energy Research Institute and its results, and examples of vectorizing codes for atomic energy, environmental safety and nuclear fusion are reported. The speedup for the 40 examples ranged from 1.5 to 9.0. It can be said that the adaptability of supercomputers to atomic energy codes is fairly good. (Kako, I.)
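
    To illustrate why adaptability depends on the algorithm and program structure: a loop whose iterations are independent maps directly onto vector hardware, while a first-order recurrence does not. A sketch in NumPy notation (a modern stand-in for the vectorizable Fortran loops of the period):

        import numpy as np

        a = np.random.rand(100_000)
        b = np.random.rand(100_000)

        # Vectorizable: every element is computed independently.
        c = 2.0 * a + b

        # Resists vectorization: x[i] depends on x[i-1].
        x = np.empty_like(a)
        x[0] = a[0]
        for i in range(1, len(a)):
            x[i] = 0.5 * x[i - 1] + a[i]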

  9. Communications and Collaboration Keep San Francisco VA Medical Center Project on Track

    International Nuclear Information System (INIS)

    Federal Energy Management Program

    2001-01-01

    This case study about energy savings performance contracts (ESPCs) presents an overview of how the Veterans Affairs Medical Center in San Francisco established an ESPC and the benefits derived from it. The Federal Energy Management Program instituted these special contracts to help federal agencies finance energy-saving projects at their facilities

  10. Visualization at supercomputing centers: the tale of little big iron and the three skinny guys.

    Science.gov (United States)

    Bethel, E W; van Rosendale, J; Southard, D; Gaither, K; Childs, H; Brugger, E; Ahern, S

    2011-01-01

    Supercomputing centers are unique resources that aim to enable scientific knowledge discovery by employing large computational resources: the "Big Iron." Design, acquisition, installation, and management of the Big Iron are carefully planned and monitored. Because these Big Iron systems produce a tsunami of data, it's natural to colocate the visualization and analysis infrastructure. This infrastructure consists of hardware (the Little Iron) and staff (the Skinny Guys). Our collective experience suggests that design, acquisition, installation, and management of the Little Iron and Skinny Guys don't receive the same level of treatment as that of the Big Iron. This article explores the following questions about the Little Iron: How should we size the Little Iron to adequately support visualization and analysis of data coming off the Big Iron? What sort of capabilities must it have? Related questions concern the size of the visualization support staff: How big should a visualization program be, that is, how many Skinny Guys should it have? What should the staff do? How much of the visualization should be provided as a support service, and how much should applications scientists be expected to do on their own?

  11. Desktop supercomputer: what can it do?

    International Nuclear Information System (INIS)

    Bogdanov, A.; Degtyarev, A.; Korkhov, V.

    2017-01-01

    The paper addresses the issues of solving complex problems that require using supercomputers or multiprocessor clusters, which are available to most researchers nowadays. Efficient distribution of high performance computing resources according to actual application needs has been a major research topic since high-performance computing (HPC) technologies became widely introduced. At the same time, comfortable and transparent access to these resources was a key user requirement. In this paper we discuss approaches to building a virtual private supercomputer available at the user's desktop: a virtual computing environment tailored specifically for a target user with a particular target application. We describe and evaluate possibilities to create the virtual supercomputer based on light-weight virtualization technologies, and analyze the efficiency of our approach compared to traditional methods of HPC resource management.

  12. TOP500 Supercomputers for November 2004

    Energy Technology Data Exchange (ETDEWEB)

    Strohmaier, Erich; Meuer, Hans W.; Dongarra, Jack; Simon, Horst D.

    2004-11-08

    24th Edition of TOP500 List of World's Fastest Supercomputers Released: DOE/IBM BlueGene/L and NASA/SGI's Columbia gain Top Positions MANNHEIM, Germany; KNOXVILLE, Tenn.; BERKELEY, Calif. In what has become a closely watched event in the world of high-performance computing, the 24th edition of the TOP500 list of the world's fastest supercomputers was released today (November 8, 2004) at the SC2004 Conference in Pittsburgh, Pa.

  13. Status reports of supercomputing astrophysics in Japan

    International Nuclear Information System (INIS)

    Nakamura, Takashi; Nagasawa, Mikio

    1990-01-01

    The Workshop on Supercomputing Astrophysics was held at the National Laboratory for High Energy Physics (KEK, Tsukuba) from August 31 to September 2, 1989. More than 40 participants, physicists and astronomers, attended and discussed many topics in an informal atmosphere. The main purpose of this workshop was to focus on the theoretical activities in computational astrophysics in Japan. It also aimed to promote effective collaboration among the numerical experimentalists working on supercomputing techniques. The various subjects of the presented papers, covering hydrodynamics, plasma physics, gravitating systems, radiative transfer and general relativity, are all stimulating. In fact, these numerical calculations have now become possible in Japan owing to the power of Japanese supercomputers such as the HITAC S820, Fujitsu VP400E and NEC SX-2. (J.P.N.)

  14. Comments on the parallelization efficiency of the Sunway TaihuLight supercomputer

    OpenAIRE

    Végh, János

    2016-01-01

    In the world of supercomputers, the large number of processors requires minimizing the inefficiencies of parallelization, which appear as a sequential part of the program from the point of view of Amdahl's law. The recently suggested new figure of merit is applied to the recently presented supercomputer, and the timeline of "Top 500" supercomputers is scrutinized using the metric. It is demonstrated that, in addition to the computing performance and power consumption, the new supercomputer i...
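
    For reference, the standard form of Amdahl's law that the abstract invokes: if a fraction s of the work is inherently sequential, the speedup on N processors is bounded by

        S(N) = \frac{1}{s + (1 - s)/N} \le \frac{1}{s}

    so even s = 0.0001 caps the speedup at 10,000 regardless of N, far below the roughly ten million cores of the Sunway TaihuLight, which is why a figure of merit sensitive to the sequential fraction matters at this scale.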

  15. The ETA10 supercomputer system

    International Nuclear Information System (INIS)

    Swanson, C.D.

    1987-01-01

    The ETA Systems, Inc. ETA 10 is a next-generation supercomputer featuring multiprocessing, a large hierarchical memory system, high performance input/output, and network support for both batch and interactive processing. Advanced technology used in the ETA 10 includes liquid nitrogen cooled CMOS logic with 20,000 gates per chip, a single printed circuit board for each CPU, and high density static and dynamic MOS memory chips. Software for the ETA 10 includes an underlying kernel that supports multiple user environments, a new ETA FORTRAN compiler with an advanced automatic vectorizer, a multitasking library and debugging tools. Possible developments for future supercomputers from ETA Systems are discussed. (orig.)

  16. 33 CFR 334.1170 - San Pablo Bay, Calif.; gunnery range, Naval Inshore Operations Training Center, Mare Island...

    Science.gov (United States)

    2010-07-01

    ... range, Naval Inshore Operations Training Center, Mare Island, Vallejo. 334.1170 Section 334.1170... Operations Training Center, Mare Island, Vallejo. (a) The Danger Zone. A sector in San Pablo Bay delineated..., Vallejo, California, will conduct gunnery practice in the area during the period April 1 through September...

  17. Supercomputers to transform Science

    CERN Multimedia

    2006-01-01

    "New insights into the structure of space and time, climate modeling, and the design of novel drugs, are but a few of the many research areas that will be transforned by the installation of three supercomputers at the Unversity of Bristol." (1/2 page)

  18. Convex unwraps its first grown-up supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Manuel, T.

    1988-03-03

    Convex Computer Corp.'s new supercomputer family is even more of an industry blockbuster than its first system. At a tenfold jump in performance, it's far from just an incremental upgrade over its first minisupercomputer, the C-1. The heart of the new family, the new C-2 processor, churning at 50 million floating-point operations/s, spawns a group of systems whose performance could pass for some fancy supercomputers, namely those of the Cray Research Inc. family. When added to the C-1, Convex's five new supercomputers create the C series, a six-member product group offering a performance range from 20 to 200 Mflops. They mark an important transition for Convex from a one-product high-tech startup to a multinational company with a wide-ranging product line. It's a tough transition but the Richardson, Texas, company seems to be doing it. The extended product line propels Convex into the upper end of the minisupercomputer class and nudges it into the low end of the big supercomputers. It positions Convex in an uncrowded segment of the market in the $500,000 to $1 million range offering 50 to 200 Mflops of performance. The company is making this move because the minisuper area, which it pioneered, quickly became crowded with new vendors, causing prices and gross margins to drop drastically.

  19. Effective Analysis of NGS Metagenomic Data with Ultra-Fast Clustering Algorithms (MICW - Metagenomics Informatics Challenges Workshop: 10K Genomes at a Time)

    Energy Technology Data Exchange (ETDEWEB)

    Li, Weizhong

    2011-10-12

    San Diego Supercomputer Center's Weizhong Li on "Effective Analysis of NGS Metagenomic Data with Ultra-fast Clustering Algorithms" at the Metagenomics Informatics Challenges Workshop held at the DOE JGI on October 12-13, 2011.

  20. Supercomputer debugging workshop 1991 proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.

    1991-01-01

    This report discusses the following topics on supercomputer debugging: distributed debugging; user interface to debugging tools and standards; debugging optimized codes; debugging parallel codes; and debugger performance and interface as analysis tools. (LSP)

  2. The ETA systems plans for supercomputers

    International Nuclear Information System (INIS)

    Swanson, C.D.

    1987-01-01

    The ETA 10, from ETA Systems, Inc., is a class VII supercomputer featuring multiprocessing, a large hierarchical memory system, high performance input/output, and network support for both batch and interactive processing. Advanced technology used in the ETA 10 includes liquid nitrogen cooled CMOS logic with 20,000 gates per chip, a single printed circuit board for each CPU, and high density static and dynamic MOS memory chips. Software for the ETA 10 includes an underlying kernel that supports multiple user environments, a new ETA FORTRAN compiler with an advanced automatic vectorizer, a multitasking library and debugging tools. Possible developments for future supercomputers from ETA Systems are discussed.

  3. PNNL supercomputer to become largest computing resource on the Grid

    CERN Multimedia

    2002-01-01

    Hewlett Packard announced that the US DOE Pacific Northwest National Laboratory will connect a 9.3-teraflop HP supercomputer to the DOE Science Grid. This will be the largest supercomputer attached to a computer grid anywhere in the world (1 page).

  4. San Francisco folio, California, Tamalpais, San Francisco, Concord, San Mateo, and Haywards quadrangles

    Science.gov (United States)

    Lawson, Andrew Cowper

    1914-01-01

    The five sheets of the San Francisco folio, the Tamalpais, San Francisco, Concord, San Mateo, and Haywards sheets, map a territory lying between latitude 37° 30' and 38° and longitude 122° and 122° 45'. Large parts of four of these sheets cover the waters of the Bay of San Francisco or of the adjacent Pacific Ocean. (See fig. 1.) Within the area mapped are the cities of San Francisco, Oakland, Berkeley, Alameda, San Rafael, and San Mateo, and many smaller towns and villages. These cities, which have a population aggregating about 750,000, together form the largest and most important center of commercial and industrial activity on the west coast of the United States. The natural advantages afforded by a great harbor, where the railways from the east meet the ships from all ports of the world, have determined the site of a flourishing cosmopolitan, commercial city on the shores of San Francisco Bay. The bay is encircled by hilly and mountainous country diversified by fertile valley lands and divides the territory mapped into two rather contrasted parts, the western part being again divided by the Golden Gate. It will therefore be convenient to sketch the geographic features under four headings: (1) the area east of San Francisco Bay; (2) the San Francisco Peninsula; (3) the Marin Peninsula; (4) San Francisco Bay. (See fig. 2.)

  5. Cooperative visualization and simulation in a supercomputer environment

    International Nuclear Information System (INIS)

    Ruehle, R.; Lang, U.; Wierse, A.

    1993-01-01

    The article takes a closer look at the requirements imposed by the idea of integrating all the components into a homogeneous software environment. To this end, several methods for the distribution of applications depending on problem type are discussed. The methods currently available at the University of Stuttgart Computer Center (RUS) for the distribution of applications are further explained. Finally, the aims and characteristics of a European-sponsored project called PAGEIN are explained, which fits perfectly into the line of developments at RUS. The aim of the project is to experiment with future cooperative working modes of aerospace scientists in a high-speed distributed supercomputing environment. Project results will have an impact on the development of real future scientific application environments. (orig./DG)

  6. Supercomputers Of The Future

    Science.gov (United States)

    Peterson, Victor L.; Kim, John; Holst, Terry L.; Deiwert, George S.; Cooper, David M.; Watson, Andrew B.; Bailey, F. Ron

    1992-01-01

    Report evaluates supercomputer needs of five key disciplines: turbulence physics, aerodynamics, aerothermodynamics, chemistry, and mathematical modeling of human vision. Predicts these fields will require computer speeds greater than 10^18 floating-point operations per second (FLOPS) and memory capacities greater than 10^15 words. Also, new parallel computer architectures and new structured numerical methods will make the necessary speed and capacity available.

  7. NASA Advanced Supercomputing Facility Expansion

    Science.gov (United States)

    Thigpen, William W.

    2017-01-01

    The NASA Advanced Supercomputing (NAS) Division enables advances in high-end computing technologies and in modeling and simulation methods to tackle some of the toughest science and engineering challenges facing NASA today. The name "NAS" has long been associated with leadership and innovation throughout the high-end computing (HEC) community. We play a significant role in shaping HEC standards and paradigms, and provide leadership in the areas of large-scale InfiniBand fabrics, Lustre open-source filesystems, and hyperwall technologies. We provide an integrated high-end computing environment to accelerate NASA missions and make revolutionary advances in science. Pleiades, a petaflop-scale supercomputer, is used by scientists throughout the U.S. to support NASA missions, and is ranked among the most powerful systems in the world. One of our key focus areas is in modeling and simulation to support NASA's real-world engineering applications and make fundamental advances in modeling and simulation methods.

  8. ATLAS Software Installation on Supercomputers

    CERN Document Server

    Undrus, Alexander; The ATLAS collaboration

    2018-01-01

    PowerPC and high performance computers (HPC) are important resources for computing in the ATLAS experiment. The future LHC data processing will require more resources than Grid computing, currently using approximately 100,000 cores at well over 100 sites, can provide. Supercomputers are extremely powerful as they use the resources of hundreds of thousands of CPUs joined together. However, their architectures have different instruction sets. ATLAS binary software distributions for x86 chipsets do not fit these architectures, as emulation of these chipsets results in a huge performance loss. This presentation describes the methodology of ATLAS software installation from source code on supercomputers. The installation procedure includes downloading the ATLAS code base as well as the source of about 50 external packages, such as ROOT and Geant4, followed by compilation, and rigorous unit and integration testing. The presentation reports the application of this procedure at Titan HPC and Summit PowerPC at Oak Ridge Computin...

  9. JINR supercomputer of the module type for event parallel analysis

    International Nuclear Information System (INIS)

    Kolpakov, I.F.; Senner, A.E.; Smirnov, V.A.

    1987-01-01

    A model of a supercomputer with 50 million operations per second is suggested. Its realization allows one to solve JINR data analysis problems for large spectrometers (in particular, for the DELPHI collaboration). The suggested modular supercomputer is based on commercially available 32-bit microprocessors with a processing rate of about 1 MFLOPS. The processors are combined by means of VME standard buses. A MicroVAX II host computer organizes the operation of the system. Data input and output are realized via the MicroVAX II peripherals. User software is based on FORTRAN-77. The supercomputer is connected to a JINR network port and all JINR users get access to the suggested system.

  10. Supercomputers and quantum field theory

    International Nuclear Information System (INIS)

    Creutz, M.

    1985-01-01

    A review is given of why recent simulations of lattice gauge theories have resulted in substantial demands from particle theorists for supercomputer time. These calculations have yielded first-principles results on non-perturbative aspects of the strong interactions. An algorithm for simulating dynamical quark fields is discussed. 14 refs

  11. Adventures in supercomputing: An innovative program for high school teachers

    Energy Technology Data Exchange (ETDEWEB)

    Oliver, C.E.; Hicks, H.R.; Summers, B.G. [Oak Ridge National Lab., TN (United States); Staten, D.G. [Wartburg Central High School, TN (United States)

    1994-12-31

    Within the realm of education, seldom does an innovative program become available with the potential to change an educator's teaching methodology. Adventures in Supercomputing (AiS), sponsored by the U.S. Department of Energy (DOE), is such a program. It is a program for high school teachers that changes the teacher paradigm from a teacher-directed approach to teaching to a student-centered approach. "A student-centered classroom offers better opportunities for development of internal motivation, planning skills, goal setting and perseverance than does the traditional teacher-directed mode". Not only is the process of teaching changed, but the cross-curricula integration within the AiS materials is remarkable. Written from a teacher's perspective, this paper will describe the AiS program and its effects on teachers and students, primarily at Wartburg Central High School, in Wartburg, Tennessee. The AiS program in Tennessee is sponsored by Oak Ridge National Laboratory (ORNL).

  12. Supercomputer applications in nuclear research

    International Nuclear Information System (INIS)

    Ishiguro, Misako

    1992-01-01

    The utilization of supercomputers at the Japan Atomic Energy Research Institute is reported. The fields of atomic energy research that use supercomputers frequently and the contents of their computations are outlined. The concept of vectorization is briefly explained, and nuclear fusion, nuclear reactor physics, the thermal-hydraulic safety of nuclear reactors, the parallelism inherent in atomic energy computations such as fluid dynamics, algorithms suited to vector processing, and the speedups achieved through vectorization are discussed. At present the Japan Atomic Energy Research Institute uses two FACOM VP 2600/10 systems and three M-780 systems. The contents of computation have changed from criticality computations around 1970, through the analysis of LOCA after the TMI accident, to nuclear fusion research, the design of new reactor types and reactor safety assessment at present. The method of using computers has also advanced from batch processing to time-sharing processing, from one-dimensional to three-dimensional computation, from steady, linear to unsteady, nonlinear computation, from experimental analysis to numerical simulation, and so on. (K.I.)
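
    As a concrete illustration of the vectorization discussed above, the sketch below contrasts a scalar loop with its vector form on a toy 1-D diffusion update (Python/NumPy standing in for vectorized FORTRAN; this is an illustration only, not a JAERI code).

        import numpy as np

        def diffuse_loop(u, alpha):
            """Scalar form: one element per loop iteration."""
            v = u.copy()
            for i in range(1, len(u) - 1):
                v[i] = u[i] + alpha * (u[i-1] - 2.0 * u[i] + u[i+1])
            return v

        def diffuse_vector(u, alpha):
            """Vector form: whole-array operations map well to vector pipelines."""
            v = u.copy()
            v[1:-1] = u[1:-1] + alpha * (u[:-2] - 2.0 * u[1:-1] + u[2:])
            return v

        u = np.random.rand(100_000)
        assert np.allclose(diffuse_loop(u, 0.1), diffuse_vector(u, 0.1))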

  13. Computational plasma physics and supercomputers

    International Nuclear Information System (INIS)

    Killeen, J.; McNamara, B.

    1984-09-01

    The Supercomputers of the 80's are introduced. They are 10 to 100 times more powerful than today's machines. The range of physics modeling in the fusion program is outlined. New machine architecture will influence particular codes, but parallel processing poses new coding difficulties. Increasing realism in simulations will require better numerics and more elaborate mathematics

  14. Mistral Supercomputer Job History Analysis

    OpenAIRE

    Zasadziński, Michał; Muntés-Mulero, Victor; Solé, Marc; Ludwig, Thomas

    2018-01-01

    In this technical report, we show insights and results of operational data analysis from the petascale supercomputer Mistral, which was ranked as the 42nd most powerful in the world as of January 2018. Data sources include hardware monitoring data, job scheduler history, topology, and hardware information. We explore job state sequences, spatial distribution, and electric power patterns.

  15. Interactive real-time nuclear plant simulations on a UNIX based supercomputer

    International Nuclear Information System (INIS)

    Behling, S.R.

    1990-01-01

    Interactive real-time nuclear plant simulations are critically important for training nuclear power plant engineers and operators. In addition, real-time simulations can be used to test the validity and timing of plant technical specifications and operational procedures. To accurately and confidently simulate a nuclear power plant transient in real time, sufficient computer resources must be available. Since some important transients cannot be simulated using preprogrammed responses or non-physical models, commonly used simulation techniques may not be adequate. However, the power of a supercomputer allows one to accurately calculate the behavior of nuclear power plants even during very complex transients. Many of these transients can be calculated in real time or faster on the fastest supercomputers. The concept of running interactive real-time nuclear power plant transients on a supercomputer has been tested. This paper describes the architecture of the simulation program, the techniques used to establish real-time synchronization, and other issues related to the use of supercomputers in a new and potentially very important area. (author)

  16. Porting Ordinary Applications to Blue Gene/Q Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Maheshwari, Ketan C.; Wozniak, Justin M.; Armstrong, Timothy; Katz, Daniel S.; Binkowski, T. Andrew; Zhong, Xiaoliang; Heinonen, Olle; Karpeyev, Dmitry; Wilde, Michael

    2015-08-31

    Efficiently porting ordinary applications to Blue Gene/Q supercomputers is a significant challenge. Codes are often originally developed without considering advanced architectures and related tool chains. Science needs frequently lead users to want to run large numbers of relatively small jobs (often called many-task computing, an ensemble, or a workflow), which can conflict with supercomputer configurations. In this paper, we discuss techniques developed to execute ordinary applications over leadership-class supercomputers. We use the high-performance Swift parallel scripting framework and build two workflow execution techniques: sub-jobs and main-wrap. The sub-jobs technique, built on top of the IBM Blue Gene/Q resource manager Cobalt's sub-block jobs, lets users submit multiple, independent, repeated smaller jobs within a single larger resource block. The main-wrap technique is a scheme that enables C/C++ programs to be defined as functions that are wrapped by a high-performance Swift wrapper and that are invoked as a Swift script. We discuss the needs, benefits, technicalities, and current limitations of these techniques. We further discuss the real-world science enabled by these techniques and the results obtained.
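
    The sub-jobs idea, bundling many small independent runs into a single large allocation, can be sketched generically as below. Plain Python workers stand in for the Swift/Cobalt machinery described above, and ./app is a hypothetical executable.

        from concurrent.futures import ProcessPoolExecutor
        import subprocess

        def run_task(case_id: int) -> int:
            # Hypothetical per-case application invocation.
            result = subprocess.run(["./app", f"--case={case_id}"])
            return result.returncode

        if __name__ == "__main__":
            cases = range(1000)  # an ensemble of many small jobs
            # One worker pool inside one resource allocation,
            # instead of 1000 separately scheduled jobs.
            with ProcessPoolExecutor(max_workers=64) as pool:
                codes = list(pool.map(run_task, cases))
            print(sum(1 for c in codes if c != 0), "tasks failed")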

  17. The TESS Science Processing Operations Center

    Science.gov (United States)

    Jenkins, Jon M.; Twicken, Joseph D.; McCauliff, Sean; Campbell, Jennifer; Sanderfer, Dwight; Lung, David; Mansouri-Samani, Masoud; Girouard, Forrest; Tenenbaum, Peter; Klaus, Todd

    2016-01-01

    The Transiting Exoplanet Survey Satellite (TESS) will conduct a search for Earth's closest cousins starting in early 2018 and is expected to discover approximately 1,000 small planets with R_p < 4 R_Earth and measure the masses of at least 50 of these small worlds. The Science Processing Operations Center (SPOC) is being developed at NASA Ames Research Center based on the Kepler science pipeline and will generate calibrated pixels and light curves on the NASA Advanced Supercomputing Division's Pleiades supercomputer. The SPOC will also search for periodic transit events and generate validation products for the transit-like features in the light curves. All TESS SPOC data products will be archived to the Mikulski Archive for Space Telescopes (MAST).

  18. Use of high performance networks and supercomputers for real-time flight simulation

    Science.gov (United States)

    Cleveland, Jeff I., II

    1993-01-01

    In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be consistent in processing time and be completed in as short a time as possible. These operations include simulation mathematical model computation and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to the Computer Automated Measurement and Control (CAMAC) technology which resulted in a factor of ten increase in the effective bandwidth and reduced latency of modules necessary for simulator communication. This technology extension is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC are completing the development of the use of supercomputers for mathematical model computation to support real-time flight simulation. This includes the development of a real-time operating system and development of specialized software and hardware for the simulator network. This paper describes the data acquisition technology and the development of supercomputing for flight simulation.

  19. Reactive flow simulations in complex geometries with high-performance supercomputing

    International Nuclear Information System (INIS)

    Rehm, W.; Gerndt, M.; Jahn, W.; Vogelsang, R.; Binninger, B.; Herrmann, M.; Olivier, H.; Weber, M.

    2000-01-01

    In this paper, we report on a modern field code cluster consisting of state-of-the-art reactive Navier-Stokes and reactive Euler solvers that has been developed on vector and parallel supercomputers at the Research Center Juelich. This field code cluster is used for hydrogen safety analyses of technical systems, for example, in the field of nuclear reactor safety and conventional hydrogen demonstration plants with fuel cells. Emphasis is put on the assessment of combustion loads, which could result from slow, fast or rapid flames, including transition from deflagration to detonation. As proof tests, the specialized tools have been validated on specific tasks, based on the comparison of experimental and numerical results, which are in reasonable agreement. (author)

  20. Extracting the Textual and Temporal Structure of Supercomputing Logs

    Energy Technology Data Exchange (ETDEWEB)

    Jain, S; Singh, I; Chandra, A; Zhang, Z; Bronevetsky, G

    2009-05-26

    Supercomputers are prone to frequent faults that adversely affect their performance, reliability and functionality. System logs collected on these systems are a valuable resource of information about their operational status and health. However, their massive size, complexity, and lack of standard format make it difficult to automatically extract information that can be used to improve system management. In this work we propose a novel method to succinctly represent the contents of supercomputing logs, by using textual clustering to automatically find the syntactic structures of log messages. This information is used to automatically classify messages into semantic groups via an online clustering algorithm. Further, we describe a methodology for using the temporal proximity between groups of log messages to identify correlated events in the system. We apply our proposed methods to two large, publicly available supercomputing logs and show that our technique features nearly perfect accuracy for online log-classification and extracts meaningful structural and temporal message patterns that can be used to improve the accuracy of other log analysis techniques.
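
    The syntactic-clustering idea can be illustrated in a few lines of Python: masking the variable fields of each message collapses messages emitted by the same logging statement onto one template. This is a simplified illustration, not the paper's actual algorithm.

        import re
        from collections import Counter

        def to_template(message: str) -> str:
            msg = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", message)  # hex addresses/IDs
            return re.sub(r"\d+", "<NUM>", msg)                # decimal fields

        logs = [
            "node 1742 temperature 81C exceeds threshold",
            "node 903 temperature 79C exceeds threshold",
            "link 0x3fa2 retrain started",
            "link 0x11b0 retrain started",
        ]
        # Messages from the same printf-style statement share one template.
        for template, count in Counter(map(to_template, logs)).most_common():
            print(count, template)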

  1. Introduction to Reconfigurable Supercomputing

    CERN Document Server

    Lanzagorta, Marco; Rosenberg, Robert

    2010-01-01

    This book covers technologies, applications, tools, languages, procedures, advantages, and disadvantages of reconfigurable supercomputing using Field Programmable Gate Arrays (FPGAs). The target audience is the community of users of High Performance Computers (HPC) who may benefit from porting their applications into a reconfigurable environment. As such, this book is intended to guide the HPC user through the many algorithmic considerations, hardware alternatives, usability issues, programming languages, and design tools that need to be understood before embarking on the creation of reconfigur...

  2. Interview with Jennie E. Rodríguez, Executive Director of the Mission Cultural Center for Latino Arts, San Francisco, CA, USA, August 15, 2001 Entretien avec Jennie E. Rodríguez, directrice, Mission Cultural Center for Latino Arts, San Francisco, CA, États-Unis

    Directory of Open Access Journals (Sweden)

    Gérard Selbach

    2009-10-01

    Full Text Available Foreword: The Mission Cultural Center for Latino Arts (MCCLA) is located at 2868 Mission Street in San Francisco, in a district mainly inhabited by Hispanics and well known for its numerous murals. The Center was founded in 1977 by artists and community activists who shared “the vision to promote, preserve and develop the Latino cultural arts that reflect the living tradition and experiences of Chicano, Central and South American, and Caribbean people.” August 2001 was as busy at the Center as a...

  3. Fitting the datum of SANS with Pxy program

    International Nuclear Information System (INIS)

    Sun, Liangwei; Peng, Mei; Chen, Liang

    2009-04-01

    The thesis introduces the basic theory of small-angle neutron scattering and enumerates several approximate laws. It briefly describes the components of a small-angle neutron spectrometer (SANS) and the parameters of the SANS instrument at the Budapest Neutron Center (BNC) in Hungary. During a period of study at the Budapest Neutron Center, wavelength calibration experiments were carried out with SIBE, along with SANS experiments on micelle samples. The experiments are briefly introduced. The Pxy program is used to fit these data, and the results for the wavelength and the sizes of the micelle samples are presented. (authors)
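
    As an illustration of the kind of fitting involved (not the Pxy program itself), the sketch below fits synthetic small-angle data to the Guinier law I(q) = I0*exp(-(q*Rg)^2/3), which yields the radius of gyration Rg of particles such as micelles.

        import numpy as np
        from scipy.optimize import curve_fit

        def guinier(q, i0, rg):
            """Guinier approximation, valid at small q."""
            return i0 * np.exp(-(q * rg) ** 2 / 3.0)

        q = np.linspace(0.005, 0.05, 50)  # scattering vector, 1/Angstrom
        # Synthetic "measured" intensities with 2% multiplicative noise:
        i_meas = guinier(q, 100.0, 30.0) * np.random.normal(1.0, 0.02, q.size)

        popt, pcov = curve_fit(guinier, q, i_meas, p0=[50.0, 10.0])
        print(f"I0 = {popt[0]:.1f}, Rg = {popt[1]:.1f} Angstrom")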

  4. SUPERCOMPUTERS FOR AIDING ECONOMIC PROCESSES WITH REFERENCE TO THE FINANCIAL SECTOR

    Directory of Open Access Journals (Sweden)

    Jerzy Balicki

    2014-12-01

    Full Text Available The article discusses the use of supercomputers to support business processes, with particular emphasis on the financial sector. Reference is made to selected projects that support economic development. In particular, we propose the use of supercomputers to perform artificial intelligence methods in banking. The proposed methods, combined with modern technology, enable a significant increase in the competitiveness of enterprises and banks by adding new functionality.

  5. Holistic Approach to Data Center Energy Efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Steven W [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-18

    This presentation discusses NREL's Energy System Integrations Facility and NREL's holistic design approach to sustainable data centers that led to the world's most energy-efficient data center. It describes Peregrine, a warm water liquid cooled supercomputer, waste heat reuse in the data center, demonstrated PUE and ERE, and lessons learned during four years of operation.
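
    For reference, the two efficiency metrics named above are commonly defined as follows (The Green Grid definitions); the numbers in the example are illustrative, not NREL's reported values.

        def pue(total_facility_kwh: float, it_kwh: float) -> float:
            """Power Usage Effectiveness: 1.0 is ideal (all energy goes to IT)."""
            return total_facility_kwh / it_kwh

        def ere(total_facility_kwh: float, reused_kwh: float, it_kwh: float) -> float:
            """Energy Reuse Effectiveness: credits waste heat reused elsewhere."""
            return (total_facility_kwh - reused_kwh) / it_kwh

        # Liquid cooling keeps overhead low; reusing waste heat for building
        # heating pushes ERE below the PUE. (Illustrative values only.)
        print(pue(1060.0, 1000.0))         # ~1.06
        print(ere(1060.0, 200.0, 1000.0))  # ~0.86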

  6. Exploiting Thread Parallelism for Ocean Modeling on Cray XC Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sarje, Abhinav [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Jacobsen, Douglas W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Williams, Samuel W. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ringler, Todd [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Oliker, Leonid [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-05-01

    The incorporation of increasing core counts in modern processors used to build state-of-the-art supercomputers is driving application development towards exploitation of thread parallelism, in addition to distributed memory parallelism, with the goal of delivering efficient high-performance codes. In this work we describe the exploitation of threading and our experiences with it with respect to a real-world ocean modeling application code, MPAS-Ocean. We present detailed performance analysis and comparisons of various approaches and configurations for threading on the Cray XC series supercomputers.
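
    The hybrid pattern, distributed-memory ranks each running a pool of threads over their local share of the domain, can be sketched in Python with mpi4py (assumed installed). This illustrates the concept only; it is not the MPAS-Ocean code.

        from concurrent.futures import ThreadPoolExecutor
        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        local = np.random.rand(1_000_000)  # this rank's share of the mesh
        chunks = np.array_split(local, 4)  # work for 4 threads per rank

        with ThreadPoolExecutor(max_workers=4) as pool:  # thread parallelism
            partial = sum(pool.map(np.sum, chunks))

        total = comm.allreduce(partial, op=MPI.SUM)      # distributed reduction
        if rank == 0:
            print("global sum:", total)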

  7. Visualization environment of the large-scale data of JAEA's supercomputer system

    Energy Technology Data Exchange (ETDEWEB)

    Sakamoto, Kensaku [Japan Atomic Energy Agency, Center for Computational Science and e-Systems, Tokai, Ibaraki (Japan); Hoshi, Yoshiyuki [Research Organization for Information Science and Technology (RIST), Tokai, Ibaraki (Japan)

    2013-11-15

    In research and development across various fields of nuclear energy, visualization of calculated data is especially useful for understanding simulation results in an intuitive way. Many researchers who run simulations on the supercomputer at the Japan Atomic Energy Agency (JAEA) are accustomed to transferring calculated data files from the supercomputer to their local PCs for visualization. In recent years, as the size of calculated data has grown with improvements in supercomputer performance, reduction of visualization processing time as well as efficient use of the JAEA network is required. As a solution, we introduced a remote visualization system which can utilize parallel processors on the supercomputer and reduce the usage of network resources by transferring data from an intermediate stage of the visualization process. This paper reports a study on the performance of image processing with the remote visualization system. The visualization processing time is measured and the influence of network speed is evaluated by varying the drawing mode, the size of visualization data and the number of processors. Based on this study, a guideline for using the remote visualization system is provided to show how the system can be used effectively. An upgrade policy for the next system is also shown. (author)

  8. The Pawsey Supercomputer geothermal cooling project

    Science.gov (United States)

    Regenauer-Lieb, K.; Horowitz, F.; Western Australian Geothermal Centre Of Excellence, T.

    2010-12-01

    The Australian Government has funded the Pawsey supercomputer in Perth, Western Australia, providing computational infrastructure intended to support the future operations of the Australian Square Kilometre Array radiotelescope and to boost next-generation computational geosciences in Australia. Supplementary funds have been directed to the development of a geothermal exploration well to research the potential for direct heat use applications at the Pawsey Centre site. Cooling the Pawsey supercomputer may be achieved by geothermal heat exchange rather than by conventional electrical power cooling, thus reducing the carbon footprint of the Pawsey Centre and demonstrating an innovative green technology that is widely applicable in industry and urban centres across the world. The exploration well is scheduled to be completed in 2013, with drilling due to commence in the third quarter of 2011. One year is allocated to finalizing the design of the exploration, monitoring and research well. Success in the geothermal exploration and research program will result in an industrial-scale geothermal cooling facility at the Pawsey Centre, and will provide a world-class student training environment in geothermal energy systems. A similar system is partially funded and in advanced planning to provide base-load air-conditioning for the main campus of the University of Western Australia. Both systems are expected to draw ~80-95 degrees C water from aquifers lying between 2000 and 3000 meters depth from naturally permeable rocks of the Perth sedimentary basin. The geothermal water will be run through absorption chilling devices, which only require heat (as opposed to mechanical work) to power a chilled water stream adequate to meet the cooling requirements. Once the heat has been removed from the geothermal water, licensing issues require the water to be re-injected back into the aquifer system. These systems are intended to demonstrate the feasibility of powering large-scale air

  9. Multi-petascale highly efficient parallel supercomputer

    Science.gov (United States)

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen -Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Smith, Brian; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2015-07-14

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaOPS-scale computing, at decreased cost, power and footprint, that allows for a maximum packaging density of processing nodes from an interconnect point of view. The Supercomputer exploits technological advances in VLSI that enable a computing model where many processors can be integrated into a single Application Specific Integrated Circuit (ASIC). Each computing node comprises a system-on-chip ASIC utilizing four or more processors integrated into one die, each having full access to all system resources, enabling adaptive partitioning of the processors to functions such as compute or messaging I/O on an application-by-application basis and, preferably, adaptive partitioning of functions in accordance with various algorithmic phases within an application; if I/O or other processors are underutilized, they can participate in computation or communication. Nodes are interconnected by a five-dimensional torus network with DMA that optimally maximizes the throughput of packet communications between nodes and minimizes latency.

  10. Frequently updated noise threat maps created with use of supercomputing grid

    Directory of Open Access Journals (Sweden)

    Szczodrak Maciej

    2014-09-01

    Full Text Available Innovative supercomputing grid services devoted to noise threat evaluation are presented. The services described in this paper concern two issues: the first is related to noise mapping, while the second focuses on assessment of the noise dose and its influence on the human hearing system. The discussed services were developed within the PL-Grid Plus Infrastructure, which federates Polish academic supercomputer centers. Selected experimental results achieved through the use of the proposed services are presented. The assessment of environmental noise threats includes the creation of noise maps using either offline or online data acquired through a grid of monitoring stations. A concept for estimating source model parameters from measured sound levels, for the purpose of creating frequently updated noise maps, is presented. Connecting the noise mapping grid service with a distributed sensor network makes it possible to automatically update noise maps for a specified time period. Moreover, a unique attribute of the developed software is the estimation of the auditory effects evoked by exposure to noise. The estimation method uses a modified psychoacoustic model of hearing and is based on the calculated noise level values and the given exposure period. Potential use scenarios of the grid services for research or educational purposes are introduced. Presenting the predicted hearing threshold shift caused by exposure to excessive noise can raise public awareness of noise threats.
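
    A small sketch of the noise-dose arithmetic underlying such services: the equivalent continuous sound level Leq averages sound energy, not decibel values, over the exposure period. The sample values below are invented.

        import math

        def leq(levels_db):
            """Equivalent continuous level of equally spaced SPL samples, in dB."""
            mean_energy = sum(10 ** (l / 10.0) for l in levels_db) / len(levels_db)
            return 10.0 * math.log10(mean_energy)

        # One hour of minute-by-minute readings near a road (synthetic):
        samples = [68.0] * 50 + [85.0] * 10  # ten loud minutes dominate the dose
        print(f"Leq = {leq(samples):.1f} dB")  # well above the 68 dB background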

  11. FEASIBILITY STUDY OF ESTABLISHING AN ARTIFICIAL INSEMINATION (AI CENTER FOR CARABAOS IN SAN ILDEFONSO, BULACAN, PHILIPPINES

    Directory of Open Access Journals (Sweden)

    F.Q. Arrienda II

    2014-10-01

    Full Text Available The productivity of the carabao subsector is influenced by several constraints such as social, technical, economic and policy factors. The need to enhance the local production of carabaos will help local farmers to increase their income. Thus, producing thoroughbred carabaos and improving them genetically is the best response to these constraints. This study was conducted to present the feasibility of establishing an Artificial Insemination (AI) Center and its planned area of operation in Brgy. San Juan, San Ildefonso, Bulacan. The market, production, organizational and financial viability of operating the business were also evaluated. This particular study provides insights into establishing an AI Center. Included in this study is the identification of anticipated problems that could affect the business and the recommendation of specific courses of action to counteract these possible problems. Primary data were obtained through interviews with key informants from the Philippine Carabao Center (PCC). To gain insights about the present status of an AI Center, interviews with the technicians of the PCC and a private farm were done to get additional information. Secondary data were acquired from various literature and from the San Ildefonso Municipal Office. The proposed area would be 1,500 square meters, allotted for the laboratory and bullpen. The AI Center will operate six days a week and will be open from 8 AM until 5 PM. However, customers or farmers can call the technicians beyond office hours in case of emergency. A total initial investment of Php 3,825,417.39 is needed to establish the AI Center. The whole amount will be sourced from the owner's equity. Financial projection showed an IRR of 30% with a computed NPV of Php 2,415,597.00 and a payback period of 3.97 years. Based on all the market, technical, organizational and financial factors, projections and data analysis, this business endeavor is viable and feasible.
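
    To make the quoted financial metrics concrete, the sketch below computes NPV, IRR and the payback period for a cash-flow stream; only the initial investment figure is taken from the study, and the yearly inflows are hypothetical.

        def npv(rate, cashflows):
            """Net present value; cashflows[0] is the (negative) initial outlay."""
            return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

        def irr(cashflows, lo=0.0, hi=10.0, tol=1e-7):
            """Internal rate of return via bisection: the rate where NPV == 0."""
            while hi - lo > tol:
                mid = (lo + hi) / 2.0
                if npv(mid, cashflows) > 0.0:
                    lo = mid
                else:
                    hi = mid
            return (lo + hi) / 2.0

        def payback_years(cashflows):
            """Years until cumulative (undiscounted) cash flow turns positive."""
            cum = cashflows[0]
            for year, cf in enumerate(cashflows[1:], start=1):
                if cum + cf >= 0.0:
                    return year - 1 + (-cum) / cf
                cum += cf
            return float("inf")

        flows = [-3_825_417.39] + [1_450_000.0] * 8  # hypothetical annual inflows
        print(f"NPV at 12%: {npv(0.12, flows):,.0f} Php")
        print(f"IRR: {irr(flows):.1%}")
        print(f"Payback: {payback_years(flows):.2f} years")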

  12. QCD on the BlueGene/L Supercomputer

    International Nuclear Information System (INIS)

    Bhanot, G.; Chen, D.; Gara, A.; Sexton, J.; Vranas, P.

    2005-01-01

    In June 2004 QCD was simulated for the first time at sustained speed exceeding 1 TeraFlops in the BlueGene/L supercomputer at the IBM T.J. Watson Research Lab. The implementation and performance of QCD in the BlueGene/L is presented

  13. QCD on the BlueGene/L Supercomputer

    Science.gov (United States)

    Bhanot, G.; Chen, D.; Gara, A.; Sexton, J.; Vranas, P.

    2005-03-01

    In June 2004 QCD was simulated for the first time at sustained speed exceeding 1 TeraFlops in the BlueGene/L supercomputer at the IBM T.J. Watson Research Lab. The implementation and performance of QCD in the BlueGene/L is presented.

  14. Development of seismic tomography software for hybrid supercomputers

    Science.gov (United States)

    Nikitin, Alexandr; Serdyukov, Alexandr; Duchkov, Anton

    2015-04-01

    Seismic tomography is a technique used for computing velocity model of geologic structure from first arrival travel times of seismic waves. The technique is used in processing of regional and global seismic data, in seismic exploration for prospecting and exploration of mineral and hydrocarbon deposits, and in seismic engineering for monitoring the condition of engineering structures and the surrounding host medium. As a consequence of development of seismic monitoring systems and increasing volume of seismic data, there is a growing need for new, more effective computational algorithms for use in seismic tomography applications with improved performance, accuracy and resolution. To achieve this goal, it is necessary to use modern high performance computing systems, such as supercomputers with hybrid architecture that use not only CPUs, but also accelerators and co-processors for computation. The goal of this research is the development of parallel seismic tomography algorithms and software package for such systems, to be used in processing of large volumes of seismic data (hundreds of gigabytes and more). These algorithms and software package will be optimized for the most common computing devices used in modern hybrid supercomputers, such as Intel Xeon CPUs, NVIDIA Tesla accelerators and Intel Xeon Phi co-processors. In this work, the following general scheme of seismic tomography is utilized. Using the eikonal equation solver, arrival times of seismic waves are computed based on assumed velocity model of geologic structure being analyzed. In order to solve the linearized inverse problem, tomographic matrix is computed that connects model adjustments with travel time residuals, and the resulting system of linear equations is regularized and solved to adjust the model. The effectiveness of parallel implementations of existing algorithms on target architectures is considered. During the first stage of this work, algorithms were developed for execution on
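
    The linearized inversion step of this general scheme can be sketched compactly: travel-time residuals r relate to slowness adjustments dm through a sparse tomographic matrix G (ray length per cell), and the damped least-squares system is solved, here with SciPy's LSQR on synthetic data.

        import numpy as np
        from scipy.sparse import random as sparse_random
        from scipy.sparse.linalg import lsqr

        rng = np.random.default_rng(0)
        n_rays, n_cells = 500, 200

        G = sparse_random(n_rays, n_cells, density=0.05, random_state=0)
        dm_true = rng.normal(0.0, 1e-3, n_cells)          # slowness perturbation
        r = G @ dm_true + rng.normal(0.0, 1e-5, n_rays)   # noisy residuals

        dm_est = lsqr(G, r, damp=1e-4)[0]                 # regularized solve
        err = np.linalg.norm(dm_est - dm_true) / np.linalg.norm(dm_true)
        print("relative model error:", err)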

  15. The new library building at the University of Texas Health Science Center at San Antonio.

    Science.gov (United States)

    Kronick, D A; Bowden, V M; Olivier, E R

    1985-04-01

    The new University of Texas Health Science Center at San Antonio Library opened in June 1983, replacing the 1968 library building. Planning a new library building provides an opportunity for the staff to rethink their philosophy of service. Of paramount concern and importance is the need to convey this philosophy to the architects. This paper describes the planning process and the building's external features, interior layouts, and accommodations for technology. Details of the move to the building are considered and various aspects of the building are reviewed.

  16. Proceedings of the first energy research power supercomputer users symposium

    International Nuclear Information System (INIS)

    1991-01-01

    The Energy Research Power Supercomputer Users Symposium was arranged to showcase the richness of science that has been pursued and accomplished in this program through the use of supercomputers, and now high-performance parallel computers, over the last year; this report is the collection of the presentations given at the Symposium. "Power users" were invited by the ER Supercomputer Access Committee to show that the use of these computational tools and the associated data communications network, ESNet, goes beyond merely speeding up computations. Today the work often directly contributes to the advancement of conceptual developments in their fields, and the computational and network resources form the very infrastructure of today's science. The Symposium also provided an opportunity, rare in this day of network access to computing resources, for the invited users to compare and discuss their techniques and approaches with those used in other ER disciplines. The significance of new parallel architectures was highlighted by the interesting evening talk given by Dr. Stephen Orszag of Princeton University.

  17. Graphics supercomputer for computational fluid dynamics research

    Science.gov (United States)

    Liaw, Goang S.

    1994-11-01

    The objective of this project is to purchase a state-of-the-art graphics supercomputer to improve the Computational Fluid Dynamics (CFD) research capability at Alabama A & M University (AAMU) and to support Air Force research projects. A cutting-edge graphics supercomputer system, Onyx VTX, from Silicon Graphics Computer Systems (SGI), was purchased and installed. Other equipment, including a desktop personal computer, a PC-486 DX2 with a built-in 10-BaseT Ethernet card, a 10-BaseT hub, an Apple Laser Printer Select 360, and a notebook computer from Zenith, was also purchased. A reading room was converted to a research computer lab by adding furniture and an air conditioning unit in order to provide an appropriate working environment for researchers and the purchased equipment. All the purchased equipment was successfully installed and is fully functional. Several research projects, including two existing Air Force projects, are being performed using these facilities.

  18. High Performance Computing in Science and Engineering '02: Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2003-01-01

    This book presents the state of the art in modeling and simulation on supercomputers. Leading German research groups present their results achieved on high-end systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2002. Reports cover all fields of supercomputing simulation, ranging from computational fluid dynamics to computer science. Special emphasis is given to industrially relevant applications. Moreover, by presenting results for both vector systems and microprocessor-based systems, the book allows one to compare the performance levels and usability of a variety of supercomputer architectures. It therefore becomes an indispensable guidebook for assessing the impact of the Japanese Earth Simulator project on supercomputing in the years to come.

  19. A visual analytics system for optimizing the performance of large-scale networks in supercomputing systems

    Directory of Open Access Journals (Sweden)

    Takanori Fujiwara

    2018-03-01

    Full Text Available The overall efficiency of an extreme-scale supercomputer largely relies on the performance of its network interconnects. Several state-of-the-art supercomputers use networks based on the increasingly popular Dragonfly topology. It is crucial to study the behavior and performance of different parallel applications running on Dragonfly networks in order to make optimal system configurations and design choices, such as job scheduling and routing strategies. However, to study this temporal network behavior, we need a tool to analyze and correlate numerous sets of multivariate time-series data collected from the Dragonfly's multi-level hierarchies. This paper presents such a tool: a visual analytics system that investigates the temporal behavior of a Dragonfly network and helps optimize the communication performance of a supercomputer. We coupled interactive visualization with time-series analysis methods to help reveal hidden patterns in the network behavior with respect to different parallel applications and system configurations. Our system also provides multiple coordinated views for connecting behaviors observed at different levels of the network hierarchies, which effectively helps visual analysis tasks. We demonstrate the effectiveness of the system with a set of case studies. Our system and findings can help improve not only the communication performance of supercomputing applications, but also the network performance of next-generation supercomputers. Keywords: Supercomputing, Parallel communication network, Dragonfly networks, Time-series data, Performance analysis, Visual analytics

  20. KfK seminar series on supercomputing and visualization from May till September 1992

    International Nuclear Information System (INIS)

    Hohenhinnebusch, W.

    1993-05-01

    During the period from May 1992 to September 1992 a series of seminars was held at KfK on several topics of supercomputing in different fields of application. The aim was to demonstrate the importance of supercomputing and visualization in numerical simulations of complex physical and technical phenomena. This report contains the collection of all submitted seminar papers. (orig./HP) [de]

  1. Computational plasma physics and supercomputers. Revision 1

    International Nuclear Information System (INIS)

    Killeen, J.; McNamara, B.

    1985-01-01

    The Supercomputers of the 80's are introduced. They are 10 to 100 times more powerful than today's machines. The range of physics modeling in the fusion program is outlined. New machine architecture will influence particular models, but parallel processing poses new programming difficulties. Increasing realism in simulations will require better numerics and more elaborate mathematical models

  2. Application of Supercomputer Technologies for Simulation Of Socio-Economic Systems

    Directory of Open Access Journals (Sweden)

    Vladimir Valentinovich Okrepilov

    2015-06-01

    Full Text Available To date, extensive experience has been accumulated in the investigation of problems related to quality, the assessment of management systems, and the modeling of economic system sustainability. The studies performed have created a basis for the development of a new research area, the Economics of Quality. Its tools allow the use of model simulation to construct mathematical models that adequately reflect the role of quality in the natural, technical and social regularities of the functioning of complex socio-economic systems. Extensive application and development of models, together with system modeling using supercomputer technologies, will, in our deep belief, bring the research of socio-economic systems to an essentially new level. Moreover, the current scientific research makes a significant contribution to the model simulation of multi-agent social systems and, no less important, belongs to the priority areas in the development of science and technology in our country. This article is devoted to the application of supercomputer technologies in the social sciences, first of all regarding the technical realization of large-scale agent-focused models (AFM). The essence of this tool is that, owing to the increase in computer power, it has become possible to describe the behavior of many separate fragments of a difficult system, as socio-economic systems are. The article also deals with the experience of foreign scientists and practitioners in launching AFM on supercomputers, and analyzes an example of an AFM developed at CEMI RAS, including the stages and methods of efficiently mapping the computational kernel of a multi-agent system onto the architecture of a modern supercomputer. Experiments based on model simulation for forecasting the population of St. Petersburg according to three scenarios, as one of the major factors influencing the development of the socio-economic system and the quality of life of the population, are presented in the

  3. SANS studies of polymers

    International Nuclear Information System (INIS)

    Wignall, G.D.

    1984-10-01

    Before small-angle neutron scattering (SANS), chain conformation studies were limited to light and small-angle x-ray scattering techniques, usually in dilute solution. SANS from blends of normal and labeled molecules could give direct information on chain conformation in bulk polymers. Water-soluble polymers may be examined in H2O/D2O mixtures using contrast variation methods to provide further information on polymer structure. This paper reviews some of the information provided by this technique using examples of experiments performed at the National Center for Small-Angle Scattering Research (NCSASR)

  4. Extending ATLAS Computing to Commercial Clouds and Supercomputers

    CERN Document Server

    Nilsson, P; The ATLAS collaboration; Filipcic, A; Klimentov, A; Maeno, T; Oleynik, D; Panitkin, S; Wenaus, T; Wu, W

    2014-01-01

    The Large Hadron Collider will resume data collection in 2015 with substantially increased computing requirements relative to its first 2009-2013 run. A near doubling of the energy and the data rate, a high level of event pile-up, and detector upgrades will mean that the number and complexity of events to be analyzed will increase dramatically. A naive extrapolation of the Run 1 experience would suggest that a 5-6 fold increase in computing resources is needed, which is impossible within the anticipated flat computing budgets in the near future. Consequently ATLAS is engaged in an ambitious program to expand its computing to all available resources, notably including opportunistic use of commercial clouds and supercomputers. Such resources present new challenges in managing heterogeneity, supporting data flows, parallelizing workflows, provisioning software, and other aspects of distributed computing, all while minimizing operational load. We will present the ATLAS experience to date with clouds and supercomputers, and des...

  5. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2013-01-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.

  6. Performance modeling of hybrid MPI/OpenMP scientific applications on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2013-12-01

    In this paper, we present a performance modeling framework based on memory bandwidth contention time and a parameterized communication model to predict the performance of OpenMP, MPI and hybrid applications with weak scaling on three large-scale multicore supercomputers: IBM POWER4, POWER5+ and BlueGene/P, and analyze the performance of these MPI, OpenMP and hybrid applications. We use STREAM memory benchmarks and Intel's MPI benchmarks to provide initial performance analysis and model validation of MPI and OpenMP applications on these multicore supercomputers because the measured sustained memory bandwidth can provide insight into the memory bandwidth that a system should sustain on scientific applications with the same amount of workload per core. In addition to using these benchmarks, we also use a weak-scaling hybrid MPI/OpenMP large-scale scientific application: Gyrokinetic Toroidal Code (GTC) in magnetic fusion to validate our performance model of the hybrid application on these multicore supercomputers. The validation results for our performance modeling method show less than 7.77% error rate in predicting the performance of hybrid MPI/OpenMP GTC on up to 512 cores on these multicore supercomputers. © 2013 Elsevier Inc.
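
    The kind of sustained-bandwidth measurement this framework takes as input can be approximated even from Python with a STREAM-style "add" kernel, as sketched below; it times NumPy rather than the vendor STREAM binary, so the figure is indicative only.

        import time
        import numpy as np

        n = 50_000_000  # three ~400 MB arrays, large enough to defeat caches
        a = np.empty(n); b = np.random.rand(n); c = np.random.rand(n)

        best = float("inf")
        for _ in range(5):               # best-of-5 repetitions, as STREAM does
            t0 = time.perf_counter()
            np.add(b, c, out=a)          # add kernel: a = b + c
            best = min(best, time.perf_counter() - t0)

        bytes_moved = 3 * n * 8          # read b, read c, write a (8-byte floats)
        print(f"sustained memory bandwidth ~ {bytes_moved / best / 1e9:.1f} GB/s")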

  7. Guide to dataflow supercomputing basic concepts, case studies, and a detailed example

    CERN Document Server

    Milutinovic, Veljko; Trifunovic, Nemanja; Giorgi, Roberto

    2015-01-01

    This unique text/reference describes an exciting and novel approach to supercomputing in the DataFlow paradigm. The major advantages and applications of this approach are clearly described, and a detailed explanation of the programming model is provided using simple yet effective examples. The work is developed from a series of lecture courses taught by the authors in more than 40 universities across more than 20 countries, and from research carried out by Maxeler Technologies, Inc. Topics and features: presents a thorough introduction to DataFlow supercomputing for big data problems; revie

  8. Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.

    Energy Technology Data Exchange (ETDEWEB)

    Younge, Andrew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pedretti, Kevin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grant, Ryan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brightwell, Ron [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-05-01

    While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community not only as a processing component to large HPC simulations, but also as standalone scientific tools for knowledge discovery. With the path towards Exascale, new HPC runtime systems are also emerging in a way that differs from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputers needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on an XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, evaluating both single-node performance and weak scaling of a 32-node virtual cluster. Overall, we find that single-node performance of our solution using KVM on a Cray is very efficient, with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.
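
    For illustration, booting a guest on a compute node through libvirt's Python bindings might look like the following sketch; the domain XML, image path, and sizes are placeholders, not the milestone's actual configuration:

        import libvirt

        # Placeholder domain definition; the disk image path, memory, and vCPU
        # counts below are illustrative assumptions.
        DOMAIN_XML = """
        <domain type='kvm'>
          <name>vc-node0</name>
          <memory unit='GiB'>4</memory>
          <vcpu>4</vcpu>
          <os><type arch='x86_64'>hvm</type></os>
          <devices>
            <disk type='file' device='disk'>
              <source file='/images/vc-node0.qcow2'/>
              <target dev='vda' bus='virtio'/>
            </disk>
          </devices>
        </domain>"""

        conn = libvirt.open("qemu:///system")   # connect to the node-local hypervisor
        dom = conn.createXML(DOMAIN_XML, 0)     # create and boot a transient guest
        print(dom.name(), "active:", dom.isActive())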

  9. Cellular-automata supercomputers for fluid-dynamics modeling

    International Nuclear Information System (INIS)

    Margolus, N.; Toffoli, T.; Vichniac, G.

    1986-01-01

    We report recent developments in the modeling of fluid dynamics, and give experimental results (including dynamical exponents) obtained using cellular automata machines. Because of their locality and uniformity, cellular automata lend themselves to an extremely efficient physical realization; with a suitable architecture, an amount of hardware resources comparable to that of a home computer can achieve (in the simulation of cellular automata) the performance of a conventional supercomputer
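
    As a toy illustration of the locality and uniformity being exploited, here is one update step of the classic HPP lattice gas (a stand-in rule set, not necessarily the one used on the machines described): head-on particle pairs scatter into the perpendicular pair, and everything else streams one cell per step.

        import numpy as np

        rng = np.random.default_rng(0)
        E, W, N, S = 0, 1, 2, 3                       # the four HPP directions
        n = rng.random((4, 64, 64)) < 0.3             # boolean occupation numbers

        for _ in range(100):
            # Collision: head-on pairs (E,W) or (N,S) scatter into the other pair.
            ew = n[E] & n[W] & ~n[N] & ~n[S]
            ns = n[N] & n[S] & ~n[E] & ~n[W]
            n[E], n[W] = (n[E] & ~ew) | ns, (n[W] & ~ew) | ns
            n[N], n[S] = (n[N] & ~ns) | ew, (n[S] & ~ns) | ew
            # Streaming: every particle hops one cell along its direction (periodic).
            n[E] = np.roll(n[E], 1, axis=1)
            n[W] = np.roll(n[W], -1, axis=1)
            n[N] = np.roll(n[N], -1, axis=0)
            n[S] = np.roll(n[S], 1, axis=0)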

  10. 75 FR 42014 - Proposed Amendment of Class E Airspace; San Clemente, CA

    Science.gov (United States)

    2010-07-20

    ...: Eldon Taylor, Federal Aviation Administration, Operations Support Group, Western Service Center, 1601... an extension to a Class D surface area, at San Clemente Island NALF (Fredrick Sherman Field), San... Clemente Island NALF (Fredrick Sherman Field), CA (Lat. 33[deg]01'22'' N., long. 118[deg]35'19'' W.) San...

  11. The TeraGyroid Experiment – Supercomputing 2003

    Directory of Open Access Journals (Sweden)

    R.J. Blake

    2005-01-01

    Full Text Available Amphiphiles are molecules with hydrophobic tails and hydrophilic heads. When dispersed in solvents, they self assemble into complex mesophases including the beautiful cubic gyroid phase. The goal of the TeraGyroid experiment was to study defect pathways and dynamics in these gyroids. The UK's supercomputing and USA's TeraGrid facilities were coupled together, through a dedicated high-speed network, into a single computational Grid for research work that peaked around the Supercomputing 2003 conference. The gyroids were modeled using lattice Boltzmann methods with parameter spaces explored using many 1283 and 3grid point simulations, this data being used to inform the world's largest three-dimensional time dependent simulation with 10243-grid points. The experiment generated some 2 TBytes of useful data. In terms of Grid technology the project demonstrated the migration of simulations (using Globus middleware to and fro across the Atlantic exploiting the availability of resources. Integration of the systems accelerated the time to insight. Distributed visualisation of the output datasets enabled the parameter space of the interactions within the complex fluid to be explored from a number of sites, informed by discourse over the Access Grid. The project was sponsored by EPSRC (UK and NSF (USA with trans-Atlantic optical bandwidth provided by British Telecommunications.

  12. Analyzing the Interplay of Failures and Workload on a Leadership-Class Supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Meneses, Esteban [University of Pittsburgh; Ni, Xiang [University of Illinois at Urbana-Champaign; Jones, Terry R [ORNL; Maxwell, Don E [ORNL

    2015-01-01

    The unprecedented computational power of current supercomputers now makes possible the exploration of complex problems in many scientific fields, from genomic analysis to computational fluid dynamics. Modern machines are powerful because they are massive: they assemble millions of cores and a huge quantity of disks, cards, routers, and other components. But it is precisely the size of these machines that clouds the future of supercomputing. A system that comprises many components has a high chance to fail, and fail often. In order to make the next generation of supercomputers usable, it is imperative to use some type of fault tolerance platform to run applications on large machines. Most fault tolerance strategies can be optimized for the peculiarities of each system and boost efficacy by keeping the system productive. In this paper, we aim to understand how failure characterization can improve resilience in several layers of the software stack: applications, runtime systems, and job schedulers. We examine the Titan supercomputer, one of the fastest systems in the world. We analyze a full year of Titan in production and distill the failure patterns of the machine. By looking into Titan's log files and using the criteria of experts, we provide a detailed description of the types of failures. In addition, we inspect the job submission files and describe how the system is used. Using those two sources, we cross-correlate failures in the machine to executing jobs and provide a picture of how failures affect the user experience. We believe such characterization is fundamental in developing appropriate fault tolerance solutions for Cray systems similar to Titan.
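
    A schematic of this kind of failure/workload cross-correlation on hypothetical parsed logs (the DataFrames, column names, and values below are invented for illustration):

        import pandas as pd

        # Hypothetical parsed logs: one row per failure, one row per job.
        failures = pd.DataFrame({
            "time": pd.to_datetime(["2014-01-02 03:00", "2014-01-05 17:30"]),
            "node": ["c1-0n3", "c7-2n1"]})
        jobs = pd.DataFrame({
            "start": pd.to_datetime(["2014-01-02 01:00", "2014-01-05 12:00"]),
            "end":   pd.to_datetime(["2014-01-02 09:00", "2014-01-05 13:00"]),
            "job_id": [101, 102]})

        # System-wide mean time between failures.
        mtbf = failures["time"].sort_values().diff().mean()

        # Cross-correlate: which jobs were running when each failure occurred?
        hits = [jobs[(jobs["start"] <= t) & (jobs["end"] >= t)]["job_id"].tolist()
                for t in failures["time"]]
        print(mtbf, hits)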

  13. Wavelet transform-vector quantization compression of supercomputer ocean model simulation output

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J N; Brislawn, C M

    1992-11-12

    We describe a new procedure for efficient compression of digital information for storage and transmission purposes. The algorithm involves a discrete wavelet transform subband decomposition of the data set, followed by vector quantization of the wavelet transform coefficients using application-specific vector quantizers. The new vector quantizer design procedure optimizes the assignment of both memory resources and vector dimensions to the transform subbands by minimizing an exponential rate-distortion functional subject to constraints on both overall bit-rate and encoder complexity. The wavelet-vector quantization method, which originates in digital image compression, is applicable to the compression of other multidimensional data sets possessing some degree of smoothness. In this paper we discuss the use of this technique for compressing the output of supercomputer simulations of global climate models. The data presented here comes from Semtner-Chervin global ocean models run at the National Center for Atmospheric Research and at the Los Alamos Advanced Computing Laboratory.
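
    A compact sketch of the two-stage pipeline, using PyWavelets for the subband decomposition and SciPy's k-means as a stand-in for the paper's custom codebook design; the subband choice, vector dimension, and codebook size are arbitrary here:

        import numpy as np
        import pywt
        from scipy.cluster.vq import kmeans2, vq

        field = np.random.default_rng(0).normal(size=(256, 256))  # stand-in for model output

        # Stage 1: discrete wavelet transform subband decomposition.
        coeffs = pywt.wavedec2(field, "db4", level=3)
        band = coeffs[1][0]                       # one detail subband

        # Stage 2: vector-quantize the subband with length-4 vectors.
        dim, k = 4, 64
        flat = band.ravel()
        vectors = flat[: flat.size // dim * dim].reshape(-1, dim)
        codebook, _ = kmeans2(vectors, k, minit="++")
        indices, _ = vq(vectors, codebook)        # encoder output: one index per vector
        reconstructed = codebook[indices]         # decoder side
        print(indices.nbytes, "bytes of indices vs", vectors.nbytes, "bytes of coefficients")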

  14. NASA Center for Climate Simulation (NCCS) Presentation

    Science.gov (United States)

    Webster, William P.

    2012-01-01

    The NASA Center for Climate Simulation (NCCS) offers integrated supercomputing, visualization, and data interaction technologies to enhance NASA's weather and climate prediction capabilities. It serves hundreds of users at NASA Goddard Space Flight Center, as well as other NASA centers, laboratories, and universities across the US. Over the past year, NCCS has continued expanding its data-centric computing environment to meet the increasingly data-intensive challenges of climate science. We doubled our Discover supercomputer's peak performance to more than 800 teraflops by adding 7,680 Intel Xeon Sandy Bridge processor-cores and most recently 240 Intel Xeon Phi Many Integrated Core (MIC) co-processors. A supercomputing-class analysis system named Dali gives users rapid access to their data on Discover and high-performance software including the Ultra-scale Visualization Climate Data Analysis Tools (UV-CDAT), with interfaces from user desktops and a 17- by 6-foot visualization wall. NCCS also is exploring highly efficient climate data services and management with a new MapReduce/Hadoop cluster while augmenting its data distribution to the science community. Using NCCS resources, NASA completed its modeling contributions to the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report this summer as part of the ongoing Coupled Model Intercomparison Project Phase 5 (CMIP5). Ensembles of simulations run on Discover reached back to the year 1000 to test model accuracy and projected climate change through the year 2300 based on four different scenarios of greenhouse gases, aerosols, and land use. The data resulting from several thousand IPCC/CMIP5 simulations, as well as a variety of other simulation, reanalysis, and observation datasets, are available to scientists and decision makers through an enhanced NCCS Earth System Grid Federation Gateway. Worldwide downloads have totaled over 110 terabytes of data.

  15. Supercomputer and cluster performance modeling and analysis efforts:2004-2006.

    Energy Technology Data Exchange (ETDEWEB)

    Sturtevant, Judith E.; Ganti, Anand; Meyer, Harold (Hal) Edward; Stevenson, Joel O.; Benner, Robert E., Jr. (.,; .); Goudy, Susan Phelps; Doerfler, Douglas W.; Domino, Stefan Paul; Taylor, Mark A.; Malins, Robert Joseph; Scott, Ryan T.; Barnette, Daniel Wayne; Rajan, Mahesh; Ang, James Alfred; Black, Amalia Rebecca; Laub, Thomas William; Vaughan, Courtenay Thomas; Franke, Brian Claude

    2007-02-01

    This report describes efforts by the Performance Modeling and Analysis Team to investigate performance characteristics of Sandia's engineering and scientific applications on the ASC capability and advanced architecture supercomputers, and on Sandia's capacity Linux clusters. Efforts to model various aspects of these computers are also discussed. The goals of these efforts are to quantify and compare Sandia's supercomputer and cluster performance characteristics; to reveal strengths and weaknesses in such systems; and to predict performance characteristics of, and provide guidelines for, future acquisitions and follow-on systems. Described herein are the results obtained from running benchmarks and applications to extract performance characteristics and comparisons, as well as modeling efforts, during the time period 2004-2006. The format of the report, with hypertext links to numerous additional documents, purposefully minimizes the document size needed to disseminate the extensive results from our research.

  16. Ultrascalable petaflop parallel supercomputer

    Science.gov (United States)

    Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Chiu, George [Cross River, NY; Cipolla, Thomas M [Katonah, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Hall, Shawn [Pleasantville, NY; Haring, Rudolf A [Cortlandt Manor, NY; Heidelberger, Philip [Cortlandt Manor, NY; Kopcsay, Gerard V [Yorktown Heights, NY; Ohmacht, Martin [Yorktown Heights, NY; Salapura, Valentina [Chappaqua, NY; Sugavanam, Krishnan [Mahopac, NY; Takken, Todd [Brewster, NY

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.
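
    As a small illustration of why torus interconnects keep worst-case distances short, here is a hypothetical helper (not from the patent) that counts minimal hops between two nodes on a torus of arbitrary dimension:

        def torus_hops(a, b, dims):
            """Minimal hop count between coordinates a and b on a wrap-around torus."""
            return sum(min((x - y) % d, (y - x) % d)   # go either way around each ring
                       for x, y, d in zip(a, b, dims))

        # On an 8x8x8 torus, wrap-around shortens the longest axis hop.
        print(torus_hops((0, 0, 0), (7, 4, 1), (8, 8, 8)))  # 1 + 4 + 1 = 6 hops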

  17. Direct exploitation of a top 500 Supercomputer for Analysis of CMS Data

    International Nuclear Information System (INIS)

    Cabrillo, I; Cabellos, L; Marco, J; Fernandez, J; Gonzalez, I

    2014-01-01

    The Altamira Supercomputer hosted at the Instituto de Fisica de Cantabria (IFCA) entered operation in summer 2012. Its last-generation FDR Infiniband network, used for message passing in parallel jobs, also supports the connection to General Parallel File System (GPFS) servers, enabling efficient simultaneous processing of multiple data-demanding jobs. Sharing a common GPFS system and a single LDAP-based identification with the existing Grid clusters at IFCA allows CMS researchers to exploit the large instantaneous capacity of this supercomputer to execute analysis jobs. The detailed experience describing this opportunistic use for skimming and final analysis of CMS 2012 data for a specific physics channel, resulting in an order of magnitude reduction of the waiting time, is presented.

  18. Toward a Proof of Concept Cloud Framework for Physics Applications on Blue Gene Supercomputers

    International Nuclear Information System (INIS)

    Dreher, Patrick; Scullin, William; Vouk, Mladen

    2015-01-01

    Traditional high performance supercomputers are capable of delivering large sustained state-of-the-art computational resources to physics applications over extended periods of time using batch processing mode operating environments. However, today there is an increasing demand for more complex workflows that involve large fluctuations in the levels of HPC physics computational requirements during the simulations. Some of the workflow components may also require a richer set of operating system features and schedulers than normally found in a batch oriented HPC environment. This paper reports on progress toward a proof of concept design that implements a cloud framework onto BG/P and BG/Q platforms at the Argonne Leadership Computing Facility. The BG/P implementation utilizes the Kittyhawk utility and the BG/Q platform uses an experimental heterogeneous FusedOS operating system environment. Both platforms use the Virtual Computing Laboratory as the cloud computing system embedded within the supercomputer. This proof of concept design allows a cloud to be configured so that it can capitalize on the specialized infrastructure capabilities of a supercomputer and the flexible cloud configurations without resorting to virtualization. Initial testing of the proof of concept system is done using the lattice QCD MILC code. These types of user reconfigurable environments have the potential to deliver experimental schedulers and operating systems within a working HPC environment for physics computations that may be different from the native OS and schedulers on production HPC supercomputers. (paper)

  19. Plane-wave electronic structure calculations on a parallel supercomputer

    International Nuclear Information System (INIS)

    Nelson, J.S.; Plimpton, S.J.; Sears, M.P.

    1993-01-01

    The development of iterative solutions of Schrodinger's equation in a plane-wave (pw) basis over the last several years has coincided with great advances in the computational power available for performing the calculations. These dual developments have enabled many new and interesting condensed matter phenomena to be studied from a first-principles approach. The authors present a detailed description of the implementation on a parallel supercomputer (hypercube) of the first-order equation-of-motion solution to Schrodinger's equation, using plane-wave basis functions and ab initio separable pseudopotentials. By distributing the plane-waves across the processors of the hypercube many of the computations can be performed in parallel, resulting in decreases in the overall computation time relative to conventional vector supercomputers. This partitioning also provides ample memory for large Fast Fourier Transform (FFT) meshes and the storage of plane-wave coefficients for many hundreds of energy bands. The usefulness of the parallel techniques is demonstrated by benchmark timings for both the FFT's and iterations of the self-consistent solution of Schrodinger's equation for different sized Si unit cells of up to 512 atoms
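
    The partitioning idea can be sketched as follows: distribute the plane-wave coefficients across processors, let each compute a partial sum, and combine the results. This is a serial stand-in for the hypercube's global reduction; all data here are synthetic:

        import numpy as np

        rng = np.random.default_rng(0)
        n_pw, n_proc = 10_000, 8
        c = rng.normal(size=n_pw) + 1j * rng.normal(size=n_pw)  # plane-wave coefficients
        g2 = rng.random(n_pw) * 100.0                           # |G|^2 for each plane wave

        # Block-distribute coefficient indices across processors.
        blocks = np.array_split(np.arange(n_pw), n_proc)

        # Each processor computes its partial kinetic energy 0.5 * sum |c_G|^2 |G|^2 ...
        partial = [0.5 * np.sum(np.abs(c[idx]) ** 2 * g2[idx]) for idx in blocks]
        # ... and a global sum (an allreduce on the real machine) combines them.
        print(sum(partial), 0.5 * np.sum(np.abs(c) ** 2 * g2))  # matches the serial sum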

  20. Problem solving in nuclear engineering using supercomputers

    International Nuclear Information System (INIS)

    Schmidt, F.; Scheuermann, W.; Schatz, A.

    1987-01-01

    The availability of supercomputers enables the engineer to formulate new strategies for problem solving. One such strategy is the Integrated Planning and Simulation System (IPSS). With the integrated systems, simulation models with greater consistency and good agreement with actual plant data can be effectively realized. In the present work some of the basic ideas of IPSS are described as well as some of the conditions necessary to build such systems. Hardware and software characteristics as realized are outlined. (orig.) [de

  1. FPS scientific and supercomputers computers in chemistry

    International Nuclear Information System (INIS)

    Curington, I.J.

    1987-01-01

    FPS Array Processors, scientific computers, and highly parallel supercomputers are used in nearly all aspects of compute-intensive computational chemistry. A survey is made of work utilizing this equipment, both published and current research. The relationship of the computer architecture to computational chemistry is discussed, with specific reference to Molecular Dynamics, Quantum Monte Carlo simulations, and Molecular Graphics applications. Recent installations of the FPS T-Series are highlighted, and examples of Molecular Graphics programs running on the FPS-5000 are shown

  2. Visualizing quantum scattering on the CM-2 supercomputer

    International Nuclear Information System (INIS)

    Richardson, J.L.

    1991-01-01

    We implement parallel algorithms for solving the time-dependent Schroedinger equation on the CM-2 supercomputer. These methods are unconditionally stable as well as unitary at each time step and have the advantage of being spatially local and explicit. We show how to visualize the dynamics of quantum scattering using techniques for visualizing complex wave functions. Several scattering problems are solved to demonstrate the use of these methods. (orig.)
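
    One family of schemes with exactly these properties applies exact 2x2 local unitaries to alternating neighbor pairs; the sketch below is a generic checkerboard splitting with a toy tight-binding Hamiltonian, not necessarily the paper's scheme:

        import numpy as np
        from scipy.linalg import expm

        n, dt, hop = 64, 0.05, -1.0
        u2 = expm(-1j * dt * np.array([[0.0, hop], [hop, 0.0]]))  # exact 2x2 unitary

        x = np.arange(n)
        psi = np.exp(-0.05 * (x - n / 2) ** 2 + 0.5j * x).astype(complex)
        psi /= np.linalg.norm(psi)

        def half_step(psi, start):
            # Apply the local unitary to disjoint neighbor pairs (explicit and local).
            for i in range(start, n - 1, 2):
                psi[i:i + 2] = u2 @ psi[i:i + 2]

        for _ in range(200):
            half_step(psi, 0)   # even bonds
            half_step(psi, 1)   # odd bonds
        print(np.linalg.norm(psi))  # stays 1.0: every update is exactly unitary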

  3. Integration of Titan supercomputer at OLCF with ATLAS Production System

    CERN Document Server

    AUTHOR|(SzGeCERN)643806; The ATLAS collaboration; De, Kaushik; Klimentov, Alexei; Nilsson, Paul; Oleynik, Danila; Padolski, Siarhei; Panitkin, Sergey; Wenaus, Torre

    2017-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of Petabytes of data and the rate of data processing already exceeds Exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this paper we will describe a project aimed at integration of ATLAS Production System with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). Current approach utilizes modified PanDA Pilot framework for jo...

  4. Integration of Titan supercomputer at OLCF with ATLAS production system

    CERN Document Server

    Panitkin, Sergey; The ATLAS collaboration

    2016-01-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of Petabytes of data and the rate of data processing already exceeds Exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this talk we will describe a project aimed at integration of ATLAS Production System with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). Current approach utilizes modified PanDA Pilot framework for job...

  5. Supercomputer algorithms for reactivity, dynamics and kinetics of small molecules

    International Nuclear Information System (INIS)

    Lagana, A.

    1989-01-01

    Even for small systems, the accurate characterization of reactive processes is so demanding of computer resources as to suggest the use of supercomputers having vector and parallel facilities. The full advantages of vector and parallel architectures can sometimes be obtained by simply modifying existing programs, vectorizing the manipulation of vectors and matrices, and requiring the parallel execution of independent tasks. More often, however, a significant time saving can be obtained only when the computer code undergoes a deeper restructuring, requiring a change in the computational strategy or, more radically, the adoption of a different theoretical treatment. This book discusses supercomputer strategies based upon exact and approximate methods aimed at calculating the electronic structure and the reactive properties of small systems. The book shows how, in recent years, intense design activity has led to the ability to calculate accurate electronic structures for reactive systems, exact and high-level approximations to three-dimensional reactive dynamics, and to efficient directive and declaratory software for the modelling of complex systems.

  6. Design of multiple sequence alignment algorithms on parallel, distributed memory supercomputers.

    Science.gov (United States)

    Church, Philip C; Goscinski, Andrzej; Holt, Kathryn; Inouye, Michael; Ghoting, Amol; Makarychev, Konstantin; Reumann, Matthias

    2011-01-01

    The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters without parallelism. Hence, we propose the design of a multiple sequence alignment algorithm on massively parallel, distributed memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures including sequences and sorted k-mer lists on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E.coli, Shigella and S.pneumoniae genomes. Our implementation returns results matching those of the original algorithm but in 1/2 the time and with 1/4 the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within reasonable time.
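
    A toy serial version of the sorted k-mer lists used as a building block (the sequences and k are invented; on BG/P the lists are distributed across compute nodes rather than held in one process):

        def sorted_kmers(seq, k):
            """All k-mers of a sequence in lexicographic order."""
            return sorted(seq[i:i + k] for i in range(len(seq) - k + 1))

        # Shared k-mers between two genomes seed candidate alignment anchors.
        g1, g2 = "ACGTACGGACGT", "TTACGTACGAAT"
        shared = sorted(set(sorted_kmers(g1, 5)) & set(sorted_kmers(g2, 5)))
        print(shared)   # ['ACGTA', 'CGTAC', 'GTACG']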

  7. Center for Adaptive Optics | Center

    Science.gov (United States)

    UCSC's CfAO and ISEE, together with Maui Community College, run education and internship programs. The remainder of this record is a fragmentary list of affiliated institutions from the center's website: the Jacobs Retina Center, the Department of Psychology, the University of California, San Francisco, a university school of optometry, the Maui Community College Space Grant Program, and Montana [record truncated].

  8. Novel Supercomputing Approaches for High Performance Linear Algebra Using FPGAs, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — Supercomputing plays a major role in many areas of science and engineering, and it has had tremendous impact for decades in areas such as aerospace, defense, energy,...

  9. BSMBench: a flexible and scalable supercomputer benchmark from computational particle physics

    CERN Document Server

    Bennett, Ed; Del Debbio, Luigi; Jordan, Kirk; Patella, Agostino; Pica, Claudio; Rago, Antonio

    2016-01-01

    Benchmarking plays a central role in the evaluation of High Performance Computing architectures. Several benchmarks have been designed that allow users to stress various components of supercomputers. In order for the figures they provide to be useful, benchmarks need to be representative of the most common real-world scenarios. In this work, we introduce BSMBench, a benchmarking suite derived from Monte Carlo code used in computational particle physics. The advantage of this suite (which can be freely downloaded from http://www.bsmbench.org/) over others is the capacity to vary the relative importance of computation and communication. This enables the tests to simulate various practical situations. To showcase BSMBench, we perform a wide range of tests on various architectures, from desktop computers to state-of-the-art supercomputers, and discuss the corresponding results. Possible future directions of development of the benchmark are also outlined.
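
    The suite's distinguishing knob, a tunable compute-to-communication ratio, can be mimicked with a simple mpi4py ring exchange (this is a generic sketch, not BSMBench's actual code; both parameters are illustrative):

        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        def step(compute_iters, msg_doubles):
            """One benchmark step with a tunable computation/communication mix."""
            a = np.ones(msg_doubles)
            b = np.empty_like(a)
            for _ in range(compute_iters):            # local floating-point work
                a = a * 1.0000001 + 1e-7
            comm.Sendrecv(a, dest=(rank + 1) % size,  # ring exchange
                          recvbuf=b, source=(rank - 1) % size)
            return b

        t0 = MPI.Wtime()
        for _ in range(100):
            step(compute_iters=50, msg_doubles=4096)
        if rank == 0:
            print("100 steps:", MPI.Wtime() - t0, "s")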

  10. High Performance Networks From Supercomputing to Cloud Computing

    CERN Document Server

    Abts, Dennis

    2011-01-01

    Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network enables distributed applicati

  11. 76 FR 17752 - Notice of Intent To Prepare an Environmental Impact Statement for the San Francisco Veterans...

    Science.gov (United States)

    2011-03-30

    ... Environmental Policy Act (NEPA) of 1969, as amended, (42 U.S.C. 4331 et seq.), the Council on Environmental... the San Francisco Veterans Affairs Medical Center (SFVAMC) Institutional Master Plan AGENCY...: Comments should be addressed to John Pechman, Facility Planner, San Francisco VA Medical Center (001), 4150...

  12. Public Involvement and Response Plan (Community Relations Plan), Presidio of San Francisco, San Francisco, California

    Science.gov (United States)

    1992-03-01

    passenger ship destination, and tourist attraction. San Francisco's location and cultural and recreational opportunities make it a prime tourism center. (The remainder of this record is unreadable OCR residue from a Times Mirror Company newspaper clipping dated Tuesday, June 19, 1990.)

  13. Intelligent Personal Supercomputer for Solving Scientific and Technical Problems

    Directory of Open Access Journals (Sweden)

    Khimich, O.M.

    2016-09-01

    Full Text Available A new domestic intelligent personal supercomputer of hybrid architecture, Inparkom_pg, was developed for the mathematical modeling of processes in the defense industry, engineering, construction, etc. Intelligent software was designed for the automatic investigation and solution of computational mathematics tasks with approximate data of different structures. Applied software was implemented to support mathematical modeling problems in construction, welding and filtration processes.

  14. Supercomputers and the future of computational atomic scattering physics

    International Nuclear Information System (INIS)

    Younger, S.M.

    1989-01-01

    The advent of the supercomputer has opened new vistas for the computational atomic physicist. Problems of hitherto unparalleled complexity are now being examined using these new machines, and important connections with other fields of physics are being established. This talk briefly reviews some of the most important trends in computational scattering physics and suggests some exciting possibilities for the future. 7 refs., 2 figs

  15. Visualization on supercomputing platform level II ASC milestone (3537-1B) results from Sandia.

    Energy Technology Data Exchange (ETDEWEB)

    Geveci, Berk (Kitware, Inc., Clifton Park, NY); Fabian, Nathan; Marion, Patrick (Kitware, Inc., Clifton Park, NY); Moreland, Kenneth D.

    2010-09-01

    This report provides documentation for the completion of the Sandia portion of the ASC Level II Visualization on the platform milestone. This ASC Level II milestone is a joint milestone between Sandia National Laboratories and Los Alamos National Laboratories. This milestone contains functionality required for performing visualization directly on a supercomputing platform, which is necessary for peta-scale visualization. Sandia's contribution concerns in-situ visualization, running a visualization in tandem with a solver. Visualization and analysis of petascale data are limited by several factors which must be addressed as ACES delivers the Cielo platform. Two primary difficulties are: (1) Performance of interactive rendering, which is the most computationally intensive portion of the visualization process. For terascale platforms, commodity clusters with graphics processors (GPUs) have been used for interactive rendering. For petascale platforms, visualization and rendering may be able to run efficiently on the supercomputer platform itself. (2) I/O bandwidth, which limits how much information can be written to disk. If we simply analyze the sparse information that is saved to disk we miss the opportunity to analyze the rich information produced every timestep by the simulation. For the first issue, we are pursuing in-situ analysis, in which simulations are coupled directly with analysis libraries at runtime. This milestone will evaluate the visualization and rendering performance of current and next generation supercomputers in contrast to GPU-based visualization clusters, and evaluate the performance of common analysis libraries coupled with the simulation that analyze and write data to disk during a running simulation. This milestone will explore, evaluate and advance the maturity level of these technologies and their applicability to problems of interest to the ASC program. Scientific simulation on parallel supercomputers is traditionally performed in four
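
    At its core, the in-situ coupling pattern replaces disk I/O with direct calls into analysis routines on the solver's live state; a schematic with toy stand-ins for the solver and the analysis library:

        import numpy as np

        def time_loop(state, step_fn, analysis_fns, n_steps, stride=10):
            """Run a solver, handing its in-memory state to analysis every `stride` steps."""
            for t in range(n_steps):
                state = step_fn(state)
                if t % stride == 0:
                    for analyze in analysis_fns:   # in situ: no disk I/O on this path
                        analyze(t, state)
            return state

        # Toy solver and analysis standing in for the coupled simulation/library pair.
        step_fn = lambda s: 0.5 * (np.roll(s, 1) + np.roll(s, -1))   # diffusion-like update
        report = lambda t, s: print(t, float(s.max()))
        time_loop(np.random.default_rng(0).random(1000), step_fn, [report], 50)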

  16. De Novo Ultrascale Atomistic Simulations On High-End Parallel Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Nakano, A; Kalia, R K; Nomura, K; Sharma, A; Vashishta, P; Shimojo, F; van Duin, A; Goddard, III, W A; Biswas, R; Srivastava, D; Yang, L H

    2006-09-04

    /MD simulation on a Grid consisting of 6 supercomputer centers in the US and Japan (in total of 150 thousand processor-hours), in which the number of processors change dynamically on demand and resources are allocated and migrated dynamically in response to faults. Furthermore, performance portability has been demonstrated on a wide range of platforms such as BlueGene/L, Altix 3000, and AMD Opteron-based Linux clusters.

  17. 77 FR 49865 - Notice of Availability of an Environmental Impact Statement (EIS) for the San Francisco Veterans...

    Science.gov (United States)

    2012-08-17

    ... National Environmental Policy Act (NEPA) of 1969, as amended, (42 U.S.C. 4331 et seq.), the Council on...) for the San Francisco Veterans Affairs Medical Center (SFVAMC) Long Range Development Plan (LRDP... Francisco Veterans Affairs Medical Center, 4150 Clement Street, San Francisco, CA 94121 or by telephone...

  18. Multi-petascale highly efficient parallel supercomputer

    Science.gov (United States)

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen-Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2018-05-15

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaflop-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC). The ASIC nodes are interconnected by a five dimensional torus network that optimally maximize the throughput of packet communications between nodes and minimize latency. The network implements collective network and a global asynchronous network that provides global barrier and notification functions. Integrated in the node design include a list-based prefetcher. The memory system implements transaction memory, thread level speculation, and multiversioning cache that improves soft error rate at the same time and supports DMA functionality allowing for parallel processing message-passing.

  19. Mathematical methods and supercomputing in nuclear applications. Proceedings. Vol. 2

    International Nuclear Information System (INIS)

    Kuesters, H.; Stein, E.; Werner, W.

    1993-04-01

    All papers of the two volumes are separately indexed in the data base. Main topics are: Progress in advanced numerical techniques, fluid mechanics, on-line systems, artificial intelligence applications, nodal methods reactor kinetics, reactor design, supercomputer architecture, probabilistic estimation of risk assessment, methods in transport theory, advances in Monte Carlo techniques, and man-machine interface. (orig.)

  20. Mathematical methods and supercomputing in nuclear applications. Proceedings. Vol. 1

    International Nuclear Information System (INIS)

    Kuesters, H.; Stein, E.; Werner, W.

    1993-04-01

    All papers of the two volumes are separately indexed in the data base. Main topics are: Progress in advanced numerical techniques, fluid mechanics, on-line systems, artificial intelligence applications, nodal methods reactor kinetics, reactor design, supercomputer architecture, probabilistic estimation of risk assessment, methods in transport theory, advances in Monte Carlo techniques, and man-machine interface. (orig.)

  1. AHPCRC (Army High Performance Computing Research Center) Bulletin. Volume 1, Issue 2

    Science.gov (United States)

    2011-01-01

    area and the researchers working on these projects. Also inside: news from the AHPCRC consortium partners at Morgan State University and the NASA ...Computing Research Center is provided by the supercomputing and research facilities at Stanford University and at the NASA Ames Research Center at...atomic and molecular level, he said. He noted that "every general would like to have" a Star Trek-like holodeck, where holographic avatars could

  2. Personal Supercomputing for Monte Carlo Simulation Using a GPU

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Jae-Yong; Koo, Yang-Hyun; Lee, Byung-Ho [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2008-05-15

    Since the usability, accessibility, and maintenance of a personal computer (PC) are very good, a PC is a useful computer simulation tool for researchers. With the improved performance of a PC's CPU, it has enough calculation power to simulate a small-scale system. However, if a system is large or evolves over a long time scale, a cluster computer or supercomputer is needed. Recently, great changes have occurred in the PC calculation environment. A graphics processing unit (GPU) on a graphics card, formerly used only to calculate display data, has a calculation capability superior to a PC's CPU, matching the performance of a supercomputer of 2000. Although it has such great calculation potential, it is not easy to program a simulation code for the GPU due to the difficult programming techniques required to convert a calculation matrix to a 3D rendering image using graphics APIs. In 2006, NVIDIA provided a Software Development Kit (SDK) as the programming environment for NVIDIA's graphics cards, called the Compute Unified Device Architecture (CUDA). It makes programming on the GPU easy without knowledge of the graphics APIs. This paper describes the basic architectures of NVIDIA's GPU and CUDA, and carries out a performance benchmark for Monte Carlo simulation.
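
    A minimal Monte Carlo kernel of the embarrassingly parallel kind such benchmarks exercise, written serially with NumPy for brevity; in a CUDA version each GPU thread would own one batch of samples and its own RNG state:

        import numpy as np

        def mc_pi(n_samples, seed=0):
            """Estimate pi by sampling points uniformly in the unit square."""
            rng = np.random.default_rng(seed)
            x, y = rng.random(n_samples), rng.random(n_samples)
            return 4.0 * np.mean(x * x + y * y <= 1.0)

        # Independent seeds per worker make this trivially parallel across GPU threads.
        print(mc_pi(10_000_000))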

  3. Personal Supercomputing for Monte Carlo Simulation Using a GPU

    International Nuclear Information System (INIS)

    Oh, Jae-Yong; Koo, Yang-Hyun; Lee, Byung-Ho

    2008-01-01

    Since the usability, accessibility, and maintenance of a personal computer (PC) are very good, a PC is a useful computer simulation tool for researchers. With the improved performance of a PC's CPU, it has enough calculation power to simulate a small-scale system. However, if a system is large or evolves over a long time scale, a cluster computer or supercomputer is needed. Recently, great changes have occurred in the PC calculation environment. A graphics processing unit (GPU) on a graphics card, formerly used only to calculate display data, has a calculation capability superior to a PC's CPU, matching the performance of a supercomputer of 2000. Although it has such great calculation potential, it is not easy to program a simulation code for the GPU due to the difficult programming techniques required to convert a calculation matrix to a 3D rendering image using graphics APIs. In 2006, NVIDIA provided a Software Development Kit (SDK) as the programming environment for NVIDIA's graphics cards, called the Compute Unified Device Architecture (CUDA). It makes programming on the GPU easy without knowledge of the graphics APIs. This paper describes the basic architectures of NVIDIA's GPU and CUDA, and carries out a performance benchmark for Monte Carlo simulation

  4. Building a High Performance Computing Infrastructure for Novosibirsk Scientific Center

    International Nuclear Information System (INIS)

    Adakin, A; Chubarov, D; Nikultsev, V; Belov, S; Kaplin, V; Sukharev, A; Zaytsev, A; Kalyuzhny, V; Kuchin, N; Lomakin, S

    2011-01-01

    Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers, hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of the Russian Academy of Sciences, including the Budker Institute of Nuclear Physics (BINP), the Institute of Computational Technologies (ICT), and the Institute of Computational Mathematics and Mathematical Geophysics (ICM and MG). Since each institute has specific requirements on the architecture of the computing farms involved in its research field, there are currently several computing facilities hosted by NSC institutes, each optimized for a particular set of tasks; the largest are the NSU Supercomputer Center, the Siberian Supercomputer Center (ICM and MG), and the Grid Computing Facility of BINP. Recently a dedicated optical network with an initial bandwidth of 10 Gbps connecting these three facilities was built in order to make it possible to share the computing resources among the research communities of the participating institutes, thus providing a common platform for building the computing infrastructure for various scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technologies based on the XEN and KVM platforms. The solution implemented was tested thoroughly within the computing environment of the KEDR detector experiment being carried out at BINP, and is foreseen to be applied to the use cases of other HEP experiments in the near future.

  5. Design and performance characterization of electronic structure calculations on massively parallel supercomputers

    DEFF Research Database (Denmark)

    Romero, N. A.; Glinsvad, Christian; Larsen, Ask Hjorth

    2013-01-01

    Density functional theory (DFT) is the most widely employed electronic structure method because of its favorable scaling with system size and accuracy for a broad range of molecular and condensed-phase systems. The advent of massively parallel supercomputers has enhanced the scientific community

  6. Some examples of spin-off technologies: San Carlos de Bariloche; Algunos ejemplos de tecnologias derivadas: San Carlos de Bariloche

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, Gabriel O [Comision Nacional de Energia Atomica, San Carlos de Bariloche (Argentina). Centro Atomico Bariloche

    2001-07-01

    The Bariloche Atomic Center (CAB) and the Balseiro Institute, both in San Carlos de Bariloche, are devoted mainly to scientific research and development (the former) and to education and training (the latter). Besides providing specialists in physics and nuclear engineering for research centers in Argentina and abroad, both establishments transfer technologies and provide services in different fields such as waste management, metallurgy, forensic sciences, medicine, geology, modeling, archaeology, paleontology, etc.

  7. SANS-1 Experimental reports of 2000

    International Nuclear Information System (INIS)

    Willumeit, R.; Haramus, V.

    2001-01-01

    The instrument SANS-1 at the Geesthacht neutron facility GeNF was used for scattering experiments in 2000 on 196 of 200 days of reactor and cold source operation. The utilisation was shared between the in-house R and D program and user groups from different universities and research centers. These measurements were performed and analysed either by guest scientists or by GKSS staff. The focus of the work at the SANS-1 experiment in 2000 was the structural investigation of hydrogen-containing substances such as biological macromolecules (ribosomes, protein-RNA complexes, protein solutions, glycolipids and membranes), molecules which are important in the fields of environmental research (refractory organic substances) and technical chemistry (surfactants, micelles). (orig.) [de

  8. Computational Science with the Titan Supercomputer: Early Outcomes and Lessons Learned

    Science.gov (United States)

    Wells, Jack

    2014-03-01

    Modeling and simulation with petascale computing has supercharged the process of innovation and understanding, dramatically accelerating time-to-insight and time-to-discovery. This presentation will focus on early outcomes from the Titan supercomputer at the Oak Ridge National Laboratory. Titan has over 18,000 hybrid compute nodes consisting of both CPUs and GPUs. In this presentation, I will discuss the lessons we have learned in deploying Titan and preparing applications to move from conventional CPU architectures to a hybrid machine. I will present early results of materials applications running on Titan and the implications for the research community as we prepare for exascale supercomputers in the next decade. Lastly, I will provide an overview of user programs at the Oak Ridge Leadership Computing Facility, with specific information on how researchers may apply for allocations of computing resources. This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.

  9. Integration of PanDA workload management system with Titan supercomputer at OLCF

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00300320; Klimentov, Alexei; Oleynik, Danila; Panitkin, Sergey; Petrosyan, Artem; Vaniachine, Alexandre; Wenaus, Torre; Schovancova, Jaroslava

    2015-01-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, next LHC data taking run will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). Current approach utilizes modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on Titan's multi-core worker nodes. It also gives PanDA new capability to collect, in real time, information about unused...

  10. Integration of PanDA workload management system with Titan supercomputer at OLCF

    CERN Document Server

    Panitkin, Sergey; The ATLAS collaboration; Klimentov, Alexei; Oleynik, Danila; Petrosyan, Artem; Schovancova, Jaroslava; Vaniachine, Alexandre; Wenaus, Torre

    2015-01-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently uses more than 100,000 cores at well over 100 Grid sites with a peak performance of 0.3 petaFLOPS, next LHC data taking run will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We will describe a project aimed at integration of PanDA WMS with Titan supercomputer at Oak Ridge Leadership Computing Facility (OLCF). Current approach utilizes modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with light-weight MPI wrappers to run single threaded workloads in parallel on Titan's multi-core worker nodes. It also gives PanDA new capability to collect, in real tim...

  11. ParaBTM: A Parallel Processing Framework for Biomedical Text Mining on Supercomputers.

    Science.gov (United States)

    Xing, Yuting; Wu, Chengkun; Yang, Xi; Wang, Wei; Zhu, En; Yin, Jianping

    2018-04-27

    A prevailing way of extracting valuable information from biomedical literature is to apply text mining methods on unstructured texts. However, the massive amount of literature that needs to be analyzed poses a big data challenge to the processing efficiency of text mining. In this paper, we address this challenge by introducing parallel processing on a supercomputer. We developed paraBTM, a runnable framework that enables parallel text mining on the Tianhe-2 supercomputer. It employs a low-cost yet effective load balancing strategy to maximize the efficiency of parallel processing. We evaluated the performance of paraBTM on several datasets, utilizing three types of named entity recognition tasks as demonstration. Results show that, in most cases, the processing efficiency can be greatly improved with parallel processing, and the proposed load balancing strategy is simple and effective. In addition, our framework can be readily applied to other tasks of biomedical text mining besides NER.
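
    A low-cost load balancing strategy of the kind mentioned could be greedy longest-task-first assignment; the sketch below is a generic illustration, not paraBTM's published algorithm:

        import heapq

        def balance(tasks, n_workers):
            """Assign (size, name) tasks so the most loaded worker stays minimal."""
            heap = [(0, w) for w in range(n_workers)]       # (current load, worker id)
            plan = {w: [] for w in range(n_workers)}
            for size, name in sorted(tasks, reverse=True):  # biggest tasks first
                load, w = heapq.heappop(heap)
                plan[w].append(name)
                heapq.heappush(heap, (load + size, w))
            return plan

        docs = [(120, "doc_a"), (80, "doc_b"), (75, "doc_c"), (20, "doc_d")]
        print(balance(docs, 2))   # {0: ['doc_a', 'doc_d'], 1: ['doc_b', 'doc_c']}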

  12. Explaining the gap between theoretical peak performance and real performance for supercomputer architectures

    International Nuclear Information System (INIS)

    Schoenauer, W.; Haefner, H.

    1993-01-01

    The basic architectures of vector and parallel computers and their properties are presented, followed by a discussion of memory size and arithmetic operations in the context of memory bandwidth. For the exemplary discussion of a single operation, micro-measurements of the vector triad for the IBM 3090 VF and the CRAY Y-MP/8 are presented; they reveal the details of the losses for a single operation. We then analyze the global performance of a whole supercomputer by identifying reduction factors that bring the theoretical peak performance down to the poor real performance. The responsibilities of the manufacturer and of the user for these losses are discussed. The price-performance ratio for different architectures, in a snapshot of January 1991, is briefly mentioned. Finally, some remarks are made on a user-friendly architecture for a supercomputer. (orig.)
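
    The vector triad micro-measurement is easy to reproduce in spirit; here is a NumPy rendition (the array size is arbitrary, and a compiled kernel would be needed for rigorous numbers):

        import numpy as np
        import time

        n = 10_000_000
        b, c, d = np.ones(n), np.ones(n), np.ones(n)

        t0 = time.perf_counter()
        a = b + c * d                     # the vector triad: 2 flops per element
        dt = time.perf_counter() - t0

        # Sustained rate vs. any quoted theoretical peak exposes the reduction factor.
        print(f"{2 * n / dt / 1e6:.0f} MFLOP/s sustained")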

  13. Crustal structure of the coastal and marine San Francisco Bay region, California

    Science.gov (United States)

    Parsons, Tom

    2002-01-01

    As of the time of this writing, the San Francisco Bay region is home to about 6.8 million people, ranking fifth among population centers in the United States. Most of these people live on the coastal lands along San Francisco Bay, the Sacramento River delta, and the Pacific coast. The region straddles the tectonic boundary between the Pacific and North American Plates and is crossed by several strands of the San Andreas Fault system. These faults, which are stressed by about 4 cm of relative plate motion each year, pose an obvious seismic hazard.

  14. Two-Bin Kanban: Ordering Impact at Navy Medical Center San Diego

    Science.gov (United States)

    2016-06-17

    Wiley. Weed, J. (2010, July 10). Factory efficiency comes to hospital. New York Times, 1–3. Weiss, N. (2008). Introductory statistics. San Francisco...Urology, and Oral Maxillofacial Surgery (OMFS) departments at NMCSD. The data is statistically significant in 2015 when compared to 2013. Procurement...Procurement Cost and Procurement Efficiency Statistics

  15. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-01-01

    SANAM supercomputer was jointly built by KACST and FIAS in 2012 ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of HPL and STREAM benchmarks.

  16. HPL and STREAM Benchmarks on SANAM Supercomputer

    KAUST Repository

    Bin Sulaiman, Riman A.

    2017-03-13

    SANAM supercomputer was jointly built by KACST and FIAS in 2012 ranking second that year in the Green500 list with a power efficiency of 2.3 GFLOPS/W (Rohr et al., 2014). It is a heterogeneous accelerator-based HPC system that has 300 compute nodes. Each node includes two Intel Xeon E5-2650 CPUs, two AMD FirePro S10000 dual GPUs and 128 GiB of main memory. In this work, the seven benchmarks of HPCC were installed and configured to reassess the performance of SANAM, as part of an unpublished master thesis, after it was reassembled in the Kingdom of Saudi Arabia. We present here detailed results of HPL and STREAM benchmarks.

  17. Tomographic Rayleigh wave group velocities in the Central Valley, California, centered on the Sacramento/San Joaquin Delta

    Science.gov (United States)

    Fletcher, Jon B.; Erdem, Jemile; Seats, Kevin; Lawrence, Jesse

    2016-04-01

    If shaking from a local or regional earthquake in the San Francisco Bay region were to rupture levees in the Sacramento/San Joaquin Delta, then brackish water from San Francisco Bay would contaminate the water in the Delta: the source of freshwater for about half of California. As a prelude to a full shear-wave velocity model that can be used in computer simulations and further seismic hazard analysis, we report on the use of ambient noise tomography to build a fundamental mode, Rayleigh wave group velocity model for the region around the Sacramento/San Joaquin Delta in the western Central Valley, California. Recordings from the vertical component of about 31 stations were processed to compute the spatial distribution of Rayleigh wave group velocities. Complex coherency between pairs of stations was stacked over 8 months to more than a year. Dispersion curves were determined from 4 to about 18 s. We calculated average group velocities for each period and inverted for deviations from the average for a matrix of cells that covered the study area. Smoothing using the first difference is applied. Cells of the model were about 5.6 km in either dimension. Checkerboard tests of resolution, which are dependent on station density, suggest that the resolving ability of the array is reasonably good within the middle of the array with resolution between 0.2 and 0.4°. Overall, low velocities in the middle of each image reflect the deeper sedimentary syncline in the Central Valley. In detail, the model shows several centers of low velocity that may be associated with gross geologic features such as faulting along the western margin of the Central Valley, oil and gas reservoirs, and large crosscutting features like the Stockton arch. At shorter periods around 5.5 s, the model's western boundary between low and high velocities closely follows regional fault geometry and the edge of a residual isostatic gravity low. In the eastern part of the valley, the boundaries of the low
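
    The heart of the processing chain, stacking spectrally normalized cross-correlations (complex coherency) between two synchronized station records, can be sketched as follows; the window count and synthetic traces are assumptions for illustration:

        import numpy as np

        def stacked_coherency(tr1, tr2, n_windows):
            """Stack normalized cross-spectra; the ifft approximates the Green's function."""
            w = len(tr1) // n_windows
            acc = np.zeros(w, dtype=complex)
            for i in range(n_windows):
                x1 = np.fft.fft(tr1[i * w:(i + 1) * w])
                x2 = np.fft.fft(tr2[i * w:(i + 1) * w])
                acc += x1 * np.conj(x2) / (np.abs(x1) * np.abs(x2) + 1e-12)
            return np.fft.ifft(acc / n_windows).real

        rng = np.random.default_rng(0)
        noise1, noise2 = rng.normal(size=86400), rng.normal(size=86400)
        egf = stacked_coherency(noise1, noise2, n_windows=24)
        print(egf[:5])   # group velocities come from dispersion analysis of this trace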

  18. Tomographic Rayleigh-wave group velocities in the Central Valley, California centered on the Sacramento/San Joaquin Delta

    Science.gov (United States)

    Fletcher, Jon Peter B.; Erdem, Jemile; Seats, Kevin; Lawrence, Jesse

    2016-01-01

    If shaking from a local or regional earthquake in the San Francisco Bay region were to rupture levees in the Sacramento/San Joaquin Delta then brackish water from San Francisco Bay would contaminate the water in the Delta: the source of fresh water for about half of California. As a prelude to a full shear-wave velocity model that can be used in computer simulations and further seismic hazard analysis, we report on the use of ambient noise tomography to build a fundamental-mode, Rayleigh-wave group velocity model for the region around the Sacramento/San Joaquin Delta in the western Central Valley, California. Recordings from the vertical component of about 31 stations were processed to compute the spatial distribution of Rayleigh wave group velocities. Complex coherency between pairs of stations was stacked over 8 months to more than a year. Dispersion curves were determined from 4 to about 18 seconds. We calculated average group velocities for each period and inverted for deviations from the average for a matrix of cells that covered the study area. Smoothing using the first difference is applied. Cells of the model were about 5.6 km in either dimension. Checkerboard tests of resolution, which is dependent on station density, suggest that the resolving ability of the array is reasonably good within the middle of the array with resolution between 0.2 and 0.4 degrees. Overall, low velocities in the middle of each image reflect the deeper sedimentary syncline in the Central Valley. In detail, the model shows several centers of low velocity that may be associated with gross geologic features such as faulting along the western margin of the Central Valley, oil and gas reservoirs, and large cross cutting features like the Stockton arch. At shorter periods around 5.5s, the model’s western boundary between low and high velocities closely follows regional fault geometry and the edge of a residual isostatic gravity low. In the eastern part of the valley, the boundaries

  19. An efficient implementation of a backpropagation learning algorithm on quadrics parallel supercomputer

    International Nuclear Information System (INIS)

    Taraglio, S.; Massaioli, F.

    1995-08-01

    A parallel implementation of a library to build and train Multi-Layer Perceptrons via the Back-Propagation algorithm is presented. The target machine is the SIMD massively parallel supercomputer Quadrics. Performance measurements are provided for three machines with different numbers of processors, for two example networks. Sample source code is given.
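
    The abstract mentions sample source code without reproducing it. As a generic illustration only of the algorithm such a library implements, the sketch below performs one backpropagation update for a tiny 2-2-1 sigmoid perceptron; the weights, inputs, and learning rate are invented values, and this is plain serial C rather than Quadrics SIMD code.

    /* Minimal sketch of a single backpropagation update for a 2-2-1
     * sigmoid MLP. All weights, inputs, and the learning rate are
     * invented illustration values. */
    #include <stdio.h>
    #include <math.h>

    static double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }

    int main(void)
    {
        double w1[2][2] = {{0.5, -0.3}, {0.8, 0.2}}; /* input -> hidden */
        double w2[2]    = {0.4, -0.6};               /* hidden -> output */
        double x[2] = {1.0, 0.0}, target = 1.0, lr = 0.5;

        /* Forward pass. */
        double h[2], y = 0.0;
        for (int j = 0; j < 2; j++) {
            double s = 0.0;
            for (int i = 0; i < 2; i++)
                s += w1[j][i] * x[i];
            h[j] = sigmoid(s);
            y += w2[j] * h[j];
        }
        y = sigmoid(y);

        /* Backward pass: output delta first, then hidden deltas. */
        double dy = (y - target) * y * (1.0 - y);
        for (int j = 0; j < 2; j++) {
            double dh = dy * w2[j] * h[j] * (1.0 - h[j]);
            w2[j] -= lr * dy * h[j];
            for (int i = 0; i < 2; i++)
                w1[j][i] -= lr * dh * x[i];
        }
        printf("network output before update: %f\n", y);
        return 0;
    }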

  20. 75 FR 38412 - Safety Zone; San Diego POPS Fireworks, San Diego, CA

    Science.gov (United States)

    2010-07-02

    ...-AA00 Safety Zone; San Diego POPS Fireworks, San Diego, CA AGENCY: Coast Guard, DHS. ACTION: Temporary... waters of San Diego Bay in support of the San Diego POPS Fireworks. This safety zone is necessary to... San Diego POPS Fireworks, which will include fireworks presentations conducted from a barge in San...

  1. Integration of PanDA workload management system with Titan supercomputer at OLCF

    Science.gov (United States)

    De, K.; Klimentov, A.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.

    2015-12-01

    The PanDA (Production and Distributed Analysis) workload management system (WMS) was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. While PanDA currently distributes jobs to more than 100,000 cores at well over 100 Grid sites, the future LHC data taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. We describe a project aimed at integration of the PanDA WMS with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA pilot framework for job submission to Titan's batch queues and local data management, with lightweight MPI wrappers to run single-threaded workloads in parallel on Titan's multicore worker nodes. It also gives PanDA the new capability to collect, in real time, information about unused worker nodes on Titan, which allows precise definition of the size and duration of jobs submitted to Titan according to available free resources. This capability significantly reduces PanDA job wait time while improving Titan's utilization efficiency. This implementation was tested with a variety of Monte Carlo workloads on Titan and is being tested on several other supercomputing platforms. Notice: This manuscript has been authored by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher, by accepting the manuscript for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
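
    The lightweight MPI wrapper itself is not shown in the abstract. A minimal sketch of the general idea, assuming each MPI rank simply shells out to one single-threaded payload; the ./payload command and its flags are hypothetical, not part of the PanDA pilot.

    /* Minimal sketch of an MPI wrapper that fans single-threaded
     * payloads out across ranks, in the spirit of the pilot approach
     * described above. The ./payload command is hypothetical. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        char cmd[256];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each rank runs one independent serial job, labeled by rank. */
        snprintf(cmd, sizeof(cmd),
                 "./payload --task %d --of %d > out.%d.log 2>&1",
                 rank, size, rank);
        int rc = system(cmd);
        if (rc != 0)
            fprintf(stderr, "rank %d: payload exited with %d\n", rank, rc);

        MPI_Finalize();
        return 0;
    }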

  2. Building more powerful less expensive supercomputers using Processing-In-Memory (PIM) LDRD final report.

    Energy Technology Data Exchange (ETDEWEB)

    Murphy, Richard C.

    2009-09-01

    This report details the accomplishments of the 'Building More Powerful Less Expensive Supercomputers Using Processing-In-Memory (PIM)' LDRD ('PIM LDRD', number 105809) for FY07-FY09. Latency dominates all levels of supercomputer design. Within a node, increasing memory latency, relative to processor cycle time, limits CPU performance. Between nodes, the same increase in relative latency impacts scalability. Processing-In-Memory (PIM) is an architecture that directly addresses this problem using enhanced chip fabrication technology and machine organization. PIMs combine high-speed logic and dense, low-latency, high-bandwidth DRAM, and lightweight threads that tolerate latency by performing useful work during memory transactions. This work examines the potential of PIM-based architectures to support mission critical Sandia applications and an emerging class of more data intensive informatics applications. This work has resulted in a stronger architecture/implementation collaboration between 1400 and 1700. Additionally, key technology components have impacted vendor roadmaps, and we are in the process of pursuing these new collaborations. This work has the potential to impact future supercomputer design and construction, reducing power and increasing performance. This final report is organized as follows: this summary chapter discusses the impact of the project (Section 1), provides an enumeration of publications and other public discussion of the work (Section 1), and concludes with a discussion of future work and impact from the project (Section 1). The appendix contains reprints of the refereed publications resulting from this work.

  3. Accelerating Science Impact through Big Data Workflow Management and Supercomputing

    Directory of Open Access Journals (Sweden)

    De K.

    2016-01-01

    Full Text Available The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. ATLAS, one of the largest collaborations ever assembled in the history of science, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment relies on a heterogeneous distributed computational infrastructure. To manage the workflow for all data processing on hundreds of data centers, the PanDA (Production and Distributed Analysis) Workload Management System is used. An ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF), is being realized within the BigPanDA and megaPanDA projects. These projects are now exploring how PanDA might be used for managing computing jobs that run on supercomputers including OLCF's Titan and NRC-KI HPC2. The main idea is to reuse, as much as possible, existing components of the PanDA system that are already deployed on the LHC Grid for analysis of physics data. The next generation of PanDA will allow many data-intensive sciences employing a variety of computing platforms to benefit from ATLAS experience and proven tools in highly scalable processing.

  4. Supercomputers and the mathematical modeling of high complexity problems

    International Nuclear Information System (INIS)

    Belotserkovskii, Oleg M

    2010-01-01

    This paper is a review of many works carried out by members of our scientific school in past years. The general principles of constructing numerical algorithms for high-performance computers are described. Several techniques are highlighted and these are based on the method of splitting with respect to physical processes and are widely used in computing nonlinear multidimensional processes in fluid dynamics, in studies of turbulence and hydrodynamic instabilities and in medicine and other natural sciences. The advances and developments related to the new generation of high-performance supercomputing in Russia are presented.

  5. United States Air Force Personalized Medicine and Advanced Diagnostics Program Panel: Representative Research at the San Antonio Military Medical Center

    Science.gov (United States)

    2016-05-20

    Approval documentation for presenting this research at the University of Texas at San Antonio/SAMHS & Universities Research Forum (SURF 2016) in San Antonio, TX, on 20 May 2016.

  6. First experiences with large SAN storage and Linux

    International Nuclear Information System (INIS)

    Wezel, Jos van; Marten, Holger; Verstege, Bernhard; Jaeger, Axel

    2004-01-01

    The use of a storage area network (SAN) with Linux opens possibilities for scalable and affordable large data storage and poses a new challenge for cluster computing. The GridKa center uses a commercial parallel file system to create a highly available high-speed data storage using a combination of Fibre Channel (SAN) and Ethernet (LAN) to optimize between data throughput and costs. This article describes the design, implementation and optimizations of the GridKa storage solution which will offer over 400 TB online storage for 600 nodes. Presented are some throughput measurements of one of the largest Linux-based parallel storage systems in the world

  7. Accuracy of Perceived Estimated Travel Time by EMS to a Trauma Center in San Bernardino County, California

    Directory of Open Access Journals (Sweden)

    Michael M. Neeki

    2016-06-01

    Full Text Available Introduction: Mobilization of trauma resources has the potential to cause ripple effects throughout hospital operations. One major factor affecting efficient utilization of trauma resources is a discrepancy between the prehospital estimated time of arrival (ETA as communicated by emergency medical services (EMS personnel and their actual time of arrival (TOA. The current study aimed to assess the accuracy of the perceived prehospital estimated arrival time by EMS personnel in comparison to their actual arrival time at a Level II trauma center in San Bernardino County, California. Methods: This retrospective study included traumas classified as alerts or activations that were transported to Arrowhead Regional Medical Center in 2013. We obtained estimated arrival time and actual arrival time for each transport from the Surgery Department Trauma Registry. The difference between the median of ETA and actual TOA by EMS crews to the trauma center was calculated for these transports. Additional variables assessed included time of day and month during which the transport took place. Results: A total of 2,454 patients classified as traumas were identified in the Surgery Department Trauma Registry. After exclusion of trauma consults, walk-ins, handoffs between agencies, downgraded traumas, traumas missing information, and traumas transported by agencies other than American Medical Response, Ontario Fire, Rialto Fire or San Bernardino County Fire, we included a final sample size of 555 alert and activation classified traumas in the final analysis. When combining all transports by the included EMS agencies, the median of the ETA was 10 minutes and the median of the actual TOA was 22 minutes (median of difference=9 minutes, p<0.0001. Furthermore, when comparing the difference between trauma alerts and activations, trauma activations demonstrated an equal or larger difference in the median of the estimated and actual time of arrival (p<0.0001. We also found

  8. Using the LANSCE irradiation facility to predict the number of fatal soft errors in one of the world's fastest supercomputers

    International Nuclear Information System (INIS)

    Michalak, S.E.; Harris, K.W.; Hengartner, N.W.; Takala, B.E.; Wender, S.A.

    2005-01-01

    Los Alamos National Laboratory (LANL) is home to the Los Alamos Neutron Science Center (LANSCE). LANSCE is a unique facility because its neutron spectrum closely mimics the neutron spectrum at terrestrial and aircraft altitudes, but is many times more intense. Thus, LANSCE provides an ideal setting for accelerated testing of semiconductor and other devices that are susceptible to cosmic ray induced neutrons. Many industrial companies use LANSCE to estimate device susceptibility to cosmic ray induced neutrons, and it has also been used to test parts from one of LANL's supercomputers, the ASC (Advanced Simulation and Computing Program) Q. This paper discusses our use of the LANSCE facility to study components in Q including a comparison with failure data from Q

  9. Heat dissipation computations of a HVDC ground electrode using a supercomputer

    International Nuclear Information System (INIS)

    Greiss, H.; Mukhedkar, D.; Lagace, P.J.

    1990-01-01

    This paper reports on the temperature of the soil surrounding a High Voltage Direct Current (HVDC) toroidal ground electrode of practical dimensions, in both homogeneous and non-homogeneous soils, computed at incremental points in time using finite difference methods on a supercomputer. Response curves were computed and plotted at several locations within the soil in the vicinity of the ground electrode for various values of the soil parameters

  10. Evolution of the Virtualized HPC Infrastructure of Novosibirsk Scientific Center

    International Nuclear Information System (INIS)

    Adakin, A; Chubarov, D; Nikultsev, V; Anisenkov, A; Belov, S; Kaplin, V; Korol, A; Skovpen, K; Sukharev, A; Zaytsev, A; Kalyuzhny, V; Kuchin, N; Lomakin, S

    2012-01-01

    Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers, hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of the Russian Academy of Sciences, including Budker Institute of Nuclear Physics (BINP), the Institute of Computational Technologies, and the Institute of Computational Mathematics and Mathematical Geophysics (ICM and MG). Since each institute has specific requirements on the architecture of the computing farms involved in its research field, there are currently several computing facilities hosted by NSC institutes, each optimized for a particular set of tasks; the largest are the NSU Supercomputer Center, the Siberian Supercomputer Center (ICM and MG), and the Grid Computing Facility of BINP. A dedicated optical network with an initial bandwidth of 10 Gb/s connecting these three facilities was built in order to make it possible to share the computing resources among the research communities, thus increasing the efficiency of operating the existing computing facilities and offering a common platform for building the computing infrastructure for future scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technology based on the XEN and KVM platforms. This contribution gives a thorough review of the present status and future development prospects of the NSC virtualized computing infrastructure and the experience gained while using it for running production data analysis jobs related to HEP experiments carried out at BINP, especially the KEDR detector experiment at the VEPP-4M electron-positron collider.

  11. Argonne National Lab deploys Force10 networks' massively dense ethernet switch for supercomputing cluster

    CERN Multimedia

    2003-01-01

    "Force10 Networks, Inc. today announced that Argonne National Laboratory (Argonne, IL) has successfully deployed Force10 E-Series switch/routers to connect to the TeraGrid, the world's largest supercomputing grid, sponsored by the National Science Foundation (NSF)" (1/2 page).

  12. A supercomputing application for reactors core design and optimization

    International Nuclear Information System (INIS)

    Hourcade, Edouard; Gaudier, Fabrice; Arnaud, Gilles; Funtowiez, David; Ammar, Karim

    2010-01-01

    Advanced nuclear reactor designs are often intuition-driven processes where designers first develop or use simplified simulation tools for each physical phenomenon involved. As a project develops, complexity in each discipline increases, and implementation of the chaining/coupling capabilities needed for a supercomputing optimization process is often postponed to a later step, so that the task becomes increasingly challenging. In the context of renewed interest in reactor designs, first-realization projects are often run in parallel with advanced design, although they depend strongly on final options. As a consequence, tools to globally assess and optimize reactor core features, with the accuracy of current design methods, are needed. This should be possible within reasonable simulation time and without requiring advanced computer skills at the project-management level. These tools should also be ready to easily accommodate modeling progress in each discipline over the project's lifetime. An early-stage development of a multi-physics package adapted to supercomputing is presented. The URANIE platform, developed at CEA and based on the data analysis framework ROOT, is very well adapted to this approach. It allows diversified sampling techniques (SRS, LHS, qMC), fitting tools (neural networks...) and optimization techniques (genetic algorithms). Database management and visualization are also made very easy. In this paper, we present the various implementation steps of this core physics tool, in which neutronics, thermal-hydraulics, and fuel mechanics codes are run simultaneously. A relevant example of optimization of nuclear reactor safety characteristics is presented. The flexibility of the URANIE tool is also illustrated with several approaches to improve Pareto front quality. (author)

  13. Evaluating the networking characteristics of the Cray XC-40 Intel Knights Landing-based Cori supercomputer at NERSC

    Energy Technology Data Exchange (ETDEWEB)

    Doerfler, Douglas [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Austin, Brian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Cook, Brandon [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Deslippe, Jack [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Kandalla, Krishna [Cray Inc, Bloomington, MN (United States); Mendygral, Peter [Cray Inc, Bloomington, MN (United States)

    2017-09-12

    There are many potential issues associated with deploying the Intel Xeon Phi™ (code-named Knights Landing [KNL]) manycore processor in a large-scale supercomputer. One in particular is the ability to fully utilize the high-speed communications network, given that the serial performance of a Xeon Phi™ core is a fraction of that of a Xeon® core. In this paper, we take a look at the trade-offs associated with allocating enough cores to fully utilize the Aries high-speed network versus cores dedicated to computation, e.g., the trade-off between MPI and OpenMP. In addition, we evaluate new features of Cray MPI in support of KNL, such as internode optimizations. We also evaluate one-sided programming models such as Unified Parallel C. We quantify the impact of the above trade-offs and features using a suite of National Energy Research Scientific Computing Center applications.

  14. Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks

    Science.gov (United States)

    Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias

    2006-01-01

    The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem and interconnect fabric of five leading supercomputers - SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmark (IMB) results to study the performance of 11 MPI communication functions on these systems.

  15. An Interface for Biomedical Big Data Processing on the Tianhe-2 Supercomputer.

    Science.gov (United States)

    Yang, Xi; Wu, Chengkun; Lu, Kai; Fang, Lin; Zhang, Yong; Li, Shengkang; Guo, Guixin; Du, YunFei

    2017-12-01

    Big data, cloud computing, and high-performance computing (HPC) are on the verge of convergence. Cloud computing is already playing an active part in big data processing with the help of big data frameworks like Hadoop and Spark. The recent upsurge of high-performance computing in China provides extra possibilities and capacity to address the challenges associated with big data. In this paper, we propose Orion, a big data interface on the Tianhe-2 supercomputer, to enable big data applications to run on Tianhe-2 via a single command or a shell script. Orion supports multiple users, and each user can launch multiple tasks. It minimizes the effort needed to initiate big data applications on the Tianhe-2 supercomputer via automated configuration. Orion follows the "allocate-when-needed" paradigm, and it avoids the idle occupation of computational resources. We tested the utility and performance of Orion using a big genomic dataset and achieved a satisfactory performance on Tianhe-2 with very few modifications to existing applications that were implemented in Hadoop/Spark. In summary, Orion provides a practical and economical interface for big data processing on Tianhe-2.

  16. 78 FR 19103 - Safety Zone; Spanish Navy School Ship San Sebastian El Cano Escort; Bahia de San Juan; San Juan, PR

    Science.gov (United States)

    2013-03-29

    ...-AA00 Safety Zone; Spanish Navy School Ship San Sebastian El Cano Escort; Bahia de San Juan; San Juan... temporary moving safety zone on the waters of Bahia de San Juan during the transit of the Spanish Navy... Channel entrance, and to protect the high ranking officials on board the Spanish Navy School Ship San...

  17. Sandia's network for Supercomputing '94: Linking the Los Alamos, Lawrence Livermore, and Sandia National Laboratories using switched multimegabit data service

    Energy Technology Data Exchange (ETDEWEB)

    Vahle, M.O.; Gossage, S.A.; Brenkosh, J.P. [Sandia National Labs., Albuquerque, NM (United States). Advanced Networking Integration Dept.

    1995-01-01

    Supercomputing '94, a high-performance computing and communications conference, was held November 14th through 18th, 1994 in Washington DC. For the past four years, Sandia National Laboratories has used this conference to showcase and focus its communications and networking endeavors. At the 1994 conference, Sandia built a Switched Multimegabit Data Service (SMDS) network running at 44.736 megabits per second linking its private SMDS network between its facilities in Albuquerque, New Mexico and Livermore, California to the convention center in Washington, D.C. For the show, the network was also extended from Sandia, New Mexico to Los Alamos National Laboratory and from Sandia, California to Lawrence Livermore National Laboratory. This paper documents and describes this network and how it was used at the conference.

  18. The BirthPlace collaborative practice model: results from the San Diego Birth Center Study.

    Science.gov (United States)

    Swartz; Jackson; Lang; Ecker; Ganiats; Dickinson; Nguyen

    1998-07-01

    Objective: The search for quality, cost-effective health care programs in the United States is now a major focus in the era of health care reform. New programs need to be evaluated as alternatives are developed in the health care system. The BirthPlace program provides comprehensive perinatal services with certified nurse-midwives and obstetricians working together in an integrated collaborative practice serving a primarily low-income population. Low-risk women are delivered by nurse-midwives in a freestanding birth center (The BirthPlace), which is one component of a larger integrated health network. All others are delivered by team obstetricians at the affiliated tertiary hospital. Wellness, preventive measures, early intervention, and family involvement are emphasized. The San Diego Birth Center Study is a 4-year research project funded by the U.S. Federal Agency for Health Care Policy and Research (#R01-HS07161) to evaluate this program. The National Birth Center Study (NEJM, 1989; 321(26): 1801-11) described the advantages and safety of freestanding birth centers. However, a prospective cohort study with a concurrent comparison group of comparable risk had not been conducted on a collaborative practice-freestanding birth center model to address questions of safety, cost, and patient satisfaction.Methods: The specific aims of this study are to compare this collaborative practice model to the traditional model of perinatal health care (physician providers and hospital delivery). A prospective cohort study comparing these two health care models was conducted with a final expected sample size of approximately 2,000 birth center and 1,350 traditional care subjects. Women were recruited from both the birth center and traditional care programs (private physicians offices and hospital based clinics) at the beginning of prenatal care and followed through the end of the perinatal period. Prenatal, intrapartum, postpartum and infant morbidity and mortality are being

  19. Supercomputations and big-data analysis in strong-field ultrafast optical physics: filamentation of high-peak-power ultrashort laser pulses

    Science.gov (United States)

    Voronin, A. A.; Panchenko, V. Ya; Zheltikov, A. M.

    2016-06-01

    High-intensity ultrashort laser pulses propagating in gas media or in condensed matter undergo complex nonlinear spatiotemporal evolution where temporal transformations of optical field waveforms are strongly coupled to an intricate beam dynamics and ultrafast field-induced ionization processes. At the level of laser peak powers orders of magnitude above the critical power of self-focusing, the beam exhibits modulation instabilities, producing random field hot spots and breaking up into multiple noise-seeded filaments. This problem is described by a (3+1)-dimensional nonlinear field evolution equation, which needs to be solved jointly with the equation for ultrafast ionization of a medium. Analysis of this problem, which is equivalent to solving a billion-dimensional evolution problem, is only possible by means of supercomputer simulations augmented with coordinated big-data processing of large volumes of information acquired through theory-guiding experiments and supercomputations. Here, we review the main challenges of supercomputations and big-data processing encountered in strong-field ultrafast optical physics and discuss strategies to confront these challenges.
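
    The abstract refers to a (3+1)-dimensional nonlinear field evolution equation coupled to an ionization equation without writing either out. A schematic envelope model of the kind commonly used in filamentation studies, with generic symbols and coefficients not taken from the paper, is:

    \begin{align}
    \frac{\partial A}{\partial z} &=
        \frac{i}{2k_0}\,\nabla_\perp^2 A
      - \frac{i k''}{2}\,\frac{\partial^2 A}{\partial \tau^2}
      + \frac{i \omega_0 n_2}{c}\,|A|^2 A
      - \frac{\beta^{(K)}}{2}\,|A|^{2K-2} A
      - \frac{\sigma}{2}\left(1 + i\omega_0\tau_c\right)\rho A, \\
    \frac{\partial \rho}{\partial \tau} &=
        \sigma_K\,|A|^{2K}\left(\rho_{\mathrm{nt}} - \rho\right),
    \end{align}

    where A is the field envelope, \rho the free-electron density, \beta^{(K)} the K-photon absorption coefficient, and \sigma the inverse-bremsstrahlung cross section. Solving such a system jointly over three space dimensions plus time for a multi-filament beam is what pushes the problem to supercomputer scale.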

  20. Compact High Resolution SANS using very cold neutrons (VCN-SANS)

    International Nuclear Information System (INIS)

    Kennedy, S.; Yamada, M.; Iwashita, Y.; Geltenbort, P.; Bleuel, M.; Shimizu, H.

    2011-01-01

    SANS (Small Angle Neutron Scattering) is a popular method for elucidation of nano-scale structures. However, science continually challenges SANS for higher performance, prompting exploration of ever-more exotic and expensive technologies. We propose a compact high resolution SANS, using very cold neutrons, a magnetic focusing lens and a wide-angle spherical detector. This system will compete with modern 40 m pinhole SANS in one tenth of the length, matching minimum Q, Q-resolution and dynamic range. It will also probe dynamics using the MIEZE method. Our prototype lens (a rotating permanent-magnet sextupole) focuses a pulsed neutron beam over 3-5 nm wavelength and has measured SANS from micelles and polymer blends. (authors)

  1. Quantum Hamiltonian Physics with Supercomputers

    International Nuclear Information System (INIS)

    Vary, James P.

    2014-01-01

    The vision of solving the nuclear many-body problem in a Hamiltonian framework with fundamental interactions tied to QCD via Chiral Perturbation Theory is gaining support. The goals are to preserve the predictive power of the underlying theory, to test fundamental symmetries with the nucleus as laboratory and to develop new understandings of the full range of complex quantum phenomena. Advances in theoretical frameworks (renormalization and many-body methods) as well as in computational resources (new algorithms and leadership-class parallel computers) signal a new generation of theory and simulations that will yield profound insights into the origins of nuclear shell structure, collective phenomena and complex reaction dynamics. Fundamental discovery opportunities also exist in such areas as physics beyond the Standard Model of Elementary Particles, the transition between hadronic and quark–gluon dominated dynamics in nuclei and signals that characterize dark matter. I will review some recent achievements and present ambitious consensus plans along with their challenges for a coming decade of research that will build new links between theory, simulations and experiment. Opportunities for graduate students to embark upon careers in the fast-developing field of supercomputer simulations are also discussed

  2. Quantum Hamiltonian Physics with Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Vary, James P.

    2014-06-15

    The vision of solving the nuclear many-body problem in a Hamiltonian framework with fundamental interactions tied to QCD via Chiral Perturbation Theory is gaining support. The goals are to preserve the predictive power of the underlying theory, to test fundamental symmetries with the nucleus as laboratory and to develop new understandings of the full range of complex quantum phenomena. Advances in theoretical frameworks (renormalization and many-body methods) as well as in computational resources (new algorithms and leadership-class parallel computers) signal a new generation of theory and simulations that will yield profound insights into the origins of nuclear shell structure, collective phenomena and complex reaction dynamics. Fundamental discovery opportunities also exist in such areas as physics beyond the Standard Model of Elementary Particles, the transition between hadronic and quark–gluon dominated dynamics in nuclei and signals that characterize dark matter. I will review some recent achievements and present ambitious consensus plans along with their challenges for a coming decade of research that will build new links between theory, simulations and experiment. Opportunities for graduate students to embark upon careers in the fast-developing field of supercomputer simulations are also discussed.

  3. 76 FR 45693 - Safety Zone; San Diego POPS Fireworks, San Diego, CA

    Science.gov (United States)

    2011-08-01

    ...-AA00 Safety Zone; San Diego POPS Fireworks, San Diego, CA AGENCY: Coast Guard, DHS. ACTION: Temporary... San Diego Bay in support of the San Diego POPS Fireworks. This safety zone is necessary to provide for... of the waterway during scheduled fireworks events. Persons and vessels will be prohibited from...

  4. Some examples of spin-off technologies: San Carlos de Bariloche

    International Nuclear Information System (INIS)

    Meyer, Gabriel O.

    2001-01-01

    The Bariloche Atomic Center (CAB) and the Balseiro Institute, both in San Carlos de Bariloche, are devoted mainly to scientific research and development in the first case, and to education and training in the second. Besides providing specialists in physics and nuclear engineering for research centers in Argentina and abroad, both establishments transfer technologies and provide services in different fields such as waste management, metallurgy, forensic sciences, medicine, geology, modeling, archaeology, paleontology, etc.

  5. Coherent 40 Gb/s SP-16QAM and 80 Gb/s PDM-16QAM in an Optimal Supercomputer Optical Switch Fabric

    DEFF Research Database (Denmark)

    Karinou, Fotini; Borkowski, Robert; Zibar, Darko

    2013-01-01

    We demonstrate, for the first time, the feasibility of using 40 Gb/s SP-16QAM and 80 Gb/s PDM-16QAM in an optimized cell switching supercomputer optical interconnect architecture based on semiconductor optical amplifiers as ON/OFF gates.

  6. 33 CFR 165.754 - Safety Zone: San Juan Harbor, San Juan, PR.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Safety Zone: San Juan Harbor, San Juan, PR. 165.754 Section 165.754 Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND... Zone: San Juan Harbor, San Juan, PR. (a) Regulated area. A moving safety zone is established in the...

  7. Feynman diagrams sampling for quantum field theories on the QPACE 2 supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Rappl, Florian

    2016-08-01

    This work discusses the application of Feynman diagram sampling in quantum field theories. The method uses a computer simulation to sample the diagrammatic space obtained in a series expansion. Powerful computers are obligatory for running large physical simulations, effectively splitting the thesis into two parts. The first part deals with the method of Feynman diagram sampling. Here the theoretical background of the method itself is discussed. Additionally, important statistical concepts and the theory of the strong force, quantum chromodynamics, are introduced. This sets the context of the simulations. We create and evaluate a variety of models to estimate the applicability of diagrammatic methods. The method is then applied to sample the perturbative expansion of the vertex correction. In the end we obtain the value for the anomalous magnetic moment of the electron. The second part looks at the QPACE 2 supercomputer. This includes a short introduction to supercomputers in general, as well as a closer look at the architecture and the cooling system of QPACE 2. Guiding benchmarks of the InfiniBand network are presented. At the core of this part, a collection of best practices and useful programming concepts is outlined, which enables the development of efficient, yet easily portable, applications for the QPACE 2 system.

  8. Federal Market Information Technology in the Post Flash Crash Era: Roles for Supercomputing

    Energy Technology Data Exchange (ETDEWEB)

    Bethel, E. Wes; Leinweber, David; Ruebel, Oliver; Wu, Kesheng

    2011-09-16

    This paper describes collaborative work between active traders, regulators, economists, and supercomputing researchers to replicate and extend investigations of the Flash Crash and other market anomalies in a National Laboratory HPC environment. Our work suggests that supercomputing tools and methods will be valuable to market regulators in achieving the goal of market safety, stability, and security. Research results using high frequency data and analytics are described, and directions for future development are discussed. Currently the key mechanism for preventing catastrophic market action is the "circuit breaker." We believe a more graduated approach, similar to the "yellow light" approach in motorsports to slow down traffic, might be a better way to achieve the same goal. To enable this objective, we study a number of indicators that could foresee hazards in market conditions and explore options to confirm such predictions. Our tests confirm that Volume Synchronized Probability of Informed Trading (VPIN) and a volume-based version of the Herfindahl-Hirschman Index (HHI) for measuring market fragmentation can indeed give strong signals ahead of the Flash Crash event on May 6, 2010. This is a preliminary step toward a full-fledged early-warning system for unusual market conditions.
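
    Of the two indicators named above, the volume HHI is simple to state: it is the sum of squared volume shares across trading venues. A toy illustration in C with invented venue volumes follows; a production indicator would of course be computed over rolling windows of real market data.

    /* Toy computation of a volume Herfindahl-Hirschman Index: the sum of
     * squared volume shares across trading venues. Venue volumes here
     * are invented illustration data. */
    #include <stdio.h>

    double volume_hhi(const double *volume, int n)
    {
        double total = 0.0, hhi = 0.0;
        for (int i = 0; i < n; i++)
            total += volume[i];
        if (total <= 0.0)
            return 0.0;
        for (int i = 0; i < n; i++) {
            double share = volume[i] / total;
            hhi += share * share; /* 1/n (fragmented) up to 1 (concentrated) */
        }
        return hhi;
    }

    int main(void)
    {
        double venues[4] = {5.0e6, 3.0e6, 1.5e6, 0.5e6};
        printf("volume HHI = %.3f\n", volume_hhi(venues, 4));
        return 0;
    }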

  9. A fast random number generator for the Intel Paragon supercomputer

    Science.gov (United States)

    Gutbrod, F.

    1995-06-01

    A pseudo-random number generator is presented which makes optimal use of the architecture of the i860 microprocessor and which is expected to have a very long period. It is therefore a good candidate for use on the parallel supercomputer Paragon XP. In the assembler version, it needs 6.4 cycles for a REAL*4 random number. There is a FORTRAN routine which yields identical numbers up to rare and minor rounding discrepancies, and it needs 28 cycles. The FORTRAN performance on other microprocessors is somewhat better. Arguments for the quality of the generator and some numerical tests are given.
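
    The abstract does not spell out the algorithm, so the sketch below is a period-typical illustration only, not the generator from the paper: a small additive lagged Fibonacci generator with the classic (24, 55) lags.

    /* Illustrative additive lagged Fibonacci generator,
     * x[n] = x[n-24] + x[n-55] mod 2^32. Not the paper's algorithm. */
    #include <stdint.h>
    #include <stdio.h>

    #define SHORT_LAG 24
    #define LONG_LAG  55

    static uint32_t state[LONG_LAG];
    static int pos = 0;

    static void lfg_seed(uint32_t seed)
    {
        /* Fill the lag table with a simple LCG; any decent seeder works. */
        for (int i = 0; i < LONG_LAG; i++) {
            seed = seed * 1664525u + 1013904223u;
            state[i] = seed;
        }
    }

    static uint32_t lfg_next(void)
    {
        int j = pos - SHORT_LAG;
        if (j < 0)
            j += LONG_LAG;
        state[pos] += state[j];           /* addition wraps mod 2^32 */
        uint32_t r = state[pos];
        if (++pos == LONG_LAG)
            pos = 0;
        return r;
    }

    int main(void)
    {
        lfg_seed(12345u);
        for (int i = 0; i < 4; i++)
            printf("%08x\n", lfg_next()); /* uniform 32-bit draws */
        return 0;
    }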

  10. Development of a Cloud Resolving Model for Heterogeneous Supercomputers

    Science.gov (United States)

    Sreepathi, S.; Norman, M. R.; Pal, A.; Hannah, W.; Ponder, C.

    2017-12-01

    A cloud resolving climate model is needed to reduce major systematic errors in climate simulations due to structural uncertainty in numerical treatments of convection - such as convective storm systems. This research describes the porting effort to enable the SAM (System for Atmosphere Modeling) cloud resolving model on heterogeneous supercomputers using GPUs (Graphical Processing Units). We have isolated a standalone configuration of SAM that is targeted to be integrated into the DOE ACME (Accelerated Climate Modeling for Energy) Earth System model. We have identified key computational kernels from the model and offloaded them to a GPU using the OpenACC programming model. Furthermore, we are investigating various optimization strategies intended to enhance GPU utilization, including loop fusion/fission, coalesced data access and loop refactoring to a higher abstraction level. We will present early performance results, lessons learned, and optimization strategies. The computational platform used in this study is the Summitdev system, an early testbed that is one generation removed from Summit, the next leadership class supercomputer at Oak Ridge National Laboratory. The system contains 54 nodes wherein each node has 2 IBM POWER8 CPUs and 4 NVIDIA Tesla P100 GPUs. This work is part of a larger project, the ACME-MMF component of the U.S. Department of Energy (DOE) Exascale Computing Project. The ACME-MMF approach addresses structural uncertainty in cloud processes by replacing traditional parameterizations with cloud resolving "superparameterization" within each grid cell of the global climate model. Superparameterization dramatically increases arithmetic intensity, making the MMF approach an ideal strategy to achieve good performance on emerging exascale computing architectures. The goal of the project is to integrate superparameterization into ACME, and explore its full potential to scientifically and computationally advance climate simulation and prediction.
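
    A minimal sketch of the OpenACC offload pattern described above: a fused column-physics loop moved to the GPU. The function, array names, sizes, and toy physics are invented, not taken from the SAM source.

    /* Sketch of an OpenACC kernel offload: collapse(2) exposes
     * ncol*nlev-way parallelism to the device; the physics is a toy
     * latent-heating increment, not SAM code. */
    void heat_columns(int ncol, int nlev,
                      double *restrict t, const double *restrict q)
    {
        #pragma acc parallel loop collapse(2) \
                copy(t[0:ncol*nlev]) copyin(q[0:ncol*nlev])
        for (int i = 0; i < ncol; i++) {
            for (int k = 0; k < nlev; k++) {
                int idx = i * nlev + k;
                t[idx] += (2.5e6 / 1004.0) * q[idx]; /* toy latent heating */
            }
        }
    }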

  11. Performance characteristics of hybrid MPI/OpenMP implementations of NAS parallel benchmarks SP and BT on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2011-01-01

    The NAS Parallel Benchmarks (NPB) are well-known applications with fixed algorithms for evaluating parallel systems and tools. Multicore supercomputers provide a natural programming paradigm for hybrid programs, whereby OpenMP can be used for data sharing among the cores that comprise a node and MPI can be used for communication between nodes. In this paper, we use the SP and BT benchmarks of MPI NPB 3.3 as a basis for a comparative approach to implement hybrid MPI/OpenMP versions of SP and BT. In particular, we compare the performance of the hybrid SP and BT with their MPI counterparts on large-scale multicore supercomputers. Our performance results indicate that the hybrid SP outperforms the MPI SP by up to 20.76%, and the hybrid BT outperforms the MPI BT by up to 8.58%, on up to 10,000 cores on BlueGene/P at Argonne National Laboratory and Jaguar (Cray XT4/5) at Oak Ridge National Laboratory. We also use performance tools and MPI trace libraries available on these supercomputers to further investigate the performance characteristics of the hybrid SP and BT.

  12. Performance characteristics of hybrid MPI/OpenMP implementations of NAS parallel benchmarks SP and BT on large-scale multicore supercomputers

    KAUST Repository

    Wu, Xingfu

    2011-03-29

    The NAS Parallel Benchmarks (NPB) are well-known applications with fixed algorithms for evaluating parallel systems and tools. Multicore supercomputers provide a natural programming paradigm for hybrid programs, whereby OpenMP can be used for data sharing among the cores that comprise a node and MPI can be used for communication between nodes. In this paper, we use the SP and BT benchmarks of MPI NPB 3.3 as a basis for a comparative approach to implement hybrid MPI/OpenMP versions of SP and BT. In particular, we compare the performance of the hybrid SP and BT with their MPI counterparts on large-scale multicore supercomputers. Our performance results indicate that the hybrid SP outperforms the MPI SP by up to 20.76%, and the hybrid BT outperforms the MPI BT by up to 8.58%, on up to 10,000 cores on BlueGene/P at Argonne National Laboratory and Jaguar (Cray XT4/5) at Oak Ridge National Laboratory. We also use performance tools and MPI trace libraries available on these supercomputers to further investigate the performance characteristics of the hybrid SP and BT.
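
    A minimal sketch of the hybrid pattern the paper evaluates, with MPI between nodes and OpenMP across the cores of each node; the per-rank loop is a placeholder, not NPB SP/BT code.

    /* Hybrid MPI/OpenMP skeleton: one MPI rank per node, OpenMP threads
     * within the node, MPI_Reduce to combine partial results. */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double local = 0.0;
        /* OpenMP shares this loop among the cores of one node. */
        #pragma omp parallel for reduction(+ : local)
        for (int i = 0; i < 1000000; i++)
            local += 1.0 / (1.0 + i);   /* placeholder per-node compute */

        double global = 0.0;
        /* MPI combines the per-node partial results. */
        MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("sum = %f, threads per rank = %d\n",
                   global, omp_get_max_threads());
        MPI_Finalize();
        return 0;
    }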

  13. 33 CFR 334.870 - San Diego Harbor, Calif.; restricted area.

    Science.gov (United States)

    2010-07-01

    ..., Calif.; restricted area. (a) Restricted area at Bravo Pier, Naval Air Station—(1) The area. The water of... delay or loitering. On occasion, access to the bait barges may be delayed for intermittent periods not... Supply Center Pier—(1) The area. The waters of San Diego Bay extending approximately 100 feet out from...

  14. Mining, conflict and local brokers: Minera San Xavier in Cerro de San Pedro, México

    Directory of Open Access Journals (Sweden)

    Hernán Horacio Schiaffini

    2011-12-01

    Full Text Available This paper investigates the instances of mediation involved in articulating large-scale economic processes with their local implementation. Based on the conflict in the municipality of Cerro de San Pedro (San Luis Potosí, México) between the company Minera San Xavier and the Frente Amplio Opositor (FAO), we apply an ethnographic approach to describe the local structures of political mediation and to analyze their practices and rationality. The work thus demonstrates the importance of local political factors in the relationships between state, company and population.

  15. Simulation of x-rays in refractive structure by the Monte Carlo method using the supercomputer SKIF

    International Nuclear Information System (INIS)

    Yaskevich, Yu.R.; Kravchenko, O.I.; Soroka, I.I.; Chembrovskij, A.G.; Kolesnik, A.S.; Serikova, N.V.; Petrov, P.V.; Kol'chevskij, N.N.

    2013-01-01

    Software 'Xray-SKIF' for the simulation of X-rays in refractive structures by the Monte Carlo method using the supercomputer SKIF BSU was developed. The program generates a large number of rays propagated from a source to the refractive structure. Ray trajectories are calculated under the assumption of geometrical optics, and absorption is calculated for each ray inside the refractive structure. Dynamic arrays store the calculated ray parameters, which allows the X-ray field distribution to be restored very quickly for different detector positions. It was found that increasing the number of processors leads to a proportional decrease in calculation time: simulating 10^8 X-rays on the supercomputer with 1 and 30 processors took 3 hours and 6 minutes, respectively. 10^9 X-rays were calculated with 'Xray-SKIF', which allows the X-ray field behind the refractive structure to be reconstructed with a spatial resolution of 1 micron. (authors)
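
    As an illustration of the Monte Carlo style of calculation described, the toy program below traces rays through a uniformly absorbing slab by sampling free paths from the exponential attenuation law; the geometry and attenuation length are invented, and refraction is not modeled.

    /* Toy Monte Carlo absorption tally for rays crossing a slab.
     * Parameters are invented illustration values. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    int main(void)
    {
        const long nrays = 1000000;       /* number of sampled rays */
        const double mu = 1.0 / 50e-6;    /* attenuation coefficient, 1/m */
        const double thickness = 100e-6;  /* slab thickness, m */
        long transmitted = 0;

        srand(42);
        for (long i = 0; i < nrays; i++) {
            /* u in (0,1); free path is exponentially distributed. */
            double u = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
            double path = -log(u) / mu;
            if (path > thickness)
                transmitted++;            /* ray exits without absorption */
        }
        printf("transmission = %.4f (Beer-Lambert predicts %.4f)\n",
               (double)transmitted / nrays, exp(-mu * thickness));
        return 0;
    }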

  16. Characterization of aerosols in the Metropolitan Area of San Jose

    International Nuclear Information System (INIS)

    Mejias Perez, J.A.

    1997-07-01

    The objective of the present study was to build a profile of particulate matter contamination and to characterize the aerosols collected in the Metropolitan Area of San Jose (Costa Rica). To that end, a sampling campaign was carried out at three points in the city of San Jose, differentiated by their degree of activity: the center of San Jose (Central Fire Station), San Isidro de Coronado (canton of Vasquez de Coronado, Municipality) and Escazu (Municipality). The campaign ran from April 4 to July 4, 1996 (summer-winter transition), in two 8-hour periods: 8 a.m. to 4 p.m. and 8 p.m. to 4 a.m. The aerosols were collected with Gent PM-10 samplers on polycarbonate filters of 0.4 μm and 8 μm in cascade, at an average flow of 15 L/min, and the average composition of the aerosols present was determined. The concentrations of most anions were obtained by high-resolution ion chromatography, and those of the main cations by atomic absorption spectrophotometry with electrothermal atomization. The spatial and temporal variations of the concentrations and their correlation with meteorological variables were evaluated. (S. Grainger)

  17. 76 FR 1386 - Safety Zone; Centennial of Naval Aviation Kickoff, San Diego Bay, San Diego, CA

    Science.gov (United States)

    2011-01-10

    ...-AA00 Safety Zone; Centennial of Naval Aviation Kickoff, San Diego Bay, San Diego, CA AGENCY: Coast... zone on the navigable waters of San Diego Bay in San Diego, CA in support of the Centennial of Naval... February 12, 2010, the Centennial of Naval Aviation Kickoff will take place in San Diego Bay. In support of...

  18. San Marino.

    Science.gov (United States)

    1985-02-01

    San Marino, an independent republic located in north central Italy, in 1983 had a population of 22,206 growing at an annual rate of 0.9%. The literacy rate is 97% and the infant mortality rate is 9.6/1000. The terrain is mountainous and the climate is moderate. According to local tradition, San Marino was founded by a Christian stonecutter in the 4th century A.D. as a refuge against religious persecution. Its recorded history began in the 9th century, and it has survived assaults on its independence by the papacy, the Malatesta lords of Rimini, Cesare Borgia, Napoleon, and Mussolini. An 1862 treaty with the newly formed Kingdom of Italy has been periodically renewed and amended. The present government is an alliance between the socialists and communists. San Marino has had its own statutes and governmental institutions since the 11th century. Legislative authority at present is vested in a 60-member unicameral parliament. Executive authority is exercised by the 11-member Congress of State, the members of which head the various administrative departments of the government. The posts are divided among the parties which form the coalition government. Judicial authority is partly exercised by Italian magistrates in civil and criminal cases. San Marino's policies are tied to Italy's, and political organizations and labor unions active in Italy are also active in San Marino. Since World War II, there has been intense rivalry between 2 political coalitions, the Popular Alliance, composed of the Christian Democratic Party and the Independent Social Democratic Party, and the Liberty Committee, a coalition of the Communist Party and the Socialist Party. San Marino's gross domestic product was $137 million and its per capita income was $6290 in 1980. The principal economic activities are farming and livestock raising, along with some light manufacturing. Foreign transactions are dominated by tourism. The government derives most of its revenue from the sale of postage stamps to

  19. San Francisco District Laboratory (SAN)

    Data.gov (United States)

    Federal Laboratory Consortium — Program CapabilitiesFood Analysis SAN-DO Laboratory has an expert in elemental analysis who frequently performs field inspections of materials. A recently acquired...

  20. The KhoeSan Early Learning Center Pilot Project: Negotiating Power and Possibility in a South African Institute of Higher Learning

    Science.gov (United States)

    De Wet, Priscilla

    2011-01-01

    As we search for a new paradigm in post-apartheid South Africa, the knowledge base and worldview of the KhoeSan first Indigenous peoples is largely missing. The South African government has established various mechanisms as agents for social change. Institutions of higher learning have implemented transformation programs. KhoeSan peoples, however,…

  1. Solar Feasibility Study May 2013 - San Carlos Apache Tribe

    Energy Technology Data Exchange (ETDEWEB)

    Rapp, Jim [Parametrix; Duncan, Ken [San Carlos Apache Tribe; Albert, Steve [Parametrix

    2013-05-01

    The San Carlos Apache Tribe (Tribe) in the interests of strengthening tribal sovereignty, becoming more energy self-sufficient, and providing improved services and economic opportunities to tribal members and San Carlos Apache Reservation (Reservation) residents and businesses, has explored a variety of options for renewable energy development. The development of renewable energy technologies and generation is consistent with the Tribe’s 2011 Strategic Plan. This Study assessed the possibilities for both commercial-scale and community-scale solar development within the southwestern portions of the Reservation around the communities of San Carlos, Peridot, and Cutter, and in the southeastern Reservation around the community of Bylas. Based on the lack of any commercial-scale electric power transmission between the Reservation and the regional transmission grid, Phase 2 of this Study greatly expanded consideration of community-scale options. Three smaller sites (Point of Pines, Dudleyville/Winkleman, and Seneca Lake) were also evaluated for community-scale solar potential. Three building complexes were identified within the Reservation where the development of site-specific facility-scale solar power would be the most beneficial and cost-effective: Apache Gold Casino/Resort, Tribal College/Skill Center, and the Dudleyville (Winkleman) Casino.

  2. Choto-san in the treatment of vascular dementia: a double-blind, placebo-controlled study.

    Science.gov (United States)

    Terasawa, K; Shimada, Y; Kita, T; Yamamoto, T; Tosa, H; Tanaka, N; Saito, Y; Kanaki, E; Goto, S; Mizushima, N; Fujioka, M; Takase, S; Seki, H; Kimura, I; Ogawa, T; Nakamura, S; Araki, G; Maruyama, I; Maruyama, Y; Takaori, S

    1997-03-01

    In an earlier placebo-controlled study, we demonstrated that a kampo (Japanese herbal) medicine called Choto-san (Diao-Teng-San in Chinese) was effective in treating vascular dementia. To evaluate its efficacy using more objective criteria, we carried out a multi-center, double-blind study of Choto-san extract (7.5 g/day) and a placebo, each given three times a day for 12 weeks to patients suffering from this condition. The study enrolled and analyzed 139 patients, 50 males and 89 females, with a mean age of 76.6 years. Choto-san was statistically superior to the placebo in global improvement rating, utility rating, global improvement rating of subjective symptoms, global improvement rating of psychiatric symptoms and global improvement rating of disturbance in daily living activities. Such items as spontaneity of conversation, lack of facial expression, decline in simple mathematical ability, global intellectual ability, nocturnal delirium, sleep disturbance, hallucination or delusion, and putting on and taking off clothes were significantly improved at one or more evaluation points in those taking Choto-san compared to those taking the placebo. Furthermore, the change from baseline in the revised Hasegawa dementia scale tended to be higher in the Choto-san group than in the placebo group, without reaching statistical significance. These results suggest that Choto-san is effective in the treatment of vascular dementia. Copyright © 1997 Gustav Fischer Verlag. Published by Elsevier GmbH. All rights reserved.

  3. Communication Characterization and Optimization of Applications Using Topology-Aware Task Mapping on Large Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sreepathi, Sarat [ORNL]; D'Azevedo, Eduardo [ORNL]; Philip, Bobby [ORNL]; Worley, Patrick H [ORNL]

    2016-01-01

    On large supercomputers, the job scheduling systems may assign a non-contiguous node allocation for user applications depending on available resources. With parallel applications using MPI (Message Passing Interface), the default process ordering does not take into account the actual physical node layout available to the application. This contributes to non-locality in terms of physical network topology and impacts the communication performance of the application. In order to mitigate such performance penalties, this work describes techniques to identify a suitable task mapping that takes the layout of the allocated nodes as well as the application's communication behavior into account. During the first phase of this research, we instrumented and collected performance data to characterize the communication behavior of critical US DOE (United States Department of Energy) applications using an augmented version of the mpiP tool. Subsequently, we developed several reordering methods (spectral bisection, neighbor join tree, etc.) to combine node layout and application communication data for optimized task placement. We developed a tool called mpiAproxy to facilitate detailed evaluation of the various reordering algorithms without requiring full application executions. This work presents a comprehensive performance evaluation (14,000 experiments) of the various task mapping techniques in lowering communication costs on Titan, the leadership class supercomputer at Oak Ridge National Laboratory.
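
    The reordering methods named above all try to minimize essentially the same objective: traffic between each pair of ranks weighted by the network distance between the nodes they are mapped to (a hop-bytes style cost). A sketch of that cost function, assuming the communication matrix comes from mpiP-style profiling and the distance callback from the allocated node layout; all names here are illustrative.

    /* Hop-bytes style cost of a candidate task mapping: bytes exchanged
     * by each rank pair, weighted by the distance of their nodes. */
    double mapping_cost(int nranks,
                        const double *comm,        /* nranks*nranks bytes matrix */
                        const int *placement,      /* rank -> node id */
                        int (*hops)(int a, int b)) /* node distance callback */
    {
        double cost = 0.0;
        for (int i = 0; i < nranks; i++)
            for (int j = i + 1; j < nranks; j++)
                cost += comm[i * nranks + j] * hops(placement[i], placement[j]);
        return cost;
    }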

  4. Plasma turbulence calculations on supercomputers

    International Nuclear Information System (INIS)

    Carreras, B.A.; Charlton, L.A.; Dominguez, N.; Drake, J.B.; Garcia, L.; Leboeuf, J.N.; Lee, D.K.; Lynch, V.E.; Sidikman, K.

    1991-01-01

    Although the single-particle picture of magnetic confinement is helpful in understanding some basic physics of plasma confinement, it does not give a full description. Collective effects dominate plasma behavior. Any analysis of plasma confinement requires a self-consistent treatment of the particles and fields. The general picture is further complicated because the plasma, in general, is turbulent. The study of fluid turbulence is a rather complex field by itself. In addition to the difficulties of classical fluid turbulence, plasma turbulence studies face the problems caused by the induced magnetic turbulence, which couples back to the fluid. Since the fluid is not a perfect conductor, this turbulence can lead to changes in the topology of the magnetic field structure, causing the magnetic field lines to wander radially. Because the plasma fluid flows along field lines, they carry the particles with them, and this enhances the losses caused by collisions. The changes in topology are critical for the plasma confinement. The study of plasma turbulence and the concomitant transport is a challenging problem. Because of the importance of solving the plasma turbulence problem for controlled thermonuclear research, the high complexity of the problem, and the necessity of attacking the problem with supercomputers, the study of plasma turbulence in magnetic confinement devices is a Grand Challenge problem

  5. 76 FR 9709 - Water Quality Challenges in the San Francisco Bay/Sacramento-San Joaquin Delta Estuary

    Science.gov (United States)

    2011-02-22

    ... Water Quality Challenges in the San Francisco Bay/Sacramento-San Joaquin Delta Estuary AGENCY... the San Francisco Bay/ Sacramento-San Joaquin Delta Estuary (Bay Delta Estuary) in California. EPA is... programs to address recent significant declines in multiple aquatic species in the Bay Delta Estuary. EPA...

  6. Perspective View, San Andreas Fault

    Science.gov (United States)

    2000-01-01

    The prominent linear feature straight down the center of this perspective view is California's famous San Andreas Fault. The image, created with data from NASA's Shuttle Radar Topography Mission (SRTM), will be used by geologists studying fault dynamics and landforms resulting from active tectonics. This segment of the fault lies west of the city of Palmdale, Calif., about 100 kilometers (about 60 miles) northwest of Los Angeles. The fault is the active tectonic boundary between the North American plate on the right and the Pacific plate on the left. Relative to each other, the Pacific plate is moving away from the viewer and the North American plate is moving toward the viewer along what geologists call a right lateral strike-slip fault. Two large mountain ranges are visible, the San Gabriel Mountains on the left and the Tehachapi Mountains in the upper right. Another fault, the Garlock Fault, lies at the base of the Tehachapis; the San Andreas and the Garlock Faults meet in the center distance near the town of Gorman. In the distance, over the Tehachapi Mountains, is California's Central Valley. Along the foothills in the right hand part of the image is the Antelope Valley, including the Antelope Valley California Poppy Reserve. The data used to create this image were acquired by SRTM aboard the Space Shuttle Endeavour, launched on February 11, 2000. This type of display adds the important dimension of elevation to the study of land use and environmental processes as observed in satellite images. The perspective view was created by draping a Landsat satellite image over an SRTM elevation model. Topography is exaggerated 1.5 times vertically. The Landsat image was provided by the United States Geological Survey's Earth Resources Observation Systems (EROS) Data Center, Sioux Falls, South Dakota. SRTM uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour

  7. Sensitive Wildlife - Center for Natural Lands Management [ds431

    Data.gov (United States)

    California Natural Resource Agency — This dataset represents sensitive wildlife data collected for the Center for Natural Lands Management (CNLM) at dedicated nature preserves in San Diego County,...

  8. 33 CFR 165.776 - Security Zone; Coast Guard Base San Juan, San Juan Harbor, Puerto Rico

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Security Zone; Coast Guard Base San Juan, San Juan Harbor, Puerto Rico 165.776 Section 165.776 Navigation and Navigable Waters COAST... Guard District § 165.776 Security Zone; Coast Guard Base San Juan, San Juan Harbor, Puerto Rico (a...

  9. Storage-Intensive Supercomputing Benchmark Study

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063, Storage Intensive Supercomputing, during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool, iotrace, developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of a software-only implementation to a GPU-accelerated one. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the 40GB NAND Flash parallel disk array, the Fusion-io. The Fusion system specs are as follows
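
    The iotrace tool itself is not described further in this record; it works at the system level on Linux. As a loose, hedged illustration of the idea of capturing a per-process I/O profile, the following Python toy (an assumption, not the LLNL implementation) wraps the built-in open() to count bytes read and written per file.

    ```python
    # Minimal sketch of file-I/O profiling in the spirit of an iotrace-style tool:
    # wrap open() so bytes moved per file are counted and dumped at exit.
    import builtins, atexit
    from collections import defaultdict

    io_bytes = defaultdict(lambda: [0, 0])  # path -> [bytes_read, bytes_written]
    _real_open = builtins.open

    class TracedFile:
        def __init__(self, f, path):
            self._f, self._path = f, path
        def read(self, *a):
            data = self._f.read(*a)
            io_bytes[self._path][0] += len(data)
            return data
        def write(self, data):
            io_bytes[self._path][1] += len(data)
            return self._f.write(data)
        def __getattr__(self, name):           # delegate close(), seek(), ...
            return getattr(self._f, name)
        def __enter__(self):
            return self
        def __exit__(self, *exc):
            self._f.close()

    def traced_open(path, mode="r", *args, **kwargs):
        return TracedFile(_real_open(path, mode, *args, **kwargs), path)

    builtins.open = traced_open
    atexit.register(lambda: print(dict(io_bytes)))   # the "I/O profile"
    ```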

  10. Parallel Multivariate Spatio-Temporal Clustering of Large Ecological Datasets on Hybrid Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Sreepathi, Sarat [ORNL]; Kumar, Jitendra [ORNL]; Mills, Richard T. [Argonne National Laboratory]; Hoffman, Forrest M. [ORNL]; Sripathi, Vamsi [Intel Corporation]; Hargrove, William Walter [United States Department of Agriculture (USDA), United States Forest Service (USFS)]

    2017-09-01

    A proliferation of data from vast networks of remote sensing platforms (satellites, unmanned aircraft systems (UAS), airborne, etc.), observational facilities (meteorological, eddy covariance, etc.), state-of-the-art sensors, and simulation models offers unprecedented opportunities for scientific discovery. Unsupervised classification is a widely applied data mining approach to derive insights from such data. However, classification of very large data sets is a complex computational problem that requires efficient numerical algorithms and implementations on high performance computing (HPC) platforms. Additionally, increasing power, space, cooling and efficiency requirements have led to the deployment of hybrid supercomputing platforms with complex architectures and memory hierarchies like the Titan system at Oak Ridge National Laboratory. The advent of such accelerated computing architectures offers new challenges and opportunities for big data analytics in general and, specifically, for large scale cluster analysis in our case. Although there is an existing body of work on parallel cluster analysis, those approaches do not fully meet the needs imposed by the nature and size of our large data sets. Moreover, they had scaling limitations and were mostly limited to traditional distributed memory computing platforms. We present a parallel Multivariate Spatio-Temporal Clustering (MSTC) technique based on k-means cluster analysis that can target hybrid supercomputers like Titan. We developed a hybrid MPI, CUDA and OpenACC implementation that can utilize both CPU and GPU resources on computational nodes. We describe performance results on Titan that demonstrate the scalability and efficacy of our approach in processing large ecological data sets.
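
    The core of a distributed k-means step of the kind MSTC builds on is: assign local observations to centroids, then reduce partial centroid sums across all ranks. The sketch below shows that pattern with mpi4py alone (the paper's code adds CUDA/OpenACC offload, not reproduced here); data sizes and values are invented for illustration.

    ```python
    # Rough sketch of one distributed k-means iteration (MPI-only toy).
    import numpy as np
    from mpi4py import MPI

    def kmeans_step(local_X, centroids, comm):
        k, d = centroids.shape
        # Assign each local observation to its nearest centroid.
        dists = np.linalg.norm(local_X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Accumulate per-cluster sums and counts locally ...
        sums = np.zeros((k, d)); counts = np.zeros(k)
        for j in range(k):
            members = local_X[labels == j]
            sums[j] = members.sum(axis=0); counts[j] = len(members)
        # ... then reduce across all ranks to form the global centroid update.
        gsums = np.empty_like(sums); gcounts = np.empty_like(counts)
        comm.Allreduce(sums, gsums, op=MPI.SUM)
        comm.Allreduce(counts, gcounts, op=MPI.SUM)
        return gsums / np.maximum(gcounts, 1)[:, None]

    comm = MPI.COMM_WORLD
    rng = np.random.default_rng(comm.Get_rank())
    X = rng.normal(size=(1000, 4))            # this rank's share of the data
    c = comm.bcast(rng.normal(size=(8, 4)) if comm.Get_rank() == 0 else None)
    for _ in range(10):
        c = kmeans_step(X, c, comm)
    ```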

  11. High temporal resolution mapping of seismic noise sources using heterogeneous supercomputers

    Science.gov (United States)

    Gokhberg, Alexey; Ermert, Laura; Paitz, Patrick; Fichtner, Andreas

    2017-04-01

    Time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems. Significant interest in seismic noise source maps with high temporal resolution (days) is expected to come from a number of domains, including natural resources exploration, analysis of active earthquake fault zones and volcanoes, as well as geothermal and hydrocarbon reservoir monitoring. Currently, knowledge of noise sources is insufficient for high-resolution subsurface monitoring applications. Near-real-time seismic data, as well as advanced imaging methods to constrain seismic noise sources, have recently become available. These methods are based on the massive cross-correlation of seismic noise records from all available seismic stations in the region of interest and are therefore very computationally intensive. Heterogeneous massively parallel supercomputing systems introduced in recent years combine conventional multi-core CPUs with GPU accelerators and provide an opportunity for a manifold increase in computing performance. Therefore, these systems represent an efficient platform for implementation of a noise source mapping solution. We present the first results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service that provides seismic noise source maps for Central Europe with high temporal resolution (days to a few weeks depending on frequency and data availability). The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept in order to provide interested external researchers regular access to the noise source maps. The solution architecture includes the following sub-systems: (1) data acquisition responsible for
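
    The computational kernel underlying such imaging is massive cross-correlation of noise records. A minimal, hedged sketch of one FFT-based correlation between two synthetic traces follows; it merely stands in for, and is far simpler than, the production processing on "Piz Daint".

    ```python
    # FFT-based cross-correlation of two continuous noise records (synthetic data).
    import numpy as np

    def noise_crosscorrelation(trace_a, trace_b, max_lag):
        n = len(trace_a) + len(trace_b) - 1
        nfft = 1 << (n - 1).bit_length()              # next power of two
        # Correlation via the frequency domain: conj(FFT(a)) * FFT(b).
        spec = np.conj(np.fft.rfft(trace_a, nfft)) * np.fft.rfft(trace_b, nfft)
        cc = np.fft.irfft(spec, nfft)
        # Keep lags in [-max_lag, +max_lag], negative lags wrapped to the front.
        return np.roll(cc, max_lag)[: 2 * max_lag + 1]

    rng = np.random.default_rng(0)
    a = rng.normal(size=86400)                        # one "day" of noise at 1 Hz
    b = np.roll(a, 120) + 0.5 * rng.normal(size=86400)  # delayed, noisy copy
    cc = noise_crosscorrelation(a, b, max_lag=600)
    print(cc.argmax() - 600)                          # recovered delay, ~120 samples
    ```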

  12. Enhanced Preliminary Assessment Report: Presidio of San Francisco Military Reservation, San Francisco, California

    Science.gov (United States)

    1989-11-01

  13. for presence of hookworms (Uncinaria spp.) on San Miguel Island, California

    Directory of Open Access Journals (Sweden)

    Lyons E. T.

    2016-06-01

    Necropsy and extensive parasitological examination of dead northern elephant seal (NES) pups were performed on San Miguel Island, California, in February 2015. The main interest in the current study was to determine if hookworms were present in NESs on San Miguel Island, where two hookworm species of the genus Uncinaria are known to be present - Uncinaria lyonsi in California sea lions and Uncinaria lucasi in northern fur seals. Hookworms were not detected in any of the NESs examined: stomachs or intestines of 16 pups, blubber of 13 pups and blubber of one bull. The results obtained in the present study of NESs on San Miguel Island, plus similar findings at Año Nuevo State Reserve and The Marine Mammal Center, provide a strong indication that NESs are not appropriate hosts for Uncinaria spp. Hookworm free-living third-stage larvae, developed from eggs of California sea lions and northern fur seals, were recovered from sand. It seems that at this time, further search for hookworms in NESs would be nonproductive.

  14. Hippotherapy: Remuneration issues impair the offering of this therapeutic strategy at Southern California rehabilitation centers.

    Science.gov (United States)

    Pham, Christine; Bitonte, Robert

    2016-04-06

    Hippotherapy is the use of equine movement in physical, occupational, or speech therapy in order to obtain functional improvements in patients. Studies show improvement in motor function and sensory processing for patients with a variety of neuromuscular disabilities, developmental disorders, or skeletal impairments as a result of using hippotherapy. The primary objective of this study is to identify the pervasiveness of hippotherapy in Southern California, and any factors that impair its utilization. One hundred and fifty-two rehabilitation centers in the Southern California counties of Los Angeles, San Diego, Orange, Riverside, San Bernardino, San Luis Obispo, Santa Barbara, Ventura, and Kern were identified and surveyed to ascertain whether hippotherapy is utilized and, if not, why not. Through a review of the forty facilities that responded to our inquiry, our study indicates that the majority of rehabilitation centers are familiar with hippotherapy; however, only seven reported that hippotherapy is indeed available as an option in therapy at their centers. It is concluded that hippotherapy, although applicable to a broad array of physical and sensory disorders, is limited in its utilization primarily due to remuneration issues.

  15. The Eastern California Shear Zone as the northward extension of the southern San Andreas Fault

    Science.gov (United States)

    Thatcher, Wayne R.; Savage, James C.; Simpson, Robert W.

    2016-01-01

    Cluster analysis offers an agnostic way to organize and explore features of the current GPS velocity field without reference to geologic information or physical models, using only information contained in the velocity field itself. We have used cluster analysis of the Southern California Global Positioning System (GPS) velocity field to determine the partitioning of Pacific-North America relative motion onto major regional faults. Our results indicate the large-scale kinematics of the region is best described with two boundaries of high velocity gradient, one centered on the Coachella section of the San Andreas Fault and the Eastern California Shear Zone and the other defined by the San Jacinto Fault south of Cajon Pass and the San Andreas Fault farther north. The ~120 km long strand of the San Andreas between Cajon Pass and Coachella Valley (often termed the San Bernardino and San Gorgonio sections) is thus currently of secondary importance and carries lesser amounts of slip over most or all of its length. We show these first-order results are present in maps of the smoothed GPS velocity field itself. They are also generally consistent with currently available, loosely bounded geologic and geodetic fault slip rate estimates that alone do not provide useful constraints on the large-scale partitioning we show here. Our analysis does not preclude the existence of smaller blocks and more block boundaries in Southern California. However, attempts to identify smaller blocks along and adjacent to the San Gorgonio section were not successful.

  16. A case for historic joint rupture of the San Andreas and San Jacinto faults.

    Science.gov (United States)

    Lozos, Julian C

    2016-03-01

    The San Andreas fault is considered to be the primary plate boundary fault in southern California and the most likely fault to produce a major earthquake. I use dynamic rupture modeling to show that the San Jacinto fault is capable of rupturing along with the San Andreas in a single earthquake, and interpret these results along with existing paleoseismic data and historic damage reports to suggest that this has likely occurred in the historic past. In particular, I find that paleoseismic data and historic observations for the ~M7.5 earthquake of 8 December 1812 are best explained by a rupture that begins on the San Jacinto fault and propagates onto the San Andreas fault. This precedent carries the implications that similar joint ruptures are possible in the future and that the San Jacinto fault plays a more significant role in seismic hazard in southern California than previously considered. My work also shows how physics-based modeling can be used for interpreting paleoseismic data sets and understanding prehistoric fault behavior.

  17. A case for historic joint rupture of the San Andreas and San Jacinto faults

    Science.gov (United States)

    Lozos, Julian C.

    2016-01-01

    The San Andreas fault is considered to be the primary plate boundary fault in southern California and the most likely fault to produce a major earthquake. I use dynamic rupture modeling to show that the San Jacinto fault is capable of rupturing along with the San Andreas in a single earthquake, and interpret these results along with existing paleoseismic data and historic damage reports to suggest that this has likely occurred in the historic past. In particular, I find that paleoseismic data and historic observations for the ~M7.5 earthquake of 8 December 1812 are best explained by a rupture that begins on the San Jacinto fault and propagates onto the San Andreas fault. This precedent carries the implications that similar joint ruptures are possible in the future and that the San Jacinto fault plays a more significant role in seismic hazard in southern California than previously considered. My work also shows how physics-based modeling can be used for interpreting paleoseismic data sets and understanding prehistoric fault behavior. PMID:27034977

  18. 76 FR 10945 - San Luis Trust Bank, FSB, San Luis Obispo, CA; Notice of Appointment of Receiver

    Science.gov (United States)

    2011-02-28

    ... DEPARTMENT OF THE TREASURY Office of Thrift Supervision San Luis Trust Bank, FSB, San Luis Obispo, CA; Notice of Appointment of Receiver Notice is hereby given that, pursuant to the authority... appointed the Federal Deposit Insurance Corporation as sole Receiver for San Luis Trust Bank, FSB, San Luis...

  19. Harnessing Petaflop-Scale Multi-Core Supercomputing for Problems in Space Science

    Science.gov (United States)

    Albright, B. J.; Yin, L.; Bowers, K. J.; Daughton, W.; Bergen, B.; Kwan, T. J.

    2008-12-01

    The particle-in-cell kinetic plasma code VPIC has been migrated successfully to the world's fastest supercomputer, Roadrunner, a hybrid multi-core platform built by IBM for the Los Alamos National Laboratory. How this was achieved will be described and examples of state-of-the-art calculations in space science, in particular, the study of magnetic reconnection, will be presented. With VPIC on Roadrunner, we have performed, for the first time, plasma PIC calculations with over one trillion particles, >100× larger than calculations considered "heroic" by community standards. This allows examination of physics at unprecedented scale and fidelity. Roadrunner is an example of an emerging paradigm in supercomputing: the trend toward multi-core systems with deep hierarchies and where memory bandwidth optimization is vital to achieving high performance. Getting VPIC to perform well on such systems is a formidable challenge: the core algorithm is memory bandwidth limited with low compute-to-data ratio and requires random access to memory in its inner loop. That we were able to get VPIC to perform and scale well, achieving >0.374 Pflop/s and linear weak scaling on real physics problems on up to the full 12240-core Roadrunner machine, bodes well for harnessing these machines for our community's needs in the future. Many of the design considerations encountered carry over to other multi-core and accelerated (e.g., via GPU) platforms, and we modified VPIC with flexibility in mind. These will be summarized and strategies for how one might adapt a code for such platforms will be shared. Work performed under the auspices of the U.S. DOE by the LANS LLC Los Alamos National Laboratory. Dr. Bowers is a LANL Guest Scientist; he is presently at D. E. Shaw Research LLC, 120 W 45th Street, 39th Floor, New York, NY 10036.
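
    To make the memory-bandwidth argument concrete, here is a toy 1D particle push (emphatically not VPIC): the gather step performs scattered accesses into the field array for every particle, while the update itself costs only a few flops, so the loop is dominated by memory traffic. All sizes and parameters below are invented.

    ```python
    # Toy 1D particle-in-cell push illustrating the low compute-to-data ratio.
    import numpy as np

    def push(x, v, E_grid, dx, qm, dt, L):
        # Gather: interpolate the grid field to each particle (random access).
        cell = np.floor(x / dx).astype(int) % len(E_grid)
        frac = x / dx - np.floor(x / dx)
        E_p = (1 - frac) * E_grid[cell] + frac * E_grid[(cell + 1) % len(E_grid)]
        # Push: a handful of flops per particle versus all the memory traffic above.
        v += qm * E_p * dt
        x = (x + v * dt) % L          # periodic domain
        return x, v

    rng = np.random.default_rng(1)
    L, n_grid, n_part = 1.0, 64, 100_000
    x = rng.uniform(0, L, n_part)
    v = rng.normal(0, 0.01, n_part)
    E = np.sin(2 * np.pi * np.arange(n_grid) / n_grid)
    for _ in range(100):
        x, v = push(x, v, E, L / n_grid, qm=-1.0, dt=1e-3, L=L)
    ```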

  20. 76 FR 22809 - Safety Zone; Bay Ferry II Maritime Security Exercise; San Francisco Bay, San Francisco, CA

    Science.gov (United States)

    2011-04-25

    ... DEPARTMENT OF HOMELAND SECURITY Coast Guard 33 CFR Part 165 [Docket No. USCG-2011-0196] RIN 1625-AA00 Safety Zone; Bay Ferry II Maritime Security Exercise; San Francisco Bay, San Francisco, CA AGENCY... Security Exercise; San Francisco Bay, San Francisco, CA. (a) Location. The limits of this safety zone...

  1. Integration of Titan supercomputer at OLCF with ATLAS Production System

    Science.gov (United States)

    Barreiro Megino, F.; De, K.; Jha, S.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Padolski, S.; Panitkin, S.; Wells, J.; Wenaus, T.; ATLAS Collaboration

    2017-10-01

    The PanDA (Production and Distributed Analysis) workload management system was developed to meet the scale and complexity of distributed computing for the ATLAS experiment. PanDA-managed resources are distributed worldwide, on hundreds of computing sites, with thousands of physicists accessing hundreds of petabytes of data, and the rate of data processing already exceeds an exabyte per year. While PanDA currently uses more than 200,000 cores at well over 100 Grid sites, future LHC data taking runs will require more resources than Grid computing can possibly provide. Additional computing and storage resources are required. Therefore ATLAS is engaged in an ambitious program to expand the current computing model to include additional resources such as the opportunistic use of supercomputers. In this paper we describe a project aimed at integration of the ATLAS Production System with the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF). The current approach utilizes a modified PanDA Pilot framework for job submission to Titan's batch queues and local data management, with lightweight MPI wrappers to run single-node workloads in parallel on Titan's multi-core worker nodes. It provides for running of standard ATLAS production jobs on unused resources (backfill) on Titan. The system has already allowed ATLAS to collect on Titan millions of core-hours per month and execute hundreds of thousands of jobs, while simultaneously improving Titan's utilization efficiency. We will discuss the details of the implementation, current experience with running the system, as well as future plans aimed at improvements in scalability and efficiency. Notice: This manuscript has been authored by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to
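
    The "lightweight MPI wrapper" pattern mentioned above can be pictured as each rank launching one independent single-node payload, so a serial workload fans out across the allocation. The sketch below shows that pattern with mpi4py and subprocess; the payload command and job list are hypothetical, not the actual PanDA Pilot integration.

    ```python
    # Hedged sketch of an MPI wrapper that fans out serial payloads, one per rank.
    import subprocess, sys
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    jobs = None
    if rank == 0:
        # One command line per rank; in production these would come from the
        # workload manager rather than a hard-coded list ("payload.py" is invented).
        jobs = [["python", "payload.py", f"--task={i}"] for i in range(size)]
    job = comm.scatter(jobs, root=0)

    ret = subprocess.call(job)            # run this rank's payload to completion
    statuses = comm.gather((rank, ret), root=0)
    if rank == 0:
        failed = [r for r, s in statuses if s != 0]
        sys.exit(1 if failed else 0)
    ```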

  2. Use of QUADRICS supercomputer as embedded simulator in emergency management systems

    International Nuclear Information System (INIS)

    Bove, R.; Di Costanzo, G.; Ziparo, A.

    1996-07-01

    The experience gained in implementing MRBT, an atmospheric dispersion model for short-duration releases, on a QUADRICS-Q1 supercomputer is reported. A description of the MRBT model is given first. It is an analytical model for studying the spreading of light gases released into the atmosphere by accidental releases. The solution of the diffusion equation is Gaussian-like and yields the concentration of the released pollutant as a function of space and time. The QUADRICS architecture is then introduced and the implementation of the model is described. Finally, the integration of the QUADRICS-based model as a simulator in an emergency management system is considered.
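
    The record describes an analytical, Gaussian-like solution for a short-duration release. A generic Gaussian puff formula of that family is sketched below; the dispersion widths, release mass, and wind speed are placeholder assumptions, and this is not the MRBT formulation itself.

    ```python
    # A minimal Gaussian puff concentration estimate (illustrative parameters).
    import numpy as np

    def gaussian_puff(x, y, z, t, Q, u, sigma, H=0.0):
        """Concentration of a short-duration release advected at wind speed u.

        Q: released mass; sigma: (sx, sy, sz) dispersion widths at time t;
        H: effective release height. Ground reflection is included via the
        image-source term in z.
        """
        sx, sy, sz = sigma
        norm = Q / ((2 * np.pi) ** 1.5 * sx * sy * sz)
        expx = np.exp(-((x - u * t) ** 2) / (2 * sx ** 2))
        expy = np.exp(-(y ** 2) / (2 * sy ** 2))
        expz = (np.exp(-((z - H) ** 2) / (2 * sz ** 2))
                + np.exp(-((z + H) ** 2) / (2 * sz ** 2)))
        return norm * expx * expy * expz

    # Concentration 500 m downwind, at ground level, 300 s after a 1 kg release:
    print(gaussian_puff(x=500.0, y=0.0, z=0.0, t=300.0, Q=1.0, u=2.0,
                        sigma=(60.0, 60.0, 30.0), H=20.0))
    ```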

  3. MILC Code Performance on High End CPU and GPU Supercomputer Clusters

    Science.gov (United States)

    DeTar, Carleton; Gottlieb, Steven; Li, Ruizi; Toussaint, Doug

    2018-03-01

    With recent developments in parallel supercomputing architecture, many core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, memory hierarchy, and programming complexity. It has been necessary to adapt the MILC code to these new processors starting with NVIDIA GPUs, and more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.
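
    Since the staggered conjugate gradient dominates the optimization effort described here, a plain conjugate gradient kernel is sketched below for orientation (generic NumPy, not the QPhiX or QUDA implementations; the test matrix is artificial).

    ```python
    # Generic conjugate gradient solver for a symmetric positive definite system.
    import numpy as np

    def cg(A, b, tol=1e-10, max_iter=1000):
        x = np.zeros_like(b)
        r = b - A @ x
        p = r.copy()
        rs = r @ r
        for _ in range(max_iter):
            Ap = A @ p                      # the matrix-vector product dominates cost
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

    rng = np.random.default_rng(0)
    M = rng.normal(size=(100, 100))
    A = M @ M.T + 100 * np.eye(100)         # symmetric positive definite test matrix
    b = rng.normal(size=100)
    x = cg(A, b)
    print(np.linalg.norm(A @ x - b))        # small residual
    ```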

  4. MILC Code Performance on High End CPU and GPU Supercomputer Clusters

    Directory of Open Access Journals (Sweden)

    DeTar Carleton

    2018-01-01

    With recent developments in parallel supercomputing architecture, many core, multi-core, and GPU processors are now commonplace, resulting in more levels of parallelism, memory hierarchy, and programming complexity. It has been necessary to adapt the MILC code to these new processors starting with NVIDIA GPUs, and more recently, the Intel Xeon Phi processors. We report on our efforts to port and optimize our code for the Intel Knights Landing architecture. We consider performance of the MILC code with MPI and OpenMP, and optimizations with QOPQDP and QPhiX. For the latter approach, we concentrate on the staggered conjugate gradient and gauge force. We also consider performance on recent NVIDIA GPUs using the QUDA library.

  5. SUPERCOMPUTER SIMULATION OF CRITICAL PHENOMENA IN COMPLEX SOCIAL SYSTEMS

    Directory of Open Access Journals (Sweden)

    Petrus M.A. Sloot

    2014-09-01

    The paper describes the problem of computer simulation of critical phenomena in complex social systems on petascale computing systems within the complex networks approach. A three-layer system of nested models of complex networks is proposed, including an aggregated analytical model to identify critical phenomena, a detailed model of individualized network dynamics, and a model to adjust the topological structure of a complex network. A scalable parallel algorithm covering all layers of complex network simulation is proposed. The performance of the algorithm is studied on different supercomputing systems. The issues of software and information infrastructure for complex network simulation are discussed, including the organization of distributed calculations, crawling data in social networks, and visualization of results. Applications of the developed methods and technologies are considered, including simulation of criminal network disruption, fast rumor spreading in social networks, evolution of financial networks, and epidemic spreading.
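
    One of the listed applications, fast rumor spreading in social networks, can be pictured with a toy probabilistic cascade on a scale-free graph. The sketch below is an invented minimal model, not the paper's three-layer system; the transmission probability and network size are assumptions.

    ```python
    # Toy rumor-spreading cascade on a scale-free network.
    import random
    import networkx as nx

    def spread_rumor(G, seed_node, p=0.3, steps=20, rng=random.Random(42)):
        informed = {seed_node}
        for _ in range(steps):
            newly = set()
            for node in informed:
                for nb in G.neighbors(node):
                    # Each informed node passes the rumor on with probability p.
                    if nb not in informed and rng.random() < p:
                        newly.add(nb)
            if not newly:
                break
            informed |= newly
        return informed

    G = nx.barabasi_albert_graph(10_000, m=3, seed=7)   # scale-free test network
    print(len(spread_rumor(G, seed_node=0)))            # rumor reach after 20 steps
    ```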

  6. Lectures in Supercomputational Neurosciences Dynamics in Complex Brain Networks

    CERN Document Server

    Graben, Peter beim; Thiel, Marco; Kurths, Jürgen

    2008-01-01

    Computational Neuroscience is a burgeoning field of research where only the combined effort of neuroscientists, biologists, psychologists, physicists, mathematicians, computer scientists, engineers and other specialists, e.g. from linguistics and medicine, seems able to expand the limits of our knowledge. The present volume is an introduction, largely from the physicists' perspective, to the subject matter with in-depth contributions by system neuroscientists. A conceptual model for complex networks of neurons is introduced that incorporates many important features of the real brain, such as various types of neurons, various brain areas, inhibitory and excitatory coupling and the plasticity of the network. The computational implementation on supercomputers, which is introduced and discussed in detail in this book, will enable readers to modify and adapt the algorithm for their own research. Worked-out examples of applications are presented for networks of Morris-Lecar neurons to model the cortical co...

  7. Width and dip of the southern San Andreas Fault at Salt Creek from modeling of geophysical data

    Science.gov (United States)

    Langenheim, Victoria; Athens, Noah D.; Scheirer, Daniel S.; Fuis, Gary S.; Rymer, Michael J.; Goldman, Mark R.; Reynolds, Robert E.

    2014-01-01

    We investigate the geometry and width of the southernmost stretch of the San Andreas Fault zone using new gravity and magnetic data along line 7 of the Salton Seismic Imaging Project. In the Salt Creek area of Durmid Hill, the San Andreas Fault coincides with a complex magnetic signature, with high-amplitude, short-wavelength magnetic anomalies superposed on a broader magnetic anomaly that is at least 5 km wide centered 2–3 km northeast of the fault. Marine magnetic data show that high-frequency magnetic anomalies extend more than 1 km west of the mapped trace of the San Andreas Fault. Modeling of magnetic data is consistent with a moderate to steep (> 50 degrees) northeast dip of the San Andreas Fault, but also suggests that the sedimentary sequence is folded west of the fault, causing the short wavelength of the anomalies west of the fault. Gravity anomalies are consistent with the previously modeled seismic velocity structure across the San Andreas Fault. Modeling of gravity data indicates a steep dip for the San Andreas Fault, but does not resolve unequivocally the direction of dip. Gravity data define a deeper basin, bounded by the Powerline and Hot Springs Faults, than imaged by the seismic experiment. This basin extends southeast of Line 7 for nearly 20 km, with linear margins parallel to the San Andreas Fault. These data suggest that the San Andreas Fault zone is wider than indicated by its mapped surface trace.

  8. Aggregate Settling Velocities in San Francisco Estuary Margins

    Science.gov (United States)

    Allen, R. M.; Stacey, M. T.; Variano, E. A.

    2015-12-01

    One way that humans impact aquatic ecosystems is by adding nutrients and contaminants, which can propagate up the food web and cause blooms and die-offs, respectively. Often, these chemicals are attached to fine sediments, and thus where sediments go, so do these anthropogenic influences. Vertical motion of sediments is important for sinking and burial, and also for indirect effects on horizontal transport. The dynamics of sinking sediment (often in aggregates) are complex, thus we need field data to test and validate existing models. San Francisco Bay is well studied and is often used as a test case for new measurement and model techniques (Barnard et al. 2013). Settling velocities for aggregates vary between 4x10^-5 and 1.6x10^-2 m/s along the estuary backbone (Manning and Schoellhamer 2013). Model results from South San Francisco Bay shoals suggest two populations of settling particles, one fast (ws of 9 to 5.8x10^-4 m/s) and one slow (ws of Brand et al. 2015). While the open waters of San Francisco Bay and other estuaries are well studied and modeled, sediment and contaminants often originate from the margin regions, and the margins remain poorly characterized. We conducted a 24-hour field experiment in a channel slough of South San Francisco Bay, and measured settling velocity, turbulence and flow, and suspended sediment concentration. At this margin location, we found average settling velocities of 4-5x10^-5 m/s, and saw settling velocities decrease with decreasing suspended sediment concentration. These results are consistent with, though at the low end of, those seen along the estuary center, and they suggest that the two-population model that has been successful along the shoals may also apply in the margins.
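
    For orientation on the magnitudes quoted above, a Stokes-law estimate for a small, low-excess-density aggregate is sketched below; every parameter value is an assumption chosen for illustration, not a measurement from the study.

    ```python
    # Back-of-the-envelope Stokes settling velocity for a small aggregate.
    def stokes_settling_velocity(d, rho_p, rho_f=1025.0, mu=1.08e-3, g=9.81):
        """d: particle diameter (m); rho_p/rho_f: particle/fluid density (kg/m^3);
        mu: dynamic viscosity (Pa s). Valid only at low Reynolds number."""
        return g * d ** 2 * (rho_p - rho_f) / (18.0 * mu)

    # A ~100 micron flocculated aggregate with a small excess density:
    print(stokes_settling_velocity(d=100e-6, rho_p=1033.0))
    # ~4e-5 m/s, at the low end of the range reported above
    ```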

  9. The San Bernabe power substation; La subestacion San Bernabe

    Energy Technology Data Exchange (ETDEWEB)

    Chavez Sanudo, Andres D. [Luz y Fuerza del Centro, Mexico, D. F. (Mexico)

    1997-12-31

    The first planning studies that gave rise to the San Bernabe substation date back to 1985. The main circumstance supporting this decision is the gradual restriction on electric power generation that the Miguel Aleman Hydro System has been experiencing, up to its complete shutdown, in order to give priority to the potable water supply through the Cutzamala pumping system, which is a major source for Mexico City and the State of Mexico. In this document the author describes the construction project of the San Bernabe substation; the technological experience obtained during construction is discussed, and its geographical location is shown, together with the one-line diagram of the substation.

  10. Symbolic simulation of engineering systems on a supercomputer

    International Nuclear Information System (INIS)

    Ragheb, M.; Gvillo, D.; Makowitz, H.

    1986-01-01

    Model-Based Production-Rule systems for analysis are developed for the symbolic simulation of Complex Engineering systems on a CRAY X-MP Supercomputer. The Fault-Tree and Event-Tree Analysis methodologies from Systems-Analysis are used for problem representation and are coupled to the Rule-Based System Paradigm from Knowledge Engineering to provide modelling of engineering devices. Modelling is based on knowledge of the structure and function of the device rather than on human expertise alone. To implement the methodology, we developed a production-Rule Analysis System that uses both backward-chaining and forward-chaining: HAL-1986. The inference engine uses an Induction-Deduction-Oriented antecedent-consequent logic and is programmed in Portable Standard Lisp (PSL). The inference engine is general and can accommodate general modifications and additions to the knowledge base. The methodologies used will be demonstrated using a model for the identification of faults, and subsequent recovery from abnormal situations in Nuclear Reactor Safety Analysis. The use of the exposed methodologies for the prognostication of future device responses under operational and accident conditions using coupled symbolic and procedural programming is discussed

  11. From CERN, a data flow averaging 600 megabytes per second for ten consecutive days

    CERN Multimedia

    2005-01-01

    The supercomputer Grid successfully took up its first technological challenge. Eight supercomputing centers sustained a continuous flow of data over the Internet from CERN in Geneva, directing it to seven centers in Europe and the United States

  12. Geology and petrography of the Socoscora Sierra . Province of San Luis. Republica Argentina

    International Nuclear Information System (INIS)

    Carugno Duran, A.

    1998-01-01

    This paper presents a geological and petrographic study of the Sierra de Socoscora, San Luis, Argentina. This range is a block of lower elevation than the Sierra de San Luis, located at its west-center. It is formed by a crystalline basement composed of high-grade metamorphic rocks with a penetrative foliation of N-S strike. In this context the following units can be defined petrographically: migmatites, which make up a large part of the range, amphibolites, marbles, skarns, mylonites and pegmatites. These units have amphibolite-facies mineral assemblages, and in some of them retrograde metamorphism to the greenschist facies can be observed. The metamorphic structure is complex and evidences at least three deformation events

  13. Adult Basic Learning in an Activity Center: A Demonstration Approach.

    Science.gov (United States)

    Metropolitan Adult Education Program, San Jose, CA.

    Escuela Amistad, an activity center in San Jose, California, is now operating at capacity, five months after its origin. Average daily attendance has been 125 adult students, 18-65, most of whom are females of Mexican-American background. Activities and services provided by the center are: instruction in English as a second language, home…

  14. Description of gravity cores from San Pablo Bay and Carquinez Strait, San Francisco Bay, California

    Science.gov (United States)

    Woodrow, Donald L.; John L. Chin,; Wong, Florence L.; Fregoso, Theresa A.; Jaffe, Bruce E.

    2017-06-27

    Seventy-two gravity cores were collected by the U.S. Geological Survey in 1990, 1991, and 2000 from San Pablo Bay and Carquinez Strait, California. The gravity cores collected within San Pablo Bay contain bioturbated laminated silts and sandy clays, whole and broken bivalve shells (mostly mussels), fossil tube structures, and fine-grained plant or wood fragments. Gravity cores from the channel wall of Carquinez Strait east of San Pablo Bay consist of sand and clay layers, whole and broken bivalve shells (less than in San Pablo Bay), trace fossil tubes, and minute fragments of plant material.

  15. 78 FR 34123 - Notice of Inventory Completion: San Francisco State University NAGPRA Program, San Francisco, CA

    Science.gov (United States)

    2013-06-06

    ... completion of an inventory of human remains and associated funerary objects under the control of the San....R50000] Notice of Inventory Completion: San Francisco State University NAGPRA Program, San Francisco, CA... NAGPRA Program has completed an inventory of human remains and associated funerary objects, in...

  16. 78 FR 21403 - Notice of Inventory Completion: San Francisco State University NAGPRA Program, San Francisco, CA

    Science.gov (United States)

    2013-04-10

    ... completion of an inventory of human remains and associated funerary objects under the control of the San....R50000] Notice of Inventory Completion: San Francisco State University NAGPRA Program, San Francisco, CA... NAGPRA Program has completed an inventory of human remains and associated funerary objects, in...

  17. Micro-mechanical Simulations of Soils using Massively Parallel Supercomputers

    Directory of Open Access Journals (Sweden)

    David W. Washington

    2004-06-01

    In this research a computer program, Trubal version 1.51, based on the Discrete Element Method, was converted to run on a Connection Machine (CM-5), a massively parallel supercomputer with 512 nodes, to expedite the computational times of simulating geotechnical boundary value problems. The dynamic memory algorithm in the Trubal program did not perform efficiently on the CM-2 machine with the Single Instruction Multiple Data (SIMD) architecture. This was due to the communication overhead involving global array reductions, global array broadcast and random data movement. Therefore, the dynamic memory algorithm in the Trubal program was converted to a static memory arrangement, and the Trubal program was successfully converted to run on CM-5 machines. The converted program was called "TRUBAL for Parallel Machines (TPM)." Simulating two physical triaxial experiments and comparing simulation results with Trubal simulations validated the TPM program. With a 512-node CM-5 machine, TPM produced a nine-fold speedup, demonstrating the inherent parallelism within algorithms based on the Discrete Element Method.
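
    A Discrete Element Method time step of the kind Trubal performs can be pictured as pairwise contact forces followed by explicit time integration. The serial NumPy toy below illustrates only that core loop; the spring stiffness, particle counts, and 2-D setting are assumptions, and the parallel TPM data layout is not reproduced.

    ```python
    # Minimal Discrete Element Method step: linear-spring contacts + explicit update.
    import numpy as np

    def dem_step(pos, vel, radii, dt, k=1e4, mass=1.0):
        n = len(pos)
        force = np.zeros_like(pos)
        for i in range(n):
            for j in range(i + 1, n):
                d = pos[j] - pos[i]
                dist = np.linalg.norm(d)
                overlap = radii[i] + radii[j] - dist
                if overlap > 0:                  # particles in contact
                    f = k * overlap * d / dist   # repulsive linear-spring force
                    force[i] -= f
                    force[j] += f
        vel = vel + force / mass * dt            # explicit time integration
        pos = pos + vel * dt
        return pos, vel

    rng = np.random.default_rng(3)
    pos = rng.uniform(0, 1, size=(50, 2))
    vel = np.zeros((50, 2))
    radii = np.full(50, 0.05)
    for _ in range(100):
        pos, vel = dem_step(pos, vel, radii, dt=1e-4)
    ```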

  18. Large scale simulations of lattice QCD thermodynamics on Columbia Parallel Supercomputers

    International Nuclear Information System (INIS)

    Ohta, Shigemi

    1989-01-01

    The Columbia Parallel Supercomputer project aims at the construction of a parallel processing, multi-gigaflop computer optimized for numerical simulations of lattice QCD. The project has three stages: a 16-node, 1/4 GF machine completed in April 1985, a 64-node, 1 GF machine completed in August 1987, and a 256-node, 16 GF machine now under construction. The machines all share a common architecture: a two-dimensional torus formed from a rectangular array of N1 x N2 independent and identical processors. A processor is capable of operating in a multi-instruction multi-data mode, except for periods of synchronous interprocessor communication with its four nearest neighbors. Here the thermodynamics simulations on the two working machines are reported. (orig./HSI)
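
    The nearest-neighbor addressing on such an N1 x N2 torus is simple to state in code. The sketch below (pure Python, an illustration rather than anything from the Columbia machines) maps a rank to its grid coordinates and wraps around at the edges.

    ```python
    # Nearest-neighbor ranks on an N1 x N2 torus of processors.
    def torus_neighbors(rank, N1, N2):
        i, j = divmod(rank, N2)                 # row-major placement on the grid
        return {
            "north": ((i - 1) % N1) * N2 + j,   # wrap-around gives the torus
            "south": ((i + 1) % N1) * N2 + j,
            "west":  i * N2 + (j - 1) % N2,
            "east":  i * N2 + (j + 1) % N2,
        }

    # On a 4 x 4 torus (16 nodes, as in the first-stage machine):
    print(torus_neighbors(rank=0, N1=4, N2=4))
    # {'north': 12, 'south': 4, 'west': 3, 'east': 1}
    ```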

  19. 75 FR 15611 - Safety Zone; United Portuguese SES Centennial Festa, San Diego Bay, San Diego, CA

    Science.gov (United States)

    2010-03-30

    ...-AA00 Safety Zone; United Portuguese SES Centennial Festa, San Diego Bay, San Diego, CA AGENCY: Coast... navigable waters of the San Diego Bay in support of the United Portuguese SES Centennial Festa. This... Centennial Festa, which will include a fireworks presentation originating from a tug and barge combination in...

  20. Unique Methodologies for Nano/Micro Manufacturing Job Training Via Desktop Supercomputer Modeling and Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kimball, Clyde [Northern Illinois Univ., DeKalb, IL (United States); Karonis, Nicholas [Northern Illinois Univ., DeKalb, IL (United States); Lurio, Laurence [Northern Illinois Univ., DeKalb, IL (United States); Piot, Philippe [Northern Illinois Univ., DeKalb, IL (United States); Xiao, Zhili [Northern Illinois Univ., DeKalb, IL (United States); Glatz, Andreas [Northern Illinois Univ., DeKalb, IL (United States); Pohlman, Nicholas [Northern Illinois Univ., DeKalb, IL (United States); Hou, Minmei [Northern Illinois Univ., DeKalb, IL (United States); Demir, Veysel [Northern Illinois Univ., DeKalb, IL (United States); Song, Jie [Northern Illinois Univ., DeKalb, IL (United States); Duffin, Kirk [Northern Illinois Univ., DeKalb, IL (United States); Johns, Mitrick [Northern Illinois Univ., DeKalb, IL (United States); Sims, Thomas [Northern Illinois Univ., DeKalb, IL (United States); Yin, Yanbin [Northern Illinois Univ., DeKalb, IL (United States)

    2012-11-21

    This project establishes an initiative in high speed (Teraflop)/large-memory desktop supercomputing for modeling and simulation of dynamic processes important for energy and industrial applications. It provides a training ground for employment of current students in an emerging field with skills necessary to access the large supercomputing systems now present at DOE laboratories. It also provides a foundation for NIU faculty to make a quantum leap beyond their current small cluster facilities. The funding extends faculty and student capability to a new level of analytic skills with concomitant publication avenues. The components of the Hewlett Packard computer obtained with the DOE funds create a hybrid combination of a Graphics Processing System (12 GPU/Teraflops) and a Beowulf CPU system (144 CPU), the first expandable via the NIU GAEA system to ~60 Teraflops integrated with a 720 CPU Beowulf system. The software is based on access to the NVIDIA/CUDA library and the ability through multiple MATLAB licenses to create additional local programs. A number of existing programs are being transferred to the CPU Beowulf Cluster. Since the expertise necessary to create the parallel processing applications has only recently been obtained at NIU, this effort at software development is in an early stage. The educational program has been initiated via formal tutorials and classroom curricula designed for the coming year. Specifically, the cost focus was on hardware acquisitions and the appointment of graduate students for a wide range of applications in engineering, physics and computer science.

  1. Performance of BATAN-SANS instrument

    Energy Technology Data Exchange (ETDEWEB)

    Ikram, Abarrul; Insani, Andon [National Nuclear Energy Agency, P and D Centre for Materials Science and Technology, Serpong (Indonesia)

    2003-03-01

    SANS data from some standard samples have been obtained using the BATAN-SANS instrument in Serpong. The experiments were performed for various experimental set-ups involving different detector positions and collimator lengths. This paper briefly describes the BATAN-SANS instrument as well as the data taken from those experiments, followed by discussion of the results concerning the performance and calibration of the instrument. The standard samples utilized in these experiments include porous silica, polystyrene-poly isoprene, silver behenate, poly ball and polystyrene-poly (ethylene-alt-propylene). Even though the results show that the BATAN-SANS instrument is in good shape, room for improvement remains, especially for the velocity selector and its control system. (author)

  2. A case for historic joint rupture of the San Andreas and San Jacinto faults

    OpenAIRE

    Lozos, Julian C.

    2016-01-01

    The San Andreas fault is considered to be the primary plate boundary fault in southern California and the most likely fault to produce a major earthquake. I use dynamic rupture modeling to show that the San Jacinto fault is capable of rupturing along with the San Andreas in a single earthquake, and interpret these results along with existing paleoseismic data and historic damage reports to suggest that this has likely occurred in the historic past. In particular, I find that paleoseismic data...

  3. High Temporal Resolution Mapping of Seismic Noise Sources Using Heterogeneous Supercomputers

    Science.gov (United States)

    Paitz, P.; Gokhberg, A.; Ermert, L. A.; Fichtner, A.

    2017-12-01

    The time- and space-dependent distribution of seismic noise sources is becoming a key ingredient of modern real-time monitoring of various geo-systems like earthquake fault zones, volcanoes, geothermal and hydrocarbon reservoirs. We present results of an ongoing research project conducted in collaboration with the Swiss National Supercomputing Centre (CSCS). The project aims at building a service providing seismic noise source maps for Central Europe with high temporal resolution. We use source imaging methods based on the cross-correlation of seismic noise records from all seismic stations available in the region of interest. The service is hosted on the CSCS computing infrastructure; all computationally intensive processing is performed on the massively parallel heterogeneous supercomputer "Piz Daint". The solution architecture is based on the Application-as-a-Service concept to provide the interested researchers worldwide with regular access to the noise source maps. The solution architecture includes the following sub-systems: (1) data acquisition responsible for collecting, on a periodic basis, raw seismic records from the European seismic networks, (2) high-performance noise source mapping application responsible for the generation of source maps using cross-correlation of seismic records, (3) back-end infrastructure for the coordination of various tasks and computations, (4) front-end Web interface providing the service to the end-users and (5) data repository. The noise source mapping itself rests on the measurement of logarithmic amplitude ratios in suitably pre-processed noise correlations, and the use of simplified sensitivity kernels. During the implementation we addressed various challenges, in particular, selection of data sources and transfer protocols, automation and monitoring of daily data downloads, ensuring the required data processing performance, design of a general service-oriented architecture for coordination of various sub-systems, and

  4. High Performance Computing in Science and Engineering '99 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2000-01-01

    The book contains reports about the most significant projects from science and engineering of the Federal High Performance Computing Center Stuttgart (HLRS). They were carefully selected in a peer-review process and are showcases of an innovative combination of state-of-the-art modeling, novel algorithms and the use of leading-edge parallel computer technology. The projects of HLRS are using supercomputer systems operated jointly by university and industry and therefore a special emphasis has been put on the industrial relevance of results and methods.

  5. An Optimized Parallel FDTD Topology for Challenging Electromagnetic Simulations on Supercomputers

    Directory of Open Access Journals (Sweden)

    Shugang Jiang

    2015-01-01

    It may not be a challenge to run a Finite-Difference Time-Domain (FDTD) code for electromagnetic simulations on a supercomputer with more than ten thousand CPU cores; however, making the FDTD code work with the highest efficiency is a challenge. In this paper, the performance of parallel FDTD is optimized through MPI (message passing interface) virtual topology, based on which a communication model is established. The general rules of optimal topology are presented according to the model. The performance of the method is tested and analyzed on three high performance computing platforms with different architectures in China. Simulations of an airplane with a 700-wavelength wingspan and of a complex microstrip antenna array with nearly 2000 elements are performed very efficiently using a maximum of 10240 CPU cores.
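
    The MPI virtual-topology machinery the paper builds on can be seen in miniature below: a balanced 3-D Cartesian communicator is created and each rank queries its halo-exchange neighbors. This is a generic mpi4py sketch, not the paper's optimized topology-selection rules; the decomposition shown is whatever MPI computes for the available ranks.

    ```python
    # Generic MPI Cartesian virtual topology for a domain-decomposed FDTD grid.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    dims = MPI.Compute_dims(comm.Get_size(), [0, 0, 0])   # balanced 3-D factorization
    cart = comm.Create_cart(dims, periods=[False] * 3, reorder=True)

    # Source/destination ranks for halo exchange along each axis.
    neighbors = {axis: cart.Shift(axis, 1) for axis in range(3)}
    if cart.Get_rank() == 0:
        print("decomposition:", dims, "neighbors of rank 0:", neighbors)
    ```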

  6. A Retail Center Facing Change: Using Data to Determine Marketing Strategy

    Science.gov (United States)

    Walker, Kristen L.; Curren, Mary T.; Kiesler, Tina

    2013-01-01

    Plaza del Valle is an open-air shopping center in the San Fernando Valley region of Los Angeles. The new marketing manager must review primary and secondary data to determine a target market, a product positioning strategy, and a promotion strategy for the retail shopping center with the ultimate goal of increasing revenue for the Plaza. She is…

  7. San Marco C-2 (San Marco-4) Post Launch Report No. 1

    Science.gov (United States)

    1974-01-01

    The San Marco C-2 spacecraft, now designated San Marco-4, was successfully launched by a Scout vehicle from the San Marco Platform on 18 February 1974 at 6:05 a.m. EDT. The launch occurred 2 hours 50 minutes into the 3-hour window due to low cloud cover at the launch site. All spacecraft subsystems have been checked and are functioning normally. The protective caps for the two U.S. experiments were ejected and the Omegatron experiment activated on 19 February. The neutral mass spectrometer was activated as scheduled on 22 February, after sufficient time to allow for spacecraft outgassing and to avoid the possibility of corona occurring. Both instruments are performing properly and worthwhile scientific data is being acquired.

  8. National Energy Research Scientific Computing Center (NERSC): Advancing the frontiers of computational science and technology

    Energy Technology Data Exchange (ETDEWEB)

    Hules, J. [ed.

    1996-11-01

    National Energy Research Scientific Computing Center (NERSC) provides researchers with high-performance computing tools to tackle science's biggest and most challenging problems. Founded in 1974 by DOE/ER, the Controlled Thermonuclear Research Computer Center was the first unclassified supercomputer center and was the model for those that followed. Over the years the center's name was changed to the National Magnetic Fusion Energy Computer Center and then to NERSC; it was relocated to LBNL. NERSC, one of the largest unclassified scientific computing resources in the world, is the principal provider of general-purpose computing services to DOE/ER programs: Magnetic Fusion Energy, High Energy and Nuclear Physics, Basic Energy Sciences, Health and Environmental Research, and the Office of Computational and Technology Research. NERSC users are a diverse community located throughout the US and in several foreign countries. This brochure describes: the NERSC advantage, its computational resources and services, future technologies, scientific resources, and computational science of scale (interdisciplinary research over a decade or longer; examples: combustion in engines, waste management chemistry, global climate change modeling).

  9. 77 FR 34988 - Notice of Inventory Completion: San Diego State University, San Diego, CA

    Science.gov (United States)

    2012-06-12

    .... ACTION: Notice. SUMMARY: San Diego State University Archeology Collections Management Program has... that believes itself to be culturally affiliated with the human remains and associated funerary objects may contact San Diego State University Archeology Collections Management Program. Repatriation of the...

  10. Development of a high performance eigensolver on the peta-scale next generation supercomputer system

    International Nuclear Information System (INIS)

    Imamura, Toshiyuki; Yamada, Susumu; Machida, Masahiko

    2010-01-01

    For present supercomputer systems, multicore and multisocket processors are necessary to build a system, and the choice of interconnect is essential. In addition, for effective development of a new code, high-performance, scalable, and reliable numerical software is one of the key items. ScaLAPACK and PETSc are well-known software on distributed memory parallel computer systems. It is needless to say that highly tuned software targeting new architectures like many-core processors must be chosen for real computation. In this study, we present a high-performance and highly scalable eigenvalue solver for the next-generation supercomputer system, the so-called 'K-computer' system. We have developed two versions, the standard version (eigen_s) and the enhanced performance version (eigen_sx), which were developed on the T2K cluster system housed at the University of Tokyo. Eigen_s employs the conventional algorithms: Householder tridiagonalization, the divide and conquer (DC) algorithm, and Householder back-transformation. They are carefully implemented with a blocking technique and flexible two-dimensional data distribution to reduce the overhead of memory traffic and data transfer, respectively. Eigen_s performs excellently on the T2K system with 4096 cores (theoretical peak of 37.6 TFLOPS), showing a fine performance of 3.0 TFLOPS with a two-hundred-thousand-dimensional matrix. The enhanced version, eigen_sx, uses more advanced algorithms: the narrow-band reduction algorithm, DC for band matrices, and the block Householder back-transformation with WY-representation. Even though this version is still at a test stage, it shows 4.7 TFLOPS for a matrix of the same dimension as used with eigen_s. (author)
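
    The first of the conventional steps named here, Householder tridiagonalization, is sketched below in plain NumPy for a small symmetric matrix; the production eigen_s code uses blocked, two-dimensionally distributed versions of this kernel, which this toy does not attempt.

    ```python
    # Householder reduction of a symmetric matrix to tridiagonal form.
    import numpy as np

    def householder_tridiagonalize(A):
        A = A.copy().astype(float)
        n = A.shape[0]
        for k in range(n - 2):
            x = A[k + 1:, k]
            v = x.copy()
            v[0] += np.sign(x[0] or 1.0) * np.linalg.norm(x)
            norm_v = np.linalg.norm(v)
            if norm_v == 0:
                continue
            v /= norm_v
            # Apply the reflector H = I - 2 v v^T from both sides.
            A[k + 1:, k:] -= 2.0 * np.outer(v, v @ A[k + 1:, k:])
            A[:, k + 1:] -= 2.0 * np.outer(A[:, k + 1:] @ v, v)
        return A

    rng = np.random.default_rng(0)
    M = rng.normal(size=(6, 6)); S = (M + M.T) / 2
    T = householder_tridiagonalize(S)
    # The similarity transform preserves the spectrum:
    print(np.allclose(np.linalg.eigvalsh(T), np.linalg.eigvalsh(S)))  # True
    ```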

  11. Activity report of Computing Research Center

    Energy Technology Data Exchange (ETDEWEB)

    1997-07-01

    In April 1997, the National Laboratory for High Energy Physics (KEK), the Institute for Nuclear Study, University of Tokyo (INS), and the Meson Science Laboratory, Faculty of Science, University of Tokyo were reorganized into the High Energy Accelerator Research Organization, aiming at further development of the wide field of accelerator science using high energy accelerators. In this Research Organization, the Applied Research Laboratory is composed of four Centers that support research activities common to the Organization and carry out related research and development (R and D), integrating the previous four centers and their related sections in Tanashi. The support expected of them covers not only general assistance but also the preparation, and the R and D, of systems required for the promotion and future plans of the research. Computer technology is essential to the development of this research and can be shared across the various research programs of the Organization. In response to such expectations, the new Computing Research Center is required to carry out its duties by collaborating and cooperating with researchers on everything from R and D on data analysis of various experiments to computational physics driven by powerful computing capacity such as supercomputers. The first chapter reports on the work and present state of the Data Processing Center of KEK, the second chapter on the computer room of INS, and future problems for the Computing Research Center are then described. (G.K.)

  12. Current situation of sexual and reproductive health of men deprived of liberty in the Institutional Care Center of San Jose

    Directory of Open Access Journals (Sweden)

    Dorita Rivas Fonseca

    2013-10-01

    The objective of this research was to determine the current status of the sexual and reproductive health of prisoners at the Institutional Care Center (CAI) of San Jose. It is a descriptive study. Strategic sampling determined the participation of 102 men. The information was obtained by applying a self-administered questionnaire with closed and open questions. Regarding their socio-demographic profile, the results show that those deprived of liberty are a very heterogeneous group. As regards sexual and reproductive health, they relate the first concept to the prevention of disease and the second to reproductive aspects; this shows limitations in knowledge of these topics, which affects daily life activities and self-care. It is concluded that research by gyneco-obstetric nurses on persons deprived of liberty is almost nonexistent, not only in the country but in the world, especially with the male population. In the case of the CAI prison, health care is insufficient for the number of inmates housed there (overcrowding of almost 50%); this implies a deterioration in the health and physical condition of these people, as well as in their sexual and reproductive health.

  13. Decolonizing our plates : analyzing San Diego and vegans of color food politics

    OpenAIRE

    Navarro, Marilisa Cristina

    2011-01-01

    This project focuses on discursive formations of race, gender, class, and sexuality within food justice movements as well as these discursive formations within veganism. In particular, I analyze how mainstream food justice movements in San Diego engage in discourses of colorblindness, universalism, individualism, whiteness, and consumption. I also examine how these movements are centered on possessive individualism, or one's capacity to own private property, as the means through which they se...

  14. Watson will see you now: a supercomputer to help clinicians make informed treatment decisions.

    Science.gov (United States)

    Doyle-Lindrud, Susan

    2015-02-01

    IBM has collaborated with several cancer care providers to develop and train the IBM supercomputer Watson to help clinicians make informed treatment decisions. When a patient is seen in clinic, the oncologist can input all of the clinical information into the computer system. Watson will then review all of the data and recommend treatment options based on the latest evidence and guidelines. Once the oncologist makes the treatment decision, this information can be sent directly to the insurance company for approval. Watson has the ability to standardize care and accelerate the approval process, a benefit to the healthcare provider and the patient.

  15. Building FLOW: Federating Libraries on the Web

    CERN Document Server

    Keller-Gold, A; Le Meur, Jean-Yves; Baldridge, K K

    2002-01-01

    Individuals, teams, organizations, and networks can be thought of as tiers or classes within the complex grid of technology and practice in which research documentation is both consumed and generated. The panoply of possible classes share with the others a common need for document management tools and practices. The distinctive document management tools and practices used within each represent boundaries across which information could flow openly if technology and metadata standards were to provide an accessible digital framework. The CERN Document Server (CDS), implemented by a research partnership at the San Diego Supercomputer Center (SDSC), establishes a prototype tiered repository system for such a panoply. Research suggests modifications to enable cross-domain information flow and is represented as a metadata grid.

  16. Affordable and accurate large-scale hybrid-functional calculations on GPU-accelerated supercomputers

    Science.gov (United States)

    Ratcliff, Laura E.; Degomme, A.; Flores-Livas, José A.; Goedecker, Stefan; Genovese, Luigi

    2018-03-01

    Performing high accuracy hybrid functional calculations for condensed matter systems containing a large number of atoms is at present computationally very demanding or even out of reach if high quality basis sets are used. We present a highly optimized multiple graphics processing unit implementation of the exact exchange operator which allows one to perform fast hybrid functional density-functional theory (DFT) calculations with systematic basis sets without additional approximations for up to a thousand atoms. With this method hybrid DFT calculations of high quality become accessible on state-of-the-art supercomputers within a time-to-solution that is of the same order of magnitude as traditional semilocal-GGA functionals. The method is implemented in a portable open-source library.
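
    The expense this abstract targets comes from the pairwise structure of the exact exchange operator: every pair of occupied orbitals requires its own Poisson solve. The following is a deliberately schematic numpy sketch (toy random "orbitals", a periodic box, Hartree-like units, no spin or normalization factors); none of it comes from the paper's library.

```python
# Schematic illustration of exact-exchange cost: O(nocc^2) FFT Poisson solves,
# one per occupied-orbital pair. Toy values throughout.
import numpy as np

n, L, nocc = 32, 10.0, 4
rng = np.random.default_rng(1)
psi = rng.standard_normal((nocc, n, n, n))

k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
k2 = k[:, None, None]**2 + k[None, :, None]**2 + k[None, None, :]**2
k2[0, 0, 0] = np.inf                     # drop the G = 0 (uniform) mode
dv = (L / n) ** 3

Ex = 0.0
for i in range(nocc):
    for j in range(nocc):                # O(nocc^2) pair densities
        rho_ij = psi[i] * psi[j]
        v_ij = np.fft.ifftn(4 * np.pi * np.fft.fftn(rho_ij) / k2).real
        Ex -= 0.5 * np.sum(rho_ij * v_ij) * dv
print("toy exchange energy:", Ex)
```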

  17. Developing solar power programs : San Francisco's experience

    International Nuclear Information System (INIS)

    Schwartz, F.

    2006-01-01

    This keynote address discussed an array of solar programs initiated in government-owned buildings in San Francisco. The programs were strongly supported by the city's mayor and the voting public. Because the city is known for its fog and varying microclimates, 11 monitoring stations were set up throughout it to determine viable locations for the successful application of solar technologies. It was observed that 90 per cent of the available sunshine occurred in the central valley, whereas fog along the Pacific shore was problematic. Seven of the monitoring sites showed excellent results. Relationships with various city departments were described, as well as details of the studied loads, load profiles, electrical systems, roofs, and structural capabilities of the selected government buildings. There was a focus on developing good relations with the local utility. The Moscone Convention Center was selected for the program's flagship installation, a 675 kW solar project that eventually won the US EPA Green Power Award for 2004 and received wide press coverage. The cost of the project was $4.2 million. It generated 825,000 kWh of solar electricity annually, and efficiency measures saved a further 4,500,000 kWh per year, resulting in a net reduction of 5,325,000 kWh. Savings on utility bills for the center were an estimated $1,078,000. A pipeline of solar projects followed, with installations at a sewage treatment plant and a large recycling depot. A program of smaller sites included libraries, schools and health facilities. Plans were also described for applying solar technology to a 500-acre redevelopment site in southeast San Francisco with an aging and inadequate electrical infrastructure. A model of efficient solar housing for the development was presented, with details of insulation, windows, heating, ventilation and air-conditioning (HVAC), water heating, lighting, appliances and a 1.2 kilowatt solar system. Peak demand reductions were also presented. tabs., figs
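
    The stated net reduction is simply the sum of the two annual contributions:

```latex
E_{\text{net}} = 825{,}000\ \text{kWh} + 4{,}500{,}000\ \text{kWh} = 5{,}325{,}000\ \text{kWh/yr}
```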

  18. Re-inventing electromagnetics - Supercomputing solution of Maxwell's equations via direct time integration on space grids

    International Nuclear Information System (INIS)

    Taflove, A.

    1992-01-01

    This paper summarizes the present state and future directions of applying finite-difference and finite-volume time-domain techniques for Maxwell's equations on supercomputers to model complex electromagnetic wave interactions with structures. Applications so far have been dominated by radar cross section technology, but by no means are limited to this area. In fact, the gains we have made place us on the threshold of being able to make tremendous contributions to non-defense electronics and optical technology. Some of the most interesting research in these commercial areas is summarized. 47 refs
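
    The core of the FDTD technique surveyed here is a leapfrog update of staggered electric and magnetic fields. A minimal 1D sketch in normalized units (c = 1, free space, hypothetical grid sizes, no absorbing boundaries):

```python
# Minimal 1D FDTD (Yee) leapfrog update: direct time integration of
# Maxwell's equations on a space grid.
import numpy as np

nx, nt = 200, 400
Ez = np.zeros(nx)               # electric field at integer grid points
Hy = np.zeros(nx - 1)           # magnetic field at staggered half points
S = 0.5                         # Courant number dt/dx; stable for S <= 1 in 1D

for n in range(nt):
    Hy += S * (Ez[1:] - Ez[:-1])                     # H update from curl of E
    Ez[1:-1] += S * (Hy[1:] - Hy[:-1])               # E update from curl of H
    Ez[nx // 2] += np.exp(-((n - 40) / 12.0) ** 2)   # soft Gaussian source
```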

  19. Earth and environmental science in the 1980's: Part 1: Environmental data systems, supercomputer facilities and networks

    Science.gov (United States)

    1986-01-01

    Overview descriptions of on-line environmental data systems, supercomputer facilities, and networks are presented. Each description addresses the concepts of content, capability, and user access relevant to the point of view of potential utilization by the Earth and environmental science community. The information on similar systems or facilities is presented in parallel fashion to encourage and facilitate intercomparison. In addition, summary sheets are given for each description, and a summary table precedes each section.

  20. 78 FR 53243 - Safety Zone; TriRock San Diego, San Diego Bay, San Diego, CA

    Science.gov (United States)

    2013-08-29

    ... this rule because the logistical details of the San Diego Bay triathlon swim were not finalized nor... September 22, 2013. (c) Definitions. The following definition applies to this section: Designated...

  1. PROCEEDINGS OF RIKEN BNL RESEARCH CENTER WORKSHOP: HIGH PERFORMANCE COMPUTING WITH QCDOC AND BLUEGENE.

    Energy Technology Data Exchange (ETDEWEB)

    CHRIST,N.; DAVENPORT,J.; DENG,Y.; GARA,A.; GLIMM,J.; MAWHINNEY,R.; MCFADDEN,E.; PESKIN,A.; PULLEYBLANK,W.

    2003-03-11

    Staff of Brookhaven National Laboratory, Columbia University, IBM and the RIKEN BNL Research Center organized a one-day workshop held on February 28, 2003 at Brookhaven to promote the following goals: (1) To explore areas other than QCD applications where the QCDOC and BlueGene/L machines can be applied to good advantage, (2) To identify areas where collaboration among the sponsoring institutions can be fruitful, and (3) To expose scientists to the emerging software architecture. This workshop grew out of an informal visit last fall by BNL staff to the IBM Thomas J. Watson Research Center that resulted in a continuing dialog among participants on issues common to these two related supercomputers. The workshop was divided into three sessions, addressing the hardware and software status of each system, prospective applications, and future directions.

  2. Patient Workload Profile: National Naval Medical Center (NNMC), Bethesda, MD.

    Science.gov (United States)

    1980-06-01

    AD-A09a 729, WESTEC Services Inc., San Diego, CA. Patient Workload Profile: National Naval Medical Center (NNMC), June 1980, W. T. Rasmussen, H. W... Provides site workload data for the National Naval Medical Center (NNMC) within the following functional support areas: Patient Appointment... on managing medical and patient data, thereby offering the health care provider and administrator more powerful capabilities in dealing with and

  3. LaRC Modeling of Ozone Formation in San Antonio, Texas

    Science.gov (United States)

    Guo, F.; Griffin, R. J.; Bui, A.; Schulze, B.; Wallace, H. W., IV; Flynn, J. H., III; Erickson, M.; Kotsakis, A.; Alvarez, S. L.; Usenko, S.; Sheesley, R. J.; Yoon, S.

    2017-12-01

    Ozone (O3) is one of the most important trace species within the troposphere and results from photochemistry involving emissions from a complex array of sources. Ground-level O3 is detrimental to ecosystems and causes a variety of human health problems including respiratory irritation, asthma and reduction in lung capacity. However, the O3 Design Value in San Antonio, Texas, was in violation of the federal threshold set by the EPA (70 ppb, 8-hr max) based on the average for the most recent three-year period (2014-2016). To understand the sources of high O3 concentrations in this nonattainment area, we assembled and deployed a mobile air quality laboratory and operated it in two locations in the southeast (Traveler's World RV Park) and northwest (University of Texas at San Antonio) of downtown San Antonio during summer 2017 to measure O3 and its precursors, including total nitrogen oxides (NOx) and volatile organic compounds (VOCs). Additional measurements included temperature, relative humidity, pressure, solar radiation, wind speed, wind direction, total reactive nitrogen (NOy), carbon monoxide (CO), and aerosol composition and concentration. We will use the campaign data and the NASA Langley Research Center (LaRC) Zero-Dimensional Box Model (Crawford et al., 1999; Olson et al., 2006) to calculate the O3 production rate, NOx and hydroxyl radical chain lengths, and NOx versus VOC sensitivity at different times of day under different photochemical and meteorological conditions. A key to our understanding is to combine model results with measurements of precursor gases, particle chemistry and particle size to support the identification of O3 sources, its major formation pathways, and how the ozone production efficiency (OPE) depends on various factors. The resulting understanding of the causes of high O3 concentrations in the San Antonio area will provide insight into future air quality protection.
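
    As background to such box-model calculations, the NO-NO2-O3 photostationary-state (Leighton) relation, a textbook result rather than anything taken from this abstract, ties together the measured quantities; departures from this balance signal net photochemical O3 production:

```latex
[\mathrm{O_3}]_{\mathrm{pss}} \;\approx\; \frac{j_{\mathrm{NO_2}}\,[\mathrm{NO_2}]}{k_{\mathrm{NO+O_3}}\,[\mathrm{NO}]}
```

    where $j_{\mathrm{NO_2}}$ is the NO2 photolysis frequency and $k_{\mathrm{NO+O_3}}$ the rate constant for the NO + O3 reaction.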

  4. 77 FR 59969 - Notice of Inventory Completion: San Francisco State University, Department of Anthropology, San...

    Science.gov (United States)

    2012-10-01

    ... Inventory Completion: San Francisco State University, Department of Anthropology, San Francisco, CA... Francisco State University, NAGPRA Program (formerly in the Department of Anthropology). The human remains... State University Department of Anthropology records. In the Federal Register (73 FR 30156-30158, May 23...

  5. 33 CFR 165.1182 - Safety/Security Zone: San Francisco Bay, San Pablo Bay, Carquinez Strait, and Suisun Bay, CA.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Safety/Security Zone: San... Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) PORTS AND WATERWAYS SAFETY... Areas Eleventh Coast Guard District § 165.1182 Safety/Security Zone: San Francisco Bay, San Pablo Bay...

  6. Efficient development of memory bounded geo-applications to scale on modern supercomputers

    Science.gov (United States)

    Räss, Ludovic; Omlin, Samuel; Licul, Aleksandar; Podladchikov, Yuri; Herman, Frédéric

    2016-04-01

    Numerical modeling is a key tool in the geosciences. The current challenge is to solve problems that are multi-physics and for which the length scale and the place of occurrence might not be known in advance. Also, the spatial extent of the investigated domain may vary strongly in size, ranging from millimeters for reactive transport to kilometers for glacier erosion dynamics. An efficient way to proceed is to develop simple but robust algorithms that perform well and scale on modern supercomputers and therefore permit very high-resolution simulations. We propose an efficient approach to solve memory-bounded real-world applications on modern supercomputer architectures. We optimize the software to run on our newly acquired state-of-the-art GPU cluster "octopus". Our approach shows promising preliminary results on important geodynamical and geomechanical problems: we have developed a Stokes solver for glacier flow and a poromechanical solver including complex rheologies for nonlinear waves in stressed porous rocks. We solve the system of partial differential equations on a regular Cartesian grid and use an iterative finite-difference scheme with preconditioning of the residuals. The MPI communication happens only locally (point-to-point); this method is known to scale linearly by construction. The "octopus" GPU cluster, which we use for the computations, has been designed to achieve maximal data-transfer throughput at minimal hardware cost. It is composed of twenty compute nodes, each hosting four Nvidia Titan X GPU accelerators. These high-density nodes are interconnected with a parallel (dual-rail) FDR InfiniBand network. Our efforts show promising preliminary results for the different physics investigated. The glacier flow solver achieves good accuracy in the relevant benchmarks, and the coupled poromechanical solver makes it possible to explain previously unresolvable focused fluid flow as a natural outcome of the porosity setup. In both cases
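
    A minimal sketch of the iterative finite-difference idea described above: a pseudo-transient iteration on a regular Cartesian grid drives the residual of a steady diffusion (Laplace) problem to zero. Toy boundary conditions and hypothetical sizes; this stands in for, but is not, the authors' Stokes or poromechanical solvers.

```python
# Pseudo-transient iteration: march in damped pseudo-time until the residual
# of the steady problem vanishes at every interior grid point.
import numpy as np

nx, ny = 64, 64
H = np.zeros((nx, ny))
H[0, :] = 1.0                          # Dirichlet boundary driving the field
dx = dy = 1.0 / (nx - 1)
dtau = min(dx, dy) ** 2 / 4.1          # pseudo-time step (stability bound)

for it in range(100_000):
    # residual of the Laplace equation (5-point stencil) at interior points
    res = ((H[2:, 1:-1] - 2 * H[1:-1, 1:-1] + H[:-2, 1:-1]) / dx**2
           + (H[1:-1, 2:] - 2 * H[1:-1, 1:-1] + H[1:-1, :-2]) / dy**2)
    H[1:-1, 1:-1] += dtau * res        # interior update; boundaries stay fixed
    if np.abs(res).max() < 1e-4:
        print(f"converged in {it} iterations")
        break
```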

  7. SAN MICHELE. ENTRE CIELO Y MAR / San Michele, between sky and sea

    Directory of Open Access Journals (Sweden)

    Pablo Blázquez Jesús

    2012-11-01

    Full Text Available The cemetery is one of the most profound and metaphorical architectural types. The competition for the extension of the San Michele cemetery, called in 1998 by the Venice municipal administration, is an excellent testing ground on which to analyse the historical context surrounding this typology, and its relationship with the city and the territory. The study of this particular case allows us to uncover characters, casual relationships and findings that unfold throughout the text. The history of the San Michele cemetery is also the chronicle of the transformation of the city of Venice and its Lagoon. Interpreting this competition as a research tool, the aim of the article is to understand the contemporary reality of funerary architecture through the island of San Michele, Venice, and the finalist proposals of Carlos Ferrater, Enric Miralles and David Chipperfield: a history beneath which we glimpse keys that help us reflect on the contemporary cemetery, the city and the territory.

  8. Pleistocene Brawley and Ocotillo Formations: Evidence for initial strike-slip deformation along the San Felipe and San Jacinto fault zones, Southern California

    Science.gov (United States)

    Kirby, S.M.; Janecke, S.U.; Dorsey, R.J.; Housen, B.A.; Langenheim, V.E.; McDougall, K.A.; Steeley, A.N.

    2007-01-01

    We examine the Pleistocene tectonic reorganization of the Pacific-North American plate boundary in the Salton Trough of southern California with an integrated approach that includes basin analysis, magnetostratigraphy, and geologic mapping of upper Pliocene to Pleistocene sedimentary rocks in the San Felipe Hills. These deposits preserve the earliest sedimentary record of movement on the San Felipe and San Jacinto fault zones that replaced and deactivated the late Cenozoic West Salton detachment fault. Sandstone and mudstone of the Brawley Formation accumulated between ~1.1 and ~0.6-0.5 Ma in a delta on the margin of an arid Pleistocene lake, which received sediment from alluvial fans of the Ocotillo Formation to the west-southwest. Our analysis indicates that the Ocotillo and Brawley formations prograded abruptly to the east-northeast across a former mud-dominated perennial lake (Borrego Formation) at ~1.1 Ma in response to initiation of the dextral-oblique San Felipe fault zone. The ~25-km-long San Felipe anticline initiated at about the same time and produced an intrabasinal basement-cored high within the San Felipe-Borrego basin that is recorded by progressive unconformities on its north and south limbs. A disconformity at the base of the Brawley Formation in the eastern San Felipe Hills probably records initiation and early blind slip at the southeast tip of the Clark strand of the San Jacinto fault zone. Our data are consistent with abrupt and nearly synchronous inception of the San Jacinto and San Felipe fault zones southwest of the southern San Andreas fault in the early Pleistocene during a pronounced southwestward broadening of the San Andreas fault zone. The current contractional geometry of the San Jacinto fault zone developed after ~0.5-0.6 Ma during a second, less significant change in structural style. © 2007 by The University of Chicago. All rights reserved.

  9. Volcano hazards in the San Salvador region, El Salvador

    Science.gov (United States)

    Major, J.J.; Schilling, S.P.; Sofield, D.J.; Escobar, C.D.; Pullinger, C.R.

    2001-01-01

    San Salvador volcano is one of many volcanoes along the volcanic arc in El Salvador (figure 1). This volcano, having a volume of about 110 cubic kilometers, towers above San Salvador, the country’s capital and largest city. The city has a population of approximately 2 million, and a population density of about 2100 people per square kilometer. The city of San Salvador and other communities have gradually encroached onto the lower flanks of the volcano, increasing the risk that even small events may have serious societal consequences. San Salvador volcano has not erupted for more than 80 years, but it has a long history of repeated, and sometimes violent, eruptions. The volcano is composed of remnants of multiple eruptive centers, and these remnants are commonly referred to by several names. The central part of the volcano, which contains a large circular crater, is known as El Boquerón, and it rises to an altitude of about 1890 meters. El Picacho, the prominent peak of highest elevation (1960 meters altitude) to the northeast of the crater, and El Jabali, the peak to the northwest of the crater, represent remnants of an older, larger edifice. The volcano has erupted several times during the past 70,000 years from vents central to the volcano as well as from smaller vents and fissures on its flanks [1] (numerals in brackets refer to end notes in the report). In addition, several small cinder cones and explosion craters are located within 10 kilometers of the volcano. Since about 1200 A.D., eruptions have occurred almost exclusively along, or a few kilometers beyond, the northwest flank of the volcano, and have consisted primarily of small explosions and emplacement of lava flows. However, San Salvador volcano has erupted violently and explosively in the past, even as recently as 800 years ago. When such eruptions occur again, substantial population and infrastructure will be at risk. Volcanic eruptions are not the only events that present a risk to local

  10. MEGADOCK 4.0: an ultra-high-performance protein-protein docking software for heterogeneous supercomputers.

    Science.gov (United States)

    Ohue, Masahito; Shimoda, Takehiro; Suzuki, Shuji; Matsuzaki, Yuri; Ishida, Takashi; Akiyama, Yutaka

    2014-11-15

    The application of protein-protein docking in large-scale interactome analysis is a major challenge in structural bioinformatics and requires huge computing resources. In this work, we present MEGADOCK 4.0, an FFT-based docking software package that makes extensive use of recent heterogeneous supercomputers and shows powerful, scalable performance with >97% strong scaling. MEGADOCK 4.0 is written in C++ with OpenMPI and NVIDIA CUDA 5.0 (or later) and is freely available to all academic and non-profit users at http://www.bi.cs.titech.ac.jp/megadock. Contact: akiyama@cs.titech.ac.jp. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
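
    A schematic numpy sketch of the FFT trick behind grid-based rigid docking (Katchalski-Katzir-style cross-correlation): one FFT pass scores all translations of a ligand grid against a receptor grid. Toy shape grids and a single fixed rotation are assumed; real codes such as MEGADOCK add rotational sampling, richer scoring terms, and GPU FFTs.

```python
# Correlation of receptor and ligand occupancy grids over all translations.
import numpy as np

n = 64
receptor = np.zeros((n, n, n))
ligand = np.zeros((n, n, n))
receptor[20:30, 20:30, 20:30] = 1.0     # hypothetical binding region
ligand[0:6, 0:6, 0:6] = 1.0             # hypothetical ligand shape

R, Lg = np.fft.fftn(receptor), np.fft.fftn(ligand)
score = np.fft.ifftn(R * np.conj(Lg)).real   # scores for every translation

best = np.unravel_index(np.argmax(score), score.shape)
print("best translation (grid units):", best, "score:", score[best])
```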

  11. Do PEV Drivers Park Near Publicly Accessible EVSE in San Diego but Not Use Them?

    Energy Technology Data Exchange (ETDEWEB)

    Francfort, James Edward [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-06-01

    The PEV charging stations deployed as part of The EV Project included both residential and non-residential sites. Non-residential sites included EVSE installed in workplace environments, fleet applications and those that were publicly accessible near retail centers, parking lots, and similar locations. The EV Project utilized its Micro-Climate® planning process to determine potential sites for publicly accessible EVSE in San Diego. This process worked with local stakeholders to target EVSE deployment near areas where significant PEV traffic and parking was expected. This planning process is described in "The Micro-Climate Deployment Process in San Diego" [1]. The EV Project issued its deployment plan for San Diego in November 2010, prior to the sale of PEVs by Nissan and Chevrolet. The Project deployed residential EVSE concurrent with vehicle delivery starting in December 2010. The installation of non-residential EVSE commenced in April 2011 consistent with the original Project schedule, closely following the adoption of PEVs. The residential participation portion of The EV Project was fully subscribed by January 2013 and the non-residential EVSE deployment was essentially completed by August 2013.

  12. California State Waters Map Series: offshore of San Gregorio, California

    Science.gov (United States)

    Cochrane, Guy R.; Dartnell, Peter; Greene, H. Gary; Watt, Janet T.; Golden, Nadine E.; Endris, Charles A.; Phillips, Eleyne L.; Hartwell, Stephen R.; Johnson, Samuel Y.; Kvitek, Rikk G.; Erdey, Mercedes D.; Bretz, Carrie K.; Manson, Michael W.; Sliter, Ray W.; Ross, Stephanie L.; Dieter, Bryan E.; Chin, John L.; Cochran, Susan A.; Cochrane, Guy R.; Cochran, Susan A.

    2014-01-01

    In 2007, the California Ocean Protection Council initiated the California Seafloor Mapping Program (CSMP), designed to create a comprehensive seafloor map of high-resolution bathymetry, marine benthic habitats, and geology within the 3-nautical-mile limit of California's State Waters. The CSMP approach is to create highly detailed seafloor maps through collection, integration, interpretation, and visualization of swath sonar data, acoustic backscatter, seafloor video, seafloor photography, high-resolution seismic-reflection profiles, and bottom-sediment sampling data. The map products display seafloor morphology and character, identify potential marine benthic habitats, and illustrate both the surficial seafloor geology and shallow (to about 100 m) subsurface geology. The Offshore of San Gregorio map area is located in northern California, on the Pacific coast of the San Francisco Peninsula about 50 kilometers south of the Golden Gate. The map area lies offshore of the Santa Cruz Mountains, part of the northwest-trending Coast Ranges that run roughly parallel to the San Andreas Fault Zone. The Santa Cruz Mountains lie between the San Andreas Fault Zone and the San Gregorio Fault system. The nearest significant onshore cultural centers in the map area are San Gregorio and Pescadero, both unincorporated communities with populations well under 1,000. Both communities are situated inland of state beaches that share their names. No harbor facilities are within the Offshore of San Gregorio map area. The hilly coastal area is virtually undeveloped grazing land for sheep and cattle. The coastal geomorphology is controlled by late Pleistocene and Holocene slip in the San Gregorio Fault system. A westward bend in the San Andreas Fault Zone, southeast of the map area, coupled with right-lateral movement along the San Gregorio Fault system have caused regional folding and uplift. The coastal area consists of high coastal bluffs and vertical sea cliffs. Coastal promontories in

  13. Richness and diversity patterns of birds in urban green areas in the center of San Salvador, El Salvador

    Directory of Open Access Journals (Sweden)

    Gabriel L. Vides-Hernández

    2017-10-01

    Full Text Available Increasing urbanization has led to natural ecosystems being constantly replaced by an urban landscape, a process that is very noticeable in El Salvador due to its small territorial extension (21,041 km²) and high population density (291 inhabitants/km²). We performed an inventory in 12 urban green areas of different sizes, shapes and distances from the largest forest area in the metropolitan zone, based on MacArthur and Wilson's (1967) island biogeography theory. We evaluated whether the richness, diversity and equitability of birds were related to the size and distance of the green areas and whether their shape had any effect on bird richness. We observed a total of 20 bird species, which we classified according to their diet (generalist and specialist). We observed that distance did not influence bird richness and that there was no interaction between the size and distance variables, but the size of the green area did have an influence. The richness of birds with a specialist diet was higher in the more circular green areas than in the irregular ones. We conclude that in the urban center of San Salvador, the presence of large, circular green areas contributes more to the richness of specialist-diet birds than areas of similar size but irregular shape. However, small areas also contribute more to the richness of specialist-diet birds if their shape is more circular.

  14. Vegetation - San Felipe Valley [ds172

    Data.gov (United States)

    California Natural Resource Agency — This Vegetation Map of the San Felipe Valley Wildlife Area in San Diego County, California is based on vegetation samples collected in the field in 2002 and 2005 and...

  15. The urbanism of Santiago de Compostela: a 1709 plan showing the small squares of San Martín and San Miguel

    Directory of Open Access Journals (Sweden)

    Miguel Taín Guzmán

    1998-01-01

    Full Text Available This article studies an unpublished plan of 1709 depicting the small squares of San Martín and San Miguel, in the intramural quarter of the Puerta de la Peña in Santiago de Compostela. On the basis of this drawing, the urban fabric of both public spaces is analysed in detail, together with the buildings that bound them, in particular the church of San Martín Pinario, the vanished Palacio del Tribunal de la Santa Inquisición, and the parish church of San Miguel dos Agros.

  16. Shifting shoals and shattered rocks : How man has transformed the floor of west-central San Francisco Bay

    Science.gov (United States)

    Chin, John L.; Wong, Florence L.; Carlson, Paul R.

    2004-01-01

    San Francisco Bay, one of the world's finest natural harbors and a major center for maritime trade, is referred to as the 'Gateway to the Pacific Rim.' The bay is an urbanized estuary that is considered by many to be the major estuary in the United States most modified by man's activities. The population around the estuary has grown rapidly since the 1850's and now exceeds 7 million people. The San Francisco Bay area's economy ranks as one of the largest in the world, larger even than that of many countries. More than 10 million tourists are estimated to visit the bay region each year. The bay area's population and associated development have increasingly changed the estuary and its environment. San Francisco Bay and the contiguous Sacramento-San Joaquin Delta encompass roughly 1,600 square miles (4,100 km2) and are the outlet of a major watershed that drains more than 40 percent of the land area of the State of California. This watershed provides drinking water for 20 million people (two thirds of the State's population) and irrigates 4.5 million acres of farmland and ranchland. During the past several decades, much has been done to clean up the environment and waters of San Francisco Bay. Conservationist groups have even bought many areas on the margins of the bay with the intention of restoring them to a condition more like the natural marshes they once were. However, many of the major manmade changes to the bay's environment occurred so long ago that the nature of them has been forgotten. In addition, many changes continue to occur today, such as the introduction of exotic species and the loss of commercial and sport fisheries because of declining fish populations. The economy and population of the nine counties that surround the bay continue to grow and put increasing pressure on the bay, both direct and indirect. Therefore, there are mixed signals for the future health and welfare of San Francisco Bay. The San Francisco Bay estuary consists of three

  17. 78 FR 57482 - Safety Zone; America's Cup Aerobatic Box, San Francisco Bay, San Francisco, CA

    Science.gov (United States)

    2013-09-19

    ...-AA00 Safety Zone; America's Cup Aerobatic Box, San Francisco Bay, San Francisco, CA AGENCY: Coast Guard... America's Cup air shows. These safety zones are established to provide a clear area on the water for... announced by America's Cup Race Management. ADDRESSES: Documents mentioned in this preamble are part of...

  18. Damage Detection Response Characteristics of Open Circuit Resonant (SansEC) Sensors

    Science.gov (United States)

    Dudley, Kenneth L.; Szatkowski, George N.; Smith, Laura J.; Koppen, Sandra V.; Ely, Jay J.; Nguyen, Truong X.; Wang, Chuantong; Ticatch, Larry A.; Mielnik, John J.

    2013-01-01

    The capability to assess the current or future state of the health of an aircraft to improve safety, availability, and reliability while reducing maintenance costs has been a continuous goal for decades. Many companies, commercial entities, and academic institutions have become interested in Integrated Vehicle Health Management (IVHM) and a growing effort of research into "smart" vehicle sensing systems has emerged. Methods to detect damage to aircraft materials and structures have historically relied on visual inspection during pre-flight or post-flight operations by flight and ground crews. More quantitative non-destructive investigations with various instruments and sensors have traditionally been performed when the aircraft is out of operational service during major scheduled maintenance. Through the use of reliable sensors coupled with data monitoring, data mining, and data analysis techniques, the health state of a vehicle can be detected in-situ. NASA Langley Research Center (LaRC) is developing a composite aircraft skin damage detection method and system based on open circuit SansEC (Sans Electric Connection) sensor technology. Composite materials are increasingly used in modern aircraft for reducing weight, improving fuel efficiency, and enhancing the overall design, performance, and manufacturability of airborne vehicles. Materials such as fiberglass reinforced composites (FRC) and carbon-fiber-reinforced polymers (CFRP) are being used to great advantage in airframes, wings, engine nacelles, turbine blades, fairings, fuselage structures, empennage structures, control surfaces and aircraft skins. SansEC sensor technology is a new technical framework for designing, powering, and interrogating sensors to detect various types of damage in composite materials. The source cause of the in-service damage (lightning strike, impact damage, material fatigue, etc.) to the aircraft composite is not relevant. The sensor will detect damage independent of the cause

  19. Academic Medical Centers as digital health catalysts.

    Science.gov (United States)

    DePasse, Jacqueline W; Chen, Connie E; Sawyer, Aenor; Jethwani, Kamal; Sim, Ida

    2014-09-01

    Emerging digital technologies offer enormous potential to improve quality, reduce cost, and increase patient-centeredness in healthcare. Academic Medical Centers (AMCs) play a key role in advancing medical care through cutting-edge medical research, yet traditional models for invention, validation and commercialization at AMCs have been designed around biomedical initiatives, and are less well suited for new digital health technologies. Recently, two large bi-coastal Academic Medical Centers, the University of California, San Francisco (UCSF) through the Center for Digital Health Innovation (CDHI) and Partners Healthcare through the Center for Connected Health (CCH) have launched centers focused on digital health innovation. These centers show great promise but are also subject to significant financial, organizational, and visionary challenges. We explore these AMC initiatives, which share the following characteristics: a focus on academic research methodology; integration of digital technology in educational programming; evolving models to support "clinician innovators"; strategic academic-industry collaboration and emergence of novel revenue models. Copyright © 2014 Elsevier Inc. All rights reserved.

  20. High Performance Computing in Science and Engineering '98 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    1999-01-01

    The book contains reports about the most significant projects from science and industry that are using the supercomputers of the Federal High Performance Computing Center Stuttgart (HLRS). These projects are from different scientific disciplines, with a focus on engineering, physics and chemistry. They were carefully selected in a peer-review process and are showcases for an innovative combination of state-of-the-art physical modeling, novel algorithms and the use of leading-edge parallel computer technology. As HLRS is in close cooperation with industrial companies, special emphasis has been put on the industrial relevance of results and methods.

  1. 77 FR 42649 - Safety Zone: Sea World San Diego Fireworks, Mission Bay; San Diego, CA

    Science.gov (United States)

    2012-07-20

    ... 1625-AA00 Safety Zone: Sea World San Diego Fireworks, Mission Bay; San Diego, CA AGENCY: Coast Guard... authorized by the Captain of the Port, or his designated representative. DATES: This rule is effective from 8... to ensure the public's safety. B. Basis and Purpose The Ports and Waterways Safety Act gives the...

  2. 75 FR 4090 - Center for Scientific Review; Notice of Closed Meetings

    Science.gov (United States)

    2010-01-26

    ... El Camino Real, San Diego, CA 92130. Contact Person: William A. Greenberg, PhD., Scientific Review... (Virtual Meeting). Contact Person: Fouad A. El-Zaatari, PhD., Scientific Review Officer, Center for... for Scientific Review Special Emphasis Panel, Small Business: Experimental Cancer Therapeutics. Date...

  3. Identifying Telemedicine Services to Improve Access to Specialty Care for the Underserved in the San Francisco Safety Net

    Directory of Open Access Journals (Sweden)

    Ken Russell Coelho

    2011-01-01

    Full Text Available Safety-net settings across the country have grappled with providing adequate access to specialty care services. San Francisco General Hospital and Trauma Center, serving as the city's primary safety-net hospital, has also had to struggle with the same issue. With Healthy San Francisco, the City and County of San Francisco's Universal Healthcare mandate, the increased demand for specialty care services has placed a further strain on the system. With the recent passage of California Proposition 1D, infrastructural funds are now set aside to assist in connecting major hospitals with primary care clinics in remote areas all over the state of California, using telemedicine. Based on a selected sample of key informant interviews with local staff physicians, this study provides further insight into the current process of e-referral which uses electronic communication for making referrals to specialty care. It also identifies key services for telemedicine in primary and specialty care settings within the San Francisco public health system. This study concludes with proposals for a framework that seek to increase collaboration between the referring primary care physician and specialist, to prioritize institution of these key services for telemedicine.

  4. Cataclastic rocks of the San Gabriel fault—an expression of deformation at deeper crustal levels in the San Andreas fault zone

    Science.gov (United States)

    Anderson, J. Lawford; Osborne, Robert H.; Palmer, Donald F.

    1983-10-01

    The San Gabriel fault, a deeply eroded late Oligocene to middle Pliocene precursor to the San Andreas, was chosen for petrologic study to provide information regarding intrafault material representative of deeper crustal levels. Cataclastic rocks exposed along the present trace of the San Andreas in this area are exclusively a variety of fault gouge that is essentially a rock flour with a quartz, feldspar, biotite, chlorite, amphibole, epidote, and Fe-Ti oxide mineralogy representing the milled-down equivalent of the original rock (Anderson and Osborne, 1979; Anderson et al., 1980). Likewise, fault gouge and associated breccia are common along the San Gabriel fault, but only where the zone of cataclasis is several tens of meters wide. At several localities, the zone is extremely narrow (several centimeters), and the cataclastic rock type is cataclasite, a dark, aphanitic, and highly comminuted and indurated rock. The cataclastic rocks along the San Gabriel fault exhibit more comminution than that observed for gouge along the San Andreas. The average grain diameter for the San Andreas gouge ranges from 0.01 to 0.06 mm. For the San Gabriel cataclastic rocks, it ranges from 0.0001 to 0.007 mm. Whereas the San Andreas gouge remains particulate down to the smallest grain size, the ultrafine-grained matrix of the San Gabriel cataclasite is composed of a mosaic of equidimensional, interlocking grains. The cataclastic rocks along the San Gabriel fault also show more mineralogic changes than does gouge from the San Andreas fault. At the expense of biotite, amphibole, and feldspar, there is some growth of new albite, chlorite, sericite, laumontite, analcime, mordenite (?), and calcite. The highest grade of metamorphism is laumontite-chlorite zone (zeolite facies). Mineral assemblages and constrained uplift rates allow temperature and depth estimates of 200 ± 30° C and 2-5 km, thus suggesting an approximate geothermal gradient of ~50°C/km. Such elevated temperatures imply a
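
    The quoted gradient is consistent with a simple linear estimate, assuming a surface temperature near 15 °C and a depth near the middle of the stated 2-5 km range:

```latex
\frac{dT}{dz} \;\approx\; \frac{200\,^{\circ}\mathrm{C} - 15\,^{\circ}\mathrm{C}}{3.5\ \mathrm{km}} \;\approx\; 53\,^{\circ}\mathrm{C/km} \;\sim\; 50\,^{\circ}\mathrm{C/km}
```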

  5. ENERGY RESOURCES CENTER

    Energy Technology Data Exchange (ETDEWEB)

    Sternberg, Virginia

    1979-11-01

    First I will give a short history of this Center which has had three names and three moves (and one more in the offing) in three years. Then I will tell you about the accomplishments made in the past year. And last, I will discuss what has been learned and what is planned for the future. The Energy and Environment Information Center (EEIC), as it was first known, was organized in August 1975 in San Francisco as a cooperative venture by the Federal Energy Administration (FEA), Energy Research and Development Administration (ERDA) and the Environmental Protection Agency (EPA). These three agencies planned this effort to assist the public in obtaining information about energy and the environmental aspects of energy. The Public Affairs Offices of FEA, ERDA and EPA initiated the idea of the Center. One member from each agency worked at the Center, with assistance from the Lawrence Berkeley Laboratory Information Research Group (LBL IRG) and with on-site help from the EPA Library. The Center was set up in a corner of the EPA Library. FEA and ERDA each contributed one staff member on a rotating basis to cover the daily operation of the Center and money for books and periodicals. EPA contributed space, staff time for ordering, processing and indexing publications, and additional money for acquisitions. The LBL Information Research Group received funds from ERDA on a 189 FY 1976 research project to assist in the development of the Center as a model for future energy centers.

  6. The disappearing San of southeastern Africa and their genetic affinities.

    Science.gov (United States)

    Schlebusch, Carina M; Prins, Frans; Lombard, Marlize; Jakobsson, Mattias; Soodyall, Himla

    2016-12-01

    Southern Africa was likely exclusively inhabited by San hunter-gatherers before ~2000 years ago. Around that time, East African groups assimilated with local San groups and gave rise to the Khoekhoe herders. Subsequently, Bantu-speaking farmers, arriving from the north (~1800 years ago), assimilated and displaced San and Khoekhoe groups, a process that intensified with the arrival of European colonists ~350 years ago. In contrast to the western parts of southern Africa, where several Khoe-San groups still live today, the eastern parts are largely populated by Bantu speakers and individuals of non-African descent. Only a few scattered groups with oral traditions of Khoe-San ancestry remain. Advances in genetic research open up new ways to understand the population history of southeastern Africa. We investigate the genomic variation of the remaining individuals from two South African groups with oral histories connecting them to eastern San groups, i.e., the San from Lake Chrissie and the Duma San of the uKhahlamba-Drakensberg. Using ~2.2 million genetic markers, combined with comparative published data sets, we show that the Lake Chrissie San have genetic ancestry from both Khoe-San (likely the ||Xegwi San) and Bantu speakers. Specifically, we found that the Lake Chrissie San are closely related to the current southern San groups (i.e., the Karretjie people). Duma San individuals, on the other hand, were genetically similar to southeastern Bantu speakers from South Africa. This study illustrates how genetic tools can be used to assess hypotheses about the ancestry of people who seemingly lost their historic roots, only recalling a vague oral tradition of their origin.

  7. Chain conformations of ABA triblock coplymers in microphase-separated structures for SANS

    International Nuclear Information System (INIS)

    Matsushita, Y.; Nomura, M.; Watanabe, J.; Mogi, Y.; Noda, I.; Han, C.C.

    1993-01-01

    Single-chain conformations of the polystyrene center block of poly(2-vinylpyridine-b-styrene-b-2-vinylpyridine) (PSP) triblock copolymers of the ABA type in bulk were measured by small-angle neutron scattering (SANS), while microphase-separated structures were studied by small-angle X-ray scattering (SAXS) and transmission electron microscopy (TEM). From the morphological observations, the PSP block copolymers were confirmed to have an alternating lamellar structure both when φs = 0.33 and when φs = 0.5, where φs is the volume fraction of the polystyrene blocks. It was also clarified that the chain dimension of the center blocks of the sample with φs = 0.33 is smaller than that of the sample with φs = 0.5. This result may mean that the center blocks have a bridge-rich conformation when φs = 0.33, while they have a loop-rich conformation when φs = 0.5. (author)
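
    One standard route from SANS intensity to such chain dimensions is the low-q Guinier regime of the single-chain scattering (a general relation, not specific to this paper):

```latex
I(q) \;\simeq\; I(0)\,\exp\!\left(-\tfrac{1}{3}\,q^{2}R_{g}^{2}\right), \qquad qR_{g} \lesssim 1
```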

  8. 75 FR 27432 - Security Zone; Golden Guardian 2010 Regional Exercise; San Francisco Bay, San Francisco, CA

    Science.gov (United States)

    2010-05-17

    ... can better evaluate its effects on them and participate in the rulemaking process. Small businesses... DEPARTMENT OF HOMELAND SECURITY Coast Guard 33 CFR Part 165 [Docket No. USCG-2010-0221] RIN 1625-AA87 Security Zone; Golden Guardian 2010 Regional Exercise; San Francisco Bay, San Francisco, CA AGENCY...

  9. Toxic phytoplankton in San Francisco Bay

    Science.gov (United States)

    Rodgers, Kristine M.; Garrison, David L.; Cloern, James E.

    1996-01-01

    The Regional Monitoring Program (RMP) was conceived and designed to document the changing distribution and effects of trace substances in San Francisco Bay, with a focus on toxic contaminants that have become enriched by human inputs. However, coastal ecosystems like San Francisco Bay also have potential sources of naturally produced toxic substances that can disrupt food webs and, under extreme circumstances, become threats to public health. The most prevalent source of natural toxins is blooms of algal species that can synthesize metabolites that are toxic to invertebrates or vertebrates. Although San Francisco Bay is nutrient-rich, it has so far apparently been immune from the epidemic of harmful algal blooms in the world's nutrient-enriched coastal waters. This absence of acute harmful blooms does not imply that San Francisco Bay has unique features that preclude toxic blooms. No sampling program has been implemented to document the occurrence of toxin-producing algae in San Francisco Bay, so it is difficult to judge the likelihood of such events in the future. This issue is directly relevant to the goals of the RMP because harmful species of phytoplankton have the potential to disrupt ecosystem processes that support animal populations, cause severe illness or death in humans, and confound the outcomes of toxicity bioassays such as those included in the RMP. Our purpose here is to utilize existing data on the phytoplankton community of San Francisco Bay to provide a provisional statement about the occurrence, distribution, and potential threats of harmful algae in this Estuary.

  10. 78 FR 21399 - Notice of Inventory Completion: Center for Archaeological Research at the University of Texas at...

    Science.gov (United States)

    2013-04-10

    ...-PPWOCRADN0] Notice of Inventory Completion: Center for Archaeological Research at the University of Texas at San Antonio, TX AGENCY: National Park Service, Interior. ACTION: Notice. SUMMARY: The Center for... consultation with the appropriate Indian tribe, and has determined that there is a cultural affiliation between...

  11. Computational fluid dynamics: complex flows requiring supercomputers. January 1975-July 1988 (Citations from the INSPEC: Information Services for the Physics and Engineering Communities data base). Report for January 1975-July 1988

    International Nuclear Information System (INIS)

    1988-08-01

    This bibliography contains citations concerning computational fluid dynamics (CFD), a new method in computational science to perform complex flow simulations in three dimensions. Applications include aerodynamic design and analysis for aircraft, rockets, and missiles, and automobiles; heat-transfer studies; and combustion processes. Included are references to supercomputers, array processors, and parallel processors where needed for complete, integrated design. Also included are software packages and grid-generation techniques required to apply CFD numerical solutions. Numerical methods for fluid dynamics, not requiring supercomputers, are found in a separate published search. (Contains 83 citations fully indexed and including a title list.)

  12. Performance Characteristics of Hybrid MPI/OpenMP Scientific Applications on a Large-Scale Multithreaded BlueGene/Q Supercomputer

    KAUST Repository

    Wu, Xingfu; Taylor, Valerie

    2013-01-01

    In this paper, we investigate the performance characteristics of five hybrid MPI/OpenMP scientific applications (two NAS Parallel Benchmarks Multi-Zone codes, SP-MZ and BT-MZ; an earthquake simulation, PEQdyna; an aerospace application, PMLB; and a 3D particle-in-cell application, GTC) on a large-scale multithreaded Blue Gene/Q supercomputer at Argonne National Laboratory, and quantify the performance gap resulting from using different numbers of threads per node. We use performance tools and MPI profile and trace libraries available on the supercomputer to analyze and compare the performance of these hybrid applications as the number of OpenMP threads per node increases, and find that beyond a certain point adding threads saturates or worsens performance. For the strong-scaling applications (SP-MZ, BT-MZ, PEQdyna and PMLB), using 32 threads per node results in much better application efficiency than using 64 threads per node; as the number of threads per node increases, the FPU (floating point unit) percentage decreases, while the MPI percentage (except for PMLB) and the IPC (instructions per cycle) per core (except for BT-MZ) increase. For the weak-scaling application GTC, the performance trend (relative speedup) is very similar with increasing threads per node regardless of how many nodes (32, 128, 512) are used. © 2013 IEEE.
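
    A schematic stand-in for the hybrid MPI/OpenMP pattern being benchmarked: MPI ranks across and within nodes, with OpenMP-style threading inside each rank. Here the threaded kernel is numpy's BLAS matrix multiply, whose thread count is typically set by OMP_NUM_THREADS; this is not one of the paper's applications, and the script name is hypothetical.

```python
# Run, e.g.:  OMP_NUM_THREADS=8 mpiexec -n 16 python hybrid_sketch.py
# Varying OMP_NUM_THREADS at fixed total cores trades ranks for threads,
# the knob whose effect the paper quantifies.
import time
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

A = np.random.rand(2000, 2000)
comm.Barrier()
t0 = time.time()
B = A @ A                                # threaded kernel inside each MPI rank
comm.Barrier()
t_max = comm.reduce(time.time() - t0, op=MPI.MAX, root=0)

if rank == 0:
    print(f"{size} ranks, slowest matrix multiply took {t_max:.3f}s")
```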

  14. Research to application: Supercomputing trends for the 90's - Opportunities for interdisciplinary computations

    International Nuclear Information System (INIS)

    Shankar, V.

    1991-01-01

    The progression of supercomputing is reviewed from the point of view of computational fluid dynamics (CFD), and multidisciplinary problems impacting the design of advanced aerospace configurations are addressed. The application of full-potential and Euler equations to transonic and supersonic problems in the 1970s and early 1980s is outlined, along with the Navier-Stokes computations that became widespread during the late 1980s and early 1990s. Multidisciplinary computations currently in progress are discussed, including CFD and aeroelastic coupling for both static and dynamic flexible computations; CFD, aeroelastic, and controls coupling for flutter suppression and active control; and the development of a computational electromagnetics technology based on CFD methods. Attention is given to the computational challenges standing in the way of establishing a computational environment that encompasses many technologies. 40 refs

  15. ASTER Flyby of San Francisco

    Science.gov (United States)

    2002-01-01

    The Advanced Spaceborne Thermal Emission and Reflection Radiometer, ASTER, is an international project: the instrument was supplied by Japan's Ministry of International Trade and Industry. A joint US/Japan science team developed algorithms for science data products, and is validating instrument performance. With its 14 spectral bands, extremely high spatial resolution, and 15-meter along-track stereo capability, ASTER is the zoom lens of the Terra satellite. The primary mission goals are to characterize the Earth's surface and to monitor dynamic events and processes that influence habitability at human scales. ASTER's monitoring and mapping capabilities are illustrated by this series of images of the San Francisco area. The visible and near-infrared image reveals suspended sediment in the bays, vegetation health, and details of the urban environment. Flying over San Francisco, we see the downtown and shadows of the large buildings. Past the Golden Gate Bridge and Alcatraz Island, we cross San Pablo Bay and enter Suisun Bay. Turning south, we fly over the Berkeley and Oakland Hills. Large salt-evaporation ponds come into view at the south end of San Francisco Bay. We turn northward and approach San Francisco Airport. Rather than landing and ending our flight, we see this as only the beginning of a 6-year mission to better understand the habitability of the world on which we live. Image courtesy of MITI, ERSDAC, JAROS, and the U.S./Japan ASTER Science Team.

  16. The Effect of Bangpungtongsung-san Extracts on Adipocyte Metabolism

    Directory of Open Access Journals (Sweden)

    Sang Min, Lee

    2008-03-01

    Full Text Available Objective: The purpose of this study is to investigate the effects of Bangpungtongsung-san extracts, prepared by two extraction methods (alcohol and water), on preadipocyte proliferation in the 3T3-L1 cell line, lipolysis of adipocytes from rat epididymis, and localized fat accumulation in porcine tissue. Methods: Reducing 3T3-L1 proliferation and lipogenesis plays a primary role in reducing obesity. Cell cultures of 3T3-L1 preadipocytes and adipocytes were performed, Sprague-Dawley rats were used for the lipolysis experiments, and cells were treated with Bangpungtongsung-san extracts at concentrations of 0.01-1 ㎎/㎖. Porcine skin including fat tissue was treated with Bangpungtongsung-san extracts in dosage-dependent amounts, and the histologic changes after injection of these extracts were investigated. Results: The following results were obtained from the 3T3-L1 preadipocyte proliferation and adipocyte lipolysis experiments in rats and the histologic investigation of fat tissue. 1. Bangpungtongsung-san extracts decreased preadipocyte proliferation at the high dosage (1.0 ㎎/㎖). 2. Bangpungtongsung-san extracts decreased the activity of glycerol-3-phosphate dehydrogenase (GPDH) at the high dosage (1.0 ㎎/㎖); in particular, the effect of the alcohol extract became clearer over time at high concentration. 3. Comparing the lipolytic effects of the extracts, the alcohol extract at the high dosage (1.0 ㎎/㎖) showed a greater effect than the water extract. 4. In the histological investigation of porcine fat tissue treated with Bangpungtongsung-san extracts, the water extract showed a lipolytic effect at the high dosage (10.0 ㎎/㎖), while the alcohol extract showed significant lysis of cell membranes at all concentrations. Conclusion: These results suggest that Bangpungtongsung-san extracts efficiently

  17. Study of a conceptual nuclear energy center at Green River, Utah: water allocation issues

    International Nuclear Information System (INIS)

    Harper, N.J.

    1982-04-01

    According to preliminary studies, operation of a nine-reactor Nuclear Energy Center near Green River, Utah would require the acquisition of 126,630 acre-feet per year. Groundwater aquifers are a potential source of supply but do not present a viable option at this time due to insufficient data on aquifer characteristics. Surface supplies are available from the nearby Green and San Rafael Rivers, tributaries of the Colorado River, but are subject to important constraints. Because of these constraints, the demand for a dependable water supply for a Nuclear Energy Center could best be met by the acquisition of vested water rights from senior appropriators in either the Green or San Rafael Rivers. The Utah Water Code provides a set of procedures to accomplish such a transfer of water rights

  18. Modeling pesticide loadings from the San Joaquin watershed into the Sacramento-San Joaquin Delta using SWAT

    Science.gov (United States)

    Chen, H.; Zhang, M.

    2016-12-01

    The Sacramento-San Joaquin Delta is an ecologically rich, hydrologically complex area that serves as the hub of California's water supply. However, pesticides have been routinely detected in the Delta waterways, with concentrations exceeding the benchmark for the protection of aquatic life. Pesticide loadings into the Delta are partially attributed to the San Joaquin watershed, a highly productive agricultural watershed located upstream. Therefore, this study aims to simulate pesticide loadings to the Delta by applying the Soil and Water Assessment Tool (SWAT) model to the San Joaquin watershed, under the support of the USDA-ARS Delta Area-Wide Pest Management Program. Pesticide use patterns in the San Joaquin watershed were characterized by combining the California Pesticide Use Reporting (PUR) database and GIS analysis. Sensitivity/uncertainty analyses and multi-site calibration were performed in the simulation of stream flow, sediment, and pesticide loads along the San Joaquin River. Model performance was evaluated using a combination of graphic and quantitative measures. Preliminary results indicated that stream flow was satisfactorily simulated along the San Joaquin River and the major eastern tributaries, whereas stream flow was less accurately simulated in the western tributaries, which are ephemeral small streams that peak during winter storm events and are mainly fed by irrigation return flow during the growing season. The most sensitive parameters to stream flow were CN2, SOL_AWC, HRU_SLP, SLSUBBSN, SLSOIL, GWQMN and GW_REVAP. Regionalization of parameters is important as the sensitivity of parameters vary significantly spatially. In terms of evaluation metric, NSE tended to overrate model performance when compared to PBIAS. Anticipated results will include (1) pesticide use pattern analysis, (2) calibration and validation of stream flow, sediment, and pesticide loads, and (3) characterization of spatial patterns and temporal trends of pesticide yield.
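    For reference, the two evaluation metrics named above have standard definitions: NSE compares the residual variance to the variance of the observations, while PBIAS measures the average tendency of simulated values to run above or below the observed ones. The sketch below illustrates those formulas (our illustration with invented toy data, not code from the study):

        /* Standard NSE and PBIAS metrics for paired observed/simulated series.
           Illustrative sketch; the data and names are not from the study. */
        #include <stdio.h>

        /* Nash-Sutcliffe efficiency: 1 - sum((obs-sim)^2) / sum((obs-mean)^2) */
        double nse(const double *obs, const double *sim, int n) {
            double mean = 0.0;
            for (int i = 0; i < n; i++) mean += obs[i];
            mean /= n;
            double num = 0.0, den = 0.0;
            for (int i = 0; i < n; i++) {
                num += (obs[i] - sim[i]) * (obs[i] - sim[i]);
                den += (obs[i] - mean) * (obs[i] - mean);
            }
            return 1.0 - num / den;
        }

        /* Percent bias: 100 * sum(obs-sim) / sum(obs); positive = underestimation */
        double pbias(const double *obs, const double *sim, int n) {
            double num = 0.0, den = 0.0;
            for (int i = 0; i < n; i++) { num += obs[i] - sim[i]; den += obs[i]; }
            return 100.0 * num / den;
        }

        int main(void) {
            double obs[] = {12.0, 30.5, 22.1, 8.4, 15.0};  /* toy monthly flows */
            double sim[] = {10.2, 33.0, 20.0, 9.1, 14.2};
            printf("NSE = %.3f, PBIAS = %.1f%%\n", nse(obs, sim, 5), pbias(obs, sim, 5));
            return 0;
        }

    Because NSE is dominated by squared errors at high flows, a simulation can score well on NSE while carrying a systematic bias that PBIAS exposes, which is consistent with the authors' remark that NSE tended to overrate model performance relative to PBIAS.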

  19. Butterfly fauna in Mount Gariwang-san, Korea

    Directory of Open Access Journals (Sweden)

    Cheol Min Lee

    2016-06-01

    Full Text Available The aim of this study is to elucidate the butterfly fauna of Mt. Gariwang-san, Korea. A field survey was conducted from 2010 to 2015 using the line transect method, supplemented by a literature survey. A total of 2,037 butterflies belonging to 105 species were recorded. Species-richness estimation suggested that 116 species live on Mt. Gariwang-san. Within this fauna, the percentage of northern species was very high, and the percentage of grassland species was relatively higher than that of forest-edge and forest-interior species. Sixteen red-list species were found; in particular, Mimathyma nycteis has been recorded only on Mt. Gariwang-san. When the percentages of northern and southern species were compared with those recorded in previous studies, the percentage of northern species was found to have decreased significantly whereas that of southern species increased. We suggest that the butterfly community, which is distributed at relatively high altitudes on Mt. Gariwang-san, will gradually change in response to climate change.

  20. Distribution and demography of San Francisco gartersnakes (Thamnophis sirtalis tetrataenia) at Mindego Ranch, Russian Ridge Open Space Preserve, San Mateo County, California

    Science.gov (United States)

    Kim, Richard; Halstead, Brian J.; Wylie, Glenn D.; Casazza, Michael L.

    2018-04-26

    San Francisco gartersnakes (Thamnophis sirtalis tetrataenia) are a subspecies of common gartersnakes endemic to the San Francisco Peninsula of northern California. Because of habitat loss and collection for the pet trade, San Francisco gartersnakes were listed as endangered under the precursor to the Federal Endangered Species Act. A population of San Francisco gartersnakes resides at Mindego Ranch, San Mateo County, which is part of the Russian Ridge Open Space Preserve owned and managed by the Midpeninsula Regional Open Space District (MROSD). Because the site contained non-native fishes and American bullfrogs (Lithobates catesbeianus), MROSD implemented management to eliminate or reduce the abundance of these non-native species in 2014. We monitored the population using capture-mark-recapture techniques to document changes in the population during and following management actions. Although drought confounded some aspects of inference about the effects of management, prey and San Francisco gartersnake populations generally increased following draining of Aquatic Feature 3. Continued management of the site to keep invasive aquatic predators from recolonizing or increasing in abundance, as well as vegetation management that promotes heterogeneous grassland/shrubland near wetlands, likely would benefit this population of San Francisco gartersnakes.

  1. Car2x with software defined networks, network functions virtualization and supercomputers technical and scientific preparations for the Amsterdam Arena telecoms fieldlab

    NARCIS (Netherlands)

    Meijer R.J.; Cushing R.; De Laat C.; Jackson P.; Klous S.; Koning R.; Makkes M.X.; Meerwijk A.

    2015-01-01

    In the invited talk 'Car2x with SDN, NFV and supercomputers' we report on how our past work with SDN [1, 2] allows the design of a smart mobility fieldlab in the huge parking lot of the Amsterdam Arena. We explain how we can engineer and test software that handles the complex conditions of the Car2X

  2. DARPA (Defense Advanced Research Projects Agency) Review on EHF Devices Held in San Diego, California on 24-25 January 1989

    Science.gov (United States)

    1989-04-01

    representing the official policies, either expressed or implied, of the Naval Ocean Systems Center or the U.S. Government. NAVAL OCEAN SYSTEMS CENTER San... (the remainder of the scanned front matter is illegible).

  3. SANS facility at the Pitesti 14 MW Triga reactor

    International Nuclear Information System (INIS)

    Ionita, I.; Anghel, E.; Mincu, M.; Datcu, A.; Grabcev, B.; Todireanu, S.; Constantin, F.; Shvetsov, V.; Popescu, G.

    2006-01-01

    Full text of publication follows: At the present time, an important and not yet fully exploited potential is represented by the SANS instruments existing at lower-power reactors and at reactors in developing countries, even if they are generally more simply equipped and lack the infrastructure to maintain and repair high-technology accessories. The application of SANS at lower-power reactors and in developing countries is nevertheless possible in well-selected topics where only a restricted Q range is required, where the scattering power is expected to be sufficiently high, or where the sample size can be increased at the expense of resolution. Examples of this type of application are: 1) phase separation and precipitates in materials science, 2) ultrafine-grained materials (nano-crystals, ceramics), 3) porous materials such as concretes and filter materials, 4) conformation and entanglements of polymer chains, 5) aggregates of micelles in microemulsions, gels and colloids, 6) radiation damage in steels and alloys. The need to install a new SANS facility at the Triga reactor of the Institute for Nuclear Research in Pitesti, Romania became pressing especially after the shutdown of the VVRS reactor in Bucharest. A monochromatic neutron beam with 1.5 Å ≤ λ ≤ 5 Å is produced by a mechanical velocity selector with helical slots. The distance between the sample and the detector plane is 5.2 m. The sample width may be set between 10 mm and 20 mm. The minimum value of the scattering vector is Q_min = 0.005 Å⁻¹, while the maximal value is Q_max = 0.5 Å⁻¹. The relative error is ΔQ/Q_min = 0.5. Cooperation partnerships between advanced research centers and smaller ones in developing countries can be fruitful, with the former acting as mentors in solving specific problems. Such a partnership was established between INR Pitesti, Romania and JINR Dubna, Russia. The first step in this cooperation
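    As a rough consistency check on the quoted limits (our arithmetic, not part of the record), the Q-range follows from the standard small-angle relation between scattering vector, wavelength and detector geometry:

        \[ Q \;=\; \frac{4\pi}{\lambda}\,\sin\theta \;\approx\; \frac{2\pi r}{\lambda L}, \qquad \theta \ll 1, \]

    where r is the radial position on the detector and L = 5.2 m is the sample-detector distance. With λ = 5 Å, Q_min = 0.005 Å⁻¹ corresponds to r = Q_min·λ·L/(2π) ≈ 0.02 m, i.e. about 2 cm from the beam center, a plausible inner radius set by the beamstop.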

  4. University of California San Francisco (UCSF-2): Expression Analysis of Superior Cervical Ganglion from Backcrossed TH-MYCN Transgenic Mice | Office of Cancer Genomics

    Science.gov (United States)

    The CTD2 Center at University of California San Francisco (UCSF-2) used genetic analysis of the peripheral sympathetic nervous system to identify potential therapeutic targets in neuroblastoma. Read the abstract Experimental Approaches Read the detailed Experimental Approaches

  5. Adaptive Management Methods to Protect the California Sacramento-San Joaquin Delta Water Resource

    Science.gov (United States)

    Bubenheim, David

    2016-01-01

    The California Sacramento-San Joaquin River Delta is the hub of California's water supply, conveying water from Northern to Southern California agriculture and communities while supporting important ecosystem services, agriculture, and communities in the Delta. Changes in climate, long-term drought, water-quality changes, and the expansion of invasive aquatic plants threaten ecosystems, impede ecosystem restoration, and are economically, environmentally, and sociologically detrimental to the San Francisco Bay/California Delta complex. NASA Ames Research Center and the USDA-ARS partnered with the State of California and local governments to develop science-based, adaptive-management strategies for the Sacramento-San Joaquin Delta. The project combines science, operations, and economics related to integrated management scenarios for aquatic weeds to help land and waterway managers make science-informed decisions regarding management and outcomes. The team provides a comprehensive understanding of agricultural and urban land use in the Delta and in the major watersheds (San Joaquin/Sacramento) supplying the Delta, and of their interaction with drought and climate impacts on the environment, water quality, and weed growth. The team recommends conservation and modified land-use practices and aids local Delta stakeholders in developing management strategies. New remote sensing tools have been developed to enhance the ability to assess conditions, inform decision-support tools, and monitor management practices. Science gaps in understanding how native and invasive plants respond to altered environmental conditions are being filled, providing critical biological response parameters for Delta-SWAT simulation modeling. Operational agencies such as the California Department of Boating and Waterways provide testing and act as initial adopters of decision-support tools. Methods developed by the project can become routine land- and water-management tools in complex river delta systems.

  6. Making lemonade from lemons: a case study on loss of space at the Dolph Briscoe, Jr. Library, University of Texas Health Science Center at San Antonio.

    Science.gov (United States)

    Tobia, Rajia C; Feldman, Jonquil D

    2010-01-01

    The setting for this case study is the Dolph Briscoe, Jr. Library, University of Texas Health Science Center at San Antonio, a health sciences campus with medical, dental, nursing, health professions, and graduate schools. During 2008-2009, major renovations to the library building were completed including office space for a faculty development department, multipurpose classrooms, a 24/7 study area, study rooms, library staff office space, and an information commons. The impetus for changes to the library building was the decreasing need to house collections in an increasingly electronic environment, the need for office space for other departments, and growth of the student body. About 40% of the library building was remodeled or repurposed, with a loss of approximately 25% of the library's original space. Campus administration proposed changes to the library building, and librarians worked with administration, architects, and construction managers to seek renovation solutions that meshed with the library's educational mission.

  7. A user-friendly web portal for T-Coffee on supercomputers

    Directory of Open Access Journals (Sweden)

    Koetsier Jos

    2011-05-01

    Full Text Available Abstract Background Parallel T-Coffee (PTC) was the first parallel implementation of the T-Coffee multiple sequence alignment tool. It is based on MPI and RMA mechanisms. Its purpose is to reduce the execution time of large-scale sequence alignments. It can be run on distributed-memory clusters, allowing users to align data sets consisting of hundreds of proteins within a reasonable time. However, most of the potential users of this tool are not familiar with the use of grids or supercomputers. Results In this paper we show how PTC can be easily deployed and controlled on a supercomputer architecture using a web portal developed using Rapid. Rapid is a tool for efficiently generating standardized portlets for a wide range of applications, and the approach described here is generic enough to be applied to other applications or to deploy PTC on different HPC environments. Conclusions The PTC portal allows users to upload a large number of sequences to be aligned by the parallel version of T-Coffee that cannot be aligned by a single machine due to memory and execution-time constraints. The web portal provides a user-friendly solution.

  8. San Francisco Bay Long Term Management Strategy for Dredging

    Science.gov (United States)

    The San Francisco Bay Long Term Management Strategy (LTMS) is a cooperative effort to develop a new approach to dredging and dredged material disposal in the San Francisco Bay area. The LTMS serves as the Regional Dredging Team for the San Francisco area.

  9. San Juan Uchucuanicu: évolution historique

    Directory of Open Access Journals (Sweden)

    1975-01-01

    Full Text Available The community of San Juan has been officially recognized since 1939. The first part concerns the organization of the reducción of San Juan around the middle of the 16th century. The fiscal burden weighed heavily on the village, and by the 17th century the crisis was general throughout the Chancay valley. The Christianization of the inhabitants was complete by the middle of that same century. From the end of the 17th century and throughout the 18th, conflicts multiplied between San Juan and the neighboring villages over grazing lands and the possession of water. The second part of the work concerns the relations of the community of San Juan with contemporary Peru: a fiscal burden that remained very heavy through the end of the colonial period, and exactions by the military just before independence. The republican period still saw conflicts with neighboring villages, but also the rise of families seeking to extract the maximum from the community. Lands were divided and allotted: the deterioration of the traditional communal organization is evident. Conflicts multiplied among small landowners, but also with the neighboring haciendas: a genuine class struggle emerged. The current situation is uncertain, and the weight of the market economy is growing with the exodus of the young. What will the community of San Juan be at the end of this century?

  10. Quaternary geology of Alameda County, and parts of Contra Costa, Santa Clara, San Mateo, San Francisco, Stanislaus, and San Joaquin counties, California: a digital database

    Science.gov (United States)

    Helley, E.J.; Graymer, R.W.

    1997-01-01

    Alameda County is located at the northern end of the Diablo Range of Central California. It is bounded on the north by the south flank of Mount Diablo, one of the highest peaks in the Bay Area, reaching an elevation of 1173 meters (3,849 ft). San Francisco Bay forms the western boundary, the San Joaquin Valley borders it on the east, and an arbitrary line from the Bay into the Diablo Range forms the southern boundary. Alameda is one of the nine Bay Area counties tributary to San Francisco Bay. Most of the county is mountainous with steep, rugged topography. Alameda County is covered by twenty-eight 7.5' topographic quadrangles, which are shown on the index map. The Quaternary deposits in Alameda County comprise three distinct depositional environments. One, forming a transgressive sequence of alluvial fan and fan-delta facies, is mapped in the western one-third of the county. The second, forming only alluvial fan facies, is mapped in the Livermore Valley and San Joaquin Valley in the eastern part of the county. The third, forming a combination of eolian dune and estuarine facies, is restricted to the Alameda Island area in the northwestern corner of the county.

  11. Digital Preservation Theory and Application: Transcontinental Persistent Archives Testbed Activity

    Directory of Open Access Journals (Sweden)

    Paul Watry

    2007-12-01

    Full Text Available The National Archives and Records Administration (NARA and EU SHAMAN projects are working with multiple research institutions on tools and technologies that will supply a comprehensive, systematic, and dynamic means for preserving virtually any type of electronic record, free from dependence on any specific hardware or software. This paper describes the joint development work between the University of Liverpool and the San Diego Supercomputer Center (SDSC at the University of California, San Diego on the NARA and SHAMAN prototypes. The aim is to provide technologies in support of the required generic data management infrastructure. We describe a Theory of Preservation that quantifies how communication can be accomplished when future technologies are different from those available at present. This includes not only different hardware and software, but also different standards for encoding information. We describe the concept of a “digital ontology” to characterize preservation processes; this is an advance on the current OAIS Reference Model of providing representation information about records. To realize a comprehensive Theory of Preservation, we describe the ongoing integration of distributed shared collection management technologies, digital library browsing, and presentation technologies for the NARA and SHAMAN Persistent Archive Testbeds.

  12. San Francisco Bay Water Quality Improvement Fund

    Science.gov (United States)

    EPA's grant program to protect and restore San Francisco Bay. The San Francisco Bay Water Quality Improvement Fund (SFBWQIF) has invested in 58 projects along with 70 partners, contributing to restored wetlands, improved water quality, and reduced polluted runoff.

  13. Development of 40m SANS and Its Utilization Techniques

    International Nuclear Information System (INIS)

    Choi, Sung Min; Kim, Tae Hwan

    2010-06-01

    Small angle neutron scattering (SANS) has been a very powerful tool for studying nanoscale (1-100 nm) bulk structures in various materials such as polymers, self-assembled materials, nano-porous materials, nano-magnetic materials, metals and ceramics. Recognizing the importance of the SANS technique, an 8m SANS instrument was installed at the CN beam port of HANARO in 2001. However, without a cold neutron source, its beam intensity is fairly low and its Q-range is rather limited due to the short instrument length. On July 1, 2003, therefore, the HANARO cold neutron research facility project was launched, and a state-of-the-art 40m SANS instrument was selected as the top-priority instrument. The development of the 40m SANS instrument was completed as a joint project between the Korea Advanced Institute of Science and Technology and HANARO in 2010. Here, we report the specification of the state-of-the-art 40m SANS instrument at HANARO.

  14. Portable implementation model for CFD simulations. Application to hybrid CPU/GPU supercomputers

    Science.gov (United States)

    Oyarzun, Guillermo; Borrell, Ricard; Gorobets, Andrey; Oliva, Assensi

    2017-10-01

    Nowadays, high performance computing (HPC) systems are experiencing a disruptive moment, with a variety of novel architectures and frameworks and no clarity about which one is going to prevail. In this context, the portability of codes across different architectures is of major importance. This paper presents a portable implementation model based on an algebraic operational approach for direct numerical simulation (DNS) and large eddy simulation (LES) of incompressible turbulent flows using unstructured hybrid meshes. The proposed strategy consists of representing the whole time-integration algorithm using only three basic algebraic operations: the sparse matrix-vector product, linear combinations of vectors, and the dot product. The main idea is to decompose the nonlinear operators into a concatenation of two SpMV operations. This provides high modularity and portability. An exhaustive analysis of the proposed implementation for hybrid CPU/GPU supercomputers has been conducted, with tests using up to 128 GPUs. The main objective is to understand the challenges of implementing CFD codes on new architectures.
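    A minimal sketch of this idea follows (our illustration, assuming CSR matrix storage; the kernel and variable names are not from the paper). Once the discrete operators are assembled as sparse matrices, an explicit time step is expressed entirely through the three kernels:

        /* Sketch: one explicit time step built from only three algebraic kernels
           (SpMV, axpy, dot), as in the portability model described above.
           CSR storage and all names are illustrative assumptions. */
        typedef struct {           /* compressed sparse row matrix */
            int n;                 /* number of rows */
            const int *rowptr;     /* size n+1 */
            const int *colidx;     /* size nnz */
            const double *val;     /* size nnz */
        } CSR;

        /* y = A*x : the only kernel that touches the mesh topology */
        void spmv(const CSR *A, const double *x, double *y) {
            for (int i = 0; i < A->n; i++) {
                double s = 0.0;
                for (int k = A->rowptr[i]; k < A->rowptr[i + 1]; k++)
                    s += A->val[k] * x[A->colidx[k]];
                y[i] = s;
            }
        }

        /* y = y + alpha*x */
        void axpy(int n, double alpha, const double *x, double *y) {
            for (int i = 0; i < n; i++) y[i] += alpha * x[i];
        }

        /* dot product, used e.g. for time-step (CFL-like) checks */
        double dot(int n, const double *x, const double *y) {
            double s = 0.0;
            for (int i = 0; i < n; i++) s += x[i] * y[i];
            return s;
        }

        /* forward-Euler step u <- u + dt*(-C u + nu*D u), with C a linearized
           convection operator and D a diffusion operator */
        void step(const CSR *C, const CSR *D, double nu, double dt,
                  double *u, double *t1, double *t2) {
            spmv(C, u, t1);               /* t1 = C u (convection) */
            spmv(D, u, t2);               /* t2 = D u (diffusion)  */
            axpy(C->n, -dt, t1, u);       /* u -= dt * C u         */
            axpy(D->n, dt * nu, t2, u);   /* u += dt * nu * D u    */
        }

    Because the mesh only ever enters through spmv, porting to a new back end reduces to supplying device implementations of these three kernels, which is the essence of the portability claim.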

  15. San Diego's High School Dropout Crisis

    Science.gov (United States)

    Wilson, James C.

    2012-01-01

    This article highlights San Diego's dropout problem and how much it is costing the city and the state. Most San Diegans do not realize the enormous impact high school dropouts have on their city. The California Dropout Research Project, located at the University of California at Santa Barbara, has estimated the lifetime cost of one class or cohort of…

  16. Coal exploration in the Alto San Jorge area, Cordoba Department. Exploracion de carbones en el Ato San Jorge, Departamento de Cordoba

    Energy Technology Data Exchange (ETDEWEB)

    Ospina, L H; Oquendo, G G [Geominas Ltda, Medellin (Colombia)

    1989-01-01

    A mining feasibility study in the area of Alto San Jorge, Department of Cordoba, Colombia, was commissioned by CARBOCOL S.A. to the Consortium Geominas-NACI. An area of 800 km² was explored to define surface-mining possibilities within two subareas referred to as Alto San Jorge and San Pedro-Ure. Rocks of Cretaceous, Tertiary and Quaternary age crop out in the zone. In the Alto San Jorge subarea the principal structure is a syncline with a south-north direction. The San Pedro-Ure subarea is formed by undulations with flanks of low dip, the most important being the San Antonio Syncline because it contains the mining block. The geological study of the surface demonstrated the existence of coal in the Oligocene Cienaga de Oro Formation and the Miocene Cerrito Formation, with potential resources of 6.3 billion tons. The subsequent exploration of the subsoil, with 20,618 m of drilling, permitted determination of demonstrated reserves on the order of 2.9 billion tons within two areas. In the sector selected for the mine plan, in the area of San Pedro-Puerto Libertador, 7,791 m of drilling was accomplished to define a demonstrated reserve of 515 million tons of coal down to a depth of 200 m. The steam-type coal has a heating value of 5,000 cal/g. Complete mining schedules were developed at the prefeasibility level for two surface mines with productions of 1.5 MMTY and 4 MMTY. 9 figs., 3 tabs., 28 refs.

  17. Pandemic (H1N1) 2009 Surveillance in Marginalized Populations, Tijuana, Mexico, and West Nile Virus Knowledge among Hispanics, San Diego, California, 2006

    Centers for Disease Control (CDC) Podcasts

    2010-08-10

    This podcast describes public health surveillance and communication in hard-to-reach populations in Tijuana, Mexico, and San Diego County, California. Dr. Marian McDonald, Associate Director for Health Disparities in CDC's National Center for Emerging and Zoonotic Infectious Diseases, discusses the importance of being flexible in determining the most effective media for health communications.  Created: 8/10/2010 by National Center for Emerging and Zoonotic Infectious Diseases, National Center for Immunization and Respiratory Diseases.   Date Released: 8/10/2010.

  18. Cenobios leoneses altomedievales ante la europeización: San Pedro y San Pablo de Montes, Santiago y San Martín de Peñalba y San Miguel de Escalada

    Directory of Open Access Journals (Sweden)

    Martínez Tejera, Artemio Manuel

    2002-06-01

    Full Text Available The following paper analyses the behaviour of three of the most important monastic communities in the kingdom of Asturias-Leon during the ninth and tenth centuries. This period witnessed the implementation of a new ordo, or liturgical ritual, that replaced the Hispanic one, which was strongly established in the territorium. The liturgical adaptation produced tension and conflicts among the members of the different monastic communities, and even between the episcopate and the monarchy, in the person of King Alfonso VI. In some of the monasteries, the arrival of the new ordo caused the adaptation of the liturgical space, with subsequent changes in liturgical furniture.

    This study analyzes how three of the most important Asturian-Leonese monastic communities of the ninth and tenth centuries (San Pedro y San Pablo de Montes, Santiago y San Martín de Peñalba, and San Miguel de Escalada) responded to the reception and implantation of the new ordo, or liturgical ritual, that came to replace the Hispanic rite, which was strongly rooted in the territorium. This liturgical readaptation produced, with varying intensity, tensions and confrontations among the members of the different monastic communities, and even between the episcopate and the monarchy (personified in the figure of Alfonso VI), but not only between them. In some of these monasteries the arrival of the new ordo also entailed the readaptation of their liturgical space, which brought with it significant structural modifications.

  19. EDF's experience with supercomputing and challenges ahead - towards multi-physics and multi-scale approaches

    International Nuclear Information System (INIS)

    Delbecq, J.M.; Banner, D.

    2003-01-01

    Nuclear power plants are a major asset of the EDF company. To remain so, in particular in a context of deregulation, competitiveness, safety and public acceptance are three conditions. These stakes apply both to existing plants and to future reactors. The purpose of the presentation is to explain how supercomputing can help EDF to satisfy these requirements. Three examples are described in detail: ensuring optimal use of nuclear fuel under wholly safe conditions, understanding and simulating the material deterioration mechanisms and moving forward with numerical simulation for the performance of EDF's activities. In conclusion, a broader vision of EDF long term R and D in the field of numerical simulation is given and especially of five challenges taken up by EDF together with its industrial and scientific partners. (author)

  20. Hydrologic Modeling at the National Water Center: Operational Implementation of the WRF-Hydro Model to support National Weather Service Hydrology

    Science.gov (United States)

    Cosgrove, B.; Gochis, D.; Clark, E. P.; Cui, Z.; Dugger, A. L.; Fall, G. M.; Feng, X.; Fresch, M. A.; Gourley, J. J.; Khan, S.; Kitzmiller, D.; Lee, H. S.; Liu, Y.; McCreight, J. L.; Newman, A. J.; Oubeidillah, A.; Pan, L.; Pham, C.; Salas, F.; Sampson, K. M.; Smith, M.; Sood, G.; Wood, A.; Yates, D. N.; Yu, W.; Zhang, Y.

    2015-12-01

    The National Weather Service (NWS) National Water Center (NWC) is collaborating with the NWS National Centers for Environmental Prediction (NCEP) and the National Center for Atmospheric Research (NCAR) to implement a first-of-its-kind operational instance of the Weather Research and Forecasting (WRF)-Hydro model over the Continental United States (CONUS) and contributing drainage areas on the NWS Weather and Climate Operational Supercomputing System (WCOSS) supercomputer. The system will provide seamless, high-resolution, continuously cycling forecasts of streamflow and other hydrologic outputs of value from both deterministic- and ensemble-type runs. WRF-Hydro will form the core of the NWC national water modeling strategy, supporting NWS hydrologic forecast operations along with the emergency response and water management efforts of partner agencies. Input and output from the system will be comprehensively verified via the NWC Water Resource Evaluation Service. Hydrologic events occur on a wide range of temporal scales, from fast-acting flash floods to long-term flow events impacting water supply. In order to capture this range of events, the initial operational WRF-Hydro configuration will feature 1) hourly analysis runs, 2) short- and medium-range deterministic forecasts out to two-day and ten-day horizons, and 3) long-range ensemble forecasts out to 30 days. All three of these configurations are underpinned by a 1 km execution of the NoahMP land surface model, with channel routing taking place on 2.67 million NHDPlusV2 catchments covering the CONUS and contributing areas. Additionally, the short- and medium-range forecast runs will feature surface and sub-surface routing on a 250 m grid, while the hourly analyses will feature this same 250 m routing in addition to nudging-based assimilation of US Geological Survey (USGS) streamflow observations. A limited number of major reservoirs will be configured within the model to begin to represent the first-order impacts of
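    As background on the nudging-based assimilation mentioned for the hourly analyses, the generic scheme relaxes a modeled value toward an observation with a prescribed gain. The sketch below is a textbook illustration of that idea only; the gain, cycling and variable names are our assumptions, not the operational configuration:

        /* Generic Newtonian nudging of a modeled streamflow value toward a gauge
           observation. The gain and data are illustrative assumptions. */
        #include <stdio.h>

        /* one nudging update: q <- q + G*(q_obs - q), with 0 < G <= 1 */
        double nudge(double q_model, double q_obs, double gain) {
            return q_model + gain * (q_obs - q_model);
        }

        int main(void) {
            double q = 55.0;            /* modeled flow at a gauged reach, m^3/s */
            const double q_obs = 70.0;  /* gauge observation, m^3/s */
            const double gain = 0.25;   /* assumed per-cycle relaxation strength */
            for (int cycle = 0; cycle < 6; cycle++) {
                q = nudge(q, q_obs, gain);
                printf("cycle %d: q = %.2f m^3/s\n", cycle, q);
            }
            return 0;   /* q relaxes toward q_obs over successive analysis cycles */
        }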

  1. EX1103L1: Exploration and Mapping, Galapagos Spreading Center: Mapping, CTD and Tow-yo

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This project will be a transit from San Diego, CA to the Galapagos Spreading Center, where multibeam mapping, CTD casts, and CTD tow-yo operations will be performed....

  2. In the San Joaquin Valley, hardly a sprinkle

    International Nuclear Information System (INIS)

    Holson, L.M.

    1993-01-01

    California has declared its six-year drought over, but in the San Joaquin Valley, center of the state's $18.5 billion agriculture industry, it lives on. The two weeks of strong rain this winter that swelled reservoirs and piled snow on the mountains is only trickling toward the region's nearly 20,000 farms. Federal water officials are under heavy pressure from the Environmental Protection Agency, which wants to improve water quality, and are worried about the plight of endangered fish in the Sacramento River. So, on March 12 they announced they will send farmers only 40% of the water allotments they got before the drought. The rest is being held against possible shortages. For the once-green valley, another year without water has brought many farmers perilously close to extinction

  3. 77 FR 46115 - Notice of Inventory Completion: San Diego Museum of Man, San Diego, CA

    Science.gov (United States)

    2012-08-02

    ...The San Diego Museum of Man has completed an inventory of human remains in consultation with the appropriate Indian tribe, and has determined that there is a cultural affiliation between the human remains and a present-day Indian tribe. Representatives of any Indian tribe that believes itself to be culturally affiliated with the human remains may contact the San Diego Museum of Man. Repatriation of the human remains to the Indian tribe stated below may occur if no additional claimants come forward.

  4. Effects of Choto-san and Chotoko on thiopental-induced sleeping time

    OpenAIRE

    JEENAPONGSA, Rattima; Tohda, Michihisa; Watanabe, Hiroshi

    2003-01-01

    Choto-san has been used for the treatment of centrally regulated disorders such as dementia, hypertension, headache and vertigo. Our laboratory showed that Choto-san improved learning and memory in ischemic mice. Notably, Choto-san-treated animals and animals that underwent occlusion of the common carotid arteries (2VO operation) slept longer than normal animals. Therefore, this study aimed to clarify the effects of Choto-san and its related components, Chotoko and Choto-san wi...

  5. SOFTWARE FOR SUPERCOMPUTER SKIF “ProLit-lC” and “ProNRS-lC” FOR FOUNDRY AND METALLURGICAL PRODUCTIONS

    Directory of Open Access Journals (Sweden)

    A. N. Chichko

    2008-01-01

    Full Text Available Data from modeling, on the SKIF supercomputer system, of the technological process of mold filling by means of the computer system 'ProLIT-lc', and also data from modeling of the steel-pouring process by means of 'ProNRS-lc', are presented. The influence of the number of processors of the multi-core SKIF computer system on the acceleration and time of modeling of technological processes connected with the production of castings and ingots is shown.

  6. 33 CFR 110.120 - San Luis Obispo Bay, Calif.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false San Luis Obispo Bay, Calif. 110... ANCHORAGES ANCHORAGE REGULATIONS Special Anchorage Areas § 110.120 San Luis Obispo Bay, Calif. (a) Area A-1. Area A-1 is the water area bounded by the San Luis Obispo County wharf, the shoreline, a line drawn...

  7. Palacios and Kitten : high performance operating systems for scalable virtualized and native supercomputing.

    Energy Technology Data Exchange (ETDEWEB)

    Widener, Patrick (University of New Mexico); Jaconette, Steven (Northwestern University); Bridges, Patrick G. (University of New Mexico); Xia, Lei (Northwestern University); Dinda, Peter (Northwestern University); Cui, Zheng.; Lange, John (Northwestern University); Hudson, Trammell B.; Levenhagen, Michael J.; Pedretti, Kevin Thomas Tauke; Brightwell, Ronald Brian

    2009-09-01

    Palacios and Kitten are new open source tools that enable applications, whether ported or not, to achieve scalable high performance on large machines. They provide a thin layer over the hardware to support both full-featured virtualized environments and native code bases. Kitten is an OS under development at Sandia that implements a lightweight kernel architecture to provide predictable behavior and increased flexibility on large machines, while also providing Linux binary compatibility. Palacios is a VMM that is under development at Northwestern University and the University of New Mexico. Palacios, which can be embedded into Kitten and other OSes, supports existing, unmodified applications and operating systems by using virtualization that leverages hardware technologies. We describe the design and implementation of both Kitten and Palacios. Our benchmarks show that they provide near native, scalable performance. Palacios and Kitten provide an incremental path to using supercomputer resources that is not performance-compromised.

  8. Species Observations (poly) - San Diego County [ds648

    Data.gov (United States)

    California Natural Resource Agency — Created in 2009, the SanBIOS database serves as a single repository of species observations collected by various departments within the County of San Diego's Land...

  9. Mammal Track Counts - San Diego County [ds442

    Data.gov (United States)

    California Natural Resource Agency — The San Diego Tracking Team (SDTT) is a non-profit organization dedicated to promoting the preservation of wildlife habitat in San Diego County through citizen-based...

  10. Species Observations (poly) - San Diego County [ds648

    Data.gov (United States)

    California Department of Resources — Created in 2009, the SanBIOS database serves as a single repository of species observations collected by various departments within the County of San Diego's Land...

  11. Biological and associated water-quality data for lower Olmos Creek and upper San Antonio River, San Antonio, Texas, March-October 1990

    Science.gov (United States)

    Taylor, R. Lynn

    1995-01-01

    Biological and associated water-quality data were collected from lower Olmos Creek and upper San Antonio River in San Antonio, Texas, during March-October 1990, the second year of a multiyear data-collection program. The data will be used to document water-quality conditions prior to implementation of a proposal to reuse treated wastewater to irrigate city properties in Olmos Basin and Brackenridge Parks and to augment flows in the Olmos Creek/San Antonio River system.

  12. Remembering San Diego

    International Nuclear Information System (INIS)

    Chuyanov, V.

    1999-01-01

    After 6 years of existence, the ITER EDA project in San Diego, USA, was terminated by decision of the US Congress. This article describes how nice it was for everybody as long as it lasted, and how sad it is now

  13. Historical context and workers lifestyle in Mexico: San Rafael paper mill (1894-1940

    Directory of Open Access Journals (Sweden)

    José Gustavo Becerril Montero

    2014-12-01

    Full Text Available This article aims to describe the main features of the buildings erected by factories, mainly in and around Mexico City, from the late nineteenth century into the twentieth. Mexican factories of the nineteenth century were characterized by various constructive and technological elements that gave them a unique profile within the productive landscape of the country. To observe this construction process, mainly of spaces for workers, the article addresses the case of one of the most important paper factories in Mexico: San Rafael. The San Rafael Company, established in the State of Mexico in the late nineteenth century, pursued from its founding the goal of supplying the paper market. To achieve this goal, it implemented an ambitious production system, needing to build large halls for the production of paper and, at the same time, spaces to concentrate and secure its workforce. Thus, within a few years it managed to develop a labor and industrial complex that was advanced for its time, providing its workers with everything from living quarters to recreation and leisure spaces.

  14. Update: San Andreas Fault experiment

    Science.gov (United States)

    Christodoulidis, D. C.; Smith, D. E.

    1984-01-01

    Satellite laser ranging techniques are used to monitor the broad motion of the tectonic plates comprising the San Andreas Fault System. The San Andreas Fault Experiment (SAFE) has progressed through upgrades made to laser system hardware and improvements in the modeling capabilities of the spaceborne laser targets. Of special note is the 1976 launch of the Laser Geodynamic Satellite (LAGEOS), NASA's only completely dedicated laser satellite. The results of plate motion projected into this 896 km measured line over the past eleven years are summarized and intercompared.

  15. Vabariigi aastapäev San Franciscos / Heino Valvur ; foto: Heino Valvur

    Index Scriptorium Estoniae

    Valvur, Heino

    2006-01-01

    February in San Francisco was spent celebrating the 88th anniversary of the Republic of Estonia: the San Francisco Seniors' Club traditionally marked the anniversary with a social gathering; a church service and a gathering were held at the E.E.L.K. San Francisco congregation, where young people performed folk songs; and the San Francisco Estonian Society celebrated the anniversary on February 25 with a ceremony and a social gathering

  16. 76 FR 70480 - Otay River Estuary Restoration Project, South San Diego Bay Unit of the San Diego Bay National...

    Science.gov (United States)

    2011-11-14

    ... River Estuary Restoration Project, South San Diego Bay Unit of the San Diego Bay National Wildlife...), intend to prepare an environmental impact statement (EIS) for the proposed Otay River Estuary Restoration... any one of the following methods. Email: [email protected] . Please include ``Otay Estuary NOI'' in the...

  17. Performance Analysis and Scaling Behavior of the Terrestrial Systems Modeling Platform TerrSysMP in Large-Scale Supercomputing Environments

    Science.gov (United States)

    Kollet, S. J.; Goergen, K.; Gasper, F.; Shresta, P.; Sulis, M.; Rihani, J.; Simmer, C.; Vereecken, H.

    2013-12-01

    In studies of the terrestrial hydrologic, energy and biogeochemical cycles, integrated multi-physics simulation platforms take a central role in characterizing non-linear interactions, variances and uncertainties of system states and fluxes in reciprocity with observations. Recently developed integrated simulation platforms attempt to honor the complexity of the terrestrial system across multiple time and space scales, from the deeper subsurface, including groundwater dynamics, into the atmosphere. Technically, this requires the coupling of atmospheric, land surface, and subsurface-surface flow models in supercomputing environments, while ensuring a high degree of efficiency in the utilization of, e.g., standard Linux clusters and massively parallel resources. A systematic performance analysis, including profiling and tracing, is crucial in such an application for understanding the runtime behavior and identifying optimum model settings, and is an efficient way to pinpoint potential parallel deficiencies. On sophisticated leadership-class supercomputers, such as the 28-rack 5.9 petaFLOP IBM Blue Gene/Q 'JUQUEEN' of the Jülich Supercomputing Centre (JSC), this is a challenging task, but all the more important when complex coupled component models are to be analysed. Here we present our experience from coupling, application tuning (e.g., a 5-times speedup through compiler optimizations), parallel scaling and performance monitoring of the parallel Terrestrial Systems Modeling Platform TerrSysMP. The modeling platform consists of the weather prediction system COSMO of the German Weather Service, the Community Land Model (CLM) of NCAR, and the variably saturated surface-subsurface flow code ParFlow. The model system relies on the Multiple Program Multiple Data (MPMD) execution model, in which the external Ocean-Atmosphere-Sea-Ice-Soil coupler (OASIS3) links the component models. TerrSysMP has been instrumented with the performance analysis tool Scalasca and analyzed
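    To illustrate the MPMD execution model referred to above: each component model runs as its own group of MPI processes within one job, and the coupler exchanges fields between the groups. The sketch below emulates that grouping with a communicator split; it is a schematic of the pattern only, since the real system couples separate executables through the OASIS3 interface:

        /* Schematic of an MPMD-style partition: ranks are divided among three
           component models that advance independently and exchange boundary
           fields through a coupler. The split ratios are arbitrary assumptions. */
        #include <mpi.h>
        #include <stdio.h>

        enum { ATMOSPHERE = 0, LAND = 1, SUBSURFACE = 2 };

        int main(int argc, char **argv) {
            MPI_Init(&argc, &argv);
            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            /* assumed split: first third atmosphere, next third land surface,
               remainder subsurface flow */
            int color = (rank < size / 3) ? ATMOSPHERE
                      : (rank < 2 * size / 3) ? LAND : SUBSURFACE;

            MPI_Comm model_comm;   /* component-internal communicator */
            MPI_Comm_split(MPI_COMM_WORLD, color, rank, &model_comm);

            int model_rank;
            MPI_Comm_rank(model_comm, &model_rank);
            printf("world rank %d -> component %d, local rank %d\n",
                   rank, color, model_rank);

            /* each component would advance its own model on model_comm; the
               coupler moves fluxes and states between the components */

            MPI_Comm_free(&model_comm);
            MPI_Finalize();
            return 0;
        }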

  18. Backwater Flooding in San Marcos, TX from the Blanco River

    Science.gov (United States)

    Earl, Richard; Gaenzle, Kyle G.; Hollier, Andi B.

    2016-01-01

    Large sections of San Marcos, TX were flooded in Oct. 1998, May 2015, and Oct. 2015. Much of the flooding in Oct. 1998 and Oct. 2015 was produced by overbank flooding of the San Marcos River and its tributaries by spills from upstream dams. The May 2015 flooding was almost entirely produced by backwater flooding from the Blanco River, whose confluence is approximately 2.2 miles southeast of downtown. We use the stage height of the Blanco River to generate maps of the areas of San Marcos that are lower than the flood peaks and compare those results with data on the observed extent of flooding in San Marcos. Our preliminary results suggest that the flooding occurred at locations more than 20 feet lower than the maximum stage height of the Blanco River at the San Marcos gage (08171350). This suggests that the datum for either gage 08171350 or 08170500 (San Marcos River at San Marcos), or both, is incorrect. There are plans for the U.S. Army Corps of Engineers to construct a Blanco River bypass that would divert Blanco River floodwaters approximately 2 miles farther downstream, but the $60 million price makes its implementation problematic.

  19. 33 CFR 165.1187 - Security Zones; Golden Gate Bridge and the San Francisco-Oakland Bay Bridge, San Francisco Bay...

    Science.gov (United States)

    2010-07-01

    ... Limited Access Areas Eleventh Coast Guard District § 165.1187 Security Zones; Golden Gate Bridge and the... Golden Gate Bridge and the San Francisco-Oakland Bay Bridge, in San Francisco Bay, California. (b... siren, radio, flashing light, or other means, the operator of a vessel shall proceed as directed. [COTP...

  20. The BlueGene/L Supercomputer and Quantum ChromoDynamics

    International Nuclear Information System (INIS)

    Vranas, P; Soltz, R

    2006-01-01

    In summary our update contains: (1) Perfect speedup sustaining 19.3% of peak for the Wilson D-slash Dirac operator. (2) Measurements of the full Conjugate Gradient (CG) inverter that inverts the Dirac operator. The CG inverter contains two global sums over the entire machine. Nevertheless, our measurements retain perfect speedup scaling, demonstrating the robustness of our methods. (3) We ran on the largest BG/L system, the LLNL 64-rack BG/L supercomputer, and obtained a sustained speed of 59.1 TFlops. Furthermore, the speedup scaling of the Dirac operator and of the CG inverter are perfect all the way up to the full size of the machine, 131,072 cores (please see Figure II). The local lattice is rather small (4 x 4 x 4 x 16) while the total lattice has been a lattice QCD vision for thermodynamic studies (a total of 128 x 128 x 256 x 32 lattice sites). This speed is about five times larger than the speed we quoted in our submission. As we have pointed out in our paper, QCD is notoriously sensitive to network and memory latencies, has a relatively high communication-to-computation ratio which cannot be overlapped in BGL in virtual node mode, and as an application is in a class of its own. The above results are thrilling to us and a 30-year-long dream for lattice QCD
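    The two global sums mentioned above are the dot products inside each conjugate-gradient iteration; on 131,072 cores each one becomes a machine-wide reduction, which is why retaining perfect speedup through them is notable. A schematic CG skeleton (our sketch, not the authors' code) makes the two reductions explicit:

        /* Skeleton of a distributed conjugate-gradient solve showing the two
           global sums per iteration. apply_dirac() is a stand-in for the local
           Wilson D-slash normal operator plus halo exchange. */
        #include <mpi.h>
        #include <math.h>

        /* placeholder operator (diagonal) so the skeleton is self-contained */
        void apply_dirac(const double *in, double *out, int n) {
            for (int i = 0; i < n; i++) out[i] = 4.0 * in[i];
        }

        double global_dot(const double *x, const double *y, int n) {
            double local = 0.0, global = 0.0;
            for (int i = 0; i < n; i++) local += x[i] * y[i];
            MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
            return global;                          /* machine-wide reduction */
        }

        /* solve A x = b; r, p, Ap are work vectors of local length n */
        void cg(double *x, const double *b, double *r, double *p, double *Ap,
                int n, int maxit, double tol) {
            for (int i = 0; i < n; i++) { x[i] = 0.0; r[i] = b[i]; p[i] = b[i]; }
            double rho = global_dot(r, r, n);              /* global sum #1 */
            for (int it = 0; it < maxit && sqrt(rho) > tol; it++) {
                apply_dirac(p, Ap, n);
                double alpha = rho / global_dot(p, Ap, n); /* global sum #2 */
                for (int i = 0; i < n; i++) { x[i] += alpha * p[i]; r[i] -= alpha * Ap[i]; }
                double rho_new = global_dot(r, r, n);      /* sum #1 of the next pass */
                for (int i = 0; i < n; i++) p[i] = r[i] + (rho_new / rho) * p[i];
                rho = rho_new;
            }
        }

        int main(int argc, char **argv) {
            MPI_Init(&argc, &argv);
            enum { N = 1024 };
            static double x[N], b[N], r[N], p[N], Ap[N];
            for (int i = 0; i < N; i++) b[i] = 1.0;
            cg(x, b, r, p, Ap, N, 100, 1e-10);
            MPI_Finalize();
            return 0;
        }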

  1. Holocene slip rates along the San Andreas Fault System in the San Gorgonio Pass and implications for large earthquakes in southern California

    Science.gov (United States)

    Heermance, Richard V.; Yule, Doug

    2017-06-01

    The San Gorgonio Pass (SGP) in southern California contains a 40 km long region of structural complexity where the San Andreas Fault (SAF) bifurcates into a series of oblique-slip faults with unknown slip history. We combine new 10Be exposure ages (Qt4: 8600 (+2100, -2200) and Qt3: 5700 (+1400, -1900) years B.P.) and a radiocarbon age (1260 ± 60 years B.P.) from late Holocene terraces with scarp displacement of these surfaces to document a Holocene slip rate of 5.7 (+2.7, -1.5) mm/yr combined across two faults. Our preferred slip rate is 37-49% of the average slip rates along the SAF outside the SGP (i.e., the Coachella Valley and San Bernardino sections) and implies that strain is transferred off the SAF in this area. Earthquakes here most likely occur in very large, throughgoing SAF events at a lower recurrence than elsewhere on the SAF, so that only approximately one third of SAF ruptures penetrate or originate in the pass. Plain Language Summary: How large are earthquakes on the southern San Andreas Fault? The answer to this question depends on whether or not the earthquake is contained only along individual fault sections, such as the Coachella Valley section north of Palm Springs, or the rupture crosses multiple sections including the area through the San Gorgonio Pass. We have determined the age and offset of faulted stream deposits within the San Gorgonio Pass to document slip rates of these faults over the last 10,000 years. Our results indicate a long-term slip rate of ~6 mm/yr, which is almost 1/2 of the rates east and west of this area. These new rates, combined with faulted geomorphic surfaces, imply that large magnitude earthquakes must occasionally rupture a ~300 km length of the San Andreas Fault from the Salton Sea to the Mojave Desert. Although many (~65%) earthquakes along the southern San Andreas Fault likely do not rupture through the pass, our new results suggest that large >Mw 7.5 earthquakes are possible on the southern San Andreas Fault and likely
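    As a consistency check on the quoted numbers (our arithmetic, not the paper's), a slip rate is simply scarp displacement divided by surface age, so the preferred rate and the Qt4 age together imply a cumulative displacement of roughly

        \[ D \;\approx\; v\,t \;=\; 5.7\ \mathrm{mm/yr} \times 8600\ \mathrm{yr} \;\approx\; 49\ \mathrm{m} \]

    summed across the two faults, with the stated age uncertainties propagating into the (+2.7, -1.5) mm/yr bounds on the rate.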

  2. Modeling radiative transport in ICF plasmas on an IBM SP2 supercomputer

    International Nuclear Information System (INIS)

    Johansen, J.A.; MacFarlane, J.J.; Moses, G.A.

    1995-01-01

    At the University of Wisconsin-Madison the authors have integrated a collisional-radiative-equilibrium model into their CONRAD radiation-hydrodynamics code. This integrated package allows them to accurately simulate the transport processes involved in ICF plasmas, including the important effects of self-absorption of line radiation. However, as they increase the amount of atomic structure utilized in their transport models, the computational demands increase nonlinearly. In an attempt to meet this increased computational demand, they have recently embarked on a mission to parallelize the CONRAD program. The parallel CONRAD development is being performed on an IBM SP2 supercomputer. The parallelism is based on a message-passing paradigm and is being implemented using PVM. At the present time they have determined that approximately 70% of the sequential program can be executed in parallel. Accordingly, they expect that the parallel version will yield a speedup on the order of three times that of the sequential version. This translates into only 10 hours of execution time for the parallel version, whereas the sequential version required 30 hours
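    The expected factor of three follows directly from Amdahl's law applied to the authors' figure: with a parallel fraction p = 0.7 on N processors, the speedup is

        \[ S(N) \;=\; \frac{1}{(1-p) + p/N} \;\longrightarrow\; \frac{1}{1-p} \;=\; \frac{1}{0.3} \;\approx\; 3.3 \quad (N \to \infty), \]

    consistent with the projected reduction from 30 hours of sequential execution to about 10 hours.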

  3. City of San Francisco, California street tree resource analysis

    Science.gov (United States)

    E.G. McPherson; J.R. Simpson; P.J. Peper; Q. Xiao

    2004-01-01

    Street trees in San Francisco comprise two distinct populations: those managed by the city's Department of Public Works (DPW) and those managed by private property owners, with or without the help of San Francisco's urban forestry nonprofit, Friends of the Urban Forest (FUF). These two entities believe that the public's investment in stewardship of San Francisco...

  4. Trouble Brewing in San Francisco. Policy Brief

    Science.gov (United States)

    Buck, Stuart

    2010-01-01

    The city of San Francisco will face enormous budgetary pressures from the growing deficits in public pensions, both at a state and local level. In this policy brief, the author estimates that San Francisco faces an aggregate $22.4 billion liability for pensions and retiree health benefits that are underfunded--including $14.1 billion for the city…

  5. Summer Research Program - 1997 Summer Faculty Research Program Volume 6 Arnold Engineering Development Center United States Air Force Academy Air Logistics Centers

    Science.gov (United States)

    1997-12-01

    Only fragments of this report's table of contents are recoverable: 'Fracture Analysis of the F-5, 15%-Spar Bolt' (Dr. Devendra Kumar, SAALC/LD; CUNY-City College, New York, NY) and 'A Simple, Multiversion Concurrency Control...' (University of Dayton, Dayton, OH), together with a citation to AFGROW, Air Force Crack Propagation Analysis Program, Version 3.82 (1997), and the sponsors: the Air Force Office of Scientific Research, Bolling Air Force Base, DC, and the San Antonio Air Logistics Center, August 1997.

  6. 77 FR 66499 - Environmental Impact Statement: San Bernardino and Los Angeles Counties, CA

    Science.gov (United States)

    2012-11-05

    ... San Bernardino, 285 East Hospitality Lane, San Bernardino, California 92408 (2) Sheraton Ontario..., November 13, 2012 from 5-7 p.m. at the Hilton San Bernardino, 285 East Hospitality Lane, San Bernardino...

  7. 33 CFR 110.74c - Bahia de San Juan, PR.

    Science.gov (United States)

    2010-07-01

    ... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false Bahia de San Juan, PR. 110.74c Section 110.74c Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY ANCHORAGES ANCHORAGE REGULATIONS Special Anchorage Areas § 110.74c Bahia de San Juan, PR. The waters of San Antonio...

  8. San Francisco Accelerator Conference

    International Nuclear Information System (INIS)

    Southworth, Brian

    1991-01-01

    'Where are today's challenges in accelerator physics?' was the theme of the open session at the San Francisco meeting, the largest ever gathering of accelerator physicists and engineers

  9. Development of a Free-Electron Laser Center and Research in Medicine, Biology and Materials Science,

    Science.gov (United States)

    1992-05-14

    Only fragments of the scanned abstract are legible. Recoverable phrases indicate that polarons cause localized distortions in an ionic lattice and that the reduced electron-lattice coupling strength leads to molecule emission; clinical applications mentioned include Buerger's disease, palmar hyperhidrosis and frostbite; participating institutions include the Health Science Center at San Antonio and the University of Miami School of Medicine, Miami.

  10. Trouble Brewing in San Diego. Policy Brief

    Science.gov (United States)

    Buck, Stuart

    2010-01-01

    The city of San Diego will face enormous budgetary pressures from the growing deficits in public pensions, both at a state and local level. In this policy brief, the author estimates that San Diego faces a total of $45.4 billion, including $7.95 billion for the county pension system, $5.4 billion for the city pension system, and an estimated $30.7…

  11. Corps sans organes et anamnèse

    DEFF Research Database (Denmark)

    Wilson, Alexander

    2011-01-01

    I trace certain links between Deleuze and Guattari's body without organs and the principles of the general organology described by Bernard Stiegler.

  12. Coastal Cactus Wren, San Diego Co. - 2009 [ds702

    Data.gov (United States)

    California Natural Resource Agency — The San Diego Multiple Species Conservation program (MSCP) was developed for the conservation of plants and animals in the southeast portion of San Diego County....

  13. Coastal Cactus Wren, San Diego Co. - 2011 [ds708

    Data.gov (United States)

    California Natural Resource Agency — The San Diego Multiple Species Conservation program (MSCP) was developed for the conservation of plants and animals in the southeast portion of San Diego County....

  14. Cacao use and the San Lorenzo Olmec

    Science.gov (United States)

    Powis, Terry G.; Cyphers, Ann; Gaikwad, Nilesh W.; Grivetti, Louis; Cheong, Kong

    2011-01-01

    Mesoamerican peoples had a long history of cacao use—spanning more than 34 centuries—as confirmed by previous identification of cacao residues on archaeological pottery from Paso de la Amada on the Pacific Coast and the Olmec site of El Manatí on the Gulf Coast. Until now, comparable evidence from San Lorenzo, the premier Olmec capital, was lacking. The present study of theobromine residues confirms the continuous presence and use of cacao products at San Lorenzo between 1800 and 1000 BCE, and documents assorted vessel forms used in its preparation and consumption. One elite context reveals cacao use as part of a mortuary ritual for sacrificial victims, an event that occurred during the height of San Lorenzo's power. PMID:21555564

  15. Mammal Track Counts - San Diego County, 2010 [ds709

    Data.gov (United States)

    California Natural Resource Agency — The San Diego Tracking Team (SDTT) is a non-profit organization dedicated to promoting the preservation of wildlife habitat in San Diego County through citizen-based...

  16. Solving sparse linear least squares problems on some supercomputers by using large dense blocks

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Ostromsky, T; Sameh, A

    1997-01-01

    Efficient subroutines for dense matrix computations have recently been developed and are available on many high-speed computers. On some computers the speed of many dense matrix operations is near the peak performance. For sparse matrices, storage and operations can be saved by operating on and storing only the nonzero elements. However, the price is a great degradation of the speed of computations on supercomputers (due to the use of indirect addresses, the need to insert new nonzeros into the sparse storage scheme, the lack of data locality, etc.). On many high-speed computers a dense matrix technique is therefore preferable to a sparse matrix technique when the matrices are not large, because the high computational speed fully compensates for the disadvantages of using more arithmetic operations and more storage. For very large matrices the computations must be organized as a sequence of tasks in each...

  17. SANS observations on weakly flocculated dispersions

    DEFF Research Database (Denmark)

    Mischenko, N.; Ourieva, G.; Mortensen, K.

    1997-01-01

    Structural changes occurring in colloidal dispersions of poly-(methyl methacrylate) (PMMA) particles, sterically stabilized with poly-(12-hydroxystearic acid) (PHSA), while varying the solvent quality, temperature and shear rate, are investigated by small-angle neutron scattering (SANS). For a moderately concentrated dispersion in a marginal solvent, the transition on cooling from effective stability to weak attraction is monitored. The degree of attraction is determined in the framework of the sticky spheres model (SSM); SANS and rheological results are correlated.

  18. Benchmarking MILC code with OpenMP and MPI

    International Nuclear Information System (INIS)

    Gottlieb, Steven; Tamhankar, Sonali

    2001-01-01

    A trend in high-performance computers that is becoming increasingly popular is the use of symmetric multi-processing (SMP) rather than the older paradigm of massively parallel processing (MPP). MPI codes that ran and scaled well on MPP machines can often be run on an SMP machine using the vendor's version of MPI. However, this approach may not make optimal use of the (expensive) SMP hardware. More significantly, there are machines like Blue Horizon, an IBM SP with 8-way SMP nodes at the San Diego Supercomputer Center, that can only support 4 MPI processes per node (with the current switch). On such a machine it is imperative to be able to use OpenMP parallelism on the node, and MPI between nodes. We describe the challenges of converting the MILC MPI code to use a second level of OpenMP parallelism, and give benchmarks on IBM and Sun computers.
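    As a rough illustration of the two-level scheme this record describes (MPI between nodes, OpenMP-style threading within a node), here is a minimal sketch using mpi4py; the intra-node threaded level is delegated to an OpenMP-built BLAS controlled through OMP_NUM_THREADS. The array size, thread count, and the matrix multiply standing in for the lattice computation are illustrative assumptions, not details of the MILC code.

```python
# Hybrid sketch: MPI ranks across nodes, threads (e.g. an OpenMP-backed
# BLAS) within each rank. Run with: mpirun -np 4 python hybrid.py
import os
os.environ.setdefault("OMP_NUM_THREADS", "8")  # threads per MPI rank

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank owns one slab of a (notional) global lattice.
local = np.random.default_rng(rank).standard_normal((1000, 1000))

# Node-local compute: the threaded matmul stands in for an OpenMP region.
result = local @ local.T

# Inter-node step: combine a scalar observable across ranks via MPI.
global_trace = comm.allreduce(result.trace(), op=MPI.SUM)

if rank == 0:
    print(f"{comm.Get_size()} ranks x {os.environ['OMP_NUM_THREADS']} "
          f"threads, global trace = {global_trace:.3e}")
```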

  19. Seamless Synthetic Aperture Radar Archive for Interferometry Analysis

    Science.gov (United States)

    Baker, S.; Baru, C.; Bryson, G.; Buechler, B.; Crosby, C.; Fielding, E.; Meertens, C.; Nicoll, J.; Youn, C.

    2014-11-01

    The NASA Advancing Collaborative Connections for Earth System Science (ACCESS) seamless synthetic aperture radar (SAR) archive (SSARA) project is a collaboration between UNAVCO, the Alaska Satellite Facility (ASF), the Jet Propulsion Laboratory (JPL), and OpenTopography at the San Diego Supercomputer Center (SDSC) to design and implement a seamless distributed access system for SAR data and derived interferometric SAR (InSAR) data products. A unified application programming interface (API) has been created to search the SAR archives at ASF and UNAVCO, 30 and 90-m SRTM DEM data available through OpenTopography, and tropospheric data from the NASA OSCAR project at JPL. The federated query service provides users a single access point to search for SAR granules, InSAR pairs, and corresponding DEM and tropospheric data products from the four archives, as well as the ability to search and download pre-processed InSAR products from ASF and UNAVCO.
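    To make the single-access-point idea concrete, the sketch below shows what a client-side federated query might look like. The endpoint URL, parameter names, and response fields are hypothetical placeholders standing in for the real SSARA API, whose specification is not reproduced in this record.

```python
# Hedged sketch of a federated SAR-granule search from the client side.
# The URL and all field names below are hypothetical, for illustration.
import requests

SSARA_API = "https://example.org/ssara/api/search"  # placeholder endpoint

params = {
    "platform": "ENVISAT",               # assumed parameter names
    "bbox": "-117.5,32.5,-116.5,33.5",   # lon/lat box of interest
    "start": "2006-01-01",
    "end": "2010-12-31",
}

resp = requests.get(SSARA_API, params=params, timeout=60)
resp.raise_for_status()

for granule in resp.json().get("results", []):
    # A federated record would identify the holding archive (ASF or
    # UNAVCO) plus a download URL, keeping the client archive-agnostic.
    print(granule.get("archive"), granule.get("downloadUrl"))
```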

  20. HEP Computing Tools, Grid and Supercomputers for Genome Sequencing Studies

    Science.gov (United States)

    De, K.; Klimentov, A.; Maeno, T.; Mashinistov, R.; Novikov, A.; Poyda, A.; Tertychnyy, I.; Wenaus, T.

    2017-10-01

    PanDA, the Production and Distributed Analysis workload management system, was developed to address the data processing and analysis challenges of the ATLAS experiment at the LHC. Recently PanDA has been extended to run HEP scientific applications on Leadership Class Facilities and supercomputers. The success of the projects using PanDA beyond HEP and the Grid has drawn attention from other compute-intensive sciences such as bioinformatics. Recent advances in Next Generation Genome Sequencing (NGS) technology have led to increasing streams of sequencing data that need to be processed, analysed and made available to bioinformaticians worldwide. Analysis of genome sequencing data using the popular software pipeline PALEOMIX can take a month even when run on a powerful computing resource. In this paper we describe the adaptation of the PALEOMIX pipeline to run on a distributed computing environment powered by PanDA. To run the pipeline, we split the input files into chunks, which are processed separately on different nodes as independent PALEOMIX inputs, and finally merge the output files; this is very similar to how ATLAS processes and simulates data. We dramatically decreased the total walltime thanks to automated job (re)submission and brokering within PanDA. Using software tools developed initially for HEP and the Grid reduced payload execution time for mammoth DNA samples from weeks to days.
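    The split/process/merge pattern the authors describe can be sketched locally as follows; the chunk size and file layout are assumptions, and the real system submits each chunk as a separate PanDA job on a different node rather than calling a local function.

```python
# Scatter/gather sketch: cut one large input into chunks, process each
# independently (elsewhere), then merge the outputs in order.
from pathlib import Path

CHUNK_LINES = 1_000_000  # records per chunk; an assumed value

def split_input(path: Path, outdir: Path) -> list[Path]:
    """Write consecutive CHUNK_LINES-sized pieces of `path` into outdir."""
    outdir.mkdir(parents=True, exist_ok=True)
    chunks, buf = [], []
    with path.open() as fh:
        for line in fh:
            buf.append(line)
            if len(buf) == CHUNK_LINES:
                chunks.append(outdir / f"chunk_{len(chunks):04d}.txt")
                chunks[-1].write_text("".join(buf))
                buf = []
    if buf:
        chunks.append(outdir / f"chunk_{len(chunks):04d}.txt")
        chunks[-1].write_text("".join(buf))
    return chunks

def merge_outputs(parts: list[Path], merged: Path) -> None:
    """Concatenate per-chunk outputs, preserving the original order."""
    with merged.open("w") as out:
        for part in parts:
            out.write(part.read_text())
```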

  1. Converting positive and negative symptom scores between PANSS and SAPS/SANS.

    Science.gov (United States)

    van Erp, Theo G M; Preda, Adrian; Nguyen, Dana; Faziola, Lawrence; Turner, Jessica; Bustillo, Juan; Belger, Aysenil; Lim, Kelvin O; McEwen, Sarah; Voyvodic, James; Mathalon, Daniel H; Ford, Judith; Potkin, Steven G; Fbirn

    2014-01-01

    The Scale for the Assessment of Positive Symptoms (SAPS), the Scale for the Assessment of Negative Symptoms (SANS), and the Positive and Negative Syndrome Scale for Schizophrenia (PANSS) are the most widely used schizophrenia symptom rating scales, but despite their co-existence for 25 years no easily usable between-scale conversion mechanism exists. The aim of this study was to provide equations for between-scale symptom rating conversions. Two-hundred-and-five schizophrenia patients [mean age±SD=39.5±11.6, 156 males] were assessed with the SANS, SAPS, and PANSS. Pearson's correlations between symptom scores from each of the scales were computed. Linear regression analyses, on data from 176 randomly selected patients, were performed to derive equations for converting ratings between the scales. Intraclass correlations, on data from the remaining 29 patients, not part of the regression analyses, were performed to determine rating conversion accuracy. Between-scale positive and negative symptom ratings were highly correlated. Intraclass correlations between the original positive and negative symptom ratings and those obtained via conversion of alternative ratings using the conversion equations were moderate to high (ICCs=0.65 to 0.91). Regression-based equations may be useful for conversion between schizophrenia symptom severity as measured by the SANS/SAPS and PANSS, though additional validation is warranted. This study's conversion equations, implemented at http://converteasy.org, may aid in the comparison of medication efficacy studies, in meta- and mega-analyses examining symptoms as moderator variables, and in retrospective combination of symptom data in multi-center data sharing projects that need to pool symptom rating data when such data are obtained using different scales. Copyright © 2013 Elsevier B.V. All rights reserved.
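    A minimal sketch of the regression-based conversion workflow (fit a linear map on a training split, check agreement on held-out patients) is given below. The synthetic ratings and fitted coefficients are placeholders; the study's actual equations are implemented at converteasy.org and are not reproduced here.

```python
# Fit panss ~ a * sans + b on 176 patients, validate on the other 29,
# mirroring the study's split. All numbers here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
sans = rng.uniform(0, 100, 205)                  # placeholder SANS totals
panss = 7 + 0.3 * sans + rng.normal(0, 3, 205)   # placeholder PANSS negative

train, test = slice(0, 176), slice(176, 205)

a, b = np.polyfit(sans[train], panss[train], 1)  # ordinary least squares
pred = a * sans[test] + b

# Simple agreement check (Pearson r; the paper uses the stricter ICC).
r = np.corrcoef(pred, panss[test])[0, 1]
print(f"PANSS ~= {a:.2f} * SANS + {b:.2f}; held-out r = {r:.2f}")
```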

  2. SANS-II at SINQ: Installation of the former Risø-SANS facility

    DEFF Research Database (Denmark)

    Strunz, P.; Mortensen, K.; Janssen, S.

    2004-01-01

    The SANS-II facility at SINQ (Paul Scherrer Institute), the reinstalled former Risø small-angle neutron scattering instrument, is presented and its operational characteristics are listed. Approaches for the precise determination of wavelength, detector dead time and attenuation factors are described as well.

  3. Marketing San Juan Basin gas

    International Nuclear Information System (INIS)

    Posner, D.M.

    1988-01-01

    Marketing natural gas produced in the San Juan Basin of New Mexico and Colorado principally involves four gas pipeline companies with significant facilities in the basin. The system capacity, transportation rates, regulatory status, and market access of each of these companies is evaluated. Because of excess gas supplies available to these pipeline companies, producers can expect improved take levels and prices by selling gas directly to end users and utilities as opposed to selling gas to the pipelines for system supply. The complexities of transporting gas today suggest that the services of an independent gas marketing company may be beneficial to smaller producers with gas supplies in the San Juan Basin

  4. Transformation and reconstitution of Khoe-San identities : AAS le Fleur I, Griqua identities and post-apartheid Khoe-San revivalism (1894-2004)

    NARCIS (Netherlands)

    Besten, M.P.

    2006-01-01

    Focussing on AAS le Fleur I (1867-1941), the Griqua, and post-apartheid Khoe-San revivalism, the dissertation examines changes in the articulation of Khoe-San identities in South Africa. It shows the significance of shifting political, cultural and ideological power relations on the articulation of

  5. Chronopolis Digital Preservation Network

    Directory of Open Access Journals (Sweden)

    David Minor

    2010-07-01

    Full Text Available The Chronopolis Digital Preservation Initiative, one of the Library of Congress' latest efforts to collect and preserve at-risk digital information, has completed its first year of service as a multi-member partnership to meet the archival needs of a wide range of domains. Chronopolis is a digital preservation data grid framework developed by the San Diego Supercomputer Center (SDSC) at UC San Diego, the UC San Diego Libraries (UCSDL), and their partners at the National Center for Atmospheric Research (NCAR) in Colorado and the University of Maryland's Institute for Advanced Computer Studies (UMIACS). Chronopolis addresses a critical problem by providing a comprehensive model for the cyberinfrastructure of collection management, in which preserved intellectual capital is easily accessible, and research results, education material, and new knowledge can be incorporated smoothly over the long term. Integrating digital library, data grid, and persistent archive technologies, Chronopolis has created trusted environments that span academic institutions and research projects, with the goal of long-term digital preservation. A key goal of the Chronopolis project is to provide cross-domain collection sharing for long-term preservation. Using existing high-speed educational and research networks and mass-scale storage infrastructure investments, the partnership is leveraging the data storage capabilities at SDSC, NCAR, and UMIACS to provide a preservation data grid that emphasizes heterogeneous and highly redundant data storage systems. In this paper we will explore the major themes within Chronopolis, including: (a) the philosophy and theory behind a nationally federated data grid for preservation; (b) the core tools and technologies used in Chronopolis; (c) the metadata schema that is being developed within Chronopolis for all of the data elements; (d) lessons learned from the first year of the project; (e) next steps in digital preservation using Chronopolis: how we

  6. Geological literature on the San Joaquin Valley of California

    Science.gov (United States)

    Maher, J.C.; Trollman, W.M.; Denman, J.M.

    1973-01-01

    The following list of references includes most of the geological literature on the San Joaquin Valley and vicinity in central California (see figure 1) published prior to January 1, 1973. The San Joaquin Valley comprises all or parts of 11 counties -- Alameda, Calaveras, Contra Costa, Fresno, Kern, Kings, Madera, Merced, San Joaquin, Stanislaus, and Tulare (figure 2). As a matter of convenient geographical classification the boundaries of the report area have been drawn along county lines, and to include San Benito and Santa Clara Counties on the west and Mariposa and Tuolumne Counties on the east. Therefore, this list of geological literature includes some publications on the Diablo and Temblor Ranges on the west, the Tehachapi Mountains and Mojave Desert on the south, and the Sierra Nevada Foothills and Mountains on the east.

  7. A Qualitative Study of Information Technology Managers' Experiences and Perceptions Regarding Outsourced Data Centers

    Science.gov (United States)

    Reid, Eric Justin

    2015-01-01

    This qualitative study explored the perceptions and experiences of IT Managers in publicly traded companies within the San Antonio, Texas area about outsourced data centers. Narrative data was collected using open-ended questions and face-to-face interviews within semi-structured environments. The research questions guided the study: (1)…

  8. 75 FR 61611 - Modification of Class E Airspace; San Clemente, CA

    Science.gov (United States)

    2010-10-06

    ... INFORMATION CONTACT: Eldon Taylor, Federal Aviation Administration, Operations Support Group, Western Service... extension to a Class D surface area, at San Clemente Island NALF (Fredrick Sherman Field), San Clemente, CA... within the scope of that authority as it amends controlled airspace at San Clemente Island NALF (Fredrick...

  9. 77 FR 34984 - Notice of Intent To Repatriate a Cultural Item: San Diego Museum of Man, San Diego, CA

    Science.gov (United States)

    2012-06-12

    ...The San Diego Museum of Man, in consultation with the appropriate Indian tribes, has determined that a cultural item meets the definition of unassociated funerary object and repatriation to the Indian tribes stated below may occur if no additional claimants come forward. Representatives of any Indian tribe that believes itself to be culturally affiliated with the cultural item may contact the San Diego Museum of Man.

  10. Environmental Assessment for a Global Reach Deployment Center and Ancillary Facilities

    Science.gov (United States)

    2005-07-07

    alkali milkvetch (Astragalus tener var. tener), Contra Costa goldfields (Lasthenia conjugens), and the San Joaquin spearscale (Atriplex joaquiniana)... Contra Costa goldfields (Lasthenia conjugens), a federally listed plant species. Building the Center at this site would also involve building within the land... AFB. Contra Costa goldfields is listed as federally endangered. Vernal pools are found throughout the Base. These sites vary in size from 1 acre

  11. 40 CFR 81.176 - San Luis Intrastate Air Quality Control Region.

    Science.gov (United States)

    2010-07-01

    [CFR header residue; the record refers to 40 CFR § 81.176, Protection of Environment, Environmental Protection Agency, under Air Quality Control Regions, which designates the San Luis Intrastate Air Quality Control Region.] The San Luis Intrastate...

  12. Utilizing Lean Six Sigma Methodology to Improve the Authored Works Command Approval Process at Naval Medical Center San Diego.

    Science.gov (United States)

    Valdez, Michelle M; Liwanag, Maureen; Mount, Charles; Rodriguez, Rechell; Avalos-Reyes, Elisea; Smith, Andrew; Collette, David; Starsiak, Michael; Green, Richard

    2018-03-14

    Inefficiencies in the command approval process for publications and/or presentations negatively impact DoD Graduate Medical Education (GME) residency programs' ability to meet ACGME scholarly activity requirements. A preliminary review of the authored works approval process at Naval Medical Center San Diego (NMCSD) disclosed significant inefficiency, variation in process, and a low level of customer satisfaction. In order to facilitate and encourage scholarly activity at NMCSD, and to meet ACGME requirements, the Executive Steering Council (ESC) chartered an interprofessional team to lead a Lean Six Sigma (LSS) Rapid Improvement Event (RIE) project. Two major outcome metrics were identified: (1) the number of authored works submissions containing all required signatures and (2) customer satisfaction with the authored works process. Primary metric baseline data were gathered utilizing a Clinical Investigations database tracking publications and presentations. Secondary metric baseline data were collected via a customer satisfaction survey of GME faculty and residents. The project team analyzed pre-survey data and utilized LSS tools and methodology including a "gemba" (environment) walk, cause and effect diagram, critical-to-quality tree, voice of the customer, "muda" (waste) chart, and a pre- and post-event value stream map. The team selected an electronic submission system as the intervention most likely to positively impact the RIE project outcome measures. The number of authored works compliant with all required signatures improved from 52% to 100%. Customer satisfaction rated as "completely or mostly satisfied" improved from 24% to 97%. Statistical significance was achieved for both outcomes, signature compliance and customer satisfaction. The team's use of LSS methodology and tools to improve signature compliance and increase customer satisfaction with the authored works approval process led to 100% signature compliance, a comprehensive longitudinal repository of all

  13. L’Europe et les sans-papiers

    OpenAIRE

    Simonnot, Nathalie; Intrand, Caroline

    2013-01-01

    In Europe, undocumented migrants (sans-papiers) live in particularly unfavourable socio-economic conditions. The health systems of European countries perform poorly at following up undocumented persons, who are moreover often refused care. Worse, in some countries access to care is progressively being instrumentalized in the service of immigration control. These policies swell the ranks of the populations who go without care and must turn to Médecins du ...

  14. Quick-Reaction Report on the Audit of Defense Base Realignment and Closure Budget Data for Naval Training Center Great Lakes, Illinois

    National Research Council Canada - National Science Library

    Granetto, Paul

    1994-01-01

    .... The Hull Technician School will share building 520 with the Advanced Hull Technician School, which is being realigned from the Naval Training Center San Diego, California, under project P-608T...

  15. Rocks and geology in the San Francisco Bay region

    Science.gov (United States)

    Stoffer, Philip W.

    2002-01-01

    The landscape of the San Francisco Bay region is host to a greater variety of rocks than most other regions in the United States. This introductory guide provides illustrated descriptions of 46 common and important varieties of igneous, sedimentary, and metamorphic rock found in the region. Rock types are described in the context of their identification qualities, how they form, and where they occur in the region. The guide also provides a discussion of regional geology, plate tectonics, the rock cycle, and the significance of the selected rock types in relation to both earth history and the impact of mineral resources on development in the region. Maps and text also provide information on where rocks, fossils, and geologic features can be visited on public lands or in association with public displays in regional museums, park visitor centers, and other public facilities.

  16. Voice and Valency in San Luis Potosi Huasteco

    Science.gov (United States)

    Munoz Ledo Yanez, Veronica

    2014-01-01

    This thesis presents an analysis of the system of transitivity, voice and valency alternations in Huasteco of San Luis Potosi (Mayan) within a functional-typological framework. The study is based on spoken discourse and elicited data collected in the municipalities of Aquismon and Tancanhuitz de Santos in the state of San Luis Potosi, Mexico. The…

  17. Industrial assessment center program. Final Report

    International Nuclear Information System (INIS)

    Ahmad R. Ganji, Ph.D., P.E., IAC Director

    2007-01-01

    The Industrial Assessment Center (IAC) at San Francisco State University (SFSU) has served the cause of energy efficiency as a whole, and in particular small and medium-sized manufacturing facilities in northern and central California within approximately 150 miles (radial) of San Francisco, since 1992. In the current reporting period (September 1, 2002 through November 30, 2006) we have had major accomplishments, which include but are not limited to: performing a total of 94 energy efficiency and waste minimization audit days at 87 industrial plants; recommending and analyzing 809 energy efficiency measures; training 22 energy engineers, most of whom have joined energy services companies in California; disseminating energy efficiency information among local manufacturers; acting as an information source on energy efficiency for local manufacturers and utilities; cooperating with local utilities and the California Energy Commission in their energy efficiency projects; performing various assignments for DOE, such as disseminating information on the SEN initiative, conducting workshops on energy efficiency issues, and contacting large energy-user plants; establishing a course on 'Energy: Resources, Alternatives and Conservation' as a general education course at SFSU; and bringing energy issues to the attention of students in classrooms.

  18. EDF's experience with supercomputing and challenges ahead - towards multi-physics and multi-scale approaches

    Energy Technology Data Exchange (ETDEWEB)

    Delbecq, J.M.; Banner, D. [Electricite de France (EDF)- R and D Division, 92 - Clamart (France)

    2003-07-01

    Nuclear power plants are a major asset of the EDF company. To remain so, in particular in a context of deregulation, competitiveness, safety and public acceptance are three conditions. These stakes apply both to existing plants and to future reactors. The purpose of the presentation is to explain how supercomputing can help EDF to satisfy these requirements. Three examples are described in detail: ensuring optimal use of nuclear fuel under wholly safe conditions, understanding and simulating the material deterioration mechanisms and moving forward with numerical simulation for the performance of EDF's activities. In conclusion, a broader vision of EDF long term R and D in the field of numerical simulation is given and especially of five challenges taken up by EDF together with its industrial and scientific partners. (author)

  19. Performance Evaluation of an Intel Haswell- and Ivy Bridge-Based Supercomputer Using Scientific and Engineering Applications

    Science.gov (United States)

    Saini, Subhash; Hood, Robert T.; Chang, Johnny; Baron, John

    2016-01-01

    We present a performance evaluation conducted on a production supercomputer of the Intel Xeon Processor E5-2680v3, a twelve-core implementation of the fourth-generation Haswell architecture, and compare it with the Intel Xeon Processor E5-2680v2, an Ivy Bridge implementation of the third-generation Sandy Bridge architecture. Several new architectural features have been incorporated in Haswell, including improvements in all levels of the memory hierarchy as well as improvements to vector instructions and power management. We critically evaluate these new features of Haswell and compare with Ivy Bridge using several low-level benchmarks, including a subset of HPCC and HPCG, and four full-scale scientific and engineering applications. We also present a model to predict the performance of HPCG and Cart3D to within 5% accuracy, and Overflow to within 10%.
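    Low-level memory benchmarks of the kind cited here can be approximated with a STREAM-style triad; the sketch below measures effective memory bandwidth with NumPy. The array size and repetition count are arbitrary choices, and this illustrates the method only; it is not the HPCC benchmark itself.

```python
# STREAM-triad-style bandwidth probe: a = b + s*c over large arrays.
import time
import numpy as np

N = 50_000_000                       # ~400 MB per float64 array
b = np.random.random(N)
c = np.random.random(N)
s = 3.0

reps = 10
t0 = time.perf_counter()
for _ in range(reps):
    a = b + s * c                    # reads b and c, writes a
elapsed = (time.perf_counter() - t0) / reps

bytes_moved = 3 * N * 8              # two reads plus one write per element
print(f"triad bandwidth ~ {bytes_moved / elapsed / 1e9:.1f} GB/s")
```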

  20. 369 TFlop/s molecular dynamics simulations on the Roadrunner general-purpose heterogeneous supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Swaminarayan, Sriram [Los Alamos National Laboratory; Germann, Timothy C [Los Alamos National Laboratory; Kadau, Kai [Los Alamos National Laboratory; Fossum, Gordon C [IBM CORPORATION

    2008-01-01

    The authors present timing and performance numbers for a short-range parallel molecular dynamics (MD) code, SPaSM, that has been rewritten for the heterogeneous Roadrunner supercomputer. Each Roadrunner compute node consists of two AMD Opteron dual-core microprocessors and four PowerXCell 8i enhanced Cell microprocessors, so that there are four MPI ranks per node, each with one Opteron and one Cell. The interatomic forces are computed on the Cells (each with one PPU and eight SPU cores), while the Opterons are used to direct inter-rank communication and perform I/O-heavy periodic analysis, visualization, and checkpointing tasks. The performance measured for our initial implementation of a standard Lennard-Jones pair potential benchmark reached a peak of 369 Tflop/s double-precision floating-point performance on the full Roadrunner system (27.7% of peak), corresponding to 124 MFlop/Watt/s at a price of approximately 3.69 MFlops/dollar. They demonstrate an initial target application, the jetting and ejection of material from a shocked surface.

  1. Timber resource statistics for the San Joaquin and southern resource areas of California.

    Science.gov (United States)

    Karen L. Waddell; Patricia M. Bassett

    1997-01-01

    This report is a summary of timber resource statistics for the San Joaquin and Southern Resource Areas of California, which include Alpine, Amador, Calaveras, Fresno, Imperial, Inyo, Kern, Kings, Los Angeles, Madera, Mariposa, Merced, Mono, Orange, Riverside, San Bernardino, San Diego, San Joaquin, Stanislaus, Tulare, and Tuolumne Counties. Data were collected as part...

  2. The green areas of San Juan, Puerto Rico

    Directory of Open Access Journals (Sweden)

    Olga M. Ramos-González

    2014-09-01

    Full Text Available Green areas, also known as green infrastructure or urban vegetation, are vital to urbanites for their critical roles in mitigating urban heat island effects and climate change and for their provision of multiple ecosystem services and aesthetics. Here, I provide a high spatial resolution snapshot of the green cover distribution of the city of San Juan, Puerto Rico, by incorporating the use of morphological spatial pattern analysis (MSPA as a tool to describe the spatial pattern and connectivity of the city's urban green areas. Analysis of a previously developed IKONOS 4-m spatial resolution classification of the city of San Juan from 2002 revealed a larger area of vegetation (green areas or green infrastructure than previously estimated by moderate spatial resolution imagery. The city as a whole had approximately 42% green cover and 55% impervious surfaces. Although the city appeared greener in its southern upland sector compared to the northern coastal section, where most built-up urban areas occurred (66% impervious surfaces, northern San Juan had 677 ha more green area cover dispersed across the city than the southern component. MSPA revealed that most forest cover occurred as edges and cores, and green areas were most commonly forest cores, with larger predominance in the southern sector of the municipality. In dense, built-up, urban land, most of the green areas occurred in private yards as islets. When compared to other cities across the United States, San Juan was most similar in green cover features to Boston, Massachusetts, and Miami, Florida. Per capita green space for San Juan (122.2 m²/inhabitant was also comparable to these two U.S. cities. This study explores the intra-urban vegetation variation in the city of San Juan, which is generally overlooked by moderate spatial resolution classifications in Puerto Rico. It serves as a starting point for green infrastructure mapping and landscape pattern analysis of the urban green spaces

  3. COMPUTATIONAL SCIENCE CENTER

    International Nuclear Information System (INIS)

    DAVENPORT, J.

    2006-01-01

    Computational Science is an integral component of Brookhaven's multi-science mission, and is a reflection of the increased role of computation across all of science. Brookhaven currently has major efforts in data storage and analysis for the Relativistic Heavy Ion Collider (RHIC) and the ATLAS detector at CERN, and in quantum chromodynamics. The Laboratory is host for the QCDOC machines (quantum chromodynamics on a chip), 10 teraflop/s computers which boast 12,288 processors each. There are two here, one for the Riken/BNL Research Center and the other supported by DOE for the US Lattice Gauge Community and other scientific users. A 100 teraflop/s supercomputer will be installed at Brookhaven in the coming year, managed jointly by Brookhaven and Stony Brook, and funded by a grant from New York State. This machine will be used for computational science across Brookhaven's entire research program, and also by researchers at Stony Brook and across New York State. With Stony Brook, Brookhaven has formed the New York Center for Computational Science (NYCCS) as a focal point for interdisciplinary computational science, which is closely linked to Brookhaven's Computational Science Center (CSC). The CSC has established a strong program in computational science, with an emphasis on nanoscale electronic structure and molecular dynamics, accelerator design, computational fluid dynamics, medical imaging, parallel computing and numerical algorithms. We have been an active participant in DOE's SciDAC program (Scientific Discovery through Advanced Computing). We are also planning a major expansion in computational biology in keeping with Laboratory initiatives. Additional laboratory initiatives with a dependence on a high level of computation include the development of hydrodynamics models for the interpretation of RHIC data, computational models for the atmospheric transport of aerosols, and models for combustion and for energy utilization. The CSC was formed to bring together

  4. ATLAS utilisation of the Czech national HPC center

    CERN Document Server

    Svatos, Michal; The ATLAS collaboration

    2018-01-01

    The Czech national HPC center IT4Innovations, located in Ostrava, provides two HPC systems, Anselm and Salomon. The Salomon HPC has been amongst the hundred most powerful supercomputers in the world since its commissioning in 2015. Both clusters were tested for usage by the ATLAS experiment for running simulation jobs. Several thousand core hours were allocated to the project for tests, but the main aim is to use free resources that would otherwise sit idle while large parallel jobs of other users wait to run. Multiple strategies for ATLAS job execution were tested on the Salomon and Anselm HPCs. The solution described herein is based on the ATLAS experience with other HPC sites. An ARC Compute Element (ARC-CE) installed at the grid site in Prague is used for job submission to Salomon. The ATLAS production system submits jobs to the ARC-CE via the ARC Control Tower (aCT). The ARC-CE processes job requirements from aCT and creates a script for a batch system which is then executed via ssh. Sshfs is used to share scripts and input files between the site and the HPC...
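    The submission path sketched in this record (a batch script generated locally, then executed on the HPC over ssh) might look roughly like the following; the host name, queue, and paths are hypothetical, and the production setup drives this from ARC-CE over an sshfs-shared directory rather than with scp.

```python
# Hedged sketch: stage a PBS script to a shared directory and submit it
# on the remote login node via ssh. All names below are placeholders.
import subprocess
import textwrap

HPC_LOGIN = "login.hpc.example"       # hypothetical login node
SHARED_DIR = "/scratch/atlas/jobs"    # assumed sshfs-shared path

script = textwrap.dedent("""\
    #!/bin/bash
    #PBS -q qprod
    #PBS -l select=1:ncpus=24
    cd "$PBS_O_WORKDIR"
    ./run_simulation.sh
""")

with open("job.pbs", "w") as fh:
    fh.write(script)

subprocess.run(["scp", "job.pbs", f"{HPC_LOGIN}:{SHARED_DIR}/"], check=True)
subprocess.run(["ssh", HPC_LOGIN, f"cd {SHARED_DIR} && qsub job.pbs"],
               check=True)
```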

  5. Pandemic (H1N1) 2009 Surveillance in Marginalized Populations, Tijuana, Mexico, and West Nile Virus Knowledge among Hispanics, San Diego, California, 2006

    Centers for Disease Control (CDC) Podcasts

    This podcast describes public health surveillance and communication in hard-to-reach populations in Tijuana, Mexico, and San Diego County, California. Dr. Marian McDonald, Associate Director for Health Disparities in CDC's National Center for Emerging and Zoonotic Infectious Diseases, discusses the importance of being flexible in determining the most effective media for health communications.

  6. Designing and application of SAN extension interface based on CWDM

    Science.gov (United States)

    Qin, Leihua; Yu, Shengsheng; Zhou, Jingli

    2005-11-01

    As Fibre Channel (FC) becomes the protocol of choice within corporate data centers, enterprises are increasingly deploying SANs in their data centers. To mitigate the risk of losing data and to improve data availability, more and more enterprises are adopting storage extension technologies to replicate their business-critical data to a secondary site. Transmitting this information over distance requires a carrier-grade environment with zero data loss, scalable throughput, low jitter, high security and the ability to travel long distances. To address these business requirements, there are three basic architectures for storage extension: Storage over Internet Protocol, Storage over Synchronous Optical Network/Synchronous Digital Hierarchy (SONET/SDH) and Storage over Dense Wavelength Division Multiplexing (DWDM). Each approach varies in functionality, complexity, cost, scalability, security, availability, predictable behavior (bandwidth, jitter, latency) and multiple-carrier limitations. Compared with these connectivity technologies, Coarse Wavelength Division Multiplexing (CWDM) is a simplified, low-cost and high-performance connectivity solution for enterprises deploying storage extension. In this paper, we design a storage extension connection over CWDM and test its electrical characteristics and the random read and write performance of a disk array through the CWDM connection; the test results show that the performance of the CWDM connection is acceptable. Furthermore, we propose three kinds of network architecture for SAN extension based on a CWDM interface. Finally, the credit-based flow control mechanism of FC and the relationship between credits and extension distance are analyzed.
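    The credit/distance relationship mentioned in the closing sentence can be estimated with a back-of-the-envelope calculation: a sender may have at most one frame in flight per buffer credit, so keeping a long link busy requires enough credits to cover the round-trip time. The constants below (maximum FC frame size, propagation speed in fiber) are typical textbook values, not measurements of any particular CWDM product.

```python
# Minimum buffer-to-buffer credits to keep an FC link of a given length
# fully utilized: round-trip time divided by frame serialization time.
import math

FIBER_KM_PER_S = 200_000.0   # ~2/3 c, light speed in glass fiber
FC_FRAME_BITS = 2148 * 8     # maximum Fibre Channel frame, with headers

def min_bb_credits(distance_km: float, line_rate_gbps: float) -> int:
    frame_time_s = FC_FRAME_BITS / (line_rate_gbps * 1e9)
    round_trip_s = 2.0 * distance_km / FIBER_KM_PER_S
    return math.ceil(round_trip_s / frame_time_s)

# A 100 km extension at ~2 Gbps needs on the order of 120 credits, which
# is why long CWDM spans call for switches with deep credit buffers.
print(min_bb_credits(100, 2.125))
```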

  7. Sediment transport of streams tributary to San Francisco, San Pablo, and Suisun Bays, California, 1909-66

    Science.gov (United States)

    Porterfield, George

    1980-01-01

    A review of historical sedimentation data is presented, results of sediment-data collection for water years 1957-59 are summarized, and long-term sediment-discharge estimates from a preliminary report are updated. Comparison of results based on 3 years of data to those for the 10 water years, 1957-66, provides an indication of the adequacy of the data obtained during the short period to define the long-term relation between sediment transport and streamflow. During 1909-66, sediment was transported to the entire San Francisco Bay system at an average rate of 8.6 million cubic yards per year. The Sacramento and San Joaquin River basins provided about 83% of the sediment inflow to the system annually during 1957-66 and 86% during 1909-66. About 98% of this inflow was measured or estimated at sediment measuring sites. Measured sediment inflow directly to the bays comprised only about 40% of the total discharged by basins directly tributary to the bays. About 90% of the total sediment discharge to the delta and the bays in the San Francisco Bay system thus was determined on the basis of systematic measurements. (USGS)

  8. San Andreas tremor cascades define deep fault zone complexity

    Science.gov (United States)

    Shelly, David R.

    2015-01-01

    Weak seismic vibrations - tectonic tremor - can be used to delineate some plate boundary faults. Tremor on the deep San Andreas Fault, located at the boundary between the Pacific and North American plates, is thought to be a passive indicator of slow fault slip. San Andreas Fault tremor migrates at up to 30 m/s, but the processes regulating tremor migration are unclear. Here I use a 12-year catalogue of more than 850,000 low-frequency earthquakes to systematically analyse the high-speed migration of tremor along the San Andreas Fault. I find that tremor migrates most effectively through regions of greatest tremor production and does not propagate through regions with gaps in tremor production. I interpret the rapid tremor migration as a self-regulating cascade of seismic ruptures along the fault, which implies that tremor may be an active, rather than passive, participant in the slip propagation. I also identify an isolated group of tremor sources that are offset eastwards beneath the San Andreas Fault, possibly indicative of the interface between the Monterey Microplate, a hypothesized remnant of the subducted Farallon Plate, and the North American Plate. These observations illustrate a possible link between the central San Andreas Fault and tremor-producing subduction zones.

  9. 77 FR 123 - Proposed CERCLA Administrative Cost Recovery Settlement; North Hollywood Operable Unit of the San...

    Science.gov (United States)

    2012-01-03

    ...In accordance with Section 122(i) of the Comprehensive Environmental Response, Compensation, and Liability Act, as amended (``CERCLA''), 42 U.S.C. 9622(i), notice is hereby given of a proposed administrative settlement for recovery of response costs concerning the North Hollywood Operable Unit of the San Fernando Valley Area 1 Superfund Site, located in the vicinity of Los Angeles, California, with the following settling party: Waste Management Recycling & Disposal Services of California, Inc., dba Bradley Landfill & Recycling Center. The settlement requires the settling party to pay a total of $185,734 to the North Hollywood Operable Unit Special Account within the Hazardous Substance Superfund. The settlement also includes a covenant not to sue the settling party pursuant to Section 107(a) of CERCLA, 42 U.S.C. 9607(a). For thirty (30) days following the date of publication of this notice, the Agency will receive written comments relating to the settlement. The Agency will consider all comments received and may modify or withdraw its consent to the settlement if comments received disclose facts or considerations which indicate that the settlement is inappropriate, improper, or inadequate. The Agency's response to any comments received will be available for public inspection at the City of Los Angeles Central Library, Science and Technology Department, 630 West 5th Street, Los Angeles CA 90071 and at the EPA Region 9 Superfund Records Center, Mail Stop SFD-7C, 95 Hawthorne Street, Room 403, San Francisco, CA 94105.

  10. Foreign Language Folio. A Guide to Cultural Resources and Field Trip Opportunities in the San Francisco Bay Area for Teachers and Students of Foreign Languages, 1983-85.

    Science.gov (United States)

    Gonzales, Tony, Ed.; O'Connor, Roger, Ed.

    A listing of San Francisco area cultural resources and opportunities of use to foreign language teachers is presented. Included are the following: museums and galleries, schools, art sources, churches, clubs, cultural centers and organizations, publications and publishing companies, restaurants, food stores and markets, travel and tourism,…

  11. Hydrologic assessment and numerical simulation of groundwater flow, San Juan Mine, San Juan County, New Mexico, 2010–13

    Science.gov (United States)

    Stewart, Anne M.

    2018-04-03

    Coal combustion byproducts (CCBs), which are composed of fly ash, bottom ash, and flue gas desulfurization material, produced at the coal-fired San Juan Generating Station (SJGS), located in San Juan County, New Mexico, have been buried in former surface-mine pits at the San Juan Mine, also referred to as the San Juan Coal Mine, since operations began in the early 1970s. This report, prepared by the U.S. Geological Survey in cooperation with the Mining and Minerals Division of the New Mexico Energy, Minerals and Natural Resources Department, describes results of a hydrogeologic assessment, including numerical groundwater modeling, to identify the timing of groundwater recovery and potential pathways for groundwater transport of metals that may be leached from stored CCBs and reach hydrologic receptors after operations cease. Data collected for the hydrologic assessment indicate that groundwater in at least one centrally located reclaimed surface-mining pit has already begun to recover.The U.S. Geological Survey numerical modeling package MODFLOW–NWT was used with MODPATH particle-tracking software to identify advective flow paths from CCB storage areas toward potential hydrologic receptors. Results indicate that groundwater at CCB storage areas will recover to the former steady state, or in some locations, groundwater may recover to a new steady state in 6,600 to 10,600 years at variable rates depending on the proximity to a residual cone-of-groundwater depression caused by mine dewatering and regional oil and gas pumping as well as on actual, rather than estimated, groundwater recharge and evapotranspirational losses. Advective particle-track modeling indicates that the number of particles and rates of advective transport will vary depending on hydraulic properties of the mine spoil, particularly hydraulic conductivity and porosity. Modeling results from the most conservative scenario indicate that particles can migrate from CCB repositories to either the

  12. Effectiveness of Kampo medicine Gorei-san for chronic subdural hematoma

    International Nuclear Information System (INIS)

    Miyagami, Mitsusuke; Kagawa, Yukihide

    2009-01-01

    Chronic subdural hematomas (CSDHs) are basically treated by surgery. In some cases with no or minimal symptoms, however, they may be treated conservatively. In the present study, we evaluated the therapeutic effect of a Kampo medicine (Japanese traditional herbal medicine), Gorei-san, in the treatment of such CSDHs. Gorei-san 7.5 g t.i.d. was orally administered for 4 weeks in 22 patients with 27 CSDHs. The maximum thickness of the hematoma was followed up on CT scans for 4 to 29 weeks after administration of Gorei-san. In 7 of the 22 patients, tranexamic acid and/or carbazochrome sodium sulfonate were also administered. Gorei-san was effective in 23 of 27 CSDHs. In 12 of them, the hematoma had completely disappeared within 14 weeks after administration; in the other 11 CSDHs, the thickness decreased. In these effective cases, the thickness began to decrease 3 to 4 weeks after administration of Gorei-san. It was more effective in CSDHs with iso-/high or mixed density than with low density on CT. It was not effective in 4 of the 27 CSDHs. No apparent adverse effect was noted in the present series of patients. The present study suggests that the Kampo medicine Gorei-san is a useful option in the conservative treatment of CSDHs with no or minimal symptoms. (author)

  13. 75 FR 65985 - Safety Zone: Epic Roasthouse Private Party Firework Display, San Francisco, CA

    Science.gov (United States)

    2010-10-27

    ... the navigable waters of San Francisco Bay 1,000 yards off Epic Roasthouse Restaurant, San Francisco.... Wright, Program Manager, Docket Operations, telephone 202-366-9826. SUPPLEMENTARY INFORMATION: Regulatory... waters of San Francisco Bay, 1,000 yards off Epic Roasthouse Restaurant, San Francisco, CA. The fireworks...

  14. Establishing a Research Center: The Minority Male Community College Collaborative (M2C3)

    Science.gov (United States)

    Wood, J. Luke; Urias, Marissa Vasquez; Harris, Frank, III

    2016-01-01

    This chapter describes the establishment of the Minority Male Community College Collaborative (M2C3), a research and practice center at San Diego State University. M2C3 partners with community colleges across the United States to enhance access, achievement, and success among men of color. This chapter begins with a description of the national…

  15. New Center Links Earth, Space, and Information Sciences

    Science.gov (United States)

    Aswathanarayana, U.

    2004-05-01

    Broad-based geoscience instruction melding the Earth, space, and information technology sciences has been identified as an effective way to take advantage of the new jobs created by technological innovations in natural resources management. Based on this paradigm, the University of Hyderabad in India is developing a Centre of Earth and Space Sciences that will be linked to the university's super-computing facility. The proposed center will provide the basic science underpinnings for the Earth, space, and information technology sciences; develop new methodologies for the utilization of natural resources such as water, soils, sediments, minerals, and biota; mitigate the adverse consequences of natural hazards; and design innovative ways of incorporating scientific information into the legislative and administrative processes. For these reasons, the ethos and the innovatively designed management structure of the center would be of particular relevance to the developing countries. India holds 17% of the world's human population, and 30% of its farm animals, but only about 2% of the planet's water resources. Water will hence constitute the core concern of the center, because ecologically sustainable, socially equitable, and economically viable management of water resources of the country holds the key to the quality of life (drinking water, sanitation, and health), food security, and industrial development of the country. The center will be focused on interdisciplinary basic and pure applied research that is relevant to the practical needs of India as a developing country. These include, for example, climate prediction, since India is heavily dependent on the monsoon system, and satellite remote sensing of soil moisture, since agriculture is still a principal source of livelihood in India. The center will perform research and development in areas such as data assimilation and validation, and identification of new sensors to be mounted on the Indian meteorological

  16. High Performance Computing in Science and Engineering '15 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2015. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  17. High Performance Computing in Science and Engineering '17 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael; HLRS 2017

    2018-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2017. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  18. Publisher Correction

    DEFF Research Database (Denmark)

    Bonàs-Guarch, Sílvia; Guindo-Martínez, Marta; Miguel-Escalada, Irene

    2018-01-01

    In the originally published version of this Article, the affiliation details for Santi González, Jian'an Luan and Claudia Langenberg were inadvertently omitted. Santi González should have been affiliated with 'Barcelona Supercomputing Center (BSC), Joint BSC-CRG-IRB Research Program in Computatio...

  19. New Mexico High School Supercomputing Challenge, 1990--1995: Five years of making a difference to students, teachers, schools, and communities. Progress report

    Energy Technology Data Exchange (ETDEWEB)

    Foster, M.; Kratzer, D.

    1996-02-01

    The New Mexico High School Supercomputing Challenge is an academic program dedicated to increasing interest in science and math among high school students by introducing them to high performance computing. This report provides a summary and evaluation of the first five years of the program, describes the program and shows the impact that it has had on high school students, their teachers, and their communities. Goals and objectives are reviewed and evaluated, growth and development of the program are analyzed, and future directions are discussed.

  20. 78 FR 42027 - Safety Zone; San Diego Bayfair; Mission Bay, San Diego, CA

    Science.gov (United States)

    2013-07-15

    ... safety zones. Thunderboats Unlimited Inc. is sponsoring San Diego Bayfair, which is held on the navigable... distribution of power and responsibilities between the Federal Government and Indian tribes. 12. Energy Effects This proposed rule is not a ``significant energy action'' under Executive Order 13211, Actions...

  1. Parallel simulation of tsunami inundation on a large-scale supercomputer

    Science.gov (United States)

    Oishi, Y.; Imamura, F.; Sugawara, D.

    2013-12-01

    An accurate prediction of tsunami inundation is important for disaster mitigation purposes. One approach is to approximate the tsunami wave source through an instant inversion analysis using real-time observation data (e.g., Tsushima et al., 2009) and then use the resulting wave source data in an instant tsunami inundation simulation. However, a bottleneck of this approach is the large computational cost of the non-linear inundation simulation; the computational power of recent massively parallel supercomputers is what makes faster-than-real-time execution of a tsunami inundation simulation feasible. Parallel computers have become approximately 1000 times faster in 10 years (www.top500.org), so very fast parallel computers are expected to become more and more prevalent in the near future. Therefore, it is important to investigate how to efficiently conduct a tsunami simulation on parallel computers. In this study, we target very fast tsunami inundation simulations on the K computer, currently the fastest Japanese supercomputer, which has a theoretical peak performance of 11.2 PFLOPS. One computing node of the K computer consists of 1 CPU with 8 cores that share memory, and the nodes are connected through a high-performance torus-mesh network. The K computer is designed for distributed-memory parallel computation, so we have developed a parallel tsunami model. Our model is based on the TUNAMI-N2 model of Tohoku University, which uses a leap-frog finite difference method. A grid nesting scheme is employed to apply high-resolution grids only in the coastal regions. To balance the computational load across CPUs in the parallelization, CPUs are first allocated to each nested layer in proportion to the number of grid points of that layer. Using the CPUs allocated to each layer, 1-D domain decomposition is performed on each layer. In the parallel computation, three types of communication are necessary: (1) communication to adjacent neighbours for the
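    The load-balancing rule described here (CPUs handed to each nested layer in proportion to its grid-point count, then a 1-D split within the layer) can be sketched as below; the layer sizes and CPU count are illustrative, not values from the paper.

```python
# Allocate CPUs to nested grid layers in proportion to grid points.
def allocate_cpus(grid_points: list[int], total_cpus: int) -> list[int]:
    total = sum(grid_points)
    # Each layer gets at least one CPU; the rest follow its share.
    alloc = [max(1, round(total_cpus * g / total)) for g in grid_points]
    # Crude correction so the allocation sums exactly to total_cpus.
    while sum(alloc) > total_cpus:
        alloc[alloc.index(max(alloc))] -= 1
    while sum(alloc) < total_cpus:
        alloc[alloc.index(max(alloc))] += 1
    return alloc

# Three nested layers: one coarse offshore grid, two fine coastal grids.
print(allocate_cpus([4_000_000, 9_000_000, 16_000_000], 1024))
```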

  2. Chemical and Physical Characteristics of Pulverized Granitic Rock Adjacent to the San Andreas, Garlock and San Jacinto Faults: Implications for Earthquake Physics

    Science.gov (United States)

    Rockwell, T. K.; Sisk, M.; Stillings, M.; Girty, G.; Dor, O.; Wechsler, N.; Ben-Zion, Y.

    2008-12-01

    We present new detailed analyses of pulverized granitic rocks from sections adjacent to the San Andreas, Garlock and San Jacinto faults in southern California. Along the San Andreas and Garlock faults, the Tejon Lookout Granite is pulverized in all exposures within about 100 m of both faults. Along the Clark strand of the San Jacinto fault in Horse Canyon, the pulverization of granitic rocks is highly asymmetric, with a much broader zone of pulverization along the southwest side of the Clark fault. In areas where the granite is injected as dyke rock into schist, only the granitic rock shows pulverization, demonstrating the control of rock type on the pulverization process. Chemical analyses indicate little or no weathering in the bulk of the rock, although XRD analysis shows the presence of smectite, illite, and minor kaolinite in the clay-sized fraction. Weathering products may dominate in the less-than-1-micron fraction. The average grain sizes in all samples of pulverized granitic rock range between about 20 and 200 microns (silt to fine sand), with the size distribution in part a function of proximity to the primary slip zone. The San Andreas fault samples are generally finer than those collected along the Garlock or San Jacinto faults. The particle size distribution for all samples is non-fractal, with a distinct slope break in the 60-100 micron range, which suggests that pulverization is not a consequence of direct shear. This average particle size is considerably coarser than previously reported, which we attribute to possible measurement errors in the prior work. Our data and observations suggest that dynamic fracturing in the wall rock of these three major faults accounts for only 1% or less of the earthquake energy budget.

  3. 77 FR 60897 - Safety Zone: America's Cup World Series Finish-Line, San Francisco, CA

    Science.gov (United States)

    2012-10-05

    ... navigable waters of the San Francisco Bay in vicinity of San Francisco West Yacht Harbor Light 2... vicinity of San Francisco West Yacht Harbor Light 2. Unauthorized persons or vessels are prohibited from... San Francisco West Yacht Harbor Light 2. This safety zone establishes a temporary restricted area on...

  4. Reliability Lessons Learned From GPU Experience With The Titan Supercomputer at Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Gallarno, George [Christian Brothers University; Rogers, James H [ORNL; Maxwell, Don E [ORNL

    2015-01-01

    The high computational capability of graphics processing units (GPUs) is enabling and driving the scientific discovery process at large scale. The world's second fastest supercomputer for open science, Titan, has more than 18,000 GPUs that computational scientists use to perform scientific simulations and data analysis. Understanding of GPU reliability characteristics, however, is still in its nascent stage since GPUs have only recently been deployed at large scale. This paper presents a detailed study of GPU errors and their impact on system operations and applications, describing experiences with the 18,688 GPUs on the Titan supercomputer as well as lessons learned in the process of efficient operation of GPUs at scale. These experiences are helpful to HPC sites which already have large-scale GPU clusters or plan to deploy GPUs in the future.

  5. San Juanico Hybrid System Technical and Institutional Assessment: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Corbus, D.; Newcomb, C.; Yewdall, Z.

    2004-07-01

    San Juanico is a fishing village of approximately 120 homes in the Municipality of Comondu, Baja California. In April 1999, a hybrid power system was installed in San Juanico to provide 24-hour power, which was not previously available. Before the installation of the hybrid power system, a field study was conducted to characterize the electrical usage and the institutional and social framework of San Juanico. One year after the installation of the hybrid power system, a "post-electrification" study was performed to document the changes that had occurred after the installation. In December 2003, NREL visited the site to conduct a technical assessment of the system.

  6. Achieving Extreme Resolution in Numerical Cosmology Using Adaptive Mesh Refinement: Resolving Primordial Star Formation

    Directory of Open Access Journals (Sweden)

    Greg L. Bryan

    2002-01-01

    As an entry for the 2001 Gordon Bell Award in the "special" category, we describe our 3-d, hybrid, adaptive mesh refinement (AMR) code Enzo designed for high-resolution, multiphysics, cosmological structure formation simulations. Our parallel implementation places no limit on the depth or complexity of the adaptive grid hierarchy, allowing us to achieve unprecedented spatial and temporal dynamic range. We report on a simulation of primordial star formation which develops over 8000 subgrids at 34 levels of refinement to achieve a local refinement of a factor of 10^12 in space and time. This allows us to resolve the properties of the first stars which form in the universe assuming standard physics and a standard cosmological model. Achieving extreme resolution requires the use of 128-bit extended precision arithmetic (EPA) to accurately specify the subgrid positions. We describe our EPA AMR implementation on the IBM SP2 Blue Horizon system at the San Diego Supercomputer Center.
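
    The quoted 10^12 dynamic range is what forces 128-bit arithmetic: a 64-bit double carries only about 16 significant decimal digits, so a subgrid offset of one part in 10^12 leaves almost no bits for further arithmetic. A minimal illustration of the problem (ours, not Enzo's implementation; Python's decimal module stands in for 128-bit floats):

      # Why 64-bit positions fail at a refinement factor of 10^12: the offset
      # between neighbouring fine cells sits near the rounding floor of a double.
      from decimal import Decimal, getcontext

      dx = 1.0 / 1e12                  # finest cell size, in units of the box
      x = 0.1 + dx                     # place a subgrid just past x = 0.1
      print(x - 0.1)                   # ~1e-12 but with visible rounding error

      getcontext().prec = 34           # roughly quad (128-bit) precision
      xq = Decimal("0.1") + Decimal(1) / Decimal(10) ** 12
      print(xq - Decimal("0.1"))       # exactly 1E-12: the offset survives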

  7. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office: since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, in part by using opportunistic resources such as the San Diego Supercomputer Center, which made it possible to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 T. [Figure 3: Number of events per month (data)] In LS1, our emphasis is to increase the efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...

  8. Steam, solarization, and tons of prevention: the San Francisco Public Utilities Commission's fight to contain Phytophthoras in San Francisco Bay area restoration sites

    Science.gov (United States)

    Greg Lyman; Jessica Appel; Mia Ingolia; Ellen Natesan; Joe Ortiz

    2017-01-01

    To compensate for unavoidable impacts associated with critical water infrastructure capital improvement projects, the San Francisco Public Utilities Commission (SFPUC) restored over 2,050 acres of riparian, wetland, and upland habitat on watershed lands in Alameda, Santa Clara, and San Mateo Counties. Despite strict bio-sanitation protocols, plant pathogens (...

  9. Effects of Choto-san (Diao-Teng-San) on microcirculation of bulbar conjunctiva and hemorheological factors in patients with asymptomatic cerebral infarction

    OpenAIRE

    YANG, Qiao; Kita, Toshiaki; Hikiami, Hiroaki; Shimada, Yutaka; Itoh, Takashi; Terasawa, Katsutoshi

    1999-01-01

    In this study, the effects of Choto-san (釣藤散) on the microcirculation of the bulbar conjunctiva in 16 patients with asymptomatic cerebral infarction were investigated with a video-microscopic system. After the administration of Choto-san for four weeks, variables of microcirculatory flow of the bulbar conjunctiva, that is, the internal diameter of vessels, flow velocity and flow volume rate, were increased (p

  10. Riparian Habitat - San Joaquin River

    Data.gov (United States)

    California Natural Resource Agency — The immediate focus of this study is to identify, describe and map the extent and diversity of riparian habitats found along the main stem of the San Joaquin River,...

  11. ASTEC and MODEL: Controls software development at Goddard Space Flight Center

    Science.gov (United States)

    Downing, John P.; Bauer, Frank H.; Surber, Jeffrey L.

    1993-01-01

    The ASTEC (Analysis and Simulation Tools for Engineering Controls) software has been under development at the Goddard Space Flight Center (GSFC) for the last three years. The design goal is to provide a wide selection of controls analysis tools at the personal computer level, as well as the capability to upload compute-intensive jobs to a mainframe or supercomputer. ASTEC is meant to be an integrated collection of controls analysis tools for use at the desktop level. MODEL (Multi-Optimal Differential Equation Language) is a translator that converts programs written in the MODEL language to FORTRAN; an upgraded version of the MODEL program will be merged into ASTEC. MODEL has not been modified since 1981 and has not kept pace with changes in computers or user-interface techniques. This paper describes the changes made to MODEL in order to make it useful in the 1990s and how it relates to ASTEC.

  12. Neutron beam applications - Polymer study and sample environment development for HANARO SANS instrument

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hong Doo [Kyunghee University, Seoul (Korea); Char, Kook Heon [Seoul National University, Seoul (Korea)

    2000-04-01

    A new SANS instrument will be installed in the HANARO reactor in the near future, and in parallel it is necessary to develop the sample environment facilities. One of the basic items is the equipment to control the sample temperature of the cell block with an auto-sample changer, and control software must be developed for this purpose. In addition, data acquisition and analysis software for the SANS instrument must be developed and supplied in order for it to function properly. PS/PI block copolymer research at NIST will provide a general understanding of the SANS instrument and valuable instrument-related information, such as standard samples for SANS and know-how for building the instrument. The following are the results of this research. a. Construction of the sample cell block. b. Software to control the temperature and auto-sample changer. c. Acquisition of the SANS data analysis routine and its modification for HANARO SANS. d. PS/PI block copolymer research at NIST. e. Calibration data of NIST and HANARO SANS for comparison. 39 figs., 2 tabs. (Author)

  13. 33 CFR 80.1130 - San Luis Obispo Bay, CA.

    Science.gov (United States)

    2010-07-01

    Navigation and Navigable Waters; COAST GUARD, DEPARTMENT OF HOMELAND SECURITY; INTERNATIONAL NAVIGATION RULES; COLREGS DEMARCATION LINES; Pacific Coast; § 80.1130 San Luis Obispo Bay, CA. A line drawn from...

  14. San Antonio Bay 1986-1989

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The effect of salinity on utilization of shallow-water nursery habitats by aquatic fauna was assessed in San Antonio Bay, Texas. Overall, 272 samples were collected...

  15. A criticality safety analysis code using a vectorized Monte Carlo method on the HITAC S-810 supercomputer

    International Nuclear Information System (INIS)

    Morimoto, Y.; Maruyama, H.

    1987-01-01

    A vectorized Monte Carlo criticality safety analysis code has been developed on the vector supercomputer HITAC S-810. In this code, a multi-particle tracking algorithm was adopted for effective utilization of the vector processor. A flight analysis with pseudo-scattering was developed to reduce the computational time needed for flight analysis, which represents the bulk of the computational time. This new algorithm realized a speed-up by a factor of 1.5 over the conventional flight analysis. The code also adopted a Bondarenko-type multigroup cross-section library with 190 groups, 132 of them for the fast and epithermal regions and 58 for the thermal region. Evaluation work showed that this code reproduces the experimental results to an accuracy of about 1% for the effective neutron multiplication factor. (author)
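
    The point of multi-particle tracking is to advance a whole bank of histories in lockstep so the vector pipelines stay full, instead of following one neutron at a time. A toy NumPy analogue (ours, not the HITAC code; one-group infinite medium with invented cross sections) shows the pattern:

      # Event-based (multi-particle) Monte Carlo sketch: every step samples
      # flight distances and collision outcomes for all live particles at once.
      import numpy as np

      rng = np.random.default_rng(1)
      sigma_t, sigma_a = 1.0, 0.4        # total/absorption cross sections (1/cm)
      alive = np.ones(100_000, dtype=bool)
      collisions = 0

      while alive.any():
          n = int(alive.sum())
          # vectorized flight analysis (distances are unused in an infinite
          # medium; sampled here only to show the lockstep structure)
          rng.exponential(1.0 / sigma_t, size=n)
          absorbed = rng.random(n) < sigma_a / sigma_t
          idx = np.flatnonzero(alive)
          alive[idx[absorbed]] = False   # retire absorbed histories
          collisions += n

      # geometric mean: sigma_t / sigma_a = 2.5 collisions per history
      print("mean collisions per history:", collisions / 100_000)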

  16. San Gregorio mining: general presentation of the enterprise

    International Nuclear Information System (INIS)

    1997-01-01

    This work is a project presented by the San Gregorio mine. The company is responsible for the extraction and beneficiation of the gold ore deposits at San Gregorio and its East extension in Minas de Corrales. For this project an environmental impact study was carried out, as well as an agreement with LATU for the laboratory analyses and the surface-water and groundwater monitoring within the environmental program established by the company.

  17. Defining competencies for education in health care value: recommendations from the University of California, San Francisco Center for Healthcare Value Training Initiative.

    Science.gov (United States)

    Moriates, Christopher; Dohan, Daniel; Spetz, Joanne; Sawaya, George F

    2015-04-01

    Leaders in medical education have increasingly called for the incorporation of cost awareness and health care value into health professions curricula. Emerging efforts have thus far focused on physicians, but foundational competencies need to be defined related to health care value that span all health professions and stages of training. The University of California, San Francisco (UCSF) Center for Healthcare Value launched an initiative in 2012 that engaged a group of educators from all four health professions schools at UCSF: Dentistry, Medicine, Nursing, and Pharmacy. This group created and agreed on a multidisciplinary set of comprehensive competencies related to health care value. The term "competency" was used to describe components within the larger domain of providing high-value care. The group then classified the competencies as beginner, proficient, or expert level through an iterative process and group consensus. The group articulated 21 competencies. The beginner competencies include basic principles of health policy, health care delivery, health costs, and insurance. Proficient competencies include real-world applications of concepts to clinical situations, primarily related to the care of individual patients. The expert competencies focus primarily on systems-level design, advocacy, mentorship, and policy. These competencies aim to identify a standard that may help inform the development of curricula across health professions training. These competencies could be translated into the learning objectives and evaluation methods of resources to teach health care value, and they should be considered in educational settings for health care professionals at all levels of training and across a variety of specialties.

  18. Estimating natural recharge in San Gorgonio Pass watersheds, California, 1913–2012

    Science.gov (United States)

    Hevesi, Joseph A.; Christensen, Allen H.

    2015-12-21

    A daily precipitation-runoff model was developed to estimate spatially and temporally distributed recharge for groundwater basins in the San Gorgonio Pass area, southern California. The recharge estimates are needed to define transient boundary conditions for a groundwater-flow model being developed to evaluate the effects of pumping and climate on the long-term availability of groundwater. The area defined for estimating recharge is referred to as the San Gorgonio Pass watershed model (SGPWM) and includes three watersheds: San Timoteo Creek, Potrero Creek, and San Gorgonio River. The SGPWM was developed by using the U.S. Geological Survey INFILtration version 3.0 (INFILv3) model code used in previous studies of recharge in the southern California region, including the San Gorgonio Pass area. The SGPWM uses a 150-meter gridded discretization of the area of interest in order to account for spatial variability in climate and watershed characteristics. The high degree of spatial variability in climate and watershed characteristics in the San Gorgonio Pass area is caused, in part, by the high relief and rugged topography of the area.

  19. Benchmarking Further Single Board Computers for Building a Mini Supercomputer for Simulation of Telecommunication Systems

    Directory of Open Access Journals (Sweden)

    Gábor Lencse

    2016-01-01

    Parallel Discrete Event Simulation (PDES) with the conservative synchronization method can be efficiently used for the performance analysis of telecommunication systems because of their good lookahead properties. For PDES, a cost-effective execution platform may be built by using single board computers (SBCs), which offer relatively high computation capacity compared to their price or power consumption and especially to the space they take up. A benchmarking method is proposed and its operation is demonstrated by benchmarking ten different SBCs, namely Banana Pi, Beaglebone Black, Cubieboard2, Odroid-C1+, Odroid-U3+, Odroid-XU3 Lite, Orange Pi Plus, Radxa Rock Lite, Raspberry Pi Model B+, and Raspberry Pi 2 Model B+. Their benchmarking results are compared to find out which one should be used for building a mini supercomputer for parallel discrete-event simulation of telecommunication systems. The SBCs are also used to build a heterogeneous cluster and the performance of the cluster is tested, too.
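
    The essence of any such ranking is running an identical, fixed workload on every board and comparing wall-clock times. A stand-in harness (ours; the paper benchmarks its own PDES workload, not this kernel) is just:

      # Portable single-core timing harness: run the same fixed kernel on each
      # SBC and keep the best of several runs (the kernel itself is arbitrary).
      import time

      def kernel(n=200_000):
          s = 0.0
          for i in range(1, n):
              s += 1.0 / (i * i)       # fixed floating-point workload
          return s

      times = []
      for _ in range(5):
          t0 = time.perf_counter()
          kernel()
          times.append(time.perf_counter() - t0)
      print(f"best of 5 runs: {min(times):.3f} s (lower is better)")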

  20. Site in a box: Improving the Tier 3 experience

    Science.gov (United States)

    Dost, J. M.; Fajardo, E. M.; Jones, T. R.; Martin, T.; Tadel, A.; Tadel, M.; Würthwein, F.

    2017-10-01

    The Pacific Research Platform is an initiative to interconnect Science DMZs between campuses across the West Coast of the United States over a 100 Gbps network. The LHC @ UC is a proof-of-concept pilot project that focuses on interconnecting 6 University of California campuses. It is spearheaded by computing specialists from the UCSD Tier 2 Center in collaboration with the San Diego Supercomputer Center. A machine has been shipped to each campus, extending the concept of the Data Transfer Node to a cluster in a box that is fully integrated into the local compute, storage, and networking infrastructure. The node contains a full HTCondor batch system and also an XRootD proxy cache. User jobs routed to the DTN can run on 40 additional slots provided by the machine, and can also flock to a common GlideinWMS pilot pool, which sends jobs out to any of the participating UCs, as well as to Comet, the new supercomputer at SDSC. In addition, a common XRootD federation has been created to interconnect the UCs and give the ability to export data arbitrarily from the home university, making it available wherever the jobs run. The UC-level federation also statically redirects to either the ATLAS FAX or CMS AAA federation, respectively, to make globally published datasets available, depending on end-user VO membership credentials. XRootD read operations from the federation transfer through the nearest DTN proxy cache located at the site where the jobs run. This reduces wide area network overhead for subsequent accesses and improves overall read performance. Details on the technical implementation, challenges faced and overcome in setting up the infrastructure, and an analysis of usage patterns and system scalability will be presented.
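
    From a job's point of view the cache is transparent: reads are addressed to the local proxy, which fetches from the federation redirector on a miss and serves subsequent reads locally. A sketch using the standard pyxrootd client bindings (the hostname and path below are hypothetical, not the project's actual endpoints):

      # Read a federated file through a nearby XRootD proxy cache (hypothetical
      # endpoints; assumes the pyxrootd bindings that ship with XRootD).
      from XRootD import client

      url = "root://dtn-cache.campus.example//store/user/example/dataset.root"

      f = client.File()
      status, _ = f.open(url)              # proxy pulls from the federation on a miss
      if not status.ok:
          raise RuntimeError(status.message)
      status, first_kb = f.read(0, 1024)   # later reads are served from the cache
      print(f"read {len(first_kb)} bytes via the proxy cache")
      f.close()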

  1. Timber resource statistics for the San Joaquin and southern California resource areas.

    Science.gov (United States)

    Bruce Hiserote; Joel Moen; Charles L. Bolsinger

    1986-01-01

    This report is one of five that provide timber resource statistics for 57 of the 58 counties in California (San Francisco is excluded). This report presents statistics from a 1982-84 inventory of the timber resources of Alpine, Amador, Calaveras, Fresno, Imperial, Inyo, Kern, Kings, Los Angeles, Madera, Mariposa, Merced, Mono, Orange, Riverside, San Bernardino, San...

  2. The Erasmus Computing Grid - Building a Super-Computer Virtually for Free at the Erasmus Medical Center and the Hogeschool Rotterdam

    NARCIS (Netherlands)

    T.A. Knoch (Tobias); L.V. de Zeeuw (Luc)

    2006-01-01

    The Set-Up of the 20 Teraflop Erasmus Computing Grid: to meet the enormous computational needs of life-science research as well as clinical diagnostics and treatment, the Hogeschool Rotterdam and the Erasmus Medical Center are currently setting up one of the largest desktop

  3. EX1103: Exploration and Mapping, Galapagos Spreading Center: Mapping, CTD, Tow-Yo, and ROV on NOAA Ship Okeanos Explorer (EM302)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This cruise will be composed of two separate legs. The first leg will be a transit from San Diego, CA to the Galapagos Spreading Center, where multibeam mapping, CTD...

  4. A simulation of the San Andreas fault experiment

    Science.gov (United States)

    Agreen, R. W.; Smith, D. E.

    1974-01-01

    The San Andreas fault experiment (Safe), which employs two laser tracking systems for measuring the relative motion of two points on opposite sides of the fault, has been simulated for an 8-yr observation period. The two tracking stations are located near San Diego on the western side of the fault and near Quincy on the eastern side; they are roughly 900 km apart. Both will simultaneously track laser reflector equipped satellites as they pass near the stations. Tracking of the Beacon Explorer C spacecraft has been simulated for these two stations during August and September for 8 consecutive years. An error analysis of the recovery of the relative location of Quincy from the data has been made, allowing for model errors in the mass of the earth, the gravity field, solar radiation pressure, atmospheric drag, errors in the position of the San Diego site, and biases and noise in the laser systems. The results of this simulation indicate that the distance of Quincy from San Diego will be determined each year with a precision of about 10 cm. Projected improvements in these model parameters and in the laser systems over the next few years will bring the precision to about 1-2 cm by 1980.

  5. Gravity data from the San Pedro River Basin, Cochise County, Arizona

    Science.gov (United States)

    Kennedy, Jeffrey R.; Winester, Daniel

    2011-01-01

    The U.S. Geological Survey, Arizona Water Science Center in cooperation with the National Oceanic and Atmospheric Administration, National Geodetic Survey has collected relative and absolute gravity data at 321 stations in the San Pedro River Basin of southeastern Arizona since 2000. Data are of three types: observed gravity values and associated free-air, simple Bouguer, and complete Bouguer anomaly values, useful for subsurface-density modeling; high-precision relative-gravity surveys repeated over time, useful for aquifer-storage-change monitoring; and absolute-gravity values, useful as base stations for relative-gravity surveys and for monitoring gravity change over time. The data are compiled, without interpretation, in three spreadsheet files. Gravity values, GPS locations, and driving directions for absolute-gravity base stations are presented as National Geodetic Survey site descriptions.
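
    The free-air and Bouguer values listed with each station follow the conventional reductions: the observed gravity is corrected for elevation (0.3086 mGal/m) and, for the simple Bouguer anomaly, for the attraction of an infinite slab of rock between the station and the datum. A small sketch with the standard constants (the station values are invented, not from this data release):

      # Conventional gravity reductions behind free-air and simple Bouguer
      # anomaly values (standard constants; example numbers are hypothetical).
      def free_air_anomaly(g_obs, g_theory, h):
          # h in metres; 0.3086 mGal per metre free-air gradient
          return g_obs - g_theory + 0.3086 * h

      def simple_bouguer_anomaly(fa, h, density=2670.0):
          # slab term 2*pi*G*rho*h = 0.0000419 * rho[kg/m^3] * h[m], in mGal
          return fa - 0.0000419 * density * h

      fa = free_air_anomaly(g_obs=978890.0, g_theory=978990.0, h=1200.0)
      print("free-air anomaly:", round(fa, 1), "mGal")
      print("simple Bouguer anomaly:", round(simple_bouguer_anomaly(fa, 1200.0), 1), "mGal")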

  6. Fast and Accurate Simulation of the Cray XMT Multithreaded Supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Villa, Oreste; Tumeo, Antonino; Secchi, Simone; Manzano Franco, Joseph B.

    2012-12-31

    Irregular applications, such as data mining and analysis or graph-based computations, show unpredictable memory/network access patterns and control structures. Highly multithreaded architectures with large processor counts, like the Cray MTA-1, MTA-2 and XMT, appear to address their requirements better than commodity clusters. However, the research on highly multithreaded systems is currently limited by the lack of adequate architectural simulation infrastructures due to issues such as size of the machines, memory footprint, simulation speed, accuracy and customization. At the same time, Shared-memory MultiProcessors (SMPs) with multi-core processors have become an attractive platform to simulate large scale machines. In this paper, we introduce a cycle-level simulator of the highly multithreaded Cray XMT supercomputer. The simulator runs unmodified XMT applications. We discuss how we tackled the challenges posed by its development, detailing the techniques introduced to make the simulation as fast as possible while maintaining a high accuracy. By mapping XMT processors (ThreadStorm with 128 hardware threads) to host computing cores, the simulation speed remains constant as the number of simulated processors increases, up to the number of available host cores. The simulator supports zero-overhead switching among different accuracy levels at run-time and includes a network model that takes into account contention. On a modern 48-core SMP host, our infrastructure simulates a large set of irregular applications 500 to 2000 times slower than real time when compared to a 128-processor XMT, while remaining within 10% of accuracy. Emulation is only from 25 to 200 times slower than real time.
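
    The ThreadStorm processors being modeled hide memory latency by barrel multithreading: each cycle the processor issues from a different ready hardware thread, so a long load stall costs nothing while other threads have work. A drastically simplified model of that mechanism (ours, for intuition only; all latencies invented):

      # Toy barrel-multithreading model: one instruction issues per cycle from
      # the next ready hardware thread; loads park a thread for MEM_LATENCY cycles.
      from collections import deque

      MEM_LATENCY, THREADS = 60, 128
      ready = deque(range(THREADS))
      stalled = {}                       # thread id -> cycle it becomes ready
      issued = idle = 0

      for cycle in range(10_000):
          for t in [t for t, c in stalled.items() if c <= cycle]:
              ready.append(t)            # memory reference completed
              del stalled[t]
          if ready:
              t = ready.popleft()
              issued += 1
              stalled[t] = cycle + MEM_LATENCY   # pretend every issue is a load
          else:
              idle += 1

      # 128 threads comfortably cover a 60-cycle latency: utilization is 100%
      print(f"utilization: {issued / (issued + idle):.0%}")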

  7. PFLOTRAN: Reactive Flow & Transport Code for Use on Laptops to Leadership-Class Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Glenn E.; Lichtner, Peter C.; Lu, Chuan; Mills, Richard T.

    2012-04-18

    PFLOTRAN, a next-generation reactive flow and transport code for modeling subsurface processes, has been designed from the ground up to run efficiently on machines ranging from leadership-class supercomputers to laptops. Based on an object-oriented design, the code is easily extensible to incorporate additional processes. It can interface seamlessly with Fortran 9X, C and C++ codes. Domain decomposition parallelism is employed, with the PETSc parallel framework used to manage parallel solvers, data structures and communication. Features of the code include a modular input file, implementation of high-performance I/O using parallel HDF5, ability to perform multiple realization simulations with multiple processors per realization in a seamless manner, and multiple modes for multiphase flow and multicomponent geochemical transport. Chemical reactions currently implemented in the code include homogeneous aqueous complexing reactions and heterogeneous mineral precipitation/dissolution, ion exchange, surface complexation and a multirate kinetic sorption model. PFLOTRAN has demonstrated petascale performance using 2^17 processor cores with over 2 billion degrees of freedom. Accomplishments achieved to date include applications to the Hanford 300 Area and modeling CO2 sequestration in deep geologic formations.
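
    For intuition about the class of problem such a code solves, consider a single advection-dispersion-decay step on a 1D grid with operator splitting (a toy explicit scheme of our own; PFLOTRAN itself uses implicit PETSc solvers over decomposed 3D domains):

      # Toy 1D advection-dispersion-reaction step, operator-split (not PFLOTRAN
      # code): upwind advection, central dispersion, then first-order decay.
      import numpy as np

      nx, dx, dt = 200, 1.0, 0.2     # cells [m], step [d]
      v, D, k = 1.0, 0.5, 0.01       # velocity [m/d], dispersion [m^2/d], decay [1/d]
      c = np.zeros(nx)
      c[0] = 1.0                     # fixed-concentration inlet

      for _ in range(500):
          adv = -v * (c[1:-1] - c[:-2]) / dx
          dif = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
          c[1:-1] += dt * (adv + dif)    # transport step
          c *= np.exp(-k * dt)           # reaction step, split from transport
          c[0] = 1.0                     # re-impose boundary

      print("plume front (c = 0.5) near cell:", int(np.argmax(c < 0.5)))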

  8. 75 FR 39166 - Safety Zone; San Francisco Giants Baseball Game Promotion, San Francisco, CA

    Science.gov (United States)

    2010-07-08

    ... San Francisco, CA. The fireworks display is meant for entertainment purposes. This safety zone is... The National Technology Transfer and Advancement Act (NTTAA) (15 U.S.C. 272 note) directs agencies to use... of the Instruction. This rule involves establishing, disestablishing, or changing Regulated...

  9. Demonstration of reliability centered maintenance

    International Nuclear Information System (INIS)

    Schwan, C.A.; Morgan, T.A.

    1991-04-01

    Reliability centered maintenance (RCM) is an approach to preventive maintenance planning and evaluation that has been used successfully by other industries, most notably the airlines and military. Now EPRI is demonstrating RCM in the commercial nuclear power industry. Just completed are large-scale, two-year demonstrations at Rochester Gas & Electric (Ginna Nuclear Power Station) and Southern California Edison (San Onofre Nuclear Generating Station). Both demonstrations were begun in the spring of 1988. At each plant, RCM was performed on 12 to 21 major systems. Both demonstrations determined that RCM is an appropriate means to optimize a PM program and improve nuclear plant preventive maintenance on a large scale. Such favorable results had been suggested by three earlier EPRI pilot studies at Florida Power & Light, Duke Power, and Southern California Edison. EPRI selected the Ginna and San Onofre sites because, together, they represent a broad range of utility and plant size, plant organization, plant age, and histories of availability and reliability. Significant steps in each demonstration included: selecting and prioritizing plant systems for RCM evaluation; performing the RCM evaluation steps on selected systems; evaluating the RCM recommendations by a multi-disciplinary task force; implementing the RCM recommendations; establishing a system to track and verify the RCM benefits; and establishing procedures to update the RCM bases and recommendations with time (a living program). 7 refs., 1 tab

  10. The GF11 supercomputer

    International Nuclear Information System (INIS)

    Beetem, J.; Weingarten, D.

    1986-01-01

    GF11 is a parallel computer currently under construction at the IBM Yorktown Research Center. The machine incorporates 576 floating-point processors arranged in a modified SIMD architecture. Each has space for 2 Mbytes of memory and is capable of 20 Mflops, giving the total machine a peak of 1.125 Gbytes of memory and 11.52 Gflops. The floating-point processors are interconnected by a dynamically reconfigurable non-blocking switching network. At each machine cycle any of 1024 pre-selected permutations of data can be realized among the processors. The main intended application of GF11 is a class of calculations arising from quantum chromodynamics

  11. The GF11 supercomputer

    International Nuclear Information System (INIS)

    Beetem, J.; Denneau, M.; Weingarten, D.

    1985-01-01

    GF11 is a parallel computer currently under construction at the IBM Yorktown Research Center. The machine incorporates 576 floating-point processors arranged in a modified SIMD architecture. Each has space for 2 Mbytes of memory and is capable of 20 Mflops, giving the total machine a peak of 1.125 Gbytes of memory and 11.52 Gflops. The floating-point processors are interconnected by a dynamically reconfigurable nonblocking switching network. At each machine cycle any of 1024 pre-selected permutations of data can be realized among the processors. The main intended application of GF11 is a class of calculations arising from quantum chromodynamics

  12. SANS studies of solutions and molecular composites prepared from cellulose tricarbanilate

    CERN Document Server

    Alava, C; Cameron, J D; Cowie, J M G; Vaqueiro, P; Möller, A; Triolo, A

    2002-01-01

    We report on SANS measurements carried out on the instrument SANS1 (V4) at the BENSC facility on solutions and composites of cellulose tricarbanilate (CTC). This cellulose derivative exhibits lyotropic behaviour in methylacrylate (MA). The SANS data indicate that in the isotropic liquid state (up to 25% wt CTC in MA) the CTC chains behave like rods of mass per unit length (M/L). In the liquid crystalline (LC) phase (at and above 35% wt CTC in MA), the Q dependence varies from Q^-1 to Q^-4, probably as a result of self-assembly of the CTC chains. The general aim of our work is to prepare molecular composites, i.e. miscible blends of rigid-rod and flexible-coil polymers, from CTC solutions in polymerizable media. To establish the degree of homogeneity of the composites, we performed SANS measurements on UV-cured CTC/MA solutions. Here, we compare the SANS data of CTC/monomer solutions with those of the corresponding composites. (orig.)

  13. 78 FR 35593 - Special Local Regulation; Christmas Boat Parade, San Juan Harbor; San Juan, PR

    Science.gov (United States)

    2013-06-13

    ... individually or cumulatively have a significant effect on the human environment. This proposed rule involves.... Pearson, Captain, U.S. Coast Guard, Captain of the Port San Juan. [FR Doc. 2013-13994 Filed 6-12-13; 8:45...

  14. 76 FR 6517 - San Luis & Rio Grande Railroad-Petition for a Declaratory Order

    Science.gov (United States)

    2011-02-04

    ... DEPARTMENT OF TRANSPORTATION Surface Transportation Board [Docket No. FD 35380] San Luis & Rio... petition filed by San Luis & Rio Grande Railroad (SLRG), the Board instituted a declaratory order... proposed operation of a truck-to-rail transload facility in Antonito, Colorado. See San Luis & Rio Grande R...

  15. Planning for the Mercy Center for Breast Health.

    Science.gov (United States)

    Olivares, V Ed

    2002-01-01

    During the last months of 2000, administrators at the Mercy San Juan Medical Center in Carmichael, Calif., convened a steering committee to plan the Mercy Center for Breast Health. The Steering Committee was composed of the director of ancillary and support services, the oncology clinical nurse specialist, the RN manager of the oncology nursing unit, the RN surgery center manager, and me, the manager of imaging services. The committee was responsible for creating a new business with five specific objectives: to position the Center as a comprehensive diagnostic and resource center for women; to generate physician referrals to the Breast Center through various vehicles; to create awareness of the Breast Center's capabilities among area radiologists; to create awareness of the Breast Center among employees of six sister facilities; to create "brand awareness" for the Mercy Center for Breast Health among referring physicians and patients who could use competing centers in the area. The Steering Committee's charter was to design a center with a feminine touch and ambience and to provide a "one-stop shopping" experience for patients. A major component of the Breast Center is the Dianne Haselwood Resource Center, which provides patients with educational support and information. The Steering Committee brought its diverse experience and interests to bear on arranging for equipment acquisition, information and clerical systems, staffing, clinic office design, patient care and marketing. Planning the Mercy Center for Breast Health has been a positive challenge that brought together many elements of the organization and people from different departments and specialties to create a new business venture. Our charge now is to grow and to live up to our vision of offering complete breast diagnostic, education and support services in one location.

  16. New evidence on the state of stress of the san andreas fault system.

    Science.gov (United States)

    Zoback, M D; Zoback, M L; Mount, V S; Suppe, J; Eaton, J P; Healy, J H; Oppenheimer, D; Reasenberg, P; Jones, L; Raleigh, C B; Wong, I G; Scotti, O; Wentworth, C

    1987-11-20

    Contemporary in situ tectonic stress indicators along the San Andreas fault system in central California show northeast-directed horizontal compression that is nearly perpendicular to the strike of the fault. Such compression explains recent uplift of the Coast Ranges and the numerous active reverse faults and folds that trend nearly parallel to the San Andreas and that are otherwise unexplainable in terms of strike-slip deformation. Fault-normal crustal compression in central California is proposed to result from the extremely low shear strength of the San Andreas and the slightly convergent relative motion between the Pacific and North American plates. Preliminary in situ stress data from the Cajon Pass scientific drill hole (located 3.6 kilometers northeast of the San Andreas in southern California near San Bernardino, California) are also consistent with a weak fault, as they show no right-lateral shear stress at approximately 2-kilometer depth on planes parallel to the San Andreas fault.

  17. NASA's Climate in a Box: Desktop Supercomputing for Open Scientific Model Development

    Science.gov (United States)

    Wojcik, G. S.; Seablom, M. S.; Lee, T. J.; McConaughy, G. R.; Syed, R.; Oloso, A.; Kemp, E. M.; Greenseid, J.; Smith, R.

    2009-12-01

    NASA's High Performance Computing Portfolio in cooperation with its Modeling, Analysis, and Prediction program intends to make its climate and earth science models more accessible to a larger community. A key goal of this effort is to open the model development and validation process to the scientific community at large such that a natural selection process is enabled and results in a more efficient scientific process. One obstacle to others using NASA models is the complexity of the models and the difficulty in learning how to use them. This situation applies not only to scientists who regularly use these models but also to non-typical users who may want to use the models, such as scientists from different domains, policy makers, and teachers. Another obstacle to the use of these models is that access to the high performance computing (HPC) systems on which the models run can be restrictive, with long wait times in job queues and delays caused by an arduous process of obtaining an account, especially for foreign nationals. This project explores the utility of using desktop supercomputers in providing a complete ready-to-use toolkit of climate research products to investigators and on-demand access to an HPC system. One objective of this work is to pre-package NASA and NOAA models so that new users will not have to spend significant time porting the models. In addition, the prepackaged toolkit will include tools, such as workflow, visualization, social networking web sites, and analysis tools, to assist users in running the models and analyzing the data. The system architecture to be developed will allow for automatic code updates for each user and an effective means with which to deal with data that are generated. We plan to investigate several desktop systems, but our work to date has focused on a Cray CX1. Currently, we are investigating the potential capabilities of several non-traditional development environments. While most NASA and NOAA models are

  18. 76 FR 41804 - Town Hall Discussion With the Director of the Center for Devices and Radiological Health and...

    Science.gov (United States)

    2011-07-15

    AGENCY: Food and Drug Administration, HHS. ACTION: Notice of public meeting; request for comments. The Food and Drug Administration (FDA) is announcing a public meeting entitled... Senior Center Management... two meetings: one in Dallas, TX, and one in Orlando, FL. The meeting in San Francisco will be our...

  19. Dal "San Marco" al "Vega". (English Title: From "San Marco" to Vega)

    Science.gov (United States)

    Savi, E.

    2017-10-01

    Apart from the two superpowers, Italy has had an important role in astronautics among the other countries. The roots of Italian astronautics' history run deep in the hottest years of the Cold War, and its first remarkable achievement was the San Marco project. After years of testing advanced technologies, Italy achieved European cooperation and built VEGA, the current Arianespace light launcher.

  20. Relocating San Miguel Volcanic Seismic Events for Receiver Functions and Tomographic Models

    Science.gov (United States)

    Patlan, E.; Velasco, A. A.; Konter, J.

    2009-12-01

    The San Miguel volcano lies near the city of San Miguel, El Salvador (13.43°N, 88.26°W). San Miguel volcano, an active stratovolcano, presents a significant natural hazard for the city of San Miguel. Furthermore, the internal state and activity of a volcano remain an important component of understanding volcanic hazard. The main technology for addressing volcanic hazards and processes is the analysis of data collected from the deployment of seismic sensors that record ground motion. Six UTEP seismic stations were deployed around San Miguel volcano from 2007-2008 to define the magma chamber and assess the seismic and volcanic hazard. We utilize these data to develop images of the earth structure beneath the volcano, studying the volcanic processes by identifying different sources and investigating the role of earthquakes and faults in controlling the volcanic processes. We will calculate receiver functions to determine the thickness of the internal structure of San Miguel volcano within the Caribbean plate. Crustal thicknesses will be modeled using receiver functions calculated from both theoretical and hand-picked P-wave arrivals. We will use this information derived from receiver functions, along with P-wave delay times, to map the location of the magma chamber.
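
    Receiver-function depth estimates ultimately reduce to the standard relation between the delay of the P-to-S converted phase and the thickness of the converting layer. A one-line implementation (the velocity, ray-parameter and delay values are hypothetical, not results from this deployment):

      # Crustal thickness from the Ps-P delay time, the standard receiver-
      # function relation (all numbers below are hypothetical examples).
      from math import sqrt

      def thickness_km(t_ps, vp=6.2, vs=3.5, p=0.06):
          # H = t_Ps / (sqrt(1/Vs^2 - p^2) - sqrt(1/Vp^2 - p^2))
          # t_Ps in s, velocities in km/s, ray parameter p in s/km
          return t_ps / (sqrt(1 / vs**2 - p**2) - sqrt(1 / vp**2 - p**2))

      print(f"thickness estimate: {thickness_km(4.2):.1f} km")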

  1. El círculo meridiano automático de San Fernando - San Juan. Sus primeros pasos en el hemisferio sur

    Science.gov (United States)

    Mallamaci, C. C.; Muiños, J. L.; Gallego, M.; Pérez, J. A.; Marmolejo, L.; Navarro, J. L.; Sedeño, J.; Vallejos, M.; Belizón, F.

    We report on the current status of the San Fernando-San Juan Automatic Meridian Circle. The instrument (a Grubb-Parsons of 178 mm aperture and 2665 mm focal length) is a twin of the one located in the Canary Islands, and was installed during July and August 1996 at the "Dr. C. U. Cesco" astronomical station (El Leoncito, Barreal), about 200 km from the city of San Juan, under a scientific cooperation agreement signed in 1994 between the ROA (Spain) and the OAFA (Argentina). A test program is currently under way, and its preliminary results show that the telescope is in good condition to observe stars down to approximately magnitude 14.5, with good observational errors (<0.12" in right ascension and declination).

  2. Tectonic Implications of Changes in the Paleogene Paleodrainage Network in the West-Central Part of the San Luis Basin, Northern Rio Grande Rift, New Mexico and Colorado, USA

    Science.gov (United States)

    Thompson, R. A.; Turner, K. J.; Cosca, M. A.; Drenth, B.

    2016-12-01

    The San Luis Basin is the largest of the extensional basins in the northern Rio Grande rift (>11,400 km²). The modern basin configuration is the result of Neogene deformation that has been the focus of numerous studies. In contrast, Paleogene extensional deformation is relatively little studied owing to a fragmentary or poorly exposed stratigraphic record in most areas. However, volcanic and volcaniclastic deposits exposed along the western margin of the basin provide the spatial and temporal framework for interpretation of paleodrainage patterns that changed in direct response to Oligocene basin subsidence and the migration of centers of Tertiary volcanism. The early Oligocene (34 to 30 Ma) drainage pattern that originated in the volcanic highlands of the San Juan Mountains flowed south into the northern Tusas Mountains. A structural and topographic high composed of Proterozoic rocks in the Tusas Mountains directed flow to the southeast at least as late as 29 Ma, as ash-flow tuffs sourced in the southeast San Juan Mountains are restricted to the north side of the paleohigh. Construction of volcanic highlands in the San Luis Hills between 30 and 28.5 Ma provided an abundant source of volcanic debris that combined with volcanic detritus sourced in the southeast San Juan Mountains and was deposited (Los Pinos Formation) throughout the northern Tusas Mountains progressively onlapping the paleotopographic high. By 29 Ma, subsidence of the Las Mesitas graben, a structural sub-basin, between the San Luis Hills and the southeast San Juan and northern Tusas Mountains is reflected by thick deposits of Los Pinos Formation beneath 26.5 Ma basalts. Regional tectonism responsible for the formation of the graben may have also lowered the topographic and structural high in the Tusas Mountains, which allowed development of a southwest-flowing paleodrainage that likely flowed onto the Colorado Plateau. Tholeiitic basalt flows erupted in the San Luis Hills at 25.8 Ma, that presently cap

  3. Geophysical Characterization of Groundwater-Fault Dynamics at San Andreas Oasis

    Science.gov (United States)

    Faherty, D.; Polet, J.; Osborn, S. G.

    2017-12-01

    The San Andreas Oasis has historically provided a reliable source of fresh water near the northeast margin of the Salton Sea, although since the recent completion of the Coachella Canal Lining Project and persistent drought in California, surface water at the site has begun to disappear. This may be an effect of the canal lining, however, the controls on groundwater are complicated by the presence of the Hidden Springs Fault (HSF), a northeast dipping normal fault that trends near the San Andreas Oasis. Its surface expression is apparent as a lineation against which all plant growth terminates, suggesting that it may form a partial barrier to subsurface groundwater flow. Numerous environmental studies have detailed the chemical evolution of waters resources at San Andreas Spring, although there remains a knowledge gap on the HSF and its relation to groundwater at the site. To better constrain flow paths and characterize groundwater-fault interactions, we have employed resistivity surveys near the surface trace of the HSF to generate profiles of lateral and depth-dependent variations in resistivity. The survey design is comprised of lines installed in Wenner Arrays, using an IRIS Syscal Kid, with 24 electrodes, at a maximum electrode spacing of 5 meters. In addition, we have gathered constraints on the geometry of the HSF using a combination of ground-based magnetic and gravity profiles, conducted with a GEM walking Proton Precession magnetometer and a Lacoste & Romberg gravimeter. Seventeen gravity measurements were acquired across the surface trace of the fault. Preliminary resistivity results depict a shallow conductor localized at the oasis and discontinuous across the HSF. Magnetic data reveal a large contrast in subsurface magnetic susceptibility that appears coincident with the surface trace and trend of the HSF, while gravity data suggests a shallow, relatively high density anomaly centered near the oasis. These data also hint at a second, previously
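
    The resistivity profiles come from converting each measured voltage/current ratio to an apparent resistivity through the geometric factor of the Wenner array, rho_a = 2*pi*a*V/I for electrode spacing a. A minimal sketch (the readings below are invented, not survey data):

      # Wenner-array apparent resistivity from field readings: rho_a = 2*pi*a*V/I
      # (larger spacings sample deeper; all readings here are hypothetical).
      from math import pi

      def wenner_apparent_resistivity(a, v, i):
          return 2 * pi * a * v / i      # a in m, v in V, i in A -> ohm-m

      for a, v, i in [(1.0, 0.42, 0.05), (2.5, 0.15, 0.05), (5.0, 0.06, 0.05)]:
          print(f"a = {a:>3} m   rho_a = {wenner_apparent_resistivity(a, v, i):7.1f} ohm-m")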

  4. Characterization of alumina using small angle neutron scattering (SANS)

    International Nuclear Information System (INIS)

    Megat Harun Al Rashidn Megat Ahmad; Abdul Aziz Mohamed; Azmi Ibrahim; Che Seman Mahmood; Edy Giri Rachman Putra; Muhammad Rawi Muhammad Zin; Razali Kassim; Rafhayudi Jamro

    2007-01-01

    Alumina powder was synthesized from an aluminium precursor and studied using the small angle neutron scattering (SANS) technique, complemented with transmission electron microscopy (TEM). XRD measurement confirmed that the alumina produced was high-purity, highly crystalline α-phase. The SANS examination indicates the formation of mass-fractal microstructures with a fractal dimension of about 2.8 for the alumina powder. (Author)

  5. Species - San Diego Co. [ds121

    Data.gov (United States)

    California Natural Resource Agency — This is the Biological Observation Database point layer representing baseline observations of sensitive species (as defined by the MSCP) throughout San Diego County....

  6. Lipid based drug delivery systems: Kinetics by SANS

    Science.gov (United States)

    Uhríková, D.; Teixeira, J.; Hubčík, L.; Búcsi, A.; Kondela, T.; Murugova, T.; Ivankov, O. I.

    2017-05-01

    N,N-dimethyldodecylamine-N-oxide (C12NO) is a surfactant that may exist either in a neutral or protonated form depending on the pH of aqueous solutions. Using small angle X-ray diffraction (SAXD) we demonstrate the structural responsivity of C12NO/dioleoylphosphatidylethanolamine (DOPE)/DNA complexes designed as pH-sensitive gene delivery vectors. Small angle neutron scattering (SANS) was employed to follow the kinetics of C12NO protonation and DNA binding into C12NO/DOPE/DNA complexes in a solution of 150 mM NaCl at acidic conditions. SANS data analyzed using a paracrystal lamellar model show the formation of complexes with stacking up to ∼32 bilayers, spacing ∼62 Å, and lipid bilayer thickness ∼37 Å within 3 minutes after changing pH from 7 to 4. Subsequent structural reorganization of the complexes was observed over 90 minutes of SANS measurements.

  7. Lipid based drug delivery systems: Kinetics by SANS

    International Nuclear Information System (INIS)

    Uhríková, D; Hubčík, L; Búcsi, A; Kondela, T; Teixeira, J; Murugova, T; Ivankov, O I

    2017-01-01

    N,N-dimethyldodecylamine-N-oxide (C12NO) is a surfactant that may exist either in a neutral or protonated form depending on the pH of aqueous solutions. Using small angle X-ray diffraction (SAXD) we demonstrate the structural responsivity of C12NO/dioleoylphosphatidylethanolamine (DOPE)/DNA complexes designed as pH-sensitive gene delivery vectors. Small angle neutron scattering (SANS) was employed to follow the kinetics of C12NO protonation and DNA binding into C12NO/DOPE/DNA complexes in a solution of 150 mM NaCl at acidic conditions. SANS data analyzed using a paracrystal lamellar model show the formation of complexes with stacking up to ∼32 bilayers, spacing ∼62 Å, and lipid bilayer thickness ∼37 Å within 3 minutes after changing pH from 7 to 4. Subsequent structural reorganization of the complexes was observed over 90 minutes of SANS measurements. (paper)
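
    The paracrystal lamellar fit treats the complex as a finite stack of N bilayers with mean spacing d and Gaussian disorder in the layer-to-layer distance; the Bragg peak position then gives the ∼62 Å spacing quoted above. A simplified structure factor of that general form (our toy version, not the study's fitting code; N and d follow the abstract, the disorder value is invented):

      # Toy 1D paracrystal structure factor for an N-bilayer stack with mean
      # spacing d and Gaussian spacing disorder sigma (simplified illustration).
      import numpy as np

      def stack_structure_factor(q, n_layers=32, d=62.0, sigma=3.0):
          g = np.exp(-0.5 * q**2 * sigma**2)       # nearest-neighbour disorder
          s = np.ones_like(q)
          for k in range(1, n_layers):
              s += (2.0 * (n_layers - k) / n_layers) * g**k * np.cos(k * q * d)
          return s                                  # S(q)/N

      q = np.linspace(0.01, 0.3, 1000)              # 1/Angstrom
      q_peak = q[np.argmax(stack_structure_factor(q))]
      print(f"Bragg peak at q = {q_peak:.3f} 1/A -> d = {2*np.pi/q_peak:.1f} A")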

  8. The GF11 supercomputer

    International Nuclear Information System (INIS)

    Beetem, J.; Denneau, M.; Weingarten, D.

    1985-01-01

    GF11 is a parallel computer currently under construction at the Yorktown Research Center. The machine incorporates 576 floating-point processors arranged in a modified SIMD architecture. Each processor has space for 2 Mbytes of memory and is capable of 20 MFLOPS, giving the total machine a peak of 1.125 Gbytes of memory and 11.52 GFLOPS. The floating-point processors are interconnected by a dynamically reconfigurable non-blocking switching network. At each machine cycle any of 1024 pre-selected permutations of data can be realized among the processors. The main intended application of GF11 is a class of calculations arising from quantum chromodynamics, a proposed theory of the elementary particles which participate in nuclear interactions

  9. Geologic Map of the San Luis Quadrangle, Costilla County, Colorado

    Science.gov (United States)

    Machette, Michael N.; Thompson, Ren A.; Drenth, Benjamin J.

    2008-01-01

    The map area includes San Luis and the primarily rural surrounding area. San Luis, the county seat of Costilla County, is the oldest surviving settlement in Colorado (1851). West of the town are San Pedro and San Luis mesas (basalt-covered tablelands), which are horsts with the San Luis fault zone to the east and the southern Sangre de Cristo fault zone to the west. The map also includes the Sanchez graben (part of the larger Culebra graben), a deep structural basin that lies between the San Luis fault zone (on the west) and the central Sangre de Cristo fault zone (on the east). The oldest rocks exposed in the map area are the Pliocene to upper Oligocene basin-fill sediments of the Santa Fe Group, and Pliocene Servilleta Basalt, a regional series of 3.7-4.8 Ma old flood basalts. Landslide deposits and colluvium that rest on sediments of the Santa Fe Group cover the steep margins of the mesas. Rare exposures of the sediment are comprised of siltstones, sandstones, and minor fluvial conglomerates. Most of the low ground surrounding the mesas and in the graben is covered by surficial deposits of Quaternary age. The alluvial deposits are subdivided into three Pleistocene-age units and three Holocene-age units. The oldest Pleistocene gravel (unit Qao) forms extensive coalesced alluvial fan and piedmont surfaces, the largest of which is known as the Costilla Plain. This surface extends west from San Pedro Mesa to the Rio Grande. The primary geologic hazards in the map area are from earthquakes, landslides, and localized flooding. There are three major fault zones in the area (as discussed above), and they all show evidence for late Pleistocene to possible Holocene movement. The landslides may have seismogenic origins; that is, they may be stimulated by strong ground shaking during large earthquakes. Machette and Thompson based this geologic map entirely on new mapping, whereas Drenth supplied geophysical data and interpretations.

  10. 33 CFR 165.758 - Security Zone; San Juan, Puerto Rico.

    Science.gov (United States)

    2010-07-01

    Navigation and Navigable Waters; COAST GUARD, DEPARTMENT OF HOMELAND SECURITY; § 165.758 Security Zone; San Juan, Puerto Rico. (a) Location. Moving and fixed security zones are established 50...

  11. Resistance Management for San Jose Scale (Hemiptera: Diaspididae).

    Science.gov (United States)

    Buzzetti, K; Chorbadjian, R A; Nauen, R

    2015-12-01

    The San Jose scale Diaspidiotus perniciosus Comstock is one of the most important pests of deciduous fruit trees. The major cause of recent outbreaks in apple orchards is thought to be the development of insecticide resistance, specifically to organophosphates; resistance was first reported in North America and has now been reported in Chile. In the present study, San Jose scale populations collected from two central regions of Chile were checked for their susceptibility to insecticides with different modes of action in order to establish alternatives to manage this pest. No evidence of cross-resistance between organophosphate insecticides and acetamiprid, buprofezin, pyriproxyfen, spirotetramat, sulfoxaflor, or thiacloprid was found. LC50-LC95 baselines for different life stages of San Jose scale are given as a reference for future resistance-monitoring studies. The systemic activity of acetamiprid, spirotetramat, and thiacloprid was higher than the contact residual effect of these compounds; for sulfoxaflor, both values were similar. Treatment programs including one or more of these compounds are compared in terms of efficacy and impact on resistance ratio values. In order to preserve new insecticides as an important tool to control San Jose scale, resistance management programs should be implemented, considering alternation or mixing of insecticide mode-of-action classes.

  12. A new record for American Bullfrog (Lithobates catesbeianus) in San Juan, Argentina / Nuevo registro de rana toro americana (Lithobates catesbeianus) en San Juan, Argentina

    Directory of Open Access Journals (Sweden)

    Eduardo Sanabria

    2011-03-01

    We report a new record of Lithobates catesbeianus (American bullfrog) from Argentina. L. catesbeianus was first introduced to San Juan Province 11 years ago in Calingasta Department, where the habitat is pre-cordilleran. The new record is for Zonda Department, San Juan Province, in the Monte desert region. Here, L. catesbeianus uses artificial ponds for reproduction and tadpole development. These ponds receive water from an irrigation system that connects all the agricultural land in the region, and the tadpoles use the irrigation canals to move among ponds. We suggest that legislation should be established to prevent future invasions and to achieve sustainable management of the wild American bullfrog populations in San Juan. Prevention of future invasion and management of established populations of this species requires the cooperation of numerous stakeholders.

  13. Structure of the 1906 near-surface rupture zone of the San Andreas Fault, San Francisco Peninsula segment, near Woodside, California

    Science.gov (United States)

    Rosa, C.M.; Catchings, R.D.; Rymer, M.J.; Grove, Karen; Goldman, M.R.

    2016-07-08

    High-resolution seismic-reflection and refraction images of the 1906 surface rupture zone of the San Andreas Fault near Woodside, California reveal evidence for one or more additional near-surface (within about 3 meters [m] depth) fault strands within about 25 m of the 1906 surface rupture. The 1906 surface rupture above the groundwater table (vadose zone) has been observed in paleoseismic trenches that coincide with our seismic profile and is seismically characterized by a discrete zone of low P-wave velocities (Vp), low S-wave velocities (Vs), high Vp/Vs ratios, and high Poisson’s ratios. A second near-surface fault strand, located about 17 m to the southwest of the 1906 surface rupture, is inferred by similar seismic anomalies. Between these two near-surface fault strands and below 5 m depth, we observed a near-vertical fault strand characterized by a zone of high Vp, low Vs, high Vp/Vs ratios, and high Poisson’s ratios on refraction tomography images and near-vertical diffractions on seismic-reflection images. This prominent subsurface zone of seismic anomalies is laterally offset from the 1906 surface rupture by about 8 m and likely represents the active main (long-term) strand of the San Andreas Fault at 5 to 10 m depth. Geometries of the near-surface and subsurface (about 5 to 10 m depth) fault zone suggest that the 1906 surface rupture dips southwestward to join the main strand of the San Andreas Fault at about 5 to 10 m below the surface. The 1906 surface rupture forms a prominent groundwater barrier in the upper 3 to 5 m, but our interpreted secondary near-surface fault strand to the southwest forms a weaker barrier, suggesting that there has been less or less-recent near-surface slip on that strand. At about 6 m depth, the main strand of the San Andreas Fault consists of water-saturated blue clay (collected from a hand-augered borehole), which is similar to deeply weathered serpentinite observed within the main strand of the San Andreas Fault at
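
    The Vp/Vs and Poisson's ratio anomalies used to pick the fault strands are related by a standard elastic identity, so either can be computed from the other. For reference (the example velocities are hypothetical, not values from the profile):

      # Poisson's ratio from P- and S-wave velocities:
      # nu = (Vp^2 - 2 Vs^2) / (2 (Vp^2 - Vs^2)); a high Vp/Vs ratio
      # implies a high Poisson's ratio, as in water-saturated clay gouge.
      def poissons_ratio(vp, vs):
          return (vp**2 - 2 * vs**2) / (2 * (vp**2 - vs**2))

      print(round(poissons_ratio(vp=1800.0, vs=500.0), 3))   # ~0.46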

  14. COMPUTATIONAL SCIENCE CENTER

    Energy Technology Data Exchange (ETDEWEB)

    DAVENPORT, J.

    2006-11-01

    Computational Science is an integral component of Brookhaven's multi-science mission, and is a reflection of the increased role of computation across all of science. Brookhaven currently has major efforts in data storage and analysis for the Relativistic Heavy Ion Collider (RHIC) and the ATLAS detector at CERN, and in quantum chromodynamics. The Laboratory is host for the QCDOC machines (quantum chromodynamics on a chip), 10 teraflop/s computers which boast 12,288 processors each. There are two here, one for the Riken/BNL Research Center and the other supported by DOE for the US Lattice Gauge Community and other scientific users. A 100 teraflop/s supercomputer will be installed at Brookhaven in the coming year, managed jointly by Brookhaven and Stony Brook, and funded by a grant from New York State. This machine will be used for computational science across Brookhaven's entire research program, and also by researchers at Stony Brook and across New York State. With Stony Brook, Brookhaven has formed the New York Center for Computational Science (NYCCS) as a focal point for interdisciplinary computational science, which is closely linked to Brookhaven's Computational Science Center (CSC). The CSC has established a strong program in computational science, with an emphasis on nanoscale electronic structure and molecular dynamics, accelerator design, computational fluid dynamics, medical imaging, parallel computing and numerical algorithms. We have been an active participant in DOE's SciDAC program (Scientific Discovery through Advanced Computing). We are also planning a major expansion in computational biology in keeping with Laboratory initiatives. Additional laboratory initiatives with a dependence on a high level of computation include the development of hydrodynamics models for the interpretation of RHIC data, computational models for the atmospheric transport of aerosols, and models for combustion and for energy utilization. The CSC was formed to

  15. New fault picture points toward San Francisco Bay area earthquakes

    Science.gov (United States)

    Kerr, R. A.

    1989-01-01

    Recent earthquakes and a new way of looking at faults suggest that damaging earthquakes are closing in on the San Francisco area. Earthquake Awareness Week 1989 in northern California started off with a bang on Monday, 3 April, when a magnitude 4.8 earthquake struck 15 kilometers northeast of San Jose. The relatively small shock, whose primary damage was the shattering of an air-control tower window, got the immediate attention of three U.S. Geological Survey seismologists in Menlo Park near San Francisco. David Oppenheimer, William Bakun, and Allan Lindh had forecast a nearby earthquake in a just-completed report, and this, they thought, might be it.

  16. The San values of conflict prevention and avoidance in Platfontein

    Directory of Open Access Journals (Sweden)

    Nina Mollema

    2017-09-01

    The aim of this article is to identify measures that can prevent violent conflict through the maintenance of traditional cultural values that guide conflict avoidance. The article focuses on the concepts of conflict prevention and conflict avoidance as applied by the San community of Platfontein, and examines the causes of the inter-communal tensions between San community members. A selected conflict situation, that of superstition and witchcraft, is assessed as a factor increasing interpersonal conflict in the Platfontein community. This investigation is made to determine whether the San preventive measures have an impact in the community, so as to prevent ongoing conflicts from escalating further.

  17. El castillo de San Romualdo (San Fernando, Cádiz. Aproximación estratigráfica y evolución constructiva

    Directory of Open Access Journals (Sweden)

    Utrera Burgal, Raquel M.

    2009-12-01

    This article presents the results of the archaeological research carried out in the castle of San Romualdo, the most emblematic building of the city of San Fernando and a defensive fortress tied to the control of the access to Cádiz. The stratigraphic analysis of the standing structure has made it possible to trace the building's evolution from its origins to the present. Studies to date confirm the chronological conclusions already proposed in 2003: the current castle is a medieval Christian building erected in the second half of the 13th century by Mudejar workers, using materials reused from a previous building.

  18. The San Diego Panasonic Partnership: A Case Study in Restructuring.

    Science.gov (United States)

    Holzman, Michael; Tewel, Kenneth J.

    1992-01-01

    The Panasonic Foundation provides resources for restructuring school districts. The article examines its partnership with the San Diego City School District, highlighting four schools that demonstrate promising practices and guiding principles. It describes recent partnership work on systemic issues, noting the next steps to be taken in San Diego.

  19. Characterizing the Organic Matter in Surface Sediments from the San Juan Bay Estuary,

    Science.gov (United States)

    The San Juan Bay Estuary (SJBE) is located on the north coast of Puerto Rico and includes the San Juan Bay, San José Lagoon, La Torrecilla Lagoon and Piñones Lagoon, as well as the Martín Peña and the Suárez Canals. The SJBE watershed has the highest...

  20. Cuartel San Carlos. Yacimiento veterano

    Directory of Open Access Journals (Sweden)

    Mariana Flores

    2007-01-01

    The Cuartel San Carlos is a national historic monument (designated 1986) dating from the late eighteenth century (1785-1790), whose construction suffered various setbacks and which withstood the earthquakes of 1812 and 1900. In 2006, the institution charged with its custody, the Institute of Cultural Heritage of the Ministry of Culture, carried out three stages of archaeological exploration covering the back courtyard, the central courtyard, and the east and west wings of the building. This paper reviews the analysis of the archaeological documentation obtained at the site through that project, called EACUSAC (Archaeological Study of the Cuartel San Carlos), which also represents the third campaign conducted at the site. The importance of this historic site lies in its role in the events that gave rise to power struggles during the emergence of the Republic and in the political events of the twentieth century. The site also yielded a broad sample of archaeological materials documenting everyday military life, as well as the internal social dynamics that took place at the San Carlos as a strategic place for the defense of the different regimes the country passed through, from the era of Spanish imperialism to the present day.

  1. San Francisco urban partnership agreement, national evaluation : exogenous factors test plan.

    Science.gov (United States)

    2011-06-01

    This report presents the test plan for collecting and analyzing exogenous factors data for the San Francisco Urban : Partnership Agreement (UPA) under the United States Department of Transportation (U.S. DOT) UPA Program. : The San Francisco UPA proj...

  2. EX1103: Exploration and Mapping, Galapagos Spreading Center: Mapping, CTD, Tow-Yo, and ROV on NOAA Ship Okeanos Explorer between 20110608 and 20110728

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This cruise will be composed of two separate legs. The first leg will be a transit from San Diego, CA to the Galapagos Spreading Center, where multibeam mapping, CTD...

  3. Data Files for Ground-Motion Simulations of the 1906 San Francisco Earthquake and Scenario Earthquakes on the Northern San Andreas Fault

    Science.gov (United States)

    Aagaard, Brad T.; Barall, Michael; Brocher, Thomas M.; Dolenc, David; Dreger, Douglas; Graves, Robert W.; Harmsen, Stephen; Hartzell, Stephen; Larsen, Shawn; McCandless, Kathleen; Nilsson, Stefan; Petersson, N. Anders; Rodgers, Arthur; Sjogreen, Bjorn; Zoback, Mary Lou

    2009-01-01

    This data set contains results from ground-motion simulations of the 1906 San Francisco earthquake, seven hypothetical earthquakes on the northern San Andreas Fault, and the 1989 Loma Prieta earthquake. The bulk of the data consists of synthetic velocity time-histories. Peak ground velocity on a 1/60th degree grid and geodetic displacements from the simulations are also included. Details of the ground-motion simulations and analysis of the results are discussed in Aagaard and others (2008a,b).

  4. San Francisco Bay Water Quality Improvement Fund Map Service, San Francisco CA, 2012, US EPA Region 9

    Data.gov (United States)

    U.S. Environmental Protection Agency — The San Francisco Bay Water Quality Improvement Fund is a competitive grant program that is helping implement TMDLs to improve water quality, protect wetlands, and...

  5. San Francisco Bay Water Quality Improvement Fund Project Locations, San Francisco CA, 2017, US EPA Region 9

    Data.gov (United States)

    U.S. Environmental Protection Agency — The San Francisco Bay Water Quality Improvement Fund is a competitive grant program that is helping implement TMDLs to improve water quality, protect wetlands, and...

  6. Rating the quality of the landscape of Sierra de las Quijadas National Park, Province of San Luis, Argentina

    International Nuclear Information System (INIS)

    Maero, I.; Rivarola, D.; Tognelli, G.

    2007-01-01

    Sierra de las Quijadas National Park is located 120 km to the northwest of San Luis, Argentina. The study area covers 24,000 hectares, corresponding to 32% of the park's total surface; it encompasses the whole of the Potrero de la Aguada and the adjacent zones, and was selected because it is currently the most heavily visited part of the park. The objective of this work is to obtain the Total Quality of the Landscape, weighing the demand for scenic beauty against the other natural resources, in order to make proposals to improve the Management Plan carried out by the National Parks Administration. The methodology used is that described by Cendrero et al. (1987): an indirect valuation carried out through the components of the landscape, which allows the Intrinsic Visual Quality and the Fragility of each of the Environmental Units into which the park is divided to be determined. This analysis identified two classes of Total Landscape Quality, which were mapped using aerial photographs and GIS, with field control. This research was developed within the research project Geology of the Neogene and Quaternary of the Sierra de San Luis, Faculty of Physical, Mathematical and Natural Sciences, National University of San Luis, Argentina. (author)

  7. An overview of San Francisco Bay PORTS

    Science.gov (United States)

    Cheng, Ralph T.; McKinnie, David; English, Chad; Smith, Richard E.

    1998-01-01

    The Physical Oceanographic Real-Time System (PORTS) provides observations of tides, tidal currents, and meteorological conditions in real time. The San Francisco Bay PORTS (SFPORTS) is a decision support system to facilitate safe and efficient maritime commerce. In addition to real-time observations, SFPORTS includes a nowcast numerical model, forming a San Francisco Bay marine nowcast system. SFPORTS data and nowcast model results are made available to users through the World Wide Web (WWW). A brief overview of SFPORTS is presented, from the data flow originating at instrument sensors to the final results delivered to end users on the WWW. A user-friendly interface for SFPORTS has been designed and implemented. Appropriate field-data analysis, nowcast procedures, and the design and generation of graphics for WWW display of field data and nowcast results are presented and discussed. Furthermore, SFPORTS is designed to support hazardous-materials spill prevention and response, and to serve as a resource for scientists studying the health of the San Francisco Bay ecosystem. The success (or failure) of SFPORTS in serving the intended user community is determined by the effectiveness of the user interface.

  8. Evaluating the quality of life of people with profound and multiple disabilities: Use of the San Martín Scale at the Obra San Martín Foundation

    Directory of Open Access Journals (Sweden)

    Irene Hierro Zorrilla

    2015-06-01

    The San Martín Scale is an instrument for measuring the quality of life of people with significant disabilities, with adequate levels of reliability and validity. In 2012, the San Martín Scale was administered to 85 adults with intellectual disabilities who were provided supports at the Obra San Martín Foundation (Santander). In this article, we describe the results obtained at the mesosystem level, an example at the microsystem level, and future areas of work identified from the results.

  9. Think globally, act locally, and collaborate internationally: global health sciences at the University of California, San Francisco.

    Science.gov (United States)

    Macfarlane, Sarah B; Agabian, Nina; Novotny, Thomas E; Rutherford, George W; Stewart, Christopher C; Debas, Haile T

    2008-02-01

    The University of California, San Francisco (UCSF) established Global Health Sciences (GHS) as a campus-wide initiative in 2003. The mission of GHS is to facilitate UCSF's engagement in global health across its four schools by (1) creating a supportive environment that promotes UCSF's leadership role in global health, (2) providing education and training in global health, (3) convening and coordinating global health research activities, (4) establishing global health outreach programs locally in San Francisco and California, (5) partnering with academic centers, especially less-well-resourced institutions in low- and middle-income countries, and (6) developing and collaborating in international initiatives that address neglected global health issues. GHS education programs include a master of science (MS) program expected to start in September 2008, an introduction to global health for UCSF residents, and a year of training at UCSF for MS and PhD students from low- and middle-income countries that is "sandwiched" between years in their own education program and results in a UCSF Sandwich Certificate. GHS's work with partner institutions in California has a preliminary focus on migration and health, and its work with academic centers in low- and middle-income countries focuses primarily on academic partnerships to train human resources for health. Recognizing that the existing academic structure at UCSF may be inadequate to address the complexity of global health threats in the 21st century, GHS is working with the nine other campuses of the University of California to develop a university-wide transdisciplinary initiative in global health.

  10. AMS San Diego Testbed - Calibration Data

    Data.gov (United States)

    Department of Transportation — The data in this repository were collected from the San Diego, California testbed, namely, I-15 from the interchange with SR-78 in the north to the interchange with...

  11. Hydrologic data from wells at or in the vicinity of the San Juan coal mine, San Juan County, New Mexico

    Science.gov (United States)

    Stewart, Anne M.; Thomas, Nicole

    2015-01-01

    In 2010, in cooperation with the Mining and Minerals Division (MMD) of the State of New Mexico Energy, Minerals and Natural Resources Department, the U.S. Geological Survey (USGS) initiated a 4-year assessment of hydrologic conditions at the San Juan coal mine (SJCM), located about 14 miles west-northwest of the city of Farmington, San Juan County, New Mexico. The mine produces coal for power generation at the adjacent San Juan Generating Station (SJGS) and stores coal-combustion byproducts from the SJGS in mined-out surface-mining pits. The purpose of the hydrologic assessment is to identify groundwater flow paths away from SJCM coal-combustion-byproduct storage sites that might allow metals that may be leached from coal-combustion byproducts to eventually reach wells or streams after regional dewatering ceases and groundwater recovers to predevelopment levels. The hydrologic assessment, undertaken between 2010 and 2013, included compilation of existing data. The purpose of this report is to present data that were acquired and compiled by the USGS for the SJCM hydrologic assessment.

  12. High Performance Computing in Science and Engineering '16 : Transactions of the High Performance Computing Center, Stuttgart (HLRS) 2016

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2016. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  13. When it happens again: impact of future San Francisco Bay area earthquakes

    Science.gov (United States)

    Zoback, M.; Boatwright, J.; Kornfield, L.; Scawthorn, C.; Rojahn, C.

    2005-12-01

    San Francisco Bay area earthquakes, like major floods and hurricanes, have the potential for massive damage to dense urban population centers concentrated in vulnerable zones: along active faults, in coastal regions, and along major river arteries. The recent destruction of Hurricane Katrina has precedent in the destruction following the 1906 "San Francisco" earthquake and fire, in which more than 3,000 people were killed and 225,000 were left homeless in San Francisco alone, a city of 400,000 at the time. Analysis of a comprehensive set of damage reports from the magnitude (M) 7.9 1906 earthquake indicates a region of ~18,000 km2 was subjected to shaking of Modified Mercalli Intensity VIII or more, motions capable of damaging even modern, well-built structures; more than 60,000 km2 was subjected to shaking of Intensity VII or greater, the threshold for damage to masonry and poorly designed structures. By comparison, Katrina's hurricane-force winds and intense rainfall impacted an area of ~100,000 km2 on the Gulf Coast. Thus, the anticipated effects of a future major Bay Area quake on lives, property, and infrastructure are comparable in scale to Katrina. Secondary hazards (levee failure and flooding in the case of Katrina, and fire following the 1906 earthquake) greatly compounded the devastation in both disasters. A recent USGS-led study concluded there is a 62% chance of one or more damaging (M6.7 or greater) earthquakes striking the greater San Francisco Bay area over the next 30 years. The USGS prepared HAZUS loss estimates for the 10 most likely forecast earthquakes, which range in size from a M6.7 event on a blind thrust to the largest anticipated event, a M7.9 repeat of the 1906 earthquake. The largest economic loss is expected for a repeat of the 1906 quake. Losses in the Bay region for this event are nearly double those predicted for a M6.9 rupture of the entire Hayward Fault in the East Bay. However, because of high density of population along the

  14. San Telmo, backpackers y otras globalizaciones

    Directory of Open Access Journals (Sweden)

    Fernando Firmo

    2015-12-01

    This article aims to contribute to the debate on other forms of globalization by presenting an ethnography, conducted in the neighborhood of San Telmo, of backpackers who combine travel and work in their experiences. Their objective is to travel while earning the capital needed to keep moving around the globe. In this text I discuss these genuine actors of grassroots globalization, who put the focus on alternative, non-hegemonic processes and agents and who, in this case, carry out their activity in the context of the backpacker experience in San Telmo; my intention is to enrich reflections on globalization from below.

  15. Low strength of deep San Andreas fault gouge from SAFOD core.

    Science.gov (United States)

    Lockner, David A; Morrow, Carolyn; Moore, Diane; Hickman, Stephen

    2011-04-07

    The San Andreas fault accommodates 28-34 mm yr(-1) of right lateral motion of the Pacific crustal plate northwestward past the North American plate. In California, the fault is composed of two distinct locked segments that have produced great earthquakes in historical times, separated by a 150-km-long creeping zone. The San Andreas Fault Observatory at Depth (SAFOD) is a scientific borehole located northwest of Parkfield, California, near the southern end of the creeping zone. Core was recovered from across the actively deforming San Andreas fault at a vertical depth of 2.7 km (ref. 1). Here we report laboratory strength measurements of these fault core materials at in situ conditions, demonstrating that at this locality and this depth the San Andreas fault is profoundly weak (coefficient of friction, 0.15) owing to the presence of the smectite clay mineral saponite, which is one of the weakest phyllosilicates known. This Mg-rich clay is the low-temperature product of metasomatic reactions between the quartzofeldspathic wall rocks and serpentinite blocks in the fault. These findings provide strong evidence that deformation of the mechanically unusual creeping portions of the San Andreas fault system is controlled by the presence of weak minerals rather than by high fluid pressure or other proposed mechanisms. The combination of these measurements of fault core strength with borehole observations yields a self-consistent picture of the stress state of the San Andreas fault at the SAFOD site, in which the fault is intrinsically weak in an otherwise strong crust.
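
    The quoted friction coefficient translates into fault shear strength through the Coulomb relation τ = μ(σn − Pp); below is a minimal sketch of that arithmetic. The normal-stress and pore-pressure numbers are assumptions for illustration, not values from the paper.

```python
def coulomb_shear_strength(mu, sigma_n_mpa, pore_pressure_mpa):
    """Shear strength tau = mu * (sigma_n - Pp), all stresses in MPa."""
    return mu * (sigma_n_mpa - pore_pressure_mpa)

# Hypothetical effective-stress state at a few km depth (not from the paper):
sigma_n, pp = 70.0, 27.0
print(coulomb_shear_strength(0.15, sigma_n, pp))  # weak SAFOD gouge: ~6.5 MPa
print(coulomb_shear_strength(0.60, sigma_n, pp))  # Byerlee-type friction: ~25.8 MPa
```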

  16. Small angle neutron scattering (SANS) under non-equilibrium conditions

    International Nuclear Information System (INIS)

    Oberthur, R.C.

    1984-01-01

    The use of small angle neutron scattering (SANS) for the study of systems under non-equilibrium conditions is illustrated by three types of experiments in the field of polymer research: - the relaxation of a system from an initial non-equilibrium state towards equilibrium, - the cyclic or repetitive installation of a series of non-equilibrium states in a system, - the steady non-equilibrium state maintained by a constant dissipation of energy within the system. Characteristic times obtained in these experiments with SANS are compared with the times obtained from quasi-elastic neutron and light scattering, which yield information about the equilibrium dynamics of the system. The limits of SANS applied to non-equilibrium systems for the measurement of relaxation times at different length scales are shown and compared to the limits of quasielastic neutron and light scattering

  17. San Jacinto Tries Management by Objectives

    Science.gov (United States)

    Deegan, William

    1974-01-01

    San Jacinto, California, has adopted a measurable institutional objectives approach to management by objectives. Results reflect not only improved cost-effectiveness of community college education but also more effective educational programs for students. (Author/WM)

  18. Solar-energy-system performance evaluation. San Anselmo School, San Jose, California, April 1981-March 1982

    Energy Technology Data Exchange (ETDEWEB)

    Pakkala, P.A.

    1982-01-01

    The San Anselmo School is a one-story brick elementary school building in San Jose, California. The active solar energy system is designed to supply 70% of the space heating load and 72% of the cooling load. It is equipped with 3,740 square feet of evacuated tube collectors, a 2,175-gallon tank for heat storage, a solar-supplied absorption chiller, and four auxiliary gas-fired absorption chillers/heaters. The measured solar fraction of 19% is far below the expected values and is attributed to severe system control and HVAC problems. Other performance data given for the year include the solar savings ratio, conventional fuel savings, system performance factor, and solar system coefficient of performance. Also tabulated are monthly performance data for the overall solar energy system, the collector subsystem, and the space heating and cooling subsystems. Typical hourly operation data for a day are tabulated, including hourly insolation, collector array temperatures (inlet and outlet), and storage fluid temperatures. The solar energy use and percentage of losses are also graphed. (LEW)
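
    The solar fraction reported in evaluations like this one (and the following record) is simply the share of the thermal load met by solar energy; below is a minimal sketch of that bookkeeping. The monthly figures are hypothetical, not data from the San Anselmo report.

```python
def solar_fraction(solar_delivered, total_load):
    """Fraction of the thermal load met by solar energy (same units)."""
    return solar_delivered / total_load

# Hypothetical monthly energy totals (million Btu): (solar delivered, total load)
months = {"Jan": (14.0, 90.0), "Feb": (12.5, 70.0), "Mar": (16.0, 60.0)}
for month, (solar, load) in months.items():
    print(f"{month}: solar fraction = {solar_fraction(solar, load):.0%}")
```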

  19. Solar-energy system performance evaluation. San Anselmo School, San Jose, California, July 1980-March 1981

    Energy Technology Data Exchange (ETDEWEB)

    Pakkala, P.A.

    1981-01-01

    The San Anselmo School is a one-story, brick elementary school building located in San Jose, California. The active solar energy system is designed to supply 70% of the heating load and 72% of the cooling load. It is equipped with 3,740 square feet of evacuated tube collectors, a 2,175-gallon tank for storage, four auxiliary gas-fired absorption chiller/heaters, and a solar-supplied absorption chiller. The measured heating and cooling solar fractions were 9% and 19%, respectively, for an overall solar fraction of 16%, the lowered performance being attributed to severe system control problems. Performance data include the solar savings ratio, conventional fuel savings, system performance factor, and solar system coefficient of performance. Performance data are presented for the overall system and for each subsystem. System operation and solar energy utilization data are included. Also included are a description of the system, performance evaluation techniques, sensor technology, and typical performance data for a month. Weather data are also tabulated. (LEW)

  20. Como güelfos y gibelinos: los colegios de San Bernardo y San Antonio Abad en el Cuzco durante el siglo XVII

    Directory of Open Access Journals (Sweden)

    Guibovich Pérez, Pedro M

    2006-04-01

    This article deals with the conflicts between the colleges of San Bernardo and San Antonio throughout the seventeenth century. The author proposes a new approach to the social history of colonial Cuzco, maintaining that the root of the confrontation lay in the privileges the Jesuits enjoyed in granting academic degrees, a basic requirement for obtaining appointments in the civil and ecclesiastical administration. To explain this social dynamic, he reconstructs the institutional history of both colleges and reveals the interests defended by the main actors of this long-running conflict.

  1. April 1906 San Francisco, USA Images

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The 1906 San Francisco earthquake was the largest event (magnitude 8.3) to occur in the conterminous United States in the 20th Century. Recent estimates indicate...

  2. SANS from interpenetrating polymer networks

    International Nuclear Information System (INIS)

    Markotsis, M.G.; Burford, R.P.; Knott, R.B. (Australian Nuclear Science and Technology Organisation, Menai, NSW); Hanley, T.L. (CRC for Polymers; Australian Nuclear Science and Technology Organisation, Menai, NSW); Papamanuel, N.

    2003-01-01

    Interpenetrating polymer networks (IPNs) have been formed by combining two polymeric systems in order to gain enhanced material properties. IPNs are a combination of two or more polymers in network form, with one network polymerised and/or crosslinked in the immediate presence of the other(s). IPNs allow better blending of two or more crosslinked networks. In this study, two sets of IPNs were produced and their microstructure studied using a variety of techniques, including small angle neutron scattering (SANS). The first system combined a glassy polymer (polystyrene) with an elastomeric polymer (SBS), with the glassy polymer predominating, to give a high-impact plastic. The second set of IPNs contained epichlorohydrin (CO) and nitrile rubber (NBR), and was formed in order to produce novel materials with enhanced chemical and gas-barrier properties. In both cases, if the phase mixing is optimised, the probability of controlled morphologies and synergistic behaviour is increased. The PS/SBS IPNs were prepared using sequential polymerisation. The primary SBS network was thermally crosslinked, then the polystyrene network was polymerised and crosslinked using gamma irradiation to avoid possible thermal degradation of the butadiene segment of the SBS. Tough transparent systems were produced with no apparent thermal degradation of the polybutadiene segments. The epichlorohydrin/nitrile rubber IPNs were formed by simultaneous thermal crosslinking reactions. The epichlorohydrin network was formed using a lead-based crosslinker, while the nitrile rubber was crosslinked by peroxide methods. The use of two different crosslinking systems was employed in order to achieve independent crosslinking, thus resulting in an IPN with minimal grafting between the component networks. SANS, transmission electron microscopy (TEM), and atomic force microscopy (AFM) were used to examine the size and shape of the phase domains and investigate any variation with crosslinking level and

  3. The Upper San Pedro Partnership: A Case Study of Successful Strategies to Connect Science to Societal Needs

    Science.gov (United States)

    Goodrich, D. C.; Richter, H.; Varady, R.; Browning-Aiken, A.; Shuttleworth, J.

    2006-12-01

    The Upper San Pedro Partnership (USPP) (http://www.usppartnership.com/) has been in existence since 1998. Its purpose is to coordinate and cooperate in the implementation of comprehensive policies and projects to meet the long-term water needs of residents within the U.S. side of the basin and of the San Pedro Riparian National Conservation Area. The Partnership consists of 21 local, state, and Federal agencies, NGOs, and a private water company. In 2004 it was recognized by Congress in Section 321 of Public Law 108-136 and required to make annual reports to Congress on its progress in bringing the basin water budget into balance by 2011. The Partnership is dedicated to science-based decision making. This presentation will provide an overview of the evolution of natural-resources research in the binational (U.S.-Mexico) San Pedro Basin into a mature example of integrated science and decision making embodied in the USPP. It will discuss the transition through science and research for understanding; to science for addressing a need; to integrated policy development and science. At each stage the research conducted becomes more interdisciplinary, first across abiotic disciplines (hydrology, remote sensing, atmospheric science), then a merging of abiotic and biotic disciplines (adding ecology and plant physiology), and finally a further merging with the social sciences and policy and decision making for resource management. Federal, university, and NSF SAHRA Science and Technology Center research has been planned and conducted directly with the USPP. Because of this success, the San Pedro has been designated as an operational HELP (Hydrology for the Environment, Life, and Policy) demonstration basin, the most advanced category. Lessons learned from this experience will be reviewed with the intent of providing guidance to ensure that hydrologic and watershed research is socially and scientifically relevant and will directly address the needs of policy makers and resource

  4. Dielectric RheoSANS - Simultaneous Interrogation of Impedance, Rheology and Small Angle Neutron Scattering of Complex Fluids.

    Science.gov (United States)

    Richards, Jeffrey J; Gagnon, Cedric V L; Krzywon, Jeffery R; Wagner, Norman J; Butler, Paul D

    2017-04-10

    A procedure for the operation of a new dielectric RheoSANS instrument capable of simultaneous interrogation of the electrical, mechanical, and microstructural properties of complex fluids is presented. The instrument consists of a Couette geometry contained within a modified forced-convection oven mounted on a commercial rheometer. This instrument is available for use on the small angle neutron scattering (SANS) beamlines at the National Institute of Standards and Technology (NIST) Center for Neutron Research (NCNR). The Couette geometry is machined to be transparent to neutrons and provides for measurement of the electrical and microstructural properties of a sample confined between titanium cylinders while the sample undergoes arbitrary deformation. Synchronization of these measurements is enabled through the use of a customizable program that monitors and controls the execution of predetermined experimental protocols. Described here is a protocol for a flow sweep experiment, in which the shear rate is stepped logarithmically from a maximum value to a minimum value, holding at each step for a specified period of time while frequency-dependent dielectric measurements are made. Representative results are shown for a sample consisting of a gel composed of carbon black aggregates dispersed in propylene carbonate. As the gel undergoes steady shear, the carbon black network is mechanically deformed, which causes an initial decrease in conductivity associated with the breaking of bonds comprising the network. At higher shear rates, however, the conductivity recovers in association with the onset of shear thickening. Overall, these results demonstrate the utility of simultaneous measurement of the rheological, electrical, and microstructural properties of such suspensions using the dielectric RheoSANS geometry.
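
    A flow sweep of this kind is easy to picture as a schedule of (shear rate, hold time) pairs; below is a minimal sketch of such a logarithmic schedule. The step count, rate limits, and hold time are placeholders, not the instrument's actual settings.

```python
import numpy as np

def flow_sweep_schedule(rate_max, rate_min, n_steps, hold_s):
    """Logarithmically spaced shear-rate steps, stepped high to low,
    each held for hold_s seconds while dielectric spectra are collected."""
    rates = np.logspace(np.log10(rate_max), np.log10(rate_min), n_steps)
    return [(rate, hold_s) for rate in rates]

# Placeholder settings: 2500 -> 0.1 1/s in 10 steps, 60 s per step.
for shear_rate, hold in flow_sweep_schedule(2500.0, 0.1, 10, 60.0):
    print(f"hold {hold:.0f} s at {shear_rate:10.3f} 1/s")
```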

  5. The Basic Design Report of the 40M SANS Instrument

    Energy Technology Data Exchange (ETDEWEB)

    Han, Young Soo; Lee, Chang Hee; Hwang, Dong Gil; Kim, Hak Rho [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, Tae Hwan; Choi, Sung Min [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)

    2006-04-15

    The HANARO cold neutron research facility project was launched on July 1, 2003. A state-of-the-art SANS instrument was selected as a top-priority instrument by an instrument selection committee, which consisted of domestic users and HANARO personnel. An instrument development team and international and domestic instrument advisory teams were formed. The guide and instrument simulations were performed using Vitess software, and the optimum basic design was completed based on the simulation results and the international advisory team's reviews. The optimum design of the guide for the 40M SANS instrument was completed, and the optimum basic design of the 40M SANS instrument was also completed based on the Vitess simulation results. The Q range of the instrument will cover 0.0008 to 1.0 Å⁻¹, and the maximum flux at the sample position can reach about 5.5×10⁷ n/cm²·sec. The simulation results and the basic design product will be used for the detailed design and construction of the SANS instrument. The simulation results could also be applied to the development of other instruments.
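
    As a quick check on what that Q range means physically, the relation d = 2π/Q converts the quoted limits into real-space length scales; a minimal sketch:

```python
import math

def length_scale(q):
    """Real-space length scale d = 2*pi/Q, with Q in inverse angstroms."""
    return 2.0 * math.pi / q

for q in (0.0008, 1.0):  # the instrument's quoted Q limits
    print(f"Q = {q:g} 1/A  ->  d ~ {length_scale(q):.0f} A")
# prints ~7854 A at the low-Q limit and ~6 A at the high-Q limit
```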

  6. Radon emanation on San Andreas Fault

    International Nuclear Information System (INIS)

    King, C.-Y.

    1978-01-01

    It is stated that subsurface radon emanation monitored in shallow dry holes along an active segment of the San Andreas fault in central California shows spatially coherent large temporal variations that seem to be correlated with local seismicity. (author)

  7. Parallel processor programs in the Federal Government

    Science.gov (United States)

    Schneck, P. B.; Austin, D.; Squires, S. L.; Lehmann, J.; Mizell, D.; Wallgren, K.

    1985-01-01

    In 1982, a report dealing with the nation's research needs in high-speed computing called for increased access to supercomputing resources for the research community, research in computational mathematics, and increased research in the technology base needed for the next generation of supercomputers. Since that time a number of programs addressing future generations of computers, particularly parallel processors, have been started by U.S. government agencies. The present paper provides a description of the largest government programs in parallel processing. Established in fiscal year 1985 by the Institute for Defense Analyses for the National Security Agency, the Supercomputing Research Center will pursue research to advance the state of the art in supercomputing. Attention is also given to the DOE applied mathematical sciences research program, the NYU Ultracomputer project, the DARPA multiprocessor system architectures program, NSF research on multiprocessor systems, ONR activities in parallel computing, and NASA parallel processor projects.

  8. DNA array analysis of gene expression changes by Choto-san in the ischemic rat brain

    OpenAIRE

    Tohda, Michihisa; Matsumoto, Kinzo; Hayashi, Hisae; Murakami, Yukihisa; Watanabe, Hiroshi

    2004-01-01

    The effects of Choto-san on gene expression in the dementia-model rat brain were studied using a DNA microarray system. Choto-san inhibited the expression of 181 genes that had been enhanced by permanent occlusion of the bilateral common carotid arteries (2VO). Choto-san also reversed the 2VO-induced inhibition of expression of 32 genes. These results may suggest that Choto-san, which has been therapeutically used as an antidementive drug, shows therapeutic effects through gene expression cha...

  9. Grid will help physicists' global hunt for particles: researchers have begun running experiments with the MidWest Tier 2 Center, one of five regional computing centers in the US.

    CERN Multimedia

    Ames, Ben

    2006-01-01

    "When physicists at Switzerland's CERN laboratory turn on their newsest particle collider in 2007, they will rely on computer scientists in Chicago and Indianapolis to help sift through the results using a worldwide supercomputing grid." (1/2 page)

  10. Comparison of SANS instruments at reactors and pulsed sources

    International Nuclear Information System (INIS)

    Thiyagarajan, P.; Epperson, J.E.; Crawford, R.K.; Carpenter, J.M.; Hjelm, R.P. Jr.

    1992-01-01

    Small angle neutron scattering is a general-purpose technique for studying long-range fluctuations and hence has been applied in almost every field of science for material characterization. SANS instruments can be built at steady-state reactors and at pulsed neutron sources, where time-of-flight (TOF) techniques are used. The steady-state instruments usually give data over small q ranges, and in order to cover a large q range these instruments have to be reconfigured several times and the SANS measurements repeated. These instruments have until now provided better resolution and higher data rates within their restricted q ranges, but the TOF instruments are now approaching comparable performance. The TOF-SANS instruments, by using a wide band of wavelengths, can cover a wide dynamic q range in a single measurement. This is a big advantage for studying systems that are changing and those which cannot be exactly reproduced. This paper compares the design concepts and performances of these two types of instruments.
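
    On a TOF instrument, each detected neutron's wavelength is recovered from its flight time over a known path via the de Broglie relation, λ[Å] ≈ 3956·t/L with t in seconds and L in metres; below is a minimal sketch. The path length and arrival times are illustrative, not parameters of any particular instrument.

```python
H_OVER_MN = 3956.034  # Planck constant / neutron mass, in m/s * angstrom

def tof_wavelength(time_s, path_m):
    """Neutron wavelength (angstroms) from flight time over path_m metres."""
    velocity = path_m / time_s          # m/s
    return H_OVER_MN / velocity         # de Broglie wavelength

# Illustrative only: a 10 m flight path, arrival times of 5, 10, and 25 ms.
for t_ms in (5.0, 10.0, 25.0):
    lam = tof_wavelength(t_ms / 1000.0, 10.0)
    print(f"t = {t_ms:4.1f} ms  ->  lambda = {lam:.2f} A")
```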

  11. Implementation of an Online Climate Science Course at San Antonio College

    Science.gov (United States)

    Reyes, R.; Strybos, J.

    2016-12-01

    San Antonio College (SAC) plans to incorporate an online climate science class into the curriculum, with a focus on local weather conditions and data. SAC is part of a network of five community colleges based around San Antonio, Texas; it has over 20,000 students enrolled, and its student population reflects the diversity in ethnicity, age, and gender of the San Antonio community. The college understands the importance of educating San Antonio residents on climate science and its complexities. San Antonio residents are familiar with weather changes and extreme conditions. The region has experienced an extreme drought, including water rationing in the city, and this year's El Niño intensified annual rainfall and brought flash floods. The proposed climate science course will uniquely prepare students to understand weather data and the evidence of climate change impacting San Antonio at a local level. This paper will discuss the importance and challenges of introducing the new climate science course into the curriculum, and the class format best suited to the course's success. Two of the most significant challenges are informing students about the value of this class and identifying the best teaching format. Additionally, measuring and monitoring enrollment will be essential to determine the course's performance and success. At the same time, Alamo Colleges is modifying its process for teaching online classes and is officially working to establish an online college. Around 23% of students enrolled in SAC courses are currently enrolled in online courses only, representing an opportunity to incorporate the climate science class as an online course. Since the proposed course will be using electronic textbooks and online applications to access hyperlocal weather data, the class is uniquely suited for online students.

  12. SANS study of three-layer micellar particles

    CERN Document Server

    Plestil, J; Kuklin, A I; Cubitt, R

    2002-01-01

    Three-layer nanoparticles were prepared by polymerization of methyl methacrylate (MMA) in aqueous micellar solutions of poly(methyl methacrylate)-block-poly(methacrylic acid) (PMMA-b-PMA) and polystyrene-block-poly(methacrylic acid) (PS-b-PMA). The resulting polymer forms a layer on the core surface of the original micelles. SANS curves were fitted using an ellipsoidal (PMMA/PMMA/PMA) or spherical (PS/PMMA/PMA) model for the particle core. The particle size (for the presented series of the PMMA/PMMA/PMA particles, the core semiaxes ranged from 87 to 187 Å and the axis ratio was about 6) can be finely tuned by variation of the monomer concentration. Time-resolved SANS experiments were carried out to describe the growth of the PS/PMMA/PMA particles during polymerization. (orig.)

  13. Identifying clinically meaningful symptom response cut-off values on the SANS in predominant negative symptoms.

    Science.gov (United States)

    Levine, Stephen Z; Leucht, Stefan

    2013-04-01

    The treatment and measurement of negative symptoms are currently at issue in schizophrenia, but the clinical meaning of symptom severity and change is unclear. We aim to offer a clinically meaningful interpretation of severity and change scores on the Scale for the Assessment of Negative Symptoms (SANS). Patients were intention-to-treat participants (n=383) in two double-blind randomized placebo-controlled clinical trials that compared amisulpride with placebo for the treatment of predominant negative symptoms. Equipercentile linking was used to examine extrapolation from (a) CGI-S to SANS severity ratings, and (b) CGI-I to SANS percentage change. Linking was conducted at baseline and at 8-14, 28-30, and 56-60 days of the trials. Across visits, CGI-S ratings of 'not ill' linked to SANS scores of 0-13, and ranged to 'extreme' ratings that linked to SANS scores of 102-105. The relationship between CGI-S and SANS severity scores followed a linear trend (1=0-13, 2=15-56, 3=37-61, 4=49-66, 5=63-75, 6=79-89, 7=102-105). Similarly, the relationship between CGI-I ratings and SANS percentage change followed a linear trend: for instance, CGI-I ratings of 'very much improved' were linked to SANS percentage changes of -90 to -67, 'much improved' to -50 to -42, and 'minimally improved' to -21 to -13. The current results uniquely contribute to the debate surrounding negative symptoms by providing clinical meaning to SANS severity and change scores, and so offer direction regarding clinically meaningful response cut-off scores to guide treatment of predominant negative symptoms.
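
    Taken at face value, the linked severity bands quoted above define a simple lookup from a SANS total score to the CGI-S ratings whose band contains it; a minimal sketch, treating the quoted (partly overlapping) bands as closed intervals:

```python
# CGI-S rating -> linked SANS severity band, as quoted in the abstract.
CGI_TO_SANS = {1: (0, 13), 2: (15, 56), 3: (37, 61), 4: (49, 66),
               5: (63, 75), 6: (79, 89), 7: (102, 105)}

def cgi_for_sans(score):
    """All CGI-S ratings whose linked SANS band contains the score
    (the quoted bands overlap, so several ratings can match)."""
    return [cgi for cgi, (lo, hi) in CGI_TO_SANS.items() if lo <= score <= hi]

print(cgi_for_sans(50))   # -> [2, 3, 4]: falls in several overlapping bands
print(cgi_for_sans(103))  # -> [7]: 'extreme' severity
```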

  14. Characterization of the Drosophila ortholog of the human Usher Syndrome type 1G protein sans.

    Directory of Open Access Journals (Sweden)

    Fabio Demontis

    BACKGROUND: The Usher syndrome (USH) is the most frequent hereditary deaf-blindness disease in humans. Deafness is attributed to the disorganization of stereocilia in the inner ear. USH1, the most severe subtype, is associated with mutations in genes encoding myosin VIIa, harmonin, cadherin 23, protocadherin 15, and sans. Myosin VIIa, harmonin, cadherin 23, and protocadherin 15 physically interact in vitro and localize to stereocilia tips in vivo, indicating that they form functional complexes. Sans, in contrast, localizes to vesicle-like structures beneath the apical membrane of stereocilia-displaying hair cells. How mutations in sans result in deafness and blindness is not well understood. Orthologs of myosin VIIa and protocadherin 15 have been identified in Drosophila melanogaster, and their genetic analysis has identified essential roles in auditory perception and microvilli morphogenesis, respectively. PRINCIPAL FINDINGS: Here, we have identified and characterized the Drosophila ortholog of human sans. Drosophila Sans is expressed in tubular organs of the embryo, in lens-secreting cone cells of the adult eye, and in microvilli-displaying follicle cells during oogenesis. Sans mutants are viable, fertile, and mutant follicle cells appear to form microvilli, indicating that Sans is dispensable for fly development and microvilli morphogenesis in the follicle epithelium. In follicle cells, Sans protein localizes, similar to its vertebrate ortholog, to intracellular punctate structures, which we have identified as early endosomes associated with the syntaxin Avalanche. CONCLUSIONS: Our work is consistent with an evolutionarily conserved function of Sans in vesicle trafficking. Furthermore, it provides a significant basis for further understanding of the role of this Usher syndrome ortholog in development and disease.

  15. The aquatic annelid fauna of the San Marcos River headsprings, Hays County, Texas

    Directory of Open Access Journals (Sweden)

    McLean L.D. Worsham

    2016-09-01

    The San Marcos River in Central Texas has been well studied and has been demonstrated to be remarkably speciose. Prior to the present study, research on free-living invertebrates in the San Marcos River dealt only with hard-bodied taxa, with the exception of the report of one gastrotrich and one subterranean platyhelminth that only incidentally occurs in the headspring outflows. The remainder of the soft-bodied metazoan fauna inhabiting the San Marcos River had never been studied. Our study surveyed the annelid fauna and some other soft-bodied invertebrates of the San Marcos River headsprings. At least four species of Hirudinida, two species of Aphanoneura, one species of Branchiobdellida, and 11 (possibly 13) species of oligochaetous clitellates were collected. Other vermiform taxa collected included at least three species of Turbellaria and one species of Nemertea. We provide the results of the first survey of the aquatic annelid fauna of the San Marcos Springs, along with a dichotomous key to these annelids that includes photos of some representative specimens and line drawings to elucidate potentially confusing diagnostic structures.

  16. The aquatic annelid fauna of the San Marcos River headsprings, Hays County, Texas

    Science.gov (United States)

    Worsham, McLean L. D.; Gibson, Randy; Huffman, David G.

    2016-01-01

    The San Marcos River in Central Texas has been well studied and has been demonstrated to be remarkably speciose. Prior to the present study, research on free-living invertebrates in the San Marcos River dealt only with hard-bodied taxa, with the exception of the report of one gastrotrich and one subterranean platyhelminth that only incidentally occurs in the headspring outflows. The remainder of the soft-bodied metazoan fauna inhabiting the San Marcos River had never been studied. Our study surveyed the annelid fauna and some other soft-bodied invertebrates of the San Marcos River headsprings. At least four species of Hirudinida, two species of Aphanoneura, one species of Branchiobdellida, and 11 (possibly 13) species of oligochaetous clitellates were collected. Other vermiform taxa collected included at least three species of Turbellaria and one species of Nemertea. We provide the results of the first survey of the aquatic annelid fauna of the San Marcos Springs, along with a dichotomous key to these annelids that includes photos of some representative specimens and line drawings to elucidate potentially confusing diagnostic structures. PMID:27853397

  17. SAFOD Penetrates the San Andreas Fault

    Directory of Open Access Journals (Sweden)

    Mark D. Zoback

    2006-03-01

    SAFOD, the San Andreas Fault Observatory at Depth, completed an important milestone in July 2005 by drilling through the San Andreas Fault at seismogenic depth. SAFOD is one of three major components of EarthScope, a U.S. National Science Foundation (NSF) initiative being conducted in collaboration with the U.S. Geological Survey (USGS). The International Continental Scientific Drilling Program (ICDP) provides engineering and technical support for the project as well as online access to project data and information (http://www.icdp-online.de/sites/sanandreas/news/news1.html). In 2002, the ICDP, the NSF, and the USGS provided funding for a pilot hole project at the SAFOD site. Twenty scientific papers summarizing the results of the pilot hole project as well as pre-SAFOD site characterization studies were published in Geophysical Research Letters (Vol. 31, Nos. 12 and 15, 2004).

  18. High intensity multi beam design of SANS instrument for Dhruva reactor

    Energy Technology Data Exchange (ETDEWEB)

    Abbas, Sohrab, E-mail: abbas@barc.gov.in; Aswal, V. K. [Solid State Physics Division, Bhabha Atomic Research Centre, Mumbai 400085 (India); Désert, S. [Laboratoire Leon Brillouin, CEA, Saclay, 91191 (France)

    2016-05-23

    A new and versatile design of a Small Angle Neutron Scattering (SANS) instrument based on the utilization of multiple beams is presented. Multi-pinhole and multi-slit SANS collimators for the medium-flux Dhruva reactor have been proposed, and their designs have been validated using McStas simulations. Various instrument configurations to achieve different minimum wave-vector transfers in scattering experiments are envisioned. These options enable smooth access to minimum wave-vector transfers as low as ~6×10⁻⁴ Å⁻¹ with a significant improvement in neutron intensity, allowing faster measurements. Such an angularly well-defined and intense neutron beam will allow faster SANS studies of agglomerates larger than a few tens of nm.
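
    For a two-pinhole collimator of the kind discussed here, the attainable minimum wave-vector transfer follows from the aperture radii, the collimation and detector distances, and the wavelength; below is a minimal sketch of the standard small-angle estimate. All geometry values are placeholders, not the Dhruva design figures.

```python
import math

def qmin_pinhole(r1_mm, r2_mm, l1_m, l2_m, wavelength_a):
    """Estimate the minimum accessible Q for two-pinhole collimation.

    r1, r2: source and sample aperture radii (mm); l1: collimation length;
    l2: sample-to-detector distance (m); wavelength in angstroms.
    The beam's penumbra radius at the detector sets the smallest usable angle.
    """
    beam_radius_mm = r2_mm + (r1_mm + r2_mm) * (l2_m / l1_m)  # penumbra edge
    theta_min = (beam_radius_mm / 1000.0) / l2_m              # radians
    return (4.0 * math.pi / wavelength_a) * math.sin(theta_min / 2.0)

# Placeholder geometry: 16 mm / 8 mm apertures, 16 m collimation,
# 16 m sample-to-detector distance, 5 A neutrons.
print(f"Qmin ~ {qmin_pinhole(16.0, 8.0, 16.0, 16.0, 5.0):.1e} 1/A")
```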

  19. Data Management as a Cluster Middleware Centerpiece

    Science.gov (United States)

    Zero, Jose; McNab, David; Sawyer, William; Cheung, Samson; Duffy, Daniel; Rood, Richard; Webster, Phil; Palm, Nancy; Salmon, Ellen; Schardt, Tom

    2004-01-01

    Through earth and space modeling and the ongoing launches of satellites to gather data, NASA has become one of the largest producers of data in the world. These large data sets necessitated the creation of a Data Management System (DMS) to assist both the users and the administrators of the data. Halcyon Systems Inc. was contracted by the NASA Center for Computational Sciences (NCCS) to produce a Data Management System. The prototype of the DMS was produced by Halcyon Systems Inc. (Halcyon) for the Global Modeling and Assimilation Office (GMAO). The system, which was implemented and deployed within a relatively short period of time, has proven to be highly reliable and deployable. Following the prototype deployment, Halcyon was contacted by the NCCS to produce a production DMS version for its user community. The system is composed of several existing open-source or government-sponsored components, such as the San Diego Supercomputer Center's (SDSC) Storage Resource Broker (SRB), the Distributed Oceanographic Data System (DODS), and other components. Since data management is one of the foremost problems in cluster computing, the final package extends its capabilities not only as a Data Management System but also as a cluster management system. This Cluster/Data Management System (CDMS) can be envisioned as the integration of existing packages.

  20. San Francisco-Pacifica Coast Landslide Susceptibility 2011

    Data.gov (United States)

    California Natural Resource Agency — The San Francisco-Pacifica Coast grid map was extracted from the California Geological Survey Map Sheet 58 that covers the entire state of California and originally...